Main, Tuning

Interrupt Coalescence (also called Interrupt Moderation, Interrupt Blanking, or Interrupt Throttling)

A common bottleneck for high-speed data transfers is the high rate of interrupts that the receiving system has to process – traditionally, a network adapter generates an interrupt for each frame that it receives. These interrupts consume signaling resources on the system’s bus(es), and introduce significant CPU overhead as the system transitions back and forth between “productive” work and interrupt handling many thousand times a second.

To alleviate this load, some high-speed network adapters support interrupt coalescence. When multiple frames are received in a short timeframe (“back-to-back”), these adapters buffer those frames locally and only interrupt the system once.

Interrupt coalescence together with large-receive offload can roughly be seen as doing on the “receive” side what transmit chaining and large-send offload (LSO) do for the “transmit” side.

Issues with interrupt coalescence

While this scheme lowers interrupt-related system load significantly, it can have adverse effects on timing, and make TCP traffic more bursty or “clumpy”. Therefore it would make sense to combine interrupt coalescence with on-board timestamping functionality. Unfortunately that doesn’t seem to be implemented in commodity hardware/driver combinations yet.

The way that interrupt coalescence works, a network adapter that has received a frame doesn’t send an interrupt to the system right away, but waits for a little while in case more packets arrive. This can have a negative impact on latency.

In general, interrupt coalescence is configured such that the additional delay is bounded. On some implementations, these delay bounds are specified in units of milliseconds, on other systems in units of microseconds. It requires some thought to find a good trade-off between latency and load reduction. One should be careful to set the coalescence threshold low enough that the additional latency doesn’t cause problems. Setting a low threshold will prevent interrupt coalescence from occurring when successive packets are spaced too far apart. But in that case, the interrupt rate will probably be low enough so that this is not a problem.

Configuration

Configuration of interrupt coalescence is highly system dependent, although there are some parameters that are more or less common across implementations.

Linux

On Linux systems with additional driver support, the ethtool -C command can be used to modify the interrupt coalescence settings of network devices on the fly.
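For example, assuming an interface named eth0 and a driver that supports these settings (the interface name and the values here are only placeholders, not recommendations), the coalescence parameters can be inspected and changed roughly like this:

# ethtool -c eth0
# ethtool -C eth0 rx-usecs 100 rx-frames 64
# ethtool -C eth0 adaptive-rx on

The first command shows the current settings, the second asks the adapter to interrupt after at most 100 microseconds or 64 received frames (whichever comes first), and the third enables adaptive moderation where the driver supports it.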

Some Ethernet drivers in Linux have parameters to control Interrupt Coalescence (Interrupt Moderation, as it is called in Linux). For example, the e1000 driver for the large family of Intel Gigabit Ethernet adapters has the following parameters according to the kernel documentation:

InterruptThrottleRate
limits the number of interrupts per second generated by the card. Values >= 100 are interpreted as the maximum number of interrupts per second. The default value used to be 8’000 up to and including kernel release 2.6.19. A value of zero (0) disables interrupt moderation completely. Above 2.6.19, some values between 1 and 99 can be used to select adaptive interrupt rate control. The first adaptive modes are “dynamic conservative” (1) and “dynamic with reduced latency” (3). In conservative mode (1), the rate changes between 4’000 interrupts per second when only bulk traffic (“normal-size packets”) is seen, and 20’000 when small packets are present that might benefit from lower latency. In the more aggressive mode (3), “low-latency” traffic may drive the interrupt rate up to 70’000 per second. This mode is supposed to be useful for cluster communication in grid applications.
RxIntDelay
specifies, in multiples of 1’024 microseconds, the time after reception of a frame to wait for another frame to arrive before sending an interrupt.
RxAbsIntDelay
bounds the delay between reception of a frame and generation of an interrupt. It is specified in units of 1’024 microseconds. Note that InterruptThrottleRate overrides RxAbsIntDelay, so even when a very short RxAbsIntDelay is specified, the interrupt rate should never exceed the rate specified (either directly or by the dynamic algorithm) by InterruptThrottleRate.
RxDescriptors
specifies the number of descriptors to store incoming frames on the adapter. The default value is 256, which is also the maximum for some types of E1000-based adapters. Others can allocate up to 4’096 of these descriptors. The size of the receive buffer associated with each descriptor varies with the MTU configured on the adapter. It is always a power-of-two number of bytes. The number of descriptors available will also depend on the per-buffer size. When all buffers have been filled by incoming frames, an interrupt will have to be signaled in any case.
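As a sketch of how these parameters are applied (the values shown are examples only, not recommendations): they can be passed when the driver module is loaded, or persistently through a modprobe options file. Comma-separated values apply to successive adapters handled by the driver.

# modprobe -r e1000
# modprobe e1000 InterruptThrottleRate=3000
# echo "options e1000 InterruptThrottleRate=3000,3000" >> /etc/modprobe.d/e1000.conf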

Solaris

As an example, see the Platform Notes: Sun GigaSwift Ethernet Device Driver. It lists the following parameters for that particular type of adapter:

rx_intr_pkts
Interrupt after this number of packets have arrived since the last packet was serviced. A value of zero indicates no packet blanking. (Range: 0 to 511, default=3)
rx_intr_time
Interrupt after this number of 4.5-microsecond ticks has elapsed since the last packet was serviced. A value of zero indicates no time blanking. (Range: 0 to 524287, default=1250)
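On Solaris, such driver parameters can typically be changed on the fly with ndd(1M) or made persistent in /etc/system. A rough sketch for the GigaSwift (ce) driver, assuming instance 0 (consult the Platform Notes for the authoritative procedure and valid values):

# ndd -set /dev/ce instance 0
# ndd /dev/ce rx_intr_time
# ndd -set /dev/ce rx_intr_time 1250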
TIBCO

TIBCO Hawk v/s TIBCO BWPM (reblogged)

A short while ago I got a question from a customer who wanted to know the differences between TIBCO Hawk and TIBCO BWPM (BusinessWorks Process Monitor), since both are monitoring products from TIBCO. In this blog I will briefly explain my point of view and my recommendations about when to use which product; in my opinion the two cannot be compared as-is.

TIBCO Hawk
Let me start by indicating that TIBCO Hawk and BWPM are not products which can be directly compared with each other. There is partial overlap in the purpose of the two products, namely gaining insight into the integration landscape, but at the same time the products are very different. TIBCO Hawk is, as we may know, a transport, distribution and monitoring product that under the hood allows TIBCO administrators to technically monitor the integration landscape at runtime (including server behaviour etc.) and reactively respond to certain events by configuring so-called Hawk rules and setting up dashboards for feedback. The technical monitoring capabilities are quite extensive and based on the information and log files which are made available by both the TIBCO Administrator and the various micro Hawk agents. The target group of TIBCO Hawk is especially the administrators and, to a lesser extent, the developers. The focus is on monitoring the various TIBCO components (or adapters) to satisfy the corresponding SLAs, not on what is taking place within the TIBCO components from a functional point of view.

+ Very strong, comprehensive and proven tool for TIBCO administrators;
+ Reactively measure and (automatically) react to events in the landscape using Hawk rules;
– Fairly technical, and thus a high threshold for non-technical users;
– Offers little or no insight into the actual data processed from a functional point of view;

TIBCO BWPM
TIBCO BWPM is a product that provides insight at the process level during runtime from a functional point of view; it is a branch and rebranding of the product nJAMS by Integration Matters. It may impact the way of developing (standards and guidelines): by using so-called libraries throughout development, process-specific functional information can be made available at runtime. It has a rich web interface as an alternative to the TIBCO Administrator and offers rich visual insight into all process instances, correlating them together. The target group of TIBCO BWPM is TIBCO developers, administrators, testers and even analysts. The focus is on gaining an understanding of what is taking place within the TIBCO components from a functional point of view.

+ Very strong and comprehensive tool with a rich web interface;
+ Provides extensive logging capabilities and makes all related context and process data available;
+ Easily accessible and intuitive to use, even for non-technical users;
– Less suitable for the daily technical monitoring of the landscape (including server behaviour etc.);
– It is important that the product is well designed and properly parameterized to prevent performance impact (this should not be underestimated);

Conclusion
In my opinion, TIBCO BWPM is a very welcome addition to the standard TIBCO Administrator/TIBCO Hawk combination to gain insight into the related context and process data from a functional point of view. In addition, the product can be used by TIBCO developers, administrators, testers and even analysts.

Source :-  http://www.rubix.nl

BusinessWorks, TIBCO

TIBCO BWPM – Missing Libraries Detected

Guys,

If you get an error like this:

[Screenshot: “Missing Libraries Detected” error]

Don’t panic; simply copy the following jars into $CATALINA_HOME/lib.

For EMS :-

  • jms.jar (if using EMS 8 and above, rename jms2.0.jar to jms.jar)
  • tibcrypt.jar, tibjms.jar, tibjmsadmin.jar

For Database :-

  • ojdbc.jar (rename ojdbc6.jar or ojdbc7.jar to ojdbc.jar) – ORACLE
  • mssqlserver.jar (rename sqljdbc4.jar to mssqlserver.jar) – MSSQL
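As a sketch, assuming the EMS client libraries live under $EMS_HOME/lib and the JDBC driver has been downloaded separately (all paths here are illustrative):

# cd $CATALINA_HOME/lib
# cp $EMS_HOME/lib/tibcrypt.jar $EMS_HOME/lib/tibjms.jar $EMS_HOME/lib/tibjmsadmin.jar .
# cp $EMS_HOME/lib/jms2.0.jar jms.jar
# cp /path/to/ojdbc7.jar ojdbc.jar

Then restart Tomcat so the new jars are picked up.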
Operating System, Redhat / CEntOS / Oracle Linux

/etc/security/limits.conf file – In A Nutshell

The /etc/security/limits.conf file contains a list of lines, where each line describes a limit for a user in the form:

<Domain> <type> <item> <shell limit value>

Where:

  • <domain> can be:
    • a user name
    • a group name, with @group syntax
    • the wildcard *, for the default entry
    • the wildcard %, which can also be used with %group syntax, for maxlogins limits
  • <type> can have two values:
    • “soft” for enforcing the soft limits (a soft limit is like a warning)
    • “hard” for enforcing hard limits (a hard limit is the real maximum)
  • <item> can be one of the following:
    • core – limits the core file size (KB)
    • data – max data size (KB)
    • fsize – maximum file size (KB)
    • memlock – max locked-in-memory address space (KB)
    • nofile – Maximum number of open file descriptors
    • rss – max resident set size (KB)
    • stack – max stack size (KB) – Maximum size of the stack segment of the process
    • cpu – max CPU time (MIN)
    • nproc – Maximum number of processes available to a single user
    • as – address space limit
    • maxlogins – max number of logins for this user
    • maxsyslogins – max number of logins on the system
    • priority – the priority to run user process with
    • locks – max number of file locks the user can hold
    • sigpending – max number of pending signals
    • msgqueue – max memory used by POSIX message queues (bytes)
    • nice – max nice priority allowed to raise to
    • rtprio – max realtime priority
    • chroot – change root to directory (Debian-specific)
  • <shell limit value> is the value to apply for the given item
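For illustration, a few typical entries following the <domain> <type> <item> <value> layout (the user/group names and values here are examples only):

*        soft    core      0
@admins  hard    nofile    65536
tibco    soft    nproc     16384
tibco    hard    nproc     16384

Changes take effect for new login sessions; you can verify them with ulimit -a after logging in again.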

 

  • sigpending – examine pending signals.

The sigpending() system call returns the set of signals that are pending for delivery to the calling thread (i.e., the signals which have been raised while blocked). The mask of pending signals is returned in its set argument.

sigpending() returns 0 on success and -1 on error.

 

credits :- Sagar Salunkhe

Operating System, Redhat / CEntOS / Oracle Linux

Linux KVM: Disable virbr0 NAT Interface

The virtual network (virbr0) is used for network address translation (NAT), which allows guests to access network services. However, NAT slows things down and is recommended only for desktop installations. To disable network address translation (NAT) forwarding, type the following commands:

Display Current Setup

Type the following command:
# ifconfig
Sample outputs:

virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:7921 (7.7 KiB)

Or use the following command:
# virsh net-list
Sample outputs:

Name                 State      Autostart
-----------------------------------------
default              active     yes       

To disable virbr0, enter:
# virsh net-destroy default
# virsh net-undefine default
# service libvirtd restart
# ifconfig
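Alternatively, if you only want to stop the default NAT network from coming up at boot, without deleting its definition, disabling autostart should be enough:

# virsh net-destroy default
# virsh net-autostart default --disable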

Enterprise Messaging Service, TIBCO

TIBCO EMS – Properties of Queues and Topics (Where Tuning can be done)

You can set the properties directly in the topics.conf or queues.conf file or by means of the setprop topic or setprop queue command in the EMS Administrator Tool.
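As an illustrative queues.conf entry combining several of the properties described below (the queue name and values are placeholders, not recommendations):

sample.queue secure,failsafe,maxbytes=100MB,maxmsgs=50000,overflowPolicy=rejectIncoming,maxRedelivery=5

The same properties could be applied at runtime from the EMS Administrator Tool with something like:

setprop queue sample.queue maxbytes=100MB,maxmsgs=50000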

1) failsafe

The failsafe property determines whether the server writes persistent messages to disk synchronously or asynchronously.

•  When failsafe is not set, messages are written to the file on disk in asynchronous mode to obtain maximum performance. In this mode, the data may remain in system buffers for a short time before it is written to disk, and it is possible that, in case of software or hardware failure, some data could be lost without the possibility of recovery.

•  In failsafe mode, all data for that queue or topic are written into external storage in synchronous mode. In synchronous mode, a write operation is not complete until the data is physically recorded on the external device.

The failsafe property ensures that no messages are ever lost in case of server failure.

2) secure

•  When the secure property is enabled for a destination, it instructs the server to check user permissions whenever a user attempts to perform an operation on that destination.

•  If the secure property is not set for a destination, the server does not check permissions for that destination and any authenticated user can perform any operation on that topic or queue.

3) maxbytes

•  Topics and queues can specify the maxbytes property in the form:

maxbytes=value [KB|MB|GB]                   Ex: maxbytes=1000MB

•  For queues, maxbytes defines the maximum size (in bytes) that the queue can store, summed over all messages in the queue. Should this limit be exceeded, messages will be rejected by the server and the message producer’s send calls will return an error.

•  If maxbytes is zero, or is not set, the server does not limit the memory allocation for the queue.

4) maxmsgs

•  You can set maxmsgs using the form maxmsgs=value, where value defines the maximum number of messages that can be waiting in a queue. When adding a message would exceed this limit, the server does not accept the message into storage, and the message producer’s send call returns an error.

•  If maxmsgs is zero, or is not set, the server does not limit the number of messages in the queue.

•  You can set both the maxmsgs and maxbytes properties on the same queue. Exceeding either limit causes the server to reject new messages until consumers reduce the queue size to below these limits.

5) OverflowPolicy

Topics and queues can specify the overflowPolicy property to change the effect of exceeding the message capacity established by either maxbytes or maxmsgs.

o   overflowPolicy=default | discardOld | rejectIncoming

  I. default

•  For topics, default specifies that messages are sent to subscribers, regardless of the maxbytes or maxmsgs setting.

•  For queues, default specifies that new messages are rejected by the server and an error is returned to the producer if the established maxbytes or maxmsgs value has been exceeded.

  II. discardOld

•  For topics, discardOld specifies that, if any of the subscribers have an outstanding number of undelivered messages on the server that are over the message limit, the oldest messages are discarded before they are delivered to the subscriber.

•  The discardOld setting impacts subscribers individually. For example, you might have three subscribers to a topic, but only one subscriber exceeds the message limit. In this case, only the oldest messages for the one subscriber are discarded, while the other two subscribers continue to receive all of their messages.

•  For queues, discardOld specifies that, if messages on the queue have exceeded the maxbytes or maxmsgs value, the oldest messages are discarded from the queue and an error is returned to the message producer.

  III. rejectIncoming

•  For topics, rejectIncoming specifies that, if any of the subscribers have an outstanding number of undelivered messages on the server that are over the message limit, all new messages are rejected and an error is returned to the producer.

•  For queues, rejectIncoming specifies that, if messages on the queue have exceeded the maxbytes or maxmsgs value, all new messages are rejected and an error is returned to the producer.

6) global

•  Messages destined for a topic or queue with the global property set are routed to the other servers that are participating in routing with this server.

You can set global using the form:   global

7) sender_name

•  The sender_name property specifies that the server may include the sender’s username for messages sent to this destination.

You can set sender_name using the form:    sender_name

8) sender_name_enforced

•  The sender_name_enforced property specifies that messages sent to this destination must include the sender’s user name. The server retrieves the user name of the message producer using the same procedure described in the sender_name property above. However, unlike the sender_name property, there is no way for message producers to override this property.

You can set sender_name_enforced using the form:    sender_name_enforced

•  If the sender_name property is also set on the destination, this property overrides the sender_name property.

9) flowControl

•  The flowControl property specifies the target maximum size the server can use to store pending messages for the destination. Should the number of messages exceed the maximum, the server will slow down the producers to the rate required by the message consumers. This is useful when message producers send messages much more quickly than message consumers can consume them.

If you specify the flowControl property without a value, the target maximum is set to 256KB.

•  The flow_control parameter in the tibemsd.conf file must be set to enabled before the value in this property is enforced by the server. See Flow Control for more information about flow control.

10) trace

•  Specifies that tracing should be enabled for this destination.

o    You can set trace using the form:    trace [=body]

•  Specifying trace (without =body) generates trace messages that include only the message sequence and message ID. Specifying trace=body generates trace messages that include the message body.

11) import

•  The import property allows messages published by an external system to be received by an EMS destination (a topic or a queue), as long as the transport to the external system is configured.

o    You can set import using the form:    import=”list” (where list is a comma-separated list of transport names)

12) export

•  The export property allows messages published by a client to a topic to be exported to the external systems with configured transports.

o    You can set export using the form:    export=”list” (where list is a comma-separated list of transport names)

•  It is supported only for topics, not queues.

13) maxRedelivery

•  The maxRedelivery property specifies the number of attempts the server should make to redeliver a message sent to a queue.

o    You can set maxRedelivery using the form:    maxRedelivery=count

•  Where count is an integer between 2 and 255 that specifies the maximum number of times a message can be delivered to receivers. A value of zero disables maxRedelivery, so there is no maximum.

•  Once the server has attempted to deliver the message the specified number of times, the message is either destroyed or, if the JMS_TIBCO_PRESERVE_UNDELIVERED property on the message is set to true, the message is placed on the undelivered queue so it can be handled by a special consumer.

Undelivered Message Queue

If a message expires or has exceeded the value specified by the maxRedelivery property on a queue, the server checks the message’s JMS_TIBCO_PRESERVE_UNDELIVERED property. If
JMS_TIBCO_PRESERVE_UNDELIVERED is set to true, the server moves the message to the undelivered message queue, $sys.undelivered. This undelivered message queue is a system queue that is always present and cannot be deleted. If JMS_TIBCO_PRESERVE_UNDELIVERED is set to false, the message will be deleted by the server.

14) exclusive

•  The exclusive property is available for queues only (not for topics).

•  When exclusive is set for a queue, the server sends all messages on that queue to one consumer. No other consumers can receive messages from the queue. Instead, these additional consumers act in a standby role; if the primary consumer fails, the server selects one of the standby consumers as the new primary, and begins delivering messages to it.

•  By default, exclusive is not set for queues and the server distributes messages in round-robin fashion, one to each receiver that is ready. If any receivers are still ready to accept additional messages, the server distributes another round of messages, one to each receiver that is still ready. When none of the receivers are ready to receive more messages, the server waits until a queue receiver reports that it can accept a message.

15) prefetch

The message consumer portion of a client and the server cooperate to regulate fetching according to the prefetch property. The prefetch property applies to both topics and queues.

You can set prefetch using the form:  prefetch=value

where value is one of: 2 or more, 1, None, or 0.

•  2 or more: The message consumer automatically fetches messages from the server. The message consumer never fetches more than the number of messages specified by value.

•  1: The message consumer automatically fetches messages from the server, initiating a fetch only when it does not currently hold a message.

•  None: Disables automatic fetch. That is, the message consumer initiates a fetch only when the client calls receive, either as an explicit synchronous call or an implicit call (in an asynchronous consumer). This value cannot be used with topics or global queues.

•  0: The destination inherits the prefetch value from a parent destination with a matching name. If it has no parent, or no destination in the parent chain sets a value for prefetch, then the default value is 5 for queues and 64 for topics.

•  When a destination does not set any value for prefetch (i.e., the prefetch value is empty), the default value is 0 (zero; that is, inherit the prefetch value).

16) expiration

•  If an expiration property is set for a destination, when the server delivers a message to that destination, the server overrides the JMSExpiration value set by the producer in the message header with the time specified by the expiration property.

o    You can set the expiration property for any queue and any topic using the form:

expiration=time [msec|sec|min|hour|day]

•  where time is the amount of time in the specified units (seconds if no unit is given). Zero is a special value that indicates messages to the destination never expire.

Operating System, Redhat / CEntOS / Oracle Linux, TIBCO

TIBCO Administrator – Error (Core Dump Error)

Sometimes the Administrator process on a UNIX platform stops intermittently, and then in the following file,

$TIBCO_HOME/administrator/<version>/tomcat/hs_err_pid<pid_of_admin>.log

you will see a fatal error log something like this:

#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007efcdb723df8, pid=12496, tid=139624169486080
#
# JRE version: Java(TM) SE Runtime Environment (8.0_51-b16) (build 1.8.0_51-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#

————— T H R E A D —————

Current thread (0x00000000023a8800): JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000

Registers:
RAX=0x0000000000000000, RBX=0x00007efc901723e0, RCX=0x00007efc9016e890, RDX=0x0000000000000041
RSP=0x00007efcc905f650, RBP=0x00007efcc905f6c0, RSI=0x00007efcc9060f50, RDI=0x00007efc90b937a0
R8 =0x000000000000009a, R9 =0x0000000000000003, R10=0x0000000000000003, R11=0x0000000000000000
R12=0x0000000000000004, R13=0x0000000000000000, R14=0x0000000000000002, R15=0x00007efc90b937a0
RIP=0x00007efcdb723df8, EFLAGS=0x0000000000010246, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
TRAPNO=0x000000000000000e

Top of Stack: (sp=0x00007efcc905f650)
0x00007efcc905f650: 01007efcc905f6c0 00007efcc9060f50
0x00007efcc905f660: 0000003dc905f870 00007efc9012d900
0x00007efcc905f670: 0000000100000002 ffffffff00000002
0x00007efcc905f680: 00007efc98009f40 00007efc90171d30
0x00007efcc905f690: 0000023ac9061038 00007efcc9060f50
0x00007efcc905f6a0: 0000000000000222 0000000000000090
0x00007efcc905f6b0: 00007efcc9061038 0000000000000222
0x00007efcc905f6c0: 00007efcc905f930 00007efcdb72705a
0x00007efcc905f6d0: 00007efcc905f750 00007efcc905f870
0x00007efcc905f6e0: 00007efcc905f830 00007efcc905f710
0x00007efcc905f6f0: 00007efcc905f7d0 00007efcc905f8a0
0x00007efcc905f700: 00007efcc9060f50 0000001200000117
0x00007efcc905f710: 00007efcdc2544b0 00007efc0000000c
0x00007efcc905f720: 00007efcc9061dd0 00007efcc9060f50
0x00007efcc905f730: 0000000000000807 00007efc9044a2e0
0x00007efcc905f740: 00007efc9012c610 0000000000000002
0x00007efcc905f750: 00007efcc905f820 00007efcdbae2ea3
0x00007efcc905f760: 000007c000000010 00007efcc90610d8
0x00007efcc905f770: 0000000000000028 ffffffe80000000e
0x00007efcc905f780: 00007efc9076dd60 00007efcc9061080
0x00007efcc905f790: 0000001100001f00 0000001a00000011
0x00007efcc905f7a0: 0000000100000001 00007efc903a1c78
0x00007efcc905f7b0: 00007efc908213e0 00007efcdbd7cd46
0x00007efcc905f7c0: 0000000000000008 00007efcdbd7cc97
0x00007efcc905f7d0: 00007efc00000009 00007efcc9061dd0
0x00007efcc905f7e0: 00007efc909059f0 00007efc900537a0
0x00007efcc905f7f0: 00007efc9040b7c0 00007efc9040c130
0x00007efcc905f800: 00007efc9081d280 00007efcc9061080
0x00007efcc905f810: 00007efcc9061060 0000000000000222
0x00007efcc905f820: 00007efcc905f870 00007efcdb8f8831
0x00007efcc905f830: 00007efc0000000b 00007efcc9061dd0
0x00007efcc905f840: 00007efc906cbd70 00007efcc9060f00

Instructions: (pc=0x00007efcdb723df8)
0x00007efcdb723dd8: 18 00 48 c7 c0 ff ff ff ff 4c 89 ff 49 0f 44 c7
0x00007efcdb723de8: 48 89 43 18 49 8b 07 ff 90 80 00 00 00 49 89 c5
0x00007efcdb723df8: 8b 00 21 43 38 41 8b 45 04 21 43 3c 4c 89 ff 41
0x00007efcdb723e08: 8b 45 08 21 43 40 41 8b 45 0c 21 43 44 41 8b 45

Register to memory mapping:

RAX=0x0000000000000000 is an unknown value
RBX=0x00007efc901723e0 is an unknown value
RCX=0x00007efc9016e890 is an unknown value
RDX=0x0000000000000041 is an unknown value
RSP=0x00007efcc905f650 is pointing into the stack for thread: 0x00000000023a8800
RBP=0x00007efcc905f6c0 is pointing into the stack for thread: 0x00000000023a8800
RSI=0x00007efcc9060f50 is pointing into the stack for thread: 0x00000000023a8800
RDI=0x00007efc90b937a0 is an unknown value
R8 =0x000000000000009a is an unknown value
R9 =0x0000000000000003 is an unknown value
R10=0x0000000000000003 is an unknown value
R11=0x0000000000000000 is an unknown value
R12=0x0000000000000004 is an unknown value
R13=0x0000000000000000 is an unknown value
R14=0x0000000000000002 is an unknown value
R15=0x00007efc90b937a0 is an unknown value
Stack: [0x00007efcc8f63000,0x00007efcc9064000], sp=0x00007efcc905f650, free space=1009k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
V [libjvm.so+0x40805a] PhaseChaitin::Register_Allocate()+0x71a
V [libjvm.so+0x49abe0] Compile::Code_Gen()+0x260
V [libjvm.so+0x49e032] Compile::Compile(ciEnv*, C2Compiler*, ciMethod*, int, bool, bool, bool)+0x14b2
V [libjvm.so+0x3ebeb8] C2Compiler::compile_method(ciEnv*, ciMethod*, int)+0x198
V [libjvm.so+0x4a843a] CompileBroker::invoke_compiler_on_method(CompileTask*)+0xc9a
V [libjvm.so+0x4a93e6] CompileBroker::compiler_thread_loop()+0x5d6
V [libjvm.so+0xa5cbcf] JavaThread::thread_main_inner()+0xdf
V [libjvm.so+0xa5ccfc] JavaThread::run()+0x11c
V [libjvm.so+0x911048] java_start(Thread*)+0x108
C [libpthread.so.0+0x7aa1]
Current CompileTask:
C2:77124624 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)
————— P R O C E S S —————

Java Threads: ( => current thread )
0x00007efcb8024000 JavaThread “Thread-41” daemon [_thread_blocked, id=13990, stack(0x00007efc372f5000,0x00007efc373f6000)]
0x00007efcac372000 JavaThread “http-bio-8989-exec-68” daemon [_thread_blocked, id=13853, stack(0x00007efc3ca41000,0x00007efc3cb42000)]
0x00007efc4002d000 JavaThread “http-bio-8989-exec-67” daemon [_thread_blocked, id=13837, stack(0x00007efc376f9000,0x00007efc377fa000)]
0x00007efc945b6800 JavaThread “http-bio-8989-exec-66” daemon [_thread_blocked, id=13828, stack(0x00007efc37cfd000,0x00007efc37dfe000)]
0x00007efc40028000 JavaThread “http-bio-8989-exec-65” daemon [_thread_blocked, id=13228, stack(0x00007efc374f7000,0x00007efc375f8000)]
0x00007efc993ba800 JavaThread “http-bio-8989-exec-64” daemon [_thread_blocked, id=13227, stack(0x00007efc3ce45000,0x00007efc3cf46000)]
0x00007efcc4012000 JavaThread “http-bio-8989-exec-63” daemon [_thread_blocked, id=13218, stack(0x00007efc3c63f000,0x00007efc3c740000)]
0x00007efc50006000 JavaThread “http-bio-8989-exec-62” daemon [_thread_blocked, id=13217, stack(0x00007efc373f6000,0x00007efc374f7000)]
0x00007efca800c000 JavaThread “http-bio-8989-exec-61” daemon [_thread_blocked, id=13216, stack(0x00007efc3c13a000,0x00007efc3c23b000)]
0x00007efc68004000 JavaThread “http-bio-8989-exec-60” daemon [_thread_blocked, id=13215, stack(0x00007efc3d34a000,0x00007efc3d44b000)]
0x00007efcb0006800 JavaThread “http-bio-8989-exec-59” daemon [_thread_blocked, id=13214, stack(0x00007efc3d54c000,0x00007efc3d64d000)]
0x00007efca8044800 JavaThread “http-bio-8989-exec-58” daemon [_thread_blocked, id=13213, stack(0x00007efc375f8000,0x00007efc376f9000)]
0x00007efc902c5800 JavaThread “http-bio-8989-exec-57” daemon [_thread_blocked, id=13212, stack(0x00007efc36ef1000,0x00007efc36ff2000)]
0x00007efcb4010800 JavaThread “http-bio-8989-exec-56” daemon [_thread_blocked, id=13211, stack(0x00007efc3d148000,0x00007efc3d249000)]
0x00007efc4408c800 JavaThread “http-bio-8989-exec-55” daemon [_thread_blocked, id=13210, stack(0x00007efc3e053000,0x00007efc3e154000)]
0x00007efcb4036800 JavaThread “http-bio-8989-exec-54” daemon [_thread_blocked, id=13201, stack(0x00007efc371f4000,0x00007efc372f5000)]
0x00007efcb0018800 JavaThread “http-bio-8989-exec-53” daemon [_thread_blocked, id=13200, stack(0x00007efc3c23b000,0x00007efc3c33c000)]
0x00007efc6c1e1000 JavaThread “http-bio-8989-exec-52” daemon [_thread_blocked, id=13199, stack(0x00007efc3c43d000,0x00007efc3c53e000)]
0x00007efc58005000 JavaThread “http-bio-8989-exec-51” daemon [_thread_blocked, id=13198, stack(0x00007efc3c039000,0x00007efc3c13a000)]
0x00007efc74006800 JavaThread “http-bio-8989-exec-50” daemon [_thread_blocked, id=13197, stack(0x00007efc9da1e000,0x00007efc9db1f000)]
0x00007efc54005800 JavaThread “AMI Worker 2” daemon [_thread_blocked, id=13120, stack(0x00007efcc2253000,0x00007efcc2354000)]
0x00007efc54003800 JavaThread “AMI Worker 1” daemon [_thread_blocked, id=13119, stack(0x00007efc36df0000,0x00007efc36ef1000)]
0x00007efcbc02f800 JavaThread “http-bio-8989-exec-49” daemon [_thread_blocked, id=13091, stack(0x00007efc37afb000,0x00007efc37bfc000)]
0x00007efc80083000 JavaThread “http-bio-8989-exec-48” daemon [_thread_blocked, id=13090, stack(0x00007efc3cd44000,0x00007efc3ce45000)]
0x00007efc4c01b000 JavaThread “http-bio-8989-exec-47” daemon [_thread_blocked, id=13089, stack(0x00007efc3d44b000,0x00007efc3d54c000)]
0x00007efca403a800 JavaThread “http-bio-8989-exec-46” daemon [_thread_blocked, id=13088, stack(0x00007efc36cef000,0x00007efc36df0000)]
0x00007efcc4023000 JavaThread “http-bio-8989-exec-45” daemon [_thread_blocked, id=13087, stack(0x00007efc3d249000,0x00007efc3d34a000)]
0x00007efc8468a000 JavaThread “http-bio-8989-exec-44” daemon [_thread_blocked, id=13086, stack(0x00007efc3d64d000,0x00007efc3d74e000)]
0x000000000252f800 JavaThread “http-bio-8989-AsyncTimeout” daemon [_thread_blocked, id=13032, stack(0x00007efc3d74e000,0x00007efc3d84f000)]
0x000000000252e800 JavaThread “http-bio-8989-Acceptor-0” daemon [_thread_in_native, id=13031, stack(0x00007efc3d84f000,0x00007efc3d950000)]
0x000000000252d000 JavaThread “ContainerBackgroundProcessor[StandardEngine[Catalina]]” daemon [_thread_blocked, id=13030, stack(0x00007efc3d950000,0x00007efc3da51000)]
0x00007efc44085800 JavaThread “Thread-37” daemon [_thread_blocked, id=13029, stack(0x00007efc3f0f5000,0x00007efc3f1f6000)]
0x00007efc78971000 JavaThread “Thread-36” daemon [_thread_blocked, id=13028, stack(0x00007efc3de51000,0x00007efc3df52000)]
0x00007efc78967000 JavaThread “Thread-33” daemon [_thread_blocked, id=13025, stack(0x00007efc3e154000,0x00007efc3e255000)]
0x00007efc788fe000 JavaThread “Tibrv Dispatcher” daemon [_thread_in_native, id=13024, stack(0x00007efc3e255000,0x00007efc3e356000)]
0x00007efc788fa800 JavaThread “RVAgentManagerTransport dispatch thread” daemon [_thread_in_native, id=13023, stack(0x00007efc3e356000,0x00007efc3e457000)]
0x00007efc788f8000 JavaThread “RacSubManager” daemon [_thread_blocked, id=13022, stack(0x00007efc3e457000,0x00007efc3e558000)]
0x00007efc788d8800 JavaThread “InitialListTimer” daemon [_thread_blocked, id=13021, stack(0x00007efc3e558000,0x00007efc3e659000)]
0x00007efc788d7000 JavaThread “RvHeartBeatTimer” daemon [_thread_blocked, id=13020, stack(0x00007efc3e659000,0x00007efc3e75a000)]
0x00007efc788d4000 JavaThread “AgentAliveMonitor dispatch thread” daemon [_thread_in_native, id=13019, stack(0x00007efc3ebf2000,0x00007efc3ecf3000)]
0x00007efc788aa000 JavaThread “AgentEventMonitor dispatch thread” daemon [_thread_in_native, id=13018, stack(0x00007efc3ecf3000,0x00007efc3edf4000)]
0x00007efc787c2800 JavaThread “Thread-30” daemon [_thread_blocked, id=13017, stack(0x00007efc3ea2b000,0x00007efc3eb2c000)]
0x00007efc4c009800 JavaThread “Thread-29(HawkConfig)” daemon [_thread_blocked, id=13016, stack(0x00007efc3eff4000,0x00007efc3f0f5000)]
0x00007efc4c007800 JavaThread “Thread-28” daemon [_thread_in_native, id=13015, stack(0x00007efc3f1f6000,0x00007efc3f2f7000)]
0x00007efc7827b800 JavaThread “Thread-27” daemon [_thread_blocked, id=13012, stack(0x00007efc3f2f7000,0x00007efc3f3f8000)]
0x00007efc780fb800 JavaThread “Thread-26(HawkConfig)” daemon [_thread_blocked, id=13011, stack(0x00007efc3f5f8000,0x00007efc3f6f9000)]
0x00007efc780f9800 JavaThread “Thread-25” daemon [_thread_in_native, id=13010, stack(0x00007efc3f6f9000,0x00007efc3f7fa000)]
0x00007efc78060000 JavaThread “Thread-24(MonitoringManagement)” daemon [_thread_blocked, id=13009, stack(0x00007efc3f7fa000,0x00007efc3f8fb000)]
0x00007efc7805e000 JavaThread “Thread-23” daemon [_thread_in_native, id=13008, stack(0x00007efc3f8fb000,0x00007efc3f9fc000)]
0x00007efc78210000 JavaThread “Thread-21(quality)” daemon [_thread_blocked, id=13007, stack(0x00007efc3f9fc000,0x00007efc3fafd000)]
0x00007efc7820f800 JavaThread “Thread-20” daemon [_thread_in_native, id=13006, stack(0x00007efc3fafd000,0x00007efc3fbfe000)]
0x00007efc6c1d6800 JavaThread “Thread-17” daemon [_thread_blocked, id=13003, stack(0x00007efc5d6fb000,0x00007efc5d7fc000)]
0x00007efc6c1ff800 JavaThread “CommitQueue4_0” daemon [_thread_in_native, id=13002, stack(0x00007efc3fbfe000,0x00007efc3fcff000)]
0x00007efc6c1fd000 JavaThread “NormalQueue4_2” daemon [_thread_in_native, id=13001, stack(0x00007efc3feff000,0x00007efc40000000)]
0x00007efc6c1fb000 JavaThread “NormalQueue4_1” daemon [_thread_in_native, id=13000, stack(0x00007efc5c0e9000,0x00007efc5c1ea000)]
0x00007efc6c1f9000 JavaThread “NormalQueue4_0” daemon [_thread_in_native, id=12999, stack(0x00007efc5c1ea000,0x00007efc5c2eb000)]
0x00007efc6c1f5000 JavaThread “CommitQueue3_0” daemon [_thread_in_native, id=12998, stack(0x00007efc5c2eb000,0x00007efc5c3ec000)]
0x00007efc6c1f3000 JavaThread “NormalQueue3_2” daemon [_thread_in_native, id=12997, stack(0x00007efc5c3ec000,0x00007efc5c4ed000)]
0x00007efc6c1f1800 JavaThread “NormalQueue3_1” daemon [_thread_in_native, id=12996, stack(0x00007efc5c4ed000,0x00007efc5c5ee000)]
0x00007efc6c1ef800 JavaThread “NormalQueue3_0” daemon [_thread_in_native, id=12995, stack(0x00007efc5c5ee000,0x00007efc5c6ef000)]
0x00007efc6c1eb800 JavaThread “CommitQueue2_0” daemon [_thread_in_native, id=12994, stack(0x00007efc5c6ef000,0x00007efc5c7f0000)]
0x00007efc6c1ea000 JavaThread “NormalQueue2_2” daemon [_thread_in_native, id=12993, stack(0x00007efc5c7f0000,0x00007efc5c8f1000)]
0x00007efc6c1e9800 JavaThread “NormalQueue2_1” daemon [_thread_in_native, id=12992, stack(0x00007efc5c8f1000,0x00007efc5c9f2000)]
0x00007efc6c1de800 JavaThread “NormalQueue2_0” daemon [_thread_in_native, id=12991, stack(0x00007efc5c9f2000,0x00007efc5caf3000)]
0x00007efc6c124000 JavaThread “Thread-16(AUTH_quality)” daemon [_thread_blocked, id=12989, stack(0x00007efc5ccf3000,0x00007efc5cdf4000)]
0x00007efc6c116800 JavaThread “Thread-15” daemon [_thread_in_native, id=12988, stack(0x00007efc5cdf4000,0x00007efc5cef5000)]
0x00007efc6c101000 JavaThread “Timer-0” daemon [_thread_blocked, id=12987, stack(0x00007efc5cef5000,0x00007efc5cff6000)]
0x00007efc6c0f7800 JavaThread “net.sf.ehcache.CacheManager@5a6a4d5b” daemon [_thread_blocked, id=12986, stack(0x00007efc5cff6000,0x00007efc5d0f7000)]
0x00007efc6c0c6000 JavaThread “CommitQueue1_0” daemon [_thread_in_native, id=12985, stack(0x00007efc5d0f7000,0x00007efc5d1f8000)]
0x00007efc6c0c4000 JavaThread “NormalQueue1_2” daemon [_thread_in_native, id=12984, stack(0x00007efc5d1f8000,0x00007efc5d2f9000)]
0x00007efc6c0c2800 JavaThread “NormalQueue1_1” daemon [_thread_in_native, id=12983, stack(0x00007efc5d2f9000,0x00007efc5d3fa000)]
0x00007efc6c0bd800 JavaThread “NormalQueue1_0” daemon [_thread_in_native, id=12982, stack(0x00007efc5d3fa000,0x00007efc5d4fb000)]
0x00007efc6c09e800 JavaThread “HB-1” daemon [_thread_in_native, id=12980, stack(0x00007efc9c013000,0x00007efc9c114000)]
0x00007efc6c09b800 JavaThread “HB-0” daemon [_thread_in_native, id=12979, stack(0x00007efc9c114000,0x00007efc9c215000)]
0x00007efc6c09a000 JavaThread “SYNC-0” daemon [_thread_in_native, id=12978, stack(0x00007efc9c215000,0x00007efc9c316000)]
0x00007efc6c067800 JavaThread “ImstMgmt” daemon [_thread_in_native, id=12973, stack(0x00007efc9c316000,0x00007efc9c417000)]
0x00007efc6c028800 JavaThread “HawkImplantDisp” daemon [_thread_in_native, id=12972, stack(0x00007efc9c417000,0x00007efc9c518000)]
0x00007efc781a1800 JavaThread “Thread-11” daemon [_thread_blocked, id=12967, stack(0x00007efc9cf19000,0x00007efc9d01a000)]
0x00000000030fe000 JavaThread “GC Daemon” daemon [_thread_blocked, id=12540, stack(0x00007efcc80bc000,0x00007efcc81bd000)]
0x00000000023bf800 JavaThread “Service Thread” daemon [_thread_blocked, id=12510, stack(0x00007efcc8d61000,0x00007efcc8e62000)]
0x00000000023aa800 JavaThread “C1 CompilerThread2” daemon [_thread_blocked, id=12509, stack(0x00007efcc8e62000,0x00007efcc8f63000)]
=>0x00000000023a8800 JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]
0x00000000023a5800 JavaThread “C2 CompilerThread0” daemon [_thread_blocked, id=12507, stack(0x00007efcc9064000,0x00007efcc9165000)]
0x00000000023a4000 JavaThread “Signal Dispatcher” daemon [_thread_blocked, id=12506, stack(0x00007efcc9165000,0x00007efcc9266000)]
0x000000000236c000 JavaThread “Finalizer” daemon [_thread_blocked, id=12505, stack(0x00007efcc9266000,0x00007efcc9367000)]
0x000000000236a000 JavaThread “Reference Handler” daemon [_thread_blocked, id=12504, stack(0x00007efcc9367000,0x00007efcc9468000)]
0x00000000022f6800 JavaThread “main” [_thread_in_native, id=12496, stack(0x00007ffec1ae5000,0x00007ffec1be5000)]

Other Threads:
0x0000000002364800 VMThread [stack: 0x00007efcc9468000,0x00007efcc9569000] [id=12503]
0x00000000023c2800 WatcherThread [stack: 0x00007efcc8c60000,0x00007efcc8d61000] [id=12511]

VM state:not at safepoint (normal execution)

VM Mutex/Monitor currently owned by a thread: None

Heap:
PSYoungGen total 46080K, used 23811K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 51% used [0x00000000f5580000,0x00000000f6ca0dc0,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

Card table byte_map: [0x00007efccb5c4000,0x00007efccb6c5000] byte_map_base: 0x00007efccaec4000

Marking Bits: (ParMarkBitMap*) 0x00007efcdc2bd660
Begin Bits: [0x00007efcc3000000, 0x00007efcc3800000)
End Bits: [0x00007efcc3800000, 0x00007efcc4000000)

Polling page: 0x00007efcdc323000

CodeCache: size=245760Kb used=27975Kb max_used=28034Kb free=217784Kb
bounds [0x00007efccba85000, 0x00007efccd625000, 0x00007efcdaa85000]
total_blobs=7417 nmethods=6888 adapters=442
compilation: enabled

Compilation events (10 events):
Event: 75538.391 Thread 0x00000000023a5800 8963 4 org.hsqldb.Expression::collectInGroupByExpressions (61 bytes)
Event: 75538.393 Thread 0x00000000023a8800 nmethod 8962 0x00007efcccac8fd0 code [0x00007efcccac9140, 0x00007efcccac92b8]
Event: 75538.394 Thread 0x00000000023a8800 8964 4 org.hsqldb.Expression::isConstant (118 bytes)
Event: 75538.397 Thread 0x00000000023a8800 nmethod 8964 0x00007efcccb62110 code [0x00007efcccb62320, 0x00007efcccb62468]
Event: 75538.398 Thread 0x00000000023a5800 nmethod 8963 0x00007efccbf127d0 code [0x00007efccbf12ae0, 0x00007efccbf12e70]
Event: 76104.554 Thread 0x00000000023a8800 8965 4 com.tibco.tibrv.TibrvMsg::writeBool (164 bytes)
Event: 76104.565 Thread 0x00000000023a8800 nmethod 8965 0x00007efccca4e190 code [0x00007efccca4e420, 0x00007efccca4e7f0]
Event: 76857.543 Thread 0x00000000023aa800 8966 1 com.tibco.uac.monitor.server.MonitorServer::access$000 (4 bytes)
Event: 76857.544 Thread 0x00000000023aa800 nmethod 8966 0x00007efccc3c07d0 code [0x00007efccc3c0920, 0x00007efccc3c0a10]
Event: 77124.580 Thread 0x00000000023a8800 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)

GC Heap History (10 events):
Event: 70999.039 GC heap before
{Heap before GC invocations=77 (full 8):
PSYoungGen total 50688K, used 48256K [0x00000000f5580000, 0x00000000f8a80000, 0x0000000100000000)
eden space 48128K, 100% used [0x00000000f5580000,0x00000000f8480000,0x00000000f8480000)
from space 2560K, 5% used [0x00000000f8800000,0x00000000f8820000,0x00000000f8a80000)
to space 3072K, 0% used [0x00000000f8480000,0x00000000f8480000,0x00000000f8780000)
ParOldGen total 122368K, used 45526K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c75800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 70999.045 GC heap after
Heap after GC invocations=77 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 72379.077 GC heap before
{Heap before GC invocations=78 (full 8):
PSYoungGen total 48128K, used 47776K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 100% used [0x00000000f5580000,0x00000000f8400000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 72379.082 GC heap after
Heap after GC invocations=78 (full 8):
PSYoungGen total 48640K, used 128K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 73744.273 GC heap before
{Heap before GC invocations=79 (full 8):
PSYoungGen total 48640K, used 47232K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 100% used [0x00000000f5580000,0x00000000f8380000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 73744.279 GC heap after
Heap after GC invocations=79 (full 8):
PSYoungGen total 47104K, used 96K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 75098.826 GC heap before
{Heap before GC invocations=80 (full 8):
PSYoungGen total 47104K, used 46688K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 100% used [0x00000000f5580000,0x00000000f8300000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 75098.831 GC heap after
Heap after GC invocations=80 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 76440.356 GC heap before
{Heap before GC invocations=81 (full 8):
PSYoungGen total 48128K, used 46240K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 100% used [0x00000000f5580000,0x00000000f8280000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 76440.360 GC heap after
Heap after GC invocations=81 (full 8):
PSYoungGen total 46080K, used 128K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

In such cases, the following are the checks we need to do (see the sketch after this list):

  • Check the ulimit for the core file size
    • It is supposed to be unlimited
  • Check the limit for the number of open files
    • You can use the following command to count the open files:
    • lsof | grep <user> | grep -v grep | wc -l

    • Then further check limits.conf for the values set from your end, for example:
    • <user> soft nofile 350000
      <user> hard nofile 350000
      <user> soft nproc 65536
      <user> hard nproc 65536
      <user> soft stack 10240
      <user> hard stack 10240
      <user> soft sigpending 1548380
      <user> hard sigpending 1548380
  • Hopefully the problem will be resolved.
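A minimal sketch of these checks from a shell (the user name tibco and <pid_of_admin> are placeholders):

# su - tibco -c 'ulimit -c'
# su - tibco -c 'ulimit -n'
# ls /proc/<pid_of_admin>/fd | wc -l

The first value should be unlimited, the second should match the nofile value from limits.conf, and the third gives the open-file count of the Administrator process itself.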

Happy Troubleshooting Guys 🙂

Operating System, Redhat / CEntOS / Oracle Linux

/dev/random vs /dev/urandom

If you want random data on a Linux/Unix type OS, the standard way is to use /dev/random or /dev/urandom. These devices are special files: they can be read like normal files, and the data read from them is generated from multiple sources of entropy in the system which provide the randomness.

/dev/random will block once the entropy pool is exhausted. It will remain blocked until additional data has been collected from the available sources of entropy. This can slow down random data generation.

/dev/urandom will not block. Instead it will reuse the internal pool to produce more pseudo-random bits.

/dev/urandom is best used when:

  • You just want a large file with random data for some kind of testing.
  • You are using the dd command to wipe data off a disk by replacing it with random data.
  • Almost everywhere else where you don’t have a really good reason to use /dev/random instead.

/dev/random is likely to be the better choice when:

  • Randomness is critical to the security of cryptography in your application – one-time pads, key generation.
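For instance, creating a 100 MB test file or overwriting a disk with random data (the file and device names are examples; double-check the target device before wiping anything):

# dd if=/dev/urandom of=/tmp/testfile bs=1M count=100
# dd if=/dev/urandom of=/dev/sdX bs=1M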
Rendezvous, TIBCO

TIBCO – TRDP vs PGM

These are the protocols used by the RV messaging service.

In earlier versions, RV used the PGM protocol; now we use UDP with TRDP when sending messages in RV. Whenever we install RV we need to select the protocol that we want to use: PGM or UDP/TRDP.

TRDP is used to send acknowledgements back to the publisher in case of failures, to sequence message delivery, and to hide the network details. TRDP (TIBCO Reliable Datagram Protocol) is a proprietary protocol running on top of UDP.

It brings mechanisms to manage reliable message delivery in a broadcast/multicast paradigm; this includes:
– message numbering
– negative acknowledgement

TRDP is used by RV. It has three qualities of service: Reliable, Certified Messaging and Distributed Queue. In all of them the sender stores the message. A Reliable sender stores the message it broadcasts for 60 seconds. In Certified Messaging, the sender stores the message in a ledger file until it receives confirmation from all the certified receivers. In Distributed Queue, the message is stored in the process ledger. Certified Messaging and DQ assure sequencing as well. Overall, TRDP assures the delivery of the message.

 

 

Enterprise Messaging Service, TIBCO

EMS Queue Requester v/s EMS Queue Sender

JMS Queue Sender :- It simply sends the message to the specified queue and does not wait for any response.

JMS Queue Requestor :- It sends the message to the specified queue and waits for a response from the JMS client, like a SOAP Request Reply activity.

The flow will not proceed until it gets the response or the request gets timed out. This activity uses temporary queues to ensure that reply messages are received only by the process that sent the request.

Input tab :-

Queue

Reply to Queue :- It is the name of the queue on which this activity awaits the reply. If left blank, a temporary queue will be created automatically by TIBCO.

(Run the show queues command in the TIBCO EMS admin tool; all queues whose names start with * are either temporary or dynamic queues.)

We can also give the name of a temporary queue.

Request Timeout :- If the response comes within the specified time limit, the flow proceeds; if not, the flow errors out.

We have to use the Reply to JMS Message activity to reply to the message pending on the temporary queue. Once the response is sent back to the temporary queue, the temporary queue gets deleted automatically.


Redhat / CEntOS / Oracle Linux, Ubuntu

10 Important “rsync” Commands – UNIX

Rsync (Remote Sync) is one of the most commonly used commands for copying and synchronizing files and directories remotely as well as locally on Linux/Unix systems. With the help of the rsync command you can copy and synchronize your data remotely and locally across directories, disks and networks, perform data backups, and mirror data between two Linux machines.

This article explains 10 basic and advanced usages of the rsync command to transfer your files remotely and locally on Linux based machines. You don’t need to be the root user to run the rsync command.

Some advantages and features of the rsync command
  1. It efficiently copies and syncs files to or from a remote system.
  2. Supports copying links, devices, owners, groups and permissions.
  3. It’s faster than scp (Secure Copy) because rsync uses a remote-update protocol which allows it to transfer just the differences between two sets of files. The first time, it copies the whole content of a file or a directory from source to destination, but on subsequent runs it copies only the changed blocks and bytes to the destination (see the quick demo after this list).
  4. Rsync consumes less bandwidth as it compresses data while sending and decompresses it while receiving.
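You can see the delta behavior by running the same copy twice (the file and path names here are illustrative):

# rsync -avh bigfile.iso /tmp/backups/    (first run: the whole file is sent)
# rsync -avh bigfile.iso /tmp/backups/    (second run: the file is unchanged, so almost nothing is sent)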
Basic syntax of rsync command
# rsync options source destination
Some common options used with rsync commands
  1. -v : verbose
  2. -r : copies data recursively (but doesn’t preserve timestamps and permissions while transferring data)
  3. -a : archive mode, archive mode allows copying files recursively and it also preserves symbolic links, file permissions, user & group ownerships and timestamps
  4. -z : compress file data
  5. -h : human-readable, output numbers in a human-readable format

 

Install rsync in your Linux machine

We can install the rsync package with the help of the following command.

# yum install rsync (On Red Hat based systems)
# apt-get install rsync (On Debian based systems)

1. Copy/Sync Files and Directory Locally

Copy/Sync a File on a Local Computer

The following command will sync a single file on a local machine from one location to another. Here in this example, a file named backup.tar needs to be copied or synced to the /tmp/backups/ folder.

[root@tecmint]# rsync -zvh backup.tar /tmp/backups/
created directory /tmp/backups
backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10

In the above example, you can see that if the destination directory does not already exist, rsync will create it automatically.

Copy/Sync a Directory on Local Computer

The following command will transfer or sync all the files from one directory to a different directory on the same machine. Here in this example, /root/rpmpkgs contains some rpm package files and you want that directory to be copied inside the /tmp/backups/ folder.

[root@tecmint]# rsync -avzh /root/rpmpkgs /tmp/backups/
sending incremental file list
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/nagios-3.5.0.tar.gz
rpmpkgs/nagios-plugins-1.4.16.tar.gz
sent 4.99M bytes  received 92 bytes  3.33M bytes/sec
total size is 4.99M  speedup is 1.00

2. Copy/Sync Files and Directory to or From a Server

Copy a Directory from Local Server to a Remote Server

This command will sync a directory from a local machine to a remote machine. For example: there is a folder “rpmpkgs” on your local computer which contains some RPM packages, and you want to send that local directory’s contents to a remote server; you can use the following command.

[root@tecmint]$ rsync -avz rpmpkgs/ root@192.168.0.101:/home/
root@192.168.0.101's password:
sending incremental file list
./
httpd-2.2.3-82.el5.centos.i386.rpm
mod_ssl-2.2.3-82.el5.centos.i386.rpm
nagios-3.5.0.tar.gz
nagios-plugins-1.4.16.tar.gz
sent 4993369 bytes  received 91 bytes  399476.80 bytes/sec
total size is 4991313  speedup is 1.00
Copy/Sync a Remote Directory to a Local Machine

This command will help you sync a remote directory to a local directory. Here in this example, the directory /home/tarunika/rpmpkgs on a remote server is being copied to /tmp/myrpms on your local computer.

[root@tecmint]# rsync -avzh root@192.168.0.100:/home/tarunika/rpmpkgs /tmp/myrpms
root@192.168.0.100's password:
receiving incremental file list
created directory /tmp/myrpms
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/nagios-3.5.0.tar.gz
rpmpkgs/nagios-plugins-1.4.16.tar.gz
sent 91 bytes  received 4.99M bytes  322.16K bytes/sec
total size is 4.99M  speedup is 1.00

3. Rsync Over SSH

With rsync, we can use SSH (Secure Shell) for data transfer. Using the SSH protocol while transferring data ensures that it travels over a secured, encrypted connection, so that nobody can read your data while it is being transferred over the wire on the internet.

Also, when we use rsync we need to provide the user/root password to accomplish that particular task, so using the SSH option sends your logins in an encrypted manner and your password stays safe.

Copy a File from a Remote Server to a Local Server with SSH

To specify a protocol with rsync you need to give the “-e” option with the protocol name you want to use. Here in this example, we will be using “ssh” with the “-e” option to perform the data transfer.

[root@tecmint]# rsync -avzhe ssh root@192.168.0.100:/root/install.log /tmp/
root@192.168.0.100's password:
receiving incremental file list
install.log
sent 30 bytes  received 8.12K bytes  1.48K bytes/sec
total size is 30.74K  speedup is 3.77
Copy a File from a Local Server to a Remote Server with SSH
[root@tecmint]# rsync -avzhe ssh backup.tar root@192.168.0.100:/backups/
root@192.168.0.100's password:
sending incremental file list
backup.tar
sent 14.71M bytes  received 31 bytes  1.28M bytes/sec
total size is 16.18M  speedup is 1.10


4. Show Progress While Transferring Data with rsync

To show progress while transferring data from one machine to another, we can use the ‘--progress’ option. It displays the files and the time remaining to complete the transfer.

[root@tecmint]# rsync -avzhe ssh --progress /home/rpmpkgs root@192.168.0.100:/root/rpmpkgs
root@192.168.0.100's password:
sending incremental file list
created directory /root/rpmpkgs
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
1.02M 100%        2.72MB/s        0:00:00 (xfer#1, to-check=3/5)
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
99.04K 100%  241.19kB/s        0:00:00 (xfer#2, to-check=2/5)
rpmpkgs/nagios-3.5.0.tar.gz
1.79M 100%        1.56MB/s        0:00:01 (xfer#3, to-check=1/5)
rpmpkgs/nagios-plugins-1.4.16.tar.gz
2.09M 100%        1.47MB/s        0:00:01 (xfer#4, to-check=0/5)
sent 4.99M bytes  received 92 bytes  475.56K bytes/sec
total size is 4.99M  speedup is 1.00

5. Use of –include and –exclude Options

These two options allow us to include and exclude files by specifying parameters. They help us specify the files or directories you want included in your sync, and exclude the files and folders you don’t want transferred.

Here in this example, the rsync command will include only those files and directories which start with ‘R’ and exclude all others.

[root@tecmint]# rsync -avze ssh --include 'R*' --exclude '*' root@192.168.0.101:/var/lib/rpm/ /root/rpm
root@192.168.0.101's password:
receiving incremental file list
created directory /root/rpm
./
Requirename
Requireversion
sent 67 bytes  received 167289 bytes  7438.04 bytes/sec
total size is 434176  speedup is 2.59

6. Use of –delete Option

If a file or directory does not exist at the source, but already exists at the destination, you might want to delete that existing file/directory from the target while syncing.

We can use the ‘--delete‘ option to delete files that are not present in the source directory.

Source and target are in sync. Now create a new file test.txt at the target.

[root@tecmint]# touch test.txt
[root@tecmint]# rsync -avz --delete root@192.168.0.100:/var/lib/rpm/ .
Password:
receiving file list ... done
deleting test.txt
./
sent 26 bytes  received 390 bytes  48.94 bytes/sec
total size is 45305958  speedup is 108908.55

The target has the new file called test.txt; when synchronized with the source using the ‘--delete‘ option, it removed the file test.txt.

7. Set the Max Size of Files to be Transferred

You can specify the max file size to be transferred or synced using the “--max-size” option. Here in this example, the max file size is 200k, so this command will transfer only those files which are equal to or smaller than 200k.

[root@tecmint]# rsync -avzhe ssh --max-size='200k' /var/lib/rpm/ root@192.168.0.100:/root/tmprpm
root@192.168.0.100's password:
sending incremental file list
created directory /root/tmprpm
./
Conflictname
Group
Installtid
Name
Provideversion
Pubkeys
Requireversion
Sha1header
Sigmd5
Triggername
__db.001
sent 189.79K bytes  received 224 bytes  13.10K bytes/sec
total size is 38.08M  speedup is 200.43

8. Automatically Delete source Files after successful Transfer

Now, suppose you have a main web server and a data backup server; you create a daily backup and sync it with your backup server, and you don’t want to keep that local copy of the backup on your web server.

So, will you wait for the transfer to complete and then delete that local backup file manually? Of course NO. This automatic deletion can be done using the ‘--remove-source-files‘ option.

[root@tecmint]# rsync --remove-source-files -zvh backup.tar /tmp/backups/
backup.tar
sent 14.71M bytes  received 31 bytes  4.20M bytes/sec
total size is 16.18M  speedup is 1.10
[root@tecmint]# ll backup.tar
ls: backup.tar: No such file or directory

9. Do a Dry Run with rsync

If you are a newbie using rsync and don’t know exactly what your command is going to do, rsync could really mess up the things in your destination folder, and undoing the damage can be a tedious job.

Use of the ‘--dry-run‘ option will not make any changes; it only does a dry run of the command and shows the output. If the output shows exactly what you want to do, you can remove the ‘--dry-run‘ option from your command and run it on the terminal.

[root@tecmint]# rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/
backup.tar
sent 35 bytes  received 15 bytes  100.00 bytes/sec
total size is 16.18M  speedup is 323584.00 (DRY RUN)

10. Set Bandwidth Limit and Transfer File

You can set the bandwidth limit while transferring data from one machine to another with the help of the ‘--bwlimit‘ option. This option helps us limit I/O bandwidth.

[root@tecmint]# rsync --bwlimit=100 -avzhe ssh  /var/lib/rpm/  root@192.168.0.100:/root/tmprpm/
root@192.168.0.100's password:
sending incremental file list
sent 324 bytes  received 12 bytes  61.09 bytes/sec
total size is 38.08M  speedup is 113347.05

Also, by default rsync syncs changed blocks and bytes only; if you explicitly want to sync the whole file, use the ‘-W‘ option with it.

[root@tecmint]# rsync -zvhW backup.tar /tmp/backups/backup.tar
backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10


rsync -azP “<user>@<host>:<absolute path>” <location to be copied>   (-P implies --partial --progress, so a separate --progress flag is redundant)

Source :- tecmint.com
Operating System, Redhat / CEntOS / Oracle Linux, Ubuntu

Fork: retry: Resource temporarily unavailable

Issue: 

It was reported that a particular application user was not able to log in.

Troubleshoot:
1. Tried logging in to the system with the root user; it was fine.
2. Tried to switch user; it failed with the error “Write Failed; Broken Pipe”.
3. Created a file and that worked.
4. Tried switching the user again. This time it went through.
5. Tried running some jobs with the user. It threw the error “fork: retry: Resource temporarily unavailable”.
6. Then checked the “/etc/security/limits.d/90-nproc.conf” file to find that all the users
are given an nproc limit of 1024.

Resolution:
1. Changed the nproc limit to a higher value, which solved the issue.
2. The value was changed to 4096.
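After the change, the relevant line in the file looks like this (the stock file ships with 1024; 4096 is the value chosen here):

# /etc/security/limits.d/90-nproc.conf
*          soft    nproc     4096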
Redhat / CEntOS / Oracle Linux, Ubuntu

How to use parallel ssh (PSSH) for executing ssh in parallel on a number of Linux/Unix/BSD servers

Recently I came across a nice little nifty tool called pssh to run a single command on multiple Linux / UNIX / BSD servers. You can easily increase your productivity with this SSH tool.
More about pssh
pssh is a command line tool for executing ssh in parallel on a number of hosts. Its specialties include:
  1. Sending input to all of the processes
  2. Inputting a password to ssh
  3. Saving output to files
  4. IT/sysadmin task automation, such as patching servers
  5. Timing out and more
Let us see how to install and use pssh on Linux and Unix-like system.
Installation
You can install pssh as per your Linux and Unix variant. Once the package is installed, you get parallel versions of the openssh tools. Included in the installation:
  1. Parallel ssh (pssh command)
  2. Parallel scp (pscp command )
  3. Parallel rsync (prsync command)
  4. Parallel nuke (pnuke command)
  5. Parallel slurp (pslurp command)
Install pssh on Debian/Ubuntu Linux
Type the following apt-get command/apt command to install pssh:
$ sudo apt install pssh
OR
$ sudo apt-get install pssh
Fig.01: Installing pssh on Debian/Ubuntu Linux

Install pssh on Apple MacOS X
Type the following brew command:
$ brew install pssh
Fig.02: Installing pssh on MacOS Unix

Install pssh on FreeBSD Unix
Type any one of the following commands:
# cd /usr/ports/security/pssh/ && make install clean
OR
# pkg install pssh
Fig.03: Installing pssh on FreeBSD

Install pssh on RHEL/CentOS/Fedora Linux
First turn on the EPEL repo and type the following yum command:
$ sudo yum install pssh
Fig.04: Installing pssh on RHEL/CentOS/Red Hat Enterprise Linux

Install pssh on Fedora Linux
Type the following dnf command:
$ sudo dnf install pssh
Fig.05: Installing pssh on Fedora

Install pssh on Arch Linux
Type the following command:
$ sudo pacman -S python-pip
$ pip install pssh
How to use pssh command
First you need to create a text file, the hosts file, from which pssh reads host names. The syntax is pretty simple: each line in the hosts file is of the form [user@]host[:port], and the file can include blank lines and comment lines beginning with “#”. Here is my sample file named ~/.pssh_hosts_files:
$ cat ~/.pssh_hosts_files
vivek@dellm6700
root@192.168.2.30
root@192.168.2.45
root@192.168.2.46

Run the date command on all hosts:
$ pssh -i -h ~/.pssh_hosts_files date
Sample outputs:
[1] 18:10:10 [SUCCESS] root@192.168.2.46 Sun Feb 26 18:10:10 IST 2017
[2] 18:10:10 [SUCCESS] vivek@dellm6700 Sun Feb 26 18:10:10 IST 2017
[3] 18:10:10 [SUCCESS] root@192.168.2.45 Sun Feb 26 18:10:10 IST 2017
[4] 18:10:10 [SUCCESS] root@192.168.2.30 Sun Feb 26 18:10:10 IST 2017
Run the uptime command on each host:
$ pssh -i -h ~/.pssh_hosts_files uptime
Sample outputs:
[1] 18:11:15 [SUCCESS] root@192.168.2.45 18:11:15 up 2:29, 0 users, load average: 0.00, 0.00, 0.00
[2] 18:11:15 [SUCCESS] vivek@dellm6700 18:11:15 up 19:06, 0 users, load average: 0.13, 0.25, 0.27
[3] 18:11:15 [SUCCESS] root@192.168.2.46 18:11:15 up 1:55, 0 users, load average: 0.00, 0.00, 0.00
[4] 18:11:15 [SUCCESS] root@192.168.2.30 6:11PM up 1 day, 21:38, 0 users, load averages: 0.12, 0.14, 0.09
You can now automate common sysadmin tasks such as patching all servers:
$ pssh -h ~/.pssh_hosts_files -- sudo yum -y update
OR
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y update
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y upgrade
How do I use pssh to copy file to all servers?
The syntax is:
pscp -h ~/.pssh_hosts_files src dest
To copy $HOME/demo.txt to /tmp/ on all servers, enter:
$ pscp -h ~/.pssh_hosts_files $HOME/demo.txt /tmp/
Sample outputs:
[1] 18:17:35 [SUCCESS] vivek@dellm6700
[2] 18:17:35 [SUCCESS] root@192.168.2.45
[3] 18:17:35 [SUCCESS] root@192.168.2.46
[4] 18:17:35 [SUCCESS] root@192.168.2.30
Or use the prsync command for efficient copying of files:
$ prsync -h ~/.pssh_hosts_files /etc/passwd /tmp/
$ prsync -h ~/.pssh_hosts_files *.html /var/www/html/
How do I kill processes in parallel on a number of hosts?
Use the pnuke command for killing processes in parallel on a number of hosts. The syntax is:
$ pnuke -h ~/.pssh_hosts_files process_name
### kill nginx and firefox on hosts:
$ pnuke -h ~/.pssh_hosts_files firefox
$ pnuke -h ~/.pssh_hosts_files nginx

See pssh/pscp command man pages for more information.
Conclusion
pssh is a pretty good tool for parallel SSH command execution on many servers. It is quite useful if you have 5 or 10 servers. Nevertheless, if you need to do something complicated you should look into Ansible and co.
Elasticsearch Logstash and Kibana

ELK on CEntOS 7 – (Source UnixMen)

Introduction

For those who don’t know, the Elastic Stack (ELK Stack) is an infrastructure software suite made up of multiple components developed by Elastic. The components include:

  • Beats: open-source data shippers working as agents on the servers to send different types of operational data to Elasticsearch.
  • Elasticsearch: a highly scalable open source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.
  • Kibana: open source analytics and visualization platform designed to work with Elasticsearch. It is used to interact with data stored in Elasticsearch indices. It has a browser-based interface that enables quick creation and sharing of dynamic dashboards that display changes to Elasticsearch queries in real time.
  • Logstash: logs and events collection engine, which provides a real-time pipeline. It can take data from multiple sources and convert them into JSON documents.

This tutorial will take you through the process of installing the Elastic Stack on a CentOS 7 server.

Getting started

First of all, we need Java 8, so you’ll need to download the official Oracle rpm package.

# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"

Install it with rpm:

# rpm -ivh jdk-8u77-linux-x64.rpm

Ensure that it is working properly by checking it on your server:

# java -version

Install Elasticsearch

First, download and install the public signing key:

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create a file called elasticsearch.repo in /etc/yum.repos.d/, and paste the following lines:

[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Now the repository is ready for use. Install Elasticsearch with yum:

# yum install elasticsearch

Configuring Elasticsearch

Go to the configuration directory and edit the elasticsearch.yml configuration file, like this:

# $EDITOR /etc/elasticsearch/elasticsearch.yml

Enable the memory lock by removing the comment on line 43:
bootstrap.memory_lock: true
Then scroll until you reach the “Network” section, and there remove the comment on these lines:

network.host: 192.168.0.1
http.port: 9200

Save and exit.

Next, it’s time to configure the memory lock. In /usr/lib/systemd/system/, edit elasticsearch.service and uncomment the line:

LimitMEMLOCK=infinity

Save and exit.

Now go to the configuration file for Elasticsearch:

# $EDITOR /etc/sysconfig/elasticsearch

Uncomment line 60 and make sure it contains the following content:

MAX_LOCKED_MEMORY=unlimited

Now, Elasticsearch is configured. It will listen on the IP address you specified (change it to “localhost” if necessary) on port 9200. Next:

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch
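If everything came up correctly, a quick sanity check against the configured address should return a small JSON document with the cluster name and version:

# curl http://192.168.0.1:9200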

Install Kibana

When Elasticsearch has been configured and started, install and configure Kibana with a web server. In this case, we will use Nginx.
As in the case of Elasticsearch, install Kibana with wget and rpm:

# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
# rpm -ivh kibana-5.1.1-x86_64.rpm

Edit Kibana configuration file:

# $EDITOR /etc/kibana/kibana.yml

There, uncomment:

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save, exit and start Kibana.

# systemctl enable kibana
# systemctl start kibana

Now, install Nginx and configure it as a reverse proxy. This way it’s possible to access Kibana from the public IP address.
Nginx is available in the EPEL repository:

# yum -y install epel-release

Next:

# yum -y install nginx httpd-tools

In the Nginx configuration file (/etc/nginx/nginx.conf), remove the server { } block. Then save and exit.

Create a Virtual Host configuration file:

# $EDITOR /etc/nginx/conf.d/kibana.conf

There, paste the following content:

server {
    listen 80;
 
    server_name elk-stack.co;
 
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
 
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Create a new authentication file:

# htpasswd -c /etc/nginx/.kibana-user admin
my_strong_password

Lastly:

# systemctl enable nginx
# systemctl start nginx

Install Logstash

As for Elasticsearch and Kibana:

# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
# rpm -ivh logstash-5.1.1.rpm

It’s necessary to create a new SSL certificate. First, edit the openssl.cnf file:

# $EDITOR /etc/pki/tls/openssl.cnf

In the [ v3_ca ] section, add the server identification:

[ v3_ca ]

# Server IP Address
subjectAltName = IP: IP_ADDRESS

After saving and exiting, generate the certificate:

# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt

Next, you can create a new file to configure the log sources for Filebeat, then a file for syslog processing, and a file to define the Elasticsearch output.

These configurations depend on how you want to filter the data.
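For orientation, a minimal pipeline file (a hypothetical /etc/logstash/conf.d/02-beats-input.conf) that accepts Filebeat input over the certificate generated above and writes to Elasticsearch might look like this; adjust it to your own filtering needs:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}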

Finally:

# systemctl enable logstash
# systemctl start logstash

You have now successfully installed and configured the ELK Stack server-side!

Redhat / CEntOS / Oracle Linux, Ubuntu

30 Shades of “Alias” Command – UNIX

You can define various types of aliases as follows to save time and increase productivity.
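Note that an alias typed at the prompt lasts only for the current shell session. To keep an alias, append it to your ~/.bashrc (or ~/.bash_profile) and reload, for example:

## make an alias permanent ##
echo "alias ll='ls -la'" >> ~/.bashrc
source ~/.bashrc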

#1: Control ls command output

The ls command lists directory contents and you can colorize the output:

## Colorize the ls output ##
alias ls='ls --color=auto'
 
## Use a long listing format ##
alias ll='ls -la' 
 
## Show hidden files ##
alias l.='ls -d .* --color=auto'

#2: Control cd command behavior

## get rid of command not found ##
alias cd..='cd ..' 
 
## a quick way to get out of current directory ##
alias ..='cd ..' 
alias ...='cd ../../../' 
alias ....='cd ../../../../' 
alias .....='cd ../../../../..'
alias .4='cd ../../../../' 
alias .5='cd ../../../../..'

#3: Control grep command output

grep command is a command-line utility for searching plain-text files for lines matching a regular expression:

## Colorize the grep command output for ease of use (good for log files)##
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'

#4: Start calculator with math support

alias bc='bc -l'

#4.1: Generate sha1 digest

alias sha1='openssl sha1'

#5: Create parent directories on demand

mkdir command is used to create a directory:

alias mkdir='mkdir -pv'

#6: Colorize diff output

You can compare files line by line using diff and use a tool called colordiff to colorize diff output:

# install  colordiff package 🙂
alias diff='colordiff'

#7: Make mount command output pretty and human readable

alias mount='mount |column -t'

#8: Command short cuts to save time

# handy short cuts #
alias h='history'
alias j='jobs -l'

#9: Create a new set of commands

alias path='echo -e ${PATH//:/\\n}'
alias now='date +"%T"'
alias nowtime=now
alias nowdate='date +"%d-%m-%Y"'

#10: Set vim as default

alias vi=vim 
alias svi='sudo vi' 
alias vis='vim "+set si"' 
alias edit='vim'

#11: Control output of networking tool called ping

# Stop after sending count ECHO_REQUEST packets #
alias ping='ping -c 5'
# Do not wait interval 1 second, go fast #
alias fastping='ping -c 100 -i.2'

#12: Show open ports

Use netstat command to quickly list all TCP/UDP port on the server:

alias ports='netstat -tulanp'

#13: Wakeup sleeping servers

Wake-on-LAN (WOL) is an Ethernet networking standard that allows a server to be turned on by a network message. You can quickly wake up NAS devices and servers using the following aliases:

## replace mac with your actual server mac address #
alias wakeupnas01='/usr/bin/wakeonlan 00:11:32:11:15:FC'
alias wakeupnas02='/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03='/usr/bin/wakeonlan 00:11:32:11:15:FE'

#14: Control firewall (iptables) output

Netfilter is a host-based firewall for Linux operating systems. It is included as part of the Linux distribution and activated by default. These aliases shorten the most common iptables commands needed to inspect and manage the firewall.

## shortcut  for iptables and pass it via sudo#
alias ipt='sudo /sbin/iptables'
 
# display all rules #
alias iptlist='sudo /sbin/iptables -L -n -v --line-numbers'
alias iptlistin='sudo /sbin/iptables -L INPUT -n -v --line-numbers'
alias iptlistout='sudo /sbin/iptables -L OUTPUT -n -v --line-numbers'
alias iptlistfw='sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall=iptlist

#15: Debug web server / cdn problems with curl

# get web server headers #
alias header='curl -I'
 
# find out if remote server supports gzip / mod_deflate or not #
alias headerc='curl -I --compressed'

#16: Add safety nets

# do not delete / or prompt if deleting more than 3 files at a time #
alias rm='rm -I --preserve-root'
 
# confirmation #
alias mv='mv -i' 
alias cp='cp -i' 
alias ln='ln -i'
 
# Prevent changing perms on / #
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

#17: Update Debian Linux server

apt-get command is used for installing packages over the internet (ftp or http). You can also upgrade all packages in a single operation:

# distro specific  - Debian / Ubuntu and friends #
# install with apt-get
alias apt-get="sudo apt-get" 
alias updatey="sudo apt-get --yes" 
 
# update on one command 
alias update='sudo apt-get update && sudo apt-get upgrade'

#18: Update RHEL / CentOS / Fedora Linux server

yum command is a package management tool for RHEL / CentOS / Fedora Linux and friends:

## distro specific RHEL/CentOS ##
alias update='yum update'
alias updatey='yum -y update'

#19: Tune sudo and su

# become root #
alias root='sudo -i'
alias su='sudo -i'

#20: Pass halt/reboot via sudo

shutdown command brings the Linux / Unix system down:

# reboot / halt / poweroff
alias reboot='sudo /sbin/reboot'
alias poweroff='sudo /sbin/poweroff'
alias halt='sudo /sbin/halt'
alias shutdown='sudo /sbin/shutdown'

#21: Control web servers

# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload='sudo /usr/local/nginx/sbin/nginx -s reload'
alias nginxtest='sudo /usr/local/nginx/sbin/nginx -t'
alias lightyload='sudo /etc/init.d/lighttpd reload'
alias lightytest='sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -t'
alias httpdreload='sudo /usr/sbin/apachectl -k graceful'
alias httpdtest='sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'

#22: Alias into our backup stuff

# if cron fails or if you want backup on demand just run these commands # 
# again pass it via sudo so whoever is in admin group can start the job #
# Backup scripts #
alias backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type local --target /raid1/backups'
alias nasbackup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01'
alias s3backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01 --auth /home/scripts/admin/.authdata/amazon.keys'
alias rsnapshothourly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotdaily='sudo  /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotweekly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotmonthly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias amazonbackup=s3backup

#23: Desktop specific – play avi/mp3 files on demand

## play video files in a current directory ##
# cd ~/Download/movie-name 
# playavi or vlc 
alias playavi='mplayer *.avi'
alias vlc='vlc *.avi'
 
# play all music files from the current directory #
alias playwave='for i in *.wav; do mplayer "$i"; done'
alias playogg='for i in *.ogg; do mplayer "$i"; done'
alias playmp3='for i in *.mp3; do mplayer "$i"; done'
 
# play files from nas devices #
alias nplaywave='for i in /nas/multimedia/wave/*.wav; do mplayer "$i"; done'
alias nplayogg='for i in /nas/multimedia/ogg/*.ogg; do mplayer "$i"; done'
alias nplaymp3='for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done'
 
# shuffle mp3/ogg etc by default #
alias music='mplayer --shuffle *'

#24: Set default interfaces for sys admin related commands

vnstat is a console-based network traffic monitor. dnstop is a console tool to analyze DNS traffic. The tcptrack and iftop commands display information about TCP/UDP connections on a network interface and bandwidth usage on an interface by host, respectively.

## All of our servers eth1 is connected to the Internets via vlan / router etc  ##
alias dnstop='dnstop -l 5  eth1'
alias vnstat='vnstat -i eth1'
alias iftop='iftop -i eth1'
alias tcpdump='tcpdump -i eth1'
alias ethtool='ethtool eth1'
 
# work on wlan0 by default #
# Only useful for laptop as all servers are without wireless interface
alias iwconfig='iwconfig wlan0'

#25: Get system memory, cpu usage, and gpu memory info quickly

## pass options to free ## 
alias meminfo='free -m -l -t'
 
## get top process eating memory
alias psmem='ps auxf | sort -nr -k 4'
alias psmem10='ps auxf | sort -nr -k 4 | head -10'
 
## get top process eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
 
## Get server cpu info ##
alias cpuinfo='lscpu'
 
## older system use /proc/cpuinfo ##
##alias cpuinfo='less /proc/cpuinfo' ##
 
## get GPU ram on desktop / laptop## 
alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'

#26: Control Home Router

The curl command can be used to reboot Linksys routers.

# Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix.
alias rebootlinksys="curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'"
 
# Reboot tomato based Asus NT16 wireless bridge 
alias reboottomato="ssh admin@192.168.1.1 /sbin/reboot"

#27: Resume wget by default

The GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, and it can resume downloads too:

## this one saved my butt so many times ##
alias wget='wget -c'

#28: Use different browser for testing website

## open a specific browser for testing ##
alias ff4='/opt/firefox4/firefox'
alias ff13='/opt/firefox13/firefox'
alias chrome='/opt/google/chrome/chrome'
alias opera='/opt/opera/opera'
 
#default ff 
alias ff=ff13
 
#my default browser 
alias browser=chrome

#29: A note about ssh alias

Do not create ssh aliases; instead use the ~/.ssh/config OpenSSH client configuration file. It offers more options. An example:

Host server10
  Hostname 1.2.3.4
  IdentityFile ~/backups/.ssh/id_dsa
  user foobar
  Port 30000
  ForwardX11Trusted yes
  TCPKeepAlive yes

You can now connect to server10 using the following syntax:
$ ssh server10

#30: It’s your turn to share…

## set some other defaults ##
alias df='df -H'
alias du='du -ch'
 
# top is atop, just like vi is vim
alias top='atop' 
 
## nfsrestart  - must be root  ##
## refresh nfs mount / cache etc for Apache ##
alias nfsrestart='sync && sleep 2 && /etc/init.d/httpd stop && umount netapp2:/exports/http && sleep 2 && mount -o rw,sync,rsize=32768,wsize=32768,intr,hard,proto=tcp,fsc netapp2:/exports/http /var/www/html &&  /etc/init.d/httpd start'
 
## Memcached server status  ##
alias mcdstats='/usr/bin/memcached-tool 10.10.27.11:11211 stats'
alias mcdshow='/usr/bin/memcached-tool 10.10.27.11:11211 display'
 
## quickly flush out memcached server ##
alias flushmcd='echo "flush_all" | nc 10.10.27.11 11211'
 
## Remove assets quickly from Akamai / Amazon cdn ##
alias cdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai'
alias amzcdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon'
 
## supply list of urls via file or stdin
alias cdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdin'
alias amzcdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'

Enterprise Messaging Service, TIBCO

Tibco EMS FAQs


What messaging models does EMS support?
• Point-to-Point (Queue)
• Publish and Subscribe (Topic)
• Multicast (Topic)

There are two major models for messaging supported by JMS: queues and topics. Queues are based on a point-to-point messaging model. Topics make use of the new publish-and-subscribe messaging model.

Regardless of whether queues or topics are used, messages are not sent directly peer-to-peer. Messages are forwarded to a JMS infrastructure that is composed of one or more JMS servers. The servers are responsible for providing the quality of service of JMS and for implementing all the components not addressed by the JMS Specification.

When determining when to use queues versus topics, consider the two fundamental messaging mechanisms. The first is point-to-point messaging, in which a message is sent by one publisher (sender) and received by one subscriber (receiver). The second is publish-subscribe messaging, in which a message is sent by one or more publishers and received by one or more subscribers. The messaging model, as listed below, dictates when to use a queue or a topic:

One-to-one messaging    →  Queue  (point-to-point)
One-to-many messaging   →  Topic  (publish-subscribe)
Many-to-many messaging  →  Topic  (publish-subscribe)

What is the difference between a topic and a queue?
Answer:

Topic: a synchronous style of communication, in the sense that a non-durable subscriber must be connected when the message is published.
- publisher and subscriber model
- broadcast-type messaging
- no guarantee of delivery
Queue: an asynchronous mode of communication.
- point-to-point model
- unidirectional messaging
- guaranteed delivery
When will you use a topic and when will you use a queue?
Answer: When there is only one producer and only one consumer, “Queues” are used; a queue is simply “point-to-point”. When there are one or more publishers and one or more subscribers, use a “Topic”; a topic is simply “broadcast messaging”.
Can queues be shared between two consumers?
Answer: Yes. If a queue is non-exclusive (the default), it can be shared by any number of consumers. Non-exclusive queues are useful for balancing the load of incoming messages across multiple receivers.
What is an exclusive queue? When would you use it?
Answer: If a queue is exclusive, all of its messages can only be retrieved by the first consumer specified for the queue. Exclusive queues are useful when we want only one application to receive messages for a specific queue.
A message in a queue can be consumed by how many consumers?
Answer: Each individual message is consumed by exactly one consumer. If the queue is non-exclusive, messages are distributed across any number of consumers; if the queue is exclusive, only the first consumer receives messages.
How will you grant privileges to topics and queues?
Answer: Only the EMS admin (or a user with administrative permissions) can grant privileges.
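For illustration, grants are issued from the EMS admin tool; the destination and user names below are made up:

> grant queue my.queue user=appuser receive,send
> grant topic my.topic user=appuser subscribe,publish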
What are the protocols supported by JMS/EMS?
Answer: 1. SSL 2. TCP

What are the limitations of the Durable Subscriber?

Messages are retained for a durable subscriber only:
• as long as the durable subscriber exists
• until the expiration time of the message
• within the storage limit of that topic

What is the difference between the “Multicast” and “publish and subscribe” messaging models?

BROADCAST:
Many publishers can publish to the same topic, and a message from a single publisher can be received by many subscribers. Subscribers subscribe to topics, and all messages published to the topic are received by all subscribers to the topic. This type of message protocol is also known as broadcast messaging because messages are sent over the network and received by all interested subscribers, similar to how radio or television signals are broadcast and received.

MULTICAST:
Multicast messaging allows one message producer to send a message to multiple subscribed consumers simultaneously. It is similar to publish and subscribe in that it also addresses messages to a topic. Instead of delivering a copy of the message to each individual subscriber over TCP, however, the EMS server broadcasts the message over Pragmatic General Multicast (PGM). A daemon running on the machine with the subscribed EMS client receives the multicast message and delivers it to the message consumer.
Multicast is highly scalable because of the reduction in bandwidth used to broadcast messages, and it also uses fewer EMS resources.
However, one drawback of this model is that it does not guarantee the delivery of messages to all subscribers.

What are the EMS Destination features?
• Secure Property
• Trace Property
• Store Property
• Redelivery policy
• Flow control
• Exclusive property for queues

What extra features are available in EMS apart from JMS?

• The JMS standard specifies two delivery modes for messages, PERSISTENT and NON_PERSISTENT. EMS also includes a RELIABLE_DELIVERY mode that eliminates some of the overhead associated with the other delivery modes.
• For consumer sessions, you can specify a NO_ACKNOWLEDGE mode so that consumers do not need to acknowledge receipt of messages, if desired. EMS also provides an EXPLICIT_CLIENT_ACKNOWLEDGE and EXPLICIT_CLIENT_DUPS_OK_ACKNOWLEDGE mode that restricts the acknowledgement to single messages
• EMS extends the MapMessage and StreamMessage body types. These extensions allow EMS to exchange messages with TIBCO Rendezvous and ActiveEnterprise formats that have certain features not available within the JMS MapMessage and StreamMessage

What is the structure of a JMS Message?

• Header (required)
• Properties (optional)
• Body (optional)

Where are undelivered messages stored?

• If a message expires or has exceeded the value specified by the maxRedelivery property on a queue, the server checks the message’s JMS_TIBCO_PRESERVE_UNDELIVERED property. If JMS_TIBCO_PRESERVE_UNDELIVERED is set to true, the server moves the message to the undelivered message queue, $sys.undelivered. This undelivered message queue is a system queue that is always present and cannot be deleted. If JMS_TIBCO_PRESERVE_UNDELIVERED is set to false, the message is deleted by the server.
• You can only set the undelivered property on individual messages, there is no way to set the undelivered message queue as an option at the per-topic or per-queue level
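To inspect or drain that queue you can use the EMS admin tool, roughly like this:

> show queue $sys.undelivered
> purge queue $sys.undelivered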

What message bodies are supported by EMS?
• Map Message
• Text Message
• Stream Message
• Bytes Message
• Object Message

What is the maximum message size supported by EMS?

EMS supports messages up to a maximum size of 512MB. However, we recommend that application programs use smaller messages, since messages approaching this maximum size will strain the performance limits of most current hardware and operating system platforms

What are the different delivery modes available in EMS?

Persistent
When a producer sends a PERSISTENT message, the producer must wait for the server to reply with a confirmation. The message is persisted on disk by the server. This delivery mode ensures delivery of messages to the destination on the server in almost all circumstances. However, the cost is that this delivery mode incurs two-way network traffic for each message or committed transaction of a group of messages

Non-Persistent
Sending a NON_PERSISTENT message omits the overhead of persisting the message on disk to improve performance.
If authorization is disabled on the server, the server does not send a confirmation to the message producer.
If authorization is enabled on the server, the default condition is for the producer to wait for the server to reply with a confirmation in the same manner as when using PERSISTENT mode.
Regardless of whether authorization is enabled or disabled, you can use the npsend_check_mode parameter in the tibemsd.conf file to specify the conditions under which the server is to send confirmation of NON_PERSISTENT messages to the producer

Reliable
EMS extends the JMS delivery modes to include reliable delivery. Sending a RELIABLE_DELIVERY message omits the server confirmation to improve performance regardless of the authorization setting.
When using RELIABLE_DELIVERY mode, the server never sends the producer a receipt confirmation or access denial and the producer does not wait for it. Reliable mode decreases the volume of message traffic, allowing higher message rates, which is useful for messages containing time-dependent data, such as stock price quotations.

If a persistent message is published to a TOPIC, will the message be stored on disk if the topic doesn’t have a durable subscriber or a subscriber with a fault-tolerant connection?
No. Persistent messages published to a topic are written to disk only if that topic has at least one durable subscriber or one subscriber with a fault-tolerant connection to the EMS server. In the absence of a durable subscriber or subscriber with a fault-tolerant connection, there are no subscribers that need messages resent in the event of a server failure. In this case, the server does not needlessly save persistent messages. This improves performance by eliminating the unnecessary disk I/O to persist the messages

What are the different types of acknowledgement modes in EMS message delivery?
• Auto
• Client
• Dups_ok
• No_ack
• Explicit
• Explicit_client_dups_ok
• Transactional
• Local transactional

What are the different types of messages that can be used in EMS?
• Text
• Simple
• Bytes
• Map
• XML text
• Object
• Object ref
• Stream

Tell me about bridges. Why do we use them? What is the syntax to create bridges, and what is the use of a message selector?
• Some applications require the same message to be sent to more than one destination possibly of different types. So we use bridges in that scenario.
• create bridge source=type:dest_name target=type:dest_name [selector=selector]
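A concrete (made-up) example following that syntax: every message published on the topic orders.new is also delivered to the queue orders.audit, but only for one region:

> create bridge source=topic:orders.new target=queue:orders.audit selector="region='NY'"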

What is the purpose of stores.conf?

This file defines the locations, either store files or a database, where the EMS server will store messages or metadata. Each configured store is either a file-based or a database store.
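A sketch of a file-based store entry (the store name and file are illustrative):

[my-store]
  type=file
  file=my-store.db
  mode=sync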

In how many modes are messages written to the store file?
• Two modes: synchronous and asynchronous
• The default is asynchronous

What is tibemsd.conf?
• It is the main configuration file that controls the characteristics of the EMS server.
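A few typical entries, for orientation (paths and values are illustrative):

listen  = tcp://7222
store   = /opt/tibco/ems/data
users   = users.conf
queues  = queues.conf
topics  = topics.conf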

Name destination properties and explain them.
• global, secure, maxmsgs, maxbytes, flowControl, sender_name, sender_name_enforced, trace, maxRedelivery

What are the different modes of installation in Ems?
• GUI mode
• Console mode
• Silent mode

What are the messaging models supported by JMS?
• Point-to-point
• Publish-subscribe
• Multicast
What is the use of routes? What kinds of destinations can be used in routes?
• Both topics and queues can be used in routes; topic messages can traverse multiple hops (m-hops), while queue messages travel only one hop.

What happens if a message expires or exceeds the value specified by the maxRedelivery property on a queue?
• If the JMS_TIBCO_PRESERVE_UNDELIVERED property is set to true, the server moves the message to the undelivered message queue; if set to false, the message is deleted by the server.

In how many ways can a destination be created?
• Static: created by the administrator in the server configuration
• Dynamic: created on the fly by clients
• Temporary destinations

What are the wildcards that we use in EMS? How do they work for queues and topics?
• The wildcards are * (matches one token) and > (matches all trailing tokens). For example, a subscriber to news.* receives news.sports but not news.sports.cricket, while a subscriber to news.> receives both. You can subscribe to wildcard topics but can’t publish to them, whereas for queues you can neither send to nor receive from a wildcard name.

Are bridges transitive?
• No

What is flow control on destinations?
• Sometimes the producer may send messages faster than the consumers can receive them, so the message capacity on the server would be exhausted. For this we use flow control. Flow control can be specified on destinations.

What about flow control on bridges and routes?
• Flow control has to be specified on both sides of a bridge, whereas on routes it operates differently on the sender side and the receiver side.

What are the permissions that you can grant to users to access queues?
• Receive
• Send
• Browse
What are the permissions that you can grant to users to access topics?
• Subscribe
• Publish
• Durable
• Use_durable

Tell me about multicasting in EMS.
• Multicast is a messaging model that broadcasts messages to many consumers at once rather than sending messages individually to each consumer. EMS uses Pragmatic General Multicast (PGM) to broadcast messages published to multicast-enabled topics.
• Each multicast-enabled topic is associated with a channel.

What are the advantages and disadvantages of multicasting?
• Advantages: the message is broadcast only once, reducing the amount of bandwidth used compared to the publish and subscribe model, and reducing network traffic.
• Disadvantages: offers only last-hop delivery, so it can’t be used to send messages between servers.

On what destinations can you use multicast?
• Topics

Suppose you got an error while accessing a queue saying that you don’t have the necessary permissions. What might be the solution/reason?
• The user assigned to the queue and the user used while connecting may not match; grant the required permissions to the connecting user.

How does the secondary server know that the primary server has failed?
• Based on heartbeat intervals

What is the JMS Queue Requestor?
• The JMS Queue Requestor activity is used to send a request to a JMS queue and receive a response back from the JMS client

What is JMS topic requestor?
• The JMS Topic Requestor activity is used to communicate with a JMS application’s request-response service. This service invokes an operation with input and output. The request is sent to a JMS topic and the JMS application returns the response to the request.

How do you add an EMS server to TIBCO Administrator?
• Using Domain Utility

How do you remove individual messages from destinations?
• Use the purge command.


Adapters, TIBCO

TIBCO Adapters FAQs

What are Adapters?
Adapters are connectors to data sources that catch event changes. Once an adapter catches an event change, it publishes the message to a message bus using either EMS or RVD.
An adapter is a gateway between different applications using messaging channels.

What are the different types of adapters?
Technical Adapters (File Adapter, DB Adapter)
Functional Adapters (PeopleSoft Adapter, SAP R3 Adapter)
Custom Adapters

Adapter Components
Each adapter has two main components, an adapter palette and a run-time adapter. In addition, some adapters include a design-time adapter. The adapter palette and design-time adapter are used during configuration, and the run-time adapter is used at production time.

Adapter Palette:
Each adapter includes a palette that is used for configuration. The palette is automatically loaded into TIBCO Designer during adapter installation and available the next time Designer is started. The palette enables you to configure adapter specific options, such as its connection to the vendor application, logging options, and adapter services. During the design phase, the palette connects to the vendor application and fetches information about connection options and data schemas. You can then graphically select the appropriate items. For example, during configuration of a TIBCO Adapter for ActiveDatabase adapter instance, the palette fetches all pertinent tables in the database. You then choose the tables that the particular service is to send or receive.

Run-time Adapter :
Once the adapter has been configured using TIBCO Designer, it can be deployed. A deployed adapter instance is referred to as a run-time adapter. A run-time adapter operates in a production environment, handling communication between a vendor application and other applications that are configured for the TIBCO environment.

Design-time Adapter :
Some adapters use a design-time adapter (DTA) to access a vendor application and return design-time configuration information. The palette is a client of the DTA process. The DTA connects to the vendor application, fetches data schemas and sends them to the palette.

Adapter Lifecycle:
The following is an overview of the adapter lifecycle:
1. Install the vendor application to which the adapter connects before installing the adapter. For many adapters, the adapter and vendor application need not be installed on the same machine.
2. Adapters depend on other software from TIBCO. Before installing an adapter, the TIBCO Runtime Agent™ software must be installed on each computer on which the adapter runs.
3. Create an adapter instance and save it in a project using TIBCO Designer™. A project contains configuration information required for a run-time adapter to interact with the vendor application and other applications.
4. Deploy the adapter. An adapter instance is deployed using TIBCO Administrator.
a) Using TIBCO Designer, create an Enterprise Archive (EAR) file, which contains information about the adapter instances and processes you wish to deploy.
b) Using TIBCO Administrator, upload the EAR, then deploy the adapter on the machine(s) of your choice. You can set runtime options before deployment.
c) Using TIBCO Administrator, start and stop the adapter.
d) Monitor the adapter using the built-in monitoring tools provided by TIBCO Administrator.

Adapter Services :
Adapters are responsible for making information from different applications available to other applications across an enterprise. To do so, an adapter is configured to provide one or more of the following services:
Publication Service
Subscription Service
Request-Response Service
Request-Response Invocation Service

Publication Service :
An adapter publication service recognizes when business events happen in a vendor application, and asynchronously sends out the event data in realtime to interested systems in the TIBCO environment. For example, an adapter can publish an event each time a new customer account is added to an application. Other applications that receive the event can then update their records just as the original application did. When an application receives a request to create a customer record, the application notifies the adapter about the request and the adapter publishes the event.
User Interface → Application X → Adapter → TIBCO Messaging
(create record → send to adapter → publish)

Polls the source data table (base table).
Reads data from the source table.
Sends the data to the message bus.

Subscription Service:
An adapter subscription service asynchronously performs an action such as updating business objects or invoking native APIs on a vendor application. The adapter service listens to external business events, which trigger the appropriate action. Referring to the previous example, an adapter subscription service can listen for customer record creation events (happening in an application and published to the TIBCO infrastructure) and update another application.
TIBCO Messaging → Adapter → Application Y
(subscribe → update record)

Reads data from the message bus.
Gives the data to the destination table.

Request-Response Service:
In addition to asynchronously publishing and subscribing to events, an adapter can be used for synchronously retrieving data from or executing transactions within a vendor application. After the action is performed in the vendor application, the adapter service sends a response back to the requester with either the results of the action or a confirmation that the action occurred. This entire process is called request-response, and it is useful for actions such as adding or deleting business objects.

Receives requests from other applications.
Parses the requests.
Returns response (Sends only the requested data to the message bus).

Request-Response Invocation Service:
An adapter request-response invocation service is similar to the request-response service, except that the roles are reversed. The vendor application is now the requester or initiator of the service, instead of the provider of the service. The adapter service acts as a proxy, giving the vendor application the ability to invoke synchronously functionality on an external system.

How can you fine-tune an ADB Adapter? What are the different parameters that can be used?
a) We can use publish by value (for high speed) or publish by reference (for data-type support, such as Oracle LONG).
b) We can use the poller or the alerter for frequent and infrequent data changes respectively.
c) The Adb.PollingInterval and _ADB.DUPDECT.adapter_instance_name parameters can be used to do flow control and avoid duplication respectively.

What qualities of service can we have in adapter publishing services?
RV: reliable, certified, transactional

What wire formats can we have in adapter publishing services?
Wire formats:
a) RV: ActiveEnterprise message, RV message, XML message.
b) JMS: XML message

What objects are created when you configure and save an ADB adapter?
A publishing table for the source table, and a trigger that acts as a bridge between the source and publishing tables.

Explain the internal functioning of the ADB publication service.
When we configure an ADB publishing service, it creates a publishing table for the source table, plus a trigger that acts as a bridge between the source and publishing tables. Whenever data is inserted/updated/deleted in the source table, the trigger inserts it into the publishing table. ADB has another component called the polling agent. The polling agent keeps looking for new inserts into the publishing table; when it finds any, it converts the record in the P_ table into the specified wire format and publishes it with the specified quality of service.

Can we filter records from publishing when they get updated in the source table? (Data from all regions is coming into the table, but I want to publish only New York data.)
Yes: by modifying the trigger we can insert only the New York data into the publishing table.

Can we limit the number of columns to be published from the source table?
Yes, using the “Use?” field in the adapter publishing Table tab: just uncheck the columns you don’t want to publish.

Can we publish parent and child table information using a single adapter configuration, and how?
Yes. In the adapter publisher Table tab, create the parent table first by lookup, then add the child table using the Add Child tab. Click on the child-table column to specify the foreign key; then, to establish the relationship between the parent’s primary key and the child’s foreign key, go to that column in the child table and specify the primary key of the parent table.
In the subscription service, the destination table is created, and the Child Table Mapping tab will show the child table on the left mapped to the parent table on the right.

What are publish by value and publish by reference? Explain the pros and cons.

Publish by value: the changes in the source table are copied into the P_ table and the data is taken from there. It is used when high speed is required. It does not support data types like Oracle LONG.

Publish by reference: the data is taken directly from the source table; only the primary key comes from the P_ table. It allows data types like Oracle LONG.
However, changes in the source table can be lost because of the waiting time (this can be avoided using the alerter).

What are the types of message transfers in file adapters?
Record transfer: integrates file systems with the TIBCO AE environment.
Simple file transfer: transfers files to other TIBCO File Adapter instances.

What are read schema and write schema in the file adapter?
The read schema in the file adapter publisher configuration defines the type of expected input; here we can use the delimited file type or the positional file type.

What is the difference between an ADB Adapter and JDBC palette activities?
• Using ADB we can only pick up the data from one database and put it in another one.
• But using JDBC we can query using JDBC Query and manipulate data using JDBC Update, i.e., we can use SELECT statements as well as INSERT and UPDATE statements to selectively query and update.
• ADB adapters may be useful in scenarios where we have a large amount of data.

What is the difference between a File Adapter and File palette activities?
Among the File activities, the file poller cannot handle multi-format data or record-by-record transfer; it handles only the particular format specified and does whole-file transfer.
The file adapter, on the other hand, can handle multiple formats and does record-by-record transfer.

If the reference to a schema changes in “Activity Input”, does it throw an error? How do you correct it?
Yes, and we have to correct the schema to match what the input expects.

Where do we specify the HTTP port number?
In the HTTP Connection resource used by the HTTP activities.

What is the difference between JDBC activities and ADB Adapter?
• ADB uses ODBC to connect; JDBC uses JDBC.
• ADB is more suitable when you have a lot of processing.
• ADB is more suitable when you want a particular action on a DB table to trigger a BW process.
• The ADB adapter is best for publishing from a database.
• For simple inserts and updates, the ADB subscriber is best.
• ADB is an adapter used to capture events and take action; it has pub and sub mechanisms – pub is used to capture events and publish messages, and sub is used to perform upsert operations.
• JDBC is a collection of activities that can be used for custom operations.
• For inserts or updates to the database, if you have complex JDBC inserts, transaction management, or other dynamic queries, then JDBC activities are best.
• JDBC is more suitable for running dynamic code, where at runtime you can execute statements with different values depending on process execution.

What are the modes of operation for the File Adapter in Record Mode?
In Synchronous mode, upon receiving an event, the publication service allows other services in the instance to receive events only after it completes the processing and publishing of all the files that match the specified criteria.
In Asynchronous mode, the publication service allows other services of the instance to receive events while it is processing and publishing a file. By default, the subscription service always operates in Asynchronous mode.

What is the difference between a TIBCO adapter and a BW component?
Adapters are connectors that use a messaging channel and can be configured over source/target systems in Publish, Subscribe, or Request-Reply mode. BW components include the Designer, the Administrator, and the BW engine.

What is a synchronous service that an adapter supports?
Of the 4 Adapter services, Request/Response is the only adapter service that is synchronous.

What is Event Driven and Demand Driven?
Event Driven – Push
Demand Driven – Poll.

TIBCO Adapter for ActiveDatabase:
TIBCO Adapter for ActiveDatabase software (the adapter) allows data changes in a database to be sent as they occur to other databases and applications. It extends publish-subscribe and request-response technology to databases, making multiple levels of delivery services available to applications that need access to these databases. ODBC and JDBC compliant databases such as Oracle, Sybase, and Microsoft SQL Server are supported. While the adapter does not run on z/OS and iSeries systems, it can remotely connect to a DB2 database running on these systems. TIBCO Adapter for ActiveDatabase is written using the TIBCO Adapter SDK software, which allows the adapter to interoperate with other TIBCO products. The adapter can communicate with any application that is configured for the TIBCO environment.

What is File Adapter?
TIBCO Adapter for Files software processes data from text files and publishes the contents in real-time to the TIBCO environment. The adapter also listens for messages in the TIBCO environment and writes the contents to a file.

The adapter supports only text files when it is integrating a file system into the TIBCO ActiveEnterprise environment. It supports both text and binary files when it is transferring files between two or more TIBCO Adapter for Files installations.

File Adapter Operations Mode?
Selecting an operation mode is the first step in configuring a service. The operation mode determines whether the service will integrate the file system with the TIBCO ActiveEnterprise environment or transfer files between instances of TIBCO Adapter for Files.
In the Record Mode of operation, where the adapter integrates the file system with TIBCO ActiveEnterprise, you will have to define and use schemas.
In the Simple File Transfer Mode of operation, where the adapter transfers files among instances of TIBCO Adapter for Files, you will have to define various options for file transfer. However, there is no need to define a schema.

Can two adapters communicate with each other?
No two adapters can communicate with each other directly. They can communicate only through a messaging layer.
Considering Tibco to be the messaging layer,
Publishing Adapter always publishes to Tibco messaging bus.
Subscribing Adapter always subscribes from a Tibco messaging bus.

What are the “Use?” (users) and user-key columns in the Adapter Publisher’s Table tab?
While configuring an ADB publisher:
The “Use?” column specifies which columns are published to the publishing table.
“User-key”, when selected, makes the column act as a primary key. Child tables can be joined to parent tables only using primary keys, and the publish-by-reference storage mode copies only the primary key from the source table; so if a source table does not have a primary key column, we can use the user-key to serve the same purpose.

What are the columns available in an adapter publishing table?
An adapter publishing table contains the actual data columns plus internal adapter columns.
Actual data columns:
Depending on the storage mode selected, the actual data columns in the publishing table vary:
For publish by value, the actual data columns are an exact copy of the base table’s data columns (all the columns).
For publish by reference, the actual data columns are an exact copy of the base table’s primary-key columns (only the primary key columns).

Internal Adapter columns:
ADB_SUBJECT
ADB_SEQUENCE
ADB_SET_SEQUENCE
ADB_TIMESTAMP
ADB_OPCODE
ADB_UPDATE_ALL
ADB_REF_OBJECT
ADB_L_DELIVERY_STATUS
ADB_L_CMSEQUENCE

Publication and Subscription formats (file adapter)
Two types of formats are supported by adapters while exchanging data between applications:
1. MInstances
2. MBusinessDocuments
An MInstance is the entity that is exchanged among TIBCO applications; it is an instantiation of a schema. The runtime adapter parses the input file, identifies the schema associated with the publication service, creates the MInstances, and publishes them.

An MBusinessDocument is a facility provided by TIBCO ActiveEnterprise for grouping MInstances. An MBusinessDocument always contains MInstances created from the same file. If high throughput is desired from the publication service, MBusinessDocument attributes can be used.

Modes of operation(file adapter)
There are two modes of operation
Synchronous mode
Asynchronous mode

An adapter instance/configuration can have multiple publication and subscription services. Services are activated by events: the publication service can be activated by a timer event or a message event. The event source that activates the publication service is called the polling agent.

In Synchronous mode, upon receiving an event, the publication service allows other services in the instance to receive events only after it completes the processing and publishing of all the files that match the specified criteria.

In Asynchronous mode the publication service allows other services of the instance to receive events while it is processing and publishing a file. By default Subscription service always operates in Asynchronous mode.

If the configuration has more than one service, or if the publication service is expected to process large files or a large set of files, setting the publication service to asynchronous mode is recommended.

Types of file records (file adapter)
File records can be classified into two categories:
1. delimited file record
2. positional file record
Delimited file records are used to interpret lines that have a well-defined delimiter between the fields. Delimiters can be single or multiple characters. These records can be identified by the number of fields or by using a constant field value.

Positional file records are used to interpret lines that have well-defined field lengths. These can be identified using the line or record length, or by using a constant field value, i.e., a constant line length.

What is the opaque exception table?
The subscription service uses two logical layers when processing a message. The first layer decodes data from the message and the second layer provides the database transaction. If an exception occurs in the first layer, the adapter logs the message to the opaque exception table. In the second layer, if any DML command fails at any level, the adapter rolls back the transaction and starts another transaction that inserts into the exception tables. If that insert into the exception table also fails, the adapter then logs the message to the opaque exception table.
What is the difference between the exception table and the opaque exception table?
The subscription service uses two logical layers when processing a message.
The first layer decodes data from the message and the second layer provides the database transaction. If an exception occurs in the first layer, the adapter logs the message to the opaque exception table.

In the second layer, if any DML command fails at any level, the adapter rolls back the transaction and starts another transaction that inserts into the exception tables. If that insert into the exception table fails, the adapter then logs the message to the opaque exception table.

What are the transport types supported by ADB adapters?
The transport types supported by ADB adapters are:
1) Rendezvous
2) JMS

Rendezvous
Quality of service supported by Rendezvous:
1) Reliable
2) Certified
3) Transactional

Wire Formats Supported by Rendezvous:
1) Active Enterprise Message
2) Rendezvous Message
3) XML Message

JMS
Wire Formats Supported by JMS:
1) XML Message

Connection Factory Type Supported by JMS:
1) Topic
2) Queue

Delivery Mode Supported by JMS:
1) Persistent
2) Non-Persistent

— Hari Iyer

Operating System, Redhat / CEntOS / Oracle Linux, Tips and Tricks

Error :- sudo: effective uid is not 0, is sudo installed setuid root?

As Linux administrators, we must all have come across this error at some time in our lives.

[user@host dir]$ sudo bash
sudo: effective uid is not 0, is sudo installed setuid root?

This happens when the sudo binary does not have the right permissions – the setuid bit is missing.

The solution for this error is to set the following permission as the root user:

chmod u+s /usr/bin/sudo

That should sort the issue for CentOS-like distros.
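You can verify that the setuid bit is back by looking for the ‘s’ in the owner execute position:

ls -l /usr/bin/sudo
# expect something like: -rwsr-xr-x 1 root root ... /usr/bin/sudo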

 

Operating System, Redhat / CEntOS / Oracle Linux

/proc/sys for you to manipulate a running kernel

The /proc/sys directory in the /proc virtual filesystem contains a lot of useful and interesting files and directories. Many kernel settings can be manipulated by writing to files in the proc filesystem, and a lot of important information can be retrieved from these files. This is especially useful when you are troubleshooting or fine-tuning your Linux system.
Following is a description of the most important files.
The files in /proc/sys/vm are especially interesting and useful.
You can also use the sysctl command to make these changes persistent, or to see all the kernel options you can change at run-time.
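For example (the tunable below is real; the value is only an illustration):

sysctl fs.file-max                                  # read a value (same as: cat /proc/sys/fs/file-max)
sysctl -w fs.file-max=2097152                       # change it on the running kernel
echo "fs.file-max = 2097152" >> /etc/sysctl.conf    # persist the change across reboots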

/proc/sys/dev

Contains device specific information. For instance the directory cdrom /proc/sys/dev/cdrom/info shows you cdrom capabilities. The other files in /proc/sys/dev/cdrom are writable and allow you to actually set options for your cdrom drive.
For instance echo 1 > /proc/sys/dev/cdrom/autoeject makes your tray open automagically when you unmount your cdrom.
/proc/sys/dev/parport holds information about parallel ports. Browse these directories to learn more about their contents.

/proc/sys/fs

Virtual filesystem information/tuning

/proc/sys/fs/binfmt_misc

binfmt_misc allows you to configure the system to execute miscellaneous binary formats. For instance it enables you to make the system execute .exe files using wine and java files using the java interpreter, just by typing a file name.

/proc/sys/fs/dentry-state

Linux caches directory access to speed up subsequent access to the same directory; this file contains information about the status of the directory cache.

/proc/sys/fs/dir-notify-enable

Enables/disables the dnotify interface. dnotify is a signal-based mechanism used to notify a process about file/directory changes; this is mainly interesting to programmers.

/proc/sys/fs/dquot-nr

number of allocated disk quota entries and the number of free disk quota entries

/proc/sys/fs/dquot-max

maximum number of cached disk quota entries.

/proc/sys/fs/file-max

system-wide limit on the number of open files for all processes.

/proc/sys/fs/file-nr

number of files the system has presently opened.

/proc/sys/fs/inode-max

maximum number of in-memory inodes

/proc/sys/fs/inode-nr

number of inodes and number of free inodes

/proc/sys/fs/inode-state

This file contains seven numbers: number of inodes, number of free inodes, preshrink, and four dummy values. nr_inodes is the number of inodes the system has allocated. Preshrink is non-zero when the nr_inodes is bigger than inode-max.

/proc/sys/fs/inotify
(since kernel 2.6.13)

This directory contains files that can be used to limit the amount of kernel memory consumed by the inotify interface.

/proc/sys/fs/lease-break-time

This file specifies the grace period that the kernel grants to a process holding a file lease after it has sent a signal to that process notifying it that another process is waiting to open the file.

/proc/sys/fs/leases-enable

This file can be used to enable or disable file leases on a system-wide basis.

/proc/sys/fs/mqueue
(since kernel 2.6.6)

This directory contains files controlling the resources used by POSIX message queues.

/proc/sys/fs/overflowgid

Allows you to change the value of the fixed GID. If a filesystem that only supports 16-bit GIDs is mounted, the 32-bit Linux GIDs sometimes need to be converted to a lower value; that value is fixed here.

/proc/sys/fs/overflowuid

Allows you to change the value of the fixed UID. If a filesystem that only supports 16-bit UIDs is mounted, the 32-bit Linux UIDs sometimes need to be converted to a lower value; that value is fixed here.

/proc/sys/fs/suid_dumpable
(since kernel 2.6.13)

Determines whether core dump files are produced for set-user-ID or otherwise protected/tainted binaries.
Possible values are 0,1,2:
0 (default) A core dump will not be produced for a process which has changed credentials or whose binary does not have read permission enabled.
1 (debug) All processes dump core when possible.
2 (suidsafe) Any binary which normally would not be dumped is dumped readable by root only. This allows the user to remove the core dump file but not to read it. For security reasons core dumps in this mode will not overwrite one another or other files. This mode is appropriate when administrators are attempting to debug problems in a normal environment.

/proc/sys/fs/super-max

Controls the maximum number of superblocks, and thus the maximum number of mounted file systems the kernel can have.

/proc/sys/fs/super-nr

The number of file systems currently mounted.

/proc/sys/kernel/acct

Contains three values: highwater, lowwater, and frequency. Used with BSD-style process accounting.

/proc/sys/kernel/cap-bound
(from Linux 2.2 to 2.6.24)

Holds the value of the kernel capability bounding set.

/proc/sys/kernel/core_pattern

Can be used to define a template for naming core dump files

/proc/sys/kernel/core_uses_pid

See core(5).

/proc/sys/kernel/ctrl-alt-del

Controls the handling of Ctrl-Alt-Del from the keyboard.
If it’s 0, Linux will do a graceful restart. When the value is > 0, Linux will do an immediate reboot, without even syncing its dirty buffers.

/proc/sys/kernel/hotplug

Contains the path for the hotplug policy agent.

/proc/sys/kernel/domainname

can be used to set the NIS/YP domainname

/proc/sys/kernel/hostname

can be used to set the hostname

/proc/sys/kernel/modprobe

Contains the path for the kernel module loader.

/proc/sys/kernel/msgmax

This file defines a system-wide limit specifying the maximum number of bytes in a single message written on a System V message queue.

/proc/sys/kernel/msgmnb

Defines a system-wide parameter used to initialize the msg_qbytes setting for subsequently created message queues.

/proc/sys/kernel/ostype and /proc/sys/kernel/osrelease

These files give substrings of /proc/version.

/proc/sys/kernel/overflowgid and /proc/sys/kernel/overflowuid

These files duplicate the files /proc/sys/fs/overflowgid and /proc/sys/fs/overflowuid.

/proc/sys/kernel/panic

Gives read/write access to the kernel variable panic_timeout. If this is zero, the kernel will loop on a panic; if non-zero it indicates that the kernel should autoreboot after this number of seconds.

/proc/sys/kernel/panic_on_oops
(since kernel 2.5.68)

This file controls the kernel’s behavior when an oops or BUG is encountered. If this file contains 0, then the system tries to continue operation. If it contains 1, then the system delays a few seconds and then panics. If the /proc/sys/kernel/panic file is also non-zero then the machine will be rebooted.

/proc/sys/kernel/pid_max
(since kernel 2.5.34)

This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID).

/proc/sys/kernel/printk

The four values in this file are console_loglevel, default_message_loglevel, minimum_console_loglevel, and default_console_loglevel. This allows configuration of which messages will be logged to the console. (Ever worked on a console printing messages all the time to your screen? Here’s how to fix that.) Messages with a higher priority than console_loglevel will be printed to the console.

/proc/sys/kernel/pty
(since kernel 2.6.4)

This directory contains two files relating to the number of Unix 98 pseudo-terminals on the system.

/proc/sys/kernel/pty/max

Defines the maximum number of pseudo-terminals.

/proc/sys/kernel/pty/nr

This read-only file indicates how many pseudo-terminals are currently in use.

/proc/sys/kernel/random

This directory contains various parameters controlling the operation of the file /dev/random.

/proc/sys/kernel/real-root-dev

Used by the deprecated change_root initrd system

/proc/sys/kernel/rtsig-max
(until kernel 2.6.7)

Can be used to tune the maximum number of POSIX real-time (queued) signals that can be outstanding in the system.

/proc/sys/kernel/rtsig-nr
(until kernel 2.6.7)

This file shows the number of POSIX real-time signals currently queued.

/proc/sys/kernel/sem
(since kernel 2.4)

Contains 4 numbers defining limits for System V IPC semaphores.

/proc/sys/kernel/sg-big-buff

Shows the size of the generic SCSI device (sg) buffer.

/proc/sys/kernel/shmall

Contains the system-wide limit on the total number of pages of System V shared memory.

/proc/sys/kernel/shmmax

This file can be used to query and set the run-time limit on the maximum (System V IPC) shared memory segment size that can be created.

/proc/sys/kernel/shmmni
(from kernel 2.4)

Specifies the system-wide maximum number of System V shared memory segments that can be created.

/proc/sys/kernel/version

Kernel version number and build date

/proc/sys/net

Networking information.

/proc/sys/net/core/somaxconn

Defines a ceiling value for the backlog argument of listen

/proc/sys/net/core/rmem_max

Maximum TCP Receive Window.

/proc/sys/net/core/wmem_max

Maximum TCP Send Window.
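For high-bandwidth transfers both ceilings are often raised together; the 16 MB value below is only an illustration, not a recommendation:

sysctl -w net.core.rmem_max=16777216   # max receive buffer a socket may request
sysctl -w net.core.wmem_max=16777216   # max send buffer a socket may request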

/proc/sys/net/ipv4/ip_forward

Enable or disable routing.
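For instance, following the same pattern as the cdrom example above:

echo 1 > /proc/sys/net/ipv4/ip_forward   # enable forwarding (routing); echo 0 disables it again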

/proc/sys/sunrpc

This directory supports Sun remote procedure call for network file system (NFS).

/proc/sys/vm

This directory contains files for memory-management tuning and buffer and cache management. It is one of the more interesting directories in /proc/sys, as it allows manipulating memory handling in real time.

/proc/sys/vm/swappiness
(since kernel 2.6.16)

vm.swappiness takes a value between 0 and 100 to change the balance between swapping applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; in other cases, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.
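An illustrative use (the value 10 is only an example that favours application memory over cache):

cat /proc/sys/vm/swappiness       # the default is usually 60
echo 10 > /proc/sys/vm/swappiness # make the kernel less eager to swap applications out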

/proc/sys/vm/drop_caches
(since kernel 2.6.16)

Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache, write 1 to this file.
To free dentries and inodes, write 2 to this file.
To free pagecache, dentries and inodes, write 3 to this file.
Just try echo 1 > /proc/sys/vm/drop_caches, and watch your memory usage drop by all kernel cache memory.

/proc/sys/vm/legacy_va_layout
(since kernel 2.6.9)

If non-zero, this disables the new 32-bit memory-mapping layout; the kernel will use the legacy (2.4) layout for all processes.

/proc/sys/vm/oom_dump_tasks
(since kernel 2.6.25)

Enables a system-wide task dump (excluding kernel threads) to be produced when the kernel performs an OOM-killing. The dump includes the following information for each task (thread, process): thread ID, real user ID, thread group ID (process ID), virtual memory size, resident set size, the CPU that the task is scheduled on, oom_adj score (see the description of /proc/[number]/oom_adj), and command name. This is helpful to determine why the OOM-killer was invoked and to identify the rogue task that caused it.
If this contains the value zero, this information is suppressed.
It defaults to 0, so if you have a problem requiring it, enable it:
echo 1 > /proc/sys/vm/oom_dump_tasks

/proc/sys/vm/oom_kill_allocating_task
(since kernel 2.6.24)

This enables or disables killing the OOM-triggering task in out-of-memory situations. If this is set to zero, the OOM-killer will scan through the entire tasklist and select a task based on heuristics to kill. This normally selects a rogue memory-hogging task that frees up a large amount of memory when killed.
If this is set to non-zero, the OOM-killer simply kills the task that triggered the out-of-memory condition. This avoids a possibly expensive tasklist scan.
If /proc/sys/vm/panic_on_oom is non-zero, it takes precedence over whatever value is used in /proc/sys/vm/oom_kill_allocating_task.
The default value is 0.

/proc/sys/vm/overcommit_memory

This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (default)
1: always overcommit, never check
2: always check, never overcommit

/proc/sys/vm/overcommit_ratio

Percentage of physical RAM that, together with swap space, defines the commit limit used when overcommit_memory is 2.

/proc/sys/vm/panic_on_oom
(since kernel 2.6.18)

This enables or disables a kernel panic in an out-of-memory situation.
0 (default): no panic
1: panic, but not if a process limits allocations to certain nodes using the memory policies mbind or cpusets and those nodes reach memory exhaustion status
2: always panic

Network

SAN Switch basic concepts – Fabric Switch

A SAN environment provides block-oriented I/O between the computer systems and the target disk systems. The SAN may use Fibre Channel or Ethernet (iSCSI) to provide connectivity between hosts and storage. In either case, the storage is physically decoupled from the hosts. The storage devices and the hosts become peers attached to a common SAN fabric that provides high bandwidth, longer reach, the ability to share resources, enhanced availability, and other benefits of consolidated storage.

A SAN is created by using Fibre Channel to link peripheral devices such as disk storage and tape libraries.
A SAN (Storage Area Network) switch is a device that connects the servers and shared pools of storage devices and is dedicated to moving storage traffic. It is shown below.


Picture: SAN Switch

Basic Connectivity Diagram between Servers, SAN Storage, SAN Switch and Tape Library.


Picture: Basic Connectivity Diagram

A SAN switch will contain the below physical parts:

  1. One / Two Hot Swap-able Power supply units
  2. SFP (Small Form Factor pluggable) Ports
  3. Out Band Management Port (RJ45)
  4. Console Port
  5. USB ports
  6. FC ports (count depends upon the model)


Switch Back View


Front View

Hot-swappable power supply units:  Hot swapping and hot plugging are terms used to describe replacing computer-system components without shutting down the system. Using two power supply units helps with redundancy: one unit connects to PDU 1 in the rack and the other to PDU 2.

SFP (Small Form-factor Pluggable):  A small transceiver that plugs into the SFP port of a network switch and connects to Fibre Channel and Gigabit Ethernet (GbE) optical fibre cables at the other end. The SFP is a hot-swappable input/output device that plugs into a switch port, allowing multiple options for connectivity. Superseding the GBIC transceiver, SFP modules are also called “mini-GBIC” due to their smaller size.


Fibre cables are used to connect storage to servers as well as storage to tape libraries. A fibre cable is shown below.


FC Cable

Fibre cable: A fibre-optic cable consists of a bundle of glass threads, each of which is capable of transmitting messages modulated onto light waves. Fibre optics has several advantages over traditional metal communication lines; for instance, fibre-optic cables have a much greater bandwidth than metal cables.

Ethernet port: Out-of-band management involves the use of a dedicated channel for managing network devices. This allows the network operator to establish trust boundaries in accessing the management function and to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components. A complete remote management system allows remote reboot, shutdown, and powering on, as well as hardware sensor monitoring (fan speed, power voltages, chassis intrusion, etc.). The out-of-band management port is also called the Ethernet management port (RJ45).


Console: Switch console ports are meant to allow root access to the switch via a dumb terminal interface, regardless of the state of the switch (unless it is completely dead). By connecting to the console port you can get remote access to the root level of a switch without using the network that the switch is connected to. This creates a secondary path to the switch outside the bandwidth of the network which needs to be secured without relying on the primary network.

Picture: Console Port

This allows a technician sitting in a Network Operations Center thousands of miles away the ability to restore a switch or perform an initialization configuration securely over a standard telephone line even if the primary network is in failure. Without a connection to the console port, a technician would have to visit the site to perform repairs or initialization.

Login to SAN Switch.


Putty: A free telnet and SSH terminal software for Windows and Unix platforms that enables users to remotely access computers over the Internet.

By typing the switch IP address into the PuTTY configuration, we can log in to the SAN switch (CLI).

We can also log in to the SAN switch GUI console using a web browser: open the browser and type the IP address or SAN switch name in the address bar.
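From a Linux host you can get the same CLI without PuTTY (the IP address and account name below are placeholders and vary by vendor):

ssh admin@192.168.1.100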


Understanding Port Management. Port Types and Definitions

E_Port: This is an expansion port. A port is designated an E_Port when it is used as an inter switch expansion port (ISL) to connect to the E_Port of another switch, to enlarge the switch fabric.
F_Port: This is a fabric port that is not loop capable. It is used to connect an N_Port point-to-point to a switch.
FL_Port: This is a fabric port that is loop capable. It is used to connect NL_Ports to the switch in a public loop configuration (in switched fabric env.).
G_Port: This is a generic port that can operate as either an E_Port or an F_Port. A port is defined as a G_Port after it is connected but has not received a response to loop initialization or has not yet completed the link initialization procedure with the adjacent Fibre Channel device.
L_Port: This is a loop capable node or switch port.
U_Port: This is a universal port, a more generic switch port than a G_Port. It can operate as an E_Port, F_Port, or FL_Port. A port is defined as a U_Port when it is not connected or has not yet assumed a specific function in the fabric.
VE_Port – A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other
EX_Port – An E_Port from a router to an edge fabric; the router terminates EX_Ports preventing fabric merges
VEX_Port – A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other, when an FCIP connection is involved

Target Device (Device ports)

N_Port: This is a node port that is not loop capable. It is used to connect an equipment port to the fabric.
NL_Port: This is a node port that is loop capable. It is used to connect an equipment port to the fabric in a loop configuration through an L_Port or FL_Port.
T_Port: This was used previously by CNT (INRANGE) as a mechanism of connecting directors together. It has been largely replaced by the E_Port.
No_Light: indicates that the port is free.

 

Source :- https://arkit.co.in/san-switch-basic

Operating System, Redhat / CEntOS / Oracle Linux

Linux Concepts :- Quotas

Rules: 1. Quotas can only be created for partitions.

E.g., if a 1 MB quota is set for the partition [/home],

then every subdir under /home can use a max of 1 MB.

             user                              |              group
 block [diskspace]    inode [no of files]      |  block [diskspace]    inode [files]
 SOFT HARD GRACE      SOFT HARD GRACE          |  SOFT HARD GRACE      SOFT HARD GRACE

                              <---- Limits ---->

==========================================

Part I. Configuring / Setting Up Quotas

==========================================

1. Configure /etc/fstab – usrquota or grpquota, whichever you want.

In the 4th column of /etc/fstab add ‘usrquota’.
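For example (the same line appears in the Q&A further below):

LABEL=/home /home ext3 defaults,usrquota 1 2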

2. Refresh /etc/fstab – actually /etc/mtab:

a. If you do not want hassles, just REBOOT and skip to Part II.

or

b. mount -a – which will do nothing if the filesystems are already mounted, which they always are

or

c. umount /home and mount /home – but this will not work if any users are online, which they always are

or

d. mount -o remount /home <<<=========================

[mount -o remount,rw /]

This refreshes and will work even if users are online

or

e. reboot – which is the coward’s way

and then check /etc/mtab or mount*

3. To create the aquota.user file:

–> quotacheck -vc /home [force] – Note: ‘u’ is the default if not given, and ‘v’, of course, is always optional but friendly.

This will create /home/aquota.user, which monitors all quota activity for the /home partition.

4. To turn on quotas:

–> quotaon -v /home – turns quotas on, i.e., loads the /home/aquota.user file into RAM.

5. Check everybody’s current usage and quotas :

# repquota -a

Configuring quotas is now over !

Now we will implement quotas.

In short :

1. Configure /etc/fstab

2. mount -o remount /home Cross check with # cat /etc/mtab

3. quotacheck -vc /home Creates aquota.user

4. quotaon -v /home Loads aquota.user in RAM

5. repquota -a Enjoy your work !

e.g.

1. repquota -a

2. repquota -u /

3. repquota -u sachin

The first line shows quota info for all users and groups for all file systems.

The second line shows user quota info for the / file system.

The third line shows quota information for user sachin on all file systems.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

=============================

Part II. Implementing Quotas

=============================

6. edquota -u user

7. edquota -p foo bar <——— use foo as quota prototype for bar

8. edquota -t <———— To change the grace period

or use :

# edquota -p foo `awk -F: '$3 > 499 {print $1}' /etc/passwd`

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

========================================

Part III. Repairing the aquota.user file

========================================

If your system hangs and then restarts, the aquota.user file gets corrupted and all quotas for all users are then in an unknown state.

To repair, boot into single-user mode immediately, and do this _FIRST_:

quotaoff -v /home

quotacheck -avug – the minimum required is quotacheck /home, since ‘u’ is the default, we do not have ‘g’, and ‘v’ is optional; ‘a’ means check /etc/mtab. This re-creates the file /home/aquota.user.

quotaon -v /home

Misc : /etc/warnquota.conf

warnquota*

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

GNU/Linux LDUP : 27-Jul-2k3

PART II – SysAdministration

08. QUOTAS

1. What are the two aspects of disk storage that quotas allow you to specify?

A: Disk space [Block] and Files [inode] quotas

2. Which init script checks for the presence or absence of quotas ?

A: /etc/rc.d/rc.sysinit

3. I wish to implement quotas on my /home dir ? Should /home be a partition?

A: Yes.

4. Which file is configured when setting up quotas ?

A: /etc/fstab

5. What is the minimum I have to do to implement both user and group quotas for my /home partition?

A: Configure /etc/fstab with :

LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2

and then just reboot the machine !!

6. But I wish to implement only user quotas; is this /etc/fstab OK?

LABEL=/home /home ext3 defaults, usrquota 1 2

A: No. No space in 4th field after defaults

7. Is this /etc/fstab OK ?

LABEL=/home /home ext3 defaults,userquota 1 2

A: No. It’s usrquota.

8. Where is this quota info for user and group quotas stored for /home?

A: In the /home partition, in 2 data files : aquota.user, aquota.group

9. Which file keeps track of all the mounted filesystems ?

A: /etc/mtab

10 How would you implement user quotas without rebooting your machine ?

A: Configure /etc/fstab with :

1. LABEL=/home /home ext3 defaults,usrquota 1 2

2. mount -o remount /home

3. quotacheck -vc /home

4. quotaon -v /home

11 Why would you want to do a quotaoff before you do a quotacheck manually from the CLI?

A: Because running quotacheck while quotas are on corrupts the aquota.user file.

12 Can a user – foo – modify/create his quota?

A: Obviously not! Only root can do that! Or else every user would abuse the quota system for his benefit!

13 Can foo at least see his quota status ?

A: Yes.

14 How ?

A: Login as foo and run ‘quota’.

15. What are these quotas that he will see ?

A: His soft limit, hard limit and grace period

16. How then will root set [create/modify] quotas for ‘foo’ ?

A: edquota -u foo

17. Examine the following output generated by the quota command given by foo:

Disk quotas for user foo (uid 500):

Filesystem  blocks  soft  hard  inodes  soft  hard
/dev/hda        20   100     0      14     0     0

18. How much disk space has foo already used ?

A: 20 blocks i.e. 20 KB

19. His soft limit appears to be 100 KB. Can he use 130 KB ?

A: Yes, but he will get warning messages.

20. Can he use unlimited disk space and fill the partition?

A: Yes, but only up until the grace period expires. After that, the soft limit is enforced as the hard limit. Additionally, he will get warning messages.

21. Then what is the use of this soft limit ? How can I remedy it ?

A: By giving a hard limit too. foo can never cross the hard limit.

22. Now I do this:

Disk quotas for user foo (uid 500):

Filesystem  blocks  soft  hard  inodes  soft  hard
/dev/hda        20   100   200      14     0     0

foo creates files worth 160 KB. What will happen then?

A: He will be allowed to create up to 200 KB max. Also, after 7 days he will be shut down regardless of whether he has reached his hard limit or not. He will have to clean up to under the soft limit to work.

23. What command is used to change a user’s grace period?

A: edquota -t

24. What command is used to see the entire quota details of all users?

A: repquota -a

25. What command sets a quota template?

A: edquota -p

26. What does ‘p’ mean and how would you use it?

A: ‘prototype’.

Suppose ‘foo’ has his quota set. Then you could clone his details:

# edquota -p foo bar

bar now has the same quota limits as foo.

27. If you had 2000 users, the above would clearly be inconvenient. Solve!

a: edquota -p foo `awk -F: '$3 > 499 { print $1 }' /etc/passwd`

28. An over-limit quota generates a mail message to the user on login. Which file would you modify to customize the mail delivered?

A: /etc/warnquota.conf

29. If quotas were run as a daily cron job, where would you find the script file concerned?

a: /etc/cron.daily/

30 A user owns 150 inodes; the soft limit is 100 and the hard limit is 200. Which of the following is correct if the grace period has not expired?

a. The user can create no more files
b. The user cannot append data to an existing file
c. The user cannot log off without deleting some files
d. The user will receive an email notice of violation

A: d.

31 What is the purpose of ‘convertquota’?

A: convertquota converts the old quota files quota.user and quota.group to the files aquota.user and aquota.group in the new format currently used by 2.4.0-ac and newer, or by Red Hat Linux 2.4 kernels, on a filesystem. The new file format allows using quotas for 32-bit UIDs/GIDs, setting quotas for root, and accounting used space in bytes (and so allowing the use of quotas on ReiserFS), and it is also architecture independent. This format introduces a Radix tree (a simple form of tree structure) to the quota file.

Certificates, Secure Socket Layer

SSL – Create Root, Intermediate and Certificate in Chain

Create a Chain Certificate (Root, Intermediate & Normal Chain) – Step-by-step
——————————————————————————————
ROOT CERTIFICATE
——————————————————————————————
mkdir /root/ca
cd /root/ca
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
vim openssl.cnf

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = /root/ca
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/root_haritibco.key.pem
certificate       = $dir/certs/root_haritibco.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/ca.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 375
preserve          = no
policy            = policy_strict

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ policy_loose ]
# Allow the intermediate CA to sign a more diverse range of certificates.
# See the POLICY FORMAT section of the `ca` man page.
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = IN
stateOrProvinceName_default     = Maharashtra
localityName_default            = Mumbai
0.organizationName_default      = Hari TIBCO Blog Ltd
organizationalUnitName_default  = HLL
emailAddress_default            = hsivabc@gmail.com

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ v3_intermediate_ca ]
# Extensions for a typical intermediate CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always

[ ocsp ]
# Extension for OCSP signing certificates (`man ocsp`).
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, digitalSignature
extendedKeyUsage = critical, OCSPSigning

(Create root key)
cd /root/ca
openssl genrsa -aes256 -out private/root_haritibco.key.pem 4096
(enter a passphrase when prompted – e.g. test12345)
chmod 400 private/root_haritibco.key.pem
(Create root certificate)
cd /root/ca
openssl req -config openssl.cnf \
-key private/root_haritibco.key.pem \
-new -x509 -days 7300 -sha256 -extensions v3_ca \
-out certs/root_haritibco.cert.pem
chmod 444 certs/root_haritibco.cert.pem
(Verify Root Certificate)
openssl x509 -noout -text -in certs/root_haritibco.cert.pem

——————————————————————————————
INTERMEDIATE CERTIFICATE
——————————————————————————————
mkdir /root/ca/intermediate
cd /root/ca/intermediate
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
echo 1000 > /root/ca/intermediate/crlnumber
cd /root/ca

vim intermediate/openssl.cnf


# OpenSSL intermediate CA configuration file.
# Copy to `/root/ca/intermediate/openssl.cnf`.

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = /root/ca/intermediate
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/inter_haritibco.key.pem
certificate       = $dir/certs/inter_haritibco.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/inter_haritibco.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 375
preserve          = no
policy            = policy_loose

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ policy_loose ]
# Allow the intermediate CA to sign a more diverse range of certificates.
# See the POLICY FORMAT section of the `ca` man page.
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = IN
stateOrProvinceName_default     = Maharashtra
localityName_default            = Mumbai
0.organizationName_default      = Hari TIBCO Blog Ltd
organizationalUnitName_default  = HLL
emailAddress_default            = hsivabc@gmail.com

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ v3_intermediate_ca ]
# Extensions for a typical intermediate CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always

[ ocsp ]
# Extension for OCSP signing certificates (`man ocsp`).
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, digitalSignature
extendedKeyUsage = critical, OCSPSigning

openssl genrsa -aes256 \
-out intermediate/private/inter_haritibco.key.pem 4096
(enter a passphrase when prompted – e.g. test12345)
chmod 400 intermediate/private/inter_haritibco.key.pem

cd /root/ca
openssl req -config intermediate/openssl.cnf -new -sha256 \
-key intermediate/private/inter_haritibco.key.pem \
-out intermediate/csr/inter_haritibco.csr.pem
cd /root/ca
openssl ca -config openssl.cnf -extensions v3_intermediate_ca \
-days 3650 -notext -md sha256 \
-in intermediate/csr/inter_haritibco.csr.pem \
-out intermediate/certs/inter_haritibco.cert.pem
chmod 444 intermediate/certs/inter_haritibco.cert.pem

openssl x509 -noout -text \
-in intermediate/certs/inter_haritibco.cert.pem

openssl verify -CAfile certs/root_haritibco.cert.pem \
intermediate/certs/inter_haritibco.cert.pem

cat intermediate/certs/inter_haritibco.cert.pem \
certs/root_haritibco.cert.pem > intermediate/certs/chain_haritibco.cert.pem
chmod 444 intermediate/certs/chain_haritibco.cert.pem


—————————————————————————
CERTIFICATE
—————————————————————————
cd /root/ca
openssl genrsa -aes256 \
-out intermediate/private/haritibcoblog.key.pem 2048
chmod 400 intermediate/private/haritibcoblog.key.pem

cd /root/ca
openssl req -config intermediate/openssl.cnf \
-key intermediate/private/haritibcoblog.key.pem \
-new -sha256 -out intermediate/csr/haritibcoblog.csr.pem

cd /root/ca
openssl ca -config intermediate/openssl.cnf \
-extensions server_cert -days 375 -notext -md sha256 \
-in intermediate/csr/haritibcoblog.csr.pem \
-out intermediate/certs/haritibcoblog.cert.pem
chmod 444 intermediate/certs/haritibcoblog.cert.pem

openssl x509 -noout -text \
-in intermediate/certs/haritibcoblog.cert.pem

openssl verify -CAfile intermediate/certs/chain_haritibco.cert.pem \
intermediate/certs/haritibcoblog.cert.pem
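Optionally, you can also confirm the validity window of the freshly signed certificate:

openssl x509 -noout -dates -in intermediate/certs/haritibcoblog.cert.pem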

 

 

 

Operating System, Redhat / CEntOS / Oracle Linux

REDHAT Linux Notes (Handwritten)

Guys, these are my very own handwritten Linux notes.

I have a habit of writing notes, and this is something I wrote back in 2012 when I was a newbie in Linux 🙂

So enjoy, and click the link below to download:

HariIyerLinuxNotes

HTTP/WSDL/XML/HTML

HTTP Status Codes:- 1xx (Informational)

  • 100 Continue :- The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed.
  • 101 Switching Protocols :- The server understands and is willing to comply with the client’s request, via the Upgrade message header field, for a change in the application protocol being used on this connection. The server will switch protocols to those defined by the response’s Upgrade header field immediately after the empty line which terminates the 101 response. The protocol SHOULD be switched only when it is advantageous to do so. For example, switching to a newer version of HTTP is advantageous over older versions, and switching to a real-time, synchronous protocol might be advantageous when delivering resources that use such features.
  • 102 Processing (WebDAV) :- The 102 (Processing) status code is an interim response used to inform the client that the server has accepted the complete request, but has not yet completed it. This status code SHOULD only be sent when the server has a reasonable expectation that the request will take significant time to complete. As guidance, if a method is taking longer than 20 seconds (a reasonable, but arbitrary value) to process, the server SHOULD return a 102 (Processing) response. The server MUST send a final response after the request has been completed. Methods can potentially take a long period of time to process, especially methods that support the Depth header. In such cases the client may time out the connection while waiting for a response. To prevent this, the server may return a 102 (Processing) status code to indicate to the client that the server is still processing the method.
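You can watch the 100 Continue handshake with curl (the URL and file below are placeholders):

curl -v -H "Expect: 100-continue" --upload-file bigfile.bin http://example.com/upload

With -v, curl prints the interim “HTTP/1.1 100 Continue” line from the server before it starts sending the request body.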

     

Java Performance Monitoring

java.lang.OutOfMemoryError – Types and Causes

The 8 symptoms that surface them

The many thousands of java.lang.OutOfMemoryErrors that I’ve met during my career all bear one of the below eight symptoms.

This post from haritibcoblog explains what causes a particular error to be thrown, offers code examples that can cause such errors, and gives you solution guidelines for a fix.

The content is all based on my own experience.

  • java.lang.OutOfMemoryError:Java heap space
  • java.lang.OutOfMemoryError:GC overhead limit exceeded
  • java.lang.OutOfMemoryError:Permgen space
  • java.lang.OutOfMemoryError:Metaspace
  • java.lang.OutOfMemoryError:Unable to create new native thread
  • java.lang.OutOfMemoryError:Out of swap space?
  • java.lang.OutOfMemoryError:Requested array size exceeds VM limit
  • Out of memory:Kill process or sacrifice child

java.lang.OutOfMemoryError:Java heap space

Java applications are only allowed to use a limited amount of memory. This limit is specified during application startup. To make things more complex, Java memory is separated into two different regions. These regions are called Heap space and Permgen (for Permanent Generation):

Picture: Java memory regions – Heap space and Permgen

The size of those regions is set during the Java Virtual Machine (JVM) launch and can be customized by specifying JVM parameters -Xmx and -XX:MaxPermSize. If you do not explicitly set the sizes, platform-specific defaults will be used.
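For example, a launch line pinning both regions could look like this (the class name is a placeholder; note that on Java 8+ the permanent generation was replaced by Metaspace, sized via -XX:MaxMetaspaceSize):

java -Xmx1024m -XX:MaxPermSize=256m com.example.MyApp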

The java.lang.OutOfMemoryError: Java heap space error will be triggered when the application attempts to add more data into the heap space area, but there is not enough room for it.

Note that there might be plenty of physical memory available, but the java.lang.OutOfMemoryError: Java heap space error is thrown whenever the JVM reaches the heap size limit.

What is causing it?

There most common reason for the java.lang.OutOfMemoryError: Java heap space error is simple – you try to fit an XXL application into an S-sized Java heap space. That is – the application just requires more Java heap space than available to it to operate normally. Other causes for this OutOfMemoryError message are more complex and are caused by a programming error:

  • Spikes in usage/data volume. The application was designed to handle a certain number of users or a certain volume of data. When the number of users or the volume of data suddenly spikes and crosses the expected threshold, an operation which functioned normally before the spike ceases to operate and triggers the java.lang.OutOfMemoryError: Java heap space error.

  • Memory leaks. A particular type of programming error will lead your application to constantly consume more memory. Every time the leaking functionality of the application is used, it leaves some objects behind in the Java heap space. Over time the leaked objects consume all of the available Java heap space and trigger the already familiar java.lang.OutOfMemoryError: Java heap space error.
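As a minimal sketch of the simplest case (the class name and the -Xmx value below are illustrative, not from the original post), the following program keeps allocating memory and holding on to it, so the garbage collector can never help:

import java.util.ArrayList;
import java.util.List;

// Run with a small heap, e.g. java -Xmx64m HeapSpaceEater, to trigger
// java.lang.OutOfMemoryError: Java heap space within seconds.
public class HeapSpaceEater {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<byte[]>();
        while (true) {
            hoard.add(new byte[1024 * 1024]); // 1 MB block that is never released
        }
    }
}

Note that the list keeps every block reachable; drop the hoard.add() call and the same loop runs forever, because each block becomes garbage immediately after allocation.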

java.lang.OutOfMemoryError: GC overhead limit exceeded

The Java runtime environment contains a built-in Garbage Collection (GC) process. In many other programming languages, developers need to manually allocate and free memory regions so that the freed memory can be reused.

Java applications on the other hand only need to allocate memory. Whenever a particular space in memory is no longer used, a separate process called Garbage Collection clears it for them. How the GC detects that a particular part of memory is no longer in use is explained in more detail in the Garbage Collection Handbook, but you can trust the GC to do its job well.

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is displayed when your application has exhausted pretty much all the available memory and the GC has repeatedly failed to clean it.

What is causing it?

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is the JVM’s way of signalling that your application spends too much time doing garbage collection with too little result. By default, the JVM is configured to throw this error if it spends more than 98% of the total time doing GC and recovers less than 2% of the heap in the process.

What would happen if this GC overhead limit did not exist? Note that the java.lang.OutOfMemoryError: GC overhead limit exceeded error is only thrown when less than 2% of the memory is freed after several GC cycles. This means that the small amount of heap the GC is able to clean will likely be filled again quickly, forcing the GC to restart the cleaning process. This forms a vicious cycle where the CPU is 100% busy with GC and no actual work can be done. End users of the application face extreme slowdowns – operations which normally complete in milliseconds take minutes to finish.

So the “java.lang.OutOfMemoryError: GC overhead limit exceeded” message is a pretty nice example of a fail fast principle in action.
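A minimal sketch that tends to surface this particular error (rather than plain heap exhaustion) fills the heap with long-lived entries so that each GC cycle recovers almost nothing; the class name and JVM flag below are illustrative:

import java.util.HashMap;
import java.util.Map;

// Run with e.g. java -Xmx100m GcOverheadDemo. As the map approaches the
// heap limit, full GC cycles recover less and less, until the JVM gives up
// with java.lang.OutOfMemoryError: GC overhead limit exceeded (depending on
// JVM version and GC settings you may see "Java heap space" instead).
public class GcOverheadDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<Integer, String>();
        int key = 0;
        while (true) {
            map.put(key, String.valueOf(key)); // live data only ever grows
            key++;
        }
    }
}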

java.lang.OutOfMemoryError: Permgen space

Java applications are only allowed to use a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into different regions.

The size of all those regions, including the permgen area, is set during the JVM launch. If you do not set the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation’s area in memory is exhausted.


What is causing it?

To understand the cause for the java.lang.OutOfMemoryError: PermGen space, we would need to understand what this specific memory area is used for.

For practical purposes, the permanent generation consists mostly of class declarations loaded and stored into PermGen. This includes the name and fields of the class, methods with the method bytecode, constant pool information, object arrays and type arrays associated with a class and Just In Time compiler optimizations.

From the above definition you can deduce that the PermGen size requirements depend both on the number of classes loaded and on the size of those class declarations. Therefore we can say that the main cause for the java.lang.OutOfMemoryError: PermGen space error is that either too many classes, or too big classes, are loaded into the permanent generation.
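For a hedged illustration: on Java 6 and earlier, interned strings were also stored in PermGen, so the sketch below (class name and flag value are illustrative) exhausts it without loading a single extra class. On Java 7+ interned strings moved to the heap, so there this particular demo triggers a heap error instead; on those versions, class-loading leaks (e.g. repeated redeployments in an application server) are the typical culprit.

import java.util.ArrayList;
import java.util.List;

// Run on Java 6 with e.g. java -XX:MaxPermSize=32m PermGenEater to get
// java.lang.OutOfMemoryError: PermGen space.
public class PermGenEater {
    public static void main(String[] args) {
        List<String> pinned = new ArrayList<String>();
        int i = 0;
        while (true) {
            pinned.add(String.valueOf(i).intern()); // interned copy lands in PermGen
            i++;
        }
    }
}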

java.lang.OutOfMemoryError: Metaspace

Java applications are allowed to use only a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into different regions.

The size of all those regions, including the metaspace area, can be specified during the JVM launch. If you do not determine the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: Metaspace message indicates that the Metaspace area in memory is exhausted.


What is causing it?

If you are not a newcomer to the Java landscape, you might be familiar with another concept in Java memory management called PermGen. Starting from Java 8, the memory model in Java was significantly changed: a new memory area called Metaspace was introduced and Permgen was removed. This change was made due to a variety of reasons, including but not limited to:

  • The required size of permgen was hard to predict. It resulted either in under-provisioning, triggering java.lang.OutOfMemoryError: PermGen space errors, or in over-provisioning, wasting resources.
  • GC performance improvements, enabling concurrent class data de-allocation without GC pauses and specific iterators on metadata.
  • Support for further optimizations such as G1 concurrent class unloading.

So if you were familiar with PermGen then all you need to know as background is that – whatever was in PermGen before Java 8 (name and fields of the class, methods of a class with the bytecode of the methods, constant pool, JIT optimizations etc) – is now located in Metaspace.

As you can see, Metaspace size requirements depend both on the number of classes loaded and on the size of those class declarations. So the main cause for the java.lang.OutOfMemoryError: Metaspace error is easy to see: either too many classes or too big classes are being loaded into the Metaspace.
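As a sketch (this assumes the javassist bytecode library on the classpath; the class names and the -XX:MaxMetaspaceSize value are illustrative), the following program keeps defining new classes, whose metadata accumulates in Metaspace:

import javassist.ClassPool;

// Run with e.g. java -XX:MaxMetaspaceSize=64m MetaspaceEater to get
// java.lang.OutOfMemoryError: Metaspace.
public class MetaspaceEater {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        for (int i = 0; ; i++) {
            // generate and load a trivial new class on every iteration;
            // its metadata stays in Metaspace as long as the classloader lives
            pool.makeClass("com.example.Generated" + i).toClass();
        }
    }
}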

java.lang.OutOfMemoryError: Unable to create new native thread

Java applications are multi-threaded by nature. What this means is that the programs written in Java can do several things (seemingly) at once. For example – even on machines with just one processor – while you drag content from one window to another, the movie played in the background does not stop just because you carry out several operations at once.

A way to think about threads is to think of them as workers to whom you can submit tasks to carry out. If you had only one worker, he or she could only carry out one task at the time. But when you have a dozen workers at your disposal they can simultaneously fulfill several of your commands.

Now, as with workers in physical world, threads within the JVM need some elbow room to carry out the work they are summoned to deal with. When there are more threads than there is room in memory we have built a foundation for a problem:

The message java.lang.OutOfMemoryError: Unable to create new native thread means that the Java application has hit the limit of how many Threads it can launch.


What is causing it?

You have a chance to face the java.lang.OutOfMemoryError: Unable to create new native thread whenever the JVM asks the OS for a new thread. Whenever the underlying OS cannot allocate a new native thread, this OutOfMemoryError will be thrown. The exact limit for native threads is very platform-dependent, thus we recommend finding out those limits by running a test similar to the example below this list. But, in general, the situation causing java.lang.OutOfMemoryError: Unable to create new native thread goes through the following phases:

  1. A new Java thread is requested by an application running inside the JVM
  2. JVM native code proxies the request to create a new native thread to the OS
  3. The OS tries to create a new native thread which requires memory to be allocated to the thread
  4. The OS will refuse the native memory allocation, either because the 32-bit Java process has depleted its memory address space – e.g. a (2-4) GB process size limit has been hit – or because the virtual memory of the OS has been fully depleted
  5. The java.lang.OutOfMemoryError: Unable to create new native thread error is thrown.
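A minimal sketch of such a test follows (the class name is illustrative; be careful, as this can make a machine unresponsive, so run it with tight OS limits, e.g. after checking ulimit -u on Linux):

// Keeps starting threads that never finish. Each thread pins a native
// stack, so eventually the OS refuses the allocation and the JVM throws
// java.lang.OutOfMemoryError: Unable to create new native thread.
public class ThreadEater {
    public static void main(String[] args) {
        int count = 0;
        while (true) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // park forever
                    } catch (InterruptedException ignored) {
                    }
                }
            }).start();
            System.out.println("threads started: " + ++count);
        }
    }
}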

 

java.lang.OutOfMemoryError: Out of swap space?

Java applications are given a limited amount of memory during startup. This limit is specified via the -Xmx and other similar startup parameters. In situations where the total memory requested by the JVM is larger than the available physical memory, the operating system starts swapping out content from memory to the hard drive.

The java.lang.OutOfMemoryError: Out of swap space? error indicates that the swap space is also exhausted and the new attempted allocation fails due to the lack of both physical memory and swap space.


What is causing it?

The java.lang.OutOfMemoryError: Out of swap space? error is thrown by the JVM when an allocation request for bytes from the native heap fails and the native heap is close to exhaustion. The message indicates the size (in bytes) of the allocation that failed and the reason for the memory request.

The problem occurs in situations where the Java process has started swapping, which, recalling that Java is a garbage-collected language, is already not a good situation. Modern GC algorithms do a good job, but when faced with latency issues caused by swapping, GC pauses tend to increase to levels not tolerable by most applications.

java.lang.OutOfMemoryError: Out of swap space? is often caused by operating system level issues, such as:

  • The operating system is configured with insufficient swap space.
  • Another process on the system is consuming all memory resources.

It is also possible that the application fails due to a native leak, for example, if application or library code continuously allocates memory but does not release it to the operating system.

java.lang.OutOfMemoryError: Requested array size exceeds VM limit

Java has a limit on the maximum array size your program can allocate. The exact limit is platform-specific, but is generally somewhere between 1 and 2.1 billion elements.

When you face the java.lang.OutOfMemoryError: Requested array size exceeds VM limit, this means that the application that crashes with the error is trying to allocate an array larger than the Java Virtual Machine can support.


What is causing it?

The error is thrown by native code within the JVM. It happens before allocating memory for an array, when the JVM performs a platform-specific check: whether the requested data structure is addressable on this platform. This error is less common than you might initially think.

The reason you only seldom face this error is that Java arrays are indexed by int. The maximum positive int in Java is 2^31 – 1 = 2,147,483,647. And the platform-specific limits can be really close to this number – for example, on my 64-bit MacBook Pro with Java 1.7 I can happily initialize arrays with up to 2,147,483,645, or Integer.MAX_VALUE - 2, elements.

Increasing the length of the array by one to Integer.MAX_VALUE-1 results in the familiar OutOfMemoryError:

Exception in thread “main” java.lang.OutOfMemoryError: Requested array size exceeds VM limit

But the limit might not be that high – on 32-bit Linux with OpenJDK 6, you will hit the “java.lang.OutOfMemoryError: Requested array size exceeds VM limit” already when allocating an array with about 1.1 billion elements. To understand the limits of your specific environment, run a small test program similar to the sketch below.
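A minimal sketch of such a test program (class name illustrative) binary-searches for the largest int[] your JVM will hand out; give it a heap large enough (e.g. -Xmx16g on a 64-bit JVM) that you hit the VM’s array-size check rather than plain heap exhaustion:

// Probes the largest allocatable int[] length on this JVM/platform.
public class MaxArrayProbe {
    public static void main(String[] args) {
        long lo = 0, hi = Integer.MAX_VALUE;
        while (lo < hi) {
            long mid = (lo + hi + 1) / 2;
            try {
                int[] probe = new int[(int) mid]; // may throw OutOfMemoryError
                lo = mid;                         // allocation succeeded
            } catch (OutOfMemoryError e) {
                hi = mid - 1;                     // too big, back off
            }
        }
        System.out.println("Largest allocatable int[] length: " + lo);
    }
}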

Out of memory: Kill process or sacrifice child

In order to understand this error, we need to recap some operating system basics. As you know, operating systems are built on the concept of processes. Those processes are shepherded by several kernel jobs, one of which, named the “Out of memory killer”, is of interest to us in this particular case.

This kernel job can annihilate your processes under extremely low memory conditions. When such a condition is detected, the Out of memory killer is activated and picks a process to kill. The target is picked using a set of heuristics scoring all processes and selecting the one with the worst score to kill. The Out of memory: Kill process or sacrifice child is thus different from other errors covered in our OOM handbook as it is not triggered nor proxied by the JVM but is a safety net built into the operating system kernels.

The Out of memory: kill process or sacrifice child error is generated when the available virtual memory (including swap) is consumed to the extent where the overall operating system stability is put to risk. In such case the Out of memory killer picks the rogue process and kills it.


What is causing it?

By default, Linux kernels allow processes to request more memory than currently available in the system. This makes all the sense in the world, considering that most of the processes never actually use all of the memory they allocate. The easiest comparison to this approach would be the broadband operators. They sell all the consumers a 100Mbit download promise, far exceeding the actual bandwidth present in their network. The bet is again on the fact that the users will not simultaneously all use their allocated download limit. Thus one 10Gbit link can successfully serve way more than the 100 users our simple math would permit.

A side effect of such an approach becomes visible when some of your programs are on the path of depleting the system’s memory. This can lead to extremely low memory conditions, where no pages can be allocated to processes. You might have faced such a situation, where not even the root account can kill the offending task. To prevent such situations, the killer activates and identifies the rogue process to be killed.

You can read more about fine-tuning the behaviour of “Out of memory killer” in this article from RedHat documentation.

Now that we have the context, how can you know what triggered the “killer” and woke you up at 5AM? One common trigger for the activation is hidden in the operating system configuration. When you check the configuration in /proc/sys/vm/overcommit_memory, you have the first hint – the value specified here indicates whether malloc() calls are allowed to succeed unconditionally. (Note that the exact path to this parameter in the proc file system may vary between systems.)

An overcommitting configuration allows the rogue process to allocate more and more memory, which can eventually trigger the “Out of memory killer” to do exactly what it is meant to do.
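As a quick, hedged illustration (these are the standard Linux paths and values; check your distribution’s documentation before changing anything), you can inspect the overcommit policy and look for past OOM-killer activity like this:

cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit, 1 = always, 2 = never
dmesg | grep -i "killed process"     # kernel log entries left behind by the OOM killer

Setting vm.overcommit_memory=2 (via sysctl) disables overcommit entirely, which trades the 5AM kill for malloc() failures that the applications must handle themselves.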

 

RedHat / CentOS / Oracle Linux, Secure Socket Layer, Ubuntu

Linux – Concepts – IPTABLES v/s FIREWALLD

Today we will walk through iptables and firewalld: we will learn about the history of these two, along with how to install and configure them on our Linux distributions.

Let’s begin without wasting any more time.

What is iptables?

First, we need to know what iptables is. Most senior IT professionals know about it and have worked with it as well. Iptables is an application/program that allows a user to configure the security tables provided by the Linux kernel firewall, and the chains within them, so that a user can add/remove firewall rules to meet his/her security requirements. Iptables uses different kernel modules and different protocols so that the user can get the best out of it. For example, iptables is used for IPv4 (IP version 4, 32-bit addresses) and ip6tables for IPv6 (IP version 6, 128-bit addresses), for both TCP and UDP. Normally, iptables rules are configured by a System Administrator, System Analyst or IT Manager, and you must have root privileges to execute iptables rules.

The Linux kernel uses the Netfilter framework to provide the various networking-related operations that can be performed using iptables. Previously, ipchains was used in most Linux distributions for the same purpose. Every iptables rule is handled directly by the Linux kernel itself, and this is known as kernel duty. Whatever GUI tools or other security tools you use to configure your server’s firewall security, at the end of the day the configuration is converted into iptables rules and supplied to the kernel to perform the operation.

History of iptables

The rise of iptables began with netfilter. Paul “Rusty” Russell was the initial author and the head think tank behind netfilter/iptables. Later he was joined by many other tech people, who formed the Netfilter core team to develop and maintain the netfilter/iptables project as a joint effort, like many other open source projects. Harald Welte was the leader until 2007, and then Patrick McHardy was the head until 2013. Currently, the netfilter core team is headed by Pablo Neira Ayuso.


How to install iptables

Nowadays, every Linux kernel comes with iptables, and it can be found pre-built or pre-installed on every popular modern Linux distribution. On most Linux systems, iptables is installed in the /usr/sbin/iptables directory. It can also be found in /sbin/iptables, but since iptables is more like a service rather than an “essential binary”, the preferred location remains the /usr/sbin directory.

For Ubuntu or Debian

sudo apt-get install iptables

For CentOS

sudo yum install iptables-services

For RHEL

sudo yum install iptables

Iptables version

To know your iptables version, type the following command in your terminal.

sudo iptables --version

Starting & stopping your iptables firewall

For OpenSUSE 42.1, type the following to stop.

sudo /sbin/rcSuSEfirewall2 stop

To start it again

sudo /sbin/rcSuSEfirewall2 start

For Ubuntu, type the following to stop.

sudo service ufw stop

To start it again

sudo service ufw start

For Debian & RHEL , type the following to stop.

sudo /etc/init.d/iptables stop

To start it again

sudo /etc/init.d/iptables start

For CentOS, type the following to stop.

sudo service iptables stop

To start it again

sudo service iptables start

Getting all iptables rules lists

To list all the rules that are currently present & active in your iptables, simply open a terminal and type the following.

sudo iptables -L

If no rules exist in your iptables firewall, i.e. no rules have been added so far, you will see output listing only the empty chains.


There are three (3) chains – INPUT, FORWARD and OUTPUT – and no rules exist under them. Actually, I haven’t added any yet.

Type the following to know the status of the chains of your iptables firewall.

sudo iptables -S

With the above command, you can learn the default policy of each of your chains, i.e. whether they are accepting by default or not.

Clear all iptables rules

To clear all the rules from your iptables firewall, please type the following. This is normally known as flushing your iptables rules.

sudo iptables -F

If you want to flush the INPUT chain only, or any individual chains, issue the below commands as per your requirements.

sudo iptables -F INPUT
sudo iptables -F OUTPUT
sudo iptables -F FORWARD

ACCEPT or DROP Chains

To accept or drop a particular chain, issue any of the following command on your terminal to meet your requirements.

iptables --policy INPUT DROP

The above rule sets the INPUT chain’s default policy to DROP, so the server will not accept anything that is incoming. To revert it back to ACCEPT, do the following

iptables --policy INPUT ACCEPT

Same goes for other chains as well like

iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP

Note: By default, all chains of iptables (INPUT, OUTPUT, FORWARD) are in ACCEPT mode. This is known as the default chain policy behavior.

Allowing any port

If you are running a web server on your host, then you must configure your iptables firewall so that your server can listen and respond on port 80, the default web server port. Let’s do that then.

sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

On the above line, -A stands for append, meaning we are adding a new rule to the iptables list. INPUT stands for the INPUT chain, -p stands for protocol, and --dport stands for destination port. Similarly, you can allow the SSH port as well.

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

By default, SSH runs on port 22, but it’s good practice not to run SSH on port 22; always run SSH on a different port. To run SSH on a different port, open the /etc/ssh/sshd_config file in your favorite editor and change port 22 to another port.

Blocking any port

Say we want to block port 135. We can do it by

sudo iptables -A INPUT -p tcp --dport 135 -j DROP

If you want to prevent your server from initiating any SSH connection towards another host/server, issue the following command

sudo iptables -A OUTPUT -p tcp --dport 22 -j DROP

By doing so, no one can use your server to initiate an SSH connection from it. The OUTPUT chain will filter and DROP any outgoing TCP connection towards port 22 on other hosts.

Allowing specific IP with Port

sudo iptables -A INPUT -p tcp -s 0/0 --dport 22  -j ACCEPT

Here -s 0/0 stands for any incoming source, i.e. any IP address, so this rule accepts TCP packets to destination port 22 from everywhere. If you want to allow only one particular IP, then use the following one.

sudo iptables -A INPUT -p tcp -s 12.12.12.12/32 --dport 22  -j ACCEPT

In the above example, you are only allowing the IP address 12.12.12.12 to connect to the SSH port; the remaining IP addresses will not be able to connect to port 22 (assuming other traffic is dropped by the chain policy or a later rule). Similarly, you can allow a whole range by using CIDR values, such as

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j ACCEPT

The above example shows how you can allow a whole IP block to accept connections on port 22. It will accept connections from IPs 12.12.12.1 to 12.12.12.255.

If you want to block such IP addresses range, do the reverse by replacing ACCEPT by DROP like the following

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j DROP

So, it will not allow connections on port 22 from the 12.12.12.1 to 12.12.12.255 IP addresses.

Blocking ICMP

If you want to block ICMP (ping) requests to and from your server, you can try the following. The first one blocks outgoing ICMP echo requests, so your server cannot ping another host.

sudo iptables -A OUTPUT -p icmp --icmp-type 8 -j DROP

Now, try to ping google.com. Your OpenSUSE server will not be able to ping google.com.

If you want to block incoming ICMP (ping) echo requests to your server, just type the following in your terminal.

sudo iptables -I INPUT -p icmp --icmp-type 8 -j DROP

Now, it will not reply to any ICMP ping echo request. Say your server’s IP address is 13.13.13.13; if you ping that IP of your server, you will see that your server is not responding to that ping request.

Blocking MySql / MariaDB Port

As MySQL holds your database, you must protect it from outside attack. Allow only your trusted application servers’ IP addresses to connect to your MySQL server, like this:

sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

So, it will not accept any MySQL connection except from the 192.168.1.0/24 IP block (provided the remaining traffic is dropped by the chain policy or a later rule). By default, MySQL runs on port 3306.

Blocking SMTP

If you are not running a mail server on your host, or your server is not configured to act like a mail server, you should block SMTP so that your server cannot send spam or any mail towards any domain. Do this to block any outgoing mail from your server:

sudo iptables -A OUTPUT -p tcp --dport 25 -j DROP

Block DDoS

We are all familiar with the term DDoS. To mitigate it, issue the following command in your terminal.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 20/minute --limit-burst 100 -j ACCEPT

You need to adjust the numerical values to meet your requirements; the ones above are just a reasonable starting point.

You can protect more by

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1800 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling 
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 1280 > /proc/sys/net/ipv4/tcp_max_syn_backlog

Blocking Port Scanning

There are hundreds of people out there scanning your server’s open ports and trying to break your server’s security. To block port scanning:

sudo iptables -N block-scan
sudo iptables -A block-scan -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j RETURN
sudo iptables -A block-scan -j DROP

Here, block-scan is the name of a new chain.

Blocking Bad Ports

You may need to block some bad ports for your server as well. Here is how you can do this.

badport="135,136,137,138,139,445"
sudo iptables -A INPUT -p tcp -m multiport --dports $badport -j DROP
sudo iptables -A INPUT -p udp -m multiport --dports $badport -j DROP

You can add more ports according to your needs.

What is firewalld?

Firewalld provides a dynamically managed firewall with support for network/firewall zones that define the trust level of network connections or interfaces. It has support for IPv4 and IPv6 firewall settings, ethernet bridges and IP sets. There is a separation of runtime and permanent configuration options, and it also provides an interface for services or applications to add firewall rules directly.

The former firewall model with system-config-firewall/lokkit was static, and every change required a complete firewall restart. This also included unloading the firewall netfilter kernel modules and loading the modules needed for the new configuration; unloading the modules broke stateful firewalling and established connections. The firewall daemon, on the other hand, manages the firewall dynamically and applies changes without restarting the whole firewall, so there is no need to reload all firewall kernel modules. However, using a firewall daemon requires that all firewall modifications are done through that daemon, to make sure that the state in the daemon and the firewall in the kernel stay in sync. The firewall daemon cannot parse firewall rules added by the iptables and ebtables command line tools. The daemon provides information about the currently active firewall settings via D-BUS, and also accepts changes via D-BUS using PolicyKit authentication methods.

So, firewalld uses zones and services instead of chains and rules for performing its operations, and it can manage rules dynamically, allowing updates and modifications without breaking existing sessions and connections.

It has the following features.

  • D-Bus API.
  • Timed firewall rules.
  • Rich Language for specific firewall rules.
  • IPv4 and IPv6 NAT support.
  • Firewall zones.
  • IP set support.
  • Simple log of denied packets.
  • Direct interface.
  • Lockdown: Whitelisting of applications that may modify the firewall.
  • Support for iptables, ip6tables, ebtables and ipset firewall backends.
  • Automatic loading of Linux kernel modules.
  • Integration with Puppet.


How to install firewalld

Before installing firewalld, please make sure you stop iptables, and also make sure that iptables is no longer in use. To do so,

sudo systemctl stop iptables

This will stop iptables on your system.

Then make sure iptables cannot be started again by your system, by issuing the below command in the terminal.

sudo systemctl mask iptables

Now, check the status of iptables.

sudo systemctl status iptables


Now, we are ready to install firewalld on to our system.

For Ubuntu

To install it on Ubuntu, you must remove UFW first and then you can install Firewalld. To remove UFW, issue the below command on the terminal.

sudo apt-get remove ufw

After removing UFW, issue the below command in the terminal

sudo apt-get install firewall-applet

Or

You can open the Ubuntu Software Center and search for “firewall-applet”, then install it on your Ubuntu system.

For RHEL, CentOS & Fedora

Type the below command to install firewalld on your CentOS system.

sudo yum install firewalld firewall-config -y

How to configure firewalld

Before configuring firewalld, we must know the status of firewalld after the installation. To know that, type the following.

sudo systemctl status firewalld


As firewalld works on a zone basis, we need to check all the zones and services, even though we haven’t done any configuring yet.

For Zones

sudo firewall-cmd --get-active-zones


or

sudo firewall-cmd --get-zones


To know the default zone, issue the below command

sudo firewall-cmd --get-default-zone


And, For Services

sudo firewall-cmd --get-services


This lists the services covered by firewalld.

Setting Default Zone

An important note: after each modification, you need to reload firewalld so that your changes can take effect.

To set the default zone

sudo firewall-cmd --set-default-zone=internal

or

sudo firewall-cmd --set-default-zone=public

After changing the zone, check whether it has changed or not.

sudo firewall-cmd --get-default-zone

Adding Port in Public Zone

sudo firewall-cmd --permanent --zone=public --add-port=80/tcp


This will add TCP port 80 to the public zone of firewalld; the --permanent flag makes the change survive reloads and reboots. You can add your desired port as well by replacing 80 with your own.

Now reload the firewalld.

sudo firewall-cmd --reload

Now, check the status to see whether tcp 80 port has been added or not.

sudo firewall-cmd --zone=public --list-ports


Here, you can see that TCP port 80 has been added.

Or you can try something like this.

sudo firewall-cmd --zone=public --list-all


Removing Port from Public Zone

To remove TCP port 80 from the public zone, type the following.

sudo firewall-cmd --zone=public --remove-port=80/tcp

You will see a “success” text echoing in your terminal.

You can remove your desired port as well by replacing 80 with your own port.

Adding Services in Firewalld

To add ftp service in firewalld, issue the below command

sudo firewall-cmd --zone=public --add-service=ftp

You will see a “success” text echoing in your terminal.

Similarly for adding smtp service, issue the below command

sudo firewall-cmd --zone=public --add-service=smtp

Replace ftp and smtp with the services you want to add to firewalld.

Removing Services from Firewalld

For removing ftp & smtp services from firewalld, issue the below command in the terminal.

sudo firewall-cmd --zone=public --remove-service=ftp
sudo firewall-cmd --zone=public --remove-service=smtp

Block Any Incoming and Any Outgoing Packet(s)

If you wish, you can block all incoming and outgoing packets/connections using firewalld. This is known as firewalld’s “panic” mode. To turn it on, issue the below command.

sudo firewall-cmd --panic-on

You will see a “success” text echoing in your terminal.

After doing this, you will not be able to ping a host or even browse any websites.

To turn this off, issue the below command in your terminal.

sudo firewall-cmd --panic-off

Adding IP Address in Firewalld

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" accept'

By doing so, firewalld will accept IPv4 packets from the source IP 192.168.1.4.

Blocking IP Address From Firewalld

Similarly, to block any IP address

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" reject'

By doing so, firewalld will reject every IPv4 packet from the source IP 192.168.1.4.

I have stuck to the very basics of firewalld here so that you can easily understand how it works and how it differs from iptables.

That’s all for today. Hope you enjoy reading this article.

Take care.