
How To Patch and Protect Linux Kernel Stack Clash Vulnerability CVE-2017-1000364 [ 19/June/2017 ]

A very serious security problem has been found in the Linux kernel called “The Stack Clash.” It can be exploited by attackers to corrupt memory and execute arbitrary code. An attacker could leverage this with another vulnerability to execute arbitrary code and gain administrative/root account privileges. How do I fix this problem on Linux?

The Qualys Research Labs discovered various problems in the dynamic linker of the GNU C Library (CVE-2017-1000366) that allow local privilege escalation by clashing the stack, along with the related Linux kernel issue (CVE-2017-1000364). These bugs affect Linux, OpenBSD, NetBSD, FreeBSD and Solaris, on i386 and amd64, and can be exploited by attackers to corrupt memory and execute arbitrary code.

What is CVE-2017-1000364 bug?

From RHN:

A flaw was found in the way memory was being allocated on the stack for user space binaries. If heap (or different memory region) and stack memory regions were adjacent to each other, an attacker could use this flaw to jump over the stack guard gap, cause controlled memory corruption on process stack or the adjacent memory region, and thus increase their privileges on the system. This is a kernel-side mitigation which increases the stack guard gap size from one page to 1 MiB to make successful exploitation of this issue more difficult.

As per the original research post:

Each program running on a computer uses a special memory region called the stack. This memory region is special because it grows automatically when the program needs more stack memory. But if it grows too much and gets too close to another memory region, the program may confuse the stack with the other memory region. An attacker can exploit this confusion to overwrite the stack with the other memory region, or the other way around.
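The layout described above can be seen directly on a Linux box. A minimal illustration using /proc (the actual addresses vary per process and per boot, so none are shown here):

```shell
# Print the heap and stack mappings of the current process. On Linux the
# "[stack]" region in /proc/self/maps is the one the kernel grows
# automatically, as described above; the gap between it and the nearest
# neighbouring region is what the stack guard gap protects.
grep -E '\[(stack|heap)\]' /proc/self/maps
```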

A list of affected Linux distros

  1. Red Hat Enterprise Linux Server 5.x
  2. Red Hat Enterprise Linux Server 6.x
  3. Red Hat Enterprise Linux Server 7.x
  4. CentOS Linux Server 5.x
  5. CentOS Linux Server 6.x
  6. CentOS Linux Server 7.x
  7. Oracle Enterprise Linux Server 5.x
  8. Oracle Enterprise Linux Server 6.x
  9. Oracle Enterprise Linux Server 7.x
  10. Ubuntu 17.10
  11. Ubuntu 17.04
  12. Ubuntu 16.10
  13. Ubuntu 16.04 LTS
  14. Ubuntu 12.04 ESM (Precise Pangolin)
  15. Debian 9 stretch
  16. Debian 8 jessie
  17. Debian 7 wheezy
  18. Debian unstable
  19. SUSE Linux Enterprise Desktop 12 SP2
  20. SUSE Linux Enterprise High Availability 12 SP2
  21. SUSE Linux Enterprise Live Patching 12
  22. SUSE Linux Enterprise Module for Public Cloud 12
  23. SUSE Linux Enterprise Build System Kit 12 SP2
  24. SUSE Openstack Cloud Magnum Orchestration 7
  25. SUSE Linux Enterprise Server 11 SP3-LTSS
  26. SUSE Linux Enterprise Server 11 SP4
  27. SUSE Linux Enterprise Server 12 SP1-LTSS
  28. SUSE Linux Enterprise Server 12 SP2
  29. SUSE Linux Enterprise Server for Raspberry Pi 12 SP2

Do I need to reboot my box?

Yes, as most services depend upon the dynamic linker of the GNU C Library, and the kernel itself needs to be reloaded into memory.

How do I fix CVE-2017-1000364 on Linux?

Type the commands as per your Linux distro. You need to reboot the box afterwards. Before you apply the patch, note down your current kernel version:
$ uname -a
$ uname -mrs

Sample outputs:

Linux 4.4.0-78-generic x86_64

Debian or Ubuntu Linux

Type the following apt command/apt-get command to apply updates:
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
Sample outputs:

Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  libc-bin libc-dev-bin libc-l10n libc6 libc6-dev libc6-i386 linux-compiler-gcc-6-x86 linux-headers-4.9.0-3-amd64 linux-headers-4.9.0-3-common linux-image-4.9.0-3-amd64
  linux-kbuild-4.9 linux-libc-dev locales multiarch-support
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/62.0 MB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc6-i386_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-i386 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc6-dev_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-dev:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc-dev-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-dev-bin (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../linux-libc-dev_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../libc6_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc6:amd64 (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-bin (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc-bin (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../multiarch-support_2.24-11+deb9u1_amd64.deb ...
Unpacking multiarch-support (2.24-11+deb9u1) over (2.24-11) ...
Setting up multiarch-support (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../0-libc-l10n_2.24-11+deb9u1_all.deb ...
Unpacking libc-l10n (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../1-locales_2.24-11+deb9u1_all.deb ...
Unpacking locales (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../2-linux-compiler-gcc-6-x86_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../3-linux-headers-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../4-linux-headers-4.9.0-3-common_4.9.30-2+deb9u1_all.deb ...
Unpacking linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../5-linux-kbuild-4.9_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../6-linux-image-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Setting up linux-libc-dev:amd64 (4.9.30-2+deb9u1) ...
Setting up linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) ...
Setting up libc6-i386 (2.24-11+deb9u1) ...
Setting up linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) ...
Setting up linux-kbuild-4.9 (4.9.30-2+deb9u1) ...
Setting up libc-l10n (2.24-11+deb9u1) ...
Processing triggers for man-db ( ...
Setting up libc-dev-bin (2.24-11+deb9u1) ...
Setting up linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
update-initramfs: Generating /boot/initrd.img-4.9.0-3-amd64
cryptsetup: WARNING: failed to detect canonical device of /dev/md0
cryptsetup: WARNING: could not determine root device from /etc/fstab
W: initramfs-tools configuration sets RESUME=UUID=054b217a-306b-4c18-b0bf-0ed85af6c6e1
W: but no matching swap device is available.
I: The initramfs will attempt to resume from /dev/md1p1
I: (UUID=bf72f3d4-3be4-4f68-8aae-4edfe5431670)
I: Set the RESUME variable to override this.
Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /boot/vmlinuz-4.9.0-3-amd64
Found kernel: /boot/vmlinuz-3.16.0-4-amd64
Updating /boot/grub/menu.lst ... done

Setting up libc6-dev:amd64 (2.24-11+deb9u1) ...
Setting up locales (2.24-11+deb9u1) ...
Generating locales (this might take a while)...
  en_IN.UTF-8... done
Generation complete.
Setting up linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...

Reboot your server/desktop using reboot command:
$ sudo reboot

Oracle/RHEL/CentOS/Scientific Linux

Type the following yum command:
$ sudo yum update
$ sudo reboot

Fedora Linux

Type the following dnf command:
$ sudo dnf update
$ sudo reboot

SUSE Linux Enterprise or openSUSE Linux

Type the following zypper command:
$ sudo zypper patch
$ sudo reboot

SUSE OpenStack Cloud 6

$ sudo zypper in -t patch SUSE-OpenStack-Cloud-6-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server for SAP 12-SP1

$ sudo zypper in -t patch SUSE-SLE-SAP-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server 12-SP1-LTSS

$ sudo zypper in -t patch SUSE-SLE-SERVER-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Module for Public Cloud 12

$ sudo zypper in -t patch SUSE-SLE-Module-Public-Cloud-12-2017-996=1
$ sudo reboot


You need to make sure your kernel version changed after issuing the reboot command:
$ uname -a
$ uname -r
$ uname -mrs

Sample outputs:

Linux 4.4.0-81-generic x86_64
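The before/after check can also be scripted. A minimal sketch, assuming the pre-patch version was saved to /tmp/kernel-before (a scratch path made up for this example, seeded here with the sample version shown earlier):

```shell
# Save the running kernel version before patching, then compare it with
# the version reported after the reboot.
echo "4.4.0-78-generic" > /tmp/kernel-before   # recorded before patching
before=$(cat /tmp/kernel-before)
after=$(uname -r)
if [ "$before" = "$after" ]; then
    echo "kernel unchanged - patch may not be active"
else
    echo "kernel updated: $before -> $after"
fi
```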

Apache Kafka – Use cases

Here is a description of a few of the popular use cases for Apache Kafka™. For an overview of a number of these areas in action, see this blog post.


Messaging

Kafka works well as a replacement for a more traditional message broker. Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc.). In comparison to most messaging systems, Kafka has better throughput, built-in partitioning, replication, and fault-tolerance, which makes it a good solution for large-scale message processing applications.

In our experience messaging uses are often comparatively low-throughput, but may require low end-to-end latency and often depend on the strong durability guarantees Kafka provides.

In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.

Website Activity Tracking

The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.

Activity tracking is often very high volume as many activity messages are generated for each user page view.


Metrics

Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

Log Aggregation

Many people use Kafka as a replacement for a log aggregation solution. Log aggregation typically collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption. In comparison to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees due to replication, and much lower end-to-end latency.

Stream Processing

Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an “articles” topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. A lightweight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include Apache Storm and Apache Samza.
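The staged topic-to-topic flow can be mimicked in plain shell as a toy stand-in (files play the role of topics; the “articles”/“cleaned” names are invented for this sketch and are not Kafka code):

```shell
# Stage 1: a producer writes raw data to the "articles" topic (a file here).
echo "Breaking News: Kafka 101" > /tmp/articles.topic

# Stage 2: a processing step normalizes the data (lower-casing as a trivial
# transformation) and publishes it to a "cleaned" topic.
tr '[:upper:]' '[:lower:]' < /tmp/articles.topic > /tmp/cleaned.topic

# Stage 3: a downstream consumer reads the cleaned topic.
cat /tmp/cleaned.topic
# prints: breaking news: kafka 101
```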

Event Sourcing

Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Kafka’s support for very large stored log data makes it an excellent backend for an application built in this style.

Commit Log

Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. The log compaction feature in Kafka helps support this usage. In this usage Kafka is similar to the Apache BookKeeper project.


Apache Kafka – Producer / Consumer Basic Test (With Youtube Video)

On the Kafka server, make the following changes in server.properties

  • cd $KAFKA_HOME/config

  • vim server.properties

Config File Changes :-

# the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an “AS IS” BASIS,
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.

############################# Socket Server Settings #############################



# The port the socket server listens on

# Hostname the broker will bind to. If not set, the server will bind to all interfaces

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for “host.name” if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests

Considering there is a Kafka server and two different servers on which the Kafka client is installed:

At Producer Client Server :-

  • cd $KAFKA_HOME/bin

  • ./kafka-console-producer.sh --broker-list <kafka-server-ip>:<kafka-port> --topic <topic-name>

At Consumer Client Server :-

  • cd $KAFKA_HOME/bin

  • ./kafka-console-consumer.sh --zookeeper <kafka-server-ip>:2181 --topic <topic-name> --from-beginning






Apache Kafka – Fundamentals & Workflow

Before moving deeper into Kafka, you must be aware of the main terminology, such as topics, brokers, producers and consumers. The following diagram illustrates these terminologies, and the table describes the diagram components in detail.


In the above diagram, a topic is configured into three partitions. Partition 1 has two offset factors, 0 and 1. Partition 2 has four offset factors: 0, 1, 2, and 3. Partition 3 has one offset factor, 0. The id of the replica is the same as the id of the server that hosts it.

Assume the replication factor of the topic is set to 3; Kafka will then create 3 identical replicas of each partition and place them in the cluster to make them available for all its operations. To balance load in the cluster, each broker stores one or more of those partitions. Multiple producers and consumers can publish and retrieve messages at the same time.

S.No Components and Description
1 Topics

A stream of messages belonging to a particular category is called a topic. Data is stored in topics.

Topics are split into partitions. For each topic, Kafka keeps a minimum of one partition. Each such partition contains messages in an immutable ordered sequence. A partition is implemented as a set of segment files of equal size.

2 Partition

Topics may have many partitions, so a topic can handle an arbitrary amount of data.

3 Partition offset

Each partitioned message has a unique sequence id called an offset.

4 Replicas of partition

Replicas are nothing but backups of a partition. Replicas never read or write data; they are used to prevent data loss.

5 Brokers

  • Brokers are simple systems responsible for maintaining the published data. Each broker may have zero or more partitions per topic. Assume there are N partitions in a topic and N brokers; each broker will then have one partition.
  • Assume there are N partitions in a topic and more than N brokers (n + m); the first N brokers will have one partition each, and the next M brokers will not have any partition for that particular topic.
  • Assume there are N partitions in a topic and fewer than N brokers (n - m); each broker will then have one or more partitions shared among them. This scenario is not recommended due to unequal load distribution among the brokers.
6 Kafka Cluster

A deployment with more than one broker is called a Kafka cluster. A Kafka cluster can be expanded without downtime. These clusters are used to manage the persistence and replication of message data.

7 Producers

Producers are the publishers of messages to one or more Kafka topics. Producers send data to Kafka brokers. Every time a producer publishes a message to a broker, the broker simply appends the message to the last segment file. Actually, the message will be appended to a partition. Producers can also send messages to a partition of their choice.

8 Consumers

Consumers read data from brokers. Consumers subscribe to one or more topics and consume published messages by pulling data from the brokers.

9 Leader

The leader is the node responsible for all reads and writes for a given partition. Every partition has one server acting as the leader.

10 Follower

A node which follows the leader's instructions is called a follower. If the leader fails, one of the followers will automatically become the new leader. A follower acts as a normal consumer: it pulls messages and updates its own data store.


So far, we have discussed the core concepts of Kafka. Let us now throw some light on the workflow of Kafka.

Kafka is simply a collection of topics split into one or more partitions. A Kafka partition is a linearly ordered sequence of messages, where each message is identified by its index (called the offset). All the data in a Kafka cluster is the disjoint union of partitions. Incoming messages are written at the end of a partition, and messages are sequentially read by consumers. Durability is provided by replicating messages to different brokers.
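The offset idea can be illustrated with a toy sketch (plain shell, not Kafka code; the message names are invented): appending messages to a partition assigns sequential offsets starting at 0, like array indices.

```shell
# Simulate appending three messages to one partition: each append gets
# the next sequential offset.
offset=0
for msg in page-view search click; do
    echo "offset=$offset msg=$msg"
    offset=$((offset + 1))
done
# prints offsets 0, 1 and 2 alongside the messages
```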

Kafka provides both pub-sub and queue-based messaging in a fast, reliable, persistent, fault-tolerant, zero-downtime manner. In both cases, producers simply send messages to a topic, and consumers can choose either type of messaging system depending on their need. Let us follow the steps in the next section to understand how the consumer can choose the messaging system of their choice.

Workflow of Pub-Sub Messaging

Following is the step wise workflow of the Pub-Sub Messaging −

  • Producers send messages to a topic at regular intervals.
  • The Kafka broker stores all messages in the partitions configured for that particular topic. It ensures the messages are equally shared between partitions: if the producer sends two messages and there are two partitions, Kafka will store one message in the first partition and the second message in the second partition.
  • A consumer subscribes to a specific topic.
  • Once the consumer subscribes to a topic, Kafka provides the current offset of the topic to the consumer and also saves the offset in the Zookeeper ensemble.
  • The consumer requests Kafka at a regular interval (e.g., every 100 ms) for new messages.
  • Once Kafka receives messages from producers, it forwards these messages to the consumers.
  • The consumer receives a message and processes it.
  • Once the messages are processed, the consumer sends an acknowledgement to the Kafka broker.
  • Once Kafka receives an acknowledgement, it changes the offset to the new value and updates it in Zookeeper. Since offsets are maintained in Zookeeper, the consumer can read the next message correctly even during server outages.
  • The above flow will repeat until the consumer stops making requests.
  • The consumer has the option to rewind/skip to the desired offset of a topic at any time and read all the subsequent messages.
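The equal-sharing step above can be sketched as a toy round-robin in plain shell (not Kafka code; the message names and partition count are made up for this example):

```shell
# With two partitions, successive messages alternate between
# partition 0 and partition 1, as described in the workflow above.
num_partitions=2
i=0
for msg in m1 m2 m3 m4; do
    echo "$msg -> partition $((i % num_partitions))"
    i=$((i + 1))
done
# m1 and m3 land in partition 0; m2 and m4 land in partition 1
```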

Workflow of Queue Messaging / Consumer Group

In a queue messaging system, instead of a single consumer, a group of consumers with the same group ID subscribes to a topic. In simple terms, consumers subscribing to a topic with the same group ID are considered a single group, and the messages are shared among them. Let us check the actual workflow of this system.

  • Producers send messages to a topic at regular intervals.
  • Kafka stores all messages in the partitions configured for that particular topic, similar to the earlier scenario.
  • A single consumer subscribes to a specific topic, say Topic-01, with group ID Group-1.
  • Kafka interacts with the consumer in the same way as in Pub-Sub Messaging until a new consumer subscribes to the same topic, Topic-01, with the same group ID, Group-1.
  • Once the new consumer arrives, Kafka switches its operation to share mode and shares the data between the two consumers. This sharing goes on until the number of consumers reaches the number of partitions configured for that particular topic.
  • Once the number of consumers exceeds the number of partitions, a new consumer will not receive any further messages until an existing consumer unsubscribes. This scenario arises because each consumer in Kafka is assigned a minimum of one partition; once all the partitions are assigned to the existing consumers, new consumers have to wait.
  • This feature is also called a consumer group. In this way, Kafka provides the best of both systems in a very simple and efficient manner.
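The assignment rule in the last two bullets can be sketched in a few lines of shell (a toy model, not Kafka's actual assignment code; P and C are made-up values):

```shell
# P partitions shared among C consumers in one group. With P=3 and C=2,
# one consumer owns two partitions and the other owns one; a third
# consumer beyond P would own none and has to wait, as described above.
P=3; C=2
p=0
while [ "$p" -lt "$P" ]; do
    echo "partition $p -> consumer $((p % C))"
    p=$((p + 1))
done
```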

Role of ZooKeeper

A critical dependency of Apache Kafka is Apache Zookeeper, which is a distributed configuration and synchronization service. Zookeeper serves as the coordination interface between the Kafka brokers and consumers. The Kafka servers share information via a Zookeeper cluster. Kafka stores basic metadata in Zookeeper such as information about topics, brokers, consumer offsets (queue readers) and so on.

Since all the critical information is stored in Zookeeper, and Zookeeper normally replicates this data across its ensemble, the failure of a Kafka broker or a Zookeeper node does not affect the state of the Kafka cluster. Kafka will restore the state once Zookeeper restarts. This gives zero downtime for Kafka. Leader election among the Kafka brokers is also done using Zookeeper in the event of a leader failure.


Apache Kafka – The New Beginning for Messaging


Apache Kafka is a popular distributed message broker designed to handle large volumes of real-time data efficiently. A Kafka cluster is not only highly scalable and fault-tolerant, but it also has a much higher throughput compared to other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a pub/sub messaging system, a lot of organizations also use it for log aggregation because it offers persistent storage for published messages.

In this tutorial, you will learn how to install and use Apache Kafka on Ubuntu 16.04.


Prerequisites

To follow along, you will need:

  • Ubuntu 16.04 Droplet
  • At least 4GB of swap space

Step 1 — Create a User for Kafka

As Kafka can handle requests over a network, you should create a dedicated user for it. This minimizes damage to your Ubuntu machine should the Kafka server be compromised.

Note: After setting up Apache Kafka, it is recommended that you create a different non-root user to perform other tasks on this server.

As root, create a user called kafka using the useradd command:

useradd kafka -m

Set its password using passwd:

passwd kafka

Add it to the sudo group so that it has the privileges required to install Kafka’s dependencies. This can be done using the adduser command:

adduser kafka sudo

Your Kafka user is now ready. Log into it using su:

su - kafka
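As an optional sanity check before continuing, a small helper can confirm that an account exists and belongs to a group. This is a sketch, not part of the tutorial's required steps; it is demonstrated with root/root so it runs anywhere, so substitute kafka and sudo on your server:

```shell
# Return success if user $1 belongs to group $2.
user_in_group() {
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# On your server you would check: user_in_group kafka sudo
if user_in_group root root; then echo "ok"; else echo "missing"; fi
```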

Step 2 — Install Java

Before installing additional packages, update the list of available packages so you are installing the latest versions available in the repository:

sudo apt-get update

As Apache Kafka needs a Java runtime environment, use apt-get to install the default-jre package:

sudo apt-get install default-jre

Step 3 — Install ZooKeeper

Apache ZooKeeper is an open source service built to coordinate and synchronize configuration information of nodes that belong to a distributed system. A Kafka cluster depends on ZooKeeper to perform—among other things—operations such as detecting failed nodes and electing leaders.

Since the ZooKeeper package is available in Ubuntu’s default repositories, install it using apt-get.

sudo apt-get install zookeeperd

After the installation completes, ZooKeeper will be started as a daemon automatically. By default, it will listen on port 2181.

To make sure that it is working, connect to it via Telnet:

telnet localhost 2181

At the Telnet prompt, type in ruok and press ENTER.

If everything’s fine, ZooKeeper will say imok and end the Telnet session.

Step 4 — Download and Extract Kafka Binaries

Now that Java and ZooKeeper are installed, it is time to download and extract Kafka.

To start, create a directory called Downloads to store all your downloads.

mkdir -p ~/Downloads

Use wget to download the Kafka binaries.

wget "http://mirror.cc.columbia.edu/pub/software/apache/kafka/" -O ~/Downloads/kafka.tgz

Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation.

mkdir -p ~/kafka && cd ~/kafka

Extract the archive you downloaded using the tar command.

tar -xvzf ~/Downloads/kafka.tgz --strip 1

Step 5 — Configure the Kafka Server

The next step is to configure the Kafka server.

Open server.properties using vi:

vi ~/kafka/config/server.properties

By default, Kafka doesn’t allow you to delete topics. To be able to delete topics, add the following line at the end of the file:


delete.topic.enable = true

Save the file, and exit vi.
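If you prefer to script the edit, an idempotent variant appends the line only when it is not already present. A sketch: /tmp/server.properties stands in for ~/kafka/config/server.properties here so the snippet is self-contained:

```shell
# Append delete.topic.enable only if the file does not already set it,
# so re-running the script never duplicates the line.
f=/tmp/server.properties
touch "$f"
grep -q '^delete\.topic\.enable' "$f" || echo 'delete.topic.enable = true' >> "$f"
grep '^delete\.topic\.enable' "$f"
# prints: delete.topic.enable = true
```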

Step 6 — Start the Kafka Server

Run the kafka-server-start.sh script using nohup to start the Kafka server (also called Kafka broker) as a background process that is independent of your shell session.

nohup ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties > ~/kafka/kafka.log 2>&1 &

Wait for a few seconds for it to start. You can be sure that the server has started successfully when you see the following messages in ~/kafka/kafka.log:

excerpt from ~/kafka/kafka.log

...
[2015-07-29 06:02:41,736] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2015-07-29 06:02:41,776] INFO [Kafka Server 0], started (kafka.server.KafkaServer)

You now have a Kafka server which is listening on port 9092.
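Rather than waiting a fixed few seconds, you can poll the log for the "started" message. A sketch: /tmp/kafka.log stands in for ~/kafka/kafka.log, and the log line is written by hand here so the snippet is self-contained (on a real server the broker writes it):

```shell
# Block until the broker's "started" line appears in the log.
log=/tmp/kafka.log
echo '[2015-07-29 06:02:41,776] INFO [Kafka Server 0], started' > "$log"
until grep -q ', started' "$log"; do sleep 1; done
echo "broker is up"
# prints: broker is up
```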

Step 7 — Test the Installation

Let us now publish and consume a test message to make sure that the Kafka server is behaving correctly.

To publish messages, you should create a Kafka producer. You can easily create one from the command line using the kafka-console-producer.sh script. It expects the Kafka server’s hostname and port, along with a topic name as its arguments.

Publish the string “Wassup Playas” to a topic called HariTopic by typing in the following:

echo "Wassup Playas" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic HariTopic > /dev/null

As the topic doesn’t exist, Kafka will create it automatically.

To consume messages, you can create a Kafka consumer using the kafka-console-consumer.sh script. It expects the ZooKeeper server’s hostname and port, along with a topic name as its arguments.

The following command consumes messages from the topic we published to. Note the use of the --from-beginning flag, which is present because we want to consume a message that was published before the consumer was started.

~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic HariTopic --from-beginning

If there are no configuration issues, you should see Wassup Playas in the output now.

The script will continue to run, waiting for more messages to be published to the topic. Feel free to open a new terminal and start a producer to publish a few more messages. You should be able to see them all in the consumer’s output instantly.

When you are done testing, press CTRL+C to stop the consumer script.

Step 8 — Install KafkaT (Optional)

KafkaT is a handy little tool from Airbnb which makes it easier for you to view details about your Kafka cluster and also perform a few administrative tasks from the command line. As it is a Ruby gem, you will need Ruby to use it. You will also need the build-essential package to be able to build the other gems it depends on. Install them using apt-get:

sudo apt-get install ruby ruby-dev build-essential

You can now install KafkaT using the gem command:

sudo gem install kafkat --source https://rubygems.org --no-ri --no-rdoc

Use vi to create a new file called .kafkatcfg.

vi ~/.kafkatcfg

This is a configuration file which KafkaT uses to determine the installation and log directories of your Kafka server. It should also point KafkaT to your ZooKeeper instance. Accordingly, add the following lines to it:


{
  "kafka_path": "~/kafka",
  "log_path": "/tmp/kafka-logs",
  "zk_path": "localhost:2181"
}

You are now ready to use KafkaT. For a start, here’s how you would use it to view details about all Kafka partitions:

kafkat partitions

You should see the following output:

output of kafkat partitions

Topic       Partition   Leader   Replicas   ISRs
HariTopic   0           0        [0]        [0]

To learn more about KafkaT, refer to its GitHub repository.

Step 9 — Set Up a Multi-Node Cluster (Optional)

If you want to create a multi-broker cluster using more Ubuntu 16.04 machines, you should repeat Step 1, Step 3, Step 4 and Step 5 on each of the new machines. Additionally, you should make the following changes in the server.properties file in each of them:

  • the value of the broker.id property should be changed such that it is unique throughout the cluster
  • the value of the zookeeper.connect property should be changed such that all nodes point to the same ZooKeeper instance

If you want to have multiple ZooKeeper instances for your cluster, the value of the zookeeper.connect property on each node should be an identical, comma-separated string listing the IP addresses and port numbers of all the ZooKeeper instances.
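For instance, a second broker's server.properties might differ only in these two lines. The broker id and the addresses below are illustrative, assuming two ZooKeeper nodes:

```ini
broker.id=1
zookeeper.connect=203.0.113.10:2181,203.0.113.11:2181
```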

Step 10 — Restrict the Kafka User

Now that all installations are done, you can remove the kafka user’s admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still running the same shell session you started this tutorial with, simply type exit.

To remove the kafka user’s admin privileges, remove it from the sudo group.

sudo deluser kafka sudo

To further improve your Kafka server’s security, lock the kafka user’s password using the passwd command. This makes sure that nobody can directly log into it.

sudo passwd kafka -l

At this point, only root or a sudo user can log in as kafka by typing in the following command:

sudo su - kafka

In the future, if you want to unlock it, use passwd with the -u option:

sudo passwd kafka -u


You now have a secure Apache Kafka running on your Ubuntu server. You can easily make use of it in your projects by creating Kafka producers and consumers using Kafka clients which are available for most programming languages. To learn more about Kafka, do go through its documentation.



Operating System, Redhat / CEntOS / Oracle Linux, Ubuntu

Linux security alert: Bug in sudo’s get_process_ttyname() [ CVE-2017-1000367 ]

There is a serious vulnerability in sudo command that grants root access to anyone with a shell account. It works on SELinux enabled systems such as CentOS/RHEL and others too. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that Sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this to overwrite any file on the filesystem, bypassing intended permissions or gain root shell.
From the description

We discovered a vulnerability in Sudo’s get_process_ttyname() for Linux: this function opens “/proc/[pid]/stat” (man proc) and reads the device number of the tty from field 7 (tty_nr). Unfortunately, these fields are space-separated and field 2 (comm, the filename of the command) can contain spaces (CVE-2017-1000367).

For example, if we execute Sudo through the symlink “./ 1 “, get_process_ttyname() calls sudo_ttyname_dev() to search for the non-existent tty device number “1” in the built-in search_devs[].

Next, sudo_ttyname_dev() calls the function sudo_ttyname_scan() to search for this non-existent tty device number “1” in a breadth-first traversal of “/dev”.

Last, we exploit this function during its traversal of the world-writable “/dev/shm”: through this vulnerability, a local user can pretend that his tty is any character device on the filesystem, and
after two race conditions, he can pretend that his tty is any file on the filesystem.

On an SELinux-enabled system, if a user is Sudoer for a command that does not grant him full root privileges, he can overwrite any file on the filesystem (including root-owned files) with his command’s output,
because relabel_tty() (in src/selinux.c) calls open(O_RDWR|O_NONBLOCK) on his tty and dup2()s it to the command’s stdin, stdout, and stderr. This allows any Sudoer user to obtain full root privileges.

A list of affected Linux distros

  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

How do I patch sudo on Debian/Ubuntu Linux server?

To patch Ubuntu/Debian Linux, run the apt-get command or apt command:
$ sudo apt update
$ sudo apt upgrade

How do I patch sudo on CentOS/RHEL/Scientific/Oracle Linux server?

Run yum command:
$ sudo yum update

How do I patch sudo on Fedora Linux server?

Run dnf command:
$ sudo dnf update

How do I patch sudo on Suse/OpenSUSE Linux server?

Run zypper command:
$ sudo zypper update

How do I patch sudo on Arch Linux server?

Run pacman command:
$ sudo pacman -Syu

How do I patch sudo on Alpine Linux server?

Run apk command:
# apk update && apk upgrade

How do I patch sudo on Slackware Linux server?

Run upgradepkg command:
# upgradepkg sudo-1.8.20p1-i586-1_slack14.2.txz

How do I patch sudo on Gentoo Linux server?

Run emerge command:
# emerge --sync
# emerge --ask --oneshot --verbose ">=app-admin/sudo-1.8.20_p1"

Kernel Programming, Operating System, Redhat / CEntOS / Oracle Linux, Ubuntu

Impermanence in Linux – Exclusive (By Hari Iyer)

Impermanence, also called Anicca or Anitya, is one of the essential doctrines and a part of the three marks of existence in Buddhism. The doctrine asserts that all of conditioned existence, without exception, is “transient, evanescent, inconstant”.

On Linux, the root of all randomness is something called the kernel entropy pool. This is a large (4,096-bit) number kept privately in the kernel’s memory. There are 2^4096 possibilities for this number, so it can contain up to 4,096 bits of entropy. There is one caveat – the kernel needs to be able to fill that memory from a source with 4,096 bits of entropy. And that’s the hard part: finding that much randomness.

The entropy pool is used in two ways: random numbers are generated from it and it is replenished with entropy by the kernel. When random numbers are generated from the pool, the entropy of the pool is diminished (because the person receiving the random number has some information about the pool itself). So, as the pool’s entropy diminishes with each random number handed out, the pool must be replenished.

Replenishing the pool is called stirring: new sources of entropy are stirred into the mix of bits in the pool.

This is the key to how random number generation works on Linux. If randomness is needed, it’s derived from the entropy pool. When available, other sources of randomness are used to stir the entropy pool and make it less predictable. The details are a little mathematical, but it’s interesting to understand how the Linux random number generator works as the principles and techniques apply to random number generation in other software and systems.

The kernel keeps a rough estimate of the number of bits of entropy in the pool. You can check the value of this estimate through the following command:

cat /proc/sys/kernel/random/entropy_avail

A healthy Linux system with a lot of entropy available will return a value close to the full 4,096 bits of entropy. If the value returned is less than 200, the system is running low on entropy.

The kernel is watching you

I mentioned that the system takes other sources of randomness and uses this to stir the entropy pool. This is achieved using something called a timestamp.

Most systems have precise internal clocks. Every time that a user interacts with a system, the value of the clock at that time is recorded as a timestamp. Even though the year, month, day and hour are generally guessable, the millisecond and microsecond are not, and therefore the timestamp contains some entropy. Timestamps obtained from the user’s mouse and keyboard, along with timing information from the network and disk, each have a different amount of entropy.

How does the entropy found in a timestamp get transferred to the entropy pool? Simple, use math to mix it in. Well, simple if you like math.

Just mix it in

A fundamental property of entropy is that it mixes well. If you take two unrelated random streams and combine them, the new stream cannot have less entropy. Taking a number of low entropy sources and combining them results in a high entropy source.

All that’s needed is the right combination function: a function that can be used to combine two sources of entropy. One of the simplest such functions is the logical exclusive or (XOR). This truth table shows how bits x and y coming from different random streams are combined by the XOR function.

Even if one source of bits does not have much entropy, there is no harm in XORing it into another source. Entropy always increases. In the Linux kernel, a combination of XORs is used to mix timestamps into the main entropy pool.
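As a toy illustration of this mixing property (not the kernel's actual pool code), XOR-stirring a sample into a pool can be sketched in Python:

```python
# Toy sketch of XOR-based entropy mixing: stirring a low-entropy
# sample into a pool via XOR never reduces the entropy already there.
def stir(pool: bytes, sample: bytes) -> bytes:
    # XOR each pool byte with a byte of the sample (cycled to fit)
    return bytes(p ^ sample[i % len(sample)] for i, p in enumerate(pool))

pool = bytes(8)               # illustrative all-zero pool
pool = stir(pool, b"\x5a")    # mix in a "timestamp" sample
# XOR is its own inverse: stirring the same sample twice restores the pool
assert stir(pool, b"\x5a") == bytes(8)
```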

Generating random numbers

Cryptographic applications require very high entropy. If a 128-bit key is generated with only 64 bits of entropy then it can be guessed in 2^64 attempts instead of 2^128 attempts. That is the difference between needing a thousand computers running for a few years to brute force the key versus needing all the computers ever created running for longer than the history of the universe to do so.

Cryptographic applications require close to one bit of entropy per bit. If the system’s pool has fewer than 4,096 bits of entropy, how does the system return a fully random number? One way to do this is to use a cryptographic hash function.

A cryptographic hash function takes an input of any size and outputs a fixed size number. Changing one bit of the input will change the output completely. Hash functions are good at mixing things together. This mixing property spreads the entropy from the input evenly through the output. If the input has more bits of entropy than the size of the output, the output will be highly random. This is how highly entropic random numbers are derived from the entropy pool.

The hash function used by the Linux kernel is the standard SHA-1 cryptographic hash. By hashing the entire pool and applying some additional arithmetic, 160 random bits are created for use by the system. When this happens, the system lowers its estimate of the entropy in the pool accordingly.
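A minimal sketch of this derivation step, using Python's standard hashlib (the 64-byte pool value here is just a stand-in for the real entropy pool):

```python
import hashlib

# Derive fixed-size output by hashing a larger pool, as described
# above; SHA-1's digest is 160 bits (20 bytes).
pool = bytes(range(64))               # stand-in for the 4,096-bit pool
out = hashlib.sha1(pool).digest()
print(len(out) * 8)                   # 160 bits per extraction
# Flipping a single bit of the input changes the output completely
flipped = bytes([pool[0] ^ 1]) + pool[1:]
assert hashlib.sha1(flipped).digest() != out
```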

Above I said that applying a hash like SHA-1 could be dangerous if there wasn’t enough entropy in the pool. That’s why it’s critical to keep an eye on the available system entropy: if it drops too low, the output of the random number generator could have less entropy than it appears to have.

Running out of entropy

One of the dangers for a system is running out of entropy. When the system’s entropy estimate drops to around the 160-bit level (the length of a SHA-1 hash), things get tricky, and how this affects programs and performance depends on which of the two Linux random number generators is used.

Linux exposes two interfaces for random data that behave differently when the entropy level is low. They are /dev/random and /dev/urandom. When the entropy pool becomes predictable, both interfaces for requesting random numbers become problematic.

When the entropy level is too low, /dev/random blocks and does not return until the level of entropy in the system is high enough. This guarantees high entropy random numbers. If /dev/random is used in a time-critical service and the system runs low on entropy, the delays could be detrimental to the quality of service.

On the other hand, /dev/urandom does not block. It continues to return the hashed value of its entropy pool even though there is little to no entropy in it. This low-entropy data is not suited for cryptographic use.
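As a quick sketch, reading from the non-blocking interface from Python (os.urandom() is backed by /dev/urandom, or the getrandom() syscall on newer kernels):

```python
import os

# os.urandom() reads from the kernel's non-blocking interface, so it
# returns immediately even when the entropy estimate is low.
key = os.urandom(16)
print(len(key))   # 16
```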

The solution to the problem is to simply add more entropy into the system.

Hardware random number generation to the rescue?

Intel’s Ivy Bridge family of processors has an interesting feature called “secure key.” These processors contain a special piece of hardware inside that generates random numbers. The single assembly instruction RDRAND returns allegedly high-entropy random data derived on the chip.

It has been suggested that Intel’s hardware number generator may not be fully random. Since it is baked into the silicon, that assertion is hard to audit and verify. As it turns out, even if the numbers generated have some bias, it can still help as long as this is not the only source of randomness in the system. Even if the random number generator itself had a back door, the mixing property of randomness means that it cannot lower the amount of entropy in the pool.

On Linux, if a hardware random number generator is present, the Linux kernel will use the XOR function to mix the output of RDRAND into the hash of the entropy pool. This happens here in the Linux source code (the XOR operator is ^ in C).

Third party entropy generators

Hardware number generation is not available everywhere, and the sources of randomness polled by the Linux kernel itself are somewhat limited. For this situation, a number of third party random number generation tools exist. Examples of these are haveged, which relies on processor cache timing, audio-entropyd and video-entropyd which work by sampling the noise from an external audio or video input device. By mixing these additional sources of locally collected entropy into the Linux entropy pool, the entropy can only go up.


TIBCO Universal Installer – Unix – The installer is unable to run in graphical mode. Try running the installer with the -console or -silent flag (SOLVED)

Many a time, when you try to install TIBCO Rendezvous / TIBCO EMS or even certain BW plugins (which are 32-bit binaries) on a 64-bit JVM-based UNIX system (Linux / Solaris / AIX / HP-UX / FreeBSD), you typically encounter an error like this



Well, many people aren’t aware of the real fix for this issue.

After much research with permutations and combinations, there is a solution:

Follow the steps mentioned below for RHEL 6.x systems (I haven’t tried other *NIX platforms yet)

  1. sudo yum -y install libXtst*i686*
  2. sudo yum -y install libXext*i686*
  3. sudo yum -y install libXrender*i686*

I am quite sure it’ll get the GUI mode of installation working.

BusinessWorks, TIBCO

java.sql.SQLRecoverableException: IO Error: Connection reset ( Designer / BWEngine / interfaceName )

Sometimes, when you create a JDBC Connection in your Designer, or when you configure a JDBC Connection in your EAR, You might end up with an error like this :-

Designer :-


Runtime :-

java.sql.SQLRecoverableException: IO Error: Connection reset

(In your trace file)

This happens because of the way Java seeds SecureRandom from /dev/random.

/dev/random is a random number generator often used to seed cryptography functions for better security.  /dev/urandom likewise is a (pseudo) random number generator.  Both are good at generating random numbers.  The key difference is that /dev/random has a blocking function that waits until entropy reaches a certain level before providing its result.  From a practical standpoint, this means that programs using /dev/random will generally take longer to complete than /dev/urandom.

With regards to why /dev/urandom vs /dev/./urandom: that is something unique to Java versions 5 and later, resulting from problems with /dev/urandom on Linux systems back in 2004. The easy fix at the time was to make Java silently substitute /dev/random whenever /dev/urandom is configured, and it doesn’t appear that Java will be updated to let /dev/urandom actually be used. So the workaround is to fake Java out by writing the path as /dev/./urandom, which is functionally the same device but looks different, so the substitution does not kick in.

Therefore, Add the following Field to bwengine.tra and designer.tra OR your Individual track’s tra file and restart the bwengine or designer and it works like Magic Johnson’s Dunk.

java.extended.properties -Djava.security.egd=file:///dev/./urandom

Main, Tuning

Interrupt Coalescence (also called Interrupt Moderation, Interrupt Blanking, or Interrupt Throttling)

A common bottleneck for high-speed data transfers is the high rate of interrupts that the receiving system has to process – traditionally, a network adapter generates an interrupt for each frame that it receives. These interrupts consume signaling resources on the system’s bus(es), and introduce significant CPU overhead as the system transitions back and forth between “productive” work and interrupt handling many thousand times a second.

To alleviate this load, some high-speed network adapters support interrupt coalescence. When multiple frames are received in a short timeframe (“back-to-back”), these adapters buffer those frames locally and only interrupt the system once.

Interrupt coalescence together with large-receive offload can roughly be seen as doing on the “receive” side what transmit chaining and large-send offload (LSO) do for the “transmit” side.

Issues with interrupt coalescence

While this scheme lowers interrupt-related system load significantly, it can have adverse effects on timing, and make TCP traffic more bursty or “clumpy”. Therefore it would make sense to combine interrupt coalescence with on-board timestamping functionality. Unfortunately that doesn’t seem to be implemented in commodity hardware/driver combinations yet.

The way that interrupt coalescence works, a network adapter that has received a frame doesn’t send an interrupt to the system right away, but waits for a little while in case more packets arrive. This can have a negative impact on latency.

In general, interrupt coalescence is configured such that the additional delay is bounded. On some implementations, these delay bounds are specified in units of milliseconds, on other systems in units of microseconds. It requires some thought to find a good trade-off between latency and load reduction. One should be careful to set the coalescence threshold low enough that the additional latency doesn’t cause problems. Setting a low threshold will prevent interrupt coalescence from occurring when successive packets are spaced too far apart. But in that case, the interrupt rate will probably be low enough so that this is not a problem.


Configuration of interrupt coalescence is highly system dependent, although there are some parameters that are more or less common over implementations.


On Linux systems with additional driver support, the ethtool -C command can be used to modify the interrupt coalescence settings of network devices on the fly.

Some Ethernet drivers in Linux have parameters to control Interrupt Coalescence (Interrupt Moderation, as it is called in Linux). For example, the e1000 driver for the large family of Intel Gigabit Ethernet adapters has the following parameters according to the kernel documentation:

InterruptThrottleRate limits the number of interrupts per second generated by the card. Values >= 100 are interpreted as the maximum number of interrupts per second. The default value used to be 8'000 up to and including kernel release 2.6.19. A value of zero (0) disables interrupt moderation completely. Above 2.6.19, some values between 1 and 99 can be used to select adaptive interrupt rate control. The first adaptive modes are “dynamic conservative” (1) and dynamic with reduced latency (3). In conservative mode (1), the rate changes between 4'000 interrupts per second when only bulk traffic (“normal-size packets”) is seen, and 20'000 when small packets are present that might benefit from lower latency. In the more aggressive mode (3), “low-latency” traffic may drive the interrupt rate up to 70'000 per second. This mode is supposed to be useful for cluster communication in grid applications.

RxIntDelay specifies, in units of 1.024 microseconds, the time after reception of a frame to wait for another frame to arrive before sending an interrupt.

RxAbsIntDelay bounds the delay between reception of a frame and generation of an interrupt. It is specified in units of 1.024 microseconds. Note that InterruptThrottleRate overrides RxAbsIntDelay, so even when a very short RxAbsIntDelay is specified, the interrupt rate should never exceed the rate specified (either directly or by the dynamic algorithm) by InterruptThrottleRate.

RxDescriptors specifies the number of descriptors used to store incoming frames on the adapter. The default value is 256, which is also the maximum for some types of E1000-based adapters. Others can allocate up to 4'096 of these descriptors. The size of the receive buffer associated with each descriptor varies with the MTU configured on the adapter. It is always a power-of-two number of bytes. The number of descriptors available will also depend on the per-buffer size. When all buffers have been filled by incoming frames, an interrupt will have to be signaled in any case.


As an example, see the Platform Notes: Sun GigaSwift Ethernet Device Driver. It lists the following parameters for that particular type of adapter:

  • Packet blanking: interrupt after this number of packets have arrived since the last packet was serviced. A value of zero indicates no packet blanking. (Range: 0 to 511, default=3)
  • Time blanking: interrupt after this many 4.5-microsecond ticks have elapsed since the last packet was serviced. A value of zero indicates no time blanking. (Range: 0 to 524287, default=1250)

TIBCO Hawk v/s TIBCO BWPM (reblogged)

A short while ago I got the question from a customer that wanted to know the differences between TIBCO Hawk and TIBCO BWPM (BusinessWorks Process Monitor), since both are monitoring products from TIBCO. In this blog I will be briefly explaining my point of view and recommendations about when to use which product, which in my opinion cannot be compared as-is.

Let me start by indicating that TIBCO Hawk and BWPM are not products which can be directly compared with each other. There is partial overlap in the purpose of the two products, namely gaining insight into the integration landscape, but at the same time the products are very different. TIBCO Hawk is, as we may know, a transport, distribution and monitoring product that under the hood allows TIBCO administrators to technically monitor the integration landscape at runtime (including server behaviour etc.) and to reactively respond to certain events by configuring so-called Hawk rules and setting up dashboards for feedback. The technical monitoring capabilities are quite extensive and based on the information and log files made available by both the TIBCO Administrator and the various micro Hawk agents. The target group of TIBCO Hawk is especially the administrators and, to a lesser extent, the developers. The focus is on monitoring the various TIBCO components (or adapters) to satisfy the corresponding SLAs, not on what is taking place within the TIBCO components from a functional point of view.

+ Very strong, comprehensive and proven tool for TIBCO administrators;
+ Reactively measure and (automatically) react to events in the landscape using Hawk rules;
– Fairly technical, and thus a high threshold for non-technical users;
– Offers little or no insight into the actual data processed from a functional point of view;

TIBCO BWPM is a product that provides insight from a functional point of view, at runtime, at the process level; it is a rebranding of the product nJAMS by Integration Matters. It may impact the way of developing (standards and guidelines): by using so-called libraries throughout development, process-specific functional information can be made available at runtime. It has a rich web interface as an alternative to the TIBCO Administrator and offers rich visual insight into all process instances, correlating them together. The target group of TIBCO BWPM is TIBCO developers, administrators, testers and even analysts. The focus is on gaining an understanding of what is taking place within the TIBCO components from a functional point of view.

+ Very strong and comprehensive tool with a rich web interface;
+ Provides extensive logging capabilities, the availability of all related context and process data;
+ Easily accessible and intuitive to use, even for non-technical users;
– Less suitable for the daily technical monitoring of the landscape (including server behaviour etc.);
– It is important that the product is well designed and properly parameterized to prevent performance impact (this should not be underestimated);

In my opinion, TIBCO BWPM is a very welcome addition to the standard TIBCO Administrator/TIBCO Hawk for gaining insight into the related context and process data from a functional point of view. In addition, the product can also be used by TIBCO developers, administrators, testers and even analysts.

Source :-  http://www.rubix.nl

BusinessWorks, TIBCO

TIBCO BWPM – Missing Libraries Detected


If at all you get an error like this



Don’t Panic, simply copy the following list of the following jars in $CATALINA_HOME/lib

For EMS :-

  • jms.jar (if using EMS 8 and above, rename jms2.0.jar to jms.jar)
  • tibcrypt.jar, tibjms.jar, tibjmsadmin.jar

For Database :-

  • ojdbc.jar (rename ojdbc6.jar or ojdbc7.jar to ojdbc.jar) – ORACLE
  • mssqlserver.jar (rename sqljdbc4.jar to mssqlserver.jar) – MSSQL
Operating System, Redhat / CEntOS / Oracle Linux

/etc/security/limits.conf file – In A Nutshell

The /etc/security/limits.conf file contains a list of lines, where each line describes a limit for a user in the form of:

<Domain> <type> <item> <shell limit value>


  • <domain> can be:
    • a user name
    • a group name, with @group syntax
    • the wildcard *, for the default entry
    • the wildcard %, which can also be used with %group syntax, for the maxlogins limit
  • <type> can have one of two values:
    • “soft” for enforcing soft limits (a soft limit is like a warning)
    • “hard” for enforcing hard limits (a hard limit is the real maximum)
  • <item> can be one of the following:
    • core – limits the core file size (KB)
    • data – max data size (KB)
    • fsize – maximum file size (KB)
    • memlock – max locked-in-memory address space (KB)
    • nofile – maximum number of open file descriptors
    • rss – max resident set size (KB)
    • stack – max stack size (KB) – maximum size of the stack segment of the process
    • cpu – max CPU time (MIN)
    • nproc – maximum number of processes available to a single user
    • as – address space limit
    • maxlogins – max number of logins for this user
    • maxsyslogins – max number of logins on the system
    • priority – the priority to run user processes with
    • locks – max number of file locks the user can hold
    • sigpending – max number of pending signals
    • msgqueue – max memory used by POSIX message queues (bytes)
    • nice – max nice priority allowed to raise to
    • rtprio – max realtime priority
    • chroot – change root to directory (Debian-specific)
  • <shell limit value> is the numeric value to apply for the chosen item (or unlimited)
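For example, a few illustrative entries (user names and values here are assumptions, not recommendations):

```
# /etc/security/limits.conf — illustrative entries
tibco    soft    nofile    8192
tibco    hard    nofile    65536
@devs    soft    nproc     2048
*        hard    core      0
```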


  • Sigpending – examine pending signals.

sigpending() returns the set of signals that are pending for delivery to the calling thread (i.e., the signals which have been raised while blocked). The mask of pending signals is returned in set.

sigpending() returns 0 on success and -1 on error.
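A small sketch of the pending-signal mechanism on Linux, using Python's standard signal module (which wraps the sigpending() syscall):

```python
import os
import signal

# Block SIGUSR1 so it stays pending rather than being delivered
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)   # raise it; delivery is blocked
pending = signal.sigpending()          # wraps the sigpending() syscall
print(signal.SIGUSR1 in pending)       # True
# Ignore it before unblocking, so the process is not terminated
signal.signal(signal.SIGUSR1, signal.SIG_IGN)
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
```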


credits :- Sagar Salunkhe

Operating System, Redhat / CEntOS / Oracle Linux

Linux KVM: Disable virbr0 NAT Interface

The virtual network (virbr0) is used for Network Address Translation (NAT), which allows guests to access network services. However, NAT slows things down and is only recommended for desktop installations. To disable NAT forwarding, type the following commands:

Display Current Setup

Type the following command:
# ifconfig
Sample outputs:

virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:7921 (7.7 KiB)

Or use the following command:
# virsh net-list
Sample outputs:

Name                 State      Autostart
default              active     yes       

To disable virbr0, enter:
# virsh net-destroy default
# virsh net-undefine default
# service libvirtd restart
# ifconfig

Enterprise Messaging Service, TIBCO

TIBCO EMS – Properties of Queues and Topics (Where Tuning can be done)

You can set the properties directly in the topics.conf or queues.conf file or by means of the setprop topic or setprop queue command in the EMS Administrator Tool.

1)   Failsafe

The failsafe property determines whether the server writes persistent messages to disk synchronously or asynchronously.

Ø  When failsafe is not set, messages are written to the file on disk in asynchronous mode to obtain maximum performance. In this mode, the data may remain in system buffers for a short time before it is written to disk and it is possible that, in case of software or hardware failure, some data could be lost without the possibility of recovery

Ø  In failsafe mode, all data for that queue or topic are written into external storage in synchronous mode. In synchronous mode, a write operation is not complete until the data is physically recorded on the external device

The failsafe property ensures that no messages are ever lost in case of server failure

2) Secure

Ø  When the secure property is enabled for a destination, it instructs the server to check user permissions whenever a user attempts to perform an operation on that destination.

Ø  If the secure property is not set for a destination, the server does not check permissions for that destination and any authenticated user can perform any operation on that topic or queue.

3)   Maxbytes

Ø  Topics and queues can specify the maxbytes property in the form:

maxbytes=value [KB|MB|GB]                   Ex: maxbytes=1000MB

Ø  For queues, maxbytes defines the maximum size (in bytes) that the queue can store, summed over all messages in the queue. Should this limit be exceeded, messages will be rejected by the server and the message producer’s send calls will return an error

Ø  If maxbytes is zero, or is not set, the server does not limit the memory allocation for the queue

4) maxmsgs

Ø  Topics and queues can specify maxmsgs=value, where value defines the maximum number of messages that can be waiting in a queue. When adding a message would exceed this limit, the server does not accept the message into storage, and the message producer’s send call returns an error.

Ø  If maxmsgs is zero, or is not set, the server does not limit the number of messages in the queue.

Ø  You can set both maxmsgs and maxbytes properties on the same queue. Exceeding either limit causes the server to reject new messages until consumers reduce the queue size to below these limits.
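As an illustrative queues.conf entry combining these properties (the queue name and values here are assumptions):

```
ORDERS.IN failsafe,secure,maxbytes=100MB,maxmsgs=50000
```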

5) OverflowPolicy

Topics and queues can specify the overflowPolicy property to change the effect of exceeding the message capacity established by either maxbytes or maxmsgs.

o   OverflowPolicy=default | discardOld | rejectIncoming

  1. Default

Ø  For topics, default specifies that messages are sent to subscribers, regardless of maxbytes or maxmsgs setting.

Ø  For queues, default specifies that new messages are rejected by the server and an error is returned to the producer if the established maxbytes or maxmsgs value has been exceeded.

  2. DiscardOld

Ø  For topics, discardOld specifies that, if any of the subscribers have an outstanding number of undelivered messages on the server that are over the message limit, the oldest messages are discarded before they are delivered to the subscriber.

Ø  The discardOld setting impacts subscribers individually. For example, you might have three subscribers to a topic, but only one subscriber exceeds the message limit. In this case, only the oldest messages for the one subscriber are discarded, while the other two subscribers continue to receive all of their messages.

Ø  For queues, discardOld specifies that, if messages on the queue have exceeded the maxbytes or maxmsgs value, the oldest messages are discarded from the queue and an error is returned to the message producer

  3. RejectIncoming

Ø  For topics, rejectIncoming specifies that, if any of the subscribers have an outstanding number of undelivered messages on the server that are over the message limit, all new messages are rejected and an error is returned to the producer.

Ø  For queues, rejectIncoming specifies that, if messages on the queue have exceeded the maxbytes or maxmsgs value, all new messages are rejected and an error is returned to the producer.

6) global

Ø  Messages destined for a topic or queue with the global property set are routed to the other servers that are participating in routing with this server.

You can set global using the form:   global
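For example, a topics.conf entry (t.routed is a placeholder name; the routes this server participates in are configured separately, in routes.conf):

```
t.routed  global
```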

7) sender_name

Ø  The sender_name property specifies that the server may include the sender’s username for messages sent to this destination.

You can set sender_name using the form:    sender_name

8) sender_name_enforced

Ø  The sender_name_enforced property specifies that messages sent to this destination must include the sender’s user name. The server retrieves the user name of the message producer using the same procedure described in the sender_name property above. However, unlike the sender_name property, there is no way for message producers to override this property.

You can set sender_name_enforced using the form:    sender_name_enforced

Ø  If the sender_name property is also set on the destination, this property overrides the sender_name property.
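For example, to require the sender’s user name on a placeholder queue q.sample, sender_name_enforced alone is sufficient, since it overrides sender_name anyway:

```
q.sample  sender_name_enforced
```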

9) FlowControl

Ø  The flowControl property specifies the target maximum size the server can use to store pending messages for the destination. Should the number of messages exceed the maximum, the server slows the producers down to the rate required by the message consumers. This is useful when message producers send messages much more quickly than message consumers can consume them.

If you specify the flowControl property without a value, the target maximum is set to 256KB.

Ø  The flow_control parameter in tibemsd.conf file must be set to enable before the value in this property is enforced by the server. See Flow Control for more information about flow control.
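A minimal sketch (q.sample is a placeholder queue name): flow control must first be enabled server-wide in tibemsd.conf, then a per-destination target can be set in queues.conf:

```
# tibemsd.conf
flow_control = enabled

# queues.conf
q.sample  flowControl=64KB
```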

10) trace

Ø  Specifies that tracing should be enabled for this destination.

o    You can set trace using the form:    trace [=body]

Ø  Specifying trace (without =body) generates trace messages that include only the message sequence and message ID. Specifying trace=body generates trace messages that include the message body.
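For example, to trace message bodies on a placeholder queue q.sample in queues.conf:

```
q.sample  trace=body
```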

11) import

Ø  The import property allows messages published by an external system to be received by an EMS destination (a topic or a queue), as long as the transport to the external system is configured.

o    You can set import using the form:    import="list"

12) export

Ø  The export property allows messages published by a client to a topic to be exported to the external systems with configured transports.

o    You can set export using the form:    export="list"

Ø  The export property is supported only for topics, not for queues.
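For example, assuming a transport named RV01 is defined in transports.conf (the name is a placeholder), topics.conf entries could look like:

```
t.inbound   import="RV01"
t.outbound  export="RV01"
```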

13) maxRedelivery

Ø  The maxRedelivery property specifies the number of attempts the server should make to redeliver a message sent to a queue.

o    You can set maxRedelivery using the form:    maxRedelivery=count

Ø  Where count is an integer between 2 and 255 that specifies the maximum number of times a message can be delivered to receivers. A value of zero disables maxRedelivery, so there is no maximum.

Ø  Once the server has attempted to deliver the message the specified number of times, the message is either destroyed or, if the JMS_TIBCO_PRESERVE_UNDELIVERED property on the message is set to true, the message is placed on the undelivered queue so it can be handled by a special consumer
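The redelivery decision above can be sketched as a simplified model (plain Python, not the EMS API; the function and its parameters are illustrative only):

```python
def deliver(message, max_redelivery, preserve_undelivered, consume):
    """Simplified model of EMS queue redelivery.

    Try delivering up to max_redelivery times; if every attempt fails,
    the message goes to $sys.undelivered when the message property
    JMS_TIBCO_PRESERVE_UNDELIVERED is true, otherwise it is destroyed.
    """
    for _ in range(max_redelivery):
        if consume(message):  # consumer processed and acknowledged the message
            return "consumed"
    # all delivery attempts exhausted
    return "$sys.undelivered" if preserve_undelivered else "destroyed"
```

For instance, a consumer that always fails leaves the message on the undelivered queue only when the preserve flag is set; otherwise the message is destroyed.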

Undelivered Message Queue

If a message expires or has exceeded the value specified by the maxRedelivery property on a queue, the server checks the message’s JMS_TIBCO_PRESERVE_UNDELIVERED property. If
JMS_TIBCO_PRESERVE_UNDELIVERED is set to true, the server moves the message to the undelivered message queue, $sys.undelivered. This undelivered message queue is a system queue that is always present and cannot be deleted. If JMS_TIBCO_PRESERVE_UNDELIVERED is set to false, the message will be deleted by the server.

14) exclusive

Ø  The exclusive property is available for queues only (not for topics).

Ø  When exclusive is set for a queue, the server sends all messages on that queue to one consumer. No other consumers can receive messages from the queue. Instead, these additional consumers act in a standby role; if the primary consumer fails, the server selects one of the standby consumers as the new primary, and begins delivering messages to it.

Ø  By default, exclusive is not set for queues, and the server distributes messages in round-robin fashion: one to each receiver that is ready. If any receivers are still ready to accept additional messages, the server distributes another round of messages, one to each receiver that is still ready. When none of the receivers are ready to receive more messages, the server waits until a queue receiver reports that it can accept a message.
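For example, to designate a placeholder queue q.sample for exclusive (primary/standby) consumption in queues.conf:

```
q.sample  exclusive
```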

15) prefetch

The message consumer portion of a client and the server cooperate to regulate fetching according to the prefetch property. The prefetch property applies to both topics and queues.

You can set prefetch using the form:  prefetch=value

where value is one of: 2 or more, 1, 0, or None.

The values have the following meanings:

Ø  2 or more: The message consumer automatically fetches messages from the server. The message consumer never fetches more than the number of messages specified by value.

Ø  1: The message consumer automatically fetches messages from the server, initiating a fetch only when it does not currently hold a message.

Ø  None: Disables automatic fetch. That is, the message consumer initiates a fetch only when the client calls receive, whether an explicit synchronous call or an implicit call (in an asynchronous consumer). This value cannot be used with topics or global queues.

Ø  0: The destination inherits the prefetch value from a parent destination with a matching name. If it has no parent, or no destination in the parent chain sets a value for prefetch, then the default value is 5 for queues and 64 for topics.

Ø  When a destination does not set any value for prefetch (that is, the prefetch value is empty), the default value is 0 (zero; that is, inherit the prefetch value).
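For example (placeholder destination names):

```
# topics.conf
t.sample  prefetch=10

# queues.conf (none is not allowed for topics or global queues)
q.sample  prefetch=none
```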

16) expiration                                                                                    

Ø  If an expiration property is set for a destination, when the server delivers a message to that destination, the server overrides the JMSExpiration value set by the producer in the message header with the time specified by the expiration property.

o    You can set the expiration property for any queue and any topic using the form:

expiration=time [msec|sec|min|hour|day]


Ø  where time is a number of time units (seconds, if no unit is given). Zero is a special value that indicates messages to the destination never expire.
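For example (placeholder destination names):

```
# queues.conf: messages expire 10 minutes after delivery
q.sample  expiration=10min

# topics.conf: zero means messages never expire
t.sample  expiration=0
```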

Operating System, Redhat / CEntOS / Oracle Linux, TIBCO

TIBCO Administrator – Error (Core Dump Error)

Sometimes the Administrator process on the UNIX platform stops intermittently, and then in the following location,


file, you will see a core dump error something like this:

# A fatal error has been detected by the Java Runtime Environment:
# SIGSEGV (0xb) at pc=0x00007efcdb723df8, pid=12496, tid=139624169486080
# JRE version: Java(TM) SE Runtime Environment (8.0_51-b16) (build 1.8.0_51-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try “ulimit -c unlimited” before starting Java again
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp

————— T H R E A D —————

Current thread (0x00000000023a8800): JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000

RAX=0x0000000000000000, RBX=0x00007efc901723e0, RCX=0x00007efc9016e890, RDX=0x0000000000000041
RSP=0x00007efcc905f650, RBP=0x00007efcc905f6c0, RSI=0x00007efcc9060f50, RDI=0x00007efc90b937a0
R8 =0x000000000000009a, R9 =0x0000000000000003, R10=0x0000000000000003, R11=0x0000000000000000
R12=0x0000000000000004, R13=0x0000000000000000, R14=0x0000000000000002, R15=0x00007efc90b937a0
RIP=0x00007efcdb723df8, EFLAGS=0x0000000000010246, CSGSFS=0x0000000000000033, ERR=0x0000000000000004

Top of Stack: (sp=0x00007efcc905f650)
0x00007efcc905f650: 01007efcc905f6c0 00007efcc9060f50
0x00007efcc905f660: 0000003dc905f870 00007efc9012d900
0x00007efcc905f670: 0000000100000002 ffffffff00000002
0x00007efcc905f680: 00007efc98009f40 00007efc90171d30
0x00007efcc905f690: 0000023ac9061038 00007efcc9060f50
0x00007efcc905f6a0: 0000000000000222 0000000000000090
0x00007efcc905f6b0: 00007efcc9061038 0000000000000222
0x00007efcc905f6c0: 00007efcc905f930 00007efcdb72705a
0x00007efcc905f6d0: 00007efcc905f750 00007efcc905f870
0x00007efcc905f6e0: 00007efcc905f830 00007efcc905f710
0x00007efcc905f6f0: 00007efcc905f7d0 00007efcc905f8a0
0x00007efcc905f700: 00007efcc9060f50 0000001200000117
0x00007efcc905f710: 00007efcdc2544b0 00007efc0000000c
0x00007efcc905f720: 00007efcc9061dd0 00007efcc9060f50
0x00007efcc905f730: 0000000000000807 00007efc9044a2e0
0x00007efcc905f740: 00007efc9012c610 0000000000000002
0x00007efcc905f750: 00007efcc905f820 00007efcdbae2ea3
0x00007efcc905f760: 000007c000000010 00007efcc90610d8
0x00007efcc905f770: 0000000000000028 ffffffe80000000e
0x00007efcc905f780: 00007efc9076dd60 00007efcc9061080
0x00007efcc905f790: 0000001100001f00 0000001a00000011
0x00007efcc905f7a0: 0000000100000001 00007efc903a1c78
0x00007efcc905f7b0: 00007efc908213e0 00007efcdbd7cd46
0x00007efcc905f7c0: 0000000000000008 00007efcdbd7cc97
0x00007efcc905f7d0: 00007efc00000009 00007efcc9061dd0
0x00007efcc905f7e0: 00007efc909059f0 00007efc900537a0
0x00007efcc905f7f0: 00007efc9040b7c0 00007efc9040c130
0x00007efcc905f800: 00007efc9081d280 00007efcc9061080
0x00007efcc905f810: 00007efcc9061060 0000000000000222
0x00007efcc905f820: 00007efcc905f870 00007efcdb8f8831
0x00007efcc905f830: 00007efc0000000b 00007efcc9061dd0
0x00007efcc905f840: 00007efc906cbd70 00007efcc9060f00

Instructions: (pc=0x00007efcdb723df8)
0x00007efcdb723dd8: 18 00 48 c7 c0 ff ff ff ff 4c 89 ff 49 0f 44 c7
0x00007efcdb723de8: 48 89 43 18 49 8b 07 ff 90 80 00 00 00 49 89 c5
0x00007efcdb723df8: 8b 00 21 43 38 41 8b 45 04 21 43 3c 4c 89 ff 41
0x00007efcdb723e08: 8b 45 08 21 43 40 41 8b 45 0c 21 43 44 41 8b 45

Register to memory mapping:

RAX=0x0000000000000000 is an unknown value
RBX=0x00007efc901723e0 is an unknown value
RCX=0x00007efc9016e890 is an unknown value
RDX=0x0000000000000041 is an unknown value
RSP=0x00007efcc905f650 is pointing into the stack for thread: 0x00000000023a8800
RBP=0x00007efcc905f6c0 is pointing into the stack for thread: 0x00000000023a8800
RSI=0x00007efcc9060f50 is pointing into the stack for thread: 0x00000000023a8800
RDI=0x00007efc90b937a0 is an unknown value
R8 =0x000000000000009a is an unknown value
R9 =0x0000000000000003 is an unknown value
R10=0x0000000000000003 is an unknown value
R11=0x0000000000000000 is an unknown value
R12=0x0000000000000004 is an unknown value
R13=0x0000000000000000 is an unknown value
R14=0x0000000000000002 is an unknown value
R15=0x00007efc90b937a0 is an unknown value
Stack: [0x00007efcc8f63000,0x00007efcc9064000], sp=0x00007efcc905f650, free space=1009k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
V [libjvm.so+0x40805a] PhaseChaitin::Register_Allocate()+0x71a
V [libjvm.so+0x49abe0] Compile::Code_Gen()+0x260
V [libjvm.so+0x49e032] Compile::Compile(ciEnv*, C2Compiler*, ciMethod*, int, bool, bool, bool)+0x14b2
V [libjvm.so+0x3ebeb8] C2Compiler::compile_method(ciEnv*, ciMethod*, int)+0x198
V [libjvm.so+0x4a843a] CompileBroker::invoke_compiler_on_method(CompileTask*)+0xc9a
V [libjvm.so+0x4a93e6] CompileBroker::compiler_thread_loop()+0x5d6
V [libjvm.so+0xa5cbcf] JavaThread::thread_main_inner()+0xdf
V [libjvm.so+0xa5ccfc] JavaThread::run()+0x11c
V [libjvm.so+0x911048] java_start(Thread*)+0x108
C [libpthread.so.0+0x7aa1]
Current CompileTask:
C2:77124624 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)
————— P R O C E S S —————

Java Threads: ( => current thread )
0x00007efcb8024000 JavaThread “Thread-41” daemon [_thread_blocked, id=13990, stack(0x00007efc372f5000,0x00007efc373f6000)]
0x00007efcac372000 JavaThread “http-bio-8989-exec-68” daemon [_thread_blocked, id=13853, stack(0x00007efc3ca41000,0x00007efc3cb42000)]
0x00007efc4002d000 JavaThread “http-bio-8989-exec-67” daemon [_thread_blocked, id=13837, stack(0x00007efc376f9000,0x00007efc377fa000)]
0x00007efc945b6800 JavaThread “http-bio-8989-exec-66” daemon [_thread_blocked, id=13828, stack(0x00007efc37cfd000,0x00007efc37dfe000)]
0x00007efc40028000 JavaThread “http-bio-8989-exec-65” daemon [_thread_blocked, id=13228, stack(0x00007efc374f7000,0x00007efc375f8000)]
0x00007efc993ba800 JavaThread “http-bio-8989-exec-64” daemon [_thread_blocked, id=13227, stack(0x00007efc3ce45000,0x00007efc3cf46000)]
0x00007efcc4012000 JavaThread “http-bio-8989-exec-63” daemon [_thread_blocked, id=13218, stack(0x00007efc3c63f000,0x00007efc3c740000)]
0x00007efc50006000 JavaThread “http-bio-8989-exec-62” daemon [_thread_blocked, id=13217, stack(0x00007efc373f6000,0x00007efc374f7000)]
0x00007efca800c000 JavaThread “http-bio-8989-exec-61” daemon [_thread_blocked, id=13216, stack(0x00007efc3c13a000,0x00007efc3c23b000)]
0x00007efc68004000 JavaThread “http-bio-8989-exec-60” daemon [_thread_blocked, id=13215, stack(0x00007efc3d34a000,0x00007efc3d44b000)]
0x00007efcb0006800 JavaThread “http-bio-8989-exec-59” daemon [_thread_blocked, id=13214, stack(0x00007efc3d54c000,0x00007efc3d64d000)]
0x00007efca8044800 JavaThread “http-bio-8989-exec-58” daemon [_thread_blocked, id=13213, stack(0x00007efc375f8000,0x00007efc376f9000)]
0x00007efc902c5800 JavaThread “http-bio-8989-exec-57” daemon [_thread_blocked, id=13212, stack(0x00007efc36ef1000,0x00007efc36ff2000)]
0x00007efcb4010800 JavaThread “http-bio-8989-exec-56” daemon [_thread_blocked, id=13211, stack(0x00007efc3d148000,0x00007efc3d249000)]
0x00007efc4408c800 JavaThread “http-bio-8989-exec-55” daemon [_thread_blocked, id=13210, stack(0x00007efc3e053000,0x00007efc3e154000)]
0x00007efcb4036800 JavaThread “http-bio-8989-exec-54” daemon [_thread_blocked, id=13201, stack(0x00007efc371f4000,0x00007efc372f5000)]
0x00007efcb0018800 JavaThread “http-bio-8989-exec-53” daemon [_thread_blocked, id=13200, stack(0x00007efc3c23b000,0x00007efc3c33c000)]
0x00007efc6c1e1000 JavaThread “http-bio-8989-exec-52” daemon [_thread_blocked, id=13199, stack(0x00007efc3c43d000,0x00007efc3c53e000)]
0x00007efc58005000 JavaThread “http-bio-8989-exec-51” daemon [_thread_blocked, id=13198, stack(0x00007efc3c039000,0x00007efc3c13a000)]
0x00007efc74006800 JavaThread “http-bio-8989-exec-50” daemon [_thread_blocked, id=13197, stack(0x00007efc9da1e000,0x00007efc9db1f000)]
0x00007efc54005800 JavaThread “AMI Worker 2” daemon [_thread_blocked, id=13120, stack(0x00007efcc2253000,0x00007efcc2354000)]
0x00007efc54003800 JavaThread “AMI Worker 1” daemon [_thread_blocked, id=13119, stack(0x00007efc36df0000,0x00007efc36ef1000)]
0x00007efcbc02f800 JavaThread “http-bio-8989-exec-49” daemon [_thread_blocked, id=13091, stack(0x00007efc37afb000,0x00007efc37bfc000)]
0x00007efc80083000 JavaThread “http-bio-8989-exec-48” daemon [_thread_blocked, id=13090, stack(0x00007efc3cd44000,0x00007efc3ce45000)]
0x00007efc4c01b000 JavaThread “http-bio-8989-exec-47” daemon [_thread_blocked, id=13089, stack(0x00007efc3d44b000,0x00007efc3d54c000)]
0x00007efca403a800 JavaThread “http-bio-8989-exec-46” daemon [_thread_blocked, id=13088, stack(0x00007efc36cef000,0x00007efc36df0000)]
0x00007efcc4023000 JavaThread “http-bio-8989-exec-45” daemon [_thread_blocked, id=13087, stack(0x00007efc3d249000,0x00007efc3d34a000)]
0x00007efc8468a000 JavaThread “http-bio-8989-exec-44” daemon [_thread_blocked, id=13086, stack(0x00007efc3d64d000,0x00007efc3d74e000)]
0x000000000252f800 JavaThread “http-bio-8989-AsyncTimeout” daemon [_thread_blocked, id=13032, stack(0x00007efc3d74e000,0x00007efc3d84f000)]
0x000000000252e800 JavaThread “http-bio-8989-Acceptor-0” daemon [_thread_in_native, id=13031, stack(0x00007efc3d84f000,0x00007efc3d950000)]
0x000000000252d000 JavaThread “ContainerBackgroundProcessor[StandardEngine[Catalina]]” daemon [_thread_blocked, id=13030, stack(0x00007efc3d950000,0x00007efc3da51000)]
0x00007efc44085800 JavaThread “Thread-37” daemon [_thread_blocked, id=13029, stack(0x00007efc3f0f5000,0x00007efc3f1f6000)]
0x00007efc78971000 JavaThread “Thread-36” daemon [_thread_blocked, id=13028, stack(0x00007efc3de51000,0x00007efc3df52000)]
0x00007efc78967000 JavaThread “Thread-33” daemon [_thread_blocked, id=13025, stack(0x00007efc3e154000,0x00007efc3e255000)]
0x00007efc788fe000 JavaThread “Tibrv Dispatcher” daemon [_thread_in_native, id=13024, stack(0x00007efc3e255000,0x00007efc3e356000)]
0x00007efc788fa800 JavaThread “RVAgentManagerTransport dispatch thread” daemon [_thread_in_native, id=13023, stack(0x00007efc3e356000,0x00007efc3e457000)]
0x00007efc788f8000 JavaThread “RacSubManager” daemon [_thread_blocked, id=13022, stack(0x00007efc3e457000,0x00007efc3e558000)]
0x00007efc788d8800 JavaThread “InitialListTimer” daemon [_thread_blocked, id=13021, stack(0x00007efc3e558000,0x00007efc3e659000)]
0x00007efc788d7000 JavaThread “RvHeartBeatTimer” daemon [_thread_blocked, id=13020, stack(0x00007efc3e659000,0x00007efc3e75a000)]
0x00007efc788d4000 JavaThread “AgentAliveMonitor dispatch thread” daemon [_thread_in_native, id=13019, stack(0x00007efc3ebf2000,0x00007efc3ecf3000)]
0x00007efc788aa000 JavaThread “AgentEventMonitor dispatch thread” daemon [_thread_in_native, id=13018, stack(0x00007efc3ecf3000,0x00007efc3edf4000)]
0x00007efc787c2800 JavaThread “Thread-30” daemon [_thread_blocked, id=13017, stack(0x00007efc3ea2b000,0x00007efc3eb2c000)]
0x00007efc4c009800 JavaThread “Thread-29(HawkConfig)” daemon [_thread_blocked, id=13016, stack(0x00007efc3eff4000,0x00007efc3f0f5000)]
0x00007efc4c007800 JavaThread “Thread-28” daemon [_thread_in_native, id=13015, stack(0x00007efc3f1f6000,0x00007efc3f2f7000)]
0x00007efc7827b800 JavaThread “Thread-27” daemon [_thread_blocked, id=13012, stack(0x00007efc3f2f7000,0x00007efc3f3f8000)]
0x00007efc780fb800 JavaThread “Thread-26(HawkConfig)” daemon [_thread_blocked, id=13011, stack(0x00007efc3f5f8000,0x00007efc3f6f9000)]
0x00007efc780f9800 JavaThread “Thread-25” daemon [_thread_in_native, id=13010, stack(0x00007efc3f6f9000,0x00007efc3f7fa000)]
0x00007efc78060000 JavaThread “Thread-24(MonitoringManagement)” daemon [_thread_blocked, id=13009, stack(0x00007efc3f7fa000,0x00007efc3f8fb000)]
0x00007efc7805e000 JavaThread “Thread-23” daemon [_thread_in_native, id=13008, stack(0x00007efc3f8fb000,0x00007efc3f9fc000)]
0x00007efc78210000 JavaThread “Thread-21(quality)” daemon [_thread_blocked, id=13007, stack(0x00007efc3f9fc000,0x00007efc3fafd000)]
0x00007efc7820f800 JavaThread “Thread-20” daemon [_thread_in_native, id=13006, stack(0x00007efc3fafd000,0x00007efc3fbfe000)]
0x00007efc6c1d6800 JavaThread “Thread-17” daemon [_thread_blocked, id=13003, stack(0x00007efc5d6fb000,0x00007efc5d7fc000)]
0x00007efc6c1ff800 JavaThread “CommitQueue4_0” daemon [_thread_in_native, id=13002, stack(0x00007efc3fbfe000,0x00007efc3fcff000)]
0x00007efc6c1fd000 JavaThread “NormalQueue4_2” daemon [_thread_in_native, id=13001, stack(0x00007efc3feff000,0x00007efc40000000)]
0x00007efc6c1fb000 JavaThread “NormalQueue4_1” daemon [_thread_in_native, id=13000, stack(0x00007efc5c0e9000,0x00007efc5c1ea000)]
0x00007efc6c1f9000 JavaThread “NormalQueue4_0” daemon [_thread_in_native, id=12999, stack(0x00007efc5c1ea000,0x00007efc5c2eb000)]
0x00007efc6c1f5000 JavaThread “CommitQueue3_0” daemon [_thread_in_native, id=12998, stack(0x00007efc5c2eb000,0x00007efc5c3ec000)]
0x00007efc6c1f3000 JavaThread “NormalQueue3_2” daemon [_thread_in_native, id=12997, stack(0x00007efc5c3ec000,0x00007efc5c4ed000)]
0x00007efc6c1f1800 JavaThread “NormalQueue3_1” daemon [_thread_in_native, id=12996, stack(0x00007efc5c4ed000,0x00007efc5c5ee000)]
0x00007efc6c1ef800 JavaThread “NormalQueue3_0” daemon [_thread_in_native, id=12995, stack(0x00007efc5c5ee000,0x00007efc5c6ef000)]
0x00007efc6c1eb800 JavaThread “CommitQueue2_0” daemon [_thread_in_native, id=12994, stack(0x00007efc5c6ef000,0x00007efc5c7f0000)]
0x00007efc6c1ea000 JavaThread “NormalQueue2_2” daemon [_thread_in_native, id=12993, stack(0x00007efc5c7f0000,0x00007efc5c8f1000)]
0x00007efc6c1e9800 JavaThread “NormalQueue2_1” daemon [_thread_in_native, id=12992, stack(0x00007efc5c8f1000,0x00007efc5c9f2000)]
0x00007efc6c1de800 JavaThread “NormalQueue2_0” daemon [_thread_in_native, id=12991, stack(0x00007efc5c9f2000,0x00007efc5caf3000)]
0x00007efc6c124000 JavaThread “Thread-16(AUTH_quality)” daemon [_thread_blocked, id=12989, stack(0x00007efc5ccf3000,0x00007efc5cdf4000)]
0x00007efc6c116800 JavaThread “Thread-15” daemon [_thread_in_native, id=12988, stack(0x00007efc5cdf4000,0x00007efc5cef5000)]
0x00007efc6c101000 JavaThread “Timer-0” daemon [_thread_blocked, id=12987, stack(0x00007efc5cef5000,0x00007efc5cff6000)]
0x00007efc6c0f7800 JavaThread “net.sf.ehcache.CacheManager@5a6a4d5b” daemon [_thread_blocked, id=12986, stack(0x00007efc5cff6000,0x00007efc5d0f7000)]
0x00007efc6c0c6000 JavaThread “CommitQueue1_0” daemon [_thread_in_native, id=12985, stack(0x00007efc5d0f7000,0x00007efc5d1f8000)]
0x00007efc6c0c4000 JavaThread “NormalQueue1_2” daemon [_thread_in_native, id=12984, stack(0x00007efc5d1f8000,0x00007efc5d2f9000)]
0x00007efc6c0c2800 JavaThread “NormalQueue1_1” daemon [_thread_in_native, id=12983, stack(0x00007efc5d2f9000,0x00007efc5d3fa000)]
0x00007efc6c0bd800 JavaThread “NormalQueue1_0” daemon [_thread_in_native, id=12982, stack(0x00007efc5d3fa000,0x00007efc5d4fb000)]
0x00007efc6c09e800 JavaThread “HB-1” daemon [_thread_in_native, id=12980, stack(0x00007efc9c013000,0x00007efc9c114000)]
0x00007efc6c09b800 JavaThread “HB-0” daemon [_thread_in_native, id=12979, stack(0x00007efc9c114000,0x00007efc9c215000)]
0x00007efc6c09a000 JavaThread “SYNC-0” daemon [_thread_in_native, id=12978, stack(0x00007efc9c215000,0x00007efc9c316000)]
0x00007efc6c067800 JavaThread “ImstMgmt” daemon [_thread_in_native, id=12973, stack(0x00007efc9c316000,0x00007efc9c417000)]
0x00007efc6c028800 JavaThread “HawkImplantDisp” daemon [_thread_in_native, id=12972, stack(0x00007efc9c417000,0x00007efc9c518000)]
0x00007efc781a1800 JavaThread “Thread-11” daemon [_thread_blocked, id=12967, stack(0x00007efc9cf19000,0x00007efc9d01a000)]
0x00000000030fe000 JavaThread “GC Daemon” daemon [_thread_blocked, id=12540, stack(0x00007efcc80bc000,0x00007efcc81bd000)]
0x00000000023bf800 JavaThread “Service Thread” daemon [_thread_blocked, id=12510, stack(0x00007efcc8d61000,0x00007efcc8e62000)]
0x00000000023aa800 JavaThread “C1 CompilerThread2” daemon [_thread_blocked, id=12509, stack(0x00007efcc8e62000,0x00007efcc8f63000)]
=>0x00000000023a8800 JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]
0x00000000023a5800 JavaThread “C2 CompilerThread0” daemon [_thread_blocked, id=12507, stack(0x00007efcc9064000,0x00007efcc9165000)]
0x00000000023a4000 JavaThread “Signal Dispatcher” daemon [_thread_blocked, id=12506, stack(0x00007efcc9165000,0x00007efcc9266000)]
0x000000000236c000 JavaThread “Finalizer” daemon [_thread_blocked, id=12505, stack(0x00007efcc9266000,0x00007efcc9367000)]
0x000000000236a000 JavaThread “Reference Handler” daemon [_thread_blocked, id=12504, stack(0x00007efcc9367000,0x00007efcc9468000)]
0x00000000022f6800 JavaThread “main” [_thread_in_native, id=12496, stack(0x00007ffec1ae5000,0x00007ffec1be5000)]

Other Threads:
0x0000000002364800 VMThread [stack: 0x00007efcc9468000,0x00007efcc9569000] [id=12503]
0x00000000023c2800 WatcherThread [stack: 0x00007efcc8c60000,0x00007efcc8d61000] [id=12511]

VM state:not at safepoint (normal execution)

VM Mutex/Monitor currently owned by a thread: None

PSYoungGen total 46080K, used 23811K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 51% used [0x00000000f5580000,0x00000000f6ca0dc0,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

Card table byte_map: [0x00007efccb5c4000,0x00007efccb6c5000] byte_map_base: 0x00007efccaec4000

Marking Bits: (ParMarkBitMap*) 0x00007efcdc2bd660
Begin Bits: [0x00007efcc3000000, 0x00007efcc3800000)
End Bits: [0x00007efcc3800000, 0x00007efcc4000000)

Polling page: 0x00007efcdc323000

CodeCache: size=245760Kb used=27975Kb max_used=28034Kb free=217784Kb
bounds [0x00007efccba85000, 0x00007efccd625000, 0x00007efcdaa85000]
total_blobs=7417 nmethods=6888 adapters=442
compilation: enabled

Compilation events (10 events):
Event: 75538.391 Thread 0x00000000023a5800 8963 4 org.hsqldb.Expression::collectInGroupByExpressions (61 bytes)
Event: 75538.393 Thread 0x00000000023a8800 nmethod 8962 0x00007efcccac8fd0 code [0x00007efcccac9140, 0x00007efcccac92b8]
Event: 75538.394 Thread 0x00000000023a8800 8964 4 org.hsqldb.Expression::isConstant (118 bytes)
Event: 75538.397 Thread 0x00000000023a8800 nmethod 8964 0x00007efcccb62110 code [0x00007efcccb62320, 0x00007efcccb62468]
Event: 75538.398 Thread 0x00000000023a5800 nmethod 8963 0x00007efccbf127d0 code [0x00007efccbf12ae0, 0x00007efccbf12e70]
Event: 76104.554 Thread 0x00000000023a8800 8965 4 com.tibco.tibrv.TibrvMsg::writeBool (164 bytes)
Event: 76104.565 Thread 0x00000000023a8800 nmethod 8965 0x00007efccca4e190 code [0x00007efccca4e420, 0x00007efccca4e7f0]
Event: 76857.543 Thread 0x00000000023aa800 8966 1 com.tibco.uac.monitor.server.MonitorServer::access$000 (4 bytes)
Event: 76857.544 Thread 0x00000000023aa800 nmethod 8966 0x00007efccc3c07d0 code [0x00007efccc3c0920, 0x00007efccc3c0a10]
Event: 77124.580 Thread 0x00000000023a8800 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)

GC Heap History (10 events):
Event: 70999.039 GC heap before
{Heap before GC invocations=77 (full 8):
PSYoungGen total 50688K, used 48256K [0x00000000f5580000, 0x00000000f8a80000, 0x0000000100000000)
eden space 48128K, 100% used [0x00000000f5580000,0x00000000f8480000,0x00000000f8480000)
from space 2560K, 5% used [0x00000000f8800000,0x00000000f8820000,0x00000000f8a80000)
to space 3072K, 0% used [0x00000000f8480000,0x00000000f8480000,0x00000000f8780000)
ParOldGen total 122368K, used 45526K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c75800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 70999.045 GC heap after
Heap after GC invocations=77 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 72379.077 GC heap before
{Heap before GC invocations=78 (full 8):
PSYoungGen total 48128K, used 47776K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 100% used [0x00000000f5580000,0x00000000f8400000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 72379.082 GC heap after
Heap after GC invocations=78 (full 8):
PSYoungGen total 48640K, used 128K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 73744.273 GC heap before
{Heap before GC invocations=79 (full 8):
PSYoungGen total 48640K, used 47232K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 100% used [0x00000000f5580000,0x00000000f8380000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 73744.279 GC heap after
Heap after GC invocations=79 (full 8):
PSYoungGen total 47104K, used 96K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 75098.826 GC heap before
{Heap before GC invocations=80 (full 8):
PSYoungGen total 47104K, used 46688K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 100% used [0x00000000f5580000,0x00000000f8300000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 75098.831 GC heap after
Heap after GC invocations=80 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 76440.356 GC heap before
{Heap before GC invocations=81 (full 8):
PSYoungGen total 48128K, used 46240K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 100% used [0x00000000f5580000,0x00000000f8280000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 76440.360 GC heap after
Heap after GC invocations=81 (full 8):
PSYoungGen total 46080K, used 128K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

In such cases, perform the following checks:

  • Check the ulimit
    • It is expected to be unlimited
  • Check the limit for the number of open files
    • You can use the following command to count the files currently open by a user
    • lsof | grep <user> | grep -v grep | wc -l

    • Then check limits.conf for the values set for that user
    • <user> soft nofile 350000
      <user> hard nofile 350000
      <user> soft nproc 65536
      <user> hard nproc 65536
      <user> soft stack 10240
      <user> hard stack 10240
      <user> soft sigpending 1548380
      <user> hard sigpending 1548380
  • Hopefully the problem will be resolved.
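The checks above can be sketched from a shell. The commands below inspect the limits of the current session; "appuser" in the comment is a placeholder for the affected application account:

```shell
#!/bin/bash
# Inspect the limits the current shell session runs under.
echo "open files (nofile) : $(ulimit -n)"
echo "max processes (nproc): $(ulimit -u)"
echo "stack size (kbytes) : $(ulimit -s)"

# Count file descriptors currently held open by a user (requires lsof;
# "appuser" is a placeholder for the affected account):
# lsof -u appuser 2>/dev/null | wc -l
```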

Happy Troubleshooting Guys 🙂

Operating System, Redhat / CEntOS / Oracle Linux

/dev/random vs /dev/urandom

If you want random data in a Linux/Unix type OS, the standard way to do so is to use /dev/random or /dev/urandom. These devices are special files. They can be read like normal files and the read data is generated via multiple sources of entropy in the system which provide the randomness.

/dev/random will block after the entropy pool is exhausted. It will remain blocked until additional data has been collected from the sources of entropy that are available. This can slow down random data generation.

/dev/urandom will not block. Instead it will reuse the internal pool to produce more pseudo-random bits.

/dev/urandom is best used when:

  • You just want a large file with random data for some kind of testing.
  • You are using the dd command to wipe data off a disk by replacing it with random data.
  • Almost everywhere else where you don’t have a really good reason to use /dev/random instead.

/dev/random is likely to be the better choice when:

  • Randomness is critical to the security of cryptography in your application – one-time pads, key generation.
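As a quick illustration of the non-blocking behaviour, the sketch below pulls test data from /dev/urandom (the output path is arbitrary):

```shell
# Generate 1 MiB of random test data; /dev/urandom never stalls waiting
# for the entropy pool to refill.
dd if=/dev/urandom of=/tmp/random-test.bin bs=1024 count=1024 2>/dev/null

# On Linux, report how much entropy the kernel pool currently holds:
cat /proc/sys/kernel/random/entropy_avail
```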
Rendezvous, TIBCO


PGM, UDP, and TRDP are the three protocols used in the RV messaging service.

In earlier versions, RV used the PGM protocol; now the UDP-based TRDP (TIBCO Reliable Datagram Protocol) is used when sending messages in RV. Whenever we install RV, we need to select the protocol we want to use: PGM or UDP/TRDP.

TRDP is used to send acknowledgements back to the publisher in case of failures, to deliver messages in sequence, and to hide network details. TRDP (TIBCO Reliable Datagram Protocol) is a proprietary protocol running on top of UDP.

It brings mechanisms to manage reliable message delivery in a broadcast/multicast paradigm, this includes :
– message numbering
– negative acknowledgement

TRDP is used by RV. It offers three qualities of service: Reliable, Certified Messaging and Distributed Queue. In all of them the sender stores the message. In Reliable mode, the sender stores the broadcast message for 60 seconds. In Certified Messaging, the sender stores the message in a ledger file until it receives confirmation from all the certified receivers. In Distributed Queue, the message is stored in the process ledger. Certified Messaging and DQ assure message sequence as well. Overall, TRDP assures delivery of the message.



Enterprise Messaging Service, TIBCO

EMS Queue Requester v/s EMS Queue Sender

JMS Queue Sender :- It simply sends the message to the specified queue and does not wait for any response.

JMS Queue Requestor :- It sends the message to the specified queue and waits for a response from the JMS client, like the SOAP Request Reply activity.

The flow will not proceed until it gets the response or the request times out. This activity uses temporary queues to ensure that reply messages are received only by the process that sent the request.

Input tab :-


Reply to Queue :- The name of the queue on which this activity awaits the reply. If left blank, a temporary queue will be created automatically by Tibco, as shown below in the screenshot.

(Run the show queues command in the Tibco EMS admin tool. All the queues starting with * are either temporary or dynamic queues.)


We can also give the name of a temporary queue.

Request Timeout :- If the response comes within the specified time limit, the flow proceeds; if not, the flow errors out.

We have to use the Reply to JMS Message activity to reply to the message pending on the temporary queue. Once the response is sent back to the temporary queue, the queue is automatically deleted.

Please find the screenshot below for reference :-



Redhat / CEntOS / Oracle Linux, Ubuntu

10 Important “rsync” Commands – UNIX

Rsync (Remote Sync) is one of the most commonly used commands for copying and synchronizing files and directories, both remotely and locally, on Linux/Unix systems. With the help of the rsync command you can copy and synchronize your data remotely and locally across directories, disks and networks, perform data backups, and mirror data between two Linux machines.

This article explains 10 basic and advanced uses of the rsync command to transfer your files remotely and locally on Linux-based machines. You don’t need to be the root user to run the rsync command.

Some advantages and features of the rsync command
  1. It efficiently copies and sync files to or from a remote system.
  2. Supports copying links, devices, owners, groups and permissions.
  3. It’s faster than scp (Secure Copy) because rsync uses the remote-update protocol, which allows it to transfer just the differences between two sets of files. The first time, it copies the whole content of a file or a directory from source to destination, but from then on it copies only the changed blocks and bytes to the destination.
  4. Rsync consumes less bandwidth, as it compresses data while sending and decompresses it while receiving.
Basic syntax of rsync command
# rsync options source destination
Some common options used with rsync commands
  1. -v : verbose
  2. -r : copies data recursively (but doesn’t preserve timestamps and permissions while transferring data)
  3. -a : archive mode, archive mode allows copying files recursively and it also preserves symbolic links, file permissions, user & group ownerships and timestamps
  4. -z : compress file data
  5. -h : human-readable, output numbers in a human-readable format


Install rsync in your Linux machine

We can install rsync package with the help of following command.

# yum install rsync (On Red Hat based systems)
# apt-get install rsync (On Debian based systems)

1. Copy/Sync Files and Directory Locally

Copy/Sync a File on a Local Computer

The following command will sync a single file on a local machine from one location to another. Here in this example, a file named backup.tar needs to be copied or synced to the /tmp/backups/ folder.

[root@tecmint]# rsync -zvh backup.tar /tmp/backups/
created directory /tmp/backups
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10

In the above example, you can see that if the destination does not already exist, rsync will create the directory automatically.

Copy/Sync a Directory on Local Computer

The following command will transfer or sync all the files from one directory to a different directory on the same machine. Here in this example, /root/rpmpkgs contains some rpm package files and you want that directory to be copied inside the /tmp/backups/ folder.

[root@tecmint]# rsync -avzh /root/rpmpkgs /tmp/backups/
sending incremental file list
sent 4.99M bytes  received 92 bytes  3.33M bytes/sec
total size is 4.99M  speedup is 1.00

2. Copy/Sync Files and Directory to or From a Server

Copy a Directory from Local Server to a Remote Server

This command will sync a directory from a local machine to a remote machine. For example: there is a folder on your local computer, “rpmpkgs”, which contains some RPM packages, and you want that local directory’s content sent to a remote server. You can use the following command.

[root@tecmint]$ rsync -avz rpmpkgs/ root@
root@'s password:
sending incremental file list
sent 4993369 bytes  received 91 bytes  399476.80 bytes/sec
total size is 4991313  speedup is 1.00
Copy/Sync a Remote Directory to a Local Machine

This command will help you sync a remote directory to a local directory. Here in this example, a directory /home/hari/rpmpkgs on a remote server is being copied to /tmp/myrpms on your local computer.

[root@tecmint]# rsync -avzh root@ /tmp/myrpms
root@'s password:
receiving incremental file list
created directory /tmp/myrpms
sent 91 bytes  received 4.99M bytes  322.16K bytes/sec
total size is 4.99M  speedup is 1.00

3. Rsync Over SSH

With rsync, we can use SSH (Secure Shell) for data transfer. Using the SSH protocol while transferring data ensures that your data travels over a secured, encrypted connection, so that nobody can read it while it is in transit over the wire on the internet.

Also, when we use rsync we need to provide the user/root password to accomplish that particular task; using the SSH option sends your login in an encrypted manner so that your password stays safe.

Copy a File from a Remote Server to a Local Server with SSH

To specify a protocol with rsync you need to give the “-e” option with the protocol name you want to use. Here in this example, we will be using “ssh” with the “-e” option to perform the data transfer.

[root@tecmint]# rsync -avzhe ssh root@ /tmp/
root@'s password:
receiving incremental file list
sent 30 bytes  received 8.12K bytes  1.48K bytes/sec
total size is 30.74K  speedup is 3.77
Copy a File from a Local Server to a Remote Server with SSH
[root@tecmint]# rsync -avzhe ssh backup.tar root@
root@'s password:
sending incremental file list
sent 14.71M bytes  received 31 bytes  1.28M bytes/sec
total size is 16.18M  speedup is 1.10


4. Show Progress While Transferring Data with rsync

To show progress while transferring data from one machine to another, use the ‘--progress’ option. It displays the files and the time remaining to complete the transfer.

[root@tecmint]# rsync -avzhe ssh --progress /home/rpmpkgs root@
root@'s password:
sending incremental file list
created directory /root/rpmpkgs
1.02M 100%        2.72MB/s        0:00:00 (xfer#1, to-check=3/5)
99.04K 100%  241.19kB/s        0:00:00 (xfer#2, to-check=2/5)
1.79M 100%        1.56MB/s        0:00:01 (xfer#3, to-check=1/5)
2.09M 100%        1.47MB/s        0:00:01 (xfer#4, to-check=0/5)
sent 4.99M bytes  received 92 bytes  475.56K bytes/sec
total size is 4.99M  speedup is 1.00

5. Use of --include and --exclude Options

These two options allow us to include and exclude files by specifying parameters: they help us specify the files or directories we want included in the sync, and exclude the files and folders we don’t want transferred.

Here in this example, the rsync command will include only those files and directories which start with ‘R’ and exclude all others.

[root@tecmint]# rsync -avze ssh --include 'R*' --exclude '*' root@ /root/rpm
root@'s password:
receiving incremental file list
created directory /root/rpm
sent 67 bytes  received 167289 bytes  7438.04 bytes/sec
total size is 434176  speedup is 2.59

6. Use of --delete Option

If a file or directory does not exist at the source but already exists at the destination, you might want to delete that existing file/directory at the target while syncing.

We can use the ‘--delete’ option to delete files that are not present in the source directory.

Source and target are in sync. Now create a new file test.txt at the target.

[root@tecmint]# touch test.txt
[root@tecmint]# rsync -avz --delete root@ .
receiving file list ... done
deleting test.txt
sent 26 bytes  received 390 bytes  48.94 bytes/sec
total size is 45305958  speedup is 108908.55

The target had the new file called test.txt; when synchronized with the source using the ‘--delete’ option, rsync removed the file test.txt.

7. Set the Max Size of Files to be Transferred

You can specify the maximum size of files to be transferred or synced with the “--max-size” option. Here in this example, the max file size is 200k, so this command will transfer only those files which are equal to or smaller than 200k.

[root@tecmint]# rsync -avzhe ssh --max-size='200k' /var/lib/rpm/ root@
root@'s password:
sending incremental file list
created directory /root/tmprpm
sent 189.79K bytes  received 224 bytes  13.10K bytes/sec
total size is 38.08M  speedup is 200.43

8. Automatically Delete source Files after successful Transfer

Now, suppose you have a main web server and a data backup server; you create a daily backup and sync it with your backup server, and you don’t want to keep that local copy of the backup on your web server.

So, will you wait for the transfer to complete and then delete those local backup files manually? Of course NOT. This automatic deletion can be done using the ‘--remove-source-files’ option.

[root@tecmint]# rsync --remove-source-files -zvh backup.tar /tmp/backups/
sent 14.71M bytes  received 31 bytes  4.20M bytes/sec
total size is 16.18M  speedup is 1.10
[root@tecmint]# ll backup.tar
ls: backup.tar: No such file or directory

9. Do a Dry Run with rsync

If you are a newbie using rsync and don’t know exactly what your command is going to do, rsync could really mess up the things in your destination folder, and undoing that can be a tedious job.

Using the ‘--dry-run’ option will not make any changes; it only does a dry run of the command and shows the output. If the output shows exactly what you want to do, you can then remove the ‘--dry-run’ option from your command and run it on the terminal.

[root@tecmint]# rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/
sent 35 bytes  received 15 bytes  100.00 bytes/sec
total size is 16.18M  speedup is 323584.00 (DRY RUN)

10. Set Bandwidth Limit and Transfer File

You can set a bandwidth limit while transferring data from one machine to another with the help of the ‘--bwlimit’ option. This option helps us limit I/O bandwidth.

[root@tecmint]# rsync --bwlimit=100 -avzhe ssh  /var/lib/rpm/  root@
root@'s password:
sending incremental file list
sent 324 bytes  received 12 bytes  61.09 bytes/sec
total size is 38.08M  speedup is 113347.05

Also, by default rsync syncs changed blocks and bytes only; if you explicitly want to sync the whole file, use the ‘-W’ option.

[root@tecmint]# rsync -zvhW backup.tar /tmp/backups/backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10

rsync -azP "<user>@<host>:<absolute path>" <location to be copied>

Source :- tecmint.com
Operating System, Redhat / CEntOS / Oracle Linux, Ubuntu

Fork: retry: Resource temporarily unavailable


It was reported that a particular application user was not able to log in.

1. Tried logging in to the system with the root user; it was fine.
2. Tried to switch user; it failed with the error “Write Failed; Broken Pipe”.
3. Created a file and it was working.
4. Tried switching the user again. This time it went through.
5. Tried running some jobs with the user. It threw an error saying “fork: retry: Resource temporarily unavailable”.
6. Then checked the “/etc/security/limits.d/90-nproc.conf” file to find out that all the users are given an nproc limit of 1024.

To resolve it:

1. Changed it to a higher value, which solved the issue.
2. I changed the value to 4096.
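A quick way to see how close a user is to the nproc limit is to compare the limit with the live process count. The sketch below uses the current user as a stand-in for the affected account:

```shell
#!/bin/bash
# Compare the per-user process limit with the number of processes running.
user=$(id -un)
echo "nproc limit for this shell : $(ulimit -u)"
echo "processes running as $user : $(ps -u "$user" --no-headers | wc -l)"
```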
Redhat / CEntOS / Oracle Linux, Ubuntu

How to use parallel ssh (PSSH) for executing ssh in parallel on a number of Linux/Unix/BSD servers

Recently I came across a nice little nifty tool called pssh to run a single command on multiple Linux / UNIX / BSD servers. You can easily increase your productivity with this SSH tool.
More about pssh
pssh is a command line tool for executing ssh in parallel on a number of hosts. Its specialties include:
  1. Sending input to all of the processes
  2. Inputting a password to ssh
  3. Saving output to files
  4. IT/sysadmin task automation, such as patching servers
  5. Timing out and more
Let us see how to install and use pssh on Linux and Unix-like system.
You can install pssh as per your Linux and Unix variant. Once the package is installed, you get parallel versions of the openssh tools. Included in the installation:
  1. Parallel ssh (pssh command)
  2. Parallel scp (pscp command )
  3. Parallel rsync (prsync command)
  4. Parallel nuke (pnuke command)
  5. Parallel slurp (pslurp command)
Install pssh on Debian/Ubuntu Linux
Type the following apt-get command/apt command to install pssh:
$ sudo apt install pssh
$ sudo apt-get install pssh
Sample outputs:
Fig.01: Installing pssh on Debian/Ubuntu Linux

Fig.01: Installing pssh on Debian/Ubuntu Linux

Install pssh on Apple MacOS X
Type the following brew command:
$ brew install pssh
Sample outputs:
Fig.02: Installing pssh on MacOS Unix

Fig.02: Installing pssh on MacOS Unix

Install pssh on FreeBSD unix
Type any one of the command:
# cd /usr/ports/security/pssh/ && make install clean
# pkg install pssh
Sample outputs:
Fig.03: Installing pssh on FreeBSD

Fig.03: Installing pssh on FreeBSD

Install pssh on RHEL/CentOS/Fedora Linux
First turn on EPEL repo and type the following command yum command:
$ sudo yum install pssh
Sample outputs:
Fig.04: Installing pssh on RHEL/CentOS/Red Hat Enterprise Linux

Fig.04: Installing pssh on RHEL/CentOS/Red Hat Enterprise Linux

Install pssh on Fedora Linux
Type the following dnf command:
$ sudo dnf install pssh
Sample outputs:
Fig.05: Installing pssh on Fedora

Fig.05: Installing pssh on Fedora

Install pssh on Arch Linux
Type the following command:
$ sudo pacman -S python-pip
$ pip install pssh
How to use pssh command
First you need to create a text file called a hosts file, from which pssh reads host names. The syntax is pretty simple: each line in the hosts file is of the form [user@]host[:port], and the file can include blank lines and comment lines beginning with “#”. Here is my sample file named ~/.pssh_hosts_files:
$ cat ~/.pssh_hosts_files
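The sample file contents did not survive here; a hypothetical hosts file (every host name and the port below are made-up placeholders) could look like this:

```shell
cat > /tmp/pssh_hosts_demo <<'EOF'
# web servers
root@web01.example.com
vivek@web02.example.com:2222
# default port, current user
web03.example.com
EOF
```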

Run the date command on all hosts:
$ pssh -i -h ~/.pssh_hosts_files date
Sample outputs:
[1] 18:10:10 [SUCCESS] root@ Sun Feb 26 18:10:10 IST 2017
[2] 18:10:10 [SUCCESS] vivek@dellm6700 Sun Feb 26 18:10:10 IST 2017
[3] 18:10:10 [SUCCESS] root@ Sun Feb 26 18:10:10 IST 2017
[4] 18:10:10 [SUCCESS] root@ Sun Feb 26 18:10:10 IST 2017
Run the uptime command on each host:
$ pssh -i -h ~/.pssh_hosts_files uptime
Sample outputs:
[1] 18:11:15 [SUCCESS] root@ 18:11:15 up 2:29, 0 users, load average: 0.00, 0.00, 0.00
[2] 18:11:15 [SUCCESS] vivek@dellm6700 18:11:15 up 19:06, 0 users, load average: 0.13, 0.25, 0.27
[3] 18:11:15 [SUCCESS] root@ 18:11:15 up 1:55, 0 users, load average: 0.00, 0.00, 0.00
[4] 18:11:15 [SUCCESS] root@ 6:11PM up 1 day, 21:38, 0 users, load averages: 0.12, 0.14, 0.09
You can now automate common sysadmin tasks such as patching all servers:
$ pssh -h ~/.pssh_hosts_files -- sudo yum -y update
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y update
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y upgrade
How do I use pssh to copy a file to all servers?
The syntax is:
pscp -h ~/.pssh_hosts_files src dest
To copy $HOME/demo.txt to /tmp/ on all servers, enter:
$ pscp -h ~/.pssh_hosts_files $HOME/demo.txt /tmp/
Sample outputs:
[1] 18:17:35 [SUCCESS] vivek@dellm6700
[2] 18:17:35 [SUCCESS] root@
[3] 18:17:35 [SUCCESS] root@
[4] 18:17:35 [SUCCESS] root@
Or use the prsync command for efficient copying of files:
$ prsync -h ~/.pssh_hosts_files /etc/passwd /tmp/
$ prsync -h ~/.pssh_hosts_files *.html /var/www/html/
How do I kill processes in parallel on a number of hosts?
Use the pnuke command for killing processes in parallel on a number of hosts. The syntax is:
$ pnuke -h ~/.pssh_hosts_files process_name
### kill nginx and firefox on hosts:
$ pnuke -h ~/.pssh_hosts_files firefox
$ pnuke -h ~/.pssh_hosts_files nginx

See pssh/pscp command man pages for more information.
pssh is a pretty good tool for parallel SSH command execution on many servers. It is quite useful if you have 5 or 10 servers. Nevertheless, if you need to do something complicated, you should look into Ansible and co.
Elasticsearch Logstash and Kibana

ELK on CEntOS 7 – (Source UnixMen)


For those who don’t know, Elastic Stack (ELK Stack) is an infrastructure software program made up of multiple components developed by Elastic. The components include:

  • Beats: open-source data shippers working as agents on the servers to send different types of operational data to Elasticsearch.
  • Elasticsearch: a highly scalable open source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.
  • Kibana: open source analytics and visualization platform designed to work with Elasticsearch. It is used to interact with data stored in Elasticsearch indices. It has a browser-based interface that enables quick creation and sharing of dynamic dashboards that display changes to Elasticsearch queries in real time.
  • Logstash: logs and events collection engine, which provides a real-time pipeline. It can take data from multiple sources and convert them into JSON documents.

This tutorial will take you through the process of installing the Elastic Stack on a CentOS 7 server.

Getting started

First of all, we need Java 8, so you’ll need to download the official Oracle rpm package.

# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"

Install it with rpm:

# rpm -ivh jdk-8u77-linux-x64.rpm

Ensure that it is working properly by checking it on your server:

# java -version

Install Elasticsearch

First, download and install the public signing key:

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create a file called elasticsearch.repo in /etc/yum.repos.d/, and paste the following lines:

[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1

Now, the repository is ready for use. Install Elasticsearch with yum:

# yum install elasticsearch

Configuring Elasticsearch

Go to the configuration directory and edit the elasticsearch.yml configuration file, like this:

# $EDITOR /etc/elasticsearch/elasticsearch.yml

Enable the memory lock by removing the comment on line 43:
bootstrap.memory_lock: true
Then, scroll until you reach the “Network” section and uncomment the network and HTTP port lines (set the host to your server IP):

network.host: <your server IP>
http.port: 9200

Save and exit.

Next, it’s time to configure the memory lock. In /usr/lib/systemd/system/, edit elasticsearch.service. There, uncomment the line:

LimitMEMLOCK=infinity
Save and exit.

Now go to the configuration file for Elasticsearch:

# $EDITOR /etc/sysconfig/elasticsearch

Uncomment line 60 and be sure it contains the following content:

MAX_LOCKED_MEMORY=unlimited
Now, Elasticsearch is configured. It will run on the IP address you specified (change it to “localhost” if necessary) on port 9200. Next:

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch

Install Kibana

When Elasticsearch has been configured and started, install and configure Kibana with a web server. In this case, we will use Nginx.
As in the case of Elasticsearch, install Kibana with wget and rpm:

# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
# rpm -ivh kibana-5.1.1-x86_64.rpm

Edit Kibana configuration file:

# $EDITOR /etc/kibana/kibana.yml

There, uncomment:

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save, exit and start Kibana.

# systemctl enable kibana
# systemctl start kibana

Now, install Nginx and configure it as a reverse proxy. This way it’s possible to access Kibana from the public IP address.
Nginx is available in the EPEL repository:

# yum -y install epel-release


# yum -y install nginx httpd-tools

In the Nginx configuration file (/etc/nginx/nginx.conf), remove the server { } block. Then save and exit.

Create a Virtual Host configuration file:

# $EDITOR /etc/nginx/conf.d/kibana.conf

There, paste the following content:

server {
    listen 80;
    server_name elk-stack.co;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Create a new authentication file:

# htpasswd -c /etc/nginx/.kibana-user admin


# systemctl enable nginx
# systemctl start nginx

Install Logstash

As for Elasticsearch and Kibana:

# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
# rpm -ivh logstash-5.1.1.rpm

It’s necessary to create a new SSL certificate. First, edit the openssl.cnf file:

# $EDITOR /etc/pki/tls/openssl.cnf

In the [ v3_ca ] section, add the server identification:

[ v3_ca ]

# Server IP Address
subjectAltName = IP: IP_ADDRESS

After saving and exiting, generate the certificate:

# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt

Next, you can create a new file to configure the log sources for Filebeat, then a file for syslog processing, and a file to define the Elasticsearch output.

These configurations depend on how you want to filter the data.
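As one hedged sketch of what such a file can contain (the port, certificate paths and index pattern are assumptions carried over from common Filebeat setups), a single pipeline file could pair a Beats input with an Elasticsearch output:

```shell
# Hypothetical pipeline file; files under /etc/logstash/conf.d/ are read
# in lexical order, so a numeric prefix keeps inputs before outputs.
cat > /tmp/10-beats-pipeline.conf <<'EOF'
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
EOF
```

Once the file matches your filtering needs, move it into /etc/logstash/conf.d/ and restart Logstash.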


# systemctl enable logstash
# systemctl start logstash

You have now successfully installed and configured the ELK Stack server-side!

Redhat / CEntOS / Oracle Linux, Ubuntu

30 Shades of “Alias” Command – UNIX

You can define various types of aliases as follows to save time and increase productivity.
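Worth noting before the list: aliases typed at a prompt vanish when the session ends, so persist the ones you keep in your shell rc file. A small sketch (using a temporary file in place of ~/.bashrc):

```shell
rc=/tmp/demo_bashrc          # stand-in for ~/.bashrc in this sketch
echo "alias ll='ls -la'" >> "$rc"
. "$rc"                      # reload so the alias takes effect immediately
alias ll                     # shows the stored definition
```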

#1: Control ls command output

The ls command lists directory contents and you can colorize the output:

## Colorize the ls output ##
alias ls='ls --color=auto'
## Use a long listing format ##
alias ll='ls -la' 
## Show hidden files ##
alias l.='ls -d .* --color=auto'

#2: Control cd command behavior

## get rid of command not found ##
alias cd..='cd ..' 
## a quick way to get out of current directory ##
alias ..='cd ..' 
alias ...='cd ../../../' 
alias ....='cd ../../../../' 
alias .....='cd ../../../../..' 
alias .4='cd ../../../../' 
alias .5='cd ../../../../..'

#3: Control grep command output

grep command is a command-line utility for searching plain-text files for lines matching a regular expression:

## Colorize the grep command output for ease of use (good for log files)##
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'

#4: Start calculator with math support

alias bc='bc -l'

#4.1: Generate sha1 digest

alias sha1='openssl sha1'

#5: Create parent directories on demand

mkdir command is used to create a directory:

alias mkdir='mkdir -pv'

#6: Colorize diff output

You can compare files line by line using diff and use a tool called colordiff to colorize diff output:

# install  colordiff package 🙂
alias diff='colordiff'

#7: Make mount command output pretty and human readable format

alias mount='mount |column -t'

#8: Command short cuts to save time

# handy short cuts #
alias h='history'
alias j='jobs -l'

#9: Create a new set of commands

alias path='echo -e ${PATH//:/\\n}'
alias now='date +"%T"'
alias nowtime=now
alias nowdate='date +"%d-%m-%Y"'

#10: Set vim as default

alias vi=vim 
alias svi='sudo vi' 
alias vis='vim "+set si"' 
alias edit='vim'

#11: Control output of networking tool called ping

# Stop after sending count ECHO_REQUEST packets #
alias ping='ping -c 5'
# Do not wait a full 1 second between packets, go fast #
alias fastping='ping -c 100 -i .2'

#12: Show open ports

Use the netstat command to quickly list all TCP/UDP ports on the server:

alias ports='netstat -tulanp'

#13: Wakeup sleeping servers

Wake-on-LAN (WOL) is an Ethernet networking standard that allows a server to be turned on by a network message. You can quickly wake up NAS devices and servers using the following aliases:

## replace mac with your actual server mac address #
alias wakeupnas01='/usr/bin/wakeonlan 00:11:32:11:15:FC'
alias wakeupnas02='/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03='/usr/bin/wakeonlan 00:11:32:11:15:FE'

#14: Control firewall (iptables) output

Netfilter is a host-based firewall for Linux operating systems. It is included as part of the Linux distribution and is activated by default. These aliases cover the most common iptables tasks a new Linux user needs to secure his or her Linux operating system from intruders.

## shortcut  for iptables and pass it via sudo#
alias ipt='sudo /sbin/iptables'
# display all rules #
alias iptlist='sudo /sbin/iptables -L -n -v --line-numbers'
alias iptlistin='sudo /sbin/iptables -L INPUT -n -v --line-numbers'
alias iptlistout='sudo /sbin/iptables -L OUTPUT -n -v --line-numbers'
alias iptlistfw='sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall=iptlist

#15: Debug web server / cdn problems with curl

# get web server headers #
alias header='curl -I'
# find out if remote server supports gzip / mod_deflate or not #
alias headerc='curl -I --compress'

#16: Add safety nets

# do not delete / or prompt if deleting more than 3 files at a time #
alias rm='rm -I --preserve-root'
# confirmation #
alias mv='mv -i' 
alias cp='cp -i' 
alias ln='ln -i'
# Prevent changing perms on / #
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

#17: Update Debian Linux server

The apt-get command is used for installing packages over the internet (ftp or http). You can also upgrade all packages in a single operation:

# distro specific  - Debian / Ubuntu and friends #
# install with apt-get
alias apt-get="sudo apt-get" 
alias updatey="sudo apt-get --yes" 
# update on one command 
alias update='sudo apt-get update && sudo apt-get upgrade'

#18: Update RHEL / CentOS / Fedora Linux server

The yum command is a package management tool for RHEL / CentOS / Fedora Linux and friends:

## distro specific RHEL/CentOS ##
alias update='yum update'
alias updatey='yum -y update'

#19: Tune sudo and su

# become root #
alias root='sudo -i'
alias su='sudo -i'

#20: Pass halt/reboot via sudo

The shutdown command brings the Linux / Unix system down:

# reboot / halt / poweroff
alias reboot='sudo /sbin/reboot'
alias poweroff='sudo /sbin/poweroff'
alias halt='sudo /sbin/halt'
alias shutdown='sudo /sbin/shutdown'

#21: Control web servers

# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload='sudo /usr/local/nginx/sbin/nginx -s reload'
alias nginxtest='sudo /usr/local/nginx/sbin/nginx -t'
alias lightyload='sudo /etc/init.d/lighttpd reload'
alias lightytest='sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -t'
alias httpdreload='sudo /usr/sbin/apachectl -k graceful'
alias httpdtest='sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'

#22: Alias into our backup stuff

# if cron fails or if you want backup on demand just run these commands # 
# again pass it via sudo so whoever is in admin group can start the job #
# Backup scripts #
alias backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type local --target /raid1/backups'
alias nasbackup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01'
alias s3backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01 --auth /home/scripts/admin/.authdata/amazon.keys'
alias rsnapshothourly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotdaily='sudo  /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotweekly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotmonthly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias amazonbackup=s3backup

#23: Desktop specific – play avi/mp3 files on demand

## play video files in a current directory ##
# cd ~/Download/movie-name 
# playavi or vlc 
alias playavi='mplayer *.avi'
alias vlc='vlc *.avi'
# play all music files from the current directory #
alias playwave='for i in *.wav; do mplayer "$i"; done'
alias playogg='for i in *.ogg; do mplayer "$i"; done'
alias playmp3='for i in *.mp3; do mplayer "$i"; done'
# play files from nas devices #
alias nplaywave='for i in /nas/multimedia/wave/*.wav; do mplayer "$i"; done'
alias nplayogg='for i in /nas/multimedia/ogg/*.ogg; do mplayer "$i"; done'
alias nplaymp3='for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done'
# shuffle mp3/ogg etc by default #
alias music='mplayer --shuffle *'

#24: Set default interfaces for sys admin related commands

vnstat is a console-based network traffic monitor, and dnstop is a console tool for analyzing DNS traffic. The tcptrack and iftop commands display information about TCP/UDP connections on a network interface and per-host bandwidth usage on an interface, respectively.

## On all of our servers, eth1 is connected to the Internet via vlan / router etc. ##
alias dnstop='dnstop -l 5  eth1'
alias vnstat='vnstat -i eth1'
alias iftop='iftop -i eth1'
alias tcpdump='tcpdump -i eth1'
alias ethtool='ethtool eth1'
# work on wlan0 by default #
# Only useful for laptop as all servers are without wireless interface
alias iwconfig='iwconfig wlan0'

#25: Get system memory, cpu usage, and gpu memory info quickly

## pass options to free ## 
alias meminfo='free -m -l -t'
## get top processes eating memory ##
alias psmem='ps auxf | sort -nr -k 4'
alias psmem10='ps auxf | sort -nr -k 4 | head -10'
## get top processes eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
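On systems with GNU ps (procps-ng), the external sort pipe can be avoided: ps can sort its own output via `--sort`. A rough equivalent of the psmem10/pscpu10 aliases above (`head -11` keeps the header row plus ten processes):

```shell
# GNU ps sorts internally; a leading - on -%mem / -%cpu means descending order.
alias psmem10='ps aux --sort=-%mem | head -11'
alias pscpu10='ps aux --sort=-%cpu | head -11'
```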
## Get server cpu info ##
alias cpuinfo='lscpu'
## older system use /proc/cpuinfo ##
##alias cpuinfo='less /proc/cpuinfo' ##
## get GPU ram on desktop / laptop## 
alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'

#26: Control Home Router

The curl command can be used to reboot Linksys routers.

# Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix.
alias rebootlinksys="curl -u 'admin:my-super-password' ''"
# Reboot tomato based Asus NT16 wireless bridge 
alias reboottomato="ssh admin@ /sbin/reboot"

#27: Resume wget by default

GNU Wget is a free utility for non-interactive download of files from the Web. It supports the HTTP, HTTPS, and FTP protocols, and it can resume interrupted downloads too:

## this one has saved my butt so many times ##
alias wget='wget -c'

#28: Use a different browser for testing websites

alias ff4='/opt/firefox4/firefox'
alias ff13='/opt/firefox13/firefox'
alias chrome='/opt/google/chrome/chrome'
alias opera='/opt/opera/opera'
#default ff 
alias ff=ff13
#my default browser 
alias browser=chrome

#29: A note about ssh alias

Do not create ssh aliases; instead, use the ~/.ssh/config OpenSSH client configuration file. It offers many more options. An example:

Host server10
  IdentityFile ~/backups/.ssh/id_dsa
  User foobar
  Port 30000
  ForwardX11Trusted yes
  TCPKeepAlive yes

You can now connect to server10 using the following syntax:
$ ssh server10

#30: It’s your turn to share…

## set some other defaults ##
alias df='df -H'
alias du='du -ch'
# top is atop, just like vi is vim
alias top='atop' 
## nfsrestart  - must be root  ##
## refresh nfs mount / cache etc for Apache ##
alias nfsrestart='sync && sleep 2 && /etc/init.d/httpd stop && umount netapp2:/exports/http && sleep 2 && mount -o rw,sync,rsize=32768,wsize=32768,intr,hard,proto=tcp,fsc netapp2:/exports/http /var/www/html && /etc/init.d/httpd start'
## Memcached server status  ##
alias mcdstats='/usr/bin/memcached-tool stats'
alias mcdshow='/usr/bin/memcached-tool display'
## quickly flush out memcached server ##
alias flushmcd='echo "flush_all" | nc 11211'
## Remove assets quickly from Akamai / Amazon cdn ##
alias cdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai'
alias amzcdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon'
## supply list of urls via file or stdin
alias cdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdin'
alias amzcdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'