

How to Delete all files except a Pattern in Unix

Good morning to all my tech folks,

Today I'm going to show you a command to delete all files except those matching a pattern.

You can use it in a script or straight on the command line. Life gets easy!

find . -type f ! -name '<pattern>' -delete

A Live Example



After the following command, only the *.gz files remain:

find . -type f ! -name '*.gz' -delete
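If you'd rather do this from a script, here is a small Python sketch of the same "keep only a pattern" cleanup (the keep_only helper is hypothetical, written for this post, not part of any standard tool):

```python
import tempfile
from pathlib import Path

def keep_only(directory, pattern):
    """Delete every regular file under `directory` whose name does NOT
    match `pattern` -- mirrors: find . -type f ! -name '<pattern>' -delete
    """
    removed = []
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file() and not path.match(pattern):
            path.unlink()
            removed.append(path.name)
    return removed

# Demo on a scratch directory so nothing real is touched.
demo = Path(tempfile.mkdtemp())
for name in ("a.gz", "b.gz", "notes.txt"):
    (demo / name).touch()
deleted = keep_only(demo, "*.gz")
survivors = sorted(p.name for p in demo.iterdir())
print(deleted, survivors)  # ['notes.txt'] ['a.gz', 'b.gz']
```

As with the find one-liner, try it on a scratch directory first; there is no undo.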


How To Patch and Protect Linux Kernel Stack Clash Vulnerability CVE-2017-1000364 [ 19/June/2017 ]

A very serious security problem has been found in the Linux kernel called “The Stack Clash.” It can be exploited by attackers to corrupt memory and execute arbitrary code. An attacker could leverage this with another vulnerability to execute arbitrary code and gain administrative/root account privileges. How do I fix this problem on Linux?

The Qualys Research Labs discovered various problems in the dynamic linker of the GNU C Library (CVE-2017-1000366) which allow local privilege escalation by clashing the stack, including on the Linux kernel. The bug affects Linux, OpenBSD, NetBSD, FreeBSD and Solaris, on i386 and amd64. It can be exploited by attackers to corrupt memory and execute arbitrary code.

What is CVE-2017-1000364 bug?

From RHN:

A flaw was found in the way memory was being allocated on the stack for user space binaries. If heap (or different memory region) and stack memory regions were adjacent to each other, an attacker could use this flaw to jump over the stack guard gap, cause controlled memory corruption on process stack or the adjacent memory region, and thus increase their privileges on the system. This is a kernel-side mitigation which increases the stack guard gap size from one page to 1 MiB to make successful exploitation of this issue more difficult.
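To put the mitigation in concrete numbers (assuming the usual 4 KiB x86 page size; other architectures may differ):

```python
PAGE_SIZE = 4096           # typical x86 page size, in bytes
old_gap = 1 * PAGE_SIZE    # pre-patch stack guard gap: one page
new_gap = 1024 * 1024      # post-patch stack guard gap: 1 MiB
print(new_gap // old_gap)  # 256 -- the guard gap grows 256-fold
```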

As per the original research post:

Each program running on a computer uses a special memory region called the stack. This memory region is special because it grows automatically when the program needs more stack memory. But if it grows too much and gets too close to another memory region, the program may confuse the stack with the other memory region. An attacker can exploit this confusion to overwrite the stack with the other memory region, or the other way around.

A list of affected Linux distros

  1. Red Hat Enterprise Linux Server 5.x
  2. Red Hat Enterprise Linux Server 6.x
  3. Red Hat Enterprise Linux Server 7.x
  4. CentOS Linux Server 5.x
  5. CentOS Linux Server 6.x
  6. CentOS Linux Server 7.x
  7. Oracle Enterprise Linux Server 5.x
  8. Oracle Enterprise Linux Server 6.x
  9. Oracle Enterprise Linux Server 7.x
  10. Ubuntu 17.10
  11. Ubuntu 17.04
  12. Ubuntu 16.10
  13. Ubuntu 16.04 LTS
  14. Ubuntu 12.04 ESM (Precise Pangolin)
  15. Debian 9 stretch
  16. Debian 8 jessie
  17. Debian 7 wheezy
  18. Debian unstable
  19. SUSE Linux Enterprise Desktop 12 SP2
  20. SUSE Linux Enterprise High Availability 12 SP2
  21. SUSE Linux Enterprise Live Patching 12
  22. SUSE Linux Enterprise Module for Public Cloud 12
  23. SUSE Linux Enterprise Build System Kit 12 SP2
  24. SUSE Openstack Cloud Magnum Orchestration 7
  25. SUSE Linux Enterprise Server 11 SP3-LTSS
  26. SUSE Linux Enterprise Server 11 SP4
  27. SUSE Linux Enterprise Server 12 SP1-LTSS
  28. SUSE Linux Enterprise Server 12 SP2
  29. SUSE Linux Enterprise Server for Raspberry Pi 12 SP2

Do I need to reboot my box?

Yes. Most services depend upon the dynamic linker of the GNU C Library, and the kernel itself needs to be reloaded in memory.

How do I fix CVE-2017-1000364 on Linux?

Type the commands as per your Linux distro. You need to reboot the box afterwards. Before you apply the patch, note down your current kernel version:
$ uname -a
$ uname -mrs

Sample outputs:

Linux 4.4.0-78-generic x86_64

Debian or Ubuntu Linux

Type the following apt command/apt-get command to apply updates:
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
Sample outputs:

Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  libc-bin libc-dev-bin libc-l10n libc6 libc6-dev libc6-i386 linux-compiler-gcc-6-x86 linux-headers-4.9.0-3-amd64 linux-headers-4.9.0-3-common linux-image-4.9.0-3-amd64
  linux-kbuild-4.9 linux-libc-dev locales multiarch-support
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/62.0 MB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc6-i386_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-i386 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc6-dev_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-dev:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc-dev-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-dev-bin (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../linux-libc-dev_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../libc6_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc6:amd64 (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-bin (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc-bin (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../multiarch-support_2.24-11+deb9u1_amd64.deb ...
Unpacking multiarch-support (2.24-11+deb9u1) over (2.24-11) ...
Setting up multiarch-support (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../0-libc-l10n_2.24-11+deb9u1_all.deb ...
Unpacking libc-l10n (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../1-locales_2.24-11+deb9u1_all.deb ...
Unpacking locales (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../2-linux-compiler-gcc-6-x86_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../3-linux-headers-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../4-linux-headers-4.9.0-3-common_4.9.30-2+deb9u1_all.deb ...
Unpacking linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../5-linux-kbuild-4.9_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../6-linux-image-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Setting up linux-libc-dev:amd64 (4.9.30-2+deb9u1) ...
Setting up linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) ...
Setting up libc6-i386 (2.24-11+deb9u1) ...
Setting up linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) ...
Setting up linux-kbuild-4.9 (4.9.30-2+deb9u1) ...
Setting up libc-l10n (2.24-11+deb9u1) ...
Processing triggers for man-db ( ...
Setting up libc-dev-bin (2.24-11+deb9u1) ...
Setting up linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
update-initramfs: Generating /boot/initrd.img-4.9.0-3-amd64
cryptsetup: WARNING: failed to detect canonical device of /dev/md0
cryptsetup: WARNING: could not determine root device from /etc/fstab
W: initramfs-tools configuration sets RESUME=UUID=054b217a-306b-4c18-b0bf-0ed85af6c6e1
W: but no matching swap device is available.
I: The initramfs will attempt to resume from /dev/md1p1
I: (UUID=bf72f3d4-3be4-4f68-8aae-4edfe5431670)
I: Set the RESUME variable to override this.
Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /boot/vmlinuz-4.9.0-3-amd64
Found kernel: /boot/vmlinuz-3.16.0-4-amd64
Updating /boot/grub/menu.lst ... done

Setting up libc6-dev:amd64 (2.24-11+deb9u1) ...
Setting up locales (2.24-11+deb9u1) ...
Generating locales (this might take a while)...
  en_IN.UTF-8... done
Generation complete.
Setting up linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...

Reboot your server/desktop using reboot command:
$ sudo reboot

Oracle/RHEL/CentOS/Scientific Linux

Type the following yum command:
$ sudo yum update
$ sudo reboot

Fedora Linux

Type the following dnf command:
$ sudo dnf update
$ sudo reboot

SUSE Linux Enterprise or openSUSE Linux

Type the following zypper command:
$ sudo zypper patch
$ sudo reboot

SUSE OpenStack Cloud 6

$ sudo zypper in -t patch SUSE-OpenStack-Cloud-6-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server for SAP 12-SP1

$ sudo zypper in -t patch SUSE-SLE-SAP-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server 12-SP1-LTSS

$ sudo zypper in -t patch SUSE-SLE-SERVER-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Module for Public Cloud 12

$ sudo zypper in -t patch SUSE-SLE-Module-Public-Cloud-12-2017-996=1
$ sudo reboot


Make sure your kernel version number has changed after issuing the reboot command:
$ uname -a
$ uname -r
$ uname -mrs

Sample outputs:

Linux 4.4.0-81-generic x86_64

Apache Kafka – Use cases

Here is a description of a few of the popular use cases for Apache Kafka™. For an overview of a number of these areas in action, see this blog post.


Messaging

Kafka works well as a replacement for a more traditional message broker. Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc.). In comparison to most messaging systems, Kafka has better throughput, built-in partitioning, replication, and fault-tolerance, which makes it a good solution for large-scale message processing applications.

In our experience messaging uses are often comparatively low-throughput, but may require low end-to-end latency and often depend on the strong durability guarantees Kafka provides.

In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.

Website Activity Tracking

The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.

Activity tracking is often very high volume as many activity messages are generated for each user page view.


Metrics

Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

Log Aggregation

Many people use Kafka as a replacement for a log aggregation solution. Log aggregation typically collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption. In comparison to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees due to replication, and much lower end-to-end latency.

Stream Processing

Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an “articles” topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include Apache Storm and Apache Samza.
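As a toy illustration only (plain Python lists standing in for topics, not actual Kafka Streams code), such a multi-stage pipeline looks like this:

```python
# Toy model of a multi-stage pipeline: each "topic" is just a list,
# and each stage consumes one topic and publishes to the next.
articles_topic = ["Breaking News!", "breaking news!", "Local Story"]

# Stage 1: normalize and deduplicate into a "cleansed" topic.
seen = set()
cleansed_topic = []
for article in articles_topic:
    key = article.lower()
    if key not in seen:
        seen.add(key)
        cleansed_topic.append(key)

# Stage 2: "recommend" by publishing to a recommendations topic.
recommendations_topic = [{"user": "u1", "article": a} for a in cleansed_topic]
print(cleansed_topic)  # ['breaking news!', 'local story']
```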

Event Sourcing

Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Kafka’s support for very large stored log data makes it an excellent backend for an application built in this style.
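A minimal sketch of the idea, with a plain Python list standing in for the Kafka topic:

```python
# Event sourcing in miniature: state is never stored directly; it is
# rebuilt by replaying the time-ordered event log (the "topic").
event_log = [
    {"type": "deposit", "amount": 100},
    {"type": "withdraw", "amount": 30},
    {"type": "deposit", "amount": 5},
]

def replay(events):
    balance = 0
    for event in events:
        if event["type"] == "deposit":
            balance += event["amount"]
        elif event["type"] == "withdraw":
            balance -= event["amount"]
    return balance

print(replay(event_log))  # 75
```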

Commit Log

Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. The log compaction feature in Kafka helps support this usage. In this usage Kafka is similar to the Apache BookKeeper project.

Apache Kafka – Producer / Consumer Basic Test (With Youtube Video)

In the Kafka server, make the following changes in config/server.properties:

  • cd $KAFKA_HOME/config

  • vim server.properties

Config File Changes :-

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

Considering there is a Kafka server and two different servers on which the Kafka client is installed:

At Producer Client Server :-

  • cd $KAFKA_HOME/bin

  • ./kafka-console-producer.sh --broker-list <kafka-server-ip>:<kafka-port> --topic <topic-name>

At Consumer Client Server :-

  • cd $KAFKA_HOME/bin

  • ./kafka-console-consumer.sh --zookeeper <kafka-server-ip>:2181 --topic <topic-name> --from-beginning





Apache Kafka – Fundamentals & Workflow

Before moving deeper into Kafka, you must be aware of the main terminologies such as topics, brokers, producers and consumers. The following diagram illustrates the main terminologies, and the table describes the diagram components in detail.


In the above diagram, a topic is configured with three partitions. Partition 1 has two offsets, 0 and 1. Partition 2 has four offsets, 0, 1, 2, and 3. Partition 3 has one offset, 0. The id of a replica is the same as the id of the server that hosts it.

Assume the replication factor of the topic is set to 3; then Kafka will create 3 identical replicas of each partition and place them in the cluster to make them available for all its operations. To balance load in the cluster, each broker stores one or more of those partitions. Multiple producers and consumers can publish and retrieve messages at the same time.
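As a rough illustration of spreading replicas across brokers (a simple round-robin layout for this post, not Kafka's actual assignment algorithm):

```python
# Toy replica placement: spread `replication_factor` copies of each
# partition across brokers round-robin, so no broker holds every copy.
def place_replicas(num_partitions, num_brokers, replication_factor=3):
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [(p + r) % num_brokers for r in range(replication_factor)]
    return assignment

print(place_replicas(3, 3))
# {0: [0, 1, 2], 1: [1, 2, 0], 2: [2, 0, 1]}
```

Each partition ends up with replicas on three distinct brokers, and each broker leads exactly one partition.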

S.No Components and Description
1 Topics

A stream of messages belonging to a particular category is called a topic. Data is stored in topics.

Topics are split into partitions. For each topic, Kafka keeps a minimum of one partition. Each such partition contains messages in an immutable ordered sequence. A partition is implemented as a set of segment files of equal size.

2 Partition

Topics may have many partitions, so a topic can handle an arbitrary amount of data.

3 Partition offset

Each message within a partition has a unique sequence id called an offset.

4 Replicas of partition

Replicas are nothing but backups of a partition. Replicas never read or write data; they are used to prevent data loss.

5 Brokers

  • Brokers are simple systems responsible for maintaining the published data. Each broker may have zero or more partitions per topic. Assume there are N partitions in a topic and N brokers; then each broker will have one partition.
  • Assume there are N partitions in a topic and more than N brokers (n + m); then the first N brokers will have one partition each, and the next M brokers will not have any partition for that particular topic.
  • Assume there are N partitions in a topic and fewer than N brokers (n - m); then each broker will have one or more partitions shared among them. This scenario is not recommended due to unequal load distribution among the brokers.
6 Kafka Cluster

A Kafka deployment with more than one broker is called a Kafka cluster. A Kafka cluster can be expanded without downtime. These clusters are used to manage the persistence and replication of message data.

7 Producers

Producers are the publishers of messages to one or more Kafka topics. Producers send data to Kafka brokers. Every time a producer publishes a message to a broker, the broker simply appends the message to the last segment file. Actually, the message will be appended to a partition. Producers can also send messages to a partition of their choice.

8 Consumers

Consumers read data from brokers. Consumers subscribe to one or more topics and consume published messages by pulling data from the brokers.

9 Leader

The leader is the node responsible for all reads and writes for the given partition. Every partition has one server acting as the leader.

10 Follower

A node which follows the leader's instructions is called a follower. If the leader fails, one of the followers will automatically become the new leader. A follower acts as a normal consumer: it pulls messages and updates its own data store.


So far, we have discussed the core concepts of Kafka. Let us now throw some light on the workflow of Kafka.

Kafka is simply a collection of topics split into one or more partitions. A Kafka partition is a linearly ordered sequence of messages, where each message is identified by its index (called the offset). All the data in a Kafka cluster is the disjoint union of these partitions. Incoming messages are written at the end of a partition, and messages are sequentially read by consumers. Durability is provided by replicating messages to different brokers.
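The offset-as-index idea can be sketched with an append-only Python list standing in for a partition (a toy model, not Kafka internals):

```python
# A partition modeled as an append-only list: the offset of a message
# is simply its index, and consumers read sequentially from an offset.
partition = []

def produce(message):
    partition.append(message)  # written at the end of the partition
    return len(partition) - 1  # the new message's offset

def consume(offset):
    return partition[offset:]  # sequential read from a given offset

produce("m0"); produce("m1"); produce("m2")
print(consume(1))  # ['m1', 'm2']
```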

Kafka provides both pub-sub and queue-based messaging in a fast, reliable, persistent, fault-tolerant, zero-downtime manner. In both cases, producers simply send messages to a topic, and consumers can choose either type of messaging system depending on their need. Let us follow the steps in the next section to understand how the consumer can choose the messaging system of their choice.

Workflow of Pub-Sub Messaging

Following is the step wise workflow of the Pub-Sub Messaging −

  • Producers send messages to a topic at regular intervals.
  • The Kafka broker stores all messages in the partitions configured for that particular topic. It ensures the messages are equally shared between partitions. If the producer sends two messages and there are two partitions, Kafka will store one message in the first partition and the second message in the second partition.
  • A consumer subscribes to a specific topic.
  • Once the consumer subscribes to a topic, Kafka will provide the current offset of the topic to the consumer and also save the offset in the ZooKeeper ensemble.
  • The consumer will request new messages from Kafka at a regular interval (say, every 100 ms).
  • Once Kafka receives the messages from producers, it forwards these messages to the consumers.
  • The consumer will receive the message and process it.
  • Once the messages are processed, the consumer will send an acknowledgement to the Kafka broker.
  • Once Kafka receives an acknowledgement, it changes the offset to the new value and updates it in ZooKeeper. Since offsets are maintained in ZooKeeper, the consumer can read the next message correctly even during server outages.
  • The above flow will repeat until the consumer stops requesting.
  • The consumer has the option to rewind/skip to the desired offset of a topic at any time and read all the subsequent messages.
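The steps above can be sketched as a toy poll/process/commit loop (the committed_offset variable stands in for what Kafka keeps in ZooKeeper; this is an illustration, not client code):

```python
# Toy sketch of the poll / process / acknowledge workflow.
topic = ["msg-0", "msg-1", "msg-2"]
committed_offset = 0  # stand-in for the offset saved in ZooKeeper
processed = []

while committed_offset < len(topic):
    message = topic[committed_offset]  # consumer requests the next message
    processed.append(message)          # consumer processes it
    committed_offset += 1              # acknowledgement advances the offset

print(processed, committed_offset)  # ['msg-0', 'msg-1', 'msg-2'] 3
```

Because only the committed offset is stored, a consumer that restarts resumes from exactly where it acknowledged last.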

Workflow of Queue Messaging / Consumer Group

In a queue messaging system, instead of a single consumer, a group of consumers having the same Group ID will subscribe to a topic. In simple terms, consumers subscribing to a topic with the same Group ID are considered a single group, and the messages are shared among them. Let us check the actual workflow of this system.

  • Producers send messages to a topic at regular intervals.
  • Kafka stores all messages in the partitions configured for that particular topic, similar to the earlier scenario.
  • A single consumer subscribes to a specific topic, say Topic-01, with Group ID Group-1.
  • Kafka interacts with the consumer in the same way as in pub-sub messaging until a new consumer subscribes to the same topic, Topic-01, with the same Group ID, Group-1.
  • Once the new consumer arrives, Kafka switches its operation to share mode and shares the data between the two consumers. This sharing will go on until the number of consumers reaches the number of partitions configured for that particular topic.
  • Once the number of consumers exceeds the number of partitions, a new consumer will not receive any further messages until one of the existing consumers unsubscribes. This scenario arises because each consumer in Kafka will be assigned a minimum of one partition; once all the partitions are assigned to the existing consumers, the new consumers will have to wait.
  • This feature is also called Consumer Group. In this way, Kafka provides the best of both systems in a very simple and efficient manner.
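The sharing behaviour described above can be illustrated with a toy assignment function (a deliberate simplification of Kafka's real rebalancing protocol):

```python
# Toy partition sharing in a consumer group: each partition goes to
# exactly one consumer; surplus consumers get nothing and stay idle.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# 3 partitions, 2 consumers: share mode.
print(assign([0, 1, 2], ["c1", "c2"]))     # {'c1': [0, 2], 'c2': [1]}
# 2 partitions, 3 consumers: c3 waits until someone unsubscribes.
print(assign([0, 1], ["c1", "c2", "c3"]))  # {'c1': [0], 'c2': [1], 'c3': []}
```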

Role of ZooKeeper

A critical dependency of Apache Kafka is Apache ZooKeeper, which is a distributed configuration and synchronization service. ZooKeeper serves as the coordination interface between the Kafka brokers and consumers. The Kafka servers share information via a ZooKeeper cluster. Kafka stores basic metadata in ZooKeeper, such as information about topics, brokers, consumer offsets (queue readers) and so on.

Since all the critical information is stored in ZooKeeper, and ZooKeeper normally replicates this data across its ensemble, the failure of a Kafka broker or ZooKeeper node does not affect the state of the Kafka cluster. Kafka will restore the state once ZooKeeper restarts. This gives zero downtime for Kafka. The leader election among Kafka brokers is also done by using ZooKeeper in the event of leader failure.


Apache Kafka – The New Beginning for Messaging


Apache Kafka is a popular distributed message broker designed to handle large volumes of real-time data efficiently. A Kafka cluster is not only highly scalable and fault-tolerant, but it also has a much higher throughput compared to other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a pub/sub messaging system, a lot of organizations also use it for log aggregation because it offers persistent storage for published messages.

In this tutorial, you will learn how to install and use Apache Kafka on Ubuntu 16.04.


To follow along, you will need:

  • Ubuntu 16.04 Droplet
  • At least 4GB of swap space

Step 1 — Create a User for Kafka

As Kafka can handle requests over a network, you should create a dedicated user for it. This minimizes damage to your Ubuntu machine should the Kafka server be compromised.

Note: After setting up Apache Kafka, it is recommended that you create a different non-root user to perform other tasks on this server.

As root, create a user called kafka using the useradd command:

useradd kafka -m

Set its password using passwd:

passwd kafka

Add it to the sudo group so that it has the privileges required to install Kafka’s dependencies. This can be done using the adduser command:

adduser kafka sudo

Your Kafka user is now ready. Log into it using su:

su - kafka

Step 2 — Install Java

Before installing additional packages, update the list of available packages so you are installing the latest versions available in the repository:

sudo apt-get update

As Apache Kafka needs a Java runtime environment, use apt-get to install the default-jre package:

sudo apt-get install default-jre

Step 3 — Install ZooKeeper

Apache ZooKeeper is an open source service built to coordinate and synchronize configuration information of nodes that belong to a distributed system. A Kafka cluster depends on ZooKeeper to perform—among other things—operations such as detecting failed nodes and electing leaders.

Since the ZooKeeper package is available in Ubuntu’s default repositories, install it using apt-get.

sudo apt-get install zookeeperd

After the installation completes, ZooKeeper will be started as a daemon automatically. By default, it will listen on port 2181.

To make sure that it is working, connect to it via Telnet:

telnet localhost 2181

At the Telnet prompt, type in ruok and press ENTER.

If everything’s fine, ZooKeeper will say imok and end the Telnet session.

Step 4 — Download and Extract Kafka Binaries

Now that Java and ZooKeeper are installed, it is time to download and extract Kafka.

To start, create a directory called Downloads to store all your downloads.

mkdir -p ~/Downloads

Use wget to download the Kafka binaries.

wget "http://mirror.cc.columbia.edu/pub/software/apache/kafka/" -O ~/Downloads/kafka.tgz

Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation.

mkdir -p ~/kafka && cd ~/kafka

Extract the archive you downloaded using the tar command.

tar -xvzf ~/Downloads/kafka.tgz --strip 1

Step 5 — Configure the Kafka Server

The next step is to configure the Kafka server.

Open server.properties using vi:

vi ~/kafka/config/server.properties

By default, Kafka doesn’t allow you to delete topics. To be able to delete topics, add the following line at the end of the file:


delete.topic.enable = true

Save the file, and exit vi.

Step 6 — Start the Kafka Server

Run the kafka-server-start.sh script using nohup to start the Kafka server (also called Kafka broker) as a background process that is independent of your shell session.

nohup ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties > ~/kafka/kafka.log 2>&1 &

Wait for a few seconds for it to start. You can be sure that the server has started successfully when you see the following messages in ~/kafka/kafka.log:

excerpt from ~/kafka/kafka.log

...
[2015-07-29 06:02:41,736] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2015-07-29 06:02:41,776] INFO [Kafka Server 0], started (kafka.server.KafkaServer)

You now have a Kafka server which is listening on port 9092.

Step 7 — Test the Installation

Let us now publish and consume a test message to make sure that the Kafka server is behaving correctly.

To publish messages, you should create a Kafka producer. You can easily create one from the command line using the kafka-console-producer.sh script. It expects the Kafka server’s hostname and port, along with a topic name as its arguments.

Publish the string “Wassup Playas” to a topic called HariTopic by typing in the following:

echo "Wassup Playas" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic HariTopic > /dev/null

As the topic doesn’t exist, Kafka will create it automatically.

To consume messages, you can create a Kafka consumer using the kafka-console-consumer.sh script. It expects the ZooKeeper server’s hostname and port, along with a topic name as its arguments.

The following command consumes messages from the topic we published to. Note the use of the --from-beginning flag, which is present because we want to consume a message that was published before the consumer was started.

~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic HariTopic --from-beginning

If there are no configuration issues, you should see Wassup Playas in the output now.

The script will continue to run, waiting for more messages to be published to the topic. Feel free to open a new terminal and start a producer to publish a few more messages. You should be able to see them all in the consumer’s output instantly.

When you are done testing, press CTRL+C to stop the consumer script.

Step 8 — Install KafkaT (Optional)

KafkaT is a handy little tool from Airbnb which makes it easier for you to view details about your Kafka cluster and also perform a few administrative tasks from the command line. As it is a Ruby gem, you will need Ruby to use it. You will also need the build-essential package to be able to build the other gems it depends on. Install them using apt-get:

sudo apt-get install ruby ruby-dev build-essential

You can now install KafkaT using the gem command:

sudo gem install kafkat --source https://rubygems.org --no-ri --no-rdoc

Use vi to create a new file called .kafkatcfg.

vi ~/.kafkatcfg

This is a configuration file which KafkaT uses to determine the installation and log directories of your Kafka server. It should also point KafkaT to your ZooKeeper instance. Accordingly, add the following lines to it:


{
  "kafka_path": "~/kafka",
  "log_path": "/tmp/kafka-logs",
  "zk_path": "localhost:2181"
}

You are now ready to use KafkaT. For a start, here’s how you would use it to view details about all Kafka partitions:

kafkat partitions

You should see the following output:

output of kafkat partitions

Topic           Partition   Leader   Replicas   ISRs
TutorialTopic   0           0        [0]        [0]

To learn more about KafkaT, refer to its GitHub repository.

Step 9 — Set Up a Multi-Node Cluster (Optional)

If you want to create a multi-broker cluster using more Ubuntu 16.04 machines, you should repeat Step 1, Step 3, Step 4 and Step 5 on each of the new machines. Additionally, you should make the following changes in the server.properties file in each of them:

  • the value of the broker.id property should be changed such that it is unique throughout the cluster
  • the value of the zookeeper.connect property should be changed such that all nodes point to the same ZooKeeper instance

If you want to have multiple ZooKeeper instances for your cluster, the value of the zookeeper.connect property on each node should be an identical, comma-separated string listing the IP addresses and port numbers of all the ZooKeeper instances.
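For example, a second broker's server.properties might differ from the first only in lines like the following (the broker id and IP addresses here are placeholder values):

```ini
# Unique per broker across the whole cluster
broker.id=1
# Identical on every node: all ZooKeeper instances, comma-separated
zookeeper.connect=10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181
```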

Step 10 — Restrict the Kafka User

Now that all installations are done, you can remove the kafka user’s admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still running the same shell session you started this tutorial with, simply type exit.

To remove the kafka user’s admin privileges, remove it from the sudo group.

sudo deluser kafka sudo

To further improve your Kafka server’s security, lock the kafka user’s password using the passwd command. This makes sure that nobody can directly log into it.

sudo passwd kafka -l

At this point, only root or a sudo user can log in as kafka by typing in the following command:

sudo su - kafka

In the future, if you want to unlock it, use passwd with the -u option:

sudo passwd kafka -u


You now have a secure Apache Kafka running on your Ubuntu server. You can easily make use of it in your projects by creating Kafka producers and consumers using Kafka clients which are available for most programming languages. To learn more about Kafka, do go through its documentation.



Linux security alert: Bug in sudo’s get_process_ttyname() [ CVE-2017-1000367 ]

There is a serious vulnerability in the sudo command that grants root access to anyone with a shell account. It works on SELinux-enabled systems such as CentOS/RHEL and others too. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that Sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this flaw to overwrite any file on the filesystem, bypassing intended permissions, or to gain a root shell.

From the official description:

We discovered a vulnerability in Sudo’s get_process_ttyname() for Linux: this function opens “/proc/[pid]/stat” (man proc) and reads the device number of the tty from field 7 (tty_nr). Unfortunately, these fields are space-separated and field 2 (comm, the filename of the command) can contain spaces (CVE-2017-1000367).

For example, if we execute Sudo through the symlink “./ 1 “, get_process_ttyname() calls sudo_ttyname_dev() to search for the non-existent tty device number “1” in the built-in search_devs[].

Next, sudo_ttyname_dev() calls the function sudo_ttyname_scan() to search for this non-existent tty device number “1” in a breadth-first traversal of “/dev”.

Last, we exploit this function during its traversal of the world-writable “/dev/shm”: through this vulnerability, a local user can pretend that his tty is any character device on the filesystem, and after two race conditions, he can pretend that his tty is any file on the filesystem.

On an SELinux-enabled system, if a user is Sudoer for a command that does not grant him full root privileges, he can overwrite any file on the filesystem (including root-owned files) with his command’s output, because relabel_tty() (in src/selinux.c) calls open(O_RDWR|O_NONBLOCK) on his tty and dup2()s it to the command’s stdin, stdout, and stderr. This allows any Sudoer user to obtain full root privileges.

A list of affected Linux distros

  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

How do I patch sudo on Debian/Ubuntu Linux server?

To patch Debian/Ubuntu Linux, run the apt-get command or apt command:
$ sudo apt update
$ sudo apt upgrade

How do I patch sudo on CentOS/RHEL/Scientific/Oracle Linux server?

Run yum command:
$ sudo yum update

How do I patch sudo on Fedora Linux server?

Run dnf command:
$ sudo dnf update

How do I patch sudo on Suse/OpenSUSE Linux server?

Run zypper command:
$ sudo zypper update

How do I patch sudo on Arch Linux server?

Run pacman command:
$ sudo pacman -Syu

How do I patch sudo on Alpine Linux server?

Run apk command:
# apk update && apk upgrade

How do I patch sudo on Slackware Linux server?

Run upgradepkg command:
# upgradepkg sudo-1.8.20p1-i586-1_slack14.2.txz

How do I patch sudo on Gentoo Linux server?

Run emerge command:
# emerge --sync
# emerge --ask --oneshot --verbose ">=app-admin/sudo-1.8.20_p1"

Impermanence in Linux – Exclusive (By Hari Iyer)

Impermanence, also called Anicca or Anitya, is one of the essential doctrines and a part of the three marks of existence in Buddhism. The doctrine asserts that all of conditioned existence, without exception, is “transient, evanescent, inconstant.”

On Linux, the root of all randomness is something called the kernel entropy pool. This is a large (4,096-bit) number kept privately in the kernel’s memory. There are 2^4096 possibilities for this number, so it can contain up to 4,096 bits of entropy. There is one caveat – the kernel needs to be able to fill that memory from a source with 4,096 bits of entropy. And that’s the hard part: finding that much randomness.

The entropy pool is used in two ways: random numbers are generated from it, and it is replenished with entropy by the kernel. When random numbers are generated from the pool, the entropy of the pool is diminished (because the person receiving the random number has some information about the pool itself). So as the pool’s entropy diminishes when random numbers are handed out, the pool must be replenished.

Replenishing the pool is called stirring: new sources of entropy are stirred into the mix of bits in the pool.

This is the key to how random number generation works on Linux. If randomness is needed, it’s derived from the entropy pool. When available, other sources of randomness are used to stir the entropy pool and make it less predictable. The details are a little mathematical, but it’s interesting to understand how the Linux random number generator works as the principles and techniques apply to random number generation in other software and systems.

The kernel keeps a rough estimate of the number of bits of entropy in the pool. You can check the value of this estimate through the following command:

cat /proc/sys/kernel/random/entropy_avail

A healthy Linux system with a lot of entropy available will return close to the full 4,096 bits of entropy. If the value returned is less than 200, the system is running low on entropy.

The kernel is watching you

I mentioned that the system takes other sources of randomness and uses this to stir the entropy pool. This is achieved using something called a timestamp.

Most systems have precise internal clocks. Every time that a user interacts with a system, the value of the clock at that time is recorded as a timestamp. Even though the year, month, day and hour are generally guessable, the millisecond and microsecond are not, and therefore the timestamp contains some entropy. Timestamps obtained from the user’s mouse and keyboard, along with timing information from the network and disk, each have different amounts of entropy.

How does the entropy found in a timestamp get transferred to the entropy pool? Simple, use math to mix it in. Well, simple if you like math.

Just mix it in

A fundamental property of entropy is that it mixes well. If you take two unrelated random streams and combine them, the new stream cannot have less entropy. Taking a number of low entropy sources and combining them results in a high entropy source.

All that’s needed is the right combination function: a function that can be used to combine two sources of entropy. One of the simplest such functions is the logical exclusive or (XOR). This truth table shows how bits x and y coming from different random streams are combined by the XOR function:

x   y   x XOR y
0   0   0
0   1   1
1   0   1
1   1   0

Even if one source of bits does not have much entropy, there is no harm in XORing it into another source. Entropy always increases. In the Linux kernel, a combination of XORs is used to mix timestamps into the main entropy pool.
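As a toy sketch in Python (this is an illustration of the mixing idea, not the kernel’s code), XOR-mixing two byte streams shows that the mix can recover either input and that a predictable stream does no harm:

```python
def xor_mix(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length byte streams with bitwise XOR."""
    return bytes(x ^ y for x, y in zip(a, b))

# A fully predictable stream mixed with a random-looking one:
predictable = b"\x00" * 8
noisy = bytes([0x3A, 0x7F, 0x01, 0xEE, 0x52, 0x90, 0xCD, 0x44])

mixed = xor_mix(predictable, noisy)

# XOR never destroys entropy: mixing either input back recovers the other.
assert xor_mix(mixed, predictable) == noisy
assert xor_mix(mixed, noisy) == predictable
```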

Generating random numbers

Cryptographic applications require very high entropy. If a 128-bit key is generated with only 64 bits of entropy then it can be guessed in 2^64 attempts instead of 2^128 attempts. That is the difference between needing a thousand computers running for a few years to brute force the key versus needing all the computers ever created running for longer than the history of the universe to do so.
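The gap between those two search spaces is easy to check directly:

```python
# Work needed to brute-force a 128-bit key generated with full entropy
# versus one generated with only 64 bits of entropy.
full_search = 2 ** 128
weak_search = 2 ** 64

# The weak key is easier to guess by a factor of 2**64.
assert full_search // weak_search == 2 ** 64
```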

Cryptographic applications require close to one bit of entropy per bit. If the system’s pool has fewer than 4,096 bits of entropy, how does the system return a fully random number? One way to do this is to use a cryptographic hash function.

A cryptographic hash function takes an input of any size and outputs a fixed size number. Changing one bit of the input will change the output completely. Hash functions are good at mixing things together. This mixing property spreads the entropy from the input evenly through the output. If the input has more bits of entropy than the size of the output, the output will be highly random. This is how highly entropic random numbers are derived from the entropy pool.

The hash function used by the Linux kernel is the standard SHA-1 cryptographic hash. By hashing the entire pool and applying some additional arithmetic, 160 random bits are created for use by the system. When this happens, the system lowers its estimate of the entropy in the pool accordingly.
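A toy sketch of that idea in Python (an illustration, not the kernel’s actual extraction code): hash a stand-in pool with SHA-1 and observe both the fixed 160-bit output and the avalanche effect of a single flipped bit.

```python
import hashlib

pool = bytes(range(64))  # stand-in for the 4,096-bit entropy pool

out1 = hashlib.sha1(pool).digest()
# Flip a single bit of the input...
out2 = hashlib.sha1(bytes([pool[0] ^ 1]) + pool[1:]).digest()

assert len(out1) * 8 == 160  # SHA-1 always yields 160 bits
assert out1 != out2          # ...and the output changes completely
```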

Above I said that applying a hash like SHA-1 could be dangerous if there wasn’t enough entropy in the pool. That’s why it’s critical to keep an eye on the available system entropy: if it drops too low, the output of the random number generator could have less entropy than it appears to have.

Running out of entropy

One of the dangers of a system is running out of entropy. When the system’s entropy estimate drops to around the 160-bit level, the length of a SHA-1 hash, things get tricky, and how they affect programs and performance depends on which of the two Linux random number generators is used.

Linux exposes two interfaces for random data that behave differently when the entropy level is low. They are /dev/random and /dev/urandom. When the entropy pool becomes predictable, both interfaces for requesting random numbers become problematic.

When the entropy level is too low, /dev/random blocks and does not return until the level of entropy in the system is high enough. This guarantees high entropy random numbers. If /dev/random is used in a time-critical service and the system runs low on entropy, the delays could be detrimental to the quality of service.

On the other hand, /dev/urandom does not block. It continues to return the hashed value of its entropy pool even though there is little to no entropy in it. This low-entropy data is not suited for cryptographic use.
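In Python, for example, os.urandom() draws from the same non-blocking kernel pool that backs /dev/urandom, so it returns immediately even when the entropy estimate is low:

```python
import os

# os.urandom() never blocks; it reads the kernel's non-blocking pool
# (the one behind /dev/urandom).
key_material = os.urandom(16)

assert len(key_material) == 16
assert os.urandom(16) != key_material  # vanishingly unlikely to repeat
```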

The solution to the problem is to simply add more entropy into the system.

Hardware random number generation to the rescue?

Intel’s Ivy Bridge family of processors has an interesting feature called “secure key.” These processors contain a special piece of hardware that generates random numbers. The single assembly instruction RDRAND returns allegedly high-entropy random data derived on the chip.

It has been suggested that Intel’s hardware number generator may not be fully random. Since it is baked into the silicon, that assertion is hard to audit and verify. As it turns out, even if the numbers generated have some bias, it can still help as long as this is not the only source of randomness in the system. Even if the random number generator itself had a back door, the mixing property of randomness means that it cannot lower the amount of entropy in the pool.

On Linux, if a hardware random number generator is present, the Linux kernel will use the XOR function to mix the output of RDRAND into the hash of the entropy pool. This happens in the kernel source code (the XOR operator is ^ in C).

Third party entropy generators

Hardware number generation is not available everywhere, and the sources of randomness polled by the Linux kernel itself are somewhat limited. For this situation, a number of third-party random number generation tools exist. Examples of these are haveged, which relies on processor cache timing, and audio-entropyd and video-entropyd, which work by sampling the noise from an external audio or video input device. By mixing these additional sources of locally collected entropy into the Linux entropy pool, the entropy can only go up.

TIBCO Universal Installer – Unix – The installer is unable to run in graphical mode. Try running the installer with the -console or -silent flag (SOLVED)

Many a time, when you try to install TIBCO Rendezvous / TIBCO EMS or even certain BW plugins (that are 32-bit binaries) on a 64-bit JVM-based UNIX system (Linux / Solaris / AIX / UX / FreeBSD), you typically encounter an error like this:



Well, many people ain’t aware of the real deal to solve this issue,

After much research with permutations and combinations, there seems to be a solution for this :-

Follow the steps mentioned below for RHEL 6.x systems (cuz I ain’t tried other *NIX platforms yet)

  1. sudo yum -y install libXtst*i686*
  2. sudo yum -y install libXext*i686*
  3. sudo yum -y install libXrender*i686*

I am damn sure, it’ll work for GUI mode of installation

java.sql.SQLRecoverableException: IO Error: Connection reset ( Designer / BWEngine / interfaceName )

Sometimes, when you create a JDBC Connection in your Designer, or when you configure a JDBC Connection in your EAR, You might end up with an error like this :-

Designer :-


Runtime :-

java.sql.SQLRecoverableException: IO Error: Connection reset

(In your trace file)

This happens because of the way Java seeds its random number generator from /dev/random

/dev/random is a random number generator often used to seed cryptography functions for better security.  /dev/urandom likewise is a (pseudo) random number generator.  Both are good at generating random numbers.  The key difference is that /dev/random has a blocking function that waits until entropy reaches a certain level before providing its result.  From a practical standpoint, this means that programs using /dev/random will generally take longer to complete than /dev/urandom.

As to why /dev/urandom vs /dev/./urandom: that is something unique to Java versions 5 and following that resulted from problems with /dev/urandom on Linux systems back in 2004. The easy fix was to force /dev/urandom to use /dev/random. However, it doesn’t appear that Java will be updated to let /dev/urandom actually use /dev/urandom. So the workaround is to fake Java out by obscuring /dev/urandom as /dev/./urandom, which is functionally the same thing but looks different.

Therefore, add the following property to bwengine.tra and designer.tra OR your individual track’s tra file, then restart the bwengine or designer, and it works like Magic Johnson’s dunk.

java.extended.properties -Djava.security.egd=file:/dev/./urandom

Interrupt Coalescence (also called Interrupt Moderation, Interrupt Blanking, or Interrupt Throttling)

A common bottleneck for high-speed data transfers is the high rate of interrupts that the receiving system has to process – traditionally, a network adapter generates an interrupt for each frame that it receives. These interrupts consume signaling resources on the system’s bus(es), and introduce significant CPU overhead as the system transitions back and forth between “productive” work and interrupt handling many thousand times a second.

To alleviate this load, some high-speed network adapters support interrupt coalescence. When multiple frames are received in a short timeframe (“back-to-back”), these adapters buffer those frames locally and only interrupt the system once.

Interrupt coalescence together with large-receive offload can roughly be seen as doing on the “receive” side what transmit chaining and large-send offload (LSO) do for the “transmit” side.

Issues with interrupt coalescence

While this scheme lowers interrupt-related system load significantly, it can have adverse effects on timing, and make TCP traffic more bursty or “clumpy”. Therefore it would make sense to combine interrupt coalescence with on-board timestamping functionality. Unfortunately that doesn’t seem to be implemented in commodity hardware/driver combinations yet.

The way that interrupt coalescence works, a network adapter that has received a frame doesn’t send an interrupt to the system right away, but waits for a little while in case more packets arrive. This can have a negative impact on latency.

In general, interrupt coalescence is configured such that the additional delay is bounded. On some implementations, these delay bounds are specified in units of milliseconds, on other systems in units of microseconds. It requires some thought to find a good trade-off between latency and load reduction. One should be careful to set the coalescence threshold low enough that the additional latency doesn’t cause problems. Setting a low threshold will prevent interrupt coalescence from occurring when successive packets are spaced too far apart. But in that case, the interrupt rate will probably be low enough so that this is not a problem.


Configuration of interrupt coalescence is highly system dependent, although there are some parameters that are more or less common over implementations.


On Linux systems with additional driver support, the ethtool -C command can be used to modify the interrupt coalescence settings of network devices on the fly.

Some Ethernet drivers in Linux have parameters to control Interrupt Coalescence (Interrupt Moderation, as it is called in Linux). For example, the e1000 driver for the large family of Intel Gigabit Ethernet adapters has the following parameters according to the kernel documentation:

InterruptThrottleRate – limits the number of interrupts per second generated by the card. Values >= 100 are interpreted as the maximum number of interrupts per second. The default value used to be 8,000 up to and including kernel release 2.6.19. A value of zero (0) disables interrupt moderation completely. Above 2.6.19, some values between 1 and 99 can be used to select adaptive interrupt rate control. The first adaptive modes are “dynamic conservative” (1) and dynamic with reduced latency (3). In conservative mode (1), the rate changes between 4,000 interrupts per second when only bulk traffic (“normal-size packets”) is seen, and 20,000 when small packets are present that might benefit from lower latency. In the more aggressive mode (3), “low-latency” traffic may drive the interrupt rate up to 70,000 per second. This mode is supposed to be useful for cluster communication in grid applications.
RxIntDelay – specifies, in multiples of 1,024 microseconds, the time after reception of a frame to wait for another frame to arrive before sending an interrupt.
RxAbsIntDelay – bounds the delay between reception of a frame and generation of an interrupt. It is specified in units of 1,024 microseconds. Note that InterruptThrottleRate overrides RxAbsIntDelay, so even when a very short RxAbsIntDelay is specified, the interrupt rate should never exceed the rate specified (either directly or by the dynamic algorithm) by InterruptThrottleRate.
RxDescriptors – specifies the number of descriptors used to store incoming frames on the adapter. The default value is 256, which is also the maximum for some types of e1000-based adapters. Others can allocate up to 4,096 of these descriptors. The size of the receive buffer associated with each descriptor varies with the MTU configured on the adapter. It is always a power-of-two number of bytes. The number of descriptors available will also depend on the per-buffer size. When all buffers have been filled by incoming frames, an interrupt will have to be signaled in any case.


As an example, see the Platform Notes: Sun GigaSwift Ethernet Device Driver. It lists the following parameters for that particular type of adapter:

  • Interrupt after this number of packets has arrived since the last packet was serviced. A value of zero indicates no packet blanking. (Range: 0 to 511, default = 3)
  • Interrupt after this many 4.5-microsecond ticks have elapsed since the last packet was serviced. A value of zero indicates no time blanking. (Range: 0 to 524287, default = 1250)

TIBCO Hawk v/s TIBCO BWPM (reblogged)

A short while ago I got the question from a customer that wanted to know the differences between TIBCO Hawk and TIBCO BWPM (BusinessWorks Process Monitor), since both are monitoring products from TIBCO. In this blog I will be briefly explaining my point of view and recommendations about when to use which product, which in my opinion cannot be compared as-is.

Let me start by indicating that TIBCO Hawk and BWPM are not products which can be directly compared with each other. There is partial overlap in the purpose of the two products, namely gaining insight into the integration landscape, but at the same time the products are very different. TIBCO Hawk is, as we may know, a transport, distribution and monitoring product that under the hood allows TIBCO administrators to technically monitor the integration landscape at runtime (including server behaviour etc.) and reactively respond to certain events by configuring so-called Hawk rules and setting up dashboards for feedback. The technical monitoring capabilities are quite extensive and based on the information and log files which are made available by both the TIBCO Administrator and the various micro Hawk agents. The target group of TIBCO Hawk is primarily administrators and, to a lesser extent, developers. The focus is on monitoring the various TIBCO components (or adapters) to satisfy the corresponding SLAs, not on what is taking place within the TIBCO components from a functional point of view.

+ Very strong, comprehensive and proven tool for TIBCO administrators;
+ Reactively measures and (automatically) reacts to events in the landscape using Hawk rules;
– Fairly technical, and thus a high threshold for non-technical users;
– Offers little or no insight into the actual data processed from a functional point of view;

TIBCO BWPM is a product that provides insight at process level during runtime from a functional point of view; it is a branch and rebranding of the product nJAMS by Integration Matters. It may impact the way of developing (standards and guidelines). By using so-called libraries throughout development, process-specific functional information can be made available at runtime. It has a rich web interface as an alternative to the TIBCO Administrator and offers rich visual insight into all process instances and correlates them together. The target group of TIBCO BWPM is TIBCO developers, administrators, testers and even analysts. The focus is on gaining an understanding of what is taking place within the TIBCO components from a functional point of view.

+ Very strong and comprehensive tool with a rich web interface;
+ Provides extensive logging capabilities, with all related context and process data available;
+ Easily accessible and intuitive to use, even for non-technical users;
– Less suitable for the daily technical monitoring of the landscape (including server behaviour etc.);
– It is important that the product is well designed and properly parameterized to prevent performance impact (this should not be underestimated);

In my opinion, TIBCO BWPM is a very welcome addition to the standard TIBCO Administrator/TIBCO Hawk to gain insight into the related context and process data from a functional point of view. In addition, the product can also be used by TIBCO developers, administrators, testers and even analysts.

Source :-  http://www.rubix.nl

TIBCO BWPM – Missing Libraries Detected


If at all you get an error like this



Don’t panic, simply copy the following jars into $CATALINA_HOME/lib

For EMS :-

  • jms.jar (if using EMS 8 and above, rename jms2.0.jar to jms.jar)
  • tibcrypt.jar, tibjms.jar, tibjmsadmin.jar

For Database :-

  • ojdbc.jar (rename ojdbc6.jar or ojdbc7.jar to ojdbc.jar) – ORACLE
  • mssqlserver.jar (rename sqljdbc4.jar to mssqlserver.jar) – MSSQL

/etc/security/limits.conf file – In A Nutshell

The /etc/security/limits.conf file contains a list of lines, where each line describes a limit for a user in the form of:

<Domain> <type> <item> <shell limit value>


  • <domain> can be:
    • a user name
    • a group name, with @group syntax
    • the wildcard *, for the default entry
    • the wildcard %, which can also be used with %group syntax, for the maxlogin limit
  • <type> can have two values:
    • “soft” for enforcing the soft limits (soft is like a warning)
    • “hard” for enforcing hard limits (hard is a real max limit)
  • <item> can be one of the following:
    • core – limits the core file size (KB)
    • data – max data size (KB)
    • fsize – maximum file size (KB)
    • memlock – max locked-in-memory address space (KB)
    • nofile – Maximum number of open file descriptors
    • rss – max resident set size (KB)
    • stack – max stack size (KB) – Maximum size of the stack segment of the process
    • cpu – max CPU time (MIN)
    • nproc – Maximum number of processes available to a single user
    • as – address space limit
    • maxlogins – max number of logins for this user
    • maxsyslogins – max number of logins on the system
    • priority – the priority to run user process with
    • locks – max number of file locks the user can hold
    • sigpending – max number of pending signals
    • msgqueue – max memory used by POSIX message queues (bytes)
    • nice – max nice priority allowed to raise to
    • rtprio – max realtime priority
    • chroot – change root to directory (Debian-specific)
  • <shell limit value> is the numeric value to enforce for the chosen item (for example a size in KB or a count), or “unlimited” where supported
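As an illustration (the user and group names below are made up), entries in /etc/security/limits.conf might look like this:

```
# user "tibco": raise open-file-descriptor limits
tibco        soft    nofile    8192
tibco        hard    nofile    65536
# everyone in group "dev": disable core dumps
@dev         hard    core      0
```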


  • sigpending – examine pending signals.

sigpending() returns the set of signals that are pending for delivery to the calling thread (i.e., the signals which have been raised while blocked). The mask of pending signals is returned in set.

sigpending() returns 0 on success and -1 on error.


credits :- Sagar Salunkhe

Linux KVM: Disable virbr0 NAT Interface

The virtual network (virbr0) is used for Network Address Translation (NAT), which allows guests to access network services. However, NAT slows things down and is only recommended for desktop installations. To disable Network Address Translation (NAT) forwarding, type the following commands:

Display Current Setup

Type the following command:
# ifconfig
Sample outputs:

virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:7921 (7.7 KiB)

Or use the following command:
# virsh net-list
Sample outputs:

Name                 State      Autostart
default              active     yes       

To disable virbr0, enter:
# virsh net-destroy default
# virsh net-undefine default
# service libvirtd restart
# ifconfig

TIBCO EMS – Properties of Queues and Topics (Where Tuning can be done)

You can set the properties directly in the topics.conf or queues.conf file or by means of the setprop topic or setprop queue command in the EMS Administrator Tool.

1)   Failsafe

The failsafe property determines whether the server writes persistent messages to disk synchronously or asynchronously.

Ø  When failsafe is not set, messages are written to the file on disk in asynchronous mode to obtain maximum performance. In this mode, the data may remain in system buffers for a short time before it is written to disk and it is possible that, in case of software or hardware failure, some data could be lost without the possibility of recovery

Ø  In failsafe mode, all data for that queue or topic are written into external storage in synchronous mode. In synchronous mode, a write operation is not complete until the data is physically recorded on the external device

The failsafe property ensures that no messages are ever lost in case of server failure

2) Secure

Ø  When the secure property is enabled for a destination, it instructs the server to check user permissions whenever a user attempts to perform an operation on that destination.

Ø  If the secure property is not set for a destination, the server does not check permissions for that destination and any authenticated user can perform any operation on that topic or queue.

3)   Maxbytes

Ø  Topics and queues can specify the maxbytes property in the form:

maxbytes=value [KB|MB|GB]                   Ex: maxbytes=1000MB

Ø  For queues, maxbytes defines the maximum size (in bytes) that the queue can store, summed over all messages in the queue. Should this limit be exceeded, messages will be rejected by the server and the message producers’ send calls will return an error

Ø  If maxbytes is zero, or is not set, the server does not limit the memory allocation for the queue


4) maxmsgs

Ø  The maxmsgs property has the form maxmsgs=value, where value defines the maximum number of messages that can be waiting in a queue. When adding a message would exceed this limit, the server does not accept the message into storage, and the message producer’s send call returns an error.

Ø  If maxmsgs is zero, or is not set, the server does not limit the number of messages in the queue.

Ø  You can set both maxmsgs and maxbytes properties on the same queue. Exceeding either limit causes the server to reject new messages until consumers reduce the queue size to below these limits.

5) OverflowPolicy

Topics and queues can specify the overflowPolicy property to change the effect of exceeding the message capacity established by either maxbytes or maxmsgs.

o   overflowPolicy=default | discardOld | rejectIncoming

  I. default

Ø  For topics, default specifies that messages are sent to subscribers, regardless of maxbytes or maxmsgs setting.

Ø  For queues, default specifies that new messages are rejected by the server and an error is returned to the producer if the established maxbytes or maxmsgs value has been exceeded.

  II. discardOld

Ø  For topics, discardOld specifies that, if any of the subscribers have an outstanding number of undelivered messages on the server that are over the message limit, the oldest messages are discarded before they are delivered to the subscriber.

Ø  The discardOld setting impacts subscribers individually. For example, you might have three subscribers to a topic, but only one subscriber exceeds the message limit. In this case, only the oldest messages for the one subscriber are discarded, while the other two subscribers continue to receive all of their messages.

Ø  For queues, discardOld specifies that, if messages on the queue have exceeded the maxbytes or maxmsgs value, the oldest messages are discarded from the queue and an error is returned to the message producer

  III. rejectIncoming

Ø  For topics, rejectIncoming specifies that, if any of the subscribers have an outstanding number of undelivered messages on the server that are over the message limit, all new messages are rejected and an error is returned to the producer.

Ø  For queues, rejectIncoming specifies that, if messages on the queue have exceeded the maxbytes or maxmsgs value, all new messages are rejected and an error is returned to the producer.
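Putting the size limits and overflow policy together, a queues.conf entry might look like this (the queue name and values are purely illustrative):

```
# queues.conf (illustrative)
sample.queue  secure,failsafe,maxbytes=100MB,maxmsgs=50000,overflowPolicy=rejectIncoming
```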

6) global

Ø  Messages destined for a topic or queue with the global property set are routed to the other servers that are participating in routing with this server.

You can set global using the form:   global
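For example, a hypothetical queues.conf entry enabling routing might read:

```conf
# queues.conf -- messages on this queue are routed to participating servers
routed.queue  global
```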

7) sender_name

Ø  The sender_name property specifies that the server may include the sender’s username for messages sent to this destination.

You can set sender_name using the form:    sender_name

8) sender_name_enforced

Ø  The sender_name_enforced property specifies that messages sent to this destination must include the sender’s user name. The server retrieves the user name of the message producer using the same procedure described in the sender_name property above. However, unlike the sender_name property, there is no way for message producers to override this property.

You can set sender_name_enforced using the form:    sender_name_enforced

Ø  If the sender_name property is also set on the destination, this property overrides the sender_name property.
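A sketch of both properties in queues.conf (the queue names are hypothetical):

```conf
# queues.conf -- sender_name lets producers opt out; sender_name_enforced does not,
# and it overrides sender_name when both are present
plain.queue  sender_name
audit.queue  sender_name_enforced
```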

9) FlowControl

Ø  The flowControl property specifies the target maximum size the server can use to store pending messages for the destination. Should the number of pending messages exceed this maximum, the server slows producers down to the rate required by the message consumers. This is useful when message producers send messages much more quickly than message consumers can consume them.

If you specify the flowControl property without a value, the target maximum is set to 256KB.

Ø  The flow_control parameter in tibemsd.conf file must be set to enable before the value in this property is enforced by the server. See Flow Control for more information about flow control.
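Putting the two settings together, a minimal sketch (the destination name and size are hypothetical):

```conf
# tibemsd.conf -- flow control must be enabled server-wide first
flow_control = enabled
```

```conf
# queues.conf -- throttle producers once roughly 5MB of messages are pending
fast.producer.queue  flowControl=5MB
```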

10) trace

Ø  Specifies that tracing should be enabled for this destination.

o    You can set trace using the form:    trace [=body]

Ø  Specifying trace (without =body) generates trace messages that include only the message sequence and message ID. Specifying trace=body generates trace messages that include the message body.
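For instance (queue names hypothetical):

```conf
# queues.conf -- sequence/message ID only vs. full message body tracing
traced.queue         trace
verbose.trace.queue  trace=body
```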

11) import

Ø  The import property allows messages published by an external system to be received by an EMS destination (a topic or a queue), as long as the transport to the external system is configured.

o    You can set import using the form:    import="list"

12) export

Ø  The export property allows messages published by a client to a topic to be exported to the external systems with configured transports.

o    You can set export using the form:    export="list"

Ø  The export property is supported only for topics, not queues.
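A hedged sketch of both directions, assuming a transport named RV01 has already been configured for the server:

```conf
# topics.conf -- bridge an EMS topic to/from an external system
inbound.topic   import="RV01"
outbound.topic  export="RV01"
```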

13) maxRedelivery

Ø  The maxRedelivery property specifies the number of attempts the server should make to redeliver a message sent to a queue.

o    You can set maxRedelivery using the form:    maxRedelivery=count

Ø  where count is an integer between 2 and 255 that specifies the maximum number of times a message can be delivered to receivers. A value of zero disables maxRedelivery, so there is no maximum.

Ø  Once the server has attempted to deliver the message the specified number of times, the message is either destroyed or, if the JMS_TIBCO_PRESERVE_UNDELIVERED property on the message is set to true, the message is placed on the undelivered queue so it can be handled by a special consumer
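As an example (queue name hypothetical):

```conf
# queues.conf -- give up after 5 delivery attempts; a message whose
# JMS_TIBCO_PRESERVE_UNDELIVERED property is true then moves to
# $sys.undelivered instead of being destroyed
retry.queue  maxRedelivery=5
```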

Undelivered Message Queue

If a message expires or has exceeded the value specified by the maxRedelivery property on a queue, the server checks the message’s JMS_TIBCO_PRESERVE_UNDELIVERED property. If
JMS_TIBCO_PRESERVE_UNDELIVERED is set to true, the server moves the message to the undelivered message queue, $sys.undelivered. This undelivered message queue is a system queue that is always present and cannot be deleted. If JMS_TIBCO_PRESERVE_UNDELIVERED is set to false, the message will be deleted by the server.

14) exclusive

Ø  The exclusive property is available for queues only (not for topics).

Ø  When exclusive is set for a queue, the server sends all messages on that queue to one consumer. No other consumers can receive messages from the queue. Instead, these additional consumers act in a standby role; if the primary consumer fails, the server selects one of the standby consumers as the new primary, and begins delivering messages to it.

Ø  By default, exclusive is not set for queues and the server distributes messages in a round-robin—one to each receiver that is ready. If any receivers are still ready to accept additional messages, the server distributes another round of messages—one to each receiver that is still ready. When none of the receivers are ready to receive more messages, the server waits until a queue receiver reports that it can accept a message.
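A minimal sketch (queue name hypothetical):

```conf
# queues.conf -- one active consumer receives everything;
# the others stand by and take over if it fails
failover.queue  exclusive
```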

15) prefetch

The message consumer portion of a client and the server cooperate to regulate fetching according to the prefetch property. The prefetch property applies to both topics and queues.

You can set prefetch using the form:  prefetch=value

where value is one of: 2 or more, 1, 0, or none.

The values behave as follows:

Ø  2 or more: The message consumer automatically fetches messages from the server. The message consumer never fetches more than the number of messages specified by value.

Ø  1: The message consumer automatically fetches messages from the server, initiating a fetch only when it does not currently hold a message.

Ø  none: Disables automatic fetch. That is, the message consumer initiates a fetch only when the client calls receive, either as an explicit synchronous call or an implicit call (in an asynchronous consumer). This value cannot be used with topics or global queues.

Ø  0: The destination inherits the prefetch value from a parent destination with a matching name. If it has no parent, or no destination in the parent chain sets a value for prefetch, then the default value is 5 for queues and 64 for topics.

Ø  When a destination does not set any value for prefetch (i.e., the prefetch value is empty), the default value is 0 (zero; that is, inherit the prefetch value).
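The inheritance behaviour can be sketched in queues.conf (names and values are hypothetical):

```conf
# queues.conf -- prefetch tuning
bulk.>        prefetch=10    # parent destination
bulk.child    prefetch=0     # 0 = inherit, so this queue effectively gets 10
strict.queue  prefetch=none  # fetch only on receive(); not valid for topics or global queues
```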

16) expiration                                                                                    

Ø  If an expiration property is set for a destination, when the server delivers a message to that destination, the server overrides the JMSExpiration value set by the producer in the message header with the time specified by the expiration property.

o    You can set the expiration property for any queue and any topic using the form:

expiration=time [msec|sec|min|hour|day]


Ø  where time is a numeric value in the given units (seconds if no unit is specified). Zero is a special value that indicates messages to the destination never expire.
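For example (queue names hypothetical):

```conf
# queues.conf -- override the producer-set JMSExpiration
short.lived.queue   expiration=30sec
keep.forever.queue  expiration=0      # 0 = messages never expire
```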

TIBCO Administrator – Error (Core Dump Error)

Sometimes the Administrator process on a UNIX platform stops intermittently, and then in the following location,


file you will see a core dump error similar to this:

# A fatal error has been detected by the Java Runtime Environment:
# SIGSEGV (0xb) at pc=0x00007efcdb723df8, pid=12496, tid=139624169486080
# JRE version: Java(TM) SE Runtime Environment (8.0_51-b16) (build 1.8.0_51-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try “ulimit -c unlimited” before starting Java again
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp

————— T H R E A D —————

Current thread (0x00000000023a8800): JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000

RAX=0x0000000000000000, RBX=0x00007efc901723e0, RCX=0x00007efc9016e890, RDX=0x0000000000000041
RSP=0x00007efcc905f650, RBP=0x00007efcc905f6c0, RSI=0x00007efcc9060f50, RDI=0x00007efc90b937a0
R8 =0x000000000000009a, R9 =0x0000000000000003, R10=0x0000000000000003, R11=0x0000000000000000
R12=0x0000000000000004, R13=0x0000000000000000, R14=0x0000000000000002, R15=0x00007efc90b937a0
RIP=0x00007efcdb723df8, EFLAGS=0x0000000000010246, CSGSFS=0x0000000000000033, ERR=0x0000000000000004

Top of Stack: (sp=0x00007efcc905f650)
0x00007efcc905f650: 01007efcc905f6c0 00007efcc9060f50
0x00007efcc905f660: 0000003dc905f870 00007efc9012d900
0x00007efcc905f670: 0000000100000002 ffffffff00000002
0x00007efcc905f680: 00007efc98009f40 00007efc90171d30
0x00007efcc905f690: 0000023ac9061038 00007efcc9060f50
0x00007efcc905f6a0: 0000000000000222 0000000000000090
0x00007efcc905f6b0: 00007efcc9061038 0000000000000222
0x00007efcc905f6c0: 00007efcc905f930 00007efcdb72705a
0x00007efcc905f6d0: 00007efcc905f750 00007efcc905f870
0x00007efcc905f6e0: 00007efcc905f830 00007efcc905f710
0x00007efcc905f6f0: 00007efcc905f7d0 00007efcc905f8a0
0x00007efcc905f700: 00007efcc9060f50 0000001200000117
0x00007efcc905f710: 00007efcdc2544b0 00007efc0000000c
0x00007efcc905f720: 00007efcc9061dd0 00007efcc9060f50
0x00007efcc905f730: 0000000000000807 00007efc9044a2e0
0x00007efcc905f740: 00007efc9012c610 0000000000000002
0x00007efcc905f750: 00007efcc905f820 00007efcdbae2ea3
0x00007efcc905f760: 000007c000000010 00007efcc90610d8
0x00007efcc905f770: 0000000000000028 ffffffe80000000e
0x00007efcc905f780: 00007efc9076dd60 00007efcc9061080
0x00007efcc905f790: 0000001100001f00 0000001a00000011
0x00007efcc905f7a0: 0000000100000001 00007efc903a1c78
0x00007efcc905f7b0: 00007efc908213e0 00007efcdbd7cd46
0x00007efcc905f7c0: 0000000000000008 00007efcdbd7cc97
0x00007efcc905f7d0: 00007efc00000009 00007efcc9061dd0
0x00007efcc905f7e0: 00007efc909059f0 00007efc900537a0
0x00007efcc905f7f0: 00007efc9040b7c0 00007efc9040c130
0x00007efcc905f800: 00007efc9081d280 00007efcc9061080
0x00007efcc905f810: 00007efcc9061060 0000000000000222
0x00007efcc905f820: 00007efcc905f870 00007efcdb8f8831
0x00007efcc905f830: 00007efc0000000b 00007efcc9061dd0
0x00007efcc905f840: 00007efc906cbd70 00007efcc9060f00

Instructions: (pc=0x00007efcdb723df8)
0x00007efcdb723dd8: 18 00 48 c7 c0 ff ff ff ff 4c 89 ff 49 0f 44 c7
0x00007efcdb723de8: 48 89 43 18 49 8b 07 ff 90 80 00 00 00 49 89 c5
0x00007efcdb723df8: 8b 00 21 43 38 41 8b 45 04 21 43 3c 4c 89 ff 41
0x00007efcdb723e08: 8b 45 08 21 43 40 41 8b 45 0c 21 43 44 41 8b 45

Register to memory mapping:

RAX=0x0000000000000000 is an unknown value
RBX=0x00007efc901723e0 is an unknown value
RCX=0x00007efc9016e890 is an unknown value
RDX=0x0000000000000041 is an unknown value
RSP=0x00007efcc905f650 is pointing into the stack for thread: 0x00000000023a8800
RBP=0x00007efcc905f6c0 is pointing into the stack for thread: 0x00000000023a8800
RSI=0x00007efcc9060f50 is pointing into the stack for thread: 0x00000000023a8800
RDI=0x00007efc90b937a0 is an unknown value
R8 =0x000000000000009a is an unknown value
R9 =0x0000000000000003 is an unknown value
R10=0x0000000000000003 is an unknown value
R11=0x0000000000000000 is an unknown value
R12=0x0000000000000004 is an unknown value
R13=0x0000000000000000 is an unknown value
R14=0x0000000000000002 is an unknown value
R15=0x00007efc90b937a0 is an unknown value
Stack: [0x00007efcc8f63000,0x00007efcc9064000], sp=0x00007efcc905f650, free space=1009k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
V [libjvm.so+0x40805a] PhaseChaitin::Register_Allocate()+0x71a
V [libjvm.so+0x49abe0] Compile::Code_Gen()+0x260
V [libjvm.so+0x49e032] Compile::Compile(ciEnv*, C2Compiler*, ciMethod*, int, bool, bool, bool)+0x14b2
V [libjvm.so+0x3ebeb8] C2Compiler::compile_method(ciEnv*, ciMethod*, int)+0x198
V [libjvm.so+0x4a843a] CompileBroker::invoke_compiler_on_method(CompileTask*)+0xc9a
V [libjvm.so+0x4a93e6] CompileBroker::compiler_thread_loop()+0x5d6
V [libjvm.so+0xa5cbcf] JavaThread::thread_main_inner()+0xdf
V [libjvm.so+0xa5ccfc] JavaThread::run()+0x11c
V [libjvm.so+0x911048] java_start(Thread*)+0x108
C [libpthread.so.0+0x7aa1]
Current CompileTask:
C2:77124624 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)
————— P R O C E S S —————

Java Threads: ( => current thread )
0x00007efcb8024000 JavaThread “Thread-41” daemon [_thread_blocked, id=13990, stack(0x00007efc372f5000,0x00007efc373f6000)]
0x00007efcac372000 JavaThread “http-bio-8989-exec-68” daemon [_thread_blocked, id=13853, stack(0x00007efc3ca41000,0x00007efc3cb42000)]
0x00007efc4002d000 JavaThread “http-bio-8989-exec-67” daemon [_thread_blocked, id=13837, stack(0x00007efc376f9000,0x00007efc377fa000)]
0x00007efc945b6800 JavaThread “http-bio-8989-exec-66” daemon [_thread_blocked, id=13828, stack(0x00007efc37cfd000,0x00007efc37dfe000)]
0x00007efc40028000 JavaThread “http-bio-8989-exec-65” daemon [_thread_blocked, id=13228, stack(0x00007efc374f7000,0x00007efc375f8000)]
0x00007efc993ba800 JavaThread “http-bio-8989-exec-64” daemon [_thread_blocked, id=13227, stack(0x00007efc3ce45000,0x00007efc3cf46000)]
0x00007efcc4012000 JavaThread “http-bio-8989-exec-63” daemon [_thread_blocked, id=13218, stack(0x00007efc3c63f000,0x00007efc3c740000)]
0x00007efc50006000 JavaThread “http-bio-8989-exec-62” daemon [_thread_blocked, id=13217, stack(0x00007efc373f6000,0x00007efc374f7000)]
0x00007efca800c000 JavaThread “http-bio-8989-exec-61” daemon [_thread_blocked, id=13216, stack(0x00007efc3c13a000,0x00007efc3c23b000)]
0x00007efc68004000 JavaThread “http-bio-8989-exec-60” daemon [_thread_blocked, id=13215, stack(0x00007efc3d34a000,0x00007efc3d44b000)]
0x00007efcb0006800 JavaThread “http-bio-8989-exec-59” daemon [_thread_blocked, id=13214, stack(0x00007efc3d54c000,0x00007efc3d64d000)]
0x00007efca8044800 JavaThread “http-bio-8989-exec-58” daemon [_thread_blocked, id=13213, stack(0x00007efc375f8000,0x00007efc376f9000)]
0x00007efc902c5800 JavaThread “http-bio-8989-exec-57” daemon [_thread_blocked, id=13212, stack(0x00007efc36ef1000,0x00007efc36ff2000)]
0x00007efcb4010800 JavaThread “http-bio-8989-exec-56” daemon [_thread_blocked, id=13211, stack(0x00007efc3d148000,0x00007efc3d249000)]
0x00007efc4408c800 JavaThread “http-bio-8989-exec-55” daemon [_thread_blocked, id=13210, stack(0x00007efc3e053000,0x00007efc3e154000)]
0x00007efcb4036800 JavaThread “http-bio-8989-exec-54” daemon [_thread_blocked, id=13201, stack(0x00007efc371f4000,0x00007efc372f5000)]
0x00007efcb0018800 JavaThread “http-bio-8989-exec-53” daemon [_thread_blocked, id=13200, stack(0x00007efc3c23b000,0x00007efc3c33c000)]
0x00007efc6c1e1000 JavaThread “http-bio-8989-exec-52” daemon [_thread_blocked, id=13199, stack(0x00007efc3c43d000,0x00007efc3c53e000)]
0x00007efc58005000 JavaThread “http-bio-8989-exec-51” daemon [_thread_blocked, id=13198, stack(0x00007efc3c039000,0x00007efc3c13a000)]
0x00007efc74006800 JavaThread “http-bio-8989-exec-50” daemon [_thread_blocked, id=13197, stack(0x00007efc9da1e000,0x00007efc9db1f000)]
0x00007efc54005800 JavaThread “AMI Worker 2” daemon [_thread_blocked, id=13120, stack(0x00007efcc2253000,0x00007efcc2354000)]
0x00007efc54003800 JavaThread “AMI Worker 1” daemon [_thread_blocked, id=13119, stack(0x00007efc36df0000,0x00007efc36ef1000)]
0x00007efcbc02f800 JavaThread “http-bio-8989-exec-49” daemon [_thread_blocked, id=13091, stack(0x00007efc37afb000,0x00007efc37bfc000)]
0x00007efc80083000 JavaThread “http-bio-8989-exec-48” daemon [_thread_blocked, id=13090, stack(0x00007efc3cd44000,0x00007efc3ce45000)]
0x00007efc4c01b000 JavaThread “http-bio-8989-exec-47” daemon [_thread_blocked, id=13089, stack(0x00007efc3d44b000,0x00007efc3d54c000)]
0x00007efca403a800 JavaThread “http-bio-8989-exec-46” daemon [_thread_blocked, id=13088, stack(0x00007efc36cef000,0x00007efc36df0000)]
0x00007efcc4023000 JavaThread “http-bio-8989-exec-45” daemon [_thread_blocked, id=13087, stack(0x00007efc3d249000,0x00007efc3d34a000)]
0x00007efc8468a000 JavaThread “http-bio-8989-exec-44” daemon [_thread_blocked, id=13086, stack(0x00007efc3d64d000,0x00007efc3d74e000)]
0x000000000252f800 JavaThread “http-bio-8989-AsyncTimeout” daemon [_thread_blocked, id=13032, stack(0x00007efc3d74e000,0x00007efc3d84f000)]
0x000000000252e800 JavaThread “http-bio-8989-Acceptor-0” daemon [_thread_in_native, id=13031, stack(0x00007efc3d84f000,0x00007efc3d950000)]
0x000000000252d000 JavaThread “ContainerBackgroundProcessor[StandardEngine[Catalina]]” daemon [_thread_blocked, id=13030, stack(0x00007efc3d950000,0x00007efc3da51000)]
0x00007efc44085800 JavaThread “Thread-37” daemon [_thread_blocked, id=13029, stack(0x00007efc3f0f5000,0x00007efc3f1f6000)]
0x00007efc78971000 JavaThread “Thread-36” daemon [_thread_blocked, id=13028, stack(0x00007efc3de51000,0x00007efc3df52000)]
0x00007efc78967000 JavaThread “Thread-33” daemon [_thread_blocked, id=13025, stack(0x00007efc3e154000,0x00007efc3e255000)]
0x00007efc788fe000 JavaThread “Tibrv Dispatcher” daemon [_thread_in_native, id=13024, stack(0x00007efc3e255000,0x00007efc3e356000)]
0x00007efc788fa800 JavaThread “RVAgentManagerTransport dispatch thread” daemon [_thread_in_native, id=13023, stack(0x00007efc3e356000,0x00007efc3e457000)]
0x00007efc788f8000 JavaThread “RacSubManager” daemon [_thread_blocked, id=13022, stack(0x00007efc3e457000,0x00007efc3e558000)]
0x00007efc788d8800 JavaThread “InitialListTimer” daemon [_thread_blocked, id=13021, stack(0x00007efc3e558000,0x00007efc3e659000)]
0x00007efc788d7000 JavaThread “RvHeartBeatTimer” daemon [_thread_blocked, id=13020, stack(0x00007efc3e659000,0x00007efc3e75a000)]
0x00007efc788d4000 JavaThread “AgentAliveMonitor dispatch thread” daemon [_thread_in_native, id=13019, stack(0x00007efc3ebf2000,0x00007efc3ecf3000)]
0x00007efc788aa000 JavaThread “AgentEventMonitor dispatch thread” daemon [_thread_in_native, id=13018, stack(0x00007efc3ecf3000,0x00007efc3edf4000)]
0x00007efc787c2800 JavaThread “Thread-30” daemon [_thread_blocked, id=13017, stack(0x00007efc3ea2b000,0x00007efc3eb2c000)]
0x00007efc4c009800 JavaThread “Thread-29(HawkConfig)” daemon [_thread_blocked, id=13016, stack(0x00007efc3eff4000,0x00007efc3f0f5000)]
0x00007efc4c007800 JavaThread “Thread-28” daemon [_thread_in_native, id=13015, stack(0x00007efc3f1f6000,0x00007efc3f2f7000)]
0x00007efc7827b800 JavaThread “Thread-27” daemon [_thread_blocked, id=13012, stack(0x00007efc3f2f7000,0x00007efc3f3f8000)]
0x00007efc780fb800 JavaThread “Thread-26(HawkConfig)” daemon [_thread_blocked, id=13011, stack(0x00007efc3f5f8000,0x00007efc3f6f9000)]
0x00007efc780f9800 JavaThread “Thread-25” daemon [_thread_in_native, id=13010, stack(0x00007efc3f6f9000,0x00007efc3f7fa000)]
0x00007efc78060000 JavaThread “Thread-24(MonitoringManagement)” daemon [_thread_blocked, id=13009, stack(0x00007efc3f7fa000,0x00007efc3f8fb000)]
0x00007efc7805e000 JavaThread “Thread-23” daemon [_thread_in_native, id=13008, stack(0x00007efc3f8fb000,0x00007efc3f9fc000)]
0x00007efc78210000 JavaThread “Thread-21(quality)” daemon [_thread_blocked, id=13007, stack(0x00007efc3f9fc000,0x00007efc3fafd000)]
0x00007efc7820f800 JavaThread “Thread-20” daemon [_thread_in_native, id=13006, stack(0x00007efc3fafd000,0x00007efc3fbfe000)]
0x00007efc6c1d6800 JavaThread “Thread-17” daemon [_thread_blocked, id=13003, stack(0x00007efc5d6fb000,0x00007efc5d7fc000)]
0x00007efc6c1ff800 JavaThread “CommitQueue4_0” daemon [_thread_in_native, id=13002, stack(0x00007efc3fbfe000,0x00007efc3fcff000)]
0x00007efc6c1fd000 JavaThread “NormalQueue4_2” daemon [_thread_in_native, id=13001, stack(0x00007efc3feff000,0x00007efc40000000)]
0x00007efc6c1fb000 JavaThread “NormalQueue4_1” daemon [_thread_in_native, id=13000, stack(0x00007efc5c0e9000,0x00007efc5c1ea000)]
0x00007efc6c1f9000 JavaThread “NormalQueue4_0” daemon [_thread_in_native, id=12999, stack(0x00007efc5c1ea000,0x00007efc5c2eb000)]
0x00007efc6c1f5000 JavaThread “CommitQueue3_0” daemon [_thread_in_native, id=12998, stack(0x00007efc5c2eb000,0x00007efc5c3ec000)]
0x00007efc6c1f3000 JavaThread “NormalQueue3_2” daemon [_thread_in_native, id=12997, stack(0x00007efc5c3ec000,0x00007efc5c4ed000)]
0x00007efc6c1f1800 JavaThread “NormalQueue3_1” daemon [_thread_in_native, id=12996, stack(0x00007efc5c4ed000,0x00007efc5c5ee000)]
0x00007efc6c1ef800 JavaThread “NormalQueue3_0” daemon [_thread_in_native, id=12995, stack(0x00007efc5c5ee000,0x00007efc5c6ef000)]
0x00007efc6c1eb800 JavaThread “CommitQueue2_0” daemon [_thread_in_native, id=12994, stack(0x00007efc5c6ef000,0x00007efc5c7f0000)]
0x00007efc6c1ea000 JavaThread “NormalQueue2_2” daemon [_thread_in_native, id=12993, stack(0x00007efc5c7f0000,0x00007efc5c8f1000)]
0x00007efc6c1e9800 JavaThread “NormalQueue2_1” daemon [_thread_in_native, id=12992, stack(0x00007efc5c8f1000,0x00007efc5c9f2000)]
0x00007efc6c1de800 JavaThread “NormalQueue2_0” daemon [_thread_in_native, id=12991, stack(0x00007efc5c9f2000,0x00007efc5caf3000)]
0x00007efc6c124000 JavaThread “Thread-16(AUTH_quality)” daemon [_thread_blocked, id=12989, stack(0x00007efc5ccf3000,0x00007efc5cdf4000)]
0x00007efc6c116800 JavaThread “Thread-15” daemon [_thread_in_native, id=12988, stack(0x00007efc5cdf4000,0x00007efc5cef5000)]
0x00007efc6c101000 JavaThread “Timer-0” daemon [_thread_blocked, id=12987, stack(0x00007efc5cef5000,0x00007efc5cff6000)]
0x00007efc6c0f7800 JavaThread “net.sf.ehcache.CacheManager@5a6a4d5b” daemon [_thread_blocked, id=12986, stack(0x00007efc5cff6000,0x00007efc5d0f7000)]
0x00007efc6c0c6000 JavaThread “CommitQueue1_0” daemon [_thread_in_native, id=12985, stack(0x00007efc5d0f7000,0x00007efc5d1f8000)]
0x00007efc6c0c4000 JavaThread “NormalQueue1_2” daemon [_thread_in_native, id=12984, stack(0x00007efc5d1f8000,0x00007efc5d2f9000)]
0x00007efc6c0c2800 JavaThread “NormalQueue1_1” daemon [_thread_in_native, id=12983, stack(0x00007efc5d2f9000,0x00007efc5d3fa000)]
0x00007efc6c0bd800 JavaThread “NormalQueue1_0” daemon [_thread_in_native, id=12982, stack(0x00007efc5d3fa000,0x00007efc5d4fb000)]
0x00007efc6c09e800 JavaThread “HB-1” daemon [_thread_in_native, id=12980, stack(0x00007efc9c013000,0x00007efc9c114000)]
0x00007efc6c09b800 JavaThread “HB-0” daemon [_thread_in_native, id=12979, stack(0x00007efc9c114000,0x00007efc9c215000)]
0x00007efc6c09a000 JavaThread “SYNC-0” daemon [_thread_in_native, id=12978, stack(0x00007efc9c215000,0x00007efc9c316000)]
0x00007efc6c067800 JavaThread “ImstMgmt” daemon [_thread_in_native, id=12973, stack(0x00007efc9c316000,0x00007efc9c417000)]
0x00007efc6c028800 JavaThread “HawkImplantDisp” daemon [_thread_in_native, id=12972, stack(0x00007efc9c417000,0x00007efc9c518000)]
0x00007efc781a1800 JavaThread “Thread-11” daemon [_thread_blocked, id=12967, stack(0x00007efc9cf19000,0x00007efc9d01a000)]
0x00000000030fe000 JavaThread “GC Daemon” daemon [_thread_blocked, id=12540, stack(0x00007efcc80bc000,0x00007efcc81bd000)]
0x00000000023bf800 JavaThread “Service Thread” daemon [_thread_blocked, id=12510, stack(0x00007efcc8d61000,0x00007efcc8e62000)]
0x00000000023aa800 JavaThread “C1 CompilerThread2” daemon [_thread_blocked, id=12509, stack(0x00007efcc8e62000,0x00007efcc8f63000)]
=>0x00000000023a8800 JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]
0x00000000023a5800 JavaThread “C2 CompilerThread0” daemon [_thread_blocked, id=12507, stack(0x00007efcc9064000,0x00007efcc9165000)]
0x00000000023a4000 JavaThread “Signal Dispatcher” daemon [_thread_blocked, id=12506, stack(0x00007efcc9165000,0x00007efcc9266000)]
0x000000000236c000 JavaThread “Finalizer” daemon [_thread_blocked, id=12505, stack(0x00007efcc9266000,0x00007efcc9367000)]
0x000000000236a000 JavaThread “Reference Handler” daemon [_thread_blocked, id=12504, stack(0x00007efcc9367000,0x00007efcc9468000)]
0x00000000022f6800 JavaThread “main” [_thread_in_native, id=12496, stack(0x00007ffec1ae5000,0x00007ffec1be5000)]

Other Threads:
0x0000000002364800 VMThread [stack: 0x00007efcc9468000,0x00007efcc9569000] [id=12503]
0x00000000023c2800 WatcherThread [stack: 0x00007efcc8c60000,0x00007efcc8d61000] [id=12511]

VM state:not at safepoint (normal execution)

VM Mutex/Monitor currently owned by a thread: None

PSYoungGen total 46080K, used 23811K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 51% used [0x00000000f5580000,0x00000000f6ca0dc0,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

Card table byte_map: [0x00007efccb5c4000,0x00007efccb6c5000] byte_map_base: 0x00007efccaec4000

Marking Bits: (ParMarkBitMap*) 0x00007efcdc2bd660
Begin Bits: [0x00007efcc3000000, 0x00007efcc3800000)
End Bits: [0x00007efcc3800000, 0x00007efcc4000000)

Polling page: 0x00007efcdc323000

CodeCache: size=245760Kb used=27975Kb max_used=28034Kb free=217784Kb
bounds [0x00007efccba85000, 0x00007efccd625000, 0x00007efcdaa85000]
total_blobs=7417 nmethods=6888 adapters=442
compilation: enabled

Compilation events (10 events):
Event: 75538.391 Thread 0x00000000023a5800 8963 4 org.hsqldb.Expression::collectInGroupByExpressions (61 bytes)
Event: 75538.393 Thread 0x00000000023a8800 nmethod 8962 0x00007efcccac8fd0 code [0x00007efcccac9140, 0x00007efcccac92b8]
Event: 75538.394 Thread 0x00000000023a8800 8964 4 org.hsqldb.Expression::isConstant (118 bytes)
Event: 75538.397 Thread 0x00000000023a8800 nmethod 8964 0x00007efcccb62110 code [0x00007efcccb62320, 0x00007efcccb62468]
Event: 75538.398 Thread 0x00000000023a5800 nmethod 8963 0x00007efccbf127d0 code [0x00007efccbf12ae0, 0x00007efccbf12e70]
Event: 76104.554 Thread 0x00000000023a8800 8965 4 com.tibco.tibrv.TibrvMsg::writeBool (164 bytes)
Event: 76104.565 Thread 0x00000000023a8800 nmethod 8965 0x00007efccca4e190 code [0x00007efccca4e420, 0x00007efccca4e7f0]
Event: 76857.543 Thread 0x00000000023aa800 8966 1 com.tibco.uac.monitor.server.MonitorServer::access$000 (4 bytes)
Event: 76857.544 Thread 0x00000000023aa800 nmethod 8966 0x00007efccc3c07d0 code [0x00007efccc3c0920, 0x00007efccc3c0a10]
Event: 77124.580 Thread 0x00000000023a8800 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)

GC Heap History (10 events):
Event: 70999.039 GC heap before
{Heap before GC invocations=77 (full 8):
PSYoungGen total 50688K, used 48256K [0x00000000f5580000, 0x00000000f8a80000, 0x0000000100000000)
eden space 48128K, 100% used [0x00000000f5580000,0x00000000f8480000,0x00000000f8480000)
from space 2560K, 5% used [0x00000000f8800000,0x00000000f8820000,0x00000000f8a80000)
to space 3072K, 0% used [0x00000000f8480000,0x00000000f8480000,0x00000000f8780000)
ParOldGen total 122368K, used 45526K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c75800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 70999.045 GC heap after
Heap after GC invocations=77 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 72379.077 GC heap before
{Heap before GC invocations=78 (full 8):
PSYoungGen total 48128K, used 47776K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 100% used [0x00000000f5580000,0x00000000f8400000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 72379.082 GC heap after
Heap after GC invocations=78 (full 8):
PSYoungGen total 48640K, used 128K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 73744.273 GC heap before
{Heap before GC invocations=79 (full 8):
PSYoungGen total 48640K, used 47232K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 100% used [0x00000000f5580000,0x00000000f8380000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 73744.279 GC heap after
Heap after GC invocations=79 (full 8):
PSYoungGen total 47104K, used 96K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 75098.826 GC heap before
{Heap before GC invocations=80 (full 8):
PSYoungGen total 47104K, used 46688K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 100% used [0x00000000f5580000,0x00000000f8300000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 75098.831 GC heap after
Heap after GC invocations=80 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 76440.356 GC heap before
{Heap before GC invocations=81 (full 8):
PSYoungGen total 48128K, used 46240K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 100% used [0x00000000f5580000,0x00000000f8280000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 76440.360 GC heap after
Heap after GC invocations=81 (full 8):
PSYoungGen total 46080K, used 128K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

In such cases, these are the checks we need to do:

  • Check the ulimit
    • It is expected to be unlimited
  • Check the limit on the number of open files
    • You can use the following command to count the files a user has open:
    • lsof -u <user> | wc -l

    • Then check /etc/security/limits.conf for the values set for that user:
    • <user> soft nofile 350000
      <user> hard nofile 350000
      <user> soft nproc 65536
      <user> hard nproc 65536
      <user> soft stack 10240
      <user> hard stack 10240
      <user> soft sigpending 1548380
      <user> hard sigpending 1548380
  • With the limits raised, the problem should be resolved.
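The same limits can also be read programmatically as a quick sanity check; here is a minimal Python sketch using only the standard library (the /proc part is Linux-specific):

```python
# Minimal sketch: read the current process limits that the manual checks
# above inspect (ulimit / limits.conf), using only the standard library.
import os
import resource

# RLIMIT_NOFILE corresponds to "nofile" in limits.conf
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files  soft={soft} hard={hard}")

# RLIMIT_STACK corresponds to "stack"; -1 means unlimited
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print(f"stack size  soft={soft} hard={hard}")

# Count this process's open descriptors (Linux-specific /proc interface)
print("fds in use:", len(os.listdir("/proc/self/fd")))
```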

Happy Troubleshooting Guys 🙂

/dev/random vs /dev/urandom

If you want random data on a Linux/Unix-type OS, the standard way to get it is to read /dev/random or /dev/urandom. These devices are special files: they can be read like normal files, and the data they return is generated from multiple entropy sources in the system, which provide the randomness.

/dev/random blocks once the entropy pool is exhausted, and remains blocked until additional data has been collected from the available entropy sources. This can slow down random data generation.

/dev/urandom will not block. Instead it will reuse the internal pool to produce more pseudo-random bits.

/dev/urandom is best used when:

  • You just want a large file with random data for some kind of testing.
  • You are using the dd command to wipe data off a disk by replacing it with random data.
  • Almost everywhere else where you don’t have a really good reason to use /dev/random instead.

/dev/random is likely to be the better choice when:

  • Randomness is critical to the security of cryptography in your application – one-time pads, key generation.
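From code, the usual way to consume the non-blocking pool is `os.urandom`, which draws from the same kernel CSPRNG that backs /dev/urandom; a minimal sketch:

```python
# Minimal sketch: os.urandom draws from the kernel CSPRNG (the same source
# backing /dev/urandom), so it never blocks waiting for entropy.
import os

key = os.urandom(32)          # e.g. material for a 256-bit key
print(key.hex())              # different on every run

# On Linux, the kernel also exposes its entropy estimate:
with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("entropy_avail:", f.read().strip())
```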


PGM, UDP, and TRDP are the three protocols we use in the RV messaging service.

In earlier versions, RV used the PGM protocol; now we use UDP with TRDP (TIBCO Reliable Datagram Protocol) when sending messages in RV. Whenever we install RV, we need to select the protocol we want to use: either PGM, or UDP/TRDP.

TRDP is a proprietary protocol that runs on top of UDP. It is used to send acknowledgements back to the publisher in case of failures, to deliver messages in sequence, and to hide the network details.

It brings mechanisms to manage reliable message delivery in a broadcast/multicast paradigm; these include:
– message numbering
– negative acknowledgement

TRDP is used by RV. It offers three qualities of service: Reliable, Certified Messaging, and Distributed Queue. In all three, the sender stores the message. With Reliable delivery, the sender keeps the broadcast message for 60 seconds. With Certified Messaging, the sender stores the message in a ledger file until it receives confirmation from every certified receiver. With Distributed Queue, the message is stored in the process ledger. Certified Messaging and Distributed Queue also guarantee message order. Overall, TRDP assures delivery of the message.
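TRDP itself is proprietary and its wire format is not public, so the sketch below is purely conceptual: it only illustrates the two mechanisms named above (message numbering and negative acknowledgement) plus a 60-second retransmit buffer on the sender. All class and method names here are hypothetical, not TIBCO's API.

```python
# Conceptual sketch only - TRDP is proprietary, so names here are hypothetical.
# The sender numbers each message and keeps a retransmit buffer; a receiver
# that sees a gap in sequence numbers sends a negative acknowledgement (NAK)
# and the sender re-delivers the missing message from its buffer.
import time

class Sender:
    def __init__(self, retain_secs=60):
        self.seq = 0
        self.retain_secs = retain_secs
        self.buffer = {}                    # seq -> (timestamp, payload)

    def send(self, payload):
        self.seq += 1
        self.buffer[self.seq] = (time.time(), payload)
        self._expire()
        return (self.seq, payload)          # what goes on the wire

    def retransmit(self, seq):
        entry = self.buffer.get(seq)
        return (seq, entry[1]) if entry else None

    def _expire(self):
        # Drop buffered messages older than the retention window (60 s here)
        cutoff = time.time() - self.retain_secs
        self.buffer = {s: v for s, v in self.buffer.items() if v[0] >= cutoff}

class Receiver:
    def __init__(self):
        self.expected = 1

    def receive(self, packet):
        seq, payload = packet
        if seq > self.expected:             # gap detected -> NAK the missing seq
            return ("NAK", self.expected)
        if seq == self.expected:
            self.expected += 1
            return ("OK", payload)
        return ("DUP", seq)                 # already delivered

sender, receiver = Sender(), Receiver()
p1 = sender.send("m1")
p2 = sender.send("m2")                      # pretend p2 is lost in transit
print(receiver.receive(p1))                 # ('OK', 'm1')
print(receiver.receive(sender.send("m3")))  # gap: m2 missing -> ('NAK', 2)
print(receiver.receive(sender.retransmit(2)))  # ('OK', 'm2')
```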