Impermanence in Linux – Exclusive (By Hari Iyer)

Impermanence, also called Anicca or Anitya, is one of the essential doctrines of Buddhism and one of the three marks of existence. The doctrine asserts that all of conditioned existence, without exception, is “transient, evanescent, inconstant”.

On Linux, the root of all randomness is something called the kernel entropy pool. This is a large (4,096-bit) number kept privately in the kernel’s memory. There are 2^4096 possibilities for this number, so it can contain up to 4,096 bits of entropy. There is one caveat – the kernel needs to be able to fill that memory from a source with 4,096 bits of entropy. And that’s the hard part: finding that much randomness.

The entropy pool is used in two ways: random numbers are generated from it, and it is replenished with entropy by the kernel. When random numbers are generated from the pool, the entropy of the pool is diminished (because the person receiving the random number has some information about the pool itself). Because the pool’s entropy diminishes as random numbers are handed out, the pool must be replenished.

Replenishing the pool is called stirring: new sources of entropy are stirred into the mix of bits in the pool.

This is the key to how random number generation works on Linux. If randomness is needed, it’s derived from the entropy pool. When available, other sources of randomness are used to stir the entropy pool and make it less predictable. The details are a little mathematical, but it’s interesting to understand how the Linux random number generator works as the principles and techniques apply to random number generation in other software and systems.

The kernel keeps a rough estimate of the number of bits of entropy in the pool. You can check the value of this estimate through the following command:

cat /proc/sys/kernel/random/entropy_avail

A healthy Linux system with a lot of entropy available will return close to the full 4,096 bits of entropy. If the value returned is less than 200, the system is running low on entropy.
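
The same check can be done from a program. Here is a minimal C sketch (it simply reads the same /proc file shown above and warns below the 200-bit mark):

/* entropy_check.c - small sketch that reads the kernel's entropy estimate
 * and warns when it is low, mirroring the cat command above. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
    int bits = 0;

    if (f == NULL || fscanf(f, "%d", &bits) != 1) {
        perror("entropy_avail");
        if (f) fclose(f);
        return 1;
    }
    fclose(f);

    printf("entropy estimate: %d bits\n", bits);
    if (bits < 200)
        printf("warning: the system is running low on entropy\n");
    return 0;
}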

The kernel is watching you

I mentioned that the system takes other sources of randomness and uses them to stir the entropy pool. This is achieved using something called a timestamp.

Most systems have precise internal clocks. Every time a user interacts with a system, the value of the clock at that time is recorded as a timestamp. Even though the year, month, day and hour are generally guessable, the millisecond and microsecond are not, and therefore the timestamp contains some entropy. Timestamps obtained from the user’s mouse and keyboard, along with timing information from the network and disk, each have a different amount of entropy.
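
To see where that entropy comes from, here is a tiny, purely illustrative C sketch (not the kernel’s code) that grabs a nanosecond-resolution timestamp and keeps only its low-order bits, the hard-to-guess part:

/* timestamp_entropy.c - illustrative only, not the kernel's implementation.
 * Reads a high-resolution timestamp and keeps its low-order bits,
 * which are the hard-to-guess part that carries a little entropy. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);          /* nanosecond-resolution clock */

    uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    uint32_t sample = (uint32_t)(ns & 0xFFF);     /* low 12 bits: the unpredictable part */

    printf("timestamp: %llu ns, entropy sample: 0x%03x\n",
           (unsigned long long)ns, (unsigned)sample);
    return 0;
}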

How does the entropy found in a timestamp get transferred to the entropy pool? Simple: use math to mix it in. Well, simple if you like math.

Just mix it in

A fundamental property of entropy is that it mixes well. If you take two unrelated random streams and combine them, the new stream cannot have less entropy. Taking a number of low-entropy sources and combining them results in a higher-entropy source.

All that’s needed is the right combination function: a function that can be used to combine two sources of entropy. One of the simplest such functions is the logical exclusive or (XOR). This truth table shows how bits x and y coming from different random streams are combined by the XOR function:

x  y  x XOR y
0  0     0
0  1     1
1  0     1
1  1     0

Even if one source of bits does not have much entropy, there is no harm in XORing it into another source: the entropy of the result can never decrease. In the Linux kernel, a combination of XORs is used to mix timestamps into the main entropy pool.
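
Here is a minimal sketch of the idea in C (illustrative only, not the kernel’s mixing code): XOR a small, low-entropy sample into a pool of bytes. Even if the sample is biased, the pool cannot become less random.

/* xor_mix.c - illustrative sketch of XOR mixing, not the kernel's code. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* XOR a new sample into the pool, byte by byte, wrapping around the pool. */
static void mix_into_pool(uint8_t *pool, size_t pool_len,
                          const uint8_t *sample, size_t sample_len)
{
    for (size_t i = 0; i < sample_len; i++)
        pool[i % pool_len] ^= sample[i];
}

int main(void)
{
    uint8_t pool[16] = {0};                      /* stand-in for the 4,096-bit pool */
    uint8_t timestamp_bits[] = { 0x3a, 0x91 };   /* pretend low bits of a timestamp */

    mix_into_pool(pool, sizeof pool, timestamp_bits, sizeof timestamp_bits);

    for (size_t i = 0; i < sizeof pool; i++)
        printf("%02x", pool[i]);
    printf("\n");
    return 0;
}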

Generating random numbers

Cryptographic applications require very high entropy. If a 128-bit key is generated with only 64 bits of entropy then it can be guessed in 2^64 attempts instead of 2^128 attempts. That is the difference between needing a thousand computers running for a few years to brute force the key versus needing all the computers ever created running for longer than the history of the universe to do so.

Cryptographic applications require close to one bit of entropy per bit. If the system’s pool has fewer than 4,096 bits of entropy, how does the system return a fully random number? One way to do this is to use a cryptographic hash function.

A cryptographic hash function takes an input of any size and outputs a fixed size number. Changing one bit of the input will change the output completely. Hash functions are good at mixing things together. This mixing property spreads the entropy from the input evenly through the output. If the input has more bits of entropy than the size of the output, the output will be highly random. This is how highly entropic random numbers are derived from the entropy pool.

The hash function used by the Linux kernel is the standard SHA-1 cryptographic hash. By hashing the entire pool and performing some additional arithmetic, 160 random bits are created for use by the system. When this happens, the system lowers its estimate of the entropy in the pool accordingly.
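
As a rough illustration of that extraction step (this is not the kernel’s actual code; it just uses OpenSSL’s SHA1() on a stand-in buffer), hashing the pool yields 160 bits that can be handed out:

/* hash_extract.c - illustrative sketch, not the kernel's extraction code.
 * Build with: gcc hash_extract.c -lcrypto
 * Hashes a buffer that stands in for the entropy pool and prints the
 * 160-bit (20-byte) SHA-1 digest that could be handed out as random bits. */
#include <stdio.h>
#include <stddef.h>
#include <openssl/sha.h>

int main(void)
{
    unsigned char pool[512];                 /* 512 bytes = 4,096 bits, like the kernel's pool */
    unsigned char digest[SHA_DIGEST_LENGTH]; /* 20 bytes = 160 bits */

    /* Fill the fake pool with something; a real pool holds mixed-in entropy. */
    for (size_t i = 0; i < sizeof pool; i++)
        pool[i] = (unsigned char)(i * 31 + 7);

    SHA1(pool, sizeof pool, digest);

    for (size_t i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}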

Above I said that applying a hash like SHA-1 could be dangerous if there wasn’t enough entropy in the pool. That’s why it’s critical to keep an eye on the available system entropy: if it drops too low, the output of the random number generator could have less entropy than it appears to have.

Running out of entropy

One of the dangers for a system is running out of entropy. When the system’s entropy estimate drops to around the 160-bit level (the length of a SHA-1 hash), things get tricky, and how this affects programs and performance depends on which of the two Linux random number generators is used.

Linux exposes two interfaces for random data that behave differently when the entropy level is low. They are /dev/random and /dev/urandom. When the entropy pool becomes predictable, both interfaces for requesting random numbers become problematic.

When the entropy level is too low, /dev/random blocks and does not return until the level of entropy in the system is high enough. This guarantees high entropy random numbers. If /dev/random is used in a time-critical service and the system runs low on entropy, the delays could be detrimental to the quality of service.

On the other hand, /dev/urandom does not block. It continues to return the hashed value of its entropy pool even though there is little to no entropy in it. This low-entropy data is not suited for cryptographic use.
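
For completeness, this is roughly how a program reads random bytes from these interfaces (a minimal sketch; swapping /dev/random in for /dev/urandom would give the blocking behaviour described above):

/* read_urandom.c - minimal sketch of reading the kernel's random interface. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    int fd = open("/dev/urandom", O_RDONLY);   /* /dev/random would block when entropy is low */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf);
    close(fd);
    if (n != (ssize_t)sizeof buf) {
        fprintf(stderr, "short read\n");
        return 1;
    }
    for (int i = 0; i < (int)sizeof buf; i++)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}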

The solution to the problem is to simply add more entropy into the system.

Hardware random number generation to the rescue?

Intel’s Ivy Bridge family of processors has an interesting feature called “Secure Key.” These processors contain a special piece of hardware that generates random numbers. The single assembly instruction RDRAND returns allegedly high-entropy random data derived on the chip.

It has been suggested that Intel’s hardware number generator may not be fully random. Since it is baked into the silicon, that assertion is hard to audit and verify. As it turns out, even if the numbers generated have some bias, they can still help, as long as this is not the only source of randomness in the system. Even if the random number generator itself had a back door, the mixing property of randomness means that it cannot lower the amount of entropy in the pool.

On Linux, if a hardware random number generator is present, the Linux kernel will use the XOR function to mix the output of RDRAND into the hash of the entropy pool. This happens in the Linux kernel source code (the XOR operator is ^ in C).
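
A simplified sketch of the same idea in userspace (this is not the kernel’s code; it assumes an x86-64 CPU with RDRAND and a compiler providing the _rdrand64_step() intrinsic, built with -mrdrnd):

/* rdrand_mix.c - simplified sketch of XORing RDRAND output into pool-derived
 * bytes; not the kernel's implementation. Build with: gcc -mrdrnd rdrand_mix.c */
#include <stdio.h>
#include <stdint.h>
#include <immintrin.h>

int main(void)
{
    uint64_t pool_output = 0x0123456789abcdefULL; /* pretend hash output from the pool */
    unsigned long long hw = 0;

    if (_rdrand64_step(&hw)) {            /* returns 1 on success */
        pool_output ^= hw;                /* XOR can only add uncertainty, never remove it */
        printf("mixed value: %016llx\n", (unsigned long long)pool_output);
    } else {
        printf("RDRAND not available or temporarily failed\n");
    }
    return 0;
}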

Third party entropy generators

Hardware random number generation is not available everywhere, and the sources of randomness polled by the Linux kernel itself are somewhat limited. For this situation, a number of third-party random number generation tools exist. Examples are haveged, which relies on processor cache timing, and audio-entropyd and video-entropyd, which work by sampling the noise from an external audio or video input device. By mixing these additional sources of locally collected entropy into the Linux entropy pool, the entropy can only go up.


TIBCO Universal Installer – Unix – The installer is unable to run in graphical mode. Try running the installer with the -console or -silent flag (SOLVED)

Many a time, when you try to install TIBCO Rendezvous / TIBCO EMS or even certain BW plugins (that are 32-bit binaries) on a 64-bit JVM-based UNIX system (Linux / Solaris / AIX / UX / FreeBSD), you typically encounter an error like this:

[screenshot of the installer error]

Well, many people ain’t aware of the real deal to solve this issue.

After much research with permutations and combinations, there seems to be a solution for this :-

Follow the steps mentioned below for RHEL 6.xx systems (cuz I ain’t tried other *NIX platforms yet).

  1. sudo yum -y install libXtst*i686*
  2. sudo yum -y install libXext*i686*
  3. sudo yum -y install libXrender*i686*

I am damn sure it’ll work for GUI mode of installation.

java.sql.SQLRecoverableException: IO Error: Connection reset ( Designer / BWEngine / interfaceName )

Sometimes, when you create a JDBC Connection in your Designer, or when you configure a JDBC Connection in your EAR, you might end up with an error like this :-

Designer :-

[screenshot of the error in Designer]

Runtime :-

java.sql.SQLRecoverableException: IO Error: Connection reset

(In your trace file)

This happens because of how the JVM sources its randomness (/dev/random vs /dev/urandom).

/dev/random is a random number generator often used to seed cryptography functions for better security. /dev/urandom likewise is a (pseudo) random number generator. Both are good at generating random numbers. The key difference is that /dev/random has a blocking function that waits until entropy reaches a certain level before providing its result. From a practical standpoint, this means that programs using /dev/random will generally take longer to complete than those using /dev/urandom.

As for why /dev/./urandom instead of /dev/urandom: that is something unique to Java versions 5 and later, and it resulted from problems with /dev/urandom on Linux systems back in 2004. The easy fix at the time was for Java to silently treat a setting of /dev/urandom as if it were /dev/random. However, it doesn’t appear that Java will be updated to let /dev/urandom actually mean /dev/urandom, so the workaround is to fake Java out by writing the path as /dev/./urandom, which points to the same device but doesn’t match the special-cased string.

Therefore, add the following field to bwengine.tra and designer.tra, OR to your individual track’s .tra file, then restart the BWEngine or Designer and it works like Magic Johnson’s dunk.

java.extended.properties -Djava.security.egd=file:///dev/urandom

Interrupt Coalescence (also called Interrupt Moderation, Interrupt Blanking, or Interrupt Throttling)

A common bottleneck for high-speed data transfers is the high rate of interrupts that the receiving system has to process – traditionally, a network adapter generates an interrupt for each frame that it receives. These interrupts consume signaling resources on the system’s bus(es), and introduce significant CPU overhead as the system transitions back and forth between “productive” work and interrupt handling many thousands of times a second.

To alleviate this load, some high-speed network adapters support interrupt coalescence. When multiple frames are received in a short timeframe (“back-to-back”), these adapters buffer those frames locally and only interrupt the system once.

Interrupt coalescence together with large-receive offload can roughly be seen as doing on the “receive” side what transmit chaining and large-send offload (LSO) do for the “transmit” side.

Issues with interrupt coalescence

While this scheme lowers interrupt-related system load significantly, it can have adverse effects on timing, and make TCP traffic more bursty or “clumpy”. Therefore it would make sense to combine interrupt coalescence with on-board timestamping functionality. Unfortunately that doesn’t seem to be implemented in commodity hardware/driver combinations yet.

The way that interrupt coalescence works, a network adapter that has received a frame doesn’t send an interrupt to the system right away, but waits for a little while in case more packets arrive. This can have a negative impact on latency.

In general, interrupt coalescence is configured such that the additional delay is bounded. On some implementations, these delay bounds are specified in units of milliseconds, on other systems in units of microseconds. It requires some thought to find a good trade-off between latency and load reduction. One should be careful to set the coalescence threshold low enough that the additional latency doesn’t cause problems. Setting a low threshold will prevent interrupt coalescence from occurring when successive packets are spaced too far apart. But in that case, the interrupt rate will probably be low enough so that this is not a problem.

Configuration

Configuration of interrupt coalescence is highly system dependent, although there are some parameters that are more or less common over implementations.

Linux

On Linux systems with additional driver support, the ethtool -C command can be used to modify the interrupt coalescence settings of network devices on the fly.
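
Under the hood, ethtool talks to the driver through the SIOCETHTOOL ioctl. The sketch below is illustrative only (the interface name eth0 is an assumption, it uses the older ioctl interface, and not every driver supports coalescing); it reads the current coalescence settings much like ethtool -c does:

/* coalesce_read.c - illustrative sketch of querying interrupt coalescence
 * settings via the SIOCETHTOOL ioctl; eth0 is an assumed interface name. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct ethtool_coalesce ec;
    memset(&ec, 0, sizeof ec);
    ec.cmd = ETHTOOL_GCOALESCE;                   /* "get coalescing settings" */

    struct ifreq ifr;
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed interface name */
    ifr.ifr_data = (char *)&ec;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("SIOCETHTOOL");                    /* driver may not support coalescing */
        close(fd);
        return 1;
    }

    printf("rx-usecs: %u, rx-frames: %u\n",
           ec.rx_coalesce_usecs, ec.rx_max_coalesced_frames);
    close(fd);
    return 0;
}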

Some Ethernet drivers in Linux have parameters to control Interrupt Coalescence (Interrupt Moderation, as it is called in Linux). For example, the e1000 driver for the large family of Intel Gigabit Ethernet adapters has the following parameters according to the kernel documentation:

InterruptThrottleRate
limits the number of interrupts per second generated by the card. Values >= 100 are interpreted as the maximum number of interrupts per second. The default value used to be 8,000 up to and including kernel release 2.6.19. A value of zero (0) disables interrupt moderation completely. Above 2.6.19, some values between 1 and 99 can be used to select adaptive interrupt rate control. The first adaptive modes are “dynamic conservative” (1) and “dynamic with reduced latency” (3). In conservative mode (1), the rate changes between 4,000 interrupts per second when only bulk traffic (“normal-size packets”) is seen and 20,000 when small packets are present that might benefit from lower latency. In the more aggressive mode (3), “low-latency” traffic may drive the interrupt rate up to 70,000 per second. This mode is supposed to be useful for cluster communication in grid applications.
RxIntDelay
specifies, in multiples of 1,024 microseconds, the time after reception of a frame to wait for another frame to arrive before sending an interrupt.
RxAbsIntDelay
bounds the delay between reception of a frame and generation of an interrupt. It is specified in units of 1,024 microseconds. Note that InterruptThrottleRate overrides RxAbsIntDelay, so even when a very short RxAbsIntDelay is specified, the interrupt rate should never exceed the rate specified (either directly or by the dynamic algorithm) by InterruptThrottleRate.
RxDescriptors
specifies the number of descriptors to store incoming frames on the adapter. The default value is 256, which is also the maximum for some types of E1000-based adapters. Others can allocate up to 4,096 of these descriptors. The size of the receive buffer associated with each descriptor varies with the MTU configured on the adapter. It is always a power-of-two number of bytes. The number of descriptors available will also depend on the per-buffer size. When all buffers have been filled by incoming frames, an interrupt will have to be signaled in any case.

Solaris

As an example, see the Platform Notes: Sun GigaSwift Ethernet Device Driver. It lists the following parameters for that particular type of adapter:

rx_intr_pkts
Interrupt after this number of packets have arrived since the last packet was serviced. A value of zero indicates no packet blanking. (Range: 0 to 511, default=3)
rx_intr_time
Interrupt after this number of 4.5-microsecond ticks has elapsed since the last packet was serviced. A value of zero indicates no time blanking. (Range: 0 to 524287, default=1250)

TIBCO Hawk v/s TIBCO BWPM (reblogged)

A short while ago I got a question from a customer who wanted to know the differences between TIBCO Hawk and TIBCO BWPM (BusinessWorks Process Monitor), since both are monitoring products from TIBCO. In this blog I will briefly explain my point of view and recommendations about when to use which product, since in my opinion the two cannot be compared as-is.

TIBCO Hawk
Let me start by indicating that TIBCO Hawk and BWPM are not products which can be directly compared with each other. There is partial overlap in the purpose of the two products, namely gaining insight into the integration landscape, but at the same time the products are very different. TIBCO Hawk is, as we may know, a transport, distribution and monitoring product that, under the hood, allows TIBCO administrators to technically monitor the integration landscape at runtime (including server behaviour etc.) and to respond reactively to certain events by configuring so-called Hawk rules and setting up dashboards for feedback. The technical monitoring capabilities are quite extensive and are based on the information and log files made available by both the TIBCO Administrator and the various Hawk microagents. The target group of TIBCO Hawk is especially the administrators and, to a lesser extent, the developers. The focus is on monitoring the various TIBCO components (or adapters) to satisfy the corresponding SLAs, not on what is taking place within the TIBCO components from a functional point of view.

+ Very strong, comprehensive and proven tool for TIBCO administrators;
+ Reactively measure and (automatically) react to events in the landscape using Hawk rules;
– Fairly technical, and thus a much higher threshold for non-technical users;
– Offers little or no insight into the actual data processed, from a functional point of view;

TIBCO BWPM
TIBCO BWPM is a product that, from a functional point of view, provides insight at runtime at the process level; it is a branch and rebranding of the product nJAMS by Integration Matters. It may impact the way of developing (standards and guidelines): by using so-called libraries throughout development, process-specific functional information can be made available at runtime. It has a rich web interface as an alternative to the TIBCO Administrator and offers rich visual insight into all process instances and correlates them together. The target group of TIBCO BWPM is the TIBCO developers, administrators, testers and even analysts. The focus is on gaining an understanding of what is taking place within the TIBCO components from a functional point of view.

+ Very strong and comprehensive tool with a rich web interface;
+ Provides extensive logging capabilities and makes all related context and process data available;
+ Easily accessible and intuitive to use, even for non-technical users;
– Less suitable to use for the daily technical monitoring of the landscape (including server behaviour etc.);
– It is important that the product is well designed and properly parameterized to prevent performance impact (this should not be underestimated);

Conclusion
In my opinion, TIBCO BWPM is a very welcome addition to the standard TIBCO Administrator/TIBCO Hawk setup to gain insight into the related context and process data from a functional point of view. In addition, the product can also be used by TIBCO developers, administrators, testers and even analysts.

Source :-  http://www.rubix.nl

TIBCO BWPM – Missing Libraries Detected

Guys,

If at all you get an error like this

[screenshot of the Missing Libraries Detected error]

Don’t panic, simply copy the following jars into $CATALINA_HOME/lib

For EMS :-

  • jms.jar (if using EMS 8 and above, rename jms2.0.jar to jms.jar)
  • tibcrypt.jar, tibjms.jar, tibjmsadmin.jar

For Database :-

  • ojdbc.jar (rename ojdbc6.jar or ojdbc7.jar to ojdbc.jar) – ORACLE
  • mssqlserver.jar (rename sqljdbc4.jar to mssqlserver.jar) – MSSQL

/etc/security/limits.conf file – In A Nutshell

The /etc/security/limits.conf file contains a list of lines, where each line describes a limit for a user in the following form (a few sample entries are shown after the list below):

<Domain> <type> <item> <shell limit value>

Where:

  • <domain> can be:
    • a user name
    • a group name, with @group syntax
    • the wildcard *, for the default entry
    • the wildcard %, which can also be used with %group syntax, for the maxlogins limit
  • <type> can have two values:
    • “soft” for enforcing soft limits (a soft limit is like a warning)
    • “hard” for enforcing hard limits (a hard limit is the real maximum)
  • <item> can be one of the following:
    • core – limits the core file size (KB)
    • data – max data size (KB)
    • fsize – maximum file size (KB)
    • memlock – max locked-in-memory address space (KB)
    • nofile – maximum number of open file descriptors
    • rss – max resident set size (KB)
    • stack – max stack size (KB) – maximum size of the stack segment of the process
    • cpu – max CPU time (MIN)
    • nproc – maximum number of processes available to a single user
    • as – address space limit
    • maxlogins – max number of logins for this user
    • maxsyslogins – max number of logins on the system
    • priority – the priority to run user processes with
    • locks – max number of file locks the user can hold
    • sigpending – max number of pending signals
    • msgqueue – max memory used by POSIX message queues (bytes)
    • nice – max nice priority allowed to raise to
    • rtprio – max realtime priority
    • chroot – change root to directory (Debian-specific)
  • <shell limit value> is the value to apply to the limit; most items also accept -1, unlimited, or infinity to indicate no limit
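
To make the format concrete, here are a few sample entries (the names and numbers are purely illustrative, not recommendations):

*          soft    nofile    4096
*          hard    nofile    65536
@tibco     soft    nproc     2048
tibuser    hard    core      0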

  • sigpending() – examine pending signals

sigpending() returns the set of signals that are pending for delivery to the calling thread (i.e., the signals which have been raised while blocked). The mask of pending signals is returned in its set argument.

sigpending() returns 0 on success and -1 on error.
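
Here is a small illustrative C sketch of sigpending() in action: block SIGUSR1, raise it, and confirm that it shows up in the pending set.

/* sigpending_demo.c - small illustrative sketch of sigpending(2). */
#include <stdio.h>
#include <signal.h>

int main(void)
{
    sigset_t block, pending;

    /* Block SIGUSR1 so that it stays pending instead of being delivered. */
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, NULL);

    raise(SIGUSR1);                       /* signal is now raised while blocked */

    if (sigpending(&pending) == 0 && sigismember(&pending, SIGUSR1))
        printf("SIGUSR1 is pending for delivery\n");
    else
        printf("SIGUSR1 is not pending\n");

    return 0;
}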


credits :- Sagar Salunkhe