HTTP Status Codes: 1xx (Informational)

  • 100 Continue :- The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. (A client-side sketch follows after this list.)
  • 101 Switching Protocols :- The server understands and is willing to comply with the client’s request, via the Upgrade message header field, for a change in the application protocol being used on this connection. The server will switch protocols to those defined by the response’s Upgrade header field immediately after the empty line which terminates the 101 response. The protocol SHOULD be switched only when it is advantageous to do so. For example, switching to a newer version of HTTP is advantageous over older versions, and switching to a real-time, synchronous protocol might be advantageous when delivering resources that use such features.
  • 102 Processing (WebDAV) :- The 102 (Processing) status code is an interim response used to inform the client that the server has accepted the complete request but has not yet completed it. This status code SHOULD only be sent when the server has a reasonable expectation that the request will take significant time to complete. As guidance, if a method is taking longer than 20 seconds (a reasonable, but arbitrary, value) to process, the server SHOULD return a 102 (Processing) response. The server MUST send a final response after the request has been completed. Methods can potentially take a long period of time to process, especially methods that support the Depth header. In such cases the client may time out the connection while waiting for a response. To prevent this, the server may return a 102 (Processing) status code to indicate that it is still processing the method.
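To see 100 Continue from the client side, here is a minimal sketch using Java 11’s java.net.http client; the upload URL is a made-up placeholder. expectContinue(true) asks the client to send an Expect: 100-continue header, so the request body is only transmitted after the server answers with the interim 100 response.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ExpectContinueDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/upload")) // hypothetical endpoint
                .expectContinue(true) // send "Expect: 100-continue" before the body
                .POST(HttpRequest.BodyPublishers.ofString("large payload ..."))
                .build();
        // The interim 100 response is consumed by the client internally;
        // statusCode() reports the final response.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Final status: " + response.statusCode());
    }
}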

     


java.lang.OutOfMemoryError – Types and Causes

The 8 symptoms that surface them

The many thousands of java.lang.OutOfMemoryErrors that I’ve met during my career all bear one of the below eight symptoms.

This post from haritibcoblog explains what causes a particular error to be thrown, offers code examples that can cause such errors, and gives you solution guidelines for a fix.

The content is all based on my own experience.

  • java.lang.OutOfMemoryError: Java heap space
  • java.lang.OutOfMemoryError: GC overhead limit exceeded
  • java.lang.OutOfMemoryError: Permgen space
  • java.lang.OutOfMemoryError: Metaspace
  • java.lang.OutOfMemoryError: Unable to create new native thread
  • java.lang.OutOfMemoryError: Out of swap space?
  • java.lang.OutOfMemoryError: Requested array size exceeds VM limit
  • Out of memory: Kill process or sacrifice child

java.lang.OutOfMemoryError: Java heap space

Java applications are only allowed to use a limited amount of memory. This limit is specified during application startup. To make things more complex, Java memory is separated into two different regions, called Heap space and Permgen (for Permanent Generation); Java 8 replaces Permgen with Metaspace, which is covered later in this post.


The size of those regions is set during the Java Virtual Machine (JVM) launch and can be customized by specifying JVM parameters -Xmx and -XX:MaxPermSize. If you do not explicitly set the sizes, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: Java heap space error will be triggered when the application attempts to add more data into the heap space area, but there is not enough room for it.

Note that there might be plenty of physical memory available, but the java.lang.OutOfMemoryError: Java heap space error is thrown whenever the JVM reaches the heap size limit.

What is causing it?

The most common reason for the java.lang.OutOfMemoryError: Java heap space error is simple – you try to fit an XXL application into an S-sized Java heap space. That is, the application just requires more Java heap space than is available to it to operate normally. Other causes for this OutOfMemoryError message are more complex and stem from programming errors:

  • Spikes in usage/data volume. The application was designed to handle a certain amount of users or a certain volume of data. When the number of users or the volume of data suddenly spikes and crosses the expected threshold, the operation which functioned normally before the spike ceases to operate and triggers the java.lang.OutOfMemoryError: Java heap space error.

  • Memory leaks. A particular type of programming error will lead your application to constantly consume more memory. Every time the leaking functionality of the application is used, it leaves some objects behind in the Java heap space. Over time the leaked objects consume all of the available Java heap space and trigger the already familiar java.lang.OutOfMemoryError: Java heap space error. A minimal sketch of such a leak follows below.
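The sketch below is a hedged illustration of the leak case; the class name HeapFiller is made up for this example. The static list keeps every allocated array reachable, so the GC can never reclaim them. Run with a small heap, e.g. java -Xmx64m HeapFiller, it fails quickly with java.lang.OutOfMemoryError: Java heap space.

import java.util.ArrayList;
import java.util.List;

public class HeapFiller {
    // Objects added here stay reachable forever - a deliberate leak.
    private static final List<byte[]> LEAK = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            LEAK.add(new byte[1024 * 1024]); // retain 1 MB per iteration
        }
    }
}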

java.lang.OutOfMemoryError: GC overhead limit exceeded

The Java runtime environment contains a built-in Garbage Collection (GC) process. In many other programming languages, developers need to manually allocate and free memory regions so that the freed memory can be reused.

Java applications, on the other hand, only need to allocate memory. Whenever a particular space in memory is no longer used, a separate process called Garbage Collection clears the memory for them. How the GC detects that a particular part of memory is no longer in use is explained in more detail in the Garbage Collection Handbook, but you can trust the GC to do its job well.

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is displayed when your application has exhausted pretty much all the available memory and GC has repeatedly failed to clean it.

What is causing it?

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is the JVM’s way of signalling that your application spends too much time doing garbage collection with too little result. By default the JVM is configured to throw this error if it spends more than 98% of the total time doing GC and recovers less than 2% of the heap per GC cycle.

What would happen if this GC overhead limit did not exist? Note that the java.lang.OutOfMemoryError: GC overhead limit exceeded error is only thrown when less than 2% of the memory is freed after several GC cycles. This means that the small amount of heap the GC is able to clean will likely be quickly filled again, forcing the GC to restart the cleaning process. This forms a vicious cycle where the CPU is 100% busy with GC and no actual work can be done. End users of the application face extreme slowdowns – operations which normally complete in milliseconds take minutes to finish.

So the “java.lang.OutOfMemoryError: GC overhead limit exceeded” message is a pretty nice example of the fail-fast principle in action.
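A minimal sketch that tends to reproduce this error, assuming you run it with a small heap such as java -Xmx100m KeyFiller (the class name is illustrative): once the map has nearly filled the heap, each GC cycle recovers almost nothing, and the JVM eventually gives up with GC overhead limit exceeded rather than Java heap space.

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class KeyFiller {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        Random random = new Random();
        while (true) {
            // Random keys keep the map growing; almost nothing is collectible.
            map.put(random.nextInt(), "value");
        }
    }
}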

java.lang.OutOfMemoryError: Permgen space

Java applications are only allowed to use a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into several different regions.

The size of all those regions, including the permgen area, is set during the JVM launch. If you do not set the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation area in memory is exhausted.


What is causing it?

To understand the cause of the java.lang.OutOfMemoryError: PermGen space error, we first need to understand what this specific memory area is used for.

For practical purposes, the permanent generation consists mostly of class declarations loaded and stored into PermGen. This includes the name and fields of the class, methods with the method bytecode, constant pool information, object arrays and type arrays associated with a class and Just In Time compiler optimizations.

From the above definition you can deduce that the PermGen size requirements depend both on the number of classes loaded and on the size of those class declarations. Therefore we can say that the main cause for the java.lang.OutOfMemoryError: PermGen space is that either too many classes or too big classes are loaded into the permanent generation.
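A minimal sketch under an explicit assumption: on Java 6 and earlier, interned Strings were stored in PermGen, so the loop below, run with a small cap such as -XX:MaxPermSize=32m, fills the permanent generation with an endless stream of unique interned strings. (On Java 7+ interned strings moved to the heap, so there it triggers Java heap space instead.)

public class PermGenFiller {
    public static void main(String[] args) {
        long counter = 0;
        while (true) {
            // Each value is unique, so every intern() adds a new PermGen entry
            // on Java 6 and earlier.
            String.valueOf(counter++).intern();
        }
    }
}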

java.lang.OutOfMemoryError: Metaspace

Java applications are allowed to use only a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into different regions, as in the PermGen case above.

The size of all those regions, including the metaspace area, can be specified during the JVM launch. If you do not set the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: Metaspace message indicates that the Metaspace area in memory is exhausted.


What is causing it?

If you are not a newcomer to the Java landscape, you might be familiar with another concept in Java memory management called PermGen. Starting from Java 8, the memory model in Java changed significantly: a new memory area called Metaspace was introduced and Permgen was removed. This change was made for a variety of reasons, including but not limited to:

  • The required size of permgen was hard to predict. It resulted in either under-provisioning, triggering java.lang.OutOfMemoryError: PermGen space errors, or over-provisioning, resulting in wasted resources.
  • GC performance improvements, enabling concurrent class data de-allocation without GC pauses and specific iterators on metadata.
  • Support for further optimizations such as G1 concurrent class unloading.

So if you were familiar with PermGen then all you need to know as background is that – whatever was in PermGen before Java 8 (name and fields of the class, methods of a class with the bytecode of the methods, constant pool, JIT optimizations etc) – is now located in Metaspace.

As you can see, Metaspace size requirements depend both upon the number of classes loaded and upon the size of those class declarations. So the main cause for the java.lang.OutOfMemoryError: Metaspace is easy to see: either too many classes or too big classes are being loaded into the Metaspace.
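As a hedged sketch of the “too many classes” case, the loop below keeps generating and loading new classes. It assumes the javassist bytecode library is on the classpath and a Java 8+ JVM started with, say, -XX:MaxMetaspaceSize=64m; class metadata accumulates until java.lang.OutOfMemoryError: Metaspace is thrown.

import javassist.ClassPool;

public class MetaspaceFiller {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        for (long i = 0; ; i++) {
            // Each iteration defines and loads a brand-new class,
            // whose metadata lands in Metaspace.
            pool.makeClass("com.example.Generated" + i).toClass();
        }
    }
}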

java.lang.OutOfMemoryError: Unable to create new native thread

Java applications are multi-threaded by nature. What this means is that the programs written in Java can do several things (seemingly) at once. For example – even on machines with just one processor – while you drag content from one window to another, the movie played in the background does not stop just because you carry out several operations at once.

A way to think about threads is to think of them as workers to whom you can submit tasks to carry out. If you had only one worker, he or she could only carry out one task at a time. But when you have a dozen workers at your disposal, they can simultaneously fulfill several of your commands.

Now, as with workers in the physical world, threads within the JVM need some elbow room to carry out the work they are summoned to deal with. When there are more threads than there is room in memory, we have built a foundation for a problem:

The message java.lang.OutOfMemoryError: Unable to create new native thread means that the Java application has hit the limit of how many threads it can launch.


What is causing it?

You risk facing the java.lang.OutOfMemoryError: Unable to create new native thread whenever the JVM asks the OS for a new thread. Whenever the underlying OS cannot allocate a new native thread, this OutOfMemoryError is thrown. The exact limit for native threads is very platform-dependent, so we recommend finding out those limits by running a test similar to the example below. But, in general, the situation causing java.lang.OutOfMemoryError: Unable to create new native thread goes through the following phases:

  1. A new Java thread is requested by an application running inside the JVM
  2. JVM native code proxies the request to create a new native thread to the OS
  3. The OS tries to create a new native thread which requires memory to be allocated to the thread
  4. The OS will refuse native memory allocation either because the 32-bit Java process size has depleted its memory address space – e.g. (2-4) GB process size limit has been hit – or the virtual memory of the OS has been fully depleted
  5. The java.lang.OutOfMemoryError: Unable to create new native thread error is thrown.
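A minimal sketch of phases 1–5: each iteration starts a thread that just sleeps, so threads pile up until the OS refuses to create another one. Run it with care (ideally in a disposable VM or container), since it can render the machine unresponsive.

public class ThreadFiller {
    public static void main(String[] args) {
        while (true) {
            new Thread(() -> {
                try {
                    Thread.sleep(Long.MAX_VALUE); // park this thread forever
                } catch (InterruptedException ignored) {
                    // nothing to clean up in this sketch
                }
            }).start();
        }
    }
}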

 

java.lang.OutOfMemoryError: Out of swap space?

Java applications are given a limited amount of memory during startup. This limit is specified via the -Xmx and other similar startup parameters. In situations where the total memory requested by the JVM is larger than the available physical memory, the operating system starts swapping out content from memory to the hard drive.

The java.lang.OutOfMemoryError: Out of swap space? error indicates that the swap space is also exhausted and the new attempted allocation fails due to the lack of both physical memory and swap space.


What is causing it?

The java.lang.OutOfMemoryError: Out of swap space? error is thrown by the JVM when an allocation request for bytes from the native heap fails and the native heap is close to exhaustion. The message indicates the size (in bytes) of the allocation which failed and the reason for the memory request.

The problem occurs in situations where the Java processes have started swapping which, recalling that Java is a garbage-collected language, is already not a good situation. Modern GC algorithms do a good job, but when faced with latency issues caused by swapping, GC pauses tend to increase to levels not tolerable by most applications.

java.lang.OutOfMemoryError: Out of swap space? is often caused by operating system level issues, such as:

  • The operating system is configured with insufficient swap space.
  • Another process on the system is consuming all memory resources.

It is also possible that the application fails due to a native leak, for example if application or library code continuously allocates memory but never releases it to the operating system.
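A hedged sketch of such a native leak, using the JDK-internal sun.misc.Unsafe API (reached via reflection; this works on a Java 8 HotSpot JVM, but the class is internal and not guaranteed elsewhere). Each iteration allocates native memory outside the Java heap and never frees it, so the process footprint grows until physical memory and swap run out.

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class NativeLeaker {
    public static void main(String[] args) throws Exception {
        // Grab the Unsafe singleton via reflection (internal API).
        Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Unsafe unsafe = (Unsafe) theUnsafe.get(null);
        while (true) {
            unsafe.allocateMemory(1024 * 1024); // leak 1 MB of native memory
        }
    }
}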

java.lang.OutOfMemoryError: Requested array size exceeds VM limit

Java has a limit on the maximum array size your program can allocate. The exact limit is platform-specific but is generally somewhere between 1 and 2.1 billion elements.

When you face the java.lang.OutOfMemoryError: Requested array size exceeds VM limit, this means that the application that crashes with the error is trying to allocate an array larger than the Java Virtual Machine can support.

capture

What is causing it?

The error is thrown by native code within the JVM. It happens before allocating memory for an array, when the JVM performs a platform-specific check: whether the requested data structure is addressable on this platform. This error is less common than you might initially think.

The reason you only seldom face this error is that Java arrays are indexed by int. The maximum positive int in Java is 2^31 – 1 = 2,147,483,647. And the platform-specific limits can be really close to this number – for example, on my 64-bit MacBook Pro on Java 1.7 I can happily initialize arrays with up to 2,147,483,645, or Integer.MAX_VALUE-2, elements.

Increasing the length of the array by one to Integer.MAX_VALUE-1 results in the familiar OutOfMemoryError:

Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit

But the limit might not be that high – on 32-bit Linux with OpenJDK 6, you will hit the “java.lang.OutOfMemoryError: Requested array size exceeds VM limit” already when allocating an array with around 1.1 billion elements. To understand the limits of your specific environment, run a small test program like the sketch below.
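A minimal probe, following the observation above: asking for one element more than the JVM’s platform-specific maximum array length fails with this exact error. Integer.MAX_VALUE - 1 triggered it on the setups described here; on your platform the threshold may differ.

public class ArrayLimitProbe {
    public static void main(String[] args) {
        // One element above the typical HotSpot limit of Integer.MAX_VALUE - 2.
        int[] probe = new int[Integer.MAX_VALUE - 1];
        System.out.println("Allocated " + probe.length + " elements");
    }
}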

Out of memory: Kill process or sacrifice child

In order to understand this error, we need to recap some operating system basics. As you know, operating systems are built on the concept of processes. Those processes are shepherded by several kernel jobs, one of which, named the “Out of memory killer”, is of interest to us in this particular case.

This kernel job can annihilate your processes under extremely low memory conditions. When such a condition is detected, the Out of memory killer is activated and picks a process to kill. The target is picked using a set of heuristics, scoring all processes and selecting the one with the worst score to kill. The Out of memory: Kill process or sacrifice child error is thus different from the other errors covered in this post, as it is neither triggered nor proxied by the JVM; it is a safety net built into operating system kernels.

The Out of memory: kill process or sacrifice child error is generated when the available virtual memory (including swap) is consumed to the extent where the overall operating system stability is put to risk. In such case the Out of memory killer picks the rogue process and kills it.


What is causing it?

By default, Linux kernels allow processes to request more memory than currently available in the system. This makes all the sense in the world, considering that most of the processes never actually use all of the memory they allocate. The easiest comparison to this approach would be the broadband operators. They sell all the consumers a 100Mbit download promise, far exceeding the actual bandwidth present in their network. The bet is again on the fact that the users will not simultaneously all use their allocated download limit. Thus one 10Gbit link can successfully serve way more than the 100 users our simple math would permit.

A side effect of such an approach becomes visible when some of your programs are on the path to depleting the system’s memory. This can lead to extremely low memory conditions, where no pages can be allocated to processes. You might have faced such a situation, where not even the root account can kill the offending task. To prevent such situations, the killer activates and identifies the rogue process to be killed.

You can read more about fine-tuning the behaviour of “Out of memory killer” in this article from RedHat documentation.

Now that we have the context, how can you know what triggered the “killer” and woke you up at 5AM? One common trigger for the activation is hidden in the operating system configuration. When you check the configuration in /proc/sys/vm/overcommit_memory, you have the first hint – the value specified here indicates whether all malloc() calls are allowed to succeed. Note that the path to the parameter in the proc file system varies depending on the system affected by the change.

An overcommitting configuration allows the rogue process to allocate more and more memory, which can eventually trigger the “Out of memory killer” to do exactly what it is meant to do.
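A hedged way to see the killer in action, in a disposable VM only: give the JVM a heap far larger than physical RAM plus swap (for example java -Xmx8g OomKillerBait on a 2 GB machine; the class name is illustrative). The JVM never reaches its own heap limit, so allocation continues until the kernel runs out of memory and the OOM killer terminates the process; dmesg should then show the “Out of memory: Kill process” line.

import java.util.ArrayList;
import java.util.List;

public class OomKillerBait {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<>();
        while (true) {
            hog.add(new byte[10 * 1024 * 1024]); // grab 10 MB at a time
        }
    }
}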

 

Linux – Concepts – IPTABLES v/s FIREWALLD

Today we will walk through iptables and firewalld; we will learn about the history of these two, along with installation and how we can configure them for our Linux distributions.

Let’s begin without wasting any more time.

What is iptables?

First, we need to know what iptables is. Most senior IT professionals know about it and have worked with it as well. Iptables is a program that allows a user to configure the security tables provided by the Linux kernel firewall, and the chains and rules they hold, so that a user can add or remove firewall rules to meet his or her security requirements. Iptables uses different kernel modules and different protocols so that the user can take the best out of it. For example, iptables is used for IPv4 (IP version 4, 32-bit addresses) and ip6tables for IPv6 (IP version 6, 128-bit addresses), for both TCP and UDP. Normally, iptables rules are configured by a system administrator, system analyst or IT manager. You must have root privileges to execute iptables rules. The Linux kernel uses the Netfilter framework to provide various networking-related operations, which can be driven through iptables. Previously, ipchains was used in most Linux distributions for the same purpose. Every iptables rule is handled directly by the Linux kernel itself, and this is known as kernel duty. Whatever GUI tools or other security tools you use to configure your server’s firewall, at the end of the day they are converted into iptables rules and supplied to the kernel to perform the operation.

History of iptables

The rise of iptables began with netfilter. Paul “Rusty” Russell was the initial author and the head think tank behind netfilter/iptables. Later he was joined by many other tech people, who formed the Netfilter core team, which develops and maintains the netfilter/iptables project as a joint effort, like many other open source projects. Harald Welte was the leader until 2007, then Patrick McHardy was the head until 2013. Currently, the netfilter core team is headed by Pablo Neira Ayuso.


How to install iptables

Nowadays, every Linux kernel comes with iptables, and it can be found pre-built or pre-installed on every famous modern Linux distribution. On most Linux systems, iptables is installed at /usr/sbin/iptables. It can also be found in /sbin/iptables, but since iptables is more like a service than an “essential binary”, the preferred location remains /usr/sbin.

For Ubuntu or Debian

sudo apt-get install iptables

For CentOS

sudo yum install iptables-services

For RHEL

sudo yum install iptables

Iptables version

To know your iptables version, type the following command in your terminal.

sudo iptables --version

Starting & stopping your iptables firewall

For OpenSUSE 42.1, type the following to stop.

sudo /sbin/rcSuSEfirewall2 stop

To start it again

sudo /sbin/rcSuSEfirewall2 start

For Ubuntu, type the following to stop.

sudo service ufw stop

To start it again

sudo service ufw start

For Debian & RHEL , type the following to stop.

sudo /etc/init.d/iptables stop

To start it again

sudo /etc/init.d/iptables start

For CentOS, type the following to stop.

sudo service iptables stop

To start it again

sudo service iptables start

Listing all iptables rules

To see all the rules that are currently present and active in your iptables, simply open a terminal and type the following.

sudo iptables -L

If no rules exist in your iptables firewall, that is, if none have been added so far, the output will just show the empty default chains.


There are three (3) chains, INPUT, FORWARD and OUTPUT, and no rules exist yet. Actually, I haven’t added one yet.

Type the following to know the status of the chains of your iptables firewall.

sudo iptables -S

With the above command, you can see the policy of each chain, i.e. whether it is accepting by default or not.

Clear all iptables rules

To clear all the rules from your iptables firewall, please type the following. This is normally known as flushing your iptables rules.

sudo iptables -F

If you want to flush the INPUT chain only, or any individual chains, issue the below commands as per your requirements.

sudo iptables -F INPUT
sudo iptables -F OUTPUT
sudo iptables -F FORWARD

ACCEPT or DROP Chains

To accept or drop a particular chain, issue any of the following command on your terminal to meet your requirements.

iptables --policy INPUT DROP

The above rule will not accept anything that is incoming to that server. To revert it back to ACCEPT, do the following.

iptables --policy INPUT ACCEPT

Same goes for other chains as well like

iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP

Note: By default, all chains of iptables ( INPUT, OUTPUT, FORWARD ) are in ACCEPT mode. This is known as Policy Chain Default Behavior.

Allowing any port

If you are running a web server on your host, then you must configure your iptables firewall so that your server can listen and respond on port 80. By default, web servers run on port 80. Let’s do that then.

sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

On the above line, -A stands for append, meaning we are adding a new rule to the iptables list. INPUT stands for the INPUT chain, -p stands for protocol and --dport stands for destination port. By default, any web server runs on port 80. Similarly, you can allow the SSH port as well.

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

By default, SSH runs on port 22. But it’s good practice not to run SSH on port 22; always run SSH on a different port. To do that, open the /etc/ssh/sshd_config file in your favorite editor and change Port 22 to another port.

Blocking any port

Say we want to block port 135. We can do it by

sudo iptables -A INPUT -p tcp --dport 135 -j DROP

If you want to prevent your server from initiating any SSH connection to another host or server, issue the following command.

sudo iptables -A OUTPUT -p tcp --dport 22 -j DROP

By doing so, no one can use your server to initiate an SSH connection from the server. The OUTPUT chain will filter and DROP any outgoing TCP connection on port 22 towards other hosts.

Allowing specific IP with Port

sudo iptables -A INPUT -p tcp -s 0/0 --dport 22  -j ACCEPT

Here -s 0/0 stands for any incoming source, i.e. any IP address. With this rule, your server will accept a TCP packet with destination port 22 from anywhere. If you want to allow only one particular IP, use the following.

sudo iptables -A INPUT -p tcp -s 12.12.12.12/32 --dport 22  -j ACCEPT

In the above example, you are only allowing the IP address 12.12.12.12 to connect to the SSH port. The remaining IP addresses will not be able to connect to port 22. Similarly, you can allow a range by using CIDR values, such as

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j ACCEPT

The above example shows how you can allow a whole IP block to connect on port 22. It will accept connections from IPs 12.12.12.0 through 12.12.12.255.

If you want to block such IP addresses range, do the reverse by replacing ACCEPT by DROP like the following

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j DROP

So it will not allow connections on port 22 from the IP addresses 12.12.12.0 to 12.12.12.255.

Blocking ICMP

If you want to block ICMP (ping) requests to and from your server, you can try the following. The first one blocks your server from sending ICMP echo requests to other hosts.

sudo iptables -A OUTPUT -p icmp --icmp-type 8 -j DROP

Now, try to ping google.com. Your OpenSUSE server will not be able to ping google.com.

If you want to block incoming ICMP (ping) echo requests to your server, just type the following in your terminal.

sudo iptables -I INPUT -p icmp --icmp-type 8 -j DROP

Now it will not reply to any ICMP ping echo request. Say your server’s IP address is 13.13.13.13; if you ping that IP, you will see that your server does not respond to the ping request.

Blocking MySql / MariaDB Port

As MySQL holds your database, you must protect it from outside attack. Allow only your trusted application servers’ IP addresses to connect to your MySQL server. For example:

sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

So it will not accept any MySQL connection except from the 192.168.1.0/24 IP block. By default, MySQL runs on port 3306.

Blocking SMTP

If you are not running a mail server on your host, or if your server is not configured to act as a mail server, you should block SMTP so that your server cannot send spam or any mail towards any domain. This blocks any outgoing mail from your server. To do so,

sudo iptables -A OUTPUT -p tcp --dport 25 -j DROP

Block DDoS

We are all familiar with the term DDoS. To mitigate it, issue the following command in your terminal.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 20/minute --limit-burst 100 -j ACCEPT

You need to tune the numerical values to meet your requirements; the above is just a common baseline.

You can protect more by

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1800 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling 
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 1280 > /proc/sys/net/ipv4/tcp_max_syn_backlog

Blocking Port Scanning

There are hundreds of people out there scanning your server’s open ports and trying to break your server’s security. To block them:

sudo iptables -N block-scan
sudo iptables -A block-scan -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j RETURN
sudo iptables -A block-scan -j DROP

Here, block-scan is the name of a new chain.

Blocking Bad Ports

You may need to block some bad ports for your server as well. Here is how you can do this.

badport="135,136,137,138,139,445"
sudo iptables -A INPUT -p tcp -m multiport --dports $badport -j DROP
sudo iptables -A INPUT -p udp -m multiport --dports $badport -j DROP

You can add more ports according to your needs.

What is firewalld?

Firewalld provides a dynamically managed firewall with support for network/firewall zones that defines the trust level of network connections or interfaces. It has support for IPv4, IPv6 firewall settings, ethernet bridges and IP sets. There is a separation of runtime and permanent configuration options. It also provides an interface for services or applications to add firewall rules directly.

The former firewall model with system-config-firewall/lokkit was static and every change required a complete firewall restart. This also included unloading the netfilter kernel modules and loading the modules needed for the new configuration; unloading the modules broke stateful firewalling and established connections. The firewall daemon, on the other hand, manages the firewall dynamically and applies changes without restarting the whole firewall, so there is no need to reload all firewall kernel modules. However, using a firewall daemon requires that all firewall modifications be done through that daemon, to make sure that the state in the daemon and the firewall in the kernel stay in sync. The firewall daemon cannot parse firewall rules added by the iptables and ebtables command line tools. The daemon provides information about the current active firewall settings via D-BUS and also accepts changes via D-BUS using PolicyKit authentication methods.

So, firewalld uses zones and services instead of chains and rules for performing its operations, and it can manage rules dynamically, allowing updates and modifications without breaking existing sessions and connections.

It has the following features.

  • D-Bus API.
  • Timed firewall rules.
  • Rich Language for specific firewall rules.
  • IPv4 and IPv6 NAT support.
  • Firewall zones.
  • IP set support.
  • Simple log of denied packets.
  • Direct interface.
  • Lockdown: Whitelisting of applications that may modify the firewall.
  • Support for iptables, ip6tables, ebtables and ipset firewall backends.
  • Automatic loading of Linux kernel modules.
  • Integration with Puppet.


How to install firewalld

Before installing firewalld, please make sure you stop iptables and that iptables is no longer in use. To do so,

sudo systemctl stop iptables

This will stop iptables on your system.

Then make sure iptables is not used by your system anymore by issuing the below command in the terminal.

sudo systemctl mask iptables

Now, check the status of iptables.

sudo systemctl status iptables


Now, we are ready to install firewalld on to our system.

For Ubuntu

To install it on Ubuntu, you must remove UFW first and then you can install Firewalld. To remove UFW, issue the below command on the terminal.

sudo apt-get remove ufw

After removing UFW, issue the below command in the terminal

sudo apt-get install firewall-applet

Or

You can open Ubuntu Software Center and search for “firewall-applet”, then install it on your Ubuntu system.

For RHEL, CentOS & Fedora

Type the below command to install firewalld on your CentOS system.

sudo yum install firewalld firewall-config -y

How to configure firewalld

Before configuring firewalld, we must know the status of firewalld after the installation. To know that, type the following.

sudo systemctl status firewalld


As firewalld works on a zone basis, we need to check all the zones and services, though we haven’t done any configuration yet.

For Zones

sudo firewall-cmd --get-active-zones


or

sudo firewall-cmd --get-zones


To know the default zone, issue the below command

sudo firewall-cmd --get-default-zone


And, For Services

sudo firewall-cmd --get-services


The output lists the services covered by firewalld.

Setting Default Zone

An important note: after each modification, you need to reload firewalld so that your changes can take effect.

To set the default zone

sudo firewall-cmd --set-default-zone=internal

or

sudo firewall-cmd --set-default-zone=public

After changing the zone, check whether it changes or not.

sudo firewall-cmd --get-default-zone

Adding Port in Public Zone

sudo firewall-cmd --permanent --zone=public --add-port=80/tcp


This will add TCP port 80 to the public zone of firewalld. You can add your desired port as well by replacing 80 with your own.

Now reload the firewalld.

sudo firewall-cmd --reload

Now, check the status to see whether TCP port 80 has been added or not.

sudo firewall-cmd --zone=public --list-ports


You can see that TCP port 80 has been added.

Or even you can try something like this.

sudo firewall-cmd --zone=public --list-all


Removing Port from Public Zone

To remove TCP port 80 from the public zone, type the following.

sudo firewall-cmd --zone=public --remove-port=80/tcp

You will see a “success” text echoing in your terminal.

You can specify your desired port as well by replacing 80 with your own port.

Adding Services in Firewalld

To add the ftp service to firewalld, issue the below command

sudo firewall-cmd --zone=public --add-service=ftp

You will see a “success” text echoing in your terminal.

Similarly, to add the smtp service, issue the below command

sudo firewall-cmd --zone=public --add-service=smtp

Replace ftp and smtp with the service you want to add to firewalld.

Removing Services from Firewalld

For removing ftp & smtp services from firewalld, issue the below command in the terminal.

sudo firewall-cmd --zone=public --remove-service=ftp
sudo firewall-cmd --zone=public --remove-service=smtp

Block Any Incoming and Any Outgoing Packet(s)

If you wish, you can block all incoming and outgoing packets/connections using firewalld. This is known as firewalld’s “panic” mode. To do so, issue the below command.

sudo firewall-cmd --panic-on

You will see a “success” text echoing in your terminal.

After doing this, you will not be able to ping a host or even browse any websites.

To turn this off, issue the below command in your terminal.

sudo firewall-cmd --panic-off

Adding IP Address in Firewalld

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" accept'

By doing so, firewalld will accept IPv4 packets from the source IP 192.168.1.4.

Blocking IP Address From Firewalld

Similarly, to block any IP address

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" reject'

By doing so, firewalld will reject every IPv4 packet from the source IP 192.168.1.4.

I have stuck to the very basics of firewalld here so that you can easily understand its working methodology and how it differs from iptables.

That’s all for today. Hope you enjoy reading this article.

Take care.

Linux – Concepts – Ulimits & Sysctl

ulimit and sysctl

The ulimit and sysctl programs allow you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and thereby makes the system unresponsive for other users.

Code Listing 1: ulimit example

# ulimit -a 
core file size          (blocks, -c) 0 
data seg size           (kbytes, -d) unlimited 
file size               (blocks, -f) unlimited 
pending signals                 (-i) 8191 
max locked memory       (kbytes, -l) 32 
max memory size         (kbytes, -m) unlimited 
open files                      (-n) 1024 
pipe size            (512 bytes, -p) 8 
POSIX message queues     (bytes, -q) 819200 
stack size              (kbytes, -s) 8192 
cpu time               (seconds, -t) unlimited 
max user processes              (-u) 8191 
virtual memory          (kbytes, -v) unlimited 
file locks                      (-x) unlimited

All these settings can be manipulated. A good example is this bash forkbomb that forks as many processes as possible and can crash systems where no user limits are set:

Warning: Do not run this in a shell! If no limits are set your system will either become unresponsive or might even crash.

Code Listing 2: A bash forkbomb

$ :(){ :|:& };:
Refer to my post: https://haritibcoblog.com/2016/07/10/linux-fork-bomb-test-epic/

Now this is not good – any user with shell access to your box could take it down. But if that user can only start 30 processes the damage will be minimal. So let’s set a process limit:

Gentoo Note: A too small number of processes can break the use of portage. So, don’t be too strict.

Code Listing 3: Setting a process limit

# ulimit -u 30 
# ulimit -a 
… 
max user processes              (-u) 30 
…

If you try to run the forkbomb now it should run, but throw error messages “fork: resource temporarily unavailable”. This means that your system has not allowed the forkbomb to start more processes. The other options of ulimit can help with similar problems, but you should be careful that you don’t lock yourself out – setting data seg size too small will even prevent bash from starting!

sysctl is a similar tool: it allows you to configure kernel parameters at runtime. If you wish to keep settings persistent across reboots, you should edit /etc/sysctl.conf – be aware that wrong settings may break things in unforeseen ways.

Code Listing 4: Exploring sysctl variables

# sysctl -a 
… 
vm.swappiness = 60 
…

The list of variables is quite long (367 lines on my system), but I picked out vm.swappiness here. It controls how aggressive swapping will be: the higher it is (with a maximum of 100), the more swap will be used. This can affect performance a lot on systems with little memory, depending on load and other factors.

Code Listing 5: Reducing swappiness

# sysctl vm.swappiness=0 
vm.swappiness = 0

The effects of changing this setting are usually not felt instantly. But you can change many settings, especially network-related, this way. For servers this can offer a nice performance boost, but as with ulimit careless usage might cause your system to misbehave or slow down. If you don’t know what a variable controls, you should not modify it!

Linux Command – Using Netstat the Proper Way !!

How to install netstat

netstat is a useful tool for checking your network configuration and activity. It is in fact a collection of several tools lumped together.

Install “net-tools” package using yum

[root@livedvd ~]$ sudo yum install net-tools
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirror.secureax.com
* extras: centos.mirror.secureax.com
* updates: centos.mirror.secureax.com
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================
Package         Arch         Version                          Repository  Size
================================================================================
Installing:
net-tools       x86_64       2.0-0.17.20131004git.el7         base       304 k
Transaction Summary
================================================================================
Install  1 Package
Total download size: 304 k
Installed size: 917 k
Is this ok [y/d/N]: y
Downloading packages:
net-tools-2.0-0.17.20131004git.el7.x86_64.rpm              | 304 kB   00:00
Running transaction check

Running transaction test
Transaction test succeeded
Running transaction
Installing : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Verifying  : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Installed:
net-tools.x86_64 0:2.0-0.17.20131004git.el7

 

Complete!

 

The netstat Command

Displaying the Routing Table

When you invoke netstat with the -r flag, it displays the kernel routing table in the way we’ve been doing with route. On vstout, it produces:

# netstat -nr

 Kernel IP routing table
 Destination   Gateway      Genmask         Flags  MSS Window  irtt Iface
 127.0.0.1     *            255.255.255.255 UH       0 0          0 lo
 172.16.1.0    *            255.255.255.0   U        0 0          0 eth0
 172.16.2.0    172.16.1.1   255.255.255.0   UG       0 0          0 eth0

The -n option makes netstat print addresses as dotted quad IP numbers rather than the symbolic host and network names. This option is especially useful when you want to avoid address lookups over the network (e.g., to a DNS or NIS server).

The second column of netstat's output shows the gateway to which the routing entry points. If no gateway is used, an asterisk is printed instead. The third column shows the “generality” of the route, i.e., the network mask for this route. When given an IP address to find a suitable route for, the kernel steps through each of the routing table entries, taking the bitwise AND of the address and the genmask before comparing it to the target of the route.

The fourth column displays the following flags that describe the route:

G The route uses a gateway.
U The interface to be used is up.
H Only a single host can be reached through the route. For example, this is the case for the loopback entry 127.0.0.1.
D This route is dynamically created. It is set if the table entry has been generated by a routing daemon like gated or by an ICMP redirect message.
M This route is set if the table entry was modified by an ICMP redirect message.
! The route is a reject route and datagrams will be dropped.

 

The next three columns show the MSS, Window and irtt that will be applied to TCP connections established via this route. The MSS is the Maximum Segment Size and is the size of the largest datagram the kernel will construct for transmission via this route. The Window is the maximum amount of data the system will accept in a single burst from a remote host. The acronym irtt stands for “initial round trip time.” The TCP protocol ensures that data is reliably delivered between hosts by retransmitting a datagram if it has been lost. The TCP protocol keeps a running count of how long it takes for a datagram to be delivered to the remote end, and an acknowledgement to be received, so that it knows how long to wait before assuming a datagram needs to be retransmitted; this process is called the round-trip time. The initial round-trip time is the value that the TCP protocol will use when a connection is first established. For most network types, the default value is okay, but for some slow networks, notably certain types of amateur packet radio networks, the time is too short and causes unnecessary retransmission. The irtt value can be set using the route command. Values of zero in these fields mean that the default is being used.

Finally, the last field displays the network interface that this route will use.

Displaying Interface Statistics

When invoked with the -i flag, netstat displays statistics for the network interfaces currently configured. If the -a option is also given, it prints all interfaces present in the kernel, not only those that are currently configured. On vstout, the output from netstat will look like this:

# netstat -i
 Kernel Interface table
 Iface MTU Met  RX-OK RX-ERR RX-DRP RX-OVR  TX-OK TX-ERR TX-DRP TX-OVR Flags
 lo      0   0   3185      0      0      0   3185      0      0      0 BLRU
 eth0 1500   0 972633     17     20    120 628711    217      0      0 BRU

The MTU and Met fields show the current MTU and metric values for that interface. The RX and TX columns show how many packets have been received or transmitted error-free (RX-OK/TX-OK) or damaged (RX-ERR/TX-ERR); how many were dropped (RX-DRP/TX-DRP); and how many were lost because of an overrun (RX-OVR/TX-OVR).

The last column shows the flags that have been set for this interface. These characters are one-character versions of the long flag names that are printed when you display the interface configuration with ifconfig:

B A broadcast address has been set.
L This interface is a loopback device.
M All packets are received (promiscuous mode).
O ARP is turned off for this interface.
P This is a point-to-point connection.
R Interface is running.
U Interface is up.

 

Displaying Connections

netstat supports a set of options to display active or passive sockets. The options -t, -u, -w, and -x show active TCP, UDP, RAW, or Unix socket connections. If you provide the -a flag in addition, sockets that are waiting for a connection (i.e., listening) are displayed as well. This display will give you a list of all servers that are currently running on your system.

Invoking netstat -ta on vlager produces this output:

$ netstat -ta
 Active Internet Connections
 Proto Recv-Q Send-Q Local Address    Foreign Address    (State)
 tcp        0      0 *:domain         *:*                LISTEN
 tcp        0      0 *:time           *:*                LISTEN
 tcp        0      0 *:smtp           *:*                LISTEN
 tcp        0      0 vlager:smtp      vstout:1040        ESTABLISHED
 tcp        0      0 *:telnet         *:*                LISTEN
 tcp        0      0 localhost:1046   vbardolino:telnet  ESTABLISHED
 tcp        0      0 *:chargen        *:*                LISTEN
 tcp        0      0 *:daytime        *:*                LISTEN
 tcp        0      0 *:discard        *:*                LISTEN
 tcp        0      0 *:echo           *:*                LISTEN
 tcp        0      0 *:shell          *:*                LISTEN
 tcp        0      0 *:login          *:*                LISTEN

This output shows most servers simply waiting for an incoming connection. However, the fourth line shows an incoming SMTP connection from vstout, and the sixth line tells you there is an outgoing telnet connection to vbardolino.

Using the -a flag by itself will display all sockets from all families.

Top netstat commands for network management

  1. Listing all the LISTENING Ports of TCP and UDP connections

Listing all ports (both TCP and UDP) using the netstat -a option.

# netstat -a | more

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (servers and established)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  2. Listing TCP Ports connections

Listing only TCP (Transmission Control Protocol) port connections using netstat -at.

# netstat -at

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT

 

  3. Listing UDP Ports connections

Listing only UDP (User Datagram Protocol) port connections using netstat -au.

# netstat -au

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*
 udp        0      0 *:mdns                      *:*

 

  4. Listing all LISTENING Connections

Listing all active listening port connections with netstat -l.

# netstat -l

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  5. Listing all TCP Listening Ports

Listing all active listening TCP ports by using option netstat -lt.

# netstat -lt

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:dctp                      *:*                         LISTEN
 tcp        0      0 *:mysql                     *:*                         LISTEN
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:munin                     *:*                         LISTEN
 tcp        0      0 *:ftp                       *:*                         LISTEN
 tcp        0      0 localhost.localdomain:ipp   *:*                         LISTEN
 tcp        0      0 localhost.localdomain:smtp  *:*                         LISTEN
 tcp        0      0 *:http                      *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 *:https                     *:*                         LISTEN

 

  6. Listing all UDP Listening Ports

Listing all active listening UDP ports by using option netstat -lu.

# netstat -lu

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:39578                     *:*
 udp        0      0 *:meregister                *:*
 udp        0      0 *:vpps-qua                  *:*
 udp        0      0 *:openvpn                   *:*
 udp        0      0 *:mdns                      *:*
 udp        0      0 *:sunrpc                    *:*
 udp        0      0 *:ipp                       *:*
 udp        0      0 *:60222                     *:*
 udp        0      0 *:mdns                      *:*

 

  7. Listing all UNIX Listening Ports

Listing all active UNIX listening ports using netstat -lx.

# netstat -lx

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     4171   @ISCSIADM_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     5767   /var/run/cups/cups.sock
 unix  2      [ ACC ]     STREAM     LISTENING     7082   @/tmp/fam-root-
 unix  2      [ ACC ]     STREAM     LISTENING     6157   /dev/gpmctl
 unix  2      [ ACC ]     STREAM     LISTENING     6215   @/var/run/hald/dbus-IcefTIUkHm
 unix  2      [ ACC ]     STREAM     LISTENING     6038   /tmp/.font-unix/fs7100
 unix  2      [ ACC ]     STREAM     LISTENING     6175   /var/run/avahi-daemon/socket
 unix  2      [ ACC ]     STREAM     LISTENING     4157   @ISCSID_UIP_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     60835836 /var/lib/mysql/mysql.sock
 unix  2      [ ACC ]     STREAM     LISTENING     4645   /var/run/audispd_events
 unix  2      [ ACC ]     STREAM     LISTENING     5136   /var/run/dbus/system_bus_socket
 unix  2      [ ACC ]     STREAM     LISTENING     6216   @/var/run/hald/dbus-wsUBI30V2I
 unix  2      [ ACC ]     STREAM     LISTENING     5517   /var/run/acpid.socket
 unix  2      [ ACC ]     STREAM     LISTENING     5531   /var/run/pcscd.comm

 

  8. Showing Statistics by Protocol

Displays statistics by protocol. By default, statistics are shown for the TCP, UDP, ICMP, and IP protocols. The -s parameter can be used to specify a set of protocols.

# netstat -s

Ip:
 2461 total packets received
 0 forwarded
 0 incoming packets discarded
 2431 incoming packets delivered
 2049 requests sent out
 Icmp:
 0 ICMP messages received
 0 input ICMP message failed.
 ICMP input histogram:
 1 ICMP messages sent
 0 ICMP messages failed
 ICMP output histogram:
 destination unreachable: 1
 Tcp:
 159 active connections openings
 1 passive connection openings
 4 failed connection attempts
 0 connection resets received
 1 connections established
 2191 segments received
 1745 segments send out
 24 segments retransmited
 0 bad segments received.
 4 resets sent
 Udp:
 243 packets received
 1 packets to unknown port received.
 0 packet receive errors
 281 packets sent

 

  9. Showing Statistics by TCP Protocol

Showing statistics of only the TCP protocol by using the option netstat -st.

# netstat -st

Tcp:
 2805201 active connections openings
 1597466 passive connection openings
 1522484 failed connection attempts
 37806 connection resets received
 1 connections established
 57718706 segments received
 64280042 segments send out
 3135688 segments retransmited
 74 bad segments received.
 17580 resets sent

 

  10. Showing Statistics by UDP Protocol
# netstat -su

Udp:
 1774823 packets received
 901848 packets to unknown port received.
 0 packet receive errors
 2968722 packets sent

 

  11. Displaying Service name with PID

Displaying service names with their PID numbers: the option netstat -tp will display “PID/Program name”.

# netstat -tp

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
 tcp        0      0 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED 2179/sshd
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT  1939/clock-applet

 

  1. Displaying Promiscuous Mode

With the -ac switch, netstat prints the selected information and refreshes the screen every five seconds (the default refresh is every second).

# netstat -ac 5 | grep tcp

tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        1      0 192.168.0.2:59447           www.gov.com:http            CLOSE_WAIT
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN

 

  1. Displaying Kernel IP routing

Display the kernel IP routing table with the netstat command (equivalent to the route command).

# netstat -r

Kernel IP routing table
 Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
 192.168.0.0     *               255.255.255.0   U         0 0          0 eth0
 link-local      *               255.255.0.0     U         0 0          0 eth0
 default         192.168.0.1     0.0.0.0         UG        0 0          0 eth0

 

  1. Showing Network Interface Transactions

Showing network interface packet transactions, including both transmitted and received packets, with the MTU size.

# netstat -i

Kernel Interface table
 Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
 eth0       1500   0     4459      0      0      0     4057      0      0      0 BMRU
 lo        16436   0        8      0      0      0        8      0      0      0 LRU

 

  1. Showing Kernel Interface Table

Showing the kernel interface table, similar to the ifconfig command.

# netstat -ie

Kernel Interface table
 eth0      Link encap:Ethernet  HWaddr 00:0C:29:B4:DA:21
 inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:feb4:da21/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:4486 errors:0 dropped:0 overruns:0 frame:0
 TX packets:4077 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:2720253 (2.5 MiB)  TX bytes:1161745 (1.1 MiB)
 Interrupt:18 Base address:0x2000

lo        Link encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:16436  Metric:1
 RX packets:8 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

 

  1. Displaying IPv4 and IPv6 Information

Displays multicast group membership information for both IPv4 and IPv6.

# netstat -g

IPv6/IPv4 Group Memberships
 Interface       RefCnt Group
 --------------- ------ ---------------------
 lo              1      all-systems.mcast.net
 eth0            1      224.0.0.251
 eth0            1      all-systems.mcast.net
 lo              1      ff02::1
 eth0            1      ff02::202
 eth0            1      ff02::1:ffb4:da21
 eth0            1      ff02::1

 

  1. Print Netstat Information Continuously

To get netstat information every few seconds, use the following command; it will print netstat information continuously.

# netstat -c

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36944 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42110 TIME_WAIT
 tcp        0    132 tecmint.com:ssh    115.113.134.3.static-:64662 ESTABLISHED
 tcp        0      0 tecmint.com:http   crawl-66-249-71-240.g:41166 TIME_WAIT
 tcp        0      0 localhost.localdomain:54823 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 localhost.localdomain:54822 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42091 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36998 TIME_WAIT

 

  1. Finding Unsupported Address Families

Finding unsupported or unconfigured address families, with some useful information.

# netstat --verbose

netstat: no support for `AF IPX' on this system.
 netstat: no support for `AF AX25' on this system.
 netstat: no support for `AF X25' on this system.
 netstat: no support for `AF NETROM' on this system.

 

  1. Finding Listening Programs

Find out which programs are listening on a given port or service.

# netstat -ap | grep http

tcp        0      0 *:http                      *:*                         LISTEN      9056/httpd
 tcp        0      0 *:https                     *:*                         LISTEN      9056/httpd
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35248 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57783 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57769 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35270 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41637 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41614 TIME_WAIT   -
 unix  2      [ ]         STREAM     CONNECTED     88586726 10394/httpd
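
All of these switches can be combined. For a quick "what is already listening where?" check, the combination below restricts output to TCP and UDP listening sockets, skips name resolution and adds the owning PID/program name; filtering on :80 is just an illustrative choice:

# netstat -tulpn | grep :80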

 

  1. Displaying RAW Network Statistics
# netstat --statistics --raw

Ip:
 62175683 total packets received
 52970 with invalid addresses
 0 forwarded
 Icmp:
 875519 ICMP messages received
 destination unreachable: 901671
 echo request: 8
 echo replies: 16253
 IcmpMsg:
 InType0: 83
 IpExt:
 InMcastPkts: 117

 

Source :- UnixMen

Knockd – Detailed And Simpler (Silent Assassin….)

There are already a lot of articles about knockd and its implementation, so what makes this one unique? I have kept it simple but detail oriented, and have commented on the controversies and criticism that exist.


Here is an outline on what I’ve discussed.

What is port knocking?

What is knockd?

How does it work?

Installation

What we are trying to achieve

Pre-requisite before implementation of knockd:

Implementation scenario

Testing

Disclaimer

So, here we go.

What is port knocking?

Wikipedia Definition:

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of pre-specified closed ports (in this case, telnet). Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specific port(s)

/* in this article point of view, it’s ssh port 22 */

It’s basically like every request having to knock on the door (the firewall) to get through. You can implement it either with knockd and iptables together, or with iptables alone, as in the sketch below.
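
For illustration only, here is a minimal single-knock sketch using the iptables "recent" module and no knockd at all; port 7000 and the 15-second window are arbitrary choices, not values from this article:

# iptables -A INPUT -p tcp --dport 7000 -m recent --name KNOCK --set -j DROP
# iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 15 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j DROP

/* a SYN to port 7000 records the source IP in the KNOCK list; ssh is then accepted only from an IP that knocked within the last 15 seconds */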

Now, using knockd.

What is knockd?

knockd is a port-knock server. It listens to all traffic on an Ethernet interface, looking for special “knock” sequences of port-hits. A client makes these port-hits by sending a TCP (or UDP) packet to a port on the server. This port need not be open — since knockd listens at the link-layer level, it sees all traffic even if it’s destined for a closed port. When the server detects a specific sequence of port-hits, it runs a command defined in its configuration file. This can be used to open up holes in a firewall for quick access.

How does it work?

  1. The knockd daemon is installed and running on the server.
  2. Some port sequences (TCP, UDP, or both) are configured, with an appropriate action for each sequence.
  3. Once knockd sees a defined port sequence, it runs the configured action for that sequence.

Note:

It is completely stealthy and, by default, it does not open any ports on the server.

When a port knock is successfully used to open a port, the firewall rules are generally opened only to the IP address that supplied the correct knock.

Installation

Note: Don’t copy/paste the commands. Type it manually to avoid errors that could occur due to the format.

# yum install libpcap*

/* dependency - the * at the end also installs libpcap-devel, which is a prerequisite as well */

There are several ways to install, whereas I have followed rpm installation.

Download a suitable rpm package from http://pkgs.repoforge.org/knock/

Then run,

# rpm -ivh knock-0.5-3.el6.rf.x86_64.rpm

/* Here, I have downloaded knock 0.5-3 for 64-bit CentOS, hence the rpm name */

Now, what all got installed?

knockd – the knock server daemon

knock – the knock client, which aids in knocking.

Note that knock is the default client that comes along with knockd; there are also more advanced clients such as hping, sendip and packit. A manual alternative using netcat is sketched below.
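
As a hedged example of such an alternative, the same knock can be sent with plain netcat (assuming nc is installed on the client; server_ip is a placeholder):

# echo | nc -u -w1 server_ip 2222     /* UDP knock */
# nc -z -w1 server_ip 3333            /* TCP knock */
# echo | nc -u -w1 server_ip 4444     /* UDP knock */

The echo pipe matters for the UDP knocks: a UDP "connection" sends nothing on the wire until at least one byte of payload is written.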

What we are trying to achieve:

A way to stop the attacks altogether, yet allow ssh access from anywhere, when needed.

Pre-requisite before implementation of knockd:

As mentioned earlier, an existing firewall (iptables) is a pre-requisite.

Follow the below steps to configure firewall

# iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT

-I ----- insert the rule as the first line of the firewall setup

-p ----- protocol

-m ----- match against the states RELATED and ESTABLISHED, in this case

-j ----- jump to the action, which is ACCEPT here.

/* This rule allows currently ongoing sessions through the firewall. It is essential: if you currently have a remote session open to this computer using SSH, it will be preserved and not get terminated by further rules where you might want to block ssh or all services */

# iptables -I INPUT -p icmp -j ACCEPT

/* This is to make your machine ping-able from any machine, so that you can check the availability of your machine (whether it’s up or down) */

# iptables -A INPUT -j REJECT

/* Rejecting everything else - appending it as the last line, since, if it were inserted as the first line, all other rules would not be considered and every request would be rejected */
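
Since rule order is exactly what makes this setup work, it is worth verifying the final order before proceeding; -n and --line-numbers are standard iptables options:

# iptables -L INPUT -n --line-numbers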

Implementation scenario:

Now, try to ssh to the machine where you have implemented the firewall. Let's call this machine the server.

You will not be able to ssh to the server, since its firewall rejects everything except ongoing sessions and ping requests.

Now, knockd implementation:

Now, on the server where you have installed knockd, run the following commands

# vi /etc/knockd.conf

/*As a result of rpm installation, this configuration file will exist */

Edit the file as below and save/exit.

[options]
        logfile = /var/log/knockd.log

[opencloseSSH]
        sequence      = 2222:udp,3333:tcp,4444:udp
        seq_timeout   = 15
        start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        cmd_timeout   = 10
        stop_command  = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags      = syn

The first line of the file defines where errors and warnings pertaining to knockd get logged. If you are unable to establish a connection between client and server using knockd/knock, this server log is the place to check.

sequence        = 2222:udp,3333:tcp,4444:udp                                /* the sequence of knocking */

seq_timeout     = 15                                                                         /* once the above mentioned sequence is knocked, it is valid only for the next 15 seconds */

start_command   = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* once the right sequence is knocked, the above command gets executed, where %IP% is the ip_addr of the host which knocks. Hence, this allows an ssh connection from the knocker (client) to the server. One thing to note is that the ssh connection needs to be established within 15 seconds of the knock. Also note that I have used iptables -I, since I want this rule to be inserted as the first line; if I appended it instead, it would have no effect, because the iptables reject-everything rule comes before it in the list */

cmd_timeout     = 10                       /* the stop_command will be executed 10 seconds after the start_command execution */

stop_command    = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* I am deleting the rule which I inserted to allow the SSH connection from the knocker (client) to the server. Now, doesn't that end my ssh connection? - It doesn't.

How? The existing rule in my iptables, # iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT, helps me retain the established connection. If this rule were not specified, your ssh connection would be lost in 10 seconds (i.e.) cmd_timeout */

tcpflags        = syn                       /* in the TCP 3-way handshake, the client sends a TCP SYNchronize packet to the server */

Note that there are other ways to configure your knockd server: for example, one port sequence to open ssh and another port sequence to close it (unlike the configuration above). In that case, after terminating your ssh session, you need to manually hit the closing port sequence from the client in order to close the ssh port again.

Now we have configured our server to allow ssh from a client only when that client hits the right sequence, which here is 2222:udp,3333:tcp,4444:udp.

Now you need to start the knockd service. Copy the below script as /etc/rc.d/init.d/knockd on your machine.

Runlevel script.

#!/bin/sh
# description: Start and stop knockd

# Check that the config file exists
[ -f /etc/knockd.conf ] || exit 0

# Source function library
. /etc/rc.d/init.d/functions

# Source networking configuration
. /etc/sysconfig/network

# Check that networking is up
[ "$NETWORKING" = "no" ] && exit 0

start() {
    echo "Starting knockd ..."
    /usr/sbin/knockd &
}

stop() {
    echo "Shutting down knockd ..."
    kill `pidof /usr/sbin/knockd`
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        ;;
esac

exit 0

Now run the following commands, so that you can start/stop/restart knockd like other services

# chmod 755 /etc/rc.d/init.d/knockd        /* making the script executable by root */
# chkconfig --add knockd                   /* adding knockd to the chkconfig list */
# chkconfig --level 35 knockd on           /* service will be started as part of runlevels 3 & 5 */
# service knockd start

Whenever you modify /etc/knockd.conf, you need to restart the service.

Testing:

From client, run the following commands.

Note that there are other ways to knock; I am using the knock client which comes along with the knockd package, so you need to have it installed on the client as well.

Now,

# knock -v server_ip 2222:udp 3333:tcp 4444:udp ; ssh server_ip

-v for verbose

server_ip substituted by your server’s ip address.

We are knocking the sequence which we mentioned in the server's /etc/knockd.conf file.

Now you should be able to establish an ssh session to the server. The port is then closed again for further sessions, so every time you want to ssh to the server, you need to knock first and then ssh.

Comments on controversies and criticism:

What if the knock sequence is determined by a brute force attack?

Excerpt from wikipedia answers this:


Consider that, if an external attacker did not know the port knock sequence, even the simplest of sequences would require a massive brute force effort in order to be discovered. A three-knock simple TCP sequence (e.g. port 1000, 2000, 3000) would require an attacker without prior knowledge of the sequence to test every combination of three ports in the range 1-65535, and then to scan each port in between to see if anything had opened. As a stateful system, the port would not open until after the correct three-port sequence had been received in order, without other packets in between.
That equates to a maximum of 65536³ packets in order to obtain and detect a single successful opening, in the worst case scenario. That's 281,474,976,710,656 or over 281 trillion packets. On average, an attempt would take approximately 9.2 quintillion packets to successfully open a single, simple three-port TCP-only knock by brute force. This is made even more impractical when knock attempt-limiting is used to stop brute force attacks, and when longer and more complex sequences are used.
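
The headline figure is easy to reproduce with shell arithmetic (** is bash's exponentiation operator):

$ echo $((65536**3))
281474976710656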

Also, there are other port knocking solutions, where the knocks are encrypted, which makes it even harder to hack.

When we have a simple, robust and reliable solution named VPN access, why go for port knocking?

Port knocking plus VPN tunnels gives added security. If you have another VPN server in your network, you can protect it from brute force attacks by hiding it behind a knock sequence.

Disclaimer:

Port knocking is not intended to be a complete solution, but it can be an added layer of security.

Also, it needs to be configured properly according to the needs.

A network that is connected to the Internet is never secure. There is no perfect lock!

The best that we can do is to make the system as secure as possible today; possibly tomorrow someone will figure out how to get through it.

A security system is only as strong as its weakest link.

But I believe port knocking adds security to the existing setup, and does not make your system vulnerable to attacks, as a few people claim.

Thanks for reading. Cheers !

 

Credits :- GOKUL (Unixmen)

Linux Command – Dstat (Culprit Catcher)

Introduction

Whether a system is used as a web server or as a normal PC, keeping its resource usage under control in the daily workflow is almost a necessity. GNU/Linux provides several tools for monitoring purposes: iostat, vmstat, netstat, ifstat and others. Every system admin knows these tools and how to analyse their outputs. However, there is another alternative: a single program which can replace almost all of them. Its name is dstat.

With dstat, users can view all system resources instantly. For instance, someone could compare the network bandwidth numbers directly with the disk throughput, gaining a more general view of what's going on; this is very useful for troubleshooting, or for analysing a system for benchmarking.

Features

  • Combines vmstat, iostat, ifstat, netstat information and more
  • Shows stats in exactly the same timeframe
  • Enable/order counters as they make most sense during analysis/troubleshooting
  • Modular design
  • Written in Python, so easily extensible
  • Includes many external plugins
  • Can show interrupts per device
  • Very accurate timeframes, no timeshifts when the system is stressed
  • Shows exact units and limits conversion mistakes
  • Indicates different units with different colors
  • Shows intermediate results when delay > 1
  • Can export CSV output, which can be imported into Gnumeric and Excel to make graphs

Installation

Installing dstat is a simple task, since it's packaged as .deb and .rpm.
For Debian-based distros:

# apt install dstat

In RHEL, CentOS, Fedora:

# yum install dstat

Getting sta(r)ted

To run the program, users can just write the command in its simplest form:

$ dstat

as a result, it will show different stats in a table, which helps admins get a general overview.


Plugins

First of all, it’s important to note that dstat comes with a lot of plugins; to obtain a complete list:

$ dstat --list

which returns the complete list of internal stats and external plugins.

Of course, it is possible to add more for special use cases.

So, let’s see how they work.

Whoever wants to use a plugin must simply pass its name as a command-line argument. For instance, if someone needs to check only the total CPU usage, they can run:

$ dstat --cpu

or, in a shorter form

$ dstat -c

As previously said, the program can show different stats at the same time. As an example:

$ dstat --cpu --top-cpu --disk --top-bio --top-latency


This command, which is a combination of internal stats and external plugins, will give the total CPU usage, most expensive CPU process, disk statistics, most expensive block I/O process and the process with the highest total latency (expressed in milliseconds).

Output

By default, dstat displays output in columns (as a table, indeed) directly in the terminal window, in real time, for immediate analysis by a human. But it is also possible to send it to a .csv file, which software like LibreOffice Calc or Gnumeric can use for creating graphs or any kind of statistical analysis. Exporting data to .csv is quite an easy task:

$ dstat [arguments] --output /path/to/outputfile
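
As a concrete, purely illustrative invocation: dstat's trailing arguments are the usual delay and count, so the line below takes a 5-second sample 60 times (5 minutes) and writes it to an arbitrarily chosen path:

$ dstat --cpu --mem --net --output /tmp/dstat-report.csv 5 60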

GnuPG – Public / Private Key Method – Encryption / Decryption (No Digital Signature)

 

<When You Encounter This Error Don’t Panic>

GPG key generation: Not enough random bytes available.


Generating keys with GnuPG

 

Anyone who wants to create a new key set via GnuPG (GPG) may run into this error:

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 142 more bytes)

The problem is caused by a lack of entropy (random system noise). While trying to get more, you might keep running into this message. In our case, running a find across the disk while making sha1sums and writing them to files was actually not enough.

To check the available entropy, read the kernel parameter:

cat /proc/sys/kernel/random/entropy_avail
36
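
To watch the pool refill in real time while you generate keys (assuming the watch utility is installed):

$ watch -n1 cat /proc/sys/kernel/random/entropy_avail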

To solve it, install the appropriate package (below) and run the rngd command: rngd -f -r /dev/urandom

Debian / Ubuntu: apt-get install rng-tools

Red Hat / CentOS: yum install -y rng-utils

Checking the available entropy again revealed in our case a stunning 3100, almost 100 times more than before. GnuPG is now happy again and can finish creating the keys.

Mulesoft – Mule ESB (Introduction)

What is Mule ESB?

Mule, the runtime engine of Anypoint Platform, is a lightweight Java-based enterprise service bus (ESB) and integration platform that allows developers to connect applications together quickly and easily, enabling them to exchange data.

It enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, Web Services, JDBC, HTTP, and more.

The ESB can be deployed anywhere, can integrate and orchestrate events in real time or in batch, and has universal connectivity.

The key advantage of an ESB is that it allows different applications to communicate with each other by acting as a transit system for carrying data between applications within your enterprise or across the Internet.

Mule has powerful capabilities that include:

  • Service creation and hosting — expose and host reusable services, using the ESB as a lightweight service container
  • Service mediation — shield services from message formats and protocols, separate business logic from messaging, and enable location-independent service calls
  • Message routing — route, filter, aggregate, and re-sequence messages based on content and rules
  • Data transformation — exchange data across varying formats and transport protocols

what mule esb

Do I need an ESB?

Mule and other ESBs offer real value in scenarios where there are at least a few integration points or at least 3 applications to integrate.

They are also well suited to scenarios where loose coupling, scalability and robustness are required.

Below is a quick ESB selection checklist.

To read a much more comprehensive take on when to select an ESB, read this article written by MuleSoft founder and VP of Product Strategy Ross Mason: To ESB or not to ESB.

  1. Are you integrating 3 or more applications/services?
  2. Will you need to plug in more applications in the future?
  3. Do you need to use more than one type of communication protocol?
  4. Do you need message routing capabilities such as forking and aggregating message flows, or content-based routing?
  5. Do you need to publish services for consumption by other applications?

Why Mule?

Mule is lightweight but highly scalable, allowing you to start small and connect more applications over time. The ESB manages all the interactions between applications and components transparently, regardless of whether they exist in the same virtual machine or over the Internet, and regardless of the underlying transport protocol used.

There are currently several commercial ESB implementations on the market. However, many of these provide limited functionality or are built on top of an existing application server or messaging server, locking you into that specific vendor. Mule is vendor-neutral, so different vendor implementations can plug in to it. You are never locked in to a specific vendor when you use Mule.

Mule provides many advantages over competitors, including:

  • Mule components can be any type you want. You can easily integrate anything from a “plain old Java object” (POJO) to a component from another framework.
  • Mule and the ESB model enable significant component reuse. Unlike other frameworks, Mule allows you to use your existing components without any changes. Components do not require any Mule-specific code to run in Mule, and there is no programmatic API required. The business logic is kept completely separate from the messaging logic.
  • Messages can be in any format from SOAP to binary image files. Mule does not force any design constraints on the architect, such as XML messaging or WSDL service contracts.
  • You can deploy Mule in a variety of topologies, not just ESB. Because it is lightweight and embeddable, Mule can dramatically decrease time to market and increases productivity for projects to provide secure, scalable applications that are adaptive to change and can scale up or down as needed.
  • Mule’s staged event-driven architecture (SEDA) makes it highly scalable. A major airline processes over 10,000 business transactions per second with Mule, while H&R Block uses 13,000 Mule servers to support their highly distributed environment.

Mulesoft – Fundamentals

Mule Fundamentals

Mule is the lightweight integration run-time engine that allows you to connect anything, anywhere.

Rather than creating multiple point-to-point integrations between systems, services, APIs, and devices, use Mule to intelligently manage message routing, data mapping, orchestration, reliability, security, and scalability between nodes.

Plug other systems and applications into Mule and let it handle communication between systems, allowing you to track and monitor your application ecosystem and external resources.

 

Mule is so named because it “carries the heavy load” of developing an infrastructure that supports multiple applications and systems both flexibly and intelligently.

The Mule Approach

Flexible and reliable by design, Mule is proactive in adjusting to the changing demands of an interconnected network of systems and applications.

As you quickly learn in Anypoint Studio from designing even the simplest Mule applications, there is a plethora of tools that can be used to transform and capture data at each event (“component”) in a flow, always according to your objectives.

In turn, Mule accounts for any data you load on top of it, and does not mind executing operations concurrently.

Getting the Most out of Mule…

  • Deploy or integrate applications or systems on premises or in the cloud
  • Employ out-of-the-box connectors to create SaaS integration applications
  • Build and expose APIs
  • Consume APIs
  • Create Web services which orchestrate calls to other services
  • Create interfaces to expose applications for mobile consumption
  • Integrate B2B with solutions that are secure, efficient, and quick to build and deploy
  • Shift applications onto the cloud
  • Connect B2B e-commerce activities

Source – UNIX, Destination – Windows Cygwin (SSH Password-less Authentication)

On Windows Server

On the Windows server, create a local user, say MyUser, and also create the user in Cygwin:

cd C:\cygwin

Cygwin.bat

 

Administrator@MYWINDOWSHOST ~

$ /bin/mkpasswd -l -u MyUser >>/etc/passwd

MyUser@MYWINDOWSHOST ~

$ ls

MyUser@MYWINDOWSHOST ~

$ ls -al

total 24

drwxr-xr-x+ 1 MyUser        None    0 Mar 17 12:54 .

drwxrwxrwt+ 1 Administrator None    0 Mar 17 12:54 ..

-rwxr-xr-x  1 MyUser        None 1494 Oct 29 15:34 .bash_profile

-rwxr-xr-x  1 MyUser        None 6054 Oct 29 15:34 .bashrc

-rwxr-xr-x  1 MyUser        None 1919 Oct 29 15:34 .inputrc

-rwxr-xr-x  1 MyUser        None 1236 Oct 29 15:34 .profile

MyUser@MYWINDOWSHOST ~

$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):

Created directory '/home/MyUser/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/MyUser/.ssh/id_rsa.

Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.

The key fingerprint is:

7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYWINDOWSHOST

The key’s randomart image is:

+--[ RSA 2048]----+
|       .++o      |
|        .+..     |
|        . o. . o |
|         o .E *.+|
|        S ...* =o|
|           .o o o|
|               = |
|              o  |
|                 |
+-----------------+

MyUser@MYWINDOWSHOST ~

$ cd .ssh

MyUser@MYWINDOWSHOST ~/.ssh

$ ls

id_rsa  id_rsa.pub

MyUser@MYWINDOWSHOST ~/.ssh

$ touch authorized_keys

Generate the key in source ON UNIX SERVER

 $ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):

Created directory '/home/MyUser/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/MyUser/.ssh/id_rsa.

Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.

The key fingerprint is:

7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYWINDOWSHOST

The key’s randomart image is:

+--[ RSA 2048]----+
|       .++o      |
|        .+..     |
|        . o. . o |
|         o .E *.+|
|        S ...* =o|
|           .o o o|
|               = |
|              o  |
|                 |
+-----------------+

MyUser@MYUNIXHOST ~

$ cd .ssh

MyUser@MYUNIXHOST ~/.ssh

$ ls

id_rsa  id_rsa.pub

Then push the public key (id_rsa.pub) into the destination's authorized_keys:

cat .ssh/id_rsa.pub | ssh MyUser@MYWINDOWSHOST 'cat >> .ssh/authorized_keys'

 

ssh -v MyUser@MYWINDOWSHOST

--> You should now be able to log in without a password.
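
If the login still prompts for a password, the usual culprit is file permissions: sshd's StrictModes check rejects keys whose .ssh directory or authorized_keys file is group/world writable. On the destination side:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys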

Credits :- Priyanka Padad (Operations Expert)

TIBCO EMS – No memory for operation – Filesystem as datastore.

Very often we come across this kind of error:

Most probably your EMS server is running as a 32-bit binary and your message store is exceeding 1.6–2 GB in size; 32-bit EMS has a size limitation of 2 GB.

When you are using the filesystem as a datastore and you come across a trace such as:

2007-07-24 10:46:38 Recovering state, please wait.
2007-07-24 10:48:47 SEVERE ERROR: Failed writing message to ‘datastore/meta.db’: No memory for operation.
2007-07-24 10:48:50 SEVERE ERROR: No memory processing purge record.
2007-07-24 10:48:52 FATAL: Exception in startup, exiting.

434692644 Jul 24 11:16 meta.db
29220 Jul 24 08:54 sync-msgs.db
5805854244 Jul 24 08:37 async-msgs.db

Don’t panic, it’s a very generic issue.

Just follow this checklist:

  1. Check the size of your messages
  2. Ensure you are using the 64-bit EMS binary
  3. Set max_msg_memory = <XX>GB in your tibemsd.conf, where XX indicates the memory in GB you want to assign
  4. Set msg_swapping = enabled

max_msg_memory

max_msg_memory = size [KB|MB|GB]
Maximum memory the server can use for messages.
This parameter lets you limit the memory that the server uses for messages, so server memory usage cannot grow beyond the system’s memory capacity.
When msg_swapping is enabled, and messages overflow this limit, the server begins to swap messages from process memory to disk. Swapping allows the server to free process memory for incoming messages, and to process message volume in excess of this limit.
When the server swaps a message to disk, a small record of the swapped message remains in memory. If all messages are swapped out to disk, and their remains still exceed this memory limit, then the server has no room for new incoming messages. The server stops accepting new messages, and send calls in message producers result in an error. (This situation probably indicates either a very low value for this parameter, or a very high message volume.)
Specify units as KB, MB or GB. The minimum value is 8 MB. The default value of 0 (zero) indicates no limit.
For example:
max_msg_memory = 512MB

msg_swapping

msg_swapping = enable | disable
This parameter enables and disables the message swapping feature (described above for max_msg_memory).
The default value is enabled, unless you explicitly set it to disabled.

reserve_memory

reserve_memory = size
When reserve_memory is non-zero, the daemon allocates a block of memory for use in emergency situations to prevent the EMS server from being unstable in low memory situations. When the daemon process exhausts memory resources, it disables clients and routes from producing new messages, and frees this block of memory to allow consumers to continue operation (which tends to free memory).
The EMS server attempts to reallocate its reserve memory once the number of pending messages in the server has dropped to 10% of the number of pending messages that were in the server when it experienced the allocation error. If the server successfully reallocates memory, it begins accepting new messages.
The reserve_memory parameter only triggers when the EMS server has run out of memory and therefore is a reactive mechanism. The appropriate administrative action when an EMS server has triggered release of reserve memory is to drain the majority of the messages by consuming them and then to stop and restart the EMS server. This allows the operating system to reclaim all the virtual memory resources that have been consumed by the EMS server. A trace option, MEMORY, is also available to help show what the server is doing during the period when it is not accepting messages.
Specify size in units of MB. When non-zero, the minimum block is 16MB. When absent, the default is zero.
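
Putting the three parameters together, a tibemsd.conf excerpt might look like the sketch below; the 4GB and 64MB values are purely illustrative, not recommendations:

max_msg_memory = 4GB
msg_swapping   = enabled
reserve_memory = 64MB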

Linux Commands – chage (Manage your Passwords for User Accounts in UNIX)

chage

chage enables you to modify the parameters surrounding passwords (age and expiration). We can edit and manage password expiration details with the chage command. Note that the root user can execute chage for any user account, but other users cannot.

Syntax: chage [options] USERNAME

Options:

-d LAST_DAY Sets the day the password was last changed
-E EXPIRE_DATE Sets the account expiration date
-I INACTIVE Sets the number of days of inactivity after the password expires before the account is locked
-l Shows account aging information
-m MIN_DAYS Sets the minimum number of days between password changes
-M MAX_DAYS Sets the maximum number of days a password is valid
-W WARN_DAYS Sets the number of days to warn before the password expires

For example, we can view a particular user's password aging information with the chage command as follows.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 365
Maximum number of days between password change          : 500
Number of days of warning before password expires       : 7

How do we force users to change their password? This may be a big question for system administrators who want their users to change passwords at regular intervals for security reasons.

For this we set the password expiry date for a user using the chage option -M. The root user can set the password expiry date for any user.

Please note that option -M will update both “Password expires” and “Maximum number of days between password change” entries as shown below.

Syntax: # chage -M number-of-days username
[root@localhost ~]# chage -M 60 root

The above command sets the password expiry to 60 days.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Set the Account Expiry Date for a User using the -E option

We can also use the chage command to set the account expiry date, using option -E. The date given below is in "YYYY-MM-DD" format. This will update the "Account expires" value as shown below.

[root@localhost ~]# chage -E "2017-10-03" root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Force the user account to be locked after n days of inactivity
Typically, if the password has expired, users are forced to change it during their next login. You can also set an additional condition: if the user never tries to log in for 6 days after the password expires, the account is automatically locked, using option -I as shown below. In this example, the "Password inactive" date is set to 6 days after the "Password expires" value. Once an account is locked, only system administrators will be able to unlock it.

# chage -I 6 root
# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31,2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the minimum no. of days between password changes

We can set the minimum number of days between password changes by using the option -m along with the chage command as follows.

chage -m 10 USERNAME
[root@localhost ~]# chage -m 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31,2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the number of days of warning before the password expires

We can set the number of days of warning before the password expires by using the option -W along with the chage command.

[root@localhost ~]# chage -W 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31,2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 10

How to disable password aging for a user account

To disable password aging for a particular account, we must set the following on that account (these can be combined into the single command shown after this list).

    -m 0 will set the minimum number of days between password change to 0
    -M 99999 will set the maximum number of days between password change to 99999
    -I -1 (number minus one) will set the “Password inactive” to never
    -E -1 (number minus one) will set “Account expires” to never.
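
Since chage accepts several options in one invocation, the four settings above can be combined into a single command (USERNAME is a placeholder):

# chage -m 0 -M 99999 -I -1 -E -1 USERNAME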

When using the chage command to set specifics on an account, do not reset the
password with the passwd command, because doing so erases the aging changes you
have made to the account.

Have a great fun with Linux!

 

Credits :- https://www.unixmen.com

TIBCO – EMS – Multicasting

Multicast Messaging:  Multicast is a messaging model that allows the EMS server to send messages to multiple consumers simultaneously by broadcasting them over an existing network.

Overview:
  Multicast is a messaging model that broadcasts messages to many consumers at once, as opposed to sending copies of a message to each subscribing consumer individually.

The server sends multicast messages over a multicast channel. Each multicast-enabled topic is associated with a channel. The channel determines the multicast port and multicast group address to which the server sends messages.

The multicast message is received by a multicast daemon running on the same computer with the message consumer.

When an EMS client subscribes to a multicast-enabled topic, it automatically connects to the multicast daemon. The multicast daemon begins listening on the channel associated with that topic, receives any broadcast messages, and delivers them to subscribed clients.

When to use Multicast:
  Because multicast reduces the number of operations performed by the server and
reduces the amount of bandwidth used in the publish-and-subscribe model, multicast is highly scalable.

Where publish and subscribe messaging creates a copy of a published message for each message consumer, multicast broadcasts the message only once.


Features

Multicast is highly scalable
Multicast reduces the amount of bandwidth consumed
Multicast reduces the number of operations performed by the server
Multicast broadcasts the message only once, whereas with publish and subscribe a copy of the message is published for each consumer

Facts

Multicast does not guarantee message delivery
Messages requiring a high degree of reliability should not use multicast
Multicast offers last-hop delivery only; it cannot be used to send messages between servers.
Multicast should not be used in applications where security is a priority

 

Multicast Messaging Example:

Multicast channels can only be configured statically by modifying the configuration files. There are no commands in the administration tool to configure multicast channels.

 

Step 1: Enable the EMS Server for Multicast

To enable multicast in the server, set the multicast property to enabled in the tibemsd.conf configuration file:
multicast = enabled

 

Step2: Create a Multicast Channel

The EMS server broadcasts messages to consumers over multicast channels. Each channel has a defined multicast address and port. Messages published to a multicast-enabled topic are sent by the server and received by the subscribers on these multicast channels.
To create a multicast channel, add the following definition to the multicast channels configuration file, channels.conf:
[multicast-1]
address=234.5.6.7:1

 

Start the EMS Server.

 

Step3: Create a topic with Multicast enabled 

To create a multicast-enabled topic, use the administration tool to issue the following command:


> create topic testtopic channel=multicast-1

 

Then execute the show topics command to verify whether multicast is enabled:

 >show topics

 

The result will look like:

 

Topic Name       SNFGEIBCTM   Subs  Durs  Msgs  Size
testtopic        ---------+      0     0     0  0.0 Kb

The + in the M column (the last of the flag columns) indicates that multicast is enabled.

 

Step 4: Start the Multicast Daemon

Go to the Start menu and follow the path All Programs > TIBCO > TIBCO EMS 7.0 > Start EMS Multicast Daemon.

 

Step5: Start the Subscriber

Execute the tibjmsMsgConsumer client to assign user1 as a subscriber to the testtopic topic with a session acknowledgment mode of NO_ACKNOWLEDGE:
C:\tibco\ems\7.0\samples\java> java tibjmsMsgConsumer -topic testtopic -user user1 -ackmode NO

In the administration console, type the below command for cross verification:

>show consumers topic=testtopic

 

Step6: Start the producer

Setting up a client to publish a multicast message is no different from setting up a client to send publish and subscribe messages. Because the topic is enabled for multicast in the EMS server, the message producer does not need to follow any additional steps.

 

C:\tibco\ems\7.0\samples\java>java tibjmsMsgProducer -topic testtopic -user admin hello

 

Finally, the message (hello) is displayed in the subscriber's window.

 

Certificates – PKCS12

pkcs12

NAME

pkcs12 – PKCS#12 file utility

SYNOPSIS

openssl pkcs12 [-help] [-export] [-chain] [-inkey filename] [-certfile filename] [-name name] [-caname name] [-in filename] [-out filename] [-noout] [-nomacver] [-nocerts] [-clcerts] [-cacerts] [-nokeys] [-info] [-des | -des3 | -idea | -aes128 | -aes192 | -aes256 | -camellia128 | -camellia192 | -camellia256 | -nodes] [-noiter] [-maciter | -nomaciter | -nomac] [-twopass] [-descert] [-certpbe cipher] [-keypbe cipher] [-macalg digest] [-keyex] [-keysig] [-password arg] [-passin arg] [-passout arg] [-rand file(s)] [-CAfile file] [-CApath dir] [-no-CAfile] [-no-CApath] [-CSP name]

DESCRIPTION

The pkcs12 command allows PKCS#12 files (sometimes referred to as PFX files) to be created and parsed. PKCS#12 files are used by several programs including Netscape, MSIE and MS Outlook.

COMMAND OPTIONS

There are a lot of options; the meaning of some depends on whether a PKCS#12 file is being created or parsed. By default a PKCS#12 file is parsed. A PKCS#12 file can be created by using the -export option (see below).

PARSING OPTIONS

-help
Print out a usage message.

-in filename
This specifies filename of the PKCS#12 file to be parsed. Standard input is used by default.

-out filename
The filename to write certificates and private keys to, standard output by default. They are all written in PEM format.

-passin arg
the PKCS#12 file (i.e. input file) password source. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-passout arg
pass phrase source to encrypt any outputted private keys with. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-password arg
With -export, -password is equivalent to -passout. Otherwise, -password is equivalent to -passin.

-noout
this option inhibits output of the keys and certificates to the output file version of the PKCS#12 file.

-clcerts
only output client certificates (not CA certificates).

-cacerts
only output CA certificates (not client certificates).

-nocerts
no certificates at all will be output.

-nokeys
no private keys will be output.

-info
output additional information about the PKCS#12 file structure, algorithms used and iteration counts.

-des
use DES to encrypt private keys before outputting.

-des3
use triple DES to encrypt private keys before outputting, this is the default.

-idea
use IDEA to encrypt private keys before outputting.

-aes128, -aes192, -aes256
use AES to encrypt private keys before outputting.

-camellia128, -camellia192, -camellia256
use Camellia to encrypt private keys before outputting.

-nodes
don’t encrypt the private keys at all.

-nomacver
don’t attempt to verify the integrity MAC before reading the file.

-twopass
prompt for separate integrity and encryption passwords: most software always assumes these are the same so this option will render such PKCS#12 files unreadable.

FILE CREATION OPTIONS

-export
This option specifies that a PKCS#12 file will be created rather than parsed.

-out filename
This specifies filename to write the PKCS#12 file to. Standard output is used by default.

-in filename
The filename to read certificates and private keys from, standard input by default. They must all be in PEM format. The order doesn’t matter but one private key and its corresponding certificate should be present. If additional certificates are present they will also be included in the PKCS#12 file.

-inkey filename
file to read private key from. If not present then a private key must be present in the input file.

-name friendlyname
This specifies the “friendly name” for the certificate and private key. This name is typically displayed in list boxes by software importing the file.

-certfile filename
A filename to read additional certificates from.

-caname friendlyname
This specifies the “friendly name” for other certificates. This option may be used multiple times to specify names for all certificates in the order they appear. Netscape ignores friendly names on other certificates whereas MSIE displays them.

-pass arg, -passout arg
the PKCS#12 file (i.e. output file) password source. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-passin password
pass phrase source to decrypt any input private keys with. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-chain
if this option is present then an attempt is made to include the entire certificate chain of the user certificate. The standard CA store is used for this search. If the search fails it is considered a fatal error.

-descert
encrypt the certificate using triple DES, this may render the PKCS#12 file unreadable by some “export grade” software. By default the private key is encrypted using triple DES and the certificate using 40 bit RC2.

-keypbe alg, -certpbe alg
these options allow the algorithm used to encrypt the private key and certificates to be selected. Any PKCS#5 v1.5 or PKCS#12 PBE algorithm name can be used (see the NOTES section for more information). If a cipher name (as output by the list-cipher-algorithms command) is specified then it is used with PKCS#5 v2.0. For interoperability reasons it is advisable to only use PKCS#12 algorithms.

-keyex|-keysig
specifies that the private key is to be used for key exchange or just signing. This option is only interpreted by MSIE and similar MS software. Normally “export grade” software will only allow 512 bit RSA keys to be used for encryption purposes but arbitrary length keys for signing. The -keysig option marks the key for signing only. Signing only keys can be used for S/MIME signing, authenticode (ActiveX control signing) and SSL client authentication, however due to a bug only MSIE 5.0 and later support the use of signing only keys for SSL client authentication.

-macalg digest
specify the MAC digest algorithm. If not included then SHA1 will be used.

-nomaciter, -noiter
these options affect the iteration counts on the MAC and key algorithms. Unless you wish to produce files compatible with MSIE 4.0 you should leave these options alone.

To discourage attacks by using large dictionaries of common passwords the algorithm that derives keys from passwords can have an iteration count applied to it: this causes a certain part of the algorithm to be repeated and slows it down. The MAC is used to check the file integrity but since it will normally have the same password as the keys and certificates it could also be attacked. By default both MAC and encryption iteration counts are set to 2048, using these options the MAC and encryption iteration counts can be set to 1, since this reduces the file security you should not use these options unless you really have to. Most software supports both MAC and key iteration counts. MSIE 4.0 doesn’t support MAC iteration counts so it needs the -nomaciter option.

-maciter
This option is included for compatibility with previous versions, it used to be needed to use MAC iterations counts but they are now used by default.

-nomac
don’t attempt to provide the MAC integrity.

-rand file(s)
a file or files containing random data used to seed the random number generator, or an EGD socket (see RAND_egd). Multiple files can be specified separated by an OS-dependent character. The separator is ; for MS-Windows, , for OpenVMS, and : for all others.

-CAfile file
CA storage as a file.

-CApath dir
CA storage as a directory. This directory must be a standard certificate directory: that is a hash of each subject name (using x509 -hash) should be linked to each certificate.

-no-CAfile
Do not load the trusted CA certificates from the default file location

-no-CApath
Do not load the trusted CA certificates from the default directory location

-CSP name
write name as a Microsoft CSP name.

NOTES

Although there are a large number of options, most of them are very rarely used. For PKCS#12 file parsing, only -in and -out need to be used; for PKCS#12 file creation, -export and -name are also used.

If none of the -clcerts, -cacerts or -nocerts options are present then all certificates will be output in the order they appear in the input PKCS#12 files. There is no guarantee that the first certificate present is the one corresponding to the private key. Certain software requires a private key and certificate and assumes the first certificate in the file is the one corresponding to the private key, but this may not always be the case. Using the -clcerts option will solve this problem by only outputting the certificate corresponding to the private key. If the CA certificates are required then they can be output to a separate file using the -nokeys -cacerts options to just output CA certificates.

The -keypbe and -certpbe algorithms allow the precise encryption algorithms for private keys and certificates to be specified. Normally the defaults are fine but occasionally software can’t handle triple DES encrypted private keys, then the option -keypbe PBE-SHA1-RC2-40 can be used to reduce the private key encryption to 40 bit RC2. A complete description of all algorithms is contained in the pkcs8 manual page.

Prior to the 1.1 release, passwords containing non-ASCII characters were encoded in a non-compliant manner, which limited interoperability, first and foremost with Windows. But switching to standard-compliant password encoding poses a problem accessing old data protected with the broken encoding. For this reason even legacy encodings are attempted when reading the data. If you use PKCS#12 files in a production application you are advised to convert the data, because the implemented heuristic approach is not MT-safe; its sole goal is to facilitate the data upgrade with this utility.

EXAMPLES

Parse a PKCS#12 file and output it to a file:

 openssl pkcs12 -in file.p12 -out file.pem

Output only client certificates to a file:

 openssl pkcs12 -in file.p12 -clcerts -out file.pem

Don’t encrypt the private key:

 openssl pkcs12 -in file.p12 -out file.pem -nodes

Print some info about a PKCS#12 file:

 openssl pkcs12 -in file.p12 -info -noout

Create a PKCS#12 file:

 openssl pkcs12 -export -in file.pem -out file.p12 -name "My Certificate"

Include some extra certificates:

 openssl pkcs12 -export -in file.pem -out file.p12 -name "My Certificate" \
  -certfile othercerts.pem
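
Another variant built only from the parsing options described above: extract just the private key by suppressing all certificate output:

 openssl pkcs12 -in file.p12 -nocerts -out key.pem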

 

Java Performance Tuning – Introduction

Java Performance Tuning

Garbage Collection

The Java programming language is object oriented and includes automatic garbage collection.

Garbage collection is the process of reclaiming memory taken up by unreferenced objects.
The following sections summarize some of the concepts from this document and their impact on Application Server performance.

Memory Management, Java and Impact on Application Servers

Memory management was always a challenging task with compiled object-oriented languages such as C++.

On the other hand, Java is an interpreted language that takes this memory management out of the hands of developers and gives it directly to the virtual machine where the code will be run.

This means that for best performance and stability, it is critical that the Java parameters for the virtual machine be understood and managed by the Application Server deployment team.

This section will describe the various parts of the Java heap and then list some useful parameters and tuning tips for ensuring correct runtime stability and performance of Application Servers.

Java Virtual Machines

The Java specification as to what is "standard" for a given release is written and maintained by the JavaSoft division of Sun Microsystems.

This specification is then delivered to other JVM providers (IBM, HP, etc.). JavaSoft provides the standard implementation on Windows, Solaris, and Linux.

Other platforms are required to deliver the code functionality but their JVM options can be different.

Java options that are preceded by "-X" or "-XX" are typically platform-specific.

That being said, many options are used on all platforms.
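As an illustration (the sizes shown are arbitrary values, not recommendations), a JVM launch line mixing standard, -X, and -XX options might look like this:

$ java -Xms512m -Xmx1024m -XX:MaxPermSize=256m -verbose:gc MyApp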

One must read in detail the README notes from the various releases on various platforms to be kept up‐to‐date with the variations.

This guide will mention the most critical ones and distinguish between those which are implemented on most platforms and those which are platform‐specific.

 

——- Stay Tuned for More Java Performance Monitoring Posts ——-

Certificates – Digital Certificates (Summary)

Introduction to Digital Certificates

Digital Certificates provide a means of proving your identity in electronic transactions, much like a driver license or a passport does in face-to-face interactions.

With a Digital Certificate, you can assure friends, business associates, and online services that the electronic information they receive from you is authentic.

This document introduces Digital Certificates and answers questions you might have about how Digital Certificates are used, as well as about the cryptography technologies they rely on.

Digital certificates are the equivalent of a driver’s license, a marriage license, or any other form of identity.

The only difference is that a digital certificate is used in conjunction with a public key encryption system. Digital certificates are electronic files that simply work as an online passport.

Digital certificates are issued by a third party known as a Certification Authority such as VeriSign or Thawte.

These third party certificate authorities have the responsibility to confirm the identity of the certificate holder as well as provide assurance to the website visitors that the website is one that is trustworthy and capable of serving them in a trustworthy manner.

Digital certificates have two basic functions.

The first is to certify that the people, the website and the network resources such as servers and routers are reliable sources, in other words, who or what they claim to be.

The second function is to provide protection for the data exchanged from the visitor and the website from tampering or even theft, such as credit card information.

Who Uses Digital Certificates

Digital Certificates can be used for a variety of electronic transactions including e-mail, electronic commerce, groupware and electronic funds transfers.

Netscape’s popular Enterprise Server requires a Digital Certificate for each secure server.

For example, a customer shopping at an electronic mall run by Netscape’s server software requests the Digital Certificate of the server to authenticate the identity of the mall operator and the content provided by the merchant.

Without authenticating the server, the shopper should not trust the operator or merchant with sensitive information like a credit card number.

The Digital Certificate is instrumental in establishing a secure channel for communicating any sensitive information back to the mall operator. Virtual malls, electronic banking, and other electronic services are becoming more commonplace, offering the convenience and flexibility of round-the-clock service direct from your home.

However, your concerns about privacy and security might be preventing you from taking advantage of this new medium for your personal business.

Encryption alone is not enough, as it provides no proof of the identity of the sender of the encrypted information. Without special safeguards, you risk being impersonated online.

Digital Certificates address this problem, providing an electronic means of verifying someone’s identity.

Used in conjunction with encryption, Digital Certificates provide a more complete security solution, assuring the identity of all parties involved in a transaction.

Similarly, a secure server must have its own Digital Certificate to assure users that the server is run by the organization it claims to be affiliated with and that the content provided is legitimate.

Types of Digital Certificate:-

  1. Identity Certificates

An Identity Certificate is one that contains a signature verification key combined with sufficient information to identify (hopefully uniquely) the key holder.

This type of certificate is much subtler than might first be imagined and will be considered in more detail later.

  2. Accreditation Certificates

This is a certificate that identifies the key holder as a member of a specified group or organization without necessarily identifying them.

For example, such a certificate could indicate that the key holder is a medical doctor or a lawyer.

In many circumstances, a particular signature is needed to authorize a transaction but the identity of the key holder is not relevant.

For example, pharmacists might need to ensure that medical prescriptions are signed by doctors but they do not need to know the specific identities of the doctors involved.

Here the certificate states, in effect, that the key holder, whoever they are, has permission to write medical prescriptions.

Accreditation certificates can also be viewed as authorization (or permission) certificates.

It might be thought that a doctor’s key without identity would undermine the ability to audit the issue of medical prescriptions.

However, while such a certificate might not contain key holder identity data, the certificate issuer will know it, so such requirements can be met if necessary.

  3. Authorization and Permission Certificates

In these forms of certificate, the certificate signing authority delegates some form of authority to the key being signed.

For example, a Bank will issue an authorization certificate to its customers saying ‘the key in this certificate can be used to authorize the withdrawal of money from account number 271828’.

In general, the owner of any resource that involves electronic access can use an authorization certificate to control access to it.

Other examples include control of access to secure computing facilities and to World Wide Web pages.

In banking an identity certificate might be used to set up an account but the authorization certificate for the account will not itself contain identity data.

To identify the owner of a certificate a bank will typically look up the link between account numbers and owners in its internal databases.

Placing such information in an authorization certificate is actually undesirable since it could expose the bank or its customers to additional risks.

The Parties to a Digital Certificate

In principle there are three different interests associated with a digital certificate:

  1. The Requesting Party

The party who needs the certificate and will offer it for use by others; they will generally provide some or all of the information it contains.

  2. The Issuing Party

The party that digitally signs the certificate after creating the information in the certificate or checking its correctness.

  3. The Verifying Party (or Parties)

Parties that validate the signature on the certificate and then rely on its contents for some purpose.

For example, a person (the requesting party) might present paper documents giving proof of identity to a government agency (the issuing party), which will then provide an identity certificate that could be used by a bank (the verifying party) when the requesting party opens a bank account.

The term 'relying party' is sometimes used instead of 'verifying party', but this can be misleading, since the real purpose is to identify a party who checks the certificate before relying on it.

In a credit card transaction many parties might handle a certificate and hence rely on it in some way, but only a few of these might actually check the validity of the certificate.

Hence a 'verifying party' is a party that checks and then relies on the contents of a certificate, not just one that depends on it without checking its validity.

The actual parties involved in using a certificate will vary depending on the type of certificate.

 

Certificates – CSR (Certificate Signing Request).

What is a CSR (Certificate Signing Request)?

What is a CSR?

A CSR or Certificate Signing Request is a block of encoded text that is generated on the server that the certificate will be used on.

It contains information that will be included in your certificate such as your organization name, common name (domain name), locality, and country.

It also contains the public key that will be included in your certificate.

A private key is usually created at the same time that you create the CSR.

A certificate authority will use a CSR to create your SSL certificate, but it does not need your private key.

You need to keep your private key secret.

After all, what good would a certificate be if someone else could potentially read your communications?

The certificate created with a particular CSR will only work with the private key that was generated with it.

So if you lose the private key, the certificate will no longer work.

What is contained in a CSR?

  • Common Name: The fully qualified domain name (FQDN) of your server. This must match exactly what you type in your web browser or you will receive a name mismatch error. Examples: *.google.com, mail.google.com
  • Organization: The legal name of your organization. This should not be abbreviated and should include suffixes such as Inc, Corp, or LLC. Example: Google Inc.
  • Organizational Unit: The division of your organization handling the certificate. Examples: Information Technology, IT Department
  • City/Locality: The city where your organization is located. Example: Mountain View
  • State/County/Region: The state/region where your organization is located. This shouldn't be abbreviated. Example: California
  • Country: The two-letter ISO code for the country where your organization is located. Examples: US, GB
  • Email address: An email address used to contact your organization. Example: webmaster@google.com
  • Public Key: The public key that will go into the certificate. It is created automatically.

What is a CSR’s format?

Most CSRs are created in the Base-64 encoded PEM format.

This format includes the "-----BEGIN CERTIFICATE REQUEST-----" and "-----END CERTIFICATE REQUEST-----" lines at the beginning and end of the CSR.

A PEM format CSR can be opened in a text editor and looks like the following example:

-----BEGIN CERTIFICATE REQUEST-----
MIIByjCCATMCAQAwgYkxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
MRYwFAYDVQQHEw1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKEwpHb29nbGUgSW5jMR8w
HQYDVQQLExZJbmZvcm1hdGlvbiBUZWNobm9sb2d5MRcwFQYDVQQDEw53d3cuZ29v
Z2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEApZtYJCHJ4VpVXHfV
IlstQTlO4qC03hjX+ZkPyvdYd1Q4+qbAeTwXmCUKYHThVRd5aXSqlPzyIBwieMZr
WFlRQddZ1IzXAlVRDWwAo60KecqeAXnnUK+5fXoTI/UgWshre8tJ+x/TMHaQKR/J
cIWPhqaQhsJuzZbvAdGA80BLxdMCAwEAAaAAMA0GCSqGSIb3DQEBBQUAA4GBAIhl
4PvFq+e7ipARgI5ZM+GZx6mpCz44DTo0JkwfRDf+BtrsaC0q68eTf2XhYOsq4fkH
Q0uA0aVog3f5iJxCa3Hp5gxbJQ6zV6kJ0TEsuaaOhEko9sdpCoPOnRBm2i/XRD2D
6iNh8f8z0ShGsFqjDgFHyF3o+lUyj+UC6H1QW7bn
-----END CERTIFICATE REQUEST-----

How do I generate a CSR using OpenSSL on Linux?

openssl req -out <name>.csr -new -newkey rsa:2048 -nodes -keyout privateKey_<name>.key

How do I decode a CSR?

In order to decode a CSR on your own machine using OpenSSL, use the following command:

openssl req -in <name>.csr -noout -text

What is a CSR/Private Key’s bit length?

The bit length of a CSR and private key pair determines how easily the key can be cracked using brute force methods.

A key size of 512 bits is considered weak and could potentially be broken in a few months or less with enough computing power.

If a private key is broken, all the connections initiated with it would be exposed to whomever had the key.

A bit length of 1024 is exponentially stronger; however, it is more and more likely to be broken as computing power increases.

The Extended Validation guidelines that SSL certificate providers are required to follow (http://cabforum.org/documents.html), require that all EV certificates use a 2048-bit key size to ensure their security well into the future.

Because of this, most providers encourage 2048-bit keys on all certificates whether they are EV or not.
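To check the bit length of an existing RSA private key with OpenSSL (the file name privateKey.key is illustrative), something like the following can be used; the first line of the output reports the size, e.g. "Private-Key: (2048 bit)":

 openssl rsa -in privateKey.key -noout -text | head -1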

Disabling HTTP methods in TIBCO Administrator Tomcat

Title: Disabling HTTP methods in Administrator Tomcat.
Description: To restrict the response to specific HTTP Methods such as OPTIONS, PUT, DELETE, CONNECT and TRACE, Tomcat can be configured to not respond to any of these HTTP Methods.
Environment: All (Linux, Windows)
Resolution: This can be configured at the instance level by inserting a <security-constraint> element directly under the <web-app> element in the installation’s web.xml file located at: [tomcatinstallation]/conf/web.xml

Below is the added configuration.

<security-constraint>
  <web-resource-collection>
    <web-resource-name>restricted methods</web-resource-name>
    <url-pattern>/*</url-pattern>
    <http-method>TRACE</http-method>
    <http-method>PUT</http-method>
    <http-method>OPTIONS</http-method>
    <http-method>DELETE</http-method>
  </web-resource-collection>
  <auth-constraint />
</security-constraint>

The configuration above will disable the HTTP methods TRACE, PUT, OPTIONS, and DELETE. Specifically for TRACE, open Tibco_home/administrator/domain<domain_name>/tomcat/conf/server.xml and set allowTrace="false" in the HTTP connector element used by the admin server. After this attribute is set, restart the admin server.
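As a sketch, the modified connector element in server.xml might then look like the following (the port and timeout values are illustrative):

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" allowTrace="false" />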


“BW-EXT-LOG-100000 All threads (75) are currently busy, waiting. Increase maxThreads (75) or check the servlet status”

Title: Why Do I get “BW-EXT-LOG-100000 All threads (75) are currently busy, waiting. Increase maxThreads (75) or check the servlet status” error?

Environment: Product: TIBCO BusinessWorks
Version:
OS: Any

Resolution: The error you got indicates that BW has reached the maximum number of threads available for incoming HTTP requests.
To solve the problem, you can modify the following BW properties to specify higher values:

bw.plugin.http.server.minProcessors
This property specifies the minimum number of threads available for incoming HTTP requests. The HTTP server creates the number of threads specified by this parameter when it starts up. The default minimum number of threads is 10.

bw.plugin.http.server.maxProcessors
This property specifies the maximum number of threads available for incoming HTTP requests. The HTTP server will not create more than the number of threads specified by this parameter. The default maximum number of threads is 75.
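As a sketch, one common approach is to add these properties to the deployed engine's .tra file (the value 150 is illustrative, and the exact file depends on your deployment):

bw.plugin.http.server.minProcessors=10
bw.plugin.http.server.maxProcessors=150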

You can get more details about these properties in the TIBCO ActiveMatrix BusinessWorks Palette Reference, Chapter 6, HTTP Palette: HTTP Connection, Custom Properties for the HTTP Palette.


Keywords: BW HTTP Threads request minProcessors

Hack #21 -> Sed Basics – Find and Replace Using RegEx

This hack explains how to use the sed substitute command "s".

The "s" command is probably the most important in sed and has a lot of different options. It attempts to match the pattern space against the supplied REGEXP; if the match is successful, then the portion of the pattern space which was matched is replaced with REPLACEMENT.

Syntax:

 

#sed 'ADDRESSs/REGEXP/REPLACEMENT/FLAGS' filename

#sed 'PATTERNs/REGEXP/REPLACEMENT/FLAGS' filename

 

  • s is the substitute command
  • / is a delimiter
  • REGEXP is the regular expression to match
  • REPLACEMENT is the value to replace it with

FLAGS can be any of the following:

  • g Replace all instances of REGEXP with REPLACEMENT.
  • n Can be any number; replace only the nth instance of REGEXP with REPLACEMENT.
  • p If a substitution was made, print the new pattern space.
  • i Match REGEXP in a case-insensitive manner.
  • w file If a substitution was made, write out the result to the given file.
  • A different delimiter (one of @ % ; :) can be used instead of /, as in the sketch below.
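For instance, a quick sketch using @ as the delimiter (paths.txt is a made-up file), which avoids having to escape the slashes in path names:

$ sed 's@/usr/local@/opt@g' paths.txt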

Let us first create the thegeekstuff.txt file that will be used in all the examples mentioned below.

 

$ cat thegeekstuff.txt

 

# Instruction Guides

  1. Linux Sysadmin, Linux Scripting etc.
  2. Databases – Oracle, mySQL etc.
  3. Security (Firewall, Network, Online Security etc)
  4. Storage in Linux
  5. Productivity (Too many technologies to explore, not much time available)

# Additional FAQS

  1. Windows- Sysadmin, reboot etc.

 

Substitute Word “Linux” to “Linux-Unix” Using sed s//

In the example below, in the output line "1. Linux-Unix Sysadmin, Linux Scripting etc." only the first Linux is replaced by Linux-Unix. If no flags are specified, only the first match in each line is replaced.

 

$ sed 's/Linux/Linux-Unix/' thegeekstuff.txt

 

# Instruction Guides

  1. Linux-Unix Sysadmin, Linux Scripting etc.
  2. Databases – Oracle, mySQL etc.
  3. Security (Firewall, Network, Online Security etc)
  4. Storage in Linux-Unix
  5. Productivity (Too many technologies to explore, not much time available)

# Additional FAQS

  1. Windows- Sysadmin, reboot etc.

 

Substitute all Appearances of a Word Using sed s//g

The sed command below replaces all occurrences of Linux with Linux-Unix using the global substitution flag "g".

 

$ sed 's/Linux/Linux-Unix/g' thegeekstuff.txt

 

# Instruction Guides

  1. Linux-Unix Sysadmin, Linux-Unix Scripting etc.
  2. Databases – Oracle, mySQL etc.
  3. Security (Firewall, Network, Online Security etc)
  4. Storage in Linux-Unix
  5. Productivity (Too many technologies to explore, not much time available)

# Additional FAQS

  1. Windows- Sysadmin, reboot etc.

 

Substitute Only 2nd Occurrence of a Word Using sed s//2

In the example below, in the output line "1. Linux Sysadmin, Linux-Unix Scripting etc." only the 2nd occurrence of Linux is replaced by Linux-Unix.

 

$ sed 's/Linux/Linux-Unix/2' thegeekstuff.txt

 

# Instruction Guides

  1. Linux Sysadmin, Linux-Unix Scripting etc.
  2. Databases – Oracle, mySQL etc.
  3. Security (Firewall, Network, Online Security etc)
  4. Storage in Linux
  5. Productivity (Too many technologies to explore, not much time available)

# Additional FAQS

  1. Windows- Sysadmin, reboot etc.

Write Changes to a File and Print the Changes Using sed s//gpw

The example below has a substitution with three flags. It substitutes all occurrences of Linux with Linux-Unix, prints the substituted output, and also writes it to the given file.

 

$ sed -n 's/Linux/Linux-Unix/gpw output' thegeekstuff.txt

 

  1. Linux-Unix Sysadmin, Linux-Unix Scripting etc.
  4. Storage in Linux-Unix

 

$ cat output

 

  1. Linux-Unix Sysadmin, Linux-Unix Scripting etc.
  4. Storage in Linux-Unix

Substitute Only When the Line Matches with the Pattern Using sed

In this example, if the line matches the pattern "-", then sed replaces everything from "-" to the end of the line with nothing.

 

$ sed '/\-/s/\-.*//g' thegeekstuff.txt

 

# Instruction Guides

  1. Linux Sysadmin, Linux Scripting etc.
  2. Databases
  3. Security (Firewall, Network, Online Security etc)
  4. Storage in Linux
  5. Productivity (Too many technologies to explore, not much time available)

# Additional FAQS

  1. Windows

 

Delete Last X Number of Characters From Each Line Using sed

This sed example deletes the last 3 characters from each line.

$ sed 's/...$//' thegeekstuff.txt

 

# Instruction Gui

  1. Linux Sysadmin, Linux Scripting e
  2. Databases – Oracle, mySQL e
  3. Security (Firewall, Network, Online Security e
  4. Storage in Li
  5. Productivity (Too many technologies to explore, not much time availab

# Additional F

  1. Windows- Sysadmin, reboot e

 

Eliminate Comments Using sed

Delete all the comment lines from a file as shown below using the sed command.

$ sed -e 's/#.*//' thegeekstuff.txt

 

  1. Linux Sysadmin, Linux Scripting etc.
  2. Databases – Oracle, mySQL etc.
  3. Security (Firewall, Network, Online Security etc)
  4. Storage in Linux
  5. Productivity (Too many technologies to explore, not much time available)

  1. Windows- Sysadmin, reboot etc.

Eliminate Comments and Empty Lines Using sed

 

In the following example, there are two commands separated by ';'. The first command replaces everything from # to the end of the line, turning comment lines into blank lines; the second command deletes the empty lines.

 

$ sed -e 's/#.*//;/^$/d' thegeekstuff.txt

 

  1. Linux Sysadmin, Linux Scripting etc.
  2. Databases – Oracle, mySQL etc.
  3. Security (Firewall, Network, Online Security etc)
  4. Storage in Linux
  5. Productivity (Too many technologies to explore, not much time available)

  1. Windows- Sysadmin, reboot etc.

Convert DOS newlines (CR/LF) to Unix format Using sed
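A common way to perform this conversion (assuming GNU sed, which understands \r; the file names are illustrative):

$ sed 's/\r$//' dosfile.txt > unixfile.txt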

 

Eliminate HTML Tags from file Using sed

 

In this example, the regular expression given in the sed command matches the HTML tags and replaces them with the empty string. Feeding the sample line in on standard input:

$ echo 'This <b> is </b> an <i>example</i>.' | sed -e 's/<[^>]*>//g'
This is an example

Hack #20 -> Execute Commands in the Background

You can use one of the 5 methods explained in this hack to execute a Linux command or shell script in the background.

 

Method 1. Use &

You can execute a command (or shell script) as a background job by appending an ampersand to the command as shown below.

 

$ ./my-shell-script.sh &

 

Method 2. Nohup

After you execute a command (or shell script) in the background using &, if you log out from the session the command will get killed. To avoid that, you should use nohup as shown below.

 

$ nohup ./my-shell-script.sh &

 

Method 3. Screen Command

After you execute a command in the background using nohup and &, the command will keep running even after you log out. But you cannot connect to the same session again to see exactly what is happening on the screen. To do that, you should use the screen command.

The Linux screen command offers the ability to detach a session that is running some process, and then attach it at a later time. When you reattach the session later, your terminals will be there exactly the way you left them earlier.
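A minimal sketch of that workflow (the session name is illustrative):

$ screen -S mysession        # start a named screen session
$ ./my-shell-script.sh       # run the long task inside it
$ screen -r mysession        # after detaching with Ctrl-a d, reattach later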

 

Method 4. At Command

Using the at command you can schedule a job to run at a particular date and time. For example, to execute the backup script at 10 a.m. tomorrow, do the following.

 

$ at -f backup.sh 10 am tomorrow
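Pending jobs can then be listed and removed (the job number 5 is illustrative):

$ at -l     # list pending jobs (equivalent to atq)
$ atrm 5    # remove job number 5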

 

Method 5. Watch Command

To execute a command continuously at a certain interval, use the watch command as shown below.

 

$ watch df -h

Hack #19 -> Display Total Connect Time of Users

The ac command displays statistics about users' connect time.

 

Connect time for the current logged in user

With the option -d, it breaks down the output by individual day. In this example, I've been logged in to the system for more than 6 hours today. On Dec 1st, I was logged in for about 1 hour.

$ ac -d

Dec 1 total 1.08

Dec 2 total 0.99

Dec 3 total 3.39

Dec 4 total 4.50

Today total 6.10

 

Connect time for all the users

To display the connect time for all users, use -p as shown below. Please note that this indicates the cumulative connect time for each individual user.

$ ac -p

john 3.64

madison 0.06

sanjay 88.17

nisha 105.92

ramesh 111.42

total 309.21

 

Connect time for a specific user

To get a connect time report for a specific user, execute the following:

 

$ ac -d sanjay

Jul 2 total 12.85

Aug 25 total 5.05

Sep 3 total 1.03

Sep 4 total 5.37

Dec 24 total 8.15

Dec 29 total 1.42

Today total 2.95

Hack #18 -> Diff Command

The diff command compares two files and reports the differences. The output of the diff command is cryptic and not straightforward to read.

Syntax: diff [options] file1 file2

 

What was modified in my new file compared to my old file?

 

The option -w in the diff command will ignore white space while performing the comparison.

In the following diff output:

  • The lines above --- indicate the changes in the first file given to the diff command (i.e. name_list.txt).
  • The lines below --- indicate the changes in the second file (i.e. name_list_new.txt). The lines that belong to the first file start with < and the lines of the second file start with >.

In the output below, the header 2c2,3 means that line 2 of the first file was changed into lines 2 and 3 of the second file.

 

# diff -w name_list.txt name_list_new.txt

2c2,3

< John Doe

> John M Doe

> Jason Bourne

Hack #17 -> Stat Command

The stat command can be used either to check the status/properties of a single file or of the filesystem.

Display statistics of a file or directory.

 

$ stat /etc/my.cnf

File: `/etc/my.cnf’

Size: 346 Blocks: 16 IO Block: 4096 regular file

Device: 801h/2049d Inode: 279856 Links: 1

Access: (0644/-rw-r--r--) Uid: (0/root) Gid: (0/root)

Access: 2009-01-01 02:58:30.000000000 -0800

Modify: 2006-06-01 20:42:27.000000000 -0700

Change: 2007-02-02 14:17:27.000000000 -0800

$ stat /home/ramesh

File: `/home/ramesh’

Size: 4096 Blocks: 8 IO Block: 4096 directory

Device: 803h/2051d Inode: 5521409 Links: 7

Access: (0755/drwxr-xr-x) Uid: (401/ramesh) Gid: (401/ramesh)

Access: 2009-01-01 12:17:42.000000000 -0800

Modify: 2009-01-01 12:07:33.000000000 -0800

Change: 2009-01-09 12:07:33.000000000 -0800

 

Display the status of the filesystem using the option -f

 

$ stat -f /

File: "/"
ID: 0 Namelen: 255 Type: ext2/ext3
Blocks: Total: 2579457 Free: 2008027 Available: 1876998 Size: 4096
Inodes: Total: 1310720 Free: 1215892

Hack #16 -> Cut Command

The cut command can be used to display only specific columns from a text file or from other command output.

The following are some examples.
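The examples assume a colon-delimited names.txt along the following lines; only fields 1 and 3 appear in the outputs below, so the middle field here is invented purely for illustration:

$ cat names.txt
Emma Thomas:emma:Marketing
Alex Jason:alex:Sales
Madison Randy:madison:Product Development
Sanjay Gupta:sanjay:Support
Nisha Singh:nisha:Sales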

Display the 1st field (employee name) from a colon delimited file

$ cut -d: -f 1 names.txt

Emma Thomas

Alex Jason

Madison Randy

Sanjay Gupta

Nisha Singh

Display 1st and 3rd field from a colon delimited file

$ cut -d: -f 1,3 names.txt

Emma Thomas:Marketing

Alex Jason:Sales

Madison Randy:Product Development

Sanjay Gupta:Support

Nisha Singh:Sales

Display only the first 8 characters of every line in a file

$ cut -c 1-8 names.txt

Emma Tho

Alex Jas

Madison

Sanjay G

Nisha Si

Misc Cut command examples

  • cut -d: -f1 /etc/passwd displays the unix login names for all the users in the system.
  • free | tr -s ' ' | sed '/^Mem/!d' | cut -d" " -f2 displays the total memory available on the system.