Tech Humour # 8


How to use parallel ssh (PSSH) for executing ssh in parallel on a number of Linux/Unix/BSD servers

Recently I came across a nifty little tool called pssh that runs a single command on multiple Linux / UNIX / BSD servers. You can easily increase your productivity with this SSH tool.
More about pssh
pssh is a command line tool for executing ssh in parallel on a number of hosts. Its specialties include:
  1. Sending input to all of the processes
  2. Inputting a password to ssh
  3. Saving output to files
  4. IT/sysadmin tasks automation such as patching servers
  5. Timing out and more
Let us see how to install and use pssh on Linux and Unix-like system.
Installation
You can install pssh as per your Linux and Unix variant. Once the package is installed, you get parallel versions of the openssh tools. Included in the installation:
  1. Parallel ssh (pssh command)
  2. Parallel scp (pscp command )
  3. Parallel rsync (prsync command)
  4. Parallel nuke (pnuke command)
  5. Parallel slurp (pslurp command)
Install pssh on Debian/Ubuntu Linux
Type the following apt-get command/apt command to install pssh:
$ sudo apt install pssh
OR
$ sudo apt-get install pssh
Sample outputs:
Fig.01: Installing pssh on Debian/Ubuntu Linux

Install pssh on Apple MacOS X
Type the following brew command:
$ brew install pssh
Sample outputs:
Fig.02: Installing pssh on MacOS Unix

Install pssh on FreeBSD Unix
Type any one of the following commands:
# cd /usr/ports/security/pssh/ && make install clean
OR
# pkg install pssh
Sample outputs:
Fig.03: Installing pssh on FreeBSD

Install pssh on RHEL/CentOS Linux
First turn on the EPEL repo, then type the following yum command:
$ sudo yum install pssh
Sample outputs:
Fig.04: Installing pssh on RHEL/CentOS/Red Hat Enterprise Linux

Install pssh on Fedora Linux
Type the following dnf command:
$ sudo dnf install pssh
Sample outputs:
Fig.05: Installing pssh on Fedora

Install pssh on Arch Linux
Type the following command:
$ sudo pacman -S python-pip
$ pip install pssh
How to use pssh command
First you need to create a text file called a hosts file, from which pssh reads host names. The syntax is pretty simple. Each line in the hosts file is of the form [user@]host[:port]; the file can include blank lines and comment lines beginning with “#”. Here is my sample file named ~/.pssh_hosts_files:
$ cat ~/.pssh_hosts_files
vivek@dellm6700
root@192.168.2.30
root@192.168.2.45
root@192.168.2.46

Run the date command on all hosts:
$ pssh -i -h ~/.pssh_hosts_files date
Sample outputs:
[1] 18:10:10 [SUCCESS] root@192.168.2.46
Sun Feb 26 18:10:10 IST 2017
[2] 18:10:10 [SUCCESS] vivek@dellm6700
Sun Feb 26 18:10:10 IST 2017
[3] 18:10:10 [SUCCESS] root@192.168.2.45
Sun Feb 26 18:10:10 IST 2017
[4] 18:10:10 [SUCCESS] root@192.168.2.30
Sun Feb 26 18:10:10 IST 2017
Run the uptime command on each host:
$ pssh -i -h ~/.pssh_hosts_files uptime
Sample outputs:
[1] 18:11:15 [SUCCESS] root@192.168.2.45
18:11:15 up 2:29, 0 users, load average: 0.00, 0.00, 0.00
[2] 18:11:15 [SUCCESS] vivek@dellm6700
18:11:15 up 19:06, 0 users, load average: 0.13, 0.25, 0.27
[3] 18:11:15 [SUCCESS] root@192.168.2.46
18:11:15 up 1:55, 0 users, load average: 0.00, 0.00, 0.00
[4] 18:11:15 [SUCCESS] root@192.168.2.30
6:11PM up 1 day, 21:38, 0 users, load averages: 0.12, 0.14, 0.09
You can now automate common sysadmin tasks such as patching all servers:
$ pssh -h ~/.pssh_hosts_files -- sudo yum -y update
OR
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y update
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y upgrade
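pssh can also save each host’s output to files and enforce a timeout, two of the features listed earlier. A small sketch; the /tmp/pssh-out and /tmp/pssh-err directories and the 60 second timeout are only example values:
$ pssh -h ~/.pssh_hosts_files -o /tmp/pssh-out -e /tmp/pssh-err -t 60 -- uptime
If the remote accounts need a password, -A prompts for it once and feeds it to every host:
$ pssh -h ~/.pssh_hosts_files -A -i date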
How do I use pssh to copy file to all servers?
The syntax is:
pscp -h ~/.pssh_hosts_files src dest
To copy $HOME/demo.txt to /tmp/ on all servers, enter:
$ pscp -h ~/.pssh_hosts_files $HOME/demo.txt /tmp/
Sample outputs:
[1] 18:17:35 [SUCCESS] vivek@dellm6700
[2] 18:17:35 [SUCCESS] root@192.168.2.45
[3] 18:17:35 [SUCCESS] root@192.168.2.46
[4] 18:17:35 [SUCCESS] root@192.168.2.30
Or use the prsync command for efficient copying of files:
$ prsync -h ~/.pssh_hosts_files /etc/passwd /tmp/
$ prsync -h ~/.pssh_hosts_files *.html /var/www/html/
How do I kill processes in parallel on a number of hosts?
Use the pnuke command for killing processes in parallel on a number of hosts. The syntax is:
$ pnuke -h ~/.pssh_hosts_files process_name
### kill nginx and firefox on hosts:
$ pnuke -h ~/.pssh_hosts_files firefox
$ pnuke -h ~/.pssh_hosts_files nginx
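The pslurp command (installed alongside pscp) works in the opposite direction: it pulls a file from every host into per-host directories on the local machine. A small sketch; /tmp/logs and the /var/log/syslog path are only examples:
$ pslurp -h ~/.pssh_hosts_files -L /tmp/logs /var/log/syslog syslog
This should leave a copy at /tmp/logs/<host>/syslog for each host in the file.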

See pssh/pscp command man pages for more information.
Conclusion
pssh is a pretty good tool for parallel SSH command execution on many servers. It is quite useful if you have 5 or 10 servers. Nevertheless, if you need to do something more complicated you should look into Ansible and co.

ELK on CentOS 7 – (Source UnixMen)

Introduction

For those who don’t know, Elastic Stack (ELK Stack) is an infrastructure software program made up of multiple components developed by Elastic. The components include:

  • Beats: open-source data shippers working as agents on the servers to send different types of operational data to Elasticsearch.
  • Elasticsearch: a highly scalable open source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.
  • Kibana: open source analytics and visualization platform designed to work with Elasticsearch. It is used to interact with data stored in Elasticsearch indices. It has a browser-based interface that enables quick creation and sharing of dynamic dashboards that display changes to Elasticsearch queries in real time.
  • Logstash: a log and event collection engine, which provides a real-time pipeline. It can take data from multiple sources and convert it into JSON documents.

This tutorial will take you through the process of installing the Elastic Stack on a CentOS 7 server.

Getting started

First of all, we need Java 8, so you’ll need to download the official Oracle rpm package.

# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"

Install it with rpm:

# rpm -ivh jdk-8u77-linux-x64.rpm

Ensure that it is working properly by checking it on your server:

# java -version

Install Elasticsearch

First, download and install the public signing key:

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create a file called elasticsearch.repo in /etc/yum.repos.d/, and paste the following lines:

[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Now, the repository is ready for use. Install Elasticsearch with yum:

# yum install elasticsearch

Configuring Elasticsearch

Go to the configuration directory and edit the elasticsearch.yml configuration file, like this:

# $EDITOR /etc/elasticsearch/elasticsearch.yml

Enable the memory lock by uncommenting line 43:
bootstrap.memory_lock: true
Then, scroll until you reach the “Network” section and uncomment these lines:

network.host: 192.168.0.1
http.port: 9200

Save and exit.

Next, it’s time to configure the memory lock limit. In /usr/lib/systemd/system/, edit elasticsearch.service. There, uncomment the line:

LimitMEMLOCK=infinity

Save and exit.

Now open the Elasticsearch sysconfig file:

# $EDITOR /etc/sysconfig/elasticsearch

Uncomment line 60 and make sure that it contains the following content:

MAX_LOCKED_MEMORY=unlimited

Now, Elasticsearch is configured. It will run on the IP address you specified (change it to “localhost” if necessary) on port 9200. Next:

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch
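
At this point you can sanity-check the service over HTTP (use localhost instead of 192.168.0.1 if that is what you set in network.host); the second call should report mlockall: true if the memory lock took effect:

# curl -XGET 'http://192.168.0.1:9200/?pretty'
# curl -XGET 'http://192.168.0.1:9200/_nodes?filter_path=**.mlockall&pretty'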

Install Kibana

When Elasticsearch has been configured and started, install and configure Kibana with a web server. In this case, we will use Nginx.
As in the case of Elasticsearch, install Kibana with wget and rpm:

# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
# rpm -ivh kibana-5.1.1-x86_64.rpm

Edit the Kibana configuration file:

# $EDITOR /etc/kibana/kibana.yml

There, uncomment:

server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"

Save, exit and start Kibana.

# systemctl enable kibana
# systemctl start kibana

Now, install Nginx and configure it as a reverse proxy. This way it’s possible to access Kibana from the public IP address.
Nginx is available in the EPEL repository:

# yum -y install epel-release

Next:

# yum -y install nginx httpd-tools

In the Nginx configuration file (/etc/nginx/nginx.conf), remove the server { } block. Then save and exit.

Create a Virtual Host configuration file:

# $EDITOR /etc/nginx/conf.d/kibana.conf

There, paste the following content:

server {
    listen 80;
 
    server_name elk-stack.co;
 
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
 
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Create a new authentication file:

# htpasswd -c /etc/nginx/.kibana-user admin
my_strong_password

Lastly:

# systemctl enable nginx
# systemctl start nginx
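
On a stock CentOS 7 box, firewalld and SELinux may still block access to the proxy. If they are enabled on your server, something along these lines opens HTTP and lets Nginx talk to the Kibana backend:

# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload
# setsebool -P httpd_can_network_connect 1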

Install Logstash

As with Elasticsearch and Kibana:

# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
# rpm -ivh logstash-5.1.1.rpm

It’s necessary to create a new SSL certificate. First, edit the openssl.cnf file:

# $EDITOR /etc/pki/tls/openssl.cnf

In the [ v3_ca ] section, add the server identification:

[ v3_ca ]

# Server IP Address
subjectAltName = IP: IP_ADDRESS

After saving and exiting, generate the certificate:

# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
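
You can optionally confirm that the IP address you entered made it into the certificate’s Subject Alternative Name:

# openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -text | grep -A 1 'Subject Alternative Name'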

Next, you can create a new file to configure the log sources for Filebeat, then a file for syslog processing, and a file to define the Elasticsearch output.

These configurations depend on how you want to filter the data.
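
As a starting point, here is a minimal sketch of such a pipeline, assuming Filebeat ships syslog data over the certificate generated above; the file name (for example /etc/logstash/conf.d/01-beats-syslog-es.conf), the port and the index pattern are only illustrative:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}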

Finally:

# systemctl enable logstash
# systemctl start logstash

You have now successfully installed and configured the ELK Stack server-side!

30 Shades of “Alias” Command – UNIX

You can define various types of aliases as follows to save time and increase productivity.

#1: Control ls command output

The ls command lists directory contents and you can colorize the output:

## Colorize the ls output ##
alias ls='ls --color=auto'
 
## Use a long listing format ##
alias ll='ls -la' 
 
## Show hidden files ##
alias l.='ls -d .* --color=auto'

#2: Control cd command behavior

## get rid of command not found ##
alias cd..='cd ..' 
 
## a quick way to get out of current directory ##
alias ..='cd ..' 
alias ...='cd ../../../' 
alias ....='cd ../../../../' 
alias .....='cd ../../../../..' 
alias .4='cd ../../../../' 
alias .5='cd ../../../../..'

#3: Control grep command output

The grep command is a command-line utility for searching plain-text files for lines matching a regular expression:

## Colorize the grep command output for ease of use (good for log files)##
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'

#4: Start calculator with math support

alias bc='bc -l'

#4.1: Generate sha1 digest

alias sha1='openssl sha1'

#5: Create parent directories on demand

mkdir command is used to create a directory:

alias mkdir='mkdir -pv'

#6: Colorize diff output

You can compare files line by line using diff and use a tool called colordiff to colorize diff output:

# install  colordiff package 🙂
alias diff='colordiff'

#7: Make mount command output pretty and human readable

alias mount='mount |column -t'

#8: Command short cuts to save time

# handy short cuts #
alias h='history'
alias j='jobs -l'

#9: Create a new set of commands

alias path='echo -e ${PATH//:/\\n}'
alias now='date +"%T"'
alias nowtime=now
alias nowdate='date +"%d-%m-%Y"'

#10: Set vim as default

alias vi=vim 
alias svi='sudo vi' 
alias vis='vim "+set si"' 
alias edit='vim'

#11: Control output of networking tool called ping

# Stop after sending count ECHO_REQUEST packets #
alias ping='ping -c 5'
# Do not wait 1 second between ECHO_REQUESTs, go fast #
alias fastping='ping -c 100 -i 0.2'

#12: Show open ports

Use the netstat command to quickly list all TCP/UDP ports in use on the server:

alias ports='netstat -tulanp'

#13: Wakeup sleeping servers

Wake-on-LAN (WOL) is an Ethernet networking standard that allows a server to be turned on by a network message. You can quickly wake up NAS devices and servers using the following aliases:

## replace mac with your actual server mac address #
alias wakeupnas01='/usr/bin/wakeonlan 00:11:32:11:15:FC'
alias wakeupnas02='/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03='/usr/bin/wakeonlan 00:11:32:11:15:FE'

#14: Control firewall (iptables) output

Netfilter is a host-based firewall for Linux operating systems. It is included as part of the Linux distribution and is activated by default. These aliases cover the most common iptables commands a Linux user needs to secure his or her system from intruders.

## shortcut  for iptables and pass it via sudo#
alias ipt='sudo /sbin/iptables'
 
# display all rules #
alias iptlist='sudo /sbin/iptables -L -n -v --line-numbers'
alias iptlistin='sudo /sbin/iptables -L INPUT -n -v --line-numbers'
alias iptlistout='sudo /sbin/iptables -L OUTPUT -n -v --line-numbers'
alias iptlistfw='sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall=iptlist

#15: Debug web server / cdn problems with curl

# get web server headers #
alias header='curl -I'
 
# find out if remote server supports gzip / mod_deflate or not #
alias headerc='curl -I --compress'

#16: Add safety nets

# do not delete / or prompt if deleting more than 3 files at a time #
alias rm='rm -I --preserve-root'
 
# confirmation #
alias mv='mv -i' 
alias cp='cp -i' 
alias ln='ln -i'
 
# Prevent changing perms on / #
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

#17: Update Debian Linux server

The apt-get command is used for installing packages over the internet (ftp or http). You can also upgrade all packages in a single operation:

# distro specific  - Debian / Ubuntu and friends #
# install with apt-get
alias apt-get="sudo apt-get" 
alias updatey="sudo apt-get --yes" 
 
# update on one command 
alias update='sudo apt-get update && sudo apt-get upgrade'

#18: Update RHEL / CentOS / Fedora Linux server

yum command is a package management tool for RHEL / CentOS / Fedora Linux and friends:

## distro specific: RHEL/CentOS ##
alias update='yum update'
alias updatey='yum -y update'

#19: Tune sudo and su

# become root #
alias root='sudo -i'
alias su='sudo -i'

#20: Pass halt/reboot via sudo

The shutdown command brings the Linux / Unix system down:

# reboot / halt / poweroff
alias reboot='sudo /sbin/reboot'
alias poweroff='sudo /sbin/poweroff'
alias halt='sudo /sbin/halt'
alias shutdown='sudo /sbin/shutdown'

#21: Control web servers

# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload='sudo /usr/local/nginx/sbin/nginx -s reload'
alias nginxtest='sudo /usr/local/nginx/sbin/nginx -t'
alias lightyload='sudo /etc/init.d/lighttpd reload'
alias lightytest='sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -t'
alias httpdreload='sudo /usr/sbin/apachectl -k graceful'
alias httpdtest='sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'

#22: Alias into our backup stuff

# if cron fails or if you want backup on demand just run these commands # 
# again pass it via sudo so whoever is in admin group can start the job #
# Backup scripts #
alias backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type local --target /raid1/backups'
alias nasbackup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01'
alias s3backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01 --auth /home/scripts/admin/.authdata/amazon.keys'
alias rsnapshothourly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotdaily='sudo  /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotweekly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotmonthly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias amazonbackup=s3backup

#23: Desktop specific – play avi/mp3 files on demand

## play video files in a current directory ##
# cd ~/Download/movie-name 
# playavi or vlc 
alias playavi='mplayer *.avi'
alias vlc='vlc *.avi'
 
# play all music files from the current directory #
alias playwave='for i in *.wav; do mplayer "$i"; done'
alias playogg='for i in *.ogg; do mplayer "$i"; done'
alias playmp3='for i in *.mp3; do mplayer "$i"; done'
 
# play files from nas devices #
alias nplaywave='for i in /nas/multimedia/wave/*.wav; do mplayer "$i"; done'
alias nplayogg='for i in /nas/multimedia/ogg/*.ogg; do mplayer "$i"; done'
alias nplaymp3='for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done'
 
# shuffle mp3/ogg etc by default #
alias music='mplayer --shuffle *'

#24: Set default interfaces for sys admin related commands

vnstat is a console-based network traffic monitor. dnstop is a console tool to analyze DNS traffic. The tcptrack and iftop commands display information about TCP/UDP connections seen on a network interface and bandwidth usage on an interface by host, respectively.

## All of our servers' eth1 is connected to the Internet via vlan / router etc  ##
alias dnstop='dnstop -l 5  eth1'
alias vnstat='vnstat -i eth1'
alias iftop='iftop -i eth1'
alias tcpdump='tcpdump -i eth1'
alias ethtool='ethtool eth1'
 
# work on wlan0 by default #
# Only useful for laptop as all servers are without wireless interface
alias iwconfig='iwconfig wlan0'

#25: Get system memory, cpu usage, and gpu memory info quickly

## pass options to free ## 
alias meminfo='free -m -l -t'
 
## get top process eating memory
alias psmem='ps auxf | sort -nr -k 4'
alias psmem10='ps auxf | sort -nr -k 4 | head -10'
 
## get top process eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
 
## Get server cpu info ##
alias cpuinfo='lscpu'
 
## older system use /proc/cpuinfo ##
##alias cpuinfo='less /proc/cpuinfo' ##
 
## get GPU ram on desktop / laptop## 
alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'

#26: Control Home Router

The curl command can be used to reboot Linksys routers.

# Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix.
alias rebootlinksys="curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'"
 
# Reboot tomato based Asus NT16 wireless bridge 
alias reboottomato="ssh admin@192.168.1.1 /sbin/reboot"

#27: Resume wget by default

The GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, and it can resume downloads too:

## this one saved my butt so many times ##
alias wget='wget -c'

#28: Use different browser for testing website

alias ff4='/opt/firefox4/firefox'
alias ff13='/opt/firefox13/firefox'
alias chrome='/opt/google/chrome/chrome'
alias opera='/opt/opera/opera'
 
#default ff 
alias ff=ff13
 
#my default browser 
alias browser=chrome

#29: A note about ssh alias

Do not create ssh aliases; instead, use the ~/.ssh/config OpenSSH client configuration file. It offers more options. An example:

Host server10
  Hostname 1.2.3.4
  IdentityFile ~/backups/.ssh/id_dsa
  user foobar
  Port 30000
  ForwardX11Trusted yes
  TCPKeepAlive yes

You can now connect to server10 using the following syntax:
$ ssh server10

#30: It’s your turn to share…

## set some other defaults ##
alias df='df -H'
alias du='du -ch'
 
# top is atop, just like vi is vim
alias top='atop' 
 
## nfsrestart  - must be root  ##
## refresh nfs mount / cache etc for Apache ##
alias nfsrestart='sync && sleep 2 && /etc/init.d/httpd stop && umount netapp2:/exports/http && sleep 2 && mount -o rw,sync,rsize=32768,wsize=32768,intr,hard,proto=tcp,fsc netapp2:/exports/http /var/www/html && /etc/init.d/httpd start'
 
## Memcached server status  ##
alias mcdstats='/usr/bin/memcached-tool 10.10.27.11:11211 stats'
alias mcdshow='/usr/bin/memcached-tool 10.10.27.11:11211 display'
 
## quickly flush out memcached server ##
alias flushmcd='echo "flush_all" | nc 10.10.27.11 11211'
 
## Remove assets quickly from Akamai / Amazon cdn ##
alias cdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai'
alias amzcdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon'
 
## supply list of urls via file or stdin
alias cdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdin'
alias amzcdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'

Tibco EMS FAQs

 

What messaging models does EMS support?
a. Point-to-Point (Queue)
b. Publish and Subscribe (Topic)
c. Multicast (Topic)
c. Multicast (Topic)

There are two major models for messaging supported by JMS: queues and topics. Queues are based on a point-to-point messaging model. Topics make use of the new publish-and-subscribe messaging model.

Regardless of whether queues or topics are used, the messages are not sent directly peer-to-peer. Messages are forwarded to a JMS infrastructure that is composed of one or more JMS servers. The servers are responsible for providing the quality of service to JMS and for implementing all the components not addressed by the JMS Specification.

When determining whether to use queues or topics, consider the two fundamental messaging mechanisms. The first is point-to-point messaging, in which a message is sent by one publisher (sender) and received by one subscriber (receiver). The second is publish-subscribe messaging, in which a message is sent by one or more publishers and received by one or more subscribers. The messaging model, as listed below, dictates when to use a queue or a topic:

One-to-one messaging      Queue    point-to-point
One-to-many messaging     Topic    publish-subscribe
Many-to-many messaging    Topic    publish-subscribe

What is the difference between a topic and a queue?
Answer:

Topic – synchronous mode of communication.
- publisher and subscriber model.
- broadcast type of messaging.
- there is no guarantee of delivery.
Queue – asynchronous mode of communication.
- point-to-point model.
- unidirectional type of messaging.
- there is a guarantee of delivery.
When will you use a topic and when will you use a queue?
Answer: If there is only one producer and only one consumer, then “Queues” are used; simply put, a “Queue” is of type “Point to Point”. If there are one or more publishers and one or more subscribers, then use a “Topic”; simply put, a “Topic” is of type “Broadcast messaging”.
Can queues be shared between two consumers?
Answer: Yes. If a queue is non-exclusive (the default), it can be shared by any number of consumers. Non-exclusive queues are useful for balancing the load of incoming messages across multiple receivers.
What is an exclusive queue? When would you use it?
Answer: If a queue is exclusive, then all queue messages can only be retrieved by the first consumer specified for the queue. Exclusive queues are useful when we want only one application to receive messages for a specific queue.
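For illustration, exclusivity is normally set as a destination property; a minimal queues.conf entry might look like the following, assuming the usual name-followed-by-properties layout of that file and an example queue name:
sample.orders.q exclusive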
A message in a queue can be consumed by how many consumers?
Answer: If the queue is non-exclusive, any number of consumers can receive messages from it. If it is an exclusive queue, only the first consumer will get messages.
How will you grant privileges to topics and queues?
Answer: Only the EMS admin has the privilege to grant access to topics and queues.
What are the protocols supported by JMS/EMS?
Answer: 1.SSL (HTTP) 2. TCP

What are the limitations of the Durable Subscriber?

• As long as the durable subscriber exists
• Expiration time of the message
• Storage limit of that Topic

What is the difference between “Multicasting” and “publish and subscribe” messaging model?

BROADCAST –
Many publishers can publish to the same topic, and a message from a single publisher can be received by many subscribers. Subscribers subscribe to topics, and all messages published to the topic are received by all subscribers to the topic. This type of message protocol is also known as broadcast messaging because messages are sent over the network and received by all interested subscribers, similar to how radio or television signals are broadcast and received.

MULTICAST –
Multicast messaging allows one message producer to send a message to multiple subscribed consumers simultaneously. It is similar to publish and subscribe in that it also addresses messages to a topic. Instead of delivering a copy of the message to each individual subscriber over TCP, however, the EMS server broadcasts the message over Pragmatic General Multicast (PGM). A daemon running on the machine with the subscribed EMS client receives the multicast message and delivers it to the message consumer.
Multicast is highly scalable because of the reduction in bandwidth used to broadcast messages, and it also uses fewer EMS resources.
However, one drawback of this model is that it does not guarantee the delivery of messages to all subscribers.

What are the EMS Destination features?
• Secure Property
• Trace Property
• Store Property
• Redelivery policy
• Flow control
• Exclusive property for queues

What extra features are available in EMS apart from JMS?

• The JMS standard specifies two delivery modes for messages, PERSISTENT and NON_PERSISTENT. EMS also includes a RELIABLE_DELIVERY mode that eliminates some of the overhead associated with the other delivery modes.
• For consumer sessions, you can specify a NO_ACKNOWLEDGE mode so that consumers do not need to acknowledge receipt of messages, if desired. EMS also provides an EXPLICIT_CLIENT_ACKNOWLEDGE and EXPLICIT_CLIENT_DUPS_OK_ACKNOWLEDGE mode that restricts the acknowledgement to single messages
• EMS extends the MapMessage and StreamMessage body types. These extensions allow EMS to exchange messages with TIBCO Rendezvous and ActiveEnterprise formats that have certain features not available within the JMS MapMessage and StreamMessage

What is structure of JMS Message?

• Header (Required)
• Properties (Optional)
• Body (Optional)

Where will undelivered messages be stored?

If a message expires or has exceeded the value specified by the maxRedelivery property on a queue, the server checks the message’s JMS_TIBCO_PRESERVE_UNDELIVERED property. If JMS_TIBCO_PRESERVE_UNDELIVERED is set to true, the server moves the message to the undelivered message queue, $sys.undelivered. This undelivered message queue is a system queue that is always present and cannot be deleted. If JMS_TIBCO_PRESERVE_UNDELIVERED is set to false, the message will be deleted by the server
• You can only set the undelivered property on individual messages, there is no way to set the undelivered message queue as an option at the per-topic or per-queue level

What message bodies are supported by EMS?
• Map Message
• Text Message
• Stream Message
• Bytes Message
• Object Message

What is the maximum message size supported by EMS?

EMS supports messages up to a maximum size of 512MB. However, we recommend that application programs use smaller messages, since messages approaching this maximum size will strain the performance limits of most current hardware and operating system platforms

What are the different delivery modes available in EMS?

Persistent
When a producer sends a PERSISTENT message, the producer must wait for the server to reply with a confirmation. The message is persisted on disk by the server. This delivery mode ensures delivery of messages to the destination on the server in almost all circumstances. However, the cost is that this delivery mode incurs two-way network traffic for each message or committed transaction of a group of messages

Non-Persistent
Sending a NON_PERSISTENT message omits the overhead of persisting the message on disk to improve performance.
If authorization is disabled on the server, the server does not send a confirmation to the message producer.
If authorization is enabled on the server, the default condition is for the producer to wait for the server to reply with a confirmation in the same manner as when using PERSISTENT mode.
Regardless of whether authorization is enabled or disabled, you can use the npsend_check_mode parameter in the tibemsd.conf file to specify the conditions under which the server is to send confirmation of NON_PERSISTENT messages to the producer

Reliable
EMS extends the JMS delivery modes to include reliable delivery. Sending a RELIABLE_DELIVERY message omits the server confirmation to improve performance regardless of the authorization setting.
When using RELIABLE_DELIVERY mode, the server never sends the producer a receipt confirmation or access denial and the producer does not wait for it. Reliable mode decreases the volume of message traffic, allowing higher message rates, which is useful for messages containing time-dependent data, such as stock price quotations.

If a persistent message is published on to a TOPIC, will these messages be stored on disk if the topic doesn’t have a durable subscriber or a subscriber with a fault-tolerant connection?
No. Persistent messages published to a topic are written to disk only if that topic has at least one durable subscriber or one subscriber with a fault-tolerant connection to the EMS server. In the absence of a durable subscriber or subscriber with a fault-tolerant connection, there are no subscribers that need messages resent in the event of a server failure. In this case, the server does not needlessly save persistent messages. This improves performance by eliminating the unnecessary disk I/O to persist the messages

What are the different types of acknowledgement modes in EMS message delivery?
• Auto
• Client
• Dups_ok
• No_ack
• Explicit
• Explicit_client_dups_ok
• Transactional
• Local transactional

What are the different types of messages that can be used in EMS
• Text
• Simple
• Bytes
• Map
• XML text
• Object
• Object ref
• Stream

Tell me about bridges. Why do we use them? What is the syntax to create bridges, and what is the use of a message selector?
• Some applications require the same message to be sent to more than one destination, possibly of different types, so we use bridges in that scenario.
• create bridge source=type:dest_name target=type:dest_name [selector=selector]
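Instantiating that syntax, a hedged example as issued from the tibemsadmin tool, with made-up destination names and an optional selector:
create bridge source=topic:orders.new target=queue:orders.backup selector="region='NY'"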

What is the purpose of stores.conf?

This file defines the locations, either store files or a database, where the EMS server will store messages or metadata. Each store configured is either a file-based or a database store.

In how many modes are messages written to the store file?
• Two modes: synchronous and asynchronous
• The default is asynchronous
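A minimal stores.conf sketch tying both answers together (the store names and file names are only examples):
[store-sync]
  type=file
  file=sync-msgs.db
  mode=sync

[store-async]
  type=file
  file=async-msgs.db
  mode=async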

What is tibemsd.conf?
• It is the main configuration file that controls the characteristics of the EMS server

Name destination properties and explain them.
• Global, secure, maxmsgs, maxbytes, flowcontrol, sender_name, sender_name_enforced, trace, maxRedelivery

What are the different modes of installation in Ems?
• GUI mode
• Console mode
• Silent mode

What are the messaging models supported by JMS
• Point-to-point
• Publish-subscribe
• Multicast
What is the use of routes? What kind of destinations can be used in routes?
• Topics and queues; routes can span multiple hops (m_hops).

What happens if a message expires or exceeds the value specified by the maxRedelivery property on a queue?
• If the JMS_TIBCO_PRESERVE_UNDELIVERED property is set to true, the server moves the message to the undelivered message queue; if set to false, the message is deleted by the server.

In how many ways can a destination be created?
• Static-created by server
• Dynamic-created by client
• Temporary destinations.

What are the wildcards that we use in EMS? How do they work for queues and topics?
• * and >. You can subscribe to wildcard topics but can’t publish to them, whereas in the case of queues you can neither send nor receive on wildcards.

Are bridges transitive?
• No

What is flow control on destinations?
• Sometimes the producer may send messages faster than the consumers can receive them, so the message capacity on the server will be exhausted. That is why we use flow control. Flow control can be specified on destinations.

What is flow control on bridges and routes?
• Flow control has to be specified on both sides of bridges, whereas on routes it operates differently on the sender side and the receiver side.

What are the permissions that you can grant to users to access queues
• Receive
• Send
• Browse
What are the permissions that you can grant to users to access topics
• Subscribe
• Publish
• Durable
• Use_durable

Tell me about multicasting in EMS
• Multicast is a messaging model that broadcasts messages to many consumers at once rather than sending messages individually to each consumer. EMS uses Pragmatic General Multicast (PGM) to broadcast messages published to multicast-enabled topics.
• Each multicast-enabled topic is associated with a channel.

What are the advantages and disadvantages of multicasting?
• Advantages: the message is broadcast only once, thereby reducing the amount of bandwidth used in the publish and subscribe model and reducing network traffic.
• Disadvantages: offers only last-hop delivery, so it can’t be used to send messages between servers.

On what destinations can you use multicast?
• Topics

Suppose you got an error while accessing a queue, saying that you don’t have the necessary permissions to access the queue. What might be the solution/reason?
• The user that is assigned to the queue and the user used while creating the connection do not match; grant the required permission to the connecting user.

How does the secondary server know that the primary server has failed?
• Based on heartbeat intervals

What is JMS queue requestor?
• The JMS Queue Requestor activity is used to send a request to a JMS queue name and receive a response back from the JMS client

What is JMS topic requestor?
• The JMS Topic Requestor activity is used to communicate with a JMS application’s request-response service. This service invokes an operation with input and output. The request is sent to a JMS topic and the JMS application returns the response to the request.

How do you add ems server to administrator?
• Using domain utility

How do you remove individual messages from destinations?
• Use Purge Command.

TIBCO Adapters FAQs

What are Adapters?
Adapters are connectors to data sources that catch event changes. Once an adapter catches an event change, it publishes the message to a message bus using either EMS or RVD.
An adapter is a gateway between different applications using messaging channels.

What are the different types of adapters?
Technical Adapters (File Adapter, DB Adapter)
Functional Adapters (PeopleSoft Adapter, SAP R3 Adapter)
Custom Adapters

Adapter Components
Each adapter has two main components, an adapter palette and a run-time adapter. In addition, some adapters include a design-time adapter. The adapter palette and design-time adapter are used during configuration, and the run-time adapter is used at production time.

Adapter Palette:
Each adapter includes a palette that is used for configuration. The palette is automatically loaded into TIBCO Designer during adapter installation and available the next time Designer is started. The palette enables you to configure adapter specific options, such as its connection to the vendor application, logging options, and adapter services. During the design phase, the palette connects to the vendor application and fetches information about connection options and data schemas. You can then graphically select the appropriate items. For example, during configuration of a TIBCO Adapter for ActiveDatabase adapter instance, the palette fetches all pertinent tables in the database. You then choose the tables that the particular service is to send or receive.

Run-time Adapter :
Once the adapter has been configured using TIBCO Designer, it can be deployed. A deployed adapter instance is referred to as a run-time adapter. A run-time adapter operates in a production environment, handling communication between a vendor application and other applications that are configured for the TIBCO environment.

Design-time Adapter :
Some adapters use a design-time adapter (DTA) to access a vendor application and return design-time configuration information. The palette is a client of the DTA process. The DTA connects to the vendor application, fetches data schemas and sends them to the palette.

Adapter Lifecycle:
The following is an overview of the adapter lifecycle:
1. Install the vendor application to which the adapter connects before installing the adapter. For many adapters, the adapter and vendor application need not be installed on the same machine.
2. Adapters depend on other software from TIBCO. Before installing an adapter, the TIBCO Runtime Agent™ software must be installed on each computer on which the adapter runs.
3. Create an adapter instance and save it in a project using TIBCO Designer™. A project contains configuration information required for a run-time adapter to interact with the vendor application and other applications.
4. Deploy the adapter. An adapter instance is deployed using TIBCO Administrator.
a) Using TIBCO Designer, create an Enterprise Archive (EAR) file, which contains information about the adapter instances and processes you wish to deploy.
b) Using TIBCO Administrator, upload the EAR, then deploy the adapter on the machine(s) of your choice. You can set runtime options before deployment.
c) Using TIBCO Administrator, start and stop the adapter.
d) Monitor the adapter using the built-in monitoring tools provided by TIBCO Administrator.

Adapter Services :
Adapters are responsible for making information from different applications available to other applications across an enterprise. To do so, an adapter is configured to provide one or more of the following services:
Publication Service
Subscription Service
Request-Response Service
Request-Response Invocation Service

Publication Service :
An adapter publication service recognizes when business events happen in a vendor application, and asynchronously sends out the event data in realtime to interested systems in the TIBCO environment. For example, an adapter can publish an event each time a new customer account is added to an application. Other applications that receive the event can then update their records just as the original application did. When an application receives a request to create a customer record, the application notifies the adapter about the request and the adapter publishes the event.
User Interface → Application X → Adapter → TIBCO Messaging
(Create record → Send to adapter → Publishing)

Polls on the source data table (base table).
Reads data from the source table.
Sends the data to the message bus.

Subscription Service:
An adapter subscription service asynchronously performs an action such as updating business objects or invoking native APIs on a vendor application. The adapter service listens to external business events, which trigger the appropriate action. Referring to the previous example, an adapter subscription service can listen for customer record creation events (happening in an application and published to the TIBCO infrastructure) and update another application.
TIBCO Messaging → Adapter → Application Y
(Subscribing → Update record)

Reads data from the message bus.
Gives the data to the destination table.

Request-Response Service:
In addition to asynchronously publishing and subscribing to events, an adapter can be used for synchronously retrieving data from or executing transactions within a vendor application. After the action is performed in the vendor application, the adapter service sends a response back to the requester with either the results of the action or a confirmation that the action occurred. This entire process is called request-response, and it is useful for actions such as adding or deleting business objects.

Receives requests from other applications.
Parses the requests.
Returns response (Sends only the requested data to the message bus).

Request-Response Invocation Service:
An adapter request-response invocation service is similar to the request-response service, except that the roles are reversed. The vendor application is now the requester or initiator of the service, instead of the provider of the service. The adapter service acts as a proxy, giving the vendor application the ability to invoke synchronously functionality on an external system.

How can you fine-tune an ADB Adapter? What are the different parameters that can be used?
a) We can use publish by value or publish by reference, for high speed and for data type support (like Oracle LONG) respectively.
b) We can use the poller or the alerter for frequent and infrequent data changes respectively.
c) The Adb.PollingInterval and _ADB.DUPDECT.adapter_instance_name parameters can be used for flow control and to avoid duplication respectively.

What are the quality of services we can have in adapter publishing services?
RV: reliable, certified, transactional

What are the wire formats we can have in adapter publishing services?
Wire formats:
a) RV: Active Enterprise message, RV message, XML message.
b) JMS: XML message

What are the objects which will be created if you configure and save an ADB adapter?
A publishing table for the source table, and a trigger that acts as a bridge between the source and the publishing table.

Explain the internal functioning of the ADB publication service.
When we configure the ADB publishing service, it creates a publishing table for the source table and a trigger that acts as a bridge between the source and publishing tables. Whenever data is inserted/updated/deleted in the source table, it is inserted into the publishing table by means of the trigger. ADB has another component called the polling agent. The polling agent keeps looking for new inserts into the publishing table and, if it finds any, converts the record in the publishing table into the specified wire format and publishes it with the specified quality of service.

Can we filter records from being published when they get updated in the source table? (Data from all regions is coming into the table but I want to publish only New York data.)
Yes – by modifying the trigger we can insert only the New York data into the publishing table.

Can we limit the number of columns to be published from the source table?
Yes, using the use? field in the adapter publishing table tab. Just uncheck the columns you don’t want to use.

Can we publish parent and child table information by using a single adapter configuration, and how?
Yes. In the adapter publisher table tab, create the parent table first by lookup and then add the child table using the Add Child tab. Then click on the child table column to specify the foreign key. To establish a relationship between the primary key of the parent and the foreign key of the child, go to the column in the child table and specify the primary key of the parent table.
In the subscription service, the destination table is created and the child table mapping tab will have the child table on the left mapped with the parent table on the right.

What are publish by value and publish by reference? Explain the pros and cons.

Publish by value: in this mode the changes in the source table are reflected in the publishing table and the data is taken from there. It is used when high speed is required. It does not support data types like Oracle LONG.

Publish by reference: in this mode the data is taken directly from the source table and only the primary key comes from the publishing table. It allows data types like Oracle LONG.
Changes in the source table can be lost because of the waiting time (this can be avoided using the alerter).

What are the types of message transfers in file adapters?
Record transfer: to integrate file systems into the TIBCO AE environment.
Simple file transfer: to transfer files between TIBCO File Adapter instances.

What are the read schema and write schema in the file adapter?
The read schema in the file adapter publisher configuration defines the type of expected input; here we can use the delimited file type or the positional file type. The write schema plays the corresponding role for the subscription service, defining the format in which records are written to the output file.

What is the difference between an ADBAdapter and JDBC palette activities?
• Using ADB we can only pick up the data from one database and put it in another one.
• But using JDBC we can query using JDBC Query and manipulate data using JDBC Update, i.e. we can use select, insert and update statements to selectively query and update.
• ADB adapters might be useful in scenarios where we have large amount of data.

What is the difference between a FileAdapter and File palette activities?
With the File activities, the File Poller cannot handle multi-format data or record-by-record transfer; it handles the particular format specified and does file transfer.
The File Adapter, on the other hand, can handle multiple formats and does record-by-record transfer.

If the reference to a schema changes in “Activity Input”, does it throw an error? How do you correct it?
Yes, and we have to correct the schema in the way the input expects it.

Where do we specify the HTTP port number?
In the HTTP Connection used by the HTTP activities.

What is the difference between JDBC activities and ADB Adapter?
• ADB uses ODBC to connect, JDBC uses JDBC
• ADB is more suitable for instances where you have a lot of processing
• ADB is more suitable for instances where you want that a particular action on a DB Table triggers a BW process.
• ADB adapter is best for publishing from database.
• For simple inserts and updates then ADB subscriber is best.
• ADB is an adapter which is used to capture events and take action; it has pub and sub mechanisms: pub is used to capture the events and publish the messages, and sub is used for upsert operations.
• Jdbc is a collection of activities that can be used for custom operations
• If you need to insert or update a database and you have complex JDBC inserts, transaction management or other dynamic queries, then the JDBC activities are best.
• JDBC is more suitable for running dynamic code where in runtime you can execute statements with different values depending on process execution.

What are the modes of operation for the File Adapter in Record mode?
In Synchronous mode, upon receiving an event, the publication service allows other services in the instance to receive events only after it completes the processing and publishing of all the files that match the specified criteria.
In Asynchronous mode, the publication service allows other services of the instance to receive events while it is processing and publishing a file. By default the Subscription service always operates in Asynchronous mode.

What is the difference between a TIBCO adapter and a BW component?
Adapters are connectors that use a messaging channel, can be configured over source/target systems, and can be used in Publish, Subscribe or Request-Response mode. The BW components are Designer, Administrator and the BW engine.

What is a synchronous service that an adapter supports?
Of the 4 Adapter services, Request/Response is the only adapter service that is synchronous.

What is Event Driven and Demand Driven?
Event Driven – Push
Demand Driven – Poll.

TIBCO Adapter for ActiveDatabase:
TIBCO Adapter for ActiveDatabase software (the adapter) allows data changes in a database to be sent as they occur to other databases and applications. It extends publish-subscribe and request-response technology to databases, making multiple levels of delivery services available to applications that need access to these databases. ODBC and JDBC compliant databases such as Oracle, Sybase, and Microsoft SQL Server are supported. While the adapter does not run on z/OS and iSeries systems, it can remotely connect to a DB2 database running on these systems. TIBCO Adapter for ActiveDatabase is written using the TIBCO Adapter SDK software, which allows the adapter to interoperate with other TIBCO products. The adapter can communicate with any application that is configured for the TIBCO environment.

What is File Adapter?
TIBCO Adapter for Files software processes data from text files and publishes the contents in real-time to the TIBCO environment. The adapter also listens for messages in the TIBCO environment and writes the contents to a file.

The adapter supports only text files when it is integrating a file system into the TIBCO ActiveEnterprise environment. It supports both text and binary files when it is transferring files between two or more TIBCO Adapter for Files installations.

File Adapter Operations Mode?
Selecting an operation mode is the first step in configuring a service. The operation mode determines whether the service will integrate the file system with the TIBCO ActiveEnterprise environment or transfer files between instances of TIBCO Adapter for Files.
In the Record Mode of operation, where the adapter integrates the file system with TIBCO ActiveEnterprise, you will have to define and use schemas.
In the Simple File Transfer Mode of operation, where the adapter transfers files among instances of TIBCO Adapter for Files, you will have to define various options for file transfer. However, there is no need to define a schema.

Can two adapters communicate with each other?
No two adapters can communicate with each other directly. They can communicate only through a messaging layer.
Considering Tibco to be the messaging layer,
Publishing Adapter always publishes to Tibco messaging bus.
Subscribing Adapter always subscribes from a Tibco messaging bus.

What are the users and user-key columns in the Adapter Publisher’s Table tab?
While configuring an ADB Publisher:
The “Users” column specifies which columns have to be published to the publishing table.
“User-key”, when selected, means the column acts as a primary key. Child tables can be joined to parent tables only using primary keys. The publish by reference storage mode copies only the primary key from the source table. If a source table does not have a primary key column, we can use the user-key for the same purpose.

What are the columns available in an Adapter Publishing table?
An adapter publishing table contains the actual data columns plus internal adapter columns.
Actual data columns:
Depending on the storage mode selected, the actual data columns in the publishing table vary:
For publish by value, the actual data columns will be an exact copy of the base table’s data columns (all the columns).
For publish by reference, the actual data columns will be an exact copy of the base table’s primary key columns (only the primary key columns).

Internal Adapter columns:
ADB_SUBJECT
ADB_SEQUENCE
ADB_SET_SEQUENCE
ADB_TIMESTAMP
ADB_OPCODE
ADB_UPDATE_ALL
ADB_REF_OBJECT
ADB_L_DELIVERY_STATUS
ADB_L_CMSEQUENCE

Publication and Subscription formats ( file adapter)
Two types of formats are supported by adapters while exchanging data between applications
1. MInstances
2. MBusinessDocuments
MInstances are the entities exchanged among TIBCO applications; they are schema instantiations. The runtime adapter parses the input file, identifies the schema associated with the publication service, creates the MInstances and publishes them.

MBusinessDocuments are a facility provided by TIBCO ActiveEnterprise for grouping MInstances. An MBusinessDocument always contains MInstances created from the same file. If high throughput is desired from the publication service, MBusinessDocument attributes can be used.

Modes of operation(file adapter)
There are two modes of operation
Synchronous mode
Asynchronous mode

An adapter instance/configuration can have multiple publication and subscription services. Services are activated by events. The publication service can be activated by a timer event or a message event. The event that activates the publication service is called the polling agent.

In Synchronous mode, upon receiving an event, the publication service allows other services in the instance to receive events only after it completes the processing and publishing of all the files that match the specified criteria.

In Asynchronous mode the publication service allows other services of the instance to receive events while it is processing and publishing a file. By default Subscription service always operates in Asynchronous mode.

If the configuration has more than one service or if the publication service is expected to process large file sizes or large set of files, setting the publication service in asynchronous mode is recommended.

Types of file records (file adapter)
File records can be classified into two categories:
1. delimited file record
2. positional file record
Delimited file records are used to interpret lines that have a well-defined delimiter between the fields. Delimiters can be single or multiple characters. These can be identified by the number of fields or by using a constant field value.

Positional file records are used to interpret lines that have well-defined field lengths. These can be identified using the line or record length or by using a constant field value, i.e. a constant line length.

What is the opaque exception table?
The subscription service uses two logical layers when processing a message. The first layer decodes data from the message and the second layer provides the database transaction. If an exception occurs in the first layer, the adapter logs the message to the opaque exception table. In the second layer, if any DML command fails at any level, the adapter rolls back this transaction and starts another transaction, inserting into exception tables. If the insert into exception table transaction fails, the adapter then logs the message to the opaque exception table.
What is the difference between the exception table and the opaque exception table?
The subscription service uses two logical layers when processing a message.
The first layer decodes data from the message and the second layer provides the database transaction. If an exception occurs in the first layer, the adapter logs the message to the opaque exception table.

In the second layer, if any DML command fails at any level, the adapter rolls back the transaction and starts another transaction that inserts into the exception tables. If that insert into the exception tables also fails, the adapter logs the message to the opaque exception table.

What are the transport types supported by ADB adapters?
The transport types supported by ADB adapters are:
1) Rendezvous
2) JMS

Rendezvous
Quality of service supported by Rendezvous:
1) Reliable
2) Certified
3) Transactional

Wire Formats Supported by Rendezvous:
1) Active Enterprise Message
2) Rendezvous Message
3) XML Message

JMS
Wire Formats Supported by JMS:
1) XML Message

Connection Factory Type Supported by JMS:
1)Topic
2)Queue

Delivery Mode Supported by JMS:
1) Persistent
2) Non-Persistent

— Hari Iyer

Error :- sudo: effective uid is not 0, is sudo installed setuid root?

Every Linux administrator has probably come across this error at some point.

[user@host dir]$ sudo bash
sudo: effective uid is not 0, is sudo installed setuid root?

This happens when sudo does not get the right access permissions.

The solution for this error is to set the following permissions as the root user:

chmod u+s /usr/bin/sudo

That should sort the issue on CentOS-like distros.
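A quick way to verify the fix is to check the mode of the sudo binary; the setuid bit must show up as an 's' in the owner execute position. A minimal sketch, run as root:

# Inspect the current mode of the sudo binary.
ls -l /usr/bin/sudo          # should show -rwsr-xr-x (note the 's')

# Restore the expected ownership and setuid mode if they are wrong.
chown root:root /usr/bin/sudo
chmod 4755 /usr/bin/sudo     # the leading 4 sets the setuid bit

# Verify again.
stat -c '%A %U:%G %n' /usr/bin/sudo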

 

/proc/sys for you to manipulate a running kernel

The /proc/sys directory in the /proc virtual filesystem contains a lot of useful and interesting files and directories. Many kernel settings can be manipulated by writing to files in the proc filesystem, and a lot of important information can be retrieved from these files. This is especially useful when you are troubleshooting or fine-tuning your Linux system.
Following is a description of the most important files.
The files in /proc/sys/vm are especially interesting and useful.
You can also use the sysctl command to make these changes persistent, or to see all the kernel options you can change at run time.
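For example, reading and writing these tunables with sysctl looks like this (a quick sketch; vm.swappiness is just used as an illustration):

# Read a single kernel parameter (equivalent to cat /proc/sys/vm/swappiness).
sysctl vm.swappiness

# Change it for the running kernel only (equivalent to writing to /proc/sys/vm/swappiness).
sysctl -w vm.swappiness=10

# List every tunable the kernel exposes under /proc/sys.
sysctl -a | less

# Make a change persistent across reboots.
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
sysctl -p        # reload /etc/sysctl.conf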

/proc/sys/dev

Contains device-specific information. For instance, /proc/sys/dev/cdrom/info shows you your cdrom drive's capabilities. The other files in /proc/sys/dev/cdrom are writable and allow you to actually set options for your cdrom drive.
For instance, echo 1 > /proc/sys/dev/cdrom/autoeject makes your tray open automagically when you unmount your cdrom.
/proc/sys/dev/parport holds information about parallel ports. Browse these directories to learn more about their contents.

/proc/sys/fs

Virtual filesystem information/tuning

/proc/sys/fs/binfmt_misc

binfmt_misc allows you to configure the system to execute miscellaneous binary formats. For instance, it enables you to make the system execute .exe files using wine and Java files using the Java interpreter, just by typing the file name.
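As a sketch of how a format gets registered (this mirrors the wine example from the kernel documentation; the /usr/bin/wine path is an assumption and may differ on your system):

# Mount the binfmt_misc filesystem if it is not already mounted.
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc

# Register a handler: run files starting with the 'MZ' magic bytes through wine.
# Entry format:  :name:type:offset:magic:mask:interpreter:flags
echo ':DOSWin:M::MZ::/usr/bin/wine:' > /proc/sys/fs/binfmt_misc/register

# Inspect the new entry, and remove it again if needed.
cat /proc/sys/fs/binfmt_misc/DOSWin
echo -1 > /proc/sys/fs/binfmt_misc/DOSWin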

/proc/sys/fs/dentry-state

Linux caches directory access to speed up subsequent access to the same directory; this file contains information about the status of the directory cache.

/proc/sys/fs/dir-notify-enable

Enables/disables the dnotify interface. dnotify is a signal-based mechanism used to notify a process about file/directory changes. This is mainly interesting to programmers.

/proc/sys/fs/dquot-nr

number of allocated disk quota entries and the number of free disk quota entries

/proc/sys/fs/dquot-max

maximum number of cached disk quota entries.

/proc/sys/fs/file-max

system-wide limit on the number of open files for all processes.

/proc/sys/fs/file-nr

number of files the system has presently opened.

/proc/sys/fs/inode-max

maximum number of in-memory inodes

/proc/sys/fs/inode-nr

number of inodes and number of free inodes

/proc/sys/fs/inode-state

This file contains seven numbers: number of inodes, number of free inodes, preshrink, and four dummy values. nr_inodes is the number of inodes the system has allocated. Preshrink is non-zero when the nr_inodes is bigger than inode-max.

/proc/sys/fs/inotify
(since kernel 2.6.13)

This directory contains files that can be used to limit the amount of kernel memory consumed by the inotify interface.

/proc/sys/fs/lease-break-time

This file specifies the grace period that the kernel grants to a process holding a file lease after it has sent a signal to that process notifying it that another process is waiting to open the file.

/proc/sys/fs/leases-enable

This file can be used to enable or disable file leases on a system-wide basis.

/proc/sys/fs/mqueue
(since kernel 2.6.6)

This directory contains files controlling the resources used by POSIX message queues.

/proc/sys/fs/overflowgid

Allows you to change the value of the fixed GID. If a filesystem that only supports 16-bit GIDs is mounted, the 32-bit Linux GIDs that do not fit sometimes need to be converted to a lower value; that value is fixed here.

/proc/sys/fs/overflowuid

Allows you to change the value of the fixed UID. If a filesystem that only supports 16-bit UIDs is mounted, the 32-bit Linux UIDs that do not fit sometimes need to be converted to a lower value; that value is fixed here.

/proc/sys/fs/suid_dumpable
(since kernel 2.6.13)

Determines whether core dump files are produced for set-user-ID or otherwise protected/tainted binaries.
Possible values are 0,1,2:
0 (default) A core dump will not be produced for a process which has changed credentials or whose binary does not have read permission enabled.
1 (debug) All processes dump core when possible.
2 (suidsafe) Any binary which normally would not be dumped is dumped readable by root only. This allows the user to remove the core dump file but not to read it. For security reasons core dumps in this mode will not overwrite one another or other files. This mode is appropriate when administrators are attempting to debug problems in a normal environment.

/proc/sys/fs/super-max

Controls the maximum number of superblocks, and thus the maximum number of mounted file systems the kernel can have.

/proc/sys/fs/super-nr

The number of file systems currently mounted.

/proc/sys/kernel/acct

highwater, lowwater, and frequency. Used with BSD-style process accounting.

/proc/sys/kernel/cap-bound
(from Linux 2.2 to 2.6.24)

Holds the value of the kernel capability bounding set.

/proc/sys/kernel/core_pattern

Can be used to define a template for naming core dump files

/proc/sys/kernel/core_uses_pid

See core(5).

/proc/sys/kernel/ctrl-alt-del

Controls the handling of Ctrl-Alt-Del from the keyboard.
If it is 0, Linux will do a graceful restart. When the value is > 0, Linux will do an immediate reboot, without even syncing its dirty buffers.

/proc/sys/kernel/hotplug

Contains the path for the hotplug policy agent.

/proc/sys/kernel/domainname

can be used to set the NIS/YP domainname

/proc/sys/kernel/hostname

can be used to set the hostname

/proc/sys/kernel/modprobe

Contains the path for the kernel module loader.

/proc/sys/kernel/msgmax

This file defines a system-wide limit specifying the maximum number of bytes in a single message written on a System V message queue.

/proc/sys/kernel/msgmnb

Defines a system-wide parameter used to initialize the msg_qbytes setting for subsequently created message queues.

/proc/sys/kernel/ostype and /proc/sys/kernel/osrelease

These files give substrings of /proc/version.

/proc/sys/kernel/overflowgid and /proc/sys/kernel/overflowuid

These files duplicate the files /proc/sys/fs/overflowgid and /proc/sys/fs/overflowuid.

/proc/sys/kernel/panic

Gives read/write access to the kernel variable panic_timeout. If this is zero, the kernel will loop on a panic; if non-zero it indicates that the kernel should autoreboot after this number of seconds.

/proc/sys/kernel/panic_on_oops
(since kernel 2.5.68)

This file controls the kernel’s behavior when an oops or BUG is encountered. If this file contains 0, then the system tries to continue operation. If it contains 1, then the system delays a few seconds and then panics. If the /proc/sys/kernel/panic file is also non-zero then the machine will be rebooted.

/proc/sys/kernel/pid_max
(since kernel 2.5.34)

This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID).

/proc/sys/kernel/printk

The four values in this file are console_loglevel, default_message_loglevel, minimum_console_loglevel, and default_console_loglevel. This allows configuration of which messages will be logged to the console. (Ever worked on a console printing messages all the time to your screen? Here's how to fix that.) Messages with a higher priority than console_loglevel will be printed to the console.
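For example, to quiet a chatty console (a sketch; the exact numbers are a matter of taste):

# Show the current printk settings.
cat /proc/sys/kernel/printk

# Only print messages more severe than loglevel 3 (errors) to the console.
echo '3 4 1 3' > /proc/sys/kernel/printk

# Equivalent: set only the console log level via dmesg.
dmesg -n 3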

/proc/sys/kernel/pty
(since kernel 2.6.4)

This directory contains two files relating to the number of Unix 98 pseudo-terminals on the system.

/proc/sys/kernel/pty/max

Defines the maximum number of pseudo-terminals.

/proc/sys/kernel/pty/nr

This read-only file indicates how many pseudo-terminals are currently in use.

/proc/sys/kernel/random

This directory contains various parameters controlling the operation of the file /dev/random.

/proc/sys/kernel/real-root-dev

Used by the deprecated change_root initrd system

/proc/sys/kernel/rtsig-max

( until kernel 2.6.7)
Can be used to tune the maximum number of POSIX real-time (queued) signals that can be outstanding in the system.

/proc/sys/kernel/rtsig-nr

(until kernel 2.6.7)
This file shows the number of POSIX real-time signals currently queued.

/proc/sys/kernel/sem
(since kernel 2.4)

Contains 4 numbers defining limits for System V IPC semaphores.

/proc/sys/kernel/sg-big-buff

Shows the size of the generic SCSI device (sg) buffer.

/proc/sys/kernel/shmall

Contains the system-wide limit on the total number of pages of System V shared memory.

/proc/sys/kernel/shmmax

This file can be used to query and set the run-time limit on the maximum (System V IPC) shared memory segment size that can be created.

/proc/sys/kernel/shmmni
(from kernel 2.4)

Specifies the system-wide maximum number of System V shared memory segments that can be created.

/proc/sys/kernel/version

Kernel version number and build date

/proc/sys/net

Networking information.

/proc/sys/net/core/somaxconn

Defines a ceiling value for the backlog argument of listen

/proc/sys/net/core/rmem_max

Maximum socket receive buffer size (which bounds the TCP receive window).

/proc/sys/net/core/wmem_max

Maximum socket send buffer size (which bounds the TCP send window).

/proc/sys/net/ipv4/ip_forward

Enables or disables forwarding (routing) of IPv4 packets between interfaces.
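For instance, to turn a box into a router until the next reboot (a quick sketch):

# Check whether forwarding is currently enabled (0 = off, 1 = on).
cat /proc/sys/net/ipv4/ip_forward

# Enable IPv4 forwarding for the running kernel.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Equivalent sysctl form; add net.ipv4.ip_forward = 1 to /etc/sysctl.conf to make it persistent.
sysctl -w net.ipv4.ip_forward=1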

/proc/sys/sunrpc

This directory supports Sun remote procedure call for network file system (NFS).

/proc/sys/vm

This directory contains files for memory management tuning, and buffer and cache management. It is one of the more interesting directories in /proc/sys, as it allows you to manipulate memory handling in real time.

/proc/sys/vm/swappiness
(since kernel 2.6.16)

vm.swappiness takes a value between 0 and 100 to change the balance between swapping applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; in other cases, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.
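As an illustration (the value 10 is only an example; pick whatever suits your workload):

# See the current swappiness.
cat /proc/sys/vm/swappiness

# Prefer keeping application pages in RAM over cache (lower value = less eager swapping).
echo 10 > /proc/sys/vm/swappiness

# Equivalent sysctl form.
sysctl -w vm.swappiness=10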

/proc/sys/vm/drop_caches
(since kernel 2.6.16)

Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache, write 1 to this file.
To free dentries and inodes, write 2 to this file.
To free pagecache, dentries and inodes, write 3 to this file.
Just try echo 1 > /proc/sys/vm/drop_caches, and watch your memory usage drop by all kernel cache memory.

/proc/sys/vm/legacy_va_layout
(since kernel 2.6.9)

If non-zero, this disables the new 32-bit memory-mapping layout; the kernel will use the legacy (2.4) layout for all processes.

/proc/sys/vm/oom_dump_tasks
(since kernel 2.6.25)

Enables a system-wide task dump (excluding kernel threads) to be produced when the kernel performs an OOM-killing. The dump includes the following information for each task (thread, process): thread ID, real user ID, thread group ID (process ID), virtual memory size, resident set size, the CPU that the task is scheduled on, oom_adj score (see the description of /proc[number]/oom_adj), and command name. This is helpful to determine why the OOM-killer was invoked and to identify the rogue task that caused it.
If this contains the value zero, this information is suppressed.
It defaults to 0, so if you have a problem requiring it, enable it :
echo 1 > /proc/sys/vm/oom_dump_tasks

/proc/sys/vm/oom_kill_allocating_task
(since kernel 2.6.24)

This enables or disables killing the OOM-triggering task in out-of-memory situations. If this is set to zero, the OOM-killer will scan through the entire tasklist and select a task based on heuristics to kill. This normally selects a rogue memory-hogging task that frees up a large amount of memory when killed.
If this is set to non-zero, the OOM-killer simply kills the task that triggered the out-of-memory condition. This avoids a possibly expensive tasklist scan.
If /proc/sys/vm/panic_on_oom is non-zero, it takes precedence over whatever value is used in /proc/sys/vm/oom_kill_allocating_task.
The default value is 0.

/proc/sys/vm/overcommit_memory

This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (default)
1: always overcommit, never check
2: always check, never overcommit
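To inspect or change the accounting mode at run time (the values are the ones listed above):

cat /proc/sys/vm/overcommit_memory        # current mode
echo 2 > /proc/sys/vm/overcommit_memory   # strict accounting: never overcommit
sysctl -w vm.overcommit_memory=0          # back to the heuristic default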

/proc/sys/vm/overcommit_ratio

The percentage of physical RAM that, together with swap, is counted towards the commit limit when overcommit_memory is set to 2.

/proc/sys/vm/panic_on_oom
(since kernel 2.6.18)

This enables or disables a kernel panic in an out-of-memory situation.
0 (default): no panic
1: panic, except when a process limits allocations to certain nodes using memory policies (mbind) or cpusets and those nodes reach memory exhaustion
2: always panic

SAN Switch basic concepts – Fabric Switch

SAN Switch basic concepts

SAN Switch basic concepts – SAN environment provides block-oriented I/O between the computer systems and the target disk systems. The SAN may use Fiber Channel or Ethernet (iSCSI) to provide connectivity between hosts and storage. In either case, the storage is physically decoupled from the hosts. The storage devices and the hosts now become peers attached to a common SAN fabric that provides high bandwidth, longer reach distance, the ability to share resources, enhanced availability, and other benefits of consolidated storage.

A SAN is created by using Fibre Channel to link peripheral devices such as disk storage and tape libraries.
A SAN (Storage Area Network) switch is a device that connects servers to shared pools of storage devices and is dedicated to moving storage traffic. It is shown below.


Picture: SAN Switch

Basic Connectivity Diagram between Servers, SAN Storage, SAN Switch and Tape Library.


Picture: Basic Connectivity Diagram

SAN Switch will contain below physical parts

  1. One or two hot-swappable power supply units
  2. SFP (Small Form-factor Pluggable) ports
  3. Out-of-band management port (RJ45)
  4. Console port
  5. USB ports
  6. FC ports (the count depends on the model)


Switch Back View


Front View

Hot-swappable power supply units: Hot swapping and hot plugging are terms used to describe replacing computer system components without shutting down the system. Using two power supply units helps with redundancy: one unit connects to PDU 1 in the rack and the other unit connects to PDU 2.

SFP (Small Form-factor Pluggable): A small transceiver that plugs into the SFP port of a network switch and connects to Fibre Channel and Gigabit Ethernet (GbE) optical fibre cables at the other end. The SFP is a hot-swappable input/output device that plugs into a switch port, offering multiple connectivity options and superseding the GBIC transceiver; SFP modules are also called "mini-GBIC" due to their smaller size.


Fibre cables are used to connect storage to servers, as well as storage to the tape library. A fibre cable is shown below.


FC Cable

Fiber cable: A fiber optic cable consists of a bundle of glass threads, each of which is capable of transmitting messages modulated onto light waves. Fiber optics has several advantages over traditional metal communications lines: Fiber optic cables have a much greater bandwidth than metal cables.

Ethernet port: Out-of-band management involves the use of a dedicated channel for managing network devices. This allows the network operator to establish trust boundaries when accessing the management function and to apply them to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components. A complete remote management system allows remote reboot, shutdown and power-on, as well as hardware sensor monitoring (fan speed, power voltages, chassis intrusion, etc.). The out-of-band management port is also called the Ethernet management port (RJ45).


Console: Switch console ports are meant to allow root access to the switch via a dumb terminal interface, regardless of the state of the switch (unless it is completely dead). By connecting to the console port you can get remote access to the root level of a switch without using the network that the switch is connected to. This creates a secondary path to the switch outside the bandwidth of the network which needs to be secured without relying on the primary network.

Console Port

This allows a technician sitting in a Network Operations Center thousands of miles away the ability to restore a switch or perform an initialization configuration securely over a standard telephone line even if the primary network is in failure. Without a connection to the console port, a technician would have to visit the site to perform repairs or initialization.

Login to SAN Switch.


PuTTY: A free telnet and SSH terminal program for Windows and Unix platforms that enables users to remotely access computers over the Internet.

By typing the switch IP address into the PuTTY configuration, we can log in to the SAN switch CLI.

We can also log in to the SAN switch GUI console using a web browser: open the browser and type the IP address or SAN switch name in the address bar.
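From a Linux or Unix workstation an ordinary SSH client does the same job as PuTTY; a sketch (the admin user name and the IP address are assumptions and depend on the switch vendor and model):

# Open an SSH session to the switch management IP.
ssh admin@192.168.1.100

# Example Brocade FOS commands once logged in (vendor-specific, shown only as an assumption):
#   switchshow   - port and fabric status
#   version      - firmware version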


Understanding Port Management. Port Types and Definitions

E_Port: This is an expansion port. A port is designated an E_Port when it is used as an inter switch expansion port (ISL) to connect to the E_Port of another switch, to enlarge the switch fabric.
F_Port: This is a fabric port that is not loop capable. It is used to connect an N_Port point-to-point to a switch.
FL_Port: This is a fabric port that is loop capable. It is used to connect NL_Ports to the switch in a public loop configuration (in switched fabric env.).
G_Port: This is a generic port that can operate as either an E_Port or an F_Port. A port is defined as a G_Port after it is connected but has not received a response to loop initialization or has not yet completed the link initialization procedure with the adjacent Fibre Channel device.
L_Port: This is a loop capable node or switch port.
U_Port: This is a universal port, a more generic switch port than a G_Port. It can operate as an E_Port, F_Port, or FL_Port. A port is defined as a U_Port when it is not connected or has not yet assumed a specific function in the fabric.
VE_Port – A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other
EX_Port – An E_Port from a router to an edge fabric; the router terminates EX_Ports preventing fabric merges
VEX_Port – A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other, when an FCIP connection is involved

Target Device (Device ports)

N_Port: This is a node port that is not loop capable. It is used to connect an equipment port to the fabric.
NL_Port: This is a node port that is loop capable. It is used to connect an equipment port to the fabric in a loop configuration through an L_Port or FL_Port.
T_Port: This was used previously by CNT (INRANGE) as a mechanism for connecting directors together. It has largely been replaced by the E_Port.
No_Light: Indicates that the port is free.

 

Source :- https://arkit.co.in/san-switch-basic

Linux Concepts :- Quotas

Rules : 1. Quotas can only be created for partitions.

E.g., if a 1 MB quota is set for the partition [/home],

then every subdirectory under /home can use a max of 1 MB.

Quota limits are kept per user and per group. For each there is a block (disk space) quota and an inode (number of files) quota, and every quota has a SOFT limit, a HARD limit and a GRACE period:

          user                        |          group
  block [diskspace]  inode [files]    |  block [diskspace]  inode [files]
  SOFT HARD GRACE    SOFT HARD GRACE  |  SOFT HARD GRACE    SOFT HARD GRACE

                      <---- Limits ---->

==========================================

Part I. Configuring / Setting Up Quotas

==========================================

1. Configure /etc/fstab – usrquota or grpquota, whichever you want.

In the 4th column (mount options) of /etc/fstab add 'usrquota'.

2. Refresh /etc/fstab – actually /etc/mtab :

a. If you do not want hassles, just REBOOT and skip to Part II.

or

b. mount -a – which will do nothing if the filesystems are already mounted, which they always are

or

c. umount /home and mount /home – but this will not work if any users are online, which they always are

or

d. mount -o remount /home <<<=========================

[mount -o remount,rw /]

This refreshes the mount and works even if users are online.

or

e. reboot – which is the coward's way

and then check /etc/mtab or the output of mount.

3. To create the aquota.user file :

–> quotacheck -vc /home [force] – Note : 'u' is the default if not given, and 'v', of course, is always optional but friendly.

This will create /home/aquota.user, which records all quota activity for the /home partition.

4. To turn on quotas :

–> quotaon -v /home – turns on quotas, i.e. loads the /home/aquota.user file into RAM.

5. Check everybody’s current usage and quotas :

# repquota -a

Configuring quotas is now over !

Now we will implement quotas.

In short :

1. Configure /etc/fstab

2. mount -o remount /home Cross check with # cat /etc/mtab

3. quotacheck -vc /home Creates aquota.user

4. quotaon -v /home Loads aquota.user in RAM

5. repquota -a Enjoy your work !

eg

1. repquota -a

2. repquota -u /

3. quota -u sachin

The first command shows quota info for all users on all quota-enabled file systems.

The second command shows user quota info for the / file system.

The third command shows quota information for user sachin on all file systems (per-user reports use the quota command; repquota reports per filesystem).

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

=============================

Part II. Implementing Quotas

=============================

6. edquota -u user

7. edquota -p foo bar <——— use foo as quota prototype for bar

8. edquota -t <———— To change the grace period

or use :

# edquota -p foo `awk -F: '$3 > 499 {print $1}' /etc/passwd`

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

========================================

Part III. Repairing the aquota.user file

========================================

If your system hangs and then restarts, the aquota.user file can get corrupted, and all quotas for all users are then in an unknown state.

To repair, boot into single-user mode as soon as possible, and do this _FIRST_ :

quotaoff -v /home

quotacheck -avug    (the minimum required is : quotacheck /home, since 'u' is the default, we do not have 'g', 'v' is optional, and 'a' means check everything listed in /etc/mtab)

This re-creates the file /home/aquota.user.

quotaon -v /home

Misc : /etc/warnquota.conf

warnquota*

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

GNU/Linux LDUP : 27-Jul-2k3

PART II – SysAdministration

08. QUOTAS

1. What are the two aspects of disk storage that quotas allow you to specify?

A: Disk space [Block] and Files [inode] quotas

2. Which init script checks for the presence or absence of quotas ?

A: /etc/rc.d/rc.sysinit

3. I wish to implement quotas on my /home dir ? Should /home be a partition?

A: Yes.

4. Which file is configured when setting up quotas ?

A: /etc/fstab

5. What is the minimum I have to do to implement both user and group quotas for my /home partition?

A: Configure /etc/fstab with :

LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2

and then just reboot the machine !!

6. But I wish to implement only user quotas. Is this /etc/fstab OK ?

LABEL=/home /home ext3 defaults, usrquota 1 2

A: No. No space in 4th field after defaults

7. Is this /etc/fstab OK ?

LABEL=/home /home ext3 defaults,userquota 1 2

A: No. It's usrquota.

8. Where is this quota info for user and group quotas stored for /home?

A: In the /home partition, in 2 data files : aquota.user, aquota.group

9. Which file keeps track of all the mounted filesystems ?

A: /etc/mtab

10 How would you implement user quotas without rebooting your machine ?

A: Configure /etc/fstab with :

1. LABEL=/home /home ext3 defaults,usrquota 1 2

2. mount -o remount /home

3. quotacheck -vc /home

4. quotaon -v /home

11 Why would you want to do a quotaoff before you do a quotacheck manually from the CLI ?

A: Because running quotacheck while quotas are still turned on can corrupt the aquota.user file.

12 Can a user – foo – modify/create his quota ?

A: Obviously not! Only root can do that! Or else every user would abuse the quota system for his own benefit!

13 Can foo at least see his quota status ?

A: Yes.

14 How ?

A: Login as foo and run ‘quota’.

15. What are these quotas that he will see ?

A: His soft limit, hard limit and grace period

16. How then will root set [create/modify] quotas for ‘foo’ ?

A: edquota -u foo

17. Examine the following output generated by the quota command run by foo:

Disk quotas for user foo (uid 500):

Filesystem   blocks   soft   hard   inodes   soft   hard
/dev/hda         20    100      0       14      0      0

18. How much disk space has foo already used ?

A: 20 blocks i.e. 20 KB

19. His soft limit appears to be 100 KB. Can he use 130 KB ?

A: Yes, but he will get warning messages.

20. Can he use unlimited disk space and fill the partition ?

A: Yes, but only up until the grace period expires. After that the soft limit is enforced as the hard limit. Additionally, he will get warning messages.

21. Then what is the use of this soft limit ? How can I remedy it ?

A: By giving a hard limit too. foo can never cross the hard limit.

22. Now I do this :

Disk quotas for user foo (uid 500):

Filesystem   blocks   soft   hard   inodes   soft   hard
/dev/hda         20    100    200       14      0      0

foo creates files worth 160 KB. What will happen then ?

A: He will be allowed to create up to 200 KB max. Also, after 7 days (the grace period) he will be shut down regardless of whether he has reached his hard limit or not. He will have to clean up to under the soft limit to continue working.

23. What command is used to change a user’s grace period?

A: edquota -t

24. What command is used to see the entire quota details of all users?

A: repquota -a

25. What command sets a quota template?

A: edquota -p

26. What does ‘p’ mean and how would you use it?

A: ‘prototype’.

Suppose ‘foo’ has his quota set. Then you could clone his details,

# edquota -p foo bar

bar now has the same quota limits as foo.

27. If you had 2000 users, the above would clearly be inconvenient. Solve!

A: edquota -p foo `awk -F: '$3 > 499 { print $1 }' /etc/passwd`

28. An over-limit quota generates a mail message to the user on login.

Which file would you modify to customize the mail delivered ?

A: /etc/warnquota.conf

29. If quotas were run as a daily cron job, where would you find the script file concerned?

A: /etc/cron.daily/

30 A user owns 150 inodes; the soft limit is 100 and the hard is 200.

Which of the following is correct if the grace period has not expired?

a. The user can create no more files

b. The user cannot append data to an existing file

c. The user cannot log off without deleting some files

d. The user will receive an email notice of violation

A: d.

31 What is the purpose of ‘convertquota’ ?

A: convertquota converts the old quota files quota.user and quota.group to the files aquota.user and aquota.group in the new format currently used by 2.4.0-ac and newer kernels, or by Red Hat Linux 2.4 kernels, on a filesystem.

The new file format allows using quotas for 32-bit UIDs/GIDs, setting quotas for root, and accounting used space in bytes (and so allows the use of quotas on ReiserFS), and it is also architecture independent. This format introduces a Radix tree (a simple form of tree structure) to the quota file.

SSL – Create Root, Intermediate and Certificate in Chain

Create a Chain Certificate (Root, Intermediate & Normal Chain) – Step-by-step
——————————————————————————————
ROOT CERTIFICATE
——————————————————————————————
mkdir /root/ca
cd /root/ca
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
vim openssl.cnf

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = /root/ca
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/root_haritibco.key.pem
certificate       = $dir/certs/root_haritibco.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/ca.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 375
preserve          = no
policy            = policy_strict

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ policy_loose ]
# Allow the intermediate CA to sign a more diverse range of certificates.
# See the POLICY FORMAT section of the `ca` man page.
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = IN
stateOrProvinceName_default     = Maharashtra
localityName_default            = Mumbai
0.organizationName_default      = Hari TIBCO Blog Ltd
organizationalUnitName_default  = HLL
emailAddress_default            = hsivabc@gmail.com

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ v3_intermediate_ca ]
# Extensions for a typical intermediate CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always

[ ocsp ]
# Extension for OCSP signing certificates (`man ocsp`).
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, digitalSignature
extendedKeyUsage = critical, OCSPSigning

(Create root key)
cd /root/ca
openssl genrsa -aes256 -out private/root_haritibco.key.pem 4096
****test12345***
chmod 400 private/root_haritibco.key.pem
(Create root certificate)
cd /root/ca
openssl req -config openssl.cnf \
-key private/root_haritibco.key.pem \
-new -x509 -days 7300 -sha256 -extensions v3_ca \
-out certs/root_haritibco.cert.pem
chmod 444 certs/root_haritibco.cert.pem
(Verify Root Certificate)
openssl x509 -noout -text -in certs/root_haritibco.cert.pem

——————————————————————————————
INTERMEDIATE CERTIFICATE
——————————————————————————————
mkdir /root/ca/intermediate
cd /root/ca/intermediate
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
echo 1000 > /root/ca/intermediate/crlnumber
cd /root/ca

vim intermediate/openssl.cnf


# OpenSSL intermediate CA configuration file.
# Copy to `/root/ca/intermediate/openssl.cnf`.

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = /root/ca/intermediate
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/inter_haritibco.key.pem
certificate       = $dir/certs/inter_haritibco.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/inter_haritibco.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 375
preserve          = no
policy            = policy_loose

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ policy_loose ]
# Allow the intermediate CA to sign a more diverse range of certificates.
# See the POLICY FORMAT section of the `ca` man page.
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = IN
stateOrProvinceName_default     = Maharashtra
localityName_default            = Mumbai
0.organizationName_default      = Hari TIBCO Blog Ltd
organizationalUnitName_default  = HLL
emailAddress_default            = hsivabc@gmail.com

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ v3_intermediate_ca ]
# Extensions for a typical intermediate CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always

[ ocsp ]
# Extension for OCSP signing certificates (`man ocsp`).
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, digitalSignature
extendedKeyUsage = critical, OCSPSigning

openssl genrsa -aes256 \
-out intermediate/private/inter_haritibco.key.pem 4096
*****test12345****
chmod 400 intermediate/private/inter_haritibco.key.pem

cd /root/ca
openssl req -config intermediate/openssl.cnf -new -sha256 \
-key intermediate/private/inter_haritibco.key.pem \
-out intermediate/csr/inter_haritibco.csr.pem
cd /root/ca
openssl ca -config openssl.cnf -extensions v3_intermediate_ca \
-days 3650 -notext -md sha256 \
-in intermediate/csr/inter_haritibco.csr.pem \
-out intermediate/certs/inter_haritibco.cert.pem
chmod 444 intermediate/certs/inter_haritibco.cert.pem

openssl x509 -noout -text \
-in intermediate/certs/inter_haritibco.cert.pem

openssl verify -CAfile certs/root_haritibco.cert.pem \
intermediate/certs/inter_haritibco.cert.pem

cat intermediate/certs/inter_haritibco.cert.pem \
certs/root_haritibco.cert.pem > intermediate/certs/chain_haritibco.cert.pem
chmod 444 intermediate/certs/chain_haritibco.cert.pem


—————————————————————————
CERTIFICATE
—————————————————————————
cd /root/ca
openssl genrsa -aes256 \
-out intermediate/private/haritibcoblog.key.pem 2048
chmod 400 intermediate/private/haritibcoblog.key.pem

cd /root/ca
openssl req -config intermediate/openssl.cnf \
-key intermediate/private/haritibcoblog.key.pem \
-new -sha256 -out intermediate/csr/haritibcoblog.csr.pem

cd /root/ca
openssl ca -config intermediate/openssl.cnf \
-extensions server_cert -days 375 -notext -md sha256 \
-in intermediate/csr/haritibcoblog.csr.pem \
-out intermediate/certs/haritibcoblog.cert.pem
chmod 444 intermediate/certs/haritibcoblog.cert.pem

openssl x509 -noout -text \
-in intermediate/certs/haritibcoblog.cert.pem

openssl verify -CAfile intermediate/certs/chain_haritibco.cert.pem \
intermediate/certs/haritibcoblog.cert.pem
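To bundle the issued certificate with the intermediate for deployment, and to sanity-check the chain as a client would see it, something like the following works (a sketch; the hostname is a hypothetical placeholder and the file names match the ones created above):

# Bundle the server certificate with the intermediate chain for deployment.
cat intermediate/certs/haritibcoblog.cert.pem \
    intermediate/certs/chain_haritibco.cert.pem \
    > intermediate/certs/haritibcoblog.fullchain.pem

# Once installed on a server, verify the presented chain against our root
# (haritibcoblog.example.com is a placeholder hostname).
openssl s_client -connect haritibcoblog.example.com:443 \
    -CAfile certs/root_haritibco.cert.pem -showcerts </dev/null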

 

 

 

HTTP Status Codes:- 1xx (Informational)

  • 100 Continue :- The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. (See the curl sketch after this list for a way to observe this on the wire.)
  • 101 Switching Protocols :- The server understands and is willing to comply with the client's request, via the Upgrade message header field, for a change in the application protocol being used on this connection. The server will switch protocols to those defined by the response's Upgrade header field immediately after the empty line which terminates the 101 response. The protocol SHOULD be switched only when it is advantageous to do so. For example, switching to a newer version of HTTP is advantageous over older versions, and switching to a real-time, synchronous protocol might be advantageous when delivering resources that use such features.
  • 102 Processing (WebDAV) :- The 102 (Processing) status code is an interim response used to inform the client that the server has accepted the complete request, but has not yet completed it. This status code SHOULD only be sent when the server has a reasonable expectation that the request will take significant time to complete. As guidance, if a method is taking longer than 20 seconds (a reasonable, but arbitrary value) to process, the server SHOULD return a 102 (Processing) response. The server MUST send a final response after the request has been completed. Methods can potentially take a long period of time to process, especially methods that support the Depth header. In such cases the client may time out the connection while waiting for a response. To prevent this the server may return a 102 (Processing) status code to indicate to the client that the server is still processing the method.
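One easy way to watch a 1xx interim response on the wire is curl's verbose mode (a sketch; httpbin.org and somefile.bin are just example placeholders, and the server must actually honour Expect: 100-continue):

# Force an Expect: 100-continue handshake and show the interim response with -v.
curl -v -H 'Expect: 100-continue' \
     -X POST --data-binary @somefile.bin \
     https://httpbin.org/post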

     

java.lang.OutOfMemoryError – Types and Causes

The 8 symptoms that surface them

The many thousands of java.lang.OutOfMemoryErrors that I’ve met during my career all bear one of the below eight symptoms.

This post from haritibcoblog explains what causes a particular error to be thrown, offers code examples that can cause such errors, and gives you solution guidelines for a fix.

The content is all based on my own experience.

  • java.lang.OutOfMemoryError:Java heap space
  • java.lang.OutOfMemoryError:GC overhead limit exceeded
  • java.lang.OutOfMemoryError:Permgen space
  • java.lang.OutOfMemoryError:Metaspace
  • java.lang.OutOfMemoryError:Unable to create new native thread
  • java.lang.OutOfMemoryError:Out of swap space?
  • java.lang.OutOfMemoryError:Requested array size exceeds VM limit
  • Out of memory:Kill process or sacrifice child

java.lang.OutOfMemoryError:Java heap space

Java applications are only allowed to use a limited amount of memory. This limit is specified during application startup. To make things more complex, Java memory is separated into two different regions. These regions are called Heap space and Permgen (for Permanent Generation):


The size of those regions is set during the Java Virtual Machine (JVM) launch and can be customized by specifying JVM parameters -Xmx and -XX:MaxPermSize. If you do not explicitly set the sizes, platform-specific defaults will be used.
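For example, the region sizes are set on the java command line when the application starts (a sketch; the sizes and the myapp.jar name are arbitrary, and -XX:MaxPermSize only exists on Java 7 and earlier):

# 1 GB maximum heap and a 256 MB permanent generation (pre-Java 8 JVMs).
java -Xmx1024m -XX:MaxPermSize=256m -jar myapp.jar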

The java.lang.OutOfMemoryError: Java heap space error will be triggered when the application attempts to add more data into the heap space area, but there is not enough room for it.

Note that there might be plenty of physical memory available, but the java.lang.OutOfMemoryError: Java heap space error is thrown whenever the JVM reaches the heap size limit.

What is causing it?

The most common reason for the java.lang.OutOfMemoryError: Java heap space error is simple – you try to fit an XXL application into an S-sized Java heap space. That is, the application just requires more Java heap space than is available to it to operate normally. Other causes for this OutOfMemoryError message are more complex and are caused by a programming error:

  • Spikes in usage/data volume. The application was designed to handle a certain amount of users or a certain amount of data. When the number of users or the volume of data suddenly spikes and crosses the expected threshold, the operation which functioned normally before the spike ceases to operate and triggers the java.lang.OutOfMemoryError: Java heap space error.

  • Memory leaks. A particular type of programming error will lead your application to constantly consume more memory. Every time the leaking functionality of the application is used, it leaves some objects behind in the Java heap space. Over time the leaked objects consume all of the available Java heap space and trigger the already familiar java.lang.OutOfMemoryError: Java heap space error.

java.lang.OutOfMemoryError:GC overhead limit exceeded

Java runtime environment contains a built-in Garbage Collection (GC) process. In many other programming languages, the developers need to manually allocate and free memory regions so that the freed memory can be reused.

Java applications on the other hand only need to allocate memory. Whenever a particular space in memory is no longer used, a separate process called Garbage Collection clears the memory for them. How the GC detects that a particular part of memory is no longer used is explained in more detail in the Garbage Collection Handbook, but you can trust the GC to do its job well.

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is displayed when your application has exhausted pretty much all the available memory and the GC has repeatedly failed to clean it.

What is causing it?

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is the JVM’s way of signalling that your application spends too much time doing garbage collection with too little result. By default the JVM is configured to throw this error if it spends more than 98% of the total time doing GC and when after the GC only less than 2% of the heap is recovered.

What would happen if this GC overhead limit would not exist? Note that the java.lang.OutOfMemoryError: GC overhead limit exceeded error is only thrown when 2% of the memory is freed after several GC cycles. This means that the small amount of heap the GC is able to clean will likely be quickly filled again, forcing the GC to restart the cleaning process again. This forms a vicious cycle where the CPU is 100% busy with GC and no actual work can be done. End users of the application face extreme slowdowns – operations which normally complete in milliseconds take minutes to finish.

So the “java.lang.OutOfMemoryError: GC overhead limit exceeded” message is a pretty nice example of a fail fast principle in action.
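If you really want the JVM to keep grinding instead of failing fast, the check can be switched off on HotSpot JVMs (a sketch; myapp.jar is a placeholder, and disabling the check usually just postpones the Java heap space error):

# Disable the GC overhead limit check (not generally recommended).
java -XX:-UseGCOverheadLimit -Xmx1024m -jar myapp.jar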

java.lang.OutOfMemoryError:Permgen space

Java applications are only allowed to use a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into different regions which can be seen in the following figure:

The size of all those regions, including the permgen area, is set during the JVM launch. If you do not set the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation’s area in memory is exhausted.


What is causing it?

To understand the cause for the java.lang.OutOfMemoryError: PermGen space, we would need to understand what this specific memory area is used for.

For practical purposes, the permanent generation consists mostly of class declarations loaded and stored into PermGen. This includes the name and fields of the class, methods with the method bytecode, constant pool information, object arrays and type arrays associated with a class and Just In Time compiler optimizations.

From the above definition you can deduce that the PermGen size requirements depend both on the number of classes loaded as well as the size of such class declarations. Therefore we can say that the main cause for the java.lang.OutOfMemoryError: PermGen space is that either too many classes or too big classes are loaded to the permanent generation.

java.lang.OutOfMemoryError:Metaspace

Java applications are allowed to use only a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into different regions, as seen in the following figure:

The size of all those regions, including the metaspace area, can be specified during the JVM launch. If you do not determine the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: Metaspace message indicates that the Metaspace area in memory is exhausted.


What is causing it?

If you are not a newcomer to the Java landscape, you might be familiar with another concept in Java memory management called PermGen. Starting from Java 8, the memory model in Java was significantly changed. A new memory area called Metaspace was introduced and Permgen was removed. This change was made due to variety of reasons, including but not limited to:

  • The required size of permgen was hard to predict. It resulted in either under-provisioning, triggering java.lang.OutOfMemoryError: Permgen space errors, or over-provisioning, resulting in wasted resources.
  • GC performance improvements, enabling concurrent class data de-allocation without GC pauses and specific iterators on metadata.
  • Support for further optimizations such as G1 concurrent class unloading.

So if you were familiar with PermGen then all you need to know as background is that – whatever was in PermGen before Java 8 (name and fields of the class, methods of a class with the bytecode of the methods, constant pool, JIT optimizations etc) – is now located in Metaspace.

As you can see, Metaspace size requirements depend both upon the number of classes loaded as well as the size of such class declarations. So it is easy to see the main cause for the java.lang.OutOfMemoryError: Metaspace is: either too many classes or too big classes being loaded to the Metaspace.
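By default Metaspace grows out of native memory, but it can be capped so that a class-loading leak fails fast; a sketch (the 256 MB value and myapp.jar are placeholders):

# Cap the Metaspace (Java 8 and later).
java -XX:MaxMetaspaceSize=256m -jar myapp.jar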

java.lang.OutOfMemoryError:Unable to create new native thread

Java applications are multi-threaded by nature. What this means is that the programs written in Java can do several things (seemingly) at once. For example – even on machines with just one processor – while you drag content from one window to another, the movie played in the background does not stop just because you carry out several operations at once.

A way to think about threads is to think of them as workers to whom you can submit tasks to carry out. If you had only one worker, he or she could only carry out one task at the time. But when you have a dozen workers at your disposal they can simultaneously fulfill several of your commands.

Now, as with workers in physical world, threads within the JVM need some elbow room to carry out the work they are summoned to deal with. When there are more threads than there is room in memory we have built a foundation for a problem:

The message java.lang.OutOfMemoryError: Unable to create new native thread means that the Java application has hit the limit of how many Threads it can launch.


What is causing it?

You have a chance to face the java.lang.OutOfMemoryError: Unable to create new native thread whenever the JVM asks for a new thread from the OS. Whenever the underlying OS cannot allocate a new native thread, this OutOfMemoryError will be thrown. The exact limit for native threads is very platform-dependent thus we recommend to find out those limits by running a test similar to the below example. But, in general, the situation causing java.lang.OutOfMemoryError: Unable to create new native thread goes through the following phases:

  1. A new Java thread is requested by an application running inside the JVM
  2. JVM native code proxies the request to create a new native thread to the OS
  3. The OS tries to create a new native thread which requires memory to be allocated to the thread
  4. The OS will refuse native memory allocation either because the 32-bit Java process size has depleted its memory address space – e.g. (2-4) GB process size limit has been hit – or the virtual memory of the OS has been fully depleted
  5. The java.lang.OutOfMemoryError: Unable to create new native thread error is thrown.
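On Linux you can get a rough idea of how close you are to these limits before the error strikes (a sketch; the exact ceilings vary by distribution and configuration):

# Per-user limit on processes/threads for the current shell.
ulimit -u

# System-wide ceilings on threads and PIDs.
cat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max

# Default thread stack size in kB – smaller stacks let more threads fit in the address space.
ulimit -s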

 

java.lang.OutOfMemoryError:Out of swap space?

Java applications are given limited amount of memory during the startup. This limit is specified via the -Xmx and other similar startup parameters. In situations where the total memory requested by the JVM is larger than the available physical memory, operating system starts swapping out the content from memory to hard drive.

The java.lang.OutOfMemoryError: Out of swap space? error indicates that the swap space is also exhausted and the new attempted allocation fails due to the lack of both physical memory and swap space.


What is causing it?

The java.lang.OutOfmemoryError: Out of swap space? is thrown by JVM when an allocation request for bytes from the native heap fails and the native heap is close to exhaustion. The message indicates the size (in bytes) of the allocation which failed and the reason for the memory request.

The problem occurs in situations where the Java processes have started swapping, which, recalling that Java is a garbage-collected language, is already not a good situation. Modern GC algorithms do a good job, but when faced with latency issues caused by swapping, the GC pauses tend to increase to levels not tolerable by most applications.

java.lang.OutOfMemoryError: Out of swap space? is often caused by operating system level issues, such as:

  • The operating system is configured with insufficient swap space.
  • Another process on the system is consuming all memory resources.

It is also possible that the application fails due to a native leak, for example, if application or library code continuously allocates memory but does not release it to the operating system.
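To confirm that swap really is exhausted, and to add a temporary swap file as a stop-gap, something like this can help (a sketch; the 2G size and the /swapfile path are just examples):

# How much physical memory and swap is left?
free -m
swapon --show

# Add a temporary 2 GB swap file.
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile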

java.lang.OutOfMemoryError:Requested array size exceeds VM limit

Java has got a limit on the maximum array size your program can allocate. The exact limit is platform-specific but is generally somewhere between 1 and 2.1 billion elements.

When you face the java.lang.OutOfMemoryError: Requested array size exceeds VM limit, this means that the application that crashes with the error is trying to allocate an array larger than the Java Virtual Machine can support.


What is causing it?

The error is thrown by the native code within the JVM. It happens before allocating memory for an array when the JVM performs a platform-specific check: whether the allocated data structure is addressable in this platform. This error is less common than you might initially think.

The reason you only seldom face this error is that Java arrays are indexed by int. The maximum positive int in Java is 2^31 – 1 = 2,147,483,647. And the platform-specific limits can be really close to this number – for example on my 64bit MB Pro on Java 1.7 I can happily initialize arrays with up to 2,147,483,645 or Integer.MAX_VALUE-2 elements.

Increasing the length of the array by one to Integer.MAX_VALUE-1 results in the familiar OutOfMemoryError:

Exception in thread “main” java.lang.OutOfMemoryError: Requested array size exceeds VM limit

But the limit might not be that high – on 32-bit Linux with OpenJDK 6, you will hit the “java.lang.OutOfMemoryError: Requested array size exceeds VM limit” already when allocating an array with ~1.1 billion elements. To understand the limits of your specific environments run the small test program described in the next chapter.

Out of memory:Kill process or sacrifice child

In order to understand this error, we need to recoup the operating system basics. As you know, operating systems are built on the concept of processes. Those processes are shepherded by several kernel jobs, one of which, named “Out of memory killer” is of interest to us in this particular case.

This kernel job can annihilate your processes under extremely low memory conditions. When such a condition is detected, the Out of memory killer is activated and picks a process to kill. The target is picked using a set of heuristics scoring all processes and selecting the one with the worst score to kill. The Out of memory: Kill process or sacrifice child is thus different from other errors covered in our OOM handbook as it is not triggered nor proxied by the JVM but is a safety net built into the operating system kernels.

The Out of memory: kill process or sacrifice child error is generated when the available virtual memory (including swap) is consumed to the extent that the overall operating system stability is put at risk. In such a case the Out of memory killer picks the rogue process and kills it.


What is causing it?

By default, Linux kernels allow processes to request more memory than currently available in the system. This makes all the sense in the world, considering that most of the processes never actually use all of the memory they allocate. The easiest comparison to this approach would be the broadband operators. They sell all the consumers a 100Mbit download promise, far exceeding the actual bandwidth present in their network. The bet is again on the fact that the users will not simultaneously all use their allocated download limit. Thus one 10Gbit link can successfully serve way more than the 100 users our simple math would permit.

A side effect of this approach becomes visible when some of your programs are on the way to depleting the system’s memory. This can lead to extremely low-memory conditions, where no pages can be allocated for processing. You might have faced such a situation, where not even the root account can kill the offending task. To prevent it, the killer activates and identifies the rogue process to be killed.

You can read more about fine-tuning the behaviour of “Out of memory killer” in this article from RedHat documentation.

Now that we have the context, how can you know what triggered the “killer” and woke you up at 5AM? One common trigger for the activation is hidden in the operating system configuration. When you check the configuration in /proc/sys/vm/overcommit_memory, you have the first hint – the value specified here indicates whether all malloc() calls are allowed to succeed. Note that the path to the parameter in the proc file system varies depending on the system affected by the change.

An overcommitting configuration allows the rogue process to allocate more and more memory, which can eventually trigger the “Out of memory killer” to do exactly what it is meant to do.
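
If you suspect the OOM killer, a quick check is to look at the overcommit setting and the kernel log; a minimal sketch (run as root or with sudo):

cat /proc/sys/vm/overcommit_memory     # 0 = heuristic overcommit (the default), 1 = always overcommit, 2 = don't overcommit
sudo dmesg | grep -i "out of memory"   # the kernel logs lines like "Out of memory: Kill process ..." when the killer fires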

 

Linux – Concepts – IPTABLES v/s FIREWALLD

Today we will walk through iptables and firewalld, learn a bit about their history, and see how to install and configure them on our Linux distributions.

Let’s begin without wasting any more time.

What is iptables?

First, we need to know what iptables is. Most senior IT professionals know it and have worked with it. Iptables is a program that allows a user to configure the security or firewall tables provided by the Linux kernel firewall, and the chains within them, so that a user can add or remove firewall rules to meet his or her security requirements. Iptables uses different kernel modules and different protocols so that the user can get the best out of it. For example, iptables is used for IPv4 and ip6tables for IPv6, for both TCP and UDP. Normally, iptables rules are configured by a System Administrator, System Analyst or IT Manager. You must have root privileges to execute iptables rules. The Linux kernel uses the Netfilter framework to provide the various networking-related operations that are performed with iptables. Previously, ipchains was used in most Linux distributions for the same purpose. Every iptables rule is handled directly by the Linux kernel itself, and this is known as kernel duty. Whatever GUI tools or other security tools you use to configure your server’s firewall, at the end of the day the configuration is converted into iptables rules and supplied to the kernel to perform the operation.

History of iptables

The rise of iptables began with netfilter. Paul “Rusty” Russell was the initial author and the head think tank behind netfilter/iptables. He was later joined by many others, who formed the Netfilter core team and continue to develop and maintain the netfilter/iptables project as a joint effort, like many other open source projects. Harald Welte led the project until 2007, and Patrick McHardy headed it until 2013. Currently, the netfilter core team head is Pablo Neira Ayuso.

To know more about netfilter, please visit this link. To know more about the historicity of netfilter, please visit this link.

To know more about iptables history, please visit this link.

How to install iptables

Nowadays, iptables comes pre-installed on most modern Linux distributions. On most Linux systems, the iptables binary is installed as /usr/sbin/iptables. It can also be found as /sbin/iptables, but since iptables is more like a service than an “essential binary”, the preferred location remains /usr/sbin.

For Ubuntu or Debian

sudo apt-get install iptables

For CentOS

sudo yum install iptables-services

For RHEL

sudo yum install iptables

Iptables version

To know your iptables version, type the following command in your terminal.

sudo iptables --version

Starting & stopping your iptables firewall

For OpenSUSE 42.1, type the following to stop.

sudo /sbin/rcSuSEfirewall2 stop

To start it again

sudo /sbin/rcSuSEfirewall2 start

For Ubuntu, type the following to stop.

sudo service ufw stop

To start it again

sudo service ufw start

For Debian & RHEL , type the following to stop.

sudo /etc/init.d/iptables stop

To start it again

sudo /etc/init.d/iptables start

For CentOS, type the following to stop.

sudo service iptables stop

To start it again

sudo service iptables start

Getting all iptables rules lists

To see all the rules that are currently present and active in your iptables, simply open a terminal and type the following.

sudo iptables -L

If no rules have been added to your iptables firewall so far, you will see something like the image below.

[Screenshot: iptables -L output on OpenSUSE 42.1 showing empty INPUT, FORWARD and OUTPUT chains]

In the picture above, you can see that there are three (3) chains, INPUT, FORWARD and OUTPUT, and that no rules exist yet. Actually, I haven’t added one yet.

Type the following to know the status of the chains of your iptables firewall.

sudo iptables -S

With the above command, you can see the policy of each chain, i.e. whether it accepts traffic by default or not.

Clear all iptables rules

To clear all the rules from your iptables firewall, please type the following. This is normally known as flushing your iptables rules.

sudo iptables -F

If you want to flush the INPUT chain only, or any individual chains, issue the below commands as per your requirements.

sudo iptables -F INPUT
sudo iptables -F OUTPUT
sudo iptables -F FORWARD

ACCEPT or DROP Chains

To set the default policy of a particular chain to ACCEPT or DROP, issue any of the following commands in your terminal as required.

iptables --policy INPUT DROP

The above rule sets the default policy so that nothing incoming is accepted by that server. To revert it back to ACCEPT, do the following:

iptables --policy INPUT ACCEPT

Same goes for other chains as well like

iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP

Note: By default, all chains of iptables ( INPUT, OUTPUT, FORWARD ) are in ACCEPT mode. This is known as Policy Chain Default Behavior.

Allowing any port

If you are running a web server on your host, then you must tell your iptables firewall to let the server respond on port 80. By default a web server runs on port 80. Let’s do that then.

sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

On the above line, -A stands for append, meaning we are adding a new rule to the iptables list. INPUT selects the INPUT chain, -p specifies the protocol and --dport the destination port. -j ACCEPT is the action applied to matching packets. Similarly, you can allow the SSH port as well.

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

By default, SSH runs on port 22. But it’s good practise not to run SSH on port 22. Always run SSH on a different port. To run SSH on a different port, open the /etc/ssh/sshd_config file in your favorite editor and change Port 22 to another port, as sketched below.
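
A minimal sketch of the change, assuming you pick port 2222 (any free port works); open the new port in iptables before restarting sshd so you don’t lock yourself out:

sudo iptables -A INPUT -p tcp --dport 2222 -j ACCEPT   # allow the new SSH port first
sudo vi /etc/ssh/sshd_config                           # change "Port 22" to "Port 2222"
sudo systemctl restart sshd                            # the unit may be named "ssh" on Debian/Ubuntu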

Blocking any port

Say we want to block port 135. We can do it by

sudo iptables -A INPUT -p tcp --dport 135 -j DROP

If you want to prevent your server from initiating any SSH connection to another host/server, issue the following command:

sudo iptables -A OUTPUT -p tcp --dport 22 -j DROP

By doing so, no one can use your server to initiate an SSH connection to another host. The OUTPUT chain will filter and DROP any outgoing TCP connection on port 22 towards other hosts.

Allowing specific IP with Port

sudo iptables -A INPUT -p tcp -s 0/0 --dport 22  -j ACCEPT

Here -s 0/0 stands for any source, i.e. any IP address, so this rule accepts SSH connections (destination port 22) from anywhere. If you want to allow only a particular IP, use the following instead:

sudo iptables -A INPUT -p tcp -s 12.12.12.12/32 --dport 22  -j ACCEPT

In the above example, you are allowing only the IP address 12.12.12.12 to connect to the SSH port; other addresses will be refused as long as your default policy or a later rule drops them. Similarly, you can allow a range using CIDR values, such as:

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j ACCEPT

The above example shows how you can allow a whole IP block to connect on port 22. It will accept addresses from 12.12.12.0 to 12.12.12.255.

If you want to block such IP addresses range, do the reverse by replacing ACCEPT by DROP like the following

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j DROP

So, it will not allow connections on port 22 from the 12.12.12.0 to 12.12.12.255 IP addresses.

Blocking ICMP

If you want to block ICMP (ping) requests to and from your server, you can try the following. The first rule stops your server from sending ICMP echo requests to other hosts.

sudo iptables -A OUTPUT -p icmp --icmp-type 8 -j DROP

Now, try to ping google.com. Your OpenSUSE server will not be able to ping google.com.

If you want to block incoming ICMP (ping) echo requests to your server, just type the following in your terminal.

sudo iptables -I INPUT -p icmp --icmp-type 8 -j DROP

Now it will not reply to any ICMP ping echo request. Say your server’s IP address is 13.13.13.13; if you ping that IP you will see that the server does not respond to the ping request.

Blocking MySql / MariaDB Port

As MySQL holds your database, you must protect it from outside attack. Allow only your trusted application servers’ IP addresses to connect to your MySQL server:

sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

So, it will accept MySQL connections only from the 192.168.1.0/24 IP block, as long as other traffic to the port is dropped (see below). By default MySQL runs on port 3306.
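
As with the other ACCEPT rules, this one only acts as a whitelist if something afterwards drops the remaining traffic to the port; a minimal sketch of the companion rule:

sudo iptables -A INPUT -p tcp --dport 3306 -j DROP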

Blocking SMTP

If you are not running a mail server on your host, or your server is not configured to act as one, you should block SMTP so that your server cannot send spam or any mail towards any domain. To block outgoing mail from your server:

sudo iptables -A OUTPUT -p tcp --dport 25 -j DROP

Block DDoS

We are all familiar with the term DDoS. To mitigate it, issue the following command in your terminal.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 20/minute --limit-burst 100 -j ACCEPT

You need to tune the numerical values to match your traffic; the ones above are just a reasonable starting point.
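
Note that the limit match above only ACCEPTs packets within the configured rate; anything beyond it simply falls through to the next rule or the chain policy. A minimal sketch of a companion rule, appended after the one above, that explicitly drops the excess:

sudo iptables -A INPUT -p tcp --dport 80 -j DROP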

You can protect more by

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1800 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling 
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 1280 > /proc/sys/net/ipv4/tcp_max_syn_backlog

Blocking Port Scanning

There are hundreds of people out there scanning your server’s open ports and trying to break its security. To block this:

sudo iptables -N block-scan
sudo iptables -A block-scan -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j RETURN
sudo iptables -A block-scan -j DROP

Here, block-scan is the name of a new chain. Note that nothing jumps to this chain yet; see the sketch below for one way to hook it into INPUT.
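
One possible way to wire the chain up (this hook is my assumption, not part of the original rules) is to send suspicious RST-only packets through it, so they are rate-limited by the RETURN rule and dropped beyond that:

sudo iptables -A INPUT -p tcp --tcp-flags SYN,ACK,FIN,RST RST -j block-scan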

Blocking Bad Ports

You may need to block some bad ports for your server as well. Here is how you can do this.

badport="135,136,137,138,139,445"
sudo iptables -A INPUT -p tcp -m multiport --dports "$badport" -j DROP
sudo iptables -A INPUT -p udp -m multiport --dports "$badport" -j DROP

You can add more ports according to your needs.

What is firewalld?

Firewalld provides a dynamically managed firewall with support for network/firewall zones that defines the trust level of network connections or interfaces. It has support for IPv4, IPv6 firewall settings, ethernet bridges and IP sets. There is a separation of runtime and permanent configuration options. It also provides an interface for services or applications to add firewall rules directly.

The former firewall model with system-config-firewall/lokkit was static and every change required a complete firewall restart. This also meant unloading the netfilter kernel modules and loading the modules needed for the new configuration; unloading the modules broke stateful firewalling and established connections. The firewall daemon, on the other hand, manages the firewall dynamically and applies changes without restarting the whole firewall, so there is no need to reload all firewall kernel modules. Using a firewall daemon does, however, require that all firewall modifications are done through that daemon, to make sure that the state in the daemon and the firewall in the kernel are in sync. The firewall daemon cannot parse firewall rules added by the iptables and ebtables command line tools. The daemon provides information about the current active firewall settings via D-Bus and also accepts changes via D-Bus using PolicyKit authentication methods.

So, firewalld uses zones and services instead of chains and rules to perform its operations, and it can manage rules dynamically, allowing updates and modifications without breaking existing sessions and connections.

It has the following features.

  • D-Bus API.
  • Timed firewall rules.
  • Rich Language for specific firewall rules.
  • IPv4 and IPv6 NAT support.
  • Firewall zones.
  • IP set support.
  • Simple log of denied packets.
  • Direct interface.
  • Lockdown: Whitelisting of applications that may modify the firewall.
  • Support for iptables, ip6tables, ebtables and ipset firewall backends.
  • Automatic loading of Linux kernel modules.
  • Integration with Puppet.

To know more about firewalld, please visit this link.

How to install firewalld

Before installing firewalld, please make sure you stop iptables and that the iptables service is no longer in use. To do so:

sudo systemctl stop iptables

This will stop iptables on your system.

Then make sure iptables cannot be started again by issuing the below command in the terminal.

sudo systemctl mask iptables

Now, check the status of iptables.

sudo systemctl status iptables


Now, we are ready to install firewalld on to our system.

For Ubuntu

To install it on Ubuntu, you must remove UFW first and then you can install Firewalld. To remove UFW, issue the below command on the terminal.

sudo apt-get remove ufw

After removing UFW, issue the below command in the terminal

sudo apt-get install firewall-applet

Or

You can open the Ubuntu Software Center and search for “firewall-applet”, then install it onto your Ubuntu system.

For RHEL, CentOS & Fedora

Type the below command to install firewalld on your CentOS system.

sudo yum install firewalld firewall-config -y

How to configure firewalld

Before configuring firewalld, we must know the status of firewalld after the installation. To know that, type the following.

sudo systemctl status firewalld


As firewalld works on a zone basis, we should check all the zones and services, even though we haven’t done any configuration yet.

For Zones

sudo firewall-cmd --get-active-zones


or

sudo firewall-cmd --get-zones


To know the default zone, issue the below command

sudo firewall-cmd --get-default-zone


And, For Services

sudo firewall-cmd --get-services

[Screenshot: firewall-cmd --get-services output listing the predefined services]

Here, you can see those services covered under firewalld.

Setting Default Zone

An important note: after each modification, you need to reload firewalld so that your changes take effect.

To set the default zone

sudo firewall-cmd --set-default-zone=internal

or

sudo firewall-cmd --set-default-zone=public

After changing the zone, check whether the change took effect.

sudo firewall-cmd --get-default-zone

Adding Port in Public Zone

sudo firewall-cmd --permanent --zone=public --add-port=80/tcp


This will add TCP port 80 to the public zone of firewalld. You can add any port you like by replacing 80 with your own.

Now reload the firewalld.

sudo firewall-cmd --reload

Now, check the status to see whether tcp 80 port has been added or not.

sudo firewall-cmd --zone=public --list-ports

[Screenshot: firewall-cmd --zone=public --list-ports showing 80/tcp]

Here, you can see that tcp port 80 has been added.

Or even you can try something like this.

sudo firewall-cmd --zone=public --list-all


Removing Port from Public Zone

To remove Tcp 80 port from the public zone, type the following.

sudo firewall-cmd --zone=public --remove-port=80/tcp

You will see a “success” text echoing in your terminal.

You can remove any port you like by replacing 80 with your own port.

Adding Services in Firewalld

To add ftp service in firewalld, issue the below command

sudo firewall-cmd --zone=public --add-service=ftp

You will see a “success” text echoing in your terminal.

Similarly for adding smtp service, issue the below command

sudo firewall-cmd --zone=public --add-service=smtp

Replace ftp and smtp with whatever service you want to add to firewalld.
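
Keep in mind that --add-service without --permanent only changes the runtime configuration, which is lost on reload or reboot. A minimal sketch of making a service rule stick (ftp is just the example service from above):

sudo firewall-cmd --permanent --zone=public --add-service=ftp
sudo firewall-cmd --reload

On recent firewalld versions you can also copy the whole runtime setup into the permanent configuration in one go:

sudo firewall-cmd --runtime-to-permanent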

Removing Services from Firewalld

For removing ftp & smtp services from firewalld, issue the below command in the terminal.

sudo firewall-cmd --zone=public --remove-service=ftp
sudo firewall-cmd --zone=public --remove-service=smtp

Block Any Incoming and Any Outgoing Packet(s)

If you wish, you can block all incoming and outgoing packets/connections using firewalld. This is known as firewalld’s panic mode. To enable it, issue the below command.

sudo firewall-cmd --panic-on

You will see a “success” text echoing in your terminal.

After doing this, you will not be able to ping a host or even browse any websites.

To turn this off, issue the below command in your terminal.

sudo firewall-cmd --panic-off

Adding IP Address in Firewalld

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" accept'

By doing so, firewalld will accept IPv4 packets from the source IP 192.168.1.4.

Blocking IP Address From Firewalld

Similarly, to block any IP address

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" reject'

By doing so, firewalld will reject every IPv4 packet from the source IP 192.168.1.4 (with reject the sender gets an error back; use drop in the rule instead if you want packets silently discarded).

I have stuck to the very basics of firewalld here so that you can easily understand how it works and how it differs from iptables.

That’s all for today. Hope you enjoy reading this article.

Take care.

Linux – Concepts – Ulimits & Sysctl

ulimit and sysctl

The ulimit and sysctl programs allow you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and thereby makes the system unresponsive for other users.

Code Listing 1: ulimit example

# ulimit -a 
core file size          (blocks, -c) 0 
data seg size           (kbytes, -d) unlimited 
file size               (blocks, -f) unlimited 
pending signals                 (-i) 8191 
max locked memory       (kbytes, -l) 32 
max memory size         (kbytes, -m) unlimited 
open files                      (-n) 1024 
pipe size            (512 bytes, -p) 8 
POSIX message queues     (bytes, -q) 819200 
stack size              (kbytes, -s) 8192 
cpu time               (seconds, -t) unlimited 
max user processes              (-u) 8191 
virtual memory          (kbytes, -v) unlimited 
file locks                      (-x) unlimited

All these settings can be manipulated. A good example is this bash forkbomb that forks as many processes as possible and can crash systems where no user limits are set:

Warning: Do not run this in a shell! If no limits are set your system will either become unresponsive or might even crash.

Code Listing 2: A bash forkbomb

$ :(){ :|:& };:
For more details, refer to my post: https://haritibcoblog.com/2016/07/10/linux-fork-bomb-test-epic/

Now this is not good – any user with shell access to your box could take it down. But if that user can only start 30 processes the damage will be minimal. So let’s set a process limit:

Gentoo Note: A too small number of processes can break the use of portage. So, don’t be too strict.

Code Listing 3: Setting a process limit

# ulimit -u 30 
# ulimit -a 
… 
max user processes              (-u) 30 
…

If you try to run the forkbomb now it should run, but throw error messages “fork: resource temporarily unavailable”. This means that your system has not allowed the forkbomb to start more processes. The other options of ulimit can help with similar problems, but you should be careful that you don’t lock yourself out – setting data seg size too small will even prevent bash from starting!
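
The ulimit value set above only applies to the current shell and its children. To make such a limit persistent for a particular user, you can typically add it to /etc/security/limits.conf (read by pam_limits); a minimal sketch, assuming a hypothetical user called alice:

# /etc/security/limits.conf  (user "alice" is only an example)
alice    soft    nproc    30
alice    hard    nproc    30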

sysctl is a similar tool: it allows you to configure kernel parameters at runtime. If you wish to keep settings persistent across reboots, you should edit /etc/sysctl.conf – be aware that wrong settings may break things in unforeseen ways.

Code Listing 4: Exploring sysctl variables

# sysctl -a 
… 
vm.swappiness = 60 
…

The list of variables is quite long (367 lines on my system), but I picked out vm.swappiness here. It controls how aggressive swapping will be: the higher it is (with a maximum of 100), the more swap will be used. This can affect performance a lot on systems with little memory, depending on load and other factors.

Code Listing 5: Reducing swappiness

# sysctl vm.swappiness=0 
vm.swappiness = 0

The effects of changing this setting are usually not felt instantly. But you can change many settings, especially network-related, this way. For servers this can offer a nice performance boost, but as with ulimit careless usage might cause your system to misbehave or slow down. If you don’t know what a variable controls, you should not modify it!
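
A minimal sketch of making the swappiness change from Code Listing 5 survive a reboot, using the /etc/sysctl.conf file mentioned above:

# echo "vm.swappiness = 0" >> /etc/sysctl.conf
# sysctl -p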

Linux Command – Using Netstat the Proper Way !!

How to install netstat

netstat is a useful tool for checking your network configuration and activity. It is in fact a collection of several tools lumped together.

Install “net-tools” package using yum

[root@livedvd ~]$ sudo yum install net-tools
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirror.secureax.com
* extras: centos.mirror.secureax.com
* updates: centos.mirror.secureax.com
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================
Package         Arch         Version                          Repository  Size
================================================================================
Installing:
net-tools       x86_64       2.0-0.17.20131004git.el7         base       304 k
Transaction Summary
================================================================================
Install  1 Package
Total download size: 304 k
Installed size: 917 k
Is this ok [y/d/N]: y
Downloading packages:
net-tools-2.0-0.17.20131004git.el7.x86_64.rpm              | 304 kB   00:00
Running transaction check

Running transaction test
Transaction test succeeded
Running transaction
Installing : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Verifying  : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Installed:
net-tools.x86_64 0:2.0-0.17.20131004git.el7

 

Complete!

 

The netstat Command

Displaying the Routing Table

When you invoke netstat with the -r flag, it displays the kernel routing table in the way we’ve been doing with route. On vstout, it produces:

# netstat -nr

 Kernel IP routing table
 Destination   Gateway      Genmask         Flags  MSS Window  irtt Iface
 127.0.0.1     *            255.255.255.255 UH       0 0          0 lo
 172.16.1.0    *            255.255.255.0   U        0 0          0 eth0
 172.16.2.0    172.16.1.1   255.255.255.0   UG       0 0          0 eth0

The -n option makes netstat print addresses as dotted quad IP numbers rather than the symbolic host and network names. This option is especially useful when you want to avoid address lookups over the network (e.g., to a DNS or NIS server).

The second column of netstat‘s output shows the gateway to which the routing entry points. If no gateway is used, an asterisk is printed instead. The third column shows the “generality” of the route, i.e., the network mask for this route. When given an IP address to find a suitable route for, the kernel steps through each of the routing table entries, taking the bitwise AND of the address and the genmask before comparing it to the target of the route.

The fourth column displays the following flags that describe the route:

G The route uses a gateway.
U The interface to be used is up.
H Only a single host can be reached through the route. For example, this is the case for the loopback entry 127.0.0.1.
D This route is dynamically created. It is set if the table entry has been generated by a routing daemon like gated or by an ICMP redirect message
M This route is set if the table entry was modified by an ICMP redirect message.
! The route is a reject route and datagrams will be dropped.

 

The next three columns show the MSS, Window and irtt that will be applied to TCP connections established via this route. The MSS is the Maximum Segment Size and is the size of the largest datagram the kernel will construct for transmission via this route. The Window is the maximum amount of data the system will accept in a single burst from a remote host. The acronym irtt stands for “initial round trip time.” The TCP protocol ensures that data is reliably delivered between hosts by retransmitting a datagram if it has been lost. The TCP protocol keeps a running count of how long it takes for a datagram to be delivered to the remote end, and an acknowledgement to be received, so that it knows how long to wait before assuming a datagram needs to be retransmitted; this process is called the round-trip time. The initial round-trip time is the value that the TCP protocol will use when a connection is first established. For most network types, the default value is okay, but for some slow networks, notably certain types of amateur packet radio networks, the time is too short and causes unnecessary retransmission. The irtt value can be set using the route command. Values of zero in these fields mean that the default is being used.

Finally, the last field displays the network interface that this route will use.
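
As noted, the MSS, window and irtt values can be supplied when a route is added with the route command; a minimal sketch, reusing the 172.16.2.0 network from the table above (the irtt of 300 ms is an arbitrary example):

# route add -net 172.16.2.0 netmask 255.255.255.0 gw 172.16.1.1 irtt 300 dev eth0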

Displaying Interface Statistics

When invoked with the -i flag, netstat displays statistics for the network interfaces currently configured. If the -a option is also given, it prints all interfaces present in the kernel, not only those that have been configured currently. On vstout, the output from netstat will look like this:

# netstat -i
 Kernel Interface table
 Iface MTU Met  RX-OK RX-ERR RX-DRP RX-OVR  TX-OK TX-ERR TX-DRP TX-OVR Flags
 lo      0   0   3185      0      0      0   3185      0      0      0 BLRU
 eth0 1500   0 972633     17     20    120 628711    217      0      0 BRU

The MTU and Met fields show the current MTU and metric values for that interface. The RX and TX columns show how many packets have been received or transmitted error-free (RX-OK/TX-OK) or damaged (RX-ERR/TX-ERR); how many were dropped (RX-DRP/TX-DRP); and how many were lost because of an overrun (RX-OVR/TX-OVR).

The last column shows the flags that have been set for this interface. These characters are one-character versions of the long flag names that are printed when you display the interface configuration with ifconfig:

B A broadcast address has been set.
L This interface is a loopback device.
M All packets are received (promiscuous mode).
O ARP is turned off for this interface.
P This is a point-to-point connection.
R Interface is running.
U Interface is up.

 

Displaying Connections

netstat supports a set of options to display active or passive sockets. The options -t, -u, -w, and -x show active TCP, UDP, RAW, or Unix socket connections. If you provide the -a flag in addition, sockets that are waiting for a connection (i.e., listening) are displayed as well. This display will give you a list of all servers that are currently running on your system.

Invoking netstat -ta on vlager produces this output:

$ netstat -ta
 Active Internet Connections
 Proto Recv-Q Send-Q Local Address    Foreign Address    (State)
 tcp        0      0 *:domain         *:*                LISTEN
 tcp        0      0 *:time           *:*                LISTEN
 tcp        0      0 *:smtp           *:*                LISTEN
 tcp        0      0 vlager:smtp      vstout:1040        ESTABLISHED
 tcp        0      0 *:telnet         *:*                LISTEN
 tcp        0      0 localhost:1046   vbardolino:telnet  ESTABLISHED
 tcp        0      0 *:chargen        *:*                LISTEN
 tcp        0      0 *:daytime        *:*                LISTEN
 tcp        0      0 *:discard        *:*                LISTEN
 tcp        0      0 *:echo           *:*                LISTEN
 tcp        0      0 *:shell          *:*                LISTEN
 tcp        0      0 *:login          *:*                LISTEN

This output shows most servers simply waiting for an incoming connection. However, the fourth line shows an incoming SMTP connection from vstout, and the sixth line tells you there is an outgoing telnet connection to vbardolino.

Using the -a flag by itself will display all sockets from all families.

Top 20 netstat commands for network management

  1. Listing all the LISTENING Ports of TCP and UDP connections

Listing all ports (both TCP and UDP) using netstat -a option.

# netstat -a | more

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (servers and established)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  2. Listing TCP Ports connections

Listing only TCP (Transmission Control Protocol) port connections using netstat -at.

# netstat -at

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT

 

  3. Listing UDP Ports connections

Listing only UDP (User Datagram Protocol ) port connections using netstat -au.

# netstat -au

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*
 udp        0      0 *:mdns                      *:*

 

  4. Listing all LISTENING Connections

Listing all active listening ports connections with netstat -l.

# netstat -l

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  5. Listing all TCP Listening Ports

Listing all active listening TCP ports by using option netstat -lt.

# netstat -lt

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:dctp                      *:*                         LISTEN
 tcp        0      0 *:mysql                     *:*                         LISTEN
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:munin                     *:*                         LISTEN
 tcp        0      0 *:ftp                       *:*                         LISTEN
 tcp        0      0 localhost.localdomain:ipp   *:*                         LISTEN
 tcp        0      0 localhost.localdomain:smtp  *:*                         LISTEN
 tcp        0      0 *:http                      *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 *:https                     *:*                         LISTEN

 

  6. Listing all UDP Listening Ports

Listing all active listening UDP ports by using option netstat -lu.

# netstat -lu

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:39578                     *:*
 udp        0      0 *:meregister                *:*
 udp        0      0 *:vpps-qua                  *:*
 udp        0      0 *:openvpn                   *:*
 udp        0      0 *:mdns                      *:*
 udp        0      0 *:sunrpc                    *:*
 udp        0      0 *:ipp                       *:*
 udp        0      0 *:60222                     *:*
 udp        0      0 *:mdns                      *:*

 

  7. Listing all UNIX Listening Ports

Listing all active UNIX listening ports using netstat -lx.

# netstat -lx

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     4171   @ISCSIADM_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     5767   /var/run/cups/cups.sock
 unix  2      [ ACC ]     STREAM     LISTENING     7082   @/tmp/fam-root-
 unix  2      [ ACC ]     STREAM     LISTENING     6157   /dev/gpmctl
 unix  2      [ ACC ]     STREAM     LISTENING     6215   @/var/run/hald/dbus-IcefTIUkHm
 unix  2      [ ACC ]     STREAM     LISTENING     6038   /tmp/.font-unix/fs7100
 unix  2      [ ACC ]     STREAM     LISTENING     6175   /var/run/avahi-daemon/socket
 unix  2      [ ACC ]     STREAM     LISTENING     4157   @ISCSID_UIP_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     60835836 /var/lib/mysql/mysql.sock
 unix  2      [ ACC ]     STREAM     LISTENING     4645   /var/run/audispd_events
 unix  2      [ ACC ]     STREAM     LISTENING     5136   /var/run/dbus/system_bus_socket
 unix  2      [ ACC ]     STREAM     LISTENING     6216   @/var/run/hald/dbus-wsUBI30V2I
 unix  2      [ ACC ]     STREAM     LISTENING     5517   /var/run/acpid.socket
 unix  2      [ ACC ]     STREAM     LISTENING     5531   /var/run/pcscd.comm

 

  8. Showing Statistics by Protocol

Displays statistics by protocol. By default, statistics are shown for the IP, ICMP, TCP, and UDP protocols. To show statistics for a single protocol, combine -s with the protocol option, as in the next two examples.

# netstat -s

Ip:
 2461 total packets received
 0 forwarded
 0 incoming packets discarded
 2431 incoming packets delivered
 2049 requests sent out
 Icmp:
 0 ICMP messages received
 0 input ICMP message failed.
 ICMP input histogram:
 1 ICMP messages sent
 0 ICMP messages failed
 ICMP output histogram:
 destination unreachable: 1
 Tcp:
 159 active connections openings
 1 passive connection openings
 4 failed connection attempts
 0 connection resets received
 1 connections established
 2191 segments received
 1745 segments send out
 24 segments retransmited
 0 bad segments received.
 4 resets sent
 Udp:
 243 packets received
 1 packets to unknown port received.
 0 packet receive errors
 281 packets sent

 

  9. Showing Statistics by TCP Protocol

Showing statistics of only TCP protocol by using option netstat -st.

# netstat -st

Tcp:
 2805201 active connections openings
 1597466 passive connection openings
 1522484 failed connection attempts
 37806 connection resets received
 1 connections established
 57718706 segments received
 64280042 segments send out
 3135688 segments retransmited
 74 bad segments received.
 17580 resets sent

 

  10. Showing Statistics by UDP Protocol
# netstat -su

Udp:
 1774823 packets received
 901848 packets to unknown port received.
 0 packet receive errors
 2968722 packets sent

 

  11. Displaying Service name with PID

Displaying service names with their PID number: the option netstat -tp will add a “PID/Program name” column.

# netstat -tp

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
 tcp        0      0 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED 2179/sshd
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT  1939/clock-applet

 

  12. Displaying Promiscuous Mode

With the -ac switch and an interval argument, netstat prints the selected information continuously, refreshing the screen every five seconds here; the default refresh is every second.

# netstat -ac 5 | grep tcp

tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        1      0 192.168.0.2:59447           www.gov.com:http            CLOSE_WAIT
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN

 

  13. Displaying Kernel IP routing

Display Kernel IP routing table with netstat and route command.

# netstat -r

Kernel IP routing table
 Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
 192.168.0.0     *               255.255.255.0   U         0 0          0 eth0
 link-local      *               255.255.0.0     U         0 0          0 eth0
 default         192.168.0.1     0.0.0.0         UG        0 0          0 eth0

 

  14. Showing Network Interface Transactions

Showing network interface packet transactions including both transferring and receiving packets with MTU size.

# netstat -i

Kernel Interface table
 Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
 eth0       1500   0     4459      0      0      0     4057      0      0      0 BMRU
 lo        16436   0        8      0      0      0        8      0      0      0 LRU

 

  15. Showing Kernel Interface Table

Showing Kernel interface table, similar to ifconfig command.

# netstat -ie

Kernel Interface table
 eth0      Link encap:Ethernet  HWaddr 00:0C:29:B4:DA:21
 inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:feb4:da21/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:4486 errors:0 dropped:0 overruns:0 frame:0
 TX packets:4077 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:2720253 (2.5 MiB)  TX bytes:1161745 (1.1 MiB)
 Interrupt:18 Base address:0x2000

lo        Link encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:16436  Metric:1
 RX packets:8 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

 

  16. Displaying IPv4 and IPv6 Information

Displays multicast group membership information for both IPv4 and IPv6.

# netstat -g

IPv6/IPv4 Group Memberships
 Interface       RefCnt Group
 --------------- ------ ---------------------
 lo              1      all-systems.mcast.net
 eth0            1      224.0.0.251
 eth0            1      all-systems.mcast.net
 lo              1      ff02::1
 eth0            1      ff02::202
 eth0            1      ff02::1:ffb4:da21
 eth0            1      ff02::1

 

  17. Print Netstat Information Continuously

To get netstat information every few seconds, use the following command; it will print netstat information continuously, with a short pause between updates.

# netstat -c

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36944 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42110 TIME_WAIT
 tcp        0    132 tecmint.com:ssh    115.113.134.3.static-:64662 ESTABLISHED
 tcp        0      0 tecmint.com:http   crawl-66-249-71-240.g:41166 TIME_WAIT
 tcp        0      0 localhost.localdomain:54823 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 localhost.localdomain:54822 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42091 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36998 TIME_WAIT

 

  18. Finding Unsupported Address Families

Finding un-configured address families with some useful information.

# netstat --verbose

netstat: no support for `AF IPX' on this system.
 netstat: no support for `AF AX25' on this system.
 netstat: no support for `AF X25' on this system.
 netstat: no support for `AF NETROM' on this system.

 

  19. Finding Listening Programs

Find out which programs are listening on a given port (here, anything matching “http”).

# netstat -ap | grep http

tcp        0      0 *:http                      *:*                         LISTEN      9056/httpd
 tcp        0      0 *:https                     *:*                         LISTEN      9056/httpd
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35248 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57783 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57769 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35270 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41637 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41614 TIME_WAIT   -
 unix  2      [ ]         STREAM     CONNECTED     88586726 10394/httpd

 

  20. Displaying RAW Network Statistics
# netstat --statistics --raw

Ip:
 62175683 total packets received
 52970 with invalid addresses
 0 forwarded
 Icmp:
 875519 ICMP messages received
 destination unreachable: 901671
 echo request: 8
 echo replies: 16253
 IcmpMsg:
 InType0: 83
 IpExt:
 InMcastPkts: 117

 

Source :- UnixMen

Knockd – Detailed And Simpler (Silent Assassin….)

As I can see, there are a lot of articles about knockd and its implementation. So, what have I done to make this one unique? I have made it simple but detail-oriented, and have commented on the controversies and criticism that exist.


Here is an outline on what I’ve discussed.

What is port knocking?

What is knockd?

How it works?

Installation

What we are trying to achieve

Pre-requisite before implementation of knockd:

Implementation scenario

Testing

Disclaimer

So, here we go.

What is port knocking?

Wikipedia Definition:

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of pre-specified closed ports (in this case, telnet). Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specific port(s)

/* in this article point of view, it’s ssh port 22 */

It is basically like every request having to knock on the door (the firewall) to get through it; knocking is necessary to get past the door. You can implement it using knockd and iptables, or with iptables alone.

Now, using knockd.

What is knockd?

knockd is a port-knock server. It listens to all traffic on an Ethernet interface, looking for special “knock” sequences of port-hits. A client makes these port-hits by sending a TCP (or UDP) packet to a port on the server. This port need not be open — since knockd listens at the link-layer level, it sees all traffic even if it’s destined for a closed port. When the server detects a specific sequence of port-hits, it runs a command defined in its configuration file. This can be used to open up holes in a firewall for quick access.

How it works?

  1. The knockd daemon is installed and running on the server.
  2. Some port sequences (TCP, UDP, or both) are configured, along with the appropriate action for each sequence.
  3. Once knockd sees a defined port sequence, it runs the configured action for that sequence.

Note:

It is completely stealthy and will not open any ports on the server by default.

When a port knock is successfully used to open a port, the firewall rules are generally only opened to the IP address that supplied the correct knock.

Installation

Note: Don’t copy/paste the commands. Type it manually to avoid errors that could occur due to the format.

# yum install libpcap*

/* dependency – * in the end installs libpcap-devel which is a pre-requisite, as well */

There are several ways to install, whereas I have followed rpm installation.

Download suitable rpm package from http://pkgs.repoforge.org/knock/

Then run,

# rpm -ivh knock-0.5-3.el6.rf.x86_64.rpm

/*Here, I have downloaded knock 0.5-3 for 64-bit centos and hence the rpm name*/

Now, what all got installed?

Knockd – knock server daemon

Knock – knock client, which aids in knocking.

Note that this (knock) is the default client that comes along with knockd; there are other, more advanced clients such as hping, sendip & packit.

What we are trying to achieve:

A way to stop the attacks altogether, yet allow ssh access from anywhere, when needed.

Pre-requisite before implementation of knockd:

As mentioned earlier, an existing firewall (iptables) is a pre-requisite.

Follow the below steps to configure firewall

# iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT

-I —– inserts the rule as the first line in the firewall setup

-p —– protocol

-m —– match against the states RELATED, ESTABLISHED in this case

-j —– jump to the action, which is ACCEPT here.

/* This rule allows currently on-going sessions through the firewall. It is essential: if you currently have a remote SSH session open on this computer, it will be preserved and not terminated by later rules that block ssh or all services */

# iptables -I INPUT -p icmp -j ACCEPT

/* This is to make your machine ping-able from any machine, so that you can check the availability of your machine (whether it’s up or down) */

# iptables -A INPUT -j REJECT

/* Rejecting everything else – appended as the last line, since if it were inserted as the first line the other rules would never be considered and every request would be rejected */

Implementation scenario:

Now, try to SSH to the machine where you have implemented the firewall. Let’s call this machine the server.

You will not be able to SSH to the server, since the firewall setup on the server rejects everything except on-going sessions and ping requests.

Now, knockd implementation:

Now, in server, that you have installed knockd, run the following commands

# vi /etc/knockd.conf

/*As a result of rpm installation, this configuration file will exist */

Edit the file as below and save/exit.

[options]
logfile = /var/log/knockd.log

[opencloseSSH]
sequence      = 2222:udp,3333:tcp,4444:udp
seq_timeout   = 15
start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
cmd_timeout   = 10
stop_command  = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags      = syn

The first line of the file defines where errors/warnings pertaining to knockd get logged. If you are unable to successfully establish a connection between client and server using knockd/knock, this server-side log is the place to check.

sequence        = 2222:udp,3333:tcp,4444:udp                                /* the sequence of knocking */

seq_timeout     = 15                                                                         /* once the above mentioned sequence is knocked, it’s valid only for next 15 seconds */

start_command   = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* once the right sequence is knocked, the above command gets executed, where %IP% is the IP address of the host that knocked. Hence, this allows an SSH connection from the knocker (client) to the server. One thing to note is that the SSH connection needs to be established within the cmd_timeout window (10 seconds here) after the knock. Also note that I have used iptables -I, since I want this rule inserted as the first line; if it were appended it would have no effect, because the iptables reject-everything rule comes before it in the list */

cmd_timeout     = 10                       /* the stop_command will be executed 10 seconds after the start_command */

stop_command    = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* I am deleting the rule that I inserted to allow the SSH connection from the knocker (client) to the server. Now, doesn’t that end my SSH connection? – It doesn’t.

How? The existing rule in my iptables, iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT, helps me retain the established connection. If this rule were not present, your SSH connection would be lost after 10 seconds, i.e. cmd_timeout */

tcpflags                  = syn                       /* In TCP 3 way handshake, Client sends a TCP SYNchronize packet to Server */

Note that there are other ways to configure your knockd server. For example, you can use one port sequence to open SSH and another to close it (unlike the combined open/close setup above), but in that case, after terminating your SSH session you need to manually send the closing sequence from the client in order to close the SSH port again. A sketch of that style follows below.
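
For reference, a minimal sketch of that open/close style, modelled on the example configuration shipped with knockd (the sequences here are arbitrary):

[openSSH]
sequence    = 7000,8000,9000
seq_timeout = 5
command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags    = syn

[closeSSH]
sequence    = 9000,8000,7000
seq_timeout = 5
command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags    = syn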

We have now configured our server to allow SSH from a client only when the client hits the right sequence, which here is 2222:udp,3333:tcp,4444:udp.

Now you need to start the knockd service. Copy the below script to /etc/rc.d/init.d/knockd on your machine.

Runlevel script.

#!/bin/sh
# description: Start and stop knockd

# Check that the config file exists
[ -f /etc/knockd.conf ] || exit 0

# Source function library
. /etc/rc.d/init.d/functions

# Source networking configuration
. /etc/sysconfig/network

# Check that networking is up
[ "$NETWORKING" = "no" ] && exit 0

start() {
    echo "Starting knockd ..."
    /usr/sbin/knockd &
}

stop() {
    echo "Shutting down knockd ..."
    kill `pidof /usr/sbin/knockd`
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    ;;
esac

exit 0

Now, run the following commands, so that you shall start/stop/restart knockd like other services

# chmod 755 /etc/rc.d/init.d/knockd      /* make the script executable by root */
# chkconfig --add knockd                 /* add knockd to the chkconfig list */
# chkconfig --level 35 knockd on         /* start the service as part of runlevels 3 & 5 */
# service knockd start

whenever you modify your /etc/knockd.conf, you need to restart the service.

Testing:

From client, run the following commands.

Note that there are other ways to knock; here I am using the knock client that comes along with the knockd package, so you need to have it installed on the client as well.

Now,

# knock -v server_ip 2222:udp 3333:tcp 4444:udp; ssh server_ip

-v for verbose

server_ip substituted by your server’s ip address.

We are knocking the sequence that we defined in the server’s /etc/knockd.conf file.

Now you should be able to establish an SSH session to the server. The port then gets closed again for further sessions, so every time you want to establish an SSH connection to the server, you need to knock first and then ssh.
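
If the knock client is not installed on a machine, the same port hits can be sent with other tools; a rough sketch using netcat (exact flag behaviour varies a little between netcat variants):

# echo knock | nc -u -w1 server_ip 2222
# nc -z -w1 server_ip 3333
# echo knock | nc -u -w1 server_ip 4444
# ssh server_ip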

Comments on controversies and criticism:

What if the knock sequence is determined by a brute force attack?

Excerpt from wikipedia answers this:


Consider that, if an external attacker did not know the port knock sequence, even the simplest of sequences would require a massive brute force effort in order to be discovered. A three-knock simple TCP sequence (e.g. port 1000, 2000, 3000) would require an attacker without prior knowledge of the sequence to test every combination of three ports in the range 1-65535, and then to scan each port in between to see if anything had opened. As a stateful system, the port would not open until after the correct three-digit sequence had been received in order, without other packets in between.
That equates to a maximum of 65,536³ packets in order to obtain and detect a single successful opening, in the worst case scenario. That’s 281,474,976,710,656 or over 281 trillion packets. On average, an attempt would take roughly half that many packets to successfully open a single, simple three-port TCP-only knock by brute force. This is made even more impractical when knock attempt-limiting is used to stop brute force attacks, and when longer and more complex sequences are used.

Also, there are other port knocking solutions, where the knocks are encrypted, which makes it even harder to hack.

When we have a simple, robust and reliable solution, namely VPN access, why go for port knocking?

Use port knocking plus VPN tunnels for added security. If you have a VPN server in your network, you can protect it from brute-force attacks by hiding it behind a knock sequence.

Disclaimer:

Port knocking is not intended to be a complete solution, but it can be an added layer of security.

Also, it needs to be configured properly according to your needs.

A network that is connected to the Internet is never secure. There is no perfect lock!

The best that we can do is to make the system as secure as possible for today. Possibly tomorrow someone will figure out how to get through our security.

A security system is only as strong as its weakest link.

But I believe port knocking adds security to the existing setup and does not make your system more vulnerable to attacks, as some people claim.

Thanks for reading. Cheers !

 

Credits :- GOKUL (Unixmen)

Linux Command – Dstat (Culprit Catcher)

Introduction

Whether a system is used as a web server or a normal PC, keeping its resource usage under control is practically a daily necessity. GNU/Linux provides several tools for monitoring purposes: iostat, vmstat, netstat, ifstat and others. Every system admin knows these tools and how to analyse their output. However, there is another alternative, a single program that can replace almost all of them: dstat.

With dstat, users can view all system resources at a glance. For instance, you can compare network bandwidth numbers directly with disk throughput to get a more general view of what is going on; this is very useful for troubleshooting, or for analysing a system for benchmarking.

Features

  • Combines vmstat, iostat, ifstat, netstat information and more
  • Shows stats in exactly the same timeframe
  • Enable/order counters as they make most sense during analysis/troubleshooting
  • Modular design
  • Written in Python, so easily extendable
  • Includes many external plugins
  • Can show interrupts per device
  • Very accurate timeframes, no timeshifts when system is stressed
  • Shows exact units and limits conversion mistakes
  • Indicate different units with different colors
  • Show intermediate results when delay > 1
  • Can export CSV output, which can be imported into Gnumeric and Excel to make graphs

Installation

Installing dstat is a simple task, since it is packaged as .deb and .rpm.
For Debian-based distros:

# apt install dstat

On RHEL, CentOS and Fedora:

# yum install dstat

Getting sta(r)ted

To run the program, users can just write the command in its simplest form:

$ dstat

As a result, it shows various stats in a table, giving admins a quick general overview.

Fig: default dstat output

Plugins

First of all, it's important to note that dstat comes with a lot of plugins; to get a complete list, run:

$ dstat --list

which returns:

Fig: output of dstat --list

Of course, it is possible to add more for special use cases.

So, let’s see how they work.

To use a plugin, simply pass its name as a command-line argument. For instance, if someone needs to check only the total CPU usage, they can run:

$ dstat --cpu

or, in a shorter form

$ dstat -c

As previously said, the program can show different stats at the same time. For example:

$ dstat --cpu --top-cpu --disk --top-bio --top-latency

Fig: combined dstat output

This command, which is a combination of internal stats and external plugins, will give the total CPU usage, most expensive CPU process, disk statistics, most expensive block I/O process and the process with the highest total latency (expressed in milliseconds).

Output

By default, dstat displays output in columns (as a table) directly in the terminal window, in real time, for immediate analysis by a human. But it can also write the output to a .csv file, which software like LibreOffice Calc or Gnumeric can use for creating graphs or any other kind of statistical analysis. Exporting data to .csv is quite easy:

$ dstat [arguments] --output /path/to/outputfile
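For example, the following illustrative invocation (the option mix, the 5-second interval, the 60 samples and the file name are just examples) records CPU, memory and network stats both on screen and to a CSV file:

$ dstat --cpu --mem --net --output /tmp/dstat-report.csv 5 60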