Error :- sudo: effective uid is not 0, is sudo installed setuid root?

As Linux administrators, we have all come across this error at some point in our lives.

[user@host dir]$ sudo bash
sudo: effective uid is not 0, is sudo installed setuid root?

This happens when the sudo binary loses its setuid-root permission bit, so sudo can no longer elevate privileges.

The solution for this error is to restore the setuid bit as the root user (the binary must also be owned by root for setuid to take effect):

chmod u+s /usr/bin/sudo

That should sort out the issue on CentOS and similar distros.
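To confirm the fix, check that the binary is owned by root and carries the setuid bit (the s in the owner-execute position); the exact size and date will of course differ per system:

ls -l /usr/bin/sudo
# expected form: -rwsr-xr-x 1 root root ... /usr/bin/sudo
#                   ^ the 's' here is the setuid bit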

 

/proc/sys for you to manipulate a running kernel

The /proc/sys directory in the /proc virtual filesystem contains a lot of useful and interesting files and directories. Many kernel settings can be manipulated by writing to files in the proc filesystem, and a lot of important information can be retrieved from these files. This is especially useful when you are troubleshooting or fine-tuning your Linux system.
Following is a description of the most important files.
Especially the files in /proc/sys/vm are very interesting and useful.
You can also use the sysctl command to make these changes persistent, or to see all the possible kernel options you can change at run time.
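For example, a minimal sysctl session (vm.swappiness is used purely as an illustration; the persistent-config path can vary by distribution):

sysctl -a | less                                # list every tunable the running kernel exposes
sysctl vm.swappiness                            # read one value (same as cat /proc/sys/vm/swappiness)
sysctl -w vm.swappiness=10                      # set it at run time (same as echo 10 > /proc/sys/vm/swappiness)
echo "vm.swappiness = 10" >> /etc/sysctl.conf   # make it persistent across reboots
sysctl -p                                       # reload /etc/sysctl.conf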

/proc/sys/dev

Contains device-specific information. For instance, the file /proc/sys/dev/cdrom/info shows your CD-ROM's capabilities. The other files in /proc/sys/dev/cdrom are writable and allow you to actually set options for your CD-ROM drive.
For instance, echo 1 > /proc/sys/dev/cdrom/autoeject makes your tray open automagically when you unmount your CD-ROM.
/proc/sys/dev/parport holds information about parallel ports. Browse these directories to learn more about their contents.

/proc/sys/fs

Virtual filesystem information/tuning

/proc/sys/fs/binfmt_misc

binfmt_misc allows you to configure the system to execute miscellaneous binary formats. For instance it enables you to make the system execute .exe files using wine and java files using the java interpreter, just by typing a file name.

/proc/sys/fs/dentry-state

Linux caches directory access to speed up subsequent access to the same directory; this file contains information about the status of the directory cache.

/proc/sys/fs/dir-notify-enable

Enables/disables the dnotify interface; dnotify is a signal-based mechanism used to notify a process about file/directory changes. This is mainly interesting to programmers.

/proc/sys/fs/dquot-nr

number of allocated disk quota entries and the number of free disk quota entries

/proc/sys/fs/dquot-max

maximum number of cached disk quota entries.

/proc/sys/fs/file-max

system-wide limit on the number of open files for all processes.

/proc/sys/fs/file-nr

Contains the number of allocated file handles, the number of free file handles, and the maximum number of file handles (cf. file-max).

/proc/sys/fs/inode-max

maximum number of in-memory inodes

/proc/sys/fs/inode-nr

number of inodes and number of free inodes

/proc/sys/fs/inode-state

This file contains seven numbers: the number of inodes, the number of free inodes, preshrink, and four dummy values. nr_inodes is the number of inodes the system has allocated. preshrink is non-zero when nr_inodes is bigger than inode-max.

/proc/sys/fs/inotify
(since kernel 2.6.13)

This directory contains files that can be used to limit the amount of kernel memory consumed by the inotify interface.

/proc/sys/fs/lease-break-time

This file specifies the grace period that the kernel grants to a process holding a file lease after it has sent a signal to that process notifying it that another process is waiting to open the file.

/proc/sys/fs/leases-enable

This file can be used to enable or disable file leases on a system-wide basis.

/proc/sys/fs/mqueue
(since kernel 2.6.6)

This directory contains files controlling the resources used by POSIX message queues.

/proc/sys/fs/overflowgid

Allows you to change the value of the fixed GID. If a filesystem that only supports 16-bit GIDs is mounted, Linux's 32-bit GIDs sometimes need to be converted to a lower value; that value is fixed here.

/proc/sys/fs/overflowuid

Allows you to change the value of the fixed UID. If a filesystem that only supports 16-bit UIDs is mounted, Linux's 32-bit UIDs sometimes need to be converted to a lower value; that value is fixed here.

/proc/sys/fs/suid_dumpable
(since kernel 2.6.13)

Determines whether core dump files are produced for set-user-ID or otherwise protected/tainted binaries.
Possible values are 0, 1, and 2:
0 (default) A core dump will not be produced for a process which has changed credentials or whose binary does not have read permission enabled.
1 (debug) All processes dump core when possible.
2 (suidsafe) Any binary which normally would not be dumped is dumped readable by root only. This allows the user to remove the core dump file but not to read it. For security reasons core dumps in this mode will not overwrite one another or other files. This mode is appropriate when administrators are attempting to debug problems in a normal environment.
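For example, to switch to the suidsafe mode described above (run as root):

echo 2 > /proc/sys/fs/suid_dumpable
# or equivalently:
sysctl -w fs.suid_dumpable=2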

/proc/sys/fs/super-max

Controls the maximum number of superblocks, and thus the maximum number of mounted file systems the kernel can have.

/proc/sys/fs/super-nr

The number of file systems currently mounted.

/proc/sys/kernel/acct

Contains three values: highwater, lowwater, and frequency. Used with BSD-style process accounting.

/proc/sys/kernel/cap-bound
(from Linux 2.2 to 2.6.24)

Holds the value of the kernel capability bounding set.

/proc/sys/kernel/core_pattern

Can be used to define a template for naming core dump files

/proc/sys/kernel/core_uses_pid

See core(5).

/proc/sys/kernel/ctrl-alt-del

Controls the handling of Ctrl-Alt-Del from the keyboard.
If it's 0, Linux will do a graceful restart. When the value is > 0, Linux will do an immediate reboot, without even syncing its dirty buffers.

/proc/sys/kernel/hotplug

Contains the path for the hotplug policy agent.

/proc/sys/kernel/domainname

can be used to set the NIS/YP domainname

/proc/sys/kernel/hostname

can be used to set the hostname
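For example (run as root; webserver01 is a placeholder name, and the change does not survive a reboot):

echo webserver01 > /proc/sys/kernel/hostname
# or equivalently:
sysctl -w kernel.hostname=webserver01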

/proc/sys/kernel/modprobe

Contains the path for the kernel module loader.

/proc/sys/kernel/msgmax

This file defines a system-wide limit specifying the maximum number of bytes in a single message written on a System V message queue.

/proc/sys/kernel/msgmnb

Defines a system-wide parameter used to initialize the msg_qbytes setting for subsequently created message queues.

/proc/sys/kernel/ostype and /proc/sys/kernel/osrelease

These files give substrings of /proc/version.

/proc/sys/kernel/overflowgid and /proc/sys/kernel/overflowuid

These files duplicate the files /proc/sys/fs/overflowgid and /proc/sys/fs/overflowuid.

/proc/sys/kernel/panic

Gives read/write access to the kernel variable panic_timeout. If this is zero, the kernel will loop on a panic; if non-zero it indicates that the kernel should autoreboot after this number of seconds.

/proc/sys/kernel/panic_on_oops
(since kernel 2.5.68)

This file controls the kernel’s behavior when an oops or BUG is encountered. If this file contains 0, then the system tries to continue operation. If it contains 1, then the system delays a few seconds and then panics. If the /proc/sys/kernel/panic file is also non-zero then the machine will be rebooted.

/proc/sys/kernel/pid_max
(since kernel 2.5.34)

This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID).

/proc/sys/kernel/printk

The four values in this file are console_loglevel, default_message_loglevel, minimum_console_loglevel, and default_console_loglevel. This allows configuration of which messages will be logged to the console. (Ever worked on a console printing messages to your screen all the time? Here's how to fix that.) Messages with a higher priority than console_loglevel will be printed to the console.
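For example, to quiet a console that is being flooded (the values shown are a common choice, not the only valid one):

echo 4 4 1 7 > /proc/sys/kernel/printk
# console_loglevel=4: only messages with a higher priority than level 4
# (i.e. a numerically lower level, such as errors) still reach the console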

/proc/sys/kernel/pty
(since kernel 2.6.4)

This directory contains two files relating to the number of Unix 98 pseudo-terminals on the system.

/proc/sys/kernel/pty/max

Defines the maximum number of pseudo-terminals.

/proc/sys/kernel/pty/nr

This read-only file indicates how many pseudo-terminals are currently in use.

/proc/sys/kernel/random

This directory contains various parameters controlling the operation of the file /dev/random.

/proc/sys/kernel/real-root-dev

Used by the deprecated change_root initrd system

/proc/sys/kernel/rtsig-max

( until kernel 2.6.7)
Can be used to tune the maximum number of POSIX real-time (queued) signals that can be outstanding in the system.

/proc/sys/kernel/rtsig-nr

(until kernel 2.6.7)
This file shows the number of POSIX real-time signals currently queued.

/proc/sys/kernel/sem
(since kernel 2.4)

Contains 4 numbers defining limits for System V IPC semaphores.

/proc/sys/kernel/sg-big-buff

Shows the size of the generic SCSI device (sg) buffer.

/proc/sys/kernel/shmall

Contains the system-wide limit on the total number of pages of System V shared memory.

/proc/sys/kernel/shmmax

This file can be used to query and set the run-time limit on the maximum (System V IPC) shared memory segment size that can be created.

/proc/sys/kernel/shmmni
(from kernel 2.4)

Specifies the system-wide maximum number of System V shared memory segments that can be created.

/proc/sys/kernel/version

Kernel version number and build date

/proc/sys/net

Networking information.

/proc/sys/net/core/somaxconn

Defines a ceiling value for the backlog argument of listen(2).

/proc/sys/net/core/rmem_max

Maximum socket receive buffer size (bounds the TCP receive window).

/proc/sys/net/core/wmem_max

Maximum socket send buffer size (bounds the TCP send window).

/proc/sys/net/ipv4/ip_forward

Enable or disable IP forwarding (routing) between network interfaces.
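For example:

cat /proc/sys/net/ipv4/ip_forward        # 0 = forwarding off, 1 = on
echo 1 > /proc/sys/net/ipv4/ip_forward   # turn the box into a router until reboot
sysctl -w net.ipv4.ip_forward=1          # equivalent sysctl form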

/proc/sys/sunrpc

This directory supports Sun remote procedure call for network file system (NFS).

/proc/sys/vm

This directory contains files for memory management tuning and buffer and cache management. It is one of the more interesting directories in /proc/sys, as it allows manipulating memory handling in real time.

/proc/sys/vm/swappiness
(since kernel 2.6.16)

vm.swappiness takes a value between 0 and 100 to change the balance between swapping applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; in other cases, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.

/proc/sys/vm/drop_caches
(since kernel 2.6.16)

Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache, write 1 to this file.
To free dentries and inodes, write 2 to this file.
To free pagecache, dentries and inodes, write 3 to this file.
Just try echo 1 > /proc/sys/vm/drop_caches and watch your memory usage drop by the amount of kernel cache memory.
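A typical session looks like this; running sync first flushes dirty pages so more of the cache is clean and therefore droppable:

free -h                            # note the cached figure
sync                               # flush dirty pages to disk
echo 3 > /proc/sys/vm/drop_caches
free -h                            # cached memory is now released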

/proc/sys/vm/legacy_va_layout
(since kernel 2.6.9)

If non-zero, this disables the new 32-bit memory-mapping layout; the kernel will use the legacy (2.4) layout for all processes.

/proc/sys/vm/oom_dump_tasks
(since kernel 2.6.25)

Enables a system-wide task dump (excluding kernel threads) to be produced when the kernel performs an OOM-killing. The dump includes the following information for each task (thread, process): thread ID, real user ID, thread group ID (process ID), virtual memory size, resident set size, the CPU that the task is scheduled on, oom_adj score (see the description of /proc/[number]/oom_adj), and command name. This is helpful to determine why the OOM-killer was invoked and to identify the rogue task that caused it.
If this contains the value zero, this information is suppressed.
It defaults to 0, so if you have a problem requiring it, enable it:
echo 1 > /proc/sys/vm/oom_dump_tasks

/proc/sys/vm/oom_kill_allocating_task
(since kernel 2.6.24)

This enables or disables killing the OOM-triggering task in out-of-memory situations. If this is set to zero, the OOM-killer will scan through the entire tasklist and select a task based on heuristics to kill. This normally selects a rogue memory-hogging task that frees up a large amount of memory when killed.
If this is set to non-zero, the OOM-killer simply kills the task that triggered the out-of-memory condition. This avoids a possibly expensive tasklist scan.
If /proc/sys/vm/panic_on_oom is non-zero, it takes precedence over whatever value is used in /proc/sys/vm/oom_kill_allocating_task.
The default value is 0.

/proc/sys/vm/overcommit_memory

This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (default)
1: always overcommit, never check
2: always check, never overcommit

/proc/sys/vm/overcommit_ratio

Percentage of physical RAM included in the virtual address space commit limit when overcommit_memory is 2 (CommitLimit = swap + overcommit_ratio% of RAM).

/proc/sys/vm/panic_on_oom
(since kernel 2.6.18)

This enables or disables a kernel panic in an out-of-memory situation.
0 (default): no panic
1: panic, unless the allocation failure is confined to certain nodes by memory policies (mbind) or cpusets and those nodes have reached memory exhaustion
2: always panic

SAN Switch basic concepts – Fabric Switch

SAN Switch basic concepts

SAN Switch basic concepts – a SAN environment provides block-oriented I/O between the computer systems and the target disk systems. The SAN may use Fibre Channel or Ethernet (iSCSI) to provide connectivity between hosts and storage. In either case, the storage is physically decoupled from the hosts. The storage devices and the hosts become peers attached to a common SAN fabric that provides high bandwidth, longer reach, the ability to share resources, enhanced availability, and other benefits of consolidated storage.

A SAN is created by using Fibre Channel to link peripheral devices such as disk storage and tape libraries.
A SAN (Storage Area Network) switch is a device that connects servers to shared pools of storage devices and is dedicated to moving storage traffic. It is shown below.

[Picture: SAN Switch]

Basic Connectivity Diagram between Servers, SAN Storage, SAN Switch and Tape Library.

[Picture: Basic Connectivity Diagram]

A SAN switch contains the following physical parts:

  1. One or two hot-swappable power supply units
  2. SFP (Small Form-factor Pluggable) ports
  3. Out-of-band management port (RJ45)
  4. Console port
  5. USB ports
  6. FC ports (count depends on the model)

[Picture: Switch back view]

[Picture: Switch front view]

Hot-swappable power supply units: hot swapping and hot plugging are terms used to describe replacing computer system components without shutting down the system. Using two power supply units provides redundancy: one unit connects to PDU 1 in the rack and the other connects to PDU 2.

SFP (Small Form-factor Pluggable): a small transceiver that plugs into the SFP port of a network switch and connects to Fibre Channel and Gigabit Ethernet (GbE) optical fibre cables at the other end. SFP is a hot-swappable input/output device that plugs into a switch port, allowing multiple options for connectivity. Superseding the GBIC transceiver, SFP modules are also called "mini-GBIC" due to their smaller size.


Fibre cables are used to connect storage to servers, as well as storage to the tape library. A fibre cable is shown below.

[Picture: FC cable]

Fibre cable: a fibre optic cable consists of a bundle of glass threads, each of which is capable of transmitting messages modulated onto light waves. Fibre optics has several advantages over traditional metal communication lines; most notably, fibre optic cables have a much greater bandwidth than metal cables.

Ethernet port: out-of-band management involves the use of a dedicated channel for managing network devices. This allows the network operator to establish trust boundaries in accessing the management function and to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components. A complete remote management system allows remote reboot, shutdown, and powering on, as well as hardware sensor monitoring (fan speed, power voltages, chassis intrusion, etc.). The out-of-band management port is also called the Ethernet management port (RJ45).

[Picture: fibre optic cable]

Console: switch console ports are meant to allow root access to the switch via a dumb-terminal interface, regardless of the state of the switch (unless it is completely dead). By connecting to the console port you can get access to the root level of a switch without using the network that the switch is connected to. This creates a secondary path to the switch, outside the network's data path, which needs to be secured without relying on the primary network.

[Picture: Console port]

This allows a technician sitting in a Network Operations Center thousands of miles away the ability to restore a switch or perform an initialization configuration securely over a standard telephone line even if the primary network is in failure. Without a connection to the console port, a technician would have to visit the site to perform repairs or initialization.

Login to SAN Switch.

[Picture: LC connector]

PuTTY: free Telnet and SSH terminal software for Windows and Unix platforms that enables users to remotely access computers over the Internet.

By typing the switch IP address into the PuTTY configuration, we can log in to the SAN switch CLI.

We can also log in to the SAN switch GUI console using a web browser: open the browser and type the IP address or SAN switch name in the address bar.
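From a Linux workstation you can reach the same CLI without PuTTY; the user name and IP address below are placeholders for your switch's management credentials:

ssh admin@192.168.1.50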

[Picture: PuTTY configuration]

Understanding Port Management: Port Types and Definitions

E_Port: This is an expansion port. A port is designated an E_Port when it is used as an inter-switch link (ISL) to connect to the E_Port of another switch, to enlarge the switch fabric.
F_Port: This is a fabric port that is not loop capable. It is used to connect an N_Port point-to-point to a switch.
FL_Port: This is a fabric port that is loop capable. It is used to connect NL_Ports to the switch in a public loop configuration (in a switched fabric environment).
G_Port: This is a generic port that can operate as either an E_Port or an F_Port. A port is defined as a G_Port after it is connected but has not received a response to loop initialization or has not yet completed the link initialization procedure with the adjacent Fibre Channel device.
L_Port: This is a loop-capable node or switch port.
U_Port: This is a universal port, more generic than a G_Port. It can operate as an E_Port, F_Port, or FL_Port. A port is defined as a U_Port when it is not connected or has not yet assumed a specific function in the fabric.
VE_Port: A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other.
EX_Port: An E_Port from a router to an edge fabric; the router terminates EX_Ports, preventing fabric merges.
VEX_Port: A virtual E_Port that terminates at the switch and does not propagate fabric services or routing topology information from one edge fabric to the other, when an FCIP connection is involved.

Target Device (Device ports)

N_Port: This is a node port that is not loop capable. It is used to connect an equipment port to the fabric.
NL_Port: This is a node port that is loop capable. It is used to connect an equipment port to the fabric in a loop configuration through an L_Port or FL_Port.
T_Port: This was used previously by CNT (INRANGE) as a mechanism for connecting directors together. It has been largely replaced by the E_Port.
No_Light: Indicates that the port is free (no device connected).

 

Source :- https://arkit.co.in/san-switch-basic

Linux Concepts :- Quotas

Rules: 1. Quotas can only be created for partitions.

E.g., if a 1 MB quota is set for the partition [/home],

then everything under /home together can use a max of 1 MB.

              user                       |              group
block [disk space]   inode [no. files]   | block [disk space]   inode [no. files]
SOFT HARD GRACE      SOFT HARD GRACE     | SOFT HARD GRACE      SOFT HARD GRACE

                           <---- limits ---->

==========================================

Part I. Configuring / Setting Up Quotas

==========================================

1. Configure /etc/fstab – usrquota or grpquota, whichever you want

In the 4th column of /etc/fstab add 'usrquota'
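For example, the resulting /etc/fstab entry might look like this (label and filesystem type are illustrative; the same line reappears in the Q&A below):

LABEL=/home   /home   ext3   defaults,usrquota   1 2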

2. Refresh /etc/fstab – actually /etc/mtab :

a. If you do not want hassles just REBOOT and skip to Part II.

or

b. mount -a, which will do nothing if the FSs are already mounted (which they always are)

or

c. umount /home and mount /home

But this will not work if any users are online (which they always are)

or

d. mount -o remount /home <<<=========================

[mount -o remount,rw /]

Refresh and will work even if users are online

or

e. reboot

which is the coward’s way

and then check /etc/mtab or run mount

3. To create the aquota.user file

–> quotacheck -vc /home [force]        (Note: 'u' is the default if not given,
                                        and 'v', of course, is always optional
                                        but friendly)

This will create /home/aquota.user, which monitors all quota activity for the /home partition.

4. To turn on quotas:

–> quotaon -v /home        (turns quotas on, i.e. loads the /home/aquota.user
                            file into RAM)

5. Check everybody’s current usage and quotas :

# repquota -a

Configuring quotas is now over !

Now we will implement quotas.

In short :

1. Configure /etc/fstab

2. mount -o remount /home        (cross-check with: cat /etc/mtab)

3. quotacheck -vc /home          (creates aquota.user)

4. quotaon -v /home              (loads aquota.user into RAM)

5. repquota -a                   (enjoy your work!)

e.g.:

1. repquota -a

2. repquota -u /

3. quota -u sachin

The first command shows quota info for all users and groups on all file systems.

The second shows user quota info for the / file system.

The third shows quota information for user sachin on all file systems.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

=============================

Part II. Implementing Quotas

=============================

6. edquota -u user

7. edquota -p foo bar <——— use foo as quota prototype for bar

8. edquota -t <———— To change the grace period

or use :

# edquota -p foo `awk -F: '$3 > 499 {print $1}' /etc/passwd`
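The backquoted awk command selects every local account with a UID above 499 (i.e. regular users on Red Hat-style systems); you can preview exactly which users it will touch before cloning the quota:

awk -F: '$3 > 499 {print $1}' /etc/passwd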

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

========================================

Part III. Repairing the aquota.user file

========================================

If your system hangs and then restarts, the aquota.user file gets corrupted and all quotas for all users are then in an unknown state.

To repair, boot into single user mode immediately, and do this _FIRST_:

quotaoff -v /home

quotacheck -avug        (minimum reqd is: quotacheck /home,
                         since 'u' is the default, we do not have 'g',
                         and 'v' is optional; 'a' means check all
                         filesystems listed in /etc/mtab)

                        Re-creates the file /home/aquota.user

quotaon -v /home

Misc : /etc/warnquota.conf

warnquota*

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

GNU/Linux LDUP : 27-Jul-2k3

PART II – SysAdministration

08. QUOTAS

1. What are the two aspects of disk storage that quotas allow you to specify?

A: Disk space [Block] and Files [inode] quotas

2. Which init script checks for the presence or absence of quotas ?

A: /etc/rc.d/rc.sysinit

3. I wish to implement quotas on my /home dir ? Should /home be a partition?

A: Yes.

4. Which file is configured when setting up quotas ?

A: /etc/fstab

5. What is the min I have to do, to implement both user and group quotas for

my /home partition?

A: Configure /etc/fstab with :

LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2

and then just reboot the machine !!

6. But I wish to implement only user quotas; is this /etc/fstab OK?

LABEL=/home /home ext3 defaults, usrquota 1 2

A: No. There must be no space in the 4th field after 'defaults'.

7. Is this /etc/fstab OK ?

LABEL=/home /home ext3 defaults,userquota 1 2

A: No. It's usrquota.

8. Where is this quota info for user and group quotas stored for /home?

A: In the /home partition, in 2 data files : aquota.user, aquota.group

9. Which file keeps track of all the mounted filesystems ?

A: /etc/mtab

10 How would you implement user quotas without rebooting your machine ?

A: Configure /etc/fstab with :

1. LABEL=/home /home ext3 defaults,usrquota 1 2

2. mount -o remount /home

3. quotacheck -vc /home

4. quotaon -v /home

11 Why would you want to do a quotaoff before you do a quotacheck manually

from the CLI ?

A: Because running quotacheck while quotas are active corrupts the aquota.user file.

12 Can a user – foo – modify/create his quota ?

A: Obviously not! Only root can do that! Or else every user will

abuse the quota system for his benefit !

13 Can foo at least see his quota status ?

A: Yes.

14 How ?

A: Login as foo and run ‘quota’.

15. What are these quotas that he will see ?

A: His soft limit, hard limit and grace period

16. How then will root set [create/modify] quotas for ‘foo’ ?

A: edquota -u foo

17. Examine the following o/p generated by the quota command given by foo:

Disk quotas for user foo (uid 500):

Filesystem    blocks   soft   hard   inodes   soft   hard
/dev/hda          20    100      0       14      0      0

18. How much disk space has foo already used ?

A: 20 blocks i.e. 20 KB

19. His soft limit appears to be 100 KB. Can he use 130 KB ?

A: Yes. But he will get warning messages.

20. Can he use unlimited disk space and fill the partition ?

A: Yes, but only until the grace period expires. After that the soft limit is enforced as the hard limit. Additionally, he will get warning messages.

21. Then what is the use of this soft limit ? How can I remedy it ?

A: By giving a hard limit too. foo can never cross the hard limit.

22. Now I do this :

Disk quotas for user foo (uid 500):

Filesystem    blocks   soft   hard   inodes   soft   hard
/dev/hda          20    100    200       14      0      0

foo creates files worth 160 KB. What will happen then ?

A: He will be allowed to create up to 200 KB max. Also, after 7 days he will be shut down regardless of whether he has reached his hard limit or not. He will have to clean up to under the soft limit to work.

23. What command is used to change a user’s grace period?

A: edquota -t

24. What command is used to see the entire quota details of all users?

A: repquota -a

25. What command sets a quota template?

A: edquota -p

26. What does ‘p’ mean and how would you use it?

A: ‘prototype’.

Suppose ‘foo’ has his quota set. Then you could clone his details,

# edquota -p foo bar

bar now has the same quota limits as foo.

27. If you had 2000 users, the above would clearly be inconvenient. Solve!

a: edquota -p foo `awk -F: '$3 > 499 { print $1 }' /etc/passwd`

28. An over-limit quota generates a mail message to the user on login.

Which file would you modify to customize the mail delivered ?

A: /etc/warnquota.conf

29. If quotas were run as a daily cron job, where would you find the script

file concerned?

a: /etc/cron.daily/

30 A user owns 150 inodes; the soft limit is 100 and the hard is 200.

Which of the following is correct if the grace period has not expired?

a. The user can create no more files

b. The user cannot append data to an existing file

c. The user cannot log off without deleting some files

d. The user will receive an email notice of violation

A: d.

31 What is the purpose of ‘convertquota’ ?

A: convertquota converts the old quota files quota.user and quota.group to aquota.user and aquota.group in the new format currently used by 2.4.0-ac and newer kernels, or by Red Hat Linux 2.4 kernels, on a filesystem.

The new file format allows using quotas for 32-bit UIDs/GIDs, setting quotas for root, and accounting used space in bytes (and so allowing use of quotas on ReiserFS), and it is also architecture-independent. This format introduces a radix tree (a simple form of tree structure) to the quota file.

SSL – Create Root, Intermediate and Certificate in Chain

Create a Chain Certificate (Root, Intermediate & Normal Chain) – Step-by-step
——————————————————————————————
ROOT CERTIFICATE
——————————————————————————————
mkdir /root/ca
cd /root/ca
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
vim openssl.cnf

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = /root/ca
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/root_haritibco.key.pem
certificate       = $dir/certs/root_haritibco.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/ca.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 375
preserve          = no
policy            = policy_strict

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ policy_loose ]
# Allow the intermediate CA to sign a more diverse range of certificates.
# See the POLICY FORMAT section of the `ca` man page.
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = IN
stateOrProvinceName_default     = Maharashtra
localityName_default            = Mumbai
0.organizationName_default      = Hari TIBCO Blog Ltd
organizationalUnitName_default  = HLL
emailAddress_default            = hsivabc@gmail.com

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ v3_intermediate_ca ]
# Extensions for a typical intermediate CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always

[ ocsp ]
# Extension for OCSP signing certificates (`man ocsp`).
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, digitalSignature
extendedKeyUsage = critical, OCSPSigning

(Create root key)
cd /root/ca
openssl genrsa -aes256 -out private/root_haritibco.key.pem 4096
(enter a passphrase when prompted, e.g. test12345)
chmod 400 private/root_haritibco.key.pem
(Create root certificate)
cd /root/ca
openssl req -config openssl.cnf \
-key private/root_haritibco.key.pem \
-new -x509 -days 7300 -sha256 -extensions v3_ca \
-out certs/root_haritibco.cert.pem
chmod 444 certs/root_haritibco.cert.pem
(Verify Root Certificate)
openssl x509 -noout -text -in certs/root_haritibco.cert.pem

——————————————————————————————
INTERMEDIATE CERTIFICATE
——————————————————————————————
mkdir /root/ca/intermediate
cd /root/ca/intermediate
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial
echo 1000 > /root/ca/intermediate/crlnumber
cd /root/ca

vim intermediate/openssl.cnf


# OpenSSL intermediate CA configuration file.
# Copy to `/root/ca/intermediate/openssl.cnf`.

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = /root/ca/intermediate
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/inter_haritibco.key.pem
certificate       = $dir/certs/inter_haritibco.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/inter_haritibco.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 375
preserve          = no
policy            = policy_loose

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ policy_loose ]
# Allow the intermediate CA to sign a more diverse range of certificates.
# See the POLICY FORMAT section of the `ca` man page.
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = IN
stateOrProvinceName_default     = Maharashtra
localityName_default            = Mumbai
0.organizationName_default      = Hari TIBCO Blog Ltd
organizationalUnitName_default  = HLL
emailAddress_default            = hsivabc@gmail.com

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ v3_intermediate_ca ]
# Extensions for a typical intermediate CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always

[ ocsp ]
# Extension for OCSP signing certificates (`man ocsp`).
basicConstraints = CA:FALSE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, digitalSignature
extendedKeyUsage = critical, OCSPSigning

openssl genrsa -aes256 \
-out intermediate/private/inter_haritibco.key.pem 4096
(enter a passphrase when prompted, e.g. test12345)
chmod 400 intermediate/private/inter_haritibco.key.pem

cd /root/ca
openssl req -config intermediate/openssl.cnf -new -sha256 \
-key intermediate/private/inter_haritibco.key.pem \
-out intermediate/csr/inter_haritibco.csr.pem
cd /root/ca
openssl ca -config openssl.cnf -extensions v3_intermediate_ca \
-days 3650 -notext -md sha256 \
-in intermediate/csr/inter_haritibco.csr.pem \
-out intermediate/certs/inter_haritibco.cert.pem
chmod 444 intermediate/certs/inter_haritibco.cert.pem

openssl x509 -noout -text \
-in intermediate/certs/inter_haritibco.cert.pem

openssl verify -CAfile certs/root_haritibco.cert.pem \
intermediate/certs/inter_haritibco.cert.pem

cat intermediate/certs/inter_haritibco.cert.pem \
certs/root_haritibco.cert.pem > intermediate/certs/chain_haritibco.cert.pem
chmod 444 intermediate/certs/chain_haritibco.cert.pem


—————————————————————————
CERTIFICATE
—————————————————————————
cd /root/ca
openssl genrsa -aes256 \
-out intermediate/private/haritibcoblog.key.pem 2048
chmod 400 intermediate/private/haritibcoblog.key.pem

cd /root/ca
openssl req -config intermediate/openssl.cnf \
-key intermediate/private/haritibcoblog.key.pem \
-new -sha256 -out intermediate/csr/haritibcoblog.csr.pem

cd /root/ca
openssl ca -config intermediate/openssl.cnf \
-extensions server_cert -days 375 -notext -md sha256 \
-in intermediate/csr/haritibcoblog.csr.pem \
-out intermediate/certs/haritibcoblog.cert.pem
chmod 444 intermediate/certs/haritibcoblog.cert.pem

openssl x509 -noout -text \
-in intermediate/certs/haritibcoblog.cert.pem

openssl verify -CAfile intermediate/certs/chain_haritibco.cert.pem \
intermediate/certs/haritibcoblog.cert.pem
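As an optional end-to-end smoke test, you can serve the new certificate locally and connect to it. Port 4433 is an arbitrary choice, and this assumes an OpenSSL version whose s_server supports -cert_chain:

openssl s_server -accept 4433 \
-key intermediate/private/haritibcoblog.key.pem \
-cert intermediate/certs/haritibcoblog.cert.pem \
-cert_chain intermediate/certs/chain_haritibco.cert.pem &
openssl s_client -connect localhost:4433 \
-CAfile certs/root_haritibco.cert.pem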

 

 

 

HTTP Status Codes:- 1xx (Informational)

  • 100 Continue :- The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. (See the curl example after this list.)
  • 101 Switching Protocols :- The server understands and is willing to comply with the client's request, via the Upgrade message header field, for a change in the application protocol being used on this connection. The server will switch protocols to those defined by the response's Upgrade header field immediately after the empty line which terminates the 101 response. The protocol SHOULD be switched only when it is advantageous to do so. For example, switching to a newer version of HTTP is advantageous over older versions, and switching to a real-time, synchronous protocol might be advantageous when delivering resources that use such features.
  • 102 Processing (WebDAV) :- The 102 (Processing) status code is an interim response used to inform the client that the server has accepted the complete request but has not yet completed it. This status code SHOULD only be sent when the server has a reasonable expectation that the request will take significant time to complete. As guidance, if a method is taking longer than 20 seconds (a reasonable but arbitrary value) to process, the server SHOULD return a 102 (Processing) response. The server MUST send a final response after the request has been completed. Methods can potentially take a long period of time to process, especially methods that support the Depth header. In such cases the client may time out the connection while waiting for a response. To prevent this, the server may return a 102 (Processing) status code to indicate that it is still processing the method.
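You can watch the 100 Continue handshake with curl, provided the server supports interim responses; the URL and upload file below are placeholders:

curl -v -H "Expect: 100-continue" --data-binary @bigfile.bin http://example.com/upload
# The -v trace shows the server answering "HTTP/1.1 100 Continue" before
# the request body is sent, followed by the final response.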

     

java.lang.OutOfMemoryError – Types and Causes

The 8 symptoms that surface them

The many thousands of java.lang.OutOfMemoryErrors that I’ve met during my career all bear one of the below eight symptoms.

This post from haritibcoblog explains what causes a particular error to be thrown, offers code examples that can cause such errors, and gives you solution guidelines for a fix.

The content is all based on my own experience.

  • java.lang.OutOfMemoryError:Java heap space
  • java.lang.OutOfMemoryError:GC overhead limit exceeded
  • java.lang.OutOfMemoryError:Permgen space
  • java.lang.OutOfMemoryError:Metaspace
  • java.lang.OutOfMemoryError:Unable to create new native thread
  • java.lang.OutOfMemoryError:Out of swap space?
  • java.lang.OutOfMemoryError:Requested array size exceeds VM limit
  • Out of memory:Kill process or sacrifice child

java.lang.OutOfMemoryError:Java heap space

Java applications are only allowed to use a limited amount of memory. This limit is specified during application startup. To make things more complex, Java memory is separated into two different regions. These regions are called Heap space and Permgen (for Permanent Generation):

[Figure: Java memory regions – heap space and PermGen]

The size of those regions is set during the Java Virtual Machine (JVM) launch and can be customized by specifying JVM parameters -Xmx and -XX:MaxPermSize. If you do not explicitly set the sizes, platform-specific defaults will be used.
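For example, a launch line capping both regions might look like this (the jar name is a placeholder; -XX:MaxPermSize applies to Java 7 and earlier):

java -Xmx1024m -XX:MaxPermSize=256m -jar myapp.jar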

The java.lang.OutOfMemoryError: Java heap space error will be triggered when the application attempts to add more data into the heap space area, but there is not enough room for it.

Note that there might be plenty of physical memory available, but the java.lang.OutOfMemoryError: Java heap space error is thrown whenever the JVM reaches the heap size limit.

What is causing it?

The most common reason for the java.lang.OutOfMemoryError: Java heap space error is simple – you try to fit an XXL application into an S-sized Java heap space. That is, the application just requires more Java heap space than is available to it to operate normally. Other causes for this OutOfMemoryError message are more complex and stem from programming errors:

  • Spikes in usage/data volume. The application was designed to handle a certain amount of users or a certain amount of data. When the number of users or the volume of data suddenly spikes and crosses that expected threshold, the operation which functioned normally before the spike ceases to operate and triggers the java.lang.OutOfMemoryError: Java heap space error.
  • Memory leaks. A particular type of programming error will lead your application to constantly consume more memory. Every time the leaking functionality of the application is used, it leaves some objects behind in the Java heap space. Over time the leaked objects consume all of the available Java heap space and trigger the already familiar java.lang.OutOfMemoryError: Java heap space error.

java.lang.OutOfMemoryError:GC overhead limit exceeded

Java runtime environment contains a built-in Garbage Collection (GC) process. In many other programming languages, the developers need to manually allocate and free memory regions so that the freed memory can be reused.

Java applications, on the other hand, only need to allocate memory. Whenever a particular space in memory is no longer used, a separate process called Garbage Collection clears the memory for them. How the GC detects that a particular part of memory is no longer used is explained in more detail in the Garbage Collection Handbook, but you can trust the GC to do its job well.

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is displayed when your application has exhausted pretty much all the available memory and GC has repeatedly failed to clean it.

What is causing it?

The java.lang.OutOfMemoryError: GC overhead limit exceeded error is the JVM’s way of signalling that your application spends too much time doing garbage collection with too little result. By default the JVM is configured to throw this error if it spends more than 98% of the total time doing GC and when after the GC only less than 2% of the heap is recovered.
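HotSpot lets you disable this particular check, though doing so usually just converts the error into one of the other symptoms later on (jar name is a placeholder):

java -XX:-UseGCOverheadLimit -jar myapp.jar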

What would happen if this GC overhead limit would not exist? Note that the java.lang.OutOfMemoryError: GC overhead limit exceeded error is only thrown when 2% of the memory is freed after several GC cycles. This means that the small amount of heap the GC is able to clean will likely be quickly filled again, forcing the GC to restart the cleaning process again. This forms a vicious cycle where the CPU is 100% busy with GC and no actual work can be done. End users of the application face extreme slowdowns – operations which normally complete in milliseconds take minutes to finish.

So the “java.lang.OutOfMemoryError: GC overhead limit exceeded” message is a pretty nice example of a fail fast principle in action.

java.lang.OutOfMemoryError:Permgen space

Java applications are only allowed to use a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into different regions which can be seen in the following figure:

The size of all those regions, including the permgen area, is set during the JVM launch. If you do not set the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation’s area in memory is exhausted.

[Figure: Java memory regions – heap space and PermGen]

What is causing it?

To understand the cause for the java.lang.OutOfMemoryError: PermGen space, we would need to understand what this specific memory area is used for.

For practical purposes, the permanent generation consists mostly of class declarations loaded and stored into PermGen. This includes the name and fields of the class, methods with the method bytecode, constant pool information, object arrays and type arrays associated with a class and Just In Time compiler optimizations.

From the above definition you can deduce that the PermGen size requirements depend both on the number of classes loaded as well as the size of such class declarations. Therefore we can say that the main cause for the java.lang.OutOfMemoryError: PermGen space is that either too many classes or too big classes are loaded to the permanent generation.
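The immediate workaround is therefore to enlarge the region at launch (Java 7 and earlier; jar name is a placeholder):

java -XX:MaxPermSize=512m -jar myapp.jar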

java.lang.OutOfMemoryError:Metaspace

Java applications are allowed to use only a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup. To make things more complex, Java memory is separated into different regions, as seen in the following figure:

The size of all those regions, including the metaspace area, can be specified during the JVM launch. If you do not determine the sizes yourself, platform-specific defaults will be used.

The java.lang.OutOfMemoryError: Metaspace message indicates that the Metaspace area in memory is exhausted.

[Figure: Java memory regions – heap space and Metaspace]

What is causing it?

If you are not a newcomer to the Java landscape, you might be familiar with another concept in Java memory management called PermGen. Starting from Java 8, the memory model in Java was significantly changed. A new memory area called Metaspace was introduced and Permgen was removed. This change was made due to variety of reasons, including but not limited to:

  • The required size of PermGen was hard to predict. It resulted in either under-provisioning, triggering java.lang.OutOfMemoryError: Permgen space errors, or over-provisioning, resulting in wasted resources.
  • GC performance improvements, enabling concurrent class data de-allocation without GC pauses and specific iterators on metadata.
  • Support for further optimizations such as G1 concurrent class unloading.

So if you were familiar with PermGen then all you need to know as background is that – whatever was in PermGen before Java 8 (name and fields of the class, methods of a class with the bytecode of the methods, constant pool, JIT optimizations etc) – is now located in Metaspace.

As you can see, Metaspace size requirements depend both upon the number of classes loaded as well as the size of such class declarations. So it is easy to see the main cause for the java.lang.OutOfMemoryError: Metaspace is: either too many classes or too big classes being loaded to the Metaspace.
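By default Metaspace can grow until native memory runs out; you can cap (or, if you hit the error with a cap in place, raise) its limit explicitly (jar name is a placeholder):

java -XX:MaxMetaspaceSize=256m -jar myapp.jar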

java.lang.OutOfMemoryError:Unable to create new native thread

Java applications are multi-threaded by nature. What this means is that the programs written in Java can do several things (seemingly) at once. For example – even on machines with just one processor – while you drag content from one window to another, the movie played in the background does not stop just because you carry out several operations at once.

A way to think about threads is to think of them as workers to whom you can submit tasks to carry out. If you had only one worker, he or she could only carry out one task at the time. But when you have a dozen workers at your disposal they can simultaneously fulfill several of your commands.

Now, as with workers in physical world, threads within the JVM need some elbow room to carry out the work they are summoned to deal with. When there are more threads than there is room in memory we have built a foundation for a problem:

The message java.lang.OutOfMemoryError: Unable to create new native thread means that the Java application has hit the limit of how many Threads it can launch.


What is causing it?

You have a chance to face the java.lang.OutOfMemoryError: Unable to create new native thread whenever the JVM asks for a new thread from the OS. Whenever the underlying OS cannot allocate a new native thread, this OutOfMemoryError will be thrown. The exact limit for native threads is very platform-dependent thus we recommend to find out those limits by running a test similar to the below example. But, in general, the situation causing java.lang.OutOfMemoryError: Unable to create new native thread goes through the following phases:

  1. A new Java thread is requested by an application running inside the JVM
  2. JVM native code proxies the request to create a new native thread to the OS
  3. The OS tries to create a new native thread which requires memory to be allocated to the thread
  4. The OS will refuse native memory allocation either because the 32-bit Java process size has depleted its memory address space – e.g. (2-4) GB process size limit has been hit – or the virtual memory of the OS has been fully depleted
  5. The java.lang.OutOfMemoryError: Unable to create new native thread error is thrown.
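On Linux you can compare the relevant limits against the current thread count, and shrink per-thread stacks to fit more threads into the same address space (myapp.jar is a placeholder):

ulimit -u            # max user processes (threads count against this on Linux)
ps -eLf | wc -l      # rough count of threads currently running system-wide
java -Xss512k -jar myapp.jar   # smaller per-thread stack = room for more threads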

 

java.lang.OutOfMemoryError:Out of swap space?

Java applications are given limited amount of memory during the startup. This limit is specified via the -Xmx and other similar startup parameters. In situations where the total memory requested by the JVM is larger than the available physical memory, operating system starts swapping out the content from memory to hard drive.

The java.lang.OutOfMemoryError: Out of swap space? error indicates that the swap space is also exhausted and the new attempted allocation fails due to the lack of both physical memory and swap space.


What is causing it?

The java.lang.OutOfMemoryError: Out of swap space? error is thrown by the JVM when an allocation request for bytes from the native heap fails and the native heap is close to exhaustion. The message indicates the size (in bytes) of the allocation which failed and the reason for the memory request.

The problem occurs in situations where the Java processes have started swapping, which, recalling that Java is a garbage-collected language, is already not a good situation. Modern GC algorithms do a good job, but when faced with latency issues caused by swapping, the GC pauses tend to increase to levels not tolerable by most applications.

java.lang.OutOfMemoryError: Out of swap space? is often caused by operating system level issues, such as:

  • The operating system is configured with insufficient swap space.
  • Another process on the system is consuming all memory resources.

It is also possible that the application fails due to a native leak, for example, if application or library code continuously allocates memory but does not release it to the operating system.

java.lang.OutOfMemoryError:Requested array size exceeds VM limit

Java has got a limit on the maximum array size your program can allocate. The exact limit is platform-specific but is generally somewhere between 1 and 2.1 billion elements.

When you face the java.lang.OutOfMemoryError: Requested array size exceeds VM limit, this means that the application that crashes with the error is trying to allocate an array larger than the Java Virtual Machine can support.


What is causing it?

The error is thrown by the native code within the JVM. It happens before allocating memory for an array when the JVM performs a platform-specific check: whether the allocated data structure is addressable in this platform. This error is less common than you might initially think.

The reason you only seldom face this error is that Java arrays are indexed by int. The maximum positive int in Java is 2^31 – 1 = 2,147,483,647. And the platform-specific limits can be really close to this number – for example, on my 64-bit MacBook Pro on Java 1.7, I can happily initialize arrays with up to 2,147,483,645 (Integer.MAX_VALUE – 2) elements.

Increasing the length of the array by one to Integer.MAX_VALUE-1 results in the familiar OutOfMemoryError:

Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit

But the limit might not be that high – on 32-bit Linux with OpenJDK 6, you will hit the “java.lang.OutOfMemoryError: Requested array size exceeds VM limit” already when allocating an array with ~1.1 billion elements. To understand the limits of your specific environments run the small test program described in the next chapter.

Out of memory:Kill process or sacrifice child

In order to understand this error, we need to recap some operating system basics. As you know, operating systems are built on the concept of processes. Those processes are shepherded by several kernel jobs, one of which, named "Out of memory killer", is of interest to us in this particular case.

This kernel job can annihilate your processes under extremely low memory conditions. When such a condition is detected, the Out of memory killer is activated and picks a process to kill. The target is picked using a set of heuristics scoring all processes and selecting the one with the worst score to kill. The Out of memory: Kill process or sacrifice child is thus different from other errors covered in our OOM handbook as it is not triggered nor proxied by the JVM but is a safety net built into the operating system kernels.

The Out of memory: kill process or sacrifice child error is generated when the available virtual memory (including swap) is consumed to the extent where the overall operating system stability is put to risk. In such case the Out of memory killer picks the rogue process and kills it.
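When this happens, the kernel logs the event, so the quickest confirmation is to search the logs (the log file location varies by distribution):

dmesg | grep -i "killed process"
grep -i "out of memory" /var/log/messages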


What is causing it?

By default, Linux kernels allow processes to request more memory than currently available in the system. This makes all the sense in the world, considering that most of the processes never actually use all of the memory they allocate. The easiest comparison to this approach would be the broadband operators. They sell all the consumers a 100Mbit download promise, far exceeding the actual bandwidth present in their network. The bet is again on the fact that the users will not simultaneously all use their allocated download limit. Thus one 10Gbit link can successfully serve way more than the 100 users our simple math would permit.

A side effect of this approach becomes visible when some of your programs are on the path to depleting the system's memory. This can lead to extremely low memory conditions, where no pages can be allocated to a process. You might have faced such a situation, where not even the root account can kill the offending task. To prevent such situations, the killer activates and identifies the rogue process to be killed.

You can read more about fine-tuning the behaviour of the "Out of memory killer" in the Red Hat documentation.

Now that we have the context, how can you know what triggered the "killer" and woke you up at 5AM? One common trigger is hidden in the operating system configuration. When you check /proc/sys/vm/overcommit_memory, you have the first hint – the value specified there determines whether all malloc() calls are allowed to succeed.

An overcommitting configuration allows the rogue process to allocate more and more memory, which can eventually trigger the "Out of memory killer" to do exactly what it is meant to do.
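As a quick illustration, you can inspect and change this setting at runtime (0 is the heuristic default, 2 disables overcommit; setting it via sysctl is equivalent to writing the file directly):

cat /proc/sys/vm/overcommit_memory
sudo sysctl -w vm.overcommit_memory=2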

 

Linux – Concepts – IPTABLES v/s FIREWALLD

Today we will walk through iptables and firewalld: we will learn about the history of these two, how to install them, and how to configure them on our Linux distributions.

Let's begin without wasting any more time.

What is iptables?

First, we need to know what iptables is. Most senior IT professionals know about it and have worked with it as well. iptables is a program that allows a user to configure the firewall security tables provided by the Linux kernel firewall, and the chains within them, so that the user can add or remove firewall rules to meet his or her security requirements. iptables uses different kernel modules and different protocols so that the user can get the best out of it. For example, iptables is used for IPv4 (IP version 4, 32-bit addresses) and ip6tables for IPv6 (IP version 6, 128-bit addresses), for both TCP and UDP. Normally, iptables rules are configured by a System Administrator, System Analyst or IT Manager. You must have root privileges to execute iptables rules. The Linux kernel uses the Netfilter framework to provide the various networking-related operations that are performed using iptables. Previously, ipchains was used in most Linux distributions for the same purpose. Every iptables rule is handled directly by the Linux kernel itself, and this is known as kernel duty. Whatever GUI tools or other security tools you use to configure your server's firewall security, at the end of the day it is converted into iptables rules and supplied to the kernel to perform the operation.

History of iptables

The rise of iptables began with netfilter. Paul "Rusty" Russell was the initial author and the head think tank behind netfilter/iptables. Later he was joined by many other tech people, who formed the Netfilter core team and developed and maintained the netfilter/iptables project as a joint effort, like many other open source projects. Harald Welte was the leader until 2007, and then Patrick McHardy was the head until 2013. Currently, the netfilter core team is headed by Pablo Neira Ayuso.

To know more about netfilter, its history, and the history of iptables, please visit the netfilter project website at netfilter.org.

How to install iptables

Nowadays, every Linux kernel ships with iptables support, and the tool comes pre-installed on most popular modern Linux distributions. On most Linux systems, iptables is installed in the /usr/sbin/iptables directory. It can also be found in /sbin/iptables, but since iptables is more like a service than an "essential binary", the preferred location remains /usr/sbin.

For Ubuntu or Debian

sudo apt-get install iptables

For CentOS

sudo yum install iptables-services

For RHEL

sudo yum install iptables

Iptables version

To know your iptables version, type the following command in your terminal.

sudo iptables --version

Starting & Stopping your iptables firewall

For OpenSUSE 42.1, type the following to stop.

sudo /sbin/rcSuSEfirewall2 stop

To start it again

sudo /sbin/rcSuSEfirewall2 start

For Ubuntu, type the following to stop.

sudo service ufw stop

To start it again

sudo service ufw start

For Debian & RHEL , type the following to stop.

sudo /etc/init.d/iptables stop

To start it again

sudo /etc/init.d/iptables start

For CentOS, type the following to stop.

sudo service iptables stop

To start it again

sudo service iptables start

Listing all iptables rules

To see all the rules that are currently present and active in your iptables, simply open a terminal and type the following.

sudo iptables -L

If no rules exist, i.e., none have been added to your iptables firewall so far, you will get just an empty listing of the default chains.


That listing shows three (3) chains – INPUT, FORWARD and OUTPUT – with no rules in them. Indeed, I haven't added any yet.

Type the following to know the status of the chains of your iptables firewall.

sudo iptables -S

This prints the rules in their command form, letting you see each chain's default policy – whether it is accepting or not.

Clear all iptables rules

To clear all the rules from your iptables firewall, please type the following. This is normally known as flushing your iptables rules.

sudo iptables -F

If you want to flush the INPUT chain only, or any individual chains, issue the below commands as per your requirements.

sudo iptables -F INPUT
sudo iptables -F OUTPUT
sudo iptables -F FORWARD
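Keep in mind that rules added or flushed this way only affect the running kernel and are lost on reboot. To snapshot and later restore the current rule set, you can use iptables-save and iptables-restore (the file path here is just an example):

sudo sh -c 'iptables-save > /etc/iptables.rules'
sudo sh -c 'iptables-restore < /etc/iptables.rules'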

ACCEPT or DROP Chains

To set a particular chain's policy to ACCEPT or DROP, issue any of the following commands in your terminal to meet your requirements.

iptables --policy INPUT DROP

The above rule sets the INPUT chain's default policy to DROP, so the server will not accept any incoming packet that is not explicitly allowed. To revert it back to ACCEPT, do the following

iptables --policy INPUT ACCEPT

The same goes for the other chains as well:

iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP

Note: By default, all chains of iptables ( INPUT, OUTPUT, FORWARD ) are in ACCEPT mode. This is known as Policy Chain Default Behavior.

Allowing any port

If you are running a web server on your host, you must allow port 80 through your iptables firewall so that the server can listen and respond on it. By default, web servers run on port 80. Let's do that then.

sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

On the above line, -A stands for append, meaning we are adding a new rule to the iptables list. INPUT stands for the INPUT chain, -p stands for protocol and --dport stands for destination port. Similarly, you can allow the SSH port as well.

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

By default, SSH runs on port 22, but it is good practice not to run SSH on port 22. Always run SSH on a different port. To do so, open the /etc/ssh/sshd_config file in your favorite editor and change Port 22 to another port.
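As a sketch, assuming you choose port 2222: set the Port directive in /etc/ssh/sshd_config, restart sshd, and allow the new port instead of 22.

Port 2222

sudo service sshd restart
sudo iptables -A INPUT -p tcp --dport 2222 -j ACCEPT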

Blocking any port

Say we want to block port 135. We can do it by

sudo iptables -A INPUT -p tcp --dport 135 -j DROP

If you want to prevent your server from initiating any SSH connection to another host/server, issue the following command

sudo iptables -A OUTPUT -p tcp --dport 22 -j DROP

By doing so, no one can use your server to initiate an SSH connection from it. The OUTPUT chain will filter and DROP any outgoing TCP connection towards other hosts on port 22.

Allowing specific IP with Port

sudo iptables -A INPUT -p tcp -s 0/0 --dport 22  -j ACCEPT

Here -s 0/0 stands for any source, i.e., any IP address, so your server will accept TCP packets with destination port 22 from anywhere. If you want to allow only one particular IP, use the following.

sudo iptables -A INPUT -p tcp -s 12.12.12.12/32 --dport 22  -j ACCEPT

In the above example, you are allowing only the IP address 12.12.12.12 to connect to the SSH port. All other IP addresses will not be able to connect to port 22. Similarly, you can allow a range by using CIDR values, such as

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j ACCEPT

The above example shows how you can allow a whole IP block to connect on port 22. It will accept connections from IPs 12.12.12.1 through 12.12.12.255.

If you want to block such an IP address range, do the reverse by replacing ACCEPT with DROP, like the following

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j DROP

So it will not allow connections on port 22 from the 12.12.12.1 to 12.12.12.255 IP addresses.

Blocking ICMP

If you want to block ICMP (ping) requests to and from your server, you can try the following. The first one blocks outgoing ICMP echo requests, so your server cannot ping other hosts.

sudo iptables -A OUTPUT -p icmp --icmp-type 8 -j DROP

Now, try to ping google.com. Your OpenSUSE server will not be able to ping google.com.

If you want to block incoming ICMP (ping) echo requests to your server, just type the following in your terminal.

sudo iptables -I INPUT -p icmp --icmp-type 8 -j DROP

Now it will not reply to any ICMP ping echo request. Say your server's IP address is 13.13.13.13; if you ping that IP, you will see that your server does not respond to the ping request.

Blocking MySql / MariaDB Port

As MySQL holds your database, you must protect it from outside attack. Allow only your trusted application servers' IP addresses to connect to your MySQL server. To allow a trusted block:

sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

So it will accept MySQL connections only from the 192.168.1.0/24 IP block. By default, MySQL runs on port 3306.
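Note that the ACCEPT rule above only protects you if non-matching traffic is actually blocked afterwards. If your INPUT policy is ACCEPT, follow it with a DROP rule for everyone else; since rules are matched in order, the trusted block is accepted first:

sudo iptables -A INPUT -p tcp --dport 3306 -j DROP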

Blocking SMTP

If you are not running a mail server on your host, or if your server is not configured to act as a mail server, you should block SMTP so that your server cannot send spam or any mail towards any domain. Do this to block any outgoing mail from your server:

sudo iptables -A OUTPUT -p tcp --dport 25 -j DROP

Block DDoS

We are all familiar with the term DDoS. To mitigate simple HTTP floods, issue the following command in your terminal.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 20/minute --limit-burst 100 -j ACCEPT

Here -m limit caps matching to an average of 20 packets per minute once an initial burst of 100 has been used up. Tune the numerical values to meet your requirements; this is just a reasonable starting point.

You can add further protection with the following kernel settings (a note on making them persistent follows the list):

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1800 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling 
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 1280 > /proc/sys/net/ipv4/tcp_max_syn_backlog
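These echo commands take effect immediately but are lost on reboot. To make such settings persistent, put the equivalent keys in /etc/sysctl.conf (two illustrative entries shown) and reload it:

net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1

sudo sysctl -p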

Blocking Port Scanning

There are hundreds of people out there scanning your server's open ports and trying to break your server's security. To block such scans:

sudo iptables -N block-scan
sudo iptables -A block-scan -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j RETURN
sudo iptables -A block-scan -j DROP

Here, block-scan is a name of a new chain.
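Note that creating the chain by itself does nothing; you still have to send traffic through it from one of the built-in chains, for example:

sudo iptables -A INPUT -p tcp -j block-scan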

Blocking Bad Ports

You may need to block some bad ports for your server as well. Here is how you can do this.

badport="135,136,137,138,139,445"
sudo iptables -A INPUT -p tcp -m multiport --dports "$badport" -j DROP
sudo iptables -A INPUT -p udp -m multiport --dports "$badport" -j DROP

You can add more ports according to your needs.

What is firewalld?

Firewalld provides a dynamically managed firewall with support for network/firewall zones that define the trust level of network connections or interfaces. It has support for IPv4 and IPv6 firewall settings, ethernet bridges and IP sets. There is a separation of runtime and permanent configuration options. It also provides an interface for services or applications to add firewall rules directly.

The former firewall model with system-config-firewall/lokkit was static, and every change required a complete firewall restart. This also included unloading the firewall's netfilter kernel modules and loading the modules needed for the new configuration. Unloading the modules broke stateful firewalling and established connections. The firewall daemon, on the other hand, manages the firewall dynamically and applies changes without restarting the whole firewall, so there is no need to reload all firewall kernel modules. However, using a firewall daemon requires that all firewall modifications be done through that daemon, to make sure that the state in the daemon and the firewall in the kernel stay in sync. The firewall daemon cannot parse firewall rules added by the iptables and ebtables command-line tools. The daemon provides information about the currently active firewall settings via D-Bus, and also accepts changes via D-Bus using PolicyKit authentication methods.

So, firewalld uses zones and services instead of chains and rules to perform its operations, and it can manage rules dynamically, allowing updates and modifications without breaking existing sessions and connections.

It has the following features.

  • D-Bus API.
  • Timed firewall rules.
  • Rich Language for specific firewall rules.
  • IPv4 and IPv6 NAT support.
  • Firewall zones.
  • IP set support.
  • Simple log of denied packets.
  • Direct interface.
  • Lockdown: Whitelisting of applications that may modify the firewall.
  • Support for iptables, ip6tables, ebtables and ipset firewall backends.
  • Automatic loading of Linux kernel modules.
  • Integration with Puppet.

To know more about firewalld, please visit the official site at firewalld.org.

How to install firewalld

Before installing firewalld, please make sure you stop iptables and that iptables is no longer in use. To do so,

sudo systemctl stop iptables

This will stop the iptables service on your system.

Then make sure iptables will not be used by your system any more by issuing the below command in the terminal (masking prevents the service from being started).

sudo systemctl mask iptables

Now, check the status of iptables.

sudo systemctl status iptables


Now we are ready to install firewalld onto our system.

For Ubuntu

To install it on Ubuntu, you must remove UFW first and then you can install Firewalld. To remove UFW, issue the below command on the terminal.

sudo apt-get remove ufw

After removing UFW, issue the below command in the terminal

sudo apt-get install firewall-applet

Or

You can also open Ubuntu Software Center, search for "firewall-applet" and install it on your Ubuntu system.

For RHEL, CentOS & Fedora

Type the below command to install firewalld on your CentOS system.

sudo yum install firewalld firewall-config -y

How to configure firewalld

Before configuring firewalld, we must know the status of firewalld after the installation. To know that, type the following.

sudo systemctl status firewalld


As firewalld works on a zone basis, we need to check all the zones and services, even though we haven't done any configuring yet.

For Zones

sudo firewall-cmd --get-active-zones


or

sudo firewall-cmd --get-zones


To know the default zone, issue the below command

sudo firewall-cmd --get-default-zone


And, For Services

sudo firewall-cmd --get-services


The output lists the services covered by firewalld.

Setting Default Zone

An important note: after each modification, you need to reload firewalld for your changes to take effect.

To set the default zone

sudo firewall-cmd --set-default-zone=internal

or

sudo firewall-cmd --set-default-zone=public

After changing the zone, check whether it changed or not.

sudo firewall-cmd --get-default-zone
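You can also bind a particular network interface to a zone, so that traffic arriving on it is filtered by that zone's rules (the interface name here is just an example):

sudo firewall-cmd --zone=internal --change-interface=eth0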

Adding Port in Public Zone

sudo firewall-cmd --permanent --zone=public --add-port=80/tcp


This will add TCP port 80 to the public zone of firewalld permanently (the --permanent flag makes it survive restarts once reloaded). You can add your desired port as well by replacing 80 with yours.

Now reload the firewalld.

sudo firewall-cmd --reload

Now, check the status to see whether TCP port 80 has been added or not.

sudo firewall-cmd --zone=public --list-ports


Here, you can see that tcp port 80 has been added.

Or you can try something like this.

sudo firewall-cmd --zone=public --list-all


Removing Port from Public Zone

To remove TCP port 80 from the public zone, type the following. Note that without --permanent this only changes the runtime configuration; add --permanent and reload, as above, if you want the removal to survive restarts.

sudo firewall-cmd --zone=public --remove-port=80/tcp

You will see a “success” text echoing in your terminal.

You can put your desired port as well by replacing 80 with your own port.

Adding Services in Firewalld

To add ftp service in firewalld, issue the below command

sudo firewall-cmd --zone=public --add-service=ftp

You will see a “success” text echoing in your terminal.

Similarly for adding smtp service, issue the below command

sudo firewall-cmd --zone=public --add-service=smtp

Replace ftp and smtp with the services you want to add to firewalld.
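Like ports, services added this way change only the runtime configuration and disappear on restart. To make a service permanent, add the --permanent flag and reload, for example:

sudo firewall-cmd --permanent --zone=public --add-service=ftp
sudo firewall-cmd --reload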

Removing Services from Firewalld

For removing ftp & smtp services from firewalld, issue the below command in the terminal.

sudo firewall-cmd --zone=public --remove-service=ftp
sudo firewall-cmd --zone=public --remove-service=smtp

Block Any Incoming and Any Outgoing Packet(s)

If you wish, you can block all incoming and outgoing packets/connections using firewalld. This is known as firewalld's "panic" mode. To do so, issue the below command.

sudo firewall-cmd --panic-on

You will see a “success” text echoing in your terminal.

After doing this, you will not be able to ping a host or even browse any websites.

To turn this off, issue the below command in your terminal.

sudo firewall-cmd --panic-off

Adding IP Address in Firewalld

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" accept'

By doing so, firewalld will accept IPv4 packets from the source IP 192.168.1.4.

Blocking IP Address From Firewalld

Similarly, to block any IP address

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" reject'

By doing so, firewalld will reject every IPv4 packet from the source IP 192.168.1.4 (unlike DROP, reject sends an error back to the sender).

I have stuck to the very basics of firewalld here so that you can easily understand how it works and how it differs from iptables.

That’s all for today. Hope you enjoy reading this article.

Take care.

Linux – Concepts – Ulimits & Sysctl

ulimit and sysctl

The ulimit and sysctl programs allow you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and thereby makes the system unresponsive for other users.

Code Listing 1: ulimit example

# ulimit -a 
core file size          (blocks, -c) 0 
data seg size           (kbytes, -d) unlimited 
file size               (blocks, -f) unlimited 
pending signals                 (-i) 8191 
max locked memory       (kbytes, -l) 32 
max memory size         (kbytes, -m) unlimited 
open files                      (-n) 1024 
pipe size            (512 bytes, -p) 8 
POSIX message queues     (bytes, -q) 819200 
stack size              (kbytes, -s) 8192 
cpu time               (seconds, -t) unlimited 
max user processes              (-u) 8191 
virtual memory          (kbytes, -v) unlimited 
file locks                      (-x) unlimited

All these settings can be manipulated. A good example is this bash forkbomb that forks as many processes as possible and can crash systems where no user limits are set:

Warning: Do not run this in a shell! If no limits are set your system will either become unresponsive or might even crash.

Code Listing 2: A bash forkbomb

$ :(){ :|:& };:
refer to my Post :- 
https://haritibcoblog.com/2016/07/10/linux-fork-bomb-test-epic/

Now this is not good – any user with shell access to your box could take it down. But if that user can only start 30 processes the damage will be minimal. So let’s set a process limit:

Gentoo Note: A too small number of processes can break the use of portage. So, don’t be too strict.

Code Listing 3: Setting a process limit

# ulimit -u 30 
# ulimit -a 
… 
max user processes              (-u) 30 
…

If you try to run the forkbomb now, it should run but throw "fork: resource temporarily unavailable" error messages. This means that your system did not allow the forkbomb to start more processes. The other options of ulimit can help with similar problems, but you should be careful not to lock yourself out – setting the data seg size too small will even prevent bash from starting!
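Also note that limits set with ulimit apply only to the current shell and its children. To enforce a limit across logins, you can set it in /etc/security/limits.conf via pam_limits (the username and values below are illustrative):

someuser  soft  nproc  30
someuser  hard  nproc  50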

sysctl is a similar tool: it allows you to configure kernel parameters at runtime. If you wish to keep settings persistent across reboots, you should edit /etc/sysctl.conf – be aware that wrong settings may break things in unforeseen ways.

Code Listing 4: Exploring sysctl variables

# sysctl -a 
… 
vm.swappiness = 60 
…

The list of variables is quite long (367 lines on my system), but I picked out vm.swappiness here. It controls how aggressively the kernel swaps: the higher it is (with a maximum of 100), the more swap will be used. This can affect performance a lot on systems with little memory, depending on load and other factors.

Code Listing 5: Reducing swappiness

# sysctl vm.swappiness=0 
vm.swappiness = 0

The effects of changing this setting are usually not felt instantly. But you can change many settings, especially network-related ones, this way. For servers this can offer a nice performance boost, but as with ulimit, careless usage might cause your system to misbehave or slow down. If you don't know what a variable controls, you should not modify it!
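As a sketch, to keep a lower swappiness across reboots, add the key to /etc/sysctl.conf and reload the file:

vm.swappiness = 10

# sysctl -p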

Linux Command – Using Netstat the Proper Way !!

How to install netstat

netstat is a useful tool for checking your network configuration and activity. It is in fact a collection of several tools lumped together.

Install “net-tools” package using yum

[root@livedvd ~]$ sudo yum install net-tools
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirror.secureax.com
* extras: centos.mirror.secureax.com
* updates: centos.mirror.secureax.com
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================
Package         Arch         Version                          Repository  Size
================================================================================
Installing:
net-tools       x86_64       2.0-0.17.20131004git.el7         base       304 k
Transaction Summary
================================================================================
Install  1 Package
Total download size: 304 k
Installed size: 917 k
Is this ok [y/d/N]: y
Downloading packages:
net-tools-2.0-0.17.20131004git.el7.x86_64.rpm              | 304 kB   00:00
Running transaction check

Running transaction test
Transaction test succeeded
Running transaction
Installing : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Verifying  : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Installed:
net-tools.x86_64 0:2.0-0.17.20131004git.el7

 

Complete!

 

The netstat Command

Displaying the Routing Table

When you invoke netstat with the -r flag, it displays the kernel routing table in the way we've been doing with route. On vstout, it produces:

# netstat -nr

 Kernel IP routing table
 Destination   Gateway      Genmask         Flags  MSS Window  irtt Iface
 127.0.0.1     *            255.255.255.255 UH       0 0          0 lo
 172.16.1.0    *            255.255.255.0   U        0 0          0 eth0
 172.16.2.0    172.16.1.1   255.255.255.0   UG       0 0          0 eth0

The -n option makes netstat print addresses as dotted-quad IP numbers rather than the symbolic host and network names. This option is especially useful when you want to avoid address lookups over the network (e.g., to a DNS or NIS server).

The second column of netstat‘s output shows the gateway to which the routing entry points. If no gateway is used, an asterisk is printed instead. The third column shows the “generality” of the route, i.e., the network mask for this route. When given an IP address to find a suitable route for, the kernel steps through each of the routing table entries, taking the bitwise AND of the address and the genmask before comparing it to the target of the route.

The fourth column displays the following flags that describe the route:

G The route uses a gateway.
U The interface to be used is up.
H Only a single host can be reached through the route. For example, this is the case for the loopback entry 127.0.0.1.
D This route is dynamically created. It is set if the table entry has been generated by a routing daemon like gated or by an ICMP redirect message.
M This route is set if the table entry was modified by an ICMP redirect message.
! The route is a reject route and datagrams will be dropped.

 

The next three columns show the MSS, Window and irtt that will be applied to TCP connections established via this route. The MSS is the Maximum Segment Size and is the size of the largest datagram the kernel will construct for transmission via this route. The Window is the maximum amount of data the system will accept in a single burst from a remote host. The acronym irtt stands for "initial round trip time." The TCP protocol ensures that data is reliably delivered between hosts by retransmitting a datagram if it has been lost. The TCP protocol keeps a running count of how long it takes for a datagram to be delivered to the remote end and an acknowledgement to be received, so that it knows how long to wait before assuming a datagram needs to be retransmitted; this process is called the round-trip time. The initial round-trip time is the value that the TCP protocol will use when a connection is first established. For most network types, the default value is okay, but for some slow networks, notably certain types of amateur packet radio networks, the time is too short and causes unnecessary retransmission. The irtt value can be set using the route command; see the example below. Values of zero in these fields mean that the default is being used.
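For example, a route with a longer initial round-trip time could be added like this (the network and gateway addresses are illustrative):

# route add -net 192.168.5.0 netmask 255.255.255.0 gw 172.16.1.1 irtt 300 dev eth0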

Finally, the last field displays the network interface that this route will use.

Displaying Interface Statistics

When invoked with the -i flag, netstat displays statistics for the network interfaces currently configured. If the -a option is also given, it prints all interfaces present in the kernel, not only those that have been configured currently. On vstout, the output from netstat will look like this:

# netstat -i
 Kernel Interface table
 Iface MTU Met  RX-OK RX-ERR RX-DRP RX-OVR  TX-OK TX-ERR TX-DRP TX-OVR Flags
 lo      0   0   3185      0      0      0   3185      0      0      0 BLRU
 eth0 1500   0 972633     17     20    120 628711    217      0      0 BRU

The MTU and Met fields show the current MTU and metric values for that interface. The RX and TX columns show how many packets have been received or transmitted error-free (RX-OK/TX-OK) or damaged (RX-ERR/TX-ERR); how many were dropped (RX-DRP/TX-DRP); and how many were lost because of an overrun (RX-OVR/TX-OVR).

The last column shows the flags that have been set for this interface. These characters are one-character versions of the long flag names that are printed when you display the interface configuration with ifconfig:

B A broadcast address has been set.
L This interface is a loopback device.
M All packets are received (promiscuous mode).
O ARP is turned off for this interface.
P This is a point-to-point connection.
R Interface is running.
U Interface is up.

 

Displaying Connections

netstat supports a set of options to display active or passive sockets. The options -t, -u, -w, and -x show active TCP, UDP, RAW, or Unix socket connections. If you provide the -a flag in addition, sockets that are waiting for a connection (i.e., listening) are displayed as well. This display will give you a list of all servers that are currently running on your system.

Invoking netstat -ta on vlager produces this output:

$ netstat -ta
 Active Internet Connections
 Proto Recv-Q Send-Q Local Address    Foreign Address    (State)
 tcp        0      0 *:domain         *:*                LISTEN
 tcp        0      0 *:time           *:*                LISTEN
 tcp        0      0 *:smtp           *:*                LISTEN
 tcp        0      0 vlager:smtp      vstout:1040        ESTABLISHED
 tcp        0      0 *:telnet         *:*                LISTEN
 tcp        0      0 localhost:1046   vbardolino:telnet  ESTABLISHED
 tcp        0      0 *:chargen        *:*                LISTEN
 tcp        0      0 *:daytime        *:*                LISTEN
 tcp        0      0 *:discard        *:*                LISTEN
 tcp        0      0 *:echo           *:*                LISTEN
 tcp        0      0 *:shell          *:*                LISTEN
 tcp        0      0 *:login          *:*                LISTEN

This output shows most servers simply waiting for an incoming connection. However, the fourth line shows an incoming SMTP connection from vstout, and the sixth line tells you there is an outgoing telnet connection to vbardolino.

Using the -a flag by itself will display all sockets from all families.

Top 20 netstat commands for network management

  1. Listing all the LISTENING Ports of TCP and UDP connections

Listing all ports (both TCP and UDP) using netstat -a option.

# netstat -a | more

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (servers and established)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  2. Listing TCP Ports connections

Listing only TCP (Transmission Control Protocol) port connections using netstat -at.

# netstat -at

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT

 

  3. Listing UDP Ports connections

Listing only UDP (User Datagram Protocol ) port connections using netstat -au.

# netstat -au

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*
 udp        0      0 *:mdns                      *:*

 

  4. Listing all LISTENING Connections

Listing all active listening ports connections with netstat -l.

# netstat -l

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  5. Listing all TCP Listening Ports

Listing all active listening TCP ports by using option netstat -lt.

# netstat -lt

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:dctp                      *:*                         LISTEN
 tcp        0      0 *:mysql                     *:*                         LISTEN
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:munin                     *:*                         LISTEN
 tcp        0      0 *:ftp                       *:*                         LISTEN
 tcp        0      0 localhost.localdomain:ipp   *:*                         LISTEN
 tcp        0      0 localhost.localdomain:smtp  *:*                         LISTEN
 tcp        0      0 *:http                      *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 *:https                     *:*                         LISTEN

 

  6. Listing all UDP Listening Ports

Listing all active listening UDP ports by using option netstat -lu.

# netstat -lu

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:39578                     *:*
 udp        0      0 *:meregister                *:*
 udp        0      0 *:vpps-qua                  *:*
 udp        0      0 *:openvpn                   *:*
 udp        0      0 *:mdns                      *:*
 udp        0      0 *:sunrpc                    *:*
 udp        0      0 *:ipp                       *:*
 udp        0      0 *:60222                     *:*
 udp        0      0 *:mdns                      *:*

 

  7. Listing all UNIX Listening Ports

Listing all active UNIX listening ports using netstat -lx.

# netstat -lx

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     4171   @ISCSIADM_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     5767   /var/run/cups/cups.sock
 unix  2      [ ACC ]     STREAM     LISTENING     7082   @/tmp/fam-root-
 unix  2      [ ACC ]     STREAM     LISTENING     6157   /dev/gpmctl
 unix  2      [ ACC ]     STREAM     LISTENING     6215   @/var/run/hald/dbus-IcefTIUkHm
 unix  2      [ ACC ]     STREAM     LISTENING     6038   /tmp/.font-unix/fs7100
 unix  2      [ ACC ]     STREAM     LISTENING     6175   /var/run/avahi-daemon/socket
 unix  2      [ ACC ]     STREAM     LISTENING     4157   @ISCSID_UIP_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     60835836 /var/lib/mysql/mysql.sock
 unix  2      [ ACC ]     STREAM     LISTENING     4645   /var/run/audispd_events
 unix  2      [ ACC ]     STREAM     LISTENING     5136   /var/run/dbus/system_bus_socket
 unix  2      [ ACC ]     STREAM     LISTENING     6216   @/var/run/hald/dbus-wsUBI30V2I
 unix  2      [ ACC ]     STREAM     LISTENING     5517   /var/run/acpid.socket
 unix  2      [ ACC ]     STREAM     LISTENING     5531   /var/run/pcscd.comm

 

  8. Showing Statistics by Protocol

Displays summary statistics for each protocol. By default, statistics are shown for the IP, ICMP, TCP, and UDP protocols; combine -s with -t or -u (shown below) to restrict the output to a single protocol.

# netstat -s

Ip:
 2461 total packets received
 0 forwarded
 0 incoming packets discarded
 2431 incoming packets delivered
 2049 requests sent out
 Icmp:
 0 ICMP messages received
 0 input ICMP message failed.
 ICMP input histogram:
 1 ICMP messages sent
 0 ICMP messages failed
 ICMP output histogram:
 destination unreachable: 1
 Tcp:
 159 active connections openings
 1 passive connection openings
 4 failed connection attempts
 0 connection resets received
 1 connections established
 2191 segments received
 1745 segments send out
 24 segments retransmited
 0 bad segments received.
 4 resets sent
 Udp:
 243 packets received
 1 packets to unknown port received.
 0 packet receive errors
 281 packets sent

 

  9. Showing Statistics by TCP Protocol

Showing statistics of only TCP protocol by using option netstat -st.

# netstat -st

Tcp:
 2805201 active connections openings
 1597466 passive connection openings
 1522484 failed connection attempts
 37806 connection resets received
 1 connections established
 57718706 segments received
 64280042 segments send out
 3135688 segments retransmited
 74 bad segments received.
 17580 resets sent

 

  10. Showing Statistics by UDP Protocol
# netstat -su

Udp:
 1774823 packets received
 901848 packets to unknown port received.
 0 packet receive errors
 2968722 packets sent

 

  11. Displaying Service name with PID

To display each service's name along with its PID, use the option netstat -tp, which adds a "PID/Program name" column.

# netstat -tp

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
 tcp        0      0 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED 2179/sshd
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT  1939/clock-applet

 

  12. Displaying Output Continuously (every five seconds)

With the -a and -c switches combined, netstat prints the selected information repeatedly, refreshing the screen at the given interval (every five seconds here); the default refresh is every second.

# netstat -ac 5 | grep tcp

tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        1      0 192.168.0.2:59447           www.gov.com:http            CLOSE_WAIT
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN

 

  13. Displaying Kernel IP routing

Display Kernel IP routing table with netstat and route command.

# netstat -r

Kernel IP routing table
 Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
 192.168.0.0     *               255.255.255.0   U         0 0          0 eth0
 link-local      *               255.255.0.0     U         0 0          0 eth0
 default         192.168.0.1     0.0.0.0         UG        0 0          0 eth0

 

  14. Showing Network Interface Transactions

Showing network interface packet transactions including both transferring and receiving packets with MTU size.

# netstat -i

Kernel Interface table
 Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
 eth0       1500   0     4459      0      0      0     4057      0      0      0 BMRU
 lo        16436   0        8      0      0      0        8      0      0      0 LRU

 

  15. Showing Kernel Interface Table

Showing Kernel interface table, similar to ifconfig command.

# netstat -ie

Kernel Interface table
 eth0      Link encap:Ethernet  HWaddr 00:0C:29:B4:DA:21
 inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:feb4:da21/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:4486 errors:0 dropped:0 overruns:0 frame:0
 TX packets:4077 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:2720253 (2.5 MiB)  TX bytes:1161745 (1.1 MiB)
 Interrupt:18 Base address:0x2000

lo        Link encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:16436  Metric:1
 RX packets:8 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

 

  16. Displaying IPv4 and IPv6 Information

Displays multicast group membership information for both IPv4 and IPv6.

# netstat -g

IPv6/IPv4 Group Memberships
 Interface       RefCnt Group
 --------------- ------ ---------------------
 lo              1      all-systems.mcast.net
 eth0            1      224.0.0.251
 eth0            1      all-systems.mcast.net
 lo              1      ff02::1
 eth0            1      ff02::202
 eth0            1      ff02::1:ffb4:da21
 eth0            1      ff02::1

 

  17. Print Netstat Information Continuously

To print netstat information continuously, refreshing every few seconds, use the following command.

# netstat -c

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36944 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42110 TIME_WAIT
 tcp        0    132 tecmint.com:ssh    115.113.134.3.static-:64662 ESTABLISHED
 tcp        0      0 tecmint.com:http   crawl-66-249-71-240.g:41166 TIME_WAIT
 tcp        0      0 localhost.localdomain:54823 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 localhost.localdomain:54822 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42091 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36998 TIME_WAIT

 

  18. Finding Unsupported Address Families

Listing the address families that are not configured on this system, with some useful information.

# netstat --verbose

netstat: no support for `AF IPX' on this system.
 netstat: no support for `AF AX25' on this system.
 netstat: no support for `AF X25' on this system.
 netstat: no support for `AF NETROM' on this system.

 

  19. Finding Listening Programs

Find out which programs are listening on a port (here, everything matching "http").

# netstat -ap | grep http

tcp        0      0 *:http                      *:*                         LISTEN      9056/httpd
 tcp        0      0 *:https                     *:*                         LISTEN      9056/httpd
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35248 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57783 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57769 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35270 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41637 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41614 TIME_WAIT   -
 unix  2      [ ]         STREAM     CONNECTED     88586726 10394/httpd

 

  20. Displaying RAW Network Statistics
# netstat --statistics --raw

Ip:
 62175683 total packets received
 52970 with invalid addresses
 0 forwarded
 Icmp:
 875519 ICMP messages received
 destination unreachable: 901671
 echo request: 8
 echo replies: 16253
 IcmpMsg:
 InType0: 83
 IpExt:
 InMcastPkts: 117

 

Source :- UnixMen

Knockd – Detailed And Simpler (Silent Assassin….)

As you can see, there are a lot of articles about knockd and its implementation. So, what is my effort to make this one unique? I have made it simple but detail-oriented, and have commented on the controversies and criticism that exist.


Here is an outline on what I’ve discussed.

What is port knocking?

What is knockd?

How does it work?

Installation

What we are trying to achieve

Pre-requisite before implementation of knockd:

Implementation scenario

Testing

Disclaimer

So, here we go.

What is port knocking?

Wikipedia Definition:

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of pre-specified closed ports. Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specific port(s)

/* in this article point of view, it’s ssh port 22 */

It's basically like every request has to knock on the door (the firewall) to get through it; knocking is necessary to get past the door. You can implement it either using knockd and iptables, or with iptables alone.

Now, using knockd.

What is knockd?

knockd is a port-knock server. It listens to all traffic on an Ethernet interface, looking for special “knock” sequences of port-hits. A client makes these port-hits by sending a TCP (or UDP) packet to a port on the server. This port need not be open — since knockd listens at the link-layer level, it sees all traffic even if it’s destined for a closed port. When the server detects a specific sequence of port-hits, it runs a command defined in its configuration file. This can be used to open up holes in a firewall for quick access.

How does it work?

  1. The knockd daemon is installed/running on the server.
  2. Port sequences (TCP, UDP, or both) are configured, along with the appropriate action for each sequence.
  3. Once knockd sees a defined port sequence, it runs the configured action for that sequence.

Note:

It is completely stealthy and, by default, it will not open any ports on the server.

When a port knock is successfully used to open a port, the firewall rules are generally only opened to the ip_address that supplied the correct knock.

Installation

Note: Don’t copy/paste the commands. Type it manually to avoid errors that could occur due to the format.

# yum install libpcap*

/* dependency – * in the end installs libpcap-devel which is a pre-requisite, as well */

There are several ways to install, whereas I have followed rpm installation.

Download suitable rpm package from http://pkgs.repoforge.org/knock/

Then run,

# rpm -ivh knock-0.5-3.el6.rf.x86_64.rpm

/*Here, I have downloaded knock 0.5-3 for 64-bit centos and hence the rpm name*/

Now, what all got installed?

Knockd – knock server daemon

Knock – knock client, which aids in knocking.

Note that this (knock) is the default client that comes along with knockd, whereas there are other, more advanced clients like hping, sendip & packit.

What we are trying to achieve:

A way to stop the attacks altogether, yet allow ssh access from anywhere, when needed.

Pre-requisite before implementation of knockd:

As mentioned earlier, an existing firewall (iptables) is a pre-requisite.

Follow the below steps to configure firewall

# iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT

-I : insert the rule as the first line of the firewall setup

-p : protocol

-m : match against the states RELATED, ESTABLISHED in this case

-j : jump to action, which is ACCEPT here

/* This rule allows currently ongoing sessions through the firewall. It is essential so that if you have currently taken a remote session on this computer using SSH, it will be preserved and not get terminated by further rules where you might want to block ssh or all services */

# iptables -I INPUT -p icmp -j ACCEPT

/* This is to make your machine ping-able from any machine, so that you can check the availability of your machine (whether it’s up or down) */

# iptables -A INPUT -j REJECT

/* Rejecting everything else – Appending it as last line, since, if inserted as first line all other rules will not be considered and every request would be rejected*/

Implementation scenario:

Now, try to ssh to the machine where you have implemented the firewall. Let's call that machine the server.

You will not be able to ssh to the server, since the firewall setup on the server rejects everything except ongoing sessions and ping requests.

Now, knockd implementation:

Now, on the server where you have installed knockd, run the following commands

# vi /etc/knockd.conf

/*As a result of rpm installation, this configuration file will exist */

Edit the file as below and save/exit.

[options]
    logfile = /var/log/knockd.log

[opencloseSSH]
    sequence      = 2222:udp,3333:tcp,4444:udp
    seq_timeout   = 15
    start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    cmd_timeout   = 10
    stop_command  = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags      = syn

The logfile option defines where errors/warnings pertaining to knockd get logged. If you are unable to successfully establish a connection between client and server using knockd/knock, this is where you need to check – the server log.

sequence        = 2222:udp,3333:tcp,4444:udp                                /* the sequence of knocking */

seq_timeout     = 15                                                                         /* once the above mentioned sequence is knocked, it's valid only for the next 15 seconds */

start_command   = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* once the right sequence is knocked, the above command gets executed, where %IP% is the IP address of the host which knocks. Hence, this allows an ssh connection from the knocker (client) to the server. Note that the ssh connection needs to be established within 15 seconds of the knock. Also note that I have used iptables -I, since I want this rule to be inserted as the first line; if I appended it instead, it would have no effect, because the reject-everything rule comes before it in the list */

cmd_timeout     = 10                       /* the stop_command will be executed 10 seconds after the start_command execution */

stop_command    = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* I am deleting the rule which I inserted to allow the SSH connection from the knocker (client) to the server. Now, doesn't that end my ssh connection? It doesn't.

How? The existing rule in my iptables, # iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT, helps me retain the established connection. If that rule is not specified, your ssh connection will be lost 10 seconds later, i.e. at cmd_timeout */

tcpflags        = syn                       /* in the TCP 3-way handshake, the client sends a TCP SYNchronize packet to the server */

Note that there are other ways to configure your knockd server: for example, one port sequence to open ssh and another port sequence to close it (unlike the configuration above); but in that case, after terminating your ssh session, you need to manually hit the closing port sequence from the client in order to close the ssh port again.

Now we have configured our server to allow ssh for a client only when the client hits the right sequence, which here is 2222:udp,3333:tcp,4444:udp.

Now you need to start the knockd service. Copy the below script as /etc/rc.d/init.d/knockd on your machine.

Runlevel script.

#!/bin/sh
# description: Start and stop knockd

# Check that the config file exists
[ -f /etc/knockd.conf ] || exit 0

# Source function library
. /etc/rc.d/init.d/functions

# Source networking configuration
. /etc/sysconfig/network

# Check that networking is up
[ "$NETWORKING" = "no" ] && exit 0

start() {
    echo "Starting knockd ..."
    /usr/sbin/knockd &
}

stop() {
    echo "Shutting down knockd ..."
    kill `pidof /usr/sbin/knockd`
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        ;;
esac

exit 0

Now, run the following commands, so that you shall start/stop/restart knockd like other services

# chmod 755 /etc/rc.d/init.d/knockd     /* making the script executable by root */
# chkconfig --add knockd                /* adding knockd to the chkconfig list */
# chkconfig --level 35 knockd on        /* service will be started in runlevels 3 & 5 */
# service knockd start

Whenever you modify your /etc/knockd.conf, you need to restart the service.

Testing:

From client, run the following commands.

Note that there are other ways to knock, but I am using the knock client which comes along with the knockd package. So you need to have it installed on the client as well.

Now,

# knock -v server_ip 2222:udp 3333:tcp 4444:udp ; ssh server_ip

-v for verbose

server_ip substituted by your server’s ip address.

We are knocking the sequence which we mentioned in the server's /etc/knockd.conf file.

Now you should be able to successfully establish an ssh session to the server. Afterwards, the port gets closed again for further sessions; so every time you want to establish an ssh connection to the server, you need to knock and then ssh.
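If the knock client is not installed, one rough alternative sketch is to generate the hits with bash's /dev/udp and /dev/tcp redirections (assuming your bash was built with network redirection support); the TCP attempt fails since the port is closed, but knockd only needs to see the packet, so the errors can be discarded:

( echo -n > /dev/udp/server_ip/2222 ) 2>/dev/null
( echo -n > /dev/tcp/server_ip/3333 ) 2>/dev/null
( echo -n > /dev/udp/server_ip/4444 ) 2>/dev/null
ssh server_ip

Remember that all three hits must arrive within the configured seq_timeout.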

Comments on controversies and criticism:

What if the knock sequence is determined by a brute force attack?

Excerpt from wikipedia answers this:


Consider that, if an external attacker did not know the port knock sequence, even the simplest of sequences would require a massive brute force effort in order to be discovered. A three-knock simple TCP sequence (e.g. port 1000, 2000, 3000) would require an attacker without prior knowledge of the sequence to test every combination of three ports in the range 1-65535, and then to scan each port in between to see if anything had opened. As a stateful system, the port would not open until after the correct three-digit sequence had been received in order, without other packets in between.
That equates to a maximum of 65536³ packets in order to obtain and detect a single successful opening, in the worst case scenario. That's 281,474,976,710,656 or over 281 trillion packets. On average, an attempt would take approximately 9.2 quintillion packets to successfully open a single, simple three-port TCP-only knock by brute force. This is made even more impractical when knock attempt-limiting is used to stop brute force attacks, and when longer and more complex sequences are used.

Also, there are other port knocking solutions, where the knocks are encrypted, which makes it even harder to hack.

When we have a simple, robust and reliable solution named VPN access, why go for port knocking?

Port knocking plus VPN tunnels gives added security. If you have a VPN server in your network, you can protect it from brute force attacks by hiding it behind a knock sequence.

Disclaimer:

Port knocking is not intended to be complete solution, but can be an added layer of security.

Also, it needs to be configured properly according to the needs.

A network that is connected to the Internet is never secure. There is no perfect lock!

The best that we can do is to make the system as secure as possible for today. Possibly tomorrow someone will figure out how to get through our security code.

A security system is only as strong as its weakest link.

But, I believe “Port knocking” adds security to the existing setup, and doesn’t make your system vulnerable to attacks, as few people claim.

Thanks for reading. Cheers !

 

Credits :- GOKUL (Unixmen)

Linux Command – Dstat (Culprit Catcher)

Introduction

Whether a system is used as a web server or as a normal PC, keeping its resource usage under control is a daily necessity. GNU/Linux provides several tools for monitoring purposes: iostat, vmstat, netstat, ifstat, and others. Every system admin knows these tools and how to analyse their output. However, there is an alternative: a single program that can replace almost all of them. Its name is dstat.

With dstat, users can view all system resources instantly. For instance, you can compare network bandwidth numbers directly with disk throughput for a more general view of what is going on; this is very useful when troubleshooting or when analysing a system for benchmarking.

Features

  • Combines vmstat, iostat, ifstat, netstat information and more
  • Shows stats in exactly the same timeframe
  • Enables/orders counters as they make most sense during analysis/troubleshooting
  • Modular design
  • Written in Python, so easily extendable
  • Includes many external plugins
  • Can show interrupts per device
  • Very accurate timeframes, no timeshifts when the system is stressed
  • Shows exact units and limits conversion mistakes
  • Indicates different units with different colors
  • Shows intermediate results when delay > 1
  • Can export CSV output, which can be imported into Gnumeric and Excel to make graphs

Installation

Installing dstat is a simple task, since it is packaged as both .deb and .rpm.
For Debian-based distros:

# apt install dstat

On RHEL, CentOS, and Fedora:

# yum install dstat

Getting sta(r)ted

To run the program, users can just write the command in its simplest form:

$ dstat

As a result, it shows the different stats in a table, giving admins a general overview.


Plugins

First of all, it is important to note that dstat comes with a lot of plugins; to obtain the complete list:

$ dstat --list

which returns the complete list of internal stats and external plugins.

Of course, it is possible to add more for special use cases.

So, let’s see how they work.

To use a plugin, simply pass its name as a command-line argument. For instance, to watch only the total CPU usage:

$ dstat --cpu

or, in shorter form:

$ dstat -c
As previously said, the program can show several different stats at the same time. For example:

$ dstat --cpu --top-cpu --disk --top-bio --top-latency


This command, which is a combination of internal stats and external plugins, will give the total CPU usage, most expensive CPU process, disk statistics, most expensive block I/O process and the process with the highest total latency (expressed in milliseconds).

Output

By default, dstat displays its output in columns (as a table) directly in the terminal window, in real time, for immediate analysis by a human. But it can also write to a .csv file, which software like LibreOffice Calc or Gnumeric can use to create graphs or perform any kind of statistical analysis. Exporting data to .csv is easy:

$ dstat [arguments] --output /path/to/outputfile
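For example, the following samples CPU and memory every 5 seconds, 10 times, printing to the terminal while also appending each row to a CSV file (the path is just an illustration):

$ dstat --cpu --mem --output /tmp/dstat-report.csv 5 10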

GnuPG – Public / Private Key Method – Encryption / Decryption (No Digital Signature)

 

When you encounter this error, don't panic.

GPG key generation: Not enough random bytes available.


Generating keys with GnuPG

 

Anyone who wants to create a new key set via GnuPG (GPG) may run into this error:

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 142 more bytes)

The problem is caused by a lack of entropy (random system noise). While trying to gather more, you might keep running into this message. In our case, running find over the disk while computing sha1sums and writing them to files was not enough.

To check the available entropy, check the kernel parameters:

cat /proc/sys/kernel/random/entropy_avail
36

To solve it, install rng-tools and run the rngd daemon, feeding it from /dev/urandom:

Debian / Ubuntu: apt-get install rng-tools

Red Hat / CentOS: yum install -y rng-utils

rngd -f -r /dev/urandom

Checking the available entropy again revealed, in our case, a stunning 3100, almost 100 times more than before. GnuPG is now happy again and can finish creating the keys.
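If you want to watch the pool refill while rngd runs, you can poll the same kernel file once per second:

$ watch -n 1 cat /proc/sys/kernel/random/entropy_avail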

Mulesoft – Mule ESB (Introduction)

What is Mule ESB?

Mule, the runtime engine of Anypoint Platform, is a lightweight Java-based enterprise service bus (ESB) and integration platform that allows developers to connect applications together quickly and easily, enabling them to exchange data.

It enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, Web Services, JDBC, HTTP, and more.

The ESB can be deployed anywhere, can integrate and orchestrate events in real time or in batch, and has universal connectivity.

The key advantage of an ESB is that it allows different applications to communicate with each other by acting as a transit system for carrying data between applications within your enterprise or across the Internet.

Mule has powerful capabilities that include:

  • Service creation and hosting — expose and host reusable services, using the ESB as a lightweight service container
  • Service mediation — shield services from message formats and protocols, separate business logic from messaging, and enable location-independent service calls
  • Message routing — route, filter, aggregate, and re-sequence messages based on content and rules
  • Data transformation — exchange data across varying formats and transport protocols

what mule esb

Do I need an ESB?

Mule and other ESBs offer real value in scenarios where there are at least a few integration points or at least 3 applications to integrate.

They are also well suited to scenarios where loose coupling, scalability and robustness are required.

Below is a quick ESB selection checklist.

To read a much more comprehensive take on when to select an ESB, read this article written by MuleSoft founder and VP of Product Strategy Ross Mason: To ESB or not to ESB.

  1. Are you integrating 3 or more applications/services?
  2. Will you need to plug in more applications in the future?
  3. Do you need to use more than one type of communication protocol?
  4. Do you need message routing capabilities such as forking and aggregating message flows, or content-based routing?
  5. Do you need to publish services for consumption by other applications?

Why Mule?

Mule is lightweight but highly scalable, allowing you to start small and connect more applications over time. The ESB manages all the interactions between applications and components transparently, regardless of whether they exist in the same virtual machine or over the Internet, and regardless of the underlying transport protocol used.

There are currently several commercial ESB implementations on the market. However, many of these provide limited functionality or are built on top of an existing application server or messaging server, locking you into that specific vendor. Mule is vendor-neutral, so different vendor implementations can plug in to it. You are never locked in to a specific vendor when you use Mule.

Mule provides many advantages over competitors, including:

  • Mule components can be any type you want. You can easily integrate anything from a “plain old Java object” (POJO) to a component from another framework.
  • Mule and the ESB model enable significant component reuse. Unlike other frameworks, Mule allows you to use your existing components without any changes. Components do not require any Mule-specific code to run in Mule, and there is no programmatic API required. The business logic is kept completely separate from the messaging logic.
  • Messages can be in any format from SOAP to binary image files. Mule does not force any design constraints on the architect, such as XML messaging or WSDL service contracts.
  • You can deploy Mule in a variety of topologies, not just ESB. Because it is lightweight and embeddable, Mule can dramatically decrease time to market and increase productivity for projects, providing secure, scalable applications that are adaptive to change and can scale up or down as needed.
  • Mule’s staged event-driven architecture (SEDA) makes it highly scalable. A major airline processes over 10,000 business transactions per second with Mule, while H&R Block uses 13,000 Mule servers to support their highly distributed environment.

Mulesoft – Fundamentals

Mule Fundamentals

Mule is the lightweight integration run-time engine that allows you to connect anything, anywhere.

Rather than creating multiple point-to-point integrations between systems, services, APIs, and devices, use Mule to intelligently manage message routing, data mapping, orchestration, reliability, security, and scalability between nodes.

Plug other systems and applications into Mule and let it handle communication between systems, allowing you to track and monitor your application ecosystem and external resources.

 

Mule is so named because it “carries the heavy load” of developing an infrastructure that supports multiple applications and systems both flexibly and intelligently.

The Mule Approach

Flexible and reliable by design, Mule is proactive in adjusting to the changing demands of an interconnected network of systems and applications.

As you quickly learn in Anypoint Studio from designing even the simplest Mule applications, there is a plethora of tools that can be used to transform and capture data at each event (“component”) in a flow, always according to your objectives.

In turn, Mule handles whatever data you load on top of it, and does not mind executing operations concurrently.

Getting the Most out of Mule…

  • Deploy or integrate applications or systems on premises or in the cloud
  • Employ out-of-the-box connectors to create SaaS integration applications
  • Build and expose APIs
  • Consume APIs
  • Create Web services which orchestrate calls to other services
  • Create interfaces to expose applications for mobile consumption
  • Integrate B2B with solutions that are secure, efficient, and quick to build and deploy
  • Shift applications onto the cloud
  • Connect B2B e-commerce activities

Source – UNIX, Destination – Windows Cygwin (SSH Password-less Authentication)

On Windows Server

On the Windows server, create a local user, say MyUser, and also add that user to Cygwin:

cd C:\cygwin

Cygwin.bat

 

Administrator@MYWINDOWSHOST ~
$ /bin/mkpasswd -l -u MyUser >> /etc/passwd

MyUser@MYWINDOWSHOST ~
$ ls

MyUser@MYWINDOWSHOST ~
$ ls -al
total 24
drwxr-xr-x+ 1 MyUser        None    0 Mar 17 12:54 .
drwxrwxrwt+ 1 Administrator None    0 Mar 17 12:54 ..
-rwxr-xr-x  1 MyUser        None 1494 Oct 29 15:34 .bash_profile
-rwxr-xr-x  1 MyUser        None 6054 Oct 29 15:34 .bashrc
-rwxr-xr-x  1 MyUser        None 1919 Oct 29 15:34 .inputrc
-rwxr-xr-x  1 MyUser        None 1236 Oct 29 15:34 .profile

MyUser@MYWINDOWSHOST ~
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):
Created directory '/home/MyUser/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/MyUser/.ssh/id_rsa.
Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.
The key fingerprint is:
7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYWINDOWSHOST
The key's randomart image is:
+--[ RSA 2048]----+
|       .++o      |
|        .+..     |
|        . o. . o |
|         o .E *.+|
|        S ...* =o|
|           .o o o|
|               = |
|              o  |
|                 |
+-----------------+

MyUser@MYWINDOWSHOST ~
$ cd .ssh

MyUser@MYWINDOWSHOST ~/.ssh
$ ls
id_rsa  id_rsa.pub

MyUser@MYWINDOWSHOST ~/.ssh
$ touch authorized_keys

On the UNIX server (the source), generate the key:

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):
Created directory '/home/MyUser/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/MyUser/.ssh/id_rsa.
Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.
The key fingerprint is:
7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYUNIXHOST
The key's randomart image is:
+--[ RSA 2048]----+
|       .++o      |
|        .+..     |
|        . o. . o |
|         o .E *.+|
|        S ...* o |
|           .o o o|
|               = |
|              o  |
|                 |
+-----------------+

MyUser@MYUNIXHOST ~
$ cd .ssh

MyUser@MYUNIXHOST ~/.ssh
$ ls
id_rsa  id_rsa.pub

Then push the public key (id_rsa.pub) into the destination's authorized_keys:

cat .ssh/id_rsa.pub | ssh MyUser@MYWINDOWSHOST 'cat >> .ssh/authorized_keys'
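If the next step still prompts for a password, a common culprit is permissions: sshd ignores authorized_keys files that are group- or world-accessible. On the destination, tightening them usually helps:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys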

 

ssh -v MyUser@MYWINDOWSHOST

You should now be able to log in without a password (the -v flag gives verbose output, useful for troubleshooting).

Credits :- Priyanka Padad (Operations Expert)

TIBCO EMS – No memory for operation – Filesystem as datastore.

Very often we come across this kind of error:

Most probably your EMS server is running as a 32-bit binary and its datastore/memory usage is reaching 1.6 - 2 GB.

32-bit EMS has a size limitation of 2 GB.

When you are using the filesystem as a datastore and you come across a trace like this:

2007-07-24 10:46:38 Recovering state, please wait.
2007-07-24 10:48:47 SEVERE ERROR: Failed writing message to ‘datastore/meta.db’: No memory for operation.
2007-07-24 10:48:50 SEVERE ERROR: No memory processing purge record.
2007-07-24 10:48:52 FATAL: Exception in startup, exiting.

434692644 Jul 24 11:16 meta.db
29220 Jul 24 08:54 sync-msgs.db
5805854244 Jul 24 08:37 async-msgs.db

Don't panic, it's a very generic issue.

Just follow this checklist:

  1. Check the size of your messages.
  2. Ensure you are using a 64-bit EMS binary.
  3. Set max_msg_memory = <XX>GB in your tibemsd.conf, where XX indicates the memory in GB you want to assign.
  4. Set msg_swapping = enabled.

max_msg_memory

max_msg_memory = size [KB|MB|GB]
Maximum memory the server can use for messages.
This parameter lets you limit the memory that the server uses for messages, so server memory usage cannot grow beyond the system’s memory capacity.
When msg_swapping is enabled, and messages overflow this limit, the server begins to swap messages from process memory to disk. Swapping allows the server to free process memory for incoming messages, and to process message volume in excess of this limit.
When the server swaps a message to disk, a small record of the swapped message remains in memory. If all messages are swapped out to disk, and their remains still exceed this memory limit, then the server has no room for new incoming messages. The server stops accepting new messages, and send calls in message producers result in an error. (This situation probably indicates either a very low value for this parameter, or a very high message volume.)
Specify units as KB, MB or GB. The minimum value is 8 MB. The default value of 0 (zero) indicates no limit.
For example:
max_msg_memory = 512MB

msg_swapping

msg_swapping = enabled | disabled
This parameter enables and disables the message swapping feature (described above for max_msg_memory).
The default value is enabled, unless you explicitly set it to disabled.

reserve_memory

reserve_memory = size
When reserve_memory is non-zero, the daemon allocates a block of memory for use in emergency situations to prevent the EMS server from being unstable in low memory situations. When the daemon process exhausts memory resources, it disables clients and routes from producing new messages, and frees this block of memory to allow consumers to continue operation (which tends to free memory).
The EMS server attempts to reallocate its reserve memory once the number of pending messages in the server has dropped to 10% of the number of pending messages that were in the server when it experienced the allocation error. If the server successfully reallocates memory, it begins accepting new messages.
The reserve_memory parameter only triggers when the EMS server has run out of memory and therefore is a reactive mechanism. The appropriate administrative action when an EMS server has triggered release of reserve memory is to drain the majority of the messages by consuming them and then to stop and restart the EMS server. This allows the operating system to reclaim all the virtual memory resources that have been consumed by the EMS server. A trace option, MEMORY, is also available to help show what the server is doing during the period when it is not accepting messages.
Specify size in units of MB. When non-zero, the minimum block is 16MB. When absent, the default is zero.
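Putting the three parameters together, a tibemsd.conf excerpt might look like the following; the sizes are illustrative placeholders, to be adjusted to your host's RAM and message volume:

max_msg_memory = 4GB
msg_swapping = enabled
reserve_memory = 64MB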

Linux Commands – chage (Manage your Passwords for User Accounts in UNIX)

chage

chage enables you to modify the parameters surrounding passwords (aging and expiration). We can edit and manage the password expiration details with the chage command. The root user can run chage for any user account; other users cannot.

Syntax: chage [options] USERNAME

Options:

-d LAST_DAY Sets the day the password was last changed
-E EXPIRE_DATE Sets the account expiration date
-I INACTIVE Sets the number of days of inactivity after the password expires before the account is locked
-l Shows account aging information
-m MIN_DAYS Sets the minimum number of days between password changes
-M MAX_DAYS Sets the maximum number of days a password is valid
-W WARN_DAYS Sets the number of days to warn before the password expires

For example, we can view a particular user's aging information with the chage command as follows.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 365
Maximum number of days between password change          : 500
Number of days of warning before password expires       : 7

How do you force users to change their password? This is a big question for system administrators, who often must make their users change passwords at regular intervals for security reasons.

For this, we set the password expiry date for a user using chage option -M. The root user can set the password expiry date for any user.

Please note that option -M will update both “Password expires” and “Maximum number of days between password change” entries as shown below.

Syntax: # chage -M number-of-days username
[root@localhost ~]# chage -M 60 root

The above command sets the password expiry to 60 days.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Set the account expiry date for a user using the -E option

We can also use the chage command to set the account expiry date, via option -E. The date below is given in "YYYY-MM-DD" format. This updates the "Account expires" value as shown below.

[root@localhost ~]# chage -E "2017-10-03" root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Force the user account to be locked after n days of inactivity
Typically, when the password has expired, users are forced to change it during their next login. You can also set an additional condition: if, after the password expires, the user never tries to log in for 6 days, their account is automatically locked, using option -I as shown below. In this example, the "Password inactive" date is set 6 days after the "Password expires" value. Once an account is locked, only system administrators will be able to unlock it.

# chage -I 6 root
# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the minimum number of days between password changes

We can set the minimum number of days between password changes using the -m option with the chage command, as follows.

chage -m 10 USERNAME
[root@localhost ~]# chage -m 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the number of days of warning before password expiry

We can set the number of days of warning before the password expires using the -W option with the chage command.

[root@localhost ~]# chage -W 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 10

How to disable password aging for a user account

To disable password aging for a particular account, set all of the following on that account (they combine into a single command in the sketch after this list).

    -m 0 will set the minimum number of days between password change to 0
    -M 99999 will set the maximum number of days between password change to 99999
    -I -1 (number minus one) will set the “Password inactive” to never
    -E -1 (number minus one) will set “Account expires” to never.
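Putting those four options together, the whole thing reduces to one command (USERNAME stands in for the real account name):

# chage -m 0 -M 99999 -I -1 -E -1 USERNAME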

When using the chage command to set specifics on an account, do not reset the password with the passwd command, because doing so erases those account-expiry changes.

Have great fun with Linux!

 

Credits :- https://www.unixmen.com

TIBCO – EMS – Multicasting

Multicast Messaging:  Multicast is a messaging model that allows the EMS server to send messages to multiple consumers simultaneously by broadcasting them over an existing network.

Overview:
  Multicast is a messaging model that broadcasts messages to many consumers at once, as opposed to sending copies of a message to each subscribing consumer individually.

The server sends multicast messages over a multicast channel. Each multicast-enabled topic is associated with a channel. The channel determines the multicast port and multicast group address to which the server sends messages.

The multicast message is received by a multicast daemon running on the same computer with the message consumer.

When an EMS client subscribes to a multicast-enabled topic, it automatically connects to the multicast daemon. The multicast daemon begins listening on the channel associated with that topic, receives any broadcast messages, and delivers them to subscribed clients.

When to use Multicast:
  Because multicast reduces the number of operations performed by the server and
reduces the amount of bandwidth used in the publish-and-subscribe model, multicast is highly scalable.

Where publish and subscribe messaging creates a copy of a published message for each message consumer, multicast broadcasts the message only once.


Features

  • Multicast is highly scalable
  • Multicast reduces the amount of bandwidth consumed
  • Multicast reduces the number of operations performed by the server
  • Multicast broadcasts the message only once, whereas in publish and subscribe a copy of the message is published for each consumer

Facts

  • Multicast does not guarantee message delivery
  • Messages requiring a high degree of reliability should not use multicast
  • Multicast offers last-hop delivery only; it cannot be used to send messages between servers
  • Multicast should not be used in applications where security is a priority

 

Multicast Messaging Example:

Multicast channels can only be configured statically by modifying the configuration files. There are no commands in the administration tool to configure multicast channels.

 

Step 1: Enable the EMS Server for Multicast

To enable multicast in the server, set the multicast property to enabled in the tibemsd.conf configuration file:
multicast = enabled

 

Step 2: Create a Multicast Channel

The EMS server broadcasts messages to consumers over multicast channels. Each channel has a defined multicast address and port. Messages published to a multicast-enabled topic are sent by the server and received by the subscribers on these multicast channels.
To create a multicast channel, add the following definition to the multicast channels configuration file, channels.conf:
[multicast-1]
address=234.5.6.7:1

 

Start the EMS Server.

 

Step 3: Create a Topic with Multicast Enabled

To create a multicast-enabled topic, use the administration tool to issue the following command:


> create topic testtopic channel=multicast-1

 

Then execute the show topics command to verify whether multicast is enabled:

 >show topics

 

The result will look like this:

 

Topic Name    SNFGEIBCTM    Subs  Durs  Msgs  Size
testtopic     ---------+       0     0     0  0.0 Kb

The + under the M column means multicast is enabled.

 

Step 4: Start the Multicast Daemon

Go to the Start menu and follow All Programs > TIBCO > TIBCO EMS 7.0 > Start EMS Multicast Daemon.

 

Step 5: Start the Subscriber

Execute the tibjmsMsgConsumer client to subscribe user1 to the testtopic topic with a session acknowledgment mode of NO_ACKNOWLEDGE:
C:\tibco\ems\7.0\samples\java> java tibjmsMsgConsumer -topic testtopic -user user1 -ackmode NO

In the administration console, type the command below for cross-verification:

>show consumers topic=testtopic

 

Step 6: Start the Producer

Setting up a client to publish multicast messages is no different from setting up an ordinary publish-and-subscribe producer. Because the topic is enabled for multicast in the EMS server, the message producer does not need to follow any additional steps.

 

C:\tibco\ems\7.0\samples\java>java tibjmsMsgProducer -topic testtopic -user admin hello

 

Finally, the message (hello) is displayed in the subscriber's window.

 

Certificates – PKCS12

pkcs12

NAME

pkcs12 – PKCS#12 file utility

SYNOPSIS

openssl pkcs12 [-help] [-export] [-chain] [-inkey filename] [-certfile filename] [-name name] [-caname name] [-in filename] [-out filename] [-noout] [-nomacver] [-nocerts] [-clcerts] [-cacerts] [-nokeys] [-info] [-des | -des3 | -idea | -aes128 | -aes192 | -aes256 | -camellia128 | -camellia192 | -camellia256 | -nodes] [-noiter] [-maciter | -nomaciter | -nomac] [-twopass] [-descert] [-certpbe cipher] [-keypbe cipher] [-macalg digest] [-keyex] [-keysig] [-password arg] [-passin arg] [-passout arg] [-rand file(s)] [-CAfile file] [-CApath dir] [-no-CAfile] [-no-CApath] [-CSP name]

DESCRIPTION

The pkcs12 command allows PKCS#12 files (sometimes referred to as PFX files) to be created and parsed. PKCS#12 files are used by several programs including Netscape, MSIE and MS Outlook.

COMMAND OPTIONS

There are a lot of options; the meaning of some depends on whether a PKCS#12 file is being created or parsed. By default a PKCS#12 file is parsed. A PKCS#12 file can be created by using the -export option (see below).

PARSING OPTIONS

-help
Print out a usage message.

-in filename
This specifies filename of the PKCS#12 file to be parsed. Standard input is used by default.

-out filename
The filename to write certificates and private keys to, standard output by default. They are all written in PEM format.

-passin arg
the PKCS#12 file (i.e. input file) password source. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-passout arg
pass phrase source to encrypt any outputted private keys with. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-password arg
With -export, -password is equivalent to -passout. Otherwise, -password is equivalent to -passin.

-noout
this option inhibits output of the keys and certificates to the output file version of the PKCS#12 file.

-clcerts
only output client certificates (not CA certificates).

-cacerts
only output CA certificates (not client certificates).

-nocerts
no certificates at all will be output.

-nokeys
no private keys will be output.

-info
output additional information about the PKCS#12 file structure, algorithms used and iteration counts.

-des
use DES to encrypt private keys before outputting.

-des3
use triple DES to encrypt private keys before outputting, this is the default.

-idea
use IDEA to encrypt private keys before outputting.

-aes128, -aes192, -aes256
use AES to encrypt private keys before outputting.

-camellia128, -camellia192, -camellia256
use Camellia to encrypt private keys before outputting.

-nodes
don’t encrypt the private keys at all.

-nomacver
don’t attempt to verify the integrity MAC before reading the file.

-twopass
prompt for separate integrity and encryption passwords: most software always assumes these are the same so this option will render such PKCS#12 files unreadable.

FILE CREATION OPTIONS

-export
This option specifies that a PKCS#12 file will be created rather than parsed.

-out filename
This specifies filename to write the PKCS#12 file to. Standard output is used by default.

-in filename
The filename to read certificates and private keys from, standard input by default. They must all be in PEM format. The order doesn’t matter but one private key and its corresponding certificate should be present. If additional certificates are present they will also be included in the PKCS#12 file.

-inkey filename
file to read private key from. If not present then a private key must be present in the input file.

-name friendlyname
This specifies the “friendly name” for the certificate and private key. This name is typically displayed in list boxes by software importing the file.

-certfile filename
A filename to read additional certificates from.

-caname friendlyname
This specifies the “friendly name” for other certificates. This option may be used multiple times to specify names for all certificates in the order they appear. Netscape ignores friendly names on other certificates whereas MSIE displays them.

-pass arg, -passout arg
the PKCS#12 file (i.e. output file) password source. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-passin password
pass phrase source to decrypt any input private keys with. For more information about the format of arg see the PASS PHRASE ARGUMENTS section in openssl.

-chain
if this option is present then an attempt is made to include the entire certificate chain of the user certificate. The standard CA store is used for this search. If the search fails it is considered a fatal error.

-descert
encrypt the certificate using triple DES, this may render the PKCS#12 file unreadable by some “export grade” software. By default the private key is encrypted using triple DES and the certificate using 40 bit RC2.

-keypbe alg, -certpbe alg
these options allow the algorithm used to encrypt the private key and certificates to be selected. Any PKCS#5 v1.5 or PKCS#12 PBE algorithm name can be used (see the NOTES section for more information). If a cipher name (as output by the list-cipher-algorithms command) is specified then it is used with PKCS#5 v2.0. For interoperability reasons it is advisable to only use PKCS#12 algorithms.

-keyex|-keysig
specifies that the private key is to be used for key exchange or just signing. This option is only interpreted by MSIE and similar MS software. Normally “export grade” software will only allow 512 bit RSA keys to be used for encryption purposes but arbitrary length keys for signing. The -keysig option marks the key for signing only. Signing only keys can be used for S/MIME signing, authenticode (ActiveX control signing) and SSL client authentication, however due to a bug only MSIE 5.0 and later support the use of signing only keys for SSL client authentication.

-macalg digest
specify the MAC digest algorithm. If not included then SHA1 will be used.

-nomaciter, -noiter
these options affect the iteration counts on the MAC and key algorithms. Unless you wish to produce files compatible with MSIE 4.0 you should leave these options alone.

To discourage attacks by using large dictionaries of common passwords the algorithm that derives keys from passwords can have an iteration count applied to it: this causes a certain part of the algorithm to be repeated and slows it down. The MAC is used to check the file integrity but since it will normally have the same password as the keys and certificates it could also be attacked. By default both MAC and encryption iteration counts are set to 2048, using these options the MAC and encryption iteration counts can be set to 1, since this reduces the file security you should not use these options unless you really have to. Most software supports both MAC and key iteration counts. MSIE 4.0 doesn’t support MAC iteration counts so it needs the -nomaciter option.

-maciter
This option is included for compatibility with previous versions; it used to be needed to use MAC iteration counts, but they are now used by default.

-nomac
don’t attempt to provide the MAC integrity.

-rand file(s)
a file or files containing random data used to seed the random number generator, or an EGD socket (see RAND_egd). Multiple files can be specified separated by an OS-dependent character. The separator is ; for MS-Windows, , for OpenVMS, and : for all others.

-CAfile file
CA storage as a file.

-CApath dir
CA storage as a directory. This directory must be a standard certificate directory: that is a hash of each subject name (using x509 -hash) should be linked to each certificate.

-no-CAfile
Do not load the trusted CA certificates from the default file location

-no-CApath
Do not load the trusted CA certificates from the default directory location

-CSP name
write name as a Microsoft CSP name.

NOTES

Although there are a large number of options, most of them are very rarely used. For PKCS#12 file parsing, only -in and -out need to be used; for PKCS#12 file creation, -export and -name are also used.

If none of the -clcerts, -cacerts or -nocerts options are present then all certificates will be output in the order they appear in the input PKCS#12 files. There is no guarantee that the first certificate present is the one corresponding to the private key. Certain software which requires a private key and certificate assumes the first certificate in the file is the one corresponding to the private key, but this may not always be the case. Using the -clcerts option will solve this problem by only outputting the certificate corresponding to the private key. If the CA certificates are required then they can be output to a separate file using the -nokeys -cacerts options to just output CA certificates.
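For instance, using exactly those options to dump only the CA certificates to their own file:

 openssl pkcs12 -in file.p12 -nokeys -cacerts -out cacerts.pem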

The -keypbe and -certpbe algorithms allow the precise encryption algorithms for private keys and certificates to be specified. Normally the defaults are fine but occasionally software can’t handle triple DES encrypted private keys, then the option -keypbe PBE-SHA1-RC2-40 can be used to reduce the private key encryption to 40 bit RC2. A complete description of all algorithms is contained in the pkcs8 manual page.

Prior to the 1.1 release, passwords containing non-ASCII characters were encoded in a non-compliant manner, which limited interoperability, first of all with Windows. Switching to standard-compliant password encoding poses a problem for accessing old data protected with the broken encoding, so legacy encodings are also attempted when reading the data. If you use PKCS#12 files in a production application you are advised to convert the data, because the implemented heuristic approach is not MT-safe; its sole goal is to facilitate the data upgrade with this utility.

EXAMPLES

Parse a PKCS#12 file and output it to a file:

 openssl pkcs12 -in file.p12 -out file.pem

Output only client certificates to a file:

 openssl pkcs12 -in file.p12 -clcerts -out file.pem

Don’t encrypt the private key:

 openssl pkcs12 -in file.p12 -out file.pem -nodes

Print some info about a PKCS#12 file:

 openssl pkcs12 -in file.p12 -info -noout

Create a PKCS#12 file:

 openssl pkcs12 -export -in file.pem -out file.p12 -name "My Certificate"

Include some extra certificates:

 openssl pkcs12 -export -in file.pem -out file.p12 -name "My Certificate" \
  -certfile othercerts.pem
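Another common task, using the documented -nocerts option, is extracting only the private key:

 openssl pkcs12 -in file.p12 -nocerts -out key.pem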

 

Java Performance Tuning – Introduction

Java Performance Tuning

Garbage Collection

The Java programming language is object oriented and includes automatic garbage collection.

Garbage collection is the process of reclaiming memory taken up by unreferenced objects.
The following sections summarize some of these concepts and their impact on application server performance.

Memory Management, Java and Impact on Application Servers

Memory management was always a challenging task with compiled object-oriented languages such as C++.

Java, on the other hand, is an interpreted language that takes this memory management out of the hands of developers and gives it to the virtual machine where the code runs.

This means that for best performance and stability, it is critical that the Java parameters for the virtual machine be understood and managed by the Application Server deployment team.

This section describes the various parts of the Java heap, then lists some useful parameters and tuning tips for ensuring correct runtime stability and performance of application servers.

Java Virtual Machines

The Java specification as to what is "standard" for a given release is written and maintained by the JavaSoft division of Sun Microsystems.

This specification is then delivered to other JVM providers (IBM, HP, etc). JavaSoft provides the standard implementation on Windows, Solaris, and LINUX.

Other platforms are required to deliver the code functionality but their JVM options can be different.

Java options that are preceded with "-X" or "-XX" are typically platform-specific.

That being said, many options are used on all platforms.

One must read in detail the README notes from the various releases on various platforms to be kept up‐to‐date with the variations.

This guide will mention the most critical ones and distinguish between those which are implemented on most platforms and those which are platform‐specific.

 

——- Stay Tuned for More Java Performance Monitoring Posts ——-

Certificates – Digital Certificates (Summary)

Introduction to Digital Certificates

Digital Certificates provide a means of proving your identity in electronic transactions, much like a driver's license or a passport does in face-to-face interactions.

With a Digital Certificate, you can assure friends, business associates, and online services that the electronic information they receive from you is authentic.

This document introduces Digital Certificates and answers questions you might have about how they are used, as well as about the cryptography technologies they rely on.

Digital certificates are the equivalent of a driver’s license, a marriage license, or any other form of identity.

The only difference is that a digital certificate is used in conjunction with a public key encryption system. Digital certificates are electronic files that simply work as an online passport.

Digital certificates are issued by a third party known as a Certification Authority such as VeriSign or Thawte.

These third party certificate authorities have the responsibility to confirm the identity of the certificate holder as well as provide assurance to the website visitors that the website is one that is trustworthy and capable of serving them in a trustworthy manner.

Digital certificates have two basic functions.

The first is to certify that the people, the website and the network resources such as servers and routers are reliable sources, in other words, who or what they claim to be.

The second function is to provide protection for the data exchanged from the visitor and the website from tampering or even theft, such as credit card information.

Who Uses Digital Certificates

Digital Certificates can be used for a variety of electronic transactions including e-mail, electronic commerce, groupware and electronic funds transfers.

Netscape’s popular Enterprise Server requires a Digital Certificate for each secure server.

For example, a customer shopping at an electronic mall run by Netscape’s server software requests the Digital Certificate of the server to authenticate the identity of the mall operator and the content provided by the merchant.

Without authenticating the server, the shopper should not trust the operator or merchant with sensitive information like a credit card number.

The Digital Certificate is instrumental in establishing a secure channel for communicating any sensitive information back to the mall operator. Virtual malls, electronic banking, and other electronic services are becoming more commonplace, offering the convenience and flexibility of round-the-clock service direct from your home.

However, concerns about privacy and security might be preventing you from taking advantage of this new medium for your personal business.

Encryption alone is not enough, as it provides no proof of the identity of the sender of the encrypted information. Without special safeguards, you risk being impersonated online.

Digital Certificates address this problem, providing an electronic means of verifying someone’s identity.

Used in conjunction with encryption, Digital Certificates provide a more complete security solution, assuring the identity of all parties involved in a transaction.

Similarly, a secure server must have its own Digital Certificate to assure users that the server is run by the organization it claims to be affiliated with and that the content provided is legitimate.

Types of Digital Certificate:-

  1. Identity Certificates

An Identity Certificate is one that contains a signature verification key combined with sufficient information to identify (hopefully uniquely) the key holder.

This type of certificate is much subtler than might first be imagined and will be considered in more detail later.

  2. Accreditation Certificates

This is a certificate that identifies the key holder as a member of a specified group or organization without necessarily identifying them.

For example, such a certificate could indicate that the key holder is a medical doctor or a lawyer.

In many circumstances, a particular signature is needed to authorize a transaction but the identity of the key holder is not relevant.

For example, pharmacists might need to ensure that medical prescriptions are signed by doctors but they do not need to know the specific identities of the doctors involved.

Here the certificate states, in effect, that 'the key holder, whoever they are, has permission to write medical prescriptions'.

Accreditation certificates can also be viewed as authorization (or permission) certificates.

It might be thought that a doctor’s key without identity would undermine the ability to audit the issue of medical prescriptions.

However, while such a certificate might not contain key-holder identity data, the certificate issuer will know it, so such requirements can be met if necessary.

  3. Authorizations and Permission Certificates

In these forms of certificate, the certificate signing authority delegates some form of authority to the key being signed.

For example, a Bank will issue an authorization certificate to its customers saying ‘the key in this certificate can be used to authorize the withdrawal of money from account number 271828’.

In general, the owner of any resource that involves electronic access can use an authorization certificate to control access to it.

Other examples include control of access to secure computing facilities and to World Wide Web pages.

In banking an identity certificate might be used to set up an account but the authorization certificate for the account will not itself contain identity data.

To identify the owner of a certificate a bank will typically look up the link between account numbers and owners in its internal databases.

Placing such information in an authorization certificate is actually undesirable since it could expose the bank or its customers to additional risks.

The Parties to a Digital Certificate

In principle there are three different interests associated with a digital certificate:

  1. The Requesting Party

The party who needs the certificate and will offer it for use by others; they will generally provide some or all of the information it contains.

  2. The Issuing Party

The party that digitally signs the certificate after creating the information in the certificate or checking its correctness.

  3. The Verifying Party (or Parties)

The parties that validate the signature on the certificate and then rely on its contents for some purpose.

For example, a person (the requesting party) might present paper documents giving proof of identity to a government agency (the issuing party), which will then provide an identity certificate that could be used by a bank (the verifying party) when the requesting party opens a bank account.

The term 'relying party' is sometimes used instead of 'verifying party', but this can be misleading, since the real purpose is to identify a party who checks the certificate before relying on it.

In a credit card transaction many parties might handle a certificate, and hence rely on it in some way, but only a few of these might actually check the validity of the certificate.

Hence a 'verifying party' is a party that checks and then relies on the contents of a certificate, not just one that depends on it without checking its validity.

The actual parties involved in using a certificate will vary depending on the type of certificate.