
Process Management in Linux

Process Types

Before we start talking about Linux process management, we should review process types. There are five common types of processes:

  • Parent process
  • Child process
  • Orphan Process
  • Daemon Process
  • Zombie Process

A parent process is a process that creates other processes by running the fork() system call. Every process except process 0 has a parent process.

A child process is a process created by a parent process.

An orphan process is a process that continues running after its parent process has terminated or finished. Orphan processes are adopted by init (or systemd), which is why they do not become zombie processes.

A daemon process is a background process, usually created by a parent that forks a child and then exits, leaving the child running detached from any terminal.

A zombie process is a process that has terminated but still has an entry in the process table because its parent has not yet collected its exit status.
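To see these states in practice, you can list any processes whose state column marks them as zombies (a generic ps/awk one-liner, not specific to any distro):

$ ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'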

Memory Management

In server administration, memory management is one of the responsibilities you should care about as a system administrator.

One of the most used commands in Linux process management is the free command:

$ free -m

The -m option shows values in megabytes.
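For illustration, the output looks like the following (these totals are made-up sample numbers chosen to match the figures discussed below; your system will differ):

              total        used        free      shared  buff/cache   available
Mem:           1839         536         326          16         977        1221
Swap:          3070           0        3070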


Our main concern is the buff/cache column.

The output of the free command here shows that 536 megabytes are used while 1221 megabytes are available.

The second line is the swap. Swapping occurs when memory becomes crowded.

The first value is the total swap size which is 3070 megabytes.

The second value is the used swap which is 0.

The third value is the available swap for usage which is 3070.

From the above results, you can say that the memory status is good since no swap is used. While we are on the subject of swap, let’s discover what the proc directory tells us about it.

$ cat /proc/swaps


This command shows the swap size and how much of it is used.

$ cat /proc/sys/vm/swappiness

This command shows a value from 0 to 100. Roughly speaking, a swappiness of 70 means the kernel will start to use the swap when memory is about 70% used.

Notice: the default value on most distros is between 30 and 60. You can modify it like this:

$ echo 50 > /proc/sys/vm/swappiness

Or use the sysctl command like this:

$ sudo sysctl -w vm.swappiness=50

Changing the swappiness value with the above commands is not permanent; to persist it, you have to write it in the /etc/sysctl.conf file like this:

$ nano /etc/sysctl.conf

vm.swappiness=50
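After saving the file, you can apply the setting without rebooting; by default sysctl -p reads /etc/sysctl.conf:

$ sudo sysctl -p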


Cool!!

The swappiness value measures the chance that the kernel will transfer a process from memory to the swap.

Choosing the right swappiness value for your server requires some experimentation.

Managing virtual memory with vmstat

Another important command in Linux process management is vmstat. The vmstat command gives a summary report about memory, processes, and paging.

$ vmstat -a

The -a option shows active and inactive memory.

01-linux-process-managment-free-command

And this is the important column outputs from this command:

si: How much memory is swapped in from disk.

so: How much memory is swapped out to disk.

bi: Blocks received from a block device (blocks in).

bo: Blocks sent to a block device (blocks out).

us: The user time.

sy: The system time.

id: The idle time.

Our main concern is the (si) and (so) columns, where the (si) column shows page-ins while the (so) column shows page-outs.

A better way to look at these values is by viewing the output with a delay option like this:

$ vmstat 2 5

01-linux-process-managment-free-command

Where 2 is the delay in seconds and 5 is the number of times vmstat is called. It shows five updates of the command and all data is presented in kilobytes.

Page-in (si) happens when you start an application and the information is paged-in. Page out (so) happens when the kernel is freeing up memory.

System Load & top Command

In Linux process management, the top command gives you a list of the running processes and how they are using CPU and memory; the output is real-time data.

If you have a dual-core system, the first core may be at 40 percent and the second at 70 percent; in this case, the top command may show a combined result of 110 percent, but you will not know the individual values for each core.

$ top -c


We use the -c option to show the command line or the executable path behind each process.

You can press the 1 key while watching the top statistics to show individual CPU statuses.

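If you want to capture the same data non-interactively, for a script or a log, top has a standard batch mode:

$ top -b -n 1 | head -20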

Keep in mind that some programs spawn child processes, so you will see multiple processes for the same program, such as httpd and PHP-FPM.
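To see such parent/child relationships at a glance, you can print a process tree; httpd here is just an illustrative process name:

$ ps --forest -C httpd -o pid,ppid,cmd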

You shouldn’t rely on the top command alone; review other resources before taking any final action.

Monitoring Disk I/O with iotop

A system can become slow as a result of high disk activity, so it is important to monitor disk activity and figure out which processes or users cause it.

The iotop command in Linux process management helps us monitor disk I/O in real time. You can install it if you don’t have it:

$ yum install iotop
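On Debian/Ubuntu systems, the equivalent would be:

$ sudo apt-get install iotop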

Running iotop without any options lists all processes.

To view only the processes that cause disk activity, you should use the -o option:

$ iotop -o


You can easily see which program is impacting the system.

ps command

We’ve talked about the ps command in a previous post, including how to sort processes by memory usage and CPU usage.

Monitoring System Health with iostat and lsof

The iostat command gives you a CPU utilization report; use it with the -c option to display the report.

$ iostat -c

The output is easy to understand, but if the system is busy, you will see %iowait increase, which means the server is transferring or copying a lot of files.

With this command, you can check the read and write operations, build a solid picture of what is hogging your disk, and take the right decision.

Additionally, the lsof command is used to list open files.

The lsof command shows which executable is using each file, the process ID, the user, and the name of the opened file.
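For example, to narrow the (very long) default listing to a single user or a single process, lsof accepts standard -u and -p filters; the username and PID below are placeholders:

$ lsof -u <username>
$ lsof -p <PID>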

Calculating the system load

Calculating system load is very important in Linux process management. The system load is the amount of processing the system is currently performing. It is not a perfect way to measure system performance, but it gives you some evidence.

The load is calculated like this:

Actual Load = Total Load (uptime) / No. of CPUs
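For example, on a server with 4 CPU cores (you can check the count with the nproc command), a 1-minute load of 2.00 gives an actual load of 2.00 / 4 = 0.5, i.e. the machine is at half capacity.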

You can get the load averages from the uptime command or the top command:

$ uptime

$ top

The server load is shown averaged over the last 1, 5, and 15 minutes.

As you can see, the average load is 0.00 over the last minute, 0.01 over the last five minutes, and 0.05 over the last fifteen minutes.

When the load increases, processes are queued; if there are multiple processor cores, the load is distributed across the server’s cores to balance the work.

You can say that a good load average is about 1. This does not mean that a load exceeding 1 indicates a problem, but if you begin to see higher numbers for a long time, the system is under sustained high load and there is a problem.

pgrep and systemctl

You can get the process ID using the pgrep command followed by the service name.

$ pgrep servicename

This command shows the process ID or PID.

Note that if this command shows more than one process ID, as with httpd or sshd, the smallest process ID is usually the parent process ID.
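You can also ask pgrep directly for the oldest matching process, which is typically the parent, using its standard -o option:

$ pgrep -o httpd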

On the other hand, you can use the systemctl command to get the main PID like this:

$ systemctl status <service_name>.service

There are more ways to obtain the required process ID or parent process ID, but this one is easy and straightforward.

Managing Services with systemd

If we are going to talk about Linux process management, we should take a look at systemd, which is responsible for controlling how services are managed on modern Linux systems such as CentOS 7.

Instead of using the chkconfig command to enable and disable a service during boot, you can use the systemctl command.
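For example, using httpd as an illustrative service name:

$ sudo systemctl enable httpd.service
$ sudo systemctl disable httpd.service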

Systemd also ships with its own version of the top command: to show the processes associated with a specific service, you can use the systemd-cgtop command like this:

$ systemd-cgtop

As you can see, it lists all associated processes along with the path, the number of tasks, the percentage of CPU used, memory allocation, and the related inputs and outputs.

The systemd-cgls command can be used to output a recursive list of control group content like this:

$ systemd-cgls

This command gives us very useful information that can help you make your decision.

Nice and Renice Processes

The process nice value is a numeric hint to the scheduler about how the process competes for the CPU.

A high nice value indicates a low priority for your process: the higher the value, the nicer you are to other users, and that is where the name comes from.

The nice range is from -20 to +19.

The nice command sets the nice value for a process at creation time, while the renice command adjusts the value later.

$ nice -n 5 ./myscript

This command increases the nice value by 5, which means a lower priority.

$ sudo renice 5 2213

This command changes the nice value of the process with PID 2213 to 5; decreasing the nice value means increased priority.

A regular user can increase a process’s nice value (lowering its priority) but cannot decrease it (raising its priority), while the root user can do both.
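You can verify the current nice value of a process with ps; 2213 is the example PID used above:

$ ps -o pid,ni,comm -p 2213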

Sending the kill signal

To kill a service or application that causes a problem, you can issue a termination signal (SIGTERM). You can review the previous post about signals and jobs.

$ kill <PID>

This method is called a safe kill. However, depending on your situation, you may need to send a hangup signal (SIGHUP), which many daemons treat as a request to reload, like this:

$ kill -1 <PID>

Sometimes the safe kill and the reload fail to do anything; you can then send the SIGKILL signal using the -9 option, which is called a forced kill.

$ kill -9 <PID>

There are no cleanup operations or safe exit with this command, so it is not preferred. However, you can kill a process by name instead of PID using the pkill command.

$ pkill -9 serviceName

And you can use pgrep command to ensure that all associated processes are killed.

$ pgrep serviceName

I hope you now have a good idea about Linux process management and how to take the right actions to keep the system healthy.

Thank you


How To Patch and Protect Linux Kernel Stack Clash Vulnerability CVE-2017-1000364 [ 19/June/2017 ]

A very serious security problem has been found in the Linux kernel called “The Stack Clash.” It can be exploited by attackers to corrupt memory and execute arbitrary code. An attacker could leverage this with another vulnerability to execute arbitrary code and gain administrative/root account privileges. How do I fix this problem on Linux?

The Qualys Research Labs discovered various problems in the dynamic linker of the GNU C Library (CVE-2017-1000366) which allow local privilege escalation by clashing the stack, affecting the Linux kernel as well. This bug affects Linux, OpenBSD, NetBSD, FreeBSD and Solaris, on i386 and amd64. It can be exploited by attackers to corrupt memory and execute arbitrary code.

What is CVE-2017-1000364 bug?

From RHN:

A flaw was found in the way memory was being allocated on the stack for user space binaries. If heap (or different memory region) and stack memory regions were adjacent to each other, an attacker could use this flaw to jump over the stack guard gap, cause controlled memory corruption on process stack or the adjacent memory region, and thus increase their privileges on the system. This is a kernel-side mitigation which increases the stack guard gap size from one page to 1 MiB to make successful exploitation of this issue more difficult.

As per the original research post:

Each program running on a computer uses a special memory region called the stack. This memory region is special because it grows automatically when the program needs more stack memory. But if it grows too much and gets too close to another memory region, the program may confuse the stack with the other memory region. An attacker can exploit this confusion to overwrite the stack with the other memory region, or the other way around.

A list of affected Linux distros

  1. Red Hat Enterprise Linux Server 5.x
  2. Red Hat Enterprise Linux Server 6.x
  3. Red Hat Enterprise Linux Server 7.x
  4. CentOS Linux Server 5.x
  5. CentOS Linux Server 6.x
  6. CentOS Linux Server 7.x
  7. Oracle Enterprise Linux Server 5.x
  8. Oracle Enterprise Linux Server 6.x
  9. Oracle Enterprise Linux Server 7.x
  10. Ubuntu 17.10
  11. Ubuntu 17.04
  12. Ubuntu 16.10
  13. Ubuntu 16.04 LTS
  14. Ubuntu 12.04 ESM (Precise Pangolin)
  15. Debian 9 stretch
  16. Debian 8 jessie
  17. Debian 7 wheezy
  18. Debian unstable
  19. SUSE Linux Enterprise Desktop 12 SP2
  20. SUSE Linux Enterprise High Availability 12 SP2
  21. SUSE Linux Enterprise Live Patching 12
  22. SUSE Linux Enterprise Module for Public Cloud 12
  23. SUSE Linux Enterprise Build System Kit 12 SP2
  24. SUSE Openstack Cloud Magnum Orchestration 7
  25. SUSE Linux Enterprise Server 11 SP3-LTSS
  26. SUSE Linux Enterprise Server 11 SP4
  27. SUSE Linux Enterprise Server 12 SP1-LTSS
  28. SUSE Linux Enterprise Server 12 SP2
  29. SUSE Linux Enterprise Server for Raspberry Pi 12 SP2

Do I need to reboot my box?

Yes, as most services depend upon the dynamic linker of the GNU C Library, and the kernel itself needs to be reloaded in memory.

How do I fix CVE-2017-1000364 on Linux?

Type the commands as per your Linux distro. You need to reboot the box. Before you apply the patch, note down your current kernel version:
$ uname -a
$ uname -mrs

Sample outputs:

Linux 4.4.0-78-generic x86_64

Debian or Ubuntu Linux

Type the following apt command/apt-get command to apply updates:
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
Sample outputs:

Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  libc-bin libc-dev-bin libc-l10n libc6 libc6-dev libc6-i386 linux-compiler-gcc-6-x86 linux-headers-4.9.0-3-amd64 linux-headers-4.9.0-3-common linux-image-4.9.0-3-amd64
  linux-kbuild-4.9 linux-libc-dev locales multiarch-support
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/62.0 MB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc6-i386_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-i386 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc6-dev_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-dev:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc-dev-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-dev-bin (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../linux-libc-dev_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../libc6_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc6:amd64 (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-bin (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc-bin (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../multiarch-support_2.24-11+deb9u1_amd64.deb ...
Unpacking multiarch-support (2.24-11+deb9u1) over (2.24-11) ...
Setting up multiarch-support (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../0-libc-l10n_2.24-11+deb9u1_all.deb ...
Unpacking libc-l10n (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../1-locales_2.24-11+deb9u1_all.deb ...
Unpacking locales (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../2-linux-compiler-gcc-6-x86_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../3-linux-headers-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../4-linux-headers-4.9.0-3-common_4.9.30-2+deb9u1_all.deb ...
Unpacking linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../5-linux-kbuild-4.9_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../6-linux-image-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Setting up linux-libc-dev:amd64 (4.9.30-2+deb9u1) ...
Setting up linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) ...
Setting up libc6-i386 (2.24-11+deb9u1) ...
Setting up linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) ...
Setting up linux-kbuild-4.9 (4.9.30-2+deb9u1) ...
Setting up libc-l10n (2.24-11+deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up libc-dev-bin (2.24-11+deb9u1) ...
Setting up linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-4.9.0-3-amd64
cryptsetup: WARNING: failed to detect canonical device of /dev/md0
cryptsetup: WARNING: could not determine root device from /etc/fstab
W: initramfs-tools configuration sets RESUME=UUID=054b217a-306b-4c18-b0bf-0ed85af6c6e1
W: but no matching swap device is available.
I: The initramfs will attempt to resume from /dev/md1p1
I: (UUID=bf72f3d4-3be4-4f68-8aae-4edfe5431670)
I: Set the RESUME variable to override this.
/etc/kernel/postinst.d/zz-update-grub:
Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /boot/vmlinuz-4.9.0-3-amd64
Found kernel: /boot/vmlinuz-3.16.0-4-amd64
Updating /boot/grub/menu.lst ... done

Setting up libc6-dev:amd64 (2.24-11+deb9u1) ...
Setting up locales (2.24-11+deb9u1) ...
Generating locales (this might take a while)...
  en_IN.UTF-8... done
Generation complete.
Setting up linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...

Reboot your server/desktop using reboot command:
$ sudo reboot

Oracle/RHEL/CentOS/Scientific Linux

Type the following yum command:
$ sudo yum update
$ sudo reboot

Fedora Linux

Type the following dnf command:
$ sudo dnf update
$ sudo reboot

Suse Enterprise Linux or Opensuse Linux

Type the following zypper command:
$ sudo zypper patch
$ sudo reboot

SUSE OpenStack Cloud 6

$ sudo zypper in -t patch SUSE-OpenStack-Cloud-6-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server for SAP 12-SP1

$ sudo zypper in -t patch SUSE-SLE-SAP-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server 12-SP1-LTSS

$ sudo zypper in -t patch SUSE-SLE-SERVER-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Module for Public Cloud 12

$ sudo zypper in -t patch SUSE-SLE-Module-Public-Cloud-12-2017-996=1
$ sudo reboot

Verification

You need to make sure your kernel version number changed after issuing the reboot command:
$ uname -a
$ uname -r
$ uname -mrs

Sample outputs:

Linux 4.4.0-81-generic x86_64

Linux security alert: Bug in sudo’s get_process_ttyname() [ CVE-2017-1000367 ]

There is a serious vulnerability in the sudo command that grants root access to anyone with a shell account. It works on SELinux-enabled systems such as CentOS/RHEL and others too. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this to overwrite any file on the filesystem, bypassing intended permissions, or to gain a root shell.
From the description

We discovered a vulnerability in Sudo’s get_process_ttyname() for Linux: this function opens “/proc/[pid]/stat” (man proc) and reads the device number of the tty from field 7 (tty_nr). Unfortunately, these fields are space-separated and field 2 (comm, the filename of the command) can contain spaces (CVE-2017-1000367).

For example, if we execute Sudo through the symlink “./ 1 “, get_process_ttyname() calls sudo_ttyname_dev() to search for the non-existent tty device number “1” in the built-in search_devs[].

Next, sudo_ttyname_dev() calls the function sudo_ttyname_scan() to search for this non-existent tty device number “1” in a breadth-first traversal of “/dev”.

Last, we exploit this function during its traversal of the world-writable “/dev/shm”: through this vulnerability, a local user can pretend that his tty is any character device on the filesystem, and after two race conditions, he can pretend that his tty is any file on the filesystem.

On an SELinux-enabled system, if a user is Sudoer for a command that does not grant him full root privileges, he can overwrite any file on the filesystem (including root-owned files) with his command’s output, because relabel_tty() (in src/selinux.c) calls open(O_RDWR|O_NONBLOCK) on his tty and dup2()s it to the command’s stdin, stdout, and stderr. This allows any Sudoer user to obtain full root privileges.

A list of affected Linux distros

  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

How do I patch sudo on Debian/Ubuntu Linux server?

To patch Ubuntu/Debian Linux, type the following apt-get command/apt command:
$ sudo apt update
$ sudo apt upgrade

How do I patch sudo on CentOS/RHEL/Scientific/Oracle Linux server?

Run yum command:
$ sudo yum update

How do I patch sudo on Fedora Linux server?

Run dnf command:
$ sudo dnf update

How do I patch sudo on Suse/OpenSUSE Linux server?

Run zypper command:
$ sudo zypper update

How do I patch sudo on Arch Linux server?

Run pacman command:
$ sudo pacman -Syu

How do I patch sudo on Alpine Linux server?

Run apk command:
# apk update && apk upgrade

How do I patch sudo on Slackware Linux server?

Run upgradepkg command:
# upgradepkg sudo-1.8.20p1-i586-1_slack14.2.txz

How do I patch sudo on Gentoo Linux server?

Run emerge command:
# emerge --sync
# emerge --ask --oneshot --verbose ">=app-admin/sudo-1.8.20_p1"


Impermanence in Linux – Exclusive (By Hari Iyer)

Impermanence, also called Anicca or Anitya, is one of the essential doctrines and a part of the three marks of existence in Buddhism. The doctrine asserts that all of conditioned existence, without exception, is “transient, evanescent, inconstant”.

On Linux, the root of all randomness is something called the kernel entropy pool. This is a large (4,096 bit) number kept privately in the kernel’s memory. There are 2^4096 possibilities for this number so it can contain up to 4,096 bits of entropy. There is one caveat – the kernel needs to be able to fill that memory from a source with 4,096 bits of entropy. And that’s the hard part: finding that much randomness.

The entropy pool is used in two ways: random numbers are generated from it and it is replenished with entropy by the kernel. When random numbers are generated from the pool the entropy of the pool is diminished (because the person receiving the random number has some information about the pool itself). So as the pool’s entropy diminishes as random numbers are handed out, the pool must be replenished.

Replenishing the pool is called stirring: new sources of entropy are stirred into the mix of bits in the pool.

This is the key to how random number generation works on Linux. If randomness is needed, it’s derived from the entropy pool. When available, other sources of randomness are used to stir the entropy pool and make it less predictable. The details are a little mathematical, but it’s interesting to understand how the Linux random number generator works as the principles and techniques apply to random number generation in other software and systems.

The kernel keeps a rough estimate of the number of bits of entropy in the pool. You can check the value of this estimate through the following command:

cat /proc/sys/kernel/random/entropy_avail

A healthy Linux system with a lot of entropy available will return close to the full 4,096 bits of entropy. If the value returned is less than 200, the system is running low on entropy.
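To keep an eye on this value over time, you can wrap the same read in the standard watch utility (the 1-second interval is just a choice):

$ watch -n 1 cat /proc/sys/kernel/random/entropy_avail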

The kernel is watching you

I mentioned that the system takes other sources of randomness and uses this to stir the entropy pool. This is achieved using something called a timestamp.

Most systems have precise internal clocks. Every time a user interacts with a system, the value of the clock at that time is recorded as a timestamp. Even though the year, month, day and hour are generally guessable, the millisecond and microsecond are not, and therefore the timestamp contains some entropy. Timestamps obtained from the user’s mouse and keyboard, along with timing information from the network and disk, each have different amounts of entropy.

How does the entropy found in a timestamp get transferred to the entropy pool? Simple, use math to mix it in. Well, simple if you like math.

Just mix it in

A fundamental property of entropy is that it mixes well. If you take two unrelated random streams and combine them, the new stream cannot have less entropy. Taking a number of low entropy sources and combining them results in a high entropy source.

All that’s needed is the right combination function: a function that can be used to combine two sources of entropy. One of the simplest such functions is the logical exclusive or (XOR). The truth table below shows how bits x and y coming from different random streams are combined by the XOR function.
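For reference, the XOR truth table:

x | y | x XOR y
0 | 0 |    0
0 | 1 |    1
1 | 0 |    1
1 | 1 |    0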

Even if one source of bits does not have much entropy, there is no harm in XORing it into another source. Entropy always increases. In the Linux kernel, a combination of XORs is used to mix timestamps into the main entropy pool.

Generating random numbers

Cryptographic applications require very high entropy. If a 128-bit key is generated with only 64 bits of entropy then it can be guessed in 2^64 attempts instead of 2^128 attempts. That is the difference between needing a thousand computers running for a few years to brute force the key versus needing all the computers ever created running for longer than the history of the universe to do so.

Cryptographic applications require close to one bit of entropy per bit. If the system’s pool has fewer than 4,096 bits of entropy, how does the system return a fully random number? One way to do this is to use a cryptographic hash function.

A cryptographic hash function takes an input of any size and outputs a fixed size number. Changing one bit of the input will change the output completely. Hash functions are good at mixing things together. This mixing property spreads the entropy from the input evenly through the output. If the input has more bits of entropy than the size of the output, the output will be highly random. This is how highly entropic random numbers are derived from the entropy pool.

The hash function used by the Linux kernel is the standard SHA-1 cryptographic hash. By hashing the entire pool and doing some additional arithmetic, 160 random bits are created for use by the system. When this happens, the system lowers its estimate of the entropy in the pool accordingly.

Above I noted that applying a hash like SHA-1 could be dangerous if there isn’t enough entropy in the pool. That’s why it’s critical to keep an eye on the available system entropy: if it drops too low, the output of the random number generator could have less entropy than it appears to have.

Running out of entropy

One of the dangers for a system is running out of entropy. When the system’s entropy estimate drops to around the 160-bit level, the length of a SHA-1 hash, things get tricky, and how they affect programs and performance depends on which of the two Linux random number generators is used.

Linux exposes two interfaces for random data that behave differently when the entropy level is low. They are /dev/random and /dev/urandom. When the entropy pool becomes predictable, both interfaces for requesting random numbers become problematic.

When the entropy level is too low, /dev/random blocks and does not return until the level of entropy in the system is high enough. This guarantees high entropy random numbers. If /dev/random is used in a time-critical service and the system runs low on entropy, the delays could be detrimental to the quality of service.

On the other hand, /dev/urandom does not block. It continues to return the hashed value of its entropy pool even when there is little to no entropy in it. This low-entropy data is not suited for cryptographic use.
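For example, you can read a few bytes from each interface and dump them in hex with the standard od tool; on an entropy-starved system the /dev/random read may pause, while /dev/urandom returns immediately:

$ head -c 16 /dev/random | od -An -tx1
$ head -c 16 /dev/urandom | od -An -tx1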

The solution to the problem is to simply add more entropy into the system.

Hardware random number generation to the rescue?

Intel’s Ivy Bridge family of processors has an interesting feature called “secure key.” These processors contain a special piece of hardware that generates random numbers. The single assembly instruction RDRAND returns allegedly high-entropy random data derived on the chip.

It has been suggested that Intel’s hardware number generator may not be fully random. Since it is baked into the silicon, that assertion is hard to audit and verify. As it turns out, even if the numbers generated have some bias, it can still help as long as this is not the only source of randomness in the system. Even if the random number generator itself had a back door, the mixing property of randomness means that it cannot lower the amount of entropy in the pool.

On Linux, if a hardware random number generator is present, the Linux kernel will use the XOR function to mix the output of RDRAND into the hash of the entropy pool. This happens here in the Linux source code (the XOR operator is ^ in C).

Third party entropy generators

Hardware number generation is not available everywhere, and the sources of randomness polled by the Linux kernel itself are somewhat limited. For this situation, a number of third party random number generation tools exist. Examples of these are haveged, which relies on processor cache timing, audio-entropyd and video-entropyd which work by sampling the noise from an external audio or video input device. By mixing these additional sources of locally collected entropy into the Linux entropy pool, the entropy can only go up.


10 Important “rsync” command – UNIX

Rsync (Remote Sync) is one of the most commonly used commands for copying and synchronizing files and directories remotely as well as locally on Linux/Unix systems. With the help of the rsync command you can copy and synchronize your data remotely and locally across directories, disks, and networks, perform data backups, and mirror between two Linux machines.

This article explains 10 basic and advanced uses of the rsync command to transfer your files remotely and locally on Linux-based machines. You don’t need to be the root user to run the rsync command.

Some advantages and features of Rsync command
  1. It efficiently copies and syncs files to or from a remote system.
  2. Supports copying links, devices, owners, groups and permissions.
  3. It’s faster than scp (Secure Copy) because rsync uses a remote-update protocol which allows it to transfer just the differences between two sets of files. The first time, it copies the whole content of a file or a directory from source to destination, but from then on, it copies only the changed blocks and bytes to the destination.
  4. Rsync consumes less bandwidth as it uses compression and decompression while sending and receiving data at both ends.
Basic syntax of rsync command
# rsync options source destination
Some common options used with rsync commands
  1. -v : verbose
  2. -r : copies data recursively (but doesn’t preserve timestamps and permissions while transferring data)
  3. -a : archive mode, archive mode allows copying files recursively and it also preserves symbolic links, file permissions, user & group ownerships and timestamps
  4. -z : compress file data
  5. -h : human-readable, output numbers in a human-readable format

 

Install rsync in your Linux machine

We can install rsync package with the help of following command.

# yum install rsync (On Red Hat based systems)
# apt-get install rsync (On Debian based systems)

1. Copy/Sync Files and Directory Locally

Copy/Sync a File on a Local Computer

The following command will sync a single file on a local machine from one location to another. In this example, a file named backup.tar needs to be copied or synced to the /tmp/backups/ folder.

[root@tecmint]# rsync -zvh backup.tar /tmp/backups/
created directory /tmp/backups
backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10

In the above example, you can see that if the destination directory does not already exist, rsync will create it automatically.

Copy/Sync a Directory on Local Computer

The following command will transfer or sync all the files from one directory to a different directory on the same machine. In this example, /root/rpmpkgs contains some rpm package files and you want that directory copied inside the /tmp/backups/ folder.

[root@tecmint]# rsync -avzh /root/rpmpkgs /tmp/backups/
sending incremental file list
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/nagios-3.5.0.tar.gz
rpmpkgs/nagios-plugins-1.4.16.tar.gz
sent 4.99M bytes  received 92 bytes  3.33M bytes/sec
total size is 4.99M  speedup is 1.00

2. Copy/Sync Files and Directory to or From a Server

Copy a Directory from Local Server to a Remote Server

This command will sync a directory from a local machine to a remote machine. For example, there is a folder on your local computer, “rpmpkgs”, which contains some RPM packages, and you want to send that local directory’s content to a remote server; you can use the following command.

[root@tecmint]$ rsync -avz rpmpkgs/ root@192.168.0.101:/home/
root@192.168.0.101's password:
sending incremental file list
./
httpd-2.2.3-82.el5.centos.i386.rpm
mod_ssl-2.2.3-82.el5.centos.i386.rpm
nagios-3.5.0.tar.gz
nagios-plugins-1.4.16.tar.gz
sent 4993369 bytes  received 91 bytes  399476.80 bytes/sec
total size is 4991313  speedup is 1.00
Copy/Sync a Remote Directory to a Local Machine

This command will help you sync a remote directory to a local directory. In this example, the directory /home/tarunika/rpmpkgs on a remote server is being copied to /tmp/myrpms on your local computer.

[root@tecmint]# rsync -avzh root@192.168.0.100:/home/tarunika/rpmpkgs /tmp/myrpms
root@192.168.0.100's password:
receiving incremental file list
created directory /tmp/myrpms
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/nagios-3.5.0.tar.gz
rpmpkgs/nagios-plugins-1.4.16.tar.gz
sent 91 bytes  received 4.99M bytes  322.16K bytes/sec
total size is 4.99M  speedup is 1.00

3. Rsync Over SSH

With rsync, we can use SSH (Secure Shell) for data transfer. Using the SSH protocol while transferring data ensures the transfer happens over a secure, encrypted connection, so nobody can read your data while it moves over the wire on the internet.

Also, when we use rsync we need to provide the user/root password to accomplish that task; using the SSH option sends your logins in an encrypted manner so that your password stays safe.

Copy a File from a Remote Server to a Local Server with SSH

To specify a protocol with rsync, you need to give the “-e” option with the protocol name you want to use. In this example, we will use “ssh” with the “-e” option to perform the data transfer.

[root@tecmint]# rsync -avzhe ssh root@192.168.0.100:/root/install.log /tmp/
root@192.168.0.100's password:
receiving incremental file list
install.log
sent 30 bytes  received 8.12K bytes  1.48K bytes/sec
total size is 30.74K  speedup is 3.77
Copy a File from a Local Server to a Remote Server with SSH
[root@tecmint]# rsync -avzhe ssh backup.tar root@192.168.0.100:/backups/
root@192.168.0.100's password:
sending incremental file list
backup.tar
sent 14.71M bytes  received 31 bytes  1.28M bytes/sec
total size is 16.18M  speedup is 1.10

 

4. Show Progress While Transferring Data with rsync

To show progress while transferring data from one machine to another, use the ‘–progress’ option. It displays the files and the time remaining to complete the transfer.

[root@tecmint]# rsync -avzhe ssh --progress /home/rpmpkgs root@192.168.0.100:/root/rpmpkgs
root@192.168.0.100's password:
sending incremental file list
created directory /root/rpmpkgs
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
1.02M 100%        2.72MB/s        0:00:00 (xfer#1, to-check=3/5)
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
99.04K 100%  241.19kB/s        0:00:00 (xfer#2, to-check=2/5)
rpmpkgs/nagios-3.5.0.tar.gz
1.79M 100%        1.56MB/s        0:00:01 (xfer#3, to-check=1/5)
rpmpkgs/nagios-plugins-1.4.16.tar.gz
2.09M 100%        1.47MB/s        0:00:01 (xfer#4, to-check=0/5)
sent 4.99M bytes  received 92 bytes  475.56K bytes/sec
total size is 4.99M  speedup is 1.00

5. Use of –include and –exclude Options

These two options allow us to include and exclude files by specifying parameters. They help us specify the files or directories you want included in your sync and exclude the files and folders you don’t want transferred.

In this example, the rsync command will include only those files and directories which start with ‘R’ and exclude all others.

[root@tecmint]# rsync -avze ssh --include 'R*' --exclude '*' root@192.168.0.101:/var/lib/rpm/ /root/rpm
root@192.168.0.101's password:
receiving incremental file list
created directory /root/rpm
./
Requirename
Requireversion
sent 67 bytes  received 167289 bytes  7438.04 bytes/sec
total size is 434176  speedup is 2.59

6. Use of –delete Option

If a file or directory does not exist at the source but already exists at the destination, you might want to delete that existing file/directory from the target while syncing.

We can use the ‘–delete‘ option to delete files from the destination that are not present in the source directory.

Source and target are in sync. Now we create a new file, test.txt, at the target.

[root@tecmint]# touch test.txt
[root@tecmint]# rsync -avz --delete root@192.168.0.100:/var/lib/rpm/ .
Password:
receiving file list ... done
deleting test.txt
./
sent 26 bytes  received 390 bytes  48.94 bytes/sec
total size is 45305958  speedup is 108908.55

The target had the new file called test.txt; when synchronized with the source using the ‘–delete‘ option, rsync removed it.

7. Set the Max Size of Files to be Transferred

You can specify the maximum file size to be transferred or synced using the “–max-size” option. In this example, the max file size is 200k, so this command will transfer only those files which are equal to or smaller than 200k.

[root@tecmint]# rsync -avzhe ssh --max-size='200k' /var/lib/rpm/ root@192.168.0.100:/root/tmprpm
root@192.168.0.100's password:
sending incremental file list
created directory /root/tmprpm
./
Conflictname
Group
Installtid
Name
Provideversion
Pubkeys
Requireversion
Sha1header
Sigmd5
Triggername
__db.001
sent 189.79K bytes  received 224 bytes  13.10K bytes/sec
total size is 38.08M  speedup is 200.43

8. Automatically Delete source Files after successful Transfer

Now suppose you have a main web server and a data backup server; you create a daily backup and sync it with your backup server, and you don’t want to keep that local copy of the backup on your web server.

So, will you wait for the transfer to complete and then delete those local backup files manually? Of course not. This automatic deletion can be done using the ‘–remove-source-files‘ option.

[root@tecmint]# rsync --remove-source-files -zvh backup.tar /tmp/backups/
backup.tar
sent 14.71M bytes  received 31 bytes  4.20M bytes/sec
total size is 16.18M  speedup is 1.10
[root@tecmint]# ll backup.tar
ls: backup.tar: No such file or directory

9. Do a Dry Run with rsync

If you are a newbie using rsync and don’t know exactly what your command is going to do, rsync could really mess up things in your destination folder, and undoing the damage can be a tedious job.

Using the ‘–dry-run‘ option will not make any changes; it only does a dry run of the command and shows the output. If the output shows exactly what you want to do, you can remove the ‘–dry-run‘ option from your command and run it for real.

[root@tecmint]# rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/
backup.tar
sent 35 bytes  received 15 bytes  100.00 bytes/sec
total size is 16.18M  speedup is 323584.00 (DRY RUN)

10. Set Bandwidth Limit and Transfer File

You can set a bandwidth limit while transferring data from one machine to another with the help of the ‘–bwlimit‘ option. This option helps us limit I/O bandwidth.

[root@tecmint]# rsync --bwlimit=100 -avzhe ssh  /var/lib/rpm/  root@192.168.0.100:/root/tmprpm/
root@192.168.0.100's password:
sending incremental file list
sent 324 bytes  received 12 bytes  61.09 bytes/sec
total size is 38.08M  speedup is 113347.05

Also, by default rsync syncs only changed blocks and bytes; if you explicitly want to sync the whole file, use the ‘-W‘ option with it.

[root@tecmint]# rsync -zvhW backup.tar /tmp/backups/backup.tar
backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10


$ rsync -azP “<user>@<host>:<absolute path>” <location to be copied>

Note that -P already implies –partial and –progress, so a separate –progress flag is redundant.

Source: tecmint.com

Fork: retry: Resource temporarily unavailable

Issue: 

It was reported that a particular application user is not able to log in.

Troubleshoot:
1. Tried logging in to the system with the root user; it was fine.
2. Tried to switch user; it failed with the error “Write Failed; Broken Pipe”.
3. Created a file and it was working.
4. Tried switching the user again. This time it went through.
5. Tried running some jobs with the user. It threw the error “fork: retry: Resource temporarily unavailable”.
6. Then checked the “/etc/security/limits.d/90-nproc.conf” file and found that all users are given an nproc limit of 1024.

Resolution:
1. Changed it to a higher value and it solved the issue.

2. I changed the value to 4096.
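For reference, a minimal sketch of the adjusted line in /etc/security/limits.d/90-nproc.conf (the domain and type fields below are assumptions; match them to your distro’s original file):

*          soft    nproc     4096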

How to use parallel ssh (PSSH) for executing ssh in parallel on a number of Linux/Unix/BSD servers

Recently I came across a nice little nifty tool called pssh that runs a single command on multiple Linux / UNIX / BSD servers. You can easily increase your productivity with this SSH tool.
More about pssh
pssh is a command line tool for executing ssh in parallel on a number of hosts. Its specialties include:
  1. Sending input to all of the processes
  2. Inputting a password to ssh
  3. Saving output to files
  4. IT/sysadmin task automation, such as patching servers
  5. Timing out and more
Let us see how to install and use pssh on Linux and Unix-like system.
Installation
You can install pssh as per your Linux and Unix variant. Once the package is installed, you get parallel versions of the openssh tools. Included in the installation:
  1. Parallel ssh (pssh command)
  2. Parallel scp (pscp command )
  3. Parallel rsync (prsync command)
  4. Parallel nuke (pnuke command)
  5. Parallel slurp (pslurp command)
Install pssh on Debian/Ubuntu Linux
Type the following apt-get command/apt command to install pssh:
$ sudo apt install pssh
OR
$ sudo apt-get install pssh
Sample outputs:
Fig.01: Installing pssh on Debian/Ubuntu Linux

Install pssh on Apple MacOS X
Type the following brew command:
$ brew install pssh
Sample outputs:
Fig.02: Installing pssh on MacOS Unix

Install pssh on FreeBSD unix
Type any one of the command:
# cd /usr/ports/security/pssh/ && make install clean
OR
# pkg install pssh
Sample outputs:
Fig.03: Installing pssh on FreeBSD

Install pssh on RHEL/CentOS/Fedora Linux
First turn on the EPEL repo and then type the following yum command:
$ sudo yum install pssh
Sample outputs:
Fig.04: Installing pssh on RHEL/CentOS/Red Hat Enterprise Linux

Install pssh on Fedora Linux
Type the following dnf command:
$ sudo dnf install pssh
Sample outputs:
Fig.05: Installing pssh on Fedora

Install pssh on Arch Linux
Type the following command:
$ sudo pacman -S python-pip
$ pip install pssh
How to use pssh command
First you need to create a text file called a hosts file, from which pssh reads host names. The syntax is pretty simple: each line in the hosts file is of the form [user@]host[:port], and blank lines and comment lines beginning with “#” are allowed. Here is my sample file named ~/.pssh_hosts_files:
$ cat ~/.pssh_hosts_files
vivek@dellm6700
root@192.168.2.30
root@192.168.2.45
root@192.168.2.46

Run the date command on all hosts:
$ pssh -i -h ~/.pssh_hosts_files date
Sample outputs:
[1] 18:10:10 [SUCCESS] root@192.168.2.46
Sun Feb 26 18:10:10 IST 2017
[2] 18:10:10 [SUCCESS] vivek@dellm6700
Sun Feb 26 18:10:10 IST 2017
[3] 18:10:10 [SUCCESS] root@192.168.2.45
Sun Feb 26 18:10:10 IST 2017
[4] 18:10:10 [SUCCESS] root@192.168.2.30
Sun Feb 26 18:10:10 IST 2017
Run the uptime command on each host:
$ pssh -i -h ~/.pssh_hosts_files uptime
Sample outputs:
[1] 18:11:15 [SUCCESS] root@192.168.2.45
18:11:15 up 2:29, 0 users, load average: 0.00, 0.00, 0.00
[2] 18:11:15 [SUCCESS] vivek@dellm6700
18:11:15 up 19:06, 0 users, load average: 0.13, 0.25, 0.27
[3] 18:11:15 [SUCCESS] root@192.168.2.46
18:11:15 up 1:55, 0 users, load average: 0.00, 0.00, 0.00
[4] 18:11:15 [SUCCESS] root@192.168.2.30
6:11PM up 1 day, 21:38, 0 users, load averages: 0.12, 0.14, 0.09
You can now automate common sysadmin tasks such as patching all servers:
$ pssh -h ~/.pssh_hosts_files -- sudo yum -y update
OR
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y update
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y upgrade
How do I use pssh to copy files to all servers?
The syntax is:
pscp -h ~/.pssh_hosts_files src dest
To copy $HOME/demo.txt to /tmp/ on all servers, enter:
$ pscp -h ~/.pssh_hosts_files $HOME/demo.txt /tmp/
Sample outputs:
[1] 18:17:35 [SUCCESS] vivek@dellm6700
[2] 18:17:35 [SUCCESS] root@192.168.2.45
[3] 18:17:35 [SUCCESS] root@192.168.2.46
[4] 18:17:35 [SUCCESS] root@192.168.2.30
Or use the prsync command for efficient copying of files:
$ prsync -h ~/.pssh_hosts_files /etc/passwd /tmp/
$ prsync -h ~/.pssh_hosts_files *.html /var/www/html/
How do I kill processes in parallel on a number of hosts?
Use the pnuke command for killing processes in parallel on a number of hosts. The syntax is:
$ pnuke -h ~/.pssh_hosts_files process_name
### kill nginx and firefox on hosts:
$ pnuke -h ~/.pssh_hosts_files firefox
$ pnuke -h ~/.pssh_hosts_files nginx

See pssh/pscp command man pages for more information.
Conclusion
pssh is a pretty good tool for parallel SSH command execution on many servers. It is quite useful if you have 5 or 10 servers. Nevertheless, if you need to do something complicated, you should look into Ansible and co.

30 Shades of “Alias” Command – UNIX

You can define various types of aliases as follows to save time and increase productivity.

#1: Control ls command output

The ls command lists directory contents and you can colorize the output:

## Colorize the ls output ##
alias ls='ls --color=auto'
 
## Use a long listing format ##
alias ll='ls -la' 
 
## Show hidden files ##
alias l.='ls -d .* --color=auto'

#2: Control cd command behavior

## get rid of command not found ##
alias cd..='cd ..' 
 
## a quick way to get out of current directory ##
alias ..='cd ..' 
alias ...='cd ../../../' 
alias ....='cd ../../../../' 
alias .....='cd ../../../../' 
alias .4='cd ../../../../' 
alias .5='cd ../../../../..'

#3: Control grep command output

grep command is a command-line utility for searching plain-text files for lines matching a regular expression:

## Colorize the grep command output for ease of use (good for log files)##
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'

#4: Start calculator with math support

alias bc='bc -l'

#4: Generate sha1 digest

alias sha1='openssl sha1'

#5: Create parent directories on demand

mkdir command is used to create a directory:

alias mkdir='mkdir -pv'

#6: Colorize diff output

You can compare files line by line using diff and use a tool called colordiff to colorize diff output:

# install  colordiff package 🙂
alias diff='colordiff'

#7: Make mount command output pretty and human readable format

alias mount='mount |column -t'

#8: Command short cuts to save time

# handy short cuts #
alias h='history'
alias j='jobs -l'

#9: Create a new set of commands

alias path='echo -e ${PATH//:/\\n}'
alias now='date +"%T"'
alias nowtime=now
alias nowdate='date +"%d-%m-%Y"'

#10: Set vim as default

alias vi=vim 
alias svi='sudo vi' 
alias vis='vim "+set si"' 
alias edit='vim'

#11: Control output of networking tool called ping

# Stop after sending count ECHO_REQUEST packets #
alias ping='ping -c 5'
# Do not wait interval 1 second, go fast #
alias fastping='ping -c 100 -s.2'

#12: Show open ports

Use netstat command to quickly list all TCP/UDP port on the server:

alias ports='netstat -tulanp'

#13: Wakeup sleeping servers

Wake-on-LAN (WOL) is an Ethernet networking standard that allows a server to be turned on by a network message. You can quickly wakeup nas devices and server using the following aliases:

## replace mac with your actual server mac address #
alias wakeupnas01='/usr/bin/wakeonlan 00:11:32:11:15:FC'
alias wakeupnas02='/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03='/usr/bin/wakeonlan 00:11:32:11:15:FE'

#14: Control firewall (iptables) output

Netfilter is a host-based firewall for Linux operating systems. It is included as part of the Linux distribution and is activated by default. This post lists the most common iptables shortcuts required by a new Linux user to secure his or her Linux operating system from intruders.

## shortcut  for iptables and pass it via sudo#
alias ipt='sudo /sbin/iptables'
 
# display all rules #
alias iptlist='sudo /sbin/iptables -L -n -v --line-numbers'
alias iptlistin='sudo /sbin/iptables -L INPUT -n -v --line-numbers'
alias iptlistout='sudo /sbin/iptables -L OUTPUT -n -v --line-numbers'
alias iptlistfw='sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall=iptlist

#15: Debug web server / cdn problems with curl

# get web server headers #
alias header='curl -I'
 
# find out if remote server supports gzip / mod_deflate or not #
alias headerc='curl -I --compress'

#16: Add safety nets

# do not delete / or prompt if deleting more than 3 files at a time #
alias rm='rm -I --preserve-root'
 
# confirmation #
alias mv='mv -i' 
alias cp='cp -i' 
alias ln='ln -i'
 
# Prevent changing perms on / #
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

#17: Update Debian Linux server

The apt-get command is used for installing packages over the internet (ftp or http). You can also upgrade all packages in a single operation:

# distro specific  - Debian / Ubuntu and friends #
# install with apt-get
alias apt-get="sudo apt-get" 
alias updatey="sudo apt-get --yes" 
 
# update on one command 
alias update='sudo apt-get update && sudo apt-get upgrade'

#18: Update RHEL / CentOS / Fedora Linux server

yum command is a package management tool for RHEL / CentOS / Fedora Linux and friends:

## distro specific RHEL/CentOS ##
alias update='yum update'
alias updatey='yum -y update'

#19: Tune sudo and su

# become root #
alias root='sudo -i'
alias su='sudo -i'

#20: Pass halt/reboot via sudo

shutdown command bring the Linux / Unix system down:

# reboot / halt / poweroff
alias reboot='sudo /sbin/reboot'
alias poweroff='sudo /sbin/poweroff'
alias halt='sudo /sbin/halt'
alias shutdown='sudo /sbin/shutdown'

#21: Control web servers

# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload='sudo /usr/local/nginx/sbin/nginx -s reload'
alias nginxtest='sudo /usr/local/nginx/sbin/nginx -t'
alias lightyload='sudo /etc/init.d/lighttpd reload'
alias lightytest='sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -t'
alias httpdreload='sudo /usr/sbin/apachectl -k graceful'
alias httpdtest='sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'

#22: Alias into our backup stuff

# if cron fails or if you want backup on demand just run these commands # 
# again pass it via sudo so whoever is in admin group can start the job #
# Backup scripts #
alias backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type local --target /raid1/backups'
alias nasbackup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01'
alias s3backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01 --auth /home/scripts/admin/.authdata/amazon.keys'
alias rsnapshothourly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotdaily='sudo  /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotweekly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotmonthly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias amazonbackup=s3backup

#23: Desktop specific – play avi/mp3 files on demand

## play video files in a current directory ##
# cd ~/Download/movie-name 
# playavi or vlc 
alias playavi='mplayer *.avi'
alias vlc='vlc *.avi'
 
# play all music files from the current directory #
alias playwave='for i in *.wav; do mplayer "$i"; done'
alias playogg='for i in *.ogg; do mplayer "$i"; done'
alias playmp3='for i in *.mp3; do mplayer "$i"; done'
 
# play files from nas devices #
alias nplaywave='for i in /nas/multimedia/wave/*.wav; do mplayer "$i"; done'
alias nplayogg='for i in /nas/multimedia/ogg/*.ogg; do mplayer "$i"; done'
alias nplaymp3='for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done'
 
# shuffle mp3/ogg etc by default #
alias music='mplayer --shuffle *'

#24: Set default interfaces for sys admin related commands

vnstat is a console-based network traffic monitor. dnstop is a console tool to analyze DNS traffic. The tcptrack and iftop commands display information about TCP/UDP connections seen on a network interface and bandwidth usage on an interface by host, respectively.

## On all of our servers, eth1 is connected to the Internet via vlan / router etc ##
alias dnstop='dnstop -l 5  eth1'
alias vnstat='vnstat -i eth1'
alias iftop='iftop -i eth1'
alias tcpdump='tcpdump -i eth1'
alias ethtool='ethtool eth1'
 
# work on wlan0 by default #
# Only useful for laptop as all servers are without wireless interface
alias iwconfig='iwconfig wlan0'

#25: Get system memory, cpu usage, and gpu memory info quickly

## pass options to free ## 
alias meminfo='free -m -l -t'
 
## get top process eating memory
alias psmem='ps auxf | sort -nr -k 4'
alias psmem10='ps auxf | sort -nr -k 4 | head -10'
 
## get top process eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
 
## Get server cpu info ##
alias cpuinfo='lscpu'
 
## older system use /proc/cpuinfo ##
##alias cpuinfo='less /proc/cpuinfo' ##
 
## get GPU ram on desktop / laptop## 
alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'

#26: Control Home Router

The curl command can be used to reboot Linksys routers.

# Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix.
alias rebootlinksys="curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'"
 
# Reboot tomato based Asus NT16 wireless bridge 
alias reboottomato="ssh admin@192.168.1.1 /sbin/reboot"

#27: Resume wget by default

GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, and it can resume downloads too:

## this one saved my butt so many times ##
alias wget='wget -c'

#28: Use different browsers for testing websites

alias ff4='/opt/firefox4/firefox'
alias ff13='/opt/firefox13/firefox'
alias chrome='/opt/google/chrome/chrome'
alias opera='/opt/opera/opera'
 
#default ff 
alias ff=ff13
 
#my default browser 
alias browser=chrome

#29: A note about ssh alias

Do not create ssh aliases; instead, use the ~/.ssh/config OpenSSH client configuration file. It offers more options. An example:

Host server10
  Hostname 1.2.3.4
  IdentityFile ~/backups/.ssh/id_dsa
  User foobar
  Port 30000
  ForwardX11Trusted yes
  TCPKeepAlive yes

You can now connect to server10 using the following syntax:
$ ssh server10

#30: It’s your turn to share…

## set some other defaults ##
alias df='df -H'
alias du='du -ch'
 
# top is atop, just like vi is vim
alias top='atop' 
 
## nfsrestart  - must be root  ##
## refresh nfs mount / cache etc for Apache ##
alias nfsrestart='sync && sleep 2 && /etc/init.d/httpd stop && umount netapp2:/exports/http && sleep 2 && mount -o rw,sync,rsize=32768,wsize=32768,intr,hard,proto=tcp,fsc netapp2:/exports/http /var/www/html && /etc/init.d/httpd start'
 
## Memcached server status  ##
alias mcdstats='/usr/bin/memcached-tool 10.10.27.11:11211 stats'
alias mcdshow='/usr/bin/memcached-tool 10.10.27.11:11211 display'
 
## quickly flush out memcached server ##
alias flushmcd='echo "flush_all" | nc 10.10.27.11 11211'
 
## Remove assets quickly from Akamai / Amazon cdn ##
alias cdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai'
alias amzcdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon'
 
## supply list of urls via file or stdin
alias cdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdin'
alias amzcdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'


Linux – Concepts – IPTABLES v/s FIREWALLD

Today we will walk through iptables and firewalld: we will learn about the history of these two, along with installation and how to configure them for our Linux distributions.

Let's begin without wasting any more time.

What is iptables?

First, we need to know what iptables is. Most senior IT professionals know about it and have worked with it as well. Iptables is an application / program that allows a user to configure the firewall security tables provided by the Linux kernel firewall, and its chains, so that a user can add / remove firewall rules to meet his / her security requirements. Iptables uses different kernel modules and different protocols so that the user can take the best out of it. For example, iptables is used for IPv4 (32-bit addresses) and ip6tables for IPv6 (128-bit addresses), for both tcp and udp. Normally, iptables rules are configured by a System Administrator, System Analyst, or IT Manager. You must have root privileges to execute each iptables rule. The Linux kernel uses the Netfilter framework to provide the various networking-related operations that can be performed using iptables. Previously, ipchains was used in most Linux distributions for the same purpose. Every iptables rule is handled directly by the Linux kernel itself, and this is known as a kernel duty. Whatever GUI tools or other security tools you use to configure your server's firewall, at the end of the day they are converted into iptables rules and supplied to the kernel to perform the operation.

History of iptables

The rise of iptables began with netfilter. Paul "Rusty" Russell was the initial author and the head think tank behind netfilter / iptables. Later he was joined by many other tech people, who formed the Netfilter core team and develop & maintain the netfilter/iptables project as a joint effort, like many other open source projects. Harald Welte was the leader until 2007, and then Patrick McHardy was the head until 2013. Currently, the netfilter core team head is Pablo Neira Ayuso.

To know more about netfilter, please visit this link. To know more about the historicity of netfilter, please visit this link.

To know more about iptables history, please visit this link.

How to install iptables

Nowadays, every Linux kernel comes with iptables, and it can be found pre-built or pre-installed on every popular modern Linux distribution. On most Linux systems, iptables is installed in the /usr/sbin/iptables directory. It can also be found in /sbin/iptables, but since iptables is more like a service rather than an "essential binary", the preferred location remains /usr/sbin.

For Ubuntu or Debian

sudo apt-get install iptables

For CentOS

sudo yum install iptables-services

For RHEL

sudo yum install iptables

Iptables version

To know your iptables version, type the following command in your terminal.

sudo iptables --version

Start & Stopping your iptables firewall

For OpenSUSE 42.1, type the following to stop.

sudo /sbin/rcSuSEfirewall2 stop

To start it again

sudo /sbin/rcSuSEfirewall2 start

For Ubuntu, type the following to stop.

sudo service ufw stop

To start it again

sudo service ufw start

For Debian & RHEL , type the following to stop.

sudo /etc/init.d/iptables stop

To start it again

sudo /etc/init.d/iptables start

For CentOS, type the following to stop.

sudo service iptables stop

To start it again

sudo service iptables start

Getting all iptables rules lists

To see all the rules that are currently present & active in your iptables, simply open a terminal and type the following.

sudo iptables -L

If no rules have been added to your iptables firewall so far, you will see empty chain listings.


In the output, you can see that there are three (3) chains (INPUT, FORWARD, OUTPUT) and that no rules exist yet; I haven't added any so far.

Type the following to know the status of the chains of your iptables firewall.

sudo iptables -S

The above command prints the default policy of each chain along with any rules, so you can see whether your chains are accepting traffic by default.

Clear all iptables rules

To clear all the rules from your iptables firewall, please type the following. This is normally known as flushing your iptables rules.

sudo iptables -F

If you want to flush the INPUT chain only, or any individual chains, issue the below commands as per your requirements.

sudo iptables -F INPUT
sudo iptables -F OUTPUT
sudo iptables -F FORWARD

ACCEPT or DROP Chains

To accept or drop a particular chain, issue any of the following command on your terminal to meet your requirements.

iptables --policy INPUT DROP

The above rule will not accept anything that is incoming to that server. To revert it again back to ACCEPT, do the following

iptables --policy INPUT ACCEPT

Same goes for other chains as well like

iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP

Note: By default, all chains of iptables ( INPUT, OUTPUT, FORWARD ) are in ACCEPT mode. This is known as Policy Chain Default Behavior.

Allowing any port

If you are running a web server on your host, then you must configure your iptables firewall so that your server listens and responds on port 80. By default, a web server runs on port 80. Let's do that then.

sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

On the above line, -A stands for append, meaning we are adding a new rule to the iptables list. INPUT stands for the INPUT chain, -p specifies the protocol, and --dport the destination port. Similarly, you can allow the SSH port as well.

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

By default, SSH runs on port 22, but it's good practice not to run SSH on port 22; always run SSH on a different port. To do so, open the /etc/ssh/sshd_config file in your favorite editor and change Port 22 to another port.
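For example, here is a minimal sketch of moving SSH to port 2222 (the port number and the sed expression are illustrative assumptions, not part of the original article); allow the new port before restarting sshd so you don't lock yourself out:

sudo iptables -A INPUT -p tcp --dport 2222 -j ACCEPT            # allow the new port first
sudo sed -i 's/^#\?Port 22$/Port 2222/' /etc/ssh/sshd_config    # change Port 22 to Port 2222
sudo service sshd restart                                       # apply the change (service name varies by distro)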

Blocking any port

Say we want to block port 135. We can do it by

sudo iptables -A INPUT -p tcp --dport 135 -j DROP

If you want to prevent your server from initiating any SSH connection to another host/server, issue the following command:

sudo iptables -A OUTPUT -p tcp --dport 22 -j DROP

By doing so, no one can use your server to initiate an SSH connection from it. The OUTPUT chain will filter and DROP any outgoing tcp connection towards port 22 on other hosts.

Allowing specific IP with Port

sudo iptables -A INPUT -p tcp -s 0/0 --dport 22  -j ACCEPT

Here -s 0/0 stands for any source, i.e., any IP address, so this rule accepts tcp packets with destination port 22 from anywhere. If you want to allow only one particular IP, use the following one.

sudo iptables -A INPUT -p tcp -s 12.12.12.12/32 --dport 22  -j ACCEPT

In the above example, you are only allowing the IP address 12.12.12.12 to connect to the SSH port; other IP addresses will not match this rule. Similarly, you can allow a range by using CIDR values, such as:

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j ACCEPT

The above example shows how you can allow a whole IP block to connect on port 22. It will accept connections from IPs 12.12.12.0 through 12.12.12.255.

If you want to block such an IP address range, do the reverse by replacing ACCEPT with DROP, like the following:

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j DROP

So it will not allow connections on port 22 from the 12.12.12.0 to 12.12.12.255 IP addresses.

Blocking ICMP

If you want to block ICMP (ping) requests to and from your server, you can try the following. The first one blocks outgoing ICMP echo requests, so your server cannot ping another host.

sudo iptables -A OUTPUT -p icmp --icmp-type 8 -j DROP

Now, try to ping google.com. Your OpenSUSE server will not be able to ping google.com.

If you want to block incoming ICMP (ping) echo requests to your server, just type the following in your terminal.

sudo iptables -I INPUT -p icmp --icmp-type 8 -j DROP

Now it will not reply to any ICMP ping echo request. Say your server's IP address is 13.13.13.13; if you ping that IP, you will see that your server does not respond to the ping request.

Blocking MySql / MariaDB Port

Since MySQL holds your databases, you must protect it from outside attack. Allow only your trusted application servers' IP addresses to connect with your MySQL server:

sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

So, combined with a DROP policy or rule for port 3306, MySQL connections will be accepted only from the 192.168.1.0/24 IP block. By default, MySQL runs on port 3306.
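As rules pile up, it helps to review and delete them by position; a quick sketch (rule number 2 is just an example):

sudo iptables -L INPUT -n --line-numbers    # list INPUT rules with their positions
sudo iptables -D INPUT 2                    # delete INPUT rule number 2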

Blocking SMTP

If you are not running a mail server on your host, or if your server is not configured to act as a mail server, you should block SMTP so that your server cannot send spam or any mail towards any domain. To block any outgoing mail from your server:

sudo iptables -A OUTPUT -p tcp --dport 25 -j DROP

Block DDoS

We are all familiar with the term DDoS. To mitigate it, issue the following command in your terminal.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 20/minute --limit-burst 100 -j ACCEPT

You need to tune the numerical values to meet your requirements; the above is just a reasonable starting point.

You can protect more by

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1800 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling 
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 1280 > /proc/sys/net/ipv4/tcp_max_syn_backlog
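Note that these echo commands only last until the next reboot. A minimal sketch of persisting a few of them via /etc/sysctl.conf (only a subset shown; each key mirrors the /proc/sys path above):

net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1

Then load them without rebooting:

sudo sysctl -p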

Blocking Port Scanning

There are hundred of people out there to scan your open ports of your server and try to break down your server security. To block it

sudo iptables -N block-scan
sudo iptables -A block-scan -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j RETURN
sudo iptables -A block-scan -j DROP

Here, block-scan is the name of a new chain. Note that a user-defined chain does nothing by itself; you still need a rule that sends traffic through it, as sketched below.
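A minimal sketch of hooking the chain into INPUT (this step is an assumption; the original does not show it):

sudo iptables -A INPUT -p tcp -j block-scan    # send tcp traffic through the block-scan chain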

Blocking Bad Ports

You may need to block some bad ports on your server as well. Here is how you can do this (note that the multiport match uses --dports):

badport="135,136,137,138,139,445"
sudo iptables -A INPUT -p tcp -m multiport --dports $badport -j DROP
sudo iptables -A INPUT -p udp -m multiport --dports $badport -j DROP

You can add more ports according to your needs.

What is firewalld?

Firewalld provides a dynamically managed firewall with support for network/firewall zones that defines the trust level of network connections or interfaces. It has support for IPv4, IPv6 firewall settings, ethernet bridges and IP sets. There is a separation of runtime and permanent configuration options. It also provides an interface for services or applications to add firewall rules directly.

The former firewall model, with system-config-firewall/lokkit, was static, and every change required a complete firewall restart. This also included unloading the netfilter kernel modules and loading the modules needed for the new configuration; unloading the modules broke stateful firewalling and established connections. The firewall daemon, on the other hand, manages the firewall dynamically and applies changes without restarting the whole firewall, so there is no need to reload all firewall kernel modules. However, using a firewall daemon requires that all firewall modifications are done through that daemon, to make sure that the state in the daemon and the firewall in the kernel stay in sync. The firewall daemon cannot parse firewall rules added by the iptables and ebtables command line tools. The daemon provides information about the current active firewall settings via D-BUS, and also accepts changes via D-BUS using PolicyKit authentication methods.

So, firewalld uses zones and services instead of chains and rules to perform its operations, and it can manage rules dynamically, allowing updates and modifications without breaking existing sessions and connections.
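The runtime / permanent separation mentioned above is worth a quick sketch (http is just an example service):

sudo firewall-cmd --add-service=http               # runtime only, lost on reload
sudo firewall-cmd --permanent --add-service=http   # written to the permanent config
sudo firewall-cmd --reload                         # permanent rules survive the reload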

It has the following features.

  • D-Bus API.
  • Timed firewall rules.
  • Rich Language for specific firewall rules.
  • IPv4 and IPv6 NAT support.
  • Firewall zones.
  • IP set support.
  • Simple log of denied packets.
  • Direct interface.
  • Lockdown: Whitelisting of applications that may modify the firewall.
  • Support for iptables, ip6tables, ebtables and ipset firewall backends.
  • Automatic loading of Linux kernel modules.
  • Integration with Puppet.

To know more about firewalld, please visit this link.

How to install firewalld

Before installing firewalld, please make sure you stop iptables, and also make sure that the iptables service is no longer in use. To do so,

sudo systemctl stop iptables

This will stop iptables on your system.

Then make sure iptables is not used by your system anymore by issuing the below command in the terminal.

sudo systemctl mask iptables

Now, check the status of iptables.

sudo systemctl status iptables


Now, we are ready to install firewalld on to our system.

For Ubuntu

To install it on Ubuntu, you must remove UFW first and then you can install Firewalld. To remove UFW, issue the below command on the terminal.

sudo apt-get remove ufw

After removing UFW, issue the below command in the terminal

sudo apt-get install firewall-applet

Or

You can open the Ubuntu Software Center and search for "firewall-applet", then install it on your Ubuntu system.

For RHEL, CentOS & Fedora

Type the below command to install firewalld on your RHEL, CentOS, or Fedora system.

sudo yum install firewalld firewall-config -y

How to configure firewalld

Before configuring firewalld, we must know the status of firewalld after the installation. To know that, type the following.

sudo systemctl status firewalld


Since firewalld works on a zone basis, we need to check all the zones and services, even though we haven't done any configuration yet.

For Zones

sudo firewall-cmd --get-active-zones


or

sudo firewall-cmd --get-zones


To know the default zone, issue the below command

sudo firewall-cmd --get-default-zone


And, For Services

sudo firewall-cmd --get-services


Here you can see the services covered by firewalld.

Setting Default Zone

An important note: after each modification, you need to reload firewalld so that your changes take effect.

To set the default zone

sudo firewall-cmd --set-default-zone=internal

or

sudo firewall-cmd --set-default-zone=public

After changing the zone, check whether it changed.

sudo firewall-cmd --get-default-zone
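Zones only take effect for traffic arriving on interfaces (or from sources) assigned to them; a minimal sketch of binding an interface to a zone (eth0 is an assumed interface name):

sudo firewall-cmd --zone=internal --change-interface=eth0   # runtime assignment
sudo firewall-cmd --get-zone-of-interface=eth0              # verify the binding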

Adding Port in Public Zone

sudo firewall-cmd --permanent --zone=public --add-port=80/tcp


This will add tcp port 80 to the public zone of firewalld. You can add any port you like by replacing 80 with your own.

Now reload the firewalld.

sudo firewall-cmd --reload

Now, check whether tcp port 80 has been added.

sudo firewall-cmd --zone=public --list-ports


Here, you can see that tcp port 80 has been added.

Or you can even try something like this.

sudo firewall-cmd --zone=public --list-all


Removing Port from Public Zone

To remove tcp port 80 from the public zone, type the following.

sudo firewall-cmd --zone=public --remove-port=80/tcp

You will see a “success” text echoing in your terminal.

You can remove any other port the same way, replacing 80 with your own port.

Adding Services in Firewalld

To add the ftp service to firewalld, issue the below command:

sudo firewall-cmd --zone=public --add-service=ftp

You will see a “success” text echoing in your terminal.

Similarly, to add the smtp service, issue the below command:

sudo firewall-cmd --zone=public --add-service=smtp

Replace ftp and smtp with whatever service you want to add to firewalld.
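Note that, as with ports, the service commands above change only the runtime configuration; a sketch of making a service addition permanent:

sudo firewall-cmd --permanent --zone=public --add-service=ftp
sudo firewall-cmd --reload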

Removing Services from Firewalld

To remove the ftp & smtp services from firewalld, issue the below commands in the terminal.

sudo firewall-cmd --zone=public --remove-service=ftp
sudo firewall-cmd --zone=public --remove-service=smtp

Block Any Incoming and Any Outgoing Packet(s)

If you wish, you can block all incoming and outgoing packets / connections using firewalld. This is known as firewalld's panic mode. To do so, issue the below command.

sudo firewall-cmd --panic-on

You will see a “success” text echoing in your terminal.

After doing this, you will not be able to ping a host or even browse any websites.

To turn this off, issue the below command in your terminal.

sudo firewall-cmd --panic-off

Adding IP Address in Firewalld

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" accept'

By doing so, firewalld will accept IPv4 packets from the source IP 192.168.1.4.

Blocking IP Address From Firewalld

Similarly, to block any IP address

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" reject'

By doing so, firewalld will reject every IPv4 packet from the source IP 192.168.1.4 (reject actively refuses the packet; use drop in the rule instead if you want to discard silently).
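To review the rich rules currently attached to a zone, a quick sketch:

sudo firewall-cmd --zone=public --list-rich-rules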

I have stuck to the very basics of firewalld here, so that you can easily understand how it works and how it differs from iptables.

That’s all for today. Hope you enjoy reading this article.

Take care.


Linux – Concepts – Ulimits & Sysctl

ulimit and sysctl

The ulimit and sysctl programs allow you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users.

Code Listing 1: ulimit example

# ulimit -a 
core file size          (blocks, -c) 0 
data seg size           (kbytes, -d) unlimited 
file size               (blocks, -f) unlimited 
pending signals                 (-i) 8191 
max locked memory       (kbytes, -l) 32 
max memory size         (kbytes, -m) unlimited 
open files                      (-n) 1024 
pipe size            (512 bytes, -p) 8 
POSIX message queues     (bytes, -q) 819200 
stack size              (kbytes, -s) 8192 
cpu time               (seconds, -t) unlimited 
max user processes              (-u) 8191 
virtual memory          (kbytes, -v) unlimited 
file locks                      (-x) unlimited

All these settings can be manipulated. A good example is this bash forkbomb that forks as many processes as possible and can crash systems where no user limits are set:

Warning: Do not run this in a shell! If no limits are set your system will either become unresponsive or might even crash.

Code Listing 2: A bash forkbomb

$ :(){ :|:& };:
Refer to my post: https://haritibcoblog.com/2016/07/10/linux-fork-bomb-test-epic/

Now this is not good – any user with shell access to your box could take it down. But if that user can only start 30 processes the damage will be minimal. So let’s set a process limit:

Gentoo Note: A too small number of processes can break the use of portage. So, don’t be too strict.

Code Listing 3: Setting a process limit

# ulimit -u 30 
# ulimit -a 
… 
max user processes              (-u) 30 
…

If you try to run the forkbomb now, it should run but throw the error message "fork: resource temporarily unavailable". This means that your system did not allow the forkbomb to start more processes. The other options of ulimit can help with similar problems, but you should be careful not to lock yourself out: setting data seg size too small will even prevent bash from starting!
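Shell ulimit settings apply only to the current shell and its children. A minimal sketch of persistent per-user limits via /etc/security/limits.conf (assuming pam_limits is enabled, as on most distributions; the user "alice" and group "dev" are hypothetical):

# /etc/security/limits.conf (format: <domain> <type> <item> <value>)
alice   soft   nproc    30
alice   hard   nproc    50
@dev    hard   nofile   4096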

sysctl is a similar tool: it allows you to configure kernel parameters at runtime. If you wish to keep settings persistent across reboots, you should edit /etc/sysctl.conf; be aware that wrong settings may break things in unforeseen ways.

Code Listing 4: Exploring sysctl variables

# sysctl -a 
… 
vm.swappiness = 60 
…

The list of variables is quite long (367 lines on my system), but I picked out vm.swappiness here. It controls how aggressively the kernel swaps; the higher it is (with a maximum of 100), the more swap will be used. This can affect performance a lot on systems with little memory, depending on load and other factors.

Code Listing 5: Reducing swappiness

# sysctl vm.swappiness=0 
vm.swappiness = 0
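To keep this setting across reboots, append the same key to /etc/sysctl.conf, as mentioned above. A quick sketch:

Code Listing 6: Persisting the setting

# echo "vm.swappiness = 0" >> /etc/sysctl.conf 
# sysctl -p 
vm.swappiness = 0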

The effects of changing this setting are usually not felt instantly. But you can change many settings, especially network-related ones, this way. For servers this can offer a nice performance boost, but as with ulimit, careless usage might cause your system to misbehave or slow down. If you don't know what a variable controls, you should not modify it!


Linux Command – Using Netstat the Proper Way !!

How to install netstat

netstat is a useful tool for checking your network configuration and activity. It is in fact a collection of several tools lumped together.

Install “net-tools” package using yum

[root@livedvd ~]$ sudo yum install net-tools
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirror.secureax.com
* extras: centos.mirror.secureax.com
* updates: centos.mirror.secureax.com
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================
Package         Arch         Version                          Repository  Size
================================================================================
Installing:
net-tools       x86_64       2.0-0.17.20131004git.el7         base       304 k
Transaction Summary
================================================================================
Install  1 Package
Total download size: 304 k
Installed size: 917 k
Is this ok [y/d/N]: y
Downloading packages:
net-tools-2.0-0.17.20131004git.el7.x86_64.rpm              | 304 kB   00:00
Running transaction check

Running transaction test
Transaction test succeeded
Running transaction
Installing : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Verifying  : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Installed:
net-tools.x86_64 0:2.0-0.17.20131004git.el7

 

Complete!

 

The netstat Command

Displaying the Routing Table

When you invoke netstat with the -r flag, it displays the kernel routing table in the way we've been doing with route. On vstout, it produces:

# netstat -nr

 Kernel IP routing table
 Destination   Gateway      Genmask         Flags  MSS Window  irtt Iface
 127.0.0.1     *            255.255.255.255 UH       0 0          0 lo
 172.16.1.0    *            255.255.255.0   U        0 0          0 eth0
 172.16.2.0    172.16.1.1   255.255.255.0   UG       0 0          0 eth0

The -n option makes netstat print addresses as dotted-quad IP numbers rather than the symbolic host and network names. This option is especially useful when you want to avoid address lookups over the network (e.g., to a DNS or NIS server).

The second column of netstat's output shows the gateway to which the routing entry points. If no gateway is used, an asterisk is printed instead. The third column shows the "generality" of the route, i.e., the network mask for this route. When given an IP address to find a suitable route for, the kernel steps through each of the routing table entries, taking the bitwise AND of the address and the genmask before comparing it to the target of the route. For example, for destination 172.16.1.5, the AND of 172.16.1.5 and 255.255.255.0 is 172.16.1.0, which matches the second route's target.

The fourth column displays the following flags that describe the route:

G The route uses a gateway.
U The interface to be used is up.
H Only a single host can be reached through the route. For example, this is the case for the loopback entry 127.0.0.1.
D This route is dynamically created. It is set if the table entry has been generated by a routing daemon like gated or by an ICMP redirect message
M This route is set if the table entry was modified by an ICMP redirect message.
! The route is a reject route and datagrams will be dropped.

 

The next three columns show the MSS, Window, and irtt that will be applied to TCP connections established via this route. The MSS is the Maximum Segment Size, the size of the largest datagram the kernel will construct for transmission via this route. The Window is the maximum amount of data the system will accept in a single burst from a remote host. The acronym irtt stands for "initial round trip time." The TCP protocol ensures that data is reliably delivered between hosts by retransmitting a datagram if it has been lost. The TCP protocol keeps a running count of how long it takes for a datagram to be delivered to the remote end and an acknowledgement to be received, so that it knows how long to wait before assuming a datagram needs to be retransmitted; this is called the round-trip time. The initial round-trip time is the value that the TCP protocol will use when a connection is first established. For most network types, the default value is okay, but for some slow networks, notably certain types of amateur packet radio networks, the time is too short and causes unnecessary retransmission. The irtt value can be set using the route command. Values of zero in these fields mean that the default is being used.
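A sketch of setting irtt with route (the network, gateway, and the 300 ms value are purely illustrative):

# route add -net 192.168.5.0 netmask 255.255.255.0 gw 172.16.1.1 irtt 300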

Finally, the last field displays the network interface that this route will use.

Displaying Interface Statistics

When invoked with the -i flag, netstat displays statistics for the network interfaces currently configured. If the -a option is also given, it prints all interfaces present in the kernel, not only those that have been configured currently. On vstout, the output from netstat will look like this:

# netstat -i
 Kernel Interface table
 Iface MTU Met  RX-OK RX-ERR RX-DRP RX-OVR  TX-OK TX-ERR TX-DRP TX-OVR Flags
 lo      0   0   3185      0      0      0   3185      0      0      0 BLRU
 eth0 1500   0 972633     17     20    120 628711    217      0      0 BRU

The MTU and Met fields show the current MTU and metric values for that interface. The RX and TX columns show how many packets have been received or transmitted error-free (RX-OK/TX-OK) or damaged (RX-ERR/TX-ERR); how many were dropped (RX-DRP/TX-DRP); and how many were lost because of an overrun (RX-OVR/TX-OVR).

The last column shows the flags that have been set for this interface. These characters are one-character versions of the long flag names that are printed when you display the interface configuration with ifconfig:

B A broadcast address has been set.
L This interface is a loopback device.
M All packets are received (promiscuous mode).
O ARP is turned off for this interface.
P This is a point-to-point connection.
R Interface is running.
U Interface is up.

 

Displaying Connections

netstat supports a set of options to display active or passive sockets. The options -t, -u, -w, and -x show active TCP, UDP, RAW, or Unix socket connections. If you provide the -a flag in addition, sockets that are waiting for a connection (i.e., listening) are displayed as well. This display will give you a list of all servers that are currently running on your system.

Invoking netstat -ta on vlager produces this output:

$ netstat -ta
 Active Internet Connections
 Proto Recv-Q Send-Q Local Address    Foreign Address    (State)
 tcp        0      0 *:domain         *:*                LISTEN
 tcp        0      0 *:time           *:*                LISTEN
 tcp        0      0 *:smtp           *:*                LISTEN
 tcp        0      0 vlager:smtp      vstout:1040        ESTABLISHED
 tcp        0      0 *:telnet         *:*                LISTEN
 tcp        0      0 localhost:1046   vbardolino:telnet  ESTABLISHED
 tcp        0      0 *:chargen        *:*                LISTEN
 tcp        0      0 *:daytime        *:*                LISTEN
 tcp        0      0 *:discard        *:*                LISTEN
 tcp        0      0 *:echo           *:*                LISTEN
 tcp        0      0 *:shell          *:*                LISTEN
 tcp        0      0 *:login          *:*                LISTEN

This output shows most servers simply waiting for an incoming connection. However, the fourth line shows an incoming SMTP connection from vstout, and the sixth line tells you there is an outgoing telnet connection to vbardolino.

Using the -a flag by itself will display all sockets from all families.

Top 20 netstat commands for network management

  1. Listing all the LISTENING Ports of TCP and UDP connections

Listing all ports (both TCP and UDP) using netstat -a option.

# netstat -a | more

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (servers and established)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  2. Listing TCP Ports connections

Listing only TCP (Transmission Control Protocol) port connections using netstat -at.

# netstat -at

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT

 

  3. Listing UDP Ports connections

Listing only UDP (User Datagram Protocol ) port connections using netstat -au.

# netstat -au

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*
 udp        0      0 *:mdns                      *:*

 

  4. Listing all LISTENING Connections

Listing all active listening ports connections with netstat -l.

# netstat -l

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  5. Listing all TCP Listening Ports

Listing all active listening TCP ports by using option netstat -lt.

# netstat -lt

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:dctp                      *:*                         LISTEN
 tcp        0      0 *:mysql                     *:*                         LISTEN
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:munin                     *:*                         LISTEN
 tcp        0      0 *:ftp                       *:*                         LISTEN
 tcp        0      0 localhost.localdomain:ipp   *:*                         LISTEN
 tcp        0      0 localhost.localdomain:smtp  *:*                         LISTEN
 tcp        0      0 *:http                      *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 *:https                     *:*                         LISTEN

 

  6. Listing all UDP Listening Ports

Listing all active listening UDP ports by using option netstat -lu.

# netstat -lu

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:39578                     *:*
 udp        0      0 *:meregister                *:*
 udp        0      0 *:vpps-qua                  *:*
 udp        0      0 *:openvpn                   *:*
 udp        0      0 *:mdns                      *:*
 udp        0      0 *:sunrpc                    *:*
 udp        0      0 *:ipp                       *:*
 udp        0      0 *:60222                     *:*
 udp        0      0 *:mdns                      *:*

 

  7. Listing all UNIX Listening Ports

Listing all active UNIX listening ports using netstat -lx.

# netstat -lx

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     4171   @ISCSIADM_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     5767   /var/run/cups/cups.sock
 unix  2      [ ACC ]     STREAM     LISTENING     7082   @/tmp/fam-root-
 unix  2      [ ACC ]     STREAM     LISTENING     6157   /dev/gpmctl
 unix  2      [ ACC ]     STREAM     LISTENING     6215   @/var/run/hald/dbus-IcefTIUkHm
 unix  2      [ ACC ]     STREAM     LISTENING     6038   /tmp/.font-unix/fs7100
 unix  2      [ ACC ]     STREAM     LISTENING     6175   /var/run/avahi-daemon/socket
 unix  2      [ ACC ]     STREAM     LISTENING     4157   @ISCSID_UIP_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     60835836 /var/lib/mysql/mysql.sock
 unix  2      [ ACC ]     STREAM     LISTENING     4645   /var/run/audispd_events
 unix  2      [ ACC ]     STREAM     LISTENING     5136   /var/run/dbus/system_bus_socket
 unix  2      [ ACC ]     STREAM     LISTENING     6216   @/var/run/hald/dbus-wsUBI30V2I
 unix  2      [ ACC ]     STREAM     LISTENING     5517   /var/run/acpid.socket
 unix  2      [ ACC ]     STREAM     LISTENING     5531   /var/run/pcscd.comm

 

  8. Showing Statistics by Protocol

Displays statistics by protocol. By default, statistics are shown for the TCP, UDP, ICMP, and IP protocols. The -s parameter can be used to specify a set of protocols.

# netstat -s

Ip:
 2461 total packets received
 0 forwarded
 0 incoming packets discarded
 2431 incoming packets delivered
 2049 requests sent out
 Icmp:
 0 ICMP messages received
 0 input ICMP message failed.
 ICMP input histogram:
 1 ICMP messages sent
 0 ICMP messages failed
 ICMP output histogram:
 destination unreachable: 1
 Tcp:
 159 active connections openings
 1 passive connection openings
 4 failed connection attempts
 0 connection resets received
 1 connections established
 2191 segments received
 1745 segments send out
 24 segments retransmited
 0 bad segments received.
 4 resets sent
 Udp:
 243 packets received
 1 packets to unknown port received.
 0 packet receive errors
 281 packets sent

 

  9. Showing Statistics by TCP Protocol

Showing statistics of only TCP protocol by using option netstat -st.

# netstat -st

Tcp:
 2805201 active connections openings
 1597466 passive connection openings
 1522484 failed connection attempts
 37806 connection resets received
 1 connections established
 57718706 segments received
 64280042 segments send out
 3135688 segments retransmited
 74 bad segments received.
 17580 resets sent

 

  10. Showing Statistics by UDP Protocol
# netstat -su

Udp:
 1774823 packets received
 901848 packets to unknown port received.
 0 packet receive errors
 2968722 packets sent

 

  11. Displaying Service name with PID

Displaying service names with their PID numbers: the netstat -tp option will display the "PID/Program name" column.

# netstat -tp

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
 tcp        0      0 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED 2179/sshd
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT  1939/clock-applet

 

  12. Displaying All Sockets Continuously

With the -a and -c switches together, netstat prints all sockets and refreshes the output continuously; the 5 here sets the refresh interval to five seconds instead of the default of one second.

# netstat -ac 5 | grep tcp

tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        1      0 192.168.0.2:59447           www.gov.com:http            CLOSE_WAIT
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN

 

  13. Displaying Kernel IP routing

Display Kernel IP routing table with netstat and route command.

# netstat -r

Kernel IP routing table
 Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
 192.168.0.0     *               255.255.255.0   U         0 0          0 eth0
 link-local      *               255.255.0.0     U         0 0          0 eth0
 default         192.168.0.1     0.0.0.0         UG        0 0          0 eth0

 

  14. Showing Network Interface Transactions

Showing network interface packet transactions including both transferring and receiving packets with MTU size.

# netstat -i

Kernel Interface table
 Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
 eth0       1500   0     4459      0      0      0     4057      0      0      0 BMRU
 lo        16436   0        8      0      0      0        8      0      0      0 LRU

 

  15. Showing Kernel Interface Table

Showing Kernel interface table, similar to ifconfig command.

# netstat -ie

Kernel Interface table
 eth0      Link encap:Ethernet  HWaddr 00:0C:29:B4:DA:21
 inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:feb4:da21/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:4486 errors:0 dropped:0 overruns:0 frame:0
 TX packets:4077 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:2720253 (2.5 MiB)  TX bytes:1161745 (1.1 MiB)
 Interrupt:18 Base address:0x2000

lo        Link encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:16436  Metric:1
 RX packets:8 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

 

  16. Displaying IPv4 and IPv6 Information

Displays multicast group membership information for both IPv4 and IPv6.

# netstat -g

IPv6/IPv4 Group Memberships
 Interface       RefCnt Group
 --------------- ------ ---------------------
 lo              1      all-systems.mcast.net
 eth0            1      224.0.0.251
 eth0            1      all-systems.mcast.net
 lo              1      ff02::1
 eth0            1      ff02::202
 eth0            1      ff02::1:ffb4:da21
 eth0            1      ff02::1

 

  17. Print Netstat Information Continuously

To get netstat information refreshed every second, use the following command; it will print netstat information continuously.

# netstat -c

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36944 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42110 TIME_WAIT
 tcp        0    132 tecmint.com:ssh    115.113.134.3.static-:64662 ESTABLISHED
 tcp        0      0 tecmint.com:http   crawl-66-249-71-240.g:41166 TIME_WAIT
 tcp        0      0 localhost.localdomain:54823 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 localhost.localdomain:54822 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42091 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36998 TIME_WAIT

 

  18. Finding Unsupported Address Families

Finding address families that are not supported or configured on this system, with some useful information.

# netstat --verbose

netstat: no support for `AF IPX' on this system.
 netstat: no support for `AF AX25' on this system.
 netstat: no support for `AF X25' on this system.
 netstat: no support for `AF NETROM' on this system.

 

  19. Finding Listening Programs

Find out which programs are listening on a particular port.

# netstat -ap | grep http

tcp        0      0 *:http                      *:*                         LISTEN      9056/httpd
 tcp        0      0 *:https                     *:*                         LISTEN      9056/httpd
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35248 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57783 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57769 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35270 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41637 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41614 TIME_WAIT   -
 unix  2      [ ]         STREAM     CONNECTED     88586726 10394/httpd

 

  20. Displaying RAW Network Statistics
# netstat --statistics --raw

Ip:
 62175683 total packets received
 52970 with invalid addresses
 0 forwarded
 Icmp:
 875519 ICMP messages received
 destination unreachable: 901671
 echo request: 8
 echo replies: 16253
 IcmpMsg:
 InType0: 83
 IpExt:
 InMcastPkts: 117

 

Source :- UnixMen


Knockd – Detailed And Simpler (Silent Assassin….)

As I can see, there are a lot of articles about knockd and its implementation. So, what are my efforts to make this one unique? I made it simple but detail-oriented, and I have commented on the controversies and criticism that exist.


Here is an outline on what I’ve discussed.

What is port knocking?

What is knockd?

How it works?

Installation

What we are trying to achieve

Pre-requisite before implementation of knockd:

Implementation scenario

Testing

Disclaimer

So, here we go.

What is port knocking?

Wikipedia Definition:

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of pre-specified closed ports (in this case, telnet). Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specific port(s)

/* in this article point of view, it’s ssh port 22 */

It's basically like every request has to knock on the door (the firewall) to get through it; knocking is necessary to get past the door. You can implement it using knockd and iptables, or with iptables alone.

Now, using knockd.

What is knockd?

knockd is a port-knock server. It listens to all traffic on an Ethernet interface, looking for special “knock” sequences of port-hits. A client makes these port-hits by sending a TCP (or UDP) packet to a port on the server. This port need not be open — since knockd listens at the link-layer level, it sees all traffic even if it’s destined for a closed port. When the server detects a specific sequence of port-hits, it runs a command defined in its configuration file. This can be used to open up holes in a firewall for quick access.

How it works?

  1. The knockd daemon is installed/running on the server.
  2. Some port sequences (tcp, udp, or both) are configured, along with the appropriate action for each sequence.
  3. Once knockd sees a defined port sequence, it runs the configured action for that sequence.

Note:

It is completely stealthy and, by default, does not open any ports on the server.

When a port knock is successfully used to open a port, the firewall rules are generally only opened to the IP address that supplied the correct knock.

Installation

Note: Don't copy/paste the commands. Type them manually to avoid errors that could occur due to the formatting.

# yum install libpcap*

/* dependency - the trailing * also installs libpcap-devel, which is a prerequisite as well */

There are several ways to install, whereas I have followed rpm installation.

Download suitable rpm package from http://pkgs.repoforge.org/knock/

Then run,

# rpm -ivh knock-0.5-3.el6.rf.x86_64.rpm

/*Here, I have downloaded knock 0.5-3 for 64-bit centos and hence the rpm name*/

Now, what all got installed?

Knockd – knock server daemon

Knock – knock client, which aids in knocking.

Note that knock is the default client that comes along with knockd; there are other, more advanced clients such as hping, sendip, and packit.

What we are trying to achieve:

A way to stop the attacks altogether, yet allow ssh access from anywhere, when needed.

Pre-requisite before implementation of knockd:

As mentioned earlier, an existing firewall (iptables) is a pre-requisite.

Follow the below steps to configure firewall

# iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT

-I : insert the rule as the first line in the firewall setup

-p : protocol

-m : match against the given states (RELATED, ESTABLISHED in this case)

-j : jump to the action, which is ACCEPT here.

/* This rule allows currently ongoing sessions through the firewall. It is essential: if you have currently taken a remote session on this computer using SSH, it will be preserved and not get terminated by further rules where you might want to block ssh or all services */

# iptables -I INPUT -p icmp -j ACCEPT

/* This is to make your machine ping-able from any machine, so that you can check the availability of your machine (whether it’s up or down) */

# iptables -A INPUT -j REJECT

/* Rejecting everything else. Appending it as the last line, since, if inserted as the first line, all other rules would not be considered and every request would be rejected */

Implementation scenario:

Now, try to ssh to the machine where you have implemented the firewall. Let's call that machine the server.

You will not be able to ssh to the server, since the firewall setup on the server rejects everything except ongoing sessions and ping requests.

Now, knockd implementation:

Now, on the server where you have installed knockd, run the following commands.

# vi /etc/knockd.conf

/* As a result of the rpm installation, this configuration file will already exist */

Edit the file as below and save/exit.

[options]
    logfile = /var/log/knockd.log

[opencloseSSH]
    sequence      = 2222:udp,3333:tcp,4444:udp
    seq_timeout   = 15
    start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    cmd_timeout   = 10
    stop_command  = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags      = syn

The first line of the file defines where errors/warnings pertaining to knockd get logged. If you are unable to successfully establish a connection between client and server using knockd/knock, the server log is the first place to check.

sequence        = 2222:udp,3333:tcp,4444:udp                                /* the sequence of knocking */

seq_timeout     = 15                                                                         /* once the above mentioned sequence is knocked, it’s valid only for next 15 seconds */

start_command   = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* once the right sequence is knocked, the above command gets executed, where %IP% is the IP address of the host that knocks. Hence, this allows an ssh connection from the knocker (client) to the server. One thing to note is that the ssh connection needs to be established within 15 seconds of the knock. Also, note that I have used iptables -I, since I want this rule to be inserted as the first line; if I appended it, it would have no effect, because the iptables reject-everything rule comes before it in the list */

cmd_timeout     = 10                       /* the stop_command will be executed 10 seconds after the start_command execution */

stop_command    = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* I am deleting the rule which I inserted to allow the SSH connection from the knocker (client) to the server. Now, doesn't that end my ssh connection? It doesn't.

How? The existing rule in my iptables, # iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT, helps me retain the established connection. If this rule were not specified, your ssh connection would be lost in 10 seconds (i.e., cmd_timeout) */

tcpflags        = syn                       /* in the TCP 3-way handshake, the client sends a TCP SYNchronize packet to the server */

Note that there are other ways to configure your knockd server: for example, one port sequence to open ssh and another port sequence to close it (unlike the one above). In that case, after terminating your ssh session, you need to manually hit the closing port sequence from the client in order to close the ssh port for it; a sketch of that layout follows.
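A minimal sketch of that alternative open/close layout, with assumed sequences (each stanza uses knockd's single-command form):

[openSSH]
    sequence    = 7000:tcp,8000:tcp,9000:tcp
    seq_timeout = 5
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn

[closeSSH]
    sequence    = 9000:tcp,8000:tcp,7000:tcp
    seq_timeout = 5
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn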

Now we have configured our server to allow ssh for a client only when the client hits the right sequence, which here is 2222:udp,3333:tcp,4444:udp.

Now you need to start the knockd service. Copy the below script as /etc/rc.d/init.d/knockd on your machine.

Runlevel script.

#!/bin/sh
# description: Start and stop knockd

# Check that the config file exists
[ -f /etc/knockd.conf ] || exit 0

# Source function library
. /etc/rc.d/init.d/functions

# Source networking configuration
. /etc/sysconfig/network

# Check that networking is up
[ "$NETWORKING" = "no" ] && exit 0

start() {
    echo "Starting knockd ..."
    /usr/sbin/knockd &
}

stop() {
    echo "Shutting down knockd ..."
    kill `pidof /usr/sbin/knockd`
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        ;;
esac

exit 0

Now, run the following commands, so that you shall start/stop/restart knockd like other services

# chmod 755 /etc/rc.d/init.d/knockd    /* make the script executable by root */
# chkconfig --add knockd               /* add knockd to the chkconfig list */
# chkconfig --level 35 knockd on       /* start the service as part of runlevels 3 & 5 */
# service knockd start

Whenever you modify /etc/knockd.conf, you need to restart the service.

Testing:

From client, run the following commands.

Note that there are other ways to knock; I am using the knock client, which comes along with the knockd package. So you need to have it installed on the client as well.

Now,

# knock -v server_ip 2222:udp 3333:tcp 4444:udp; ssh server_ip

-v for verbose

server_ip is substituted with your server's IP address.

We are knocking the sequence which we mentioned in the server's /etc/knockd.conf file.

Now you should be able to establish an ssh session to the server. Also, the port gets closed for further sessions, so every time you want to establish an ssh connection to the server, you need to knock and then ssh.

Comments on controversies and criticism:

What if the knock sequence is discovered by a brute-force attack?

An excerpt from Wikipedia answers this:

Consider that, if an external attacker did not know the port knock sequence, even the simplest of sequences would require a massive brute force effort in order to be discovered. A three-knock simple TCP sequence (e.g. port 1000, 2000, 3000) would require an attacker without prior knowledge of the sequence to test every combination of three ports in the range 1-65535, and then to scan each port in between to see if anything had opened. As a stateful system, the port would not open until after the correct three-digit sequence had been received in order, without other packets in between.
That equates to a maximum of 65536³ packets in order to obtain and detect a single successful opening, in the worst case scenario. That’s 281,474,976,710,656 or over 281 trillion packets. On average, an attempt would take approximately 9.2 quintillion packets to successfully open a single, simple three-port TCP-only knock by brute force. This is made even more impractical when knock attempt-limiting is used to stop brute force attacks, and when longer and more complex sequences are used.

Also, there are other port knocking solutions where the knocks themselves are encrypted, which makes them even harder to attack.

When we have a simple, robust, and reliable solution such as VPN access, why go for port knocking?

Port knocking can complement VPN tunnels for added security: if you have a VPN server in your network, you can protect it from brute-force attacks by hiding it behind a knock sequence.

Disclaimer:

Port knocking is not intended to be a complete solution, but it can be an added layer of security.

Also, it needs to be configured properly according to your needs.

A network that is connected to the Internet is never secure. There is no perfect lock!

The best that we can do is to make the system as secure as possible for today. Possibly tomorrow someone will figure out how to get through our security code.

A security system is only as strong as its weakest link.

But I believe port knocking adds security to the existing setup, and doesn’t make your system vulnerable to attacks, as a few people claim.

Thanks for reading. Cheers !

 

Credits :- GOKUL (Unixmen)

Redhat / CEntOS / Oracle Linux, Ubuntu

Linux Command – Dstat (Culprit Catcher)

Introduction

Whether a system is used as a web server or a normal PC, keeping its resource usage under control in the daily workflow is almost a necessity. GNU/Linux provides several tools for monitoring purposes: iostat, vmstat, netstat, ifstat and others. Every system admin knows these tools, and how to analyse their outputs. However, there’s an alternative: a single program which can replace almost all of them. Its name is dstat.

With dstat, users can view all system resources instantly. For instance, someone could compare the network bandwidth numbers directly with the disk throughput, getting a more general view of what’s going on; this is very useful when troubleshooting, or when analysing a system for benchmarking.

Features

  • Combines vmstat, iostat, ifstat, netstat information and more
  • Shows stats in exactly the same timeframe
  • Enable/order counters as they make most sense during analysis/troubleshooting
  • Modular design
  • Written in Python, so easily extendable
  • Includes many external plugins
  • Can show interrupts per device
  • Very accurate timeframes, no timeshifts when the system is stressed
  • Shows exact units and limits conversion mistakes
  • Indicates different units with different colors
  • Shows intermediate results when delay > 1
  • Can export CSV output, which can be imported into Gnumeric and Excel to make graphs

Installation

Installing dstat is a simple task, since it’s packaged in .deb and .rpm form.
For Debian-based distros:

# apt install dstat

On RHEL, CentOS, and Fedora:

# yum install dstat

Getting sta(r)ted

To run the program, users can just invoke the command in its simplest form:

$ dstat

as a result, it will show various statistics in a table, which helps admins get a general overview.

Plugins

First of all, it’s important to note that dstat comes with a lot of plugins; for obtaining a complete list:

$ dstat --list

which returns:

Of course, it is possible to add more, for special use cases.

So, let’s see how they work.

To use a plugin, simply pass its name as a command-line argument. For instance, if someone needs to verify only the total CPU usage, he can do:

$ dstat --cpu

or, in a shorter form

$ dstat -c

As previously said, the program can show different stats at the same time. As an example:

$ dstat --cpu --top-cpu --disk --top-bio --top-latency

This command, which is a combination of internal stats and external plugins, will give the total CPU usage, most expensive CPU process, disk statistics, most expensive block I/O process and the process with the highest total latency (expressed in milliseconds).

Output

By default, dstat displays output in columns (as a table, indeed) directly in the terminal window, in real time, for immediate analysis by a human. But it is also possible to send it to a .csv file, which software like LibreOffice Calc or Gnumeric can use for creating graphs, or any kind of statistical analysis. Exporting data to .csv is quite an easy task:

$ dstat [arguments] --output /path/to/outputfile
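
For example, a plausible invocation (the interval, sample count, and output path here are just illustrative choices) that prints CPU and memory stats every 5 seconds, 10 times, while appending the same samples to a CSV file:

$ dstat --cpu --mem --output /tmp/dstat-report.csv 5 10

The resulting file can then be opened directly in Gnumeric or LibreOffice Calc.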

Bash, Main, Redhat / CEntOS / Oracle Linux, Ubuntu, Windows 2012 R2

Source – UNIX, Destination – Windows Cygwin (SSH Password-less Authentication)

On Windows Server

On the Windows server, create a local user, say MyUser, and then register that user in Cygwin:

cd C:\cygwin

Cygwin.bat

 

Administrator@MYWINDOWSHOST ~

$ /bin/mkpasswd -l -u MyUser >>/etc/passwd

MyUser@MYWINDOWSHOST ~

$ ls

MyUser@MYWINDOWSHOST ~

$ ls -al

total 24

drwxr-xr-x+ 1 MyUser        None    0 Mar 17 12:54 .

drwxrwxrwt+ 1 Administrator None    0 Mar 17 12:54 ..

-rwxr-xr-x  1 MyUser        None 1494 Oct 29 15:34 .bash_profile

-rwxr-xr-x  1 MyUser        None 6054 Oct 29 15:34 .bashrc

-rwxr-xr-x  1 MyUser        None 1919 Oct 29 15:34 .inputrc

-rwxr-xr-x  1 MyUser        None 1236 Oct 29 15:34 .profile

MyUser@MYWINDOWSHOST ~

$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):

Created directory ‘/home/MyUser/.ssh’.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/MyUser/.ssh/id_rsa.

Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.

The key fingerprint is:

7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYWINDOWSHOST

The key’s randomart image is:

+--[ RSA 2048]----+
|       .++o      |
|        .+..     |
|        . o. . o |
|         o .E *.+|
|        S ...* =o|
|           .o o o|
|               = |
|              o  |
|                 |
+-----------------+

MyUser@MYWINDOWSHOST ~

$ cd .ssh

MyUser@MYWINDOWSHOST ~/.ssh

$ ls

id_rsa  id_rsa.pub

MyUser@MYWINDOWSHOST ~/.ssh

$ touch authorized_keys

Generate the key on the source UNIX SERVER

 $ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):

Created directory ‘/home/MyUser/.ssh’.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/MyUser/.ssh/id_rsa.

Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.

The key fingerprint is:

7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYUNIXHOST

The key’s randomart image is:

+--[ RSA 2048]----+
|       .++o      |
|        .+..     |
|        . o. . o |
|         o .E *.+|
|        S ...* =o|
|           .o o o|
|               = |
|              o  |
|                 |
+-----------------+

MyUser@MYUNIXHOST ~

$ cd .ssh

MyUser@MYUNIXHOST ~/.ssh

$ ls

id_rsa  id_rsa.pub

Then push the public key (id_rsa.pub) into the destination’s authorized_keys:

cat .ssh/id_rsa.pub | ssh MyUser@MYWINDOWSHOST 'cat >> .ssh/authorized_keys'

 

ssh -v MyUser@MYWINDOWSHOST

--> You should now be able to log in without a password.
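
As an aside, if the ssh-copy-id helper is available on your source host (it ships with most OpenSSH installations; treat its availability as an assumption for your platform), the manual cat-and-append step above can be replaced with a single command:

$ ssh-copy-id MyUser@MYWINDOWSHOST

It appends your public key to the remote authorized_keys and fixes the file permissions for you.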

Credits :- Priyanka Padad (Operations Expert)

Redhat / CEntOS / Oracle Linux, Ubuntu

Linux Commands – chage (Manage your Passwords for User Accounts in UNIX)

chage

chage enables you to modify the parameters surrounding passwords (complexity, age, expiration). We can edit and manage the password expiration details with the chage command. Note that the root user can execute the chage command for any user account, while other users cannot.

Syntax: chage [options] USERNAME

Options:

-d LAST_DAY Sets the day the password was last changed
-E EXPIRE_DATE Sets the account expiration date
-I INACTIVE Sets the number of days of inactivity after the password expires before the account is locked
-l Shows account aging information
-m MIN_DAYS Sets the minimum number of days between password changes
-M MAX_DAYS Sets the maximum number of days a password is valid
-W WARN_DAYS Sets the number of days to warn before the password expires

For example, we can view a particular user’s aging information by using the chage command as follows.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 365
Maximum number of days between password change          : 500
Number of days of warning before password expires       : 7

How do we force users to change their password? This is a common question for system administrators who want their users to change passwords at regular intervals for security reasons.

For this, we set the password expiry date for a user using the chage option -M. The root user can set the password expiry date for any user.

Please note that option -M will update both the “Password expires” and “Maximum number of days between password change” entries, as shown below.

Syntax: # chage -M number-of-days username
[root@localhost ~]# chage -M 60 root

The above command sets the password expiry to 60 days.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Set the account expiry date for a user by using the -E option

We can also use the chage command to set the account expiry date, as shown below, using option -E. The date is given in “YYYY-MM-DD” format. This will update the “Account expires” value as shown below.

[root@localhost ~]# chage -E "2017-10-03" root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Force the user account to be locked after n days of inactivity
Typically, if the password has expired, users are forced to change it during their next login. You can also set an additional condition: if, after the password expires, the user never tries to log in for 6 days, their account is automatically locked, using option -I as shown below. The “Password inactive” date is then set relative to the “Password expires” value. Once an account is locked, only system administrators will be able to unlock it.

# chage -I 6 root
# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the minimum number of days between password changes

We can set the minimum number of days between password changes by using the option -m along with the chage command as follows.

chage -m 10 USERNAME
[root@localhost ~]# chage -m 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the number of days of warning before the password expires

We can set the number of days of warning before the password expires by using the option -W along with the chage command.

[root@localhost ~]# chage -W 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 10

How to disable password aging for a user account

To disable password aging for a particular account, we must set the following on that account (a combined one-liner follows the list):

    -m 0 will set the minimum number of days between password changes to 0
    -M 99999 will set the maximum number of days between password changes to 99999
    -I -1 (minus one) will set the “Password inactive” value to never
    -E -1 (minus one) will set “Account expires” to never.
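
Put together, those four options can be combined into a single command (USERNAME is a placeholder; substitute a real account name):

# chage -m 0 -M 99999 -I -1 -E -1 USERNAME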

When using the chage command to set specifics on an account, do not reset the password with the passwd command, because doing so erases any changes related to account expiry.

Have a great fun with Linux!

 

Credits :- https://www.unixmen.com

Bash, Daily Hacks, Redhat / CEntOS / Oracle Linux, Ubuntu

Hack #1 -> Define CD Base Directory Using CDPATH

If you frequently cd to subdirectories of a specific parent directory, you can set CDPATH to that parent directory and cd to the subdirectories without giving the parent directory path, as explained below.

# pwd
/home/ramesh
# cd mail
-bash: cd: mail: No such file or directory

[Note: The above cd is looking for mail directory under
current directory]

# export CDPATH=/etc
# cd mail
/etc/mail

[Note: The above cd is looking for mail under /etc and not
under current directory]

# pwd
/etc/mail

To make this change permanent, add export CDPATH=/etc to your ~/.bash_profile.

Similar to the PATH variable, you can add more than one directory entry in the CDPATH variable, separating them with a colon (:), as shown below.

export CDPATH=.:~:/etc:/var

This hack can be very helpful under the following situations:
• Oracle DBAs frequently working under $ORACLE_HOME, can set
the CDPATH variable to the oracle home
• Unix sysadmins frequently working under /etc, can set the
CDPATH variable to /etc
• Developers frequently working under project directory
/home/projects, can set the CDPATH variable to /home/projects
• End-users frequently accessing the subdirectories under their
home directory, can set the CDPATH variable to ~ (home
directory)

Operating System, Redhat / CEntOS / Oracle Linux, Ubuntu

UNIX – Directory Structure and Explanations

 

1. / – Root:-

  •  Every single file and directory starts from the root directory.
  •  Only root user has write privilege under this directory.
  •  Please note that /root is the root user’s home directory, which is not the same as /.

2. /bin – User Binaries:-

  • Contains binary executables.
  • Common Linux commands you need to use in single-user modes are located under this directory.
  • Commands used by all the users of the system are located here.
  • For example: ps, ls, ping, grep, cp.

3. /sbin – System Binaries:-

  • Just like /bin, /sbin also contains binary executables.
  • But the Linux commands located under this directory are typically used by the system administrator, for system maintenance purposes.

For example: iptables, reboot, fdisk, ifconfig, swapon

4. /etc – Configuration Files

  • Contains configuration files required by all programs.
  • This also contains startup and shutdown shell scripts used to start/stop  individual programs.

For example: /etc/resolv.conf, /etc/logrotate.conf

5. /dev – Device Files:-

  • Contains device files.
  • These include terminal devices, usb, or any device attached to the system.

For example: /dev/tty1, /dev/usbmon0

6. /proc – Process Information:-

  • Contains information about system process.
  • This is a pseudo filesystem that contains information about running processes. For example, the /proc/{pid} directory contains information about the process with that particular pid.
  • This is a virtual filesystem with text information about system resources. For example: /proc/uptime

7. /var – Variable Files:-

  • Var stands for variable files.
  • Content of the files that are expected to grow can be found under this directory.
  • This includes — system log files (/var/log); packages and database files (/var/lib); emails (/var/mail); print queues (/var/spool); lock files (/var/lock); temp files needed across reboots (/var/tmp);

8. /tmp – Temporary Files:-

  • Directory that contains temporary files created by system and users.
  • Files under this directory are deleted when the system is rebooted.

9. /usr – User Programs:-

  •  Contains binaries, libraries, documentation, and source code for second-level programs.
  •  /usr/bin contains binary files for user programs. If you can’t find a user binary under /bin, look under /usr/bin. For example: at, awk, cc, less, scp
  •  /usr/sbin contains binary files for system administrators. If you can’t find a system binary under /sbin, look under /usr/sbin. For example: atd, cron, sshd, useradd, userdel
  • /usr/lib contains libraries for /usr/bin and /usr/sbin
  • /usr/local contains users programs that you install from source. For example, when you install apache from source, it goes under /usr/local/apache2

10. /home – Home Directories:-

  • Home directories for all users to store their personal files.
  • For example: /home/john, /home/nikita

11. /boot – Boot Loader Files:-

  • Contains boot loader related files.
  • Kernel initrd, vmlinux, grub files are located under /boot
  • For example: initrd.img-2.6.32-24-generic, vmlinuz-2.6.32-24-generic

12. /lib – System Libraries :-

  • Contains library files that support the binaries located under /bin and /sbin
  • Library filenames are either ld* or lib*.so.*
  • For example: ld-2.11.1.so, libncurses.so.5.7

13. /opt – Optional add-on Applications:-

  • opt stands for optional.
  • Contains add-on applications from individual vendors
  • Add-on applications should be installed under /opt/ or an /opt/ sub-directory.

 14. /mnt – Mount Directory:-

  •    Temporary mount directory where sysadmins can mount file systems.

 15. /media – Removable Media Devices:-

  • Temporary mount directory for removable devices.
  • For example, /media/cdrom for CD-ROM; /media/floppy for floppy drives; /media/cdrecorder for CD writers

 16. /srv – Service Data :-

  • srv stands for service.
  • Contains server-specific, service-related data.
  • For example, /srv/cvs contains CVS-related data.

 

Main, Redhat / CEntOS / Oracle Linux, Tips and Tricks, Ubuntu

Safest and the least vulnerable way to add a user to the sudoers with full rights

visudo

demouser ALL=(ALL) PASSWD:ALL, !/bin/su root, !/bin/su -, !/bin/sudo su -, !/bin/su - root, !/bin/sudo su root, !/bin/su, !/usr/bin/passwd, !/usr/bin/passwd root, !/bin/vi /etc/sudoers, !/usr/bin/sudo vi /etc/sudoers, !/usr/sbin/visudo, !/usr/sbin/sudo visudo, !/bin/chmod 777 /etc/sudoers, !/bin/chmod ugo+rwx /etc/sudoers

:wq!

 

Bash, Operating System, Redhat / CEntOS / Oracle Linux, Scripting, Ubuntu

Process Affinity – Linux

  • 1. Introduction
  • 2. Types of Thread Scheduling
    • 2.1. Compact Scheduling
    • 2.2. Round-Robin Scheduling
    • 2.3. Stupid Scheduling
  • 3. Defining Affinity
    • 3.1. The Linux-Portable Way (taskset)
    • 3.2. The Other Linux-Portable Way (numactl)
    • 3.3. Using OpenMP Runtime Extensions
    • 3.4. getfreesocket

1. Introduction

Although a compute node or workstation may appear to have 16 cores and 64 GB of DRAM, these resources are not uniformly accessible to your applications. The best application performance is usually obtained by keeping your code’s parallel workers (e.g., threads or MPI processes) as close to the memory on which they are operating as possible. While you might like to think that the Linux thread scheduler would do this automatically for you, the reality is that most HPC applications benefit greatly from a little bit of help in manually placing threads on different processor cores.

To get an idea of what your multithreaded application is doing while it is running, you can use the ps command.

Assuming your executable is called application.x, you can easily see what cores each thread is using by issuing the following command in bash:

$ for i in $(pgrep application.x); do ps -mo pid,tid,fname,user,psr -p $i;done

The PSR field is the OS identifier for the core each TID (thread id) is utilizing.

2. Types of Thread Scheduling

Certain types of unevenly loaded applications can experience serious performance degradation caused by the Linux scheduler treating high-performance application codes in the same way it would treat a system daemon that might spend most of its time idle.

These sorts of scheduling issues are best described with diagrams. Let’s assume we have compute nodes with two processor sockets, and each processor has four cores:

topology of a dual-socket, quad-core node

When you run a multithreaded application with four threads (or even four serial applications), Linux will schedule those threads for execution by assigning each one to a CPU core. Without being explicitly told how to do this scheduling, Linux may decide to

  1. run thread0 to thread3 on core0 to core3 on socket0
  2. run thread0 and thread1 on core0 and core1 on socket0, and run thread2 and thread3 on socket1
  3. run thread0 and thread1 on core0 only, run thread2 on core1, run thread3 on core2, and leave core3 completely unutilized
  4. any number of other nonsensical allocations involving assigning multiple threads to a single core while other cores sit idle

It should be obvious that option #3 and #4 are very bad for performance, but the fact is that Linux will happily schedule your multithreaded job (or multiple single-thread jobs) this way if your threads behave in a way that is confusing to the operating system.

compact scheduling

2.1. Compact Scheduling

Option #1 is often referred to as “compact” scheduling and is depicted in the diagram to the right. It keeps all of your threads running on a single physical processor if possible, and this is what you would want if all of the threads in your application need to repeatedly access different parts of a large array. This is because all of the cores on the same physical processor can access the memory banks associated with (or “owned by”) that processor at the same speed. However, cores cannot access memory stored on memory banks owned by a different processor as quickly; this phenomenon is called NUMA (non-uniform memory access). If your threads all need to access data stored in the memory owned by one processor, it is often best to put all of your threads on the processor that owns that memory.

2.2. Round-Robin Scheduling

scatter or round-robin scheduling

Option #2 is called “scatter” or “round-robin” scheduling and is ideal if your threads are largely independent of each other and don’t need to access a lot of memory that other threads need. The benefit to round-robin thread scheduling is that not all threads have to share the same memory channel and cache, effectively doubling the memory bandwidth and cache sizes available to your application. The tradeoff is that memory latency becomes higher as threads have to start accessing memory that might be owned by another processor.

2.3. Stupid Scheduling

stupid scheduling

Option #3 and #4 are what I call “stupid” scheduling (see diagram to the right) and can often be the default behavior of the Linux thread scheduler if you don’t tell Linux where your threads should run. This happens because in traditional Linux server environments, most of the processes running at any given time aren’t doing anything. To conserve power, Linux will put a lot of these quiet processes on the same processor or cores, then move them to their own dedicated core when they wake up and have to start processing.

If your application is running at full bore 100% of the time, Linux will probably keep it on its own dedicated CPU core. However, if your application has an uneven load (e.g., threads are mostly idle while the last thread finishes), Linux will see that the application is mostly quiet and pack all the quiet threads (e.g., t0 and t1 in the diagram to the right) on to the same CPU core. This wouldn’t be so bad, but the cost of moving a thread from one core to another requires context switches which get very expensive when done hundreds or thousands of times a minute.

3. Defining Affinity

3.1. The Linux-Portable Way (taskset)

If you want to launch a job (e.g., simulation.x) on a certain set of cores (e.g., core0, core2, core4, and core6), issue

$ taskset -c 0,2,4,6 simulation.x

If your process is already running, you can define thread affinity while in flight. taskset also lets you bind specific TIDs to specific processors at a finer level of granularity than specifying -c 0,2,4,6, because with the latter Linux may still schedule two threads on core2 and nothing on core0. For example,

$ for i in $(pgrep application.x);do ps -mo pid,tid,fname,user,psr -p $i;done
  PID   TID COMMAND  USER     PSR
21654     - applicat glock      -
    - 21654 -        glock      0
    - 21655 -        glock      2
    - 21656 -        glock      2
    - 21657 -        glock      6
    - 21658 -        glock      4
 
$ taskset -p -c 0 21654
$ taskset -p -c 0 21655
$ taskset -p -c 2 21656
$ taskset -p -c 4 21657
$ taskset -p -c 6 21658

This sort of scheduling will happen under certain conditions, so specifying a set of cpus to a set of threads without specifically assigning each thread to a physical core may not always behave optimally.

3.2. The Other Linux-Portable Way (numactl)

The emerging standard for easily binding processes to processors on Linux-based supercomputers is numactl. It can operate on a coarser-grained basis (i.e., CPU sockets rather than individual CPU cores) than taskset (only CPU cores) because it is aware of the processor topology and how the CPU cores map to CPU sockets. Using numactl is typically easier; after all, the common goal is to confine a process to a NUMA pool (or “cpu node”) rather than specific CPU cores. To that end, numactl also lets you bind a process’s memory locality to prevent processes from having to jump across NUMA pools (called “memory nodes” in numactl parlance).

Whereas if you wanted to bind a specific process to one processor socket with taskset you would have to

$ taskset -c 0,2,4,6 simulation.x

the same operation is greatly simplified with numactl:

$ numactl --cpunodebind=0 simulation.x

If you want to also restrict simulation.x’s memory use to the numa pool associated with cpu node 0, you can do

$ numactl --cpunodebind=0 --membind=0 simulation.x

or just

$ numactl -N 0 -m 0 simulation.x

You can see what cpu nodes and their corresponding memory nodes are available on your system by using numactl -H:

$ numactl -H
available: 2 nodes (0-1)
node 0 size: 32728 MB
node 0 free: 12519 MB
node 1 size: 32768 MB
node 1 free: 16180 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10

numactl also lets you supply specific cores (like taskset) with --physcpubind or -C. Unlike taskset, though, numactl does not appear to let you change the CPU affinity of a process that is already running.

An alternative syntax to numactl -C is something like

$ numactl -C +0,1,2,3 simulation.x

By prefixing your list of cores with a +, you can have numactl bind to relative cores. When combined with cpusets (which are enabled by default for all jobs on Gordon), the above command will use the 0th, 1st, 2nd, and 3rd core of the job’s given cpuset instead of literally core 0,1,2,3.

3.3. Using OpenMP Runtime Extensions

OpenMP 4.0 now includes standardized controls for binding threads to cores. I haven’t caught up with these changes but I will document them here once I do.

Multithreaded programs compiled with Intel compilers can utilize Intel’s Thread Affinity Interface for OpenMP applications. Set and export the KMP_AFFINITY environment variable to express binding preferences. KMP_AFFINITY has three principal binding strategies:

  • compact fills up one socket before allocating to other sockets
  • scatter evenly spreads threads across all sockets and cores
  • explicit allows you define exactly which cores/sockets to use

Using KMP_AFFINITY=compact will preferentially bind all your threads, one per core, to a single socket before it tries binding them to other sockets. Unfortunately, it will start at socket0 regardless of whether other processes (such as another SMP job) are already bound to that socket. You can explicitly specify an offset to force the job to bind to a specific socket, but you need to know exactly what is running on which cores and sockets on your node in order to specify this in your submit script.

You can also explicitly define which cores your job should use, combined with a little knowledge of your system’s CPU topology (Intel’s Processor Topology Enumeration tool is great for this). If you wanted to run on cores 0, 2, 4, and 6, you would do

export KMP_AFFINITY='proclist=[0,2,4,6],explicit'

GNU’s implementation of OpenMP has an environment variable similar to KMP_AFFINITY called GOMP_CPU_AFFINITY. Incidentally, Intel’s OpenMP supports GOMP_CPU_AFFINITY, so using this variable may be a relatively portable way to specify thread affinity at runtime. The equivalent GOMP_CPU_AFFINITY for the KMP_AFFINITY I gave above would be:

export GOMP_CPU_AFFINITY='0,2,4,6'

3.4. getfreesocket

I wrote a small perl script called getfreesocket that uses KMP_AFFINITY=explicit (or GOMP_CPU_AFFINITY) and some probing of the Linux OS at runtime to bind SMP jobs to free processor sockets. It should be invoked in a job script something like this:

#!/bin/bash

NPROCS=1
BINARY=${HOME}/bin/whatever

nprocs=$(grep '^physical id' /proc/cpuinfo  | sort -u | wc -l)
ncores=$(grep '^processor' /proc/cpuinfo | sort -u | wc -l)
coresperproc=$((ncores/nprocs))
OMP_NUM_THREADS=$((NPROCS*coresperproc))

freesock=$(getfreesocket -explicit=${NPROCS})
if [ "z$freesock" == "z" ]
then
  echo "Not enough free processors!  aborting"
  exit 1
else
  KMP_AFFINITY="granularity=fine,proclist=[$freesock],explicit"
  GOMP_CPU_AFFINITY="$(echo $freesock | sed -e 's/,/ /g')"
fi

export KMP_AFFINITY OMP_NUM_THREADS GOMP_CPU_AFFINITY

${BINARY}

This was a very simple solution to get single-socket jobs to play nicely on the shared batch system we were using at the Interfacial Molecular Science Laboratory. While numactl is an easier way to accomplish some of this, it still requires that you know what other processes are sharing your node and on what CPU cores they are running. I’ve experienced problems with Linux’s braindead thread scheduling, so this getfreesocket finds completely unused sockets that can be fed into taskset, KMP_AFFINITY, or numactl.

This is not as great an issue if your resource manager supports launching jobs within cpusets. Your resource manager will provide a cpuset, and using relative specifiers for numactl cores (e.g., numactl -C +0-3) will bind to the free socket provided by the batch environment. Of course, this will not specifically bind one thread to one core, so using KMP_AFFINITY or GOMP_CPU_AFFINITY may remain necessary.

C-Programming, Main, Operating System, Redhat / CEntOS / Oracle Linux, Scripting, Ubuntu

Concepts Of Linux Programming – Files and Filesystem

The file is the most basic and fundamental abstraction in Linux. Linux follows the everything-is-a-file philosophy (although not as strictly as some other systems, such as Plan 9). Consequently, much interaction occurs via reading of and writing to files, even when the object in question is not what you would consider a normal file.

In order to be accessed, a file must first be opened. Files can be opened for reading, writing, or both. An open file is referenced via a unique descriptor, a mapping from the metadata associated with the open file back to the specific file itself. Inside the Linux kernel, this descriptor is handled by an integer (of the C type int) called the file descriptor, abbreviated fd. File descriptors are shared with user space, and are used directly by user programs to access files. A large part of Linux system programming consists of opening, manipulating, closing, and otherwise using file descriptors.

Regular files

What most of us call “files” are what Linux labels regular files. A regular file contains bytes of data, organized into a linear array called a byte stream. In Linux, no further organization or formatting is specified for a file. The bytes may have any values, and they may be organized within the file in any way. At the system level, Linux does not enforce a structure upon files beyond the byte stream. Some operating systems, such as VMS, provide highly structured files, supporting concepts such as records. Linux does not.

Any of the bytes within a file may be read from or written to. These operations start at a specific byte, which is one’s conceptual “location” within the file. This location is called the file position or file offset. The file position is an essential piece of the metadata that the kernel associates with each open file. When a file is first opened, the file position is zero. Usually, as bytes in the file are read from or written to, byte-by-byte, the file position increases in kind. The file position may also be set manually to a given value, even a value beyond the end of the file. Writing a byte to a file position beyond the end of the file will cause the intervening bytes to be padded with zeros. While it is possible to write bytes in this manner to a position beyond the end of the file, it is not possible to write bytes to a position before the beginning of a file. Such a practice sounds nonsensical, and, indeed, would have little use. The file position starts at zero; it cannot be negative. Writing a byte to the middle of a file overwrites the byte previously located at that offset. Thus, it is not possible to expand a file by writing into the middle of it. Most file writing occurs at the end of the file. The file position’s maximum value is bounded only by the size of the C type used to store it, which is 64 bits on a modern Linux system.
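
As a quick shell illustration of the zero-fill behaviour described above (the file name and offsets are arbitrary), writing past end-of-file pads the gap with zeros:

$ printf AAAA > demo.bin                                         # 4-byte file
$ dd if=/dev/zero of=demo.bin bs=1 seek=8 count=2 conv=notrunc   # write 2 bytes at offset 8
$ stat -c %s demo.bin                                            # length is now 10
$ od -An -c demo.bin                                             # bytes 4-7 were padded with \0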

The size of a file is measured in bytes and is called its length. The length, in other words, is simply the number of bytes in the linear array that make up the file. A file’s length can be changed via an operation called truncation. A file can be truncated to a new size smaller than its original size, which results in bytes being removed from the end of the file. Confusingly, given the operation’s name, a file can also be “truncated” to a new size larger than its original size. In that case, the new bytes (which are added to the end of the file) are filled with zeros. A file may be empty (that is, have a length of zero), and thus contain no valid bytes. The maximum file length, as with the maximum file position, is bounded only by limits on the sizes of the C types that the Linux kernel uses to manage files. Specific filesystems, however, may impose their own restrictions, imposing a smaller ceiling on the maximum length.
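
The same demo file shows truncation in both directions, using the coreutils truncate command:

$ truncate -s 4 demo.bin      # shrink: bytes beyond offset 4 are removed
$ truncate -s 16 demo.bin     # "truncate" larger: the 12 new bytes are filled with zeros
$ stat -c %s demo.bin         # reports 16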

A single file can be opened more than once, by a different or even the same process. Each open instance of a file is given a unique file descriptor. Conversely, processes can share their file descriptors, allowing a single descriptor to be used by more than one process. The kernel does not impose any restrictions on concurrent file access. Multiple processes are free to read from and write to the same file at the same time. The results of such concurrent accesses rely on the ordering of the individual operations, and are generally unpredictable. User-space programs typically must coordinate amongst themselves to ensure that concurrent file accesses are properly synchronized.

Although files are usually accessed via filenames, they actually are not directly associated with such names. Instead, a file is referenced by an inode (originally short for information node), which is assigned an integer value unique to the filesystem (but not necessarily unique across the whole system). This value is called the inode number, often abbreviated as i-number or ino. An inode stores metadata associated with a file, such as its modification timestamp, owner, type, length, and the location of the file’s data—but no filename! The inode is both a physical object, located on disk in Unix-style filesystems, and a conceptual entity, represented by a data structure in the Linux kernel.

Directories and links

Accessing a file via its inode number is cumbersome (and also a potential security hole), so files are always opened from user space by a name, not an inode number. Directories are used to provide the names with which to access files. A directory acts as a mapping of human-readable names to inode numbers. A name and inode pair is called a link. The physical on-disk form of this mapping—for example, a simple table or a hash—is implemented and managed by the kernel code that supports a given filesystem. Conceptually, a directory is viewed like any normal file, with the difference that it contains only a mapping of names to inodes. The kernel directly uses this mapping to perform name-to-inode resolutions.

When a user-space application requests that a given filename be opened, the kernel opens the directory containing the filename and searches for the given name. From the filename, the kernel obtains the inode number. From the inode number, the inode is found. The inode contains metadata associated with the file, including the on-disk location of the file’s data.

Initially, there is only one directory on the disk, the root directory. This directory is usually denoted by the path /. But, as we all know, there are typically many directories on a system. How does the kernel know which directory to look in to find a given filename?

As mentioned previously, directories are much like regular files. Indeed, they even have associated inodes. Consequently, the links inside of directories can point to the inodes of other directories. This means directories can nest inside of other directories, forming a hierarchy of directories. This, in turn, allows for the use of the pathnames with which all Unix users are familiar—for example, /home/blackbeard/concorde.png.

When the kernel is asked to open a pathname like this, it walks each directory entry (called a dentry inside of the kernel) in the pathname to find the inode of the next entry. In the preceding example, the kernel starts at /, gets the inode for home, goes there, gets the inode for blackbeard, runs there, and finally gets the inode for concorde.png. This operation is called directory or pathname resolution. The Linux kernel also employs a cache, called the dentry cache, to store the results of directory resolutions, providing for speedier lookups in the future given temporal locality.

A pathname that starts at the root directory is said to be fully qualified, and is called an absolute pathname. Some pathnames are not fully qualified; instead, they are provided relative to some other directory (for example, todo/plunder). These paths are called relative pathnames. When provided with a relative pathname, the kernel begins the pathname resolution in the current working directory. From the current working directory, the kernel looks up the directory todo. From there, the kernel gets the inode for plunder. Together, the combination of a relative pathname and the current working directory is fully qualified.

Although directories are treated like normal files, the kernel does not allow them to be opened and manipulated like regular files. Instead, they must be manipulated using a special set of system calls. These system calls allow for the adding and removing of links, which are the only two sensible operations anyhow. If user space were allowed to manipulate directories without the kernel’s mediation, it would be too easy for a single simple error to corrupt the filesystem.

Hard links

Conceptually, nothing covered thus far would prevent multiple names resolving to the same inode. Indeed, this is allowed. When multiple links map different names to the same inode, we call them hard links.

Hard links allow for complex filesystem structures with multiple pathnames pointing to the same data. The hard links can be in the same directory, or in two or more different directories. In either case, the kernel simply resolves the pathname to the correct inode. For example, a specific inode that points to a specific chunk of data can be hard linked from /home/bluebeard/treasure.txtand /home/blackbeard/to_steal.txt.

Deleting a file involves unlinking it from the directory structure, which is done simply by removing its name and inode pair from a directory. Because Linux supports hard links, however, the filesystem cannot destroy the inode and its associated data on every unlink operation. What if another hard link existed elsewhere in the filesystem? To ensure that a file is not destroyed until all links to it are removed, each inode contains a link count that keeps track of the number of links within the filesystem that point to it. When a pathname is unlinked, the link count is decremented by one; only when it reaches zero are the inode and its associated data actually removed from the filesystem.
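
A short shell session (file names borrowed from the pirate examples above; inode numbers will differ on your system) makes the link count visible:

$ touch treasure.txt
$ ln treasure.txt to_steal.txt                     # second hard link to the same inode
$ stat -c '%h %i %n' treasure.txt to_steal.txt     # link count 2, identical inode numbers
$ rm treasure.txt                                  # unlink one name
$ stat -c '%h %i %n' to_steal.txt                  # link count 1; the data survives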

Symbolic links

Hard links cannot span filesystems because an inode number is meaningless outside of the inode’s own filesystem. To allow links that can span filesystems, and that are a bit simpler and less transparent, Unix systems also implement symbolic links (often shortened to symlinks).

Symbolic links look like regular files. A symlink has its own inode and data chunk, which contains the complete pathname of the linked-to file. This means symbolic links can point anywhere, including to files and directories that reside on different filesystems, and even to files and directories that do not exist. A symbolic link that points to a nonexistent file is called a broken link.
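
Symlinks, including broken ones, are just as easy to demonstrate (paths are illustrative):

$ ln -s to_steal.txt shortcut.txt     # a symlink with its own inode holding the target pathname
$ ln -s /no/such/file broken.txt      # the target need not exist
$ ls -l shortcut.txt broken.txt       # both show their '->' targets; broken.txt is a broken link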

Symbolic links incur more overhead than hard links because resolving a symbolic link effectively involves resolving two files: the symbolic link and then the linked-to file. Hard links do not incur this additional overhead—there is no difference between accessing a file linked into the filesystem more than once and one linked only once. The overhead of symbolic links is minimal, but it is still considered a negative.

Symbolic links are also more opaque than hard links. Using hard links is entirely transparent; in fact, it takes effort to find out that a file is linked more than once! Manipulating symbolic links, on the other hand, requires special system calls. This lack of transparency is often considered a positive, as the link structure is explicitly made plain, with symbolic links acting more as shortcuts than as filesystem-internal links.

Special files

Special files are kernel objects that are represented as files. Over the years, Unix systems have supported a handful of different special files. Linux supports four: block device files, character device files, named pipes, and Unix domain sockets. Special files are a way to let certain abstractions fit into the filesystem, continuing the everything-is-a-file paradigm. Linux provides a system call to create a special file.

Device access in Unix systems is performed via device files, which act and look like normal files residing on the filesystem. Device files may be opened, read from, and written to, allowing user space to access and manipulate devices (both physical and virtual) on the system. Unix devices are generally broken into two groups: character devices and block devices. Each type of device has its own special device file.

A character device is accessed as a linear queue of bytes. The device driver places bytes onto the queue, one by one, and user space reads the bytes in the order that they were placed on the queue. A keyboard is an example of a character device. If the user types “peg,” for example, an application would want to read from the keyboard device the p, the e, and, finally, the g, in exactly that order. When there are no more characters left to read, the device returns end-of-file (EOF). Missing a character, or reading them in any other order, would make little sense. Character devices are accessed via character device files.

A block device, in contrast, is accessed as an array of bytes. The device driver maps the bytes over a seekable device, and user space is free to access any valid bytes in the array, in any order—it might read byte 12, then byte 7, and then byte 12 again. Block devices are generally storage devices. Hard disks, floppy drives, CD-ROM drives, and flash memory are all examples of block devices. They are accessed via block device files.

Named pipes (often called FIFOs, short for “first in, first out”) are an interprocess communication (IPC) mechanism that provides a communication channel over a file descriptor, accessed via a special file. Regular pipes are the method used to “pipe” the output of one program into the input of another; they are created in memory via a system call and do not exist on any filesystem. Named pipes act like regular pipes but are accessed via a file, called a FIFO special file. Unrelated processes can access this file and communicate.
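
A named pipe can be created and exercised from the shell with mkfifo (the path is illustrative):

$ mkfifo /tmp/chan            # create the FIFO special file
$ cat /tmp/chan &             # the reader blocks until data arrives
$ echo ahoy > /tmp/chan       # the writer unblocks it; the backgrounded cat prints 'ahoy'
$ rm /tmp/chan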

Sockets are the final type of special file. Sockets are an advanced form of IPC that allow for communication between two different processes, not only on the same machine, but even on two different machines. In fact, sockets form the basis of network and Internet programming. They come in multiple varieties, including the Unix domain socket, which is the form of socket used for communication within the local machine. Whereas sockets communicating over the Internet might use a hostname and port pair for identifying the target of communication, Unix domain sockets use a special file residing on a filesystem, often simply called a socket file.

Filesystems and namespaces

Linux, like all Unix systems, provides a global and unified namespace of files and directories. Some operating systems separate different disks and drives into separate namespaces—for example, a file on a floppy disk might be accessible via the pathname A:\plank.jpg, while the hard drive is located at C:\. In Unix, that same file on a floppy might be accessible via the pathname /media/floppy/plank.jpg or even via /home/captain/stuff/plank.jpg, right alongside files from other media. That is, on Unix, the namespace is unified.

A filesystem is a collection of files and directories in a formal and valid hierarchy. Filesystems may be individually added to and removed from the global namespace of files and directories. These operations are called mounting and unmounting. Each filesystem is mounted to a specific location in the namespace, known as a mount point. The root directory of the filesystem is then accessible at this mount point. For example, a CD might be mounted at /media/cdrom, making the root of the filesystem on the CD accessible at /media/cdrom. The first filesystem mounted is located in the root of the namespace, /, and is called the root filesystem. Linux systems always have a root filesystem. Mounting other filesystems at other mount points is optional.
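
As a hedged example (the device and mount point are assumptions; most desktop distros automount removable media for you), mounting and unmounting by hand looks like this:

# mount -t iso9660 /dev/cdrom /media/cdrom    # attach the CD's filesystem at the mount point
# ls /media/cdrom                             # the CD's root directory is now visible here
# umount /media/cdrom                         # detach it from the namespace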

Filesystems usually exist physically (i.e., are stored on disk), although Linux also supports virtual filesystems that exist only in memory, and network filesystems that exist on machines across the network. Physical filesystems reside on block storage devices, such as CDs, floppy disks, compact flash cards, or hard drives. Some such devices are partitionable, which means that they can be divided up into multiple filesystems, all of which can be manipulated individually. Linux supports a wide range of filesystems—certainly anything that the average user might hope to come across—including media-specific filesystems (for example, ISO9660), network filesystems (NFS), native filesystems (ext4), filesystems from other Unix systems (XFS), and even filesystems from non-Unix systems (FAT).

The smallest addressable unit on a block device is the sector. The sector is a physical attribute of the device. Sectors come in various powers of two, with 512 bytes being quite common. A block device cannot transfer or access a unit of data smaller than a sector and all I/O must occur in terms of one or more sectors.

Likewise, the smallest logically addressable unit on a filesystem is the block. The block is an abstraction of the filesystem, not of the physical media on which the filesystem resides. A block is usually a power-of-two multiple of the sector size. In Linux, blocks are generally larger than the sector, but they must be smaller than the page size (the smallest unit addressable by the memory management unit, a hardware component). Common block sizes are 512 bytes, 1 kilobyte, and 4 kilobytes.
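
On a running system you can query both units with blockdev from util-linux (the device name /dev/sda is an assumption; adjust it for your disk, and note the command needs root):

# blockdev --getss /dev/sda     # sector size in bytes (commonly 512)
# blockdev --getbsz /dev/sda    # block size used for I/O on the device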

Historically, Unix systems have only a single shared namespace, viewable by all users and all processes on the system. Linux takes an innovative approach and supports per-process namespaces, allowing each process to optionally have a unique view of the system’s file and directory hierarchy. By default, each process inherits the namespace of its parent, but a process may elect to create its own namespace with its own set of mount points and a unique root directory.
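
The unshare utility from util-linux gives a quick taste of per-process namespaces; the tmpfs mount below is visible only inside the spawned shell:

# unshare --mount /bin/bash     # start a shell in its own mount namespace
# mount -t tmpfs tmpfs /mnt     # this mount exists only in this namespace
# exit                          # back in the parent namespace, /mnt is untouched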

C-Programming, Redhat / CEntOS / Oracle Linux, Ubuntu

Simple utility to allocate memory on a Linux Machine

1. What can I use this for?

  • Test swap
  • Test behaviors on a machine when there is little memory available

————————————————————————-

2. Usage

————————————————————

Installation

cd /tmp
vim memtest.c
<enter the contents in the file and save it>
vim Makefile
<enter the contents in the file and save it>
sudo make install

————————————————————————–

Makefile

# note: recipe lines in a Makefile must be indented with a TAB character
PREFIX ?= /usr/local

all: memtest.c
	$(CC) memtest.c -o memtest

install: memtest
	install -m 0755 memtest $(PREFIX)/bin/

clean:
	rm -f *.o memtest

memtest.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <stdbool.h>
#include <unistd.h>

#if defined(_SC_PHYS_PAGES) && defined(_SC_AVPHYS_PAGES) && defined(_SC_PAGE_SIZE)
#define MEMORY_PERCENTAGE
#endif

#ifdef MEMORY_PERCENTAGE
size_t getTotalSystemMemory(){
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);
    return pages * page_size;
}

size_t getFreeSystemMemory(){
    long pages = sysconf(_SC_AVPHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);
    return pages * page_size;
}
#endif

/* Allocate 'total' bytes in 'chunk'-sized pieces and touch every byte so the
 * pages are actually committed; the memory is deliberately never freed and is
 * released only when the process exits. */
bool eat(long total, int chunk){
    long i;
    for(i = 0; i < total; i += chunk){
        char *buffer = malloc(chunk);
        if(buffer == NULL){
            return false;
        }
        memset(buffer, 0, chunk);
    }
    return true;
}

int main(int argc, char *argv[]){

#ifdef MEMORY_PERCENTAGE
    printf("Currently total memory: %zu\n", getTotalSystemMemory());
    printf("Currently avail memory: %zu\n", getFreeSystemMemory());
#endif

    int i;
    for(i = 0; i < argc; i++){
        char *arg = argv[i];
        if(strcmp(arg, "-h") == 0 || strcmp(arg, "-?") == 0 || argc == 1){
            printf("Usage: memtest <size>\n");
            printf("Size can be specified in megabytes or gigabytes in the following way:\n");
            printf("#          # Bytes      example: 1024\n");
            printf("#M         # Megabytes  example: 15M\n");
            printf("#G         # Gigabytes  example: 2G\n");
#ifdef MEMORY_PERCENTAGE
            printf("#%%         # Percent    example: 50%%\n");
#endif
            printf("\n");
        }else if(i > 0){
            int len = strlen(arg);
            char unit = arg[len - 1];
            long size = -1;
            int chunk = 1024;
            if(!isdigit(unit)){
                if(unit == 'M' || unit == 'G'){
                    arg[len - 1] = 0;
                    size = atol(arg) * (unit == 'M' ? 1024L*1024 : 1024L*1024*1024);
                }
#ifdef MEMORY_PERCENTAGE
                else if(unit == '%'){
                    size = (atol(arg) * (long)getFreeSystemMemory()) / 100;
                }
#endif
                else{
                    printf("Invalid size format\n");
                    exit(1);
                }
            }else{
                size = atol(arg);
            }
            printf("Eating %ld bytes in chunks of %d...\n", size, chunk);
            if(eat(size, chunk)){
                printf("Done, press Enter to free the memory\n");
                getchar();
            }else{
                printf("ERROR: Could not allocate the memory\n");
            }
        }
    }

    return 0;
}

————————————————————–

Running

memtest <size>

Size can be given in bytes, megabytes, or gigabytes (or, on Linux, as a percentage of free memory).

—————————————————————————–

Examples

memtest 1024
memtest 10M
memtest 4G
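
Since the build above defines MEMORY_PERCENTAGE on Linux, a percentage of the currently free memory also works; for instance, to grab roughly half of it (handy for watching swap behaviour kick in):

memtest 50%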