
Process Management in Linux

Process Types

Before we start talking about Linux process management, we should review process types. There are five common types of processes:

  • Parent process
  • Child process
  • Orphan Process
  • Daemon Process
  • Zombie Process

A parent process is a process that creates other processes by calling the fork() system call. Every process except process 0 has a parent process.

A child process is a process created by a parent process.

An orphan process is a process that keeps running after its parent process has terminated; it is adopted by init (process 1) and does not become a zombie process.

A daemon process is a background process, typically created by forking a child process and letting the parent exit.

A zombie process is a process that has terminated but still has an entry in the process table because its parent has not yet read its exit status.

Memory Management

In server administration, memory management is one of the key responsibilities you need to care about as a system administrator.

One of the most used commands in Linux process management is the free command:

$ free -m

The -m option shows the values in megabytes.


Our main concern here is the buff/cache value.

In this example, the output of the free command shows that 536 megabytes are used while 1221 megabytes are available.

The second line shows the swap. Swapping occurs when physical memory becomes full.

The first value is the total swap size which is 3070 megabytes.

The second value is the used swap which is 0.

The third value is the available swap for usage which is 3070.

From the above results, you can say that the memory status is good since no swap is used. While we are talking about swap, let's see what the /proc directory tells us about it.

$ cat /proc/swaps


This command shows the swap size and how much of it is used.

$ cat /proc/sys/vm/swappiness

This command shows a value from 0 to 100; for example, a value of 70 means the system will start to use swap when memory is about 70% used.

Note: the default value on most distros is between 30 and 60. You can modify it like this:

$ echo 50 > /proc/sys/vm/swappiness

Or using sysctl command like this:

$ sudo sysctl -w vm.swappiness=50

Changing the swappiness value using the above commands is not permanent; to persist it, add it to the /etc/sysctl.conf file like this:

$ nano /etc/sysctl.conf

vm.swappiness=50
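After saving the file, you can apply the settings from /etc/sysctl.conf without a reboot (a quick check, assuming you edited the file as shown above):

$ sudo sysctl -p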


Cool!!

The swappiness value controls how aggressively the kernel moves memory pages from RAM to swap.

Choosing the right swappiness value for your server requires some experimentation.

Managing virtual memory with vmstat

Another important command used in Linux process management is vmstat. The vmstat command gives a summary report about memory, processes, and paging.

$ vmstat -a

The -a option shows active and inactive memory.


These are the important columns in the output of this command:

si: Amount of memory swapped in from disk.

so: Amount of memory swapped out to disk.

bi: Blocks received from (read from) a block device.

bo: Blocks sent to (written to) a block device.

us: The user time.

sy: The system time.

id: The idle time.

Our main concern is the (si) and (so) columns, where (si) column shows page-ins while (so) column provides page-outs.

A better way to look at these values is by viewing the output with a delay option like this:

$ vmstat 2 5


Where 2 is the delay in seconds and 5 is the number of times vmstat is called. It shows five updates of the command and all data is presented in kilobytes.

Page-in (si) happens when you start an application and the information is paged-in. Page out (so) happens when the kernel is freeing up memory.

System Load & top Command

In Linux process management, the top command gives you a list of the running processes and how they are using CPU and memory; the output is real-time data.

A dual-core system may have the first core at 40 percent and the second core at 70 percent; in this case, the top command may show a combined result of 110 percent, but you will not know the individual values for each core.

$ top -c


We use the -c option to show the command line or the executable path behind each process.

You can press the 1 key while watching the top statistics to show the status of each individual CPU.
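If you want to capture a snapshot of the top output non-interactively, for example from a monitoring script, batch mode can be handy (a small illustrative example):

$ top -b -n 1 | head -20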


Keep in mind that some programs spawn child processes, so you will see multiple processes for the same program, such as httpd and PHP-FPM.

You shouldn't rely on the top command alone; review other resources before taking a final action.

Monitoring Disk I/O with iotop

A system can become slow as a result of heavy disk activity, so it is important to monitor disk activity; that means figuring out which processes or users are causing it.

The iotop command in Linux process management helps us to monitor disk I/O in real-time. You can install it if you don’t have it:

$ yum install iotop

Running iotop without any options lists all processes.

To view only the processes that cause disk activity, use the -o option:

$ iotop -o


This makes it easy to see which program is impacting the system.
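If you want to log the offending processes over time instead of watching interactively, iotop can also run in batch mode (an illustrative example; -b is batch mode, -t adds timestamps, -q reduces header noise, and -n limits the number of iterations):

$ sudo iotop -o -b -t -q -n 5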

ps command

We've talked about the ps command in a previous post, including how to sort processes by memory and CPU usage.

Monitoring System Health with iostat and lsof

The iostat command gives you a CPU utilization report; use the -c option to display only the CPU report.

$ iostat -c

The output is easy to understand, but if the system is busy, you will see %iowait increase, which means the server is transferring or copying a lot of files.

With this command, you can check read and write operations, get a solid picture of what is stressing your disk, and take the right decision.

Additionally, the lsof command is used to list open files:

lsof command shows which executable is using the file, the process ID, the user, and the name of the opened file.
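For example, to see which processes have a particular file open, or to list the files opened by a specific process (the file path and PID here are just placeholders):

$ lsof /var/log/messages

$ lsof -p 2213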

Calculating the system load

Calculating the system load is very important in Linux process management. The system load is the amount of work the system is currently doing. It is not a perfect way to measure system performance, but it gives you useful evidence.

The load is calculated like this:

Actual Load = Total Load (uptime) / No. of CPUs
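For example, you can compute the normalized 1-minute load per core with a quick one-liner (assuming bc is installed):

$ echo "scale=2; $(cut -d' ' -f1 /proc/loadavg) / $(nproc)" | bc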

You can check the load averages with the uptime command or the top command:

$ uptime

$ top

The server load is shown for the last 1, 5, and 15 minutes.

In this example, the load average is 0.00 over the last minute, 0.01 over the last five minutes, and 0.05 over the last fifteen minutes.

When the load increases, processes are queued for the CPU, and if there are multiple processor cores, the load is distributed across the server's cores to balance the work.

You can say that a good load average is about 1 per core. A load exceeding 1 is not automatically a problem, but if you see higher numbers for a long time, the load is genuinely high and there is a problem.

pgrep and systemctl

You can get the process ID using pgrep command followed by the service name.

$ pgrep servicename

This command shows the process ID or PID.

Note: if this command shows more than one process ID, as with httpd or SSH, the smallest process ID is the parent process ID.

On the other hand, you can use the systemctl command to get the main PID like this:

$ systemctl status <service_name>.service

There are more ways to obtain the required process ID or parent process ID, but this one is easy and straightforward.
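If you only need the main PID itself, for example in a script, systemctl can print it directly (sshd is used here just as an example service):

$ systemctl show -p MainPID sshd.service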

Managing Services with systemd

If we are going to talk about Linux process management, we should take a look at systemd. systemd is responsible for controlling how services are managed on modern Linux systems like CentOS 7.

Instead of using chkconfig command to enable and disable a service during the boot, you can use the systemctl command.

systemd also ships with its own version of the top command; to show the processes associated with a specific service (control group), you can use the systemd-cgtop command like this:

$ systemd-cgtop

As you can see, it lists all associated processes, the control group path, the number of tasks, the CPU usage, the memory allocation, and the related input and output.

You can also output a recursive list of control group content with the systemd-cgls command like this:

$ systemd-cgls

This command gives very useful information that can help you make your decision.

Nice and Renice Processes

The nice value of a process is a numeric hint to the scheduler about how the process competes for the CPU.

A high nice value indicates a low priority for your process, i.e. how nice you are going to be to other users, which is where the name comes from.

The nice range is from -20 to +19.

The nice command sets the nice value of a process at launch time, while the renice command adjusts the value of a running process.

$ nice -n 5 ./myscript

This command increases the nice value by 5, which means a lower priority.

$ sudo renice -5 2213

This command decreases the nice value, which means increased priority; the number (2213) is the PID.

A regular user can increase a process's nice value (lower its priority) but cannot lower it (raise its priority), while the root user can do both.
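You can check the current nice value of a process with ps; for example, using the same PID as above:

$ ps -o pid,ni,comm -p 2213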

Sending the kill signal

To kill a service or application that causes a problem, you can issue a termination signal (SIGTERM). You can review the previous post about signals and jobs.

$ kill <PID>

This method is called a safe kill. However, depending on your situation, you may need to send a hang-up signal (SIGHUP) to force a service or application to reload, like this:

$ kill -1 <PID>

Sometimes safe killing and reloading fail to do anything; in that case you can send the SIGKILL signal using the -9 option, which is called a forced kill.

$ kill -9 <PID>

There are no cleanup operations or safe exit with this signal, so it is not preferred. Alternatively, you can kill processes by name instead of PID using the pkill command.

$ pkill -9 serviceName

And you can use the pgrep command to verify that all associated processes have been killed.

$ pgrep serviceName

I hope you now have a good idea of Linux process management and how to take the right actions to keep your system healthy.

Thank you


Standard Linux Tuning

Hello Bloggers,

The majority of applications these days are deployed on a (Debian / Red Hat) Linux operating system as the base OS.

I would like to share some generic tuning that can be done before deploying any application on it.

Index Component Question / Test / Reason  
Network
  These are some checks to validate the network setup.
  Network Are the switches redundant?
Unplug one switch.
Fault-tolerance.
 
  Network Is the cabling redundant?
Pull cables.
Fault-tolerance.
 
  Network Is the network full-duplex?
Double check setup.
Performance.
 
       
Network adapter (NIC) Tuning
  It is recommended to consult with the network adapter provider on recommended Linux TCP/IP settings for optimal performance and stability on Linux.

There are also quite a few TCP/IP tuning sources on the Internet, such as http://fasterdata.es.net/TCP-tuning/linux.html

  NIC Are the NIC fault-tolerant (aka. auto-port negotiation)?
Pull cables and/or disable network adapter.
Fault-tolerance.
 
  NIC Set the transmission queue depth to at least 1000.

Modify: ifconfig <DevName> txqueuelen <length>

Check: cat /proc/net/softnet_stat

Performance and stability (packet drops).

 
  NIC Enable TCP/IP offloading (aka. Generic Segment Offloading (GSO)) which was added in kernel 2.6.18

See: http://www.linuxfoundation.org/en/Net:GSO

Check: ethtool -k eth0

Modify: ethtool -K <DevName> <feature> on|off

Performance.

Note: I recommend enabling all supported TCP/IP offloading capabilities on an EMS host to free CPU resources.

 
  NIC Enable Interrupt Coalescence (aka. Interrupt Moderation or Interrupt Blanking).

See: http://kb.pert.geant.net/PERTKB/InterruptCoalescence

Check: ethtool -c eth0

Modify: ethtool -C <DevName>

Performance.

Note: The configuration is system dependant but the goal is to reduce the number of interrupts per second at the ‘cost’ of slightly increased latency.

 
       
TCP/IP Buffer Tuning
  For a low latency or high throughput messaging system TCP/IP buffer tuning is important.
Thus, instead of tuning the default values, one should rather check whether the settings (sysctl -a) already provide large enough buffers. The values can be changed via the command sysctl -w <name>=<value>.

The values and comments below were taken from TIBCO support (FAQ1-6YOAA) and serve as a guideline for "large enough" buffers, i.e. if your system configuration has lower values it is suggested to raise them to the values below.

  TCP/IP Maximum OS receive buffer size for all connection types.

sysctl -w net.core.rmem_max=8388608

Default: 131071
Performance.

 
  TCP/IP Default OS receive buffer size for all connection types.

sysctl -w net.core.rmem_default=65536

Default: 126976
Performance.

 
  TCP/IP Maximum OS send buffer size for all connection types.

sysctl -w net.core.wmem_max=8388608

Default: 131071
Performance.

 
  TCP/IP Default OS send buffer size for all types of connections.

sysctl -w net.core.wmem_default=65536

Default: 126976
Performance.

 
  TCP/IP Is TCP/IP window scaling enabled?

sysctl net.ipv4.tcp_window_scaling

Default: 1

Performance.
Note: As applications set buffer sizes explicitly, this 'disables' TCP/IP window scaling on Linux. Thus there is no point in enabling it, though there should be no harm in leaving the default (enabled). [This is my understanding / what I have been told, but I never double-checked and it could vary with kernel versions.]

 
  TCP/IP TCP auto-tuning setting:

sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'

Default: 1966087 262144 393216
Performance.

The tcp_mem variable defines how the TCP stack should behave when it comes to memory usage:

–          The first value specified in the tcp_mem variable tells the kernel the low threshold. Below this point, the TCP stack does not bother at all about putting any pressure on the memory usage by different TCP sockets.

–          The second value tells the kernel at which point to start pressuring memory usage down.

–          The final value tells the kernel how many memory pages it may use maximally. If this value is reached, TCP streams and packets start getting dropped until we reach a lower memory usage again. This value includes all TCP sockets currently in use.

 
  TCP/IP TCP auto-tuning (receive) setting:

sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'

Default: 4096 87380 4194304
Performance.

The tcp_rmem variable defines how the TCP stack should behave when it comes to memory usage:

–          The first value tells the kernel the minimum receive buffer for each TCP connection, and this buffer is always allocated to a TCP socket, even under high pressure on the system.

–          The second value specified tells the kernel the default receive buffer allocated for each TCP socket. This value overrides the /proc/sys/net/core/rmem_default value used by other protocols.

–          The third and last value specified in this variable specifies the maximum receive buffer that can be allocated for a TCP socket.”

 
  TCP/IP TCP auto-tuning (send) setting:

sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'

Default: 4096 87380 4194304
Performance.

This variable takes three different values which hold information on how much TCP send buffer memory space each TCP socket has to use. Every TCP socket has this much buffer space to use before the buffer is filled up.  Each of the three values are used under different conditions:

–          The first value in this variable tells the minimum TCP send buffer space available for a single TCP socket.

–          The second value in the variable tells us the default buffer space allowed for a single TCP socket to use.

–          The third value tells the kernel the maximum TCP send buffer space.”

 
  TCP/IP This will ensure that immediately subsequent connections use these values.

sysctl -w net.ipv4.route.flush=1

Default: Not present
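To make the buffer values above survive a reboot, they can also be added to /etc/sysctl.conf and loaded with sysctl -p (a sketch using the guideline values from this section):

net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.rmem_default = 65536
net.core.wmem_default = 65536
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608

$ sudo sysctl -p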

 
       
TCP Keep Alive
  In order to detect ungracefully closed sockets either the TCP keep-alive comes into play or the EMS client-server heartbeat. Which setup or which combination of parameters works better depends on the requirements and test scenarios.

As the EMS daemon does not explicitly enable TCP keep-alive on sockets, the TCP keep-alive settings (net.ipv4.tcp_keepalive_intvl, net.ipv4.tcp_keepalive_probes, net.ipv4.tcp_keepalive_time) do not play a role.

  TCP How many times to retry before killing an alive TCP connection. RFC 1122 says that the limit should be longer than 100 seconds, which is a very small number. The default value of 15 corresponds to 13-30 minutes depending on the retransmission timeout (RTO).

sysctl -w net.ipv4.tcp_retries2=<test> (7 preferred)

Default: 15

Fault-Tolerance (EMS failover)

The default (15) is often considered too high and a value of 3 is often felt as too ‘edgy’ thus customer testing should establish a good value in the range between 4 and 10.

 
       
Linux System Settings
  System limits (ulimit) are used to establish boundaries for resource utilization by individual processes and thus protect the system and other processes. A too high or unlimited value provides zero protection but a too low value could hinder growth or cause premature errors.
  Linux Is the number of file descriptors at least 4096?

ulimit -n

Scalability

Note: It is expected that the number of connected clients, and thus the number of connections, is going to increase over time; this setting allows for greater growth and also provides greater safety room should some application have a connection leak. Also note that a large number of open connections can decrease system performance due to the way the OS handles the select() API. Thus care should be taken, if the number of connected clients increases over time, that all SLAs are still met.
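A minimal illustrative snippet for raising the file descriptor limits persistently via /etc/security/limits.conf (the user name 'ems' is just a placeholder):

ems soft nofile 4096
ems hard nofile 8192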

 
  Linux Limit maximum file size for EMS to 2/5 of the disk space if the disk space is shared between EMS servers.

ulimit -f

Robustness: Contain the damage of a very large backlog.

 
  Linux Consider limiting the maximum data segment size for EMS daemons in order to avoid one EMS monopolizing all available memory.

ulimit -d

Robustness:  Contain the damage of a very large backlog.

Note: It should be tested if such a limit operates well with (triggers) the EMS reserved memory mode.

 
  Linux Limit the number of child processes to X to contain a rogue application (fork bomb).

ulimit -u

Robustness: Contain the damage a rogue application can do.
See: http://www.cyberciti.biz/tips/linux-limiting-user-process.html

This is just an example of a Linux system setting that is unrelated to TIBCO products. It is recommended to consult with Linux experts for recommended settings.

 
       
Linux Virtual Memory Management
  There are a couple of virtual memory related settings that influence how likely Linux is to swap out memory pages and how Linux reacts to out-of-memory conditions. Neither aspect matters much under "normal" operating conditions, but both are very important under memory pressure and thus for the system's stability under stress.

 

A server running EAI software, and even more so a server running a messaging server like EMS, should rarely have to resort to swap space, for obvious performance reasons. However, considerations due to malloc/sbrk high-water-mark behavior, the behavior of the different over-commit strategies, and the price of storage lead to the above recommendation: even with the tuning of the EMS server towards larger malloc regions[1] described below, the reality is that the EMS daemon is still subject to the sbrk() high-water-mark and potentially allocates a lot of memory pages that could be swapped out without impacting performance. Of course the EMS server instance must eventually be bounced, but the recommendations in this section aim to give operations a larger window to schedule that maintenance.

 

As these values operate as a bundle, they must be changed together, or any variation must be well understood.

  Linux Swap-Space:                1.5 to 2x the physical RAM (24-32 GB )

Logical-Partition:        One of the first ones but after the EMS disk storage and application checkpoint files.

Physical-Partition:     Use a different physical partition than the one used for storage files, logging or application checkpoints to avoid competing disk IO.

 

 
  Linux Committing virtual memory:

sysctl -w vm.overcommit_memory=2

$ cat /proc/sys/vm/overcommit_memory

Default: 0

Robustness

 

Note: The recommended setting uses a new heuristic that only commits as much memory as available, where available is defined as swap-space plus a portion of RAM. The portion of RAM is defined in the overcommit_ratio. See also: http://www.mjmwired.net/kernel/Documentation/vm/overcommit-accounting and http://www.centos.org/docs/5/html/5.2/Deployment_Guide/s3-proc-sys-vm.html

 
  Linux Committing virtual memory II:

sysctl -w vm.overcommit_ratio=25 (or less)

$ cat /proc/sys/vm/overcommit_ratio

Default: 50

Robustness

Note: This value specifies how much percent of the RAM Linux will add to the swap space in order to calculate the “available” memory. The more the swap space exceeds the physical RAM the lower values might be chosen. See also: http://www.linuxinsight.com/proc_sys_vm_overcommit_ratio.html

 
  Linux Swappiness

sysctl -w vm.swappiness=25 (or less)

$ cat /proc/sys/vm/swappiness
Default: 60

Robustness

 

Note: The swappiness defines how likely memory pages will be swapped in order to make room for the file buffer cache.

 

Generally speaking an enterprise server should not need to swap out pages in order to make room for the file buffer cache or other processes which would favor a setting of 0. 

 

On the other hand it is likely that applications have at least some memory pages that almost never get referenced again and swapping them out is a good thing.

 
  Linux Exclude essential processes (Application) from being killed by the out-of-memory (OOM) daemon.

echo "-17" > /proc/<pid>/oom_adj

Default: NA

Robustness

See: http://linux-mm.org/OOM_Killer and http://lwn.net/Articles/317814/

 

Note: With any configuration other than overcommit_memory=2 and overcommit_ratio=0, the Linux Virtual Memory Management can commit more memory than is available. If that memory must then be provided, Linux engages the out-of-memory kill daemon to kill processes based on "badness". In order to exclude essential processes from being killed, one can set their oom_adj to -17.

 
  Linux 32bit Low memory area –  32bit Linux only

# cat /proc/sys/vm/lower_zone_protection
# echo "250" > /proc/sys/vm/lower_zone_protection

(NOT APPLICABLE)
To set this option on boot, add the following to /etc/sysctl.conf:
vm.lower_zone_protection = 250

 

See: http://linux.derkeiler.com/Mailing-Lists/RedHat/2007-08/msg00062.html

 
       
Linux CPU Tuning (Processor Binding & Priorities)
  This level of tuning is seldom required for Any Application solution. The tuning options are mentioned in case there is a need to go an extra mile. 
  Linux IRQ-Binding

Recommendation: Leave default
Note: For real-time messaging the binding of interrupts to a certain exclusively used CPU allows reducing jitter and thus improves the system characteristics as needed by ultra-low-latency solutions.

The default on Linux is IRQ balancing across multiple CPUs, and Linux offers two solutions in that area (kernel-based and daemon-based), of which at most one should be enabled.

 
  Linux Process Base Priority
Recommendation: Leave default

 

Note: The process base priority is determined by the user running the process instance and thus running processes as root (chown and set sticky bit) increases the processes base priority.

And a root user can further increase the priority of Application to real-time scheduling which can further improve performance particularly in terms of jitter. However in 2008 we observed that doing so actually decreased the performance of EMS in terms of number of messages per second. That issue was researched with Novell at that time but I am not sure of its outcome.

 
  Linux Foreground and Background Processes

Recommendation: TBD

 

Note: Linux assigns foreground processes a better base priority than background processes, but whether it really matters, and if so how to change the start-up scripts, is to be determined.

 
  Linux Processor Set

Recommendation: Don’t bother

 

Note: Linux allows defining a processor set and limiting a process to only use cores from that processor set. This can be used to increase cache hits and cap the CPU resource for a particular process instance.

 

[1] If larger memory regions are allocated, the malloc() in the Linux glibc library uses mmap() instead of sbrk() to provide the memory pages to the process.

The memory mapped files (mmap()) are better in the way how they release memory back to the OS and thus the high-water-mark effect is avoided for these regions.


How to Delete all files except a Pattern in Unix

Good Morning To All My Tech Folks,

Today I'm going to show you a command to delete all files except those matching a pattern.

You can use it in a script or straight from the command line, and life gets a lot easier!

find . -type f ! -name '<pattern>' -delete

A Live Example

Before

After the following Command

find . -type f ! -name '*.gz' -delete

After
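Tip: if you want to preview what would be deleted before running the destructive version, swap -delete for -print:

find . -type f ! -name '*.gz' -print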


How To Patch and Protect Linux Kernel Stack Clash Vulnerability CVE-2017-1000364 [ 19/June/2017 ]

A very serious security problem, called "The Stack Clash," has been found in the Linux kernel. It can be exploited by attackers to corrupt memory and execute arbitrary code. An attacker could leverage this with another vulnerability to execute arbitrary code and gain administrative/root account privileges. How do I fix this problem on Linux?

The Qualys Research Labs discovered various problems in the dynamic linker of the GNU C Library (CVE-2017-1000366) which allow local privilege escalation by clashing the stack including Linux kernel. This bug affects Linux, OpenBSD, NetBSD, FreeBSD and Solaris, on i386 and amd64. It can be exploited by attackers to corrupt memory and execute arbitrary code.

What is CVE-2017-1000364 bug?

From RHN:

A flaw was found in the way memory was being allocated on the stack for user space binaries. If heap (or different memory region) and stack memory regions were adjacent to each other, an attacker could use this flaw to jump over the stack guard gap, cause controlled memory corruption on process stack or the adjacent memory region, and thus increase their privileges on the system. This is a kernel-side mitigation which increases the stack guard gap size from one page to 1 MiB to make successful exploitation of this issue more difficult.

As per the original research post:

Each program running on a computer uses a special memory region called the stack. This memory region is special because it grows automatically when the program needs more stack memory. But if it grows too much and gets too close to another memory region, the program may confuse the stack with the other memory region. An attacker can exploit this confusion to overwrite the stack with the other memory region, or the other way around.

A list of affected Linux distros

  1. Red Hat Enterprise Linux Server 5.x
  2. Red Hat Enterprise Linux Server 6.x
  3. Red Hat Enterprise Linux Server 7.x
  4. CentOS Linux Server 5.x
  5. CentOS Linux Server 6.x
  6. CentOS Linux Server 7.x
  7. Oracle Enterprise Linux Server 5.x
  8. Oracle Enterprise Linux Server 6.x
  9. Oracle Enterprise Linux Server 7.x
  10. Ubuntu 17.10
  11. Ubuntu 17.04
  12. Ubuntu 16.10
  13. Ubuntu 16.04 LTS
  14. Ubuntu 12.04 ESM (Precise Pangolin)
  15. Debian 9 stretch
  16. Debian 8 jessie
  17. Debian 7 wheezy
  18. Debian unstable
  19. SUSE Linux Enterprise Desktop 12 SP2
  20. SUSE Linux Enterprise High Availability 12 SP2
  21. SUSE Linux Enterprise Live Patching 12
  22. SUSE Linux Enterprise Module for Public Cloud 12
  23. SUSE Linux Enterprise Build System Kit 12 SP2
  24. SUSE Openstack Cloud Magnum Orchestration 7
  25. SUSE Linux Enterprise Server 11 SP3-LTSS
  26. SUSE Linux Enterprise Server 11 SP4
  27. SUSE Linux Enterprise Server 12 SP1-LTSS
  28. SUSE Linux Enterprise Server 12 SP2
  29. SUSE Linux Enterprise Server for Raspberry Pi 12 SP2

Do I need to reboot my box?

Yes, as most services depend upon the dynamic linker of the GNU C Library, and the kernel itself needs to be reloaded in memory.

How do I fix CVE-2017-1000364 on Linux?

Type the commands as per your Linux distro. You need to reboot the box. Before you apply the patch, note down your current kernel version:
$ uname -a
$ uname -mrs

Sample outputs:

Linux 4.4.0-78-generic x86_64

Debian or Ubuntu Linux

Type the following apt command/apt-get command to apply updates:
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
Sample outputs:

Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  libc-bin libc-dev-bin libc-l10n libc6 libc6-dev libc6-i386 linux-compiler-gcc-6-x86 linux-headers-4.9.0-3-amd64 linux-headers-4.9.0-3-common linux-image-4.9.0-3-amd64
  linux-kbuild-4.9 linux-libc-dev locales multiarch-support
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/62.0 MB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc6-i386_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-i386 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc6-dev_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6-dev:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../libc-dev-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-dev-bin (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../linux-libc-dev_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-libc-dev:amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../libc6_2.24-11+deb9u1_amd64.deb ...
Unpacking libc6:amd64 (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc6:amd64 (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../libc-bin_2.24-11+deb9u1_amd64.deb ...
Unpacking libc-bin (2.24-11+deb9u1) over (2.24-11) ...
Setting up libc-bin (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../multiarch-support_2.24-11+deb9u1_amd64.deb ...
Unpacking multiarch-support (2.24-11+deb9u1) over (2.24-11) ...
Setting up multiarch-support (2.24-11+deb9u1) ...
(Reading database ... 115123 files and directories currently installed.)
Preparing to unpack .../0-libc-l10n_2.24-11+deb9u1_all.deb ...
Unpacking libc-l10n (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../1-locales_2.24-11+deb9u1_all.deb ...
Unpacking locales (2.24-11+deb9u1) over (2.24-11) ...
Preparing to unpack .../2-linux-compiler-gcc-6-x86_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../3-linux-headers-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../4-linux-headers-4.9.0-3-common_4.9.30-2+deb9u1_all.deb ...
Unpacking linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../5-linux-kbuild-4.9_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Preparing to unpack .../6-linux-image-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ...
Unpacking linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ...
Setting up linux-libc-dev:amd64 (4.9.30-2+deb9u1) ...
Setting up linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) ...
Setting up libc6-i386 (2.24-11+deb9u1) ...
Setting up linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) ...
Setting up linux-kbuild-4.9 (4.9.30-2+deb9u1) ...
Setting up libc-l10n (2.24-11+deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up libc-dev-bin (2.24-11+deb9u1) ...
Setting up linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-4.9.0-3-amd64
cryptsetup: WARNING: failed to detect canonical device of /dev/md0
cryptsetup: WARNING: could not determine root device from /etc/fstab
W: initramfs-tools configuration sets RESUME=UUID=054b217a-306b-4c18-b0bf-0ed85af6c6e1
W: but no matching swap device is available.
I: The initramfs will attempt to resume from /dev/md1p1
I: (UUID=bf72f3d4-3be4-4f68-8aae-4edfe5431670)
I: Set the RESUME variable to override this.
/etc/kernel/postinst.d/zz-update-grub:
Searching for GRUB installation directory ... found: /boot/grub
Searching for default file ... found: /boot/grub/default
Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
Searching for splash image ... none found, skipping ...
Found kernel: /boot/vmlinuz-4.9.0-3-amd64
Found kernel: /boot/vmlinuz-3.16.0-4-amd64
Updating /boot/grub/menu.lst ... done

Setting up libc6-dev:amd64 (2.24-11+deb9u1) ...
Setting up locales (2.24-11+deb9u1) ...
Generating locales (this might take a while)...
  en_IN.UTF-8... done
Generation complete.
Setting up linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) ...
Processing triggers for libc-bin (2.24-11+deb9u1) ...

Reboot your server/desktop using reboot command:
$ sudo reboot

Oracle/RHEL/CentOS/Scientific Linux

Type the following yum command:
$ sudo yum update
$ sudo reboot

Fedora Linux

Type the following dnf command:
$ sudo dnf update
$ sudo reboot

Suse Enterprise Linux or Opensuse Linux

Type the following zypper command:
$ sudo zypper patch
$ sudo reboot

SUSE OpenStack Cloud 6

$ sudo zypper in -t patch SUSE-OpenStack-Cloud-6-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server for SAP 12-SP1

$ sudo zypper in -t patch SUSE-SLE-SAP-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Server 12-SP1-LTSS

$ sudo zypper in -t patch SUSE-SLE-SERVER-12-SP1-2017-996=1
$ sudo reboot

SUSE Linux Enterprise Module for Public Cloud 12

$ sudo zypper in -t patch SUSE-SLE-Module-Public-Cloud-12-2017-996=1
$ sudo reboot

Verification

You need to make sure your kernel version number changed after issuing the reboot command:
$ uname -a
$ uname -r
$ uname -mrs

Sample outputs:

Linux 4.4.0-81-generic x86_64

Linux security alert: Bug in sudo’s get_process_ttyname() [ CVE-2017-1000367 ]

There is a serious vulnerability in the sudo command that grants root access to anyone with a shell account. It works on SELinux-enabled systems such as CentOS/RHEL and others too. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that Sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this to overwrite any file on the filesystem, bypassing intended permissions or gain root shell.
From the description

We discovered a vulnerability in Sudo’s get_process_ttyname() for Linux:
this function opens “/proc/[pid]/stat” (man proc) and reads the device number of the tty from field 7 (tty_nr). Unfortunately, these fields are space-separated and field 2 (comm, the filename of the command) can
contain spaces (CVE-2017-1000367).

For example, if we execute Sudo through the symlink “./ 1 “, get_process_ttyname() calls sudo_ttyname_dev() to search for the non-existent tty device number “1” in the built-in search_devs[].

Next, sudo_ttyname_dev() calls the function sudo_ttyname_scan() to search for this non-existent tty device number “1” in a breadth-first traversal of “/dev”.

Last, we exploit this function during its traversal of the world-writable “/dev/shm”: through this vulnerability, a local user can pretend that his tty is any character device on the filesystem, and
after two race conditions, he can pretend that his tty is any file on the filesystem.

On an SELinux-enabled system, if a user is Sudoer for a command that does not grant him full root privileges, he can overwrite any file on the filesystem (including root-owned files) with his command’s output,
because relabel_tty() (in src/selinux.c) calls open(O_RDWR|O_NONBLOCK) on his tty and dup2()s it to the command’s stdin, stdout, and stderr. This allows any Sudoer user to obtain full root privileges.

A list of affected Linux distro

  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

How do I patch sudo on Debian/Ubuntu Linux server?

To patch Ubuntu/Debian Linux, run the apt-get command or apt command:
$ sudo apt update
$ sudo apt upgrade

How do I patch sudo on CentOS/RHEL/Scientific/Oracle Linux server?

Run yum command:
$ sudo yum update

How do I patch sudo on Fedora Linux server?

Run dnf command:
$ sudo dnf update

How do I patch sudo on Suse/OpenSUSE Linux server?

Run zypper command:
$ sudo zypper update

How do I patch sudo on Arch Linux server?

Run pacman command:
$ sudo pacman -Syu

How do I patch sudo on Alpine Linux server?

Run apk command:
# apk update && apk upgrade

How do I patch sudo on Slackware Linux server?

Run upgradepkg command:
# upgradepkg sudo-1.8.20p1-i586-1_slack14.2.txz

How do I patch sudo on Gentoo Linux server?

Run emerge command:
# emerge --sync
# emerge --ask --oneshot --verbose ">=app-admin/sudo-1.8.20_p1"


Impermanence in Linux – Exclusive (By Hari Iyer)

Impermanence, also called Anicca or Anitya, is one of the essential doctrines and a part of the three marks of existence in Buddhism. The doctrine asserts that all of conditioned existence, without exception, is "transient, evanescent, inconstant".

On Linux, the root of all randomness is something called the kernel entropy pool. This is a large (4,096 bit) number kept privately in the kernel's memory. There are 2^4096 possibilities for this number, so it can contain up to 4,096 bits of entropy. There is one caveat – the kernel needs to be able to fill that memory from a source with 4,096 bits of entropy. And that's the hard part: finding that much randomness.

The entropy pool is used in two ways: random numbers are generated from it and it is replenished with entropy by the kernel. When random numbers are generated from the pool the entropy of the pool is diminished (because the person receiving the random number has some information about the pool itself). So as the pool’s entropy diminishes as random numbers are handed out, the pool must be replenished.

Replenishing the pool is called stirring: new sources of entropy are stirred into the mix of bits in the pool.

This is the key to how random number generation works on Linux. If randomness is needed, it’s derived from the entropy pool. When available, other sources of randomness are used to stir the entropy pool and make it less predictable. The details are a little mathematical, but it’s interesting to understand how the Linux random number generator works as the principles and techniques apply to random number generation in other software and systems.

The kernel keeps a rough estimate of the number of bits of entropy in the pool. You can check the value of this estimate through the following command:

cat /proc/sys/kernel/random/entropy_avail

A healthy Linux system with a lot of entropy available will return a value close to the full 4,096 bits. If the value returned is less than 200, the system is running low on entropy.
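If you want to keep an eye on the pool over time, you can watch the estimate refresh every second (a simple example using the watch utility):

$ watch -n 1 cat /proc/sys/kernel/random/entropy_avail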

The kernel is watching you

I mentioned that the system takes other sources of randomness and uses this to stir the entropy pool. This is achieved using something called a timestamp.

Most systems have precise internal clocks. Every time that a user interacts with a system, the value of the clock at that time is recorded as a timestamp. Even though the year, month, day and hour are generally guessable, the millisecond and microsecond are not, and therefore the timestamp contains some entropy. Timestamps obtained from the user's mouse and keyboard, along with timing information from the network and disk, each have a different amount of entropy.

How does the entropy found in a timestamp get transferred to the entropy pool? Simple, use math to mix it in. Well, simple if you like math.

Just mix it in

A fundamental property of entropy is that it mixes well. If you take two unrelated random streams and combine them, the new stream cannot have less entropy. Taking a number of low entropy sources and combining them results in a high entropy source.

All that’s needed is the right combination function: a function that can be used to combine two sources of entropy. One of the simplest such functions is the logical exclusive or (XOR). This truth table shows how bits x and y coming from different random streams are combined by the XOR function.

Even if one source of bits does not have much entropy, there is no harm in XORing it into another source. Entropy always increases. In the Linux kernel, a combination of XORs is used to mix timestamps into the main entropy pool.

Generating random numbers

Cryptographic applications require very high entropy. If a 128-bit key is generated with only 64 bits of entropy then it can be guessed in 2^64 attempts instead of 2^128 attempts. That is the difference between needing a thousand computers running for a few years to brute force the key versus needing all the computers ever created running for longer than the history of the universe to do so.

Cryptographic applications require close to one bit of entropy per bit. If the system’s pool has fewer than 4,096 bits of entropy, how does the system return a fully random number? One way to do this is to use a cryptographic hash function.

A cryptographic hash function takes an input of any size and outputs a fixed size number. Changing one bit of the input will change the output completely. Hash functions are good at mixing things together. This mixing property spreads the entropy from the input evenly through the output. If the input has more bits of entropy than the size of the output, the output will be highly random. This is how highly entropic random numbers are derived from the entropy pool.

The hash function used by the Linux kernel is the standard SHA-1 cryptographic hash. By hashing the entire pool and performing some additional arithmetic, 160 random bits are created for use by the system. When this happens, the system lowers its estimate of the entropy in the pool accordingly.

Above I said that applying a hash like SHA-1 could be dangerous if there wasn't enough entropy in the pool. That's why it's critical to keep an eye on the available system entropy: if it drops too low, the output of the random number generator could have less entropy than it appears to have.

Running out of entropy

One of the dangers for a system is running out of entropy. When the system's entropy estimate drops to around the 160-bit level, the length of a SHA-1 hash, things get tricky, and how this affects programs and performance depends on which of the two Linux random number generators is used.

Linux exposes two interfaces for random data that behave differently when the entropy level is low. They are /dev/random and /dev/urandom. When the entropy pool becomes predictable, both interfaces for requesting random numbers become problematic.

When the entropy level is too low, /dev/random blocks and does not return until the level of entropy in the system is high enough. This guarantees high entropy random numbers. If /dev/random is used in a time-critical service and the system runs low on entropy, the delays could be detrimental to the quality of service.

On the other hand, /dev/urandom does not block. It continues to return the hashed value of its entropy pool even though there is little to no entropy in it. This low-entropy data is not suited for cryptographic use.

The solution to the problem is to simply add more entropy into the system.

Hardware random number generation to the rescue?

Intel’s Ivy Bridge family of processors have an interesting feature called “secure key.” These processors contain a special piece of hardware inside that generates random numbers. The single assembly instruction RDRAND returns allegedly high entropy random data derived on the chip.

It has been suggested that Intel’s hardware number generator may not be fully random. Since it is baked into the silicon, that assertion is hard to audit and verify. As it turns out, even if the numbers generated have some bias, it can still help as long as this is not the only source of randomness in the system. Even if the random number generator itself had a back door, the mixing property of randomness means that it cannot lower the amount of entropy in the pool.

On Linux, if a hardware random number generator is present, the Linux kernel will use the XOR function to mix the output of RDRAND into the hash of the entropy pool. This happens here in the Linux source code (the XOR operator is ^ in C).

Third party entropy generators

Hardware number generation is not available everywhere, and the sources of randomness polled by the Linux kernel itself are somewhat limited. For this situation, a number of third party random number generation tools exist. Examples of these are haveged, which relies on processor cache timing, audio-entropyd and video-entropyd which work by sampling the noise from an external audio or video input device. By mixing these additional sources of locally collected entropy into the Linux entropy pool, the entropy can only go up.
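For example, on a Red Hat style system you could install and enable haveged roughly like this (assuming the package is available in your configured repositories, e.g. EPEL):

$ sudo yum install haveged
$ sudo systemctl enable haveged
$ sudo systemctl start haveged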


/etc/security/limits.conf file – In A Nutshell

The /etc/security/limits.conf file contains a list of lines, where each line describes a limit for a user in the form:

<Domain> <type> <item> <shell limit value>

Where:

  • <domain> can be:
    • a user name
    • a group name, with @group syntax
    • the wildcard *, for default entry
    • the wildcard %, can be also used with %group syntax, for maxlogin limit
  • <type> can have the two values:
    • "soft" for enforcing the soft limits (a soft limit is like a warning)
    • "hard" for enforcing hard limits (a hard limit is the real maximum)
  • <item> can be one of the following:
    • core – limits the core file size (KB)
    • data – max data size (KB)
    • fsize – maximum file size (KB)
    • memlock – max locked-in-memory address space (KB)
    • nofile – maximum number of open file descriptors
    • rss – max resident set size (KB)
    • stack – max stack size (KB) – maximum size of the stack segment of the process
    • cpu – max CPU time (MIN)
    • nproc – maximum number of processes available to a single user
    • as – address space limit
    • maxlogins – max number of logins for this user
    • maxsyslogins – max number of logins on the system
    • priority – the priority to run the user process with
    • locks – max number of file locks the user can hold
    • sigpending – max number of pending signals
    • msgqueue – max memory used by POSIX message queues (bytes)
    • nice – max nice priority allowed to raise to
    • rtprio – max realtime priority
    • chroot – change root to directory (Debian-specific)
  • <shell limit value> is the numeric value to apply to the chosen item.
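For example, a few illustrative entries (the user and group names here are placeholders):

hari soft nofile 4096
hari hard nofile 10240
@developers hard nproc 100
* soft core 0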

 

  • Sigpending – examine pending signals.

sigpending() returns the set of signals that are pending for delivery to the calling thread (i.e., the signals which have been raised while blocked). The mask of pending signals is returned in set.

sigpending() returns 0 on success and -1 on error

 

credits :- Sagar Salunkhe


Linux KVM: Disable virbr0 NAT Interface

The virtual network (virbr0) is used for network address translation (NAT), which allows guests to access network services. However, NAT slows things down and is only recommended for desktop installations. To disable NAT forwarding, type the following commands:

Display Current Setup

Type the following command:
# ifconfig
Sample outputs:

virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:7921 (7.7 KiB)

Or use the following command:
# virsh net-list
Sample outputs:

Name                 State      Autostart
-----------------------------------------
default              active     yes       

To disable virbr0, enter:
# virsh net-destroy default
# virsh net-undefine default
# service libvirtd restart
# ifconfig


TIBCO Administrator – Error (Core Dump Error)

Sometimes the administrator process on a UNIX platform stops intermittently, and in the following file,

$TIBCO_HOME/administrator/<version>/tomcat/hs_err_pid<pid_of_admin>.log

you will see a core dump error something like this:

#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007efcdb723df8, pid=12496, tid=139624169486080
#
# JRE version: Java(TM) SE Runtime Environment (8.0_51-b16) (build 1.8.0_51-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.51-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try “ulimit -c unlimited” before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#

————— T H R E A D —————

Current thread (0x00000000023a8800): JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000

Registers:
RAX=0x0000000000000000, RBX=0x00007efc901723e0, RCX=0x00007efc9016e890, RDX=0x0000000000000041
RSP=0x00007efcc905f650, RBP=0x00007efcc905f6c0, RSI=0x00007efcc9060f50, RDI=0x00007efc90b937a0
R8 =0x000000000000009a, R9 =0x0000000000000003, R10=0x0000000000000003, R11=0x0000000000000000
R12=0x0000000000000004, R13=0x0000000000000000, R14=0x0000000000000002, R15=0x00007efc90b937a0
RIP=0x00007efcdb723df8, EFLAGS=0x0000000000010246, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
TRAPNO=0x000000000000000e

Top of Stack: (sp=0x00007efcc905f650)
0x00007efcc905f650: 01007efcc905f6c0 00007efcc9060f50
0x00007efcc905f660: 0000003dc905f870 00007efc9012d900
0x00007efcc905f670: 0000000100000002 ffffffff00000002
0x00007efcc905f680: 00007efc98009f40 00007efc90171d30
0x00007efcc905f690: 0000023ac9061038 00007efcc9060f50
0x00007efcc905f6a0: 0000000000000222 0000000000000090
0x00007efcc905f6b0: 00007efcc9061038 0000000000000222
0x00007efcc905f6c0: 00007efcc905f930 00007efcdb72705a
0x00007efcc905f6d0: 00007efcc905f750 00007efcc905f870
0x00007efcc905f6e0: 00007efcc905f830 00007efcc905f710
0x00007efcc905f6f0: 00007efcc905f7d0 00007efcc905f8a0
0x00007efcc905f700: 00007efcc9060f50 0000001200000117
0x00007efcc905f710: 00007efcdc2544b0 00007efc0000000c
0x00007efcc905f720: 00007efcc9061dd0 00007efcc9060f50
0x00007efcc905f730: 0000000000000807 00007efc9044a2e0
0x00007efcc905f740: 00007efc9012c610 0000000000000002
0x00007efcc905f750: 00007efcc905f820 00007efcdbae2ea3
0x00007efcc905f760: 000007c000000010 00007efcc90610d8
0x00007efcc905f770: 0000000000000028 ffffffe80000000e
0x00007efcc905f780: 00007efc9076dd60 00007efcc9061080
0x00007efcc905f790: 0000001100001f00 0000001a00000011
0x00007efcc905f7a0: 0000000100000001 00007efc903a1c78
0x00007efcc905f7b0: 00007efc908213e0 00007efcdbd7cd46
0x00007efcc905f7c0: 0000000000000008 00007efcdbd7cc97
0x00007efcc905f7d0: 00007efc00000009 00007efcc9061dd0
0x00007efcc905f7e0: 00007efc909059f0 00007efc900537a0
0x00007efcc905f7f0: 00007efc9040b7c0 00007efc9040c130
0x00007efcc905f800: 00007efc9081d280 00007efcc9061080
0x00007efcc905f810: 00007efcc9061060 0000000000000222
0x00007efcc905f820: 00007efcc905f870 00007efcdb8f8831
0x00007efcc905f830: 00007efc0000000b 00007efcc9061dd0
0x00007efcc905f840: 00007efc906cbd70 00007efcc9060f00

Instructions: (pc=0x00007efcdb723df8)
0x00007efcdb723dd8: 18 00 48 c7 c0 ff ff ff ff 4c 89 ff 49 0f 44 c7
0x00007efcdb723de8: 48 89 43 18 49 8b 07 ff 90 80 00 00 00 49 89 c5
0x00007efcdb723df8: 8b 00 21 43 38 41 8b 45 04 21 43 3c 4c 89 ff 41
0x00007efcdb723e08: 8b 45 08 21 43 40 41 8b 45 0c 21 43 44 41 8b 45

Register to memory mapping:

RAX=0x0000000000000000 is an unknown value
RBX=0x00007efc901723e0 is an unknown value
RCX=0x00007efc9016e890 is an unknown value
RDX=0x0000000000000041 is an unknown value
RSP=0x00007efcc905f650 is pointing into the stack for thread: 0x00000000023a8800
RBP=0x00007efcc905f6c0 is pointing into the stack for thread: 0x00000000023a8800
RSI=0x00007efcc9060f50 is pointing into the stack for thread: 0x00000000023a8800
RDI=0x00007efc90b937a0 is an unknown value
R8 =0x000000000000009a is an unknown value
R9 =0x0000000000000003 is an unknown value
R10=0x0000000000000003 is an unknown value
R11=0x0000000000000000 is an unknown value
R12=0x0000000000000004 is an unknown value
R13=0x0000000000000000 is an unknown value
R14=0x0000000000000002 is an unknown value
R15=0x00007efc90b937a0 is an unknown value
Stack: [0x00007efcc8f63000,0x00007efcc9064000], sp=0x00007efcc905f650, free space=1009k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x404df8] PhaseChaitin::gather_lrg_masks(bool)+0x208
V [libjvm.so+0x40805a] PhaseChaitin::Register_Allocate()+0x71a
V [libjvm.so+0x49abe0] Compile::Code_Gen()+0x260
V [libjvm.so+0x49e032] Compile::Compile(ciEnv*, C2Compiler*, ciMethod*, int, bool, bool, bool)+0x14b2
V [libjvm.so+0x3ebeb8] C2Compiler::compile_method(ciEnv*, ciMethod*, int)+0x198
V [libjvm.so+0x4a843a] CompileBroker::invoke_compiler_on_method(CompileTask*)+0xc9a
V [libjvm.so+0x4a93e6] CompileBroker::compiler_thread_loop()+0x5d6
V [libjvm.so+0xa5cbcf] JavaThread::thread_main_inner()+0xdf
V [libjvm.so+0xa5ccfc] JavaThread::run()+0x11c
V [libjvm.so+0x911048] java_start(Thread*)+0x108
C [libpthread.so.0+0x7aa1]
Current CompileTask:
C2:77124624 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)
————— P R O C E S S —————

Java Threads: ( => current thread )
0x00007efcb8024000 JavaThread “Thread-41” daemon [_thread_blocked, id=13990, stack(0x00007efc372f5000,0x00007efc373f6000)]
0x00007efcac372000 JavaThread “http-bio-8989-exec-68” daemon [_thread_blocked, id=13853, stack(0x00007efc3ca41000,0x00007efc3cb42000)]
0x00007efc4002d000 JavaThread “http-bio-8989-exec-67” daemon [_thread_blocked, id=13837, stack(0x00007efc376f9000,0x00007efc377fa000)]
0x00007efc945b6800 JavaThread “http-bio-8989-exec-66” daemon [_thread_blocked, id=13828, stack(0x00007efc37cfd000,0x00007efc37dfe000)]
0x00007efc40028000 JavaThread “http-bio-8989-exec-65” daemon [_thread_blocked, id=13228, stack(0x00007efc374f7000,0x00007efc375f8000)]
0x00007efc993ba800 JavaThread “http-bio-8989-exec-64” daemon [_thread_blocked, id=13227, stack(0x00007efc3ce45000,0x00007efc3cf46000)]
0x00007efcc4012000 JavaThread “http-bio-8989-exec-63” daemon [_thread_blocked, id=13218, stack(0x00007efc3c63f000,0x00007efc3c740000)]
0x00007efc50006000 JavaThread “http-bio-8989-exec-62” daemon [_thread_blocked, id=13217, stack(0x00007efc373f6000,0x00007efc374f7000)]
0x00007efca800c000 JavaThread “http-bio-8989-exec-61” daemon [_thread_blocked, id=13216, stack(0x00007efc3c13a000,0x00007efc3c23b000)]
0x00007efc68004000 JavaThread “http-bio-8989-exec-60” daemon [_thread_blocked, id=13215, stack(0x00007efc3d34a000,0x00007efc3d44b000)]
0x00007efcb0006800 JavaThread “http-bio-8989-exec-59” daemon [_thread_blocked, id=13214, stack(0x00007efc3d54c000,0x00007efc3d64d000)]
0x00007efca8044800 JavaThread “http-bio-8989-exec-58” daemon [_thread_blocked, id=13213, stack(0x00007efc375f8000,0x00007efc376f9000)]
0x00007efc902c5800 JavaThread “http-bio-8989-exec-57” daemon [_thread_blocked, id=13212, stack(0x00007efc36ef1000,0x00007efc36ff2000)]
0x00007efcb4010800 JavaThread “http-bio-8989-exec-56” daemon [_thread_blocked, id=13211, stack(0x00007efc3d148000,0x00007efc3d249000)]
0x00007efc4408c800 JavaThread “http-bio-8989-exec-55” daemon [_thread_blocked, id=13210, stack(0x00007efc3e053000,0x00007efc3e154000)]
0x00007efcb4036800 JavaThread “http-bio-8989-exec-54” daemon [_thread_blocked, id=13201, stack(0x00007efc371f4000,0x00007efc372f5000)]
0x00007efcb0018800 JavaThread “http-bio-8989-exec-53” daemon [_thread_blocked, id=13200, stack(0x00007efc3c23b000,0x00007efc3c33c000)]
0x00007efc6c1e1000 JavaThread “http-bio-8989-exec-52” daemon [_thread_blocked, id=13199, stack(0x00007efc3c43d000,0x00007efc3c53e000)]
0x00007efc58005000 JavaThread “http-bio-8989-exec-51” daemon [_thread_blocked, id=13198, stack(0x00007efc3c039000,0x00007efc3c13a000)]
0x00007efc74006800 JavaThread “http-bio-8989-exec-50” daemon [_thread_blocked, id=13197, stack(0x00007efc9da1e000,0x00007efc9db1f000)]
0x00007efc54005800 JavaThread “AMI Worker 2” daemon [_thread_blocked, id=13120, stack(0x00007efcc2253000,0x00007efcc2354000)]
0x00007efc54003800 JavaThread “AMI Worker 1” daemon [_thread_blocked, id=13119, stack(0x00007efc36df0000,0x00007efc36ef1000)]
0x00007efcbc02f800 JavaThread “http-bio-8989-exec-49” daemon [_thread_blocked, id=13091, stack(0x00007efc37afb000,0x00007efc37bfc000)]
0x00007efc80083000 JavaThread “http-bio-8989-exec-48” daemon [_thread_blocked, id=13090, stack(0x00007efc3cd44000,0x00007efc3ce45000)]
0x00007efc4c01b000 JavaThread “http-bio-8989-exec-47” daemon [_thread_blocked, id=13089, stack(0x00007efc3d44b000,0x00007efc3d54c000)]
0x00007efca403a800 JavaThread “http-bio-8989-exec-46” daemon [_thread_blocked, id=13088, stack(0x00007efc36cef000,0x00007efc36df0000)]
0x00007efcc4023000 JavaThread “http-bio-8989-exec-45” daemon [_thread_blocked, id=13087, stack(0x00007efc3d249000,0x00007efc3d34a000)]
0x00007efc8468a000 JavaThread “http-bio-8989-exec-44” daemon [_thread_blocked, id=13086, stack(0x00007efc3d64d000,0x00007efc3d74e000)]
0x000000000252f800 JavaThread “http-bio-8989-AsyncTimeout” daemon [_thread_blocked, id=13032, stack(0x00007efc3d74e000,0x00007efc3d84f000)]
0x000000000252e800 JavaThread “http-bio-8989-Acceptor-0” daemon [_thread_in_native, id=13031, stack(0x00007efc3d84f000,0x00007efc3d950000)]
0x000000000252d000 JavaThread “ContainerBackgroundProcessor[StandardEngine[Catalina]]” daemon [_thread_blocked, id=13030, stack(0x00007efc3d950000,0x00007efc3da51000)]
0x00007efc44085800 JavaThread “Thread-37” daemon [_thread_blocked, id=13029, stack(0x00007efc3f0f5000,0x00007efc3f1f6000)]
0x00007efc78971000 JavaThread “Thread-36” daemon [_thread_blocked, id=13028, stack(0x00007efc3de51000,0x00007efc3df52000)]
0x00007efc78967000 JavaThread “Thread-33” daemon [_thread_blocked, id=13025, stack(0x00007efc3e154000,0x00007efc3e255000)]
0x00007efc788fe000 JavaThread “Tibrv Dispatcher” daemon [_thread_in_native, id=13024, stack(0x00007efc3e255000,0x00007efc3e356000)]
0x00007efc788fa800 JavaThread “RVAgentManagerTransport dispatch thread” daemon [_thread_in_native, id=13023, stack(0x00007efc3e356000,0x00007efc3e457000)]
0x00007efc788f8000 JavaThread “RacSubManager” daemon [_thread_blocked, id=13022, stack(0x00007efc3e457000,0x00007efc3e558000)]
0x00007efc788d8800 JavaThread “InitialListTimer” daemon [_thread_blocked, id=13021, stack(0x00007efc3e558000,0x00007efc3e659000)]
0x00007efc788d7000 JavaThread “RvHeartBeatTimer” daemon [_thread_blocked, id=13020, stack(0x00007efc3e659000,0x00007efc3e75a000)]
0x00007efc788d4000 JavaThread “AgentAliveMonitor dispatch thread” daemon [_thread_in_native, id=13019, stack(0x00007efc3ebf2000,0x00007efc3ecf3000)]
0x00007efc788aa000 JavaThread “AgentEventMonitor dispatch thread” daemon [_thread_in_native, id=13018, stack(0x00007efc3ecf3000,0x00007efc3edf4000)]
0x00007efc787c2800 JavaThread “Thread-30” daemon [_thread_blocked, id=13017, stack(0x00007efc3ea2b000,0x00007efc3eb2c000)]
0x00007efc4c009800 JavaThread “Thread-29(HawkConfig)” daemon [_thread_blocked, id=13016, stack(0x00007efc3eff4000,0x00007efc3f0f5000)]
0x00007efc4c007800 JavaThread “Thread-28” daemon [_thread_in_native, id=13015, stack(0x00007efc3f1f6000,0x00007efc3f2f7000)]
0x00007efc7827b800 JavaThread “Thread-27” daemon [_thread_blocked, id=13012, stack(0x00007efc3f2f7000,0x00007efc3f3f8000)]
0x00007efc780fb800 JavaThread “Thread-26(HawkConfig)” daemon [_thread_blocked, id=13011, stack(0x00007efc3f5f8000,0x00007efc3f6f9000)]
0x00007efc780f9800 JavaThread “Thread-25” daemon [_thread_in_native, id=13010, stack(0x00007efc3f6f9000,0x00007efc3f7fa000)]
0x00007efc78060000 JavaThread “Thread-24(MonitoringManagement)” daemon [_thread_blocked, id=13009, stack(0x00007efc3f7fa000,0x00007efc3f8fb000)]
0x00007efc7805e000 JavaThread “Thread-23” daemon [_thread_in_native, id=13008, stack(0x00007efc3f8fb000,0x00007efc3f9fc000)]
0x00007efc78210000 JavaThread “Thread-21(quality)” daemon [_thread_blocked, id=13007, stack(0x00007efc3f9fc000,0x00007efc3fafd000)]
0x00007efc7820f800 JavaThread “Thread-20” daemon [_thread_in_native, id=13006, stack(0x00007efc3fafd000,0x00007efc3fbfe000)]
0x00007efc6c1d6800 JavaThread “Thread-17” daemon [_thread_blocked, id=13003, stack(0x00007efc5d6fb000,0x00007efc5d7fc000)]
0x00007efc6c1ff800 JavaThread “CommitQueue4_0” daemon [_thread_in_native, id=13002, stack(0x00007efc3fbfe000,0x00007efc3fcff000)]
0x00007efc6c1fd000 JavaThread “NormalQueue4_2” daemon [_thread_in_native, id=13001, stack(0x00007efc3feff000,0x00007efc40000000)]
0x00007efc6c1fb000 JavaThread “NormalQueue4_1” daemon [_thread_in_native, id=13000, stack(0x00007efc5c0e9000,0x00007efc5c1ea000)]
0x00007efc6c1f9000 JavaThread “NormalQueue4_0” daemon [_thread_in_native, id=12999, stack(0x00007efc5c1ea000,0x00007efc5c2eb000)]
0x00007efc6c1f5000 JavaThread “CommitQueue3_0” daemon [_thread_in_native, id=12998, stack(0x00007efc5c2eb000,0x00007efc5c3ec000)]
0x00007efc6c1f3000 JavaThread “NormalQueue3_2” daemon [_thread_in_native, id=12997, stack(0x00007efc5c3ec000,0x00007efc5c4ed000)]
0x00007efc6c1f1800 JavaThread “NormalQueue3_1” daemon [_thread_in_native, id=12996, stack(0x00007efc5c4ed000,0x00007efc5c5ee000)]
0x00007efc6c1ef800 JavaThread “NormalQueue3_0” daemon [_thread_in_native, id=12995, stack(0x00007efc5c5ee000,0x00007efc5c6ef000)]
0x00007efc6c1eb800 JavaThread “CommitQueue2_0” daemon [_thread_in_native, id=12994, stack(0x00007efc5c6ef000,0x00007efc5c7f0000)]
0x00007efc6c1ea000 JavaThread “NormalQueue2_2” daemon [_thread_in_native, id=12993, stack(0x00007efc5c7f0000,0x00007efc5c8f1000)]
0x00007efc6c1e9800 JavaThread “NormalQueue2_1” daemon [_thread_in_native, id=12992, stack(0x00007efc5c8f1000,0x00007efc5c9f2000)]
0x00007efc6c1de800 JavaThread “NormalQueue2_0” daemon [_thread_in_native, id=12991, stack(0x00007efc5c9f2000,0x00007efc5caf3000)]
0x00007efc6c124000 JavaThread “Thread-16(AUTH_quality)” daemon [_thread_blocked, id=12989, stack(0x00007efc5ccf3000,0x00007efc5cdf4000)]
0x00007efc6c116800 JavaThread “Thread-15” daemon [_thread_in_native, id=12988, stack(0x00007efc5cdf4000,0x00007efc5cef5000)]
0x00007efc6c101000 JavaThread “Timer-0” daemon [_thread_blocked, id=12987, stack(0x00007efc5cef5000,0x00007efc5cff6000)]
0x00007efc6c0f7800 JavaThread “net.sf.ehcache.CacheManager@5a6a4d5b” daemon [_thread_blocked, id=12986, stack(0x00007efc5cff6000,0x00007efc5d0f7000)]
0x00007efc6c0c6000 JavaThread “CommitQueue1_0” daemon [_thread_in_native, id=12985, stack(0x00007efc5d0f7000,0x00007efc5d1f8000)]
0x00007efc6c0c4000 JavaThread “NormalQueue1_2” daemon [_thread_in_native, id=12984, stack(0x00007efc5d1f8000,0x00007efc5d2f9000)]
0x00007efc6c0c2800 JavaThread “NormalQueue1_1” daemon [_thread_in_native, id=12983, stack(0x00007efc5d2f9000,0x00007efc5d3fa000)]
0x00007efc6c0bd800 JavaThread “NormalQueue1_0” daemon [_thread_in_native, id=12982, stack(0x00007efc5d3fa000,0x00007efc5d4fb000)]
0x00007efc6c09e800 JavaThread “HB-1” daemon [_thread_in_native, id=12980, stack(0x00007efc9c013000,0x00007efc9c114000)]
0x00007efc6c09b800 JavaThread “HB-0” daemon [_thread_in_native, id=12979, stack(0x00007efc9c114000,0x00007efc9c215000)]
0x00007efc6c09a000 JavaThread “SYNC-0” daemon [_thread_in_native, id=12978, stack(0x00007efc9c215000,0x00007efc9c316000)]
0x00007efc6c067800 JavaThread “ImstMgmt” daemon [_thread_in_native, id=12973, stack(0x00007efc9c316000,0x00007efc9c417000)]
0x00007efc6c028800 JavaThread “HawkImplantDisp” daemon [_thread_in_native, id=12972, stack(0x00007efc9c417000,0x00007efc9c518000)]
0x00007efc781a1800 JavaThread “Thread-11” daemon [_thread_blocked, id=12967, stack(0x00007efc9cf19000,0x00007efc9d01a000)]
0x00000000030fe000 JavaThread “GC Daemon” daemon [_thread_blocked, id=12540, stack(0x00007efcc80bc000,0x00007efcc81bd000)]
0x00000000023bf800 JavaThread “Service Thread” daemon [_thread_blocked, id=12510, stack(0x00007efcc8d61000,0x00007efcc8e62000)]
0x00000000023aa800 JavaThread “C1 CompilerThread2” daemon [_thread_blocked, id=12509, stack(0x00007efcc8e62000,0x00007efcc8f63000)]
=>0x00000000023a8800 JavaThread “C2 CompilerThread1” daemon [_thread_in_native, id=12508, stack(0x00007efcc8f63000,0x00007efcc9064000)]
0x00000000023a5800 JavaThread “C2 CompilerThread0” daemon [_thread_blocked, id=12507, stack(0x00007efcc9064000,0x00007efcc9165000)]
0x00000000023a4000 JavaThread “Signal Dispatcher” daemon [_thread_blocked, id=12506, stack(0x00007efcc9165000,0x00007efcc9266000)]
0x000000000236c000 JavaThread “Finalizer” daemon [_thread_blocked, id=12505, stack(0x00007efcc9266000,0x00007efcc9367000)]
0x000000000236a000 JavaThread “Reference Handler” daemon [_thread_blocked, id=12504, stack(0x00007efcc9367000,0x00007efcc9468000)]
0x00000000022f6800 JavaThread “main” [_thread_in_native, id=12496, stack(0x00007ffec1ae5000,0x00007ffec1be5000)]

Other Threads:
0x0000000002364800 VMThread [stack: 0x00007efcc9468000,0x00007efcc9569000] [id=12503]
0x00000000023c2800 WatcherThread [stack: 0x00007efcc8c60000,0x00007efcc8d61000] [id=12511]

VM state:not at safepoint (normal execution)

VM Mutex/Monitor currently owned by a thread: None

Heap:
PSYoungGen total 46080K, used 23811K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 51% used [0x00000000f5580000,0x00000000f6ca0dc0,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

Card table byte_map: [0x00007efccb5c4000,0x00007efccb6c5000] byte_map_base: 0x00007efccaec4000

Marking Bits: (ParMarkBitMap*) 0x00007efcdc2bd660
Begin Bits: [0x00007efcc3000000, 0x00007efcc3800000)
End Bits: [0x00007efcc3800000, 0x00007efcc4000000)

Polling page: 0x00007efcdc323000

CodeCache: size=245760Kb used=27975Kb max_used=28034Kb free=217784Kb
bounds [0x00007efccba85000, 0x00007efccd625000, 0x00007efcdaa85000]
total_blobs=7417 nmethods=6888 adapters=442
compilation: enabled

Compilation events (10 events):
Event: 75538.391 Thread 0x00000000023a5800 8963 4 org.hsqldb.Expression::collectInGroupByExpressions (61 bytes)
Event: 75538.393 Thread 0x00000000023a8800 nmethod 8962 0x00007efcccac8fd0 code [0x00007efcccac9140, 0x00007efcccac92b8]
Event: 75538.394 Thread 0x00000000023a8800 8964 4 org.hsqldb.Expression::isConstant (118 bytes)
Event: 75538.397 Thread 0x00000000023a8800 nmethod 8964 0x00007efcccb62110 code [0x00007efcccb62320, 0x00007efcccb62468]
Event: 75538.398 Thread 0x00000000023a5800 nmethod 8963 0x00007efccbf127d0 code [0x00007efccbf12ae0, 0x00007efccbf12e70]
Event: 76104.554 Thread 0x00000000023a8800 8965 4 com.tibco.tibrv.TibrvMsg::writeBool (164 bytes)
Event: 76104.565 Thread 0x00000000023a8800 nmethod 8965 0x00007efccca4e190 code [0x00007efccca4e420, 0x00007efccca4e7f0]
Event: 76857.543 Thread 0x00000000023aa800 8966 1 com.tibco.uac.monitor.server.MonitorServer::access$000 (4 bytes)
Event: 76857.544 Thread 0x00000000023aa800 nmethod 8966 0x00007efccc3c07d0 code [0x00007efccc3c0920, 0x00007efccc3c0a10]
Event: 77124.580 Thread 0x00000000023a8800 8967 ! 4 com.tibco.repo.RVRepoProcessBridge::handleServerHeartbeat (1075 bytes)

GC Heap History (10 events):
Event: 70999.039 GC heap before
{Heap before GC invocations=77 (full 8):
PSYoungGen total 50688K, used 48256K [0x00000000f5580000, 0x00000000f8a80000, 0x0000000100000000)
eden space 48128K, 100% used [0x00000000f5580000,0x00000000f8480000,0x00000000f8480000)
from space 2560K, 5% used [0x00000000f8800000,0x00000000f8820000,0x00000000f8a80000)
to space 3072K, 0% used [0x00000000f8480000,0x00000000f8480000,0x00000000f8780000)
ParOldGen total 122368K, used 45526K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c75800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 70999.045 GC heap after
Heap after GC invocations=77 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 72379.077 GC heap before
{Heap before GC invocations=78 (full 8):
PSYoungGen total 48128K, used 47776K [0x00000000f5580000, 0x00000000f8a00000, 0x0000000100000000)
eden space 47616K, 100% used [0x00000000f5580000,0x00000000f8400000,0x00000000f8400000)
from space 512K, 31% used [0x00000000f8480000,0x00000000f84a8000,0x00000000f8500000)
to space 3072K, 0% used [0x00000000f8700000,0x00000000f8700000,0x00000000f8a00000)
ParOldGen total 122368K, used 45550K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7b800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 72379.082 GC heap after
Heap after GC invocations=78 (full 8):
PSYoungGen total 48640K, used 128K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47577K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 73744.273 GC heap before
{Heap before GC invocations=79 (full 8):
PSYoungGen total 48640K, used 47232K [0x00000000f5580000, 0x00000000f8880000, 0x0000000100000000)
eden space 47104K, 100% used [0x00000000f5580000,0x00000000f8380000,0x00000000f8380000)
from space 1536K, 8% used [0x00000000f8700000,0x00000000f8720000,0x00000000f8880000)
to space 2560K, 0% used [0x00000000f8380000,0x00000000f8380000,0x00000000f8600000)
ParOldGen total 122368K, used 45566K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c7f800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 73744.279 GC heap after
Heap after GC invocations=79 (full 8):
PSYoungGen total 47104K, used 96K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 75098.826 GC heap before
{Heap before GC invocations=80 (full 8):
PSYoungGen total 47104K, used 46688K [0x00000000f5580000, 0x00000000f8800000, 0x0000000100000000)
eden space 46592K, 100% used [0x00000000f5580000,0x00000000f8300000,0x00000000f8300000)
from space 512K, 18% used [0x00000000f8380000,0x00000000f8398000,0x00000000f8400000)
to space 2560K, 0% used [0x00000000f8580000,0x00000000f8580000,0x00000000f8800000)
ParOldGen total 122368K, used 45582K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c83800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 75098.831 GC heap after
Heap after GC invocations=80 (full 8):
PSYoungGen total 48128K, used 160K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
}
Event: 76440.356 GC heap before
{Heap before GC invocations=81 (full 8):
PSYoungGen total 48128K, used 46240K [0x00000000f5580000, 0x00000000f8780000, 0x0000000100000000)
eden space 46080K, 100% used [0x00000000f5580000,0x00000000f8280000,0x00000000f8280000)
from space 2048K, 7% used [0x00000000f8580000,0x00000000f85a8000,0x00000000f8780000)
to space 2560K, 0% used [0x00000000f8280000,0x00000000f8280000,0x00000000f8500000)
ParOldGen total 122368K, used 45590K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c85800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K
Event: 76440.360 GC heap after
Heap after GC invocations=81 (full 8):
PSYoungGen total 46080K, used 128K [0x00000000f5580000, 0x00000000f8600000, 0x0000000100000000)
eden space 45568K, 0% used [0x00000000f5580000,0x00000000f5580000,0x00000000f8200000)
from space 512K, 25% used [0x00000000f8280000,0x00000000f82a0000,0x00000000f8300000)
to space 2048K, 0% used [0x00000000f8400000,0x00000000f8400000,0x00000000f8600000)
ParOldGen total 122368K, used 45598K [0x00000000e0000000, 0x00000000e7780000, 0x00000000f5580000)
object space 122368K, 37% used [0x00000000e0000000,0x00000000e2c87800,0x00000000e7780000)
Metaspace used 47578K, capacity 50064K, committed 50432K, reserved 1093632K
class space used 5434K, capacity 5762K, committed 5888K, reserved 1048576K

In such cases, the following are the checks we need to do:

  • Check the ulimit
    • It is supposed to be unlimited
  • Check the limit on the number of open files
    • You can use the following command to count the open files held by the user
    • lsof | grep <user>|grep -v grep|wc -l

    • Then further check limits.conf for the values set from your end for the same, e.g.:
    • <user> soft nofile 350000
      <user> hard nofile 350000
      <user> soft nproc 65536
      <user> hard nproc 65536
      <user> soft stack 10240
      <user> hard stack 10240
      <user> soft sigpending 1548380
      <user> hard sigpending 1548380
  • Hopefully the problem will be resolved; a couple of quick commands to cross-check the limits are shown below.
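A minimal way to cross-check the limits against actual usage, assuming you can switch to the affected user (<user> is a placeholder, as above):

$ su - <user>
$ ulimit -u                  # max user processes
$ ulimit -n                  # max open files
$ lsof -u <user> | wc -l     # rough count of files currently open for the user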

Happy Troubleshooting Guys 🙂

Operating System, Redhat / CEntOS / Oracle Linux

/dev/random vs /dev/urandom

If you want random data in a Linux/Unix type OS, the standard way to do so is to use /dev/random or /dev/urandom. These devices are special files. They can be read like normal files and the read data is generated via multiple sources of entropy in the system which provide the randomness.

/dev/random will block after the entropy pool is exhausted. It will remain blocked until additional data has been collected from the sources of entropy that are available. This can slow down random data generation.

/dev/urandom will not block. Instead it will reuse the internal pool to produce more pseudo-random bits.
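For example, either device can be read like an ordinary file (the file name and sizes below are illustrative):

$ head -c 32 /dev/urandom | base64                  # 32 random bytes, printable
$ dd if=/dev/urandom of=random.bin bs=1M count=10   # 10 MB of random test data
$ cat /proc/sys/kernel/random/entropy_avail         # entropy currently available to the kernel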

/dev/urandom is best used when:

  • You just want a large file with random data for some kind of testing.
  • You are using the dd command to wipe data off a disk by replacing it with random data.
  • Almost everywhere else where you don’t have a really good reason to use /dev/random instead.

/dev/random is likely to be the better choice when:

  • Randomness is critical to the security of cryptography in your application – one-time pads, key generation.
Redhat / CEntOS / Oracle Linux, Ubuntu

10 Important “rsync” command – UNIX

Rsync (Remote Sync) is one of the most commonly used commands for copying and synchronizing files and directories, both remotely and locally, on Linux/Unix systems. With the help of the rsync command you can copy and synchronize your data across directories, disks and networks, perform data backups, and mirror data between two Linux machines.

This article explains 10 basic and advanced uses of the rsync command to transfer your files remotely and locally on Linux-based machines. You don’t need to be the root user to run rsync.

Some advantages and features of Rsync command
  1. It efficiently copies and syncs files to or from a remote system.
  2. Supports copying links, devices, owners, groups and permissions.
  3. It’s faster than scp (Secure Copy) because rsync uses a remote-update protocol which allows it to transfer just the differences between two sets of files. The first time, it copies the whole content of a file or a directory from source to destination, but on subsequent runs it copies only the changed blocks and bytes to the destination.
  4. Rsync consumes less bandwidth as it uses compression and decompression while sending and receiving data at both ends.
Basic syntax of rsync command
# rsync options source destination
Some common options used with rsync commands
  1. -v : verbose
  2. -r : copies data recursively (but doesn’t preserve timestamps and permissions while transferring data)
  3. -a : archive mode, archive mode allows copying files recursively and it also preserves symbolic links, file permissions, user & group ownerships and timestamps
  4. -z : compress file data
  5. -h : human-readable, output numbers in a human-readable format

 

Install rsync in your Linux machine

We can install the rsync package with the help of the following command.

# yum install rsync (On Red Hat based systems)
# apt-get install rsync (On Debian based systems)

1. Copy/Sync Files and Directory Locally

Copy/Sync a File on a Local Computer

The following command will sync a single file on a local machine from one location to another. Here in this example, a file named backup.tar needs to be copied or synced to the /tmp/backups/ folder.

[root@tecmint]# rsync -zvh backup.tar /tmp/backups/
created directory /tmp/backups
backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10

In the above example, you can see that if the destination does not already exist, rsync will create the destination directory automatically.

Copy/Sync a Directory on Local Computer

The following command will transfer or sync all the files from one directory to a different directory on the same machine. Here in this example, /root/rpmpkgs contains some rpm package files and you want that directory to be copied inside the /tmp/backups/ folder.

[root@tecmint]# rsync -avzh /root/rpmpkgs /tmp/backups/
sending incremental file list
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/nagios-3.5.0.tar.gz
rpmpkgs/nagios-plugins-1.4.16.tar.gz
sent 4.99M bytes  received 92 bytes  3.33M bytes/sec
total size is 4.99M  speedup is 1.00

2. Copy/Sync Files and Directory to or From a Server

Copy a Directory from Local Server to a Remote Server

This command will sync a directory from a local machine to a remote machine. For example: there is a folder on your local computer, “rpmpkgs”, which contains some RPM packages, and you want that local directory’s content sent to a remote server. You can use the following command.

[root@tecmint]$ rsync -avz rpmpkgs/ root@192.168.0.101:/home/
root@192.168.0.101's password:
sending incremental file list
./
httpd-2.2.3-82.el5.centos.i386.rpm
mod_ssl-2.2.3-82.el5.centos.i386.rpm
nagios-3.5.0.tar.gz
nagios-plugins-1.4.16.tar.gz
sent 4993369 bytes  received 91 bytes  399476.80 bytes/sec
total size is 4991313  speedup is 1.00
Copy/Sync a Remote Directory to a Local Machine

This command will help you sync a remote directory to a local directory. Here in this example, the directory /home/tarunika/rpmpkgs, which is on a remote server, is being copied to /tmp/myrpms on your local computer.

[root@tecmint]# rsync -avzh root@192.168.0.100:/home/tarunika/rpmpkgs /tmp/myrpms
root@192.168.0.100's password:
receiving incremental file list
created directory /tmp/myrpms
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
rpmpkgs/nagios-3.5.0.tar.gz
rpmpkgs/nagios-plugins-1.4.16.tar.gz
sent 91 bytes  received 4.99M bytes  322.16K bytes/sec
total size is 4.99M  speedup is 1.00

3. Rsync Over SSH

With rsync, we can use SSH (Secure Shell) for data transfer. Using the SSH protocol while transferring data ensures that it travels over a secured, encrypted connection, so nobody can read your data while it is in transit over the wire on the internet.

Also, when we use rsync we need to provide the user/root password to accomplish that particular task; using the SSH option sends your login credentials in an encrypted manner, so your password stays safe.

Copy a File from a Remote Server to a Local Server with SSH

To specify a protocol with rsync you need to give the “-e” option with the protocol name you want to use. Here in this example, we will be using “ssh” with the “-e” option to perform the data transfer.

[root@tecmint]# rsync -avzhe ssh root@192.168.0.100:/root/install.log /tmp/
root@192.168.0.100's password:
receiving incremental file list
install.log
sent 30 bytes  received 8.12K bytes  1.48K bytes/sec
total size is 30.74K  speedup is 3.77
Copy a File from a Local Server to a Remote Server with SSH
[root@tecmint]# rsync -avzhe ssh backup.tar root@192.168.0.100:/backups/
root@192.168.0.100's password:
sending incremental file list
backup.tar
sent 14.71M bytes  received 31 bytes  1.28M bytes/sec
total size is 16.18M  speedup is 1.10

 

4. Show Progress While Transferring Data with rsync

To show the progress while transferring data from one machine to another, we can use the '--progress' option. It displays the files and the time remaining to complete the transfer.

[root@tecmint]# rsync -avzhe ssh --progress /home/rpmpkgs root@192.168.0.100:/root/rpmpkgs
root@192.168.0.100's password:
sending incremental file list
created directory /root/rpmpkgs
rpmpkgs/
rpmpkgs/httpd-2.2.3-82.el5.centos.i386.rpm
1.02M 100%        2.72MB/s        0:00:00 (xfer#1, to-check=3/5)
rpmpkgs/mod_ssl-2.2.3-82.el5.centos.i386.rpm
99.04K 100%  241.19kB/s        0:00:00 (xfer#2, to-check=2/5)
rpmpkgs/nagios-3.5.0.tar.gz
1.79M 100%        1.56MB/s        0:00:01 (xfer#3, to-check=1/5)
rpmpkgs/nagios-plugins-1.4.16.tar.gz
2.09M 100%        1.47MB/s        0:00:01 (xfer#4, to-check=0/5)
sent 4.99M bytes  received 92 bytes  475.56K bytes/sec
total size is 4.99M  speedup is 1.00

5. Use of --include and --exclude Options

These two options allow us to include and exclude files by specifying patterns: --include selects the files or directories you want included in your sync, while --exclude filters out the files and folders you don’t want to be transferred.

Here in this example, the rsync command will include only those files and directories which start with 'R' and exclude all others.

[root@tecmint]# rsync -avze ssh --include 'R*' --exclude '*' root@192.168.0.101:/var/lib/rpm/ /root/rpm
root@192.168.0.101's password:
receiving incremental file list
created directory /root/rpm
./
Requirename
Requireversion
sent 67 bytes  received 167289 bytes  7438.04 bytes/sec
total size is 434176  speedup is 2.59

6. Use of --delete Option

If a file or directory does not exist at the source, but already exists at the destination, you might want to delete that existing file/directory at the target while syncing.

We can use the '--delete' option to delete files that are not present in the source directory.

Source and target are in sync. Now we create a new file test.txt at the target.

[root@tecmint]# touch test.txt
[root@tecmint]# rsync -avz --delete root@192.168.0.100:/var/lib/rpm/ .
Password:
receiving file list ... done
deleting test.txt
./
sent 26 bytes  received 390 bytes  48.94 bytes/sec
total size is 45305958  speedup is 108908.55

The target had the new file test.txt; when synchronized with the source using the '--delete' option, rsync removed the file test.txt.

7. Set the Max Size of Files to be Transferred

You can specify the maximum size of files to be transferred or synced with the '--max-size' option. Here in this example, the maximum file size is 200k, so this command will transfer only those files which are equal to or smaller than 200k.

[root@tecmint]# rsync -avzhe ssh --max-size='200k' /var/lib/rpm/ root@192.168.0.100:/root/tmprpm
root@192.168.0.100's password:
sending incremental file list
created directory /root/tmprpm
./
Conflictname
Group
Installtid
Name
Provideversion
Pubkeys
Requireversion
Sha1header
Sigmd5
Triggername
__db.001
sent 189.79K bytes  received 224 bytes  13.10K bytes/sec
total size is 38.08M  speedup is 200.43

8. Automatically Delete source Files after successful Transfer

Now, suppose you have a main web server and a data backup server; you created a daily backup and synced it with your backup server, and now you don’t want to keep that local copy of the backup on your web server.

So, will you wait for the transfer to complete and then delete the local backup file manually? Of course not. This automatic deletion can be done using the '--remove-source-files' option.

[root@tecmint]# rsync --remove-source-files -zvh backup.tar /tmp/backups/
backup.tar
sent 14.71M bytes  received 31 bytes  4.20M bytes/sec
total size is 16.18M  speedup is 1.10
[root@tecmint]# ll backup.tar
ls: backup.tar: No such file or directory

9. Do a Dry Run with rsync

If you are new to rsync and don’t know exactly what your command is going to do, rsync could really mess up the things in your destination folder, and undoing that can be a tedious job.

Using the '--dry-run' option will not make any changes; it only does a dry run of the command and shows the output. If the output shows exactly what you want to do, you can remove the '--dry-run' option from your command and run it on the terminal.

[root@tecmint]# rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/
backup.tar
sent 35 bytes  received 15 bytes  100.00 bytes/sec
total size is 16.18M  speedup is 323584.00 (DRY RUN)

10. Set Bandwidth Limit and Transfer File

You can set the bandwidth limit while transferring data from one machine to another with the help of the '--bwlimit' option. This option helps us limit I/O bandwidth.

[root@tecmint]# rsync --bwlimit=100 -avzhe ssh  /var/lib/rpm/  root@192.168.0.100:/root/tmprpm/
root@192.168.0.100's password:
sending incremental file list
sent 324 bytes  received 12 bytes  61.09 bytes/sec
total size is 38.08M  speedup is 113347.05

Also, by default rsync syncs only the changed blocks and bytes; if you explicitly want to sync the whole file, use the '-W' option with it.

[root@tecmint]# rsync -zvhW backup.tar /tmp/backups/backup.tar
backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10


rsync -azP --progress "<user>@<host>:<absolute path>" <location to be copied>

Source :- tecmint.com
Operating System, Redhat / CEntOS / Oracle Linux, Ubuntu

Fork: retry: Resource temporarily unavailable

Issue: 

It was reported that a particular application user is not able to log in.

Troubleshoot:
1. Tried logging in to the system with the root user; it was fine.
2. Tried to switch user; it failed with the error “Write Failed; Broken Pipe”.
3. Created a file and it was working.
4. Tried switching the user again. This time it went through.
5. Tried running some jobs with the user. It threw an error saying “fork: retry: Resource temporarily unavailable”.
6. Then checked the “/etc/security/limits.d/90-nproc.conf” file to find out that all the users are given an nproc limit of 1024.

Resolution:
1. Changed it to a higher value and it solved the issue.

2. I changed the value to 4096, as sketched below.
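A minimal sketch of the changed line in /etc/security/limits.d/90-nproc.conf (the wildcard form below is illustrative; you may scope it to the specific user instead):

# /etc/security/limits.d/90-nproc.conf
*          soft    nproc     4096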
Redhat / CEntOS / Oracle Linux, Ubuntu

How to use parallel ssh (PSSH) for executing ssh in parallel on a number of Linux/Unix/BSD servers

Recently I came across a nifty little tool called pssh to run a single command on multiple Linux / UNIX / BSD servers. You can easily increase your productivity with this SSH tool.
More about pssh
pssh is a command line tool for executing ssh in parallel on a number of hosts. Its specialties include:
  1. Sending input to all of the processes
  2. Inputting a password to ssh
  3. Saving output to files
  4. IT/sysadmin task automation, such as patching servers
  5. Timing out and more
Let us see how to install and use pssh on Linux and Unix-like system.
Installation
You can install pssh as per your Linux and Unix variant. Once the package is installed, you get parallel versions of the OpenSSH tools. Included in the installation:
  1. Parallel ssh (pssh command)
  2. Parallel scp (pscp command )
  3. Parallel rsync (prsync command)
  4. Parallel nuke (pnuke command)
  5. Parallel slurp (pslurp command)
Install pssh on Debian/Ubuntu Linux
Type the following apt-get command/apt command to install pssh:
$ sudo apt install pssh
OR
$ sudo apt-get install pssh
Sample outputs:
Fig.01: Installing pssh on Debian/Ubuntu Linux

Install pssh on Apple MacOS X
Type the following brew command:
$ brew install pssh
Sample outputs:
Fig.02: Installing pssh on MacOS Unix

Install pssh on FreeBSD unix
Type any one of the command:
# cd /usr/ports/security/pssh/ && make install clean
OR
# pkg install pssh
Sample outputs:
Fig.03: Installing pssh on FreeBSD

Install pssh on RHEL/CentOS/Fedora Linux
First turn on EPEL repo and type the following command yum command:
$ sudo yum install pssh
Sample outputs:
Fig.04: Installing pssh on RHEL/CentOS/Red Hat Enterprise Linux

Install pssh on Fedora Linux
Type the following dnf command:
$ sudo dnf install pssh
Sample outputs:
Fig.05: Installing pssh on Fedora

Install pssh on Arch Linux
Type the following command:
$ sudo pacman -S python-pip
$ pip install pssh
How to use pssh command
First you need to create a text file, called a hosts file, from which pssh reads host names. The syntax is pretty simple: each line in the hosts file is of the form [user@]host[:port], and the file can include blank lines and comment lines beginning with “#”. Here is my sample file named ~/.pssh_hosts_files:
$ cat ~/.pssh_hosts_files
vivek@dellm6700
root@192.168.2.30
root@192.168.2.45
root@192.168.2.46

Run the date command on all hosts:
$ pssh -i -h ~/.pssh_hosts_files date
Sample outputs:
[1] 18:10:10 [SUCCESS] root@192.168.2.46
Sun Feb 26 18:10:10 IST 2017
[2] 18:10:10 [SUCCESS] vivek@dellm6700
Sun Feb 26 18:10:10 IST 2017
[3] 18:10:10 [SUCCESS] root@192.168.2.45
Sun Feb 26 18:10:10 IST 2017
[4] 18:10:10 [SUCCESS] root@192.168.2.30
Sun Feb 26 18:10:10 IST 2017
Run the uptime command on each host:
$ pssh -i -h ~/.pssh_hosts_files uptime
Sample outputs:
[1] 18:11:15 [SUCCESS] root@192.168.2.45
18:11:15 up 2:29, 0 users, load average: 0.00, 0.00, 0.00
[2] 18:11:15 [SUCCESS] vivek@dellm6700
18:11:15 up 19:06, 0 users, load average: 0.13, 0.25, 0.27
[3] 18:11:15 [SUCCESS] root@192.168.2.46
18:11:15 up 1:55, 0 users, load average: 0.00, 0.00, 0.00
[4] 18:11:15 [SUCCESS] root@192.168.2.30
6:11PM up 1 day, 21:38, 0 users, load averages: 0.12, 0.14, 0.09
You can now automate common sysadmin tasks such as patching all servers:
$ pssh -h ~/.pssh_hosts_files -- sudo yum -y update
OR
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y update
$ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y upgrade
How do I use pssh to copy a file to all servers?
The syntax is:
pscp -h ~/.pssh_hosts_files src dest
To copy $HOME/demo.txt to /tmp/ on all servers, enter:
$ pscp -h ~/.pssh_hosts_files $HOME/demo.txt /tmp/
Sample outputs:
[1] 18:17:35 [SUCCESS] vivek@dellm6700
[2] 18:17:35 [SUCCESS] root@192.168.2.45
[3] 18:17:35 [SUCCESS] root@192.168.2.46
[4] 18:17:35 [SUCCESS] root@192.168.2.30
Or use the prsync command for efficient copying of files:
$ prsync -h ~/.pssh_hosts_files /etc/passwd /tmp/
$ prsync -h ~/.pssh_hosts_files *.html /var/www/html/
How do I kill processes in parallel on a number of hosts?
Use the pnuke command for killing processes in parallel on a number of hosts. The syntax is:
$ pnuke -h ~/.pssh_hosts_files process_name
### kill nginx and firefox on hosts:
$ pnuke -h ~/.pssh_hosts_files firefox
$ pnuke -h ~/.pssh_hosts_files nginx

See pssh/pscp command man pages for more information.
Conclusion
pssh is a pretty good tool for parallel SSH command execution on many servers. It is quite useful if you have 5 or 10 servers. Nevertheless, if you need to do something complicated you should look into Ansible and co.
Redhat / CEntOS / Oracle Linux, Ubuntu

30 Shades of “Alias” Command – UNIX

You can define various types of aliases as follows to save time and increase productivity.
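An alias defined at the prompt lasts only for the current shell session; to make it permanent, append it to your shell startup file (a minimal sketch, assuming bash):

$ alias ll='ls -la'                          # effective in the current shell only
$ echo "alias ll='ls -la'" >> ~/.bashrc      # persist it for future sessions
$ source ~/.bashrc                           # reload without logging out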

#1: Control ls command output

The ls command lists directory contents and you can colorize the output:

## Colorize the ls output ##
alias ls='ls --color=auto'
 
## Use a long listing format ##
alias ll='ls -la' 
 
## Show hidden files ##
alias l.='ls -d .* --color=auto'

#2: Control cd command behavior

## get rid of command not found ##
alias cd..='cd ..' 
 
## a quick way to get out of current directory ##
alias ..='cd ..' 
alias ...='cd ../../../' 
alias ....='cd ../../../../' 
alias .....='cd ../../../../..' 
alias .4='cd ../../../../' 
alias .5='cd ../../../../..'

#3: Control grep command output

grep command is a command-line utility for searching plain-text files for lines matching a regular expression:

## Colorize the grep command output for ease of use (good for log files)##
alias grep='grep --color=auto'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'

#4: Start calculator with math support

alias bc='bc -l'

#4: Generate sha1 digest

alias sha1='openssl sha1'

#5: Create parent directories on demand

mkdir command is used to create a directory:

alias mkdir='mkdir -pv'

#6: Colorize diff output

You can compare files line by line using diff and use a tool called colordiff to colorize diff output:

# install  colordiff package 🙂
alias diff='colordiff'

#7: Make mount command output pretty and human readable format

alias mount='mount |column -t'

#8: Command short cuts to save time

# handy short cuts #
alias h='history'
alias j='jobs -l'

#9: Create a new set of commands

alias path='echo -e ${PATH//:/\\n}'
alias now='date +"%T"'
alias nowtime=now
alias nowdate='date +"%d-%m-%Y"'

#10: Set vim as default

alias vi=vim 
alias svi='sudo vi' 
alias vis='vim "+set si"' 
alias edit='vim'

#11: Control output of networking tool called ping

# Stop after sending count ECHO_REQUEST packets #
alias ping='ping -c 5'
# Do not wait interval 1 second, go fast #
alias fastping='ping -c 100 -s.2'

#12: Show open ports

Use the netstat command to quickly list all TCP/UDP ports on the server:

alias ports='netstat -tulanp'

#13: Wakeup sleeping servers

Wake-on-LAN (WOL) is an Ethernet networking standard that allows a server to be turned on by a network message. You can quickly wake up NAS devices and servers using the following aliases:

## replace mac with your actual server mac address #
alias wakeupnas01='/usr/bin/wakeonlan 00:11:32:11:15:FC'
alias wakeupnas02='/usr/bin/wakeonlan 00:11:32:11:15:FD'
alias wakeupnas03='/usr/bin/wakeonlan 00:11:32:11:15:FE'

#14: Control firewall (iptables) output

Netfilter is a host-based firewall for Linux operating systems. It is included as part of the Linux distribution and it is activated by default. This section lists the most common iptables shortcuts required by a new Linux user to secure his or her Linux operating system from intruders.

## shortcut  for iptables and pass it via sudo#
alias ipt='sudo /sbin/iptables'
 
# display all rules #
alias iptlist='sudo /sbin/iptables -L -n -v --line-numbers'
alias iptlistin='sudo /sbin/iptables -L INPUT -n -v --line-numbers'
alias iptlistout='sudo /sbin/iptables -L OUTPUT -n -v --line-numbers'
alias iptlistfw='sudo /sbin/iptables -L FORWARD -n -v --line-numbers'
alias firewall=iptlist

#15: Debug web server / cdn problems with curl

# get web server headers #
alias header='curl -I'
 
# find out if remote server supports gzip / mod_deflate or not #
alias headerc='curl -I --compress'

#16: Add safety nets

# do not delete / or prompt if deleting more than 3 files at a time #
alias rm='rm -I --preserve-root'
 
# confirmation #
alias mv='mv -i' 
alias cp='cp -i' 
alias ln='ln -i'
 
# Prevent accidentally changing perms on / #
alias chown='chown --preserve-root'
alias chmod='chmod --preserve-root'
alias chgrp='chgrp --preserve-root'

#17: Update Debian Linux server

The apt-get command is used for installing packages over the internet (ftp or http). You can also upgrade all packages in a single operation:

# distro specific  - Debian / Ubuntu and friends #
# install with apt-get
alias apt-get="sudo apt-get" 
alias updatey="sudo apt-get --yes" 
 
# update on one command 
alias update='sudo apt-get update && sudo apt-get upgrade'

#18: Update RHEL / CentOS / Fedora Linux server

yum command is a package management tool for RHEL / CentOS / Fedora Linux and friends:

## distro specific RHEL/CentOS ##
alias update='yum update'
alias updatey='yum -y update'

#19: Tune sudo and su

# become root #
alias root='sudo -i'
alias su='sudo -i'

#20: Pass halt/reboot via sudo

shutdown command bring the Linux / Unix system down:

# reboot / halt / poweroff
alias reboot='sudo /sbin/reboot'
alias poweroff='sudo /sbin/poweroff'
alias halt='sudo /sbin/halt'
alias shutdown='sudo /sbin/shutdown'

#21: Control web servers

# also pass it via sudo so whoever is admin can reload it without calling you #
alias nginxreload='sudo /usr/local/nginx/sbin/nginx -s reload'
alias nginxtest='sudo /usr/local/nginx/sbin/nginx -t'
alias lightyload='sudo /etc/init.d/lighttpd reload'
alias lightytest='sudo /usr/sbin/lighttpd -f /etc/lighttpd/lighttpd.conf -t'
alias httpdreload='sudo /usr/sbin/apachectl -k graceful'
alias httpdtest='sudo /usr/sbin/apachectl -t && /usr/sbin/apachectl -t -D DUMP_VHOSTS'

#22: Alias into our backup stuff

# if cron fails or if you want backup on demand just run these commands # 
# again pass it via sudo so whoever is in admin group can start the job #
# Backup scripts #
alias backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type local --target /raid1/backups'
alias nasbackup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01'
alias s3backup='sudo /home/scripts/admin/scripts/backup/wrapper.backup.sh --type nas --target nas01 --auth /home/scripts/admin/.authdata/amazon.keys'
alias rsnapshothourly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotdaily='sudo  /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotweekly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias rsnapshotmonthly='sudo /home/scripts/admin/scripts/backup/wrapper.rsnapshot.sh --type remote --target nas03 --auth /home/scripts/admin/.authdata/ssh.keys  --config /home/scripts/admin/scripts/backup/config/adsl.conf'
alias amazonbackup=s3backup

#23: Desktop specific – play avi/mp3 files on demand

## play video files in a current directory ##
# cd ~/Download/movie-name 
# playavi or vlc 
alias playavi='mplayer *.avi'
alias vlc='vlc *.avi'
 
# play all music files from the current directory #
alias playwave='for i in *.wav; do mplayer "$i"; done'
alias playogg='for i in *.ogg; do mplayer "$i"; done'
alias playmp3='for i in *.mp3; do mplayer "$i"; done'
 
# play files from nas devices #
alias nplaywave='for i in /nas/multimedia/wave/*.wav; do mplayer "$i"; done'
alias nplayogg='for i in /nas/multimedia/ogg/*.ogg; do mplayer "$i"; done'
alias nplaymp3='for i in /nas/multimedia/mp3/*.mp3; do mplayer "$i"; done'
 
# shuffle mp3/ogg etc by default #
alias music='mplayer --shuffle *'

#24: Set default interfaces for sys admin related commands

vnstat is a console-based network traffic monitor. dnstop is a console tool to analyze DNS traffic. The tcptrack and iftop commands display information about TCP/UDP connections seen on a network interface and bandwidth usage on an interface by host, respectively.

## All of our servers eth1 is connected to the Internets via vlan / router etc  ##
alias dnstop='dnstop -l 5  eth1'
alias vnstat='vnstat -i eth1'
alias iftop='iftop -i eth1'
alias tcpdump='tcpdump -i eth1'
alias ethtool='ethtool eth1'
 
# work on wlan0 by default #
# Only useful for laptop as all servers are without wireless interface
alias iwconfig='iwconfig wlan0'

#25: Get system memory, cpu usage, and gpu memory info quickly

## pass options to free ## 
alias meminfo='free -m -l -t'
 
## get top process eating memory
alias psmem='ps auxf | sort -nr -k 4'
alias psmem10='ps auxf | sort -nr -k 4 | head -10'
 
## get top process eating cpu ##
alias pscpu='ps auxf | sort -nr -k 3'
alias pscpu10='ps auxf | sort -nr -k 3 | head -10'
 
## Get server cpu info ##
alias cpuinfo='lscpu'
 
## older system use /proc/cpuinfo ##
##alias cpuinfo='less /proc/cpuinfo' ##
 
## get GPU ram on desktop / laptop## 
alias gpumeminfo='grep -i --color memory /var/log/Xorg.0.log'

#26: Control Home Router

The curl command can be used to reboot Linksys routers.

# Reboot my home Linksys WAG160N / WAG54 / WAG320 / WAG120N Router / Gateway from *nix.
alias rebootlinksys="curl -u 'admin:my-super-password' 'http://192.168.1.2/setup.cgi?todo=reboot'"
 
# Reboot tomato based Asus NT16 wireless bridge 
alias reboottomato="ssh admin@192.168.1.1 /sbin/reboot"

#27 Resume wget by default

The GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, and it can resume downloads too:

## this one saved my butt so many times ##
alias wget='wget -c'

#28 Use different browser for testing website

alias ff4='/opt/firefox4/firefox'
alias ff13='/opt/firefox13/firefox'
alias chrome='/opt/google/chrome/chrome'
alias opera='/opt/opera/opera'
 
#default ff 
alias ff=ff13
 
#my default browser 
alias browser=chrome

#29: A note about ssh alias

Do not create ssh aliases; instead use the ~/.ssh/config OpenSSH client configuration file. It offers more options. An example:

Host server10
  Hostname 1.2.3.4
  IdentityFile ~/backups/.ssh/id_dsa
  user foobar
  Port 30000
  ForwardX11Trusted yes
  TCPKeepAlive yes

You can now connect to server10 using the following syntax:
$ ssh server10

#30: It’s your turn to share…

## set some other defaults ##
alias df='df -H'
alias du='du -ch'
 
# top is atop, just like vi is vim
alias top='atop' 
 
## nfsrestart  - must be root  ##
## refresh nfs mount / cache etc for Apache ##
alias nfsrestart='sync && sleep 2 && /etc/init.d/httpd stop && umount netapp2:/exports/http && sleep 2 && mount -o rw,sync,rsize=32768,wsize=32768,intr,hard,proto=tcp,fsc netapp2:/exports/http /var/www/html &&  /etc/init.d/httpd start'
 
## Memcached server status  ##
alias mcdstats='/usr/bin/memcached-tool 10.10.27.11:11211 stats'
alias mcdshow='/usr/bin/memcached-tool 10.10.27.11:11211 display'
 
## quickly flush out memcached server ##
alias flushmcd='echo "flush_all" | nc 10.10.27.11 11211'
 
## Remove assets quickly from Akamai / Amazon cdn ##
alias cdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai'
alias amzcdndel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon'
 
## supply list of urls via file or stdin
alias cdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile akamai --stdin'
alias amzcdnmdel='/home/scripts/admin/cdn/purge_cdn_cache --profile amazon --stdin'

Operating System, Redhat / CEntOS / Oracle Linux, Tips and Tricks

Error :- sudo: effective uid is not 0, is sudo installed setuid root?

All of us, as Linux administrators, must have come across this error at some point.

[user@host dir]$ sudo bash
sudo: effective uid is not 0, is sudo installed setuid root?

This happens when the sudo binary does not have the right permissions, i.e. it is no longer setuid root.

The solution for this error is to set the following permission as the root user:

chmod u+s /usr/bin/sudo
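To verify, check the permission bits on the binary afterwards; the owner should be root and the mode should show the setuid bit (an “s” in the owner execute position, e.g. -rwsr-xr-x):

# ls -l /usr/bin/sudo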

That should sort out the issue on CentOS-like distros.

 

Operating System, Redhat / CEntOS / Oracle Linux

/proc/sys for you to manipulate a running kernel

The /proc/sys directory in the /proc virtual filesystem contains a lot of useful and interesting files and directories. Many kernel settings can be manipulated by writing to files in the proc filesystem, and a lot of important information can be retrieved from these files. This is especially useful when you are troubleshooting or fine-tuning your Linux system.
Following is a description of the most important files.
Especially the files in /proc/sys/vm are very interesting and useful.
You can also use the sysctl command to make these changes persistent, or to see all the kernel options you can change at run-time.
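For example, a tunable can be read and changed either through its proc file or through sysctl (the fs.file-max value below is purely illustrative):

$ cat /proc/sys/fs/file-max
# echo 200000 > /proc/sys/fs/file-max     # takes effect immediately, lost on reboot
# sysctl -w fs.file-max=200000            # the same change via sysctl
# sysctl -a | less                        # list every tunable the kernel exposes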

/proc/sys/dev

Contains device specific information. For instance, /proc/sys/dev/cdrom/info shows you cdrom capabilities. The other files in /proc/sys/dev/cdrom are writable and allow you to actually set options for your cdrom drive.
For instance, echo 1 > /proc/sys/dev/cdrom/autoeject makes your tray open automatically when you unmount your cdrom.
/proc/sys/dev/parport holds information about parallel ports. Browse these directories to learn more about their contents.

/proc/sys/fs

Virtual filesystem information/tuning

/proc/sys/fs/binfmt_misc

binfmt_misc allows you to configure the system to execute miscellaneous binary formats. For instance it enables you to make the system execute .exe files using wine and java files using the java interpreter, just by typing a file name.

/proc/sys/fs/dentry-state

Linux caches directory access to speed up subsequent access to the same directory; this file contains information about the status of the directory cache.

/proc/sys/fs/dir-notify-enable

Enables/disables the dnotify interface; dnotify is a signal-based mechanism used to notify a process about file/directory changes. This is mainly interesting to programmers.

/proc/sys/fs/dquot-nr

number of allocated disk quota entries and the number of free disk quota entries

/proc/sys/fs/dquot-max

maximum number of cached disk quota entries.

/proc/sys/fs/file-max

system-wide limit on the number of open files for all processes.

/proc/sys/fs/file-nr

number of files the system has presently opened.

/proc/sys/fs/inode-max

maximum number of in-memory inodes

/proc/sys/fs/inode-nr

number of inodes and number of free inodes

/proc/sys/fs/inode-state

This file contains seven numbers: number of inodes, number of free inodes, preshrink, and four dummy values. nr_inodes is the number of inodes the system has allocated. Preshrink is non-zero when the nr_inodes is bigger than inode-max.

/proc/sys/fs/inotify
(since kernel 2.6.13)

This directory contains files that can be used to limit the amount of kernel memory consumed by the inotify interface.

/proc/sys/fs/lease-break-time

This file specifies the grace period that the kernel grants to a process holding a file lease after it has sent a signal to that process notifying it that another process is waiting to open the file.

/proc/sys/fs/leases-enable

This file can be used to enable or disable file leases on a system-wide basis.

/proc/sys/fs/mqueue
(since kernel 2.6.6)

This directory contains files controlling the resources used by POSIX message queues.

/proc/sys/fs/overflowgid

Allows you to change the value of the fixed GID. If a filesystem which only supports 16-bit GIDs is mounted, the 32-bit Linux GIDs sometimes need to be converted to lower values; they are mapped to this fixed value.

/proc/sys/fs/overflowuid

Allows you to change the value of the fixed UID. If a filesystem which only supports 16-bit UIDs is mounted, the 32-bit Linux UIDs sometimes need to be converted to lower values; they are mapped to this fixed value.

/proc/sys/fs/suid_dumpable
(since kernel 2.6.13)

Determines whether core dump files are produced for set-user-ID or otherwise protected/tainted binaries.
Possible values are 0,1,2:
0 (default) A core dump will not be produced for a process which has changed credentials or whose binary does not have read permission enabled.
1 (debug) All processes dump core when possible.
2 (suidsafe) Any binary which normally would not be dumped is dumped readable by root only. This allows the user to remove the core dump file but not to read it. For security reasons core dumps in this mode will not overwrite one another or other files. This mode is appropriate when administrators are attempting to debug problems in a normal environment.

/proc/sys/fs/super-max

Controls the maximum number of superblocks, and thus the maximum number of mounted file systems the kernel can have.

/proc/sys/fs/super-nr

The number of file systems currently mounted.

/proc/sys/kernel/acct

Contains three values: highwater, lowwater, and frequency. Used with BSD-style process accounting.

/proc/sys/kernel/cap-bound
(from Linux 2.2 to 2.6.24)

Holds the value of the kernel capability bounding set.

/proc/sys/kernel/core_pattern

Can be used to define a template for naming core dump files

/proc/sys/kernel/core_uses_pid

See core(5).

/proc/sys/kernel/ctrl-alt-del

Controls the handling of Ctrl-Alt-Del from the keyboard.
If it's 0, Linux will do a graceful restart. When the value is > 0, Linux will do an immediate reboot, without even syncing its dirty buffers.
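
For example:

echo 0 > /proc/sys/kernel/ctrl-alt-del
echo 1 > /proc/sys/kernel/ctrl-alt-del

The first command restores the default graceful restart; the second switches to an immediate reboot without syncing.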

/proc/sys/kernel/hotplug

Contains the path for the hotplug policy agent.

/proc/sys/kernel/domainname

can be used to set the NIS/YP domainname

/proc/sys/kernel/hostname

can be used to set the hostname

/proc/sys/kernel/modprobe

Contains the path for the kernel module loader.

/proc/sys/kernel/msgmax

This file defines a system-wide limit specifying the maximum number of bytes in a single message written on a System V message queue.

/proc/sys/kernel/msgmnb

Defines a system-wide parameter used to initialize the msg_qbytes setting for subsequently created message queues.

/proc/sys/kernel/ostype and /proc/sys/kernel/osrelease

These files give substrings of /proc/version.

/proc/sys/kernel/overflowgid and /proc/sys/kernel/overflowuid

These files duplicate the files /proc/sys/fs/overflowgid and /proc/sys/fs/overflowuid.

/proc/sys/kernel/panic

Gives read/write access to the kernel variable panic_timeout. If this is zero, the kernel will loop on a panic; if non-zero it indicates that the kernel should autoreboot after this number of seconds.

/proc/sys/kernel/panic_on_oops
(since kernel 2.5.68)

This file controls the kernel’s behavior when an oops or BUG is encountered. If this file contains 0, then the system tries to continue operation. If it contains 1, then the system delays a few seconds and then panics. If the /proc/sys/kernel/panic file is also non-zero then the machine will be rebooted.

/proc/sys/kernel/pid_max
(since kernel 2.5.34)

This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID).
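
For example, to check and raise the wrap-around value (65536 is just an illustrative number):

cat /proc/sys/kernel/pid_max
sysctl -w kernel.pid_max=65536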

/proc/sys/kernel/printk

The four values in this file are console_loglevel, default_message_loglevel, minimum_console_loglevel, and default_console_loglevel. This allows configuration of which messages will be logged to the console. (Ever worked on a console printing messages all the time to your screen? Here's how to fix that.) Messages with a higher priority than console_loglevel will be printed to the console.
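
A quick sketch of quieting such a console (the exact levels are a matter of taste): writing four values sets console_loglevel first, so with 3 as the first value only the most severe messages still reach the console.

cat /proc/sys/kernel/printk
echo "3 4 1 3" > /proc/sys/kernel/printk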

/proc/sys/kernel/pty
(since kernel 2.6.4)

This directory contains two files relating to the number of Unix 98 pseudo-terminals on the system.

/proc/sys/kernel/pty/max

Defines the maximum number of pseudo-terminals.

/proc/sys/kernel/pty/nr

This read-only file indicates how many pseudo-terminals are currently in use.

/proc/sys/kernel/random

This directory contains various parameters controlling the operation of the file /dev/random.

/proc/sys/kernel/real-root-dev

Used by the deprecated change_root initrd system

/proc/sys/kernel/rtsig-max

( until kernel 2.6.7)
Can be used to tune the maximum number of POSIX real-time (queued) signals that can be outstanding in the system.

/proc/sys/kernel/rtsig-nr

(until kernel 2.6.7)
This file shows the number of POSIX real-time signals currently queued.

/proc/sys/kernel/sem
(since kernel 2.4)

Contains 4 numbers defining limits for System V IPC semaphores.

/proc/sys/kernel/sg-big-buff

Shows the size of the generic SCSI device (sg) buffer.

/proc/sys/kernel/shmall

Contains the system-wide limit on the total number of pages of System V shared memory.

/proc/sys/kernel/shmmax

This file can be used to query and set the run-time limit on the maximum (System V IPC) shared memory segment size that can be created.

/proc/sys/kernel/shmmni
(from kernel 2.4)

Specifies the system-wide maximum number of System V shared memory segments that can be created.

/proc/sys/kernel/version

Kernel version number and build date

/proc/sys/net

Networking information.

/proc/sys/net/core/somaxconn

Defines a ceiling value for the backlog argument of listen().

/proc/sys/net/core/rmem_max

Maximum TCP Receive Window.

/proc/sys/net/core/wmem_max

Maximum TCP Send Window.

/proc/sys/net/ipv4/ip_forward

Enable or disable routing.
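
For example, to enable routing between interfaces and then disable it again:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/ip_forward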

/proc/sys/sunrpc

This directory supports Sun remote procedure call for network file system (NFS).

/proc/sys/vm

This directory contains files for memory management tuning, and buffer and cache management. It is one of the more interesting directories under /proc/sys, as it allows manipulating memory handling in real time.

/proc/sys/vm/swappiness
(since kernel 2.6.16)

vm.swappiness takes a value between 0 and 100 to change the balance between swapping applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; in other cases, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.

/proc/sys/vm/drop_caches
(since kernel 2.6.16)

Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache, write 1 to this file.
To free dentries and inodes, write 2 to this file.
To free pagecache, dentries and inodes, write 3 to this file.
Just try echo 1 > /proc/sys/vm/drop_caches, and watch your memory usage drop by all kernel cache memory.
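
Since only clean (already written back) pages can be dropped, running sync first usually frees more; a small sketch:

sync
echo 3 > /proc/sys/vm/drop_caches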

/proc/sys/vm/legacy_va_layout
(since kernel 2.6.9)

If non-zero, this disables the new 32-bit memory-mapping layout; the kernel will use the legacy (2.4) layout for all processes.

/proc/sys/vm/oom_dump_tasks
(since kernel 2.6.25)

Enables a system-wide task dump (excluding kernel threads) to be produced when the kernel performs an OOM-killing. The dump includes the following information for each task (thread, process): thread ID, real user ID, thread group ID (process ID), virtual memory size, resident set size, the CPU that the task is scheduled on, oom_adj score (see the description of /proc[number]/oom_adj), and command name. This is helpful to determine why the OOM-killer was invoked and to identify the rogue task that caused it.
If this contains the value zero, this information is suppressed.
It defaults to 0, so if you have a problem requiring it, enable it :
echo 1 > /proc/sys/vm/oom_dump_tasks

/proc/sys/vm/oom_kill_allocating_task
(since kernel 2.6.24)

This enables or disables killing the OOM-triggering task in out-of-memory situations. If this is set to zero, the OOM-killer will scan through the entire tasklist and select a task based on heuristics to kill. This normally selects a rogue memory-hogging task that frees up a large amount of memory when killed.
If this is set to non-zero, the OOM-killer simply kills the task that triggered the out-of-memory condition. This avoids a possibly expensive tasklist scan.
If /proc/sys/vm/panic_on_oom is non-zero, it takes precedence over whatever value is used in /proc/sys/vm/oom_kill_allocating_task.
The default value is 0.

/proc/sys/vm/overcommit_memory

This file contains the kernel virtual memory accounting mode. Values are:
0: heuristic overcommit (default)
1: always overcommit, never check
2: always check, never overcommit

/proc/sys/vm/overcommit_ratio

When overcommit_memory is set to 2, this value is the percentage of physical RAM (added to the swap size) that defines how much memory the kernel will allow to be committed.
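
As an illustration of how the two files work together (the numbers are examples only): with strict accounting enabled, the commit limit becomes the swap size plus this percentage of physical RAM.

sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80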

/proc/sys/vm/panic_on_oom
(since kernel 2.6.18)

This enables or disables a kernel panic in an out-of-memory situation.
0 (default) : no panic
1 : panic, except when a process limits its allocations to certain nodes using memory
policies (mbind) or cpusets and only those nodes reach memory exhaustion.
2 : always panic
Operating System, Redhat / CEntOS / Oracle Linux

Linux Concepts :- Quotas

Rules : 1. Quotas can only be created for partitions

E.g., if a 1MB quota is set for partition [/home]

then every subdir under that /home can use a max of 1 MB

user | group

block [diskspace] inode [no of files] | block [diskspace] inode [files]

SOFT HARD GRACE SOFT HARD GRACE | SOFT HARD GRACE SOFT HARD GRACE

<---- Limits ---->

==========================================

Part I. Configuring / Setting Up Quotas

==========================================

1. Configure /etc/fstab – usrquota or grpquota, whichever you want

In the 4th column of /etc/fstab add ‘usrquota’

2. Refresh /etc/fstab – actually /etc/mtab :

a. If you do not want hassles just REBOOT and skip to Part II.

or

b. mount -a which will do nothing if FS’s are already mounted, which

they always are

or

c. umount /home and mount /home

But this will not work if any users are online which they always are

or

d. mount -o remount /home <<<=========================

[mount -o remount,rw /]

Refresh and will work even if users are online

or

e. reboot

which is the coward’s way

and then Check /etc/mtab or mount*

3. To create the aquota.user file

–> quotacheck -vc /home [force] – Note : ‘u’ is the default if not given

and ‘v’, of course, is always

optional but friendly

This will create the /home/aquota.user which monitors all

quota activity for the /home partition

4. To turn on quotas :

–> quotaon -v /home – turns on quotas,

i.e. loads the /home/aquota.user file

into RAM

5. Check everybody’s current usage and quotas :

# repquota -a

Configuring quotas is now over !

Now we will implement quotas.

In short :

1. Configure /etc/fstab

2. mount -o remount /home Cross check with # cat /etc/mtab

3. quotacheck -vc /home Creates aquota.user

4. quotaon -v /home Loads aquota.user in RAM

5. repquota -a Enjoy your work !

eg

1. repquota -a

2. repquota -u /

3. repquota -u sachin

The first line shows quota info for all users and groups for all file

systems.

The second line shows user quota info for the / file system.

The third line shows quota information for user sachin on all file systems.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

=============================

Part II. Implementing Quotas

=============================

6. edquota -u user

7. edquota -p foo bar <——— use foo as quota prototype for bar

8. edquota -t <———— To change the grace period

or use :

# edquota -p foo `awk -F: '$3 > 499 {print $1}' /etc/passwd`
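
If you prefer a non-interactive alternative to edquota (handy in scripts), the setquota command can be used instead; a minimal sketch, assuming a 100 MB soft / 200 MB hard block limit and no inode limits for user foo on /home:

# setquota -u foo 102400 204800 0 0 /home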

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

========================================

Part III. Repairing the aquota.user file

========================================

If your system hangs and then restarts, the aquota.user file gets corrupted

and all quotas for all users are now in an unknown state :

To repair, boot into single user mode immediately, and do this _FIRST_ :

quotaoff -v /home

quotacheck -avug Minimum reqd is : quotacheck /home

since ‘u’ is the default

and we do not have ‘g’

and ‘v’ is optional

'a' means check all filesystems listed in /etc/mtab

Re-Creates file /home/aquota.user

quotaon -v /home

Misc : /etc/warnquota.conf

warnquota*

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

GNU/Linux LDUP : 27-Jul-2k3

PART II – SysAdministration

08. QUOTAS

1. What are the two aspects of disk storage that quotas allow you to specify?

A: Disk space [Block] and Files [inode] quotas

2. Which init script checks for the presence or absence of quotas ?

A: /etc/rc.d/rc.sysinit

3. I wish to implement quotas on my /home dir ? Should /home be a partition?

A: Yes.

4. Which file is configured when setting up quotas ?

A: /etc/fstab

5. What is the min I have to do, to implement both user and group quotas for

my /home partition?

A: Configure /etc/fstab with :

LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2

and then just reboot the machine !!

6. But I wish to implement only user quotas; is this /etc/fstab OK ?

LABEL=/home /home ext3 defaults, usrquota 1 2

A: No. No space in 4th field after defaults

7. Is this /etc/fstab OK ?

LABEL=/home /home ext3 defaults,userquota 1 2

A: No. It's usrquota.

8. Where is this quota info for user and group quotas stored for /home?

A: In the /home partition, in 2 data files : aquota.user, aquota.group

9. Which file keeps track of all the mounted filesystems ?

A: /etc/mtab

10 How would you implement user quotas without rebooting your machine ?

A: Configure /etc/fstab with :

1. LABEL=/home /home ext3 defaults,usrquota 1 2

2. mount -o remount /home

3. quotacheck -vc /home

4. quotaon -v /home

11 Why would you want to do a quotaoff before you do a quotacheck manually

from the CLI ?

A: Because running quotacheck while quotas are still on can corrupt the aquota.user file

12 Can a user – foo – modify/create his quota ?

A: Obviously not! Only root can do that! Or else every user will

abuse the quota system for his benefit !

13 Can foo at least see his quota status ?

A: Yes.

14 How ?

A: Login as foo and run ‘quota’.

15. What are these quotas that he will see ?

A: His soft limit, hard limit and grace period

16. How then will root set [create/modify] quotas for ‘foo’ ?

A: edquota -u foo

17. Examine the following o/p generated by the quota command given by foo:

Disk quotas for user foo (uid 500):

Filesystem blocks soft hard inodes soft hard

/dev/hda 20 100 0 14 0 0

18. How much disk space has foo already used ?

A: 20 blocks i.e. 20 KB

19. His soft limit appears to be 100 KB. Can he use 130 KB ?

A: Yes. But he will get warning messages

20. Can he use unlimited disk space and fill the partition ?

A: Yes, but only until the grace period expires. After that the soft limit

is enforced as the hard limit. Additionally, he will get warning messages

21. Then what is the use of this soft limit ? How can I remedy it ?

A: By giving a hard limit too. foo can never cross the hard limit.

22. Now I do this :

Disk quotas for user foo (uid 500):

Filesystem blocks soft hard inodes soft hard

/dev/hda 20 100 200 14 0 0

foo creates files worth 160 KB. What will happen then ?

A: He will be allowed to create up to 200 KB max. Also after 7 days

he will be shut down regardless of whether he has reached his hard

limit or not. He will have to clean up under the soft limit to work.

23. What command is used to change a user’s grace period?

A: edquota -t

24. What command is used to see the entire quota details of all users?

A: repquota -a

25. What command sets a quota template?

A: edquota -p

26. What does ‘p’ mean and how would you use it?

A: ‘prototype’.

Suppose ‘foo’ has his quota set. Then you could clone his details,

# edquota -p foo bar

bar now has the same quota limits as foo.

27. If you had 2000 users, the above would clearly be inconvenient. Solve!

A: edquota -p foo `awk -F: '$3 > 499 { print $1 }' /etc/passwd`

28. An over-limit quota generates a mail message to the user on login.

Which file would you modify to customize the mail delivered ?

A: /etc/warnquota.conf

29. If quotas were run as a daily cron job, where would you find the script

file concerned?

A: /etc/cron.daily/

30 A user owns 150 inodes; the soft limit is 100 and the hard is 200.

Which of the following is correct if the grace period has not expired?

a. The user can create no more files

b. The user cannot append data to an existing file

c. The user cannot log off without deleting some files

d. The user will receive an email notice of violation

A: d.

31 What is the purpose of ‘convertquota’ ?

A: convertquota converts the old quota files quota.user and quota.group to

the new-format files aquota.user and aquota.group, as used by 2.4.0-ac

and newer kernels and by Red Hat Linux 2.4 kernels.

The new file format allows using quotas for 32-bit uids / gids, setting

quotas for root, accounting used space in bytes (and so allowing use of

quotas in ReiserFS), and it is also architecture independent. This format

introduces a Radix Tree (a simple form of tree structure) to the quota file.

Operating System, Redhat / CEntOS / Oracle Linux

REDHAT Linux Notes (Handwritten)

Guys, these are my very own handwritten Linux notes.

I have a habit of writing notes, and this is something I wrote back in 2012 when I was a newbie in Linux 🙂

So Enjoy and click the link below to download

HariIyerLinuxNotes

Redhat / CEntOS / Oracle Linux, Secure Socket Layer, Ubuntu

Linux – Concepts – IPTABLES v/s FIREWALLD

Today we will walk through iptables and firewalld and we will learn about the history of these two along with installation & how we can configure these for our Linux distributions.

Let's begin without wasting any more time.

What is iptables?

First, we need to know what iptables is. Most senior IT professionals know about it and have worked with it as well. Iptables is an application / program that allows a user to configure the security or firewall tables provided by the Linux kernel firewall, and the chains, so that a user can add / remove firewall rules to meet his / her security requirements. Iptables uses different kernel modules and different protocols so that the user can take the best out of it. For example, iptables is used for IPv4 (IP version 4, 32-bit addresses) and ip6tables for IPv6 (IP version 6, 128-bit addresses), for both TCP and UDP. Normally, iptables rules are configured by a System Administrator, System Analyst, or IT Manager. You must have root privileges to execute each iptables rule. The Linux kernel uses the Netfilter framework to provide the various networking-related operations that can be performed with iptables. Previously, ipchains was used in most Linux distributions for the same purpose. Every iptables rule is handled directly by the Linux kernel itself, and this is known as kernel duty. Whatever GUI tools or other security tools you use to configure your server's firewall, at the end of the day they are converted into iptables rules and supplied to the kernel to perform the operation.

History of iptables

The rise of iptables began with netfilter. Paul "Rusty" Russell was the initial author and the head think tank behind netfilter / iptables. Later he was joined by many other tech people, who formed the Netfilter core team and develop & maintain the netfilter/iptables project as a joint effort, like many other open source projects. Harald Welte was the leader until 2007, and then Patrick McHardy was the head until 2013. Currently, the netfilter core team head is Pablo Neira Ayuso.

To know more about netfilter, please visit this link. To know more about the historicity of netfilter, please visit this link.

To know more about iptables history, please visit this link.

How to install iptables

Nowadays, every Linux kernel comes with iptables, and it can be found pre-built or pre-installed on every popular modern Linux distribution. On most Linux systems, iptables is installed as /usr/sbin/iptables. It can also be found in /sbin/iptables, but since iptables is more like a service rather than an "essential binary", the preferred location remains the /usr/sbin directory.

For Ubuntu or Debian

sudo apt-get install iptables

For CentOS

sudo yum install iptables-services

For RHEL

sudo yum install iptables

Iptables version

To know your iptables version, type the following command in your terminal.

sudo iptables --version

Starting & Stopping your iptables firewall

For OpenSUSE 42.1, type the following to stop.

sudo /sbin/rcSuSEfirewall2 stop

To start it again

sudo /sbin/rcSuSEfirewall2 start

For Ubuntu, type the following to stop.

sudo service ufw stop

To start it again

sudo service ufw start

For Debian & RHEL , type the following to stop.

sudo /etc/init.d/iptables stop

To start it again

sudo /etc/init.d/iptables start

For CentOS, type the following to stop.

sudo service iptables stop

To start it again

sudo service iptables start

Getting all iptables rules lists

To see all the rules that are currently present & active in your iptables, simply open a terminal and type the following.

sudo iptables -L

If no rules exist, i.e. no rules have been added so far to your iptables firewall, you will see something like the image below.

Iptables_Lists_OpenSUSE42.1

In the above picture, you can see that there are three (3) chains, INPUT, FORWARD and OUTPUT, and that no rules exist. Actually, I haven't added any yet.

Type the following to know the status of the chains of your iptables firewall.

sudo iptables -S

With the above command, you can see the rules in the form they were entered, including each chain's default policy (whether it is ACCEPT or not).

Clear all iptables rules

To clear all the rules from your iptables firewall, please type the following. This is normally known as flushing your iptables rules.

sudo iptables -F

If you want to flush the INPUT chain only, or any individual chains, issue the below commands as per your requirements.

sudo iptables -F INPUT
sudo iptables -F OUTPUT
sudo iptables -F FORWARD

ACCEPT or DROP Chains

To accept or drop a particular chain, issue any of the following command on your terminal to meet your requirements.

iptables --policy INPUT DROP

The above rule will not accept anything that is incoming to that server. To revert it again back to ACCEPT, do the following

iptables --policy INPUT ACCEPT

Same goes for other chains as well like

iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP

Note: By default, all chains of iptables ( INPUT, OUTPUT, FORWARD ) are in ACCEPT mode. This is known as Policy Chain Default Behavior.
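
A word of caution before flipping a policy to DROP on a remote machine: append rules that keep established sessions and SSH alive first, otherwise you can lock yourself out. A minimal sketch (it reuses the same state match that appears in the MySQL example further below):

sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables --policy INPUT DROP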

Allowing any port

If you are running a web server on your host, then you must allow it through your iptables firewall so that your server can listen and respond on port 80. By default, web servers run on port 80. Let's do that then.

sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

On the above line, -A stands for append, meaning we are adding a new rule to the iptables list, INPUT stands for the INPUT chain, -p stands for protocol, and --dport stands for destination port. Similarly, you can allow the SSH port as well.

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

By default, SSH runs on port 22, but it's good practice not to run SSH on port 22; always run SSH on a different port. To do so, open the /etc/ssh/sshd_config file in your favorite editor and change port 22 to another port, as sketched below.
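
A hedged sketch of the whole move (2222 is only an example port; pick your own and allow it in iptables before restarting the daemon, or you may cut off your own session):

sudo iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
sudo nano /etc/ssh/sshd_config          (change Port 22 to Port 2222)
sudo service sshd restart               (the service is called ssh on Debian/Ubuntu)

On SELinux-enabled distributions such as CentOS/RHEL you may also need to allow the new port for SSH in the SELinux policy.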

Blocking any port

Say we want to block port 135. We can do it by

sudo iptables -A INPUT -p tcp --dport 135 -j DROP

If you want to prevent your server from initiating any SSH connection to another host/server, issue the following command.

sudo iptables -A OUTPUT -p tcp --dport 22 -j DROP

By doing so, no one can use your server to initiate an SSH connection from the server. The OUTPUT chain will filter and DROP any outgoing TCP connection on port 22 towards other hosts.

Allowing specific IP with Port

sudo iptables -A INPUT -p tcp -s 0/0 --dport 22  -j ACCEPT

Here -s 0/0 stands for any source, i.e. any IP address, so your server will accept a TCP packet whose destination port is 22 from anywhere. If you want to allow only one particular IP, then use the following one.

sudo iptables -A INPUT -p tcp -s 12.12.12.12/32 --dport 22  -j ACCEPT

In the above example, you are allowing only the IP address 12.12.12.12 to connect to the SSH port; other IP addresses will not be able to connect to port 22 (assuming they are dropped by the default policy or a later rule). Similarly, you can allow a whole range by using CIDR values, such as:

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j ACCEPT

The above example shows how you can allow a whole IP block to connect on port 22. It will accept addresses from 12.12.12.0 through 12.12.12.255.

If you want to block such IP addresses range, do the reverse by replacing ACCEPT by DROP like the following

sudo iptables -A INPUT -p tcp -s 12.12.12.0/24 --dport 22  -j DROP

So, it will not allow connections on port 22 from the 12.12.12.0 to 12.12.12.255 IP address range.

Blocking ICMP

If you want to block ICMP (ping) requests to and from your server, you can try the following. The first one prevents your server from sending ICMP echo requests to other hosts.

sudo iptables -A OUTPUT -p icmp --icmp-type 8 -j DROP

Now, try to ping google.com. Your OpenSUSE server will not be able to ping google.com.

If you want block the incoming ICMP (ping) echo request for your server, just type the following on your terminal.

sudo iptables -I INPUT -p icmp --icmp-type 8 -j DROP

Now, it will not reply to any ICMP ping echo request. Say your server's IP address is 13.13.13.13; if you ping that IP, you will see that your server is not responding to the ping request.

Blocking MySql / MariaDB Port

As MySQL is holding your database, you must protect it from outside attack. Allow only your trusted application server IP addresses to connect to your MySQL server and block the others:

sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

So, it will accept MySQL connections only from the 192.168.1.0/24 IP block; a follow-up rule to drop everyone else is sketched below. By default, MySQL runs on port 3306.
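
The ACCEPT rule above only whitelists the trusted block; to actually turn away everyone else, you can append a catch-all DROP for port 3306 after it (a sketch; rule order matters because iptables evaluates rules top-down):

sudo iptables -A INPUT -p tcp --dport 3306 -j DROP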

Blocking SMTP

If you are not running any mail server on your host, or if your server is not configured to act as a mail server, you should block SMTP so that your server cannot send spam or any mail towards any domain. To block any outgoing mail from your server, do the following:

sudo iptables -A OUTPUT -p tcp --dport 25 -j DROP

Block DDoS

We are all familiar with the term DDoS. To mitigate it, issue the following command in your terminal.

iptables -A INPUT -p tcp --dport 80 -m limit --limit 20/minute --limit-burst 100 -j ACCEPT

You need to adjust the numerical values to meet your requirements; the above is just a common starting point.

You can protect more by

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_all
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 1800 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 1 > /proc/sys/net/ipv4/tcp_window_scaling 
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 1280 > /proc/sys/net/ipv4/tcp_max_syn_backlog

Blocking Port Scanning

There are hundreds of people out there scanning your server's open ports and trying to break your server's security. To block that:

sudo iptables -N block-scan
sudo iptables -A block-scan -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j RETURN
sudo iptables -A block-scan -j DROP

Here, block-scan is a name of a new chain.
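
Creating the chain alone does not filter anything; you still have to send traffic through it from one of the built-in chains. A minimal sketch:

sudo iptables -A INPUT -p tcp -j block-scan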

Blocking Bad Ports

You may need to block some bad ports for your server as well. Here is how you can do this.

badport="135,136,137,138,139,445"
sudo iptables -A INPUT -p tcp -m multiport --dports $badport -j DROP
sudo iptables -A INPUT -p udp -m multiport --dports $badport -j DROP

You can add more ports according to your needs.
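
Remember that rules added this way live only in kernel memory and are gone after a reboot. A hedged sketch of saving them on CentOS/RHEL with the iptables-services package installed (other distributions use different mechanisms, for example iptables-persistent on Debian/Ubuntu):

sudo service iptables save
sudo iptables-save > /etc/sysconfig/iptables     (equivalent manual form)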

What is firewalld?

Firewalld provides a dynamically managed firewall with support for network/firewall zones that defines the trust level of network connections or interfaces. It has support for IPv4, IPv6 firewall settings, ethernet bridges and IP sets. There is a separation of runtime and permanent configuration options. It also provides an interface for services or applications to add firewall rules directly.

The former firewall model with system-config-firewall/lokkit was static and every change required a complete firewall restart. This also included unloading the firewall netfilter kernel modules and loading the modules needed for the new configuration. Unloading the modules broke stateful firewalling and established connections. The firewall daemon, on the other hand, manages the firewall dynamically and applies changes without restarting the whole firewall, so there is no need to reload all firewall kernel modules. However, using a firewall daemon requires that all firewall modifications are done with that daemon, to make sure that the state in the daemon and the firewall in the kernel are in sync. The firewall daemon cannot parse firewall rules added by the iptables and ebtables command line tools. The daemon provides information about the current active firewall settings via D-BUS and also accepts changes via D-BUS using PolicyKit authentication methods.

So, firewalld uses zones and services instead of chains and rules for performing its operations, and it can manage rules dynamically, allowing updates & modifications without breaking existing sessions and connections.

It has the following features.

  • D-Bus API.
  • Timed firewall rules.
  • Rich Language for specific firewall rules.
  • IPv4 and IPv6 NAT support.
  • Firewall zones.
  • IP set support.
  • Simple log of denied packets.
  • Direct interface.
  • Lockdown: Whitelisting of applications that may modify the firewall.
  • Support for iptables, ip6tables, ebtables and ipset firewall backends.
  • Automatic loading of Linux kernel modules.
  • Integration with Puppet.

To know more about firewalld, please visit this link.

How to install firewalld

Before installing firewalld, please make sure you stop iptables and also make sure that it is no longer in use. To do so,

sudo systemctl stop iptables

This will stop iptables on your system.

Then make sure iptables is not used by your system anymore by issuing the below command in the terminal.

sudo systemctl mask iptables

Now, check the status of iptables.

sudo systemctl status iptables

iptables_status_unixmen

Now, we are ready to install firewalld on to our system.

For Ubuntu

To install it on Ubuntu, you must remove UFW first and then you can install Firewalld. To remove UFW, issue the below command on the terminal.

sudo apt-get remove ufw

After removing UFW, issue the below command in the terminal

sudo apt-get install firewall-applet

Or

You can open the Ubuntu Software Center and search for "firewall-applet", then install it on your Ubuntu system.

For RHEL, CentOS & Fedora

Type the below command to install firewalld on your CentOS system.

sudo yum install firewalld firewall-config -y

How to configure firewalld

Before configuring firewalld, we must know the status of firewalld after the installation. To know that, type the following.

sudo systemctl status firewalld

firewalld_status_unixmen

As firewalld works on a zone basis, we need to check all the zones and services, even though we haven't done any configuring yet.

For Zones

sudo firewall-cmd --get-active-zones

firewalld_activezones_unixmen

or

sudo firewall-cmd --get-zones

firewalld_getzones_unixmen

To know the default zone, issue the below command

sudo firewall-cmd --get-default-zone

firewalld_defaultszones_unixmen

And, For Services

sudo firewall-cmd --get-services

firewalld_services_unixmen

Here, you can see those services covered under firewalld.

Setting Default Zone

An important note: after each modification, you need to reload firewalld so that your changes take effect.

To set the default zone

sudo firewall-cmd --set-default-zone=internal

or

sudo firewall-cmd --set-default-zone=public

After changing the zone, check whether it actually changed or not.

sudo firewall-cmd --get-default-zone

Adding Port in Public Zone

sudo firewall-cmd --permanent --zone=public --add-port=80/tcp

firewalld_addport_unixmen

This will add TCP port 80 to the public zone of firewalld. You can add your desired port as well by replacing 80 with your own.

Now reload the firewalld.

sudo firewall-cmd --reload

Now, check the status to see whether tcp 80 port has been added or not.

sudo firewall-cmd --zone=public --list-ports

firewalld_statusafterport_unixmen

Here, you can see that tcp port 80 has been added.

Or even you can try something like this.

sudo firewall-cmd --zone=public --list-all

firewalld_statusall_unixmen

Removing Port from Public Zone

To remove TCP port 80 from the public zone, type the following.

sudo firewall-cmd --zone=public --remove-port=80/tcp

You will see a “success” text echoing in your terminal.

You can put your desired port as well by replacing 80 with your own port.

Adding Services in Firewalld

To add ftp service in firewalld, issue the below command

sudo firewall-cmd --zone=public --add-service=ftp

You will see a “success” text echoing in your terminal.

Similarly for adding smtp service, issue the below command

sudo firewall-cmd --zone=public --add-service=smtp

Replace ftp and smtp with the services you want to add to firewalld. Note that these changes are runtime-only; see the sketch below for making them permanent.
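
A small sketch of making a service addition permanent; without --permanent the change is runtime-only and is lost on the next reload or reboot:

sudo firewall-cmd --permanent --zone=public --add-service=ftp
sudo firewall-cmd --reload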

Removing Services from Firewalld

For removing ftp & smtp services from firewalld, issue the below command in the terminal.

sudo firewall-cmd --zone=public --remove-service=ftp
sudo firewall-cmd --zone=public --remove-service=smtp

Block Any Incoming and Any Outgoing Packet(s)

If you wish, you can block all incoming and outgoing packets/connections using firewalld. This is known as firewalld's panic mode. To turn it on, issue the below command.

sudo firewall-cmd --panic-on

You will see a “success” text echoing in your terminal.

After doing this, you will not be able to ping a host or even browse any websites.

To turn this off, issue the below command in your terminal.

sudo firewall-cmd --panic-off

Adding IP Address in Firewalld

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" accept'

By doing so, firewalld will accept IPv4 packets from the source IP 192.168.1.4.

Blocking IP Address From Firewalld

Similarly, to block any IP address

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.4" reject'

By doing so, firewalld will reject every IPv4 packet from the source IP 192.168.1.4.
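
To undo it later, the same rich rule can be removed again; as with ports and services, add --permanent to both commands if the rule should survive a reload:

sudo firewall-cmd --zone=public --remove-rich-rule='rule family="ipv4" source address="192.168.1.4" reject'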

I have stuck to the very basics of firewalld here so that you can easily understand how it works and how it differs from iptables.

That’s all for today. Hope you enjoy reading this article.

Take care.

Redhat / CEntOS / Oracle Linux, Ubuntu

Linux – Concepts – Ulimits & Sysctl

ulimit and sysctl

The ulimit and sysctl programs allow you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users.

Code Listing 1: ulimit example

# ulimit -a 
core file size          (blocks, -c) 0 
data seg size           (kbytes, -d) unlimited 
file size               (blocks, -f) unlimited 
pending signals                 (-i) 8191 
max locked memory       (kbytes, -l) 32 
max memory size         (kbytes, -m) unlimited 
open files                      (-n) 1024 
pipe size            (512 bytes, -p) 8 
POSIX message queues     (bytes, -q) 819200 
stack size              (kbytes, -s) 8192 
cpu time               (seconds, -t) unlimited 
max user processes              (-u) 8191 
virtual memory          (kbytes, -v) unlimited 
file locks                      (-x) unlimited

All these settings can be manipulated. A good example is this bash forkbomb that forks as many processes as possible and can crash systems where no user limits are set:

Warning: Do not run this in a shell! If no limits are set your system will either become unresponsive or might even crash.

Code Listing 2: A bash forkbomb

$ :(){ :|:& };:

Refer to my post: https://haritibcoblog.com/2016/07/10/linux-fork-bomb-test-epic/

Now this is not good – any user with shell access to your box could take it down. But if that user can only start 30 processes the damage will be minimal. So let’s set a process limit:

Gentoo Note: A too small number of processes can break the use of portage. So, don’t be too strict.

Code Listing 3: Setting a process limit

# ulimit -u 30 
# ulimit -a 
… 
max user processes              (-u) 30 
…

If you try to run the forkbomb now it should run, but throw error messages “fork: resource temporarily unavailable”. This means that your system has not allowed the forkbomb to start more processes. The other options of ulimit can help with similar problems, but you should be careful that you don’t lock yourself out – setting data seg size too small will even prevent bash from starting!
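
Note that ulimit values set this way apply only to the current shell and its children. To make a limit persistent for a particular user across logins, the usual place is /etc/security/limits.conf (assuming PAM's pam_limits module is enabled, as it is on most distributions); entries like the following, with foo as a placeholder username, would mirror the 30-process cap from Code Listing 3:

foo    soft    nproc    30
foo    hard    nproc    30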

sysctl is a similar tool: it allows you to configure kernel parameters at runtime. If you wish to keep settings persistent across reboots you should edit /etc/sysctl.conf, but be aware that wrong settings may break things in unforeseen ways.
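
A small sketch of making a setting persistent and applying it without a reboot (fs.file-max and the value are only an example; substitute the variable you actually need):

# echo "fs.file-max = 200000" >> /etc/sysctl.conf
# sysctl -p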

Code Listing 4: Exploring sysctl variables

# sysctl -a 
… 
vm.swappiness = 60 
…

The list of variables is quite long (367 lines on my system), but I picked out vm.swappiness here. It controls how aggressive swapping will be, the higher it is (with a maximum of 100) the more swap will be used. This can affect performance a lot on systems with little memory, depending on load and other factors.

Code Listing 5: Reducing swappiness

# sysctl vm.swappiness=0 
vm.swappiness = 0

The effects of changing this setting are usually not felt instantly. But you can change many settings, especially network-related, this way. For servers this can offer a nice performance boost, but as with ulimit careless usage might cause your system to misbehave or slow down. If you don’t know what a variable controls, you should not modify it!

Network, Redhat / CEntOS / Oracle Linux, Ubuntu

Linux Command – Using Netstat the Proper Way !!

How to install netstat

netstat is a useful tool for checking your network configuration and activity. It is in fact a collection of several tools lumped together.

Install “net-tools” package using yum

[root@livedvd ~]$ sudo yum install net-tools
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirror.secureax.com
* extras: centos.mirror.secureax.com
* updates: centos.mirror.secureax.com
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64 0:2.0-0.17.20131004git.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================
Package         Arch         Version                          Repository  Size
================================================================================
Installing:
net-tools       x86_64       2.0-0.17.20131004git.el7         base       304 k
Transaction Summary
================================================================================
Install  1 Package
Total download size: 304 k
Installed size: 917 k
Is this ok [y/d/N]: y
Downloading packages:
net-tools-2.0-0.17.20131004git.el7.x86_64.rpm              | 304 kB   00:00
Running transaction check

Running transaction test
Transaction test succeeded
Running transaction
Installing : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Verifying  : net-tools-2.0-0.17.20131004git.el7.x86_64                    1/1
Installed:
net-tools.x86_64 0:2.0-0.17.20131004git.el7

 

Complete!

 

The netstat Command

Displaying the Routing Table

When you invoke netstat with the -r flag, it displays the kernel routing table in the way we've been doing with route. On vstout, it produces:

# netstat -nr

 Kernel IP routing table
 Destination   Gateway      Genmask         Flags  MSS Window  irtt Iface
 127.0.0.1     *            255.255.255.255 UH       0 0          0 lo
 172.16.1.0    *            255.255.255.0   U        0 0          0 eth0
 172.16.2.0    172.16.1.1   255.255.255.0   UG       0 0          0 eth0

The -n option makes netstat print addresses as dotted quad IP numbers rather than the symbolic host and network names. This option is especially useful when you want to avoid address lookups over the network (e.g., to a DNS or NIS server).

The second column of netstat‘s output shows the gateway to which the routing entry points. If no gateway is used, an asterisk is printed instead. The third column shows the “generality” of the route, i.e., the network mask for this route. When given an IP address to find a suitable route for, the kernel steps through each of the routing table entries, taking the bitwise AND of the address and the genmask before comparing it to the target of the route.

The fourth column displays the following flags that describe the route:

G The route uses a gateway.
U The interface to be used is up.
H Only a single host can be reached through the route. For example, this is the case for the loopback entry 127.0.0.1.
D This route is dynamically created. It is set if the table entry has been generated by a routing daemon like gated or by an ICMP redirect message
M This route is set if the table entry was modified by an ICMP redirect message.
! The route is a reject route and datagrams will be dropped.

 

The next three columns show the MSS, Window and irtt that will be applied to TCP connections established via this route. The MSS is the Maximum Segment Size and is the size of the largest datagram the kernel will construct for transmission via this route. The Window is the maximum amount of data the system will accept in a single burst from a remote host. The acronym irtt stands for "initial round trip time." The TCP protocol ensures that data is reliably delivered between hosts by retransmitting a datagram if it has been lost. The TCP protocol keeps a running count of how long it takes for a datagram to be delivered to the remote end, and an acknowledgement to be received, so that it knows how long to wait before assuming a datagram needs to be retransmitted; this process is called the round-trip time. The initial round-trip time is the value that the TCP protocol will use when a connection is first established. For most network types, the default value is okay, but for some slow networks, notably certain types of amateur packet radio networks, the time is too short and causes unnecessary retransmission. The irtt value can be set using the route command. Values of zero in these fields mean that the default is being used.

Finally, the last field displays the network interface that this route will use.

Displaying Interface Statistics

When invoked with the -i flag, netstat displays statistics for the network interfaces currently configured. If the -a option is also given, it prints all interfaces present in the kernel, not only those that have been configured currently. On vstout, the output from netstat will look like this:

# netstat -i
 Kernel Interface table
 Iface MTU Met  RX-OK RX-ERR RX-DRP RX-OVR  TX-OK TX-ERR TX-DRP TX-OVR Flags
 lo      0   0   3185      0      0      0   3185      0      0      0 BLRU
 eth0 1500   0 972633     17     20    120 628711    217      0      0 BRU

The MTU and Met fields show the current MTU and metric values for that interface. The RX and TX columns show how many packets have been received or transmitted error-free (RX-OK/TX-OK) or damaged (RX-ERR/TX-ERR); how many were dropped (RX-DRP/TX-DRP); and how many were lost because of an overrun (RX-OVR/TX-OVR).

The last column shows the flags that have been set for this interface. These characters are one-character versions of the long flag names that are printed when you display the interface configuration with ifconfig:

B A broadcast address has been set.
L This interface is a loopback device.
M All packets are received (promiscuous mode).
O ARP is turned off for this interface.
P This is a point-to-point connection.
R Interface is running.
U Interface is up.

 

Displaying Connections

netstat supports a set of options to display active or passive sockets. The options -t, -u, -w, and -x show active TCP, UDP, RAW, or Unix socket connections. If you provide the -a flag in addition, sockets that are waiting for a connection (i.e., listening) are displayed as well. This display will give you a list of all servers that are currently running on your system.

Invoking netstat -ta on vlager produces this output:

$ netstat -ta
 Active Internet Connections
 Proto Recv-Q Send-Q Local Address    Foreign Address    (State)
 tcp        0      0 *:domain         *:*                LISTEN
 tcp        0      0 *:time           *:*                LISTEN
 tcp        0      0 *:smtp           *:*                LISTEN
 tcp        0      0 vlager:smtp      vstout:1040        ESTABLISHED
 tcp        0      0 *:telnet         *:*                LISTEN
 tcp        0      0 localhost:1046   vbardolino:telnet  ESTABLISHED
 tcp        0      0 *:chargen        *:*                LISTEN
 tcp        0      0 *:daytime        *:*                LISTEN
 tcp        0      0 *:discard        *:*                LISTEN
 tcp        0      0 *:echo           *:*                LISTEN
 tcp        0      0 *:shell          *:*                LISTEN
 tcp        0      0 *:login          *:*                LISTEN

This output shows most servers simply waiting for an incoming connection. However, the fourth line shows an incoming SMTP connection from vstout, and the sixth line tells you there is an outgoing telnet connection to vbardolino.

Using the -a flag by itself will display all sockets from all families.

Top 20 netstat commands for network management

  1. Listing all the LISTENING Ports of TCP and UDP connections

Listing all ports (both TCP and UDP) using netstat -a option.

# netstat -a | more

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (servers and established)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  2. Listing TCP Ports connections

Listing only TCP (Transmission Control Protocol) port connections using netstat -at.

# netstat -at

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT

 

  3. Listing UDP Ports connections

Listing only UDP (User Datagram Protocol ) port connections using netstat -au.

# netstat -au

Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*
 udp        0      0 *:mdns                      *:*

 

  4. Listing all LISTENING Connections

Listing all active listening ports connections with netstat -l.

# netstat -l

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 udp        0      0 *:35036                     *:*
 udp        0      0 *:npmp-local                *:*

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     16972  /tmp/orbit-root/linc-76b-0-6fa08790553d6
 unix  2      [ ACC ]     STREAM     LISTENING     17149  /tmp/orbit-root/linc-794-0-7058d584166d2
 unix  2      [ ACC ]     STREAM     LISTENING     17161  /tmp/orbit-root/linc-792-0-546fe905321cc
 unix  2      [ ACC ]     STREAM     LISTENING     15938  /tmp/orbit-root/linc-74b-0-415135cb6aeab

 

  5. Listing all TCP Listening Ports

Listing all active listening TCP ports by using option netstat -lt.

# netstat -lt

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 *:dctp                      *:*                         LISTEN
 tcp        0      0 *:mysql                     *:*                         LISTEN
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:munin                     *:*                         LISTEN
 tcp        0      0 *:ftp                       *:*                         LISTEN
 tcp        0      0 localhost.localdomain:ipp   *:*                         LISTEN
 tcp        0      0 localhost.localdomain:smtp  *:*                         LISTEN
 tcp        0      0 *:http                      *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 *:https                     *:*                         LISTEN

 

  6. Listing all UDP Listening Ports

Listing all active listening UDP ports by using option netstat -lu.

# netstat -lu

Active Internet connections (only servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 udp        0      0 *:39578                     *:*
 udp        0      0 *:meregister                *:*
 udp        0      0 *:vpps-qua                  *:*
 udp        0      0 *:openvpn                   *:*
 udp        0      0 *:mdns                      *:*
 udp        0      0 *:sunrpc                    *:*
 udp        0      0 *:ipp                       *:*
 udp        0      0 *:60222                     *:*
 udp        0      0 *:mdns                      *:*

 

  7. Listing all UNIX Listening Ports

Listing all active UNIX listening ports using netstat -lx.

# netstat -lx

Active UNIX domain sockets (only servers)
 Proto RefCnt Flags       Type       State         I-Node Path
 unix  2      [ ACC ]     STREAM     LISTENING     4171   @ISCSIADM_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     5767   /var/run/cups/cups.sock
 unix  2      [ ACC ]     STREAM     LISTENING     7082   @/tmp/fam-root-
 unix  2      [ ACC ]     STREAM     LISTENING     6157   /dev/gpmctl
 unix  2      [ ACC ]     STREAM     LISTENING     6215   @/var/run/hald/dbus-IcefTIUkHm
 unix  2      [ ACC ]     STREAM     LISTENING     6038   /tmp/.font-unix/fs7100
 unix  2      [ ACC ]     STREAM     LISTENING     6175   /var/run/avahi-daemon/socket
 unix  2      [ ACC ]     STREAM     LISTENING     4157   @ISCSID_UIP_ABSTRACT_NAMESPACE
 unix  2      [ ACC ]     STREAM     LISTENING     60835836 /var/lib/mysql/mysql.sock
 unix  2      [ ACC ]     STREAM     LISTENING     4645   /var/run/audispd_events
 unix  2      [ ACC ]     STREAM     LISTENING     5136   /var/run/dbus/system_bus_socket
 unix  2      [ ACC ]     STREAM     LISTENING     6216   @/var/run/hald/dbus-wsUBI30V2I
 unix  2      [ ACC ]     STREAM     LISTENING     5517   /var/run/acpid.socket
 unix  2      [ ACC ]     STREAM     LISTENING     5531   /var/run/pcscd.comm

 

  8. Showing Statistics by Protocol

Displays statistics by protocol. By default, statistics are shown for the TCP, UDP, ICMP, and IP protocols. Combine -s with -t or -u (as in the next two examples) to restrict the output to a single protocol.

# netstat -s

Ip:
 2461 total packets received
 0 forwarded
 0 incoming packets discarded
 2431 incoming packets delivered
 2049 requests sent out
 Icmp:
 0 ICMP messages received
 0 input ICMP message failed.
 ICMP input histogram:
 1 ICMP messages sent
 0 ICMP messages failed
 ICMP output histogram:
 destination unreachable: 1
 Tcp:
 159 active connections openings
 1 passive connection openings
 4 failed connection attempts
 0 connection resets received
 1 connections established
 2191 segments received
 1745 segments send out
 24 segments retransmited
 0 bad segments received.
 4 resets sent
 Udp:
 243 packets received
 1 packets to unknown port received.
 0 packet receive errors
 281 packets sent

 

  9. Showing Statistics by TCP Protocol

Showing statistics of only TCP protocol by using option netstat -st.

# netstat -st

Tcp:
 2805201 active connections openings
 1597466 passive connection openings
 1522484 failed connection attempts
 37806 connection resets received
 1 connections established
 57718706 segments received
 64280042 segments send out
 3135688 segments retransmited
 74 bad segments received.
 17580 resets sent

 

  10. Showing Statistics by UDP Protocol
# netstat -su

Udp:
 1774823 packets received
 901848 packets to unknown port received.
 0 packet receive errors
 2968722 packets sent

 

  11. Displaying Service name with PID

To display service names along with their PID numbers, use the option netstat -tp, which adds a "PID/Program name" column.

# netstat -tp

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
 tcp        0      0 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED 2179/sshd
 tcp        1      0 192.168.0.2:59292           www.gov.com:http            CLOSE_WAIT  1939/clock-applet
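
The flags can of course be combined; a common example is listing every listening TCP socket with numeric ports and the owning program in one shot:

# netstat -tlnp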

 

  12. Displaying Promiscuous Mode

With the -ac switch, netstat prints the selected information and refreshes the screen every five seconds (the default refresh interval is one second).

# netstat -ac 5 | grep tcp

tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:58642                     *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        1      0 192.168.0.2:59447           www.gov.com:http            CLOSE_WAIT
 tcp        0     52 192.168.0.2:ssh             192.168.0.1:egs             ESTABLISHED
 tcp        0      0 *:sunrpc                    *:*                         LISTEN
 tcp        0      0 *:ssh                       *:*                         LISTEN
 tcp        0      0 localhost:ipp               *:*                         LISTEN
 tcp        0      0 localhost:smtp              *:*                         LISTEN
 tcp        0      0 *:59482                     *:*                         LISTEN

 

  13. Displaying Kernel IP routing

Display Kernel IP routing table with netstat and route command.

# netstat -r

Kernel IP routing table
 Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
 192.168.0.0     *               255.255.255.0   U         0 0          0 eth0
 link-local      *               255.255.0.0     U         0 0          0 eth0
 default         192.168.0.1     0.0.0.0         UG        0 0          0 eth0

 

  14. Showing Network Interface Transactions

Showing network interface packet transactions including both transferring and receiving packets with MTU size.

# netstat -i

Kernel Interface table
 Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
 eth0       1500   0     4459      0      0      0     4057      0      0      0 BMRU
 lo        16436   0        8      0      0      0        8      0      0      0 LRU

 

  15. Showing Kernel Interface Table

Showing Kernel interface table, similar to ifconfig command.

# netstat -ie

Kernel Interface table
 eth0      Link encap:Ethernet  HWaddr 00:0C:29:B4:DA:21
 inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:feb4:da21/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:4486 errors:0 dropped:0 overruns:0 frame:0
 TX packets:4077 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:2720253 (2.5 MiB)  TX bytes:1161745 (1.1 MiB)
 Interrupt:18 Base address:0x2000

lo        Link encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:16436  Metric:1
 RX packets:8 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

 

  1. Displaying IPv4 and IPv6 Information

Displays multicast group membership information for both IPv4 and IPv6.

# netstat -g

IPv6/IPv4 Group Memberships
 Interface       RefCnt Group
 --------------- ------ ---------------------
 lo              1      all-systems.mcast.net
 eth0            1      224.0.0.251
 eth0            1      all-systems.mcast.net
 lo              1      ff02::1
 eth0            1      ff02::202
 eth0            1      ff02::1:ffb4:da21
 eth0            1      ff02::1

 

  1. Print Netstat Information Continuously

To refresh the netstat output every few seconds, use the following command; it will keep printing the information continuously until interrupted.

# netstat -c

Active Internet connections (w/o servers)
 Proto Recv-Q Send-Q Local Address               Foreign Address             State
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36944 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42110 TIME_WAIT
 tcp        0    132 tecmint.com:ssh    115.113.134.3.static-:64662 ESTABLISHED
 tcp        0      0 tecmint.com:http   crawl-66-249-71-240.g:41166 TIME_WAIT
 tcp        0      0 localhost.localdomain:54823 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 localhost.localdomain:54822 localhost.localdomain:smtp  TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg010.shr.prod.s:42091 TIME_WAIT
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:36998 TIME_WAIT

 

  1. Finding Unsupported Address Families

Lists the address families that are not configured or supported on this system, with some useful information.

# netstat --verbose

netstat: no support for `AF IPX' on this system.
 netstat: no support for `AF AX25' on this system.
 netstat: no support for `AF X25' on this system.
 netstat: no support for `AF NETROM' on this system.

 

  1. Finding Listening Programs

Find out which programs are listening on a given port (http, in this example).

# netstat -ap | grep http

tcp        0      0 *:http                      *:*                         LISTEN      9056/httpd
 tcp        0      0 *:https                     *:*                         LISTEN      9056/httpd
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35248 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57783 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg007.shr.prod.s:57769 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg008.shr.prod.s:35270 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41637 TIME_WAIT   -
 tcp        0      0 tecmint.com:http   sg2nlhg009.shr.prod.s:41614 TIME_WAIT   -
 unix  2      [ ]         STREAM     CONNECTED     88586726 10394/httpd

 

  1. Displaying RAW Network Statistics
# netstat --statistics --raw

Ip:
 62175683 total packets received
 52970 with invalid addresses
 0 forwarded
 Icmp:
 875519 ICMP messages received
 destination unreachable: 901671
 echo request: 8
 echo replies: 16253
 IcmpMsg:
 InType0: 83
 IpExt:
 InMcastPkts: 117

 

Source :- UnixMen

Network, Redhat / CEntOS / Oracle Linux, Ubuntu

Knockd – Detailed And Simpler (Silent Assassin….)

As I could see, there are a lot of articles about knockd and its implementation. So what makes this one unique? I have kept it simple but detail oriented, and have commented on the controversies and criticism that exist.


Here is an outline on what I’ve discussed.

What is port knocking?

What is knockd?

How does it work?

Installation

What we are trying to achieve

Pre-requisite before implementation of knockd:

Implementation scenario

Testing

Disclaimer

So, here we go.

What is port knocking?

Wikipedia Definition:

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of pre-specified closed ports (in this case, telnet). Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specific port(s)

/* for the purposes of this article, the port in question is ssh port 22 */

Essentially, every request has to knock on the door (the firewall) to get through it; knocking is necessary to get past the door. You can implement this either with knockd plus iptables, or with iptables alone.
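For reference, here is a minimal sketch of the iptables-only variant using the recent match module. The knock port (7000) and the 30-second window are purely illustrative assumptions, not part of the setup described later in this article:

# A single-knock sketch (assumed ports): hitting TCP port 7000 marks the source IP,
# and only marked IPs may reach ssh (port 22) for the next 30 seconds.
iptables -A INPUT -p tcp --dport 7000 -m recent --name KNOCK --set -j DROP
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

A single knock port is of course far weaker than the multi-port sequences knockd provides.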

Now, using knockd.

What is knockd?

knockd is a port-knock server. It listens to all traffic on an Ethernet interface, looking for special “knock” sequences of port-hits. A client makes these port-hits by sending a TCP (or UDP) packet to a port on the server. This port need not be open — since knockd listens at the link-layer level, it sees all traffic even if it’s destined for a closed port. When the server detects a specific sequence of port-hits, it runs a command defined in its configuration file. This can be used to open up holes in a firewall for quick access.

How does it work?

  1. The knockd daemon is installed and running on the server.
  2. Some port sequences (TCP, UDP, or both) are configured, along with the appropriate action for each sequence.
  3. Once knockd sees a defined port sequence, it runs the configured action for that sequence.

Note:

It is completely stealthy and does not open any ports on the server by default.

When a port knock is successfully used to open a port, the firewall rules are generally only opened to the ip_address that supplied the correct knock.

Installation

Note: Don't copy/paste the commands. Type them manually to avoid errors caused by formatting.

# yum install libpcap*

/* dependency – the trailing * also installs libpcap-devel, which is a prerequisite as well */

There are several ways to install it; I have followed the RPM installation.

Download suitable rpm package from http://pkgs.repoforge.org/knock/

Then run,

# rpm -ivh knock-0.5-3.el6.rf.x86_64.rpm

/*Here, I have downloaded knock 0.5-3 for 64-bit centos and hence the rpm name*/

Now, what all got installed?

Knockd – knock server daemon

Knock – knock client, which aids in knocking.

Note that this (knock) is the default client that comes along with knockd; there are other, more advanced clients such as hping, sendip and packit.
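As a rough illustration of one such alternative, assuming hping3 is installed on the client, individual knocks (the ports here are arbitrary examples) could be sent like this:

# one UDP knock and one TCP SYN knock with hping3 (-c 1 = send a single packet)
hping3 -2 -p 2222 -c 1 server_ip
hping3 -S -p 3333 -c 1 server_ip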

What we are trying to achieve:

A way to stop the attacks altogether, yet allow ssh access from anywhere, when needed.

Pre-requisite before implementation of knockd:

As mentioned earlier, an existing firewall (iptables) is a pre-requisite.

Follow the steps below to configure the firewall.

# iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT

-I ----- insert the rule as the first line of the firewall setup

-p ----- protocol

-m ----- match against the states RELATED and ESTABLISHED, in this case

-j ----- jump to the action, which is ACCEPT here.

/* This rule allows currently ongoing sessions through the firewall. It is essential: if you currently have a remote SSH session open to this machine, it will be preserved and not terminated by the later rules that block ssh or all services */

# iptables -I INPUT -p icmp -j ACCEPT

/* This is to make your machine ping-able from any machine, so that you can check the availability of your machine (whether it’s up or down) */

# iptables -A INPUT -j REJECT

/* Reject everything else. This rule is appended as the last line: if it were inserted as the first line, no other rule would be evaluated and every request would be rejected */

Implementation scenario:

Now, try to ssh to the machine where you have implemented the firewall. Let's call this machine the server.

You will not be able to ssh to the server, since the firewall on it rejects everything except ongoing sessions and ping requests.

Now, knockd implementation:

Now, on the server where you have installed knockd, run the following commands:

# vi /etc/knockd.conf

/*As a result of rpm installation, this configuration file will exist */

Edit the file as below and save/exit.

[options]
logfile = /var/log/knockd.log

[opencloseSSH]
sequence      = 2222:udp,3333:tcp,4444:udp
seq_timeout   = 15
start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
cmd_timeout   = 10
stop_command  = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags      = syn

The first line of the file defines where errors and warnings pertaining to knockd get logged. If you are unable to establish a connection between client and server using knockd/knock, the server log is the first place to check.

sequence        = 2222:udp,3333:tcp,4444:udp                                /* the sequence of knocking */

seq_timeout     = 15                                                                         /* once the above mentioned sequence is knocked, it’s valid only for next 15 seconds */

start_command   = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* once the right sequence is knocked, the above command gets executed, where %IP% is the IP address of the host which knocks. This allows an ssh connection from the knocker (client) to the server. Note that the ssh connection needs to be established within 15 seconds of the knock. Also note that I have used iptables -I, since I want this rule inserted as the first line; if it were appended, it would have no effect, because the reject-everything rule comes before it in the list */

cmd_timeout     = 10                       /* the stop_command will be executed 10 seconds after the start_command execution. */

stop_command    = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

/* Here I am deleting the rule which I inserted to allow the SSH connection from the knocker (client) to the server. Doesn't that end my ssh connection? It doesn't.

How? The existing rule in my iptables, # iptables -I INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT, lets me retain the established connection. If that rule is not present, your ssh connection will be lost after 10 seconds (i.e. cmd_timeout) */

tcpflags                  = syn                       /* In TCP 3 way handshake, Client sends a TCP SYNchronize packet to Server */

Note that there are other ways to configure your knockd server. For example, one port sequence can open ssh and another port sequence can close it (unlike the single-sequence setup above); in that case, after terminating your ssh session, you need to manually hit the closing sequence from the client in order to close the ssh port again.
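A minimal sketch of that alternative layout (the ports chosen here are illustrative assumptions, not the configuration used below):

[options]
logfile = /var/log/knockd.log

[openSSH]
sequence    = 5000:tcp,6000:tcp,7000:tcp
seq_timeout = 15
command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags    = syn

[closeSSH]
sequence    = 7000:tcp,6000:tcp,5000:tcp
seq_timeout = 15
command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
tcpflags    = syn

With that layout, knocking 5000, 6000, 7000 opens port 22 for the knocker's IP, and knocking the reverse sequence closes it again.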

We have now configured our server to allow ssh for a client only when that client hits the right sequence, which here is 2222:udp,3333:tcp,4444:udp.

Now you need to start the knockd service. Copy the script below as /etc/rc.d/init.d/knockd on your machine.

Runlevel script.

#!/bin/sh
# description: Start and stop knockd

# Check that the config file exists
[ -f /etc/knockd.conf ] || exit 0

# Source function library
. /etc/rc.d/init.d/functions

# Source networking configuration
. /etc/sysconfig/network

# Check that networking is up
[ "$NETWORKING" = "no" ] && exit 0

start() {
    echo "Starting knockd ..."
    /usr/sbin/knockd &
}

stop() {
    echo "Shutting down knockd ..."
    kill `pidof /usr/sbin/knockd`
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    ;;
esac

exit 0

Now, run the following commands, so that you shall start/stop/restart knockd like other services

# chmod 755 /etc/rc.d/init.d/knockd        /* making the script executable by root */
# chkconfig --add knockd                   /* adding knockd to the chkconfig list */
# chkconfig --level 35 knockd on           /* service will be started as part of runlevels 3 & 5 */
# service knockd start

Whenever you modify /etc/knockd.conf, you need to restart the service.
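For example, after editing the sequence or the commands:

# service knockd restart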

Testing:

From client, run the following commands.

Note that there are other ways to knock; here I am using the knock client which comes along with the knockd package, so you need to have it installed on the client as well.

Now,

# knock -v server_ip 2222:udp 3333:tcp 4444:udp; ssh server_ip

-v for verbose

server_ip substituted by your server’s ip address.

We are knocking the sequence which we defined in the server's /etc/knockd.conf file.

Now you should be able to establish an ssh session to the server. The port then closes again for further sessions, so every time you want to ssh to the server, you need to knock first and then ssh.
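If you do this often, a small shell function on the client can combine the two steps; this is just a convenience sketch (the function name kssh is my own choice, and it assumes the sequence configured above):

# add to ~/.bashrc on the client
kssh() {
    knock -v "$1" 2222:udp 3333:tcp 4444:udp
    ssh "$1"
}
# usage: kssh server_ip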

Comments on controversies and criticism:

What if the knock sequence is discovered by a brute force attack?

Excerpt from wikipedia answers this:


Consider that, if an external attacker did not know the port knock sequence, even the simplest of sequences would require a massive brute force effort in order to be discovered. A three-knock simple TCP sequence (e.g. port 1000, 2000, 3000) would require an attacker without prior knowledge of the sequence to test every combination of three ports in the range 1-65535, and then to scan each port in between to see if anything had opened. As a stateful system, the port would not open until after the correct three-knock sequence had been received in order, without other packets in between.
That equates to a maximum of 65536^3 packets in order to obtain and detect a single successful opening, in the worst case scenario. That's 281,474,976,710,656 or over 281 trillion packets. On average, an attempt would take roughly half that figure, about 140 trillion packets, to successfully open a single, simple three-port TCP-only knock by brute force. This is made even more impractical when knock attempt-limiting is used to stop brute force attacks, and when longer and more complex sequences are used.

Also, there are other port knocking solutions, where the knocks are encrypted, which makes it even harder to hack.

When we already have a simple, robust and reliable solution called VPN access, why go for port knocking?

Use port knocking plus VPN tunnels for added security. If you have a VPN server in your network, you can protect it from brute force attacks by hiding it behind a knock sequence.

Disclaimer:

Port knocking is not intended to be a complete solution, but it can be an added layer of security.

Also, it needs to be configured properly according to the needs.

A network that is connected to the Internet is never secure. There is no perfect lock!

The best that we can do is to make the system as secure as possible for today. Possibly tomorrow someone will figure out how to get through our security code.

A security system is only as strong as its weakest link.

But I believe port knocking adds security to the existing setup, and doesn't make your system vulnerable to attacks, as a few people claim.

Thanks for reading. Cheers !

 

Credits :- GOKUL (Unixmen)

Redhat / CEntOS / Oracle Linux, Ubuntu

Linux Command – Dstat (Culprit Catcher)

Introduction

Whether a system is used as a web server or as a normal PC, keeping its resource usage under control is a near necessity in a daily workflow. GNU/Linux provides several tools for monitoring purposes: iostat, vmstat, netstat, ifstat and others. Every system admin knows these tools and how to analyse their outputs. However, there is an alternative: a single program which can replace almost all of them. Its name is dstat.

With dstat, users can view all system resources instantly. For instance, someone could compare the network bandwidth numbers directly with the disk throughput, getting a more general view of what's going on; this is very useful for troubleshooting, or for analysing a system for benchmarking.

Features

  • Combines vmstat, iostat, ifstat, netstat information and more
  • Shows stats in exactly the same timeframe
  • Enable/order counters as they make most sense during analysis/troubleshooting
  • Modular design
  • Written in Python, so easily extendable
  • Includes many external plugins
  • Can show interrupts per device
  • Very accurate timeframes, no timeshifts when system is stressed
  • Shows exact units and limits conversion mistakes
  • Indicate different units with different colors
  • Show intermediate results when delay > 1
  • Allows exporting CSV output, which can be imported into Gnumeric and Excel to make graphs

Installation

Installing dstat is a simple task, since it is packaged as .deb and .rpm.
For Debian-based distros:

# apt install dstat

On RHEL, CentOS and Fedora:

# yum install dstat

Getting sta(r)ted

To run the program, users can just write the command in its simplest form:

$ dstat

As a result, it will show different stats in a table, which helps admins get a general overview.


Plugins

First of all, it’s important to note that dstat comes with a lot of plugins; for obtaining a complete list:

$ dstat --list

which returns:


Of course, it is possible to add more, for some special use case.

So, let’s see how they work.

To use a plugin, simply pass its name as a command-line argument. For instance, if you only need to check the total CPU usage, you can run:

$ dstat --cpu

or, in a shorter form

$ dstat -c

As previously said, the program can show different stats at the same time. As an example:

$ dstat --cpu --top-cpu --disk --top-bio --top-latency


This command, which is a combination of internal stats and external plugins, will give the total CPU usage, most expensive CPU process, disk statistics, most expensive block I/O process and the process with the highest total latency (expressed in milliseconds).

Output

By default, dstat displays its output in columns (as a table, indeed) directly in the terminal window, in real time, for immediate analysis by a human. But it can also send the output to a .csv file, which software like LibreOffice Calc or Gnumeric can use for creating graphs or any kind of statistical analysis. Exporting data to .csv is quite easy:

$ dstat [arguments] --output /path/to/outputfile
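For example, a hedged sketch (the file path, counters, delay and count are arbitrary choices): sample CPU and memory once every 5 seconds, 12 times, writing the results to a CSV file at the same time:

$ dstat --cpu --mem --output /tmp/dstat-report.csv 5 12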

Bash, Main, Redhat / CEntOS / Oracle Linux, Ubuntu, Windows 2012 R2

Source – UNIX, Destination – Windows Cygwin (SSH Password-less Authentication)

On Windows Server

On the Windows server, create the user, say MyUser, locally, and then add the same user inside Cygwin:

cd C:\cygwin

Cygwin.bat

 

Administrator@MYWINDOWSHOST ~

$ /bin/mkpasswd -l -u MyUser >>/etc/passwd

MyUser@MYWINDOWSHOST ~

$ ls

MyUser@MYWINDOWSHOST ~

$ ls -al

total 24

drwxr-xr-x+ 1 MyUser        None    0 Mar 17 12:54 .

drwxrwxrwt+ 1 Administrator None    0 Mar 17 12:54 ..

-rwxr-xr-x  1 MyUser        None 1494 Oct 29 15:34 .bash_profile

-rwxr-xr-x  1 MyUser        None 6054 Oct 29 15:34 .bashrc

-rwxr-xr-x  1 MyUser        None 1919 Oct 29 15:34 .inputrc

-rwxr-xr-x  1 MyUser        None 1236 Oct 29 15:34 .profile

MyUser@MYWINDOWSHOST ~

$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):

Created directory '/home/MyUser/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/MyUser/.ssh/id_rsa.

Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.

The key fingerprint is:

7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYWINDOWSHOST

The key's randomart image is:

+--[ RSA 2048]----+

|       .++o      |

|        .+..     |

|        . o. . o |

|         o .E *.+|

|        S ...* =o|

|           .o o o|

|               = |

|              o  |

|                 |

+-----------------+

MyUser@MYWINDOWSHOST ~

$ cd .ssh

MyUser@MYWINDOWSHOST ~/.ssh

$ ls

id_rsa  id_rsa.pub

MyUser@MYWINDOWSHOST ~/.ssh

$ touch authorized_keys

Generate the key on the source UNIX server:

 $ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/MyUser/.ssh/id_rsa):

Created directory '/home/MyUser/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/MyUser/.ssh/id_rsa.

Your public key has been saved in /home/MyUser/.ssh/id_rsa.pub.

The key fingerprint is:

7d:40:12:1c:7b:c1:7f:39:ac:f5:1a:c5:73:ae:81:34 MyUser@MYWINDOWSHOST

The key's randomart image is:

+--[ RSA 2048]----+

|       .++o      |

|        .+..     |

|        . o. . o |

|         o .E *.+|

|        S ...* o|

|           .o o o|

|               = |

|              o  |

|                 |

+-----------------+

MyUser@MYUNIXHOST ~

$ cd .ssh

MyUser@MYUNIXHOST ~/.ssh

$ ls

id_rsa  id_rsa.pub

Then push the public key (id_rsa.pub) into the destination's authorized_keys:

cat .ssh/id_rsa.pub | ssh MyUser@MYWINDOWSHOST 'cat >> .ssh/authorized_keys'
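As a side note, on clients that ship the OpenSSH helper scripts, ssh-copy-id performs the same append for you (assuming password logins to the Cygwin sshd still work at this point):

$ ssh-copy-id MyUser@MYWINDOWSHOST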

 

ssh  -v MyUser@MYWINDOWSHOST

--> YOU SHOULD NOW BE ABLE TO LOG IN WITHOUT A PASSWORD

Credits :- Priyanka Padad (Operations Expert)

Redhat / CEntOS / Oracle Linux, Ubuntu

Linux Commands – chage (Manage your Passwords for User Accounts in UNIX)

chage

chage enables you to modify the parameters surrounding password aging (last change, expiration, inactivity). We can edit and manage the password expiration details with the chage command. The root user can execute chage for any user account; other users can only view their own aging information.

Syntax: chage [options] USERNAME

Options:

-d LAST_DAY Indicates the day the password was last changed
-E EXPIRE_DATE Sets the account expiration date
-I INACTIVE Sets the number of days of inactivity after the password expires before the account is locked
-l Shows account aging information
-m MIN_DAYS Sets the minimum number of days between password changes
-M MAX_DAYS Sets the maximum number of days a password is valid
-W WARN_DAYS Sets the number of days to warn before the password expires

For example, we can view a particular user's password aging information using the chage command as follows.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 365
Maximum number of days between password change          : 500
Number of days of warning before password expires       : 7

How do you force users to change their password? This is a common question for system administrators who want their users to change passwords at regular intervals for security reasons.

For this, we set a password expiry date for a user using the chage option -M. The root user can set the password expiry date for any user.

Please note that option -M will update both “Password expires” and “Maximum number of days between password change” entries as shown below.

Syntax: # chage -M number-of-days username
[root@localhost ~]# chage -M 60 root

The above command sets the password expiry to 60 days.

[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Dec 31, 2025
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Set the Account Expiry Date for an User by using -E option

We can also use the chage command to set the account expiry date, using option -E. The date is given in "YYYY-MM-DD" format. This updates the "Account expires" value as shown below.

[root@localhost ~]# chage -E "2017-10-03" root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : never
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

Force the user account to be locked after n number of inactivity days
Typically, if the password has expired, users are forced to change it during their next login. You can also set an additional condition: if, after the password expires, the user does not log in for 6 days, their account is automatically locked, using option -I as shown below. The "Password inactive" date is then set that many days after the "Password expires" value. Once an account is locked, only system administrators can unlock it.

# chage -I 6 root
# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 0
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the minimum number of days between password changes

We can set the minimum number of days between password changes by using the -m option with the chage command, as follows.

chage -m 10 USERNAME
[root@localhost ~]# chage -m 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 7

How to set the number of days of warning before password expires

We can set the number of days of warning before the password expires by using the -W option with the chage command.

[root@localhost ~]# chage -W 10 root
[root@localhost ~]# chage -l root
Last password change                                    : Mar 12, 2016
Password expires                                        : Jul 25, 2017
Password inactive                                       : Aug 31, 2017
Account expires                                         : Oct 03, 2017
Minimum number of days between password change          : 10
Maximum number of days between password change          : 60
Number of days of warning before password expires       : 10

How to disable password aging for a user account

To disable password aging for a particular account, we must set the following on that account (these options can be combined into a single command, as shown after the list).

    -m 0 will set the minimum number of days between password change to 0
    -M 99999 will set the maximum number of days between password change to 99999
    -I -1 (number minus one) will set the “Password inactive” to never
    -E -1 (number minus one) will set “Account expires” to never.
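Combined into one command (username is a placeholder), that looks like this:

# chage -m 0 -M 99999 -I -1 -E -1 username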

When using the chage command to set specifics on an account, do not reset the password with the passwd command, because doing so erases any changes made to the account's expiry settings.

Have a great fun with Linux!

 

Credits :- https://www.unixmen.com

Bash, Daily Hacks, Redhat / CEntOS / Oracle Linux, Ubuntu

Hack #1 -> Define CD Base Directory Using CDPATH

If you are frequently performing cd to subdirectories of a specific parent
directory, you can set the CDPATH to the parent directory and perform
cd to the subdirectories without giving the parent directory path as
explained below.

# pwd
/home/ramesh
# cd mail
-bash: cd: mail: No such file or directory

[Note: The above cd is looking for mail directory under
current directory]

# export CDPATH=/etc
# cd mail
/etc/mail

[Note: The above cd is looking for mail under /etc and not
under current directory]

# pwd
/etc/mail

To make this change permanent, add

export CDPATH=/etc  to your ~/.bash_profile

Similar to the PATH variable, you can add more than one directory entry
in the CDPATH variable, separating them with : , as shown below.

export CDPATH=.:~:/etc:/var

This hack can be very helpful under the following situations:

  • Oracle DBAs frequently working under $ORACLE_HOME can set the CDPATH variable to the Oracle home
  • Unix sysadmins frequently working under /etc can set the CDPATH variable to /etc
  • Developers frequently working under the project directory /home/projects can set the CDPATH variable to /home/projects
  • End-users frequently accessing the subdirectories under their home directory can set the CDPATH variable to ~ (the home directory)

Operating System, Redhat / CEntOS / Oracle Linux, Ubuntu

UNIX – Directory Structure and Explanations

 

1. / – Root:-

  •  Every single file and directory starts from the root directory.
  •  Only root user has write privilege under this directory.
  •  Please note that /root is the root user's home directory, which is not the same as /.

2. /bin – User Binaries:-

  • Contains binary executables.
  • Common Linux commands you need to use in single-user modes are located under this directory.
  • Commands used by all the users of the system are located here.
  • For example: ps, ls, ping, grep, cp.

3. /sbin – System Binaries:-

  • Just like /bin, /sbin also contains binary executable.
  • But the Linux commands located under this directory are typically used by the system administrator, for system maintenance purposes.

For example: iptables, reboot, fdisk, ifconfig, swapon

4. /etc – Configuration Files

  • Contains configuration files required by all programs.
  • This also contains startup and shutdown shell scripts used to start/stop  individual programs.

For example: /etc/resolv.conf, /etc/logrotate.conf

5. /dev – Device Files:-

  • Contains device files.
  • These include terminal devices, usb, or any device attached to the system.

For example: /dev/tty1, /dev/usbmon0

6. /proc – Process Information:-

  • Contains information about system processes.
  • This is a pseudo filesystem that contains information about running processes. For example: the /proc/{pid} directory contains information about the process with that particular pid.
  • This is a virtual filesystem with text information about system resources. For example: /proc/uptime

7. /var – Variable Files:-

  • Var stands for variable files.
  • Content of the files that are expected to grow can be found under this directory.
  • This includes — system log files (/var/log); packages and database files (/var/lib); emails (/var/mail); print queues (/var/spool); lock files (/var/lock); temp files needed across reboots (/var/tmp);

8. /tmp – Temporary Files:-

  • Directory that contains temporary files created by system and users.
  • Files under this directory are deleted when the system is rebooted.

9. /usr – User Programs:-

  •  Contains binaries, libraries, documentation, and source code for second-level programs.
  •  /usr/bin contains binary files for user programs. If you can’t find a user binary under /bin, look under /usr/bin. For example: at, awk, cc, less, scp
  •  /usr/sbin contains binary files for system administrators. If you can’t find a system binary under /sbin, look under /usr/sbin. For example: atd, cron, sshd, useradd, userdel
  • /usr/lib contains libraries for /usr/bin and /usr/sbin
  • /usr/local contains users programs that you install from source. For example, when you install apache from source, it goes under /usr/local/apache2

10. /home – Home Directories:-

  • Home directories for all users to store their personal files.
  • For example: /home/john, /home/nikita

11. /boot – Boot Loader Files:-

  • Contains boot loader related files.
  • Kernel initrd, vmlinux, grub files are located under /boot
  • For example: initrd.img-2.6.32-24-generic, vmlinuz-2.6.32-24-generic

12. /lib – System Libraries :-

  •     Contains library files that support the binaries located under /bin and /sbin
  •      Library filenames are either ld* or lib*.so.*
  •     For example: ld-2.11.1.so, libncurses.so.5.7

13. /opt – Optional add-on Applications:-

  • opt stands for optional.
  • Contains add-on applications from individual vendors
  • Add-on applications should be installed under an /opt/ sub-directory.

 14. /mnt – Mount Directory:-

  •    Temporary mount directory where sysadmins can mount file systems.

 15. /media – Removable Media Devices:-

  • Temporary mount directory for removable devices.
  • For examples, /media/cdrom for CD-ROM; /media/floppy for floppy drives; /media/cdrecorder for CD writer

 16. /srv – Service Data :-

  • srv stands for service.
  •  For example, /srv/cvs contains CVS related data
  • Contains server specific services related data.

 

Bash, Database, MySQL, Redhat / CEntOS / Oracle Linux

MySQL – Enterprise – Installation – Linux

Phase #1 –  PreRequisites

MAKE SURE A MOUNT POINT /MySql IS CREATED BEFORE RUNNING THIS SCRIPT.

Creating the symbolic soft links for parallel database updates:

ln -s /data /MySql/mysqldb
ln -s /data /MySql/mysql_db

Soft Links Created.
User and Group Adding.

groupadd -g 27 mysql
echo 'System Group mysql created with GID 27.'
useradd -m -d /var/lib/mysql -g mysql -G mysql -p root123 -u 27 mysql
echo 'System User mysql created with UID 27, home dir=/var/lib/mysql.'
echo 'root' >> cron.allow
echo 'mysql' >> cron.allow
service crond restart
echo 'added the user mysql to the cron'

DIRECTORY STRUCTURE CREATION

mkdir -p /MySql/mysqldb/configfiles
mkdir -p /MySql/mysqldb/datadump
mkdir -p /MySql/mysqldb/software_depot
mkdir -p /MySql/mysqldb/dbbackup
mkdir -p /MySql/mysqldb/archival
echo 'DIRECTORY STRUCTURE COMPLETE'

CONTAINER CREATION

mkdir -p /MySql/mysql_db/mysql/2345/var/lib/mysql
mkdir -p /MySql/mysql_db/mysql/2345/tmp
mkdir -p /MySql/mysql_db/mysql/2345/var/log/binlogs
echo 'CONTAINER STRUCTURE COMPLETE.'

SOFTWARE DEPOT PRE-REQUISITES

mkdir -p /MySql/mysqldb/software_depot/meb
cp -r /tmp/meb/bin /MySql/mysqldb/software_depot/meb/bin
mkdir -p /opt/product/meb
ln -s /MySql/mysqldb/software_depot/meb/bin /opt/product/meb
sh mysqlbackup --help
echo 'SUCCESSFULLY LINKED MEB'
chown -R mysql:mysql /opt/ /MySql/mysqldb/ /MySql/mysql_db/
echo 'PRE-REQUISITES COMPLETED SUCCESSFULLY, NOW KINDLY INSTALL THE MYSQL-SERVER AND MYSQL-CLIENT RPMs'

Phase #2 – Installation

Install


 

Phase #3 – Configuration – my.cnf

RUN ONLY AS MYSQL USER.

cd /MySql/mysqldb/configfiles

 

cat <<'EOF' > my-2345.cnf
[mysqld]

#This Option tells the server to load the plugin and prevent it from being removed while the server is running.
audit-log=FORCE_PLUS_PERMANENT

#Audit Log File Location in the Container.
audit_log_file=/MySql/mysql_db/mysql/2345/var/log/audit_2345.log

#Audit Log Policy Parameter
audit_log_policy=LOGINS

#Rotate/Refresh the Log File after it reaches the size 1GB
audit_log_rotate_on_size=1073741824

#The number of TCP/IP connections that are queued at once. If you have many remote users connecting to your database simultaneously, you may need to increase this value. The trade-off for a high value is slightly increased memory and CPU usage.
back_log=128

#The size of the cache to hold the SQL statements for the binary log during a transaction. A binary log cache is allocated for each client if the server supports any transactional storage engines and if the server has the binary log enabled (--log-bin option). If you often use large, multiple-statement transactions, you can increase this cache size to get better performance. The Binlog_cache_use and Binlog_cache_disk_use status variables can be useful for tuning the size of this variable.
binlog_cache_size=1M

#Use charset_name as the default server character set.
character-set-server=utf8

#Use collation_name as the default server collation.
collation-server=utf8_general_ci

#The number of seconds that the mysqld server waits for a connect packet before responding with Bad handshake.
connect_timeout=10

#***********MYSQL DATA DIRECTORY ****************
datadir=/MySql/mysql_db/mysql/2345/var/lib/mysql

#************DEFAULT STORAGE ENGINE ***************
default-storage-engine=innodb
ft_min_word_len=2
general_log=0

#General Log File Path.
general_log_file=/MySql/mysql_db/mysql/2345/var/log/general_2345.log

group_concat_max_len=500000
innodb_additional_mem_pool_size=16M
innodb_buffer_pool_instances=5
innodb_buffer_pool_size=8G
innodb_file_per_table=1
innodb_flush_method=O_DIRECT
innodb_log_buffer_size=32M
innodb_log_file_size=500M
innodb_thread_concurrency=64
interactive_timeout=900

#Binary Logs Index File Path.
log-bin-index=/MySql/mysql_db/mysql/2345/var/log/binlogs/logbin_2345.index
log_bin_trust_function_creators=1

#Binary Log File Path.
log-bin=/MySql/mysql_db/mysql/2345/var/log/binlogs/bin_2345.log

#Error Log File Path.
log-error=/MySql/mysql_db/mysql/2345/var/log/mysqld_2345.log
log-queries-not-using-index
log-slow-slave-statements
log_warnings
long_query_time=0.05
max_allowed_packet=1G
max_binlog_size=1073741824
max_connect_errors=4294967295

#The number of simultaneous connections allowed by the database server. If some users are being denied access during busy times, you may need to increase this value. The trade-off is a more heavily loaded server. In other words, CPU usage, memory usage, and disk I/O will increase.
max-connections=4096
max_heap_table_size=64M
net_read_timeout=120
net_write_timeout=3600
old_password=0
open_files_limit=4096

#Process ID File Path.
pid-file=/MySql/mysql_db/mysql/2345/var/lib/mysql/mysql_2345.pid

#Port Number Used By MySql.
port=2345

query-cache-limit=1M
query_cache_size=64M
read_buffer_size=1M
read_rnd_buffer_size=8M

#Relay Log Index File Path
relay-log-index=/MySql/mysql_db/mysql/2345/var/log/binlogs/relaylog_2345.index

#Relay Log Information File Path.
relay-log-info-file=/MySql/mysql_db/mysql/2345/var/log/binlogs/relaylog_2345.info

#Relay Log File Path
relay-log=/MySql/mysql_db/mysql/2345/var/log/binlogs/relay_2345.log
server-id=222345
skip-character-set-client-handshake
skip-name-resolve
skip-slave-start
slave_net_timeout=60
slow_query_log=1

#Slow Query Log File Path.
slow_query_log_file=/MySql/mysql_db/mysql/2345/var/log/slowqueries_2345.log

#MySQL Socket Path
socket=/MySql/mysql_db/mysql/2345/var/lib/mysql_2345.sock
table-definition-cache=2048
table_open_cache=4096
thread_cache_size=16

#MySql Temp Directory.
tmpdir=/MySql/mysql_db/mysql/2345/tmp
tmp_table_size=64M
EOF
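Before the first start, the data directory named in this file usually has to be initialized. A hedged sketch for the MySQL 5.x series that this configuration appears to target (run as the mysql user; the exact tool differs in newer MySQL versions):

$ mysql_install_db --defaults-file=/MySql/mysqldb/configfiles/my-2345.cnf --user=mysql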

Phase #4 – Start/Stop Service and Login

Start-Stop.sh

#!/bin/bash

set -x

echo "Do you want to start or stop the MySQL daemon? [Type 'start' or 'stop' followed by ENTER]: "
read bool

if [ "$bool" = "start" ]; then
    /usr/bin/mysqld_safe --defaults-file=/MySql/mysqldb/configfiles/my-2345.cnf &
    echo 'CHECKING FOR ERRORS'
    path="/MySql/mysql_db/mysql/2345/var/log/mysqld_2345.log"
    # count the ERROR lines in the server log
    err=$(grep -c ERROR "$path" 2>/dev/null)
    if [ "$err" -eq 0 ]; then
        echo 'NO ERRORS YIPPIE'
        rm -rf "$path"
    elif [ "$err" -gt 0 ]; then
        echo 'CHECK FOR THESE ERRORS'
        grep ERROR "$path" >> /MySql/mysql_db/mysql/2345/var/log/mysqld_err_2345.log
        cat /MySql/mysql_db/mysql/2345/var/log/mysqld_err_2345.log
        rm -rf "$path"
        echo 'RE-RUN THE SCRIPT NOW IF YOU HAVE ERRORS.'
    else
        echo 'EXCEPTION ERROR !!!!!!!!!!!!!!!!!!'
    fi
    echo $?

elif [ "$bool" = "stop" ]; then
    count=$(ps -eaf | grep mysqld | grep 2345 | grep -v grep | wc -l)
    if [ "$count" -gt 0 ]; then
        echo "Please enter the MySQL user. [Give the entry followed by ENTER]: "
        read user
        /usr/bin/mysqladmin --socket=/MySql/mysql_db/mysql/2345/var/lib/mysql_2345.sock --port=2345 -u"$user" -p shutdown
    else
        echo "MYSQL PROCESS NOT RUNNING"
    fi

else
    echo "INVALID INPUT, PLEASE TRY AGAIN"
fi

Login.sh

#!/bin/bash

##  PASSWORD CHANGE SECTION ##
echo "Do you want to change the password for a user? [Type Y or N followed by ENTER]: "
read bool

if [ "$bool" = "Y" ]; then
    echo "Enter the user whose password should be changed [Type the username followed by ENTER]: "
    read user
    echo "Enter the new password for $user [Type the password followed by ENTER]: "
    read -s password
    /usr/bin/mysqladmin --socket=/MySql/mysql_db/mysql/2345/var/lib/mysql_2345.sock --port=2345 -u "$user" password "$password"
elif [ "$bool" = "N" ]; then
    echo "PASSWORD WILL NOT BE CHANGED"
else
    echo "Please provide a valid input"
fi

## LOGIN SECTION ##
echo "Do you want to login to MySQL? [Type Y or N followed by ENTER]: "
read bool1
echo "Please enter the user [Type the username followed by ENTER]: "
read user
echo "Please enter the password for $user [Type the password followed by ENTER]: "
read -s password
if [ "$bool1" = "Y" ]; then
    /usr/bin/mysql -A -v --socket=/MySql/mysql_db/mysql/2345/var/lib/mysql_2345.sock --port=2345 -u"$user" -p"$password"
elif [ "$bool1" = "N" ]; then
    echo "OHK FINE WILL NOT LOGIN"
else
    echo "Please provide a valid input"
fi

############################################