Good morning, everyone.
Today I'll show you a command that deletes all files except those matching a pattern.
You can use it in a script or straight from the command line, and it makes life a lot easier.
find . -type f ! -name '<pattern>' -delete
A live example. After the following command, only the *.gz files remain:
find . -type f ! -name '*.gz' -delete
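If you want to try it safely first, swap -delete for -print to preview what would be removed. A scratch-directory walkthrough (the paths and filenames here are just examples):

```shell
# Set up a scratch directory with a mix of files
mkdir -p /tmp/findex && cd /tmp/findex
touch a.gz b.gz notes.txt image.png

# Preview first: -print shows what WOULD be deleted
find . -type f ! -name '*.gz' -print

# Then actually delete everything except the *.gz files
find . -type f ! -name '*.gz' -delete

ls   # only a.gz and b.gz remain
```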
A very serious security problem has been found in the Linux kernel, called “The Stack Clash.” It can be exploited by attackers to corrupt memory and execute arbitrary code. An attacker could leverage this with another vulnerability to execute arbitrary code and gain administrative/root account privileges. How do I fix this problem on Linux?
The Qualys Research Labs discovered various problems in the dynamic linker of the GNU C Library (CVE-2017-1000366) which allow local privilege escalation by clashing the stack with an adjacent memory region; related issues also affect the Linux kernel itself. The bug affects Linux, OpenBSD, NetBSD, FreeBSD and Solaris, on i386 and amd64. It can be exploited by attackers to corrupt memory and execute arbitrary code.
What is CVE-2017-1000364 bug?
A flaw was found in the way memory was being allocated on the stack for user space binaries. If the heap (or a different memory region) and the stack were adjacent to each other, an attacker could use this flaw to jump over the stack guard gap, cause controlled memory corruption on the process stack or the adjacent memory region, and thus increase their privileges on the system. The fix is a kernel-side mitigation which increases the stack guard gap size from one page to 1 MiB, making successful exploitation of this issue more difficult.
Each program running on a computer uses a special memory region called the stack. This memory region is special because it grows automatically when the program needs more stack memory. But if it grows too much and gets too close to another memory region, the program may confuse the stack with the other memory region. An attacker can exploit this confusion to overwrite the stack with the other memory region, or the other way around.
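You can see the stack region described above on any Linux box: the kernel labels it [stack] in each process's memory map. A quick look (here at the map of the grep process itself):

```shell
# Each process has a dedicated stack mapping, labelled [stack].
# The address range grows downwards as the program needs more stack.
grep '\[stack\]' /proc/self/maps
```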
A list of affected Linux distros
- Red Hat Enterprise Linux Server 5.x
- Red Hat Enterprise Linux Server 6.x
- Red Hat Enterprise Linux Server 7.x
- CentOS Linux Server 5.x
- CentOS Linux Server 6.x
- CentOS Linux Server 7.x
- Oracle Enterprise Linux Server 5.x
- Oracle Enterprise Linux Server 6.x
- Oracle Enterprise Linux Server 7.x
- Ubuntu 17.10
- Ubuntu 17.04
- Ubuntu 16.10
- Ubuntu 16.04 LTS
- Ubuntu 12.04 ESM (Precise Pangolin)
- Debian 9 stretch
- Debian 8 jessie
- Debian 7 wheezy
- Debian unstable
- SUSE Linux Enterprise Desktop 12 SP2
- SUSE Linux Enterprise High Availability 12 SP2
- SUSE Linux Enterprise Live Patching 12
- SUSE Linux Enterprise Module for Public Cloud 12
- SUSE Linux Enterprise Build System Kit 12 SP2
- SUSE Openstack Cloud Magnum Orchestration 7
- SUSE Linux Enterprise Server 11 SP3-LTSS
- SUSE Linux Enterprise Server 11 SP4
- SUSE Linux Enterprise Server 12 SP1-LTSS
- SUSE Linux Enterprise Server 12 SP2
- SUSE Linux Enterprise Server for Raspberry Pi 12 SP2
Do I need to reboot my box?
Yes. Most services depend upon the dynamic linker of the GNU C Library, and the kernel itself needs to be reloaded in memory.
How do I fix CVE-2017-1000364 on Linux?
Type the commands as per your Linux distro, then reboot the box. Before you apply the patch, note down your current kernel version:
$ uname -a
$ uname -mrs
Linux 4.4.0-78-generic x86_64
Debian or Ubuntu Linux
Type the following apt command/apt-get command to apply updates:
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: libc-bin libc-dev-bin libc-l10n libc6 libc6-dev libc6-i386 linux-compiler-gcc-6-x86 linux-headers-4.9.0-3-amd64 linux-headers-4.9.0-3-common linux-image-4.9.0-3-amd64 linux-kbuild-4.9 linux-libc-dev locales multiarch-support 14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/62.0 MB of archives. After this operation, 4,096 B of additional disk space will be used. Do you want to continue? [Y/n] y Reading changelogs... Done Preconfiguring packages ... (Reading database ... 115123 files and directories currently installed.) Preparing to unpack .../libc6-i386_2.24-11+deb9u1_amd64.deb ... Unpacking libc6-i386 (2.24-11+deb9u1) over (2.24-11) ... Preparing to unpack .../libc6-dev_2.24-11+deb9u1_amd64.deb ... Unpacking libc6-dev:amd64 (2.24-11+deb9u1) over (2.24-11) ... Preparing to unpack .../libc-dev-bin_2.24-11+deb9u1_amd64.deb ... Unpacking libc-dev-bin (2.24-11+deb9u1) over (2.24-11) ... Preparing to unpack .../linux-libc-dev_4.9.30-2+deb9u1_amd64.deb ... Unpacking linux-libc-dev:amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ... Preparing to unpack .../libc6_2.24-11+deb9u1_amd64.deb ... Unpacking libc6:amd64 (2.24-11+deb9u1) over (2.24-11) ... Setting up libc6:amd64 (2.24-11+deb9u1) ... (Reading database ... 115123 files and directories currently installed.) Preparing to unpack .../libc-bin_2.24-11+deb9u1_amd64.deb ... Unpacking libc-bin (2.24-11+deb9u1) over (2.24-11) ... Setting up libc-bin (2.24-11+deb9u1) ... (Reading database ... 115123 files and directories currently installed.) Preparing to unpack .../multiarch-support_2.24-11+deb9u1_amd64.deb ... Unpacking multiarch-support (2.24-11+deb9u1) over (2.24-11) ... Setting up multiarch-support (2.24-11+deb9u1) ... (Reading database ... 115123 files and directories currently installed.) 
Preparing to unpack .../0-libc-l10n_2.24-11+deb9u1_all.deb ... Unpacking libc-l10n (2.24-11+deb9u1) over (2.24-11) ... Preparing to unpack .../1-locales_2.24-11+deb9u1_all.deb ... Unpacking locales (2.24-11+deb9u1) over (2.24-11) ... Preparing to unpack .../2-linux-compiler-gcc-6-x86_4.9.30-2+deb9u1_amd64.deb ... Unpacking linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) over (4.9.30-2) ... Preparing to unpack .../3-linux-headers-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ... Unpacking linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ... Preparing to unpack .../4-linux-headers-4.9.0-3-common_4.9.30-2+deb9u1_all.deb ... Unpacking linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) over (4.9.30-2) ... Preparing to unpack .../5-linux-kbuild-4.9_4.9.30-2+deb9u1_amd64.deb ... Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u1) over (4.9.30-2) ... Preparing to unpack .../6-linux-image-4.9.0-3-amd64_4.9.30-2+deb9u1_amd64.deb ... Unpacking linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) over (4.9.30-2) ... Setting up linux-libc-dev:amd64 (4.9.30-2+deb9u1) ... Setting up linux-headers-4.9.0-3-common (4.9.30-2+deb9u1) ... Setting up libc6-i386 (2.24-11+deb9u1) ... Setting up linux-compiler-gcc-6-x86 (4.9.30-2+deb9u1) ... Setting up linux-kbuild-4.9 (4.9.30-2+deb9u1) ... Setting up libc-l10n (2.24-11+deb9u1) ... Processing triggers for man-db (220.127.116.11-2) ... Setting up libc-dev-bin (2.24-11+deb9u1) ... Setting up linux-image-4.9.0-3-amd64 (4.9.30-2+deb9u1) ... /etc/kernel/postinst.d/initramfs-tools: update-initramfs: Generating /boot/initrd.img-4.9.0-3-amd64 cryptsetup: WARNING: failed to detect canonical device of /dev/md0 cryptsetup: WARNING: could not determine root device from /etc/fstab W: initramfs-tools configuration sets RESUME=UUID=054b217a-306b-4c18-b0bf-0ed85af6c6e1 W: but no matching swap device is available. I: The initramfs will attempt to resume from /dev/md1p1 I: (UUID=bf72f3d4-3be4-4f68-8aae-4edfe5431670) I: Set the RESUME variable to override this. 
/etc/kernel/postinst.d/zz-update-grub: Searching for GRUB installation directory ... found: /boot/grub Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-4.9.0-3-amd64 Found kernel: /boot/vmlinuz-3.16.0-4-amd64 Updating /boot/grub/menu.lst ... done Setting up libc6-dev:amd64 (2.24-11+deb9u1) ... Setting up locales (2.24-11+deb9u1) ... Generating locales (this might take a while)... en_IN.UTF-8... done Generation complete. Setting up linux-headers-4.9.0-3-amd64 (4.9.30-2+deb9u1) ... Processing triggers for libc-bin (2.24-11+deb9u1) ...
Reboot your server/desktop using reboot command:
$ sudo reboot
CentOS, RHEL, Scientific or Oracle Linux
Type the following yum command:
$ sudo yum update
$ sudo reboot
Fedora Linux
Type the following dnf command:
$ sudo dnf update
$ sudo reboot
SUSE Linux Enterprise or openSUSE Linux
Type the following zypper command:
$ sudo zypper patch
$ sudo reboot
SUSE OpenStack Cloud 6
$ sudo zypper in -t patch SUSE-OpenStack-Cloud-6-2017-996=1
$ sudo reboot
SUSE Linux Enterprise Server for SAP 12-SP1
$ sudo zypper in -t patch SUSE-SLE-SAP-12-SP1-2017-996=1
$ sudo reboot
SUSE Linux Enterprise Server 12-SP1-LTSS
$ sudo zypper in -t patch SUSE-SLE-SERVER-12-SP1-2017-996=1
$ sudo reboot
SUSE Linux Enterprise Module for Public Cloud 12
$ sudo zypper in -t patch SUSE-SLE-Module-Public-Cloud-12-2017-996=1
$ sudo reboot
Make sure your kernel version number changed after issuing the reboot command:
$ uname -a
$ uname -r
$ uname -mrs
Linux 4.4.0-81-generic x86_64
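If you patch many boxes, the before/after check is easy to script. A sketch (the file path is just an example):

```shell
# Before patching: record the current kernel release
uname -r > /tmp/kernel-before.txt

# After the reboot: compare the recorded release with what is running
if [ "$(uname -r)" = "$(cat /tmp/kernel-before.txt)" ]; then
    echo "kernel release unchanged - patch may not be active yet"
else
    echo "kernel release changed - new kernel is running"
fi
```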
Cpustat is a powerful system performance measurement program for Linux, written in the Go programming language. It attempts to reveal CPU utilization and saturation in an effective way, using the Utilization, Saturation and Errors (USE) Method (a methodology for analyzing the performance of any system).
It takes higher-frequency samples of every process running on the system and then summarizes those samples at a lower frequency. For instance, it can measure every process every 200 ms and summarize the samples every 5 seconds, including min/average/max values for certain metrics.
Cpustat outputs data in two possible ways: a pure text list of the summary interval and a colorful scrolling dashboard of each sample.
How to Install Cpustat in Linux
You must have Go (GoLang) installed on your Linux system in order to use cpustat. If you do not have it installed, click the link below and follow the GoLang installation steps:
- Install GoLang (Go Programming Language) in Linux
Once you have installed Go, run the go get command below to install cpustat; the binary will be installed into the directory named by your GOBIN variable:
# go get github.com/uber-common/cpustat
How to Use Cpustat in Linux
When the installation completes, run cpustat with root privileges (use the sudo command if you are controlling the system as a non-root user); otherwise you will get the error shown:
$ $GOBIN/cpustat
This program uses the netlink taskstats interface, so it must be run as root.
Note: To run cpustat (and any other Go program you have installed) like any other command, include the GOBIN variable in your PATH environment variable. Open the link below to learn how to set the PATH variable in Linux.
This is how cpustat works: the /proc directory is queried to get the current list of process IDs for every interval, and:
- for each PID, read /proc/pid/stat, then compute difference from previous sample.
- in case it’s a new PID, read /proc/pid/cmdline.
- for each PID, send a netlink message to fetch the taskstats, compute difference from previous sample.
- fetch /proc/stat to get the overall system stats.
Each sleep interval is adjusted to account for the amount of time consumed fetching all of these stats. Each sample also records how long it took, so that every measurement can be scaled by the actual elapsed time between samples; this attempts to account for delays in cpustat itself.
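The delta-between-samples idea behind these tools can be sketched in a few lines of shell (a simplified illustration, not cpustat's actual code): read the cumulative per-state jiffies from the first line of /proc/stat, wait, read again, and report the busy share of the interval.

```shell
# First line of /proc/stat: cpu user nice system idle iowait irq softirq ...
read -r _ u1 n1 s1 i1 w1 q1 sq1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 q2 sq2 _ < /proc/stat

# Busy = user + nice + system + irq + softirq deltas; idle = idle + iowait
busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) + (q2 - q1) + (sq2 - sq1) ))
idle=$(( (i2 - i1) + (w2 - w1) ))
total=$(( busy + idle ))

pct=0
if [ "$total" -gt 0 ]; then
    pct=$(( 100 * busy / total ))
fi
echo "busy over the last second: ${pct}%"
```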
When run without any arguments, cpustat uses its defaults: sampling interval 200 ms, summary interval 2 s (10 samples), showing the top 10 procs, user filter: all, pid filter: all:
$ sudo $GOBIN/cpustat
From the output above, the following are the meanings of the system-wide summary metrics displayed before the fields:
- usr – min/avg/max user mode run time as a percentage of a CPU.
- sys – min/avg/max system mode run time as a percentage of a CPU.
- nice – min/avg/max user mode low priority run time as a percentage of a CPU.
- idle – min/avg/max idle time as a percentage of a CPU.
- iowait – min/avg/max delay time waiting for disk IO.
- prun – min/avg/max count of processes in a runnable state (same as load average).
- pblock – min/avg/max count of processes blocked on disk IO.
- pstart – number of processes/threads started in this summary interval.
Still from the output above, for a given process, the different columns mean:
- name – common process name from /proc/pid/stat or /proc/pid/cmdline.
- pid – process id, also referred to as “tgid”.
- min – lowest sample of user+system time for the pid, measured from /proc/pid/stat. Scale is a percentage of a CPU.
- max – highest sample of user+system time for this pid, also measured from /proc/pid/stat.
- usr – average user time for the pid over the summary period, measured from /proc/pid/stat.
- sys – average system time for the pid over the summary period, measured from /proc/pid/stat.
- nice – indicates current “nice” value for the process, measured from /proc/pid/stat. Higher means “nicer”.
- runq – time the process and all of its threads spent runnable but waiting to run, measured from taskstats via netlink. Scale is a percentage of a CPU.
- iow – time the process and all of its threads spent blocked by disk IO, measured from taskstats via netlink. Scale is a percentage of a CPU, averaged over the summary interval.
- swap – time the process and all of its threads spent waiting to be swapped in, measured from taskstats via netlink. Scale is a percentage of a CPU, averaged over the summary interval.
- vcx and icx – total number of voluntary context switches by the process and all of its threads over the summary interval, measured from taskstats via netlink.
- rss – current RSS value fetched from /proc/pid/stat. It is the amount of memory this process is using.
- ctime – sum of user+sys CPU time consumed by waited for children that exited during this summary interval, measured from /proc/pid/stat.
Note that long running child processes can often confuse this measurement, because the time is reported only when the child process exits. However, this is useful for measuring the impact of frequent cron jobs and health checks where the CPU time is often consumed by many child processes.
- thrd – number of threads at the end of the summary interval, measured from /proc/pid/stat.
- sam – number of samples for this process included in the summary interval. Processes that have recently started or exited may have been visible for fewer samples than the summary interval.
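To see where numbers like rss come from, here is a small shell sketch (my own illustration, field positions as documented in proc(5)) that reads the rss field from /proc/self/stat. Note the split after the last ')': the comm field may contain spaces, so naive whitespace splitting would pick the wrong field.

```shell
# rss is field 24 of /proc/<pid>/stat. Parse after the last ')' so a
# comm field containing spaces can't shift the field positions.
stat_line=$(cat /proc/self/stat)
rest=${stat_line##*) }                      # rest starts at field 3 (state)
rss_pages=$(echo "$rest" | awk '{print $22}')   # field 24 -> 22nd token
page_size=$(getconf PAGESIZE)
echo "RSS: $(( rss_pages * page_size )) bytes"
```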
The following command displays the top 10 root user processes running on the system:
$ sudo $GOBIN/cpustat -u root
To display output in a fancy terminal mode, use the -t flag as follows:
$ sudo $GOBIN/cpustat -u root -t
To view the top N processes (the default is 10), use the -n flag; the following command shows the top 20 Linux processes running on the system:
$ sudo $GOBIN/cpustat -n 20
You can also write a CPU profile to a file using the -cpuprofile option, and then use the cat command to view the file:
$ sudo $GOBIN/cpustat -cpuprofile cpuprof.txt
$ cat cpuprof.txt
To display help info, use the -h flag as follows:
$ sudo $GOBIN/cpustat -h
Find additional info from the cpustat Github Repository: https://github.com/uber-common/cpustat
Apache Kafka is a popular distributed message broker designed to handle large volumes of real-time data efficiently. A Kafka cluster is not only highly scalable and fault-tolerant, but it also has a much higher throughput compared to other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a pub/sub messaging system, a lot of organizations also use it for log aggregation because it offers persistent storage for published messages.
In this tutorial, you will learn how to install and use Apache Kafka 0.8.2.1 on Ubuntu 16.04.
To follow along, you will need:
- Ubuntu 16.04 Droplet
- At least 4GB of swap space
As Kafka can handle requests over a network, you should create a dedicated user for it. This minimizes damage to your Ubuntu machine should the Kafka server be compromised.
Note: After setting up Apache Kafka, it is recommended that you create a different non-root user to perform other tasks on this server.
As root, create a user called kafka using the useradd command:
useradd kafka -m
Set its password using the passwd command:
passwd kafka
Add it to the sudo group so that it has the privileges required to install Kafka's dependencies. This can be done using the adduser command:
adduser kafka sudo
Your Kafka user is now ready. Log into it using su:
su - kafka
Before installing additional packages, update the list of available packages so you are installing the latest versions available in the repository:
sudo apt-get update
As Apache Kafka needs a Java runtime environment, use apt-get to install the default-jre package:
sudo apt-get install default-jre
Apache ZooKeeper is an open source service built to coordinate and synchronize configuration information of nodes that belong to a distributed system. A Kafka cluster depends on ZooKeeper to perform—among other things—operations such as detecting failed nodes and electing leaders.
Since the ZooKeeper package is available in Ubuntu's default repositories, install it using apt-get:
sudo apt-get install zookeeperd
After the installation completes, ZooKeeper will be started as a daemon automatically. By default, it will listen on port 2181.
To make sure that it is working, connect to it via Telnet:
telnet localhost 2181
At the Telnet prompt, type in ruok and press ENTER.
If everything's fine, ZooKeeper will say imok and end the Telnet session.
Now that Java and ZooKeeper are installed, it is time to download and extract Kafka.
To start, create a directory called Downloads to store all your downloads:
mkdir -p ~/Downloads
Then use wget to download the Kafka binaries:
wget "http://mirror.cc.columbia.edu/pub/software/apache/kafka/0.8.2.1/kafka_2.11-0.8.2.1.tgz" -O ~/Downloads/kafka.tgz
Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation.
mkdir -p ~/kafka && cd ~/kafka
Extract the archive you downloaded using the tar command:
tar -xvzf ~/Downloads/kafka.tgz --strip 1
The next step is to configure the Kafka server. By default, Kafka doesn't allow you to delete topics. To be able to delete topics, add the following line at the end of ~/kafka/config/server.properties:
delete.topic.enable = true
Save the file and exit your editor. Now run the kafka-server-start.sh script with nohup to start the Kafka server (also called the Kafka broker) as a background process that is independent of your shell session:
nohup ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties > ~/kafka/kafka.log 2>&1 &
Wait for a few seconds for it to start. You can be sure that the server has started successfully when you see the following messages in ~/kafka/kafka.log:
[2015-07-29 06:02:41,736] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2015-07-29 06:02:41,776] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
You now have a Kafka server which is listening on port 9092.
Let us now publish and consume a “Hello World” message to make sure that the Kafka server is behaving correctly.
To publish messages, you should create a Kafka producer. You can easily create one from the command line using the kafka-console-producer.sh script. It expects the Kafka server's hostname and port, along with a topic name, as its arguments.
Publish the string “Hello, World” to a topic called TutorialTopic by typing in the following:
echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
As the topic doesn’t exist, Kafka will create it automatically.
To consume messages, you can create a Kafka consumer using the kafka-console-consumer.sh script. It expects the ZooKeeper server's hostname and port, along with a topic name, as its arguments.
The following command consumes messages from the topic we published to. Note the use of the --from-beginning flag, which is present because we want to consume a message that was published before the consumer was started.
~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
If there are no configuration issues, you should see Hello, World in the output now.
The script will continue to run, waiting for more messages to be published to the topic. Feel free to open a new terminal and start a producer to publish a few more messages. You should be able to see them all in the consumer’s output instantly.
When you are done testing, press CTRL+C to stop the consumer script.
KafkaT is a handy little tool from Airbnb which makes it easier for you to view details about your Kafka cluster and also perform a few administrative tasks from the command line. As it is a Ruby gem, you will need Ruby to use it. You will also need the build-essential package to be able to build the other gems it depends on. Install them using apt-get:
sudo apt-get install ruby ruby-dev build-essential
You can now install KafkaT using the gem command:
sudo gem install kafkat --source https://rubygems.org --no-ri --no-rdoc
Use vi to create a new file called ~/.kafkatcfg. This is a configuration file which KafkaT uses to determine the installation and log directories of your Kafka server; it should also point KafkaT to your ZooKeeper instance. Accordingly, add the following lines to it:
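The configuration lines themselves did not survive in this copy of the tutorial; a .kafkatcfg matching the layout used above would plausibly look like the following (the base directory ~/kafka and the ZooKeeper address come from earlier steps; the log path shown is Kafka's default and may differ on your system):

```json
{
  "kafka_path": "~/kafka",
  "log_path": "/tmp/kafka-logs",
  "zk_path": "localhost:2181"
}
```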
You are now ready to use KafkaT. For a start, here's how you would use it to view details about all Kafka partitions:
kafkat partitions
You should see the following output:
output of kafkat partitions
Topic Partition Leader Replicas ISRs
TutorialTopic 0 0  
To learn more about KafkaT, refer to its GitHub repository.
If you want to create a multi-broker cluster using more Ubuntu 16.04 machines, you should repeat Step 1, Step 3, Step 4 and Step 5 on each of the new machines. Additionally, you should make the following changes in the server.properties file on each of them:
- the value of the broker.id property should be changed such that it is unique throughout the cluster
- the value of the zookeeper.connect property should be changed such that all nodes point to the same ZooKeeper instance
If you want to have multiple ZooKeeper instances for your cluster, the value of the zookeeper.connect property on each node should be an identical, comma-separated string listing the IP addresses and port numbers of all the ZooKeeper instances.
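For instance, on a hypothetical second broker the relevant server.properties lines might read (the IP addresses are illustrative placeholders):

```
broker.id=1
zookeeper.connect=203.0.113.10:2181,203.0.113.11:2181,203.0.113.12:2181
```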
Now that all the installations are done, you can remove the kafka user's admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still running the same shell session you started this tutorial with, simply type exit.
To remove the kafka user's admin privileges, remove it from the sudo group:
sudo deluser kafka sudo
To further improve your Kafka server's security, lock the kafka user's password using the passwd command. This makes sure that nobody can directly log into it:
sudo passwd kafka -l
At this point, only root or a sudo user can log in as kafka, by typing in the following command:
sudo su - kafka
In the future, if you want to unlock it, use passwd with the -u option:
sudo passwd kafka -u
You now have a secure Apache Kafka running on your Ubuntu server. You can easily make use of it in your projects by creating Kafka producers and consumers using Kafka clients which are available for most programming languages. To learn more about Kafka, do go through its documentation.
There is a serious vulnerability in the sudo command that grants root access to anyone with a shell account. It works on SELinux-enabled systems such as CentOS/RHEL and others too. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.
It was discovered that sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker could, in some configurations, use this to overwrite any file on the filesystem, bypassing intended permissions, or to gain a root shell.
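To see the kind of parsing involved, here is a small shell sketch (my own illustration, not sudo's code) that reads the tty_nr field from /proc/self/stat. The comm field (field 2) may contain spaces, even attacker-chosen ones, so a parser must split after the last ')' rather than naively on whitespace; a naive whitespace split is essentially the mistake that made this bug exploitable.

```shell
# tty_nr is field 7 of /proc/<pid>/stat (the controlling tty's
# device number; 0 means no controlling terminal).
stat_line=$(cat /proc/self/stat)
rest=${stat_line##*) }                    # rest starts at field 3 (state)
tty_nr=$(echo "$rest" | awk '{print $5}') # field 7 -> 5th token of rest
echo "tty_nr: $tty_nr"
```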
A list of affected Linux distros
How do I patch sudo on Debian/Ubuntu Linux server?
$ sudo apt update && sudo apt upgrade
How do I patch sudo on CentOS/RHEL/Scientific/Oracle Linux server?
$ sudo yum update
How do I patch sudo on Fedora Linux server?
$ sudo dnf update
How do I patch sudo on Suse/OpenSUSE Linux server?
$ sudo zypper update
How do I patch sudo on Arch Linux server?
$ sudo pacman -Syu
How do I patch sudo on Alpine Linux server?
# apk update && apk upgrade
How do I patch sudo on Slackware Linux server?
# upgradepkg sudo-1.8.20p1-i586-1_slack14.2.txz
How do I patch sudo on Gentoo Linux server?
# emerge --sync
# emerge --ask --oneshot --verbose ">=app-admin/sudo-1.8.20_p1"