Hudson Jenkins

NodeJS on Jenkins with Slack integration

Hi Guys,

 

I was feeling bored a couple of hours back, so I thought of creating a Jenkins pipeline and integrating Slack with the Node jobs in it.

Please watch the video and suggest some use cases that I can work on for y'all.


Process Management in Linux

Process Types

Before we start talking about Linux process management, we should review process types. There are five common types of processes:

  • Parent process
  • Child process
  • Orphan Process
  • Daemon Process
  • Zombie Process

A parent process is a process that creates other processes using the fork() system call. Every process except process 0 has a parent process.

A child process is a process created by a parent process.

An orphan process is a process that keeps running after its parent process has terminated. Orphan processes are adopted by init, which is why they do not become zombie processes.

A daemon process is a background process, typically created by forking a child process whose parent then exits.

A zombie process is a process that has terminated but still has an entry in the process table.
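
You can see these parent/child relationships yourself with the ps command; the PPID column shows each process's parent, and child processes are indented under their parents (the output varies per system):

$ ps -ef --forest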

Memory Management

In server administration, memory management is one of your responsibilities as a system administrator.

One of the most used commands in Linux process management is the free command:

$ free -m

The -m option shows the values in megabytes.
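
The output will look something like this (illustrative numbers chosen to match the discussion below; yours will differ):

              total        used        free      shared  buff/cache   available
Mem:           1839         536         292           8        1011        1221
Swap:          3070           0        3070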


Our main concern is the buff/cache column.

The output of the free command here means 536 megabytes are used while 1221 megabytes are available.

The second line is the swap. Swapping occurs when memory becomes crowded.

The first value is the total swap size which is 3070 megabytes.

The second value is the used swap which is 0.

The third value is the available swap for usage which is 3070.

From the above results, you can say that the memory status is good since no swap is used. While we are talking about swap, let's discover what the proc directory tells us about it.

$ cat /proc/swaps


This command shows the swap size and how much is used:

$ cat /proc/sys/vm/swappiness

This file contains a value from 0 to 100 that controls when the system starts using swap. Roughly speaking, a value of 30 means the system will start to use the swap when memory is about 70% used.

Note: on most distros, the default value is between 30 and 60. You can modify it like this:

$ echo 50 > /proc/sys/vm/swappiness

Or by using the sysctl command like this:

$ sudo sysctl -w vm.swappiness=50

Changing the swappiness value using the above commands is not permanent; to persist it, you have to write it to the /etc/sysctl.conf file like this:

$ nano /etc/sysctl.conf

vm.swappiness=50
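
After saving the file, you can apply the setting without rebooting; sysctl -p reloads /etc/sysctl.conf:

$ sudo sysctl -p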


Cool!!

The swappiness value measures how likely the kernel is to move pages from memory to the swap.

Choosing the right swappiness value for your server requires some experimentation.

Managing virtual memory with vmstat

Another important command in Linux process management is vmstat. The vmstat command gives a summary report about memory, processes, and paging.

$ vmstat -a

The -a option displays active and inactive memory.


These are the important columns in the output of this command:

  • si: Amount of memory swapped in from disk.
  • so: Amount of memory swapped out to disk.
  • bi: Blocks received from a block device.
  • bo: Blocks sent to a block device.
  • us: The user time.
  • sy: The system time.
  • id: The idle time.

Our main concern is the (si) and (so) columns, where the (si) column shows page-ins and the (so) column shows page-outs.

A better way to look at these values is by viewing the output with a delay option like this:

$ vmstat 2 5


Where 2 is the delay in seconds and 5 is the number of times vmstat is called. It shows five updates of the command and all data is presented in kilobytes.

A page-in (si) happens when you start an application and its data is paged in from disk. A page-out (so) happens when the kernel is freeing up memory.

System Load & top Command

In Linux process management, the top command gives you a list of the running processes and how they are using CPU and memory; the output is real-time data.

If you have a dual-core system, the first core may be at 40 percent and the second at 70 percent; in this case, the top command may show a combined result of 110 percent, but you will not know the individual values for each core.

$ top -c


We use the -c option to show the command line or the executable path behind the process.

You can press the 1 key while watching the top command statistics to show individual CPU statuses.


Keep in mind that some programs spawn child processes, so you will see multiple processes for the same program, such as httpd and PHP-FPM.

You shouldn't rely on the top command alone; review other resources before taking a final action.

Monitoring Disk I/O with iotop

A system can become slow as a result of high disk activity, so it is important to monitor disk activity. That means figuring out which processes or users cause this disk activity.

The iotop command in Linux process management helps us to monitor disk I/O in real-time. You can install it if you don’t have it:

$ yum install iotop

Running iotop without any options lists all processes.

To view only the processes that cause disk activity, you should use the -o option:

$ iotop -o


You can easily see which program is impacting the system.

ps command

We've talked about the ps command in a previous post, including how to order processes by memory usage and CPU usage.

Monitoring System Health with iostat and lsof

The iostat command gives you a CPU utilization report; use the -c option to display just the CPU utilization:

$ iostat -c

The output is easy to understand, but if the system is busy you will see %iowait increase, which means the server is transferring or copying a lot of files.

With this command, you can check the read and write operations and get a solid idea of what is hogging your disk, so you can make the right decision.

Additionally, the lsof command is used to list open files.

The lsof command shows which executable is using a file, the process ID, the user, and the name of the opened file.
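
For example, to list the files opened by a single process, you can pass its PID with the -p option (2213 is just an illustrative PID):

$ lsof -p 2213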

Calculating the system load

Calculating the system load is very important in Linux process management. The system load is the amount of processing the system is currently doing. It is not a perfect way to measure system performance, but it gives you some evidence.

The load is calculated like this:

Actual Load = Total Load (uptime) / No. of CPUs

You can read the load averages from the uptime command or the top command:

$ uptime

$ top

The server load is shown in 1, 5, and 15 minutes.

As you can see, the average load is 0.00 for the last minute, 0.01 for the last five minutes, and 0.05 for the last fifteen minutes.
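
As a worked example with illustrative numbers: you can get the number of cores with the nproc command, and if it reports 4 cores while uptime shows a load average of 2.50, the actual load is 2.50 / 4 = 0.625, so the CPUs are well below saturation.

$ nproc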

When the load increases, processes queue up for the processors, and if there are multiple cores, the load is distributed across the server's cores to balance the work.

As a rule of thumb, a good load average is about 1 per core. A load exceeding 1 does not automatically mean there is a problem, but if you keep seeing higher numbers for a long time, the load is too high and there is a problem.

pgrep and systemctl

You can get the process ID using pgrep command followed by the service name.

$ pgrep servicename

This command shows the process ID or PID.

Note: if this command shows more than one process ID, as with httpd or SSH, the smallest process ID is the parent process ID.
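
For example, on a server running Apache you might see something like this (illustrative PIDs), where 1342, the smallest PID, is the parent httpd process:

$ pgrep httpd
1342
1456
1457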

On the other hand, you can use the systemctl command to get the main PID like this:

$ systemctl status <service_name>.service

There are more ways to obtain the required process ID or parent process ID, but this one is easy and straightforward.

Managing Services with systemd

If we are going to talk about Linux process management, we should take a look at systemd. systemd is responsible for controlling how services are managed on modern Linux systems such as CentOS 7.

Instead of using chkconfig command to enable and disable a service during the boot, you can use the systemctl command.

systemd also ships with its own version of the top command: to show the processes that are associated with a specific service, you can use the systemd-cgtop command like this:

$ systemd-cgtop

The output shows all associated processes, the control group path, the number of tasks, the percentage of CPU used, the memory allocation, and the related inputs and outputs.

The systemd-cgls command can be used to output a recursive list of control group content like this:

$ systemd-cgls

This command gives us very useful information on which to base our decisions.

Nice and Renice Processes

The nice value of a process is a numeric hint to the scheduler about how the process competes for the CPU.

A high nice value indicates a low priority for your process: it expresses how nice you are going to be to other users, and that is where the name comes from.

The nice range is from -20 to +19.

The nice command sets the nice value for a process at creation time, while the renice command adjusts the value later.

$ nice -n 5 ./myscript

This command increases the nice value by 5, which means a lower priority.

$ sudo renice -5 2213

This command decreases the nice value, which means an increased priority; the number (2213) is the PID.

A regular user can increase the nice value of a process (lower its priority) but cannot decrease it (raise its priority), while the root user can do both.
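
You can verify the current nice value of a process with ps (2213 is the illustrative PID from above):

$ ps -o pid,ni,comm -p 2213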

Sending the kill signal

To kill a service or application that causes a problem, you can issue a termination signal (SIGTERM). You can review the previous post about signals and jobs.

$ kill <process_ID>

This method is called a safe kill. However, depending on your situation, you may need to send the hangup signal (SIGHUP) to force a service or application to reload, like this:

$ kill -1 <process_ID>

Sometimes the safe kill and the reload fail to do anything; then you can send the SIGKILL signal by using the -9 option, which is called a forced kill.

$ kill -9 <process_ID>

There are no cleanup operations or safe exit with this signal, so it is not preferred. However, you can kill a process by name instead of by PID using the pkill command.

$ pkill -9 serviceName

And you can use pgrep command to ensure that all associated processes are killed.

$ pgrep serviceName

I hope you now have a good idea about Linux process management and how to take the right actions to keep your system healthy.

Thank you


RabbitMQ :- rabbitmqctl

NAME

rabbitmqctl — command line tool for managing a RabbitMQ broker

SYNOPSIS

rabbitmqctl [-q] [-l] [-n node] [-t timeout] command [command_options]

DESCRIPTION

RabbitMQ is a multi-protocol open source messaging broker.

rabbitmqctl is a command line tool for managing a RabbitMQ broker. It performs all actions by connecting to one of the broker’s nodes.

Diagnostic information is displayed if the broker was not running, could not be reached, or rejected the connection due to mismatching Erlang cookies.

OPTIONS

-n node
Default node is “rabbit@server”, where server is the local host. On a host named “myserver.example.com”, the node name of the RabbitMQ Erlang node will usually be “rabbit@myserver” (unless RABBITMQ_NODENAME has been set to some non-default value at broker startup time). The output of “hostname -s” is usually the correct suffix to use after the “@” sign. See rabbitmq-server(8) for details of configuring the RabbitMQ broker.
-q, --quiet
Quiet output mode is selected. Informational messages are suppressed when quiet mode is in effect.
--dry-run
Do not run the command. Only print an informational message.
-t timeout, --timeout timeout
Operation timeout in seconds. Only applicable to “list” commands. Default is infinity.
-l, --longnames
Use long names for Erlang distribution. If the RabbitMQ broker uses long node names for Erlang distribution, this option must be specified.
--erlang-cookie cookie
Erlang distribution cookie. If the RabbitMQ node uses a custom Erlang cookie value, the cookie value must be set with this parameter.

COMMANDS

help [-l] [command_name]
Prints usage for all available commands.

-l, --list-commands
List command usages only, without parameter explanation.
command_name
Prints usage for the specified command.

Application Management

force_reset
Forcefully returns a RabbitMQ node to its virgin state.

The force_reset command differs from reset in that it resets the node unconditionally, regardless of the current management database state and cluster configuration. It should only be used as a last resort if the database or cluster configuration has been corrupted.

For reset and force_reset to succeed the RabbitMQ application must have been stopped, e.g. with stop_app.

For example, to reset the RabbitMQ node:

rabbitmqctl force_reset
hipe_compile directory
Performs HiPE-compilation and caches resulting .beam-files in the given directory.

Parent directories are created if necessary. Any existing .beam files from the directory are automatically deleted prior to compilation.

To use these precompiled files, set the RABBITMQ_SERVER_CODE_PATH environment variable to the directory specified in the hipe_compile invocation.

For example, to HiPE-compile modules and store them to /tmp/rabbit-hipe/ebin directory:

rabbitmqctl hipe_compile /tmp/rabbit-hipe/ebin
reset
Returns a RabbitMQ node to its virgin state.

Removes the node from any cluster it belongs to, removes all data from the management database, such as configured users and vhosts, and deletes all persistent messages.

For reset and force_reset to succeed the RabbitMQ application must have been stopped, e.g. with stop_app.

For example, to reset the RabbitMQ node:

rabbitmqctl reset
rotate_logs
Instructs the RabbitMQ node to perform internal log rotation.

Log rotation is performed according to lager settings specified in configuration file.

Note that there is no need to call this command in case of external log rotation (e.g. from logrotate(8)), because lager detects renames and automatically reopens log files.

For example, this command starts internal log rotation process:

rabbitmqctl rotate_logs

Rotation is performed asynchronously, so there is no guarantee that it will be completed when this command returns.

shutdown
Shuts down the Erlang process on which RabbitMQ is running. The command is blocking and will return after the Erlang process exits. If RabbitMQ fails to stop, it will return a non-zero exit code.

Unlike the stop command, the shutdown command:

  • does not require a pid_file to wait for the Erlang process to exit
  • returns a non-zero exit code if RabbitMQ node is not running

For example, to shut down the Erlang process on which RabbitMQ is running:

rabbitmqctl shutdown
start_app
Starts the RabbitMQ application.

This command is typically run after performing other management actions that required the RabbitMQ application to be stopped, e.g. reset.

For example, to instruct the RabbitMQ node to start the RabbitMQ application:

rabbitmqctl start_app
stop [pid_file]
Stops the Erlang node on which RabbitMQ is running. To restart the node follow the instructions for “Running the Server” in the installation guide.

If a pid_file is specified, also waits for the process specified there to terminate. See the description of the wait command for details on this file.

For example, to instruct the RabbitMQ node to terminate:

rabbitmqctl stop
stop_app
Stops the RabbitMQ application, leaving the Erlang node running.

This command is typically run prior to performing other management actions that require the RabbitMQ application to be stopped, e.g. reset.

For example, to instruct the RabbitMQ node to stop the RabbitMQ application:

rabbitmqctl stop_app
wait pid_file, wait --pid pid
Waits for the RabbitMQ application to start.

This command will wait for the RabbitMQ application to start at the node. It will wait for the pid file to be created if pidfile is specified, then for a process with a pid specified in the pid file or the --pid argument, and then for the RabbitMQ application to start in that process. It will fail if the process terminates without starting the RabbitMQ application.

If the specified pidfile is not created or the Erlang node is not started within --timeout, the command will fail. The default timeout is 10 seconds.

A suitable pid file is created by the rabbitmq-server(8) script. By default this is located in the Mnesia directory. Modify the RABBITMQ_PID_FILE environment variable to change the location.

For example, this command will return when the RabbitMQ node has started up:

rabbitmqctl wait /var/run/rabbitmq/pid

Cluster Management

join_cluster clusternode [--ram]
clusternode
Node to cluster with.
--ram
If provided, the node will join the cluster as a RAM node.

Instructs the node to become a member of the cluster that the specified node is in. Before clustering, the node is reset, so be careful when using this command. For this command to succeed the RabbitMQ application must have been stopped, e.g. with stop_app.

Cluster nodes can be of two types: disc or RAM. Disc nodes replicate data in RAM and on disc, thus providing redundancy in the event of node failure and recovery from global events such as power failure across all nodes. RAM nodes replicate data in RAM only (with the exception of queue contents, which can reside on disc if the queue is persistent or too big to fit in memory) and are mainly used for scalability. RAM nodes are more performant only when managing resources (e.g. adding/removing queues, exchanges, or bindings). A cluster must always have at least one disc node, and usually should have more than one.

The node will be a disc node by default. If you wish to create a RAM node, provide the --ram flag.

After executing the join_cluster command, whenever the RabbitMQ application is started on the current node it will attempt to connect to the nodes that were in the cluster when the node went down.

To leave a cluster, reset the node. You can also remove nodes remotely with the forget_cluster_node command.

For more details see the Clustering guide.

For example, this command instructs the RabbitMQ node to join the cluster that “hare@elena” is part of, as a ram node:

rabbitmqctl join_cluster hare@elena --ram
cluster_status
Displays all the nodes in the cluster grouped by node type, together with the currently running nodes.

For example, this command displays the nodes in the cluster:

rabbitmqctl cluster_status
change_cluster_node_type type
Changes the type of the cluster node.

The type must be one of the following:

  • disc
  • ram

The node must be stopped for this operation to succeed, and when turning a node into a RAM node the node must not be the only disc node in the cluster.

For example, this command will turn a RAM node into a disc node:

rabbitmqctl change_cluster_node_type disc
forget_cluster_node [--offline]
--offline
Enables node removal from an offline node. This is only useful in the situation where all the nodes are offline and the last node to go down cannot be brought online, thus preventing the whole cluster from starting. It should not be used in any other circumstances since it can lead to inconsistencies.

Removes a cluster node remotely. The node that is being removed must be offline, while the node we are removing from must be online, except when using the –offline flag.

When using the --offline flag, rabbitmqctl will not attempt to connect to a node as normal; instead it will temporarily become the node in order to make the change. This is useful if the node cannot be started normally. In this case the node will become the canonical source for cluster metadata (e.g. which queues exist), even if it was not before. Therefore you should use this command on the latest node to shut down if at all possible.

For example, this command will remove the node “rabbit@stringer” from the node “hare@mcnulty”:

rabbitmqctl -n hare@mcnulty forget_cluster_node rabbit@stringer
rename_cluster_node oldnode1 newnode1 [oldnode2 newnode2 …]
Supports renaming of cluster nodes in the local database.

This subcommand causes rabbitmqctl to temporarily become the node in order to make the change. The local cluster node must therefore be completely stopped; other nodes can be online or offline.

This subcommand takes an even number of arguments, in pairs representing the old and new names for nodes. You must specify the old and new names for this node and for any other nodes that are stopped and being renamed at the same time.

It is possible to stop all nodes and rename them all simultaneously (in which case old and new names for all nodes must be given to every node) or stop and rename nodes one at a time (in which case each node only needs to be told how its own name is changing).

For example, this command will rename the node “rabbit@misshelpful” to the node “rabbit@cordelia”:

rabbitmqctl rename_cluster_node rabbit@misshelpful rabbit@cordelia
update_cluster_nodes clusternode
clusternode
The node to consult for up-to-date information.

Instructs an already clustered node to contact clusternode to cluster when waking up. This is different from join_cluster since it does not join any cluster – it checks that the node is already in a cluster with clusternode.

The need for this command is motivated by the fact that clusters can change while a node is offline. Consider the situation in which node A and B are clustered. A goes down, C clusters with B, and then B leaves the cluster. When A wakes up, it’ll try to contact B, but this will fail since B is not in the cluster anymore. The following command will solve this situation:

rabbitmqctl -n A update_cluster_nodes C
force_boot
Ensures that the node will start next time, even if it was not the last to shut down.

Normally when you shut down a RabbitMQ cluster altogether, the first node you restart should be the last one to go down, since it may have seen things happen that other nodes did not. But sometimes that’s not possible: for instance if the entire cluster loses power then all nodes may think they were not the last to shut down.

In such a case you can invoke force_boot while the node is down. This will tell the node to unconditionally start next time you ask it to. If any changes happened to the cluster after this node shut down, they will be lost.

If the last node to go down is permanently lost then you should use forget_cluster_node --offline in preference to this command, as it will ensure that mirrored queues which were mastered on the lost node get promoted.

For example, this will force the node not to wait for other nodes next time it is started:

rabbitmqctl force_boot
sync_queue [-p vhost] queue
queue
The name of the queue to synchronise.

Instructs a mirrored queue with unsynchronised slaves to synchronise itself. The queue will block while synchronisation takes place (all publishers to and consumers from the queue will block). The queue must be mirrored for this command to succeed.

Note that unsynchronised queues from which messages are being drained will become synchronised eventually. This command is primarily useful for queues which are not being drained.

cancel_sync_queue [-p vhost] queue
queue
The name of the queue to cancel synchronisation for.

Instructs a synchronising mirrored queue to stop synchronising itself.

purge_queue [-p vhost] queue
queue
The name of the queue to purge.

Purges a queue (removes all messages in it).
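
For example, this command purges the queue called “myqueue” in the virtual host “/myvhost” (both names are illustrative):

rabbitmqctl purge_queue -p /myvhost myqueue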

set_cluster_name name
Sets the cluster name to name. The cluster name is announced to clients on connection, and used by the federation and shovel plugins to record where a message has been. The cluster name is by default derived from the hostname of the first node in the cluster, but can be changed.

For example, this sets the cluster name to “london”:

rabbitmqctl set_cluster_name london

User Management

Note that rabbitmqctl manages the RabbitMQ internal user database. Users from any alternative authentication backend will not be visible to rabbitmqctl.

add_user username password
username
The name of the user to create.
password
The password the created user will use to log in to the broker.

For example, this command instructs the RabbitMQ broker to create a (non-administrative) user named “tonyg” with (initial) password “changeit”:

rabbitmqctl add_user tonyg changeit
delete_user username
username
The name of the user to delete.

For example, this command instructs the RabbitMQ broker to delete the user named “tonyg”:

rabbitmqctl delete_user tonyg
change_password username newpassword
username
The name of the user whose password is to be changed.
newpassword
The new password for the user.

For example, this command instructs the RabbitMQ broker to change the password for the user named “tonyg” to “newpass”:

rabbitmqctl change_password tonyg newpass
clear_password username
username
The name of the user whose password is to be cleared.

For example, this command instructs the RabbitMQ broker to clear the password for the user named “tonyg”:

rabbitmqctl clear_password tonyg

This user now cannot log in with a password (but may be able to through e.g. SASL EXTERNAL if configured).

authenticate_user username password
username
The name of the user.
password
The password of the user.

For example, this command instructs the RabbitMQ broker to authenticate the user named “tonyg” with password “verifyit”:

rabbitmqctl authenticate_user tonyg verifyit
set_user_tags username [tag …]
username
The name of the user whose tags are to be set.
tag
Zero, one or more tags to set. Any existing tags will be removed.

For example, this command instructs the RabbitMQ broker to ensure the user named “tonyg” is an administrator:

rabbitmqctl set_user_tags tonyg administrator

This has no effect when the user logs in via AMQP, but can be used to permit the user to manage users, virtual hosts and permissions when the user logs in via some other means (for example with the management plugin).

This command instructs the RabbitMQ broker to remove any tags from the user named “tonyg”:

rabbitmqctl set_user_tags tonyg
list_users
Lists users. Each result row will contain the user name followed by a list of the tags set for that user.

For example, this command instructs the RabbitMQ broker to list all users:

rabbitmqctl list_users

Access Control

Note that rabbitmqctl manages the RabbitMQ internal user database. Permissions for users from any alternative authorisation backend will not be visible to rabbitmqctl.

add_vhost vhost
vhost
The name of the virtual host entry to create.

Creates a virtual host.

For example, this command instructs the RabbitMQ broker to create a new virtual host called “test”:

rabbitmqctl add_vhost test
delete_vhost vhost
vhost
The name of the virtual host entry to delete.

Deletes a virtual host.

Deleting a virtual host deletes all its exchanges, queues, bindings, user permissions, parameters and policies.

For example, this command instructs the RabbitMQ broker to delete the virtual host called “test”:

rabbitmqctl delete_vhost test
list_vhosts [vhostinfoitem …]
Lists virtual hosts.

The vhostinfoitem parameter is used to indicate which virtual host information items to include in the results. The column order in the results will match the order of the parameters. vhostinfoitem can take any value from the list that follows:

name
The name of the virtual host with non-ASCII characters escaped as in C.
tracing
Whether tracing is enabled for this virtual host.

If no vhostinfoitem are specified then the vhost name is displayed.

For example, this command instructs the RabbitMQ broker to list all virtual hosts:

rabbitmqctl list_vhosts name tracing
set_permissions [-p vhost] user conf write read
vhost
The name of the virtual host to which to grant the user access, defaulting to “/”.
user
The name of the user to grant access to the specified virtual host.
conf
A regular expression matching resource names for which the user is granted configure permissions.
write
A regular expression matching resource names for which the user is granted write permissions.
read
A regular expression matching resource names for which the user is granted read permissions.

Sets user permissions.

For example, this command instructs the RabbitMQ broker to grant the user named “tonyg” access to the virtual host called “/myvhost”, with configure permissions on all resources whose names start with “tonyg-”, and write and read permissions on all resources:

rabbitmqctl set_permissions -p /myvhost tonyg "^tonyg-.*" ".*" ".*"
clear_permissions [-p vhost] username
vhost
The name of the virtual host to which to deny the user access, defaulting to “/”.
username
The name of the user to deny access to the specified virtual host.

Clears user permissions.

For example, this command instructs the RabbitMQ broker to deny the user named “tonyg” access to the virtual host called “/myvhost”:

rabbitmqctl clear_permissions -p /myvhost tonyg
list_permissions [-p vhost]
vhost
The name of the virtual host for which to list the users that have been granted access to it, and their permissions. Defaults to “/”.

Lists permissions in a virtual host.

For example, this command instructs the RabbitMQ broker to list all the users which have been granted access to the virtual host called “/myvhost”, and the permissions they have for operations on resources in that virtual host. Note that an empty string means no permissions granted:

rabbitmqctl list_permissions -p /myvhost
list_user_permissions username
username
The name of the user for which to list the permissions.

Lists user permissions.

For example, this command instructs the RabbitMQ broker to list all the virtual hosts to which the user named “tonyg” has been granted access, and the permissions the user has for operations on resources in these virtual hosts:

rabbitmqctl list_user_permissions tonyg
set_topic_permissions [-p vhost] user exchange write read
vhost
The name of the virtual host to which to grant the user access, defaulting to “/”.
user
The name of the user the permissions apply to in the target virtual host.
exchange
The name of the topic exchange the authorisation check will be applied to.
write
A regular expression matching the routing key of the published message.
read
A regular expression matching the routing key of the consumed message.

Sets user topic permissions.

For example, this command instructs the RabbitMQ broker to let the user named “tonyg” publish and consume messages going through the “amq.topic” exchange of the “/myvhost” virtual host with a routing key starting with “tonyg-”:

rabbitmqctl set_topic_permissions -p /myvhost tonyg amq.topic "^tonyg-.*" "^tonyg-.*"

Topic permissions support variable expansion for the following variables: username, vhost, and client_id. Note that client_id is expanded only when using MQTT. The previous example could be made more generic by using “^{username}-.*”:

rabbitmqctl set_topic_permissions -p /myvhost tonyg amq.topic "^{username}-.*" "^{username}-.*"
clear_topic_permissions [-p vhost] username [exchange]
vhost
The name of the virtual host in which to clear the topic permissions, defaulting to “/”.
username
The name of the user whose topic permissions will be cleared in the specified virtual host.
exchange
The name of the topic exchange for which to clear topic permissions, defaulting to all the topic exchanges the given user has topic permissions for.

Clears user topic permissions.

For example, this command instructs the RabbitMQ broker to remove topic permissions for user named “tonyg” for the topic exchange “amq.topic” in the virtual host called “/myvhost”:

rabbitmqctl clear_topic_permissions -p /myvhost tonyg amq.topic
list_topic_permissions [-p vhost]
vhost
The name of the virtual host for which to list the users’ topic permissions. Defaults to “/”.

Lists topic permissions in a virtual host.

For example, this command instructs the RabbitMQ broker to list all the users which have been granted topic permissions in the virtual host called “/myvhost”:

rabbitmqctl list_topic_permissions -p /myvhost
list_user_topic_permissions username
username
The name of the user for which to list the topic permissions.

Lists user topic permissions.

For example, this command instructs the RabbitMQ broker to list all the virtual hosts to which the user named “tonyg” has been granted access, and the topic permissions the user has in these virtual hosts:

rabbitmqctl list_user_topic_permissions tonyg

Parameter Management

Certain features of RabbitMQ (such as the federation plugin) are controlled by dynamic, cluster-wide parameters. There are 2 kinds of parameters: parameters scoped to a virtual host and global parameters. Each vhost-scoped parameter consists of a component name, a name and a value. The component name and name are strings, and the value is an Erlang term. A global parameter consists of a name and value. The name is a string and the value is an Erlang term. Parameters can be set, cleared and listed. In general you should refer to the documentation for the feature in question to see how to set parameters.

set_parameter [-p vhost] component_name name value
Sets a parameter.

component_name
The name of the component for which the parameter is being set.
name
The name of the parameter being set.
value
The value for the parameter, as a JSON term. In most shells you are very likely to need to quote this.

For example, this command sets the parameter “local_username” for the “federation” component in the default virtual host to the JSON term “guest”:

rabbitmqctl set_parameter federation local_username '"guest"'
clear_parameter [-p vhost] component_name name
Clears a parameter.

component_name
The name of the component for which the parameter is being cleared.
name
The name of the parameter being cleared.

For example, this command clears the parameter “local_username” for the “federation” component in the default virtual host:

rabbitmqctl clear_parameter federation local_username
list_parameters [-p vhost]
Lists all parameters for a virtual host.

For example, this command lists all parameters in the default virtual host:

rabbitmqctl list_parameters
set_global_parameter name value
Sets a global runtime parameter. This is similar to set_parameter but the key-value pair isn’t tied to a virtual host.

name
The name of the global runtime parameter being set.
value
The value for the global runtime parameter, as a JSON term. In most shells you are very likely to need to quote this.

For example, this command sets the global runtime parameter “mqtt_default_vhosts” to the JSON term {"O=client,CN=guest":"/"}:

rabbitmqctl set_global_parameter mqtt_default_vhosts '{"O=client,CN=guest":"/"}'
clear_global_parameter name
Clears a global runtime parameter. This is similar to clear_parameter but the key-value pair isn’t tied to a virtual host.

name
The name of the global runtime parameter being cleared.

For example, this command clears the global runtime parameter “mqtt_default_vhosts”:

rabbitmqctl clear_global_parameter mqtt_default_vhosts
list_global_parameters
Lists all global runtime parameters. This is similar to list_parameters but the global runtime parameters are not tied to any virtual host.

For example, this command lists all global parameters:

rabbitmqctl list_global_parameters

Policy Management

Policies are used to control and modify the behaviour of queues and exchanges on a cluster-wide basis. Policies apply within a given vhost, and consist of a name, pattern, definition and an optional priority. Policies can be set, cleared and listed.

set_policy [-p vhost] [--priority priority] [--apply-to apply-to] name pattern definition
Sets a policy.

name
The name of the policy.
pattern
The regular expression which, when it matches a given resource, causes the policy to apply.
definition
The definition of the policy, as a JSON term. In most shells you are very likely to need to quote this.
priority
The priority of the policy as an integer. Higher numbers indicate greater precedence. The default is 0.
apply-to
Which types of object this policy should apply to. Possible values are:

  • queues
  • exchanges
  • all

The default is all.

For example, this command sets the policy “federate-me” in the default virtual host so that built-in exchanges are federated:

rabbitmqctl set_policy federate-me "^amq." '{"federation-upstream-set":"all"}'
clear_policy [-p vhost] name
Clears a policy.

name
The name of the policy being cleared.

For example, this command clears the “federate-me” policy in the default virtual host:

rabbitmqctl clear_policy federate-me
list_policies [-p vhost]
Lists all policies for a virtual host.

For example, this command lists all policies in the default virtual host:

rabbitmqctl list_policies
set_operator_policy [-p vhost] [--priority priority] [--apply-to apply-to] name pattern definition
Sets an operator policy that overrides a subset of arguments in user policies. Arguments are identical to those of set_policy.

Supported arguments are:

  • expires
  • message-ttl
  • max-length
  • max-length-bytes
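
For example, this command (with an illustrative policy name and TTL) caps the message TTL at 60 seconds for every queue in the default virtual host:

rabbitmqctl set_operator_policy --apply-to queues ttl-cap ".*" '{"message-ttl": 60000}'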
clear_operator_policy [-p vhost] name
Clears an operator policy. Arguments are identical to those of clear_policy.
list_operator_policies [-p vhost]
Lists operator policy overrides for a virtual host. Arguments are identical to those of list_policies.

Virtual Host Limits

It is possible to enforce certain limits on virtual hosts.

set_vhost_limits [-p vhost] definition
Sets virtual host limits.

definition
The definition of the limits, as a JSON term. In most shells you are very likely to need to quote this.

Recognised limits are:

  • max-connections
  • max-queues

Use a negative value to specify “no limit”.

For example, this command limits the max number of concurrent connections in vhost “qa_env” to 64:

rabbitmqctl set_vhost_limits -p qa_env '{"max-connections": 64}'

This command limits the max number of queues in vhost “qa_env” to 256:

rabbitmqctl set_vhost_limits -p qa_env '{"max-queues": 256}'

This command clears the max number of connections limit in vhost “qa_env”:

rabbitmqctl set_vhost_limits -p qa_env '{"max-connections": -1}'

This command disables client connections in vhost “qa_env”:

rabbitmqctl set_vhost_limits -p qa_env '{"max-connections": 0}'
clear_vhost_limits [-p vhost]
Clears virtual host limits.

For example, this command clears vhost limits in vhost “qa_env”:

rabbitmqctl clear_vhost_limits -p qa_env
list_vhost_limits [-p vhost] [--global]
Displays configured virtual host limits.

--global
Show limits for all vhosts. Suppresses the -p parameter.
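
For example, this command displays the configured limits for every virtual host:

rabbitmqctl list_vhost_limits --global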

Server Status

The server status queries interrogate the server and return a list of results with tab-delimited columns. Some queries (list_queues, list_exchanges, list_bindings and list_consumers) accept an optional vhost parameter. This parameter, if present, must be specified immediately after the query.

The list_queues, list_exchanges and list_bindings commands accept an optional virtual host parameter for which to display results. The default value is “/”.

list_queues [-p vhost] [--offline | --online | --local] [queueinfoitem …]
Returns queue details. Queue details of the “/” virtual host are returned if the -p flag is absent. The -p flag can be used to override this default.

Displayed queues can be filtered by their status or location using one of the following mutually exclusive options:

--offline
List only those durable queues that are not currently available (more specifically, their master node isn’t).
--online
List queues that are currently available (their master node is).
--local
List only those queues whose master process is located on the current node.

The queueinfoitem parameter is used to indicate which queue information items to include in the results. The column order in the results will match the order of the parameters. queueinfoitem can take any value from the list that follows:

name
The name of the queue with non-ASCII characters escaped as in C.
durable
Whether or not the queue survives server restarts.
auto_delete
Whether the queue will be deleted automatically when no longer used.
arguments
Queue arguments.
policy
Policy name applying to the queue.
pid
Id of the Erlang process associated with the queue.
owner_pid
Id of the Erlang process representing the connection which is the exclusive owner of the queue. Empty if the queue is non-exclusive.
exclusive
True if queue is exclusive (i.e. has owner_pid), false otherwise.
exclusive_consumer_pid
Id of the Erlang process representing the channel of the exclusive consumer subscribed to this queue. Empty if there is no exclusive consumer.
exclusive_consumer_tag
Consumer tag of the exclusive consumer subscribed to this queue. Empty if there is no exclusive consumer.
messages_ready
Number of messages ready to be delivered to clients.
messages_unacknowledged
Number of messages delivered to clients but not yet acknowledged.
messages
Sum of ready and unacknowledged messages (queue depth).
messages_ready_ram
Number of messages from messages_ready which are resident in ram.
messages_unacknowledged_ram
Number of messages from messages_unacknowledged which are resident in ram.
messages_ram
Total number of messages which are resident in ram.
messages_persistent
Total number of persistent messages in the queue (will always be 0 for transient queues).
message_bytes
Sum of the size of all message bodies in the queue. This does not include the message properties (including headers) or any overhead.
message_bytes_ready
Like message_bytes but counting only those messages ready to be delivered to clients.
message_bytes_unacknowledged
Like message_bytes but counting only those messages delivered to clients but not yet acknowledged.
message_bytes_ram
Like message_bytes but counting only those messages which are in RAM.
message_bytes_persistent
Like message_bytes but counting only those messages which are persistent.
head_message_timestamp
The timestamp property of the first message in the queue, if present. Timestamps of messages only appear when they are in the paged-in state.
disk_reads
Total number of times messages have been read from disk by this queue since it started.
disk_writes
Total number of times messages have been written to disk by this queue since it started.
consumers
Number of consumers.
consumer_utilisation
Fraction of the time (between 0.0 and 1.0) that the queue is able to immediately deliver messages to consumers. This can be less than 1.0 if consumers are limited by network congestion or prefetch count.
memory
Bytes of memory consumed by the Erlang process associated with the queue, including stack, heap and internal structures.
slave_pids
If the queue is mirrored, this gives the IDs of the current slaves.
synchronised_slave_pids
If the queue is mirrored, this gives the IDs of the current slaves which are synchronised with the master – i.e. those which could take over from the master without message loss.
state
The state of the queue. Normally “running”, but may be “{syncing, message_count}” if the queue is synchronising.

Queues which are located on cluster nodes that are currently down will be shown with a status of “down” (and most other queueinfoitem will be unavailable).

If no queueinfoitem are specified then queue name and depth are displayed.

For example, this command displays the depth and number of consumers for each queue of the virtual host named “/myvhost”:

rabbitmqctl list_queues -p /myvhost messages consumers
list_exchanges [-p vhost] [exchangeinfoitem …]
Returns exchange details. Exchange details of the “/” virtual host are returned if the -p flag is absent. The -p flag can be used to override this default.

The exchangeinfoitem parameter is used to indicate which exchange information items to include in the results. The column order in the results will match the order of the parameters. exchangeinfoitem can take any value from the list that follows:

name
The name of the exchange with non-ASCII characters escaped as in C.
type
The exchange type, such as:

  • direct
  • topic
  • headers
  • fanout
durable
Whether or not the exchange survives server restarts.
auto_delete
Whether the exchange will be deleted automatically when no longer used.
internal
Whether the exchange is internal, i.e. cannot be directly published to by a client.
arguments
Exchange arguments.
policy
Policy name for applying to the exchange.

If no exchangeinfoitem are specified then exchange name and type are displayed.

For example, this command displays the name and type for each exchange of the virtual host named “/myvhost”:

rabbitmqctl list_exchanges -p /myvhost name type
list_bindings [-p vhost] [bindinginfoitem …]
Returns binding details. By default the bindings for the “/” virtual host are returned. The -p flag can be used to override this default.

The bindinginfoitem parameter is used to indicate which binding information items to include in the results. The column order in the results will match the order of the parameters. bindinginfoitem can take any value from the list that follows:

source_name
The name of the source of messages to which the binding is attached. With non-ASCII characters escaped as in C.
source_kind
The kind of the source of messages to which the binding is attached. Currently always exchange. With non-ASCII characters escaped as in C.
destination_name
The name of the destination of messages to which the binding is attached. With non-ASCII characters escaped as in C.
destination_kind
The kind of the destination of messages to which the binding is attached. With non-ASCII characters escaped as in C.
routing_key
The binding’s routing key, with non-ASCII characters escaped as in C.
arguments
The binding’s arguments.

If no bindinginfoitem are specified then all above items are displayed.

For example, this command displays the exchange name and queue name of the bindings in the virtual host named “/myvhost”:

rabbitmqctl list_bindings -p /myvhost exchange_name queue_name
list_connections [connectioninfoitem …]
Returns TCP/IP connection statistics.

The connectioninfoitem parameter is used to indicate which connection information items to include in the results. The column order in the results will match the order of the parameters. connectioninfoitem can take any value from the list that follows:

pid
Id of the Erlang process associated with the connection.
name
Readable name for the connection.
port
Server port.
host
Server hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was disabled.
peer_port
Peer port.
peer_host
Peer hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was not enabled.
ssl
Boolean indicating whether the connection is secured with SSL.
ssl_protocol
SSL protocol (e.g. “tlsv1”).
ssl_key_exchange
SSL key exchange algorithm (e.g. “rsa”).
ssl_cipher
SSL cipher algorithm (e.g. “aes_256_cbc”).
ssl_hash
SSL hash function (e.g. “sha”).
peer_cert_subject
The subject of the peer’s SSL certificate, in RFC4514 form.
peer_cert_issuer
The issuer of the peer’s SSL certificate, in RFC4514 form.
peer_cert_validity
The period for which the peer’s SSL certificate is valid.
state
Connection state; one of:

  • starting
  • tuning
  • opening
  • running
  • flow
  • blocking
  • blocked
  • closing
  • closed
channels
Number of channels using the connection.
protocol
Version of the AMQP protocol in use; currently one of:

  • {0,9,1}
  • {0,8,0}

Note that if a client requests an AMQP 0-9 connection, we treat it as AMQP 0-9-1.

auth_mechanism
SASL authentication mechanism used, such as “PLAIN”.
user
Username associated with the connection.
vhost
Virtual host name with non-ASCII characters escaped as in C.
timeout
Connection timeout / negotiated heartbeat interval, in seconds.
frame_max
Maximum frame size (bytes).
channel_max
Maximum number of channels on this connection.
client_properties
Informational properties transmitted by the client during connection establishment.
recv_oct
Octets received.
recv_cnt
Packets received.
send_oct
Octets sent.
send_cnt
Packets sent.
send_pend
Send queue size.
connected_at
Date and time this connection was established, as timestamp.

If no connectioninfoitem are specified then user, peer host, peer port, time since flow control and memory block state are displayed.

For example, this command displays the send queue size and server port for each connection:

rabbitmqctl list_connections send_pend port
list_channels [channelinfoitem …]
Returns information on all current channels, the logical containers executing most AMQP commands. This includes channels that are part of ordinary AMQP connections, and channels created by various plug-ins and other extensions.

The channelinfoitem parameter is used to indicate which channel information items to include in the results. The column order in the results will match the order of the parameters. channelinfoitem can take any value from the list that follows:

pid
Id of the Erlang process associated with the connection.
connection
Id of the Erlang process associated with the connection to which the channel belongs.
name
Readable name for the channel.
number
The number of the channel, which uniquely identifies it within a connection.
user
Username associated with the channel.
vhost
Virtual host in which the channel operates.
transactional
True if the channel is in transactional mode, false otherwise.
confirm
True if the channel is in confirm mode, false otherwise.
consumer_count
Number of logical AMQP consumers retrieving messages via the channel.
messages_unacknowledged
Number of messages delivered via this channel but not yet acknowledged.
messages_uncommitted
Number of messages received in an as yet uncommitted transaction.
acks_uncommitted
Number of acknowledgements received in an as yet uncommitted transaction.
messages_unconfirmed
Number of published messages not yet confirmed. On channels not in confirm mode, this remains 0.
prefetch_count
QoS prefetch limit for new consumers, 0 if unlimited.
global_prefetch_count
QoS prefetch limit for the entire channel, 0 if unlimited.

If no channelinfoitem are specified then pid, user, consumer_count, and messages_unacknowledged are assumed.

For example, this command displays the connection process and count of unacknowledged messages for each channel:

rabbitmqctl list_channels connection messages_unacknowledged
list_consumers [-p vhost]
Lists consumers, i.e. subscriptions to a queue’s message stream. Each line printed shows, separated by tab characters, the name of the queue subscribed to, the id of the channel process via which the subscription was created and is managed, the consumer tag which uniquely identifies the subscription within a channel, a boolean indicating whether acknowledgements are expected for messages delivered to this consumer, an integer indicating the prefetch limit (with 0 meaning “none”), and any arguments for this consumer.
status
Displays broker status information such as the running applications on the current Erlang node, RabbitMQ and Erlang versions, OS name, memory and file descriptor statistics. (See the cluster_status command to find out which nodes are clustered and running.)

For example, this command displays information about the RabbitMQ broker:

rabbitmqctl status
node_health_check
Health check of the RabbitMQ node. Verifies the rabbit application is running, list_queues and list_channels return, and alarms are not set.

For example, this command performs a health check on the RabbitMQ node:

rabbitmqctl node_health_check -n rabbit@stringer
environment
Displays the name and value of each variable in the application environment for each running application.
report
Generate a server status report containing a concatenation of all server status information for support purposes. The output should be redirected to a file when accompanying a support request.

For example, this command creates a server report which may be attached to a support request email:

rabbitmqctl report > server_report.txt
eval expr
Evaluate an arbitrary Erlang expression.

For example, this command returns the name of the node to which rabbitmqctl has connected:

rabbitmqctl eval "node()."

Miscellaneous

close_connection connectionpid explanation
connectionpid
Id of the Erlang process associated with the connection to close.
explanation
Explanation string.

Instructs the broker to close the connection associated with the Erlang process id connectionpid (see also the list_connections command), passing the explanation string to the connected client as part of the AMQP connection shutdown protocol.

For example, this command instructs the RabbitMQ broker to close the connection associated with the Erlang process id “<rabbit@tanto.4262.0>”, passing the explanation “go away” to the connected client:

rabbitmqctl close_connection "<rabbit@tanto.4262.0>" "go away"
close_all_connections [-p vhost] [--global] [--per-connection-delay delay] [--limit limit] explanation
-p vhost
The name of the virtual host for which connections should be closed. Ignored when --global is specified.
--global
Whether connections should be closed for all vhosts. Overrides -p.
--per-connection-delay delay
Time in milliseconds to wait after each connection is closed.
--limit limit
Number of connections to close. Only works per vhost. Ignored when --global is specified.
explanation
Explanation string.

Instructs the broker to close all connections for the specified vhost or entire RabbitMQ node.

For example, this command instructs the RabbitMQ broker to close 10 connections on “qa_env” vhost, passing the explanation “Please close”:

rabbitmqctl close_all_connections -p qa_env --limit 10 'Please close'

This command instructs broker to close all connections to the node:

rabbitmqctl close_all_connections --global
trace_on [-p vhost]
vhost
The name of the virtual host for which to start tracing.

Starts tracing. Note that the trace state is not persistent; it will revert to being off if the server is restarted.

trace_off [-p vhost]
vhost
The name of the virtual host for which to stop tracing.

Stops tracing.
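
For example, these commands start and then stop tracing in the (illustrative) virtual host “/myvhost”:

rabbitmqctl trace_on -p /myvhost
rabbitmqctl trace_off -p /myvhost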

set_vm_memory_high_watermark fraction
fraction
The new memory threshold fraction at which flow control is triggered, as a floating point number greater than or equal to 0.
set_vm_memory_high_watermark absolute memory_limit
memory_limit
The new memory limit at which flow control is triggered, expressed in bytes as an integer number greater than or equal to 0 or as a string with memory units (e.g. 512M or 1G). Available units are:

k, kiB
kibibytes (2^10 bytes)
M, MiB
mebibytes (2^20 bytes)
G, GiB
gibibytes (2^30 bytes)
kB
kilobytes (10^3 bytes)
MB
megabytes (10^6 bytes)
GB
gigabytes (10^9 bytes)
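
For example, the first command sets the memory threshold to 40% of installed RAM, and the second sets an absolute limit of 1 gibibyte:

rabbitmqctl set_vm_memory_high_watermark 0.4
rabbitmqctl set_vm_memory_high_watermark absolute 1G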
set_disk_free_limit disk_limit
disk_limit
Lower bound limit as an integer in bytes or a string with memory units (see vm_memory_high_watermark), e.g. 512M or 1G. Once free disk space reaches the limit, a disk alarm will be set.
set_disk_free_limit mem_relative fraction
fraction
Limit relative to the total amount of available RAM, as a non-negative floating point number. Values lower than 1.0 can be dangerous and should be used carefully.
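
For example, the first command sets the free disk space limit to 1 gibibyte, and the second sets it to 1.5 times the installed RAM:

rabbitmqctl set_disk_free_limit 1G
rabbitmqctl set_disk_free_limit mem_relative 1.5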
encode value passphrase [--cipher cipher] [--hash hash] [--iterations iterations]
value passphrase
Value to encrypt and passphrase.

For example:

rabbitmqctl encode '<<"guest">>' mypassphrase
--cipher cipher, --hash hash, --iterations iterations
Options to specify the encryption settings. They can be used independently.

For example:

rabbitmqctl encode --cipher blowfish_cfb64 --hash sha256 --iterations 10000 '<<"guest">>' mypassphrase
decode value passphrase [--cipher cipher] [--hash hash] [--iterations iterations]
value passphrase
Value to decrypt (as produced by the encode command) and passphrase.

For example:

rabbitmqctl decode '{encrypted, <<"…">>}' mypassphrase
--cipher cipher, --hash hash, --iterations iterations
Options to specify the decryption settings. They can be used independently.

For example:

rabbitmqctl decode --cipher blowfish_cfb64 --hash sha256 --iterations 10000 '{encrypted, <<"…">>}' mypassphrase
list_hashes
Lists hash functions supported by encoding commands.

For example, this command instructs the RabbitMQ broker to list all hash functions supported by encoding commands:

rabbitmqctl list_hashes
list_ciphers
Lists cipher suites supported by encoding commands.

For example, this command instructs the RabbitMQ broker to list all cipher suites supported by encoding commands:

rabbitmqctl list_ciphers

PLUGIN COMMANDS

RabbitMQ plugins can extend the rabbitmqctl tool with new commands when enabled. The currently available commands can be found in the rabbitmqctl help output. The following commands are added by RabbitMQ plugins that ship in the default distribution:

Shovel plugin

shovel_status
Prints a list of configured shovels.
delete_shovel [-p vhost] name
Instructs the RabbitMQ node to delete the configured shovel by name.
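
For example (the vhost and shovel names are hypothetical):

rabbitmqctl delete_shovel -p my_vhost my-shovel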

Federation plugin

federation_status [–only-down]
Prints a list of federation links.

–only-down
Only list federation links which are not running.
restart_federation_link link_id
Instructs the RabbitMQ node to restart the federation link with specified link_id.

AMQP-1.0 plugin

list_amqp10_connections [amqp10_connectioninfoitem …]
Similar to the list_connections command, but returns fields which make sense for AMQP-1.0 connections. amqp10_connectioninfoitem parameter is used to indicate which connection information items to include in the results. The column order in the results will match the order of the parameters. amqp10_connectioninfoitem can take any value from the list that follows:

pid
Id of the Erlang process associated with the connection.
auth_mechanism
SASL authentication mechanism used, such as “PLAIN”.
host
Server hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was disabled.
frame_max
Maximum frame size (bytes).
timeout
Connection timeout / negotiated heartbeat interval, in seconds.
user
Username associated with the connection.
state
Connection state; one of:

  • starting
  • waiting_amqp0100
  • securing
  • running
  • blocking
  • blocked
  • closing
  • closed
recv_oct
Octets received.
recv_cnt
Packets received.
send_oct
Octets sent.
send_cnt
Packets sent.
ssl
Boolean indicating whether the connection is secured with SSL.
ssl_protocol
SSL protocol (e.g. “tlsv1”).
ssl_key_exchange
SSL key exchange algorithm (e.g. “rsa”).
ssl_cipher
SSL cipher algorithm (e.g. “aes_256_cbc”).
ssl_hash
SSL hash function (e.g. “sha”).
peer_cert_subject
The subject of the peer’s SSL certificate, in RFC4514 form.
peer_cert_issuer
The issuer of the peer’s SSL certificate, in RFC4514 form.
peer_cert_validity
The period for which the peer’s SSL certificate is valid.
node
The node name of the RabbitMQ node to which connection is established.

MQTT plugin

list_mqtt_connections [mqtt_connectioninfoitem]
Similar to the list_connections command, but returns fields which make sense for MQTT connections. mqtt_connectioninfoitem parameter is used to indicate which connection information items to include in the results. The column order in the results will match the order of the parameters. mqtt_connectioninfoitem can take any value from the list that follows:

host
Server hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was disabled.
port
Server port.
peer_host
Peer hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was not enabled.
peer_port
Peer port.
protocol
MQTT protocol version, which can be one of the following:

  • {'MQTT', N/A}
  • {'MQTT', 3.1.0}
  • {'MQTT', 3.1.1}
channels
Number of channels using the connection.
channel_max
Maximum number of channels on this connection.
frame_max
Maximum frame size (bytes).
client_properties
Informational properties transmitted by the client during connection establishment.
ssl
Boolean indicating whether the connection is secured with SSL.
ssl_protocol
SSL protocol (e.g. “tlsv1”).
ssl_key_exchange
SSL key exchange algorithm (e.g. “rsa”).
ssl_cipher
SSL cipher algorithm (e.g. “aes_256_cbc”).
ssl_hash
SSL hash function (e.g. “sha”).
conn_name
Readable name for the connection.
connection_state
Connection state; one of:

  • starting
  • running
  • blocked
connection
Id of the Erlang process associated with the internal amqp direct connection.
consumer_tags
A tuple of consumer tags for QOS0 and QOS1.
message_id
The last Packet ID sent in a control message.
client_id
MQTT client identifier for the connection.
clean_sess
MQTT clean session flag.
will_msg
MQTT Will message sent in CONNECT frame.
exchange
Exchange to route MQTT messages configured in rabbitmq_mqtt application environment.
ssl_login_name
SSL peer certificate authentication name.
retainer_pid
Id of the Erlang process associated with retain storage for the connection.
user
Username associated with the connection.
vhost
Virtual host name with non-ASCII characters escaped as in C.

STOMP plugin

list_stomp_connections [stomp_connectioninfoitem]
Similar to the list_connections command, but returns fields which make sense for STOMP connections. stomp_connectioninfoitem parameter is used to indicate which connection information items to include in the results. The column order in the results will match the order of the parameters. stomp_connectioninfoitem can take any value from the list that follows:

conn_name
Readable name for the connection.
connection
Id of the Erlang process associated with the internal amqp direct connection.
connection_state
Connection state; one of:

  • running
  • blocking
  • blocked
session_id
STOMP protocol session identifier.
channel
AMQP channel associated with the connection.
version
Negotiated STOMP protocol version for the connection.
implicit_connect
Indicates whether the connection was established using implicit connect (without a CONNECT frame).
auth_login
Effective username for the connection.
auth_mechanism
STOMP authorization mechanism. Can be one of:

  • config
  • ssl
  • stomp_headers
port
Server port.
host
Server hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was not enabled.
peer_port
Peer port.
peer_host
Peer hostname obtained via reverse DNS, or its IP address if reverse DNS failed or was not enabled.
protocol
STOMP protocol version, which can be one of the following:

  • {'STOMP', 0}
  • {'STOMP', 1}
  • {'STOMP', 2}
channels
Number of channels using the connection.
channel_max
Maximum number of channels on this connection.
frame_max
Maximum frame size (bytes).
client_properties
Informational properties transmitted by the client during connection establishment.
ssl
Boolean indicating whether the connection is secured with SSL.
ssl_protocol
SSL protocol (e.g. “tlsv1”).
ssl_key_exchange
SSL key exchange algorithm (e.g. “rsa”).
ssl_cipher
SSL cipher algorithm (e.g. “aes_256_cbc”).
ssl_hash
SSL hash function (e.g. “sha”).

Management agent plugin

reset_stats_db [--all]
Resets the management stats database for the RabbitMQ node.

--all
Resets the stats database for all nodes in the cluster.
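
For example, this command resets the stats database on every node in the cluster:

rabbitmqctl reset_stats_db --all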
Main

Docker vs. Kubernetes vs. Apache Mesos: Why What You Think You Know is Probably Wrong

 

There are countless articles, discussions, and lots of social chatter comparing Docker, Kubernetes, and Mesos. If you listen to the partially informed, you'd think that the three open source projects are in a fight to the death for container supremacy. You'd also believe that picking one over the other is almost a religious choice, with true believers espousing their faith and burning heretics who would dare to consider an alternative.

That’s all bunk.

While all three technologies make it possible to use containers to deploy, manage, and scale applications, in reality they each solve for different things and are rooted in very different contexts. In fact, none of these three widely adopted toolchains is completely like the others.

Instead of comparing the overlapping features of these fast-evolving technologies, let’s revisit each project’s original mission, architectures, and how they can complement and interact with each other.

Let’s start with Docker…

Docker, Inc., as it is known today, started as a Platform-as-a-Service startup named dotCloud. The dotCloud team found that managing dependencies and binaries across many applications and customers required significant effort, so they combined some of the capabilities of Linux cgroups and namespaces into a single, easy-to-use package so that applications could run consistently on any infrastructure. This package is the Docker image, which provides the following capabilities:

  • Packages the application and its libraries in a single artifact (the Docker image), so applications can be deployed consistently across many environments;
  • Provides Git-like semantics, such as "docker push" and "docker commit", to make it easy for application developers to quickly adopt the new technology and incorporate it in their existing workflows (a short CLI sketch follows this list);
  • Defines Docker images as immutable layers, enabling immutable infrastructure. Committed changes are stored as individual read-only layers, making it easy to re-use images and track changes. Layers also save disk space and network traffic by transporting only the updates instead of entire images;
  • Runs Docker containers by instantiating the immutable image with a writable layer that can temporarily store runtime changes, making it easy to deploy and scale multiple instances of the application quickly.
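
A minimal sketch of that workflow with the Docker CLI (the image and container names are hypothetical):

docker build -t myapp:1.0 .             # each Dockerfile instruction becomes an immutable layer
docker run -d --name myapp myapp:1.0    # instantiate the image with a writable runtime layer
docker commit myapp myapp:1.1           # Git-like semantics: snapshot the container's changes as a new image
docker push myapp:1.1                   # share the image; only new layers are transported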

Docker grew in popularity, and developers started to move from running containers on their laptops to running them in production. Additional tooling was needed to coordinate these containers across multiple machines, known as container orchestration. Interestingly, one of the first container orchestrators that supported Docker images (June 2014) was Marathon on Apache Mesos (which we'll describe in more detail below). That year, Solomon Hykes, founder and CTO of Docker, recommended Mesos as "the gold standard for production clusters". Soon after, many container orchestration technologies in addition to Marathon on Mesos emerged: Nomad, Kubernetes, and, not surprisingly, Docker Swarm (now part of Docker Engine).

As Docker moved to commercialize the open source file format, the company also started introducing tools to complement the core Docker file format and runtime engine, including:

  • Docker Hub for public storage of Docker images;
  • Docker Registry for storing images on-premises;
  • Docker Cloud, a managed service for building and running containers;
  • Docker Datacenter as a commercial offering embodying many Docker technologies.

Docker

Source: http://www.docker.com

Docker's insight to encapsulate software and its dependencies in a single package has been a game changer for the software industry, much as the MP3 helped to reshape the music industry. The Docker file format became the industry standard, and leading container technology vendors (including Docker, Google, Pivotal, Mesosphere and many others) formed the Cloud Native Computing Foundation (CNCF) and Open Container Initiative (OCI). Today, CNCF and OCI aim to ensure interoperability and standardized interfaces across container technologies and ensure that any Docker container, built using any tools, can run on any runtime or infrastructure.

Enter Kubernetes

Google recognized the potential of the Docker image early on and sought to deliver container orchestration “as-a-service” on the Google Cloud Platform. Google had tremendous experience with containers (they introduced cgroups in Linux) but existing internal container and distributed computing tools like Borg were directly coupled to their infrastructure. So, instead of using any code from their existing systems, Google designed Kubernetes from scratch to orchestrate Docker containers. Kubernetes was released in February 2015 with the following goals and considerations:

  • Empower application developers with a powerful tool for Docker container orchestration without having to interact with the underlying infrastructure;
  • Provide standard deployment interface and primitives for a consistent app deployment experience and APIs across clouds;
  • Build on a Modular API core that allows vendors to integrate systems around the core Kubernetes technology.

By March 2016, Google had donated Kubernetes to CNCF, and it remains today the lead contributor to the project (followed by Red Hat, CoreOS, and others).

Kubernetes

Source: wikipedia

Kubernetes was very attractive for application developers, as it reduced their dependency on infrastructure and operations teams. Vendors also liked Kubernetes because it provided an easy way to embrace the container movement and provide a commercial solution to the operational challenges of running your own Kubernetes deployment (which remains a non-trivial exercise). Kubernetes is also attractive because it is open source under the CNCF, in contrast to Docker Swarm which, though open source, is tightly controlled by Docker, Inc.

Kubernetes’ core strength is providing application developers powerful tools for orchestrating stateless Docker containers. While there are multiple initiatives to expand the scope of the project to more workloads (like analytics and stateful data services), these initiatives are still in very early phases and it remains to be seen how successful they may be.

Apache Mesos

Apache Mesos started as a UC Berkeley project to create a next-generation cluster manager, and apply the lessons learned from cloud-scale, distributed computing infrastructures such as Google's Borg and Facebook's Tupperware. While Borg and Tupperware had a monolithic architecture and were closed-source proprietary technologies tied to physical infrastructure, Mesos introduced a modular architecture, an open source development approach, and was designed to be completely independent from the underlying infrastructure. Mesos was quickly adopted by Twitter, Apple (Siri), Yelp, Uber, Netflix, and many leading technology companies to support everything from microservices, big data and real time analytics, to elastic scaling.

As a cluster manager, Mesos was architected to solve for a very different set of challenges:

  • Abstract data center resources into a single pool to simplify resource allocation while providing a consistent application and operational experience across private or public clouds;
  • Colocate diverse workloads on the same infrastructure, such as analytics, stateless microservices, distributed data services and traditional apps, to improve utilization and reduce cost and footprint;
  • Automate day-two operations for application-specific tasks such as deployment, self-healing, scaling, and upgrades, providing a highly available, fault-tolerant infrastructure;
  • Provide evergreen extensibility to run new application and technologies without modifying the cluster manager or any of the existing applications built on top of it;
  • Elastically scale the application and the underlying infrastructure from a handful, to tens, to tens of thousands of nodes.

Mesos has a unique ability to individually manage a diverse set of workloads — including traditional applications such as Java, stateless Docker microservices, batch jobs, real-time analytics, and stateful distributed data services. Mesos’ broad workload coverage comes from its two-level architecture, which enables “application-aware” scheduling. Application-aware scheduling is accomplished by encapsulating the application-specific operational logic in a “Mesos framework” (analogous to a runbook in operations). Mesos Master, the resource manager, then offers these frameworks fractions of the underlying infrastructure while maintaining isolation. This approach allows each workload to have its own purpose-built application scheduler that understands its specific operational requirements for deployment, scaling and upgrade. Application schedulers are also independently developed, managed and updated, allowing Mesos to be highly extensible and support new workloads or add more operational capabilities over time.

Mesos two-level scheduler

Take, for example, how a team manages upgrades. Stateless applications can benefit from a "blue/green" deployment approach, where another complete version of the app is spun up while the old one is still live, traffic switches to the new app when ready, and the old app is destroyed. But upgrading a data workload like HDFS or Cassandra requires taking the nodes offline one at a time, preserving local data volumes to avoid data loss, performing the upgrade in place in a specific sequence, and executing special checks and commands on each node type before and after the upgrade. Each of these steps is app- or service-specific, and may even be version-specific. This makes it incredibly challenging to manage data services with a conventional container orchestration scheduler.

Mesos’ ability to manage each workload the way it wants to be treated has led many companies to use Mesos as a single unified platform to run a combination of microservices and data services together. A common reference architecture for running data-intensive applications is the “SMACK stack”.

A Moment of Clarity

Notice that we haven’t said anything about container orchestration to describe Apache Mesos. So why do people automatically associate Mesos with container orchestration? Container orchestration is one example of a workload that can run on Mesos’ modular architecture, and it’s done using a specialized orchestration “framework” built on top of Mesos called Marathon. Marathon was originally developed to orchestrate app archives (like JARs, tarballs, ZIP files) in cgroup containers, and was one of the first container orchestrators to support Docker containers in 2014.

So when people compare Docker and Kubernetes to Mesos, they are actually comparing Kubernetes and Docker Swarm to Marathon running on Mesos.

Why does this matter? Because Mesos frankly doesn’t care what’s running on top of it. Mesos can elastically provide cluster services for Java application servers, Docker container orchestration, Jenkins CI Jobs, Apache Spark analytics, Apache Kafka streaming, and more on shared infrastructure. Mesos could even run Kubernetes or other container orchestrators, though a public integration is not yet available.

Mesos Workloads

Source: Apache Mesos Survey 2016

Another consideration for Mesos (and why it’s attractive for many enterprise architects) is its maturity in running mission critical workloads. Mesos has been in large scale production (tens of thousands of servers) for more than 7 years, which is why it’s known to be more production ready and reliable at scale than many other container-enabling technologies in the market.

What does this all mean?

In summary, all three technologies have something to do with Docker containers and give you access to container orchestration for application portability and scale. So how do you choose between them? It comes down to choosing the right tool for the job (and perhaps even different ones for different jobs). If you are an application developer looking for a modern way to build and package your application, or to accelerate microservices initiatives, the Docker container format and developer tooling is the best way to do so.

If you are a dev/devops team and want to build a system dedicated exclusively to Docker container orchestration, and are willing to get your hands dirty integrating your solution with the underlying infrastructure (or rely on public cloud infrastructure like Google Container Engine or Azure Container Service), Kubernetes is a good technology for you to consider.

If you want to build a reliable platform that runs multiple mission critical workloads including Docker containers, legacy applications (e.g., Java), and distributed data services (e.g., Spark, Kafka, Cassandra, Elastic), and want all of this portable across cloud providers and/or datacenters, then Mesos (or our own Mesos distribution, Mesosphere DC/OS) is the right fit for you.

Whatever you choose, you’ll be embracing a set of tools that makes more efficient use of server resources, simplifies application portability, and increases developer agility. You really can’t go wrong.

Source :- https://mesosphere.com
kubernetes (k8s), Main

k8s – Concepts & Components (from kubernetes.io)

Master Components

Master components provide the cluster's control plane. Master components make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (for example, starting up a new pod when a replication controller's 'replicas' field is unsatisfied).

Master components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all master components on the same machine and do not run user containers on this machine. See Building High-Availability Clusters for an example multi-master-VM setup.

kube-apiserver

Component on the master that exposes the Kubernetes API. It is the front-end for the Kubernetes control plane.

It is designed to scale horizontally – that is, it scales by deploying more instances. See Building High-Availability Clusters.

etcd

Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.

Always have a backup plan for etcd’s data for your Kubernetes cluster. For in-depth information on etcd, see etcd documentation.
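
A minimal backup sketch using the etcd v3 CLI (the snapshot path is hypothetical; a secured cluster would also need the client certificate flags):

ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db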

kube-scheduler

Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.

Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference and deadlines.

kube-controller-manager

Component on the master that runs controllers.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

These controllers include:

  • Node Controller: Responsible for noticing and responding when nodes go down.
  • Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).
  • Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.

cloud-controller-manager

cloud-controller-manager runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6.

cloud-controller-manager runs cloud-provider-specific controller loops only. You must disable these controller loops in the kube-controller-manager. You can disable the controller loops by setting the --cloud-provider flag to external when starting the kube-controller-manager.
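
As an illustrative fragment (all other required flags omitted), the corresponding kube-controller-manager invocation would include:

kube-controller-manager --cloud-provider=external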

cloud-controller-manager allows the cloud vendor's code and the Kubernetes core to evolve independently of each other. In prior releases, the core Kubernetes code was dependent upon cloud-provider-specific code for functionality. In future releases, code specific to cloud vendors should be maintained by the cloud vendors themselves, and linked to cloud-controller-manager while running Kubernetes.

The following controllers have cloud provider dependencies:

  • Node Controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
  • Route Controller: For setting up routes in the underlying cloud infrastructure
  • Service Controller: For creating, updating and deleting cloud provider load balancers
  • Volume Controller: For creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes

Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.
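
A quick way to see a PodSpec that the kubelet acts on (the pod name is hypothetical; on clusters of this era, --restart=Never makes kubectl run create a bare pod):

kubectl run nginx --image=nginx --restart=Never
kubectl get pod nginx -o yaml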

kube-proxy

kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
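
In the common iptables mode, you can inspect the NAT rules kube-proxy maintains on a node (a minimal sketch; the KUBE-SERVICES chain assumes iptables mode):

sudo iptables -t nat -L KUBE-SERVICES -n | head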

Container Runtime

The container runtime is the software that is responsible for running containers. Kubernetes supports two runtimes: Docker and rkt.

Addons

Addons are pods and services that implement cluster features. The pods may be managed by Deployments, ReplicationControllers, and so on. Namespaced addon objects are created in the kube-system namespace.

Selected addons are described below; for an extended list of available addons, please see Addons.

DNS

While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.
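
A minimal sketch of verifying this from inside a running pod (the pod and service names are hypothetical, and the image must include nslookup):

kubectl exec -it mypod -- nslookup my-service.default.svc.cluster.local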

Web UI (Dashboard)

Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

Container Resource Monitoring

Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.

kubernetes (k8s), Main

K8s – Installation & Configuration

Hello Guys,

 

I know it is quite difficult to install Kubernetes in a proxy-restricted environment.

Therefore I decided to take the pain and install Kubernetes in my own proxy-restricted environment.

I would like to share my steps.

For both the master and worker nodes:

vi .bashrc

# Set proxy
function setproxy()
{
    export {http,https,ftp}_proxy="http://<proxy_ip>:<port>"
    export no_proxy="localhost,10.96.0.0/12,*.<company_domain_name>,<internal_ip>"
}

# Unset proxy
function unsetproxy()
{
    unset {http,https,ftp}_proxy
}

# Check proxy settings
function checkproxy()
{
    env | grep proxy
}

vi /etc/yum.conf

# yum supports a single proxy setting; it is used for both HTTP and HTTPS repositories
proxy=http://<proxy_ip>:<port>

vi /etc/hosts

<ip1-master>  kubernetes-1

<ip2-worker>  kubernetes-2

<ip3-worker>  kubernetes-3

 

mkdir -p /etc/systemd/system/docker.service.d/

 

vi /etc/systemd/system/docker.service.d/http-proxy.conf

 

[Service]

Environment=HTTP_PROXY=http://<proxy_ip>:<port>/

Environment=HTTPS_PROXY=https://<proxy_ip>:<port>/

Environment=NO_PROXY=<ip1-master>,<ip2-worker>,<ip3-worker>

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EOF

 

setenforce 0

 

yum install -y kubelet kubeadm kubectl

systemctl enable kubelet && systemctl start kubelet

 

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

 

systemctl daemon-reload

systemctl restart kubelet

 

export no_proxy="localhost,10.96.0.0/12,*.<company domain>,<ip1-master>,<ip2-worker>,<ip3-worker>"

 

export KUBECONFIG=/etc/kubernetes/admin.conf

 

Calico is recommended for amd64; Flannel is better but needs the pod network CIDR to be 10.244.0.0/16 (see the note after kubeadm init below). Apply the network add-on on the master after running kubeadm init:

kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

 

Master Node :-

kubeadm init
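
If you go with Flannel instead of Calico, a minimal sketch (assuming Flannel's default pod CIDR) passes the CIDR at init time:

kubeadm init --pod-network-cidr=10.244.0.0/16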

 

Worker Node :-

kubeadm join --token <token received from master node> <master ip>:6443 --discovery-token-ca-cert-hash sha256:<master-hash>

Master Node :-

Check in the master

kubectl get nodes

[Screenshot: kubectl get nodes output]