Adapters

TIBCO Adapter Error (AER3-910005) – Exception: "JMS error: Not allowed to create destination"

If you encounter the following error in your adapter logs :-

Error AER3-910005 Exception: "JMS error: "Not allowed to create destination tracking=#B0fo–uT5-V4zkYM9A/UbWgUzas#

The following are the possibilities and pointers to be checked :-

  1. Check that the JMS connection configuration of your adapter is correct.
  2. Ensure the JMS user has sufficient permissions to create a receiver on the destination.
  3. Check whether dynamic creation is turned ON in your EMS configuration.
  4. If your destination is a queue, check the “queues.conf” file; if it is a topic, check “topics.conf” (see the sample entry after this list).
  5. If you don’t want to turn ON dynamic creation, you must manually create the destinations required by the adapter before starting it.
  6. Finally, kill the BW process and the adapter service, then start the adapter service first, followed by the BW service.
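
As a reference for points 3–5, here is a minimal queues.conf sketch (the queue name is purely illustrative; topics.conf follows the same one-entry-per-line format):

# Static entry: the queue exists as soon as the EMS server starts
sample.adapter.queue

# Wildcard entry: queues matching this pattern may be created dynamically
sample.adapter.>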

Cause

  • Check the repository settings.
Adapters

TIBCO Adapters – Received read Advisory Error (JMS Related)

While testing for failover, we found that the adapter does not fail over properly to the secondary EMS server when the primary is down. The adapter logs show the error below; the adapter does not pick up any messages when this error occurs.

Advisory: _SDK.ERROR.JMS.RECEIVE_FAILED : { {ADV_MSG, M_STRING, "Consumer receive failed. JMS Error: Illegal state, SessionName: TIBCOCOMJmsTerminatorSession, Destination: Rep.adcom.Rep-COMAdapter_Rep_v1.exit" } {^description^, M_STRING, "" } }.

The only way to resolve this is to restart the adapter so that it reconnects to the EMS server; it then picks up the messages.

 

“JMS Error: Illegal state” usually happens when a JMS call or request occurs in an inappropriate context. For example, a consumer is trying to receive a message while the JMS server is down. In your case, this is happening during EMS failover from machine1 to machine2.

One thing to keep in mind is that, depending on the number of outstanding messages, connections, and other resources managed by EMS, there may be a brief period before the secondary server is ready to accept connections.

Clients that disconnect will typically attempt to reconnect; however, there is a limit to the number of reconnection attempts (as well as the interval between attempts). These are specified at the connection factory level in factories.conf. Here are some of the applicable settings:

 

reconnect_attempt_count – After losing its server connection, a client program configured with more than one server URL attempts to reconnect, iterating through its URL list until it re-establishes a connection with an EMS server. This property determines the maximum number of iterations. When absent, the default is 4.

reconnect_attempt_delay – When attempting to reconnect, the client sleeps for this interval (in milliseconds) between iterations through its URL list. When absent, the default is 500 milliseconds.

reconnect_attempt_timeout – When attempting to reconnect to the EMS server, you can set this connection timeout period to abort the connection attempt after a specified period of time (in milliseconds).
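
Put together, a fault-tolerant connection factory entry in factories.conf might look like the sketch below (the factory name, server URLs and values are purely illustrative):

[FTQueueConnectionFactory]
  type = queue
  url = tcp://ems-primary:7222,tcp://ems-secondary:7222
  reconnect_attempt_count = 10
  reconnect_attempt_delay = 1000
  reconnect_attempt_timeout = 5000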

It may also be helpful to specify heartbeats between the adapter and the EMS server. This way, if the EMS server is brought down either gracefully or ungracefully, the connection will be reset when the configured number of heartbeats is missed. This should then trigger the reconnection attempts described above. The heartbeat settings are defined in tibemsd.conf. Here are some relevant settings:

client_heartbeat_server – Specifies the interval at which clients send heartbeats to the server.

server_timeout_client_connection – Specifies the period of time the server will wait for a client heartbeat before terminating the client connection.

server_heartbeat_client – Specifies the interval at which the server sends heartbeats to all of its clients.

client_timeout_server_connection – Specifies the period of time a client will wait for a heartbeat from the server before terminating the connection.
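
In tibemsd.conf these are plain key/value settings; a sketch with illustrative values (EMS interprets them in seconds) could look like this:

# Heartbeats every 5 seconds; reset a connection after roughly 20 seconds of silence
client_heartbeat_server = 5
server_timeout_client_connection = 20
server_heartbeat_client = 5
client_timeout_server_connection = 20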

 

Docker

Docker – Commands to Manipulate the Containers

Parent command

Command Description
docker container Manage containers
Child commands

Command Description
docker container attach Attach local standard input, output, and error streams to a running container
docker container commit Create a new image from a container’s changes
docker container cp Copy files/folders between a container and the local filesystem
docker container create Create a new container
docker container diff Inspect changes to files or directories on a container’s filesystem
docker container exec Run a command in a running container
docker container export Export a container’s filesystem as a tar archive
docker container inspect Display detailed information on one or more containers
docker container kill Kill one or more running containers
docker container logs Fetch the logs of a container
docker container ls List containers
docker container pause Pause all processes within one or more containers
docker container port List port mappings or a specific mapping for the container
docker container prune Remove all stopped containers
docker container rename Rename a container
docker container restart Restart one or more containers
docker container rm Remove one or more containers
docker container run Run a command in a new container
docker container start Start one or more stopped containers
docker container stats Display a live stream of container(s) resource usage statistics
docker container stop Stop one or more running containers
docker container top Display the running processes of a container
docker container unpause Unpause all processes within one or more containers
docker container update Update configuration of one or more containers
docker container wait Block until one or more containers stop, then print their exit codes
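
As a quick illustration of a typical container lifecycle using a few of these subcommands (the nginx image and the container name web are just examples):

docker container run -d --name web -p 8080:80 nginx   # create and start a detached container
docker container ls                                    # list running containers
docker container logs web                              # fetch its logs
docker container stop web                              # stop it
docker container rm web                                # remove it
docker container prune                                 # remove all stopped containers
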
Docker

Docker – Add Proxy to Docker Daemon

I am gonna cut the chatter and hit the platter.

Proxy Recommendation :- To download images from Docker Hub, we need internet connectivity.

I’ma show you the steps to configure the proxy for the Docker daemon.

  1. Check the OS in which the docker-ce or docker-ee is installed.

ubuntu@docker:~$ cat /etc/*release*
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

2. Check the Docker version

ubuntu@docker:~$ sudo docker -v
Docker version 17.05.0-ce, build 89658be

3. Create a directory

sudo mkdir -p /etc/systemd/system/docker.service.d

4. Create a Proxy Conf

sudo vim /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=http://<proxy-ip>:<port>/"
Environment="HTTPS_PROXY=https://<proxy-ip>:<port>/"
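
For the proxy settings to take effect, systemd must reload its unit files and the Docker daemon must be restarted (a step implied between steps 4 and 5):

sudo systemctl daemon-reload
sudo systemctl restart docker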

5. Now try to login to docker

ubuntu@docker:~$ sudo docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: <username>
Password:
Login Succeeded
ubuntu@docker:~$
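
To confirm that the daemon can actually reach Docker Hub through the proxy, a quick test pull is a handy check (hello-world is just an example image):

ubuntu@docker:~$ sudo docker pull hello-world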

 

Docker · Main

Docker – Cheat Sheet

Hello Bloggers,

One of the most important things in learning a new command quickly is going through a cheat sheet.

I love going through cheat sheets for quick reference, so I thought of consolidating some of the ones available online into my blog for quick access.

(Slideshow of Docker cheat sheets.)

Main · Storm

Apache Storm – Introduction

  • Apache Storm is a distributed real-time big-data processing system.
  • Storm is designed to process vast amounts of data in a fault-tolerant and horizontally scalable manner.
  • It is a streaming data framework capable of very high ingestion rates.
  • Although Storm is stateless, it manages the distributed environment and cluster state via Apache ZooKeeper.
  • It is simple, and you can execute all kinds of manipulations on real-time data in parallel.
  • Apache Storm continues to be a leader in real-time data analytics.

Storm is easy to set up and operate, and it guarantees that every message will be processed through the topology at least once.

  • Basically, the Hadoop and Storm frameworks are both used for analysing big data.
  • They complement each other and differ in some aspects.
  • Apache Storm does all the operations except persistence, while Hadoop is good at everything but lags in real-time computation.
  • The following table compares the attributes of Storm and Hadoop.
Storm | Hadoop
Real-time stream processing | Batch processing
Stateless | Stateful
Master/slave architecture with ZooKeeper-based coordination; the master node is called Nimbus and the slaves are Supervisors. | Master/slave architecture with or without ZooKeeper-based coordination; the master node is the JobTracker and the slave nodes are TaskTrackers.
A Storm streaming process can access tens of thousands of messages per second on a cluster. | The Hadoop Distributed File System (HDFS) uses the MapReduce framework to process vast amounts of data, which takes minutes or hours.
A Storm topology runs until it is shut down by the user or hits an unexpected unrecoverable failure. | MapReduce jobs are executed in sequential order and complete eventually.
Both are distributed and fault-tolerant.
If Nimbus or a Supervisor dies, restarting makes it continue from where it stopped, so nothing gets affected. | If the JobTracker dies, all running jobs are lost.

 

Apache Storm Benefits

Here is a list of the benefits that Apache Storm offers −

  • Storm is open source, robust, and user friendly. It could be utilized in small companies as well as large corporations.
  • Storm is fault tolerant, flexible, reliable, and supports any programming language.
  • Allows real-time stream processing.
  • Storm is unbelievably fast because of its enormous data-processing power.
  • Storm can keep up the performance even under increasing load by adding resources linearly. It is highly scalable.
  • Storm performs data refresh and end-to-end delivery response in seconds or minutes, depending on the problem. It has very low latency.
  • Storm has operational intelligence.
  • Storm provides guaranteed data processing even if any of the connected nodes in the cluster die or messages are lost.

 

Container · Docker

Docker – Basic Installation & Configuration

Youtube Video :-

Command :-

# Install prerequisite packages
sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

# Add the Docker CE repository
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE (list the available versions with the command below)
sudo yum install docker-ce
yum list docker-ce --showduplicates | sort -r

# Start the daemon and run a test container
sudo systemctl start docker
sudo docker run hello-world

# Run Portainer as a standalone container (UI on port 9000)
docker volume create portainer_data
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

# Or run Portainer as a swarm service on a manager node
docker service create \
  --name portainer \
  --publish 9000:9000 \
  --replicas=1 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=//var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock