TIBCO ActiveSpaces – Best Practices – 3. Space Definition

Space Definition

Schema Considerations

To get the best performance out of ActiveSpaces, you may need to tailor the way you store information in it depending on how you are going to access it.

  • The fastest and most scalable way to access data in ActiveSpaces is through the use of the key operations: put/get/take.
    • When reading a record, get is always faster than creating a browser and doing a query.
  • You can store structured data, objects, or documents in serialized form in blob fields
    • You can use a Tuple object as a map of fields, and then store the serialized Tuple in a blob field; Tuple serialization is very fast and optimized.
    • Some very large objects (for example: XML documents, JSON documents, and so on), can be dramatically reduced in size by using standard compression techniques. You can fit many more of these objects into memory if you store them in compressed format.
  • If you have a Java or .NET object that you want to store in ActiveSpaces:
    • You can implement your own very fast serialization to Tuple routine:
      • Implement a toTuple() method and a constructor from a Tuple, and map your variables to fields
      • You can serialize non-primitive variables to blob types, or store serialized Tuples in blob fields
    • You can also do queries by using the browser/listener calls, but you can only query on one space at a time.
      • Storing data in a serialized blob does not necessarily mean you cannot query it:
        • Extract the values you are going to query on from those structures and store them as Tuple fields in addition to keeping them inside the serialized object.
        • When implementing the toTuple() serialization mentioned above, you can implement any “value extractor” you need to pull out values you may later want to query on.
        • If data is exposed as top level Tuple fields, you can then have indexes on those extracted fields.
      • All fields (including key fields) can be nullable.
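For the compression point above, a large JSON document can be gzip-compressed before it is written to a blob field and decompressed after a get. This is a generic Python sketch of the technique (standard-library compression only, not ActiveSpaces API code; the document contents are made up):

```python
import gzip
import json

# A verbose, repetitive JSON document of the kind you might store in a blob field.
doc = {"orderId": 12345,
       "lines": [{"sku": f"SKU-{i}", "qty": i % 5 + 1} for i in range(200)]}

# Serialize and compress before storing in the blob field...
raw = json.dumps(doc).encode("utf-8")
compressed = gzip.compress(raw)

# ...and decompress after reading it back.
restored = json.loads(gzip.decompress(compressed))

print(f"raw={len(raw)} bytes, compressed={len(compressed)} bytes")
assert restored == doc
```

The same round trip applies to XML or any other text-heavy format; the more repetitive the documents, the more of them fit in memory.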


The ActiveSpaces query engine can leverage indexes to speed up query processing. On larger datasets indexes can improve response times significantly.

  • Index facts:
    • You can define any number of indexes you want.
    • Indexes can be on a single field or on multiple fields (composite).
    • You can have more than one index per field.
  • However indexes are not free:
    • Indexes will cost you between 400 and 800 bytes of memory per record being indexed.
    • Indexes must be maintained, and they result in increased CPU usage when processing write operations.
  • There is always an index defined on the key fields; by default, it is a HASH index, but you can change it to TREE.
    • For example, if you have queries on a subset of the key fields, a key field index of type TREE could be leveraged for some of those queries.
  • You can get details about how a query was planned and whether indexes were used by the seeders by using the Browser’s getQueryLog().
  • HASH indexes have the following advantages and disadvantages:
    • They are fastest for equality tests (for example, “field = value”).
    • Slightly less memory is required per entry indexed, but almost every entry will have its own index entry.
    • The same amount of memory is required per entry regardless of the number of fields in the index.
  • TREE indexes have the following advantages and disadvantages:
    • They are fastest for range tests (for example, “field > value”).
      • However, TREE indexes are still leveraged to accelerate equality tests as well: if you are going to do both range and equality testing in your queries you only need to define a single TREE index in most cases.
    • More memory is required per entry, but if the value set has some structure to it (for example continuous ranges) the TREE index can end up being more memory efficient than HASH.
    • The amount of memory increases with the number of fields composing the index.
    • Composed TREE indexes can often also be leveraged for tests on individual or subsets of the fields composing the index.
      • For example, a TREE index on fields “A,B” could also be leveraged for “A>value” or “B>value”, or “A=value”, or “B=value” and combinations thereof.
    • The order of the list of fields in a composed TREE index can have an influence on performance (notably write operations).
      • To be more efficient, if you have a TREE index on fields A and B and you know that the number of distinct values of “A” is greater than the number of distinct values of “B”, then you should define the index fields in the order “A”,“B” rather than the other way around.
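The HASH versus TREE trade-off described above can be illustrated with ordinary data structures: a hash table supports only equality lookups, while an ordered structure supports both equality and range tests. A Python sketch, with a dict and a sorted list standing in for the two index types (no ActiveSpaces code involved):

```python
import bisect

# Records keyed by an integer field.
records = {k: f"record-{k}" for k in range(1000)}

# "HASH index" stand-in: constant-time equality lookup only.
hash_index = dict(records)
assert hash_index[42] == "record-42"          # field = value

# "TREE index" stand-in: ordered keys support equality AND range tests.
tree_index = sorted(records)
lo = bisect.bisect_left(tree_index, 100)      # first key >= 100
hi = bisect.bisect_right(tree_index, 110)     # first key > 110
assert tree_index[lo:hi] == list(range(100, 111))   # 100 <= field <= 110
```

This is why a single TREE index usually covers queries that mix range and equality tests, whereas a HASH index only helps the equality case.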

Space Attributes

  • See the “Fault-tolerance and Persistence” section for information on the replication and persistence attributes.
  • Min Seeders. If the number of seeders for the space goes below this threshold then the space stops being in READY state, and all operations on the space are suspended.
    • If you are using shared-nothing persistence, you may want to use this attribute to prevent applications from making modifications to the data that could not then be recovered.

For example, if you have four seeders with a degree of replication of 1, set min seeders to 3.

TIBCO ActiveSpaces – Best Practices – 2. Discovery URL

Discovery URL

“Discovery” is used in the initial phase of a process’s connection to a Metaspace, to discover which other nodes are already members of the metaspace and establish connections to them. Discovery and connection are Metaspace-level operations (separate from joining and leaving spaces).

  • A single process can only be connected once to a specific Metaspace. This means that if any thread of a process has already called connect() for a metaspace name, then no further calls to connect for the same metaspace name can be made (however, a process can have simultaneous connections to different Metaspaces).
  • Discovery and connection is not a very fast operation; it can sometimes take several seconds for a process to go through the whole discovery and connection process (depending on the type of discovery and the number of nodes already connected).
  • Because connect() cannot be called repeatedly for the same Metaspace, the programmer can use the convenience function ASCommon.getMetaspace() to obtain a copy of an already connected Metaspace object: this function takes a metaspace name and, if the process is already connected to that Metaspace, returns a copy of the Metaspace object for that connection.
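The “connect once, then look the connection up” pattern that ASCommon.getMetaspace() enables can be sketched as follows. The Metaspace class below is a simplified stand-in written for illustration, not the real ActiveSpaces API:

```python
import threading

class Metaspace:
    """Stand-in for a metaspace connection registry (illustrative only)."""
    _connections = {}
    _lock = threading.Lock()

    def __init__(self, name):
        self.name = name  # a real connect() would also perform discovery here

    @classmethod
    def connect(cls, name):
        # Only one connection per metaspace name is allowed per process.
        with cls._lock:
            if name in cls._connections:
                raise RuntimeError(f"already connected to {name!r}")
            cls._connections[name] = cls(name)
            return cls._connections[name]

    @classmethod
    def get_metaspace(cls, name):
        # Return the existing connection object, or None if not connected.
        return cls._connections.get(name)

ms = Metaspace.connect("ms")
assert Metaspace.get_metaspace("ms") is ms      # same connection object
```

Any thread can call get_metaspace() to reuse the connection instead of attempting a second (and failing) connect().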

Multicast versus Unicast Discovery

Discovery can be one of the following types: Unicast (TCP) or Multicast (TIBPGM or RV). In general, Multicast discovery is somewhat easier to use because it requires less configuration (which is why it is the default) and tends to be used during the development phase. TCP discovery requires (and offers) a bit more control through configuration and, unlike multicast, works in all network environments, so it tends to be used in production. But there is no hard and fast rule for which method of discovery is recommended, and all types of discovery mechanism are supported equally.

Some things to remember:

  • There is no loss of functionality depending on the kind of discovery that is used.
  • Only one type of discovery can be used for a particular Metaspace. But a process could use unicast discovery to connect to one Metaspace, and multicast discovery to connect to another.

Multicast Discovery

To be able to use multicast discovery, the following conditions must be met:

  • UDP packets must be able to flow bidirectionally between all the Metaspace members.
  • If the Metaspace members are on separate subnets, multicast routing must be enabled between those subnets.

When it comes to choosing between the two available choices for multicast discovery (PGM or RV):

  • There is no functional difference between using the built-in PGM reliable multicast protocol stack or using TIBCO Rendezvous as the discovery protocol.
  • You only need to have TIBCO Rendezvous installed on your machine if you want to use it for multicast delivery. If you only use PGM multicast delivery then RV does not even need to be installed on the machine.
  • Using TIBCO Rendezvous as the discovery protocol can give you a little more flexibility in your deployment mode (for example making remote daemon connections or leveraging RVRDs).

Unicast Discovery

With Unicast discovery, all communication between Metaspace members occurs solely over TCP connections (no UDP or multicast). The best practices for unicast discovery are:

  • ALL of the Metaspace members MUST use exactly the same Discovery URL string.
  • If you want fault-tolerance for the discovery service, then you must specify more than one IP:PORT in the discovery URL.
    • If all of the processes for all of the IP:PORTs listed in the discovery URL disappear, the Metaspace temporarily stops working (but data is not necessarily lost) until one of those processes is restarted.

In practice, to use Unicast discovery, you designate some machines as the “servers” of the cache service. You will want to start (and restart as needed) at least one ActiveSpaces process on each of those nodes (as a service or using Hawk, for example). These processes can be, but do not have to be, as-agents; they just have to stay connected to the Metaspace to keep it alive, regardless of whether they seed anything or not. Those processes will be the “well known” Metaspace members that you list in the discovery URL.

Use a different listen port than the default for those “well known” processes, for example, 60000. This way you can start the specific processes you want using a Listen URL of “tcp://:60000”.

Host IP1: as-agent -listen “tcp://:60000” -discovery “tcp://IP1:60000;IP2:60000”

Host IP2: as-agent -listen “tcp://:60000” -discovery “tcp://IP1:60000;IP2:60000”

where IP1 and IP2 can be IP addresses or any hostnames that resolve to them.

Remote Client Connection

Remote clients are ActiveSpaces processes that connect to the metaspace indirectly through a directly connected “proxy” member of the metaspace.

  • There is NO loss of functionality for the client application whether it is directly or remotely connected to the metaspace.
  • If there is ANY one-way firewall or ANY Network Address Translation happening between any two machines, the ActiveSpaces processes on those machines will NOT be able to connect to each other. In this case, you have no choice but to deploy the processes on one of the machines as remote clients connecting to the processes on the other machine(s).
    • Remote clients initiate a single TCP connection to the proxy member they are connecting to.
    • While directly connected, members of a metaspace have a single TCP connection to every other member, which might be initiated from either end.
  • A Metaspace scales to a much larger number of remotely connected members than directly connected members.
    • If you have a lot of “pure client” processes that never seed on anything, consider deploying them as remote clients.
    • Remotely connected clients, however, experience on average higher space operation response time than directly connected processes (although remote client throughput will not necessarily be lower).
  • Although a remotely connected process can never seed on any space, the “seeded scope” is still meaningful: it is the scope of whatever the proxy member the remote client is connected to seeds.
    • This means that get requests from a remote client on any key that the proxy it is connected to seeds or replicates will be serviced with the lowest latency (on the order of a network round trip)!

The simplest way to make a remote client connection is to use a Discovery URL in the form: “tcp://IP:Port?remote=true”

where IP and Port identify the machine where a proxy is running and the port it listens on. You can start an as-agent and specify that it provides remote connectivity by using the -remote_listen parameter:

On host IP1 enter:

as-agent -remote_listen “tcp://:55555”

and then use the Discovery URL “tcp://IP:55555?remote=true” on any application.

TIBCO ActiveSpaces – Best Practices – 1. Exposing Metaspace Connection Attributes

Exposing Metaspace Connection Attributes

When creating ActiveSpaces applications, developers should make sure that they expose the metaspace connection attributes so that they can be adjusted by an administrator at deployment time. The Metaspace connection attributes are contained in the MemberDef object that is passed to Metaspace.connect(). At a minimum, the following MemberDef attributes should be exposed:

  • Member name:
    • If specified, this should be unique in the Metaspace.
    • If not specified, a unique name is generated automatically.
    • If you intend to use the process as a seeder on a shared-nothing persisted space, you MUST specify a name for the process.
  • Metaspace name:
    • If not specified, defaults to “ms”.
  • Discovery URL:
    • If not specified, defaults to “tibpgm”.
  • Listen URL

In some cases (typically “server” applications) it is recommended to expose these attributes:

  • DataStore:
    • The directory to use to store shared-nothing persistence files. Only useful if the application will be seeding on shared-nothing persistent spaces.
  • RemoteDiscovery:
    • Only needed if you intend the process to be a remote client proxy and be able to accept and service connections from remote clients.
  • WorkerThreadCount:
    • Adjusts the size of the worker thread pool used by remote invocation. Only useful if the application will service some remote invocation requests.

Optionally, the following attributes could be exposed as well (if one wants to give the administrator the ability to adjust more “low level” settings):

  • Timeout
    • Internal ActiveSpaces protocol timeout for operations (default is 60 seconds).
  • RxBufferSize
    • Adjusts the amount of memory allocated as a receive buffer per TCP connection. If the metaspace will contain a large number of directly connected members, you may want to adjust this value down from the default of 2 MB in order to reduce memory requirements.
    • For example, if the application is only going to be a leech and read/write small records, it will probably not need a full 2 MB of buffer space.
    • The downside of making this parameter too small is increased CPU usage.
  • TcpThreadCount
    • Adjusts the size of the thread pool used to distribute incoming TCP data from other members. Only useful for applications that seed data.

The final attribute of the MemberDef is the Context tuple. The context tuple helps the application programmer identify classes of applications connecting to the metaspace or to spaces when they are monitored by another application using a MetaspaceMemberListener or a SpaceMemberListener, because the context tuple is passed to the listeners inside the membership event.

[ERROR]-[TIBCO BusinessWorksProcessMonitor]- Error While Configuration (EC1 – java.lang.NoClassDefFoundError: com/tibco/processmonitor/client/run)

Hi Guys,

Whenever you get this kind of error in TIBCO BWPM Client configuration

EC1 – java.lang.NoClassDefFoundError: com/tibco/processmonitor/client/run

Then the following can be the reasons :-

1. bwpm.jar file missing from <bw_home>/<ver>/<lib>

2. bwpm.jar corrupted

Solution :-

Copy the bwpm.jar file to the <bw_home>/<ver>/<lib> directory mentioned above.

TIBCO Business Works Process Monitor – Architecture

The Business Works Process Monitor Server is a web application running inside the application web server.

It is made up of several modules. The picture below gives a high-level, simplified overview of its most important modules.


When the Business Works Process Monitor Server is started, it checks its configuration settings and starts all configured Data Providers. A Data Provider is responsible for reading log messages from a channel (such as JMS) and storing those messages in the underlying database. For each Data Provider, the BWPM administrator can configure how many instances should run in parallel (A) and the channel to read data from (B). Some tuning settings are available, depending on the channel configured for the Data Provider. When using JMS, for example, the BWPM administrator may configure the JMS prefetch property to optimize JMS processing (C).

When a log message is processed by the Data Provider, it is handed over to the persistence layer of the Business Works Process Monitor Server. The persistence layer connects to an RDBMS and performs read and write operations against the BWPM database. The persistence layer uses a JDBC connection pool (D), which is shared between the Data Providers and the GUI Services. The GUI Services are used by end users and process user requests. The Business Works Process Monitor Server GUI is a web-based AJAX application.

Fault Tolerance Parameters in “tibemsd.conf” file


ft_active = URL
Specifies the URL of the active server.
If this server can connect to the active server, it will act as a backup server.
If this server cannot connect to the active server, it will become the active server.
ft_heartbeat = seconds
Specifies the interval (in seconds) the active server is to send a heartbeat signal to the backup server to indicate that it is still operating. Default is 3 seconds.
ft_activation = seconds
Activation interval (maximum length of time between heartbeat signals) which indicates that the active server has failed. Set in seconds; the default is 10.
This interval should be set to at least twice the heartbeat interval.
For example: ft_activation = 60
Note: The ft_activation parameter is only used by the backup server after a fault-tolerant switchover.
The active server uses the server_timeout_server_connection to detect a failed server.
ft_reconnect_timeout = seconds
The amount of time (in seconds) that a backup server waits for clients to reconnect (after it assumes the role of primary server in a failover situation).
If a client does not reconnect within this time period, the server removes its state from the shared state files.
The ft_reconnect_timeout time starts once the server has fully recovered the shared state, so this value does not account for the time it takes to recover the store files.
The default value of this parameter is 60.
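Putting the four parameters together, the fault-tolerance section of a backup server’s tibemsd.conf might look like the fragment below. The URL and values are illustrative only, chosen so that ft_activation is at least twice ft_heartbeat as recommended above:

```
ft_active            = tcp://primary-host:7222
ft_heartbeat         = 3
ft_activation        = 10
ft_reconnect_timeout = 60
```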

TIBCO EMS – Administration Commands (Some Important Ones)

Create User: create a new user
syntax : create user <user name> [“user_description”] [password=<password>]
example: create user test “Test user” password=test

Show Users: Show all users
syntax : show users
example: show users

Delete user: delete the named user
syntax : delete user <user name>
example: delete user test

Create group: creates a new group of users. Initially the group is empty. You need to add users in the group.
syntax : create group group_name “description”
example: create group Training “Training group”

Delete group: delete the named group
syntax : delete group <group name>
example: delete group training

Add member: Add one or more users to the group
syntax : add member <group name> <user name>, <user name> …
example: add member training  test

Set password: Set the password for the named user
syntax : set password <user-name> [password]
example: set password test 123

Grant admin : Grants the named global administrator permissions to the named user or group.
syntax : grant admin user=<user name> | group=<group name> <admin_permissions>
example: grant admin user=test all
grant admin group=training all
Note: some admin permissions: all (all admin permissions), change-connection (delete connections)

Connect: Connect the administrative tool to the server
syntax : connect tcp://<server name>:<port number>
example: connect tcp://server2000:7222

Disconnect: Disconnect the administrative tool from the server
syntax : disconnect
example: disconnect

create topic : Creates a topic with specified name and properties. Properties are listed in a
comma-separated list, as described in topics.conf . You can set the properties directly in the topics.conf
or by means of the setprop topic command in the EMS Administrator Tool.
syntax : create topic <topic_name> <[properties]>
example: create topic t1

Show Topic : Shows the details for the specified topic
syntax : show topic <topic-name>
example: show topic t1

setprop topic : Set topic properties, overriding any existing properties
syntax : setprop topic <topic-name> <properties>
example: setprop topic t1 secure,sender_name

addprop topic : Adds properties to the topic. Property names are separated by commas
syntax : addprop topic <topic_name> <properties,…>
example: addprop topic t1 failsafe

Grant topic : Grants specified permissions to specified user or group on specified topic. Multiple permissions are separated by commas.
Topic permissions are: subscribe, publish, durable, use_durable
Destination-level administrator permissions can also be granted with this command. The administrator permissions
for topics are: view, create, delete, modify, purge
syntax : grant topic <topic-name> <user=name | group=name> <permissions>
Note: The best way to define permissions on a topic is to open acl.conf and modify the entries in the file
example: TOPIC=t1 USER=user1 PERM=publish,subscribe,view

purge topic :  Purge all messages for all subscribers on the named topic
syntax : purge topic <topic-name>
example: purge topic t1

delete topic : delete a specific topic
syntax : delete topic <topic-name >
example: delete topic t1

create queue : Creates a queue with the specified name and properties. The possible queue properties are described in
Destination Properties. Properties are listed in a comma-separated list, as described in queues.conf. You can set the
properties directly in the queues.conf or by means of the setprop queue command in the EMS Administrator Tool.
syntax : create queue <queue_name> <[properties]>
example: create queue q1

Show Queue : Shows the details for the specified queue
syntax : show queue <queue-name>
example: show queue q1

setprop queue : Set queue properties, overriding any existing properties. Any properties on the queue that are not explicitly specified by this command are removed
syntax : setprop queue <queue-name> <properties>
example: setprop queue q1 secure,sender_name

addprop queue : Adds properties to the queue. Property names are separated by commas
syntax : addprop queue <queue_name> <properties,…>
example: addprop queue q1 failsafe

Grant queue : Grants specified permissions to specified user or group on specified queue. Multiple permissions are separated by commas
Queue permissions are: receive, send, browse
Destination-level administrator permissions can also be granted with this command. The administrator permissions
for queues are: view, create, delete, modify, purge
syntax : grant queue <queue-name> <user=name | group=name> <permissions>
Note: The best way to define permissions on a queue is to open acl.conf and modify the entries in the file
example: QUEUE=q1 USER=user1 PERM=receive,browse

purge queue : Purge all messages in the named queue
syntax : purge queue <queue-name>
example: purge queue q1

delete queue : delete a specific queue
syntax : delete queue <queue-name>
example: delete queue q1

Create durable : Creates a static durable subscriber
syntax : create durable <topic name> <durable name> [property,….,property]
example: create durable t1 durable1
Note: why durable: By default, subscribers only receive messages when they are active. If messages arrive on the topic
when the subscriber is not available, the subscriber does not receive those messages.
The EMS APIs allow you to create durable subscribers to ensure that messages are received, even if the message consumer
is not currently running. Messages for durable subscriptions are stored on the server as long as durable subscribers exist
for the topic, or until the message expiration time for the message has been reached, or until the storage limit has been
reached for the topic. Durable subscribers can receive messages from a durable subscription even if the subscriber was not
available when the message was originally delivered. When an application restarts and recreates a durable subscriber with the
same ID, all messages stored on the server for that topic are published to the durable subscriber.

Delete durable : Delete the named durable subscriber
syntax : delete durable <durable-name>
example: delete durable durable1

Show config: Shows the configuration parameters for the connected server
syntax : show config

Show consumer or show consumers: information about a specific consumer or all consumers
syntax : show consumer <consumer id> or show consumers
example: show consumer 6 or show consumers

show connections : Show connections between clients and server
syntax: show connections [type=q|t|s] [host=hostname] [user=username] [version] [address] [counts] [full]
example: show connections

show db : Print a summary of the server’s databases
syntax : show db [sync|async]
example: show db

TIBCO Master Data Management – Installation Guide

Hi Guys,

After a long time, I am going to upload a document on the installation of TIBCO MDM (Master Data Management).


Data is your organization’s most valuable asset. When mixed and matched in the right ways, it can reveal new opportunities, unseen threats, and areas for business improvement.

While mountains of data are collected to capture valuable intelligence, information is often scattered across the business. Living in multiple and often overlapping locations – from application siloes to spreadsheets on personal computers – knowledge is hard to attain, let alone shared and applied in an effective, meaningful way.

Hey Past, Are These Events Important?

Sprawling data records also jeopardize business outcomes, especially when events are thrown into the mix. A sound reference point is required to understand the historical significance and contextual relevance of activity as things change (for example: are a negative post on a social network and a returned product from the same customer?).

Unless intelligence is up-to-date and consistent across all access points, business decisions and performance will be compromised and less effective – stunting overall growth and exposing the organization to unnecessary, avoidable risk.

Set the Record Straight: A Single Version of Truth

TIBCO’s master data management (MDM) platform delivers the governance processes needed to construct and effectively maintain a centralized source of accurate intelligence.

  • Multi-Domain Platform: Delivers powerful control over a wide range of data assets – including product, customer, and vendor information
  • Centralized Mgmt.: Unified platform offers a single set of tools to effectively manage master data records enterprise-wide
  • Stickler for Quality: Automated processes can be customized to enforce validation and quality control – ensuring records stay clean and consistent (spans geographic, line-of-business, third-party, and application silo boundaries)
  • Universal Connectivity: Data from any source can be readily integrated, accessed, and consumed by applications, business processes, business intelligence tools, and users
  • Architected for Change: Flexible and scalable platform can meet immediate business needs and adjust to support future demands (even when new business process solutions are introduced)


  • Enhance Efficiency: Improve visibility and control over business activities by managing sophisticated relationships across products, customers, vendors, and locations
  • Optimize Outcomes: Ensure accurate, timely information supports decisions and actions made by the applications, processes, and people that run your business
  • Spot & Act on Insights Faster: Speed time-to-insight and action by allowing business users to directly access, manage, and visually interact with master data repositories
  • Accelerate Time-to-Market: Introduce new products and services faster with a richer source of product, customer, and vendor data
  • Elevate Customer Satisfaction: Accelerate loyalty and increase sales by personalizing interactions, delivering a consistent experience across channels, and tailoring products and services to customers’ specific wants and needs

Click To Download  TIBCO MDM installation in LINUX, JBOSS and Oracle Database.


Linux Concepts – File/Directory Permissions

Although there are already a lot of good security features built into Linux-based systems, one very important potential vulnerability can exist when local access is granted: file-permission issues resulting from a user not assigning the correct permissions to files and directories. So, based upon the need for proper permissions, I will go over the ways to assign permissions and show you some examples where modification may be necessary.

Basic File Permissions

Permission Groups

Each file and directory has three user based permission groups:

  • owner – The Owner permissions apply only to the owner of the file or directory; they will not impact the actions of other users.
  • group – The Group permissions apply only to the group that has been assigned to the file or directory; they will not affect the actions of other users.
  • all users – The All Users permissions apply to all other users on the system; this is the permission group that you want to watch the most.

Permission Types

Each file or directory has three basic permission types:

  • read – The Read permission refers to a user’s capability to read the contents of the file.
  • write – The Write permissions refer to a user’s capability to write or modify a file or directory.
  • execute – The Execute permission affects a user’s capability to execute a file or view the contents of a directory.

Viewing the Permissions

You can view the permissions by checking the file or directory permissions in your favorite GUI file manager (which I will not cover here) or by reviewing the output of the “ls -l” command in the terminal while working in the directory that contains the file or folder.

The permission in the command line is displayed as: _rwxrwxrwx 1 owner:group

  1. User rights/Permissions
    1. The first character that I marked with an underscore is the special permission flag that can vary.
    2. The following set of three characters (rwx) is for the owner permissions.
    3. The second set of three characters (rwx) is for the Group permissions.
    4. The third set of three characters (rwx) is for the All Users permissions.
  2. Following that grouping, the integer displays the number of hard links to the file.
  3. The last piece is the Owner and Group assignment formatted as Owner:Group.
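The layout described above can be sketched as a tiny parser. This is an illustrative Python helper (the function and field names are my own, not a system utility), splitting the ten-character mode string into the special flag and the three rwx triads:

```python
def split_mode(mode):
    """Split a ten-character mode string (as shown by `ls -l`) into its parts."""
    assert len(mode) == 10, "mode string should be exactly 10 characters"
    return {"special": mode[0],     # special permission flag (_, d, l, s, ...)
            "owner":   mode[1:4],   # first rwx triad: owner
            "group":   mode[4:7],   # second rwx triad: group
            "others":  mode[7:10]}  # third rwx triad: all other users

parts = split_mode("-rwxr-xr--")
assert parts["owner"] == "rwx"
assert parts["group"] == "r-x"
assert parts["others"] == "r--"
```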

Modifying the Permissions

When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.

Explicitly Defining Permissions

To explicitly define permissions you will need to reference the Permission Group and Permission Types.

The Permission Groups used are:

  • u – Owner
  • g – Group
  • o – Others (all other users)
  • a – All (owner, group, and others)

The potential Assignment Operators are + (plus) and - (minus); these are used to tell the system whether to add or remove the specific permissions.

The Permission Types that are used are:

  • r – Read
  • w – Write
  • x – Execute

So for an example, let’s say I have a file named file1 that currently has the permissions set to _rw_rw_rw_, which means that the owner, group, and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.

To make this modification you would invoke the command: chmod o-rw file1
To add those permissions back you would invoke the command: chmod o+rw file1

As you can see, to grant those permissions back you simply change the minus character to a plus.
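The symbolic mode above can be tried safely in a throwaway directory; a minimal sketch (the file name file1 is just the example from the text):

```shell
# start from read/write for owner, group, and others
touch file1
chmod 666 file1

# remove read and write from all other users (the "o" group)
chmod o-rw file1
stat -c '%A' file1    # shows -rw-rw----

# add them back
chmod o+rw file1
stat -c '%A' file1    # shows -rw-rw-rw-
```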

Using Binary References to Set permissions

Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three integers/numbers.

A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other users have no rights to the file.

The first number represents the Owner permissions; the second represents the Group permissions; and the last number represents the permissions for all other users. Each digit is the sum of the values for the r, w, and x bits of the rwx string (this numeric form is often called octal notation).

  • r = 4
  • w = 2
  • x = 1

You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.

So to set the permissions on file1 to _rwxr_____ (owner: read, write, execute; group: read; others: none), you would enter chmod 740 file1.
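The numeric form can be verified the same way; a quick sketch using the example above:

```shell
# owner rwx (4+2+1=7), group r (4), others none (0)
touch file1
chmod 740 file1
stat -c '%A %a' file1    # shows -rwxr----- 740
```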

Owners and Groups

I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory.

You use the chown command to change owner and group assignments. The syntax is simple: chown owner:group filename. So to change the owner of file1 to user1 and the group to family you would enter chown user1:family file1.

Advanced Permissions

The special permissions flag can be marked with any of the following:

  • _ – no special permissions
  • d – directory
  • l – The file or directory is a symbolic link
  • s – This indicates the setuid/setgid permissions. This is not displayed in the special permission part of the permissions display, but is represented as an s in the execute portion of the owner or group permissions.
  • t – This indicates the sticky bit permissions. This is not displayed in the special permission part of the permissions display, but is represented as a t in the execute portion of the all users permissions.

Setuid/Setgid Special Permissions

The setuid/setgid permissions are used to tell the system to run an executable as the owner, with the owner's permissions.

Be careful using setuid/setgid bits in permissions. If you incorrectly assign permissions to a file owned by root with the setuid/setgid bit set, then you can open your system to intrusion.

You can only assign the setuid/setgid bit by explicitly defining permissions. The character for the setuid/setgid bit is s.

So to set the setgid bit on file2.sh you would issue the command chmod g+s file2.sh.
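A quick check in a scratch directory (file2.sh is just the example name from the text; a regular user can set the setgid bit on a file they own):

```shell
# create an executable script and set the setgid bit
touch file2.sh
chmod 755 file2.sh
chmod g+s file2.sh
stat -c '%A %a' file2.sh    # shows -rwxr-sr-x 2755
```

Note how the s replaces the x in the group triplet of the permission string.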

Sticky Bit Special Permissions

The sticky bit can be very useful in shared environments: when it is assigned to the permissions on a directory, only a file's owner (or root) can rename or delete files within that directory.

You can only assign the sticky bit by explicitly defining permissions. The character for the sticky bit is t.

To set the sticky bit on a directory named dir1 you would issue the command chmod +t dir1.
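Using the dir1 example from the text, the result can be verified with stat:

```shell
# create a shared directory and set the sticky bit
mkdir -p dir1
chmod 755 dir1
chmod +t dir1
stat -c '%A %a' dir1    # shows drwxr-xr-t 1755
```

The t replaces the x in the "all users" triplet, and the octal value gains a leading 1.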

When Permissions Are Important

Users of Mac- or Windows-based computers often don't think about permissions, because those environments don't focus as aggressively on user-based file rights unless you are in a corporate environment. But now you are running a Linux-based system, where permission-based security is simplified and can easily be used to restrict access as you please.

So I will point out some documents and folders that you want to focus on, and show how the optimal permissions should be set.

  • home directories – The users' home directories are important because you do not want other users to be able to view and modify the files in another user's documents or desktop. To remedy this you will want the directory to have drwx______ (700) permissions. So, to enforce the correct permissions on user1's home directory, issue the command chmod 700 /home/user1.
  • bootloader configuration files – If you decide to implement a password to boot specific operating systems, you will want to remove read and write permissions from the configuration file for all users but root. To do so, you can change the permissions of the file to 700.
  • system and daemon configuration files – It is very important to restrict rights to system and daemon configuration files so that users cannot edit their contents. It may not be advisable to restrict read permissions, but restricting write permissions is a must. In these cases it may be best to set the rights to 644.
  • firewall scripts – It may not always be necessary to block all users from reading the firewall file, but it is advisable to restrict them from writing to it. Since the firewall script is run by the root user automatically on boot, all other users need no rights, so you can assign the 700 permissions.
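The home-directory recommendation can be rehearsed in a scratch location before touching real home directories; a sketch (the path home_demo/user1 is purely illustrative):

```shell
# simulate locking down a user's home directory (illustrative path)
mkdir -p home_demo/user1
chmod 700 home_demo/user1
stat -c '%A %a' home_demo/user1    # shows drwx------ 700
```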


Introduction to TIBCO LogLogic – Enterprise Virtual Appliance 5.5.1

LogLogic is a technology company that specializes in Security Management, Compliance Reporting, and IT Operations products. LogLogic developed the first appliance-based log management platform.

LogLogic’s Log Management platform collects and correlates user activity and event data. LogLogic’s products are used by many of the world’s largest enterprises to rapidly identify and alert on compliance violations, policy breaches, cyber attacks, and insider threats.

TIBCO BWPM Client – Configuration

After installation and configuration of the TIBCO BWPM Server:

To configure the BWPM Client on any running process (for Linux OS):

  • Log in to the Server where TIBCO Administrator is running
  • cd <path>/tibco/tra/domain/<domain>/application/<deployed-application>
  • gedit <application-name>.tra
  • At the end of the file, append the following
  • java.start.class=com.tibco.processmonitor.client.run
  • Save the tra file
  • Restart the service instance in the TIBCO Administrator.
  • Check the trace logs of the service instance.
    • You should see nJAMS entries in the trace logs.
  • Also check your BWPM URL; you will see the service instance listed there.
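The append step above can also be scripted; a minimal sketch (the file name application.tra stands in for your actual <application-name>.tra):

```shell
# append the BWPM client start class to the application's .tra file
TRA_FILE="application.tra"    # placeholder for <application-name>.tra
echo "java.start.class=com.tibco.processmonitor.client.run" >> "$TRA_FILE"
tail -n 1 "$TRA_FILE"    # shows java.start.class=com.tibco.processmonitor.client.run
```

After this you would still restart the service instance in TIBCO Administrator as described above.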

Easy Tips For Post TIBCO Suite Installation in UNIX Environment – TIP # 1.


  • Login to the Linux/Unix as the tibco user (OS User)
  • cd ~
  • vim .bash_profile
  • Enter the following in the .bash_profile as environment variables:

################# TIBCO PARAMETERS ##################



  • Now press Esc, then type :wq! to save and quit vim
  • . .bash_profile (source the file so the variables take effect in the current shell)
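The exact variables depend on your installation; the values below are purely hypothetical illustrations (TIBCO_HOME, the TRA version path, and the variable names are assumptions, not defaults from the source):

```shell
################# TIBCO PARAMETERS ##################
# hypothetical example values; adjust paths to your installation
export TIBCO_HOME=/opt/tibco
export TRA_HOME=$TIBCO_HOME/tra/5.9
export PATH=$TRA_HOME/bin:$PATH
```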


TIBCO Activespaces – Basic Concept

ActiveSpaces applications are programs that use ActiveSpaces software to work collaboratively over a shared data grid.
The data grid comprises one or more tuple spaces.
An ActiveSpaces distributed application system is a set of ActiveSpaces programs that cooperate to fulfil a mission (either using the administrative CLI tool, or the ActiveSpaces API calls).
Tuples are distributed, rather than “partitioned” across seeders—members that are configured to contribute memory and processing resources to a space.
ActiveSpaces automatically redistributes tuples when seeders join and leave the space.
Unlike a horizontally partitioned database, where the allocation of items to nodes is fixed and can only be changed through manual reconfiguration, ActiveSpaces data is automatically updated on all devices on the data grid and rebalanced transparently using a “minimal redistribution” algorithm.
Replication allows the distribution of data replicates on different peers for fault tolerance.
ActiveSpaces’ data access optimization feature uses a replicate if one is locally available.
If a seeder suddenly fails, the replicate is immediately promoted to seeder, and the new seeder creates new replicates.

This optimizes system performance.