Coherence Federated caches made easy in WebLogic


Coherence 12.2.1 introduced Federated Caching, which allows asynchronous federation of cache data across multiple geographically dispersed clusters. When using Coherence with Managed Coherence Servers, you can now use the WebLogic Server (WLS) Admin Console to more easily configure federated caches across a typical two-domain WLS setup. (For more information about Federation, please refer to the official documentation here.)

This blog will cover this feature with quick explanations and screenshots, including an example script to set up a sample environment.


Coherence federation supports multiple topologies, including active-active, active-passive, hub-spoke and central-federation. In order to configure federation, there are two basic steps which are typically involved:

  • Configure the topology and the list of local and remote participants using a Coherence operational override file
  • Define and use a federated cache scheme
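To make the first step concrete, the following is a minimal sketch of a Coherence operational override file for an active-active topology between the two clusters used in this blog. The host names and cluster port are illustrative assumptions; when you configure Federation through the WLS Admin Console as described below, WebLogic generates the equivalent configuration for you.

```xml
<?xml version="1.0"?>
<coherence xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config">
  <federation-config>
    <participants>
      <!-- Local participant: the Coherence cluster name of Site1 -->
      <participant>
        <name>CohCluster_Site1</name>
        <remote-addresses>
          <socket-address>
            <address>site1-host.example.com</address> <!-- assumed host name -->
            <port>7574</port>                         <!-- assumed cluster port -->
          </socket-address>
        </remote-addresses>
      </participant>
      <!-- Remote participant: the Coherence cluster name of Site2 -->
      <participant>
        <name>CohCluster_Site2</name>
        <remote-addresses>
          <socket-address>
            <address>site2-host.example.com</address> <!-- assumed host name -->
            <port>7574</port>
          </socket-address>
        </remote-addresses>
      </participant>
    </participants>
    <topology-definitions>
      <!-- Both participants are active: changes flow in both directions -->
      <active-active>
        <name>Active</name>
        <active>CohCluster_Site1</active>
        <active>CohCluster_Site2</active>
      </active-active>
    </topology-definitions>
  </federation-config>
</coherence>
```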

Configuring Federation

Please read on to carry out either manual or scripted configuration of Federation.

Manual Configuration

In order to manually configure Federation, it is assumed that you have two WLS domains set up with storage-enabled and storage-disabled WLS clusters (similar to the OOTB example Coherence domain). Each domain must have a Coherence cluster configured, and the Coherence cluster names must be different in the two domains. Since the cluster name is used as the "participant" name, and participant names must be unique in a federation topology, the Coherence cluster names must be different. All of the managed servers must be associated with the Coherence cluster in the respective domain.

For this scenario, let us assume that the WebLogic domains are named Site1 and Site2, and the Coherence clusters defined in the respective domains are named CohCluster_Site1 and CohCluster_Site2.

Step 1

Navigate to the Federation tab of the Coherence cluster settings page of CohCluster_Site1, and select a topology, for example active-active. Enter the remote participant name as "CohCluster_Site2" and enter the host name of any of the managed servers of Site2 in the remote participant host field, as shown below:


Step 2

Repeat the corresponding step for Site2: navigate to the Federation tab of the Coherence cluster settings page of CohCluster_Site2, and select a topology, for example active-active. Enter the remote participant name as "CohCluster_Site1" and enter the host name of any of the managed servers of Site1 in the remote participant host field, as shown below.


Step 3

Create a GAR application with a federated-scheme defined in its cache configuration as shown below, and deploy the GAR application to the storage-enabled managed servers of Site1 and Site2.
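A minimal sketch of such a cache configuration is shown below. The cache mapping and scheme name are illustrative assumptions; the key point is the federated-scheme element, whose topologies section refers to a topology name (here "Active") defined in the operational configuration.

```xml
<?xml version="1.0"?>
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <!-- Map all cache names to the federated scheme (illustrative) -->
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>federated-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- A federated scheme behaves like a distributed scheme, but
         additionally replicates changes to the configured participants -->
    <federated-scheme>
      <scheme-name>federated-scheme</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
      <topologies>
        <topology>
          <!-- Must match a topology name from the operational config -->
          <name>Active</name>
        </topology>
      </topologies>
    </federated-scheme>
  </caching-schemes>
</cache-config>
```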


Note: See below for links to an example GAR to download.

Step 4

Create an EAR which packages the GAR file and a web application that can modify the data in the federated cache, and deploy the EAR onto the storage-disabled servers of both WLS domains.

Open the web application deployed onto Site1 and modify some cache data, then navigate to the same web application in Site2 and verify that the modified cache contents are visible in Site2 as well.

Scripted Configuration

The following steps will create two pre-configured WLS domains and deploy the required application.


In order to run the example, two machines are required. WebLogic must be installed on both machines, and the environment must have been initialized by sourcing $MW_HOME/wlserver/server/

Running the Example

Let us assume the following:

Site    Name
Site1   Site1
Site2   Site2

The WLST script used in the following steps has the following arguments:

  • all – command to issue all actions such as creating the domain, deploying, etc.
  • Local domain name
  • Remote cluster name (CohCluster_<Remote Domain Name>)
  • Remote host name

For the primary site, execute the following:

  • Unzip
  • cd federated-example
  • java weblogic.WLST all Site1 CohCluster_Site2

For the secondary site, execute the following:

Navigate to the URL and add some entries in the cache; you can use the Insert 20 Random Contacts button to insert entries.

Navigate to the URL and validate that the data entered in Site1 is visible. Since federation is asynchronous, there may be a small time lag before the data is populated in Site2. Refresh the browser if the data is not visible on first view.

Monitoring via JVisualVM

The Coherence JVisualVM plugin can be used to monitor federation-related metrics, as explained here.


If the example does not work after the WLST commands are executed successfully, the most common error scenario is that Site1 is not able to contact Site2. This can be easily debugged as follows.

Navigate to the storage1 server log at $MW_HOME/user_projects/domains/Site1/servers/storage1/logs/storage1.log and look for lines similar to the following:

<1468245534568> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <2016-07-11 19:28:54.568/215.612 Oracle Coherence GE <Error> (thread=Worker:0, member=1): Exception connecting to CohCluster_Site2: could not establish a connection to one of the following addresses

The above error indicates that Site1 could not connect to Site2; this can be caused by a firewall, an incorrect host name, and so on. The logs will also show any further errors related to Federation.

