Coherence 12.2.1 introduced Federated Caching, which allows asynchronous federation of cache data across multiple geographically dispersed clusters. In Coherence 12.2.1.1, when using Coherence with Managed Coherence Servers, you can now use the WebLogic Server (WLS) Administration Console to more easily configure federated caches across a typical two-domain WLS setup. (For more information about Federation, please refer to the official documentation here.)
This blog covers this feature with quick explanations and screenshots, including an example script to set up a sample environment.
Coherence federation supports multiple topologies, including active-active, active-passive, hub-spoke and central-federation. Configuring federation typically involves two basic steps:
- Configure the topology and the list of local and remote participants using a Coherence operational override file
- Define and use a federated cache scheme
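For reference, the first step produces an operational override along the following lines. This is an illustrative sketch only (participant names and hosts are taken from the example used later in this post; the cluster port and file layout may differ in your environment), and when using Managed Coherence Servers the console generates the equivalent configuration for you:

```xml
<?xml version="1.0"?>
<coherence xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config">
  <federation-config>
    <!-- Each participant is a Coherence cluster taking part in federation -->
    <participants>
      <participant>
        <name>CohCluster_Site1</name>
        <remote-addresses>
          <socket-address>
            <address>host.site1.company.com</address>
            <port>7574</port>
          </socket-address>
        </remote-addresses>
      </participant>
      <participant>
        <name>CohCluster_Site2</name>
        <remote-addresses>
          <socket-address>
            <address>host.site2.company.com</address>
            <port>7574</port>
          </socket-address>
        </remote-addresses>
      </participant>
    </participants>
    <!-- The topology wires the participants together; active-active here -->
    <topology-definitions>
      <active-active>
        <name>MyTopology</name>
        <active>CohCluster_Site1</active>
        <active>CohCluster_Site2</active>
      </active-active>
    </topology-definitions>
  </federation-config>
</coherence>
```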
Please read on to carry out either manual or scripted configuration of Federation.
To configure Federation manually, it is assumed that you have two WLS domains set up, each with storage-enabled and storage-disabled WLS clusters (similar to the out-of-the-box example Coherence domain). Each domain must have a Coherence cluster configured, and the Coherence cluster names must be different in the two domains: since the cluster name is used as the "participant" name, and participant names must be unique in a federation topology, the names cannot clash. All of the managed servers must be associated with the Coherence cluster in their respective domain.
For this scenario, let us assume that the WebLogic domains are named Site1 and Site2, and the Coherence clusters defined in the respective domains are named CohCluster_Site1 and CohCluster_Site2.
Navigate to the Federation tab of the Coherence cluster settings page of CohCluster_Site1 and select a topology, for example active-active. Enter the remote participant name as “CohCluster_Site2” and enter the host name of any of the managed servers of Site2 in the remote participant host field, as shown below:
Repeat the corresponding step for Site2: navigate to the Federation tab of the Coherence cluster settings page of CohCluster_Site2 and select a topology, for example active-active. Enter the remote participant name as “CohCluster_Site1” and enter the host name of any of the managed servers of Site1 in the remote participant host field, as shown below:
Create a GAR application with a federated-scheme defined in its cache configuration as shown below, and deploy the GAR application to the storage-enabled managed servers of Site1 and Site2.
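A minimal sketch of such a cache configuration follows. The scheme, service, and topology names here are illustrative placeholders, not taken from the downloadable example GAR; the topology name must match the one defined in your federation configuration:

```xml
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <!-- Map all cache names onto the federated scheme -->
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>federated</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <federated-scheme>
      <scheme-name>federated</scheme-name>
      <service-name>FederatedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
      <!-- Reference the topology defined in the operational configuration -->
      <topologies>
        <topology>
          <name>MyTopology</name>
        </topology>
      </topologies>
    </federated-scheme>
  </caching-schemes>
</cache-config>
```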
Note: See below for links to an example GAR to download.
Create an EAR which packages the GAR file together with a web application that can modify the data in the federated cache, and deploy the EAR onto the storage-disabled servers of both WLS domains.
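For packaging, the GAR is declared as a module in the EAR's META-INF/weblogic-application.xml. A hedged sketch (the module name and path are placeholders for your actual GAR file):

```xml
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <!-- Declare the GAR as a Coherence module inside the EAR -->
  <module>
    <name>example-gar</name>
    <type>GAR</type>
    <path>example-gar.gar</path>
  </module>
</weblogic-application>
```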
Open the web application deployed on Site1 and modify some cache data, then navigate to the same web application on Site2 and verify that the modified cache contents are visible on Site2 as well.
The following steps will create two pre-configured WLS domains and deploy the required application.
To run the example, two machines are required. The WebLogic 12.2.1.1.0 release must be installed on both machines, and the environment must have been initialized by sourcing $MW_HOME/wlserver/server/setWLSEnv.sh
Running the Example
Let us assume that the primary site runs on host.site1.company.com and the secondary site on host.site2.company.com. The setup.py WLST script used in the following steps takes the following arguments:
- all – command to issue all actions such as creating the domain, deploying, etc.
- Local domain name
- Remote cluster name (CohCluster_<Remote Domain Name>)
- Remote host name
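While setup.py performs the full domain creation and deployment, the federation-specific portion of such a script boils down to setting a few attributes on the Coherence cluster resource. The following is a rough WLST sketch only; the MBean path and attribute names are assumptions based on the WLS 12.2.1.1 Coherence federation parameters bean, not an excerpt from setup.py, so verify them against your installation:

```python
# Offline WLST sketch -- run via: java weblogic.WLST <script>
# Assumes the domain and the Coherence cluster resource already exist.
readDomain('/path/to/user_projects/domains/Site1')

# Navigate to the federation parameters of the Coherence cluster resource
# (path is an assumption; inspect your domain tree with ls() to confirm).
cd('/CoherenceClusterSystemResource/CohCluster_Site1/CoherenceResource/'
   'CohCluster_Site1/CoherenceClusterParams/CohCluster_Site1/'
   'CoherenceFederationParams/CohCluster_Site1')
set('FederationTopology', 'active-active')
set('RemoteCoherenceClusterName', 'CohCluster_Site2')
set('RemoteParticipantHosts', 'host.site2.company.com')

updateDomain()
closeDomain()
```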
For the Primary site on host.site1.company.com, execute the following:
- Unzip setup.zip
- cd federated-example
- java weblogic.WLST setup.py all Site1 CohCluster_Site2 host.site2.company.com
For the Secondary site on host.site2.company.com, execute the following:
- Unzip setup.zip
- cd federated-example
- java weblogic.WLST setup.py all Site2 CohCluster_Site1 host.site1.company.com
Navigate to the URL http://host.site2.company.com:7007/example-web-app/faces/ContactList.jsp and validate that the data entered in Site1 is visible. Since federation is asynchronous, there may be a small time lag before the data is populated in Site2. Refresh the browser if the data is not visible at first.
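Because replication is asynchronous, a scripted validation should poll rather than check once. A minimal, generic polling helper is sketched below; the stub condition stands in for whatever check you would actually run (for instance, fetching the ContactList.jsp page and looking for the new entry):

```python
import time

def wait_for(condition, timeout=30.0, interval=1.0):
    """Poll condition() until it returns a truthy value or the timeout
    elapses; return that value, or None on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            return None
        time.sleep(interval)

# Stub condition for illustration; in practice it would perform the
# HTTP request against the Site2 web application.
print(wait_for(lambda: "contact-found", timeout=5.0))
```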
Monitoring via JVisualVM
The Coherence JVisualVM plugin can be used to monitor federation-related metrics as explained here.
If the example does not work even though the WLST commands executed successfully, the most common error scenario is that Site1 is not able to contact Site2. This can be easily debugged as follows:
Navigate to the storage1 server log at $MW_HOME/user_projects/domains/Site1/servers/storage1/logs/storage1.log and look for the following lines:
<1468245534568> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <2016-07-11 19:28:54.568/215.612 Oracle Coherence GE 12.2.1.1.0 <Error> (thread=Worker:0, member=1): Exception connecting to CohCluster_Site2: com.tangosol.net.messaging.ConnectionException: could not establish a connection to one of the following addresses
The above error indicates that Site1 could not connect to Site2; this can be caused by a firewall, a wrong host name, etc. The logs will also show more errors (if any) related to Federation.