JavaOne 2016 Recorded Sessions

If you are like me and were not able to get to JavaOne this year, then please see below for some links to a number of the presentations and sessions.

For the full list of JavaOne 2016 sessions, see here.



Replicating HTTP Sessions across Data Centers


As explained in a previous post, Coherence*Web is a great feature which allows you to store your HTTP sessions in a Coherence cluster. Since Coherence introduced Federation in 12.2.1, it has been possible to federate the HTTP sessions stored in Coherence*Web. This allows applications to replicate HTTP sessions across data centers, which can handle scenarios where an entire data center goes down.

The last post on Easier configuration of Federated Caches explained how easy it is to configure federation in a WebLogic Server (WLS) domain. This post will build on that and explain how to extend that approach to federate HTTP sessions as well.


The first 2 steps for using federated HTTP sessions with Coherence*Web are the same as explained in Step 1 and Step 2 under the Setup section. Once the 2 WLS domains have been installed and configured to use Federation as explained in those steps, the following settings need to be carried out.

Enable Coherence*Web federated session storage

Navigate to the Coherence settings tab of the DataTier WLS Cluster and enable the “Coherence Web Federated Storage Enabled” check-box as shown below


Deploy a web application

Deploy a web application onto the ClientTier WLS cluster. There are 2 configuration changes required to enable federated HTTP sessions:

Enable Coherence*Web in the web application. This is done by configuring Coherence*Web as the session persistence type in the weblogic.xml file of the web application, as shown below
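A minimal weblogic.xml along these lines enables Coherence*Web session persistence (the coherence-web store type is the standard setting; the rest of the descriptor is a minimal sketch):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <!-- Store HTTP sessions in the Coherence*Web session cache -->
  <session-descriptor>
    <persistent-store-type>coherence-web</persistent-store-type>
  </session-descriptor>
</weblogic-web-app>
```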


Set the coherence-session-cache-federated context parameter in the web.xml of the web application, as shown below
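The context parameter takes this shape (the parameter name is as given above; the value of true is an assumption, switching the session cache to the federated scheme):

```xml
<!-- web.xml: use the federated session cache for Coherence*Web -->
<context-param>
  <param-name>coherence-session-cache-federated</param-name>
  <param-value>true</param-value>
</context-param>
```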


After making the above changes, deploy the web application and navigate to the URL of Site1. Add some entries in the HTTP session. Now use the same HTTP session to connect to Site2 and validate that the changes made in Site1 are available.

Using the same HTTP Session

HTTP sessions are based on cookies which are exchanged between the browser and the server. Once an HTTP session is created, the server sends back a session cookie (JSESSIONID is the default cookie name in WebLogic) to the browser. The browser sends the same cookie back to the server in subsequent requests. The server validates the cookie that it receives, i.e. it checks whether the cookie being sent is an existing and valid cookie. In the case of Coherence*Web, the server checks whether the session identified by the cookie exists in the session cache.
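The validation step can be sketched in plain Java. This is a toy in-memory stand-in (the class and its map are invented for illustration); in Coherence*Web the lookup is against the session cache rather than a local map:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration of the session-cookie validation described above.
public class SessionStore {

    private final Map<String, Long> sessions = new ConcurrentHashMap<>();

    // Server side: create a session and return its id, which the server
    // sends to the browser as the JSESSIONID cookie value.
    public String create() {
        String id = UUID.randomUUID().toString();
        sessions.put(id, System.currentTimeMillis());
        return id;
    }

    // Server side: validate the cookie presented by the browser on a
    // subsequent request, i.e. check that the session still exists.
    public boolean isValid(String cookieValue) {
        return cookieValue != null && sessions.containsKey(cookieValue);
    }
}
```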

The browser will send the cookie back in subsequent requests only if the host name of the next request matches that of the previous request. In typical deployments, a load balancer sits in front of the managed servers of both sites, so that even if a site goes down, the load balancer will direct the next HTTP requests to the site which is available. We will not cover that topic here, but instead suggest a simple way to simulate such a set-up.

Let us assume that the IP address of the Site1 managed server is host-site1-ip-address, and similarly that host-site2-ip-address is the IP address of the Site2 managed server. In the hosts file (/etc/hosts on Linux), add an entry as follows

host-site1-ip-address webapphost

and access the web application in a browser using “webapphost” as the hostname (please switch off any proxies in the system for this browser session). Execute some actions in the web application so that the session has some data populated. Keep the browser session open and change the entry in the hosts file as below

host-site2-ip-address webapphost

Refresh the browser session (use Ctrl + F5 so that the browser refreshes the pages). The next request will go to the managed server of Site2, and the session details should still be available. For example, if the web application was a shopping cart application and you had added some entries to the cart, the shopping cart should be the same when you access the managed server of Site2.

Example Web Application

The attached web application can be used as a sample application to test the set-up. The web application is already Coherence*Web federation enabled as explained above, and can be deployed to the ClientTier WLS clusters of Site1 and Site2 domains. You can use the same script provided in the previous blog if you want to generate such a set-up.

Once the web application is deployed, as explained in the “Using the same HTTP Session” section above, make the necessary changes in the hosts file and open the web application at the following URL


and add some entries in the cart. Change the hosts file to point to the Site2 IP address, refresh the browser (use Ctrl + F5 so that the page is refreshed properly) and navigate to the shopping cart. You should see that the items added in the first request are still part of the shopping cart.


The Coherence JVisualVM plugin can be used to monitor federation-related metrics as explained here. The Coherence*Web federated session cache can also be monitored using JVisualVM. A sample screenshot is shown below:



Coherence Federated caches made easy in WebLogic


Coherence 12.2.1 introduced Federated Caching, which allows asynchronous federation of cache data across multiple geographically dispersed clusters. In this version, when using Coherence with Managed Coherence Servers, you can now use the WebLogic Server (WLS) Admin Console to more easily configure federated caches across a typical 2-domain WLS set-up. (For more information about Federation, please refer to the official documentation here.)

This blog will cover this feature with quick explanations and screenshots including an example script to setup a sample environment.


Coherence federation supports multiple topologies, including active-active, active-passive, hub-spoke and central-federation. In order to configure federation, there are 2 basic steps which are typically involved:

  • Configure the topology and the list of local and remote participants using a Coherence operational override file
  • Define and use a federated cache scheme
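For reference, the operational override for an active-active topology typically looks something like the following (the cluster names follow the examples later in this post; the host names and port are placeholders):

```xml
<coherence xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config">
  <federation-config>
    <participants>
      <participant>
        <name>CohCluster_Site1</name>
        <remote-addresses>
          <socket-address>
            <address>site1-host</address> <!-- placeholder host name -->
            <port>7574</port>
          </socket-address>
        </remote-addresses>
      </participant>
      <participant>
        <name>CohCluster_Site2</name>
        <remote-addresses>
          <socket-address>
            <address>site2-host</address> <!-- placeholder host name -->
            <port>7574</port>
          </socket-address>
        </remote-addresses>
      </participant>
    </participants>
    <topology-definitions>
      <active-active>
        <name>Active</name>
        <participant>CohCluster_Site1</participant>
        <participant>CohCluster_Site2</participant>
      </active-active>
    </topology-definitions>
  </federation-config>
</coherence>
```

When configuring through the WLS Admin Console as described below, this file is generated for you; it is shown here only so the underlying moving parts are clear.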

Configuring Federation

Please read on to carry out either manual or scripted configuration of Federation.

Manual Configuration

In order to manually configure Federation, it is assumed that you have 2 WLS domains set up with storage-enabled and storage-disabled WLS clusters (similar to the OOTB example Coherence domain). Each domain must have a Coherence cluster configured, and the Coherence cluster names must be different in the 2 domains: since the cluster name is used as the “participant” name, and participant names must be unique in a federation topology, the names cannot clash. All of the managed servers must be associated with the Coherence cluster in the respective domain.

For this scenario, let us assume that the WebLogic domains are named Site1 and Site2, and the Coherence clusters defined in the respective domains are named CohCluster_Site1 and CohCluster_Site2.

Step 1

Navigate to the Federation tab of the Coherence cluster settings page of CohCluster_Site1 and select a topology, for example active-active. Enter the remote participant name as “CohCluster_Site2” and enter the host name of any of the managed servers of Site2 in the remote participant host field, as shown below:


Step 2

Repeat the corresponding step for Site2: navigate to the Federation tab of the Coherence cluster settings page of CohCluster_Site2 and select a topology, for example active-active. Enter the remote participant name as “CohCluster_Site1” and enter the host name of any of the managed servers of Site1 in the remote participant host field, as shown below


Step 3

Create a GAR application with a federated-scheme defined in its cache configuration as shown below, and deploy the GAR application to the storage-enabled managed servers of Site1 and Site2.
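A cache configuration with a federated-scheme has roughly this shape (the cache, scheme, service and topology names are illustrative):

```xml
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>federated</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <federated-scheme>
      <scheme-name>federated</scheme-name>
      <service-name>FederatedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
      <!-- The topology name must match the one defined when configuring
           the federation topology for the cluster -->
      <topologies>
        <topology>
          <name>Active</name>
        </topology>
      </topologies>
    </federated-scheme>
  </caching-schemes>
</cache-config>
```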


Note: See below for links to an example GAR to download.

Step 4

Create an EAR which packages the GAR file together with a web application that can modify the data in the federated cache, and deploy the EAR onto the storage-disabled servers of both WLS domains.

Open the web application deployed onto Site1, modify any cache data, navigate to the same web application in Site2 and see that the modified cache contents are visible in Site2 as well.

Scripted Configuration

The following steps will create 2 pre-configured WLS domains and deploy the required application.


In order to run the example, 2 machines are required. WebLogic must be installed on both machines, and the environment must have been initialized by sourcing $MW_HOME/wlserver/server/

Running the Example

Let us assume the following:

Site Name    Domain Name
Site1        Site1
Site2        Site2

The WLST script used in the following steps has the following arguments:

  • all – command to issue all actions such as creating the domain, deploying, etc.
  • Local domain name
  • Remote cluster name (CohCluster_<Remote Domain Name>)
  • Remote host name

On the Primary site, execute the following

  • Unzip
  • cd federated-example
  • java weblogic.WLST all Site1 CohCluster_Site2

On the Secondary site, execute the same steps with the arguments swapped (local domain Site2, remote cluster CohCluster_Site1).

Navigate to the URL and add some entries in the cache; you can use the “Insert 20 Random Contacts” button to insert entries.

Navigate to the URL and validate that the data entered in Site1 is visible. Since federation is asynchronous, there may be a small time lag before the data is populated in Site2. Refresh the browser if the data is not visible at first.

Monitoring via JVisualVM

The Coherence JVisualVM plugin can be used to monitor federation-related metrics as explained here.


If the example does not work after the WLST commands are executed successfully, the most common error scenario is that Site1 is not able to contact Site2. This can be easily debugged as follows

Navigate to the storage1 server log at $MW_HOME/user_projects/domains/Site1/servers/storage1/logs/storage1.log  and look for the following lines

<1468245534568> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <2016-07-11 19:28:54.568/215.612 Oracle Coherence GE <Error> (thread=Worker:0, member=1): Exception connecting to CohCluster_Site2: could not establish a connection to one of the following addresses

The above error indicates that Site1 could not connect to Site2; this can be because of a firewall, a wrong host name, etc. The logs will also show more errors (if any) related to Federation.


Announcing Coherence

Coherence has been released and is now available for download from OTN. This builds on the previous release from last October and includes some nice extras for those working with Federation and Persistence, as well as many additional features for existing and new users of Coherence.

See the Official Coherence Blog for more information.


Devoxx Talk on “The Illusion of Statelessness”

Just watched a great talk from Aleksander Seovic (one of the Coherence Architects from Oracle) on “The Illusion of Statelessness” which he gave at Devoxx Poland just last week.

His short 35 min talk covers best practices when building scalable distributed applications, data grid features in general, as well as some Coherence capabilities to help scale your applications.

To watch this talk, and others, see this YouTube link.



New Coherence Demo Released on GitHub


We are excited to announce that the new “Coherence-Demo” is now available on GitHub. The demo showcases the latest Coherence features via a fully self-contained web-based “Stock” application written using Angular & Bootstrap and accessing Coherence via our REST API.

The demo showcases core Coherence features around reliability, availability, scalability and performance, as well as the following new features included in Coherence 12.2.1:

  • Federated Caching – Provides the ability to easily send (federate) caches across data centres.
  • Cache Persistence – Provides the ability to persist and recover the contents of caches/ services as well as persist data across full cluster outages.
  • Java 8 support – Including Remote/Distributed Lambdas and Streams.
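The Java 8 support means that the familiar java.util.stream pipeline shapes also apply to the grid. As a plain-Java sketch (no Coherence APIs; the class and field names are invented for illustration), the kind of per-symbol aggregation the demo performs looks like this:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PortfolioAggregation {

    // Illustrative position record: symbol, quantity and price.
    public static final class Position {
        final String symbol;
        final int quantity;
        final double price;

        public Position(String symbol, int quantity, double price) {
            this.symbol = symbol;
            this.quantity = quantity;
            this.price = price;
        }
    }

    // Total value per symbol. With Coherence's distributed streams the same
    // pipeline shape runs in parallel across the storage-enabled members.
    public static Map<String, Double> valueBySymbol(List<Position> positions) {
        return positions.stream().collect(Collectors.groupingBy(
                p -> p.symbol,
                Collectors.summingDouble(p -> p.quantity * p.price)));
    }
}
```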

Included within the demo is a feature we call “Demo Insight”, which explains functionality you are using as you navigate through the application.  For example, when you start an additional server the following screen will be displayed explaining what happens behind the scenes when the server is started.

Insight Example

You can also click on the “Information” (i) icon to see more information on a particular topic such as how the portfolio composition is calculated. You can also disable/enable “Insight” via the tools menu.

Building the Demo

The full source code of the demo is available on GitHub, where you can either download the source code or clone it via the git command line or your favourite IDE:

git clone

To build the demo you must also have Maven and Coherence installed. See the README on the GitHub site for step-by-step details to build and install.

Running the Coherence-Demo

Once you have built the demo, you can use the following from a command prompt to launch it. You must ensure that you have Java 8 in your PATH.

 java -jar target/coherence-demo-2.0.1-SNAPSHOT.jar

The demo starts a Coherence cache server for data storage, plus a Grizzly HTTP server for serving Coherence REST requests and the actual demonstration content (HTML/JS files).

Within the application Oracle Bedrock (which provides a general purpose Java framework for the development, orchestration and testing of highly concurrent distributed applications) is used to start additional cache servers and an additional cluster when you start Federation.

Coherence Demo Start Page


There are four main regions in the application.

  • Portfolio Composition (top left) shows an aggregation of all the positions and their values across symbols. If you check the ‘Real-Time Price Updates’ check-box, the prices will be randomly updated every 2 seconds; with Federation enabled, these updates will be federated to the second cluster.
  • Aggregation Performance (top right) shows rolling aggregation times for calculating the portfolio above. As you add additional cache servers, you can see the reduction in the aggregation time as the additional cores are utilised for parallel query processing.
  • Data Distribution (bottom left) shows the distribution of primary data across cache servers as additional servers are added/removed.
  • Cluster Management (bottom right) shows the currently running cache servers and provides the ability to start and stop cache servers.

There are also a number of menu items in the top right:

  • Federation – Starts, stops and pauses federation (see below).
  • Persistence – Create and recover snapshots.
  • Tools – Start JVisualVM, add additional trades, clear the cache, shutdown the application and disable “Insight”.

Starting Federation

When you select the “Start Federation” menu option, Oracle Bedrock will start an additional Coherence Cluster on a different port, and start federating data to that new cluster.

Once the cluster has started, you will see a menu option, under the Federation menu, to open the second cluster dashboard. The demo chooses cluster names for you based upon your locale. To change these default names see the README.

The clusters are set up as active-active, which means that updates from either cluster will be transmitted to the other cluster. You can exercise this by choosing the “Add Trades” option from the “Tools” menu.

Shutting Down the Demo

When you have finished with the demo, you can choose the “Shutdown” option from the “Tools” menu or simply kill the java command you used to start it. All additional cache servers and clusters that were started during the demo will be automatically closed.




Updated Best Practices White-paper for Coherence 12.2.1

We have just released an updated “Planning A Successful Deployment” white-paper based around Coherence 12.2.1. This document builds on the previous white-paper and updates recommendations and best practices based upon Coherence 12.2.1 features.

It also covers best practices when using new 12.2.1 features, including Persistence, Federation and simplified cluster discovery.

For a direct link to the document see here, or you can access it from the “Learn More” tab on OTN.

Well worth a read if you are looking at upgrading or deploying with Coherence 12.2.1 and above.
