Clustering Acegi via JGroups (DistributedHashtable)

In my previous blog I suggested using JMS or caching for a distributable SessionRegistry, but I have since found a simpler solution: JGroups.

DistributedHashtable: JGroups gives us a simple and very capable class for distributing maps.

  • All read-only operations run on local copies
  • Good merge strategies
  • Easy implementation and configuration
  • etc.

I implemented two classes.

The first is DistributableSessionInformation. It is in fact not much different from the original SessionInformation; the key points are:

  • it implements Serializable
  • SessionInformation's default constructor is private, but one is required for serialization
  • it has hashCode() and equals() methods
  • it is immutable, so there is no refreshLastRequest() method
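The points above can be sketched roughly as follows. This is a minimal sketch, not the downloadable source; the field names and the id-based equality are my assumptions about what such a class would look like:

```java
import java.io.Serializable;
import java.util.Date;

// Immutable, Serializable variant of Acegi's SessionInformation.
// Since refreshLastRequest() is gone, a new instance would be
// created whenever the last-request time changes.
public final class DistributableSessionInformation implements Serializable {

    private static final long serialVersionUID = 1L;

    private final Object principal;
    private final String sessionId;
    private final Date lastRequest;
    private final boolean expired;

    public DistributableSessionInformation(Object principal, String sessionId,
                                           Date lastRequest, boolean expired) {
        this.principal = principal;
        this.sessionId = sessionId;
        this.lastRequest = new Date(lastRequest.getTime());
        this.expired = expired;
    }

    public Object getPrincipal() { return principal; }
    public String getSessionId() { return sessionId; }
    public Date getLastRequest() { return new Date(lastRequest.getTime()); }
    public boolean isExpired()   { return expired; }

    // equality on the session id alone: ids are unique per HTTP session,
    // so two snapshots of the same session compare equal
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof DistributableSessionInformation)) return false;
        return sessionId.equals(((DistributableSessionInformation) o).sessionId);
    }

    @Override
    public int hashCode() { return sessionId.hashCode(); }
}
```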

DistributableSessionRegistryImpl is a SessionRegistry implementation backed by JGroups.

You can set both the JGroups configuration file and the cluster name.
If you use the init and destroy methods, the registry is cluster enabled; otherwise it falls back to a local singleton cache.
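The init/destroy life cycle described above might look like this. It is only a sketch: it uses a plain ConcurrentHashMap so that it is self-contained, whereas the real class would construct the JGroups-backed map from channelName and clusterOptions inside init(); all method names besides the SessionRegistry-style ones are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the registry life cycle: init()/destroy() decide between a
// cluster-backed map and a plain local one, driven by the bean properties.
public class DistributableSessionRegistryImpl {

    private String channelName;    // e.g. "acegicluster1"
    private String clusterOptions; // e.g. "udp.xml"
    private boolean distributable; // the ${distributable} toggle

    // sessionId -> session information; a local singleton cache by default
    private Map<String, Object> sessions = new ConcurrentHashMap<String, Object>();

    public void init() {
        if (distributable) {
            // In the real implementation this is where the JGroups
            // DistributedHashtable would be created from channelName
            // and clusterOptions; here we keep a local map for the sketch.
            sessions = new ConcurrentHashMap<String, Object>();
        }
    }

    public void destroy() {
        // the real implementation would also close the JGroups channel here
        sessions.clear();
    }

    public void registerNewSession(String sessionId, Object info) {
        sessions.put(sessionId, info); // writes are replicated by the map
    }

    public Object getSessionInformation(String sessionId) {
        return sessions.get(sessionId); // read-only: served from the local copy
    }

    public void removeSessionInformation(String sessionId) {
        sessions.remove(sessionId);
    }

    // setters wired by the Spring bean definition
    public void setChannelName(String v)      { this.channelName = v; }
    public void setClusterOptions(String v)   { this.clusterOptions = v; }
    public void setDistributable(boolean v)   { this.distributable = v; }
}
```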

A sample configuration:

<bean id="org.acegisecurity.concurrent.SessionRegistry"
      class="" init-method="init" destroy-method="destroy">
  <property name="channelName" value="acegicluster1"/>
  <property name="clusterOptions" value="udp.xml"/>
  <property name="distributable" value="${distributable}"/>
</bean>


Downloads are available for both the source and the binary.

Dependencies:

  • acegi 1.0.3+
  • jgroups 2.4.1
  • log4j 1.2.13

waiting for your comments :)

06.18.2007: small update to the upload link and an implementation fix for a small bug.

  • Trackbacks are closed
  • Comments (14)
    • Sarath Babu Polavarapu
    • February 17th, 2009

    When I try to execute the given Java files, I get multiple compile-time errors.

    One is like “The type DistributableSessionRegistryImpl must implement the inherited abstract method

    • Hi Sarath,
      I think this is a version-specific problem; in other words, my code and your project are probably not using the same version.
      Can you specify your spring-security version, please?


    • Dan
    • February 23rd, 2009

    Did you have any issues whenever you manually shutdown one of the server instances? If I gracefully shut down one of the clustered instances of Oracle Application Server, the HttpSessionDestroyedEvent fires off and removes the principal, but does not remove the Authentication object in the SecurityContextHolder, thus the user is still authenticated, but no longer has SessionInformation in the SessionRegistry. I took a look at the logic in the ConcurrentSessionFilter and it does not perform any action on null SessionInformation objects.

    • Did you enable session replication among the server instances?
      This is a very specific case where JGroups replicates the data but the server does not.
      It is better to check the server's configuration.

    • Dan
    • February 23rd, 2009

    I do have session replication enabled on the OC4J instances. Are you saying that I have to have this disabled? If I disable this, will this stop the HttpSessionDestroyedEvent from being fired off when I shut down a single instance of OC4J? Thanks!

    • Session replication should be enabled, but since I haven’t run this on Oracle, I am not sure what the problem might be :(

    • Dan
    • February 23rd, 2009

    I added log statements to your source code otherwise I wouldn’t have even noticed it if I wasn’t checking log files. The Authentication object gets replicated even if I shutdown one instance of OAS. It does not get destroyed, therefore the user can continue to use the application with no DistributedSessionInformation in the DistributedSessionRegistryImpl (getSessionInformation returns null). However since there is no DistributedSessionInformation , the last refresh date will never get updated and the account will eventually expire. I’m going to double check my server configuration and then look into the Spring event flow to see why this event is being published when I shutdown an instance. Thanks!

    • Dan
    • February 23rd, 2009

    After further research, this is a common problem with web servers. It seems that the session destroyed event is called for the sessions in the current node and is not global to the cluster. There is an easy solution in the JBoss link below. You can load a properties file to determine if a shutdown is occurring. The only issue with this is that one would have to remember to update the properties file before the instance was shut down and then again prior to the app server restart. ;) If you can come up with a better solution I would like to hear about it.
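    [The properties-file workaround described in this comment could be sketched as below; "shutdown.properties" and the "shutting.down" key are hypothetical names, since the JBoss link's exact details are not reproduced here.]

    ```java
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Check a flag file before treating a session-destroyed event as a
    // real logout; the file name and key are illustrative.
    public final class ShutdownFlag {

        private ShutdownFlag() {}

        public static boolean isShuttingDown(String path) {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            } catch (IOException e) {
                return false; // no flag file: assume a normal session end
            }
            return Boolean.parseBoolean(props.getProperty("shutting.down", "false"));
        }
    }
    ```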

    • Sarath Babu Polavarapu
    • April 21st, 2009

    Hi Altuure,

    I am using spring-1.2.6.jar and acegi-security-1.0.0-RC1.jar. Please let me know if I need to change the Acegi Security version. Your dependency list mentions acegi 1.0.3. Do you have a reference URL for downloading that version's jar file?

    • Sarath Babu Polavarapu
    • April 26th, 2009

    Hi Mert,

    Thanks for your response. As you suspected, the problem was version specific. When I replaced the jar with acegi-security-1.0.7 (the latest version, downloaded from the Spring website), the problem was resolved, and I am able to run your test case.

    However, my legacy code uses the Acegi framework constant AbstractProcessingFilter.ACEGI_SECURITY_TARGET_URL_KEY, which is available in the older jar file (acegi-security-1.0.0-RC1.jar). When I changed the jar reference from acegi-security-1.0.0-RC1.jar to acegi-security-1.0.7.jar, I got the compile-time error: “AbstractProcessingFilter.ACEGI_SECURITY_TARGET_URL_KEY cannot be resolved”.

    Do you have any idea how to resolve this issue ?
    Thanks, Sarath.

  1. In a JBoss 4.2.1.GA (JGroups 2.4.1-sp3) cluster I got some version-mismatch problems with DistributedHashtable, but after slight modifications using ReplicatedHashtable instead I got successful results. If anyone is facing the same problem, refer to this post.

    • Pushkar
    • September 15th, 2010

    I have tried clustering Acegi across two JBoss 4.2.x nodes through JGroups’ ReplicatedHashMap. Everything seems fine, but only the master node (the node I started first) has the other node’s session (authentication) info; the second node does not.
    That is, once I log in on node 1 with id “abc”, a second login attempt with the same id is restricted on both node 2 and node 1. But if I first log in on node 2 with id “abc”, a second login attempt with the same id on node 1 is not restricted.

    Any suggestions???

Comments are closed.