<definitions xmlns="http://ws.apache.org/ns/synapse">
    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint name="dynamicLB">
                    <!-- Discover target members dynamically and distribute requests in
                         round-robin order, failing over to the next member on error -->
                    <dynamicLoadbalance failover="true"
                                        algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
                        <membershipHandler
                                class="org.apache.synapse.core.axis2.Axis2LoadBalanceMembershipHandler">
                            <!-- Clustering domain whose members form the load balancing pool -->
                            <property name="applicationDomain" value="apache.axis2.app.domain"/>
                        </membershipHandler>
                    </dynamicLoadbalance>
                </endpoint>
            </send>
            <drop/>
        </in>
        <out>
            <!-- Send the response back to the client (i.e. the implicit "To" EPR) -->
            <send/>
        </out>
    </sequence>
    <sequence name="errorHandler">
        <!-- Return a SOAP fault to the client if the message could not be delivered -->
        <makefault response="true">
            <code xmlns:tns="http://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <send/>
    </sequence>
</definitions>
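For the membership handler above to discover members, Axis2 clustering typically has to be enabled in the axis2.xml of both Synapse and the target Axis2 instances, with the application nodes joining the apache.axis2.app.domain domain. The fragment below is only a minimal sketch of the application-side clustering configuration, assuming Tribes-based multicast membership; the exact element and class names (e.g. TribesClusteringAgent in newer Axis2 releases versus TribesClusterManager in older ones) and the default parameter values vary between Axis2 versions, so adapt it to the axis2.xml shipped with your distribution:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">multicast</parameter>
    <!-- Must match the applicationDomain set on the membershipHandler in the Synapse configuration -->
    <parameter name="domain">apache.axis2.app.domain</parameter>
    <parameter name="mcastAddress">228.0.0.4</parameter>
    <parameter name="mcastPort">45564</parameter>
    <parameter name="localMemberPort">4000</parameter>
</clustering>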
Executing the Client
Note that the Synapse configuration does not define any concrete addresses or
URLs as targets; they are discovered dynamically by the dynamic load balance
endpoint. To test this feature, start the load balancing and failover client using
the following command:
ant loadbalancefailover -Di=100
This client sends 100 requests to the LoadbalanceFailoverService through Synapse.
Synapse distributes the load among the three nodes we started earlier in a
round-robin manner. The LoadbalanceFailoverService appends the name of the server
to each response, so the client can determine which server processed the message.
If you examine the console output of the client, you can see that the requests are
processed by all three servers as follows:
[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer3
[java] Request: 4 ==> Response from server: MyServer1
[java] Request: 5 ==> Response from server: MyServer2
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
...
Now run the client without the -Di=100 parameter so that it sends requests
indefinitely. While the client is running, shut down the server named MyServer1.
You can observe that, after MyServer1 is shut down, requests are distributed only
between MyServer2 and MyServer3. The console output before and after shutting down
MyServer1 is listed below (MyServer1 was shut down after request 63):
...
[java] Request: 61 ==> Response from server: MyServer1
[java] Request: 62 ==> Response from server: MyServer2
[java] Request: 63 ==> Response from server: MyServer3
[java] Request: 64 ==> Response from server: MyServer2
[java] Request: 65 ==> Response from server: MyServer3
[java] Request: 66 ==> Response from server: MyServer2
[java] Request: 67 ==> Response from server: MyServer3
...
Now restart MyServer1. You can observe that requests are again sent to all
three servers. If you start a new Axis2 instance (say, MyServer4), it will also
be added to the load balancing pool dynamically.
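For example, with the sample Axis2 server shipped with Synapse, a fourth instance could be started roughly as follows (the port values are illustrative; use any free ports, and make sure the server's axis2.xml joins the apache.axis2.app.domain clustering domain):

./axis2server.sh -http 9003 -https 9008 -name MyServer4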