Distributed Smartswitch instances

Synchronization technologies

In this scheme, a Smartswitch instance is installed on each cluster node.
The cluster data, such as balances, the peers present in the system, prices, CDRs, etc., are synchronized and shared among the cluster nodes.
Database synchronization is performed using one of the following technologies:

Files are synchronized using rsync.
This is how call recordings and calls captured by the Call capturing system are synchronized.
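
As an illustration, the sketch below shows how rsync can mirror a recordings directory to another cluster node; the paths and peer host name are hypothetical, and the actual Smartswitch synchronization job may use different options.

    import subprocess

    # Hypothetical path and peer host name, used only for illustration.
    RECORDINGS_DIR = "/var/spool/smartswitch/recordings/"
    PEER_NODE = "node2.example.com"

    def sync_recordings():
        """Mirror local call recordings to a peer node with rsync over SSH."""
        subprocess.run(
            [
                "rsync",
                "--archive",    # preserve permissions, ownership and timestamps
                "--compress",   # compress data in transit
                "--partial",    # keep partially transferred files on interruption
                RECORDINGS_DIR,
                f"{PEER_NODE}:{RECORDINGS_DIR}",
            ],
            check=True,
        )

    if __name__ == "__main__":
        sync_recordings()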

In the simplest (default) case, the cluster is a set of several Smartswitch instances joined together using the technologies described above.
Each node runs a web interface, signalling and media engines, Billing and other control applications.
However, the configuration is shared.
Changes made on one node are automatically transferred to all other nodes.
The same applies to peers' balances and CDR data.

Shared and dedicated IP addresses

Each node in a cluster has:

Using dedicated IP address for call origination

You can set up cluster nodes to use their dedicated IP addresses for call origination to your suppliers.
To configure this mode, specify '0.0.0.0' as the bind address parameter on the global Signalling protocols settings page.

For a direct IP-to-IP connection, you should notify your suppliers about all of your possible IP addresses.
They should add them to their ACLs so that your fail-over is handled seamlessly.
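
A minimal sketch of what the wildcard bind means in practice (the supplier address below is just an example): with the socket bound to 0.0.0.0, the operating system's routing decides which local address, i.e. the node's own dedicated IP, outgoing signalling originates from.

    import socket

    SUPPLIER = ("198.51.100.20", 5060)   # example supplier SIP address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5060))         # wildcard bind, as in the Smartswitch setting
    sock.connect(SUPPLIER)               # route lookup selects the local source address
    print("this node originates calls from", sock.getsockname()[0])
    sock.close()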

If you use SIP registration on an external proxy, you should acquire as many SIP accounts on the external SIP registrar as you have nodes.
Then set up one SIP account per node.

You can use both IP fail-over and load-balancing with this scheme.

Using shared IP address for call origination

You can configure cluster nodes to use a shared IP address for call origination to your suppliers.
To configure this mode, specify the shared IP address as the bind address parameter on the global Signalling protocols settings page.

In this case you can have a single highly available outbound SIP registration on the external registrar.

Because the shared IP address is active on only one node at a time, you cannot use load balancing, only fail-over.
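
A minimal sketch, assuming the shared (floating) address is moved between nodes by an external mechanism such as VRRP/keepalived (the mechanism is not specified here, and the address is an example): only the node that currently holds the address can bind to it, which is why this scheme gives fail-over rather than load balancing.

    import socket

    SHARED_IP = "203.0.113.10"   # example shared (floating) address

    def holds_shared_ip():
        """Return True if the shared IP is currently configured on this node.

        Binding to an address that is not present locally fails with
        EADDRNOTAVAIL, so only the active node can bind its signalling
        engine (and keep the outbound SIP registration) on the shared IP.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.bind((SHARED_IP, 0))
            return True
        except OSError:
            return False
        finally:
            sock.close()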

High-availability

By default, the cluster supports high availability at the node level.
This means that when one of the nodes goes offline, all other nodes continue to work.
However, the clients that worked with that particular offline node will experience denial of service until the node is restored.

High availability at the service level means that when one of the nodes goes offline, customers continue to be served without interruption:
service provision to customers is automatically transferred to other, live, nodes.
The only exception is that calls in progress are dropped at the moment of failure; only new calls will be served on the new node.
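
As a rough illustration of service-level fail-over (node addresses are hypothetical, and real deployments typically rely on DNS or SIP-level mechanisms rather than a manual probe), new calls are simply placed through whichever node is still reachable:

    import socket

    NODES = ["203.0.113.11", "203.0.113.12", "203.0.113.13"]   # example node addresses
    SIP_PORT = 5060

    def pick_alive_node(timeout=2.0):
        """Return the first node that accepts a connection on the SIP port.

        Calls already in progress on a failed node are lost; only new calls
        are routed through the node returned here.
        """
        for node in NODES:
            try:
                with socket.create_connection((node, SIP_PORT), timeout=timeout):
                    return node
            except OSError:
                continue
        raise RuntimeError("no alive nodes")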

Available schemes of high availability at the service level:

Load-balancing

By default, the cluster supports manual load balancing.
Since each node has a dedicated IP address, you can specify that address in the interconnection form when connecting a customer.
This way you can manually spread your customers among the available nodes.
For each peer configured in the system, you can specify which node(s) it belongs to.
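
A trivial sketch of the manual approach (node addresses and customer names are made up): each customer is handed the dedicated IP of one particular node on the interconnection form.

    NODE_IPS = ["203.0.113.11", "203.0.113.12", "203.0.113.13"]   # example dedicated addresses

    def assign_node(customer_index):
        """Spread customers across nodes round-robin when filling the form."""
        return NODE_IPS[customer_index % len(NODE_IPS)]

    customers = ["acme-telecom", "globex-voice", "initech-sip"]
    for i, name in enumerate(customers):
        print(f"{name}: interconnect via {assign_node(i)}")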

Automatic load balancing means that you specify the same IP address (or domain name) in the interconnection form for all customers.
The system automatically spreads the load from all customers.
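
One common way to achieve this is DNS-based distribution; the sketch below (with a hypothetical domain) only illustrates the idea that a single name handed to every customer can resolve to all node addresses, and is not necessarily the mechanism Smartswitch uses.

    import socket

    DOMAIN = "sip.example.com"   # hypothetical shared domain given to all customers

    # Each customer resolves the same name and may land on any node behind it.
    infos = socket.getaddrinfo(DOMAIN, 5060, proto=socket.IPPROTO_UDP)
    addresses = sorted({info[4][0] for info in infos})
    print(f"{DOMAIN} resolves to cluster nodes: {addresses}")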

Available schemes of automatic load balancing:

SIP registration

For SIP subscribers with registration, replication of the registered IP address is supported.
This means that a subscriber can register on node A while the DID supplier is connected to node B.
Information about the subscriber's IP address is automatically transferred to node B, and when node B receives a call from the DID supplier, it forwards it to the registered address.
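
A minimal sketch of the idea (the data layout is hypothetical, not Smartswitch's internal format): the contact learned on node A is replicated, so node B can route an inbound DID call to it.

    # Shared registration table, replicated across all cluster nodes.
    registrations = {}

    def on_register(subscriber, contact_ip, contact_port):
        """Node A stores the subscriber's contact; replication makes it cluster-wide."""
        registrations[subscriber] = (contact_ip, contact_port)

    def route_inbound_call(subscriber):
        """Node B forwards a call from the DID supplier to the registered contact."""
        contact = registrations.get(subscriber)
        if contact is None:
            raise LookupError(f"{subscriber} is not registered on any node")
        return contact

    on_register("1001", "192.0.2.55", 5060)   # subscriber registers on node A
    print(route_inbound_call("1001"))         # node B routes the DID call to them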
