
Distributed Smartswitch instances

Synchronization technologies

In this scheme, a Smartswitch instance is installed on each cluster node.
Cluster data, such as balances, peers, prices, CDRs, etc., are synchronized and shared among the cluster nodes.
The database is synchronized using one of the following technologies:

  • MySQL replication.
    This scheme is suitable for all types of geographically distributed clusters.
    This technology is recommended for deployment at the moment (a replication health-check sketch follows this list).
    Limitations:
    • only one of the nodes is the primary.
      Therefore, you can make configuration changes in the web interface only on that one, primary, node.
      The changes are then automatically transferred (replicated) to the other, slave, nodes.
    • if the primary node fails, you can't change the configuration until the node is restored.
      New statistics won't be stored either.
      Balances will be updated only on the node through which traffic goes, and balance changes won't be transferred to the other cluster nodes.
      Instead, all this information will be stored in a temporary database on the slave nodes.
      After the primary node is restored, this information will be automatically transferred to it and then replicated to all other cluster nodes.
  • MySQL cluster.
    This scheme is suitable only for a cluster that is located in a single physical network.
    It doesn't have the limitations of MySQL replication.
    However, it doesn't provide redundancy if the data center goes offline.
    This technology is experimental at the moment.
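
For the MySQL replication scheme it is useful to verify that a slave node is actually in sync with the primary. Below is a minimal Python health-check sketch; the host, user and password are hypothetical examples, not part of the Smartswitch distribution.

  # Minimal replication health check, run on a slave node.
  # Host and credentials are hypothetical examples.
  import mysql.connector  # pip install mysql-connector-python

  conn = mysql.connector.connect(host="127.0.0.1",
                                 user="monitor", password="secret")
  cur = conn.cursor(dictionary=True)
  cur.execute("SHOW SLAVE STATUS")
  status = cur.fetchone()

  if status is None:
      print("replication is not configured on this node")
  elif status["Slave_IO_Running"] == "Yes" and status["Slave_SQL_Running"] == "Yes":
      print("replication OK, lag:", status["Seconds_Behind_Master"], "s")
  else:
      print("replication broken:", status["Last_IO_Error"] or status["Last_SQL_Error"])

  conn.close()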

Files are synchronized using the rsync technology.
This is how call recordings and calls captured by the Call capturing system are synchronized.
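
As an illustration, here is a minimal Python sketch that mirrors recordings to another node by invoking rsync. The directory layout and the peer host name are assumptions made for the example, not the actual Smartswitch paths.

  # Mirror call recordings to another cluster node with rsync.
  # Paths and the peer host name are hypothetical examples.
  import subprocess

  SRC = "/var/spool/smartswitch/recordings/"                    # assumed local path
  DST = "node2.example.com:/var/spool/smartswitch/recordings/"  # assumed peer path

  # -a preserves ownership/permissions/timestamps, -z compresses in transit,
  # --delete removes destination files that no longer exist locally
  subprocess.run(["rsync", "-az", "--delete", SRC, DST], check=True)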

In the simplest case (by default), a cluster is a set of several Smartswitch instances tied together using the technologies described above.
Each node runs a web interface, signalling and media engines, Billing and other control applications.
However, the configuration is shared.
Changes made on one node are automatically transferred to all other nodes.
The same applies to peers' balances and CDR data.

Shared and dedicated IP addresses

Each node in a cluster has:

  • a dedicated IP address (mandatory).
    This address is used for node management.
  • a shared IP address (optional, used for additional clustering high-availability features).
    This address is used to provide high-availability services to customers.

Using a dedicated IP address for call origination

You can set up cluster nodes to use their dedicated IP addresses for call origination to your suppliers.
To configure this mode, specify '0.0.0.0' as the bind address parameter on the global Signalling protocols settings page.

For a direct IP-to-IP connection, you should notify your suppliers about all IP addresses that your calls may originate from.
They should add these addresses to their ACL so that your fail-over is handled seamlessly.
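
To illustrate the idea, here is a minimal sketch of such a supplier-side check: a call is accepted if its source address belongs to any of the cluster nodes, so fail-over between nodes stays transparent. The addresses are hypothetical examples.

  # Supplier-side ACL idea: accept signalling from any cluster node.
  # The node addresses are hypothetical examples.
  CLUSTER_NODE_IPS = {"203.0.113.10", "203.0.113.11", "203.0.113.12"}

  def is_allowed(source_ip: str) -> bool:
      return source_ip in CLUSTER_NODE_IPS

  print(is_allowed("203.0.113.11"))  # True: the call survives fail-over to another node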

If you use SIP registration on an external proxy, you should acquire as many SIP accounts on the external SIP registrar as you have nodes, and set up one SIP account per node.

You can use both IP fail-over and load-balancing with this scheme.

Using a shared IP address for call origination

You can configure cluster nodes to use the shared IP address for call origination to your suppliers.
To configure this mode, specify the shared IP address as the bind address parameter on the global Signalling protocols settings page.

In this case you can have a single highly available outbound SIP registration on the external registrar.

Because the shared IP address is active on only one node at a time, you can't use load balancing with this scheme, only fail-over.
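
Since only the node that currently holds the shared address originates calls, it can be handy to tell the active node from the standby ones. A minimal sketch, assuming a Linux node with the ip utility and a hypothetical shared address:

  # Detect whether this node currently holds the shared IP address,
  # i.e. is the active node. Assumes Linux; the address is a hypothetical example.
  import subprocess

  SHARED_IP = "203.0.113.100"

  out = subprocess.run(["ip", "-o", "-4", "addr", "show"],
                       capture_output=True, text=True, check=True).stdout
  active = any(" %s/" % SHARED_IP in line for line in out.splitlines())
  print("active node" if active else "standby node")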

High-availability

By default, the cluster supports high-availability at the node level.
This means that when one of the nodes goes offline, all other nodes continue to work.
However, the clients that worked with that particular offline node will experience a denial of service until the node is restored.

High-availability at the service level means that when one of the nodes goes offline, customers continue to be served without interruption.
That is, service provision to customers is automatically transferred to other, live, nodes.
The only exception is that calls in progress are dropped at the moment of failure; only new calls will be served on the new node.

Available schemes of high-availability at the service level:

Load-balancing

By default, the cluster supports manual load-balancing.
Since each node has a dedicated IP address, you can specify that address in the interconnection form when connecting a customer.
This way, you can manually spread your customers among the available nodes.
For each peer configured in the system, you can specify which node(s) it belongs to.

Automatic load balancing means that you specify the same IP address (or domain name) in the interconnection form for all customers.
The system then automatically spreads the load from all customers.

Available schemes of automatic load-balancing:

SIP registration

For SIP subscribers with registration, replication of the registered IP address is supported.
This means that a subscriber can register on node A while the DID supplier is connected to node B.
Information about the subscriber's IP address is automatically transferred to node B, and when node B gets a call from the DID supplier, it forwards the call to the registered address.
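
A minimal sketch of the lookup on node B, assuming the registered contact is stored in the replicated database; the table and column names are hypothetical, and the real Smartswitch schema may differ.

  # Node B resolves a subscriber's contact that node A wrote into the
  # replicated database. Table/column names are hypothetical examples.
  import mysql.connector

  conn = mysql.connector.connect(host="127.0.0.1", user="smartswitch",
                                 password="secret", database="smartswitch")
  cur = conn.cursor()
  cur.execute("SELECT contact_ip, contact_port FROM sip_registrations"
              " WHERE username = %s", ("1001",))
  row = cur.fetchone()
  if row:
      print("forward the DID call to %s:%s" % row)  # subscriber registered on node A
  else:
      print("subscriber is not registered anywhere in the cluster")
  conn.close()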
