Sippy Softswitch Hardware Specification Guidelines

 

Be sure to source reputable, name-brand hardware.

KVM access is required for Sippy Support to access your server for installation and upgrades.

 

** A note about real CPU cores versus Hyper-Threads or virtual cores

Our documentation states “physical cores”. This excludes virtual or hyper-threading cores.

 

For example, the Intel Xeon E5-2690 processor is marketed as having 8 cores and 16 threads (8 physical cores plus 8 Hyper-Threading “virtual cores”). For sizing purposes, this CPU counts as having 8 physical cores.

 


 

Sippy Softswitch

The figures below are based on synthetic and real-case performance tests with signalling traffic only. Features that decrease call processing performance were disabled during the tests. The Sippy Softswitch has a built-in media relay; be aware that utilizing the media relay comes with a trade-off in capacity. This is an important consideration before procuring your hardware.

 

Media Relay Scaling

In our baseline testing with the built-in media relay and SIP signalling, 4 physical CPU cores are fully utilized by the RTPproxy processes as load approaches 1,000 CC and 20 'All Out' CPS. To scale to 2,000 CC and 30 CPS we suggest 16 physical cores.

 

 


Conversational traffic pattern with low-to-medium CPS rate (high ACD and ASR stats)

 

 

2,000 Concurrent Calls, 75 CPS Spec guideline

 

CPU

8 Cores, 2.0 GHz, Intel Xeon E5 or better, or comparable AMD**

RAM

16GB or more (may depend on number rates/routes/environments)

Disk


Hardware RAID with Write-Back Cache (WBC) and Battery Backup Unit (BBU) support:

- RAID 1 based on 2 X 240+ GB SSDs is the best fit

- RAID 10 based on 4 X 1 TB SATA drives will also work here, but is less preferable

- RAID 1 based on 2 X 1 TB SATA drives may be used as a last resort


Optionally, RAID 1 based on 2 X 1 TB SATA drives for backups and system logs.


Software RAID is NOT recommended as it can cause performance degradation under load.

Network Interface

Intel(R) PRO/1000 PCI Express Gigabit Ethernet (GigE). A bonded (FEC) interface is recommended

Bandwidth

Less than 10 MBit/sec

 

3,000 Concurrent Calls, 90 CPS Spec guideline

 

CPU

8 Cores, 2.5 GHz, Intel Xeon E5 or better, or comparable AMD**

RAM

24GB or more (may depend on number rates/routes/environments)

Disk


Hardware RAID with Write-Back Cache (WBC) and Battery Backup Unit (BBU) support:

- RAID 10 based on 4 X 240+ GB SSDs is the best fit

- RAID 1 based on 2 X 480+ GB SSDs provides acceptable performance

- RAID 10 based on 4 X 1 TB SATA drives may be used as a last resort


Optionally, RAID 1 based on 2 X 1 TB SATA drives for backups and system logs.


Software RAID is NOT recommended as it can cause performance degradation under load.

Network Interface

Intel(R) PRO/1000 PCI Express Gigabit Ethernet (GigE). A bonded (FEC) interface is recommended

Bandwidth

Less than 100 MBit/sec

 

6,000 Concurrent Calls, 135 CPS Spec guideline

 

CPU

12 Cores, 2.0 GHz, Intel Xeon E5 or better, or comparable AMD**

RAM

32GB or more (may depend on number rates/routes/environments)

Disk


Hardware RAID with Write-Back Cache (WBC) and Battery Backup Unit (BBU) support:

- RAID 10 based on 4 X 480+ GB SSDs is the best fit

- RAID 1 based on 2 X 960+ GB SSDs provides acceptable performance


Optionally, RAID 1 based on 4 X 1 TB SATA drives for backups and system logs.


Software RAID is NOT recommended as it can cause performance degradation under load.

Network Interface

Intel(R) PRO/1000 PCI Express Gigabit Ethernet (GigE). A bonded (FEC) interface is recommended

Bandwidth

Less than 100 MBit/sec

 

 

8,000 Concurrent Calls, 200 CPS Spec guideline

 

CPU

16 Cores, 2.5 GHz, Intel Xeon E5 or better, or comparable AMD**

RAM

32GB or more (may depend on number rates/routes/environments)

Disk


Hardware RAID with Write-Back Cache (WBC) and Battery Backup Unit (BBU) support:

- RAID 10 based on 4 X 480+ GB SSDs is the best fit

- RAID 1 based on 2 X 960+ GB SSDs provides acceptable performance


RAID 10 based on 4 X 1 TB SATA drives for backups and system logs.


Software RAID is NOT recommended as it can cause performance degradation under load.

Network Interface

Intel(R) PRO/1000 PCI Express Gigabit Ethernet (GigE). A bonded (FEC) interface is recommended

Bandwidth

Less than 100 MBit/sec

 

10,000+ Concurrent Calls, 300+ CPS Spec guideline

 

CPU

24+ Cores, 2.5 GHz, Intel Xeon E5 or better, or comparable AMD**

RAM

64GB or more (may depend on number rates/routes/environments)

Disk


Hardware RAID with Write-Back Cache (WBC) and Battery Backup Unit (BBU) support:

- RAID 10 based on 4 X 960+ GB SSDs is the best fit


RAID 10 based on 4 X 1 TB SATA drives for backups and system logs.


Software RAID is NOT recommended as it can cause performance degradation under load.

Network Interface

Intel(R) PRO/1000 PCI Express Gigabit Ethernet (GigE). A bonded (FEC) interface is recommended

Bandwidth

Less than 100 MBit/sec

 


International and Call Center traffic pattern with medium-to-high CPS (low ACD and ASR stats)

 

1,200 Concurrent Calls, 150 'All In' / 225 'All Out' CPS Spec guideline

 

CPU

12 Cores, 2.0 GHz, Intel Xeon E5 or better, or comparable AMD**

RAM

16GB or more (may depend on number rates/routes/environments)

Disk


Hardware RAID with Write-Back Cache (WBC) and Battery Backup Unit (BBU) support:

- RAID 10 based on 4 X 480+ GB SSDs is the best fit

- RAID 1 based on 2 X 960+ GB SSDs provides tolerable performance


RAID 10 based on 4 X 1 TB SATA drives for backups and system logs.


Software RAID is NOT recommended as it can cause performance degradation under load.

Network Interface

Intel(R) PRO/1000 PCI Express Gigabit Ethernet (GigE). A bonded (FEC) interface is recommended

Bandwidth

Less than 10 MBit/sec

 


 

Sippy Media Gateway (SMG) hardware guidelines

The SMG Cluster is a beta product, available only to enterprise customers participating in our beta program. If you wish to take part in this program, please contact sales@sippysoft.com.

 

Relaying or proxying media through an intermediary system is useful in some specific scenarios, such as NAT traversal assistance and network topology masking. Relaying media increases the demand on network bandwidth and CPU resources.

 

Media Relay causes the RTP stream between your client system and your vendor to travel via your Sippy Softswitch infrastructure.

 

The type of codec used by your client and vendor has a large impact on the resources consumed. Here is an example of the relation between bandwidth usage, Call Capacity (CC), Calls Per Second (CPS) and two popular codecs:

 

Codec / Bandwidth usage examples

 


                        G.711            G.729/G.723

2,000 CC / 30 CPS       250 MBit/sec     40 MBit/sec

3,000 CC / 50 CPS       375 MBit/sec     60 MBit/sec

6,000 CC / 100 CPS      750 MBit/sec     120 MBit/sec

10,000 CC / 150 CPS     1.25 GBit/sec    300 MBit/sec

 

Furthermore, the characteristics of traffic using the same codec can vary.

 

For example, a G.711 stream with a 20 msec packetization window sends 50 packets per second, while the same stream at 10 msec sends 100 packets per second. The codec payload rate stays the same, but the per-packet RTP/UDP/IP header overhead doubles, so both the packet throughput and the bandwidth usage increase.

 

The point to remember is that media traffic, even when using the same codec, is not always equal in terms of the resources required to relay it.
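As a rough back-of-the-envelope sketch, the effect of packetization on per-stream bandwidth can be computed from the codec payload rate and standard RTP/UDP/IPv4 header sizes. This is an illustration under those assumptions only; it ignores link-layer framing, and real-world figures (including the sizing figures above) will be somewhat higher:

```python
# Rough per-stream RTP bandwidth estimate for G.711 (64 kbit/s payload).
# Assumption: fixed RTP (12 B) + UDP (8 B) + IPv4 (20 B) headers per packet;
# link-layer (Ethernet) framing overhead is deliberately ignored.

def rtp_stream_kbps(payload_kbps: float, ptime_ms: float) -> float:
    """Bandwidth of one RTP stream, IP/UDP/RTP headers included, in kbit/s."""
    pps = 1000.0 / ptime_ms                        # packets per second
    payload_bytes = payload_kbps * 1000 / 8 / pps  # codec bytes per packet
    packet_bytes = payload_bytes + 12 + 8 + 20     # add RTP/UDP/IP headers
    return packet_bytes * 8 * pps / 1000

# G.711 at 20 ms ptime: 50 pps, 160 B payload + 40 B headers -> 80 kbit/s
print(f"20 ms ptime: {rtp_stream_kbps(64, 20):.0f} kbit/s per stream")
# G.711 at 10 ms ptime: 100 pps, 80 B payload + 40 B headers -> 96 kbit/s
print(f"10 ms ptime: {rtp_stream_kbps(64, 10):.0f} kbit/s per stream")
```

Note that a relayed call carries at least two such streams (one per direction) through the gateway, so per-call bandwidth is a multiple of the per-stream figure.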

 

Low-spec SMG hardware, suitable for up to 2,000 Concurrent Calls, 40 'All Out' CPS

 

CPU

8 Cores, 2.0 GHz, Intel Xeon E5 or better, or comparable AMD**

RAM

8GB

Disk

Mirrored SATA hard drives or better

Network Interface

Intel(R) PRO/1000 PCI Express Gigabit Ethernet (GigE). A bonded (FEC) interface is recommended

 

Example of SMG hardware specs that remained stable under 3,000 Concurrent Calls, 60 'All Out' CPS, 180 sec ACD, 240,000 PPS and 128 MBit/sec

 

CPU

2 X CPU E5-2620 v2 @ 2.10GHz (12 physical cores in total)

RAM

8GB

Disk

RAID1 based on 2 X 1 TB SATA drives

Network Interface

Intel(R) PRO/1000 (3 vectors / 2 processing threads) with a Link Aggregation (LACP) setup

 


Sippy Media Gateway (SMG) Cluster solution

 

Several Sippy Media Gateways (SMG) can be aggregated into an SMG Cluster to provide redundancy and load balancing between SMG nodes. If one SMG node goes down, the SMG Cluster continues operating, with the load shared between the remaining SMG nodes available in the SMG Cluster. Additionally, if the SMG Cluster's maximum CC capacity is reached, or all SMG nodes go down, the remaining media sessions will be proxied through the built-in RTPproxy service in the Sippy Softswitch itself until the SMG Cluster is back online.