Posts Tagged xlab1
This is the result of a project where we needed to measure voice QoS parameters (jitter and packet loss) in a customer network. I set up small probe computers (old 10″ Intel Atom netbooks, such as the Acer Aspire One) with FreeSWITCH and a few scripts for test automation. Each test consists of a 30-second call (producing approximately 1500 RTP packets in each direction), and tshark measures the received jitter and loss on each side.
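The measurement step can be sketched with tshark's built-in RTP statistics. This is a hedged example, not the exact command from the project: the interface name and the RTP port range are assumptions you would adjust for your own probe.

```shell
# Capture for one 35-second test window (30 s call plus margin) and
# print per-stream RTP statistics, including lost packets and jitter.
# "eth0" and the UDP port range are assumptions -- adjust to your setup.
tshark -i eth0 -f "udp portrange 16384-32768" \
       -a duration:35 -q -z rtp,streams
```

The `-z rtp,streams` statistic prints one summary line per detected RTP stream, which is easy to parse from an automation script.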
Test details and the installation procedure are outlined on GitHub:
It always takes some effort to remove unneeded features from the default FreeSWITCH configuration. So I made a minimal configuration that still allows the server to start, but does nothing at all. This makes it much easier to begin a new server configuration for any new project.
The configuration is published on GitHub. It is straightforward to use with the FreeSWITCH Debian packages, and it can also be used if you compile FreeSWITCH from source:
cd /etc
git clone https://github.com/xlab1/freeswitch_conf_minimal.git freeswitch
The configuration contains a number of empty “stub.xml” files in order to make the XML pre-processor happy.
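The idea behind the stubs can be illustrated as follows. A stub only needs to be a well-formed XML fragment; an XML comment is enough to keep the pre-processor's includes happy. The path below is illustrative, not taken from the repository:

```shell
# Create an empty stub file that satisfies an XML include without
# adding any configuration. The directory is a demo path, not the
# real FreeSWITCH config tree.
mkdir -p /tmp/fs_conf_demo/dialplan
printf '<!-- intentionally left empty -->\n' > /tmp/fs_conf_demo/dialplan/stub.xml
cat /tmp/fs_conf_demo/dialplan/stub.xml
```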
It also makes sense to start using Git for your own FreeSWITCH configurations 🙂
There are multiple low-power, fanless appliances on the market, and most of them are powered by Intel Atom processors. I needed an estimate of how well an Atom would perform in a FreeSWITCH PBX application.
In this test, I use two Acer Aspire One notebooks with different processors:
- atom01: Atom N2600 (2 cores, 4 virtual CPUs, 512KB cache and 600MHz per virtual CPU, 12768.02 BogoMIPS)
- atom02: Atom N570 (2 cores, 4 virtual CPUs, 512KB cache and 1000MHz per virtual CPU, 13302.08 BogoMIPS)
Both notebooks are running 32-bit Debian 7 Wheezy (Kernel version 3.2.0-4-686-pae), and FreeSWITCH version 1.2.13 from pre-built Debian packages.
Test results summary
All calls in this test used transcoding between G.711 A-law and G.722. The performance bottleneck was always atom01 (N2600), because of its slower CPU. In general, the N570 can handle approximately 30% higher load than the N2600.
With 10 concurrent calls (21 channels on atom01 and 20 channels on atom02), there is no voice distortion, and new call processing does not disturb the ongoing calls. Each virtual CPU is busy at 20-25%.
With 20 concurrent calls (41 channels on atom01 and 40 channels on atom02), there is some minor voice distortion, especially during incoming calls, but the quality is still acceptable.
With 27 concurrent calls (55 and 54 channels), voice distortion was too high to be acceptable. Every virtual CPU on atom01 was busy at around 50%, which means full load for the whole CPU.
With 20 concurrent calls without transcoding (PCMA only in all call legs), each CPU core on atom01 was utilized at around 9-10%. So the platform should theoretically handle 40-50 simultaneous calls in non-transcoding mode.
Only the voice quality was tested. The call setup rate (CPS) was not tested; it depends heavily on the complexity of the dialplan. But the overall responsiveness of the system was quite acceptable.
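A load test like the one above can be driven from the FreeSWITCH CLI. This is a hedged sketch, not the exact script used in the test: the gateway name and the destination extension (a tone-generating extension in the dialplan) are assumptions you would replace with your own targets.

```shell
# Start 10 concurrent test calls from one box toward the other via
# fs_cli. "atom01" is an assumed gateway name and "9198" an assumed
# tone-stream extension -- substitute your own dialplan targets.
for i in $(seq 1 10); do
  fs_cli -x "originate sofia/gateway/atom01/9198 &park()" &
done
wait
```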
- Called and Caller ID normalization:
- Various SIP trunk providers require different number formats (national, E.164, or some special format)
- Different customers’ vPBXes may require national number formats for different countries
- Codec normalization: SIP providers typically limit the choice of codecs, while customer vPBXes may need to support a wider variety of codecs.
- Security: SIP trunk providers may require authorization, and the SBC should perform it, shielding the global route selection and authorization from the back-end systems.
- Admission control: overloading ITSP trunks should be avoided
- Where possible, public VoIP should be used for route selection (ENUM).
- For incoming calls, the caller’s name should be looked up where possible (relevant for Switzerland).
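The number-normalization requirement above can be sketched in a few lines of shell. This is a hedged illustration, not code from the project: the `+41` country prefix and the function name are example assumptions; real rules would live per-trunk in the SBC configuration.

```shell
# Hedged sketch: rewrite a dialed number into E.164.
# 00<cc>... becomes +<cc>..., and a national 0... number is assumed
# to be Swiss (+41) for this example only.
normalize_ch() {
  printf '%s\n' "$1" | sed -e 's/^00/+/' -e 's/^0/+41/'
}
normalize_ch 0441234567     # -> +41441234567
normalize_ch 0044201234567  # -> +44201234567
```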
In the scope of the xlab1 project, a Kamailio server is built to work as a pass-through SIP proxy: it forwards all SIP messages, including REGISTER, and also passes all RTP traffic through RTPproxy. This makes it possible to use only a minimal set of public IP addresses while leaving unlimited room for back-end servers.
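The core of such a pass-through proxy is small. The fragment below is a hedged sketch of the idea, not taken from the published configuration: record-route every dialog so all in-dialog requests traverse the proxy, and let the rtpproxy module anchor the media.

```
# Hedged Kamailio routing sketch (not the xlab1 config itself):
request_route {
    if (is_method("INVITE")) {
        record_route();       # keep in-dialog SIP on this proxy
        rtpproxy_manage();    # anchor RTP on the local RTPproxy
    }
    t_relay();                # forward everything to the back-end
}
```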
I placed the relevant files on GitHub for public use:
I’m designing the network layout for an IP telephony testing lab. Codename: xlab1 (xlab0 is a Xen machine at my home, and xlab1 is a Xen server hosted at a partner company).
The server has only 4 public IP addresses (one for management, two for redundant SIP front-ends, and one for the web front-end), and the goal is to have the flexibility to test any telephony scenario, from a standalone vPBX to a 2600Hz Kazoo installation.
Physically it’s a 1U server with an Intel Core 2 Quad and 8GB RAM, running the Xen hypervisor.
The primary application is a multi-tenant virtual PBX. All Internet communication goes through the two front-end servers, while all telephony handling, media termination, and applications are done by the back-end servers. NAT is only used for software installation on the back-end virtual machines.
On the front-end servers, Kamailio+RTPproxy will accept registrations from UACs on the public Internet, and will also perform flood protection and topology hiding. They will forward all SIP messages to the back-end servers, mapping each SIP domain to its back-end via a local dispatcher database. SIP authentication will be handled by the back-end servers.
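The domain-to-backend mapping can live in a Kamailio dispatcher list. A minimal sketch, with placeholder set IDs and addresses (not the actual lab layout):

```
# dispatcher list: <setid> <destination>
# set 1: vPBX back-end, set 2: Kazoo back-end (placeholders)
1 sip:10.0.1.11:5060
2 sip:10.0.1.12:5060
```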
Each front-end server will also run a FreeSWITCH process, used only as an SBC for communication with ITSP trunks.
I also need to think about how load balancing and failover will be organized, especially since many ITSPs expect only one IP address at the customer end of the trunk.
Here is the concept drawing of the service components and communication flows:
The project is intended to be open-source, open-design, and open-documentation, so more technical details will follow.