Contact Person
Dipl.-Inform. Andreas Hoffmann
Deputy Director
Business Unit SQC
Tel.: +49 30 3463-7392

IMS Benchmarking

Apr. 01, 2005 to May 31, 2008

The IP Multimedia Subsystem (IMS) is defined by the 3rd Generation Partnership Project (3GPP) and represents the reference service delivery platform architecture for the provision of IP Multimedia services within emerging mobile all-IP network environments, such as UMTS Release 5.

The target of the IMS Benchmark project is the definition, development, execution, and promotion of an industry-wide 3GPP IMS performance benchmark.

The primary goal of this project is to define a comprehensive test environment and an unambiguous set of test cases that may be applied to an IMS System under Test (SuT) in order to evaluate its performance. The benchmark is defined for the control plane of an IMS network, which consists of the x-CSCF (Call Session Control Function), HSS (Home Subscriber Server), and SLF (Subscription Locator Function) components, the links over which they perform signalling, and the database transactions required to perform these functions. The performance of the data plane, and of the media servers and gateways that also comprise an IMS system, is not in the scope of this benchmark.

This website provides the necessary background information on the IMS Benchmark and describes the available IMS (target system) and IMS Benchmarking (test system) infrastructures provided by Fraunhofer FOKUS.

Fraunhofer FOKUS is involved in the IMS Benchmarking activities with two Competence Centers:
Modeling and Testing for System and Service Solutions (MOTION) and Next Generation Network Infrastructures (NGNI).


A benchmark test is a procedure in which a test system is provided with a traffic set and a provisioning plan for a System under Test (SuT), provisions the SuT according to the provisioning plan, sends and receives traffic to and from the SuT according to the traffic set, and measures and reports results. Results of a benchmark test include the parameters used to synthesize the traffic set and provisioning plan, the maximum calls per second (CPS) value that the SuT could achieve, and the cost of the SuT.
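
The procedure above can be sketched in Python; the `sut` and `traffic_set` objects, their method names, and the step-wise ramp-up strategy are illustrative assumptions, not part of the benchmark definition:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    traffic_params: dict   # parameters used to synthesize the traffic set
    max_cps: float         # highest calls-per-second value the SuT sustained
    sut_cost: float        # cost of the System under Test

def run_benchmark(sut, traffic_set, provisioning_plan, cps_steps):
    """Provision the SuT, offer traffic at increasing CPS levels, and
    record the highest level at which the SuT still met its QoS targets."""
    sut.provision(provisioning_plan)        # provision per the plan
    max_cps = 0.0
    for cps in cps_steps:                   # offer the traffic set step by step
        qos_ok = sut.offer_load(traffic_set, cps)
        if not qos_ok:                      # SuT failed QoS at this load level
            break
        max_cps = cps
    return BenchmarkResult(traffic_set.params, max_cps, sut.cost)
```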

  • Workload

The benchmark must measure the capacity of the control plane of an IMS system. The control plane consists of the components that perform SIP-based signalling among each other, and of the database accesses performed by those components. The potential requirement to benchmark the user plane (e.g., media processing and transport) is deferred.

Because the load on an IMS system is ultimately the aggregation of loads from many individual subscribers, the performance of an IMS system, or even of a single CSCF, is best measured not by message throughput, but by the number of subscribers that can use the system simultaneously without the quality of service dropping below an acceptable level for any of them. The method for defining the load in a given benchmark, and for measuring performance, should therefore be based on the number of subscribers.
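
A subscriber-based capacity search along these lines might look as follows; `measure_qos` and the candidate population sizes are hypothetical stand-ins for a real measurement campaign:

```python
def supported_subscribers(measure_qos, candidates, qos_threshold):
    """Return the largest subscriber population for which every simulated
    subscriber still experiences acceptable quality of service.

    measure_qos(n) -> worst-case QoS score observed across n concurrent
    subscribers (higher is better); this signature is an assumption
    made for illustration, not part of the benchmark definition."""
    best = 0
    for n in candidates:                     # e.g. 10k, 20k, 30k subscribers
        if measure_qos(n) >= qos_threshold:  # every subscriber still OK
            best = n
        else:
            break                            # QoS degraded: stop ramping up
    return best
```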

  • Measurement and Visualization

The benchmark must provide measurements at multiple levels of granularity. The performance of a complete network and of its primary building blocks (e.g., CSCFs) is of primary importance to SPs; the performance of subsystems of those building blocks (perhaps including more than one level of decomposition) is of primary importance to suppliers.

  • Traffic Model

Although the benchmark will define a traffic load corresponding to thousands of simulated UEs, the method of traffic generation should be defined in such a way that it can be implemented economically by a test system. In other words, the test system should not be required to be enormously more powerful than the SuT, nor should its structure require outrageously expensive licensing arrangements from test system vendors.
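
One economical way to meet this requirement is a single event loop driving all simulated UEs, rather than one thread or process per UE. A minimal sketch, assuming Poisson-distributed call attempts per UE; the function and parameter names are illustrative:

```python
import heapq
import random

def simulate_ues(num_ues, avg_interval_s, duration_s, send):
    """Drive many simulated UEs from a single event loop.

    One heap entry per UE replaces one thread per UE, which is what keeps
    the test system cheap relative to the SuT. Call attempts per UE are
    assumed Poisson with mean inter-arrival time avg_interval_s."""
    events = [(random.expovariate(1.0 / avg_interval_s), ue)
              for ue in range(num_ues)]
    heapq.heapify(events)                    # next-event queue across all UEs
    sent = 0
    while events:
        clock, ue = heapq.heappop(events)    # earliest pending call attempt
        if clock >= duration_s:
            break                            # test interval is over
        send(ue, clock)                      # emit one call attempt to the SuT
        sent += 1
        heapq.heappush(events,
                       (clock + random.expovariate(1.0 / avg_interval_s), ue))
    return sent
```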

  • Subsystems to be benchmarked
  1. Service/Control/Data Planes (Definition: P-CSCF, I-CSCF, S-CSCF, HSS, AS, MRF, MRFC, PDF)
  2. Session Control (Definition: P-CSCF, I-CSCF, S-CSCF, HSS)
  3. Application Server
  4. Application Server/MRF/MRFC
  5. HSS
  6. P-CSCF
  7. S/I-CSCF (Definition includes an HSS)
  8. Media Resource Function/Control


  • System under Test

The IMS implementation used for the benchmarking activities is the Open IMS Playground developed at FOKUS. The Open IMS @ FOKUS Playground is the technology basis for all R&D and commercial activities of the FOKUS competence center NGNI (Next Generation Network Infrastructures). As part of its national 3G-beyond testbed activities and its concept of open technology playgrounds, FOKUS opened the world's first "Open IMS Playground" on July 1, 2004; it can be used by any third party (academia or industry) for IMS infrastructure and IMS application prototyping, proof-of-concept implementations, and interoperability and performance tests. Coaching and consulting services are provided as well.

  • Test System

The system under test is realized as an IMS reference implementation covering the main IMS components in the control path (x-CSCF, HSS). For generating stateful SIP and Diameter load, the standardized test notation TTCN-3 is used together with TTworkbench, a TTCN-3 test execution platform. TTworkbench Enterprise is a joint development of FOKUS and the FOKUS spin-off Testing Technologies IST GmbH. The platform creates, deploys, and coordinates distributed parallel test components that emulate user equipment on multiple hosts, and it forms the basis of the test system for IMS benchmarking developed by FOKUS.

The abstract test suite for IMS benchmarking is realized in TTCN-3. Various traffic models are implemented, and Quality of Service parameters such as transactions/calls per second, message delays, number of supported users, and number of dropped calls are defined and measured. Tests are executed and results analyzed in the IMS benchmarking test environment. The project's current system under test is the session control plane of the IMS core; other configurations, e.g. data planes, application servers, and PTT (Push To Talk), are being considered as test targets as well. The project initiated a Special Interest Group (SIG) for IMS Performance Benchmarking, a working party that will submit proposals to standards organizations and forums.
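
The QoS parameters listed above can be aggregated from per-call measurements as in the following sketch; the record layout and the percentile choice are assumptions made for illustration (the actual test suite computes such values in TTCN-3):

```python
def qos_report(call_records, window_s):
    """Aggregate per-call measurements into benchmark QoS parameters.

    call_records is assumed to be a list of (setup_delay_s, dropped)
    tuples collected over a measurement window of window_s seconds."""
    delays = sorted(d for d, dropped in call_records if not dropped)
    dropped = sum(1 for _, was_dropped in call_records if was_dropped)
    report = {
        "calls_per_second": len(call_records) / window_s,
        "dropped_call_rate": dropped / len(call_records),
    }
    if delays:
        # 95th-percentile session setup delay (nearest-rank method)
        idx = max(0, int(0.95 * len(delays)) - 1)
        report["setup_delay_p95_s"] = delays[idx]
    return report
```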

More about

Telecom service providers (SPs) are quickly evolving their networks from legacy technologies to what might be termed “fourth generation” or “3G Beyond” technologies.

IMS supports a rich set of services available to end users on either wireless or wired user equipment (UE), offered via a uniform interface by a subscriber's home service provider (SP) in cooperation with visited service providers. Services are provided via an "overlay" technique over multiple service provider standards.

Telecom equipment manufacturers (TEMs) all along the architectural hierarchy, betting that IMS represents a growth market, are developing not merely products for IMS networks, but architectures as well. The quest for new architectures reflects the view that current processor, network, and server architectures are not sufficient to support wide IMS deployment. Examples of such work include:

  • Advanced Telecom Computing Architecture (ATCA, [ATCA]), a set of existing and emerging standards for the physical packaging of bladed servers and communications fabrics.
  • Fabric standards, such as Infiniband, PCI Express, Advanced Switching Interconnect (ASI, [ASI]), RapidIO, and Gigabit Ethernet (e.g., IEEE 802.1, 802.3).
  • Technologies for offloading packet processing, such as network processors and other multicore processors.
  • Real-time and carrier-grade operating systems, such as the carrier-grade Linux releases of MontaVista and Red Hat.
  • Middleware architected for high availability, such as defined by the Object Management Group (OMG) and the Service Availability Forum ([SAFORUM]).
  • Application development environments and interfaces, such as defined by the Parlay Group and the Java Community Process (e.g., JAIN).

The number of technological variables is so large that some reasonable ground rules for defining an architecture need to be put in place. SPs require guidance for making decisions among suppliers, and suppliers all along the architectural hierarchy need guidance to develop the right products.

IMS defines a set of components:

  • Call Session Control Function (CSCF), which acts as Proxy CSCF (P-CSCF) in the visited network, or as Serving CSCF (S-CSCF) or Interrogating CSCF (I-CSCF) in the home network, to route and control session establishment
  • Home Subscriber Server (HSS), providing AAA functionality and a unique service profile for each user
  • Media Gateway Control Function (MGCF) with Signalling Gateway, which controls the Media Gateway and performs protocol conversion between ISUP and SIP
  • Media Gateway (MGW), which interacts with the MGCF for resource control
  • Multimedia Resource Function (MRF), which controls media stream resources
  • Breakout Gateway Control Function (BGCF), which selects the network in which PSTN breakout is to occur
  • Application Server (AS), which offers value-added services
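
How these components cooperate during a session setup can be illustrated with a toy hop table; the hop sequence follows the general 3GPP signalling model, but the annotations are simplified and the helper function is purely illustrative:

```python
# Illustrative control-plane hop sequence for a basic IMS session setup
# (no PSTN breakout): the originating UE's INVITE traverses the visited
# P-CSCF, then the home I-CSCF (which queries the HSS to locate the
# serving CSCF), then the S-CSCF, which may involve an AS for services.
SESSION_SETUP_PATH = [
    ("UE",     "P-CSCF", "INVITE enters via the visited network"),
    ("P-CSCF", "I-CSCF", "forwarded to the home network"),
    ("I-CSCF", "HSS",    "Diameter query: locate the S-CSCF"),
    ("I-CSCF", "S-CSCF", "INVITE forwarded for session control"),
    ("S-CSCF", "AS",     "optional value-added service invocation"),
]

def hops_through(component):
    """Return the signalling steps a given component participates in."""
    return [step for step in SESSION_SETUP_PATH if component in step[:2]]
```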

The goal of the IMS Benchmark project is to define a performance benchmark for the control plane of an IMS network, which consists of the x-CSCF, HSS, and SLF (Subscription Locator Function) components, the links over which they perform signalling, and the database transactions required to perform these functions. The performance of the data plane, and of the media servers and gateways that also comprise an IMS system, is not in the scope of this benchmark.