Din, G., & Rentea, C. 2006, May 31–June 2, Using TTCN-3 to Design Performance Tests. Unpublished paper presented at ETSI TTCN-3 User Conference 2006, Berlin (Germany).
Added by: Deleted user (7/9/08, 11:49 AM) Last edited by: Deleted user (8/13/08, 2:26 PM)
Resource type: Conference Paper
BibTeX citation key: Dina
Creators: Din, Rentea
Publisher: Fraunhofer FOKUS, ETSI (Berlin (Germany))
Collection: ETSI TTCN-3 User Conference 2006
Over the past years, TTCN-3 has grown in popularity and has been applied in various domains and for many types of tests. Numerous research and technical papers investigate the application of TTCN-3 to particular testing types, and many of them also make important contributions to the extension of the language. Our work focuses on the performance testing area and presents aspects originating from our experience of using TTCN-3 to meet real-world requirements in that field. This presentation describes diverse facets of performance tests and their implications for test design.
TTCN-3 offers various concepts for designing performance tests. Parallel test components are used to emulate the SUT's clients. Ports handle connections to the SUT following the send/receive or call/reply communication paradigms. Timers can be defined on test components and used in the test behaviour to measure the time between sending a stimulus and receiving the SUT's response. Another important mechanism provided by TTCN-3 is inter-component communication, which allows components to be connected to each other and messages to be transmitted between them. In performance testing, this mechanism is used to synchronize actions (e.g. all components behaving as clients start together after receiving a synchronization token) or to collect statistical information at a central point. The handling of verdicts in load tests differs from the traditional verdict handling procedure of functional testing. In functional testing we use the built-in verdict concept, where the verdict is set whenever an action significantly influences the execution of the test. Performance tests also have to maintain a verdict, which should be presented to the tester at the end of execution. However, this verdict has a statistical rather than a functional meaning, as it should be a composition of all verdicts reported by the client components. In our approach, the verdict is established by counting the rate of failures during one execution; i.e. if more than a given percentage of clients behave correctly during the test, we consider the test passed. The collection of statistical information such as failures, timeouts and successful transactions can be implemented by using counter variables on each component. These numbers can be communicated at the end of the test to a central entity (i.e. the MTC), which computes the final result of the test.
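The statistical-verdict scheme described above — per-component counters aggregated by a central entity into a pass/fail decision based on a failure rate — can be sketched as follows. This is a minimal Python illustration, not the paper's TTCN-3 code; the counter names and the 95% pass threshold are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ClientStats:
    """Counters kept as variables on each client test component."""
    successes: int = 0
    failures: int = 0
    timeouts: int = 0

def statistical_verdict(reports, pass_ratio=0.95):
    """Compose a global verdict from per-component counters, as the MTC would.

    The test passes if at least `pass_ratio` of all observed
    transactions completed successfully.
    """
    total = sum(r.successes + r.failures + r.timeouts for r in reports)
    if total == 0:
        return "inconc"  # no transactions observed
    ok = sum(r.successes for r in reports)
    return "pass" if ok / total >= pass_ratio else "fail"

# Example: three client components report their counters to the MTC.
reports = [ClientStats(successes=98, failures=1, timeouts=1),
           ClientStats(successes=100),
           ClientStats(successes=97, failures=3)]
print(statistical_verdict(reports))  # → pass (295/300 succeeded)
```

The point of the design is that only small counter messages travel between components at the end of the run, rather than a verdict update per transaction.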
The major role of a performance test is to emulate the parallel behaviour of multiple clients (or users) interacting with the SUT. In the literature, the SUT's clients are also called workload units (WLUs), and they are implemented as parallel processes or threads. Nevertheless, a parallel process may emulate the behaviour of more than one user at the same time. In TTCN-3, the test component is the building block used to emulate one or more WLUs at the same time. The parallelism is realized by running a number of test components in parallel. We identified and employed at least three patterns applicable to performance tests: a) a component emulates a single user, b) a component emulates multiple users sequentially, and c) a component emulates interleaved behaviours.
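The three patterns can be illustrated with threads standing in for parallel test components. This is a hedged Python sketch only — the function names, the stubbed user session, and the round-robin step model for interleaving are assumptions, not the paper's design.

```python
import threading

def user_session(user_id):
    """One user's interaction with the SUT (stubbed out here)."""
    return f"user-{user_id} done"

# Pattern (a): one component emulates a single user.
def single_user_component(user_id, log):
    log.append(user_session(user_id))

# Pattern (b): one component emulates several users sequentially,
# running one complete session after another.
def sequential_component(user_ids, log):
    for uid in user_ids:
        log.append(user_session(uid))

# Pattern (c): one component interleaves several users' behaviours,
# advancing each user one step at a time (round-robin).
def interleaved_component(user_ids, steps, log):
    for step in range(steps):
        for uid in user_ids:
            log.append(f"user-{uid} step-{step}")

log = []
components = [
    threading.Thread(target=single_user_component, args=(0, log)),
    threading.Thread(target=sequential_component, args=([1, 2], log)),
    threading.Thread(target=interleaved_component, args=([3, 4], 2, log)),
]
for c in components:
    c.start()
for c in components:
    c.join()
print(len(log))  # 1 + 2 + 4 = 7 log entries across the three patterns
```

Patterns (b) and (c) matter for load tests because one process or component can then carry many WLUs, keeping the resource cost of the test system itself low.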
Furthermore, we revise several language extension proposals from earlier research that can also be applied to performance test design. Earlier research introduced concepts related to performance and real-time testing, such as: a non-functional verdict to judge real-time behaviour, absolute time, timezones, a resume operation to delay the execution of a test component, the specification of synchronization conditions, online versus offline logging, and background traffic models.
Besides these concepts, we propose further language artefacts to ease performance test specification:
1. Built-in traffic models: to define predictable streams of packets following a given stochastic pattern of transmission times (e.g. Poisson, Erlang, hyper-exponential).
2. Barrier: to synchronize the behaviour of multiple components.
3. Dynamically resizable lists of elements, including timers or altsteps.
4. Parallel behaviours on a test component: to allow the execution of more than one behaviour on a single component.
5. Statistical verdicts: statistical information such as failures, delays, timeouts and successful transactions can be collected globally and instantly.
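As a concrete illustration of the traffic-model item above, a Poisson stream is characterized by exponentially distributed inter-arrival times, so send timestamps can be generated by summing exponential gaps. The sketch below is plain Python under assumed parameter names; the paper proposes such models as built-in TTCN-3 language features, not as library code.

```python
import random

def poisson_arrival_times(rate_per_s, duration_s, seed=None):
    """Generate send timestamps for a Poisson traffic model.

    In a Poisson process with intensity `rate_per_s`, the gaps between
    consecutive packets are exponentially distributed with mean
    1/rate_per_s. Timestamps are accumulated until `duration_s` elapses.
    """
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)  # next exponential gap
        if t >= duration_s:
            return times
        times.append(t)

# Example: a 10-second load phase at an average of 50 requests/second,
# yielding roughly 500 timestamps (the exact count varies with the seed).
stamps = poisson_arrival_times(rate_per_s=50, duration_s=10, seed=42)
print(len(stamps))
```

Erlang and hyper-exponential models differ only in the distribution used for the gaps, so the same accumulation scheme applies.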