Mocking the Embedded World
Michael Karlesky, Greg Williams, William Bereza, Matt Fletcher
Atomic Object, http://atomicobject.com
Particular Advantages of TDD and CI in Embedded Software
In the context of embedded software, TDD and CI provide two further advantages beyond those already discussed. First, because of the variability of hardware and software during development, bugs are due to hardware, software, or a combination of the two. TDD and CI promote a strong separation of concerns such that it becomes far easier to pinpoint, by process of elimination, the source of unexpected system behavior. Well-tested software can generally be eliminated from the equation or, in fact, used to identify hardware issues. Second, because these techniques encourage good decoupling of hardware and software, significant development can occur without target hardware.
Russian Dolls & Our Embedded Software Development Approach
A set of Russian dolls comprises individual dolls of decreasing size and corresponding detail nested one inside another. Our approach fleshes out a system's architecture with Russian doll-like levels of test-driven design. The architecture of a system and its requirements drive system tests. System tests in turn drive integration tests. Integration tests drive unit tests. Each nested level of executable testing drives the design of the production code that will satisfy it. Implementing testable code forces a series of small design decisions at the keyboard. The result is a system aggregated from high-quality, thoroughly tested pieces nested together to support the overall architecture and satisfy the system requirements.
In this section, we provide background on specialized techniques we employ, discuss the tools supporting our approach, and finally present a summary of the steps to implement a single system feature from start to finish. Our techniques and tools are synergistic; each supports and enhances the others. The paper concludes with an in-depth discussion of the Model Conductor Hardware design pattern introduced in this section and an end-to-end working example with tests, mocks, and source code.
Techniques
System, Integration, and Unit Testing
Requirements are composed of one or more features. We satisfy requirements by implementing features. Each feature necessitates creating a system test that will exercise and verify it once it exists. This system test operates externally to the system under test. If pressing a button is to generate bytes on a bus, the system test programmatically initiates the button signal and verifies the bytes on the bus.
A single system feature is composed of one or more individual functions. We begin creating each of these functions by programming integration tests to verify the interrelation of function calls. Subsequent unit tests verify the output of each function under various input conditions. After creating the integration and unit tests, we run them and see them fail. Next, we write the production code that will satisfy these tests. Tests and source are optionally refactored until the code is clean and the tests pass. Integration and unit tests can be run on target hardware, cross-compiled and run on the development machine, or run in a simulator, depending on the development environment and target system.
Interaction-based Testing & Mock Functions
Few source code functions operate in isolation; most make calls on other functions. The composition of inter-function relationships, in large part, constitutes the implementation of a system feature. Testing this interaction is important. Mock functions facilitate testing these interactions and have become a key component of our development at the integration level. We practice interaction-based testing for integration tests and state-based testing for unit tests [5].
Whenever a function's work requires it to use another complex function (where a complex function is one requiring its own set of tests), we mock out that helper function and test against it. A mock presents the same interface to the code under test as the real module with which it interacts in production. Functionality within the mock allows tests to verify that only the expected calls with expected parameters were made against it in the expected order (and no unexpected calls were made). Further, a mock can produce any return results the tester-developer requires. This means that any scenario, even rare corner cases, can be verified in tests.
Hardware and Logic Decoupling (Model Conductor Hardware design pattern)
Specialized hardware, and the accompanying code that interacts with it, is the single greatest complication in thoroughly testing an embedded system. We discovered our Model Conductor Hardware (MCH) design pattern in the process of segregating and abstracting hardware from logic to enable automated testing. The Model, Conductor, and Hardware components each contain logically related functions and interact with one another according to defined rules. With these abstractions, divisions, and behaviors, the entire system can be unit and integration tested without direct manipulation of hardware. A later section of this paper provides a more in-depth explanation of MCH.
Conductor First
A complete embedded system is composed of multiple groups of Model, Conductor, and Hardware components. A single interrelated group of these components is called an MCH triad. Each triad represents an atomic unit of system functionality. The Conductor member of an MCH triad contains the essential logic that conducts the events and interactions between the Hardware and Model comprising the functionality under test.
Conductor First (inspired by Presenter First [6,16]) is our approach to allow TDD to occur at the embedded software unit and integration level. We start by selecting a piece of system functionality within a system requirement. From this, we write integration and unit tests for the Conductor with a mock Hardware and a mock Model. Production code is then written to satisfy the Conductor tests.
This technique allows us to discover the needed Hardware and Model interfaces. The Hardware and Model are then implemented in a similar fashion, beginning with tests; the Hardware and Model tests reveal the needed use of the physical hardware and the interface to other triad Models.
Starting in the Conductor guides development of function calls from the highest levels down to the lowest. Developing the Conductor with mocks allows the control logic necessary to satisfy the system requirement to be designed and tested with no coupling to the hardware or other system functionality. In this way, unnecessary infrastructure is not developed and system requirements are implemented as efficiently and quickly as possible.
Tools
All of the following tools, with the exception of the miniLAB 1008 hardware test module, are freely available. Some are publicly available tools used by developers the world over. The others are custom tools we developed and have made available through our website packaged together with our sample project (improved and documented versions of these same tools will be made available soon).
Systir – System Test Framework
Systir [7] stands for "System Testing in Ruby." Ruby [8] is the reflective, dynamic, object-oriented scripting language Systir is built upon and extends. In TDD, we use Systir to introduce input to a system and compare the collected output to that which is expected in system tests. Systir builds on two powerful features of Ruby. First, Systir uses Ruby-based drivers that can easily bind to libraries of other languages providing practically any set of features needed in a system test (e.g. proprietary communication libraries). Second, Systir allows us to create Domain Specific Languages [9] helpful in expressing tests in human readable verbiage. We developed Systir for general system testing needs; it has also proven effective for end-to-end embedded systems testing.
Scriptable Hardware Test Fixture
System testing embedded projects requires simulating the real world. The miniLAB 1008 [10] is the device we have adopted as a hardware test fixture. It provides a variety of analog and digital I/O functions well suited to delivering input and collecting output of an embedded system under development. A proprietary library allows a PC to communicate with a miniLAB 1008 via USB. We developed a Ruby wrapper around this library to be used by Systir system tests. Other test hardware and function libraries (e.g. LabWindows/CVI) could also be driven by Systir tests with the inclusion of new Ruby wrappers.
Source, Header, and Test File Code Generation
We decouple hardware from programming through the MCH design pattern. We also utilize interaction-based testing with mock functions. Both of these practices tend to require a greater number of files than development approaches not using MCH and interaction-based testing. At the same time, because of the pattern being used, our file creation needs are very repeatable (i.e. automation friendly). To simplify matters, we created a Ruby script that generates skeleton source, header, and test files.
Argent-based Active Code Generation
The skeleton files created by our file generation script contain pre-defined function stubs and header file include statements as well as Argent code insertion blocks. Argent [11] is a Ruby-based, text file processing tool that populates tagged blocks with the output of specified Ruby code. Our file generation script places Argent tags in the skeleton files that are later replaced with C unit test and mock function management code necessary for all project test files.
Unity – Unit Test Framework for C
The mechanics of a test framework are relatively simple to implement [12]. A framework holds test code apart from functional code, provides functions for comparing expected and received results from the functional code under test, and collects and reports test results for the entire test suite. Unity [13] is a unit testing framework we developed for the C programming language. While a small number of C unit test frameworks are available, we found none that was both good and lightweight, so we created our own. We customize Unity reporting per project (e.g. printing results through stdio, a serial port, or simulator output).
CMock – Mock Function Library for C
CMock [13] is a Ruby-based tool we created to automate creation of mock functions for unit testing in the C language. CMock generates mocks from the functions defined in a project’s header files. Each mock contains functionality for capturing and comparing calls made on the mock to expectations set in tests. CMock also allows tests to specify return results from functions within the mock. CMock alleviates the pain of creating and maintaining mocks; consequently, developers are motivated to make good design changes, since they need not worry about updating the mocks manually.
Dependency Generator & Link Time Substitution – File Linking Management for C
To facilitate linking of source files, test files, and mocks for testing or release, we developed a Ruby-based tool to manage these relationships automatically. This dependency tool inspects source and header files and assembles a list of files to be linked for testing or release mode.
In an object-oriented language, we would normally compose objects with delegate objects using a form of dependency injection [14]. Because C has no formalized notion of objects, constructor injection cannot be used. Instead, we substitute the CMock-generated mock functions for real functions at link time.
Rake – Build Utility
Rake [15] is a freely available build tool written in Ruby ("Ruby make"). We create tasks in Rake files to compile and link a system under development, generate mocks with CMock, run Argent code generation for unit and integration testing, run our Unity unit test framework, and run our Systir system tests.
Subversion & CruiseControl.rb – Code Repository & Automated Build System
We use the freely available source code control system Subversion to manage our project files, source code, and unit and system test files. As tests and source code are implemented and those tests successfully pass, we check our work into Subversion. Upon doing so, CruiseControl.rb, an automated build system, pulls down the latest version of the project from Subversion, builds it, runs its tests, and reports the results through a web-based interface. This process repeats upon every check-in.