Data Driven Testing
Routing protocols pose unique challenges when it comes to testing due to their inherent complexity as distributed algorithms. Traditional unit testing methodologies often fall short because many relevant scenarios arise only in complex network topologies and require a concerted sequence of events that is difficult to replicate.
Consider a simple scenario: the reception of an OSPF LS Update packet. This event can trigger a multitude of distinct code paths, depending on various factors such as the state of the sending neighbor, the state of the receiving interface, the list of LSAs, the local LSDB, the local configuration, and several other elements. The complexity of routing protocols arises from their ability to react correctly to a wide range of potential events, with the response to each event being dependent on the current system state and configuration settings.
Attempting to recreate these conditions using manually initialized data structures is not practical given how complex some test scenarios can be. Additionally, using internal data structures and functions within unit tests can hinder refactoring efforts, as any modifications to the protocol's internals may require adjustments in numerous individual tests. This not only introduces a significant maintenance overhead but also risks the ossification of internal APIs.
This is where data-driven testing (DDT) comes in. In DDT, instead of hardcoding specific test cases, there's a generic test framework that can be used with different sets of data. In the case of Holo, JSON files are used to describe both test inputs and expected outputs. This means that adding new tests is as simple as adding JSON files that describe the tests, with virtually no coding required. The ability to easily create new tests is crucial to ensure comprehensive test coverage, where all possible code paths are exercised. The following sections will provide a detailed explanation of how DDT works in Holo.
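To make the idea more concrete, here is a minimal, generic sketch of a data-driven test runner in Rust. It is not Holo's actual framework: the tests/cases directory layout, the input.json and expected-output.json file names, and the process_input function are hypothetical and exist only to illustrate the pattern of pairing input files with expected-output files.

use std::{fs, path::Path};

// Hypothetical system under test; in a real protocol it would feed the input
// event into the protocol instance and collect the resulting output.
fn process_input(input: &str) -> String {
    input.trim().to_uppercase()
}

// Generic data-driven runner: every subdirectory of `tests/cases` is one test
// case, described entirely by its `input.json` and `expected-output.json`.
fn run_case(case_dir: &Path) {
    let input = fs::read_to_string(case_dir.join("input.json")).unwrap();
    let expected = fs::read_to_string(case_dir.join("expected-output.json")).unwrap();
    assert_eq!(process_input(&input), expected.trim());
}

fn main() {
    for entry in fs::read_dir("tests/cases").unwrap() {
        run_case(&entry.unwrap().path());
    }
}

With this structure, adding a test case means adding a directory with two JSON files; no new test code is written.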
Topology tests, as the name suggests, involve testing Holo within the context of a particular network topology. Here's an example topology:
                                 +---------+
                                 |         |
                                 |   RT1   |
                                 | 1.1.1.1 |
                                 |         |
                                 +---------+
                                      |eth-sw1
                                      |
                                      |
                                      |
         +---------+                  |                  +---------+
         |         |                  |                  |         |
         |   RT2   |eth-sw1           |           eth-sw1|   RT3   |
         | 2.2.2.2 +------------------+------------------+ 3.3.3.3 |
         |         |     10.0.1.0/24                     |         |
         +---------+                                     +---------+
   eth-rt4-1|  |eth-rt4-2                          eth-rt5-1|  |eth-rt5-2
            |  |                                            |  |
 10.0.2.0/24|  |10.0.3.0/24                      10.0.4.0/24|  |10.0.5.0/24
            |  |                                            |  |
   eth-rt2-1|  |eth-rt2-2                          eth-rt3-1|  |eth-rt3-2
         +---------+                                     +---------+
         |         |                                     |         |
         |   RT4   |     10.0.6.0/24                     |   RT5   |
         | 4.4.4.4 +-------------------------------------+ 5.5.5.5 |
         |         |eth-rt5                       eth-rt4|         |
         +---------+                                     +---------+
       eth-rt6|                                               |eth-rt6
              |                                               |
   10.0.7.0/24|                                               |10.0.8.0/24
              |                                               |
              |                  +---------+                  |
              |                  |         |                  |
              |                  |   RT6   |                  |
              +------------------+ 6.6.6.6 +------------------+
                          eth-rt4|         |eth-rt5
                                 +---------+
Area 0: rt1, rt2, rt3
Area 1: rt4, rt5, rt6
The interfaces connected to the sw1 LAN are configured in broadcast mode.
All other interfaces are configured in point-to-point mode.
Topology tests serve two main purposes. First, they function as standalone tests to make sure Holo works as expected in different network setups. Second, they provide the foundation for conformance tests, as we'll see later. Each topology test includes a file describing the network topology and the initial configuration for all participating routers. These tests can be executed in two distinct modes:
1. Generation Mode: In this mode, the network topology is instantiated using Linux namespaces, and Holo is started on each virtual router, reading its startup configuration. The network topology runs normally for two minutes to ensure that the initial network convergence has occurred. After this period, the topology test concludes. Upon completion, the following files are generated in the topology test directory:
- events.jsonl: list of all input events generated during the initial network convergence.
- output/northbound-notif.jsonl: list of YANG-modeled notifications sent during the initial network convergence.
- output/northbound-state.json: full snapshot of the YANG-modeled protocol state after the two-minute interval.
- output/protocol.jsonl: list of protocol packets sent during the initial network convergence.
It's important to note that in this mode, topology tests do not fail. Instead, the generated output files must be manually reviewed for correctness, particularly the output/northbound-state.json file, which contains the complete protocol state after the initial network convergence. For the topology above, for example, it would be important to ensure that all expected adjacencies were established, all expected routes were installed, etc.
This mode should be used whenever creating a new topology test or updating its data. Updates are typically necessary when there are changes in input or output data formats. The most common scenario involves the addition of read-only nodes to YANG modules. Whenever a topology test's data is regenerated, it's expected that all generated files will differ, since the order of events is non-deterministic each time the topology runs. There is one exception: the output/northbound-state.json file should remain stable since it serves as a reference for conformance tests. For that to work, non-deterministic state data, such as time-related information and counters, is omitted from the output in this mode.
2. Replay Mode: After the generation mode, a topology test can be re-run in replay mode. In this mode, no virtual topology is created using Linux namespaces. Instead, a simulated protocol instance, which performs no input or output operations, is started for each router. These protocol instances read the startup configuration as usual and then replay the events.jsonl file generated during the generation mode. Replaying those events reproduces the same effects, even though each simulated protocol instance is disconnected from the other routers. During this process, the outputs generated in replay mode are compared against those from the generation mode. A match confirms that everything is working as expected.
Running topology tests in replay mode is exceptionally fast, typically completing in a matter of milliseconds. This mode serves as an effective way to quickly detect regressions during development, allowing the code to be tested against numerous topologies in the blink of an eye. It can be seen as a form of "compiled" topology test.
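Conceptually, replay mode boils down to feeding the recorded events back into a simulated instance and comparing the produced outputs against the recorded ones. The rough sketch below illustrates that loop under stated assumptions: SimulatedInstance, process_event, and take_outputs are hypothetical stand-ins, not Holo's actual API.

use std::fs;

// Hypothetical stand-in for a protocol instance that performs no real I/O.
struct SimulatedInstance;

impl SimulatedInstance {
    fn from_startup_config(_config: &str) -> Self {
        SimulatedInstance
    }
    // Apply one recorded event (a single JSON line from events.jsonl).
    fn process_event(&mut self, _event_json: &str) {}
    // Collect the outputs (e.g. protocol packets) produced so far.
    fn take_outputs(&mut self) -> Vec<String> {
        Vec::new()
    }
}

fn replay(config_path: &str, events_path: &str, expected_outputs_path: &str) -> bool {
    let config = fs::read_to_string(config_path).unwrap();
    let mut instance = SimulatedInstance::from_startup_config(&config);

    // Replay every recorded event, in order, against the simulated instance.
    for line in fs::read_to_string(events_path).unwrap().lines() {
        instance.process_event(line);
    }

    // Any difference from the outputs recorded in generation mode is a regression.
    let expected: Vec<String> = fs::read_to_string(expected_outputs_path)
        .unwrap()
        .lines()
        .map(str::to_owned)
        .collect();
    instance.take_outputs() == expected
}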
Conformance tests involve loading a snapshot of a topology test in replay mode, injecting events, and verifying the outcomes. The goal is to assess the code's behavior not only with typical inputs but also with unexpected events that rarely happen in practice. In both cases, the code should behave correctly as specified by the standards.
The fact that conformance tests build upon topology tests eliminates the need for a setup phase, making it easier to create conformance tests. It also broadens the scope of testing possibilities, particularly for test cases exclusive to highly specific network topologies. The ease of creating conformance tests, coupled with their rapid execution, stands as one of the cornerstones of Holo, ensuring extensive code coverage and standards compliance.
The testing infrastructure leverages the serde crate to serialize and deserialize input and output events in the JSON format. Events are essentially messages (represented as structs or enums) exchanged between the main protocol instance task and its child tasks and base components.
For example, let's consider an event message that represents an OSPF Grace Period timeout:
#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct GracePeriodMsg {
    pub area_key: AreaKey,
    pub iface_key: InterfaceKey,
    pub nbr_key: NeighborKey,
}
In JSON format, this event can be defined as follows:
{"GracePeriod":{"area_key":{"Value":"0.0.0.0"},"iface_key":{"Value":"eth-rt6"},"nbr_key":{"Value":"6.6.6.6"}}}
As you can see, it's easy to specify the desired objects (area, interface, and neighbor) using user-friendly, human-readable strings. Below are the definitions for the object keys:
pub type ObjectId = u32;

#[derive(Clone, Debug, Deserialize, Serialize)]
pub enum ObjectKey<T> {
    Id(ObjectId),
    Value(T),
}

pub type AreaKey = ObjectKey<Ipv4Addr>;
pub type InterfaceKey = ObjectKey<String>;
pub type NeighborKey = ObjectKey<Ipv4Addr>;
Notice that objects can be specified using either an integer ID or a meaningful value, such as the OSPF area ID, interface name, or neighbor's router ID. In normal operation, Holo uses integer IDs for object identification due to performance considerations, primarily for faster hashing. Nonetheless, the option to use meaningful values is also provided to facilitate testing.
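The snippet below sketches how this serialization works end to end with serde_json. The outer "GracePeriod" key comes from the enum variant that wraps the message; the InputMsg wrapper enum shown here is a simplified, hypothetical stand-in for Holo's actual input-message enum, and the other definitions are copied from the examples above.

use std::net::Ipv4Addr;
use serde::{Deserialize, Serialize};

pub type ObjectId = u32;

#[derive(Clone, Debug, Deserialize, Serialize)]
pub enum ObjectKey<T> {
    Id(ObjectId),
    Value(T),
}

pub type AreaKey = ObjectKey<Ipv4Addr>;
pub type InterfaceKey = ObjectKey<String>;
pub type NeighborKey = ObjectKey<Ipv4Addr>;

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct GracePeriodMsg {
    pub area_key: AreaKey,
    pub iface_key: InterfaceKey,
    pub nbr_key: NeighborKey,
}

// Hypothetical wrapper enum; Holo's real input-message enum has many more variants.
#[derive(Clone, Debug, Deserialize, Serialize)]
pub enum InputMsg {
    GracePeriod(GracePeriodMsg),
}

fn main() {
    let json = r#"{"GracePeriod":{"area_key":{"Value":"0.0.0.0"},"iface_key":{"Value":"eth-rt6"},"nbr_key":{"Value":"6.6.6.6"}}}"#;
    // Deserialize the JSON line back into a typed message and re-serialize it.
    let msg: InputMsg = serde_json::from_str(json).unwrap();
    assert_eq!(serde_json::to_string(&msg).unwrap(), json);
}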
Conformance tests are organized into directories, each representing a distinct test case, containing both input events and expected outcomes. Here's an example:
$ ls -1 holo-ospf/tests/conformance/ospfv2/gr-helper-disable1/
01-input-protocol.jsonl
01-output-northbound-notif.jsonl
01-output-northbound-state.json
01-output-protocol.jsonl
02-input-northbound-config-change.json
02-output-northbound-notif.jsonl
02-output-northbound-state.json
02-output-protocol.jsonl
Every file in the directory carries two important pieces of information in its name: the test step and the input/output type (see the sketch after the list below). For each test step, exactly one input file is required, but there might be multiple output files or none at all. In the provided example, the gr-helper-disable1 test comprises two distinct steps, which are executed sequentially until either the test fails or all steps are completed. The following input and output file types are supported:
- input-northbound-config-change.json: JSON file specifying one or more configuration changes in the YANG Patch format.
- input-northbound-config-replace.json: JSON file specifying a full configuration replace operation.
- input-northbound-rpc.json: JSON file specifying the invocation of a YANG-modeled RPC.
- input-ibus.jsonl: JSON file specifying one or more Ibus input events.
- input-protocol.jsonl: JSON file specifying one or more protocol input events.
- output-northbound-notif.jsonl: JSON file containing a list of YANG-modeled notifications sent in response to the input event.
- output-northbound-state.json: JSON file containing a full snapshot of the YANG-modeled protocol state after the input event.
- output-ibus.jsonl: JSON file containing a list of Ibus messages sent in response to the input event.
- output-protocol.jsonl: JSON file containing a list of protocol messages sent in response to the input event.
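As a concrete illustration of this naming convention, the small sketch below (a hypothetical helper, not part of Holo's test harness) splits a conformance-test filename into its test step and input/output type.

// Split e.g. "01-input-protocol.jsonl" into the test step (1) and the
// input/output type ("input-protocol").
fn parse_test_filename(name: &str) -> Option<(u32, &str)> {
    let (stem, _ext) = name.rsplit_once('.')?;
    let (step, kind) = stem.split_once('-')?;
    Some((step.parse().ok()?, kind))
}

fn main() {
    assert_eq!(
        parse_test_filename("01-input-protocol.jsonl"),
        Some((1, "input-protocol"))
    );
    assert_eq!(
        parse_test_filename("02-output-northbound-state.json"),
        Some((2, "output-northbound-state"))
    );
}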
Writing conformance tests involves the following steps:
1. Create a dedicated directory under protocol-crate/tests/conformance/ and give it a name that reflects your specific test.
2. Choose a suitable base topology that aligns with the test requirements. Then, introduce a new unit test function within protocol-crate/tests/conformance/mod.rs, referencing the directory you created in the previous step. Here's an example:
#[tokio::test]
async fn test_name1() {
    run_test::<Instance<Ospfv2>>("test-name1", "topo2-1", "rt2").await;
}
3. Add a new file to the newly created test directory specifying an input event. The filename should indicate the type of event it represents and begin with a number to signify the test step. Example: 01-input-protocol.jsonl.
4. Navigate to the protocol crate directory and run the following command:
HOLO_UPDATE_TEST_OUTPUTS=1 cargo test -- test_name1
This command will either generate new conformance test output files or update existing ones.
5. Carefully review the generated output files to ensure they are correct. Additionally, document the test step within the corresponding .rs file, providing descriptions for both the input event and the expected outcomes.
6. Add more test steps if needed by repeating steps 3-5.
That's it. By following these steps, you can easily create conformance tests to validate protocol behavior and compliance.