Genie Harness
The Genie Harness is structured much like a pyATS testscript. There are three main sections: Common Setup, Triggers and Verifications, and Common Cleanup. The Common Setup section connects to your devices, takes a snapshot of the current system state, and optionally configures the devices. The Triggers and Verifications section, which you will see later in the chapter, executes the triggers and verifications that perform tests on the devices. This is where the action happens! The Common Cleanup section confirms that the state of the devices matches their state from the Common Setup section by taking another snapshot of the current system state and comparing it with the one captured during Common Setup.
gRun
The first step to running jobs with Genie Harness is creating a job file. A Genie job file is set up much like a pyATS job file: within the job file there’s a main() function that serves as the entry point to run the job. However, instead of using run() within the main function to run the job, you must use gRun(), the “Genie Run” function. This function executes pyATS testscripts with additional arguments that provide robust and dynamic testing. Datafiles are passed in as arguments, which allows you to run specific triggers and verifications. Example 9-3 shows a Genie job file from the documentation that runs one trigger and one verification.
Example 9-3 Genie Harness – Job file
from genie.harness.main import gRun

def main():
    # Using built-in triggers and verifications
    gRun(trigger_uids=["TriggerSleep"],
         verification_uids=["Verify_IpInterfaceBrief"])
The trigger and verification names in Example 9-3 are self-explanatory: sleep for a specified amount of time, and parse and verify the output of the show ip interface brief command. Each trigger and verification is part of the pyATS library. If these were custom-built triggers, or if no testbed device had the name or alias “uut” in your testbed file, you would need to create a mapping datafile. A mapping datafile creates a relationship, or mapping, between the devices in a testbed file and Genie. It’s required if you want to control multiple connections and connection types (CLI, XML, YANG) to testbed devices. By default, Genie Harness connects only to the device with the name or alias “uut,” which stands for “unit under testing.” The uut name/alias requirement allows Genie Harness to load the correct default trigger and verification datafiles. Otherwise, you must include a list of testbed devices for the triggers/verifications in the respective datafile. If this doesn’t make sense yet, don’t worry; we will touch on datafiles and provide examples later in the chapter that should help provide clarity.
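As a rough illustration, a minimal mapping datafile tells Genie which connection (by its name in the testbed file) to use for each device and context. The device name (cat8k-rt1) and connection name (cli) below are assumptions for illustration only; check the Genie Harness documentation for the full schema before using it.

```yaml
# Hypothetical mapping datafile (mapping_datafile.yaml) -- a sketch,
# not copied from the official docs; verify the schema before use.
devices:
  cat8k-rt1:
    context: cli      # use the CLI context for this device
    mapping:
      cli: cli        # map the cli context to the testbed
                      # connection named "cli"
```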
To wrap up the Genie Harness example, the Genie job is run with the same Easypy runtime environment used to run pyATS jobs. To run the Genie job from the command line, enter the following:
(.venv)dan@linux-pc# pyats run job {job file}.py --testbed-file {/path/to/testbed}
Remember to have your Python virtual environment activated! In the next section, we will jump into datafiles and how they are defined.
Datafiles
The purpose of a datafile is to provide additional information about the Genie features (triggers, verifications, and so on) you would like Genie Harness to run during a job. For example, the trigger datafile may specify which testbed devices a trigger should run on in the job. There are many datafiles available in Genie Harness. However, many of them are optional and are only needed if you’re planning to modify the default datafiles provided. The default datafiles can be found at the following path:
$VIRTUAL_ENV/lib/python<version>/site-packages/genie/libs/sdk/genie_yamls
Within the genie_yamls directory, you’ll find default datafiles that apply to all operating systems (OSs) and others that are OS-specific. These default datafiles are only implicitly activated when a testbed device has either a name or alias of uut. If there isn’t a testbed device with that name or alias, the default datafile will not be implicitly passed to that job. I would highly recommend checking out (not editing) the default datafiles. If you’d like to edit one, you may create a new datafile in your local directory and extend the default one—but don’t jump too far ahead yet! This topic will be covered later in the chapter. Here’s a list of the different datafiles that can be passed to gRun:
Testing datafile
Mapping datafile
Verification datafile
Trigger datafile
Subsection datafile
Configuration datafile
PTS datafile
Each datafile serves a specific Genie Harness feature, but a datafile only needs to be specified if you are deviating from the provided default. For example, the default Profile the System (PTS) datafile specifies that PTS runs only on the testbed device with the alias uut. If you would like it to run on more devices, you’ll need to create a pts_datafile.yaml file that maps devices to the device features you want profiled by PTS and pass it to gRun with the pts_datafile argument.
Device Configuration
Applying device configuration is always a hot topic when it comes to network automation. In the context of pyATS and the pyATS library (Genie), the focus is on applying (and reverting) the configuration during testing. In many cases, you’ll want to test a feature by configuring it on a device, testing its functionality, and removing the configuration before the end of testing. The pyATS library (Genie) provides many ways to apply configuration to devices. Here are some of the options you have to configure a network device during testing:
Manual configuration before testing begins (not recommended).
Automatically apply the configuration to the device in the Common Setup and Common Cleanup sections with TFTP/SCP/FTP. A config.yaml file can be provided to the config_datafile argument of gRun, which specifies the configuration to apply.
Automatically apply the configuration to the device in the Common Setup and Common Cleanup sections using Jinja2 template rendering. The Jinja2 template filename is passed to gRun using the jinja2_config argument, and the device variables are passed as key-value pairs using the jinja2_arguments argument.
You will dive further into Jinja2 templates, and how to use them to generate configurations, in Chapter 10, “Automated Configuration Management.” For now, just understand that you can standardize the configuration pushed to the network devices under testing by using configuration templates and a template rendering engine (Jinja2) to render the templates with device variables, resulting in complete configuration files.
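To make the idea of template rendering concrete, here is a minimal sketch using only the Python standard library. It substitutes Jinja2-style {{ name }} placeholders with device variables; real Jinja2 (used by the jinja2_config workflow) adds loops, conditionals, and filters on top of this idea. The template and variable names below are invented for illustration.

```python
import re

def render(template: str, variables: dict) -> str:
    """Tiny stand-in for a template engine: replace {{ name }}
    placeholders with values from the variables mapping."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

# A small interface template in the show running-config style
template = (
    "interface {{ intf }}\n"
    " ip address {{ ip }} {{ mask }}\n"
    " no shutdown"
)

config = render(template, {"intf": "GigabitEthernet1",
                           "ip": "10.0.0.1",
                           "mask": "255.255.255.0"})
print(config)
```

Rendering the same template with a different set of device variables produces a complete, standardized configuration for each device under testing.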
All configurations should be built using the show running style, which means you create your configuration files how they would appear when you view a device’s configuration using the show running-config command. This differs from how you would configure a device interactively via SSH using the configure terminal approach.
After the devices under testing have been configured, the pyATS library learns the configuration of the devices via the check_config subsection. The check_config subsection runs twice: once during Common Setup and again during Common Cleanup. It collects the running-config of each device in the topology and compares the two configuration “snapshots” to ensure the configuration remains the same before and after testing.
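Conceptually, the check_config comparison is a diff of two running-config snapshots. The following sketch uses Python’s standard difflib module to show the idea; it is not the pyATS implementation, and the configurations are invented for illustration.

```python
import difflib

# Hypothetical running-config snapshots, as check_config might
# collect them during Common Setup and Common Cleanup
before = """\
interface GigabitEthernet1
 ip address 10.0.0.1 255.255.255.0
 no shutdown
"""

after = """\
interface GigabitEthernet1
 ip address 10.0.0.1 255.255.255.0
 shutdown
"""

diff = list(difflib.unified_diff(before.splitlines(),
                                 after.splitlines(),
                                 fromfile="common_setup",
                                 tofile="common_cleanup",
                                 lineterm=""))

# Any +/- lines mean the configuration drifted during testing
if diff:
    print("\n".join(diff))
```

An empty diff means the device configuration survived testing unchanged, which is exactly what check_config verifies.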
PTS (Profile the System)
Earlier in this chapter, you saw examples of device features that can be learned (for example, BGP) during testing. This is made possible by the network OS-agnostic models built into pyATS. These models create the foundation for building reliable data structures and provide the ability to parse data from the network.
PTS provides the ability to “profile” the network during testing. PTS creates profile snapshots of each feature in the Common Setup and Common Cleanup sections. PTS can learn all the device features, or a specific subset of features can be provided as a list to the pts_features argument of gRun. Example 9-4 shows how gRun is called in a job file with a list of features passed to the pts_features argument.
Example 9-4 PTS Feature
from genie.harness.main import gRun

def main():
    # Profiling built-in features (models) w/o the PTS datafile
    gRun(pts_features=["bgp", "interface"])
Along with being able to profile a subset of features/device commands, PTS, by default, runs only on the device with the alias uut. To have more devices profiled by PTS, you’ll need to supply a pts_datafile.yaml file. The datafile can provide a list of devices to profile and describe specific attributes to ignore in the output when comparing snapshots (such as timers, uptime, and dates). Example 9-5 shows a PTS datafile, and Example 9-6 shows the updated gRun call with the pts_datafile argument included.
Example 9-5 PTS Datafile – ex0906_pts_datafile.yaml
extends: "%CALLABLE{genie.libs.sdk.genie_yamls.datafile(pts)}"

bgp:
  devices: ["cat8k-rt1", "cat8k-rt2"]
  exclude:
    - up_time

interface:
  devices: ["cat8k-rt1", "cat8k-rt2"]
Example 9-6 PTS Datafile Argument
from genie.harness.main import gRun

def main():
    # Profiling built-in features (models) w/ the PTS datafile
    gRun(pts_features=["bgp", "interface"],
         pts_datafile="ex0906_pts_datafile.yaml")
PTS Golden Config
PTS profiles the operational state of the network devices under testing, but how do we know the state is what we expect? PTS provides a “golden config” snapshot feature that compares the profiles learned by PTS to what is considered the golden snapshot. Each job run generates a file named pts that is saved to the pyATS archive directory of the job. Any PTS file can be moved to a fixed location and used as the golden snapshot. Like the pts_datafile argument, the pts_golden_config argument can be passed to gRun, which points to the golden PTS snapshot used to compare against the current test run snapshots.
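Conceptually, the golden comparison boils down to diffing the learned snapshot against the golden one while skipping excluded attributes. The following sketch illustrates the idea with plain dictionaries; the attribute names are invented, and PTS performs this comparison internally on the full learned data structures.

```python
def compare_to_golden(current: dict, golden: dict, exclude=()) -> list:
    """Return the keys whose values differ between the current PTS
    snapshot and the golden snapshot, skipping excluded attributes
    (timers, uptimes, and so on). Illustrative only."""
    diffs = []
    for key in golden:
        if key in exclude:
            continue  # volatile attribute: ignore when comparing
        if current.get(key) != golden[key]:
            diffs.append(key)
    return diffs

# Hypothetical flattened BGP profile attributes
golden = {"bgp_id": 65001, "neighbor_state": "Established",
          "up_time": "1d02h"}
current = {"bgp_id": 65001, "neighbor_state": "Idle",
           "up_time": "00:00:12"}

print(compare_to_golden(current, golden, exclude=("up_time",)))
# -> ['neighbor_state']
```

Excluding attributes like up_time keeps the comparison focused on meaningful state changes, which is the same reason the PTS datafile supports an exclude list.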
There’s a lot to digest with Genie Harness, along with a lot of different options, and understanding these features and when to use them is critical. In the following sections, we will turn our attention to triggers and verifications. Let’s take a look at verifications first, as triggers rely on them to perform properly.