Networked Media Open Specifications

NMOS API Testing Tool

This tool creates a simple web service which tests implementations of the NMOS APIs.

The following test sets are currently supported:

When testing any of the above APIs it is important that they contain representative data. The tests will generate ‘Could Not Test’ results if no testable entities can be located. In addition, if a device supports many modes of operation (including multiple video/audio formats), it is strongly recommended to re-test it in each of these modes.

Attention:

Deploy

Local

Ensure pip3 is installed and up to date. Then install the dependencies:

# Upgrade pip3 to newest version to allow correct installation of requirements
pip3 install --upgrade pip
# Install the dependencies
pip3 install -r requirements.txt

Start the service as follows:

# Start the Test Suite
python3 nmos-test.py

This tool provides a simple web service which is available on http://localhost:5000.

Docker

There is a Dockerfile provided to build an image containing the test suite.

docker build -t nmos-testing .

This image provides a quick way to deploy the test suite on your network.

docker run -d -p="5000:5000" nmos-testing

The web service will be available on http://<DOCKER_HOST_IP>:5000.

If you need to change the Config.py settings, you can define your own copy with the required settings and add the following volume mount. Note that an absolute path is required; the example below works from the current directory.

docker run -d -p="5000:5000" -v="$(pwd)/Config.py:/config/Config.py" nmos-testing
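The mounted file follows the same format as the bundled nmostesting/Config.py. A minimal override might look like the following; the specific values shown are illustrative, and only options named elsewhere in this document are included:

```python
# Config.py -- illustrative override; see nmostesting/Config.py for the full set of options

# Serve the test suite over plain HTTP rather than HTTPS
ENABLE_HTTPS = False

# Extra seconds to wait before tests start, e.g. to allow a Node to re-scan unicast DNS
TEST_START_DELAY = 30

# Credentials used for Basic Authentication against an IS-10 Authorization Server
AUTH_USERNAME = "test-user"   # illustrative value
AUTH_PASSWORD = "change-me"   # illustrative value
```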

If you need to deploy multiple instances of the test suite so that multiple users have concurrent access, this is one possible method of meeting that requirement.

Usage

Provide the URL of the relevant API under test (see the detailed description on the webpage) and select a test suite from the checklist. The result of the tests will be shown after a few seconds.

The result of each test case will be one of the following:

Result           Reason
Pass             Successful test case.
Fail             Required feature of the specification has been found to be implemented incorrectly.
Warning          Not a failure, but the API being tested is responding or configured in a way which is not recommended in most cases.
Test Disabled    Test is disabled due to test suite configuration; change the config or test manually.
Could Not Test   Test was not run due to prior responses from the API, which may be OK, or indicate a fault.
Not Implemented  A recommended or optional feature of the specification has been found to be not implemented.
Manual           Test suite does not currently test this feature, so it must be tested manually.
Not Applicable   Test is not applicable, e.g. due to the version of the specification being tested.

Testing Unicast discovery

In order to test unicast discovery, the test suite launches its own mock DNS server which your Node will need to be pointing at in order to correctly discover the mock registries. The following steps should be completed to operate in this mode:

Unicast DNS advertisements for registries only become available once tests are running. As a result the unit under test may need prompting to re-scan the DNS server for records at this point. The TEST_START_DELAY config parameter may be used to increase the period the test suite waits before starting tests in this situation.

If your network requires the use of a proxy server, you may find it necessary to disable the proxy configuration on the host running the test suite and on the unit under test when using unicast DNS. This is because any requests to fully qualified hostnames are likely to be directed to your proxy server, which will be unable to resolve them.

Testing BCP-003-01 TLS

Testing of certain aspects of BCP-003-01 makes use of an external tool ‘testssl.sh’. Please see testssl/README.md for installation instructions.

In order to ease testing of TLS with the various specifications, sample certificates are provided in this repository. Please see test_data/BCP00301/README.md for their details and installation guidance.

Testing IS-10 Authorization

When testing IS-10 / BCP-003-02 implementations, ensure that a user is registered with the Authorization Server with a username and password that corresponds with the AUTH_USERNAME and AUTH_PASSWORD config options in the nmostesting/Config.py file. These values should be changed to sensible values before running the IS-10 tests and will be used as the Basic Authentication mechanism when contacting the Authorization Server.
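For reference, Basic Authentication combines the username and password and base64-encodes them, as defined in RFC 7617. A short sketch of how such a header value is formed (the credentials shown are placeholders, standing in for the AUTH_USERNAME and AUTH_PASSWORD config values):

```python
import base64


def basic_auth_header(username, password):
    """Build an HTTP Basic Authentication header value from a username and password."""
    token = base64.b64encode("{}:{}".format(username, password).encode("utf-8")).decode("ascii")
    return "Basic " + token


# e.g. with placeholder credentials
print(basic_auth_header("test-user", "change-me"))
```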

When testing dynamic client registration, the contents of /test_data/IS1001/register_client_request_data.json will be used in the body of the request when registering the client, and should comply with RFC7591. When testing the authorization code grant, the means by which consent is given by the resource owner will be implementation-specific. The contents of the file /test_data/IS1001/authorization_request_data.json will be used as the body of the request to the authorization endpoint. Please edit both files to comply with the implementation under test.

Testing of SDP files

IS-05 test_41 checks that SDP files conform to the expectations of SMPTE ST 2110. In order to enable these tests, please ensure that SDPoker is available on your system.

Non-Interactive Mode

The test suite supports non-interactive operation in order to use it within continuous integration systems. An example of this usage can be seen below:

# List the available test suites
python3 nmos-test.py --list-suites

# List the available tests for a given test suite
python3 nmos-test.py suite IS-04-02 --list-tests

# Run just the 'auto' tests for the given suite, saving the output as a JUnit XML file
python3 nmos-test.py suite IS-04-02 --selection auto --host 128.66.12.5 128.66.12.6 --port 80 80 --version v1.2 v1.2 --ignore auto_5 auto_6 --output results.xml
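The resulting JUnit XML file can then be consumed by CI tooling or inspected directly. A minimal sketch of checking a report for failures, assuming the conventional JUnit XML layout (testcase elements with nested failure elements):

```python
import xml.etree.ElementTree as ET


def count_failures(junit_xml):
    """Count <failure> elements anywhere in a JUnit XML report string."""
    root = ET.fromstring(junit_xml)
    return len(root.findall(".//failure"))


# Hypothetical report contents, for illustration only
sample = """<testsuite tests="2">
  <testcase name="auto_1"/>
  <testcase name="auto_2"><failure message="Bad status code"/></testcase>
</testsuite>"""

print(count_failures(sample))  # -> 1
```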

To display additional information about the available command-line options:

# Show the usage
python3 nmos-test.py -h

# Show the specific options for the 'suite' command
python3 nmos-test.py suite -h

Using the API

The testing tool comes with a minimal API for running tests remotely and configuring the testing tool instance dynamically. This is particularly useful for automated testing purposes. The two endpoints presented by the API are:

/api

[GET, POST]

This endpoint accepts (almost) identical inputs as the non-interactive command line utility, except hyphens (-) are replaced with underscores (_) for key values.

For a list of test suites, the body of the POST request would be:

{
  "list_suites": true
}

To execute a test for a remote API, the body of the POST request would look something like:

{
  "suite": "IS-05-01",
  "host": ["192.168.1.2"],
  "port": [80],
  "version": ["v1.0"]
}

NOTE: Much like in non-interactive mode, the testing tool is currently limited to running a single test suite at a time.
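Putting this together, a request to the /api endpoint can be issued from any HTTP client. The sketch below builds the request body with a small helper and prepares a POST using only the standard library; the localhost URL is an assumption based on the tool's default port:

```python
import json
import urllib.request


def build_payload(suite, hosts, ports, versions):
    """Build an /api request body; note that keys use underscores, not hyphens."""
    return {
        "suite": suite,
        "host": hosts,
        "port": ports,
        "version": versions,
    }


payload = build_payload("IS-05-01", ["192.168.1.2"], [80], ["v1.0"])

req = urllib.request.Request(
    "http://localhost:5000/api",  # assumes a local instance on the default port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment to run against a live test suite instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```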

/config

[GET, PATCH]

To change the testing tool from using HTTP to using HTTPS, the body of the request would be:

{
  "ENABLE_HTTPS": true
}

NOTE:

External Dependencies

Additional Documentation

Some of the tests contained within this tool perform a number of steps which may not be obvious without viewing the source code. Descriptions of complex tests’ behaviour are documented in docs/ in order to aid with debugging.

Known Issues

Ramlfications Parsing

Ramlfications trips up over the ‘traits’ used in some of the NMOS specifications. Until this is resolved in the library, we overwrite cases of this keyword in the RAML files.

Adding New Tests

This test suite is intended to be straightforward to extend. If you encounter an implementation which is operating outside of the specification and the current test suite does not identify this behaviour, please consider adding a test as follows:

  1. First, raise an Issue against this repository. Even if you do not have the time to write additional tests, a good explanation of the issue identified could allow someone else to do so on your behalf.
  2. Once an issue has been raised, feel free to assign it to yourself. We would welcome any Pull Requests which add to the set of tests available. Once a Pull Request is raised, one of the specification maintainers will review it before including it in the test suite.

Test Suite Structure

All test classes inherit from GenericTest which implements some basic schema checks on GET/HEAD/OPTIONS methods from the specification. It also provides access to a ‘Specification’ object which contains a parsed version of the API RAML, and provides access to schemas for the development of additional tests.

Each manually defined test case is expected to be defined as a method starting with test_, taking an object of class Test. This will allow it to be automatically discovered and run by the test suite. The return type for each test case must be the result of calling one of the methods on the Test object shown below.

Examples of each result are included below:

from .TestResult import Test

def test_my_stuff(self, test):
    """My test description"""

    # Test code
    if test_passed:
        return test.PASS()
    elif test_failed:
        return test.FAIL("Reason for failure")
    elif test_warning:
        return test.WARNING("Reason the API configuration or response is not recommended")
    elif test_disabled:
        return test.DISABLED("Explanation of why the test is disabled and e.g. how to change the test suite "
                             "config to allow it to be run")
    elif test_could_not_test:
        return test.UNCLEAR("Explanation of what prior responses prevented this test being run")
    elif test_not_implemented:
        return test.OPTIONAL("Explanation of what wasn't implemented, and why you might require it",
                             "https://github.com/AMWA-TV/nmos/wiki/Specifications#what-is-required-vs-optional")
    elif test_manual:
        return test.MANUAL("Explanation of why the test is not (yet) tested automatically, and e.g. how to "
                           "run it manually")
    elif test_not_applicable:
        return test.NA("Explanation of why the test is not applicable, e.g. due to the version of the "
                       "specification being tested")

The following methods may be of use within a given test definition.

Requesting from an API

# All keyword parameters are optional
# 'json' is the body of the request as JSON, and 'data' is the body as URL-encoded form data
self.do_request(method, url, json=json, data=data, headers=headers, auth=auth)

Returns a tuple of the request status (True/False) and a Requests library Response object.

Testing an API’s response

self.check_response(schema, method, response)

Returns a tuple of the test status (True/False) and a string indicating the error when the status is False.

Accessing response schemas

self.get_schema(api_name, method, path, status_code)

Returns a JSON schema, or None if it is unavailable.

Validating a JSON schema

self.validate_schema(payload, schema)

Raises an exception upon validation failure.
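As a stdlib-only illustration of this raise-on-failure contract (not the tool's actual implementation, which performs full JSON Schema validation), consider a tiny checker that handles only the 'type' and 'required' keywords:

```python
def validate_minimal(payload, schema):
    """Illustrative validator: raises ValueError on failure, returns nothing on success."""
    if schema.get("type") == "object":
        if not isinstance(payload, dict):
            raise ValueError("payload is not an object")
        for key in schema.get("required", []):
            if key not in payload:
                raise ValueError("missing required property: " + key)


# Hypothetical schema fragment, for illustration only
schema = {"type": "object", "required": ["id", "version"]}
validate_minimal({"id": "abc", "version": "v1.3"}, schema)  # passes silently
```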

Testing a New Specification

When adding tests for a completely new API, the first set of basic tests have already been written for you. Provided a specification is available in the standard NMOS layout (using RAML 1.0), the test suite can automatically download and interpret it. Simply create a new test file which looks like the following:

from .GenericTest import GenericTest


class MyNewSpecTest(GenericTest):
    """
    Runs MyNewSpecTest
    """
    def __init__(self, apis):
        GenericTest.__init__(self, apis)