Workflow testing

Writing end-to-end tests for your workflow is essential for several reasons:

  1. Verification of Correctness: Tests serve as a means to verify that your workflow functions correctly. They help ensure that the analysis produces the expected results.

  2. Error Detection: Tests help detect errors and issues within the workflow early in the development process. Identifying and addressing these errors before deploying the workflow for real-world tasks can prevent costly mistakes.

  3. Regression Testing: As your workflow evolves, changes and improvements may be made. Without tests, it's challenging to ensure that new changes do not inadvertently break existing functionality.

  4. Documentation and Understanding: Tests can serve as documentation for how your workflow should behave. They provide clear examples of expected inputs and outputs, which can help other team members understand and collaborate on the workflow.

  5. Refactoring and Maintenance: Tests make it safer to refactor or update your workflow. When you need to make changes to the workflow's logic or structure, tests provide confidence that you haven't introduced new issues or regressions.

In summary, creating tests for your workflow is a fundamental practice!

Try to write the tests at the same time as the implementation. You should aim for around 5 to 10 tests per task-group.

YAML test format

To be executed correctly, the file name should start with 'tests_v2' and end with '.yaml', and the file should be placed in a tests folder. The first key is the name of the test work-order. For each test work-order you can then specify:

  • Using the key work_order_types: The work-order types of the test work-order, as a list.

  • Using the key wo_metadata: The metadata to add to the test work-order.

  • Using the key tests: The list of test analyses to execute

For each test analysis you need to specify:

  • Using the task_group key: The task-group name to be analysed

  • Using the inputs key: All the inputs that are to be analysed

  • Using the tasks key: The expected tasks values

  • Using the type key: Set it to raw_exec if there are no expected tasks and a raw_exec result is expected instead

  • Using the expected_memory key: The expected analysis result

Here is an example of a test_work_order_all_ok work order, which includes two tests: closing_read_all_inputs and opening_read_all_inputs, with the second one being of type raw_exec:

test_work_order_all_ok: # Test work-order
  work_order_types: [outdoor]
  wo_metadata:
    parameter_meter_location: outdoor
  tests:

    closing_read_all_inputs: # Test 1
      task_group: closing_read
      inputs:
        image_input: https://storage.googleapis.com/dp-sa-internal/circet-ireland/smart-meters/data/images/closing_read.jpeg
        expected_serial_number: "3393"
        expected_meter_read: "29249"
      tasks:
        - closing_read: true
        - closing_read_serial_number_value: "3393"
        - closing_read_serial_number_valid: true
        - closing_read_meter_read_value: "29249"
        - closing_read_meter_read_valid: true
        - closing_read_expected_serial_number: "3393"
        - closing_read_expected_meter_read: "29249"

    opening_read_all_inputs: # Test 2
      task_group: opening_read
      inputs:
        image_input: https://storage.googleapis.com/dp-sa-internal/circet-ireland/smart-meters/data/images/opening_read.jpeg
      expected_memory:
        key: value
        key1: value1
      type: raw_exec

For now, prefix your test files with the string 'tests_v2' and put them in a tests folder next to the workflow_v2 folder.
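
For instance, assuming a project root that contains the workflow_v2 folder, the layout could look like the sketch below (the project name and the test file name tests_v2_smart_meters.yaml are only illustrative):

my_workflow_project/                     # hypothetical project root
├── workflow_v2/                         # workflow definition
└── tests/
    └── tests_v2_smart_meters.yaml       # hypothetical test file, prefixed with 'tests_v2'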

Launch the tests

The command wf_client test can be used to start the execution of the tests.

# Locally execute all the tests of the collected test files
wf_client test

# Set current working directory
wf_client test --cwd

# Locally execute only the tests with the name containing the specified substring
# Can be used to start only the tests of a specific task-group if the task-group name is included in the test names
wf_client test -k substring

# Show stdout/stderr outputs
wf_client test -s

# Remotely execute all the tests
wf_client test --api-prod # On the production deployment. It uses the Production site id specified in the .env.
wf_client test --api-test # On the testing deployment.

# Some other flags can be used to ignore specific failures:
# Ignore the order comparison of the expected and predicted tasks.
wf_client test --ignore-task-order                 

# Ignore the comparison of predicted tasks not listed in the expected tasks.
wf_client test --ignore-extra-predicted-tasks   

# The created test work-orders won't be deleted at the end of the analysis. For remote testing only.
wf_client test --keep-work-orders   

By default, pytest captures all stdout/stderr outputs. You can add the flag -s to see them, which can be useful for debugging purposes.
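
The flags above can be combined. As an illustrative example (not an official recipe), the following would remotely run only the tests whose names contain closing_read on the testing deployment and keep the created work-orders for inspection:

# Example: remote run of the closing_read tests on the testing deployment,
# keeping the created test work-orders
wf_client test --api-test -k closing_read --keep-work-orders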
