Reply To: Unit tests for FlowGraphs

#2377
kalms
Participant

We used FlowCanvas for scripting the execution of individual abilities in a turn-based strategy game. Toward the end we had ~200 unique abilities. To get test coverage, we built a generalized test rig and ~1300 test cases in total. The test cases showed up in the Unity Test Runner, and our build system ran the entire suite on every commit. The plumbing took a lot of time to build, but it was well worth it for us.

Our initial approach was to let non-C# programmers create the test rigs from scratch as FlowGraphs, and then have a generalized lightweight “test rig runner” in C# wrapped around them. However, after a while we realized that in our situation it scaled better if we created one single test rig and a large number of data descriptions. A data description consisted of four sections (sketched in code after the list):

  1. a reference to the FlowGraph to be run
  2. a description of the setup configuration of the rig (the list of things to do to the system before starting the FlowGraph)
  3. a list of events to feed into the FlowGraph at certain points in time
  4. a description of post-conditions (the list of things to check in the system after the FlowGraph has completed execution)
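
For illustration, such a data description could be modelled roughly like this as a ScriptableObject. All class and field names below are hypothetical (the actual asset layout was tied to our game’s systems), and it assumes FlowCanvas’s FlowScript asset type for the graph reference:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using FlowCanvas;

// Hypothetical data-description asset: one instance per "ability test case".
// Type and field names are illustrative, not the classes from our project.
[CreateAssetMenu(menuName = "Tests/Ability Test Case")]
public class AbilityTestCase : ScriptableObject
{
    // 1. The FlowGraph (FlowCanvas FlowScript asset) to run.
    public FlowScript ability;

    // 2. Setup configuration: what to do to the system before starting the graph.
    public List<SetupStep> setup;

    // 3. Events to feed into the graph at certain points in time.
    public List<TimedEvent> events;

    // 4. Post-conditions: what to check in the system after the graph has finished.
    public List<PostCondition> expectations;
}

// Placeholder shapes for the three data sections; a real project would
// replace these with whatever its own systems need.
[Serializable] public class SetupStep { public string action; public string target; }
[Serializable] public class TimedEvent { public int turn; public string eventName; }
[Serializable] public class PostCondition { public string property; public string expectedValue; }
```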

Since we needed only a single test rig, and it became fairly complicated with all that parameterization, we wrote the test rig itself in C#.

The data descriptions were instances of a ScriptableObject containing the parameterization. Each such asset was thought of as one “ability test case” by the designers. A bit of glue logic made these show up in the Unity Test Runner: a subsystem used Asset Change Tracker to maintain a list of the test-case assets available at any given time, and a regular C# parameterized test case with the TestCaseSource attribute exposed that list to Unity’s Test Runner and ran the cases when instructed to do so by Unity. Note that this relied on all our tests being instant (not needing the Unity engine to tick anything); the TestCaseSource approach can only be used for [Test] style execution, not [UnityTest].
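
A rough sketch of that glue, assuming an AbilityTestCase asset type like the one above and a hypothetical AbilityTestRig entry point. The real project kept the asset list up to date via Asset Change Tracker; querying the AssetDatabase when the test source is enumerated is the simplest equivalent for a sketch:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using UnityEditor;

public class AbilityTests
{
    // Enumerates every AbilityTestCase asset in the project. Our real setup
    // kept this list current via an asset-change-tracking subsystem;
    // querying the AssetDatabase here is the simplest equivalent for a sketch.
    public static IEnumerable<TestCaseData> AllAbilityTestCases()
    {
        return AssetDatabase.FindAssets("t:AbilityTestCase")
            .Select(AssetDatabase.GUIDToAssetPath)
            .Select(AssetDatabase.LoadAssetAtPath<AbilityTestCase>)
            .Select(asset => new TestCaseData(asset).SetName(asset.name));
    }

    // A plain [Test] (rather than [UnityTest]) is enough because every test
    // case completes instantly and never needs the engine to tick.
    [Test, TestCaseSource(nameof(AllAbilityTestCases))]
    public void RunAbilityTestCase(AbilityTestCase testCase)
    {
        AbilityTestRig.Run(testCase); // hypothetical single-rig entry point
    }
}

// Hypothetical test rig: the real one applies the setup steps, runs the
// FlowGraph, feeds the timed events, and asserts the post-conditions.
public static class AbilityTestRig
{
    public static void Run(AbilityTestCase testCase)
    {
        Assert.IsNotNull(testCase.ability, "Test case must reference a FlowGraph");
        // ...apply testCase.setup, run testCase.ability, feed testCase.events,
        // then check each of testCase.expectations...
    }
}
```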

 

So, why data descriptors + one test rig instead of multiple test rigs? Well, data descriptors are easier to diff/merge, they leave less room for mistakes, they are quicker to set up, and they give all test cases a limited but consistent language. We were also concerned about the viability of maintaining 1000+ test scripts; it’s difficult to do bulk changes across FlowGraphs.

If our game had not been turn-based but a continuous simulation, with lots of custom-built FlowGraphs for different entities that manipulated Unity objects directly, then we would either have built a half-dozen or so test rig FlowGraphs per FlowGraph-to-test, or we would have built one test rig FlowGraph plus one data descriptor C# class and created a half-dozen-or-so data descriptor assets for each FlowGraph-to-test. I haven’t looked deeply into this myself, but if I were to, I would look at Unreal’s Functional Testing for inspiration.

When it comes to making a FlowGraph testable:

  1. there needs to be a way for the test rig to affect the FlowGraph, either directly (poking parameters in the FlowGraph) or indirectly (modifying the world around it)
  2. there needs to be a way for the test rig to observe what the FlowGraph is doing, either directly (reading parameters within the FlowGraph) or indirectly (observing the world around it); a sketch of the direct route follows this list.
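
As an illustration of the direct route, a test can poke input parameters before the graph runs and read result parameters back afterwards. This is only a sketch under several assumptions: it uses a FlowScriptController host and NodeCanvas-style SetVariableValue / GetVariableValue blackboard accessors (verify the exact member names against your FlowCanvas version), the ability asset, variable names and expected values are made up, and the graph is assumed to complete instantly when started (as with our turn-based abilities):

```csharp
using NUnit.Framework;
using UnityEngine;
using FlowCanvas;
using NodeCanvas.Framework;

public class FlowGraphParameterTests
{
    [Test]
    public void Fireball_AppliesDamageBonus()
    {
        // Host the graph on a throwaway GameObject.
        var host = new GameObject("GraphHost");
        var controller = host.AddComponent<FlowScriptController>();
        controller.behaviour = Resources.Load<FlowScript>("Abilities/Fireball"); // hypothetical asset

        // Affect the graph directly: poke an input parameter on its blackboard.
        controller.blackboard.SetVariableValue("damageBonus", 5);

        // Assumes the graph runs to completion synchronously when started.
        controller.StartBehaviour();

        // Observe the graph directly: read a result parameter back out.
        Assert.AreEqual(25, controller.blackboard.GetVariableValue<int>("dealtDamage"));

        Object.DestroyImmediate(host);
    }
}
```

The indirect route is the same pattern, just with the setup and assertions going through whatever world/game-state objects the graph manipulates instead of its blackboard.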

When it comes to making a FlowGraph easy to test / not requiring so many test cases:

  1. Avoid infinite loops. If you cannot avoid infinite loops, split the graph into top-level loops that call out to functions, and invoke the functions separately to validate individual function behaviour. (This may require special glue logic within the FlowGraph to get the functions started – I haven’t investigated that myself).
  2. Design your FlowGraph logic so that you minimize the total number of permutations that you need to test.