Forum Replies Created

Viewing 14 posts - 1 through 14 (of 14 total)
  • in reply to: Unit tests for FlowGraphs #2384


    (We are currently not working with FlowCanvas – we have for various reasons (not related to FlowCanvas) switched to working with Unreal – so I’m writing this from memory)

    Our #1 issue with FlowCanvas today is that the FlowGraphs do not diff/merge at all using the standard text-based merge tools. I have personally done the following dance a large number of times:

    1. Extract the “serializableObject” of the previous version of the .asset from source control
    2. Extract the “serializableObject” of the current version of the .asset
    3. Paste both into https://jsonformatter.curiousconcept.com
    4. Store both results into a pair of text files
    5. Use a diff tool to compare the text files
    6. Look for changes (is it just a single value changed? one or a few lines? a group of lines? large structural changes?)

    If the JSON data had been stored in a multi-line format, I would have been able to use the version control system's diff tool and perform the comparison in a single step.
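    The manual dance above can be collapsed into one script. A sketch in Python (diff_assets is a hypothetical helper, not part of our tooling), using the standard json and difflib modules in place of the online formatter and external diff tool:

```python
import difflib
import json

def diff_assets(old_json: str, new_json: str) -> str:
    """Pretty-print two serialized-graph JSON blobs and diff them,
    replacing steps 3-5 of the manual process above."""
    old = json.dumps(json.loads(old_json), indent=2, sort_keys=True)
    new = json.dumps(json.loads(new_json), indent=2, sort_keys=True)
    return "\n".join(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="previous", tofile="current", lineterm=""))

# A single changed value shows up as one -/+ pair in the output.
print(diff_assets('{"a": 1, "b": 2}', '{"a": 1, "b": 3}'))
```

    Step 6 (judging the size of the change) is still manual, but the unified diff makes single-value edits obvious at a glance.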


    We have run into problems a number of times that have to do with custom functions, and reentrancy. This usually shows up with helper functions that are of a reusable nature – or when there is a bit of ping-pong going on between multiple FlowGraphs. We have worked around those problems by not using functions as often as we’d like; the “real” solution would have been for FlowCanvas to pass parameters via a call stack.
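    A toy Python sketch of the reentrancy problem (the single graph-scoped parameter slot is my hypothetical model of the behaviour we saw, not FlowCanvas internals):

```python
# Broken: one shared parameter slot for the whole graph, so a function
# that re-enters itself clobbers its own argument.
graph_vars = {"n": 0}

def call_shared(n):
    graph_vars["n"] = n                    # inner call overwrites outer "n"
    if graph_vars["n"] <= 1:
        return 1
    inner = call_shared(graph_vars["n"] - 1)
    return graph_vars["n"] * inner         # reads the clobbered value!

# Correct: push a frame per invocation, i.e. pass parameters on a call stack.
def call_stacked(n, stack=None):
    stack = [] if stack is None else stack
    stack.append({"n": n})                 # this invocation's own frame
    top = stack[-1]["n"]
    r = 1 if top <= 1 else top * call_stacked(top - 1, stack)
    stack.pop()                            # frame discarded on return
    return r

assert call_shared(5) != 120   # shared slot computes the wrong factorial
assert call_stacked(5) == 120  # per-call frames make the logic reentrant
```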


    We have found it problematic that the “Finish” node terminates all flows within an entire FlowGraph. This makes logic that ping-pongs between FlowGraphs not work as intended. We worked around that by limiting ping-pong, and carefully structuring logic so that early termination of certain flows would have no effect. A “proper” solution would be to provide an alternative to Finish that does not kill other currently-running threads (so something that is more of a Return operation), and/or provide an alternative to Finish that only kills threads that have been invoked during the currently-running event (which, in turn, requires tracking which Flows have been created during which Event).
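    To make the distinction concrete, here is a toy Python scheduler (all names hypothetical; FlowCanvas itself is C#) contrasting a graph-wide Finish with a Return-style operation that only ends the calling flow:

```python
class Graph:
    """Toy model of a graph with several concurrently running flows."""
    def __init__(self):
        self.flows = set()

    def start_flow(self, name):
        self.flows.add(name)

    def finish(self):
        # Finish as described above: terminates ALL flows in the graph.
        self.flows.clear()

    def ret(self, name):
        # Proposed Return-style operation: ends only the calling flow.
        self.flows.discard(name)

g = Graph()
g.start_flow("main")
g.start_flow("helper")
g.ret("helper")
assert g.flows == {"main"}   # other flows keep running after a Return
g.finish()
assert g.flows == set()      # Finish kills everything, ping-pong included
```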


    We have found the CPU cost for cloning FlowGraphs problematic.


    If we had been doing testing of FlowGraphs with other FlowGraphs as test rigs, we would probably have wanted some more tooling for inspecting the FlowGraph-under-test from the test rig. Tools for easily inspecting internal state, for example: Can I easily get a list of the internal variables within the FlowGraph-under-test? Can I replace a node in the FlowGraph-under-test with a spy that reports things back to the test rig? We did not need those – we were happy to limit ourselves to inspecting state outside of the FlowGraph itself.


    Those are the main things we’ve run into. For our use case (which may be more complex than the average user’s), we would have benefited from having state separated from logic. That would have made graph instantiation quicker. Cheaper graph instantiation would allow functions to be represented as separate graphs. That would make function-local variables viable. Passing function arguments on a stack would finally make it possible to create reentrant logic. All this would probably make writing custom nodes more complicated, and it may result in overall slower execution of the type of code that FlowCanvas users create today.

    in reply to: Unit tests for FlowGraphs #2377


    We used FlowCanvas for scripting the execution of individual abilities in a turn-based strategy game. Toward the end we had ~200 unique abilities. In order to get test coverage, we built a generalized test rig, and ~1300 test cases in total. The test cases showed up in the Unity Test Runner, and our build system ran the entire suite on every commit. The plumbing took a lot of time to build but it was well worth it for us.

    Our initial approach was to let non-C# programmers create the test rigs from scratch as FlowGraphs, with a generalized lightweight "test rig runner" in C# wrapped around them. However, after a while we realized that in our situation it scaled better to create one single test rig and a large number of data descriptions. A data description consisted of four sections:

    1. a reference to the FlowGraph to be run
    2. a description of the setup configuration of the rig (the list of things to do to the system before starting the FlowGraph)
    3. a list of events to feed into the FlowGraph at certain points in time
    4. a description of post-conditions (the list of things to check in the system after the FlowGraph has completed execution)

    Since we needed only one single test rig, and it became fairly complicated with all that parameterization, we wrote the test rig in C#.
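    The four-section shape above can be sketched roughly like this in Python (AbilityTestCase, run_case and heal_graph are hypothetical illustrations, not our actual C# rig):

```python
from dataclasses import dataclass, field

@dataclass
class AbilityTestCase:
    graph: str                                   # 1. reference to the FlowGraph
    setup: dict = field(default_factory=dict)    # 2. rig setup configuration
    events: list = field(default_factory=list)   # 3. events to feed in over time
    expect: dict = field(default_factory=dict)   # 4. post-conditions to check

def run_case(case, run_graph):
    """The single generic test rig: apply setup, feed events, check results."""
    world = dict(case.setup)
    for event in case.events:
        run_graph(case.graph, world, event)
    return all(world.get(k) == v for k, v in case.expect.items())

# Toy stand-in for executing a FlowGraph: a "Heal" ability adds hp.
def heal_graph(graph, world, event):
    world["hp"] = world.get("hp", 0) + event["amount"]

case = AbilityTestCase(graph="Heal", setup={"hp": 10},
                       events=[{"amount": 5}], expect={"hp": 15})
assert run_case(case, heal_graph)
```

    In our real setup each AbilityTestCase was a ScriptableObject asset and run_graph invoked the actual FlowGraph, but the division of labour was the same.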

    The data descriptions were instances of a ScriptableObject containing the parameterization. Each such asset was thought of as one “ability test case” by the designers. A bit of glue logic made these show up in the Unity Test Runner: A subsystem used Asset Change Tracker to maintain a list of the test-case assets available at any given time; then a regular C# parameterized test case with the TestCaseSource attribute exposed the list of test-case assets to Unity’s Test Runner & ran them when instructed to do so by Unity. Note that this relied on all our tests being instant (not needing the Unity engine to tick anything); the TestCaseSource approach can only be used for [Test] style execution, not [UnityTest].


    So, why data descriptors + 1 test rig, instead of multiple test rigs? Well, data descriptors are easier to diff/merge, there is less room for mistakes in the data descriptors themselves, data descriptors are quicker to set up, data descriptors give a limited but consistent language to all test cases. We were concerned about the viability of maintaining 1000+ test scripts; it’s difficult to do bulk changes across FlowGraphs.

    If our game had not been turn-based, but a continuous simulation, with lots of custom-built FlowGraphs for different entities that manipulated Unity objects directly, then we would either have built a half-dozen or so test rig FlowGraphs per FlowGraph-to-test, or we would have built one test rig FlowGraph + one data descriptor C# class + created a half-dozen-or-so data descriptor assets for each FlowGraph-to-test. I haven't looked deeply into this myself, but if I were to, I would look at Unreal's Functional Testing for inspiration.

    When it comes to making a FlowGraph testable:

    1. There needs to be a way for the test rig to affect the FlowGraph, either directly (poking parameters in the FlowGraph) or indirectly (modifying the world around it).
    2. There needs to be a way for the test rig to observe what the FlowGraph is doing, either directly (reading parameters within the FlowGraph) or indirectly (observing the world around it).

    When it comes to making a FlowGraph easy to test / not requiring so many test cases:

    1. Avoid infinite loops. If you cannot avoid infinite loops, split the graph into top-level loops that call out to functions, and invoke the functions separately to validate individual function behaviour. (This may require special glue logic within the FlowGraph to get the functions started – I haven’t investigated that myself).
    2. Design your FlowGraph logic so that you minimize the total number of permutations that you need to test.


    Good news: This has been reported, and fixed, in the latest 2018.2 beta.

    Original Unity Issue Tracker report here

    The 2018.2.0b8 patch notes include the note, "Animation: Fixed use of AnimationCurve scripting API in threads. (1041793)"

    I have confirmed that we can build a player without errors with 2018.2.0b9.

    In other words: no action needed on your part!


    We are considering moving to 2018.x within the next few months. This is the only blocker that we know of. I can't seem to find any good way to get more information on this in a timely manner from Unity themselves (looking at reference C# code, …)

    If you can find a workaround, that would be appreciated.

    in reply to: Nestled Custom Functions create infinite loop #1831


    Are you sure you want it implemented as a Queue? It usually makes a lot more sense to manage return addresses as a Stack.
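    A quick Python illustration of why: with nested calls A → B → C, returns must unwind in LIFO order, which is exactly what a stack gives and a queue does not:

```python
from collections import deque

# Record "return addresses" as nested calls are entered: A, then B, then C.
stack, queue = [], deque()
for site in ["A", "B", "C"]:
    stack.append(site)
    queue.append(site)

# Unwinding must return to C's caller first, then B's, then A's.
assert [stack.pop() for _ in range(3)] == ["C", "B", "A"]      # correct
assert [queue.popleft() for _ in range(3)] == ["A", "B", "C"]  # wrong order
```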


    Yes, OnCreate is indeed only called when the node is created, and only once at that – not every time the node deserializes – so that is a safe place to set the default serialized values.

    Cool, thanks, then we will use that to set defaults for now.

    A UI option for resetting the node to default values could follow in the next version. The only problem with that is that "OnCreate" no longer makes a lot of sense; rather, a "Reset" method would make more sense (akin to Unity's Reset method callback). I could of course call "OnCreate" when we click the Reset option in the UI, but it's not very consistent, thus I would prefer adding a "Reset" callback as well for this case.

    I am doubtful about the usefulness of having a reset option that resets all inputs for an entire node. Usually when I want to reset something, I want to reset only a single ValueInput, not all the node’s inputs. Also, before I decide to reset the value of a node, I would like to be able to see whether or not it already has the default value (grey/white coloring). Introducing a ‘Reset node to defaults’ UI mechanism does not solve any of those problems. Therefore I don’t think it is worth implementing a “Reset” method for an entire node.


    Hi — you say that:

    Avoiding serialization of default values is something good generally speaking.

    I think you mean good, from a performance point of view. However, I am not convinced that it is good from a usability point of view.

    I think what made this problematic has to do with the concept ‘default value’ being used for two things at the same time in this discussion: 1) the default value of a datatype, which can be relied upon as a performance optimization (and this is what 2.8 does) and 2) the default value of a parameter, which can be used as a convenience mechanism for the user when adding new nodes to flows.

    Optimizations that ignore serializing/deserializing default(T) values are fine. If done correctly they should never affect functionality, just improve performance.

    So what is the ‘default value of a parameter’ then? It is a different concept than default(T). It should be a value that is assigned to the parameter when adding a new node to a graph (which I presume is what the OnCreate() solution will do). It should ideally also be accessible to a ‘reset parameter to default’ UI feature, and provide grey/white coloring to tell whether the current value matches the ‘default value of the parameter’.

    If people were to use the ‘default value of a parameter’ feature a lot, then the don’t-serialize-default(T)-values optimization becomes less effective. However, that is probably not a big problem in practice.

    Based on this, I do not like solution 1. It looks nice on the surface, but it ties default(T) and default-value-of-parameter together, and this causes problems retroactively when the default value of a parameter is changed in the C# code after a node has already come into use.
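    A minimal Python sketch of that retroactive hazard (serialize/deserialize are hypothetical stand-ins for the real serializer, not NodeCanvas code):

```python
def serialize(values, default):
    """Skip-default optimization: values equal to the default are not written."""
    return {k: v for k, v in values.items() if v != default}

def deserialize(data, ports, default):
    """Missing ports are filled in from the current default at load time."""
    return {k: data.get(k, default) for k in ports}

# A graph is saved while the node's parameter default is 1.0:
saved = serialize({"speed": 1.0, "turn": 0.5}, default=1.0)  # "speed" dropped

# Later the C# programmer changes the parameter default to 2.0:
loaded = deserialize(saved, ["speed", "turn"], default=2.0)
assert loaded["speed"] == 2.0   # the saved 1.0 has silently changed
assert loaded["turn"] == 0.5    # non-default values survive unharmed
```

    Skipping values equal to default(T) does not have this problem, because default(T) for a given type never changes; tying the optimization to a programmer-editable parameter default is what makes it unsafe.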

    I like solution 2, because it is small and does not come with any gotchas. We will likely go with that approach. At this point I must double-check: OnCreate() isn't called when deserializing a saved node, is it? Because if it is called at each deserialization, then this too would suffer from the same problem as solution 1.

    As for solution 3, that is also a viable option for us, but we would prefer not to have any modifications in core FlowCanvas – which is why we prefer solution 2.

    in reply to: Request: Make assets easier to diff in source code form #1783


    I think you misunderstand me.
    I am aware that pushing the internal complex data structures through Unity’s serializers will not work.

    However, if you look at what JSONSerializer does, then via TrySerialize() it walks through complex internal data structures and translates them to something simpler – a set of fsData objects, I think – and then via fsJsonPrinter.CompressedJson() it converts the fsData tree into a single text string.

    For that second step, instead of going fsData -> string, you could go fsData -> an equivalent set of C# classes which are serializable, and then leave a ref to the root of that C# object tree within the class you want to process. You would do all that as part of OnBeforeSerialize(). Then Unity will handle the actual serialization. The results will be well-structured YAML within the .asset file.
    (I am not sure but I suspect it would be possible to even make the fsData class native-serializable…?)

    Does that make more sense?


    OK, summary time:

    * NodeCanvas 2.8 not serializing value inputs equal to default(T) has broken a number of our FlowGraphs. We will go through all our C# node implementations, remove any use of .serializedValue during RegisterPorts() to change ports' default values, and then go through all our FlowGraphs and ensure that their settings are correct.

    * We would like a future update of NodeCanvas to go back to serializing value inputs equal to default(T). At that point, we can reintroduce our current default-value mechanism and continue working like we did up until the 2.8 release.

    * We would like a future update of NodeCanvas to offer a default-value option that works as described toward the end of my last post, assuming you too think it makes sense for the language & editor.


    After a good night's sleep I have a better understanding of what the 'no distinct "default value"' model would mean.


    (I think this is how NodeCanvas 2.7 worked)

    When you add a new node to a flow graph, the node’s value inputs will be assigned values according to default(T) for each input.
    All the node values will be serialized and deserialized, even values that happen to match default(T) at the time of serialization.

    If a node is deserialized and information is missing for some value inputs (because new value inputs have been added to the node's C# implementation), then default(T) will be assigned to those value inputs.

    The UI will display values for value inputs that match default(T) in discrete semi-transparent text, and values that do not match in strong non-transparent text.

    There is a possibility here to support letting the C# programmer change the default value for value inputs. This would be done not by modifying the class's default constructor, but either by adding extra arguments to the ValueInput<> type or by keeping the default value as a member of each ValueInput object. This programmer-specified value would override the use of default(T) in all places in the text above.

    It means that if a C# programmer modifies the default value for an existing value input, then it will only affect new nodes, and nodes which were serialized with an old format where the value input didn’t exist. If the FlowGraph programmer has ever seen (and saved) the node with a certain value displayed for a value input, that value will remain fixed.

    For FlowCanvas 2.7 we could sort of accomplish this by modifying .serializedValue for an input during RegisterPorts(). However, that did not propagate to the UI as described below (which would make the logical model more consistent):

    The UI’s semi-transparent colouring of a value input means: The value of this value input matches the C#-programmer-specified default value right now. It does not mean that the value is the default value, nor that it changes along with the default value. A “Reset to default” function could exist, which would change any value input to its current default value.


    Or, alternatively: abolish the current ‘default’ concept. When a node is added to a flow graph, all of its parameters will be given values somehow (typically assigned the constructor default values), and all these values will be saved by serialization. This ensures that old flows don’t change functionality when someone changes default values.

    With that model, the developer of a new node type could still specify defaults for a value through some mechanism, but it should only be applied when the user adds new nodes, or the first time that a user loads an old graph which does not include values for all the inputs which exist in the latest version of a node.
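    A minimal Python sketch of this alternative model (function names are hypothetical): every value is serialized, and the programmer-specified default only fills gaps.

```python
def serialize_all(values):
    """Always write every value input - no default-skipping."""
    return dict(values)

def deserialize(data, ports, programmer_default):
    """The default applies only to ports missing from the saved data,
    i.e. newly added inputs or brand-new nodes."""
    return {k: data.get(k, programmer_default) for k in ports}

# Graph saved when the node had only a "speed" input:
saved = serialize_all({"speed": 1.0})

# The node later gains a "turn" input, and the default becomes 2.0:
loaded = deserialize(saved, ["speed", "turn"], programmer_default=2.0)
assert loaded == {"speed": 1.0, "turn": 2.0}  # old value sticks; only the
                                              # new port gets the default
```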

    The 'default value' concept I described last here is different from the default-function-argument concept used in C#, but I think this variation maps better to visual programming languages. It also has the benefit that there is no need for a 'set' / 'not set' kind of flag separate from the actual value.

    in reply to: Request: Make assets easier to diff in source code form #1771



    I was thinking a bit more about option #2 (native Unity serialization) recently.

    I think you could accomplish this by creating an alternative to the FullSerializer – a NativeUnityObjectSerializer. It would contain a set of serializable Unity objects that are just enough to represent the constructs that can exist within a JSON structure. Then, in the places in the code where you call JSONSerializer.Serialize() / JSONSerializer.Deserialize(), you would not assign the results to a string field; you would assign them to a serialized-root-object ref instead.

    In order to not break backwards compatibility you would need to keep both the existing FullSerializer and the new NativeUnityObjectSerializer alive in parallel. At load time you would then have to try loading both kinds of data from the asset; at write time you only write one of the types.
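    A rough Python sketch of the parallel-serializer idea (all names hypothetical; the real formats would be FullSerializer JSON and native Unity objects): write only the new format, but try both at load time.

```python
import json

def save(graph):
    """Write only the new format; the legacy slot is left empty."""
    return {"native": json.dumps(graph), "legacy": None}

def load(asset):
    """Prefer the new format; fall back to legacy data if present."""
    if asset.get("native"):
        return json.loads(asset["native"])
    return parse_legacy(asset["legacy"])

def parse_legacy(blob):
    # Stand-in for the old FullSerializer path kept alive for compatibility.
    return json.loads(blob)

# New assets round-trip through the new path:
assert load(save({"nodes": 3})) == {"nodes": 3}
# Old assets, which only carry legacy data, still load:
assert load({"native": None, "legacy": '{"nodes": 2}'}) == {"nodes": 2}
```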

    It is a fair bit more work than the other options, absolutely. However, I think it is worth considering again.

    in reply to: Request: Make assets easier to diff in source code form #1597



    Some more thoughts on this. This is all based on an older version than 2.70 by the way, so some of these points might be obsolete by now.

    1. If you separate out the viewing parameters, i.e. current window and zoom factor, and store those in native Unity serialized fields, then it will be a lot easier for users to tell the difference between “I opened a graph and looked around in it” and “I opened a graph and made some modifications to its content” when diffing .asset files.

    2a. I understand that this is probably a major undertaking, but, serializing the entire graph with Unity’s Serializable types will instantly lead to the data becoming more easily diffable and mergable. Perhaps there is a way to store the graph description as a sort of a data description instead of a functional object structure, but in Serializable types? It might lead to a need for code generation (generating a Serializable type for each node type, to enable Unity to be able to serialize/deserialize them), not sure.

    (2b. Another option could be to store the JSON content in a text file separate from the .asset. It would allow the JSON to be stored as a pretty-printed normal text file. This makes it more cumbersome for users since each graph will be a pair of files instead of just one. Probably not worth it for most people.)

    in reply to: Request: Make assets easier to diff in source code form #1528


    I looked into this a little bit myself. It was quick to change all places which call Graph.Serialize() to always ask for JSON pretty-printed formatting at serialization time. That didn’t help a lot though; Unity encodes the pretty-printed multiline string as a long single-line “line1\r\nline2\r\nline3\r\n…” type string, and I see no way to convince Unity to split a string across multiple lines in the .asset file. Without the newlines in the .asset file, the pretty-printed JSON is not very helpful for diffing.
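    A small Python demonstration of the escaping problem (Unity writes YAML rather than JSON, but the single-line string escaping is analogous):

```python
import json

# Pretty-printed JSON is genuinely multi-line in memory...
pretty = json.dumps({"a": 1, "b": 2}, indent=2)
assert "\n" in pretty

# ...but once embedded as a string field, the writer escapes the newlines,
# so the asset file contains one long "line1\nline2\n..." line.
as_stored = json.dumps(pretty)
assert "\n" not in as_stored   # no real line breaks survive on disk
assert "\\n" in as_stored      # only escaped ones - useless to line diffs
```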
