Here is an example of a node test that passes when run in MPS, both in-process and out-of-process, but fails when run from Ant:

test theTest {
  model workspaceModel = model-ptr/tests.test_workspace/
  try {
    node<ClassConcept> root = workspaceModel.add root(
      <ClassConcept(name: "TestClass")>);
    assert 0 equals workspaceModel.nodes(InstanceMethodDeclaration).size;
    // reconstructed: add a method to the class between the two assertions
    root.member.add(<InstanceMethodDeclaration(
      name: "bar",
      returnType: VoidType(),
      body: StatementList())>);
    assert 1 equals workspaceModel.nodes(InstanceMethodDeclaration).size;
  } finally {
    workspaceModel.roots(<all>).forEach({~it => it.detach; });
  }
}

The model tests.test_workspace is an empty model that is used as a “workspace” where the test can add its nodes.

The failure message indicates the last assertion has failed:

junit.framework.AssertionFailedError: expected:<1> but was:<0>

Can you spot the problem?

MPS and Ant environments are different

While MPS tries to keep the environment for in-process tests similar to that of out-of-process or Ant-based runs, there will always be differences.

One such difference is that when tests are run from Ant, all modules are packaged in JAR files and all models are read-only. Unfortunately, MPS allows adding nodes to a read-only model: recent MPS versions log an error, but the operation still succeeds.

At the same time, the nodes(SomeConcept) operation builds a cache of its results and never updates that cache for read-only models. Combined, these two facts lead to incorrect test results, but only when the tests are run from Ant (e.g. on a build server).
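The interplay of these two facts is easy to reproduce outside MPS. Here is a minimal Python sketch of the same failure mode — all names are invented for illustration, and this is an analogy, not MPS's actual implementation: a model that lets writes to a read-only instance succeed (merely logging an error), combined with a node query that caches its result for read-only models and never invalidates it.

```python
class Model:
    """Toy stand-in for an MPS model (illustration only)."""

    def __init__(self, read_only):
        self.read_only = read_only
        self.nodes_by_concept = {}
        self._query_cache = {}  # concept -> cached query result

    def add_node(self, concept, node):
        if self.read_only:
            # Like MPS: an error is logged, but the write still succeeds.
            print("ERROR: attempt to modify a read-only model")
        self.nodes_by_concept.setdefault(concept, []).append(node)

    def nodes(self, concept):
        # Read-only models are assumed immutable, so their query
        # results are cached once and never invalidated.
        if self.read_only:
            if concept not in self._query_cache:
                self._query_cache[concept] = list(
                    self.nodes_by_concept.get(concept, []))
            return self._query_cache[concept]
        return list(self.nodes_by_concept.get(concept, []))


# In-process run: the model is editable, so the query sees the new node.
editable = Model(read_only=False)
assert len(editable.nodes("InstanceMethodDeclaration")) == 0
editable.add_node("InstanceMethodDeclaration", "bar")
assert len(editable.nodes("InstanceMethodDeclaration")) == 1

# Ant run: the model is read-only. The first query caches an empty
# result; the write succeeds anyway; the second query returns the
# stale cached result, so the assertion on 1 fails with "was: 0".
packaged = Model(read_only=True)
assert len(packaged.nodes("InstanceMethodDeclaration")) == 0
packaged.add_node("InstanceMethodDeclaration", "bar")
assert len(packaged.nodes("InstanceMethodDeclaration")) == 0  # stale cache!
```

The sketch shows why the same test code produces different answers in the two environments: the bug is not in the test's logic but in the combination of a permissive write path and a cache keyed on the read-only flag.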

Immediate solution: Use a temporary model

Don’t modify existing models in tests. Instead, create a new temporary model and use it:

test theTest {
  model workspaceModel = TemporaryModels.getInstance().createEditable(false,
    TempModuleOptions.forDefaultModule()); // options argument reconstructed; check TempModuleOptions for alternatives
  try {
    // ... same test body as above ...
  } finally {
    // Detaching the roots by hand is no longer necessary:
    // workspaceModel.roots(<all>).forEach({~it => it.detach; });
    TemporaryModels.getInstance().dispose(workspaceModel);
  }
}
General advice: separate concerns

The original piece of code that inspired this email was a test for a large and complicated importer. Failures in such tests are difficult to pinpoint, and it helps to split complicated logic into several steps that are tested separately.

An importer, for example, could be split into two parts:

  1. A function that reads the input data and produces a collection of free-floating nodes (not attached to any model) representing it.
  2. Another function that takes the free-floating nodes and synchronizes a destination model with those nodes (adding new nodes, modifying existing nodes, deleting or marking removed nodes).
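The two-step split above can be sketched as follows — in Python, with invented data shapes standing in for real MPS nodes, so treat this as a shape of the design rather than actual importer code:

```python
def parse_input(lines):
    """Step 1: read the input data and produce free-floating node
    descriptions, not attached to any model."""
    nodes = []
    for line in lines:
        name, kind = line.split(":")
        nodes.append({"name": name.strip(), "kind": kind.strip()})
    return nodes


def sync_model(model, nodes):
    """Step 2: synchronize a destination model (here just a dict keyed
    by name) with the parsed nodes: add new, update existing, delete
    the ones no longer present in the input."""
    wanted = {n["name"]: n for n in nodes}
    for name in list(model):
        if name not in wanted:
            del model[name]  # or mark as removed instead of deleting
    for name, node in wanted.items():
        model[name] = node
    return model


# Each step can now be tested in isolation:
parsed = parse_input(["foo: method", "bar: field"])
assert parsed == [{"name": "foo", "kind": "method"},
                  {"name": "bar", "kind": "field"}]

model = {"obsolete": {"name": "obsolete", "kind": "method"}}
sync_model(model, parsed)
assert set(model) == {"foo", "bar"}
```

A failing parse test points at the input-reading code; a failing sync test points at the model-updating code — neither forces you to debug the whole importer at once.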

These two functions can be tested in isolation and any potential failures will be twice as easy to pinpoint.