

SBML Editors' meeting minutes

Editors present: Frank Bergmann, Mike Hucka, Sven Sahle, Lucian Smith, Darren Wilkinson
Editors absent: Jim Schaff
Visitors present: (No visitors)
Location: videoconference using EVO
Scribe: Mike Hucka
Approved by: Lucian, Darren
Recording(s): EVO recording zip archive


What rules should be followed by a tool that doesn't understand a required package?

Nicolas Le Novère started a thread on sbml-discuss where he made several proposals that impact how packages are defined. We need to decide whether we agree with them, and if so, with which ones. To summarize (and to give them numbers for reference below):

  1. Do not redefine the semantics of elements such that the specification of the core is invalid.
  2. Ignoring elements in the package namespace should result in syntactically correct models that tools can still 'do things with' (visualize, annotate, edit to add or remove elements, etc.), even if the mathematics will be wrong.
  3. It would be nice to allow encoding of a non-package version of a core element for programs that do not read your package. A 'random' function might be written in such a way as to always return '0.5' if the software did not understand the 'distrib' package.
  4. What happens if several packages need to be used concurrently, and particular symbols defined in different packages are used in the same mathematical expression? (Can we think of an example?)

(2010-06-09 comment by Mike) The approach for FunctionDefinition that we discussed at the Hackathon in the context of the distrib package would seem to satisfy goals 1–3. The approach was to subclass FunctionDefinition to produce a new type of FunctionDefinition that adds a separate math element in parallel, like this:

    <functionDefinition id="foo">
        <math xmlns="http://www.w3.org/1998/Math/MathML">
            <cn> 0.5 </cn>
        </math>
        <distrib:call xmlns:distrib=""
             function="gaussian" mean="0.5" variance="1.5"/>
    </functionDefinition>
A user of the distrib package would have to know what it means to call the function named gaussian in the package, as well as the arguments it takes (mean, variance, etc.). The semantics would be that software which understands distrib should use the distrib:call part and ignore the math. A software system that did not understand distrib could still use the function definition, and would simply end up with 0.5 as the value, a fallback provided by the creator of the model for just those situations.
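To make the intended fallback behavior concrete, here is a rough sketch, using only the standard library (not libSBML), of how a tool might choose between the two representations. The distrib namespace URI is a placeholder assumption, since the real one is not given here:

```python
# Sketch only: how a tool might interpret the dual math / distrib:call
# encoding. The distrib namespace URI below is a made-up placeholder.
import xml.etree.ElementTree as ET

MATHML_NS = "http://www.w3.org/1998/Math/MathML"
DISTRIB_NS = "http://example.org/distrib"  # placeholder, not the real URI

FUNC_DEF = """
<functionDefinition id="foo" xmlns:distrib="http://example.org/distrib">
  <math xmlns="http://www.w3.org/1998/Math/MathML">
    <cn> 0.5 </cn>
  </math>
  <distrib:call function="gaussian" mean="0.5" variance="1.5"/>
</functionDefinition>
"""

def interpret(xml_text, understands_distrib):
    """Return the distrib call if the tool supports the package,
    otherwise the model author's fallback constant."""
    root = ET.fromstring(xml_text)
    call = root.find("{%s}call" % DISTRIB_NS)
    if understands_distrib and call is not None:
        # distrib-aware software uses the call and ignores the plain math
        return ("distrib", call.get("function"),
                float(call.get("mean")), float(call.get("variance")))
    # everyone else falls back to the constant in the math element
    cn = root.find("{%s}math/{%s}cn" % (MATHML_NS, MATHML_NS))
    return ("fallback", float(cn.text))
```

A distrib-unaware tool calling `interpret(FUNC_DEF, understands_distrib=False)` simply gets the constant 0.5, which is exactly the fallback behavior described above.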

Further discussions during the 2010-06-09 Editors' meeting revealed two possible issues. First, for tools that wish to manipulate SBML models containing extended math which they don't understand, a blanket required="true" flag on the top-level namespace declaration is not granular enough. The tools will want to know which particular elements have had their mathematics changed. What bits are ‘safe’ to manipulate freely, and which must be treated with care? The tools may in fact be able to profitably manipulate a model where they only pay attention to some (and not all) elements, and therefore, a blanket required="true" declaration is not sufficient for their needs. Second, if a model uses more than one package that may override elements or mathematical formulas, then once again required="true" seems insufficient; more useful would be a way to indicate which SBML elements are being affected by which package(s).
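The granularity concern comes from how the flag is declared: in SBML Level 3, a package's required flag appears once, on the top-level sbml element. A sketch (the distrib namespace URI is a placeholder assumption):

```xml
<!-- One blanket flag for the whole document; nothing here indicates
     WHICH elements the distrib package actually modifies. -->
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
      level="3" version="1"
      xmlns:distrib="http://example.org/distrib"
      distrib:required="true">
  <model id="m">
    ...
  </model>
</sbml>
```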

In response to these issues, Lucian previously wrote a quick proposal for a required elements package. This package is designed to allow finer-grained indication of which elements in a model are being overridden.

We had considerable further discussions at this Editors' meeting about this proposal and general topic. The conclusions, in the end, were the following:

  1. The presence of elements or attributes from an L3 package is a strong indication that a given SBML element is being overridden or otherwise modified. It is not yet clear whether the extra information proposed by the req package would really add anything.
  2. A model author will know that they are using features from more than one L3 package. It is natural to expect that authors would be careful about the consequences of mixing constructs from the different L3 packages, just as they would naturally be conscious about how they are using any other constructs. Thus, there was a feeling among some editors that the feared issues may actually not arise in practice.
  3. A software library such as libSBML should be able to provide a method for returning (e.g.) a list of all elements being modified by L3 package constructs in a model. This may be sufficient to address at least one of the concerns listed above.
  4. We spent too much time arguing about nebulous "what if" scenarios in the absence of concrete examples of models with references to multiple L3 packages. We need to get experience with SBML packages and see whether the problems really arise in practice.
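The kind of query imagined in point 3 could be sketched as follows. This is not a real libSBML method, just a stdlib-only illustration that scans a document for core elements carrying child elements or attributes from a non-core namespace (the distrib URI is a placeholder assumption):

```python
# Sketch of "list all elements modified by L3 package constructs".
# Not libSBML; namespace URIs other than core are placeholders.
import xml.etree.ElementTree as ET

CORE_NS = "http://www.sbml.org/sbml/level3/version1/core"

DOC = """
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
      xmlns:distrib="http://example.org/distrib" level="3" version="1">
  <model id="m">
    <listOfFunctionDefinitions>
      <functionDefinition id="foo">
        <distrib:call function="gaussian" mean="0.5" variance="1.5"/>
      </functionDefinition>
    </listOfFunctionDefinitions>
  </model>
</sbml>
"""

def ns_of(tag):
    """Extract the namespace URI from a '{uri}local' tag or attribute key."""
    return tag[1:].split("}")[0] if tag.startswith("{") else ""

def package_modified_elements(xml_text):
    """Return the ids (or tags) of core elements that carry package
    child elements or package-namespaced attributes."""
    modified = []
    for elem in ET.fromstring(xml_text).iter():
        if ns_of(elem.tag) != CORE_NS:
            continue  # only report core elements, not the package bits
        has_pkg_child = any(ns_of(c.tag) not in ("", CORE_NS) for c in elem)
        has_pkg_attr = any(k.startswith("{") and ns_of(k) != CORE_NS
                           for k in elem.attrib)
        if has_pkg_child or has_pkg_attr:
            modified.append(elem.get("id") or elem.tag)
    return modified
```

On the example document, only the functionDefinition "foo" is reported, which is exactly the per-element granularity the tools were said to need.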

The conclusion was to leave things as they currently stand, with the existing required flag mechanism unchanged, and to see whether some of the problems really arise in practice. If they do, we will return to Lucian's proposal as a candidate solution.

What does 'awaiting two implementations' mean?

We have said that L3 RC1 becomes official as soon as we have two implementations. But what does this mean?

During previous discussions, we said the following. First, we should enumerate all newly-introduced SBML elements (in L3 Core), and make tests in the SBML Test Suite for those elements that affect simulations. Then, we need two things:

  • Two simulators or interpreters that successfully implement at least 75% (?) of that list, meaning that they either pass the test suite tests or otherwise implement the new elements.
  • Every item on the list has at least one implementation.
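The two criteria above could have been checked mechanically. A toy sketch, with entirely invented feature and tool names:

```python
# Toy check of the (later abandoned) objective criteria:
# (a) at least two tools each cover >= 75% of the new-feature list, and
# (b) every feature has at least one implementation.
# All feature and simulator names below are invented for illustration.
NEW_FEATURES = ["conversionFactor", "compartment-no-size",
                "avogadro-csymbol", "new-unit-rules"]

IMPLEMENTED = {
    "SimulatorA": {"conversionFactor", "compartment-no-size",
                   "avogadro-csymbol"},
    "SimulatorB": {"conversionFactor", "avogadro-csymbol",
                   "new-unit-rules"},
}

def criteria_met(features, implemented, threshold=0.75):
    # count tools covering at least `threshold` of the feature list
    broad = sum(1 for feats in implemented.values()
                if len(feats & set(features)) / len(features) >= threshold)
    # union of everything implemented by anyone
    covered = set().union(*implemented.values())
    return broad >= 2 and all(f in covered for f in features)
```

Here each invented simulator covers 3 of 4 features (75%) and together they cover all four, so the criteria pass; drop one simulator and they fail.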

During this meeting, we backed away from attempting to define objective rules. Instead, the consensus was to leave it up to the judgement of the SBML Editors. The argument was put forward most strongly by Darren, who pointed out that the coverage in the SBML Test Suite was going to be somewhat subjective anyway: complete coverage is essentially impossible given limited staffing and time, so whatever is chosen to be tested is partly up to the authors of the test suite. Therefore, criteria based on the test suite will not be objective either. Further, the test suite cannot operationally test some aspects of SBML anyway, such as units, because the mathematical interpretation of a model does not depend on them. Finally, we will not be able to require that tools implement support for all SBML features in L3, because it was not required for L2. Taken together, these points argue that it is not feasible to base the decision solely on a requirement for evidence of at least two software tools passing a certain percentage of the test suite test cases.

The consensus was therefore that the SBML Editors must ultimately make the decision about when sufficient evidence appears for two implementations of L3 support. This decision should certainly be based in part on the tools demonstrating some ability to pass L3 test cases in the SBML Test Suite.

TODO: Inform sbml-discuss about this decision, and request that software developers make every attempt to inform the Editors about the development of L3 support.

Is it up to a package specification to decide whether required must always be true or false, or is it up to the modeler?

There appear to be arguments for both approaches.

  • Advantages of specifying it in the package:
    1. Readers can see up front whether the package is 'about' changing what the math means.
  • Advantages of specifying it by the modeler:
    1. A particular model might not use the math-changing aspects of a package (e.g., by using only SpeciesType from multi), in which case a reader could in fact ignore the constructs and work with the model even if it did not implement support for that package.
    2. A particular model might provide a fallback for math-changing elements that results in a different but valid model (e.g., all calls to 'random' returning 0.5), so in some sense a tool does not really have to understand the package to interpret the model.
    3. In addition, it seems reasonable to allow packages to specify required="false" if there's no way to use the package to change the math (for example, in the case of the layout package).
      • Prevents users from changing the meaning of required from 'this will change the math' to 'this will change the intent of what I am doing with this model' (which is much more nebulous).

After some discussion, there was a general consensus that option #2 (specifying on a per-model basis) is more desirable. We need to issue guidelines along the following lines:

  1. Each package's specification must recommend one of the following regarding the value of its required flag:
    1. The value of the required flag is up to the modeler for a given model, but a recommendation might be given: 'true' if most (but not all) of the elements described in the package will change the math, or 'false' if most (but not all) of them will not.
    2. The required flag must always be set to "true" if it is impossible to use any new element described in the package without changing the math.
    3. The required flag must always be set to "false" if it is impossible to use any new element described in the package to change the math.
  2. An SBML document that does not use the recommended value of required is not considered invalid in the first case, but can be considered invalid if the wrong required boolean is set in either of the latter two cases. It would be nice if models that set required="false" were invalid if they did indeed change the math, but this might be hard to enforce in software.
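The validity rule in point 2 amounts to a small decision table. A sketch, with invented policy names corresponding to the three cases above:

```python
# Sketch of the validity rule: whether a model's chosen value of the
# required flag is acceptable under each kind of package recommendation.
# The policy names are invented here for illustration.
def flag_is_valid(policy, model_required):
    if policy == "modeler-choice":   # case 1: any value is valid
        return True
    if policy == "always-true":      # case 2: package cannot avoid changing math
        return model_required is True
    if policy == "always-false":     # case 3: package cannot change math
        return model_required is False
    raise ValueError("unknown policy: %s" % policy)
```

For example, a layout-style package would declare the "always-false" policy, so a model setting required="true" for it would be flagged as invalid.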

TODO: This needs to be agreed upon, finalized, and added to a (still to-be-created) set of guidelines for SBML L3 package authors. The guidelines need to be part of the SBML L3 development process.


This page was last modified 08:31, 24 June 2010.
