Draft competition 2

In an effort to gather more test cases and make simulator support for all of SBML more robust, we are soliciting new test cases for the SBML Test Suite (http://sbml.org/Facilities/Online_SBML_Test_Suite). Points will be awarded for each unique, original test (as defined below) as follows:

  • 1 point: The test case consists of the absolute minimum: a test model in SBML format, a results file in .csv format, and a settings file in the format used by current SBML Test Suite test cases.
  • 3 points: There is a properly-formatted test directory, with valid SBML file(s) and properly-formatted model (.m), settings, and results files, including a cogent summary of what your test is supposed to be testing (see the sketch of a test directory and settings file after this list).
  • 3 points: Getting all the Test Suite tags for your test correct.
  • 3 points: Getting the results for your test correct. If there is controversy over the proper interpretation of your test, the correct results will be determined either by consensus among the simulator writers or by the SBML editors.
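
For concreteness, here is a sketch of what a complete test directory might look like, following the naming conventions of existing test suite cases (the case number 01186 and the SBML levels shown are purely illustrative):

    01186/
        01186-model.m
        01186-sbml-l2v4.xml
        01186-sbml-l3v1.xml
        01186-settings.txt
        01186-results.csv

A minimal settings file in the current format might read as follows; the species identifiers S1 and S2 and the numeric values are stand-ins, not prescriptions:

    start: 0
    duration: 10.0
    steps: 50
    variables: S1, S2
    absolute: 1.0e-07
    relative: 1.0e-04
    amount: S1, S2
    concentration: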

A unique test in this context means unique in conceptual design, not in numerical values: multiple submitted models in which nothing changes but the numbers count as a single entry. Entries that match the conceptual design of an existing test suite test too closely may likewise be discarded. (To check, find the models in the test suite with exactly the same set of test tags as yours; one rough way to do this is sketched below.)
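
Assuming you have a local copy of the test suite cases laid out as sketched above, one rough filter (it checks tag mentions, not an exact set match) is to grep the .m files; the tag names here are only examples:

    # list cases whose model file mentions both example tags
    grep -l "LocalParameters" */*-model.m | xargs grep -l "AssignmentRule"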

In addition, your tests taken as a whole will gain points for the following:

  • 20 points: Providing one or more tests with a combination of tags that is covered by fewer than three existing tests in the test suite. The libSBML team must agree that the tag combination is well-integrated, meaning that each component influences the others. (20 points per combination.)

    Note that it will be most helpful to simulator writers if you provide a test containing the smallest number of tags that nevertheless reveals the problem in the simulator. We already have a 'kitchen sink' model in the test suite (model 1000), so we don't need more tests of that type. What we need are focused tests that pinpoint specific issues with particular combinations of tags, and tests that use existing tags in new ways (a sketch of how tags are declared appears after this list).
  • 50 points: Revealing inconsistent interpretations of SBML. If two or more simulators from our test bed get different results from your test, and each group (not including your own) claims that its interpretation is correct, we'll resolve the dispute among the editors and/or the community, and award you 50 points per controversy.
  • 50 points: For each simulator in our test bed that generally handles models with the same tags correctly and produces no warnings or error messages, yet fails to produce the expected results for your test (50 points per issue, at most one per test). No points are given for numerical integration errors; the test suite expressly tests SBML concepts, not numerical integration.
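
To illustrate the tag declarations mentioned above: tags live in the header of each test's .m file. A sketch of those header fields, with values invented for this example, might look like the following:

    category:      Test
    synopsis:      Two reactions with a local parameter shadowing a global one.
    componentTags: Compartment, Species, Reaction, Parameter
    testTags:      Amount, LocalParameters
    testType:      TimeCourse
    levels:        2.4, 3.1
    generatedBy:   Numeric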

As many of these items are somewhat subjective, we will strive to evaluate them as anonymously as possible. Entries are to be submitted to Linda Taddeo (ltaddeo@caltech.edu), who will remove identifying information from the submissions and then pass them on to the libSBML Team for assessment.

The following prizes are available for different point levels:

  • 50 points or more: An SBML T-shirt in the size of your choice.
  • 250 points or more: An SBML T-shirt plus US$200 of travel support to attend HARMONY 2012.
  • Highest number of points across all submissions: An SBML T-shirt, travel support to HARMONY 2012, plus a US$100 gift certificate for Amazon.com.

A maximum of three (3) awards of travel support will be made. If more than three individuals or groups garner 250 points or more, we will select the three awardees by precedence: the earliest submissions win.

All decisions of the team will be final.


