Supplementary documents for the 2009 SBML Hackathon

Here you will find additional materials for the 2009 SBML Hackathon.

Suggested activities for the SBML Hackathon

Whether you came to the hackathon with goals already in mind or were simply exploring SBML, the following list of possible activities was intended to keep you productive (and in some cases, to help SBML and the SBML community at the same time!). These were only suggestions, meant to serve as starting points for participants unsure where to begin; people were of course free to do other things!

The "competitions" mentioned in this table are described in a separate section below.

Category 1: Testing existing SBML support
  • Use the online validator to test & debug your SBML output
  • Attend the SBML Test Suite tutorial

Category 2: Upgrading existing SBML support
  • Attend the libSBML 4.x-beta tutorial, then install libSBML 4 and start hacking!

Category 3: Implementing new SBML support
  • If libSBML is the answer, download libSBML & start hacking! (a minimal validation sketch follows this list)
  • If SBMLToolbox is the answer, download SBMLToolbox & start hacking!
  • Else, let's talk about alternatives

Category 4: Enhancing SBML interoperability
  • Get together with someone else and exchange models between your software
  • Work on the Matrix Competition (see below)
  • Work on the LibSBML Documentation Competition (see below)

Category 5: Learning more about SBML
  • Read the SBML intro, then the SBML Level 2 Version 4 specification
  • Find attendees with interesting software & talk to them about SBML

Category 6: Helping the SBML community
  • Work on the Best Practices Competition (see below)
  • Work on the Matrix Competition (see below)
  • Work on the LibSBML Documentation Competition (see below)
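
For participants who took the libSBML route, here is a minimal sketch (not part of the original hackathon materials) of how an SBML file might be read and checked with libSBML's Python bindings, as a complement to the online validator. The function name check_sbml and the file name model.xml are placeholders, not anything defined by libSBML.

    # Minimal sketch: read an SBML file and run libSBML's consistency checks.
    # Assumes the libSBML Python bindings are installed; "model.xml" is a placeholder.
    import libsbml

    def check_sbml(path):
        document = libsbml.readSBML(path)   # parse the file; records XML/read errors
        document.checkConsistency()         # run libSBML's built-in consistency checks
        for i in range(document.getNumErrors()):
            err = document.getError(i)
            print("%s (line %d): %s"
                  % (err.getSeverityAsString(), err.getLine(), err.getMessage()))
        # Treat only error/fatal diagnostics (not infos or warnings) as failures.
        return all(document.getError(i).getSeverity() < libsbml.LIBSBML_SEV_ERROR
                   for i in range(document.getNumErrors()))

    if __name__ == "__main__":
        if check_sbml("model.xml"):
            print("No errors reported by libSBML.")
        else:
            print("libSBML reported problems; see the messages above.")

Essentially the same calls (readSBML, checkConsistency, getNumErrors, getError) exist in the other language bindings, so the structure carries over to C++, Java, and the rest.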

Competition rules

New in 2009, we introduced small competitions with prizes for the winners. The prizes were 16 GB SanDisk Cruzer Titanium Ultra USB drives. There were three competitions (and three prizes). The competitions were scored independently; individuals were permitted to participate in more than one, but their scores in each competition were kept separate and were not added together.

The competitions closed at 16:00 (4 PM) on Friday, so that we had enough time to tally the results and announce the winners at the dinner on Friday night.

Matrix Competition

The SBML Software Guide includes descriptions of known SBML-compatible software packages as well as a matrix of those packages' features. Although the SBML Team requests that software authors send updates about their software, few do. It is likely that there are many out-of-date entries in the matrix, but the SBML Team simply doesn't have the staff to check every single entry. This was a situation where a bit of distributed parallel processing could be put to great effect.

The goal of this competition was to validate entries in the Matrix (and optionally in the accompanying longer descriptions of the entries). Scoring was based simply on finding and correcting errors; a small worked tally follows the list of rules below. The scoring rules were as follows:

  1. Every square in the matrix was worth 2 points.
    • Example: you found that the matrix claims software package "BIGSBML" supports DAEs, but you investigated it and found that in fact it does not. This is a correction to one square in the matrix, and was worth 2 points.
    • Example: you found that the matrix claims software package "GREATSBML" has no dependencies, but you checked, and discovered that it actually needs MATLAB. This was a correction to one square in the matrix, and was worth 2 points.
  2. Finding an entry for a package that no longer exists was worth 5 points.
    • Explanation: there are 20 columns in the matrix. Without a special case, removing a whole row would constitute changing 20 squares (= 40 points), yet it would not reflect the same amount of work as checking the 20 individual features of an existing package.
  3. Adding a missing package and filling out its features was worth 30 points.
    • Explanation: adding a whole new entry was generally less time-consuming than checking the 20 columns of an existing entry one by one (which would otherwise add up to 40 points).
  4. Bonus points:
    • +5 points for providing a new or updated paragraph for use in the software summary page.
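
As a purely hypothetical illustration of how a total would have been tallied under the rules above (the counts below are invented, not actual 2009 results):

    # Hypothetical tally under the Matrix Competition scoring rules above.
    # The participant's counts are invented for illustration only.
    POINTS = {
        "corrected square":        2,   # rule 1
        "defunct package entry":   5,   # rule 2
        "new package entry":       30,  # rule 3
        "summary paragraph bonus": 5,   # rule 4
    }

    counts = {
        "corrected square":        7,
        "defunct package entry":   1,
        "new package entry":       1,
        "summary paragraph bonus": 2,
    }

    total = sum(POINTS[item] * n for item, n in counts.items())
    print(total)   # 7*2 + 1*5 + 1*30 + 2*5 = 59 points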

To participate, each individual composed a single document listing every change they proposed to the matrix. The format of the document could be anything (e.g., plain text), as long as it described each change in a way that a judge could easily understand. It could be simple and uncomplicated, but it had to be unambiguous enough that the judges could evaluate the results without consulting the author.

Each individual's score was tallied separately from every other person's. It did not matter if more than one person reported the same specific correction; everyone who reported it received the same number of points.

In cases of conflicting reports between multiple people for the same matrix correction, a knowledgeable person determined which report was correct. The person(s) who made the correct report got the points.

LibSBML Documentation Competition

Due to its size and continuously changing nature, the libSBML API documentation may contain errors. Simple lack of time and staff prevents the libSBML Team from chasing down errors in a timely fashion, yet correcting the documentation is obviously important. This was another situation where engaging multiple people (especially libSBML users) was likely to yield good results.

The activity in this competition involved finding and reporting errors of content in the libSBML API documentation. The scoring rules were as follows:

  1. Each correction to a specific matter was worth 3 points.
    • Explanation: the focus here was on claims about features or behaviors that were contrary to the actual features or behaviors exhibited by a method, class, or other API element. Due to the nature of documentation, the explanation of a feature or behavior may take several sentences, which made finding an appropriate scoring metric difficult. The most sensible approach seemed to be to focus on topics and not (for example) single words, sentences, or paragraphs.
  2. Missing argument or return type descriptions were worth 2 points per instance.
  3. Wholly missing documentation for a method or class was worth 1 point.
    • Explanation: spotting completely missing documentation is much easier than finding errors in the content.
  4. Corrections were counted separately for each method or class where they applied.
    • A correction that affects multiple methods (e.g., a missing parameter description) was counted multiple times, once for each method where the correction was repeated.
  5. Things that did not count as corrections:
    • Formatting bugs (but we appreciated reports of them nonetheless).
    • Reports of the same matter in the documentation for different programming languages
      • Explanation: the libSBML documentation is generated from a single common source, so a bug in one language almost certainly exists in the others as well. Counting them separately would be unfair.
    • Documentation bugs that arose from the documentation generation system, and not from author errors

You could report the errors for whichever language API you preferred to use.

Error reports were submitted using the issue tracker on SourceForge. Participants were asked to log in so that their identities were recorded and we knew to whom each report should be attributed. Each submission covered a single issue/matter/bug; otherwise it would have been difficult for us to tally the results.

Each individual's score was tallied separately from every other person's. It did not matter if more than one person reported the same specific correction; everyone who reported it received the same number of points.

In cases of conflicting reports between multiple people for the same documentation correction, a knowledgeable person determined which report was correct. The person(s) who made the correct report got the points.

Best Poster Competition

To recognize the effort put into preparing and presenting posters, we held a competition for the best poster at the Hackathon. The posters were ranked by attendees via electronic voting: the voting page allowed each person to vote for the 3 posters they found most notable out of the many on display.

Competition winners!

We announced the winners at the dinner held in Cambridge on March 28. The winners of the 3 competitions were:

  • Matrix competition: Herbert Sauro
  • LibSBML documentation: Nicolas Rodriguez
  • Best poster: Andreas Dräger, for his and his colleague's poster about SBML2LaTeX

Thank you to everyone (and there were many of you) who participated in the competitions and helped improve the state of SBML information for the whole community.
