SBML.org — the global portal for all things SBML

Supplementary materials for the 2010 Hackathon

Here you will find additional materials that were important for the 2010 Hackathon.


Suggested activities for the Hackathon

Whether you came to the hackathon with goals already in mind, or were exploring SBML and the many related initiatives, the following list of possible activities was intended to keep you productive (and in some cases, to help the SBML and related communities at the same time!). These were only suggestions, meant as starting points for anyone unsure where to begin; people were of course free to do other things!

The "competitions" mentioned in this table are described in a separate section below.

Categories and activity alternatives:

  1. Learning more about SBML Level 3
    • Read the SBML intro, then the SBML Level 3 Version 1 specification
    • Find attendees with interesting software & talk to them about SBML
  2. Testing existing SBML support
    • Use the online validator to test & debug your SBML output
    • Develop additional test cases for the SBML Test Suite
  3. Upgrading existing SBML support
    • Attend the libSBML 5 introduction, then install libSBML and start hacking!
    • Attend the jSBML introduction, then get involved in hacking on jSBML
  4. Implementing new SBML support
    • If libSBML is the answer, download libSBML & start hacking!
    • If SBMLToolbox is the answer, download SBMLToolbox & start hacking!
    • Otherwise, let's talk about alternatives
  5. Enhancing SBML interoperability
    • Get together with someone and exchange models between your software
    • Work on one of the competition topics (see below)
  6. Learning about services
    • Attend the Services training session
    • Attend the community development session
    • Work on one of the competition tasks (see below)
  7. Learning about SED-ML
    • Attend the SED-ML update talk
    • Implement SED-ML support in your software
  8. Gaining experience in model annotation
    • Attend the Services training session
    • Annotate models using, e.g., semanticSBML or SAINT
    • Help Ron and Dagmar in tagging models in BioModels Database

Competition rules

Following the success of this idea at the 2009 hackathon, in 2010 we once again held small competitions with prizes for the winners. The prizes were a Canary Wireless Hotspotter and $25 gift certificates to Amazon. There were three competitions (and three prizes). The competitions were scored independently; individuals could participate in more than one, but their scores in each competition were kept separate and were not added up. Also, there had to be more than one contestant in a given competition for that competition to be considered valid. (In other words, there were no winners by default.)

The competitions closed at 8 PM on Monday, so that we had enough time to tally the results and announce the winners on Tuesday morning.

The following are descriptions of the competitions.

Facilities bashing

The objective of this competition was to look for and report problems in the various facilities (both BioModels Database and the other services). Scoring was based on the number and type of suggestions, bug reports, and other useful comments provided by people. The detailed rules were as follows:

  1. BioModels Database model errors:
    • Downloading a model and finding an error in the model was worth 3 points per error. An error in this case could be an incorrect simulation result, an error in a parameter value compared to the published model, an incorrect annotation, or a similar significant problem.
  2. Services behavior errors:
    • A report of an error or clearly unexpected behavior in any of the web services or web pages for these efforts (e.g., BioModels Database, SBO, MIRIAM Resources, etc.) was worth 2 points.
  3. Documentation errors:
    • Finding and reporting an error in the documentation for these efforts was worth 1 point.

Each individual's score was tallied separately from every other person's. It did not matter if more than one person reported the same specific issue; everyone reporting the issue received the same number of points.

In cases of conflicting reports between multiple people for the same problem, a knowledgeable person sought to determine which report was correct. The person(s) who made the correct report got the point(s).
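To illustrate the point system above, here is a minimal sketch of how per-person scores could have been tallied. The report data, person names, and category labels are invented for illustration; only the point values (3/2/1) come from the rules above.

```python
# Hypothetical tally of facilities-bashing scores; the data below is
# invented for illustration, only the point values come from the rules.
from collections import defaultdict

# Points awarded per report category, per the rules above.
POINTS = {
    "model_error": 3,          # error found in a BioModels Database model
    "service_error": 2,        # web service or web page misbehavior
    "documentation_error": 1,  # error in the documentation
}

def tally_scores(reports):
    """Sum each person's points. Note that duplicate reports of the same
    issue by different people all count, as the rules above state."""
    scores = defaultdict(int)
    for person, category in reports:
        scores[person] += POINTS[category]
    return dict(scores)

# Example: two hypothetical participants reporting a mix of issues.
reports = [
    ("alice", "model_error"),
    ("alice", "documentation_error"),
    ("bob", "service_error"),
    ("bob", "model_error"),
]
print(tally_scores(reports))  # {'alice': 4, 'bob': 5}
```

Under these rules Alice scores 3 + 1 = 4 points and Bob scores 2 + 3 = 5, so Bob would win this (hypothetical) tally.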

Improving jSBML

The jSBML library was under active development. This competition focused on developing new features for jSBML. Participants could check out the latest source code from the jSBML project on SourceForge. Scoring was based simply on the following rules:

  1. Implementation of new functionality (e.g., implementation of new methods, enhancement of existing methods): 2 points per line of source code.
  2. Writing documentation:
    • 2 points per sentence of user documentation (not code documentation, but separate documentation explaining how to use jSBML).
    • 1 point per line of documentation in the code. This could be comments explaining what certain lines of code are doing, or explaining the purpose of a class, or the purpose of a variable.

Best Poster Competition

To recognize the effort put into preparing and presenting posters, we held a competition for the best poster at the Hackathon. The posters were ranked by everyone via electronic voting. The voting page allowed each person to vote for 3 posters, so voters chose the 3 most notable posters out of the many on display.

The voting page is closed.

Competition winners!

We announced the winners of the competitions on the last day of the hackathon. The winners were:

  • Facilities bashing: Michael Hoehl
  • Best poster: Deepak Chandran, for his poster on TinkerCell

Unfortunately, there was no winner in the jSBML competition, because there was only one contestant, and the rules required at least two participants for a contest to be valid.

Please use our issue tracking system for any questions or suggestions about this website. This page was last modified 02:10, 27 January 2011.