A brief talk about the recent endeavour to introduce and encourage code sharing among the participants and wider community of the MediaEval Benchmark Initiative.
4. The Process So Far…
[Cycle diagram: Task → Data → Build → Evaluate → Report]
5. [Cycle diagram: the same process with a validation step added: Task → Data → Build → Evaluate → Report → Validate]
6. Good Science
Reproducibility
› Allows us to validate our own teams' approaches
› Allows us to verify the work of others
› Allows us to build upon the work of others
What else do we need to support reproducibility?
› Code
› Documentation
› …with ease of access
› Thought given to computational resources
7. Options
Hosting our own public repository
› We would need the infrastructure
• But this could also be used for hosting data, our own wiki, etc.
Using existing solutions (SourceForge, GitHub, etc.)
› Greater public exposure
› … but less control
8. Obstacles
Hard:
› Legal issues from some institutions regarding IP; we need to decide on a licensing policy that is both permissive and pragmatic
Easy:
› Getting researchers to want access to other people’s code
Hard:
› Getting researchers to share their own code
9. Example: Placing Task 2012
Using GitHub
Both organiser code and participant code
Some already available (need a central place for links: the MediaEval website?)
Learning how to structure code submissions for consistency (see the sketch below)
Lessons learnt this year will feed into guidelines for next year
What works for Placing may not be optimal for other tasks, but there is much cross-over
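
As an illustration of what "structuring code submissions for consistency" might look like in practice, here is a minimal sketch in Python that checks a submission directory against a hypothetical minimal layout. The required file names (README, LICENSE, run.sh) and the check itself are assumptions made for illustration, not an agreed MediaEval convention.

    #!/usr/bin/env python
    """Illustrative sketch only: check a submission directory against a
    hypothetical minimal layout. The file names below are assumptions for
    illustration, not an agreed MediaEval convention."""
    import os
    import sys

    # Hypothetical minimal layout a task might ask of every submission.
    REQUIRED = [
        "README",   # what the code does, how to run it, how to cite it
        "LICENSE",  # addresses the licensing question from the Obstacles slide
        "run.sh",   # a single entry point, so others can reproduce the run
    ]

    def check_submission(path):
        """Return the required files missing from the directory at `path`."""
        return [name for name in REQUIRED
                if not os.path.exists(os.path.join(path, name))]

    if __name__ == "__main__":
        directory = sys.argv[1] if len(sys.argv) > 1 else "."
        missing = check_submission(directory)
        if missing:
            print("Submission is missing: " + ", ".join(missing))
            sys.exit(1)
        print("Submission layout looks consistent.")

A task organiser could run a check like this when a repository link is registered, which would keep any consistency guideline cheap to enforce.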
Virtual Kitchen + Code Sharing = Better Science