The emergence and spread of drug-resistant pathogens have made antimicrobial resistance (AMR) a major public health concern. To date, AMR surveillance in Europe and elsewhere relies mainly on indicator-based surveillance, involving structured data collection according to clear case definitions. Seeking to support the early detection, assessment, and monitoring of current and future AMR threats across Europe, the MOOD project aims to explore the opportunities of mining unstructured surveillance data, including data from media sources.
How does it work?
In this hackathon, we will form interdisciplinary teams that will work collectively on a technical challenge. A task and a data corpus will be presented to your team on the day of the hackathon. Your team will be challenged to develop new technical solutions that mine and/or visualise unstructured media data. The main objective of the task is to develop and test classification approaches that automatically identify text on AMR events and types of AMR issues (e.g. animal, food, etc.) in unstructured data (e.g. news, tweets) and classify these events by relevance for epidemic intelligence purposes. Eligible methods will largely be those covered during the summer school, but methods beyond those covered are more than welcome.
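As a rough illustration of the kind of classification involved, the sketch below implements a toy keyword-matching baseline: it flags a text snippet as AMR-related and assigns a coarse issue category. The keyword lists, category names, and the `classify` function are all illustrative assumptions, not part of the actual MOOD task or corpus.

```python
# Toy baseline classifier for AMR-related media text (illustrative only).
# Keywords and categories are hypothetical examples, not the MOOD task definitions.

AMR_KEYWORDS = {
    "antimicrobial resistance", "antibiotic-resistant",
    "drug-resistant", "superbug", "mrsa",
}
CATEGORY_KEYWORDS = {
    "animal": {"poultry", "livestock", "cattle", "veterinary"},
    "food": {"meat", "food chain", "retail"},
    "human": {"hospital", "patient", "clinic"},
}

def classify(text):
    """Return (is_amr_related, category) for a raw news snippet."""
    lowered = text.lower()
    # Relevance: does the snippet mention any AMR keyword at all?
    is_amr = any(kw in lowered for kw in AMR_KEYWORDS)
    if not is_amr:
        return False, None
    # Coarse issue type: first category whose keywords appear in the text.
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return True, category
    return True, "unspecified"

print(classify("Drug-resistant Salmonella found in poultry farms"))  # → (True, 'animal')
```

A real submission would replace this keyword lookup with a trained model (e.g. a supervised text classifier), but the input/output shape, raw text in, relevance and category out, would be the same.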
At the end of the hackathon challenge, your team will present the developed methodology and outcomes to a jury, accompanied by underlying arguments on what makes your solution innovative and efficient.
The winning team
The MOOD hackathon jury board is pleased to award the 2022 Hackathon Prize to the winning team, formed by:
1. Nejat Arinik
2. Rajaonarifara Elinambinina
3. Sara Rose Wijburg
4. Loïc Dutrieux
5. Rodrique Kafando
This award, consisting of a lifetime subscription to leanpub.com and the peer-reviewed, Open Access journal PeerJ.com, was attributed by the MOOD hackathon jury board in Montpellier, France, on 22 June 2022 for the best and most creative solution.
Evaluation criteria
The evaluation of the teams took into account two aspects:
1) The objective, task-specific evaluation (Underperformed / Same / Outperformed)
The suggested evaluation scores are the well-known metrics used for classification tasks: Precision, Recall and F-score.
The baseline techniques presented during the summer school provide the following scores: Precision = ICI, Recall = ICI, and F-score = ICI.
2) The qualitative evaluation (Not innovative / Weakly innovative / Strongly innovative) of how the team presents its results and how interesting the extracted information/knowledge is.
How innovative is the proposed methodology?
What kind of information could be extracted and how useful is it?
Jury board
The jury was composed of:
– Esther van Kleef
– Wim van Bortel
– Maguelonne Teisseire
– Mathieu Rocher
– With the special presence of Eric Cardinale, DVM, PhD,
Head of veterinary public health at CIRAD