The Large Hadron Collider at CERN produces petabytes of data per day at the four large experiments (ALICE, ATLAS, CMS, and LHCb), which together have roughly 10,000 collaborators. Processing this amount of data poses many demanding challenges that require the development and deployment of state-of-the-art machine learning solutions. Applications range from small to truly large scales, with inference times from very fast (a few µs) to modest (many seconds). The Inter-experimental Machine Learning (IML) Working Group provides a forum for the machine learning community at the LHC. It brings together scientists from the LHC experiments, connects them to the data science community, fosters common inter-experimental solutions, and provides training and benchmarks. Each experiment is represented by an IML coordinator. The IML Working Group is hosted and supported by the LHC Physics Center at CERN (LPCC). (For a formal definition of the group, please refer to the Mandate.)
What we do
IML organizes monthly meetings on a variety of subjects. These meetings are often topic-oriented (focusing on a particular ML technique) and may include external experts. Each spring, IML holds its annual workshop, typically with roughly 300 participants, featuring invited talks from data scientists, contributed talks, and tutorials.
IML also serves as an entry point for finding LHC-specific machine learning resources, such as software solutions for machine learning starting from the common ROOT file format. We provide a forum for community-driven summaries of software solutions, announce LHC-tailored trainings and schools, and list relevant papers and the people involved. We can help find temporary hardware resources (GPUs) for tests. We are currently building a database of benchmark datasets and challenges to make it easier to test new methods in our domain against established ones.
How to join
Sign up to our forum, join our meetings, and contact us with any questions or inquiries.
IML WG Coordinators: firstname.lastname@example.org