Research
What is our universe made of?
This simple question is surprisingly difficult to answer. You might say “molecules” or “atoms”, and while this is true, there is an even more fundamental answer. Subatomic particles are the building blocks of our universe. Characterizing their properties and how they interact with one another requires reaching the highest energies to probe the smallest scales. Thousands of scientists at specialized experiments like CERN’s Large Hadron Collider (LHC) are dedicated to measuring the properties of the particles that make up our universe. But knowing what to search for and understanding the results of these experiments requires a mathematical framework to describe the underlying interactions. That’s where high-energy theorists like me come into the picture.
Our current theory of particles and their interactions, the Standard Model (SM) of particle physics, is incomplete. It cannot explain the existence of dark matter, nor can it account for the observed neutrino masses. Interestingly, we have yet to find substantial deviations from the SM’s predictions at colliders. While colliders remain our most precise and controlled way to study the properties and interactions of particles, efficiently analyzing the data is technically challenging. My work leverages state-of-the-art tools from machine learning and optimal transport theory to overcome these challenges.
At the same time, the range and importance of machine learning applications are growing every day. There is a need, both in science and society, to make machine learning more reliable and interpretable. These goals have been highlighted in recent funding calls by the NSF, DOE, and PCAST. Thus, my work has also explored how physics can be used to better understand neural networks.
Along these lines, in Fall 2025, I will be the lead organizer of a 7-week KITP program, “Generative AI for High and Low Energy Physics”, which will bring together experts from high energy and condensed matter physics as well as AI industry and academia. Due to both its impressive performance and universality, generative AI has been broadly adopted to help meet the growing need for complex, large-scale simulations in high energy and condensed matter physics. Scientific simulations, however, require assurances of uncertainty quantification and interpretability, aspects that are comparatively lacking in current generative AI methods. This program will foster collaborations to improve both the capabilities and interpretability of generative AI algorithms used in physics simulations. If you are interested in applying to this program, the deadline for full consideration is December 8, 2024.