Jon Krohn: 00:00 This is episode number 980 on AI making breakthroughs in theoretical physics. Welcome back to the SuperDataScience Podcast. I’m your host, Jon Krohn. Over the past few years on this show, we’ve talked a lot about AI becoming essentially magical on practical everyday problems, things like coding assistants, obviously, document summarization, content generation, and on and on. But in early 2026, something happened in theoretical physics that demonstrates a much more profound shift in what AI is capable of. A team of physicists used OpenAI’s models, not just as a tool, but as what they described as a collaborator, to crack a problem in particle physics that had stymied them for months. Two preprints on arXiv came out of this work, and they’re generating enormous buzz across both the AI and physics communities. Let me walk you through what happened and why it matters. To set the scene, a group of four theoretical physicists whose names I’m probably going to butcher, just like I butcher saying physicists.
01:02 Yeah, these four theoretical physicists, Andrew Strominger and Alfredo Guevara from the Institute for Advanced Study, David Skinner from Cambridge, and Alexandru Lupsasca from Vanderbilt, who is now working at OpenAI. The four of them had been studying a particular class of interactions involving fundamental particles called gluons. Gluons are the particles that transmit the strong nuclear force, which is one of the four fundamental forces of nature alongside gravity, electromagnetism, and the weak nuclear force. As the wonderfully descriptive gluon name suggests, they are basically the glue that holds quarks, other subatomic particles, together inside protons and neutrons, which means that gluons are essential to holding atomic nuclei, and by extension basically all matter, together. Now, here’s what makes particle physics so mathematically gnarly. Subatomic particles obey the laws of quantum physics, meaning their behavior is inherently probabilistic. When particles collide, you can’t definitively predict the outcome.
02:06 All physicists can do is calculate the probability of various outcomes. And these probabilities are encoded in mathematical quantities called scattering amplitudes. These amplitudes are notoriously challenging to compute because they can involve hundreds of intricate mathematical terms. Think of it as trying to describe every possible way a set of billiard balls could scatter after a break, except the billiard balls are quantum particles, so the number of possible outcomes and the complexity of the math grow dramatically as you add more particles. Now, here’s where it gets interesting. For a specific type of gluon interaction, what physicists call single-minus configurations, where one gluon has a particular spin orientation called negative helicity and the rest have positive helicity, so single minus, one negative. The standard textbook argument for decades was that the scattering amplitudes must be zero. In other words, physicists believed these interactions simply could not occur under any circumstances.
03:04 This team, however, suspected that conclusion was too strong. They had noticed that if the momenta of the particles are arranged in a very specific way, a precise alignment known as the half-collinear regime, the usual reasoning that forces the amplitude to zero no longer applies. When they worked out the math for small numbers of gluons, say four or five, this turned out to be right. But as they tried to generalize the formula for any number of gluons, the expressions became dozens of terms long and essentially unworkable. After about a year of grinding away by hand, the researchers were stuck. Enter AI. Lupsasca, one of those four authors, had recently joined OpenAI’s newly launched OpenAI for Science team and invited the group to test the physics capabilities of OpenAI’s latest models. The single-minus gluon problem seemed like the perfect challenge. They fed their complicated formula for small numbers of gluons into GPT-5.2 Pro, and the model did something remarkable.
04:00 It simplified a mathematical expression with 32 variables down to a compact product that fit on a single line. Then, when asked to guess a generalization valid for any number of gluons, GPT-5.2 Pro replied within minutes with what it called, and I love this, the obvious generalization. It just wrote down the whole formula. The physicists, naturally worried this might be an AI hallucination, carefully checked the formula against known consistency rules in quantum field theory, and it passed the test. But they wanted more than a conjecture. They wanted a proof. So they fed the formula into a more powerful internal OpenAI model, one the researchers privately nicknamed SuperChat, and after about 12 hours of autonomous reasoning, SuperChat produced a formal proof. The physicists went through the mathematics step-by-step and confirmed the proof was correct. The team posted their findings on arXiv on February 12th, and the paper was trending on social media within hours, but the story didn’t end there.
05:01 The researchers immediately wondered whether the same approach could be extended to gravitons, hypothetical particles that are thought to carry the gravitational force. Gravitons haven’t been observed experimentally, but calculating their theoretical scattering amplitudes allows physicists to investigate how gravity might behave at the quantum level, which is one of the biggest open questions in all of physics today. Graviton calculations are even more complex than those for gluons, and yet on March 4th, the team released a second preprint on arXiv. Using only the gluon results as context and some guidance from the physicists, GPT-5.2 Pro was able to construct the analogous single-minus scattering amplitudes for gravitons as well. Wow. Now, what’s really remarkable here, and what I think makes the story so significant beyond the specific physics results, is how it changes the dynamic of scientific research. Lupsasca put it bluntly. The hard part is no longer the physics problem itself.
06:00 The hard part is now verifying the results and writing them up. The AI essentially compressed months of work into weeks, and Strominger, another one of the authors, described the experience of the AI casually proposing a formula with a phrase like “the obvious generalization is” as being like interacting with one of his more presumptuous colleagues. Now, there are, of course, important caveats to everything I’ve said in this episode. These are preprints that haven’t been peer reviewed. The results apply to a very specific mathematical regime and to the simplest level of calculation, so-called tree level, without the additional complexity of quantum loop corrections. And while the AI proposed and proved the formula, the human physicists were essential for defining the problem, providing the initial data, and verifying the output. As Zvi Bern, a prominent particle theorist at UCLA, noted, the ideas themselves aren’t revolutionary, but the fact that a machine can do this, that is revolutionary.
06:59 And Demis Hassabis, head of Google DeepMind, has expressed a view shared by many that we’re still years away from AI systems that can generate novel hypotheses about how the world works from scratch. But the caveats aside, this is still really exciting. This work provides what may be a template for AI-assisted scientific research more broadly. AI generates conjectures from patterns in the data, and human experts then verify those conjectures through rigorous mathematics and physical consistency checks. It’s not autonomous AI science, it’s augmented human science. And that model could scale across disciplines, from pure math to drug discovery, to materials science, to whatever. All right. It’s exciting, right? If you want to dig into the technical details yourself, we’ve got links to both arXiv preprints from this research team for you in the show notes. We’ve also got OpenAI’s blog posts on the gluon and graviton results and a fantastic detailed write-up from Science Magazine, all of that in the show notes.
08:02 Pretty damn cool. AI is now helping us expand the frontier of human knowledge itself. It’s safe to say much more of this will be happening soon. If that doesn’t get your brain tingling with possibilities, I don’t know what will. All right, that’s the end of today’s episode. If you enjoyed it or know someone who might, consider sharing this episode with them, leave a review of the show on your favorite podcasting platform or YouTube. Tag me in a LinkedIn post with your thoughts, and if you haven’t already, be sure to subscribe to the show. Most importantly, however, we just hope you’ll keep on listening. Until next time, keep on rocking it out there, and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon.