Mathematicians have announced a milestone in the effort to thoroughly verify the solution of the sphere-packing problem — for which the Ukrainian mathematician Maryna Viazovska won the Fields Medal in 2022 — using a machine.
This version of the problem asks for the densest way to pack identical spheres in eight-dimensional space.
On February 23, the team behind the effort said a machine has now checked and verified the proof in full.
Viazovska’s proof, like many other (human) proofs of difficult maths problems, was originally written for mathematicians to make sense of. Her paper, for instance, skips steps that are “obvious” or that follow from theorems readers are expected to know.
On the other hand, the new achievement involved a machine checking every single step of the proof — including those parts that Viazovska skipped — from start to finish.
‘Sorry-free’ verification
Mathematicians are making this effort because a hidden gap or an unstated assumption can sometimes slip through a proof without a human noticing.
Once a proof has been formalised and checked in this way, other (human) mathematicians can see exactly which definitions and theorems the proof relied on at every point, making it easier for them to audit it themselves or reuse parts of it in their own work. Machines can also build on the formal proof in future efforts to verify more complicated proofs of other problems.
Formalisation here means taking a human proof written in papers, with informal language and “obvious” steps, and translating it to the language of a proof assistant, i.e. the machine.
Maryna Viazovska | Photo Credit: REUTERS
The proof was formalised in Lean, software for writing mathematics in a formal language.
The verification matters for results like that of Viazovska, since her arguments are subtle and stretch across several technical domains, including modular forms, Fourier analysis, and special functions.
The new verification is also “sorry-free”, meaning the code contains no placeholder steps that the humans asked Lean to simply trust, or take at face value. In other words, Lean verified every step of the argument.
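To illustrate (with a toy statement, not anything from the sphere-packing codebase): in Lean, `sorry` is the keyword that marks a gap a human has asked the system to accept on trust.

```lean
-- A hypothetical toy example, not taken from the sphere-packing project.
-- This proof compiles, but Lean flags it as incomplete:
theorem two_add_two : 2 + 2 = 4 := by
  sorry   -- a placeholder: Lean is asked to take this step on trust

-- A "sorry-free" version closes the gap, so the kernel fully verifies it:
theorem two_add_two' : 2 + 2 = 4 := by
  rfl     -- both sides compute to the same value, so reflexivity suffices
```

A “sorry-free” codebase is one in which every theorem has been proved the second way, with no placeholders left anywhere.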
‘Remarkable contribution’
In this case, mathematicians are also reacting to the use of an auto-formalisation agent called ‘Gauss’, developed by California-based company Math, Inc.
The project already had a large Lean codebase with many earlier lemmas and definitions, as well as a to-do list of the statements that still needed to be proved and how they connected to each other. ‘Gauss’, which is an AI tool, parsed these details and formalised the remaining statements for Lean to check.
Completely formalising Viazovska’s proof is a feat in and of itself. Now, if Gauss’s work holds up under review, it will be a sign that AI is ready to help mathematicians with other major formalisation efforts.
“The project team is already in the process of reviewing and revising Gauss’ code, thereby ensuring that it meets the editorial standards of the formalisation community. This process will ensure that the code is maintainable and reusable, and that it will support future formalisation work,” Carnegie Mellon University PhD student Sidharth Hariharan wrote in a post on LinkedIn. “Gauss’ remarkable contribution has saved the project months of effort, and Gauss will continue to play a role in the revisions.”
Mr. Hariharan is also a maintainer of the Sphere Packing Lean project, an open-source effort to formalise Viazovska’s proof.
How Lean works
Lean is essentially a programming language with a precise logical foundation. Mathematicians first translate definitions, theorems, and proofs into Lean code; then its kernel, which acts as the checker, verifies whether they are correct.
The kernel checks proofs using Lean’s built-in logical rules, while a separate library called mathlib supplies most of the standard definitions and theorems that mathematicians can reuse.
The kernel is akin to a small machine: for mathematicians, it’s easier to check if this machine works correctly than to check if the entire proof is valid.
Lean’s working can be broken up into three steps.
First, mathematicians encode a problem as a Lean statement, including what the objects involved in the proof are (e.g. Euclidean space, lattices and packings) and what exactly is being claimed.
Second, they incorporate the necessary mathematical ‘parts’ of the proof inside Lean, in this case real and complex analysis, Fourier transforms, special functions, modular and theta functions, inequalities, measure theory, etc.
Third, every step that a human might skip (e.g. with a statement like “it follows that…”) needs to be expanded into a chain of lemmas that Lean can verify. If a paper appeals to a known theorem, mathematicians must either point to its formalised version in Lean’s library or they must formalise that theorem too.
Then the kernel gets to work.
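The three steps above can be sketched in miniature. The example below uses a deliberately simple inequality rather than anything from the sphere-packing proof, and assumes the mathlib library is available:

```lean
import Mathlib.Tactic  -- the community library of formalised definitions and theorems

-- Step 1: encode the claim as a precise Lean statement.
theorem square_is_nonneg (x : ℝ) : 0 ≤ x ^ 2 := by
  -- Steps 2 and 3: instead of writing "obviously x² ≥ 0", we appeal to
  -- `sq_nonneg`, a lemma already formalised in mathlib. Lean's kernel
  -- then checks the whole derivation down to its logical foundations.
  exact sq_nonneg x
```

In a real formalisation, a single “it follows that…” in a paper can expand into dozens of such lemma applications, each one checked by the kernel.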
At the Lean Together conference in January, Lean creator Leo de Moura said the priorities for this year include finalising the Lean 4 compiler and improving its performance to reduce compilation times and to handle the large scale of modern Lean libraries.
Challenge of formalisation
According to mathematicians, the ultimate purpose is to make mathematical correctness less dependent on trust and social processes and more on explicit and verifiable mathematics.
For instance, in 1879, the English mathematician Alfred Kempe published a proof of the four-colour theorem: the claim that if you draw a map on a flat sheet of paper, you can colour each region using only four colours so that any two regions sharing a border get different colours.
Kempe’s peers accepted his proof for about a decade because the proof looked reasonable and he was highly reputed. But in 1890, the mathematician Percy Heawood found a mistake that invalidated it. The theorem later turned out to be true but Kempe’s proof was still wrong.
Mathematicians also established in the 20th century that a mathematical ‘proof’ is a formal object that can, in principle, be checked by a machine, and people wanted practical tools to do so.
Proofs in modern mathematics can also be extremely long, and peer reviewers may not always be up to the task of checking whether they are correct from start to finish. Proof assistants thus emerged as a way to raise, and meet, the bar for verification.
Helping with formalisation
The main barrier to getting a proof checked by a machine in a “sorry-free” way is to get the proof, and its associated objects, definitions, etc. into the machine’s language — i.e. formalisation.
Some major theorems that have been completely formalised include the four-colour theorem itself, in 2005; the prime number theorem, also in 2005; the Feit-Thompson odd-order theorem in 2012; and the Kepler conjecture in 2014.
Automation has helped in this regard, although it has still not reached the point where a tool can reliably take a ‘human proof’ and turn it into a complete formal proof.
In September 2025, a team led by Indian Institute of Science mathematics professor Siddhartha Gadgil won a grant from the AI For Math Fund for its work on ‘LeanAide’. “By creating an accessible, no-code AI+Lean environment, the project seeks to simplify the formalisation process for Lean users and empower mathematicians with new, innovative tools for research, including agentic solutions,” the citation reads.
Some other tools like Gauss include Lean Copilot, an AI helper inside Lean that suggests what step to try next; Sledgehammer, a tool for the Isabelle proof assistant that tries to solve the current goal automatically by calling external automated provers; and AlphaProof, an AI tool developed by DeepMind to produce proofs that a proof assistant like Lean can check.
AI in mathematics
In recent years, AI has fundamentally reshaped mathematics even as it has evolved beyond being just a powerful calculator. AI-driven platforms like Photomath and specialised educational intelligence, or SEI, models today serve as on-demand tutors that offer step-by-step explanations in natural language and can adapt to individual students.
Large language models (LLMs) are being used to generate high-quality standardised tests as well as to take on challenges like the International Mathematical Olympiad. In 2025, reasoning models from OpenAI and Google DeepMind achieved scores worthy of gold medals.
AI models have also become reasoning partners for seasoned mathematicians, helping to solve problems by detecting patterns in large datasets. They have been used to generate novel conjectures in topology and geometry, often spotting connections across disparate fields that evaded experts.
On February 13, for instance, two models built by OpenAI helped physicists make a new finding in particle physics, overturning a belief the community had held for many years.
“[xAI cofounder] Christian Szegedy has predicted that models will be mathematically ‘superhuman in almost all respects’ in six months to a year,” University of Toronto assistant professor Daniel Litt wrote on his blog on February 21.
“I find that precise timeline hard to believe for most aspects of mathematical research, but I suspect that he won’t be off by much when it comes to proving some class of involved statements that would previously have required an expert. This is a narrow conception of mathematics indeed, but it is true that producing such proofs is a large part of math research.”
mukunth.v@thehindu.co.in
