
    OpenAI claims gold medal at prestigious math Olympiad, drama ensues

    OpenAI announced that its unreleased reasoning model won gold at the International Mathematical Olympiad (IMO), igniting fierce drama in the world of competitive math.

    While most high schoolers blissfully enjoy a break from school and homework, top math students from around the world brought their A-game to the IMO, widely considered the most prestigious math competition. AI labs also competed with their LLMs, and an unreleased model from OpenAI achieved a score high enough to earn a gold medal, according to OpenAI researcher Alexander Wei, who shared the news on X.

    The OpenAI model solved five of the six problems, earning a gold medal-worthy score of 35 out of 42 points (each problem is worth up to seven points). “For each problem, three former IMO medalists independently graded the model’s submitted proof, with scores finalized after unanimous consensus,” according to Wei. The problems are algebra and pre-calculus challenges that require creative thinking on the competitor’s part, so for an LLM to reason its way through long, complex proofs is an impressive achievement.

    However, the timing of the announcement is being criticized for overshadowing the human competitors’ results. The IMO reportedly asked the AI labs officially working with the organization to verify their results to wait a week before making any announcements, to avoid stealing the kids’ thunder. That’s according to an X post from Mikhail Samin, who runs the nonprofit AI Governance and Safety Institute. OpenAI said it didn’t formally cooperate with the IMO to verify its results, instead working with individual mathematicians to independently verify its scores, and so it wasn’t beholden to any such agreement. Mashable sent a direct message to Samin on X for comment.

    But the gossip is that this rubbed organizers the wrong way, some of whom reportedly thought it was “rude” and “inappropriate” for OpenAI to do this. This is all hearsay, based on rumors from Samin, who also posted a screenshot of a similar comment from someone named Joseph Myers, presumably the two-time IMO gold medalist. Mashable contacted Myers for comment, but he has not publicly confirmed the authenticity of the screenshot.


    In response, OpenAI researcher Noam Brown said the company posted its results only after the IMO closing ceremony, honoring an IMO organizer’s request.

    In a follow-up post, Brown clarified that the IMO had reached out to OpenAI two months earlier about participating in a different version of the competition based on Lean, a formal proof language. OpenAI declined because they were “focused on general reasoning in natural language without the constraints of Lean,” and Brown said they “were never approached about a natural language math option.”

    Meanwhile, Google DeepMind reportedly did cooperate with the IMO, and announced this afternoon that an “advanced version of Gemini with Deep Think officially achieve[d] gold-medal standard at the International Mathematical Olympiad.” According to the announcement, DeepMind’s model was “officially graded and certified by IMO coordinators using the same criteria as for student solutions.” Read into that statement as much or as little as you want, but the timing is hardly coincidental.

    Others may follow the Real Housewives, but the proper decorum of elite math competitions is the high drama we live for.

    UPDATE: Jul. 22, 2025, 11:28 a.m. EDT This story has been updated with additional information from a statement by OpenAI researcher Noam Brown.


    Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


