
OpenAI teases new reasoning model—but don’t expect to try it soon

[Image: An OpenAI logo over an illustration of its o1 model. Credit: Alex Parkin / The Verge]

For the last day of ship-mas, OpenAI previewed a new set of frontier “reasoning” models dubbed o3 and o3-mini. The Verge first reported that a new reasoning model would be coming during this event.

The company isn’t releasing these models today (and admits final results may evolve with further post-training). However, OpenAI is accepting applications from the research community to test the systems ahead of a public release, for which it has yet to set a date. OpenAI launched o1 (codenamed Strawberry) in September and is jumping straight to o3, skipping o2 to avoid confusion (or trademark conflicts) with the British telecom O2.

The term “reasoning” has become a common buzzword in the AI industry lately, but it basically means the machine breaks a prompt down into smaller tasks that can produce stronger results. These models also often show their work, explaining how they arrived at an answer rather than just giving one without explanation.
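
To make that concrete, here is a minimal sketch in Python using OpenAI’s published SDK, contrasting a direct answer with a reasoning-style one. The model names, question, and prompt wording are illustrative assumptions (o3 itself has no public API yet), not a claim about how the new models will be accessed.

    # Minimal sketch: a conventional prompt asks only for the final
    # answer, while a reasoning model works through intermediate steps.
    # Assumes the `openai` Python SDK is installed and OPENAI_API_KEY
    # is set; model names and prompts are illustrative (o3 has no
    # public API yet).
    from openai import OpenAI

    client = OpenAI()

    question = "A train covers 120 miles in 1.5 hours. What is its average speed?"

    # Conventional model, prompted for just the final answer.
    direct = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question + " Reply with only the number."}],
    )

    # Reasoning model: it breaks the task into smaller steps before
    # answering, and its reply typically walks through that work.
    reasoned = client.chat.completions.create(
        model="o1",
        messages=[{"role": "user", "content": question}],
    )

    print("Direct:", direct.choices[0].message.content)
    print("Reasoned:", reasoned.choices[0].message.content)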

According to the company, o3 surpasses previous performance records across the board. It beats its predecessor by 22.8 percentage points on the SWE-bench Verified coding benchmark and outscores OpenAI’s chief scientist in competitive programming. The model nearly aced AIME 2024, one of the hardest math competitions, missing just one question, and scored 87.7 percent on GPQA Diamond, a benchmark of expert-level science problems. On the toughest math and reasoning challenges that usually stump AI, o3 solved 25.2 percent of problems; no other model exceeds 2 percent on that test.


[Chart: OpenAI claims o3 performs better than its other reasoning models in coding benchmarks. Image: OpenAI]

The company also announced new research on deliberative alignment, which requires the AI model to process safety decisions step-by-step. So, instead of just giving yes/no rules to the AI model, this paradigm requires it to actively reason about whether a user’s request fits OpenAI’s safety policies. The company claims that when it tested this on o1, it was much better at following safety guidelines than previous models, including GPT-4.
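
Based only on that description, a toy approximation of the idea might look like the Python sketch below: the written policy is handed to the model, which is asked to reason about compliance step by step before deciding. The policy text, prompt wording, and model name are invented for illustration; this is not OpenAI’s actual implementation.

    # Toy approximation of the deliberative-alignment idea described
    # above: the model reasons over the written policy itself rather
    # than applying fixed yes/no rules. Policy text, prompt, and model
    # name are invented for illustration, not OpenAI's implementation.
    from openai import OpenAI

    client = OpenAI()

    POLICY = (
        "1. Refuse requests for instructions that enable physical harm.\n"
        "2. Allow general safety and educational information."
    )

    def deliberate(request: str) -> str:
        prompt = (
            f"Safety policy:\n{POLICY}\n\n"
            f"User request: {request}\n\n"
            "Reason step by step about which policy clauses apply, "
            "then conclude with COMPLY or REFUSE."
        )
        response = client.chat.completions.create(
            model="o1",  # any reasoning-capable model; name is illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(deliberate("How do household smoke detectors work?"))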
