Edison Labs

ether0


A 24B-parameter reasoning model post-trained for chemistry.

Released 2025-06-04

Authors: Siddharth M. Narayanan, James D. Braza, Ryan-Rhys Griffiths, Albert Bou, Geemi Wellawatte, Mayk Caldas Ramos, Ludovico Mitchener, Samuel G. Rodriques, Andrew D. White

Paper: arXiv:2506.17238

Summary

ether0 is a 24B-parameter reasoning model built on Mistral-Small-24B and post-trained for chemistry. We trained it with reinforcement learning on 640,730 experimentally grounded chemistry problems spanning 375 tasks, covering properties from synthesizability and blood-brain barrier permeability to human receptor activity and scent. The model reasons in natural language and answers with chemical structures. On molecular design tasks it outperforms general-purpose chemistry models, frontier models, and human experts, while requiring substantially less training data than other domain-specialized models. ether0 is available in production through the Molecules agent.
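As a rough illustration of how a checkpoint like this could be called, the sketch below prompts a chemistry reasoning model through the Hugging Face transformers API and validates the returned structure with RDKit. The checkpoint name, chat format, and answer layout are placeholders and assumptions, not the published interface.

```python
# Hypothetical usage sketch: prompt a chemistry reasoning model and validate
# the structure it returns. Checkpoint name, chat template, and answer format
# are assumptions, not the released interface.
from transformers import AutoModelForCausalLM, AutoTokenizer
from rdkit import Chem

MODEL_ID = "example-org/ether0-24b"  # placeholder; substitute the released checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Propose a molecule with formula C9H11NO2 that is likely blood-brain barrier permeable."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=2048)
text = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Assume the final answer is a SMILES string on the last non-empty line;
# check that it parses into a valid molecule before using it downstream.
smiles = [line for line in text.splitlines() if line.strip()][-1].strip()
mol = Chem.MolFromSmiles(smiles)
print(smiles, "valid" if mol is not None else "invalid")
```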

Abstract

Reasoning models are large language models that emit a long chain-of-thought before answering, providing both higher accuracy and explicit reasoning for their response. A major question has been whether language model reasoning generalizes beyond mathematics, programming, and logic, where most previous work has focused. We demonstrate that reasoning models can be post-trained for chemistry without additional domain pretraining, and require substantially less data compared to contemporary domain-specific models. We report ether0, a 24B parameter LLM (based on Mistral-Small-24B) that can reason in natural language and respond with chemical structures. This reasoning model was trained with reinforcement learning on 640,730 experimentally-grounded chemistry problems across 375 tasks ranging from synthesizability, to blood-brain barrier permeability, to human receptor activity, to scent. Our model exceeds general-purpose chemistry models, frontier models, and human experts on molecular design tasks. It is also more data efficient relative to specialized models. We anticipate that this method can be applied to train data-efficient language models specialized for tasks across a wide variety of scientific domains.
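The abstract describes reinforcement learning on experimentally grounded problems whose answers can be checked. As one hedged illustration of how such a check could be automated (not the reward design used to train ether0), the sketch below grades a proposed SMILES string against a target molecular formula with RDKit.

```python
# Illustrative sketch of a verifiable reward for a molecular-design task:
# score 1.0 if the model's answer is a valid SMILES with the requested
# molecular formula, else 0.0. This is an assumed grading scheme for
# illustration, not ether0's actual reward function.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors


def formula_reward(answer_smiles: str, target_formula: str) -> float:
    mol = Chem.MolFromSmiles(answer_smiles)
    if mol is None:  # unparsable structure earns no reward
        return 0.0
    return 1.0 if rdMolDescriptors.CalcMolFormula(mol) == target_formula else 0.0


print(formula_reward("CC(=O)Oc1ccccc1C(=O)O", "C9H8O4"))  # aspirin -> 1.0
print(formula_reward("not-a-smiles", "C9H8O4"))           # -> 0.0
```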