Nous Research, an open-source AI startup backed by Paradigm, has released a new competitive programming model called NousCoder-14B that matches or exceeds several larger proprietary systems. The model was trained in four days on 48 Nvidia B200 GPUs and scored 67.87% on the LiveCodeBench v6 benchmark.

The release is significant as it arrives at a charged moment when rival Anthropic's Claude Code has captured attention with its demonstrations of end-to-end software development. Nous Research is betting that open-source alternatives trained on verifiable problems can close the gap and that transparency in how these models are built matters as much as raw capability.

The company's approach relies on "verifiable rewards": the model generates code solutions, those solutions are executed against test cases, and the model receives a simple binary signal, correct or incorrect. Running this feedback loop at training scale requires significant infrastructure.
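The loop described above can be sketched in a few lines. This is an illustrative toy, not Nous Research's actual training code: a candidate solution is executed together with its unit tests in a subprocess, and the reward is 1.0 only if every test passes (crashes, failed assertions, and timeouts all yield 0.0).

```python
"""Minimal sketch of a verifiable-reward check for generated code.

Illustrative only; function names and the test format are assumptions,
not Nous Research's implementation.
"""
import os
import subprocess
import sys
import tempfile


def verifiable_reward(solution_code: str, test_code: str,
                      timeout: float = 5.0) -> float:
    """Run a candidate solution against its tests; return a binary reward."""
    program = solution_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        # A non-zero exit code means a failed assertion or a crash.
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # hangs and infinite loops also count as failure
    finally:
        os.unlink(path)


# Example: a model-generated solution and the tests that verify it.
solution = "def add(a, b):\n    return a + b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
```

In a reinforcement-learning setup, this scalar would be the only training signal: no partial credit, no human labeling, just whether the code actually works.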

Nous Research has raised $65 million in funding from Paradigm, reflecting growing interest in decentralized approaches to AI training. The company's previous releases include Hermes 4, a model without content restrictions that outperformed ChatGPT, and DeepHermes-3, a toggle-on reasoning model that lets users activate extended thinking on demand.

The NousCoder-14B release outlines several directions for future work, including multi-turn reinforcement learning, response-length control, and problem generation with self-play. These areas hint at where AI coding research may be heading.