Coding improvements in new OpenAI GPT models

Posted on Tuesday, May 13, 2025 by RICHARD HARRIS, Executive Editor

OpenAI recently launched three new models in the API: GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano. These models outperform GPT‑4o and GPT‑4o mini across the board, with major gains in coding and instruction following. They also have larger context windows of up to 1 million tokens, make better use of that context through improved long-context comprehension, and carry a refreshed knowledge cutoff of June 2024.
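To give a sense of scale, the sketch below estimates whether a document fits within a 1 million token window. It uses the rough heuristic of about four characters per English token; real counts require the model's tokenizer, and the reserved output budget is an arbitrary assumption for illustration.

```python
# Rough check of whether a document fits a 1M-token context window.
# Assumes ~4 characters per token (a common English-text heuristic);
# exact counts require the model's actual tokenizer.
CONTEXT_WINDOW = 1_000_000
CHARS_PER_TOKEN = 4

def estimated_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(text: str, reserved_for_output: int = 32_000) -> bool:
    """True if the text leaves room for the reserved output budget."""
    return estimated_tokens(text) <= CONTEXT_WINDOW - reserved_for_output
```

At four characters per token, 1 million tokens corresponds to roughly 4 MB of plain English text, which is why whole codebases or long transcripts become feasible inputs.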


GPT‑4.1 performance highlights:

Coding:

  • GPT‑4.1 scores 54.6% on SWE-bench Verified, improving by 21.4 percentage points over GPT‑4o and 26.6 percentage points over GPT‑4.5, making it a leading model for coding.

Instruction following:

  • On Scale’s MultiChallenge benchmark, GPT‑4.1 scores 38.3%, a 10.5-percentage-point increase over GPT‑4o.

Long context:

  • On Video-MME, a benchmark for multimodal long-context understanding, GPT‑4.1 sets a new state-of-the-art result, scoring 72.0% on the long, no-subtitles category, a 6.7-percentage-point improvement over GPT‑4o.

While benchmarks provide valuable insights, we trained these models with a focus on real-world utility. Close collaboration with the developer community enabled us to optimize these models for the tasks that matter most.

[Chart: GPT‑4.1 family intelligence by latency]


Cost and latency improvements:

The GPT‑4.1 model family offers exceptional performance at a lower cost, pushing forward at every point on the latency curve.

GPT‑4.1 mini:

  • A significant leap in small-model performance, surpassing GPT‑4o on many benchmarks while reducing latency by nearly half and cost by 83%.

GPT‑4.1 nano:

  • OpenAI’s fastest and cheapest model to date. Ideal for low-latency tasks such as classification or autocompletion, with a 1 million token context window.

GPT‑4.1 nano benchmark scores:

  • MMLU: 80.1%
  • GPQA: 50.3%
  • Aider polyglot coding: 9.8%
     

Real-world applications and developer feedback:

The GPT‑4.1 models improve reliability and long context comprehension, making them ideal for powering agents that perform tasks independently on behalf of users.
Early testers noted that GPT‑4.1 can be more literal, so explicit and specific prompts are recommended.
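The point about literal instruction-following can be made concrete. The helper below is purely illustrative (the function name, rule text, and message shape are assumptions, not official guidance): it assembles an explicit, numbered set of constraints into a system message rather than leaving expectations implicit.

```python
# Illustrative only: building an explicit, specific prompt for a model
# that follows instructions literally. The rules shown are examples,
# not official OpenAI guidance.
def build_messages(task: str, constraints: list[str]) -> list[dict]:
    """Turn a task and a list of explicit rules into a chat message list."""
    system = "Follow these rules exactly:\n" + "\n".join(
        f"{i + 1}. {rule}" for i, rule in enumerate(constraints)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    "Summarize the changelog.",
    ["Respond in JSON.", "Use at most three bullet points."],
)
```

Spelling out format and length requirements as enumerated rules, instead of relying on the model to infer them, is the kind of explicit prompting early testers found effective.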

Deprecation notice:

GPT‑4.5 Preview will be deprecated on July 14, 2025, as GPT‑4.1 offers improved performance at lower cost and latency. We will maintain the creativity, writing quality, humor, and nuance appreciated in GPT‑4.5 in future models.

Benchmark performance and real-world usage:

GPT‑4.1 demonstrates significant improvements across coding, instruction following, and long context handling. It excels in:

  • Coding tasks: Agentically solving coding tasks, reliable code diffs, and frontend coding.
  • Instruction following: Improved format compliance, multi-turn instructions, and reduced overconfidence.
  • Long context processing: Efficient retrieval from up to 1 million tokens of input.
     

Real-world examples include improvements in coding benchmarks with Windsurf, accurate legal data extraction with Thomson Reuters, and fast, reliable code generation with Qodo.

Vision and multimodal capabilities:

The GPT‑4.1 family excels at image understanding and processing long videos without subtitles, making it suitable for multimodal applications. GPT‑4.1 series models are available now to all developers, with lower prices through efficiency improvements:

GPT-4.1 pricing (per 1M tokens):

  • Input: $2.00
  • Cached Input: $0.50
  • Output: $8.00
  • Blended Pricing: $1.84
     

GPT-4.1 mini pricing (per 1M tokens):

  • Input: $0.40
  • Cached Input: $0.10
  • Output: $1.60
  • Blended Pricing: $0.42
     

GPT-4.1 nano pricing (per 1M tokens):

  • Input: $0.10
  • Cached Input: $0.025
  • Output: $0.40
  • Blended Pricing: $0.12
     

GPT‑4.1 represents a major leap in practical AI application, addressing real-world developer needs from coding to long context comprehension. We look forward to seeing the innovations that the developer community builds using these models.



