Streamlining API Design: My Journey with OpenAPI and Gemini 2.5 Pro

10 min read · Apr 15, 2025

As a developer who’s spent countless hours wrestling with API documentation, I’ve experienced firsthand how tedious crafting an OpenAPI spec from scratch can be. It’s like solving a complex puzzle while juggling multiple requirements. That changed when I discovered Gemini 2.5 Pro, Google’s latest AI model that has transformed my approach to API design. Let me walk you through how this tool has become my secret weapon for creating clean, accurate OpenAPI specs with minimal effort.

Why I Switched to OpenAPI and Gemini 2.5 Pro

In my early days as a backend developer, I’d document APIs in whatever format seemed convenient — Word docs, Markdown files, or hastily scribbled notes. This inevitably led to confusion, inconsistencies, and the dreaded “but the documentation says…” conversations with frontend teams.

OpenAPI (formerly Swagger) changed that for me. It provides a standardized, machine-readable format for defining RESTful APIs using JSON or YAML. This blueprint powers everything from interactive documentation to client SDK generation and automated testing. But let’s be honest — writing these specs manually is time-consuming and error-prone. I’d spend hours meticulously typing out paths, parameters, response codes, and schemas.

That’s where Gemini 2.5 Pro has been a game-changer in my workflow. Released in March 2025, this AI model boasts an impressive 1-million-token context window (with 2 million in testing). Google calls it a “thinking model” because it reasons through tasks like a human would — perfect for generating structured OpenAPI specs.

What I appreciate most is how it handles the heavy lifting. Whether I’m designing a new API from scratch or refining an existing one, Gemini 2.5 Pro generates YAML or JSON faster than I could type a single endpoint definition. It’s particularly good at anticipating edge cases that I might overlook even after years of development experience.

Setting Up My Gemini 2.5 Pro Workspace

Getting started with Gemini 2.5 Pro was surprisingly straightforward — no need for complex setups or custom scripts. Here’s how I configured my environment:

Creating My Google AI Studio Account

First, I headed to Google AI Studio and registered with my existing Google account. The free tier works perfectly for my occasional API design needs, though there are paid plans available for more intensive usage.

After signing up, I:

  1. Generated an API key (which I immediately stored in my password manager — never commit these to public repos!)
  2. Selected Gemini 2.5 Pro from the model picker (it was listed as “gemini-2.5-pro-exp-03-25”)
  3. Found myself ready to interact with the model through the Studio’s clean interface
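The same setup works outside the Studio UI. Here’s a minimal sketch of calling the model from Python, assuming the google-generativeai package is installed and the API key is stored in a GOOGLE_API_KEY environment variable (the helper function and its exact wording are my own, not part of any SDK):

```python
import os

def build_spec_prompt(resource: str, endpoints: list[str]) -> str:
    """Assemble the kind of specific, detailed prompt that works well in practice."""
    lines = [f"Create an OpenAPI 3.0 specification in YAML for a RESTful {resource} API "
             "with the following endpoints:"]
    lines += [f"{i}. {ep}" for i, ep in enumerate(endpoints, start=1)]
    lines.append("Include appropriate response codes and schemas.")
    return "\n".join(lines)

prompt = build_spec_prompt("to-do list", ["GET /tasks - List all tasks"])

# Only call the API when a key is actually configured.
if os.environ.get("GOOGLE_API_KEY"):
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")
    print(model.generate_content(prompt).text)
```

Keeping the prompt construction in a helper like this makes it easy to reuse the same template across projects.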

Configuring the Prompt Interface

The Google AI Studio interface is refreshingly simple. There’s a text box for entering prompts, and that’s where I ask Gemini 2.5 Pro to create my OpenAPI specs. After some experimentation, I found that setting the temperature to 0.7 provides the right balance between creativity and structure for API design tasks.

My First OpenAPI Spec with Gemini 2.5 Pro

For my first project with Gemini 2.5 Pro, I needed to create an OpenAPI spec for a to-do list API — a relatively simple but practical example that would test the AI’s capabilities.

Crafting an Effective Prompt

I’ve learned that Gemini 2.5 Pro performs best with specific, detailed instructions. For my to-do list API, I entered:

Create an OpenAPI 3.0 specification in YAML for a RESTful to-do list API with the following endpoints:
1. GET /tasks - List all tasks with optional filtering by status
2. GET /tasks/{id} - Get a specific task by ID
3. POST /tasks - Create a new task
4. PUT /tasks/{id} - Update an existing task
5. DELETE /tasks/{id} - Delete a task

Each task should have:
- id (string)
- title (string)
- description (string, optional)
- status (enum: "pending", "in_progress", "completed")
- created_at (datetime)
- updated_at (datetime)

Include appropriate response codes and schemas.

After hitting “Run,” Gemini 2.5 Pro delivered a comprehensive YAML specification that looked something like this (shortened for brevity):

openapi: 3.0.0
info:
  title: To-Do List API
  description: A RESTful API for managing to-do tasks
  version: 1.0.0
servers:
  - url: https://api.example.com/v1
paths:
  /tasks:
    get:
      summary: List all tasks
      parameters:
        - name: status
          in: query
          schema:
            type: string
            enum: [pending, in_progress, completed]
      responses:
        '200':
          description: A list of tasks
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Task'
    post:
      summary: Create a new task
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/NewTask'
      responses:
        '201':
          description: Task created successfully
  # ... more paths and operations ...
components:
  schemas:
    Task:
      type: object
      properties:
        id:
          type: string
        title:
          type: string
        description:
          type: string
        status:
          type: string
          enum: [pending, in_progress, completed]
        created_at:
          type: string
          format: date-time
        updated_at:
          type: string
          format: date-time
      required:
        - id
        - title
        - status
        - created_at
        - updated_at
  # ... more schemas ...

I was genuinely impressed — this first draft captured all the essential elements I needed and followed OpenAPI best practices.

Saving My Spec for Further Refinement

I copied the YAML output and saved it as todo-api.yaml in my project directory. The Google AI Studio interface also offers a direct download option, which is convenient for larger specs.
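Before uploading a generated spec anywhere, I like to run a quick structural sanity check. Here’s a sketch of that idea in Python, shown on a trimmed dict version of the spec (in practice you would load todo-api.yaml with a YAML parser such as PyYAML; the check itself is my own, not part of any tool):

```python
# Trimmed dict version of the generated spec, mirroring its structure.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/tasks": {
            "get": {"responses": {"200": {"description": "A list of tasks"}}},
            "post": {"responses": {"201": {"description": "Task created successfully"}}},
        },
    },
}

# Flag any operation that lacks a responses section.
problems = []
for path, ops in spec["paths"].items():
    for verb, op in ops.items():
        if not op.get("responses"):
            problems.append(f"{verb.upper()} {path} has no responses")

assert spec["openapi"].startswith("3."), "expected an OpenAPI 3.x spec"
```

A check this simple won’t replace a real validator, but it catches the most common truncation problems before you waste an upload.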

Evaluating My OpenAPI Spec with Rate My OpenAPI

Before diving into refinements, I wanted an objective assessment of my AI-generated spec. I discovered Rate My OpenAPI, a tool that scores OpenAPI specifications and provides actionable improvement suggestions.

Getting My Initial Score

I uploaded my todo-api.yaml file to the site and clicked "Analyze." The tool gave my spec an 87/100—not bad for a first attempt! The feedback highlighted several areas for improvement:

  1. “Add security schemes for authentication.”
  2. “Include more detailed descriptions for endpoints.”
  3. “Consider adding pagination parameters for GET /tasks.”

Understanding the Feedback

While 87 is a respectable score, I wanted to push for excellence. The feedback made sense — my spec lacked authentication mechanisms and could benefit from more detailed descriptions. Gemini 2.5 Pro had created a solid foundation but kept things minimal, which is often the case with first drafts.

Iterative Refinement with Gemini 2.5 Pro

Armed with specific feedback, I returned to Google AI Studio to improve my spec through targeted prompts.

Adding Authentication

For my first refinement, I focused on security. I entered:

Enhance the following OpenAPI spec by adding JWT Bearer token authentication to all endpoints:

[I pasted my entire YAML spec here]

Gemini 2.5 Pro responded with an updated spec that included:

components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
security:
  - bearerAuth: []

It also added the security requirement to each endpoint, which was exactly what I needed.

Improving Descriptions

Next, I tackled the thin descriptions:

Enhance the descriptions in this OpenAPI spec. Add detailed descriptions for each endpoint, parameter, and schema:

[I pasted my updated YAML spec here]

The result included much more informative descriptions:

paths:
  /tasks:
    get:
      summary: List all tasks
      description: Retrieves a list of all tasks. Results can be filtered by status to show only pending, in-progress, or completed tasks. Returns an empty array if no tasks match the criteria.
      # ...
components:
  schemas:
    Task:
      description: Represents a to-do item with its metadata and current status
      # ...

These richer details immediately improved the usability of my spec.

Implementing Pagination

Finally, I addressed the pagination suggestion:

Add pagination support to the GET /tasks endpoint in this OpenAPI spec:

[I pasted my enhanced YAML spec here]

Gemini 2.5 Pro added:

paths:
  /tasks:
    get:
      # ...
      parameters:
        # ... existing parameters ...
        - name: page
          in: query
          description: Page number for paginated results (starts at 1)
          schema:
            type: integer
            minimum: 1
            default: 1
        - name: limit
          in: query
          description: Number of items per page (max 100)
          schema:
            type: integer
            minimum: 1
            maximum: 100
            default: 20
      # ...
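For anyone implementing the server side, these page/limit parameters translate into straightforward list slicing. Here’s a sketch of that mapping, enforcing the same bounds and defaults the schema declares (the paginate helper is hypothetical, not part of any framework):

```python
def paginate(items: list, page: int = 1, limit: int = 20) -> list:
    """Slice items according to the spec's page/limit query parameters."""
    limit = max(1, min(limit, 100))  # enforce the schema's minimum/maximum
    page = max(1, page)              # page numbering starts at 1
    start = (page - 1) * limit
    return items[start:start + limit]

tasks = list(range(45))
assert len(paginate(tasks)) == 20                  # defaults: page 1, 20 items
assert paginate(tasks, page=3) == [40, 41, 42, 43, 44]  # last, partial page
```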

After uploading my refined spec to Rate My OpenAPI, my score jumped to 98/100! The few remaining points were for minor issues that I could easily address.

Handling Edge Cases

As a developer who’s been burned by insufficient error handling, I wanted to ensure my API spec covered common failure scenarios. I prompted:

Add detailed error responses for the following scenarios to all endpoints:
1. 400 Bad Request for invalid input
2. 404 Not Found for resources that don't exist
3. 409 Conflict for duplicate resource creation
4. 429 Too Many Requests for rate limiting
[I pasted my enhanced YAML spec here]

Gemini 2.5 Pro added comprehensive error responses, including:

responses:
  '400':
    description: Bad Request - The request contains invalid parameters or payload
    content:
      application/json:
        schema:
          $ref: '#/components/schemas/Error'
  '404':
    description: Not Found - The requested resource does not exist
    content:
      application/json:
        schema:
          $ref: '#/components/schemas/Error'
  # ... other error responses ...

This attention to error handling has saved me countless hours of debugging and support requests in production.
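The payoff of a single shared Error schema is that every failure, whether 400, 404, 409, or 429, has the same body shape. The spec excerpt above doesn’t show the schema’s fields, so the code/message structure below is an assumption of mine, but it illustrates the pattern:

```python
def error_body(status: int, message: str) -> dict:
    """Build a response body matching a hypothetical #/components/schemas/Error."""
    return {"error": {"code": status, "message": message}}

# Every error endpoint returns the same predictable shape.
assert error_body(404, "Task not found") == {
    "error": {"code": 404, "message": "Task not found"}
}
```

Clients can then write one error handler instead of four.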

Testing My OpenAPI Spec in a Real Environment

With a polished spec in hand, I wanted to validate it in a more practical setting before implementing the actual API.

Setting Up a Mock Server with Apidog

I imported my todo-api.yaml into apidog.com, a platform I've found invaluable for API development. Within minutes, I had a mock server running that simulated my to-do list API.

I tested a POST request to /tasks with a payload like:

{
  "title": "Complete OpenAPI tutorial",
  "description": "Finish writing the Gemini 2.5 Pro OpenAPI tutorial",
  "status": "in_progress"
}

Apidog returned a mocked 201 response with a generated task object, complete with ID and timestamps — exactly what I’d expect from a real implementation.
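Behind that 201 is the validation implied by the NewTask schema: title is required, description is optional, and status must be one of the three enum values. A sketch of that check in Python (the validate_new_task helper is my own illustration, not Apidog’s implementation):

```python
VALID_STATUSES = {"pending", "in_progress", "completed"}

def validate_new_task(payload: dict) -> list[str]:
    """Return a list of validation errors for a NewTask payload (empty if valid)."""
    errors = []
    if not payload.get("title"):
        errors.append("title is required")
    if payload.get("status") not in VALID_STATUSES:
        errors.append("status must be one of " + ", ".join(sorted(VALID_STATUSES)))
    return errors

payload = {"title": "Complete OpenAPI tutorial",
           "description": "Finish writing the Gemini 2.5 Pro OpenAPI tutorial",
           "status": "in_progress"}
assert validate_new_task(payload) == []          # the payload above is valid
assert validate_new_task({"status": "done"}) != []  # bad status, missing title
```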

Generating Interactive Documentation

Using Apidog’s documentation features, I generated interactive API docs from my spec. This gave me a preview of what my team would see when using the API, with request/response examples, schema details, and authentication requirements all clearly presented.

I shared the documentation link with a frontend developer colleague, who immediately understood how to integrate with the API — no additional explanation needed. This alone justified the time invested in creating a comprehensive OpenAPI spec.

Why Gemini 2.5 Pro Has Become My Go-To for API Design

After using Gemini 2.5 Pro for several API projects, I’ve identified several key advantages that keep me coming back:

  • Efficiency: What used to take me hours now takes minutes. I can iterate on API designs rapidly, testing different approaches without significant time investment.
  • Comprehensiveness: The model’s vast context window means it understands complex API requirements and maintains consistency across large specifications.
  • Flexibility: Whether I need YAML or JSON, simple or complex schemas, authentication or not — Gemini 2.5 Pro adapts to my requirements.
  • Learning Tool: As someone who’s still mastering all the nuances of OpenAPI, I’ve learned best practices by analyzing the AI’s output and understanding why certain patterns are used.

I’ve experimented with other AI assistants like Claude and GitHub Copilot, but Gemini 2.5 Pro’s reasoning capabilities make it particularly well-suited for structured tasks like OpenAPI specification development. It feels like having a senior API designer reviewing my work in real-time.

My Personal Tips for OpenAPI Success with Gemini 2.5 Pro

Through trial and error, I’ve developed some practices that help me get the most out of Gemini 2.5 Pro for API design:

  • Be Specific in Prompts: I’ve found that vague requests like “Make an API spec” produce inconsistent results. Instead, I specify exactly what endpoints, parameters, and schemas I need.
  • Iterative Refinement: Rather than trying to perfect everything in one go, I start with a basic spec and use targeted prompts to refine specific aspects.
  • Validate Frequently: I run my specs through Apidog or Swagger Editor after each significant change to catch issues early.
  • Keep Learning: I regularly check ai.google.dev for new Gemini 2.5 Pro features that might enhance my API design workflow.

Conclusion: A New Approach to API Design

Incorporating Gemini 2.5 Pro into my API design process has fundamentally changed how I approach this aspect of development. What was once a tedious, error-prone task has become an interactive, creative process where I can focus on the business logic and user experience rather than YAML syntax and schema definitions.

From quickly prototyping a to-do API to adding authentication and comprehensive error handling, Gemini 2.5 Pro has proven itself as an invaluable partner in my development toolkit. The combination of AI-assisted design and validation tools like Rate My OpenAPI and Apidog has elevated the quality of my API specifications while significantly reducing the time investment.

For my next project, I’m planning to design a more complex e-commerce API with multiple resource types and relationships. I’m confident that with Gemini 2.5 Pro by my side, even this more challenging specification will come together smoothly.

If you’re a developer who dreads API documentation or simply wants to streamline your workflow, I can’t recommend this approach enough. Give Gemini 2.5 Pro a try for your next API project, and don’t forget to validate and test your spec with tools like apidog.com for that extra polish that will make your fellow developers thank you.

Written by Sebastian Petrus

Assistant Professor @ U of Waterloo, AI/ML, e/acc
