The pitch for hexagonal architecture usually centers on testability and flexibility. Swap out your database. Mock your external services. Keep your domain pure.
Those arguments are valid. But there’s an underappreciated reason to adopt this pattern: it makes AI coding assistants extremely efficient.
I’ve been using Claude and Copilot across various codebases. They’re useful everywhere. But in Ayunis Core—a NestJS backend built on strict hexagonal principles—they’re noticeably faster and more accurate. Less fumbling. Fewer wrong guesses. More code that works on the first try.
The structure removes ambiguity. AI spends less time searching for where things live and makes fewer incorrect assumptions about how components interact.
Hexagonal Architecture in 60 Seconds
Quick refresher for context.
Ports are interfaces that define what your application needs from the outside world. A repository port defines persistence operations. An email port defines how to send messages.
Adapters implement those interfaces. A PostgreSQL adapter fulfills the repository port. An SMTP adapter fulfills the email port.
The domain sits at the center, containing pure business logic with no knowledge of infrastructure.
The dependency rule: everything points inward. Infrastructure depends on application. Application depends on domain. Never the reverse.
```mermaid
flowchart TB
    P[Presenters] --> A[Application]
    A --> D[Domain]
    I[Infrastructure] --> A
    I -.-|implements| ports([Ports])
    A -.-|defines| ports
```
That’s the essential mental model.
The Ayunis Core Structure
Here’s how this looks in practice. Ayunis Core is a NestJS backend with 1,106 TypeScript files across 47 modules. The structure:
```
src/
├── domain/                  # 15 domain modules
│   └── agents/              # Example module
│       ├── domain/          # Pure business logic
│       ├── application/     # Use cases + ports
│       │   ├── ports/
│       │   └── use-cases/
│       ├── infrastructure/  # Adapters
│       │   └── persistence/
│       └── presenters/      # HTTP layer
│           └── http/
├── common/                  # Cross-cutting infrastructure
└── iam/                     # Auth & org management
```

Every domain module follows the same four-layer pattern. Every layer has a clear purpose. No exceptions.
Some metrics that matter for AI context:
- Average domain entity: 67 lines
- Average use case: 50-100 lines
- Each use case lives in its own directory with its command/query and tests
Small, focused files. Predictable locations. Explicit contracts everywhere.
Why AI Assistants Love This Structure
The File Tree Is Documentation
AI can understand the entire system topology from the directory structure alone.
```
domain/agents/application/use-cases/create-agent/
├── create-agent.command.ts
├── create-agent.use-case.ts
└── create-agent.use-case.spec.ts
```

The path `domain/agents/application/use-cases/create-agent/` tells you exactly what that code does. No searching. No “let me look for where agents are created.” The structure is the documentation.
Same principle applies everywhere:
- `infrastructure/persistence/local/` → local database adapter
- `application/ports/agent.repository.ts` → the contract for agent persistence
- `presenters/http/dto/` → HTTP request/response shapes
When I ask Claude to add a feature, it reads the directory tree first. In Ayunis Core, that tree provides a complete map. In less structured codebases, the tree is noise—you need to read actual files to understand relationships.
Small Files Keep Context Lean
LLMs have context limits. Every token matters. Smaller files mean more relevant code fits in the context window.
Compare:
- Ayunis Core: Domain entities average 67 lines. Use cases are single-purpose, 50-100 lines.
- Typical NestJS: Service files commonly hit 500+ lines, mixing CRUD operations, business logic, and infrastructure concerns.
When Claude reads create-agent.use-case.ts, it gets everything relevant to creating an agent and nothing else. No scrolling past 400 lines of unrelated operations to find the logic that matters.
This compounds. When AI can hold more relevant files in context, it reasons better about how they interact.
Abstract Ports Constrain the Solution Space
Here’s a port definition from Ayunis Core:
```typescript
export abstract class AgentRepository {
  abstract create(agent: Agent): Promise<Agent>;
  abstract findOne(id: UUID, userId: UUID): Promise<Agent | null>;
  abstract update(agent: Agent): Promise<Agent>;
  abstract delete(agentId: UUID, userId: UUID): Promise<void>;
}
```

When AI sees this abstract class, it knows exactly what operations exist. It can’t accidentally use a method that doesn’t exist. It can’t hallucinate a `findByName` that was never implemented.
The payoff: AI suggests this.agentRepository.create(agent) and it’s guaranteed to work. The type system and the port contract together eliminate entire categories of errors.
In codebases without clear abstractions, AI frequently suggests methods that don’t exist or calls with wrong signatures. It’s guessing based on common patterns. With explicit ports, there’s nothing to guess.
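To make the constraint concrete, here is a minimal sketch of an adapter fulfilling the port above. `InMemoryAgentRepository` and the reduced `Agent` shape are illustrative, not the actual Ayunis Core types:

```typescript
// Minimal illustrative types; the real Agent entity is richer.
type UUID = string;

interface Agent {
  id: UUID;
  userId: UUID;
  name: string;
}

abstract class AgentRepository {
  abstract create(agent: Agent): Promise<Agent>;
  abstract findOne(id: UUID, userId: UUID): Promise<Agent | null>;
  abstract update(agent: Agent): Promise<Agent>;
  abstract delete(agentId: UUID, userId: UUID): Promise<void>;
}

// Hypothetical in-memory adapter. Because it extends the port, the
// compiler rejects any adapter that strays from the declared contract,
// and callers can only use the four declared operations.
class InMemoryAgentRepository extends AgentRepository {
  private agents = new Map<UUID, Agent>();

  async create(agent: Agent): Promise<Agent> {
    this.agents.set(agent.id, agent);
    return agent;
  }

  async findOne(id: UUID, userId: UUID): Promise<Agent | null> {
    const agent = this.agents.get(id);
    return agent && agent.userId === userId ? agent : null;
  }

  async update(agent: Agent): Promise<Agent> {
    this.agents.set(agent.id, agent);
    return agent;
  }

  async delete(agentId: UUID, userId: UUID): Promise<void> {
    const agent = await this.findOne(agentId, userId);
    if (agent) this.agents.delete(agentId);
  }
}
```

The same compile-time contract that protects human contributors is what stops an AI assistant from inventing methods.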
Strict Layering Means Predictable Dependencies
The dependency rule isn’t just architectural preference—it’s a constraint AI can observe and follow.
When adding a new feature, AI correctly:
- Creates the domain entity in `domain/`
- Adds the port in `application/ports/`
- Creates the use case in `application/use-cases/`
- Implements the adapter in `infrastructure/`
- Wires everything in the module
It doesn’t try to import TypeORM decorators into a domain entity. It doesn’t call infrastructure code from a use case. The visible structure makes violations obvious.
I’ve watched Claude implement features in Ayunis Core with zero guidance on the architecture. It infers the pattern from what exists and follows it. In less structured codebases, the same assistant needs explicit instructions about where to put things—and still makes mistakes.
Mappers Make Transformations Explicit
Every layer has its own data shapes. Domain entities aren’t the same as database records. DTOs aren’t the same as domain entities. The mappings between them are explicit:
```typescript
// Domain ↔ Database
class AgentMapper {
  toDomain(record: AgentRecord): Agent {
    /* ... */
  }
  toRecord(agent: Agent): AgentRecord {
    /* ... */
  }
}

// Domain ↔ HTTP
class AgentDtoMapper {
  toDto(agent: Agent): AgentResponseDto {
    /* ... */
  }
}
```

No magic. No auto-mapping. No implicit conversions that work until they don’t.
AI sees exactly what shape data has at each layer. It doesn’t need to guess how a database record becomes a domain entity—the mapper is right there, explicitly named, in a predictable location.
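For illustration, here is what a filled-in mapper might look like. The snake_case columns on `AgentRecord` and the fields on `Agent` are assumptions, not the actual Ayunis Core schema:

```typescript
// Hypothetical record and entity shapes for illustration only.
interface AgentRecord {
  id: string;
  user_id: string;      // database convention: snake_case columns
  display_name: string;
}

interface Agent {
  id: string;
  userId: string;       // domain convention: camelCase fields
  name: string;
}

// Each direction is a plain, readable function. Renames like
// display_name ↔ name are visible at a glance instead of hidden
// behind an auto-mapping library.
class AgentMapper {
  toDomain(record: AgentRecord): Agent {
    return { id: record.id, userId: record.user_id, name: record.display_name };
  }

  toRecord(agent: Agent): AgentRecord {
    return { id: agent.id, user_id: agent.userId, display_name: agent.name };
  }
}
```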
A Concrete Example
Adding a feature to Ayunis Core with AI assistance.
Scenario: Add “agent templates”—predefined agent configurations users can clone.
I describe what I want. Claude:
- Reads the `agents/` module structure
- Creates `agent-template.entity.ts` in `domain/`
- Creates an `AgentTemplateRepository` port in `application/ports/`
- Creates a `CreateAgentFromTemplate` use case
- Implements `LocalAgentTemplateRepository` in `infrastructure/persistence/local/`
- Adds the controller endpoint in `presenters/http/`
- Wires the providers in `agents.module.ts`
It followed the pattern because the pattern was visible. I didn’t need a prompt explaining hexagonal architecture or a reference document describing where files go. The structure taught the structure.
The Trade-offs
Being honest: hexagonal architecture has costs.
More Files, More Boilerplate
Creating a new use case means at minimum:
- The command/query class
- The use case class
- A test file
- Module provider registration
Simple CRUD operations feel over-engineered. For a basic “update user email” feature, you’ll create several files where a single service method might suffice in a simpler architecture.
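To show the boilerplate honestly, here is a hedged sketch of what that “update user email” feature might look like in this style. `UpdateUserEmailCommand`, `UserRepository`, and `UpdateUserEmailUseCase` are illustrative names, not code from Ayunis Core:

```typescript
type UUID = string;

interface User {
  id: UUID;
  email: string;
}

// The command: an immutable bag of inputs, one file on its own.
class UpdateUserEmailCommand {
  constructor(
    readonly userId: UUID,
    readonly newEmail: string,
  ) {}
}

// The port the use case depends on, another file.
abstract class UserRepository {
  abstract findOne(id: UUID): Promise<User | null>;
  abstract update(user: User): Promise<User>;
}

// The use case itself: one public method, one responsibility.
class UpdateUserEmailUseCase {
  constructor(private readonly userRepository: UserRepository) {}

  async execute(command: UpdateUserEmailCommand): Promise<User> {
    const user = await this.userRepository.findOne(command.userId);
    if (!user) throw new Error(`User ${command.userId} not found`);
    user.email = command.newEmail;
    return this.userRepository.update(user);
  }
}
```

Three small files plus a test and module registration, where a plain service would need one method. That is the cost side of the ledger.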
The setup cost is real. Don’t hexagonal-ify a weekend project or a script you’ll run twice.
Learning Curve
Developers unfamiliar with the pattern need time. “Where does this go?” is a common question for the first few weeks.
But here’s the thing: the structure eventually answers those questions. Once you internalize the four layers and the dependency rule, the answers become obvious. The explicit structure that feels like overhead early becomes self-documenting later.
NestJS Friction
NestJS wants services. Hexagonal architecture wants use cases. They can coexist, but there’s ceremony:
- Abstract classes for ports (to support NestJS DI)
- Module providers mapping ports to adapters
- Some awkwardness around request-scoped dependencies
It works. But it’s not what the framework was designed for.
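The port-to-adapter mapping ceremony might look something like this sketch of a module declaration; the class names and import paths are illustrative, not taken from Ayunis Core:

```typescript
import { Module } from '@nestjs/common';
// Illustrative paths, mirroring the directory layout described above.
import { AgentRepository } from './application/ports/agent.repository';
import { LocalAgentRepository } from './infrastructure/persistence/local/local-agent.repository';

@Module({
  providers: [
    // The port is an abstract class rather than an interface because
    // interfaces are erased at compile time; an abstract class survives
    // as a runtime value and can serve as the DI token.
    { provide: AgentRepository, useClass: LocalAgentRepository },
  ],
  exports: [AgentRepository],
})
export class AgentsModule {}
```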
Getting Started
If you want to try this:
Start small. Pick one module to restructure. Establish the four-layer pattern:
```
feature/
├── domain/
│   └── feature.entity.ts
├── application/
│   ├── ports/
│   │   └── feature.repository.ts
│   └── use-cases/
│       └── create-feature/
├── infrastructure/
│   └── persistence/
│       └── local-feature.repository.ts
└── presenters/
    └── http/
        └── feature.controller.ts
```

Name things consistently:
- `*.entity.ts` — domain models
- `*.use-case.ts` — application logic
- `*.port.ts` or `*.repository.ts` — abstractions
- `*.record.ts` — database schemas (separate from entities)
- `*.mapper.ts` — layer transformations
Add ports for external dependencies. Any time you’d inject a service that talks to the outside world—database, API, file system—create a port first. Implement the adapter separately.
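A minimal sketch of that port-first habit, using hypothetical names (`EmailPort`, `ConsoleEmailAdapter`): define the contract first, then write the cheapest adapter that satisfies it.

```typescript
// The port: what the application needs, nothing about how it happens.
abstract class EmailPort {
  abstract send(to: string, subject: string, body: string): Promise<void>;
}

// A throwaway adapter is enough to get started; an SMTP adapter can
// replace it later without touching any application code.
class ConsoleEmailAdapter extends EmailPort {
  sent: Array<{ to: string; subject: string }> = [];

  async send(to: string, subject: string, body: string): Promise<void> {
    this.sent.push({ to, subject });
    console.log(`[email] to=${to} subject=${subject} body=${body}`);
  }
}
```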
Let it grow organically. You don’t need to restructure your entire codebase. The pattern pays off module by module.
The Argument
The traditional case for hexagonal architecture: testability, flexibility, separation of concerns. All valid.
The newer case: it maximizes what you get from AI assistants.
Structure your code so AI can navigate it efficiently—explicit contracts, small focused files, predictable locations, no magic—and you’ll spend less time correcting its guesses and more time shipping.