2 Design Patterns Every AI Agent Needs (And Most Don't Have)

AI agents are everywhere. But knowing how to build them well? That's a different conversation. In the first session of our webinar series on agentic AI, our experts at Incorta broke down two foundational design patterns every enterprise AI team should understand: Human in the Loop and Agent-to-Agent Communication (A2A).

Here's what they covered (and why it matters).

First, a Quick Refresher: From LLMs to Agentic Workflows

Large language models (LLMs) have come a long way from their early days as chatbots. While those first chatbot applications sparked widespread excitement in AI, they came with real limitations - narrow context windows, static training data, and an inability to interact with the world beyond generating text.

The evolution since then has moved through four distinct phases:

  1. Simple LLM workflows: basic question-and-answer with limited tool use
  2. Retrieval-Augmented Generation (RAG): grounding outputs in dynamic, relevant context
  3. AI agents: systems with memory, planning, reasoning, and tool access
  4. Multi-agent pipelines: networks of cooperating agents working toward a shared goal

This fourth phase - agentic AI at scale - is where things get both powerful and complex. And that complexity is exactly why design patterns matter.

"When we get a new problem, we don't have to come up with a new architecture from scratch. We can just find the closest design pattern, use it, and benefit from all the wisdom of everyone who's applied it before."

Design Pattern #1: Human in the Loop - The Safety Valve for Agentic AI

Autonomy is the goal. But in the enterprise world - especially in finance, legal, and compliance - unchecked autonomy is a liability.

We frame the Human in the Loop (HITL) pattern simply: use it when the cost of an error is higher than the need for speed.

The pattern works like this:

  • ~90% of the time, the agent handles requests end-to-end with no interruption. User prompts, agent answers, workflow completes.
  • The other 10% - when the agent detects ambiguity or a high-stakes decision - the workflow physically stops. The agent surfaces the uncertainty to the human, who can then clarify, correct, or approve before the agent continues.

This transforms the human's role from passive requester to active expert in the loop.

Two Moments That Trigger a Pause

1. The Ambiguity Check: When a request is technically answerable but semantically vague - say, "show me total sales for my favorite product category" - rather than guessing (and potentially hallucinating), the agent surfaces an input prompt asking for clarification. Once the user responds, the workflow resumes with an accurate answer.

2. The Strategic Dead End: When an agent hits a genuine wall - like being asked for "the longest flight distance" in a retail dataset that contains no flight data - it doesn't crash or fabricate a number. It analyzes the situation, presents the user with strategic options (pivot the query, use a different dataset, cancel the request), and continues from where it stopped once a decision is made.
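The dead-end pause can be pictured the same way. The option list and the `choose_option` callback below are invented for illustration:

```python
# Illustrative sketch of the "strategic dead end" pause: instead of
# fabricating an answer, the agent presents options and resumes only
# after the human decides.

def handle_dead_end(request: str, reason: str, choose_option) -> str:
    options = [
        "Pivot the query to data that exists in this dataset",
        "Switch to a different dataset",
        "Cancel the request",
    ]
    choice = choose_option(reason, options)  # workflow stops here
    if choice == 2:
        return "Request cancelled."
    # Resume from where the workflow stopped, using the chosen strategy.
    return f"Resuming '{request}' via: {options[choice]}"
```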

The key shift: the agent treats the human as a partner, not just a prompter.

Design Pattern #2: Agent-to-Agent Communication (A2A) - The Teamwork Engine

Once trust and safety are handled by HITL, the next challenge is scale. And a single agent, no matter how capable, can't do everything.

Ask one agent to write code, do research, manage a project, and handle exceptions simultaneously - and you'll get confusion and mistakes. The answer isn't a bigger agent; it's a team of specialized ones.

Agent-to-Agent Communication (A2A) is the open standard protocol that makes this teamwork possible. Importantly, it's platform-agnostic: an agent built by one company can collaborate with an agent from a completely different company, regardless of which underlying AI model they use.

The Four Steps of A2A Communication

1. Discovery: Like looking up a phone book - before an agent asks for help, it queries a registry to find out which other agents are available.

2. Identity: Each agent shares an "agent card" - a structured introduction listing its name, capabilities, and the specific tasks it can perform.

3. Communication: Agents begin assigning tasks to each other and collaborating, either synchronously (for quick tasks) or asynchronously (for heavy, long-running jobs).

4. Security: Every agent interaction in an enterprise context is encrypted and logged, producing a full audit trail of who did what and when - essential for regulated industries.
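The "agent card" in step 2 can be pictured as a small structured document that other agents fetch during discovery. The fields below follow the spirit of the A2A spec but are simplified and abbreviated - treat them as an assumption, not the normative schema:

```python
# A simplified agent card: a structured self-introduction listing the
# agent's name, capabilities, and skills. Field names are abbreviated;
# see the A2A specification for the normative schema.
agent_card = {
    "name": "sales-analytics-agent",
    "description": "Answers sales questions over the retail warehouse",
    "url": "https://agents.example.com/sales",  # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "total-sales", "description": "Aggregate sales by dimension"},
        {"id": "generate-sql", "description": "Produce auditable SQL"},
    ],
}

def can_handle(card: dict, skill_id: str) -> bool:
    """Discovery check: does this agent advertise the needed skill?"""
    return any(s["id"] == skill_id for s in card["skills"])
```

A requesting agent would run `can_handle` over the cards it discovers to pick a collaborator before assigning a task.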

What the Protocol Looks Like in Practice

Under the hood, A2A runs on standard JSON-RPC. A request is a simple, platform-independent message (e.g., "What are the total sales for bikes?"). The response isn't a simple chat reply - it's a structured data artifact that includes the answer and the evidence: the logic used, the SQL generated, and a full record of how the result was produced.
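That request/response shape can be sketched as plain JSON-RPC 2.0 payloads. The method name, artifact fields, and sales figure below are illustrative stand-ins, not taken verbatim from the spec:

```python
import json

# A JSON-RPC 2.0 request as one agent might send it to another.
request = {
    "jsonrpc": "2.0",
    "id": "req-001",
    "method": "message/send",  # illustrative method name
    "params": {"message": {"role": "user",
                           "text": "What are the total sales for bikes?"}},
}

# The response is a structured artifact, not a chat string: it carries
# the answer plus the evidence of how the result was produced.
response = {
    "jsonrpc": "2.0",
    "id": "req-001",
    "result": {
        "answer": "Total bike sales: 1.2M",  # hypothetical figure
        "artifact": {
            "sql": "SELECT SUM(amount) FROM sales WHERE category = 'bikes'",
            "trace": ["resolved 'bikes' to a category id", "ran the query"],
        },
    },
}

# Because it's standard JSON, any application in the stack can send it.
wire = json.dumps(request)
```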

This matters for enterprise adoption: you're not locked into a single interface. Because the protocol is standard JSON, any application in your tech stack can programmatically access these agents.

Putting It Together

These two patterns complement each other naturally. Human in the Loop ensures that trust and safety are never sacrificed for speed. Agent-to-Agent Communication ensures that complexity doesn't become a bottleneck for scale. Together, they form a foundation for agentic AI that enterprises can actually rely on.

The session's key takeaway: agentic AI is about designing systems that know when to pause, when to ask for help, and how to collaborate effectively.
