In this and the following section, we focus on the core concepts of AutoGen: agents, agent runtime, messages, and communication – the foundational building blocks for multi-agent applications.
Let’s create an AI Agent and see what it does.
First of all, an Agent is a software entity, like an Actor, designed to operate autonomously. An AI agent maintains its own state, perceives its environment (e.g., through messages or sensor inputs), and takes actions to achieve specific goals. These actions, such as updating its state, sending messages, executing code, or interacting with external systems, can influence both the agent and its surroundings.
Unlike typical software components, AI agents often exhibit intelligent behaviors, such as learning from experience, adapting to changes, or making decisions in uncertain conditions.
In essence, an AI agent is a goal-directed entity that interacts with its environment, blending autonomy with intelligence (such as interpreting messages, performing reasoning, and executing actions). Many sophisticated software systems can be modeled as collections of such agents, each pursuing its own objectives while collaborating or competing with others.
AI Agent¶
Just to reiterate, an Agent is a software entity, like an Actor, designed to operate autonomously. Let's define an autonomous Agent.
# sample code just for reference
# running AI Agents in < 10 lines of code
# pip install -U "autogen-agentchat" "autogen-ext[openai]"
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"))
    print(await agent.run(task="Say 'Hello World!'"))


asyncio.run(main())

We will proceed with AutoGen Core, with the understanding that we want to develop a complete AI Agent framework to support a complex business use case where we intend to have full control over the entire workflow.
What is the Actor Model?¶
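The Actor model is a classic model of concurrent computation: an actor keeps its own private state, communicates with other actors only through asynchronous messages, and, in response to a message, can update its state, send messages to other actors, or create new actors. This maps naturally onto agents: each agent is an actor with an identity and a mailbox. Below is a minimal, illustrative sketch of that idea using an asyncio queue as the mailbox; the class and method names (SimpleActor, send, run) are ours for illustration and are not part of AutoGen's API.

# illustrative sketch only (not AutoGen's API): a tiny actor with a private mailbox
import asyncio


class SimpleActor:
    def __init__(self, name: str) -> None:
        self.name = name                 # identity (private state)
        self.mailbox = asyncio.Queue()   # messages arrive here asynchronously

    async def send(self, message: str) -> None:
        # Other actors interact with this actor only by putting messages in its mailbox.
        await self.mailbox.put(message)

    async def run(self) -> None:
        # Process one message at a time; handling a message may update state,
        # send messages to other actors, or create new actors.
        while True:
            message = await self.mailbox.get()
            print(f"{self.name} received: {message}")
            if message == "stop":
                break


async def demo() -> None:
    actor = SimpleActor("actor-1")
    worker = asyncio.create_task(actor.run())
    await actor.send("Hello World!")
    await actor.send("stop")
    await worker


asyncio.run(demo())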
AI Agent Implementation¶
Standalone implementation of an AI Agent:

class MyFirstAgent:
    def __init__(self) -> None:
        super().__init__()

    def do_something(self, message: str) -> None:
        print(f"received message: {message}")


FirstAgent = MyFirstAgent()
FirstAgent.do_something("Hello World!")

# standalone implementation of an AI Agent
# add Agent identifier, properties, methods
class MyFirstAgent:
    def __init__(self, name: str, agent_id: str, property: str) -> None:
        self.name = name
        self.agent_id = agent_id
        self.property = property
        # No arguments needed for object.__init__()
        super().__init__()

    def do_something(self, message: str) -> None:
        print(f"received message: {message}")

    def get_something(self, message: str) -> None:
        print(f"Pretend it's incoming message: {message}")

    def set_something(self, message: str) -> None:
        print(f"Pretend it's outgoing message: {message}")

    def run_something(self, message: str) -> None:
        print(f"Pretend this is coming from an API given a query: {message}")


# Instantiate the agent with name, agent_id, and property
FirstAgent = MyFirstAgent("APIGuy", "AgentID123", "runs API")
FirstAgent.do_something("Hello World!")
print(FirstAgent.name, FirstAgent.agent_id, FirstAgent.property)

So, we created our first AI Agent. Strictly speaking, it is nothing but a simple Python class, but we did build some agentic characteristics into it: we gave it an identity (a name and an agent_id), and it does something. Let's say this Agent is an API Agent. If you think about what makes it agentic, an Agent has an identity and is able to communicate (say, send and receive data, a.k.a. messages).
So for now, let's give this Agent a unique identity and enable it with a mechanism to send and receive data (a.k.a. messages). We will worry about making it more intelligent in later steps.
Let's create another sample agent.
# standalone implementation of an AI Agent
class MySecondAgent:
    def __init__(self, name: str, agent_id: str,
                 property: str, model: str,
                 tools: str, RAG: str) -> None:
        self.name = name
        self.agent_id = agent_id
        self.property = property
        self.model = model  # assume llama3.3
        self.tools = tools  # assume a Tool Function
        self.RAG = RAG      # assume ChromaDB
        # No arguments needed for object.__init__()
        super().__init__()

    def do_something(self, message: str) -> None:
        print(f"received message: {message}")

    def get_something(self, message: str) -> None:
        # self.model
        # use llama3.3 LLM model
        print(f"Pretend it's incoming message: {message}")

    def set_something(self, message: str) -> None:
        # self.RAG
        # use RAG to write data into
        print(f"Pretend it's outgoing message: {message}")

    def run_something(self, message: str) -> None:
        # self.tools
        # use tools to run a function
        print(f"Pretend this is coming from an LLM given a prompt: {message}")


# Instantiate the agent with name, agent_id, and other properties
SecondAgent = MySecondAgent("LLMGuy", "AgentID124",
                            "runs LLM inference", "llama3.3",
                            "exmToolFunc", "ChromaDB")
SecondAgent.do_something("Hello World!")
print(SecondAgent.name, SecondAgent.agent_id, SecondAgent.property)

As you can see, these two agents can maintain their own state, can be represented with the Actor model, and can exchange data with each other in the form of messages.
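For example, a simple hand-off between the two agents we just defined could look like the sketch below; it only uses the methods defined above, and the message text is just an illustrative placeholder.

# illustrative hand-off between the two agents defined above
outgoing = "summarize today's API traffic"   # placeholder message content
FirstAgent.set_something(outgoing)           # FirstAgent pretends to send it
SecondAgent.get_something(outgoing)          # SecondAgent pretends to receive it
SecondAgent.run_something(outgoing)          # SecondAgent pretends to act on it (LLM call)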
However, in real life it's not this simple. Here is a graph of 10 such agents; you can imagine how cluttered this graph will look with more than 10 agents, which is often the case in real-world scenarios.