AgentVerse is a lightweight library for building Multimodal Agents with memory, knowledge and tools.
When building AI products, 80% of your solution will be standard Python code, and the remaining 20% will use Agents for automation. AgentVerse is designed for such use cases.
Write your AI logic using familiar programming constructs (if, else, while, for) and avoid complex abstractions like graphs and chains. Here's a simple Agent that can search the web:
```python websearch_agent.py
from agentverse.agent import Agent
from agentverse.models.openai import OpenAIChat
from agentverse.tools.duckduckgo import DuckDuckGoTools

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],
    markdown=True,
)
agent.print_response("What's happening in New York?", stream=True)
```
AgentVerse is designed to be simple, fast and model agnostic. Install it with:
```shell
pip install -U agentverse
```
Agents are AI programs that execute tasks autonomously. They solve problems by running tools, accessing knowledge and memory to improve responses. Unlike traditional programs that follow a predefined execution path, agents dynamically adapt their approach based on context, knowledge and tool results.
Instead of a rigid binary definition, let's think of Agents in terms of agency and autonomy.
```python
from agentverse.agent import Agent
from agentverse.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are an enthusiastic news reporter with a flair for storytelling!",
    markdown=True,
)
agent.print_response("Tell me about a breaking news story from New York.", stream=True)
```
To run the agent, install dependencies and export your OPENAI_API_KEY.
```shell
pip install agentverse openai

export OPENAI_API_KEY=sk-xxxx

python basic_agent.py
```
View this example, Basic Agent
This basic agent will obviously make up a story; let's give it a tool to search the web.
```python
from agentverse.agent import Agent
from agentverse.models.openai import OpenAIChat
from agentverse.tools.duckduckgo import DuckDuckGoTools

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are an enthusiastic news reporter with a flair for storytelling!",
    tools=[DuckDuckGoTools()],
    show_tool_calls=True,
    markdown=True,
)
agent.print_response("Tell me about a breaking news story from New York.", stream=True)
```
Install dependencies and run the Agent:
```shell
pip install duckduckgo-search

python agent_with_tools.py
```
Now you should see a much more relevant result.
View this example, Agent with Tools
Agents can store knowledge in a vector database and use it for RAG or dynamic few-shot learning.
AgentVerse agents use Agentic RAG by default, which means they will search their knowledge base for the specific information they need to achieve their task.
```python
from agentverse.agent import Agent
from agentverse.models.openai import OpenAIChat
from agentverse.embedder.openai import OpenAIEmbedder
from agentverse.tools.duckduckgo import DuckDuckGoTools
from agentverse.knowledge.pdf_url import PDFUrlKnowledgeBase
from agentverse.vectordb.lancedb import LanceDb, SearchType

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    description="You are a Thai cuisine expert!",
    instructions=[
        "Search your knowledge base for Thai recipes.",
        "If the question is better suited for the web, search the web to fill in gaps.",
        "Prefer the information in your knowledge base over the web results.",
    ],
    knowledge=PDFUrlKnowledgeBase(
        urls=["https://agentverse-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
        vector_db=LanceDb(
            uri="tmp/lancedb",
            table_name="recipes",
            search_type=SearchType.hybrid,
            embedder=OpenAIEmbedder(id="text-embedding-3-small"),
        ),
    ),
    tools=[DuckDuckGoTools()],
    show_tool_calls=True,
    markdown=True,
)

# Load the knowledge base on first run
if agent.knowledge is not None:
    agent.knowledge.load()

agent.print_response("How do I make chicken and galangal in coconut milk soup?", stream=True)
agent.print_response("What is the history of Thai curry?", stream=True)
```
Install dependencies and run the Agent:
```shell
pip install lancedb tantivy pypdf duckduckgo-search

python agent_with_knowledge.py
```
View this example, Agent With Knowledge
Agents work best when they have a singular purpose, a narrow scope and a small number of tools. When the number of tools grows beyond what the language model can handle or the tools belong to different categories, use a team of agents to spread the load.
```python
from agentverse.agent import Agent
from agentverse.models.openai import OpenAIChat
from agentverse.tools.duckduckgo import DuckDuckGoTools
from agentverse.tools.yfinance import YFinanceTools

web_agent = Agent(
    name="Web Agent",
    role="Search the web for information",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],
    instructions="Always include sources",
    show_tool_calls=True,
    markdown=True,
)

finance_agent = Agent(
    name="Finance Agent",
    role="Get financial data",
    model=OpenAIChat(id="gpt-4o"),
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True)],
    instructions="Use tables to display data",
    show_tool_calls=True,
    markdown=True,
)

agent_team = Agent(
    team=[web_agent, finance_agent],
    model=OpenAIChat(id="gpt-4o"),
    instructions=["Always include sources", "Use tables to display data"],
    show_tool_calls=True,
    markdown=True,
)

agent_team.print_response("What's the market outlook and financial performance of AI semiconductor companies?", stream=True)
```
Install dependencies and run the Agent team:
```shell
pip install duckduckgo-search yfinance

python agent_team.py
```
At AgentVerse, we're obsessed with performance. Why? Because even simple AI workflows can spawn thousands of Agents to achieve their goals. Scale that to a modest number of users and performance becomes a bottleneck. AgentVerse is designed to power high-performance agentic systems.
Tested on an Apple M4 MacBook Pro.
While an Agent's run-time is bottlenecked by inference, we must do everything possible to minimize execution time, reduce memory usage, and parallelize tool calls. These numbers may seem trivial at first, but our experience shows that they add up even at a reasonably small scale.
Let's measure the time it takes for an Agent with 1 tool to start up. We'll run the evaluation 1000 times to get a baseline measurement.
You should run the evaluation yourself on your own machine; please do not take these results at face value.
```shell
# Setup the performance evaluation environment
./scripts/perf_setup.sh
source .venvs/perfenv/bin/activate

# AgentVerse
python evals/performance/instantiation_with_tool.py

# LangGraph
python evals/performance/other/langgraph_instantiation.py
```
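The scripts above run the packaged evals. As a rough sketch of how such a startup measurement works, here is a minimal version that times construction of a stand-in class (`StubAgent` is hypothetical, used only so the sketch runs without the library installed):

```python
# Minimal sketch of the instantiation benchmark: construct the object
# 1000 times and report the average wall-clock time per construction.
# StubAgent is a hypothetical stand-in for Agent(model=..., tools=[...]).
from time import perf_counter

class StubAgent:
    def __init__(self):
        self.tools = [object()]  # pretend we registered one tool

runs = 1000
start = perf_counter()
for _ in range(runs):
    StubAgent()
avg = (perf_counter() - start) / runs
print(f"average instantiation time: {avg:.6f}s")
```

The real eval scripts follow the same shape, but instantiate the actual Agent classes of each framework.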
The following evaluation is run on an Apple M4 MacBook Pro. It also runs as a GitHub action on this repo.
LangGraph is on the right; let's start it first and give it a head start.
AgentVerse is on the left. Notice how it finishes before LangGraph gets halfway through the runtime measurement, and hasn't even started the memory measurement. That's how fast AgentVerse is.
https://github.com/user-attachments/assets/ba466d45-75dd-45ac-917b-0a56c5742e23
Dividing the average time of a LangGraph Agent by the average time of an AgentVerse Agent:
0.020526s / 0.000002s ~ 10,263
In this particular run, AgentVerse Agent startup is roughly 10,000 times faster than LangGraph Agent startup. The numbers continue to favor AgentVerse as the number of tools grows and as we add memory and knowledge stores.
To measure memory usage, we use the tracemalloc library. We first calculate a baseline memory usage by running an empty function, then run the Agent 1000 times and calculate the difference. This gives a (reasonably) isolated measurement of the memory usage of the Agent.
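That methodology can be sketched in a few lines. `StubAgent` below is a hypothetical stand-in for the real Agent (the actual evals live in evals/performance/):

```python
# Sketch of the tracemalloc methodology described above: measure a
# baseline from an empty function, measure the object under test,
# and take the difference as the per-Agent memory cost.
import tracemalloc

def empty():
    pass

class StubAgent:  # hypothetical stand-in for the real Agent
    def __init__(self):
        self.tools = list(range(10))  # pretend per-agent state

def avg_bytes(fn, runs=1000):
    tracemalloc.start()
    kept = [fn() for _ in range(runs)]  # keep results alive while measuring
    current, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current / runs

baseline = avg_bytes(empty)
with_agent = avg_bytes(StubAgent)
print(f"~{with_agent - baseline:.0f} bytes per agent (isolated)")
```

Keeping the constructed objects alive inside `avg_bytes` matters: it ensures the traced allocations reflect live per-Agent state rather than already-freed temporaries.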
We recommend running the evaluation yourself on your own machine, and digging into the code to see how it works. If we've made a mistake, please let us know.
Dividing the average memory usage of a LangGraph Agent by the average memory usage of an AgentVerse Agent:

0.137273 / 0.002528 ~ 54.3
LangGraph Agents use ~50x more memory than AgentVerse Agents. In our opinion, memory usage is a much more important metric than instantiation time. As we start running thousands of Agents in production, these numbers directly affect the cost of running the Agents.
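For reference, both ratios above can be reproduced from the reported averages (numbers from this particular run; yours will differ):

```python
# Averages reported above (runtime in seconds; memory units as
# produced by the evals).
langgraph_time, agentverse_time = 0.020526, 0.000002
langgraph_mem, agentverse_mem = 0.137273, 0.002528

speedup = langgraph_time / agentverse_time  # ~10,263
mem_ratio = langgraph_mem / agentverse_mem  # ~54.3
print(f"startup speedup: ~{speedup:,.0f}x, memory ratio: ~{mem_ratio:.1f}x")
```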
AgentVerse agents are designed for performance and while we do share some benchmarks against other frameworks, we should be mindful that accuracy and reliability are more important than speed.
We'll be publishing accuracy and reliability benchmarks running on GitHub Actions in the coming weeks. Given that each framework is different and we won't be able to tune their performance like we do with AgentVerse, for future benchmarks we'll only be comparing against ourselves.
When building AgentVerse agents, using the AgentVerse documentation as a source in Cursor is a great way to speed up your development. In Cursor's settings, add https://docs.agentverseai.app to the list of documentation URLs. Cursor will then have access to the AgentVerse documentation.
AgentVerse logs which model an agent used so we can prioritize updates to the most popular providers. You can disable this by setting AGNO_TELEMETRY=false in your environment.