EvoAgentX Quickstart Guide¶
This quickstart guide will walk you through the essential steps to set up and start using EvoAgentX. In this tutorial, you'll learn how to:
- Configure your API keys to access LLMs
- Automatically create and execute workflows
Installation¶
Please refer to the Installation Guide for more details about the installation.
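If you just want to get started quickly, a typical installation looks like the following. This is a sketch that assumes the package is published on PyPI under the name evoagentx; check the Installation Guide for the authoritative install command.

```bash
# Assumed PyPI package name; see the Installation Guide for the exact command
pip install evoagentx
```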
API Key & LLM Setup¶
The first step in running a workflow with EvoAgentX is configuring your API keys so the framework can access LLMs. There are two recommended ways to do this:
Method 1: Set Environment Variables in the Terminal¶
This method sets the API key directly in your system environment.
For Linux/macOS:
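```bash
export OPENAI_API_KEY=<your-openai-api-key>
```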
For Windows Command Prompt:
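```cmd
set OPENAI_API_KEY=<your-openai-api-key>
```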
For Windows PowerShell:
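```powershell
$env:OPENAI_API_KEY = "<your-openai-api-key>"
```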
Once set, you can access the key in your Python code with:
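```python
import os

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```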
Method 2: Use a `.env` File¶
You can also store your API key in a `.env` file inside the root folder of your project.
Create a file named `.env` with the following content:
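```
OPENAI_API_KEY=<your-openai-api-key>
```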
Then, in your Python code, you can load the environment settings using `python-dotenv`:
```python
from dotenv import load_dotenv
import os

load_dotenv()  # Loads environment variables from the .env file
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
Note: Never commit your `.env` file to a public platform (e.g., GitHub). Add it to `.gitignore`.
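For example, adding the following line to your `.gitignore` keeps the file out of version control:

```gitignore
.env
```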
Configure and Use the LLM in EvoAgentX¶
Once your API key is configured, you can initialize and use the LLM as follows:
```python
import os

from evoagentx.models import OpenAILLMConfig, OpenAILLM

# Load the API key from the environment
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define the LLM configuration
openai_config = OpenAILLMConfig(
    model="gpt-4o-mini",        # Specify the model name
    openai_key=OPENAI_API_KEY,  # Pass the key directly
    stream=True,                # Enable streaming responses
    output_response=True        # Print the response to stdout
)

# Initialize the language model
llm = OpenAILLM(config=openai_config)

# Generate a response from the LLM
response = llm.generate(prompt="What is Agentic Workflow?")
```
You can find more details about supported LLM types and their parameters in the LLM module guide.
Automatic WorkFlow Generation and Execution¶
Once your API key and language model are configured, you can automatically generate and execute agentic workflows in EvoAgentX. This section walks you through the core steps: generating a workflow from a goal, instantiating agents, and running the workflow to get results.
First, let's import the necessary modules:
```python
from evoagentx.workflow import WorkFlowGenerator, WorkFlowGraph, WorkFlow
from evoagentx.agents import AgentManager
```
Step 1: Generate WorkFlow and Agents¶
Use the `WorkFlowGenerator` to automatically create a workflow based on a natural language goal:
goal = "Generate html code for the Tetris game that can be played in the browser."
wf_generator = WorkFlowGenerator(llm=llm)
workflow_graph: WorkFlowGraph = wf_generator.generate_workflow(goal=goal)
`WorkFlowGraph` is a data structure that stores the overall workflow plan, including task nodes and their relationships, but does not yet include executable agents.
You can optionally visualize or save the generated workflow:
```python
# Visualize the workflow structure (optional)
workflow_graph.display()

# Save the workflow to a JSON file (optional)
workflow_graph.save_module("/path/to/save/workflow_demo.json")
```
Step 2: Create and Manage Executable Agents¶
Use `AgentManager` to instantiate and manage agents based on the workflow graph:
```python
agent_manager = AgentManager()
agent_manager.add_agents_from_workflow(workflow_graph, llm_config=openai_config)
```
Step 3: Execute the Workflow¶
Once agents are ready, you can create a `WorkFlow` instance and run it:
```python
workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm)
output = workflow.execute()
print(output)
```
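Since the example goal asks for HTML, you may also want to save the workflow output so it can be opened in a browser. Below is a minimal sketch, assuming `output` is returned as a plain string; the file name `tetris_game.html` is only an illustration:

```python
# Save the generated HTML to a file (file name chosen for illustration)
with open("tetris_game.html", "w", encoding="utf-8") as f:
    f.write(output)
```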
For a complete working example, check out the full workflow demo.