🧬 Model Interface¶
evoagentx.models ¶
LLMOutputParser ¶
Bases: Parser
A basic parser for LLM-generated content.
This parser stores the raw text generated by an LLM in the .content attribute and provides methods to extract structured data from this text using different parsing strategies.
Attributes:

| Name | Type | Description |
|---|---|---|
| content | str | The raw text generated by the LLM. |
Source code in evoagentx/core/module.py
get_attrs classmethod ¶
Returns the attributes of the LLMOutputParser class.
Excludes ["class_name", "content"] by default.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| return_type | bool | Whether to return the type of the attributes along with their names. | False |

Returns:

| Type | Description |
|---|---|
| List[Union[str, tuple]] | If return_type is True, returns a list of tuples, each containing the attribute name and its type. Otherwise, returns a list of attribute names. |
Source code in evoagentx/models/base_model.py
get_attr_descriptions classmethod ¶
Returns the attributes and their descriptions.
Returns:

| Type | Description |
|---|---|
| dict | A dictionary mapping attribute names to their descriptions. |
Source code in evoagentx/models/base_model.py
get_content_data classmethod ¶
get_content_data(content: str, parse_mode: str = 'json', parse_func: Optional[Callable] = None, **kwargs) -> dict
Parses LLM-generated content into a dictionary.
This method takes content from an LLM response and converts it to a structured dictionary based on the specified parsing mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| content | str | The content to parse. | required |
| parse_mode | str | The mode to parse the content. Must be one of: 'str' (assigns the raw text content to all attributes of the parser), 'json' (extracts and parses JSON objects from LLM output, returning a dictionary parsed from the first valid JSON string), 'xml' (parses content using XML tags, returning a dictionary parsed from the tags), 'title' (parses content with Markdown-style headings), or 'custom' (uses custom parsing logic; requires providing parse_func). | 'json' |
| parse_func | Optional[Callable] | The function to parse the content, only valid when parse_mode is 'custom'. | None |
| **kwargs | Any | Additional arguments passed to the parsing function. | {} |
Returns:

| Type | Description |
|---|---|
| dict | The parsed content as a dictionary. |

Raises:

| Type | Description |
|---|---|
| ValueError | If parse_mode is invalid, or if parse_func is not provided when parse_mode is 'custom'. |
Source code in evoagentx/models/base_model.py
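To illustrate the 'json' strategy, here is a minimal standalone sketch of extracting the first valid JSON object from LLM output. This is an illustration of the idea, not the library's actual implementation:

```python
import json

def extract_first_json(content: str) -> dict:
    # Scan the text for the first '{' that begins a valid JSON object.
    decoder = json.JSONDecoder()
    for i, ch in enumerate(content):
        if ch == "{":
            try:
                obj, _ = decoder.raw_decode(content, i)
                return obj
            except json.JSONDecodeError:
                continue
    raise ValueError("no valid JSON object found in content")

llm_output = 'Sure! Here is the result:\n{"answer": 42, "unit": "items"}\nLet me know.'
print(extract_first_json(llm_output))  # {'answer': 42, 'unit': 'items'}
```

Note that surrounding prose is ignored: only the first span that parses as JSON is returned.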
parse classmethod ¶
parse(content: str, parse_mode: str = 'json', parse_func: Optional[Callable] = None, **kwargs) -> LLMOutputParser
Parses LLM-generated text into a structured parser instance.
This is the main method for creating parser instances from LLM output.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| content | str | The text generated by the LLM. | required |
| parse_mode | str | The mode to parse the content. Must be one of: 'str' (assigns the raw text content to all attributes of the parser), 'json' (extracts and parses JSON objects from LLM output, using the first valid JSON string to create an instance of LLMOutputParser), 'xml' (parses content using XML tags to create an instance of LLMOutputParser), 'title' (parses content with Markdown-style headings; the default title format is "## {title}", which can be changed via **kwargs), or 'custom' (uses custom parsing logic via parse_func). | 'json' |
| parse_func | Optional[Callable] | The function to parse the content, only valid when parse_mode is 'custom'. | None |
| **kwargs | Any | Additional arguments passed to the parsing functions, such as a custom title format for 'title' mode. | {} |
Returns:

| Type | Description |
|---|---|
| LLMOutputParser | An instance of LLMOutputParser containing the parsed data. |

Raises:

| Type | Description |
|---|---|
| ValueError | If parse_mode is invalid or if content is not a string. |
Source code in evoagentx/models/base_model.py
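As an illustration of the 'title' mode, the sketch below splits content on "## {title}" headings and maps each title to its section body. This is an assumption about the behavior based on the description above, not the library's code:

```python
def split_by_titles(content: str, title_format: str = "## {title}") -> dict:
    # Everything before "{title}" in the format string is the heading prefix.
    prefix = title_format.split("{title}")[0]
    sections, current = {}, None
    for line in content.splitlines():
        if line.startswith(prefix):
            current = line[len(prefix):].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {title: "\n".join(body).strip() for title, body in sections.items()}

text = "## thought\nThe user wants a plan.\n## answer\nStep 1: start."
print(split_by_titles(text))
# {'thought': 'The user wants a plan.', 'answer': 'Step 1: start.'}
```

Each heading becomes an attribute name and each section body its value in the resulting parser instance.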
get_structured_data ¶
Extracts structured data from the parser.
Returns:

| Type | Description |
|---|---|
| dict | A dictionary containing only the defined attributes and their values, excluding metadata like class_name. |
Source code in evoagentx/models/base_model.py
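Conceptually, the extraction amounts to filtering metadata fields out of the instance's attributes; a minimal standalone sketch (the metadata key set is an assumption based on the defaults noted for get_attrs):

```python
METADATA_FIELDS = {"class_name", "content"}  # assumed metadata keys

def get_structured_data(obj) -> dict:
    # Keep only the instance's defined attributes, dropping metadata.
    return {k: v for k, v in vars(obj).items() if k not in METADATA_FIELDS}

class FakeParsed:
    """Toy stand-in for a parsed LLMOutputParser instance."""
    def __init__(self):
        self.class_name = "FakeParsed"
        self.content = "raw llm text"
        self.answer = "42"

print(get_structured_data(FakeParsed()))  # {'answer': '42'}
```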
BaseConfig ¶
Bases: BaseModule
Base configuration class that serves as parent for all configuration classes.
A config should inherit from BaseConfig and specify its attributes and their types; otherwise, it will be an empty config.
Source code in evoagentx/core/module.py
save ¶
Save configuration to the specified path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | The file path to save the configuration. | required |
| **kwargs | Any | Additional keyword arguments passed to the save_module method. | {} |

Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The path where the file was saved. |
Source code in evoagentx/core/base_config.py
get_config_params ¶
Get a list of configuration parameters.
Returns:

| Type | Description |
|---|---|
| List[str] | A list of configuration parameter names, excluding 'class_name'. |
Source code in evoagentx/core/base_config.py
get_set_params ¶
Get a dictionary of explicitly set parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ignore | List[str] | A list of parameter names to ignore. | [] |

Returns:

| Name | Type | Description |
|---|---|---|
| dict | dict | A dictionary of explicitly set parameters, excluding 'class_name' and ignored parameters. |
Source code in evoagentx/core/base_config.py
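The behavior of save and get_set_params can be sketched with a toy config class. The real BaseConfig builds on BaseModule; the class and field names below are purely illustrative:

```python
import json
import os
import tempfile

class ToyConfig:
    """Illustrative stand-in for a BaseConfig subclass."""
    def __init__(self, **kwargs):
        self.class_name = "ToyConfig"
        self._explicitly_set = dict(kwargs)  # track what the caller passed
        for key, value in kwargs.items():
            setattr(self, key, value)

    def get_set_params(self, ignore=()):
        # Explicitly set parameters, minus class_name and anything ignored.
        return {k: v for k, v in self._explicitly_set.items()
                if k != "class_name" and k not in ignore}

    def save(self, path: str) -> str:
        with open(path, "w") as f:
            json.dump(self.get_set_params(), f)
        return path

cfg = ToyConfig(model="gpt-4o-mini", temperature=0.3)
cfg.save(os.path.join(tempfile.gettempdir(), "toy_config.json"))
print(cfg.get_set_params(ignore=["temperature"]))  # {'model': 'gpt-4o-mini'}
```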
LiteLLM ¶
Bases: OpenAILLM
Source code in evoagentx/models/base_model.py
init_model ¶
Initialize the model based on the configuration.
Source code in evoagentx/models/litellm_model.py
single_generate ¶
Generate a single response using the completion function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[dict] | A list of dictionaries representing the conversation history. | required |
| **kwargs | Any | Additional parameters to be passed to the completion function. | {} |

Returns:

| Name | Type | Description |
|---|---|---|
| str | str | A string containing the model's response. |
Source code in evoagentx/models/litellm_model.py
batch_generate ¶
Generate responses for a batch of messages.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch_messages | List[List[dict]] | A list of message lists, where each sublist represents a conversation. | required |
| **kwargs | Any | Additional parameters to be passed to the completion function. | {} |

Returns:

| Type | Description |
|---|---|
| List[str] | A list of responses, one for each conversation. |
Source code in evoagentx/models/litellm_model.py
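A naive reading of batch generation is one completion call per conversation; the sketch below shows that shape, with fake_single_generate standing in for the real LLM call:

```python
from typing import List

def fake_single_generate(messages: List[dict], **kwargs) -> str:
    # Stand-in for an actual LLM completion call: echo the last user message.
    return "echo: " + messages[-1]["content"]

def batch_generate(batch_messages: List[List[dict]], **kwargs) -> List[str]:
    # Sequentially generate one response per conversation.
    return [fake_single_generate(msgs, **kwargs) for msgs in batch_messages]

batch = [
    [{"role": "user", "content": "hi"}],
    [{"role": "user", "content": "bye"}],
]
print(batch_generate(batch))  # ['echo: hi', 'echo: bye']
```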
single_generate_async async ¶
Generate a single response using the async completion function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[dict] | A list of dictionaries representing the conversation history. | required |
| **kwargs | Any | Additional parameters to be passed to the async completion function. | {} |

Returns:

| Name | Type | Description |
|---|---|---|
| str | str | A string containing the model's response. |
Source code in evoagentx/models/litellm_model.py
completion_cost ¶
completion_cost(completion_response=None, prompt='', messages: List = [], completion='', total_time=0.0, call_type='completion', size=None, quality=None, n=None) -> float
Calculate the cost of a given completion or other supported tasks.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| completion_response | dict | The response received from a LiteLLM completion request. | None |
| prompt | str | Input prompt text. | '' |
| messages | list | Conversation history. | [] |
| completion | str | Output text from the LLM. | '' |
| total_time | float | Total time used for the request. | 0.0 |
| call_type | str | Type of request (e.g., "completion", "image_generation"). | 'completion' |
| size | str | Image size for image generation. | None |
| quality | str | Image quality for image generation. | None |
| n | int | Number of generated images. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| float | float | The cost in USD. |
Source code in evoagentx/models/litellm_model.py
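Token-based pricing can be sketched as follows. The per-1k-token prices here are made-up placeholders; real prices come from LiteLLM's model pricing map:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_price_per_1k: float = 0.0005,
                  output_price_per_1k: float = 0.0015) -> float:
    # Cost in USD = (tokens / 1000) * price-per-1k, summed for input and output.
    return (prompt_tokens / 1000.0) * input_price_per_1k \
         + (completion_tokens / 1000.0) * output_price_per_1k

print(round(estimate_cost(2000, 1000), 6))  # 0.0025
```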
BaseLLM ¶
Bases: ABC
Abstract base class for Large Language Model implementations.
This class defines the interface that all LLM implementations must follow, providing methods for generating text, formatting messages, and parsing output.
Attributes:

| Name | Type | Description |
|---|---|---|
| config | | Configuration for the LLM. |
| kwargs | | Additional keyword arguments provided during initialization. |
Initializes the LLM with configuration.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | LLMConfig | Configuration object for the LLM. | required |
| **kwargs | Any | Additional keyword arguments. | {} |
Source code in evoagentx/models/base_model.py
init_model abstractmethod ¶
Initializes the underlying model.
This method should be implemented by subclasses to set up the actual LLM.
__deepcopy__ ¶
__deepcopy__(memo) -> BaseLLM
Handles deep copying of the LLM instance.
Returns the same instance when deepcopy is called, as LLM instances often cannot be meaningfully deep-copied.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| memo | Dict[int, Any] | Memo dictionary used by the deepcopy process. | required |

Returns:

| Type | Description |
|---|---|
| BaseLLM | The same LLM instance. |
Source code in evoagentx/models/base_model.py
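The "return the same instance" behavior is a common pattern for objects that hold network clients, credentials, or locks; a minimal sketch:

```python
import copy

class SharedLLMClient:
    """Toy stand-in for an LLM wrapper that cannot be meaningfully deep-copied."""
    def __deepcopy__(self, memo):
        # Sockets, credentials, and locks should not be duplicated:
        # hand back the original instance instead of a copy.
        return self

client = SharedLLMClient()
print(copy.deepcopy(client) is client)  # True
```

Containers holding such an object can still be deep-copied; only this member is shared between the copies.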
formulate_messages abstractmethod ¶
formulate_messages(prompts: List[str], system_messages: Optional[List[str]] = None) -> List[List[dict]]
Converts input prompts into the chat format compatible with different LLMs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prompts | List[str] | A list of user prompts that need to be converted. | required |
| system_messages | Optional[List[str]] | An optional list of system messages that provide instructions or context to the model. | None |

Returns:

| Type | Description |
|---|---|
| List[List[dict]] | A list of message lists, where each inner list contains messages in the chat format required by LLMs. |
Source code in evoagentx/models/base_model.py
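For the common OpenAI-style chat format, a concrete implementation of this abstract method might look like the sketch below. This is one reasonable subclass behavior, assumed for illustration:

```python
from typing import List, Optional

def formulate_messages(prompts: List[str],
                       system_messages: Optional[List[str]] = None) -> List[List[dict]]:
    # Pair each prompt with its optional system message, OpenAI chat style.
    system_messages = system_messages or [None] * len(prompts)
    batch = []
    for prompt, system in zip(prompts, system_messages):
        messages = []
        if system:
            messages.append({"role": "system", "content": system})
        messages.append({"role": "user", "content": prompt})
        batch.append(messages)
    return batch

print(formulate_messages(["hi"], ["be terse"]))
# [[{'role': 'system', 'content': 'be terse'}, {'role': 'user', 'content': 'hi'}]]
```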
single_generate abstractmethod ¶
Generates LLM output for a single set of messages.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[dict] | The input messages to the LLM in chat format. | required |
| **kwargs | Any | Additional keyword arguments for generation settings. | {} |

Returns:

| Type | Description |
|---|---|
| str | The generated output text from the LLM. |
Source code in evoagentx/models/base_model.py
batch_generate abstractmethod ¶
Generates outputs for a batch of message sets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch_messages | List[List[dict]] | A list of message lists, where each inner list contains messages for a single generation. | required |
| **kwargs | Any | Additional keyword arguments for generation settings. | {} |

Returns:

| Type | Description |
|---|---|
| List[str] | A list of generated outputs from the LLM, one for each input message set. |
Source code in evoagentx/models/base_model.py
single_generate_async async ¶
Asynchronously generates LLM output for a single set of messages.
This default implementation wraps the synchronous method in an async executor. Subclasses should override this for true async implementation if supported.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[dict] | The input messages to the LLM in chat format. | required |
| **kwargs | Any | Additional keyword arguments for generation settings. | {} |

Returns:

| Type | Description |
|---|---|
| str | The generated output text from the LLM. |
Source code in evoagentx/models/base_model.py
batch_generate_async async ¶
Asynchronously generates outputs for a batch of message sets.
This default implementation runs each generation as a separate async task. Subclasses should override this for more efficient async batching if supported.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch_messages | List[List[dict]] | A list of message lists, where each inner list contains messages for a single generation. | required |
| **kwargs | Any | Additional keyword arguments for generation settings. | {} |

Returns:

| Type | Description |
|---|---|
| List[str] | A list of generated outputs from the LLM, one for each input message set. |
Source code in evoagentx/models/base_model.py
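Both default async behaviors described above, wrapping the synchronous call in an executor and fanning out one task per conversation, can be sketched with stdlib asyncio. The stand-in single_generate is illustrative, not the real generation path:

```python
import asyncio
from typing import List

def single_generate(messages: List[dict], **kwargs) -> str:
    # Stand-in for a blocking LLM call.
    return "reply: " + messages[-1]["content"]

async def single_generate_async(messages: List[dict], **kwargs) -> str:
    # Default async path: run the blocking call in an executor thread.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, lambda: single_generate(messages, **kwargs))

async def batch_generate_async(batch_messages: List[List[dict]], **kwargs) -> List[str]:
    # One task per conversation, gathered concurrently.
    tasks = [single_generate_async(msgs, **kwargs) for msgs in batch_messages]
    return await asyncio.gather(*tasks)

batch = [[{"role": "user", "content": "a"}], [{"role": "user", "content": "b"}]]
print(asyncio.run(batch_generate_async(batch)))  # ['reply: a', 'reply: b']
```

asyncio.gather preserves input order, so outputs line up with their conversations even though the tasks run concurrently.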
parse_generated_text ¶
parse_generated_text(text: str, parser: Optional[Type[LLMOutputParser]] = None, parse_mode: Optional[str] = 'json', parse_func: Optional[Callable] = None, **kwargs) -> LLMOutputParser
Parses generated text into a structured output using a parser.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| text | str | The text generated by the LLM. | required |
| parser | Optional[Type[LLMOutputParser]] | An LLMOutputParser class to use for parsing. If None, the default LLMOutputParser is used. | None |
| parse_mode | Optional[str] | The mode to use for parsing; must be one of 'str', 'json', 'xml', 'title', or 'custom'. | 'json' |
| parse_func | Optional[Callable] | The function to parse the content, only valid when parse_mode is 'custom'. | None |
| **kwargs | Any | Additional arguments passed to the parser. | {} |

Returns:

| Type | Description |
|---|---|
| LLMOutputParser | An LLMOutputParser instance containing the parsed data. |
Source code in evoagentx/models/base_model.py
parse_generated_texts ¶
parse_generated_texts(texts: List[str], parser: Optional[Type[LLMOutputParser]] = None, parse_mode: Optional[str] = 'json', parse_func: Optional[Callable] = None, **kwargs) -> List[LLMOutputParser]
Parses multiple generated texts into structured outputs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| texts | List[str] | A list of texts generated by the LLM. | required |
| parser | Optional[Type[LLMOutputParser]] | An LLMOutputParser class to use for parsing. | None |
| parse_mode | Optional[str] | The mode to use for parsing; must be one of 'str', 'json', 'xml', 'title', or 'custom'. | 'json' |
| parse_func | Optional[Callable] | The function to parse the content, only valid when parse_mode is 'custom'. | None |
| **kwargs | Any | Additional arguments passed to the parser. | {} |

Returns:

| Type | Description |
|---|---|
| List[LLMOutputParser] | A list of LLMOutputParser instances containing the parsed data. |
Source code in evoagentx/models/base_model.py
generate ¶
generate(prompt: Optional[Union[str, List[str]]] = None, system_message: Optional[Union[str, List[str]]] = None, messages: Optional[Union[List[dict], List[List[dict]]]] = None, parser: Optional[Type[LLMOutputParser]] = None, parse_mode: Optional[str] = 'json', parse_func: Optional[Callable] = None, **kwargs) -> Union[LLMOutputParser, List[LLMOutputParser]]
Generates LLM output(s) and parses the result(s).
This is the main method for generating text with the LLM. It handles both single and batch generation, and automatically parses the outputs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prompt | Optional[Union[str, List[str]]] | Input prompt(s) to the LLM. | None |
| system_message | Optional[Union[str, List[str]]] | System message(s) for the LLM. | None |
| messages | Optional[Union[List[dict], List[List[dict]]]] | Chat message(s) for the LLM, already in the required format (either List[dict] for a single conversation or List[List[dict]] for a batch). | None |
| parser | Optional[Type[LLMOutputParser]] | Parser class to use for processing the output. | None |
| parse_mode | Optional[str] | The mode to use for parsing; must be one of 'str', 'json', 'xml', 'title', or 'custom'. | 'json' |
| parse_func | Optional[Callable] | The function to parse the content, only valid when parse_mode is 'custom'. | None |
| **kwargs | Any | Additional generation configuration parameters. | {} |

Returns:

| Type | Description |
|---|---|
| Union[LLMOutputParser, List[LLMOutputParser]] | For single generation, an LLMOutputParser instance; for batch generation, a list of LLMOutputParser instances. |

Raises:

| Type | Description |
|---|---|
| ValueError | If the input format is invalid. |
Note
Either prompt or messages must be provided. If both or neither is provided, an error will be raised.
Source code in evoagentx/models/base_model.py
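The single-vs-batch dispatch and the exclusive prompt/messages rule can be sketched like this; the stand-in reply is illustrative, not the real generation path:

```python
from typing import List, Optional, Union

def generate(prompt: Optional[Union[str, List[str]]] = None,
             messages: Optional[Union[List[dict], List[List[dict]]]] = None):
    # Exactly one of prompt / messages must be provided.
    if (prompt is None) == (messages is None):
        raise ValueError("Either prompt or messages must be provided, but not both.")
    if prompt is not None:
        is_batch = isinstance(prompt, list)
        prompts = prompt if is_batch else [prompt]
    else:
        is_batch = isinstance(messages[0], list)
        conversations = messages if is_batch else [messages]
        prompts = [conv[-1]["content"] for conv in conversations]
    outputs = [f"reply: {p}" for p in prompts]  # stand-in for the LLM call
    return outputs if is_batch else outputs[0]

print(generate(prompt="hi"))        # reply: hi
print(generate(prompt=["a", "b"]))  # ['reply: a', 'reply: b']
```

A single input yields a single result; a list input yields a list of results, mirroring the Returns table above.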
async_generate async ¶
async_generate(prompt: Optional[Union[str, List[str]]] = None, system_message: Optional[Union[str, List[str]]] = None, messages: Optional[Union[List[dict], List[List[dict]]]] = None, parser: Optional[Type[LLMOutputParser]] = None, parse_mode: Optional[str] = 'json', parse_func: Optional[Callable] = None, **kwargs) -> Union[LLMOutputParser, List[LLMOutputParser]]
Asynchronously generates LLM output(s) and parses the result(s).
This is the async version of the generate method. It works identically but performs the generation asynchronously.
Source code in evoagentx/models/base_model.py
OpenAILLM ¶
Bases: BaseLLM
Source code in evoagentx/models/base_model.py
get_stream_output ¶
Process stream response and return the complete output.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| response | Stream | The stream response from OpenAI. | required |
| output_response | bool | Whether to print the response in real time. | True |

Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The complete output text. |
Source code in evoagentx/models/openai_model.py
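The accumulation loop can be sketched with plain dicts standing in for stream chunks; the real code reads typed ChatCompletionChunk objects from the OpenAI client, so the dict shape here is an assumption for illustration:

```python
def get_stream_output(response, output_response: bool = True) -> str:
    # Concatenate the incremental text deltas from a streamed completion.
    parts = []
    for chunk in response:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            if output_response:
                print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)

fake_stream = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [{"delta": {}}]},  # final chunks often carry no content
]
print(get_stream_output(fake_stream, output_response=False))  # Hello
```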
get_stream_output_async async ¶
Process async stream response and return the complete output.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| response | AsyncIterator[ChatCompletionChunk] | The async stream response from OpenAI. | required |
| output_response | bool | Whether to print the response in real time. | False |

Returns:

| Name | Type | Description |
|---|---|---|
| str | str | The complete output text. |
Source code in evoagentx/models/openai_model.py
atomic_method ¶
A thread-safe decorator for class methods. If the instance defines self._lock, the decorated method runs while holding that lock; otherwise it executes under nullcontext.
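A decorator with exactly that behavior can be sketched using getattr plus contextlib.nullcontext (the Counter class is illustrative):

```python
import functools
import threading
from contextlib import nullcontext

def atomic_method(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # Use the instance's _lock if present, else a no-op context manager.
        lock = getattr(self, "_lock", None)
        with (lock if lock is not None else nullcontext()):
            return func(self, *args, **kwargs)
    return wrapper

class Counter:
    def __init__(self, locked: bool = True):
        if locked:
            self._lock = threading.Lock()
        self.value = 0

    @atomic_method
    def increment(self):
        self.value += 1

c = Counter()
c.increment()
print(c.value)  # 1
```

Instances without a _lock attribute still work: the decorator silently degrades to unsynchronized execution.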