bots.foundation package

Submodules

bots.foundation.anthropic_bots module

Anthropic-specific bot implementation for the bots framework.

This module provides the necessary classes to interact with Anthropic’s Claude models, implementing the base bot interfaces with Anthropic-specific handling for:

  • Message formatting and conversation management

  • Tool integration and execution

  • Cache control for optimal context window usage

  • API communication with retry logic

The main class is AnthropicBot, which provides a complete implementation ready for use with Anthropic’s API. Supporting classes handle specific aspects of the Anthropic integration while maintaining the framework’s abstractions.
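
A minimal quick-start sketch, assuming ANTHROPIC_API_KEY is set in the environment (the prompt text is illustrative):

```python
from bots.foundation.anthropic_bots import AnthropicBot

# Create a bot with the default Claude model and settings
bot = AnthropicBot(name="Claude", temperature=0.3)

# Ask a question; conversation history is tracked automatically
reply = bot.respond("Summarize this framework in one sentence.")
print(reply)

# Persist the complete state (conversation, tools, configuration)
bot.save("claude_demo.bot")
```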

Classes:

  • AnthropicNode: Conversation node implementation for Anthropic’s message format

  • AnthropicToolHandler: Tool management for Anthropic’s function calling format

  • AnthropicMailbox: API communication handler with Anthropic-specific retry logic

  • AnthropicBot: Main bot implementation for Anthropic’s Claude models

  • CacheController: Manages conversation history caching for context optimization

class bots.foundation.anthropic_bots.AnthropicNode(**kwargs: Any)[source]

Bases: ConversationNode

A conversation node implementation specific to Anthropic’s API requirements.

This class extends ConversationNode to handle Anthropic-specific message formatting, including proper handling of tool calls and results in the conversation tree.

Inherits all attributes from ConversationNode
__init__(**kwargs: Any) None[source]

Initialize an AnthropicNode.

Parameters:

**kwargs – Arbitrary keyword arguments passed to parent class.

_add_tool_results(results: List[Dict[str, Any]]) None[source]

Add tool execution results to the conversation node.

For Anthropic bots, tool results must be propagated to reply nodes; when no reply exists yet, they are stored as pending and moved into the reply once one is added.

Parameters:

results – List of tool execution result dictionaries

_build_messages() List[Dict[str, Any]][source]

Build message list for Anthropic API.

Constructs the message history in Anthropic’s expected format, properly handling empty nodes and tool calls. Empty nodes are preserved in the structure but filtered from API messages.

Returns:

List of message dictionaries formatted for Anthropic’s API
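
An illustrative sketch of the returned structure; the content-block keys follow Anthropic’s Messages API rather than anything specific to this class, so treat them as an assumption:

```python
messages = node._build_messages()
# Assumed shape, based on Anthropic's Messages API:
# [
#     {"role": "user", "content": "Read config.json"},
#     {"role": "assistant", "content": [
#         {"type": "text", "text": "I'll read that file."},
#         {"type": "tool_use", "id": "toolu_01", "name": "view_file",
#          "input": {"path": "config.json"}},
#     ]},
#     {"role": "user", "content": [
#         {"type": "tool_result", "tool_use_id": "toolu_01",
#          "content": "...file contents..."},
#     ]},
# ]
```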

class bots.foundation.anthropic_bots.AnthropicToolHandler[source]

Bases: ToolHandler

Tool handler implementation specific to Anthropic’s API requirements.

This class manages the conversion between Python functions and Anthropic’s tool format, handling schema generation, request/response formatting, and error handling.

__init__() None[source]

Initialize an AnthropicToolHandler.

generate_tool_schema(func: Callable) Dict[str, Any][source]

Generate Anthropic-compatible tool schema from a Python function.

Parameters:

func – The Python function to convert into a tool schema

Returns:

Dictionary containing the tool schema in Anthropic’s format with:

  • name: The function name

  • description: The function’s docstring

  • input_schema: Object describing expected parameters

Return type:

Dict[str, Any]
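
A sketch of such a schema for a hypothetical calculate_area tool, assuming Anthropic’s published input_schema format:

```python
{
    "name": "calculate_area",
    "description": "Calculate the area of a circle. Use when...",
    "input_schema": {
        "type": "object",
        "properties": {
            "radius": {"type": "number", "description": "Circle radius"},
        },
        "required": ["radius"],
    },
}
```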

generate_request_schema(response: Any) List[Dict[str, Any]][source]

Generate request schema from an Anthropic API response.

Parameters:

response – The raw response from Anthropic’s API

Returns:

List of request schemas (multiple requests may be in one message)

tool_name_and_input(request_schema: Dict[str, Any]) Tuple[str, Dict[str, Any]][source]

Extract tool name and input parameters from a request schema.

Parameters:

request_schema – The request schema to parse

Returns:

Tuple containing:

  • tool name (str)

  • tool input parameters (dict)

Return type:

Tuple[str, Dict[str, Any]]

generate_response_schema(request: Dict[str, Any], tool_output_kwargs: Dict[str, Any]) Dict[str, Any][source]

Generate response schema for tool execution results.

Parameters:
  • request – The original tool request

  • tool_output_kwargs – The tool’s execution results

Returns:

Dictionary containing the response in Anthropic’s expected format

generate_error_schema(request_schema: Dict[str, Any] | None, error_msg: str) Dict[str, Any][source]

Generate an error response schema in Anthropic’s format.

Parameters:
  • request_schema – Optional original request schema that caused the error

  • error_msg – The error message to include

Returns:

Dictionary containing the error in Anthropic’s expected format

class bots.foundation.anthropic_bots.AnthropicMailbox(verbose: bool = False)[source]

Bases: Mailbox

Handles communication with Anthropic’s API.

This class manages message sending and response processing for Anthropic models, including retry logic for API errors and handling of incomplete responses.

last_message

Optional[Dict[str, Any]] - The last message sent/received

client

Optional[anthropic.Anthropic] - The Anthropic API client instance

__init__(verbose: bool = False)[source]

Initialize an AnthropicMailbox.

Parameters:

verbose – Whether to enable verbose logging (default: False)

send_message(bot: AnthropicBot) Dict[str, Any][source]

Sends a message using the Anthropic API.

Handles API key setup, message formatting, and implements exponential backoff retry logic for API errors.

Parameters:

bot – The AnthropicBot instance making the request

Returns:

The API response dictionary

Raises:
  • ValueError – If no API key is found

  • Exception – If max retries are reached

process_response(response: Dict[str, Any], bot: AnthropicBot) Tuple[str, str, Dict[str, Any]][source]

Process the API response and handle incomplete responses.

Manages continuation of responses that hit the max_tokens limit and extracts the relevant text and role information.

Parameters:
  • response – The API response to process

  • bot – The AnthropicBot instance that made the request

Returns:

Tuple containing:

  • response text (str)

  • response role (str)

  • additional metadata (dict)

Return type:

Tuple[str, str, Dict[str, Any]]

Raises:

anthropic.BadRequestError – If the API returns a 400 error

class bots.foundation.anthropic_bots.AnthropicBot(api_key: str | None = None, model_engine: Engines = Engines.CLAUDE37_SONNET_20250219, max_tokens: int = 4096, temperature: float = 0.3, name: str = 'Claude', role: str = 'assistant', role_description: str = 'a friendly AI assistant', autosave: bool = True)[source]

Bases: Bot

A bot implementation using the Anthropic API.

Use when you need to create a bot that interfaces with Anthropic’s chat completion models. Provides a complete implementation with Anthropic-specific conversation management, tool handling, and message processing. Supports both simple chat interactions and complex tool-using conversations.

Inherits from:

Bot: Base class for all bot implementations, providing core conversation and tool management

api_key

Anthropic API key for authentication

Type:

str

model_engine

The Anthropic model being used (e.g., CLAUDE37_SONNET_20250219)

Type:

Engines

max_tokens

Maximum tokens allowed in completion responses

Type:

int

temperature

Response randomness factor (0-1)

Type:

float

name

Instance name for identification

Type:

str

role

Bot’s role identifier

Type:

str

role_description

Detailed description of bot’s role/personality (for humans to read; not sent to the API)

Type:

str

system_message

System-level instructions for the bot

Type:

str

tool_handler

Manages function calling capabilities

Type:

AnthropicToolHandler

conversation

Manages conversation history

Type:

AnthropicNode

mailbox

Handles API communication

Type:

AnthropicMailbox

autosave

Whether to automatically save state after responses

Type:

bool

Example

```python
# Create a documentation expert bot
bot = AnthropicBot(
    model_engine=Engines.CLAUDE37_SONNET_20250219,
    temperature=0.3,
    role_description="a Python documentation expert",
)

# Add tools and use the bot
bot.add_tools(my_function)
response = bot.respond("Please help document this code.")

# Save the bot's state for later use
bot.save("doc_expert.bot")
```

__init__(api_key: str | None = None, model_engine: Engines = Engines.CLAUDE37_SONNET_20250219, max_tokens: int = 4096, temperature: float = 0.3, name: str = 'Claude', role: str = 'assistant', role_description: str = 'a friendly AI assistant', autosave: bool = True) None[source]

Initialize an AnthropicBot.

Parameters:
  • api_key – Optional API key (will use ANTHROPIC_API_KEY env var if not provided)

  • model_engine – The Anthropic model to use (default: CLAUDE37_SONNET_20250219)

  • max_tokens – Maximum tokens per response (default: 4096)

  • temperature – Response randomness, 0-1 (default: 0.3)

  • name – Bot’s name (default: ‘Claude’)

  • role – Bot’s role (default: ‘assistant’)

  • role_description – Description of bot’s role (default: ‘a friendly AI assistant’)

  • autosave – Whether to autosave state after responses (default: True, saves to cwd)

class bots.foundation.anthropic_bots.CacheController[source]

Bases: object

Manages cache control directives in Anthropic message histories.

This class handles the placement and management of cache control markers in the conversation history to optimize context window usage. It ensures that cache control directives are properly placed and don’t interfere with tool operations.

The controller maintains a balance between preserving important context and allowing older messages to be cached or dropped when the context window fills up.

find_cache_control_positions(messages: List[Dict[str, Any]]) List[int][source]

Find positions of all cache control directives in the message history.

Parameters:

messages – List of message dictionaries to search

Returns:

List of indices where cache control directives are found

should_add_cache_control(total_messages: int, last_control_pos: int, threshold: float = 5.0) bool[source]

Determine if a new cache control directive should be added.

Parameters:
  • total_messages – Total number of messages in history

  • last_control_pos – Position of the last cache control directive

  • threshold – Growth factor for determining new control placement (default: 5.0)

Returns:

True if a new cache control directive should be added

shift_cache_control_out_of_tool_block(messages: List[Dict[str, Any]], position: int) int[source]

Move cache control directives out of tool-related message blocks.

Cache controls should not be placed within tool call or result blocks as this can interfere with tool operation. This method finds the nearest safe position to move the cache control directive to.

Parameters:
  • messages – List of messages to modify

  • position – Current position of the cache control directive

Returns:

New position where the cache control directive was moved to

insert_cache_control(messages: List[Dict[str, Any]], position: int) None[source]

Insert a cache control directive at the specified position.

Parameters:
  • messages – List of messages to modify

  • position – Position where the cache control should be inserted

remove_cache_control_at_position(messages: List[Dict[str, Any]], position: int) None[source]

Remove cache control directive at the specified position.

Parameters:
  • messages – List of messages to modify

  • position – Position of the cache control to remove

manage_cache_controls(messages: List[Dict[str, Any]], threshold: float = 5.0) List[Dict[str, Any]][source]

Manage cache control directives across the entire message history.

Use when you need to optimize the conversation context window by managing which parts of the conversation history can be cached or dropped.

This method ensures proper placement and maintenance of cache control directives:

  • Prevents interference with tool operations

  • Limits the total number of cache controls

Parameters:
  • messages – List of messages to manage

  • threshold – Growth factor for cache control placement (default: 5.0)

Returns:

Modified list of messages with properly managed cache controls
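
A usage sketch based on the documented signatures (the messages list is assumed to be in the format produced by _build_messages):

```python
controller = CacheController()

# Re-balance cache control markers as the conversation grows
messages = controller.manage_cache_controls(messages, threshold=5.0)

# Inspect where the markers ended up
positions = controller.find_cache_control_positions(messages)
print(f"Cache controls at message indices: {positions}")
```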

bots.foundation.base module

Core foundation classes for the bots framework.

This module provides the fundamental abstractions and base classes that power the bots framework:

  • Bot: Abstract base class for all LLM implementations

  • ToolHandler: Manages function/module tools with context preservation

  • ConversationNode: Tree-based conversation storage

  • Mailbox: Abstract interface for LLM service communication

  • Engines: Supported LLM model configurations

The classes in this module are designed to:

  • Provide a consistent interface across different LLM implementations

  • Enable sophisticated context and tool management

  • Support complete bot portability and state preservation

  • Handle conversation branching and context management efficiently

Example

```python
from bots import AnthropicBot
import my_tools

# Create a bot with tools
bot = AnthropicBot()
bot.add_tools(my_tools)

# Basic interaction
response = bot.respond("Hello!")

# Save bot state
bot.save("my_bot.bot")
```

bots.foundation.base.load(filepath: str) Bot[source]

Load a saved bot from a file.

Use when you need to restore a previously saved bot with its complete state, including conversation history, tools, and configuration.

Parameters:

filepath (str) – Path to the .bot file containing the saved bot state

Returns:

A reconstructed Bot instance with the saved state

Return type:

Bot

Example

```python
bot = bots.load("my_saved_bot.bot")
bot.respond("Continue our previous conversation")
```

class bots.foundation.base.Engines(*values)[source]

Bases: str, Enum

Enum class representing different AI model engines.

GPT4 = 'gpt-4'
GPT4_0613 = 'gpt-4-0613'
GPT4_32K = 'gpt-4-32k'
GPT4_32K_0613 = 'gpt-4-32k-0613'
GPT4TURBO = 'gpt-4-turbo-preview'
GPT4TURBO_0125 = 'gpt-4-0125-preview'
GPT4TURBO_VISION = 'gpt-4-vision-preview'
GPT35TURBO = 'gpt-3.5-turbo'
GPT35TURBO_16K = 'gpt-3.5-turbo-16k'
GPT35TURBO_0125 = 'gpt-3.5-turbo-0125'
GPT35TURBO_INSTRUCT = 'gpt-3.5-turbo-instruct'
CLAUDE3_HAIKU = 'claude-3-haiku-20240307'
CLAUDE3_SONNET = 'claude-3-sonnet-20240229'
CLAUDE3_OPUS = 'claude-3-opus-20240229'
CLAUDE35_SONNET_20240620 = 'claude-3-5-sonnet-20240620'
CLAUDE35_SONNET_20241022 = 'claude-3-5-sonnet-20241022'
CLAUDE37_SONNET_20250219 = 'claude-3-7-sonnet-20250219'
static get(name: str) Engines | None[source]

Retrieve an Engines enum member by its string value.

Use when you need to convert a model name string to an Engines enum member.

Parameters:

name (str) – The string value of the engine (e.g., ‘gpt-4’, ‘claude-3-opus-20240229’)

Returns:

The corresponding Engines enum member, or None if not found

Return type:

Optional[Engines]

Example

```python
engine = Engines.get('gpt-4')
if engine:
    bot = Bot(model_engine=engine)
```

static get_bot_class(model_engine: Engines) Type[Bot][source]

Get the appropriate Bot subclass for a given model engine.

Use when you need to programmatically determine which Bot implementation to use for a specific model engine.

Parameters:

model_engine (Engines) – The engine enum member to get the bot class for

Returns:

The Bot subclass (ChatGPT_Bot or AnthropicBot)

Return type:

Type[Bot]

Raises:

ValueError – If the model engine is not supported

Example

```python
bot_class = Engines.get_bot_class(Engines.GPT4)
bot = bot_class(api_key="key")
```

static get_conversation_node_class(class_name: str) Type[ConversationNode][source]

Get the appropriate ConversationNode subclass by name.

Use when you need to reconstruct conversation nodes from saved bot state.

Parameters:

class_name (str) – Name of the node class (‘OpenAINode’ or ‘AnthropicNode’)

Returns:

The ConversationNode subclass

Return type:

Type[ConversationNode]

Raises:

ValueError – If the class name is not a supported node type
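
A small usage sketch based on the documented signatures (reconstructing a node class by name, as done when loading saved bot state):

```python
# Resolve the node class saved in a bot state, then build an empty root
node_cls = Engines.get_conversation_node_class("AnthropicNode")
root = node_cls._create_empty(node_cls)
```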

class bots.foundation.base.ConversationNode(content: str, role: str, tool_calls: List[Dict] | None = None, tool_results: List[Dict] | None = None, pending_results: List[Dict] | None = None, **kwargs)[source]

Bases: object

Tree-based storage for conversation history and tool interactions.

ConversationNode implements a linked tree structure that enables sophisticated conversation management, including branching conversations and tool usage tracking. Each node represents a message in the conversation and can have multiple replies, forming a tree structure.

role

The role of the message sender (‘user’, ‘assistant’, etc.)

Type:

str

content

The message content

Type:

str

parent

Reference to the parent node

Type:

ConversationNode

replies

List of reply nodes

Type:

List[ConversationNode]

tool_calls

Tool invocations made in this message

Type:

List[Dict]

tool_results

Results from tool executions

Type:

List[Dict]

pending_results

Tool results waiting to be processed

Type:

List[Dict]

Example

```python
# Create a conversation tree
root = ConversationNode(role='user', content='Hello')
response = root._add_reply(role='assistant', content='Hi there!')
```

__init__(content: str, role: str, tool_calls: List[Dict] | None = None, tool_results: List[Dict] | None = None, pending_results: List[Dict] | None = None, **kwargs) None[source]

Initialize a new ConversationNode.

Parameters:
  • content (str) – The message content

  • role (str) – The role of the message sender

  • tool_calls (Optional[List[Dict]]) – Tool invocations made in this message

  • tool_results (Optional[List[Dict]]) – Results from tool executions

  • pending_results (Optional[List[Dict]]) – Tool results waiting to be processed

  • **kwargs – Additional attributes to set on the node

static _create_empty(cls: Type[ConversationNode] | None = None) ConversationNode[source]

Create an empty root node.

Use when initializing a new conversation tree that needs an empty root.

Parameters:

cls (Optional[Type[ConversationNode]]) – Optional specific node class to use

Returns:

An empty node with role=’empty’ and no content

Return type:

ConversationNode

_is_empty() bool[source]

Check if this is an empty root node.

Returns:

True if this is an empty root node, False otherwise

Return type:

bool

_add_reply(**kwargs) ConversationNode[source]

Add a new reply node to this conversation node.

Creates a new node as a child of this one, handling tool context synchronization between siblings.

Parameters:

**kwargs – Attributes to set on the new node (content, role, etc.)

Returns:

The newly created reply node

Return type:

ConversationNode

Example

```python
node = root._add_reply(content="Hello", role="user")
response = node._add_reply(content="Hi!", role="assistant")
```

_sync_tool_context() None[source]

Synchronize tool results across all sibling nodes.

Use when tool results need to be shared between parallel conversation branches. Takes the union of all tool results from sibling nodes and ensures each sibling has access to all results.

Side Effects:

Updates tool_results for all sibling nodes to include all unique results.
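
A sketch of the described behaviour with two sibling branches (the result dict is illustrative, not a required schema):

```python
parent = ConversationNode(role="user", content="Run the check")
branch_a = parent._add_reply(role="assistant", content="Branch A")
branch_b = parent._add_reply(role="assistant", content="Branch B")

# Adding results on one branch triggers synchronization, so both
# siblings end up with access to the union of all tool results
branch_a._add_tool_results([{"tool": "lint", "status": "ok"}])
```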

_add_tool_calls(calls: List[Dict[str, Any]]) None[source]

Add tool call records to this node.

Use when new tool calls are made during the conversation.

Parameters:

calls (List[Dict[str, Any]]) – List of tool call records to add

_add_tool_results(results: List[Dict[str, Any]]) None[source]

Add tool execution results to this node.

Use when tool execution results are received and need to be recorded. Automatically synchronizes results with sibling nodes.

Parameters:

results (List[Dict[str, Any]]) – List of tool result records to add

Side Effects:
  • Updates this node’s tool_results

  • Synchronizes results across sibling nodes

_find_root() ConversationNode[source]

Navigate to the root node of the conversation tree.

Use when you need to access the starting point of the conversation.

Returns:

The root node of the conversation tree

Return type:

ConversationNode

_root_dict() Dict[str, Any][source]

Convert the entire conversation tree to a dictionary.

Use when serializing the complete conversation for saving or transmission. Starts from the root node and includes all branches.

Returns:

Dictionary representation of the complete conversation tree

Return type:

Dict[str, Any]

_to_dict_recursive() Dict[str, Any][source]

Recursively convert this node and all its replies to a dictionary.

Use when you need to serialize a subtree of the conversation.

Returns:

Dictionary containing this node and all its descendants

Return type:

Dict[str, Any]

_to_dict_self() Dict[str, Any][source]

Convert just this node to a dictionary.

Use when serializing a single conversation node. Omits replies, parent references, and callable attributes.

Returns:

Dictionary containing this node’s attributes

Return type:

Dict[str, Any]

Note

Only serializes basic types (str, int, float, bool, list, dict) and converts other types to strings.

_build_messages() List[Dict[str, Any]][source]

Build a chronological list of messages from root to this node.

Use when you need to construct the conversation history for an LLM API call. Includes tool calls and results in each message where present.

Returns:

List of message dictionaries, each containing:
  • role: The message sender’s role

  • content: The message text

  • tool_calls: (optional) List of tool calls made

  • tool_results: (optional) List of tool execution results

Return type:

List[Dict[str, Any]]

Note

Empty root nodes are excluded from the message list.
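
An illustrative sketch of the returned list, using only the keys documented above (contents are made up):

```python
messages = node._build_messages()
# e.g.
# [
#     {"role": "user", "content": "Read config.json"},
#     {"role": "assistant", "content": "Reading it now.",
#      "tool_calls": [...], "tool_results": [...]},
# ]
```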

_node_count() int[source]

Count the total number of nodes in the conversation tree.

Use when you need to measure the size of the conversation. Counts from the root node down through all branches.

Returns:

Total number of nodes in the conversation tree

Return type:

int

Example

```python
total_messages = node._node_count()
print(f"This conversation has {total_messages} messages")
```

class bots.foundation.base.ModuleContext(name: str, source: str, file_path: str, namespace: ModuleType, code_hash: str)[source]

Bases: object

Context container for module-level tool preservation.

Stores all necessary information to reconstruct a module and its tools, including source code, namespace, and execution environment.

name

The module’s name

Type:

str

source

The module’s complete source code

Type:

str

file_path

Original file path or generated path for dynamic modules

Type:

str

namespace

The module’s execution namespace

Type:

ModuleType

code_hash

Hash of the source code for version checking

Type:

str

name: str
source: str
file_path: str
namespace: ModuleType
code_hash: str
__init__(name: str, source: str, file_path: str, namespace: ModuleType, code_hash: str) None
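
A construction sketch for a dynamically created module; the file_path value is hypothetical, and the hash mirrors the MD5 scheme documented for ToolHandler._get_code_hash:

```python
import hashlib
import types

source = "def greet(name: str) -> str:\n    return f'Hello, {name}!'\n"

namespace = types.ModuleType("my_dynamic_tools")
exec(source, namespace.__dict__)  # populate the module namespace

ctx = ModuleContext(
    name="my_dynamic_tools",
    source=source,
    file_path="dynamic/my_dynamic_tools.py",  # hypothetical generated path
    namespace=namespace,
    code_hash=hashlib.md5(source.encode("utf-8")).hexdigest(),
)
```
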
exception bots.foundation.base.ToolHandlerError[source]

Bases: Exception

Base exception class for ToolHandler errors.

Use as a base class for all tool-related exceptions to allow specific error handling for tool operations.

exception bots.foundation.base.ToolNotFoundError[source]

Bases: ToolHandlerError

Raised when a requested tool is not available.

Use when attempting to use a tool that hasn’t been registered with the ToolHandler.

exception bots.foundation.base.ModuleLoadError[source]

Bases: ToolHandlerError

Raised when a module cannot be loaded for tool extraction.

Use when there are issues loading a module’s source code, executing it in a new namespace, or extracting its tools.

class bots.foundation.base.ToolHandler[source]

Bases: ABC

Abstract base class for managing bot tool operations.

ToolHandler provides a complete system for:

  • Registering Python functions as bot tools

  • Preserving tool context and dependencies

  • Managing tool execution and results

  • Serializing and deserializing tool state

The class supports both individual function tools and complete module imports, preserving all necessary context for tool operation across bot save/load cycles.

tools

Registered tool schemas

Type:

List[Dict[str, Any]]

function_map

Mapping of tool names to functions

Type:

Dict[str, Callable]

requests

Pending tool execution requests

Type:

List[Dict[str, Any]]

results

Results from tool executions

Type:

List[Dict[str, Any]]

modules

Module contexts for imported tools

Type:

Dict[str, ModuleContext]

Example

```python
class MyToolHandler(ToolHandler):
    # Implement abstract methods...
    pass

handler = MyToolHandler()
handler.add_tools(my_module)
```

__init__()[source]
abstractmethod generate_tool_schema(func: Callable) Dict[str, Any][source]

Generate the tool schema for the bot’s api. Must be implemented per provider.

Use to create a consistent tool description that the LLM can understand. Must extract relevant information from the function’s signature and docstring.

Parameters:

func (Callable) – The function to generate a schema for

Returns:

Schema describing the tool’s:
  • name and description

  • parameters and their types

  • return value format

  • usage instructions

Return type:

Dict[str, Any]

Example Schema:

```python
{
    "name": "calculate_area",
    "description": "Calculate the area of a circle",
    "parameters": {
        "radius": {"type": "float", "description": "Circle radius"}
    },
    "returns": "float: The calculated area"
}
```

abstractmethod generate_request_schema(response: Any) List[Dict[str, Any]][source]

Extract tool requests from an LLM response. Must be implemented per provider.

Use to parse the LLM’s response and identify any tool usage requests. Multiple tool requests may be present in a single response.

Parameters:

response (Any) – Raw response from the LLM service

Returns:

List of parsed tool request schemas, each containing:
  • Specific requirements per provider

Return type:

List[Dict[str, Any]]

Example

```python
response = llm_service.get_response()
requests = handler.generate_request_schema(response)
# [{"name": "view_file", "parameters": {"path": "main.py"}}, ...]
```

abstractmethod tool_name_and_input(request_schema: Dict[str, Any]) Tuple[str | None, Dict[str, Any]][source]

Extract tool name and parameters from a request schema. Must be implemented per provider.

Use to parse a tool request into components that can be used for execution. Validates and prepares the input parameters for the tool function.

Parameters:

request_schema (Dict[str, Any]) – The request schema to parse

Returns:

  • Tool name (or None if request should be skipped)

  • Dictionary of prepared input parameters

Return type:

Tuple[Optional[str], Dict[str, Any]]

Example

```python
name, params = handler.tool_name_and_input({
    "name": "calculate_area",
    "parameters": {"radius": "5.0"},
})
if name:
    result = handler.function_map[name](**params)
```

abstractmethod generate_response_schema(request: Dict[str, Any], tool_output_kwargs: Dict[str, Any]) Dict[str, Any][source]

Generate a response schema from tool execution results. Must be implemented per provider.

Use to format tool output in a way the LLM can understand and process. Maintains connection between request and response through request metadata.

Parameters:
  • request (Dict[str, Any]) – The original request schema

  • tool_output_kwargs (Dict[str, Any]) – The tool’s execution results

Returns:

Formatted response schema containing:
  • Specific Schema for provider

Return type:

Dict[str, Any]

Example

```python
result = tool_func(**params)
response = handler.generate_response_schema(request, result)
# {"tool_name": "view_file", "status": "success", "content": "file contents..."}
```

abstractmethod generate_error_schema(request_schema: Dict[str, Any], error_msg: str) Dict[str, Any][source]

Generate an error response schema. Must be implemented per provider.

Use to format error messages in a way the LLM can understand and handle appropriately. Should maintain consistency with successful response schemas.

Parameters:
  • request_schema (Dict[str, Any]) – The request that caused the error

  • error_msg (str) – The error message to include

Returns:

Error response schema containing:
  • Required schema per provider

Return type:

Dict[str, Any]

Example

```python
try:
    result = tool_func(**params)
except Exception as e:
    error = handler.generate_error_schema(request, str(e))
    # {"status": "error", "message": "File not found", "type": "FileNotFoundError"}
```

extract_requests(response: Any) List[Dict[str, Any]][source]

Extract and parse tool requests from an LLM response.

Use when you need to identify and process tool usage requests from a raw LLM response.

Parameters:

response (Any) – The raw response from the LLM service

Returns:

List of parsed request schemas

Return type:

List[Dict[str, Any]]

Side Effects:
  • Clears existing self.requests

  • Sets self.requests to the newly parsed requests

Example

```python
requests = handler.extract_requests(llm_response)
# [{"name": "view_file", "parameters": {...}}, ...]
```

exec_requests() List[Dict[str, Any]][source]

Execute pending tool requests and generate results.

Use when you need to process all pending tool requests that have been extracted from an LLM response. Handles execution, error handling, and result formatting.

Returns:

List of result schemas, each containing:
  • Tool execution results or error information

  • Status of the execution

  • Any relevant metadata

Return type:

List[Dict[str, Any]]

Side Effects:
  • Executes tool functions with provided parameters

  • Updates self.results with execution results

  • May produce tool-specific side effects (file operations, etc.)

Raises:
  • ToolNotFoundError – If a requested tool is not available

  • TypeError – If tool arguments are invalid

  • Exception – For other tool execution errors

Example

```python
handler.extract_requests(response)
results = handler.exec_requests()
# [{"status": "success", "content": "file contents..."}, ...]
```

_create_builtin_wrapper(func: Callable) str[source]

Create a wrapper function source code for built-in functions.

Use when adding built-in Python functions as tools. Creates a wrapper that maintains proper type handling and module context.

Parameters:

func (Callable) – The built-in function to wrap

Returns:

Source code for the wrapper function

Return type:

str

Note

  • Automatically handles float conversion for numeric functions

  • Preserves original module context

  • Maintains function name and basic documentation

_create_dynamic_wrapper(func: Callable) str[source]

Create a wrapper function source code for dynamic or lambda functions.

Use when adding functions that don’t have accessible source code or are dynamically created.

Parameters:

func (Callable) – The function to create a wrapper for

Returns:

Source code for the wrapper function

Return type:

str

Note

  • Preserves function signature if available

  • Copies docstring if present

  • Creates fallback implementation if source is not accessible

  • Handles both normal and dynamic functions

add_tool(func: Callable) None[source]

Add a single Python function as a tool for LLM use.

Use when you need to make an individual function available as a tool. Handles all necessary context preservation and function wrapping.

Parameters:

func (Callable) – The function to add as a tool

Raises:
  • ValueError – If tool schema generation fails

  • TypeError – If function source cannot be accessed

  • OSError – If there are issues accessing function source

Side Effects:
  • Creates module context if none exists

  • Adds function to tool registry

  • Updates function map

Example

```python
def calculate_area(radius: float) -> float:
    '''Calculate circle area. Use when...'''
    return 3.14159 * radius * radius

handler.add_tool(calculate_area)
```

Note

  • Preserves function’s full context including docstring

  • Creates wrappers for built-in and dynamic functions

  • Maintains all necessary dependencies

_add_tools_from_file(filepath: str) None[source]

Add all non-private functions from a Python file as tools.

Use when you want to add all suitable functions from a Python file. Handles module loading, dependency preservation, and context management.

Parameters:

filepath (str) – Path to the Python file containing tool functions

Raises:
  • FileNotFoundError – If the specified file doesn’t exist

  • ModuleLoadError – If there’s an error loading the module or its dependencies

  • SyntaxError – If the Python file contains syntax errors

Side Effects:
  • Creates module context for the file

  • Adds all non-private functions as tools

  • Preserves module dependencies and imports

  • Updates module registry

Example

```python
handler._add_tools_from_file("path/to/tools.py")
```

Note

  • Only adds top-level functions (not nested in classes/functions)

  • Skips functions whose names start with underscore

  • Maintains original module context for all functions

  • Preserves source code for serialization

_add_tools_from_module(module: ModuleType) None[source]

Add all non-private functions from a Python module as tools.

Use when you want to add functions from an imported module or dynamically created module object.

Parameters:

module (ModuleType) – Module object containing the tool functions

Raises:
  • ModuleLoadError – If module lacks both __file__ and __source__ attributes

  • ImportError – If module dependencies cannot be resolved

  • Exception – For other module processing errors

Side Effects:
  • Creates module context if needed

  • Adds all non-private functions as tools

  • Updates module registry

  • Preserves module state and dependencies

Example

```python
import my_tools
handler._add_tools_from_module(my_tools)
```

Note

  • Module must have either __file__ or __source__ attribute

  • Only processes top-level functions

  • Skips functions whose names start with underscore

  • Maintains complete module context

to_dict() Dict[str, Any][source]

Serialize the complete ToolHandler state to a dictionary.

Use when you need to save or transmit the tool handler’s state, including all tools, modules, and execution state.

Returns:

Serialized state containing:
  • Handler class information

  • Registered tools and their schemas

  • Module contexts and source code

  • Function mappings and relationships

  • Current requests and results

Return type:

Dict[str, Any]

Note

  • Preserves complete module contexts

  • Handles both file-based and dynamic modules

  • Includes function source code for reconstruction

  • Maintains tool relationships and dependencies

classmethod from_dict(data: Dict[str, Any]) ToolHandler[source]

Reconstruct a ToolHandler instance from serialized state.

Use when restoring a previously serialized tool handler, such as when loading a saved bot state.

Parameters:

data (Dict[str, Any]) – Serialized state from to_dict()

Returns:

Reconstructed handler instance

Return type:

ToolHandler

Side Effects:
  • Creates new module contexts

  • Reconstructs function objects

  • Restores tool registry

  • Preserves request/result history

Note

  • Only restores explicitly registered tools

  • Verifies code hashes for security

  • Maintains original module structure

  • Preserves execution state (requests/results)

Example

```python
saved_state = handler.to_dict()
# Later...
new_handler = ToolHandler.from_dict(saved_state)
```

get_tools_json() str[source]

Get a JSON string representation of all registered tools.

Use when you need a serialized view of the available tools, such as for debugging or external tool discovery.

Returns:

JSON string containing all tool schemas, formatted with indentation

Return type:

str

Example

```python
tools_json = handler.get_tools_json()
print(f"Available tools: {tools_json}")
```

clear() None[source]

Clear all stored tool results and requests.

Use when you need to reset the handler’s execution state without affecting the registered tools.

Side Effects:
  • Empties self.results list

  • Empties self.requests list

add_request(request: Dict[str, Any]) None[source]

Add a new tool request to the pending requests.

Use when manually adding a tool request rather than extracting it from an LLM response.

Parameters:

request (Dict[str, Any]) – Tool request schema to add

Side Effects:
  • Appends request to self.requests list

add_result(result: Dict[str, Any]) None[source]

Add a new tool result to the stored results.

Use when manually adding a tool result rather than generating it through exec_requests().

Parameters:

result (Dict[str, Any]) – Tool result schema to add

Side Effects:
  • Appends result to self.results list

get_results() List[Dict[str, Any]][source]

Get all stored tool execution results.

Use when you need to access the complete history of tool execution results.

Returns:

List of all tool result schemas

Return type:

List[Dict[str, Any]]

get_requests() List[Dict[str, Any]][source]

Get all stored tool requests.

Use when you need to access the complete history of tool execution requests.

Returns:

List of all tool request schemas

Return type:

List[Dict[str, Any]]

static _get_code_hash(code: str) str[source]

Generate an MD5 hash of a code string.

Use when creating a unique identifier for code content or verifying code integrity during deserialization.

Parameters:

code (str) – Source code string to hash

Returns:

MD5 hash of the code string

Return type:

str
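
A sketch of the equivalent computation using the standard library, since the method is documented as an MD5 hash of the source string:

```python
import hashlib

def md5_of(code: str) -> str:
    # Same idea as ToolHandler._get_code_hash: MD5 over the raw source text
    return hashlib.md5(code.encode("utf-8")).hexdigest()

print(md5_of("def f(x):\n    return x + 1\n"))
```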

__str__() str[source]

Create a simple string representation of the ToolHandler.

Use when you need a quick overview of the handler’s state, showing just the number of tools and modules.

Returns:

Brief summary string showing tool and module counts

Return type:

str

Example

```python
handler = ToolHandler()
print(handler)  # "ToolHandler with 5 tools and 2 modules"
```

__repr__() str[source]

Create a detailed string representation of the ToolHandler.

Use when you need a complete technical view of the handler’s state, including all tool names and module paths.

Returns:

Detailed string showing:
  • List of all registered tool names

  • List of all module paths

Return type:

str

Example

```python
handler = ToolHandler()
print(repr(handler))  # Shows all tools and modules
```

class bots.foundation.base.Mailbox[source]

Bases: ABC

Abstract base class for LLM service communication.

Mailbox provides a standardized interface for sending messages to and receiving responses from different LLM services (OpenAI, Anthropic, etc.). It handles:

  • Message formatting and sending

  • Response parsing and processing

  • Logging of all communications

  • Tool result integration

The class abstracts away the differences between various LLM APIs, providing a consistent interface for the Bot class.

Example

```python
class AnthropicMailbox(Mailbox):
    def send_message(self, bot):
        # Implementation for Anthropic API
        pass

    def process_response(self, response, bot=None):
        # Parse Anthropic response format
        pass
```

__init__()[source]
log_message(message: str, direction: str) None[source]
abstractmethod send_message(bot: Bot) Dict[str, Any][source]

Send a message to the LLM service.

Use to handle the specifics of communicating with a particular LLM API. Must handle message formatting, API calls, and initial response parsing.

Parameters:

bot (Bot) – Reference to the bot instance making the request. Contains conversation history and configuration.

Returns:

Raw response from the LLM service

Return type:

Dict[str, Any]

Raises:

Exception – For API errors, rate limits, or other communication issues

Note

  • Should use bot.conversation for message history

  • May handle tool results depending on API requirements

  • Should respect bot.model_engine and other configuration

abstractmethod process_response(response: Dict[str, Any], bot: Bot | None = None) Tuple[str, str, Dict[str, Any]][source]

Process the raw LLM response into a standardized format.

Use to convert service-specific response formats into a consistent structure that can be used by the Bot class. Handles extraction of message content, role information, and any additional metadata.

Parameters:
  • response (Dict[str, Any]) – Raw response from the LLM service

  • bot (Optional[Bot]) – Reference to bot instance, required for services that need to send additional messages during processing (e.g., OpenAI’s tool handling)

Returns:

Processed response containing:
  • response_text: The main message content

  • role: Message sender’s role (e.g., “assistant”)

  • metadata: Additional information to be stored with the message

Return type:

Tuple[str, str, Dict[str, Any]]

Note

  • If tool_handler is present, tool requests and results are already processed and available in tool_handler.requests/results

  • Metadata dict is passed directly to ConversationNode’s kwargs

  • Each metadata item becomes an attribute of the ConversationNode

class bots.foundation.base.Bot(api_key: str | None, model_engine: Engines, max_tokens: int, temperature: float, name: str, role: str, role_description: str, conversation: ConversationNode | None = <bots.foundation.base.ConversationNode object>, tool_handler: ToolHandler | None = None, mailbox: Mailbox | None = None, autosave: bool = True)[source]

Bases: ABC

Abstract base class for LLM-powered conversational agents.

The Bot class provides a unified interface for working with different LLM services, handling conversation management, tool usage, and state persistence. Key features:

  • Simple primary interface (respond(), chat())

  • Comprehensive tool support

  • Complete state preservation

  • Tree-based conversation management

  • Configurable parameters for each LLM

The class is designed to make basic usage simple while allowing access to advanced features when needed.

api_key

API key for the LLM service

Type:

Optional[str]

name

Name identifier for the bot

Type:

str

model_engine

The specific LLM model to use

Type:

Engines

max_tokens

Maximum tokens in model responses

Type:

int

temperature

Randomness in model responses (0.0-1.0)

Type:

float

role

Bot’s role identifier

Type:

str

role_description

Detailed description of bot’s role

Type:

str

conversation

Current conversation state

Type:

ConversationNode

system_message

System-level instructions for the bot

Type:

str

tool_handler

Manager for bot’s tools

Type:

Optional[ToolHandler]

mailbox

Handler for LLM communication

Type:

Optional[Mailbox]

autosave

Whether to automatically save state

Type:

bool

Example

```python
class MyBot(Bot):
    def __init__(self, api_key: str):
        super().__init__(
            api_key=api_key,
            model_engine=Engines.GPT4,
            max_tokens=4096,   # required by Bot.__init__; matches the defaults used elsewhere in these docs
            temperature=0.3,   # required by Bot.__init__
            name="MyBot",
            role="assistant",
            role_description="A helpful AI assistant",
        )

bot = MyBot("api_key")
bot.add_tools(my_tools)
response = bot.respond("Hello!")
```

__init__(api_key: str | None, model_engine: Engines, max_tokens: int, temperature: float, name: str, role: str, role_description: str, conversation: ConversationNode | None = <bots.foundation.base.ConversationNode object>, tool_handler: ToolHandler | None = None, mailbox: Mailbox | None = None, autosave: bool = True) None[source]

Initialize a new Bot instance.

Parameters:
  • api_key (Optional[str]) – API key for the LLM service

  • model_engine (Engines) – The specific LLM model to use

  • max_tokens (int) – Maximum tokens in model responses

  • temperature (float) – Randomness in model responses (0.0-1.0)

  • name (str) – Name identifier for the bot

  • role (str) – Bot’s role identifier

  • role_description (str) – Detailed description of bot’s role

  • conversation (Optional[ConversationNode]) – Initial conversation state

  • tool_handler (Optional[ToolHandler]) – Manager for bot’s tools

  • mailbox (Optional[Mailbox]) – Handler for LLM communication

  • autosave (bool) – Whether to automatically save state after responses. Saves to cwd.

respond(prompt: str, role: str = 'user') str[source]

Send a prompt to the bot and get its response.

This is the primary interface for interacting with the bot. The method:

  1. Adds the prompt to the conversation history

  2. Sends the conversation to the LLM

  3. Processes any tool usage requests

  4. Returns the final response

Parameters:
  • prompt (str) – The message to send to the bot

  • role (str) – Role of the message sender (defaults to ‘user’)

Returns:

The bot’s response text

Return type:

str

Note

  • Automatically saves state if autosave is enabled

  • Tool usage is handled automatically if tools are available

  • Full conversation context is maintained

Example

```python
bot.add_tools(file_tools)
response = bot.respond("Please read config.json")
```

add_tools(*args) None[source]

Add Python functions as tools available to the bot.

Use to provide the bot with capabilities by adding Python functions as tools. Functions’ docstrings are used to describe the tools to the LLM.

The method is highly flexible in what it accepts:

  • Individual Python functions

  • Python files containing functions

  • Imported Python modules

  • Mixed combinations of the above

  • Lists/tuples of any supported type

Parameters:

*args – Variable arguments that can be:

  • str: Path to Python file with tools

  • ModuleType: Module containing tools

  • Callable: Single function to add

  • List/Tuple: Collection of any above types

Raises:
  • TypeError – If an argument is not a supported type

  • FileNotFoundError – If a specified file doesn’t exist

  • ModuleLoadError – If there’s an error loading a module

Example

```python
# Single function
bot.add_tools(calculate_area)

# Multiple files and modules
bot.add_tools(
    "tools/file_ops.py",
    math_tools,
    "tools/network.py",
    [process_data, custom_sort],
)
```

Note

  • Only adds top-level functions (not nested or class methods)

  • Preserves full module context for imported tools

  • Tools persist across save/load cycles

_cvsn_respond() Tuple[str, ConversationNode][source]

Process a conversation turn with the LLM, including tool handling.

Use when implementing core conversation flow. This method:

  1. Gets LLM response using current conversation context

  2. Handles any tool usage requests

  3. Updates conversation history

  4. Manages tool results and context

Returns:

  • response_text: The LLM’s response text

  • node: The new conversation node created

Return type:

Tuple[str, ConversationNode]

Raises:

Exception – Any errors from LLM communication or tool execution

Side Effects:
  • Updates conversation tree with new node

  • Processes tool requests if any

  • Updates tool results in conversation context

Note

  • Clears tool handler state before processing

  • Automatically handles tool request extraction

  • Maintains conversation tree structure

  • Preserves tool execution results

set_system_message(message: str) None[source]

Set the system-level instructions for the bot.

Use to provide high-level guidance or constraints that should apply to all of the bot’s responses.

Parameters:

message (str) – System instructions for the bot

Example

```python
bot.set_system_message(
    "You are a code review expert. Focus on security and performance."
)
```

classmethod load(filepath: str, api_key: str | None = None) Bot[source]

Load a saved bot from a file.

Use to restore a previously saved bot with its complete state, including conversation history, tools, and configuration.

Parameters:
  • filepath (str) – Path to the .bot file to load

  • api_key (Optional[str]) – New API key to use, if different from saved

Returns:

Reconstructed bot instance with restored state

Return type:

Bot

Raises:
  • FileNotFoundError – If the specified file doesn’t exist

  • ValueError – If the file contains invalid bot data

Example

```python
# Save bot state
bot.save("code_review_bot.bot")

# Later, restore the bot
bot = Bot.load("code_review_bot.bot", api_key="new_key")
```

Note

  • API keys are not saved for security

  • Tool functions are fully restored with their context

  • Conversation history is preserved exactly

save(filename: str | None = None) str[source]

Save the bot’s complete state to a file.

Use to preserve the bot’s entire state including:

  • Configuration and parameters

  • Conversation history

  • Tool definitions and context

  • System messages and role information

Parameters:

filename (Optional[str]) – Name for the save file. If None, generates a name from the bot name and a timestamp. Adds the .bot extension if not present.

Returns:

Path to the saved file

Return type:

str

Example

```python
# Save with generated name
path = bot.save()  # e.g., "MyBot@2024-01-20_15-30-45.bot"

# Save with specific name
path = bot.save("code_review_bot")  # saves as "code_review_bot.bot"
```

Note

  • API keys are not saved for security

  • Creates directories in path if they don’t exist

  • Maintains complete tool context for restoration

chat() None[source]

Start an interactive chat session with the bot.

Use when you want to have a continuous conversation with the bot in the terminal. The session continues until ‘/exit’ is entered.

Features:

  • Simple text-based interface

  • Shows tool usage information

  • Maintains conversation context

  • Visual separators between messages

Example

```python
bot.add_tools(my_tools)
bot.chat()
# You: Please analyze the code in main.py
# Bot: I'll take a look...
# Used Tool: view_file
# ...
```

Note

  • Enter ‘/exit’ to end the session

  • Tool usage is displayed if tools are available

  • State is saved according to autosave setting

__mul__(other: int) List[Bot][source]

Create multiple copies of this bot.

Use when you need multiple instances of the same bot configuration, for example when running parallel operations.

Parameters:

other (int) – Number of copies to create

Returns:

List of independent bot copies

Return type:

List[Bot]

Example

```python
# Create 3 identical bots
bots = base_bot * 3

# Use in parallel operations
results = [bot.respond("Hello") for bot in bots]
```

Note

  • Each copy is completely independent

  • Copies include all configuration and tools

__str__() str[source]

Create a human-readable string representation of the bot.

Use when you need a detailed, formatted view of the bot’s state, including conversation history and tool usage.

Returns:

Formatted string containing:
  • Bot metadata (name, role, model)

  • Complete conversation history with indentation

  • Tool calls and results

  • Available tool count

Return type:

str

Note

  • Conversation is shown as a tree structure

  • Tool results are included with each message

  • Handles deep conversation trees with level indicators

  • Formats content for readable terminal output

Example

```python
bot = MyBot("api_key")
print(bot)  # Shows complete bot state
```

bots.foundation.openai_bots module

OpenAI-specific implementations of the bot framework components.

Use when you need to create and manage bots that interface with OpenAI’s chat completion API. This module provides OpenAI-specific implementations of conversation nodes, tool handling, message processing, and the main bot class.

Key Components:
  • OpenAINode: Manages conversation history in OpenAI’s expected format

  • OpenAIToolHandler: Handles function calling with OpenAI’s schema

  • OpenAIMailbox: Manages API communication with OpenAI

  • ChatGPT_Bot: Main bot implementation for OpenAI models
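
A minimal quick-start sketch for this module, assuming OPENAI_API_KEY is set in the environment:

```python
from bots.foundation.base import Engines
from bots.foundation.openai_bots import ChatGPT_Bot

bot = ChatGPT_Bot(model_engine=Engines.GPT4, temperature=0.3)
reply = bot.respond("Hello!")
print(reply)
```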

class bots.foundation.openai_bots.OpenAINode(**kwargs: Any)[source]

Bases: ConversationNode

A conversation node implementation specific to OpenAI’s chat format.

Use when you need to manage conversation history in a format compatible with OpenAI’s chat API. Handles proper formatting of messages, tool calls, and tool results in the conversation tree.

Inherits from:

ConversationNode: Base class for conversation tree nodes

content

The message content for this node

Type:

str

role

The role of the message sender (‘user’, ‘assistant’, or ‘tool’)

Type:

str

parent

Reference to the parent node in conversation tree

Type:

Optional[ConversationNode]

children

List of child nodes in conversation tree

Type:

List[ConversationNode]

tool_calls

List of tool calls made in this node

Type:

Optional[List[Dict]]

tool_results

List of results from tool executions

Type:

Optional[List[Dict]]

__init__(**kwargs: Any) None[source]

Initialize an OpenAINode.

Parameters:

**kwargs – Arbitrary keyword arguments passed to parent class

_build_messages() List[Dict[str, Any]][source]

Build message list for OpenAI API, properly handling empty nodes and tool calls.

Use when you need to convert the conversation tree into OpenAI’s expected message format. Traverses the conversation tree from current node to root, building a properly formatted message list that includes all relevant context, tool calls, and tool results.

Returns:

List of messages in OpenAI chat format, where each message is a dictionary containing ‘role’ and ‘content’ keys, and optionally ‘tool_calls’ for assistant messages or ‘tool_call_id’ for tool response messages.

Return type:

List[Dict[str, Any]]
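
An illustrative sketch of the returned list; the layout follows OpenAI’s chat format as described above, with made-up IDs and contents:

```python
messages = node._build_messages()
# [
#     {"role": "user", "content": "Read config.json"},
#     {"role": "assistant", "content": None,
#      "tool_calls": [{"id": "call_abc", "type": "function",
#                      "function": {"name": "view_file",
#                                   "arguments": '{"path": "config.json"}'}}]},
#     {"role": "tool", "tool_call_id": "call_abc",
#      "content": "...file contents..."},
# ]
```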

class bots.foundation.openai_bots.OpenAIToolHandler[source]

Bases: ToolHandler

Tool handler implementation specific to OpenAI’s function calling format.

Use when you need to manage tool/function definitions and executions for OpenAI chat models. Handles conversion between Python functions and OpenAI’s function calling schema, including proper formatting of function definitions, calls, and results.

Inherits from:

ToolHandler: Base class for managing tool operations

tools

List of tool definitions in OpenAI function format

Type:

List[Dict[str, Any]]

requests

Current list of pending tool/function calls

Type:

List[Dict[str, Any]]

results

Results from the most recent tool executions

Type:

List[Dict[str, Any]]

generate_tool_schema(func: Callable) Dict[str, Any][source]

Generate OpenAI-compatible function definitions from Python functions.

Use when you need to convert a Python function into OpenAI’s function definition format. Extracts function name, docstring, and signature to create a schema that OpenAI can understand and use for function calling.

Parameters:

func (Callable) – The Python function to convert into a tool schema. Should have a proper docstring and type hints for best results.

Returns:

OpenAI-compatible function definition containing:
  • type: Always ‘function’

  • function: Dict containing:
    • name: The function name

    • description: Function’s docstring or default text

    • parameters: Object describing required and optional parameters

Return type:

Dict[str, Any]
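
A sketch of the resulting definition for a hypothetical calculate_area function, following the structure listed above:

```python
{
    "type": "function",
    "function": {
        "name": "calculate_area",
        "description": "Calculate the area of a circle. Use when...",
        "parameters": {
            "type": "object",
            "properties": {
                "radius": {"type": "number", "description": "Circle radius"},
            },
            "required": ["radius"],
        },
    },
}
```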

generate_request_schema(response: Any) List[Dict[str, Any]][source]

Extract tool calls from OpenAI API responses.

Use when you need to parse tool/function calls from an OpenAI chat completion response. Handles both single and multiple function calls in the response.

Parameters:

response (Any) – The raw response from OpenAI’s chat completion API, typically containing a ‘choices’ field with message and potential tool calls

Returns:

List of tool calls, each containing:
  • id: The unique tool call ID

  • type: The type of tool call (typically ‘function’)

  • function: Dict containing:
    • name: The function name to call

    • arguments: JSON string of function arguments

Return type:

List[Dict[str, Any]]

tool_name_and_input(request_schema: Dict[str, Any]) Tuple[str, Dict[str, Any]][source]

Parse OpenAI’s function call format into tool name and arguments.

Use when you need to extract the function name and arguments from an OpenAI function call. Handles JSON parsing of the arguments string into a Python dictionary.

Parameters:

request_schema (Dict[str, Any]) – The function call schema from OpenAI, containing function name and arguments as a JSON string

Returns:

A tuple containing:
  • The function name (str)

  • Dictionary of parsed arguments (Dict[str, Any])

Returns (None, None) if request_schema is empty

Return type:

Tuple[str, Dict[str, Any]]

generate_response_schema(request: Dict[str, Any], tool_output_kwargs: Dict[str, Any]) Dict[str, Any][source]

Format tool execution results for OpenAI’s expected format.

Use when you need to format a tool’s output for OpenAI’s function calling API. Converts Python function outputs into the message format OpenAI expects for tool results.

Parameters:
  • request (Dict[str, Any]) – The original function call request containing the tool_call_id

  • tool_output_kwargs (Dict[str, Any]) – The output from the tool execution

Returns:

OpenAI-compatible tool response containing:
  • role: Always ‘tool’

  • content: String representation of the tool output

  • tool_call_id: ID from the original request

Return type:

Dict[str, Any]

generate_error_schema(request_schema: Dict[str, Any] | None, error_msg: str) Dict[str, Any][source]

Generate an error response in OpenAI’s expected format.

Use when you need to format an error that occurred during tool execution. Creates a properly formatted error message that can be included in the conversation.

Parameters:
  • request_schema (Optional[Dict[str, Any]]) – The original request that caused the error, containing the tool_call_id if available

  • error_msg (str) – The error message to include in the response

Returns:

OpenAI-compatible error response containing:
  • role: Always ‘tool’

  • content: The error message

  • tool_call_id: ID from the original request or ‘unknown’

Return type:

Dict[str, Any]
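The error shape mirrors the tool response above; a sketch with illustrative values:

```python
error_response = {
    "role": "tool",
    "content": "Error: city not found",  # the error_msg argument
    "tool_call_id": "unknown",           # fallback when no request was available
}
```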

class bots.foundation.openai_bots.OpenAIMailbox(api_key: str | None = None)[source]

Bases: Mailbox

Mailbox implementation for handling OpenAI API communication.

Use when you need to manage message sending and receiving with OpenAI’s chat completion API. Handles API key management, message formatting, and response processing including tool calls. Provides logging of API interactions for debugging purposes.

Inherits from:

Mailbox: Base class for API communication

api_key

OpenAI API key used for authentication

Type:

str

client

Initialized OpenAI client instance for API calls

Type:

OpenAI

__init__(api_key: str | None = None)[source]

Initialize OpenAI API client.

Use when you need to create a new OpenAI API communication channel. Handles API key validation and client initialization.

Parameters:

api_key (Optional[str]) – OpenAI API key. If not provided, attempts to read from OPENAI_API_KEY environment variable.

Raises:

ValueError – If no API key is provided and none is found in environment variables.
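A minimal usage sketch (the key values are placeholders):

```python
import os
from bots.foundation.openai_bots import OpenAIMailbox

# Pass the key explicitly...
mailbox = OpenAIMailbox(api_key="sk-placeholder")

# ...or rely on the documented environment fallback.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"
mailbox = OpenAIMailbox()  # raises ValueError if no key is found
```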

send_message(bot: Bot) Dict[str, Any][source]

Send a message to OpenAI’s chat completion API.

Use when you need to send a conversation to OpenAI and get a completion response. Handles message formatting, system messages, tool definitions, and API parameters. Includes logging of both outgoing messages and incoming responses.

Parameters:

bot (Bot) – The bot instance containing:
  • conversation history

  • system message

  • tool definitions

  • model configuration (engine, tokens, temperature)

Returns:

Raw response from OpenAI’s chat completion API containing:
  • choices: List of completion choices

  • usage: Token usage statistics

  • model: Model used for completion

  • id: Response ID

Return type:

Dict[str, Any]

Raises:

Exception – Any error from the OpenAI API is re-raised for proper handling
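A minimal round-trip sketch, assuming OPENAI_API_KEY is set in the environment and the response is the dict documented above:

```python
from bots.foundation.openai_bots import ChatGPT_Bot

bot = ChatGPT_Bot()  # reads OPENAI_API_KEY from the environment
raw = bot.mailbox.send_message(bot)

# The documented 'choices' field carries the completion:
print(raw["choices"][0]["message"]["content"])
```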

process_response(response: Dict[str, Any], bot: Bot) Tuple[str, str, Dict[str, Any]][source]

Process OpenAI’s response and handle any tool calls recursively.

Use when you need to extract the final response content after handling any tool calls. Manages the recursive process of executing tool calls and getting follow-up responses until a final text response is received.

Parameters:
  • response (Dict[str, Any]) – Raw response from OpenAI’s chat completion API containing the message content and any tool calls

  • bot (Bot) – The bot instance for handling tool calls and maintaining conversation state

Returns:

A tuple containing:
  • The final response content (str)

  • The role of the message (str)

  • Additional metadata dictionary (Dict[str, Any])

Return type:

Tuple[str, str, Dict[str, Any]]
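The recursion this describes, as a simplified standalone sketch (not the actual implementation; `call_api` and `run_tool` are stand-ins for the real send_message and tool-execution machinery):

```python
def process_response_sketch(response, call_api, run_tool):
    message = response["choices"][0]["message"]
    tool_calls = message.get("tool_calls") or []
    if tool_calls:
        # Execute every requested tool, then ask the model again with
        # the tool results appended to the conversation.
        results = [run_tool(call) for call in tool_calls]
        follow_up = call_api(results)
        return process_response_sketch(follow_up, call_api, run_tool)
    # No tool calls remain: return the final content, role, and metadata.
    return message["content"], message["role"], {}
```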

class bots.foundation.openai_bots.ChatGPT_Bot(api_key: str | None = None, model_engine: Engines = Engines.GPT4, max_tokens: int = 4096, temperature: float = 0.3, name: str = 'bot', role: str = 'assistant', role_description: str = 'a friendly AI assistant', autosave: bool = True)[source]

Bases: Bot

A bot implementation using the OpenAI GPT API.

Use when you need to create a bot that interfaces with OpenAI’s chat completion models. Provides a complete implementation with OpenAI-specific conversation management, tool handling, and message processing. Supports both simple chat interactions and complex tool-using conversations.

Inherits from:

Bot: Base class for all bot implementations, providing core conversation and tool management

api_key

OpenAI API key for authentication

Type:

str

model_engine

The OpenAI model being used (e.g., GPT4)

Type:

Engines

max_tokens

Maximum tokens allowed in completion responses

Type:

int

temperature

Response randomness factor (0-1)

Type:

float

name

Instance name for identification

Type:

str

role

Bot’s role identifier

Type:

str

role_description

Detailed description of bot’s role/personality

Type:

str

system_message

System-level instructions for the bot

Type:

str

tool_handler

Manages function calling capabilities

Type:

OpenAIToolHandler

conversation

Manages conversation history

Type:

OpenAINode

mailbox

Handles API communication

Type:

OpenAIMailbox

autosave

Whether to automatically save state after responses

Type:

bool

Example

```python
# Create a documentation expert bot
bot = ChatGPT_Bot(
    model_engine=Engines.GPT4,
    temperature=0.3,
    role_description="a Python documentation expert",
)

# Add tools and use the bot
bot.add_tool(my_function)
response = bot.respond("Please help document this code.")

# Save the bot's state for later use
bot.save("doc_expert.bot")
```

__init__(api_key: str | None = None, model_engine: Engines = Engines.GPT4, max_tokens: int = 4096, temperature: float = 0.3, name: str = 'bot', role: str = 'assistant', role_description: str = 'a friendly AI assistant', autosave: bool = True)[source]

Initialize a ChatGPT bot with OpenAI-specific components.

Use when you need to create a new OpenAI-based bot instance with specific configuration. Sets up all necessary components for OpenAI interaction including conversation management, tool handling, and API communication.

Parameters:
  • api_key (Optional[str]) – OpenAI API key. If not provided, attempts to read from OPENAI_API_KEY environment variable

  • model_engine (Engines) – The OpenAI model to use, defaults to GPT-4. Determines capabilities and pricing

  • max_tokens (int) – Maximum tokens in completion response, defaults to 4096. Affects response length and API costs

  • temperature (float) – Response randomness (0-1), defaults to 0.3. Higher values make responses more creative but less focused

  • name (str) – Name of the bot instance, defaults to ‘bot’. Used for identification in logs and saved states

  • role (str) – Role identifier for the bot, defaults to ‘assistant’. Used in message formatting

  • role_description (str) – Description of the bot’s role/personality, defaults to ‘a friendly AI assistant’. Guides bot behavior

  • autosave (bool) – Whether to automatically save conversation state, defaults to True. Enables conversation recovery

Note

The bot is initialized with OpenAI-specific implementations of:
  • OpenAIToolHandler for function calling

  • OpenAINode for conversation management

  • OpenAIMailbox for API communication

Module contents

Foundation module for the bots library.

This module provides the core abstractions and implementations for working with instruct-tuned language models that support tool use. It includes base classes and specific implementations for different LLM providers.

Performance Considerations:
  • Conversation trees optimize context window usage

  • Tool execution is handled efficiently with proper error management

Key Components:
  • Bot: Abstract base class defining the core bot interface

  • ToolHandler: Manages tool registration and execution

  • ConversationNode: Tree-based conversation history management

  • Specific implementations (e.g., AnthropicBot, ChatGPT_Bot)

The foundation module emphasizes:
  • Simple primary interface with bot.respond()

  • Comprehensive tool use capabilities: any Callable, module, or Python file can be a tool

  • Tree-based conversation management

  • Complete bot portability

  • Unified interface across different LLM providers

Example

>>> from bots import AnthropicBot
>>> from typing import Callable
>>> def my_function(x: int) -> str:
...     '''Example tool function'''
...     return str(x * 2)
>>>
>>> bot = AnthropicBot()
>>> bot.add_tool(my_function)  # type: Callable
>>> response: str = bot.respond("Use the tool to multiply 5 by 2")
>>> bot.save("my_bot.bot")  # Save complete bot state

See also

  • bots.flows: Higher-level interaction patterns

  • bots.tools: Built-in tool implementations

bots.foundation.__python_requires__ = '>=3.9'

Minimum Python version required for this package.

bots.foundation.__version__: str = '0.1.0'

Package version number, following semantic versioning.