pywry.chat_providers¶
LLM provider adapters for PyWry chat.
These classes expose a common interface over optional provider SDKs and user-defined callables. They operate on ChatMessage histories and ChatConfig settings from pywry.chat.
Base Provider¶
pywry.chat_providers.ChatProvider
¶
Bases: ABC
Abstract base class for chat completion providers.
Provider implementations adapt third-party LLM clients to PyWry's chat protocol. They accept a list of `pywry.chat.ChatMessage` objects plus a `pywry.chat.ChatConfig`, and return either a complete assistant message or a stream of text chunks.
Functions¶
generate (abstractmethod, async) ¶
generate(messages: list[ChatMessage], config: ChatConfig) -> ChatMessage
Generate a complete assistant response.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to send to the provider. | *required* |
| `config` | `ChatConfig` | Chat generation settings such as model, temperature, max tokens, and optional system prompt. | *required* |

Returns:

| Type | Description |
|---|---|
| `ChatMessage` | A fully materialized assistant message. |

Raises:

| Type | Description |
|---|---|
| `ImportError` | If the provider depends on an optional package that is not installed. |
| `Exception` | Implementations may raise provider-specific client or transport errors when generation fails. |
stream (abstractmethod, async) ¶
stream(messages: list[ChatMessage], config: ChatConfig, cancel_event: Event | None = None) -> AsyncIterator[str]
Stream assistant response chunks.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to send to the provider. | *required* |
| `config` | `ChatConfig` | Chat generation settings such as model, temperature, max tokens, and optional system prompt. | *required* |
| `cancel_event` | `Event \| None` | Cooperative cancellation signal checked between yielded chunks. | `None` |

Yields:

| Type | Description |
|---|---|
| `str` | Incremental text chunks from the provider. |

Raises:

| Type | Description |
|---|---|
| `GenerationCancelledError` | If `cancel_event` is set while streaming. |
| `ImportError` | If the provider depends on an optional package that is not installed. |
| `Exception` | Implementations may raise provider-specific client or transport errors when streaming fails. |

Implementations MUST check `cancel_event.is_set()` between chunks and raise `GenerationCancelledError` when set.
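To make the contract concrete, here is a minimal toy provider that satisfies both methods. Stand-in dataclasses replace `pywry.chat.ChatMessage` and `pywry.chat.ChatConfig` (only the role/content shape is assumed here; the real classes carry more fields):

```python
import asyncio
from dataclasses import dataclass

# Stand-ins for pywry.chat.ChatMessage / ChatConfig; illustrative only.
@dataclass
class ChatMessage:
    role: str
    content: str

@dataclass
class ChatConfig:
    model: str = "echo-1"

class GenerationCancelledError(Exception):
    """Raised when cancel_event is set mid-stream."""

class EchoProvider:
    """Toy ChatProvider implementation that echoes the last message."""

    async def generate(self, messages, config):
        last = messages[-1].content if messages else ""
        return ChatMessage(role="assistant", content=f"echo: {last}")

    async def stream(self, messages, config, cancel_event=None):
        reply = f"echo: {messages[-1].content}"
        for word in reply.split(" "):
            # Cooperative cancellation: check between yielded chunks.
            if cancel_event is not None and cancel_event.is_set():
                raise GenerationCancelledError
            yield word + " "

async def demo():
    provider = EchoProvider()
    history = [ChatMessage(role="user", content="hello")]
    full = await provider.generate(history, ChatConfig())
    chunks = [c async for c in provider.stream(history, ChatConfig())]
    return full.content, "".join(chunks).strip()

print(asyncio.run(demo()))
```

Both paths produce the same text; `stream` simply delivers it in chunks while honouring the cancellation check described above.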
Provider Implementations¶
pywry.chat_providers.OpenAIProvider
¶
Bases: ChatProvider
Provider backed by the openai async client.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**kwargs` | `Any` | Keyword arguments forwarded to the underlying openai async client constructor. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ImportError` | If the optional `openai` package is not installed. |
Initialize the OpenAI client wrapper.
Functions¶
_build_messages
¶
_build_messages(messages: list[ChatMessage], config: ChatConfig) -> list[dict[str, Any]]
Convert PyWry chat messages into OpenAI chat payloads.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history excluding the optional system prompt. | *required* |
| `config` | `ChatConfig` | Chat configuration containing the system prompt and generation settings. | *required* |

Returns:

| Type | Description |
|---|---|
| `list[dict[str, Any]]` | OpenAI-compatible chat message dictionaries. |
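The conversion is a straightforward mapping. A hypothetical standalone version of the same idea (plain dicts stand in for `ChatMessage`, and the field names are assumptions for illustration):

```python
def build_openai_messages(messages, system_prompt=None):
    """Prepend an optional system prompt, then map each turn into the
    {"role": ..., "content": ...} shape the chat completions API expects."""
    payload = []
    if system_prompt:
        payload.append({"role": "system", "content": system_prompt})
    for m in messages:
        payload.append({"role": m["role"], "content": m["content"]})
    return payload

print(build_openai_messages(
    [{"role": "user", "content": "hi"}], system_prompt="Be brief."
))
```

Note that the system prompt travels as the first list entry, because the OpenAI chat payload carries it as an ordinary message with role `system`.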
generate (async) ¶
generate(messages: list[ChatMessage], config: ChatConfig) -> ChatMessage
Generate a complete response via OpenAI.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to submit to the OpenAI chat completions API. | *required* |
| `config` | `ChatConfig` | Generation settings including model, temperature, max tokens, and optional system prompt. | *required* |

Returns:

| Type | Description |
|---|---|
| `ChatMessage` | Assistant response with model and token usage metadata attached. |

Raises:

| Type | Description |
|---|---|
| `Exception` | Any client, API, or transport exception raised by the OpenAI SDK. |
stream (async) ¶
stream(messages: list[ChatMessage], config: ChatConfig, cancel_event: Event | None = None) -> AsyncIterator[str]
Stream response chunks from OpenAI.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to submit to the OpenAI chat completions API. | *required* |
| `config` | `ChatConfig` | Generation settings including model, temperature, max tokens, and optional system prompt. | *required* |
| `cancel_event` | `Event \| None` | Cooperative cancellation signal checked between streamed chunks. | `None` |

Yields:

| Type | Description |
|---|---|
| `str` | Incremental response text chunks. |

Raises:

| Type | Description |
|---|---|
| `GenerationCancelledError` | If `cancel_event` is set while streaming. |
| `Exception` | Any client, API, or transport exception raised by the OpenAI SDK. |
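A typical consumption pattern for a cancellable stream, sketched with an `asyncio.Event` and a fake chunk source in place of a live provider (`fake_stream` is purely illustrative):

```python
import asyncio

async def fake_stream(cancel_event):
    """Stands in for provider.stream(...): yields text chunks and
    honours cooperative cancellation between chunks."""
    for chunk in ["Hel", "lo, ", "wor", "ld"]:
        if cancel_event.is_set():
            raise RuntimeError("cancelled")  # stands in for GenerationCancelledError
        await asyncio.sleep(0)
        yield chunk

async def consume():
    cancel = asyncio.Event()
    parts = []
    try:
        async for chunk in fake_stream(cancel):
            parts.append(chunk)
            if len(parts) == 2:  # e.g. the user pressed "stop"
                cancel.set()
    except RuntimeError:
        pass  # cancelled mid-stream; keep the partial text
    return "".join(parts)

print(asyncio.run(consume()))
```

Because cancellation is cooperative and checked between chunks, the consumer keeps every chunk yielded before the event was observed.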
pywry.chat_providers.AnthropicProvider
¶
Bases: ChatProvider
Provider backed by the anthropic async client.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**kwargs` | `Any` | Keyword arguments forwarded to the underlying anthropic async client constructor. | `{}` |

Raises:

| Type | Description |
|---|---|
| `ImportError` | If the optional `anthropic` package is not installed. |
Initialize the Anthropic client wrapper.
Functions¶
_build_messages
¶
_build_messages(messages: list[ChatMessage]) -> list[dict[str, Any]]
Convert PyWry chat messages into Anthropic message payloads.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to transform. | *required* |

Returns:

| Type | Description |
|---|---|
| `list[dict[str, Any]]` | Anthropic-compatible message dictionaries. |
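Unlike the OpenAI payload, Anthropic's messages API carries the system prompt as a separate top-level `system` parameter, which is why this method receives no config. A hypothetical standalone version of the mapping (plain dicts stand in for `ChatMessage`):

```python
def build_anthropic_messages(messages):
    """Map user/assistant turns only; the system prompt travels in the
    request's top-level `system` field, not in this list."""
    return [{"role": m["role"], "content": m["content"]} for m in messages]

print(build_anthropic_messages([
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]))
```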
generate (async) ¶
generate(messages: list[ChatMessage], config: ChatConfig) -> ChatMessage
Generate a complete response via Anthropic.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to submit to the Anthropic messages API. | *required* |
| `config` | `ChatConfig` | Generation settings including model, temperature, max tokens, and optional system prompt. | *required* |

Returns:

| Type | Description |
|---|---|
| `ChatMessage` | Assistant response with model and token usage metadata attached. |

Raises:

| Type | Description |
|---|---|
| `Exception` | Any client, API, or transport exception raised by the Anthropic SDK. |
stream (async) ¶
stream(messages: list[ChatMessage], config: ChatConfig, cancel_event: Event | None = None) -> AsyncIterator[str]
Stream response chunks from Anthropic.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to submit to the Anthropic messages API. | *required* |
| `config` | `ChatConfig` | Generation settings including model, temperature, max tokens, and optional system prompt. | *required* |
| `cancel_event` | `Event \| None` | Cooperative cancellation signal checked between streamed chunks. | `None` |

Yields:

| Type | Description |
|---|---|
| `str` | Incremental response text chunks. |

Raises:

| Type | Description |
|---|---|
| `GenerationCancelledError` | If `cancel_event` is set while streaming. |
| `Exception` | Any client, API, or transport exception raised by the Anthropic SDK. |
pywry.chat_providers.CallbackProvider
¶
Bases: ChatProvider
Provider backed by user-supplied Python callables.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `generate_fn` | `Any` | Callable used for one-shot generation. It may be synchronous or async and should return either a string or a `ChatMessage`. | `None` |
| `stream_fn` | `Any` | Callable used for streaming generation. It may return a synchronous or async iterator of text chunks. | `None` |
Initialize a callback-based provider.
Functions¶
generate (async) ¶
generate(messages: list[ChatMessage], config: ChatConfig) -> ChatMessage
Generate a complete response via the callback.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history passed to `generate_fn`. | *required* |
| `config` | `ChatConfig` | Generation settings passed to `generate_fn`. | *required* |

Returns:

| Type | Description |
|---|---|
| `ChatMessage` | Assistant response returned by the callback, or a fallback message if no callback is configured. |
stream (async) ¶
stream(messages: list[ChatMessage], config: ChatConfig, cancel_event: Event | None = None) -> AsyncIterator[str]
Stream response chunks via the callback.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history passed to `stream_fn`. | *required* |
| `config` | `ChatConfig` | Generation settings passed to `stream_fn`. | *required* |
| `cancel_event` | `Event \| None` | Cooperative cancellation signal checked between streamed chunks. | `None` |

Yields:

| Type | Description |
|---|---|
| `str` | Incremental response text chunks from the callback. |

Raises:

| Type | Description |
|---|---|
| `GenerationCancelledError` | If `cancel_event` is set while streaming. |
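The value of this provider is that it accepts plain callables, sync or async alike. A simplified, hypothetical sketch of how such a dispatch can normalize both kinds of generate callback (not the actual implementation, which wraps results in `ChatMessage` objects):

```python
import asyncio
import inspect

async def call_generate(generate_fn, messages, config):
    """Invoke a user callback, awaiting the result only when the
    callback is async, and fall back when none is configured."""
    if generate_fn is None:
        return "No generate callback configured."  # fallback message
    result = generate_fn(messages, config)
    if inspect.isawaitable(result):
        result = await result
    return result

def sync_cb(messages, config):
    return "sync reply"

async def async_cb(messages, config):
    return "async reply"

print(asyncio.run(call_generate(sync_cb, [], None)))
print(asyncio.run(call_generate(async_cb, [], None)))
```

Checking `inspect.isawaitable` on the call's result, rather than inspecting the function itself, also handles callables that return coroutines indirectly.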
pywry.chat_providers.MagenticProvider
¶
Bases: ChatProvider
Wraps any [magentic](https://magentic.dev) `ChatModel` backend.
This enables plug-and-play access to every LLM backend that magentic supports — OpenAI, Anthropic, LiteLLM (100+ providers), Mistral, and any OpenAI-compatible API (Ollama, Azure, Gemini, xAI, etc.).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `ChatModel \| str` | A pre-configured magentic `ChatModel` instance, or a model name string. | *required* |
| `**kwargs` | `Any` | Extra keyword arguments forwarded to the magentic chat model constructor. | `{}` |
Functions¶
_build_messages
¶
_build_messages(messages: list[ChatMessage], config: ChatConfig) -> list[Any]
Convert PyWry messages to magentic message objects.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to transform. | *required* |
| `config` | `ChatConfig` | Chat configuration containing the optional system prompt. | *required* |

Returns:

| Type | Description |
|---|---|
| `list[Any]` | magentic message objects appropriate for the configured backend. |
generate (async) ¶
generate(messages: list[ChatMessage], config: ChatConfig) -> ChatMessage
Generate a complete response via magentic.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to submit to the magentic chat model. | *required* |
| `config` | `ChatConfig` | Generation settings including model metadata and optional system prompt. | *required* |

Returns:

| Type | Description |
|---|---|
| `ChatMessage` | Assistant response generated by the magentic model. |

Raises:

| Type | Description |
|---|---|
| `TypeError` | If the configured magentic model returns a content object that cannot be converted to text as expected by the caller. |
| `Exception` | Any backend-specific exception raised while executing the model. |
stream (async) ¶
stream(messages: list[ChatMessage], config: ChatConfig, cancel_event: Event | None = None) -> AsyncIterator[str]
Stream response chunks from magentic.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `messages` | `list[ChatMessage]` | Conversation history to submit to the magentic chat model. | *required* |
| `config` | `ChatConfig` | Generation settings including model metadata and optional system prompt. | *required* |
| `cancel_event` | `Event \| None` | Cooperative cancellation signal checked between streamed chunks. | `None` |

Yields:

| Type | Description |
|---|---|
| `str` | Incremental response text chunks. |

Raises:

| Type | Description |
|---|---|
| `GenerationCancelledError` | If `cancel_event` is set while streaming. |
| `Exception` | Any backend-specific exception raised while executing the model. |
Factory¶
pywry.chat_providers.get_provider
¶
get_provider(name: str, **kwargs: Any) -> ChatProvider
Create a provider instance by name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Provider name. Supported values are `"openai"`, `"anthropic"`, `"callback"`, and `"magentic"`. | *required* |
| `**kwargs` | `Any` | Passed to the provider constructor. | `{}` |

Returns:

| Type | Description |
|---|---|
| `ChatProvider` | Instantiated provider. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the provider name is unknown. |
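The factory pattern behind such a function can be sketched as a simple name-to-class registry (toy provider classes here; the real factory constructs the provider classes documented above):

```python
class EchoProvider:
    """Toy provider standing in for a real ChatProvider subclass."""
    def __init__(self, **kwargs):
        self.options = kwargs

class ReverseProvider:
    """Second toy provider, to show the lookup is data-driven."""
    def __init__(self, **kwargs):
        self.options = kwargs

_REGISTRY = {"echo": EchoProvider, "reverse": ReverseProvider}

def get_provider(name, **kwargs):
    """Look up the provider class by (case-insensitive) name and
    forward kwargs to its constructor."""
    try:
        cls = _REGISTRY[name.lower()]
    except KeyError:
        raise ValueError(f"Unknown provider: {name!r}") from None
    return cls(**kwargs)

p = get_provider("echo", api_key="sk-test")
print(type(p).__name__, p.options)
```

Raising `ValueError` on an unknown name, rather than returning `None`, matches the contract documented in the table above.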