compass.llm.config.OpenAIConfig#

class OpenAIConfig(name='gpt-4o', llm_call_kwargs=None, llm_service_rate_limit=4000, text_splitter_chunk_size=10000, text_splitter_chunk_overlap=1000, client_type='azure', client_kwargs=None)[source]#

Bases: LLMConfig

OpenAI LLM configuration

Parameters:
  • name (str, optional) – Name of OpenAI LLM. By default, "gpt-4o".

  • llm_call_kwargs (dict, optional) – Keyword arguments to be passed to the llm service call method (i.e. llm_service.call(**kwargs)). Should not contain the following keys:

    • usage_tracker

    • usage_sub_label

    • messages

    These arguments are provided by the LLM Caller object. By default, None.

  • llm_service_rate_limit (int, optional) – Token rate limit (i.e., tokens per minute) of the LLM service being used. By default, 4000.

  • text_splitter_chunk_size (int, optional) – Chunk size used to split the ordinance text. Parsing is performed on each individual chunk. Units are in token count of the model in charge of parsing ordinance text. Keeping this value low can help reduce token usage, since (free) heuristic checks may be able to discard irrelevant chunks of text before they are passed to the LLM. By default, 10000.

  • text_splitter_chunk_overlap (int, optional) – Overlap of consecutive chunks of the ordinance text. Parsing is performed on each individual chunk. Units are in token count of the model in charge of parsing ordinance text. By default, 1000.

  • client_type (str, default "azure") – Type of client to set up for this calling instance. Must be one of OpenAIConfig.SUPPORTED_CLIENTS. By default, "azure".

  • client_kwargs (dict, optional) – Keyword-value pairs to pass to underlying LLM client. These typically include things like API keys and endpoints. By default, None.
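A minimal construction sketch for the parameters above. The import path follows this page's module path; the specific `client_kwargs` keys shown (`api_key`, `azure_endpoint`, `api_version`) are an assumption based on the standard `AsyncAzureOpenAI` constructor and may differ in your deployment:

```python
# Hedged sketch, not a verified recipe: assumes client_kwargs are passed
# through unchanged to the AsyncAzureOpenAI constructor.
from compass.llm.config import OpenAIConfig

config = OpenAIConfig(
    name="gpt-4o",
    llm_call_kwargs={"temperature": 0},  # must NOT contain usage_tracker,
                                         # usage_sub_label, or messages
    llm_service_rate_limit=4000,         # tokens per minute
    client_type="azure",                 # one of OpenAIConfig.SUPPORTED_CLIENTS
    client_kwargs={
        "api_key": "...",                    # assumed AsyncAzureOpenAI kwargs
        "azure_endpoint": "https://...",
        "api_version": "2024-02-01",
    },
)
```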

Attributes

SUPPORTED_CLIENTS

Currently-supported OpenAI LLM clients

client_kwargs

Parameters to pass to client initializer

llm_service

Object that can be used to submit calls to LLM

text_splitter

Object that can be used to chunk text

SUPPORTED_CLIENTS = {'azure': <class 'openai.lib.azure.AsyncAzureOpenAI'>, 'openai': <class 'openai.AsyncOpenAI'>}#

Currently-supported OpenAI LLM clients

property client_kwargs#

Parameters to pass to client initializer

Type:

dict

property llm_service#

Object that can be used to submit calls to LLM

Type:

LLMService

property text_splitter#

Object that can be used to chunk text

Type:

TextSplitter
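The text_splitter_chunk_size and text_splitter_chunk_overlap settings determine how many chunks a document is split into, and therefore how many parsing calls are made. A back-of-envelope estimate (the helper below is hypothetical, not part of compass):

```python
import math

def estimate_num_chunks(total_tokens: int,
                        chunk_size: int = 10000,
                        overlap: int = 1000) -> int:
    """Estimate how many chunks a document of ``total_tokens`` splits into.

    With overlapping chunks, each chunk after the first advances by
    ``chunk_size - overlap`` new tokens.
    """
    if total_tokens <= chunk_size:
        return 1
    stride = chunk_size - overlap
    return 1 + math.ceil((total_tokens - chunk_size) / stride)

# e.g. a 50,000-token ordinance with the defaults: the first chunk covers
# 10,000 tokens, each later chunk adds 9,000 new tokens.
print(estimate_num_chunks(50_000))  # → 6
```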