compass.llm.config.LLMConfig#

class LLMConfig(name, llm_call_kwargs=None, llm_service_rate_limit=4000, text_splitter_chunk_size=10000, text_splitter_chunk_overlap=1000)[source]#

Bases: ABC

Abstract base class representing a single LLM configuration.

Parameters:
  • name (str) – Name of LLM.

  • llm_call_kwargs (dict, optional) – Keyword arguments to be passed to the LLM service call method (i.e. llm_service.call(**kwargs)). Should not contain the following keys:

    • usage_tracker

    • usage_sub_label

    • messages

    These arguments are provided by the LLM Caller object. By default, None.

  • llm_service_rate_limit (int, optional) – Token rate limit (i.e. tokens per minute) of the LLM service being used. By default, 4000.

  • text_splitter_chunk_size (int, optional) – Chunk size used to split the ordinance text. Parsing is performed on each individual chunk. Units are the token count of the model in charge of parsing ordinance text. Keeping this value low can help reduce token usage, since (free) heuristic checks may be able to throw away irrelevant chunks of text before passing them to the LLM. By default, 10000.

  • text_splitter_chunk_overlap (int, optional) – Overlap of consecutive chunks of the ordinance text. Parsing is performed on each individual chunk. Units are the token count of the model in charge of parsing ordinance text. By default, 1000.
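The interaction between text_splitter_chunk_size and text_splitter_chunk_overlap can be illustrated with a minimal sliding-window splitter. This is a simplified, hypothetical stand-in that counts characters rather than model tokens (the real splitter measures chunks in tokens of the parsing model); the function name and small sizes are for illustration only:

```python
def split_text(text, chunk_size=10, chunk_overlap=3):
    """Split ``text`` into overlapping chunks (character-based stand-in).

    Consecutive chunks share ``chunk_overlap`` units so that an ordinance
    clause falling on a chunk boundary is still seen whole in at least
    one chunk passed to the LLM.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # how far the window advances each time
    return [
        text[i : i + chunk_size]
        for i in range(0, max(len(text) - chunk_overlap, 1), step)
    ]
```

With chunk_size=10 and chunk_overlap=3, each chunk repeats the last 3 units of the previous one, so stitching the chunks back together (dropping each repeated prefix) reconstructs the original text.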

Attributes

llm_service

Object that can be used to submit calls to the LLM

text_splitter

Object that can be used to chunk text

property text_splitter#

Object that can be used to chunk text

Type:

TextSplitter

abstract property llm_service#

Object that can be used to submit calls to the LLM

Type:

LLMService
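Because llm_service is an abstract property, concrete subclasses must override it before they can be instantiated, while text_splitter comes for free from the base class. The pattern can be sketched with the standard library's abc module; EchoService and EchoConfig below are hypothetical stand-ins, not part of compass:

```python
from abc import ABC, abstractmethod


class LLMConfig(ABC):
    """Simplified stand-in mirroring the abstract-property pattern."""

    def __init__(self, name, llm_call_kwargs=None, llm_service_rate_limit=4000):
        self.name = name
        self.llm_call_kwargs = llm_call_kwargs or {}
        self.llm_service_rate_limit = llm_service_rate_limit

    @property
    @abstractmethod
    def llm_service(self):
        """Object that can be used to submit calls to the LLM."""


class EchoService:
    """Hypothetical stand-in for a real LLMService."""

    def call(self, messages, **kwargs):
        return messages  # echo the input back instead of querying an LLM


class EchoConfig(LLMConfig):
    """Concrete config: supplies the required llm_service property."""

    @property
    def llm_service(self):
        return EchoService()
```

Instantiating LLMConfig directly raises TypeError, since the abstract property is unimplemented; EchoConfig instantiates normally.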