package documentation

Undocumented

Module interface No module docstring; 0/1 variable, 0/1 function, 1/10 classes documented
Module langchain Undocumented
Module langchain_retriever Undocumented
Module mlflow No module docstring; 3/8 functions, 1/3 classes documented
Module multiple No module docstring; 0/1 variable, 1/1 class documented
Module ollama Undocumented
Module openai No module docstring; 0/1 variable, 1/4 functions, 0/4 classes documented
Module partitioned No module docstring; 1/1 class documented
Module sentence_transformer Undocumented

From __init__.py:

Class ExposedModel No class docstring; 0/2 properties, 0/1 class variable, 2/9 methods, 0/5 static methods, 0/1 class method documented
Function multiple_models Undocumented
Function ollama_extraction Undocumented
Function openai_completion Returns an OpenAI completion model.
Function openai_embedding Returns an OpenAI embedding model.
Function openai_extraction Undocumented
Function partitioned_on Returns a model that routes the inference request to one of the given models based on a partition key
Function polars_expression Undocumented
Function polars_predictor Undocumented
Function python_function Undocumented
def multiple_models(*models: ExposedModel) -> ExposedModel: (source)

Undocumented

def ollama_extraction(model: str, base_url: str | ConfigValue = 'http://localhost:11434/v1', api_key: str | ConfigValue = 'ollama', extraction_description: str | None = None) -> ExposedModel: (source)

Undocumented

def openai_completion(model: str, prompt_template: str | None = None, config: OpenAiConfig | None = None) -> ExposedModel: (source)

Returns an OpenAI completion model.

```python
@model_contract(
    input_features=[MyFeature().name],
    exposed_model=openai_completion(""),
)
class MyCompletion:
    my_entity = Int32().as_entity()
    name = String()
    response = String().as_prompt_completion()
    predicted_at = EventTimestamp()

completions = await store.model(MyCompletion).predict_over({
    "my_entity": [1, 2, 3],
    "name": ["Hello", "World", "foo"]
}).to_polars()
```

Args:
    model (str): The model to use. See the OpenAI docs for valid model names.
    prompt_template (str): An optional custom prompt template. Defaults to a template based on the input features.
    config (OpenAiConfig): Optional OpenAI configuration.
Returns:
    ExposedModel: a model that sends completion requests to OpenAI

def openai_embedding(model: str, config: OpenAiConfig | None = None, batch_on_n_chunks: int | None = 100, prompt_template: str | None = None) -> ExposedModel: (source)

Returns an OpenAI embedding model.

```python
@model_contract(
    input_features=[MyFeature().name],
    exposed_model=openai_embedding("text-embedding-3-small"),
)
class MyEmbedding:
    my_entity = Int32().as_entity()
    name = String()
    embedding = Embedding(1536)
    predicted_at = EventTimestamp()

embeddings = await store.model(MyEmbedding).predict_over({
    "my_entity": [1, 2, 3],
    "name": ["Hello", "World", "foo"]
}).to_polars()
```

Args:
    model (str): The model to use. See the OpenAI docs for valid model names.
    batch_on_n_chunks (int): Switch to the batch API when the input exceeds this many chunks.
    prompt_template (str): An optional custom prompt template. Defaults to a template based on the input features.
    config (OpenAiConfig): Optional OpenAI configuration.
Returns:
    ExposedModel: a model that sends embedding requests to OpenAI
def openai_extraction(model: str, extraction_description: str | None = None, config: OpenAiConfig | None = None) -> ExposedModel: (source)

Undocumented

def partitioned_on(key: str, partitions: dict[str, ExposedModel], default_partition: str | None = None) -> ExposedModel: (source)

Returns a model that routes the inference request to one of the given models based on a partition key.

```python
@model_contract(
    input_features=[MyFeature().name],
    exposed_model=partitioned_on(
        "lang",
        partitions={
            "no": openai_embedding("text-embedding-3-large"),
            "en": openai_embedding("text-embedding-ada-002"),
        },
        default_partition="no"
    ),
)
class MyEmbedding:
    my_entity = Int32().as_entity()
    name = String()
    lang = String()
    embedding = Embedding(1536)
    predicted_at = EventTimestamp()

embeddings = await store.model(MyEmbedding).predict_over({
    "my_entity": [1, 2, 3],
    "name": ["Hello", "Hei", "Hola"],
    "lang": ["en", "no", "es"]
}).to_polars()
```

def polars_expression(expr: pl.Expr, should_filter_nulls: bool = True) -> ExposedModel: (source)

Undocumented

def polars_predictor(callable: Callable[[pl.DataFrame, ModelFeatureStore], Coroutine[None, None, pl.DataFrame]], features: list[FeatureReferencable] | None = None) -> ExposedModel: (source)

Undocumented

def python_function(function: Callable[[pl.DataFrame], pl.Series]) -> DillFunction: (source)

Undocumented