Package documentation

Undocumented

Module | Summary
------ | -------
interface | No module docstring; 0/1 variable, 0/1 function, 1/10 classes documented
langchain | Undocumented
mlflow | No module docstring; 3/8 functions, 1/3 classes documented
multiple | No module docstring; 0/1 variable, 1/1 class documented
ollama | Undocumented
openai | No module docstring; 0/1 variable, 1/4 functions, 0/4 classes documented
partitioned | No module docstring; 1/1 class documented
sentence | Undocumented
From __init__.py:

Kind | Name | Summary
---- | ---- | -------
Class | | No class docstring; 0/2 properties, 0/1 class variable, 2/9 methods, 0/5 static methods, 0/1 class method documented
Function | multiple | Undocumented
Function | ollama_extraction | Undocumented
Function | openai_completion | Returns an OpenAI completion model.
Function | openai_embedding | Returns an OpenAI embedding model.
Function | openai_extraction | Undocumented
Function | partitioned_on | Returns a model that routes the inference request to another model based on a partition key.
Function | polars | Undocumented
Function | polars_predictor | Undocumented
Function | python | Undocumented
def ollama_extraction(
    model: str,
    base_url: str | ConfigValue = 'http://localhost:11434/v1',
    api_key: str | ConfigValue = 'ollama',
    extraction_description: str | None = None,
) -> ExposedModel:

Undocumented
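Since this factory is undocumented, here is a minimal usage sketch, assuming it mirrors the documented `openai_*` factories and extracts structured output features from text input. The contract name `MyExtraction` and the features `text` and `summary` are illustrative, not part of the package.

```python
@model_contract(
    input_features=[MyFeature().text],
    exposed_model=ollama_extraction(
        "llama3.1",  # assumption: any model served by the local Ollama instance
        base_url="http://localhost:11434/v1",
        extraction_description="Pull a one-sentence summary out of the text.",
    ),
)
class MyExtraction:
    my_entity = Int32().as_entity()
    text = String()
    summary = String()  # hypothetical extracted output feature
    predicted_at = EventTimestamp()

extractions = await store.model(MyExtraction).predict_over({
    "my_entity": [1, 2],
    "text": ["first document", "second document"],
}).to_polars()
```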
def openai_completion(
    model: str,
    prompt_template: str | None = None,
    config: OpenAiConfig | None = None,
) -> ExposedModel:

Returns an OpenAI completion model.

```python
@model_contract(
    input_features=[MyFeature().name],
    exposed_model=openai_completion(""),
)
class MyCompletion:
    my_entity = Int32().as_entity()
    name = String()
    response = String().as_prompt_completion()
    predicted_at = EventTimestamp()

completions = await store.model(MyCompletion).predict_over({
    "my_entity": [1, 2, 3],
    "name": ["Hello", "World", "foo"],
}).to_polars()
```

Args:
    model (str): The model to use. See the OpenAI docs for the available model names.
    prompt_template (str): An optional custom prompt template. The default template is built from the input features.
    config (OpenAiConfig): Optional client configuration.

Returns:
    ExposedModel: A model that sends completion requests to OpenAI.
def openai_embedding(
    model: str,
    config: OpenAiConfig | None = None,
    batch_on_n_chunks: int | None = 100,
    prompt_template: str | None = None,
) -> ExposedModel:

Returns an OpenAI embedding model.

```python
@model_contract(
    input_features=[MyFeature().name],
    exposed_model=openai_embedding("text-embedding-3-small"),
)
class MyEmbedding:
    my_entity = Int32().as_entity()
    name = String()
    embedding = Embedding(1536)
    predicted_at = EventTimestamp()

embeddings = await store.model(MyEmbedding).predict_over({
    "my_entity": [1, 2, 3],
    "name": ["Hello", "World", "foo"],
}).to_polars()
```

Args:
    model (str): The model to use. See the OpenAI docs for the available model names.
    batch_on_n_chunks (int): The number of chunks at which to switch to the batch API, for payloads too large for a single request.
    prompt_template (str): An optional custom prompt template. The default template is built from the input features.
    config (OpenAiConfig): Optional client configuration.

Returns:
    ExposedModel: A model that sends embedding requests to OpenAI.
def openai_extraction(
    model: str,
    extraction_description: str | None = None,
    config: OpenAiConfig | None = None,
) -> ExposedModel:

Undocumented
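Also undocumented; a minimal sketch by analogy with openai_completion above, assuming extraction_description steers which output fields the model fills in. The names MyTicketLabels, body, and category are hypothetical.

```python
@model_contract(
    input_features=[MyTicket().body],
    exposed_model=openai_extraction(
        "gpt-4o-mini",  # assumption: any OpenAI chat model name
        extraction_description="Classify the support ticket into a category.",
    ),
)
class MyTicketLabels:
    my_entity = Int32().as_entity()
    body = String()
    category = String()  # hypothetical extracted output feature
    predicted_at = EventTimestamp()
```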
def partitioned_on(
    key: str,
    partitions: dict[str, ExposedModel],
    default_partition: str | None = None,
) -> ExposedModel:

Returns a model that routes each inference request to another model based on a partition key.

```python
@model_contract(
    input_features=[MyFeature().name],
    exposed_model=partitioned_on(
        "lang",
        partitions={
            "no": openai_embedding("text-embedding-3-large"),
            "en": openai_embedding("text-embedding-ada-002"),
        },
        default_partition="no",
    ),
)
class MyEmbedding:
    my_entity = Int32().as_entity()
    name = String()
    lang = String()
    embedding = Embedding(1536)
    predicted_at = EventTimestamp()

embeddings = await store.model(MyEmbedding).predict_over({
    "my_entity": [1, 2, 3],
    "name": ["Hello", "Hei", "Hola"],
    "lang": ["en", "no", "es"],
}).to_polars()
```
def polars_predictor(
    callable: Callable[[pl.DataFrame, ModelFeatureStore], Coroutine[None, None, pl.DataFrame]],
    features: list[FeatureReferencable] | None = None,
) -> ExposedModel:

Undocumented
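Undocumented, but the signature is informative: the wrapped coroutine receives the input rows as a polars DataFrame together with the model's ModelFeatureStore, and should return a DataFrame holding the prediction columns. A minimal sketch under that assumption; the names predict, a, b, and prediction are illustrative.

```python
import polars as pl

async def predict(df: pl.DataFrame, store: ModelFeatureStore) -> pl.DataFrame:
    # Compute the output column from the input feature columns.
    return df.with_columns((pl.col("a") + pl.col("b")).alias("prediction"))

@model_contract(
    input_features=[MyFeature().a, MyFeature().b],
    exposed_model=polars_predictor(predict),
)
class MyModel:
    my_entity = Int32().as_entity()
    a = Int32()
    b = Int32()
    prediction = Int32()
    predicted_at = EventTimestamp()
```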