Text Embeddings
Text embeddings represent text as numeric vectors, which lets you work with strings in a more quantitative way. For example, with text embeddings you can calculate the cosine similarity of two strings, run similarity searches powered by vector stores, and much more.
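As a concrete illustration of the first use case, cosine similarity between two embedding vectors can be computed in a few lines of standard Python. The short vectors below are toy stand-ins for real 768-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    # dot product of the two vectors divided by the product of their norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# toy vectors standing in for real text embeddings
v1 = [1.0, 2.0, 3.0]
v2 = [2.0, 4.0, 6.0]   # same direction as v1
v3 = [-3.0, 0.0, 1.0]  # orthogonal to v1

print(cosine_similarity(v1, v2))  # ~1.0: the vectors point the same way
print(cosine_similarity(v1, v3))  # 0.0: the vectors are orthogonal
```

Similar strings produce embedding vectors that point in similar directions, so a cosine similarity close to 1.0 indicates closely related text.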
Seaplane has built-in support for text embeddings in tasks. To use them, import seaplane_embeddings from seaplane.integrations.langchain. Seaplane embeddings support embedding a single string or a list of strings through embed_query() and embed_documents(), respectively.
Seaplane embeddings are based on the instructor-xl model and produce vectors with a dimension of 768.
Embed Query
The embed_query() function embeds a single input string into a vector.
- Python
- Expected Output
Input Variable:
- Input query - type: string
from seaplane.integrations.langchain import seaplane_embeddings

def my_embedding_task(context):
    # embed a single string and return the resulting vector
    return seaplane_embeddings.embed_query("Embed this string")
Output type: list, with elements of type: list, with elements of type: float
[[-0.0005658837035298347, -0.019569026306271553, 0.03404128551483154,
...
-0.049817588180303574, 0.023734183982014656, 0.09254015982151031]]
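Note that the expected output above is a list containing a single embedding vector, so you typically index into it to get the vector itself. A minimal sketch, using a toy three-element stand-in for the 768-dimensional result:

```python
# toy stand-in for the nested list shown in the expected output above
result = [[-0.0005658837035298347, -0.019569026306271553, 0.03404128551483154]]

# the embedding of the single query string is the first (and only) element
vector = result[0]

print(len(result))  # 1: one vector for one input string
print(len(vector))  # 3 here; a real Seaplane embedding would have 768 floats
```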
Embed Documents
The embed_documents() function embeds a list of strings into vectors.
- Python
- Expected Output
Input Variable:
- Input documents - type: list, with elements of type: string
from seaplane.integrations.langchain import seaplane_embeddings

def my_embedding_task(context):
    # embed a list of strings and return the resulting vectors
    return seaplane_embeddings.embed_documents(["Embed this string", "Also embed this string"])
Output type: list, with elements of type: list, with elements of type: float
[[-0.0005658837035298347, -0.019569026306271553, 0.03404128551483154,
...
-0.049817588180303574, 0.023734183982014656, 0.09254015982151031],
[-0.0005658837035298347, -0.019569026306271553, 0.03404128551483154,
...
-0.049817588180303574, 0.023734183982014656, 0.09254015982151031]]
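A common follow-up is to find which embedded document is closest to an embedded query. The sketch below uses small toy vectors in place of real embed_documents() and embed_query() output; the cosine_similarity helper and the vectors are illustrative, not part of the Seaplane API:

```python
import math

def cosine_similarity(a, b):
    # dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# toy document embeddings, standing in for embed_documents(...) output
doc_vectors = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.7, 0.7, 0.0],
]

# toy query embedding, standing in for embed_query(...) output
query_vector = [0.9, 0.1, 0.0]

# rank the documents by similarity to the query and pick the best match
scores = [cosine_similarity(query_vector, d) for d in doc_vectors]
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # 0: the first document is most similar to the query
```

In practice a vector store performs this nearest-neighbor lookup for you at scale, but the underlying comparison is the same cosine similarity shown here.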