Version: 0.3.94

Models

Seaplane has built-in support for foundational models to use inside your tasks. Additionally, you can upload your private models to our model registry to use inside the Seaplane ecosystem.

There are two types of foundational models: local and external models. Local models are models hosted by Seaplane. External models are (API) integrations with external foundational models such as OpenAI.

You can add any model as a resource to your task by:

  • Marking the task as type='inference'.
  • Adding the model name to the task decorator as model='<model-name>'.
  • Adding model as a parameter to the function definition.
  • Calling the model() function inside the task with a parameters object.
Example Model Task
from seaplane import task

@task(id='my-task', type='inference', model='<MODEL-NAME>')
def my_task(data, model):
    # construct the model-specific parameters object
    params = {...}

    # return inference result
    return model(params)

The parameters object inside the model function call is different for each model; you can learn more about it in the sections below.

Local Models

Local models are shared resources. This ensures we can offer them at a reasonable price. Seaplane never stores your prompts for future training of models. This makes them safe to use for proprietary and confidential data.

MPT-30B

MPT-30B-instruct is an open-source large language model developed by MosaicML. It has a context window of 8,000 tokens.

To add MPT-30B to your Seaplane task, follow the steps below.

  • Make sure your task is of type inference: type='inference'.
  • Add MPT-30B to your task decorator as a resource model='MPT-30B'.
  • Add model as a parameter to your function definition.
  • Call the model() function with the parameters object inside the task.

The parameters object of MPT-30B has one required and one optional argument.

  • prompt (required) - The prompt you want to submit to MPT-30B for inferencing.
  • max_output_length (optional) - The maximum number of tokens the model can use in its response. The default is 3000. Keep in mind that the context window of MPT-30B is 8,000 tokens; going beyond it produces rapidly degrading results.
MPT-30B Example
from seaplane import task

@task(id='my-task', type='inference', model='MPT-30B')
def my_task(data, model):
    # construct params
    params = {
        "prompt": "What is a Seaplane?",
        "max_output_length": 5000  # optional
    }

    # return inference result
    return model(params)
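Because the prompt's tokens and max_output_length together must fit inside MPT-30B's 8,000-token context window, it can help to sanity-check a request before submitting it. The sketch below is illustrative only: fits_context and CONTEXT_WINDOW are hypothetical helpers, and the characters-per-token estimate is a crude heuristic, not the model's real tokenizer.

```python
# Rough budget check for MPT-30B: estimated prompt tokens plus
# max_output_length should stay within the ~8,000-token context window.
# NOTE: len(prompt) // 4 is only a crude characters-per-token heuristic.

CONTEXT_WINDOW = 8000  # MPT-30B context window, in tokens


def fits_context(prompt: str, max_output_length: int = 3000) -> bool:
    """Return True if the request plausibly fits the context window."""
    estimated_prompt_tokens = len(prompt) // 4  # rough estimate only
    return estimated_prompt_tokens + max_output_length <= CONTEXT_WINDOW


print(fits_context("What is a Seaplane?"))       # short prompt fits
print(fits_context("word " * 10000, 3000))       # oversized prompt does not
```

A real implementation would use the model's own tokenizer to count prompt tokens exactly; the point here is simply that both sides of the budget count against the same window.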

External Models

External models are (API) integrations with external commercial foundational models. Seaplane has no control over how the data is stored or used after you send it to these models. We recommend you treat them as unsafe and do not share any confidential or proprietary data with these models.

OpenAI

Seaplane currently supports OpenAI GPT-3.5 through the OpenAI chat completions endpoint. To add GPT-3.5 to your Seaplane task, follow the steps below.

  • Make sure your task is of type inference: type='inference'.
  • Add GPT-3.5 to your task decorator as a resource model='GPT-3.5'.
  • Add model as a parameter to your function definition.
  • Call the model() function with the parameters object inside the task.
info

We can add support for any OpenAI model based on customer requests. Missing something? Let us know by emailing support@seaplane.io

The parameters object of GPT-3.5 has three required arguments.

  • model (required) - The internal OpenAI name for GPT-3.5, i.e., gpt-3.5-turbo.
  • messages (required) - A list of messages comprising the conversation so far, including the new prompt. For more information see the OpenAI documentation.
  • temperature (required) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. For more information see the OpenAI documentation.
GPT 3.5 Example
from seaplane import task

@task(id='my-task', type='inference', model='GPT-3.5')
def my_task(data, model):
    # construct params
    params = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "What is a Seaplane?"}],
        "temperature": 0.7,
    }

    # return inference result
    return model(params)
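If model(params) returns the raw chat completions response, the assistant's reply is nested a few levels deep. The sketch below assumes the standard OpenAI chat completions response shape; extract_reply is a hypothetical helper, and you should verify against Seaplane's actual return type before relying on this structure.

```python
# Minimal sketch: pulling the assistant's text out of a chat-completions
# style response dict. ASSUMPTION: model(params) returns the raw OpenAI
# response; verify Seaplane's actual return type.

def extract_reply(response: dict) -> str:
    # Chat completions nest the text under choices[0].message.content
    return response["choices"][0]["message"]["content"]


# Example response shaped like the OpenAI chat completions format
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "A seaplane is an aircraft that can land on water."}}
    ]
}
print(extract_reply(sample))
```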

Private Models

You can upload private models to the Seaplane model registry for use inside tasks. Private models are personal resources and are never shared between customers.

info

Private models are coming soon but are available today to select beta testers. Let us know if you want early access by emailing support@seaplane.io