Hello World
In this tutorial, you will build a simple demo application running on the Seaplane platform. While this implementation is complete overkill for the use case, it demonstrates in an easy-to-understand example how you can use Seaplane to build data science pipelines and experiments.
The application is available on `/hello`. It takes a user's name as input and responds with `Hello World <name>`.
As you will see in the implementation below, the app is broken up into two `@task` components, each adding one word to the string. The first task adds the word `World`, and the second task adds the word `Hello`. To learn more about tasks, have a look at our documentation.
Once deployed, the Seaplane platform sets up and scales all the required infrastructure, including:
- Two containerized workloads
- An API gateway to handle the API request
- Data streams connecting all individual components
To follow along, you need access to the Seaplane platform. You can sign up for an account here. Make sure you have the `seaplane` package installed on your machine by running `pip3 install seaplane` in your terminal. You can learn more about installing the Seaplane package in our documentation.
If you prefer to skip to the end, you can clone the finished project from our demos repository on GitHub.
Creating the project
As a first step, create the project by running `seaplane init hello-world`. This creates the basic project structure and files. Delete the contents of `main.py` for now; you will replace it with the hello world code during this tutorial.
```
hello-world/
├── hello-world/
│   └── main.py
├── .env
└── pyproject.toml
```
World Task
The first task takes the user input and prepends the word `World`, leaving us with `World <name>`. In this tutorial, you add all tasks and the app in one file, `main.py`. For larger projects, we recommend creating a new file for every individual task and importing it in `main.py`.
```python
from seaplane.apps import task
import json

@task()
def world(context):
    json_body = json.loads(context.body)
    message = "World " + json_body['name']
    context.emit(message)
```
Let's unpack what we did here. We created a new `@task` of type compute. Once deployed, Seaplane turns this task into a self-contained, auto-scaling containerized workload.
The `context` object in the `world()` function contains the JSON object from the API call in its body; we will see later how this works once we wire everything together in the `@app` component.
We load the JSON object into `json_body`, extract the name, and prepend it with the word `World`. Finally, we emit the concatenated message, which will be accessed in the next step of the pipeline.
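In isolation, the body transformation amounts to the following (the sample payload here is our own, chosen just to illustrate the step):

```python
import json

# Sample request body, exactly as the task would receive it (raw bytes).
body = b'{"name": "Ada"}'

json_body = json.loads(body)  # json.loads accepts bytes directly
message = "World " + json_body['name']
print(message)  # World Ada
```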
Want to learn more about tasks? Take a look at the documentation here.
Hello Task
The second task in our pipeline prepends the word `Hello` to the string, leaving us with `Hello World <name>`.
```python
from seaplane.apps import task

@task()
def hello(context):
    world_output = context.body.decode()
    message = "Hello " + world_output
    context.emit(message)
```
Let's unpack again what we did here. We created a new compute task. The `context` object in the `hello()` function contains the output of the `world` task. We will see in the next section how this all ties together. Inside the function, we retrieve the current string from the context object and prepend it with `Hello`. Finally, we emit the entire string. This output will ultimately go back to the user.
The Application
With both our tasks created, we can tie it all together in the `@app` component. The application defines the flow of the data, how it's accessible, and what should happen to the output.
Want to learn more about apps? Have a look at our documentation here.
```python
from seaplane.apps import app, start

@app()
def hello_world(body):
    string = world(body)
    return hello(string)

start()
```
The code above creates our application and makes it available at the `/hello` endpoint, which translates to `<tenant-id>.on.cplane.cloud/hello-world/<version>/hello`.
It wires the tasks together: the `world` task receives the user input from the API, performs its computation, and sends its result to the `hello` task. The output of the `hello` task is returned to the user.
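To convince yourself of this data flow, you can simulate the pipeline locally. The `FakeContext` class below is our own stand-in for the SDK's context object, not part of the Seaplane SDK; it just holds a body and records whatever the tasks emit:

```python
import json

class FakeContext:
    """Hypothetical stand-in for the SDK context: holds a body, records emits."""
    def __init__(self, body: bytes):
        self.body = body
        self.emitted = []

    def emit(self, message):
        self.emitted.append(message)

def world(context):
    json_body = json.loads(context.body)
    context.emit("World " + json_body['name'])

def hello(context):
    context.emit("Hello " + context.body.decode())

# Wire the tasks together by hand, the way @app does on the platform.
ctx1 = FakeContext(b'{"name": "Ada"}')
world(ctx1)

ctx2 = FakeContext(ctx1.emitted[0].encode())  # world's output feeds hello
hello(ctx2)
print(ctx2.emitted[0])  # Hello World Ada
```

On the real platform, the data stream between the two tasks replaces the manual hand-off shown here.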
Deploying
To deploy your application, open the `.env` file in your root project directory. Replace `sp-your-api-key` with your Seaplane API key, which you can get from the Flightdeck.
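After editing, the `.env` file looks roughly like this (the variable name below is an assumption; check the file that `seaplane init` generated for the exact key):

```shell
# .env (sketch; the key name may differ in your generated file)
SEAPLANE_API_KEY=sp-your-actual-api-key
```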
Open a terminal and navigate to the root of your Seaplane project. Run `poetry install` to install all required packages, including Seaplane.
Run `seaplane deploy` to start the deployment of your application. Once deployed, we can query our API using `cURL` or `plane`.
- cURL
- plane
```shell
TOKEN=$(curl -X POST https://flightdeck.cplane.cloud/identity/token --header "Authorization: Bearer <API-KEY>")
curl -X POST -H 'Content-Type: application/octet-stream' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"name": "<YOUR-NAME>"}' https://carrier.cplane.cloud/v1/endpoints/hello-world/request
```
This returns the following JSON object:

```json
{"request_id":"<YOUR-BATCH-ID>"}
```
Configure `plane` using this guide if you have not already.

```shell
plane endpoints request hello-world -d '{"name": "<YOUR-NAME>"}'
```
This returns the following output:

```
DynamicSchema({'request_id': '<YOUR-BATCH-ID>'})
```
Replace `<API-KEY>` with your Seaplane API key. Replace `<YOUR-NAME>` with any string, but for the sake of the demo, use your name ;).
This creates a new batch processing request on Seaplane. By default, POST requests are treated as batches. You can get the result of the processed batch, i.e. the output, by running a GET request with the batch ID as follows.
- cURL
- plane
```shell
curl -X GET -H "Authorization: Bearer $TOKEN" https://carrier.cplane.cloud/v1/endpoints/hello-world/response/<YOUR-BATCH-ID>.1.1
```

This returns the following output:

```
Hello World <YOUR-NAME>
```
```shell
plane endpoints response hello-world <YOUR-BATCH-ID>.1.1
```

This returns the following output:

```
b'Hello World <YOUR-NAME>'
```
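If you plan to script these calls, both URLs follow a simple pattern you can build in Python. The helper functions below are our own; the paths and the `.1.1` response suffix mirror the cURL examples above:

```python
CARRIER = "https://carrier.cplane.cloud/v1/endpoints"

def request_url(app_name: str) -> str:
    """POST target that submits a new batch, per the examples above."""
    return f"{CARRIER}/{app_name}/request"

def response_url(app_name: str, request_id: str) -> str:
    """GET target that fetches the output; '.1.1' mirrors the cURL example."""
    return f"{CARRIER}/{app_name}/response/{request_id}.1.1"

print(request_url("hello-world"))
print(response_url("hello-world", "abc123"))
```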
And that is all there is to it! You now have your first pipeline deployed on Seaplane. As mentioned before, this kind of setup is complete overkill for a hello world application. But you can imagine how powerful it becomes if you replace the tasks with a pre-processing task, a model inference task, and a post-processing task. You can start building data-driven APIs in minutes without ever having to think about the underlying infrastructure.
Sign up for an account here if you want to try it yourself!