# runpod-worker-helloworld
Getting started with a serverless endpoint on RunPod by creating a custom worker
This project provides a starting point for creating your own worker (i.e. a Docker image) that powers a custom serverless endpoint on RunPod:

1. Create a virtual environment: `python -m venv venv`
2. Activate it:
   * Windows: `.\venv\Scripts\activate`
   * Mac/Linux: `source ./venv/bin/activate`
3. Install the dependencies: `pip install -r requirements.txt`
4. Execute `python src/rp_handler.py`, which will then output something like this:
```
--- Starting Serverless Worker | Version 1.3.7 ---
INFO | Using test_input.json as job input.
DEBUG | Retrieved local job: {'input': {'greeting': 'world'}, 'id': 'local_test'}
INFO | local_test | Started
DEBUG | local_test | Handler output: Hello world
DEBUG | local_test | run_job return: {'output': 'Hello world'}
INFO | Job local_test completed successfully.
INFO | Job result: {'output': 'Hello world'}
INFO | Local testing complete, exiting.
```
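For reference, a handler that produces the output above can be as small as this sketch. It follows RunPod's handler pattern (a function that receives the job and returns the output); the exact contents of the repo's `src/rp_handler.py` are an assumption, not a copy:

```python
# Minimal sketch of a RunPod-style handler. The "greeting" input key matches
# the example output above; everything else is an assumption about the repo.
def handler(job):
    """Read the job input and return the greeting result."""
    job_input = job["input"]
    greeting = job_input.get("greeting", "world")
    return f"Hello {greeting}"


# In the real worker the handler is registered with the runpod SDK, e.g.:
#   import runpod
#   runpod.serverless.start({"handler": handler})
if __name__ == "__main__":
    print(handler({"input": {"greeting": "world"}}))  # prints "Hello world"
```

The SDK takes care of fetching jobs, calling the handler, and returning the result, which is why the handler itself stays this small.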
Run all tests with `python -m unittest discover`, which will then output something like this:
```
...
----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK
```
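If you extend the worker with your own logic, a test might look like this minimal, self-contained sketch (the handler logic is repeated inline and is an assumption, not a copy of the repo's tests):

```python
import unittest


def handler(job):
    # Same hello-world logic as the handler sketch; assumed, not copied from the repo.
    return f"Hello {job['input'].get('greeting', 'world')}"


class TestHandler(unittest.TestCase):
    def test_greeting(self):
        self.assertEqual(handler({"input": {"greeting": "world"}}), "Hello world")

    def test_default(self):
        # A missing "greeting" key falls back to "world".
        self.assertEqual(handler({"input": {}}), "Hello world")
```

Placed in the tests directory, this is picked up automatically by `python -m unittest discover`.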
We included a docker-compose.yml which makes it possible to run the Docker image locally: `docker-compose up`

This only works on Linux-based systems, as we only build the image for Linux, which is what RunPod requires. On Mac or Windows, you have to follow the steps to build the image manually. Make sure to build the image with the `dev` tag, as this is the tag used in the docker-compose.yml.
To use your Docker image on RunPod, it must be available in a Docker image registry. We are using Docker Hub for this, but feel free to choose whatever registry you want.
The repo contains two workflows that publish the image to Docker Hub using GitHub Actions: dev.yml and release.yml.
This process is highly opinionated and you should adapt it to what you are used to.
If you want to use these workflows, you have to add these secrets to your repository:
| Configuration Variable | Description | Example Value |
|---|---|---|
| `DOCKERHUB_USERNAME` | Your Docker Hub username. | `your-username` |
| `DOCKERHUB_TOKEN` | Your Docker Hub access token for authentication. | `your-token` |
| `DOCKERHUB_REPO` | The repository on Docker Hub where the image will be pushed. | `timpietruskyblibla` |
| `DOCKERHUB_IMG` | The name of the image to be pushed to Docker Hub. | `runpod-worker-helloworld` |
When you are developing your image and want to provide bug fixes or features to your community, you can put them into the dev branch. This will trigger the dev workflow, which runs these steps:
* Builds the image and pushes it to Docker Hub with the `dev` tag

When development is done and you are ready for a new release of your image, you can put all your changes into the main branch. This will trigger the release workflow, which runs these steps:
* Creates a new release, updating README.md and CHANGELOG.md
* Builds the image and pushes it to Docker Hub with the `latest` tag

To build the image manually, run `docker build -t <dockerhub_username>/<repository_name>:<tag> --platform linux/amd64 .`, in this case: `docker build -t timpietruskyblibla/runpod-worker-helloworld:latest --platform linux/amd64 .`
It's important to specify `--platform linux/amd64`, otherwise you will get the error `exec python failed: Exec format error` when you run your worker on RunPod, depending on the OS you are using locally:

* Windows would build with `docker build -t <dockerhub_username>/<repository_name>:<tag> --platform windows/amd64 .`
* An ARM-based Mac would build with `docker build -t <dockerhub_username>/<repository_name>:<tag> --platform linux/arm64 .`

Then push the image to Docker Hub:

* Check that your image was created with `docker images`, which provides a list of all images that exist on your computer
* Log in to Docker Hub with `docker login`
* Push the image with `docker push <dockerhub_username>/<repository_name>:<tag>`, in this case: `docker push timpietruskyblibla/runpod-worker-helloworld:latest`

Create a template for your worker:

* In the RunPod console, click on `New Template`
* Template Name: `runpod-worker-helloworld` (it can be anything you want)
* Container Image: `<dockerhub_username>/<repository_name>:tag`, in this case: `timpietruskyblibla/runpod-worker-helloworld:latest`
* Click on `Save Template`

Create the endpoint:

* Go to `Serverless > Endpoints` and click on `New Endpoint`
* Endpoint Name: `helloworld`
* Select Template: `runpod-worker-helloworld` (or whatever name you gave your template)
* Active Workers: `0` (keep this low, as we just want to test the Hello World)
* Max Workers: `3` (recommended default is 3)
* Idle Timeout: `5` (leave the default)
* FlashBoot: `enabled` (doesn't cost more, but provides faster boot for our worker, which is good)
* GPUs/Worker: `1` (keep this low as we are just testing, we don't need multiple GPUs for a hello world)
* Click `deploy`

The endpoint provides these operations:

* `runsync`: Sync request to start a job, where you can wait for the job result
* `run`: Async request to start a job, where you receive an `id` immediately
* `status`: Sync request to find out the status of a job, given its `id`
* `cancel`: Sync request to cancel a job, given its `id`
* `health`: Sync request to check the health of the endpoint to see if everything is fine

To use the API:

* In the RunPod console, click on `API Keys` and then on the `API Key` button to create a key
* In the examples below, replace `<api_key>` with your key
* Replace `<endpoint_id>` with the ID of the endpoint; you find that when you click on your endpoint, as it's part of the URLs shown at the bottom of the first box

Check the health of the endpoint:

```sh
curl -H "Authorization: Bearer <api_key>" https://api.runpod.ai/v2/<endpoint_id>/health
```
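All five operations share the same base URL scheme (`https://api.runpod.ai/v2/<endpoint_id>/<operation>`, with the job `id` appended for `status` and `cancel`). A small Python helper, with names of my own choosing, makes the pattern explicit:

```python
from typing import Optional

BASE = "https://api.runpod.ai/v2"


def endpoint_url(endpoint_id: str, operation: str, job_id: Optional[str] = None) -> str:
    """Build the URL for one of the five endpoint operations.

    operation is one of "runsync", "run", "status", "cancel", "health";
    status and cancel additionally require the job id.
    """
    if operation not in {"runsync", "run", "status", "cancel", "health"}:
        raise ValueError(f"unknown operation: {operation}")
    url = f"{BASE}/{endpoint_id}/{operation}"
    if operation in {"status", "cancel"}:
        if job_id is None:
            raise ValueError(f"{operation} requires a job id")
        url += f"/{job_id}"
    return url
```

For example, `endpoint_url("abc123", "health")` yields `https://api.runpod.ai/v2/abc123/health`, matching the curl examples in this README.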
This will return an `id` that you can then use with the `status` endpoint to find out whether your job was completed.
```sh
# Returns a JSON with the id of the job (<job_id>), use that in the status endpoint
curl -X POST -H "Authorization: Bearer <api_key>" -H "Content-Type: application/json" -d '{"input": {"greeting": "world"}}' https://api.runpod.ai/v2/<endpoint_id>/run

# {"id":"<job_id>","status":"IN_QUEUE"}
```
The `runsync` endpoint will wait until the job is done and provide the output of our API as the response.

If you send a sync request to an endpoint that has no free workers to pick up the job, you will wait for some time: either a worker becomes free and picks up your job, or the job is added to the endpoint's queue, in which case you receive an `id` and have to query the `/status` endpoint yourself to find out when the job is completed.
```sh
curl -X POST -H "Authorization: Bearer <api_key>" -H "Content-Type: application/json" -d '{"input": {"greeting": "world"}}' https://api.runpod.ai/v2/<endpoint_id>/runsync

# {"delayTime":2218,"executionTime":138,"id":"<job_id>","output":"Hello world","status":"COMPLETED"}
```
Check the status of a job, given its `id`:

```sh
curl -H "Authorization: Bearer <api_key>" https://api.runpod.ai/v2/<endpoint_id>/status/<job_id>

# {"delayTime":3289,"executionTime":1054,"id":"<job_id>","output":"Hello world","status":"COMPLETED"}
```
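The run-then-poll flow can be sketched in Python using only the standard library. The request function is injected so the sketch can be exercised without network access; `api_key` and `endpoint_id` remain placeholders you must supply, and the status values match the responses shown above:

```python
import json
import time
import urllib.request


def call_runpod(url, api_key, payload=None):
    """POST the JSON payload (or GET when payload is None) and return the parsed JSON."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url,
        data=data,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def run_and_wait(endpoint_id, api_key, job_input, fetch=call_runpod, poll_seconds=2):
    """Start a job via /run, then poll /status until it leaves the queue."""
    base = f"https://api.runpod.ai/v2/{endpoint_id}"
    job = fetch(f"{base}/run", api_key, {"input": job_input})
    while job["status"] in ("IN_QUEUE", "IN_PROGRESS"):
        time.sleep(poll_seconds)
        job = fetch(f"{base}/status/{job['id']}", api_key)
    return job
```

Injecting `fetch` also makes it easy to swap in a library like `requests`, add retries, or fake the API in tests.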