The Vast.ai REST API gives you programmatic control over GPU instances — useful for automation, CI/CD pipelines, or building your own tooling on top of Vast.
This guide walks through the complete instance lifecycle: authenticate, search for a GPU, rent it, wait for it to boot, connect to it, and clean up. By the end you’ll understand the core API calls needed to manage instances without touching the web console.
Prerequisites
- A Vast.ai account with credit (~$0.01–0.05, depending on test instance run time)
- curl installed
1. Get Your API Key
Generate an API key from the Keys page by clicking +New. Copy the key — you’ll need it for your API calls, and you’ll only see it once.
Export it as an environment variable:
export VAST_API_KEY="your-api-key-here"
2. Verify Authentication
Confirm your key works by listing your current instances. If you have none, this returns an empty list.
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
"https://console.vast.ai/api/v0/instances/"
{
"instances_found": 0,
"instances": []
}
If you get a 401 or 403, double-check your API key. If you already have instances, you’ll see them listed here.
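If you want to see the HTTP status code explicitly while debugging, curl can print it for you (a quick check, not a separate API call):
# Print only the HTTP status code; 200 means the key works
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $VAST_API_KEY" \
  "https://console.vast.ai/api/v0/instances/"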
3. Search for GPUs
Find available machines using the bundles endpoint. This query returns the top 5 on-demand RTX 4090s sorted by deep learning performance per dollar:
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"verified": {"eq": true},
"rentable": {"eq": true},
"gpu_name": {"eq": "RTX 4090"},
"num_gpus": {"eq": 1},
"direct_port_count": {"gte": 1},
"order": [["dlperf_per_dphtotal", "desc"]],
"type": "on-demand",
"limit": 5
}' \
"https://console.vast.ai/api/v0/bundles/"
Each parameter in the query above controls a different filter:
| Parameter | Value | Meaning |
|---|---|---|
| verified | {"eq": true} | Only machines verified by Vast.ai (identity-checked hosts) |
| rentable | {"eq": true} | Only machines currently available to rent |
| gpu_name | {"eq": "RTX 4090"} | Filter to a specific GPU model |
| num_gpus | {"eq": 1} | Exactly 1 GPU per instance |
| direct_port_count | {"gte": 1} | At least 1 directly accessible port (needed for SSH) |
| order | [["dlperf_per_dphtotal", "desc"]] | Sort by deep learning performance per dollar, best value first |
| type | "on-demand" | On-demand pricing (vs. interruptible spot/bid) |
| limit | 5 | Return at most 5 results |
The response contains an offers array. Note the id of the offer you want — you’ll use it in the next step. If no offers are returned, try relaxing your filters (e.g. a different GPU model or removing direct_port_count).
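If you have jq installed, you can capture the best offer's id directly in a shell variable. This is a convenience sketch that repeats the search above with limit set to 1:
# Store the id of the top-ranked offer for use in step 4
OFFER_ID=$(curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "verified": {"eq": true},
    "rentable": {"eq": true},
    "gpu_name": {"eq": "RTX 4090"},
    "num_gpus": {"eq": 1},
    "direct_port_count": {"gte": 1},
    "order": [["dlperf_per_dphtotal", "desc"]],
    "type": "on-demand",
    "limit": 1
  }' \
  "https://console.vast.ai/api/v0/bundles/" | jq '.offers[0].id')
echo "$OFFER_ID"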
See the Search Offers reference for the full list of filter parameters and operators.
4. Create an Instance
Rent the machine by sending a PUT request with your Docker image and disk size. Replace OFFER_ID with the id from step 3; disk is the size of the instance's disk in GB.
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
-H "Content-Type: application/json" \
-X PUT \
-d '{
"image": "pytorch/pytorch:2.4.0-cuda12.4-cudnn9-runtime",
"disk": 20,
"onstart": "echo hello && nvidia-smi"
}' \
"https://console.vast.ai/api/v0/asks/OFFER_ID/"
{
"success": true,
"new_contract": 12345678,
"instance_api_key": "d15a..."
}
Save the new_contract value — this is your instance ID. The instance_api_key is a restricted key injected into the container as CONTAINER_API_KEY — it can only start, stop, or destroy that specific instance.
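If you are scripting the workflow, the same call can capture the instance ID in one step. A sketch, assuming jq and the OFFER_ID variable from the step 3 sketch:
# Create the instance and keep its ID for the remaining steps
INSTANCE_ID=$(curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  -H "Content-Type: application/json" \
  -X PUT \
  -d '{
    "image": "pytorch/pytorch:2.4.0-cuda12.4-cudnn9-runtime",
    "disk": 20,
    "onstart": "echo hello && nvidia-smi"
  }' \
  "https://console.vast.ai/api/v0/asks/$OFFER_ID/" | jq '.new_contract')
echo "$INSTANCE_ID"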
5. Wait Until Ready
The instance needs time to pull the Docker image and boot. Poll the status endpoint until actual_status is "running". Replace INSTANCE_ID with the new_contract value from step 4.
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
"https://console.vast.ai/api/v0/instances/INSTANCE_ID/"
Example response:
{
"instances": {
"actual_status": "loading",
"ssh_host": "...",
"ssh_port": 12345
}
}
The actual_status field progresses through these states:
| actual_status | Meaning |
|---|---|
| null | Instance is being provisioned |
| "loading" | Docker image is downloading |
| "running" | Ready to use |
Poll every 10 seconds. Boot time is typically 1–5 minutes depending on the Docker image size. Instead of polling, you can also have the onstart script send a callback when the instance is ready.
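A minimal polling loop in bash might look like this (assuming jq and the INSTANCE_ID variable from step 4):
# Check the status every 10 seconds until the instance is running
while true; do
  STATUS=$(curl -s -H "Authorization: Bearer $VAST_API_KEY" \
    "https://console.vast.ai/api/v0/instances/$INSTANCE_ID/" \
    | jq -r '.instances.actual_status')
  echo "actual_status: $STATUS"
  [ "$STATUS" = "running" ] && break
  sleep 10
done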
Once actual_status is "running", you’re ready to connect.
6. Connect via SSH
Use the ssh_host and ssh_port from the status response to connect directly to your new instance:
ssh root@SSH_HOST -p SSH_PORT
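If you scripted the previous steps, you can pull the connection details from the same status endpoint instead of copying them by hand (a sketch assuming jq and INSTANCE_ID):
# Read the SSH endpoint from the status response and connect
INFO=$(curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  "https://console.vast.ai/api/v0/instances/$INSTANCE_ID/")
SSH_HOST=$(echo "$INFO" | jq -r '.instances.ssh_host')
SSH_PORT=$(echo "$INFO" | jq -r '.instances.ssh_port')
ssh root@"$SSH_HOST" -p "$SSH_PORT"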
7. Clean Up
When you’re done, destroy the instance to stop all billing.
Alternatively, if you only want to pause the instance, you can stop it instead: stopping halts compute billing, but disk storage charges continue.
Destroy (removes everything):
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
-X DELETE \
"https://console.vast.ai/api/v0/instances/INSTANCE_ID/"
Stop (pauses compute, disk charges continue):
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
-H "Content-Type: application/json" \
-X PUT \
-d '{"state": "stopped"}' \
"https://console.vast.ai/api/v0/instances/INSTANCE_ID/"
Both return {"success": true}.
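To double-check that teardown worked, you can list your instances again as in step 2; after a destroy, the destroyed instance should no longer appear (the jq filter is just a convenience):
# Confirm the destroyed instance is gone
curl -s -H "Authorization: Bearer $VAST_API_KEY" \
  "https://console.vast.ai/api/v0/instances/" | jq '.instances_found'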
Next Steps
You’ve now completed the full instance lifecycle through the API: authentication, search, creation, polling, and teardown. From here:
- SSH setup — See the SSH guide for key configuration and advanced connection options.
- Use templates — Avoid repeating image and config parameters on every create call. The Templates API guide covers creating, sharing, and launching from templates.