
Object Storage


Object Storage is an S3-compatible storage service for files of any size — user uploads, application backups, media assets, build artifacts, dataset exports.

It speaks the standard S3 API, so any existing tool or SDK that works with Amazon S3 also works with Runsite Object Storage; only the endpoint URL needs to change.

Concepts

Bucket: A container for objects, identified by a globally unique name (3–63 characters, lowercase).

Object: A single file inside a bucket, identified by a key. Objects can be of any size.

Access key: A pair of access_key_id and secret_access_key used to authenticate API requests, scoped to a set of permissions.

Presigned URL: A temporary, signed URL that grants access to an object without exposing credentials.
Create a bucket

POST /api/storage/buckets
Content-Type: application/json

{
  "name": "user-uploads",
  "public": false
}

A bucket name must:

  • Be between 3 and 63 characters.
  • Contain only lowercase letters, digits, and hyphens.
  • Start and end with a letter or digit.
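
The rules above can be checked with a small validation helper. This is an illustrative sketch, not an official client function:

```python
import re

# Encodes the bucket naming rules: 3-63 characters, lowercase letters,
# digits, and hyphens, starting and ending with a letter or digit.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` satisfies the bucket naming rules."""
    return BUCKET_NAME_RE.fullmatch(name) is not None
```

For example, `is_valid_bucket_name("user-uploads")` passes, while names that are too short, start with a hyphen, or contain uppercase letters are rejected.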

A public bucket allows anonymous read access to its objects via direct HTTPS URLs. A private bucket requires an access key or a presigned URL to read.
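
For a public bucket, the direct HTTPS URL can be built from the bucket name and key. The path-style layout below (`https://s3.runsite.app/<bucket>/<key>`) is an assumption consistent with the path-style SDK configuration shown later; verify the exact public URL format for your deployment:

```python
from urllib.parse import quote

# Assumed path-style layout: https://s3.runsite.app/<bucket>/<key>
ENDPOINT = "https://s3.runsite.app"

def public_object_url(bucket: str, key: str) -> str:
    """Build a direct HTTPS URL for an object in a public bucket."""
    # quote() percent-encodes the key but leaves '/' separators intact
    return f"{ENDPOINT}/{bucket}/{quote(key)}"
```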

GET /api/storage/buckets
PATCH /api/storage/buckets/{bucket_id}

Toggle public access, configure CORS, or update default cache behavior.

DELETE /api/storage/buckets/{bucket_id}

A bucket must be empty before it can be deleted.

Access keys

Access keys authenticate S3 API requests. Each key has a scoped permission set so you can issue least-privilege credentials per app, per CI job, or per user.

POST /api/storage/keys
Content-Type: application/json

{
  "name": "ci-uploader",
  "buckets": ["user-uploads"],
  "permissions": {
    "read": true,
    "write": true,
    "delete": false,
    "admin": false
  }
}

The secret_access_key is returned once at creation time — store it securely.
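
A minimal sketch of creating a key from Python. The API base URL and the bearer-token header are assumptions, not confirmed details of the dashboard API:

```python
import json
import urllib.request

API = "https://api.runsite.app"  # assumed dashboard API base URL

def build_key_payload(name, buckets, *, read=False, write=False,
                      delete=False, admin=False):
    """Assemble the JSON body for POST /api/storage/keys."""
    return {
        "name": name,
        "buckets": list(buckets),
        "permissions": {"read": read, "write": write,
                        "delete": delete, "admin": admin},
    }

def create_key(token, payload):
    """Send the request; the response includes the secret_access_key
    exactly once, so persist it immediately."""
    req = urllib.request.Request(
        f"{API}/api/storage/keys",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 # Authorization scheme is an assumption
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```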

Permission   Allows
read         List buckets and objects, download objects
write        Upload and overwrite objects
delete       Delete objects
admin        Manage buckets and bucket policies

GET /api/storage/keys
DELETE /api/storage/keys/{key_id}

Deleting a key revokes it immediately.

Use the S3 API

Configure your S3 client with the Runsite endpoint. Use the credentials from your access key and the region returned by the API.

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'https://s3.runsite.app',
  region: 'eu-central',
  credentials: {
    accessKeyId: process.env.RUNSITE_ACCESS_KEY_ID,
    secretAccessKey: process.env.RUNSITE_SECRET_ACCESS_KEY,
  },
  // Path-style addressing: https://s3.runsite.app/<bucket>/<key>
  forcePathStyle: true,
});

// fileBuffer: a Buffer (or stream) holding the file contents
await s3.send(
  new PutObjectCommand({
    Bucket: 'user-uploads',
    Key: 'avatars/u123.png',
    Body: fileBuffer,
    ContentType: 'image/png',
  }),
);
import os

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.runsite.app',
    region_name='eu-central',
    aws_access_key_id=os.environ['RUNSITE_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['RUNSITE_SECRET_ACCESS_KEY'],
)

s3.upload_file('local.png', 'user-uploads', 'avatars/u123.png')

Presigned URLs

Presigned URLs let users upload to or download from a bucket directly, without your backend proxying the bytes.

POST /api/storage/buckets/{bucket_id}/presign
Content-Type: application/json

{
  "key": "avatars/u123.png",
  "operation": "put",
  "expires_in_seconds": 600,
  "content_type": "image/png"
}

The response includes a temporary URL that the client can use to upload directly. URLs expire after expires_in_seconds (default 600, max 86400).
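
Because the service is S3-compatible, presigned URLs can also be generated client-side with standard SigV4 query signing, without calling the presign endpoint at all. Below is a condensed, standard-library-only sketch of a presigned GET; the region and endpoint mirror the SDK examples above, and a production client should prefer its SDK's built-in presigner:

```python
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def presign_get(bucket, key, access_key, secret_key,
                region="eu-central", endpoint="s3.runsite.app",
                expires=600):
    """Build a SigV4 query-signed GET URL for an object (path-style)."""
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    path = f"/{bucket}/{quote(key)}"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted, fully percent-encoded key=value pairs
    query = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                     for k, v in sorted(params.items()))
    canonical = (f"GET\n{path}\n{query}\nhost:{endpoint}\n\n"
                 f"host\nUNSIGNED-PAYLOAD")
    to_sign = (f"AWS4-HMAC-SHA256\n{amz_date}\n{scope}\n"
               f"{hashlib.sha256(canonical.encode()).hexdigest()}")
    # Derive the signing key: date -> region -> service -> terminator
    sig_key = _hmac(_hmac(_hmac(_hmac(
        ("AWS4" + secret_key).encode(), datestamp), region), "s3"),
        "aws4_request")
    signature = hmac.new(sig_key, to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{endpoint}{path}?{query}&X-Amz-Signature={signature}"
```

The resulting URL can be handed to any HTTP client; no Runsite credentials ever leave your backend.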

Files larger than 5 MB should use multipart uploads, which are more reliable for large transfers and allow parts to be uploaded in parallel. The S3 SDKs handle this automatically: boto3's upload_file and the Upload helper from @aws-sdk/lib-storage switch to multipart above a configurable size threshold.
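
To see how a multipart upload splits a file, here is an illustrative part-range calculator; the 5 MB figure mirrors the threshold above, and real SDKs compute these ranges internally:

```python
MB = 1024 * 1024

def part_ranges(size: int, part_size: int = 5 * MB):
    """Split `size` bytes into (offset, length) parts of at most
    `part_size` bytes each; the final part may be shorter."""
    return [(off, min(part_size, size - off))
            for off in range(0, size, part_size)]
```

A 12 MB file, for instance, yields two 5 MB parts and one 2 MB part, each of which can be uploaded independently and retried on its own.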

CORS

If you upload from a browser using presigned URLs, configure CORS on the bucket so the browser allows the cross-origin request:

PATCH /api/storage/buckets/{bucket_id}/cors
Content-Type: application/json

{
  "rules": [
    {
      "allowed_origins": ["https://example.com"],
      "allowed_methods": ["GET", "PUT", "POST"],
      "allowed_headers": ["*"],
      "max_age_seconds": 3600
    }
  ]
}

Quotas

Each workspace has a default storage quota and a per-month transfer allowance. Both are visible in the dashboard and adjustable on request.