# Object Storage
Object Storage is an S3-compatible storage service for files of any size — user uploads, application backups, media assets, build artifacts, dataset exports.
It speaks the standard S3 API, so any existing tool or SDK that works with Amazon S3 works with Runsite Object Storage by changing only the endpoint URL.
## Concepts

| Term | Description |
|---|---|
| Bucket | A container for objects, identified by a globally unique name (3–63 characters, lowercase). |
| Object | A single file, identified by a key inside a bucket. Objects can be of any size. |
| Access key | A pair of `access_key_id` and `secret_access_key` used to authenticate API requests. Scoped to a set of permissions. |
| Presigned URL | A temporary, signed URL that grants access to an object without exposing credentials. |
## Buckets

### Create a bucket

```http
POST /api/storage/buckets
Content-Type: application/json

{
  "name": "user-uploads",
  "public": false
}
```

A bucket name must:
- Be between 3 and 63 characters.
- Contain only lowercase letters, digits, and hyphens.
- Start and end with a letter or digit.
A public bucket allows anonymous read access to its objects via direct HTTPS URLs. A private bucket requires an access key or a presigned URL to read.
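The naming rules above can be checked client-side before calling the API, avoiding a round trip on invalid input. A minimal sketch; the regex mirrors the three rules, and the helper is illustrative rather than part of any SDK:

```python
import re

# 3-63 chars total: one leading letter/digit, 1-61 middle chars
# (lowercase letters, digits, hyphens), one trailing letter/digit.
_BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` satisfies the bucket naming rules."""
    return bool(_BUCKET_NAME.fullmatch(name))
```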
### List buckets

```http
GET /api/storage/buckets
```

### Update a bucket

```http
PATCH /api/storage/buckets/{bucket_id}
```

Toggle public access, configure CORS, or update default cache behavior.
### Delete a bucket

```http
DELETE /api/storage/buckets/{bucket_id}
```

A bucket must be empty before it can be deleted.
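Because deletion requires an empty bucket, cleanup scripts typically delete all objects first. A sketch that works against any S3-compatible client (for example a boto3 client pointed at the Runsite endpoint); the `empty_bucket` helper is illustrative, not part of an SDK:

```python
def empty_bucket(s3, bucket: str) -> int:
    """Delete every object in `bucket` so the bucket itself can be
    removed. `s3` is any client exposing the standard S3 calls
    list_objects_v2 and delete_objects. Returns the delete count."""
    deleted = 0
    while True:
        page = s3.list_objects_v2(Bucket=bucket)
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        if not keys:
            return deleted
        # delete_objects accepts up to 1000 keys per call, which matches
        # the page size list_objects_v2 returns.
        s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})
        deleted += len(keys)
```

After `empty_bucket` returns, the `DELETE /api/storage/buckets/{bucket_id}` call above will succeed.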
## Access keys

Access keys authenticate S3 API requests. Each key has a scoped permission set, so you can issue least-privilege credentials per app, per CI job, or per user.
### Create a key

```http
POST /api/storage/keys
Content-Type: application/json

{
  "name": "ci-uploader",
  "buckets": ["user-uploads"],
  "permissions": {
    "read": true,
    "write": true,
    "delete": false,
    "admin": false
  }
}
```

The `secret_access_key` is returned once, at creation time — store it securely.
| Permission | Allows |
|---|---|
| `read` | List buckets and objects, download objects |
| `write` | Upload and overwrite objects |
| `delete` | Delete objects |
| `admin` | Manage buckets and bucket policies |
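When scripting key creation, a small builder that defaults every permission to off makes the least-privilege intent explicit. An illustrative helper (not part of any SDK) that produces the request body shown above:

```python
def key_request(name: str, buckets, *, read=False, write=False,
                delete=False, admin=False) -> dict:
    """Build the JSON body for POST /api/storage/keys. Every
    permission defaults to False so callers must opt in."""
    return {
        "name": name,
        "buckets": list(buckets),
        "permissions": {
            "read": read,
            "write": write,
            "delete": delete,
            "admin": admin,
        },
    }
```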
### List, rotate, and revoke

```http
GET /api/storage/keys
DELETE /api/storage/keys/{key_id}
```

Deleting a key revokes it immediately.
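Because revocation is immediate, rotate by creating the replacement key first and revoking the old one only after the new secret has reached its consumer. A sketch of that ordering with the API calls injected as callables (all names are illustrative):

```python
def rotate_key(create_key, deploy_secret, delete_key,
               old_key_id: str, new_key_spec: dict) -> str:
    """Zero-downtime rotation: create, deploy, then revoke.
    `create_key(spec)` returns the new key dict (including the
    one-time secret), `deploy_secret` pushes credentials to the
    consumer, and `delete_key` revokes the old key last."""
    new = create_key(new_key_spec)
    deploy_secret(new["access_key_id"], new["secret_access_key"])
    delete_key(old_key_id)  # safe now: the consumer holds the new key
    return new["id"]
```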
## S3-compatible API

Configure your S3 client with the Runsite endpoint. Use the credentials from your access key and the region returned by the API.
### Node.js (@aws-sdk/client-s3)

```js
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'https://s3.runsite.app',
  region: 'eu-central',
  credentials: {
    accessKeyId: process.env.RUNSITE_ACCESS_KEY_ID,
    secretAccessKey: process.env.RUNSITE_SECRET_ACCESS_KEY,
  },
  forcePathStyle: true,
});

await s3.send(
  new PutObjectCommand({
    Bucket: 'user-uploads',
    Key: 'avatars/u123.png',
    Body: fileBuffer,
    ContentType: 'image/png',
  }),
);
```

### Python (boto3)
```python
import os

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.runsite.app',
    region_name='eu-central',
    aws_access_key_id=os.environ['RUNSITE_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['RUNSITE_SECRET_ACCESS_KEY'],
)

s3.upload_file('local.png', 'user-uploads', 'avatars/u123.png')
```

## Presigned URLs
Presigned URLs let users upload to or download from a bucket directly, without your backend proxying the bytes.

```http
POST /api/storage/buckets/{bucket_id}/presign
Content-Type: application/json

{
  "key": "avatars/u123.png",
  "operation": "put",
  "expires_in_seconds": 600,
  "content_type": "image/png"
}
```

The response includes a temporary URL that the client can use to upload directly. URLs expire after `expires_in_seconds` (default 600, max 86400).
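The full client-side flow combines the two steps: request a presigned PUT URL from your backend, then send the bytes straight to storage. A sketch with the HTTP calls injected as callables so the ordering is clear; note that the `url` field name in the presign response is an assumption, since the exact response shape is not specified above:

```python
def upload_via_presign(get_presign, http_put, key: str,
                       data: bytes, content_type: str):
    """get_presign(body) posts to the presign endpoint and returns
    the parsed response; http_put(url, data, headers) performs the
    direct upload. Mirrors the presign request body shown above."""
    resp = get_presign({
        "key": key,
        "operation": "put",
        "expires_in_seconds": 600,
        "content_type": content_type,
    })
    # The Content-Type on the PUT must match what was presigned.
    return http_put(resp["url"], data, {"Content-Type": content_type})
```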
## Multipart uploads

Files larger than 5 MB should be sent as multipart uploads: individual parts can be retried on failure and uploaded in parallel, which makes large transfers both faster and more reliable. The S3 SDKs handle this automatically — boto3's `upload_file` and the JavaScript SDK's `Upload` helper (from `@aws-sdk/lib-storage`) switch to multipart above a configurable size threshold.
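The splitting scheme the SDKs apply internally can be sketched as a pure function: divide the file into fixed-size byte ranges and upload each range as one part. Illustrative only; you would normally let the SDK do this:

```python
PART_SIZE = 5 * 1024 * 1024  # 5 MB, the threshold mentioned above

def plan_parts(total_size: int, part_size: int = PART_SIZE):
    """Return the (offset, length) byte ranges a multipart upload
    would send, one tuple per part; the final part may be shorter."""
    return [(offset, min(part_size, total_size - offset))
            for offset in range(0, total_size, part_size)]
```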
If you upload from a browser using presigned URLs, configure CORS on the bucket so the browser allows the cross-origin request:
```http
PATCH /api/storage/buckets/{bucket_id}/cors
Content-Type: application/json

{
  "rules": [
    {
      "allowed_origins": ["https://example.com"],
      "allowed_methods": ["GET", "PUT", "POST"],
      "allowed_headers": ["*"],
      "max_age_seconds": 3600
    }
  ]
}
```

## Quotas
Each workspace has a default storage quota and a per-month transfer allowance. Both are visible in the dashboard and adjustable on request.