Storage Providers
The Storage Service supports multiple storage backends. You can switch between providers by changing the STORAGE_TYPE environment variable; no code changes are required.
Local storage is for development only: it does not scale beyond a single instance, and data is lost if the container is removed. Use a cloud provider for production deployments.
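The env-driven provider switch might be sketched like this. This is illustrative only; the class names (LocalStorage, S3Storage, GcpStorage) are assumptions, not the service's actual internals:

```python
import os

# Placeholder provider classes; the real service's implementations differ.
class LocalStorage: ...
class S3Storage: ...
class GcpStorage: ...

PROVIDERS = {"local": LocalStorage, "aws": S3Storage, "gcp": GcpStorage}

def make_storage():
    """Pick a storage backend from STORAGE_TYPE, defaulting to local."""
    storage_type = os.environ.get("STORAGE_TYPE", "local")
    try:
        return PROVIDERS[storage_type]()
    except KeyError:
        raise ValueError(f"Unsupported STORAGE_TYPE: {storage_type!r}")
```

Because the backend is resolved from the environment at startup, switching providers is a configuration change, not a code change.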
Local Storage
Minimal configuration for local development. Files are stored on the local filesystem.
STORAGE_TYPE=local
LOCAL_DIRECTORY=uploads
Google Cloud Storage
Update your .env file:
STORAGE_TYPE=gcp
GOOGLE_APPLICATION_CREDENTIALS=/tmp/service-account-file.json
When running with Docker, mount your service account file into the container:
docker run -d --env-file .env -p 3333:3333 \
-v /path/to/local/gcp/service-account-file.json:/tmp/service-account-file.json \
storage-service:latest
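Before mounting the file, it can help to sanity-check that it is a readable service account JSON. A minimal sketch; the `type` and `private_key` fields are standard in GCP service account files:

```python
import json

def looks_like_service_account(path: str) -> bool:
    """Pre-flight check: the file parses as JSON and carries the fields
    a GCP service account file has ("type": "service_account", a private key)."""
    try:
        with open(path) as f:
            data = json.load(f)
    except (OSError, ValueError):
        return False
    return data.get("type") == "service_account" and "private_key" in data
```

A malformed or missing file will otherwise only surface as an authentication error at request time inside the container.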
Amazon Web Services (AWS S3)
Update your .env file with AWS credentials:
STORAGE_TYPE=aws
S3_REGION=ap-southeast-2
AWS_ACCESS_KEY_ID=your-aws-access-key-id
AWS_SECRET_ACCESS_KEY=your-aws-secret-access-key
AVAILABLE_BUCKETS=documents,files
Then run the container:
docker run -d --env-file .env -p 3333:3333 storage-service:latest
For production deployments on AWS, consider using IAM roles instead of static credentials. See the AWS documentation.
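AVAILABLE_BUCKETS is a comma-separated list. A sketch of how such a value can be parsed and validated (illustrative; the service's actual parsing may differ):

```python
import os

def parse_buckets(value: str) -> list[str]:
    """Split a comma-separated bucket list, trimming whitespace and dropping empties."""
    buckets = [b.strip() for b in value.split(",") if b.strip()]
    if not buckets:
        raise ValueError("AVAILABLE_BUCKETS must name at least one bucket")
    return buckets

buckets = parse_buckets(os.environ.get("AVAILABLE_BUCKETS", "documents,files"))
```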
S3-Compatible Providers
Any S3-compatible storage provider can be used by setting STORAGE_TYPE=aws and configuring a custom endpoint via S3_ENDPOINT. The service uses the standard S3 API, so any provider that implements this API will work.
MinIO
Ideal for local development and self-hosted deployments:
STORAGE_TYPE=aws
S3_ENDPOINT=http://minio:9000
S3_FORCE_PATH_STYLE=true
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin
AVAILABLE_BUCKETS=documents,files
DigitalOcean Spaces
STORAGE_TYPE=aws
S3_ENDPOINT=https://syd1.digitaloceanspaces.com
AWS_ACCESS_KEY_ID=your-do-access-key-id
AWS_SECRET_ACCESS_KEY=your-do-secret-access-key
AVAILABLE_BUCKETS=documents,files
Cloudflare R2
STORAGE_TYPE=aws
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_FORCE_PATH_STYLE=true
AWS_ACCESS_KEY_ID=your-r2-access-key-id
AWS_SECRET_ACCESS_KEY=your-r2-secret-access-key
AVAILABLE_BUCKETS=documents,files
S3 Configuration Reference
| Variable | Required | Description |
|---|---|---|
| S3_REGION | Yes (for AWS S3) | AWS region. Not required when using a custom endpoint. |
| S3_ENDPOINT | No | Custom endpoint URL for S3-compatible providers. |
| S3_FORCE_PATH_STYLE | No | Set to true for path-style URLs (required for MinIO and Cloudflare R2). |
| AWS_ACCESS_KEY_ID | Yes | Access key for authentication. |
| AWS_SECRET_ACCESS_KEY | Yes | Secret key for authentication. |
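The effect of S3_FORCE_PATH_STYLE can be illustrated with the two S3 addressing styles (a sketch of how such URLs are formed, not the service's code):

```python
def object_url(endpoint: str, bucket: str, key: str, force_path_style: bool) -> str:
    """Path-style puts the bucket in the URL path; virtual-hosted style puts it
    in the hostname. MinIO and Cloudflare R2 require path-style addressing."""
    if force_path_style:
        return f"{endpoint}/{bucket}/{key}"
    scheme, host = endpoint.split("://", 1)
    return f"{scheme}://{bucket}.{host}/{key}"

print(object_url("http://minio:9000", "documents", "report.pdf", True))
# http://minio:9000/documents/report.pdf
print(object_url("https://s3.amazonaws.com", "documents", "report.pdf", False))
# https://documents.s3.amazonaws.com/report.pdf
```

Virtual-hosted style fails for providers like MinIO because `documents.minio` is not a resolvable hostname, which is why S3_FORCE_PATH_STYLE=true is required there.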
Custom Public URL for Document URIs
By default, each storage provider constructs public document URIs from its own endpoint:
- AWS S3: `https://{bucket}.s3.amazonaws.com/{key}` or `https://{bucket}.{endpoint}/{key}`
- GCP: `https://{bucket}.storage.googleapis.com/{key}`
- Local: `http://{domain}:{port}/{key}`
If you place a CDN or custom domain in front of your storage bucket, set PUBLIC_URL to override the URI returned to clients. This affects only API responses: it does not change where files are uploaded or how the service communicates with your storage provider. The response URI becomes `{PUBLIC_URL}/{key}`.
PUBLIC_URL applies to aws and gcp storage types. It is ignored for local storage.
| Variable | Required | Description |
|---|---|---|
| PUBLIC_URL | No | Base URL for public document URIs. When set, overrides the default URI for the configured storage provider. |
Example: DigitalOcean Spaces with CDN
STORAGE_TYPE=aws
S3_ENDPOINT=https://syd1.digitaloceanspaces.com
AWS_ACCESS_KEY_ID=your-do-access-key-id
AWS_SECRET_ACCESS_KEY=your-do-secret-access-key
AVAILABLE_BUCKETS=documents,files
PUBLIC_URL=https://documents.example.com
Example: Google Cloud Storage with Cloud CDN
STORAGE_TYPE=gcp
GOOGLE_APPLICATION_CREDENTIALS=/tmp/service-account-file.json
PUBLIC_URL=https://cdn.example.com
In both configurations:
- Files are uploaded to the configured storage provider as normal
- The URI returned to clients uses PUBLIC_URL, e.g. `https://documents.example.com/{key}`
- Your CDN or custom domain should be configured to serve content from the storage bucket
When PUBLIC_URL is set, the bucket name is not included in the generated URI. This assumes your CDN or custom domain already points to a specific bucket. If you use multiple buckets (via AVAILABLE_BUCKETS), all generated URIs will share the same PUBLIC_URL base.
Only the origin (protocol, hostname, and port) of PUBLIC_URL is used. Any path component (e.g. https://cdn.example.com/subpath) is ignored.
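The URI rules above (origin-only PUBLIC_URL, bucket omitted) can be sketched as follows. This is illustrative, not the service's implementation; the AWS default shown is the documented fallback:

```python
from urllib.parse import urlsplit

def document_uri(bucket: str, key: str, public_url=None) -> str:
    """With PUBLIC_URL set, keep only its origin (any path is dropped) and omit
    the bucket name; otherwise fall back to the provider's default URI."""
    if public_url:
        parts = urlsplit(public_url)
        return f"{parts.scheme}://{parts.netloc}/{key}"
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(document_uri("documents", "report.pdf", "https://cdn.example.com/subpath"))
# https://cdn.example.com/report.pdf
```

Note that both the `/subpath` path component and the bucket name are absent from the result, matching the behavior described above.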