Cloudflare R2 is an S3-compatible object storage service designed to store large amounts of unstructured data: images, videos, audio files, documents, and backups. Its key differentiator: zero egress fees.
- 10 GB of storage on the free tier
- No egress fees (unlike AWS S3 which charges for downloads)
- No API request fees on free tier
- Pay only for storage and operations at scale
- Drop-in replacement for AWS S3
- Use existing S3 libraries and tools
- Seamless migration from S3
- Standard S3 API compatibility
- Global distribution via Cloudflare's network
- Automatic caching when used with Workers/Pages
- High availability with automatic replication
- Low latency access worldwide
- Image/video hosting for web applications
- User-generated content storage
- Static asset hosting (CSS, JS, fonts)
- Backup storage for databases and files
- Media libraries for content platforms
- File upload/download features in SaaS apps
- 10 GB storage included
- 1 million Class A operations per month (writes, lists)
- 10 million Class B operations per month (reads)
- No egress charges ever!
- Storage: $0.015 per GB/month (~$15 per TB per month)
- Class A operations: $4.50 per million (writes)
- Class B operations: $0.36 per million (reads)
- Egress: $0 (FREE!) 🎉
Example: Serving 100 GB of images with 1 TB downloads/month
| Service | Storage | Egress | Total |
|---|---|---|---|
| AWS S3 | $2.30 | $92.00 | $94.30 |
| Google Cloud Storage | $2.00 | $120.00 | $122.00 |
| Cloudflare R2 | $1.50 | $0.00 | $1.50 |
R2 is 60-80x cheaper for content delivery!
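As a sanity check on the table, the arithmetic can be sketched in a few lines. The per-GB rates below are illustrative approximations of published pricing, not authoritative figures:

```typescript
// Estimate monthly storage + egress cost from per-GB rates.
// Rates below are illustrative approximations, not official pricing.
interface Rates {
  storagePerGb: number; // $ per GB stored per month
  egressPerGb: number;  // $ per GB transferred out
}

function monthlyCost(storageGb: number, egressGb: number, rates: Rates): number {
  return storageGb * rates.storagePerGb + egressGb * rates.egressPerGb;
}

const r2: Rates = { storagePerGb: 0.015, egressPerGb: 0 };
const s3: Rates = { storagePerGb: 0.023, egressPerGb: 0.09 };

console.log(monthlyCost(100, 1000, r2)); // ≈ $1.50
console.log(monthlyCost(100, 1000, s3)); // ≈ $92.30
```

The takeaway: once the volume you serve exceeds the volume you store, egress dominates the bill everywhere except R2.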
- Cloudflare account
- Wrangler CLI installed (`npm install -g wrangler`)
- Basic understanding of object storage
# Create a new bucket
wrangler r2 bucket create my-images
# List all buckets
wrangler r2 bucket list
# Delete a bucket (when empty)
wrangler r2 bucket delete my-bucket-name

Alternatively, create a bucket from the dashboard:
- Log in to the Cloudflare Dashboard
- Select R2 from left sidebar
- Click Create bucket
- Enter a bucket name (e.g., `my-app-storage`)
- Choose a location (optional)
- Click Create bucket
# Set CORS policy (if needed for direct browser uploads)
wrangler r2 bucket cors put my-images --config cors.json

Example CORS configuration:
{
"cors_rules": [
{
"allowed_origins": ["https://yourdomain.com"],
"allowed_methods": ["GET", "PUT", "POST", "DELETE"],
"allowed_headers": ["*"],
"max_age_seconds": 3600
}
]
}

In `wrangler.toml`:
name = "my-worker"
main = "src/index.ts"
[[r2_buckets]]
binding = "STORAGE"
bucket_name = "my-images"

interface Env {
STORAGE: R2Bucket;
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
// Upload file
if (url.pathname === '/upload' && request.method === 'POST') {
const formData = await request.formData();
const file = formData.get('file') as File;
if (!file) {
return Response.json({ error: 'No file provided' }, { status: 400 });
}
// Generate unique filename
const filename = `${Date.now()}-${file.name}`;
// Upload to R2
await env.STORAGE.put(filename, file.stream(), {
httpMetadata: {
contentType: file.type,
},
});
return Response.json({
success: true,
url: `/files/${filename}`,
filename,
});
}
// Download file
if (url.pathname.startsWith('/files/')) {
const filename = url.pathname.slice(7);
const object = await env.STORAGE.get(filename);
if (!object) {
return new Response('File not found', { status: 404 });
}
const headers = new Headers();
object.writeHttpMetadata(headers);
headers.set('etag', object.httpEtag);
return new Response(object.body, { headers });
}
return new Response('Not found', { status: 404 });
},
};

const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/gif', 'image/webp'];
const MAX_SIZE = 5 * 1024 * 1024; // 5MB
export default {
async fetch(request: Request, env: Env): Promise<Response> {
if (request.method !== 'POST') {
return Response.json({ error: 'Method not allowed' }, { status: 405 });
}
const formData = await request.formData();
const file = formData.get('image') as File;
// Validate file
if (!file) {
return Response.json({ error: 'No file provided' }, { status: 400 });
}
if (!ALLOWED_TYPES.includes(file.type)) {
return Response.json({ error: 'Invalid file type' }, { status: 400 });
}
if (file.size > MAX_SIZE) {
return Response.json({ error: 'File too large' }, { status: 400 });
}
// Generate filename with UUID
const extension = file.name.split('.').pop();
const filename = `${crypto.randomUUID()}.${extension}`;
const key = `images/${filename}`;
// Upload to R2
await env.STORAGE.put(key, file.stream(), {
httpMetadata: {
contentType: file.type,
},
customMetadata: {
originalName: file.name,
uploadedAt: new Date().toISOString(),
},
});
return Response.json({
success: true,
url: `https://your-worker.workers.dev/files/${key}`,
filename,
});
},
};

export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === '/api/files') {
// List all objects
const listed = await env.STORAGE.list({
limit: 100,
prefix: 'images/', // Optional: filter by prefix
});
const files = listed.objects.map(obj => ({
key: obj.key,
size: obj.size,
uploaded: obj.uploaded,
}));
return Response.json({
files,
truncated: listed.truncated,
cursor: listed.cursor,
});
}
return new Response('Not found', { status: 404 });
},
};

export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === '/api/delete' && request.method === 'DELETE') {
const { filename } = await request.json();
// Delete from R2
await env.STORAGE.delete(filename);
return Response.json({ success: true });
}
return new Response('Not found', { status: 404 });
},
};

For large files, use multipart uploads (each part except the last must be at least 5 MB):
async function uploadLargeFile(file: File, env: Env) {
const filename = `large-files/${crypto.randomUUID()}-${file.name}`;
// Create multipart upload
const multipart = await env.STORAGE.createMultipartUpload(filename);
const CHUNK_SIZE = 5 * 1024 * 1024; // 5MB chunks
const chunks: R2UploadedPart[] = [];
let offset = 0;
let partNumber = 1;
// Upload in chunks
while (offset < file.size) {
const chunk = file.slice(offset, offset + CHUNK_SIZE);
const buffer = await chunk.arrayBuffer();
const part = await multipart.uploadPart(partNumber, buffer);
chunks.push(part);
offset += CHUNK_SIZE;
partNumber++;
}
// Complete the upload
const object = await multipart.complete(chunks);
return {
success: true,
key: filename,
size: object.size,
};
}

export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const filename = url.pathname.slice(1);
const object = await env.STORAGE.get(filename);
if (!object) {
return new Response('Not found', { status: 404 });
}
// Check if client has cached version
const ifNoneMatch = request.headers.get('If-None-Match');
if (ifNoneMatch === object.httpEtag) {
return new Response(null, { status: 304 }); // Not Modified
}
const headers = new Headers();
object.writeHttpMetadata(headers);
headers.set('etag', object.httpEtag);
headers.set('cache-control', 'public, max-age=31536000'); // 1 year
return new Response(object.body, { headers });
},
};

// Store metadata with files
await env.STORAGE.put('user-123/profile.jpg', file.stream(), {
httpMetadata: {
contentType: 'image/jpeg',
},
customMetadata: {
userId: '123',
uploadedBy: 'john@example.com',
category: 'profile-pictures',
originalName: file.name,
},
});
// Retrieve metadata
const object = await env.STORAGE.head('user-123/profile.jpg');
console.log(object.customMetadata);

// functions/upload.ts
interface Env {
STORAGE: R2Bucket;
}
export const onRequestPost: PagesFunction<Env> = async (context) => {
const formData = await context.request.formData();
const file = formData.get('file') as File;
if (!file) {
return Response.json({ error: 'No file' }, { status: 400 });
}
const filename = `${Date.now()}-${file.name}`;
await context.env.STORAGE.put(filename, file.stream(), {
httpMetadata: { contentType: file.type },
});
return Response.json({ success: true, filename });
};

- Create a bucket: `my-public-assets`
- Add a custom domain in the R2 dashboard
- Configure DNS (automatic if domain on Cloudflare)
- Files become accessible at `https://cdn.yourdomain.com/filename.jpg`
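When generating links on the custom domain, URL-encode the object key — spaces or unicode in user-supplied filenames break naive concatenation. A small helper, with `cdn.yourdomain.com` as a placeholder host:

```typescript
// Build a public URL for an R2 object key served via a custom domain.
// The default host is a placeholder from the steps above.
function publicUrl(key: string, host = 'cdn.yourdomain.com'): string {
  // Encode each path segment but preserve '/' separators.
  const encoded = key
    .split('/')
    .map((segment) => encodeURIComponent(segment))
    .join('/');
  return `https://${host}/${encoded}`;
}

console.log(publicUrl('users/123/summer photo.jpg'));
// https://cdn.yourdomain.com/users/123/summer%20photo.jpg
```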
Use a Worker to serve R2 files with custom logic:
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
const key = url.pathname.slice(1);
// Add security checks, analytics, etc.
const object = await env.STORAGE.get(key);
if (!object) {
return new Response('Not found', { status: 404 });
}
const headers = new Headers();
object.writeHttpMetadata(headers);
// Add custom headers
headers.set('Cache-Control', 'public, max-age=86400');
headers.set('CDN-Cache-Control', 'public, max-age=31536000');
return new Response(object.body, { headers });
},
};

R2 supports the S3 API, so you can use the AWS SDK:

npm install @aws-sdk/client-s3

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
const S3 = new S3Client({
region: 'auto',
endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: 'your-access-key-id',
secretAccessKey: 'your-secret-access-key',
},
});
// Upload file
await S3.send(new PutObjectCommand({
Bucket: 'my-bucket',
Key: 'test.jpg',
Body: fileBuffer,
ContentType: 'image/jpeg',
}));

To create the API credentials used above:
- Go to the R2 dashboard
- Click Manage R2 API Tokens
- Create API token
- Save Access Key ID and Secret Access Key
- Test upload:
  curl -X POST https://your-worker.workers.dev/upload \
    -F "file=@test-image.jpg"
- Test download:
  curl https://your-worker.workers.dev/files/test-image.jpg \
    --output downloaded.jpg
- Verify in dashboard:
  - Go to the R2 bucket in the Cloudflare dashboard
  - Check that the file is listed
  - Verify its size and metadata
- Test delete:
  curl -X DELETE https://your-worker.workers.dev/api/delete \
    -H "Content-Type: application/json" \
    -d '{"filename": "test-image.jpg"}'
Upload fails silently:
- Check the bucket name in `wrangler.toml`
- Verify the binding name matches your code
- Check file size limits
CORS errors in browser:
- Configure CORS policy on bucket
- Add the appropriate `Access-Control-Allow-*` headers
Files not accessible:
- Check if bucket is public or requires Worker
- Verify custom domain configuration
- Check firewall/security settings
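If files are served through a Worker rather than directly from the bucket, CORS can also be handled in the Worker itself by answering preflights and attaching headers. A minimal sketch — the allowed origin is a placeholder, and the `R2Bucket` interface is reduced to just what this example uses (the real type comes from `@cloudflare/workers-types`):

```typescript
// Reduced structural type for this sketch; real projects should use
// the R2Bucket type from @cloudflare/workers-types instead.
interface R2Bucket {
  get(key: string): Promise<{ body: ReadableStream | null } | null>;
}

interface Env {
  STORAGE: R2Bucket;
}

const ALLOWED_ORIGIN = 'https://yourdomain.com'; // placeholder origin

function withCors(response: Response): Response {
  const headers = new Headers(response.headers);
  headers.set('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
  headers.set('Access-Control-Allow-Methods', 'GET, PUT, POST, DELETE');
  headers.set('Access-Control-Allow-Headers', 'Content-Type');
  return new Response(response.body, { status: response.status, headers });
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Answer CORS preflight requests without touching R2.
    if (request.method === 'OPTIONS') {
      return withCors(new Response(null, { status: 204 }));
    }
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.STORAGE.get(key);
    if (!object) {
      return withCors(new Response('Not found', { status: 404 }));
    }
    return withCors(new Response(object.body));
  },
};
```

Worker-side CORS only applies when the Worker is in the request path; for direct-to-bucket browser uploads you still need the bucket CORS policy above.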
// Organize files logically
await env.STORAGE.put('users/123/avatar.jpg', file);
await env.STORAGE.put('products/456/image-1.jpg', file);
await env.STORAGE.put('documents/2024/report.pdf', file);
// Easy to list by category
const userFiles = await env.STORAGE.list({ prefix: 'users/123/' });

// Add cache headers for static assets
headers.set('Cache-Control', 'public, max-age=31536000, immutable');
headers.set('CDN-Cache-Control', 'public, max-age=31536000');

// Use Workers to resize/optimize images
// Resize/optimize with Cloudflare Image Resizing via fetch (requires
// Image Resizing enabled on the zone; imageUrl and filename are
// placeholders supplied by surrounding code)
const resized = await fetch(imageUrl, {
  cf: { image: { width: 1200, quality: 85, format: 'jpeg' } },
});
await env.STORAGE.put(filename, resized.body);

// Store file metadata in database for fast queries
await env.DB.prepare(`
INSERT INTO files (key, size, type, uploaded_by, created_at)
VALUES (?, ?, ?, ?, ?)
`).bind(filename, file.size, file.type, userId, Date.now()).run();
// Query files without listing R2
const files = await env.DB.prepare(`
SELECT * FROM files WHERE uploaded_by = ? ORDER BY created_at DESC
`).bind(userId).all();

// Generate temporary access URLs
async function generateSignedUrl(key: string, expiresIn: number): Promise<string> {
const expires = Date.now() + expiresIn;
const signature = await crypto.subtle.sign(
'HMAC',
await getSigningKey(),
new TextEncoder().encode(`${key}:${expires}`)
);
return `/files/${key}?expires=${expires}&sig=${btoa(String.fromCharCode(...new Uint8Array(signature)))}`;
}

async function uploadAvatar(file: File, userId: string, env: Env) {
const key = `avatars/${userId}.jpg`;
await env.STORAGE.put(key, file.stream(), {
httpMetadata: { contentType: 'image/jpeg' },
customMetadata: { userId, uploadedAt: new Date().toISOString() },
});
await env.DB.prepare(
'UPDATE users SET avatar_url = ? WHERE id = ?'
).bind(`/avatars/${userId}.jpg`, userId).run();
return { success: true, url: `/avatars/${userId}.jpg` };
}

async function uploadDocument(file: File, folder: string, env: Env) {
const filename = `${crypto.randomUUID()}.${file.name.split('.').pop()}`;
const key = `documents/${folder}/${filename}`;
await env.STORAGE.put(key, file.stream(), {
httpMetadata: { contentType: file.type },
customMetadata: {
originalName: file.name,
folder,
uploadedAt: new Date().toISOString(),
},
});
return { key, url: `/files/${key}` };
}

async function uploadVideo(file: File, env: Env) {
// Use multipart upload for large videos
const key = `videos/${crypto.randomUUID()}.mp4`;
const multipart = await env.STORAGE.createMultipartUpload(key);
// Upload in chunks (implementation from multipart example above)
// ...
return { key, url: `/videos/${key}` };
}

- Serve files through custom CDN logic
- Add authentication/authorization
- Implement usage tracking
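The authentication bullet above can be sketched as a bearer-token check in front of the R2 read path. The `AUTH_TOKEN` secret (e.g. set with `wrangler secret put AUTH_TOKEN`) and the reduced `R2Bucket` type are assumptions for illustration:

```typescript
// Reduced structural type for this sketch; use @cloudflare/workers-types
// for the real R2Bucket definition.
interface R2Bucket {
  get(key: string): Promise<{ body: ReadableStream | null } | null>;
}

interface Env {
  STORAGE: R2Bucket;
  AUTH_TOKEN: string; // hypothetical secret binding
}

function isAuthorized(request: Request, token: string): boolean {
  const header = request.headers.get('Authorization') ?? '';
  return header === `Bearer ${token}`;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (!isAuthorized(request, env.AUTH_TOKEN)) {
      return new Response('Unauthorized', { status: 401 });
    }
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.STORAGE.get(key);
    if (!object) {
      return new Response('Not found', { status: 404 });
    }
    return new Response(object.body);
  },
};
```

A production check would use per-user tokens and a constant-time comparison rather than a single shared secret.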
- Store user uploads from frontend
- Serve static assets
- Handle form submissions with file uploads
- Store file metadata for querying
- Track file usage and analytics
- Implement file permissions
- Cache frequently accessed file metadata
- Store temporary upload tokens
- Implement rate limiting
- Documentation: https://developers.cloudflare.com/r2/
- API Reference: https://developers.cloudflare.com/r2/api/
- Pricing: https://developers.cloudflare.com/r2/pricing/
- Examples: https://developers.cloudflare.com/r2/examples/
- Discord Community: https://discord.cloudflare.com
- No egress fees - eliminate bandwidth charges entirely
- S3-compatible - Easy migration from AWS
- Fast globally - Cloudflare's network
- Generous free tier - 10GB free
- Worker integration - Custom logic at the edge
- Simple pricing - No surprise bills
- Reliable storage - Enterprise-grade infrastructure
For any application that needs to store and serve files - images, videos, documents, backups - Cloudflare R2 offers unbeatable value and performance.