Large S3 uploads fail in serverless due to synchronous CopyObject in Media Library exceeding Lambda timeout #3925
melvinmihaylovnulabg asked this question in Q&A · Unanswered · 0 replies
We are using Filament + Spatie Media Library in a Laravel Vapor (serverless) environment and are encountering failures when handling large file uploads (2–10 GB).
The upload process itself works correctly: files are uploaded directly from the browser to S3 using multipart uploads, completely bypassing Lambda.
The issue occurs after upload, when the user submits the form. At this point, Spatie Media Library processes the file via addMediaFromDisk(), which triggers an S3 CopyObject operation to move the file from the temporary location (e.g. livewire-tmp/...) to the final media path (e.g. media/...).
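For context, the flow that triggers the copy looks roughly like the sketch below. The model class, collection name, and variable names are assumptions for illustration; the chained `addMediaFromDisk()` → `toMediaCollection()` calls are the standard Media Library API.

```php
// Hypothetical Livewire/Filament form-submit handler.
// $temporaryKey is the S3 key the browser uploaded to, e.g. "livewire-tmp/...".
$video
    ->addMediaFromDisk($temporaryKey, 's3') // reads from the temporary S3 location
    ->toMediaCollection('videos', 's3');    // issues an S3 CopyObject to media/...
```

Both the source and destination live on the same S3 disk, so this is a pure server-side copy; no bytes pass through the application, but the request still blocks until the copy finishes.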
Since S3 has no true move operation, renaming an object amounts to a full server-side copy. For large files (2+ GB), this copy takes roughly 30–40 seconds. Because it runs synchronously inside the HTTP request, it exceeds the ~30-second request timeout that applies to Vapor's HTTP invocations (Lambda itself allows longer, but HTTP requests behind API Gateway are capped at about 30 seconds), resulting in timeouts and failed requests (503 errors).
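A further complication: a single `CopyObject` call only accepts source objects up to 5 GB, so for the upper end of the 2–10 GB range the copy must be performed as a multipart copy (`UploadPartCopy`) regardless. With the AWS SDK for PHP this can be sketched as follows; bucket names and keys are placeholders:

```php
use Aws\S3\S3Client;
use Aws\S3\ObjectCopier;

// Sketch only, assuming credentials are resolved from the environment.
$client = new S3Client(['region' => 'eu-west-1', 'version' => 'latest']);

$copier = new ObjectCopier(
    $client,
    ['Bucket' => 'my-bucket', 'Key' => 'livewire-tmp/source.mp4'],  // source
    ['Bucket' => 'my-bucket', 'Key' => 'media/1/video.mp4'],        // destination
    'private'
);

// Falls back to a multipart copy automatically when the source exceeds
// the single-request CopyObject size limit.
$copier->copy();
```

This does not remove the time cost, only the 5 GB ceiling; the copy still has to run somewhere that is not bound by the 30-second HTTP window.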
Key points:
Upload to S3 is not the problem (works fine, fully client-side)
Failure occurs during post-upload processing
The blocking S3 copy operation is incompatible with serverless time limits
This makes the current implementation unsuitable for large file handling in serverless environments.
Expected behavior would be one of:
Avoid copying the file entirely (reuse existing object)
Perform the copy asynchronously (e.g. via queue/job)
Allow direct upload to the final destination path to skip the copy step
At the moment, the synchronous CopyObject step is a hard blocker for large uploads in serverless setups.
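The second workaround above, performing the copy asynchronously, can be sketched as a queued Laravel job. On Vapor, queue workers can be configured with much longer timeouts than HTTP requests, so the slow copy no longer blocks the form submission. The `Video` model, `'videos'` collection, and job name are assumptions for illustration:

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job that moves the uploaded file into the media collection
// on a queue worker instead of inside the HTTP request.
class AttachUploadedVideo implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $timeout = 900; // queue workers are not bound by the 30 s HTTP limit

    public function __construct(
        public Video $video,
        public string $temporaryKey, // the livewire-tmp/... S3 key
    ) {}

    public function handle(): void
    {
        // The slow S3 CopyObject now runs here, outside the request cycle.
        $this->video
            ->addMediaFromDisk($this->temporaryKey, 's3')
            ->toMediaCollection('videos', 's3');
    }
}

// In the form handler: dispatch and return immediately.
AttachUploadedVideo::dispatch($video, $temporaryKey);
```

The trade-off is that the media record is not available the instant the form returns, so the UI needs a pending state until the job completes.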