Borislav Hadzhiev
Last updated: Apr 21, 2021
When allowing users to upload files to an S3 bucket, we most certainly want to limit the file size they can upload.
We can't do that with `s3.getSignedUrl`, but we can with `s3.createPresignedPost`, which has a slightly more complex API, but not by a large margin.
The flow of using a presigned url is the same regardless - your frontend makes a request to your backend, possibly specifying the content type of the file you want to upload. The backend responds with the presigned url, which is valid for a specified amount of time, and your frontend uploads the file using the presigned url.
This flow allows you to avoid sending the file to your backend and then on to S3; instead, you upload directly to S3 from the frontend.
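As a rough sketch of that round trip, the contract between frontend and backend can be typed as follows. These type names are illustrative (they are not part of the AWS SDK); the response shape mirrors what `s3.createPresignedPost` returns - a `url` to POST to and the `fields` the form data must include alongside the file.

```typescript
// Illustrative contract for the presigned-url round trip.
type GetPresignedUrlRequest = {
  fileType: string; // content type the frontend intends to upload
};

type GetPresignedUrlResponse = {
  url: string; // the S3 endpoint the frontend POSTs the file to
  fields: Record<string, string>; // form fields returned by S3
};
```

Note that the file bytes never touch the backend - it only hands out the signed policy.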
The bucket has to have CORS enabled to be able to upload from a web application. Your frontend is on a different domain than the bucket, therefore you must enable CORS on the bucket to allow requests from that specific domain. For example, in CDK:

```typescript
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cdk from 'aws-cdk-lib';

export class MyCdkStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const s3Bucket = new s3.Bucket(this, id, {
      // 👇 Setting up CORS
      cors: [
        {
          allowedMethods: [
            s3.HttpMethods.GET,
            s3.HttpMethods.POST,
            s3.HttpMethods.PUT,
          ],
          allowedOrigins: ['http://localhost:3000'],
          allowedHeaders: ['*'],
        },
      ],
    });
  }
}
```
The lambda that makes the request to S3 for the presigned URL must have the `s3:PutObject` and, optionally, the `s3:PutObjectAcl` permissions for the bucket.
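For reference, the corresponding IAM policy statement might look like the following (the bucket name is a placeholder; drop the `s3:PutObjectAcl` action if you don't set an ACL on upload):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}
```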
The conditions in the params object of `s3.createPresignedPost` must be met, i.e. if you limit the `Content-Type` and your frontend attempts to upload a file with a different `Content-Type`, you will get an error.
```typescript
const params = {
  Bucket: bucketName,
  Fields: {
    key: filePath,
    acl: 'public-read',
  },
  Conditions: [
    // content length restrictions: 0-1MB
    ['content-length-range', 0, 1000000],
    // specify content-type to be more generic - images only
    // ['starts-with', '$Content-Type', 'image/'],
    ['eq', '$Content-Type', fileType],
    ['starts-with', '$key', identityId],
  ],
  // number of seconds for which the presigned policy should be valid
  Expires: 15,
};
```
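To show where those params fit in, here is a sketch of a lambda handler that passes them to `s3.createPresignedPost` and returns the result. The bucket name, identity id, and event shape are illustrative assumptions, not from this post; the `aws-sdk` v2 package is preinstalled in the older Node.js Lambda runtimes, so it is loaded lazily inside the handler here.

```typescript
import {promisify} from 'util';

// Hypothetical values for illustration - in a real handler these would come
// from environment variables and the authenticated request.
const bucketName = 'my-upload-bucket';
const identityId = 'user-123';

// Builds the createPresignedPost params shown above for a given file.
function buildPresignedPostParams(fileType: string, filePath: string) {
  return {
    Bucket: bucketName,
    Fields: {key: filePath, acl: 'public-read'},
    Conditions: [
      ['content-length-range', 0, 1000000], // 0-1MB
      ['eq', '$Content-Type', fileType],
      ['starts-with', '$key', identityId],
    ],
    Expires: 15, // seconds for which the policy is valid
  };
}

// Lambda handler sketch - loads aws-sdk v2 lazily from the Lambda runtime.
export async function handler(event: {
  queryStringParameters: {fileType: string};
}) {
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();
  const filePath = `${identityId}/${Date.now()}`;
  const params = buildPresignedPostParams(
    event.queryStringParameters.fileType,
    filePath,
  );
  // createPresignedPost is callback-based in SDK v2, so promisify it
  const presignedPost = await promisify(s3.createPresignedPost.bind(s3))(
    params,
  );
  return {
    statusCode: 200,
    body: JSON.stringify({...presignedPost, filePath}),
  };
}
```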
To make the file publicly readable, you can set the `acl` in the `Fields` to `public-read`, which means that anyone who has the link can access and view the file. That's why the lambda needs the `s3:PutObjectAcl` permission on the bucket.
The default expiration time for the presigned post policy is 1 hour (3600 seconds), but you most likely want to set a shorter one.
Once the frontend has the signed URL, it can make a `POST` request to S3, with the fields included in the lambda response set as `FormData`:
```typescript
import {client} from '@utils/api-client';

export async function uploadToS3({
  fileType,
  fileContents,
}: {
  fileType: string;
  fileContents: File;
}) {
  const presignedPostUrl = await getPresignedPostUrl(fileType);

  const formData = new FormData();
  formData.append('Content-Type', fileType);
  Object.entries(presignedPostUrl.fields).forEach(([k, v]) => {
    formData.append(k, v);
  });
  formData.append('file', fileContents); // The file must be the last element

  const response = await fetch(presignedPostUrl.url, {
    method: 'POST',
    body: formData,
  });
  if (!response.ok) {
    throw new Error(
      'Invalid file upload, check that your file size is less than 1MB.',
    );
  }

  return presignedPostUrl.filePath;
}

type PresignedPostUrlResponse = {
  url: string;
  fields: {
    key: string;
    acl: string;
    bucket: string;
  };
  filePath: string;
};

async function getPresignedPostUrl(fileType: string) {
  const presignedPostUrl = await client<PresignedPostUrlResponse>(
    `get-presigned-url-s3?fileType=${fileType}`,
  );
  return presignedPostUrl;
}
```
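Since the policy rejects anything over 1MB, it can be worth failing fast on the frontend before the `POST` is even made, instead of relying on the generic fetch error. A minimal sketch of such a guard (this helper is my own addition, not part of the code above):

```typescript
// Must match the ['content-length-range', 0, 1000000] condition
// in the backend policy.
const MAX_UPLOAD_BYTES = 1000000;

// Returns true if a file of this size would pass the policy's
// content-length-range condition.
function fitsUploadPolicy(sizeInBytes: number): boolean {
  return sizeInBytes >= 0 && sizeInBytes <= MAX_UPLOAD_BYTES;
}
```

You would call `fitsUploadPolicy(fileContents.size)` before invoking `uploadToS3` and show the user a friendlier error message than the one thrown after a failed upload.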