Last updated: Apr 21, 2021
This is an opinionated post about AWS Amplify: how it fits into my web stack, and the things I like and dislike about it.
Amplify lets you host static sites for cheap, taking advantage of automatic deployment to AWS CloudFront and S3, with builds and deployments triggered by pushes to GitHub - all without having to manage the infrastructure for CloudFront, S3, CodeBuild, CodeDeploy, etc.
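The build side of that pipeline is driven by an `amplify.yml` in the repo root. A typical one for an npm-based SPA looks roughly like this (a sketch - the `baseDirectory` depends on your framework's build output, e.g. `dist` for Vite, `build` for CRA):

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    # Directory your framework emits the production build into
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```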
The Amplify Authentication module, a client library wrapper around AWS Cognito, is the de facto way to implement Cognito in client-side applications, and I use it even though the library is large - 56.5kB minified + gzipped.
I have no intention of writing and managing my own authN and authZ service. Also, Cognito is tightly integrated with other AWS services like S3 and DynamoDB, so if you use AWS it makes sense to use Cognito.
Amplify enables very rapid prototyping and quick application development if you manage to stay on the happy path and take advantage of the automated features. On the other hand, it makes me uneasy that once I diverge from the happy path, I have to dive deep into VTL resolvers, edit auto-generated CloudFormation files (in JSON), etc.
When you go outside the happy path and have to edit something, most likely you have to edit an automatically generated JSON CloudFormation template, or add an SES permission to your Lambda function and hope it doesn't get overwritten when you later edit the Lambda's permissions using the CLI. I could update the CloudFormation template when needed, but I'd rather manage my infrastructure in CDK if I'm going to be writing infrastructure as code anyway.
Their GraphQL auto resource generation uses VTL files for resolvers - no wonder, since VTL is the default way to write resolvers with AWS AppSync - but I have no intention of learning VTL and introducing another programming language to the stack. This means that if I have to modify a resolver's logic, I have to plug in a Lambda instead of using Amplify's auto-generated resolvers.
I realize this is a very opinionated con - not really about Amplify at all, just my preference. I've weighed the benefits and drawbacks of VTL, and for my money VTL is a mess to read, write, maintain and test. Its only benefits are that you don't have to run a Lambda resolver, so you save on latency and Lambda costs. For now I'm sticking with TypeScript.
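As a sketch of that approach: a direct Lambda resolver attached to an AppSync field can be plain TypeScript. The field name (`getPost`) and the in-memory "table" below are hypothetical stand-ins for a real schema and a real DynamoDB call.

```typescript
// Shape of the event AppSync passes to a direct Lambda resolver (simplified).
interface AppSyncEvent {
  fieldName: string;
  arguments: { id?: string };
}

// Stand-in for a DynamoDB table so the sketch stays self-contained.
const posts: Record<string, { id: string; title: string }> = {
  "1": { id: "1", title: "Hello" },
};

// The resolver: ordinary TypeScript instead of a VTL template.
const handler = async (event: AppSyncEvent) => {
  switch (event.fieldName) {
    case "getPost":
      return posts[event.arguments.id ?? ""] ?? null;
    default:
      throw new Error(`Unhandled field: ${event.fieldName}`);
  }
};
```

The trade-off is exactly the one mentioned above: you pay a Lambda invocation per resolved field, but you get a language you can actually unit test.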
For storage, Amplify integrates with S3 and DynamoDB. DynamoDB is very rigid and not suitable for every web application, unless:
you know you're going to have millions of users and need to be able to scale
you know all your data access patterns in advance, so you can model your database correctly
you have a data schema so simple and stable that you can just stick it in DynamoDB and save some money
With DynamoDB, simple, common, solved problems like pagination take a lot of effort to implement right - it only offers Previous/Next-style pagination. If you're going to go through the hassle of using DynamoDB, you need to do it for the right reasons.
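To illustrate what Previous/Next pagination means in practice: a DynamoDB Query returns a `LastEvaluatedKey`, which you hand back as `ExclusiveStartKey` to fetch the next page. A common pattern (and roughly what a GraphQL `nextToken` is) serializes that key into an opaque cursor. The helper names here are my own:

```typescript
// The key DynamoDB returns in LastEvaluatedKey / accepts as ExclusiveStartKey.
type DynamoKey = Record<string, unknown>;

// Turn LastEvaluatedKey into an opaque cursor the client can send back.
function encodeNextToken(lastEvaluatedKey?: DynamoKey): string | null {
  if (!lastEvaluatedKey) return null; // no more pages
  return Buffer.from(JSON.stringify(lastEvaluatedKey)).toString("base64");
}

// Turn the client's cursor back into an ExclusiveStartKey for the next Query.
function decodeNextToken(token?: string | null): DynamoKey | undefined {
  if (!token) return undefined; // first page
  return JSON.parse(Buffer.from(token, "base64").toString("utf8"));
}
```

Note there is no way to jump to page N: the cursor only moves forward, which is the Previous/Next limitation in a nutshell.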
Also, the way Amplify implements the @connection GraphQL directive doesn't feel right: scattering global secondary indexes all over the place and implementing multi-table relationships is not how you want to structure your data in DynamoDB - it's going to cost you a lot of money.
When you create a global secondary index you're essentially duplicating your table, so doing that for an extra access pattern requires a lot of forethought. You have to put serious thought into designing your DynamoDB tables, and the default way Amplify designs them feels very questionable to anyone who has used DynamoDB without Amplify.
You can add an Aurora Serverless data source to your GraphQL API, but there is no @model support for it. Also, according to the docs, the CLI only supports Amazon Aurora MySQL 5.6 databases running in the us-east-1 region.
In short, if you're using Amplify you should be using GraphQL, because of all the directives you can leverage for automation. However, most of the directives only work with DynamoDB, and DynamoDB is not suitable for every use case.
Note: you could use Elasticsearch alongside DynamoDB to make your data layer more flexible - DynamoDB streams feed a Lambda, and the Lambda adds each record to Elasticsearch. Users then go through DynamoDB for mutations and Elasticsearch for queries. I haven't dived too deep into Elasticsearch because its hourly billing model doesn't fit well into my serverless stack.
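A sketch of the glue Lambda in that pipeline: it translates DynamoDB stream records into index/delete operations. The record shape is heavily simplified, and actually shipping the operations to Elasticsearch is left out.

```typescript
// Simplified DynamoDB stream record; real records carry more fields.
interface StreamRecord {
  eventName: "INSERT" | "MODIFY" | "REMOVE";
  dynamodb: {
    Keys: { id: { S: string } };
    NewImage?: Record<string, { S: string }>;
  };
}

// The Elasticsearch operation each record implies.
interface IndexOp {
  op: "index" | "delete";
  id: string;
  doc?: Record<string, string>;
}

// Map each stream record to the index/delete op a bulk request would carry.
function toIndexOps(records: StreamRecord[]): IndexOp[] {
  return records.map((r) => {
    const id = r.dynamodb.Keys.id.S;
    if (r.eventName === "REMOVE") return { op: "delete" as const, id };
    // Unwrap DynamoDB's attribute-value format ({ S: "..." }) into plain strings.
    const img = r.dynamodb.NewImage ?? {};
    const doc: Record<string, string> = {};
    for (const k in img) doc[k] = img[k].S;
    return { op: "index" as const, id, doc };
  });
}
```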
Amplify has some strangely lacking functionality, like:
There isn't an intuitive way to add environment variables to Lambda functions through Amplify. Not surprisingly, there is a GitHub issue requesting this feature.
The TL;DR is: the Environment variables tab in the Amplify console is for build-configuration environment variables, not backend function environment variables. To add Lambda environment variables, you have to use the AWS Lambda console, for every function individually.
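The other workaround is editing the function's generated `*-cloudformation-template.json` by hand. A rough sketch of what that edit looks like (the resource's logical ID and the variable names here are placeholders, and the usual overwrite caveat applies):

```json
{
  "Resources": {
    "LambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Environment": {
          "Variables": {
            "TABLE_NAME": "my-table"
          }
        }
      }
    }
  }
}
```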
There isn't an intuitive way to grant Lambda functions permissions to access resources outside the Amplify stack. For example, if you have a Lambda function that needs to talk to SES to send emails, you have to grant it permissions for SES actions. However, you can't do that using the AWS Amplify CLI.
Updating the Lambda's permissions in the CloudFormation template is a no-go, because they get overwritten if you later add permissions to the same function using the CLI - a very sticky situation. Your only option is to modify the permissions again in the IAM console. More info in the SES permissions issue.
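For reference, the policy statement you end up attaching by hand in the IAM console amounts to something like this (a sketch; in production you'd scope `Resource` to your verified SES identity's ARN rather than `*`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}
```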
Strange bugs, e.g. in the aws-amplify/auth module: when you use the change-email functionality and don't confirm the new email, the new email is already updated in the user pool. This means that if a user changes their email, doesn't confirm it (or mistypes the new address), and then logs out, they can only log in with their new, unverified email.
Obviously the flow should be that an email is confirmed before it's updated in the user pool - otherwise what's the point of requiring users to confirm their emails in the first place?
With the current flow you could pick any email you want, like firstname.lastname@example.org, even though you don't own it, or you could misspell the email when changing it and get locked out of your account, because no confirmation step is required. Here's the issue in Amplify's repo.
The bug might actually be a problem with Cognito rather than Amplify, but as a consumer of the service it doesn't really matter where the bug stems from - the issue has been open for a couple of years now and doesn't seem to be moving in any direction.
There is no straightforward way to add TypeScript support for the Lambda functions.
The default memory for Lambda functions is 128MB. I'm no Lambda performance/cost expert, but with 128MB of memory you end up eating 1-3s cold starts, and that's without a VPC. Lambda pricing is based on both memory and the duration your function runs, so setting the least amount of memory does not necessarily mean it costs the least.
Given the generous free tier AWS provides for Lambda, the first thing I do when I add a Lambda to my Amplify project is edit the JSON CloudFormation template and increase the memory to something like 1024MB. I haven't tested the cost difference at scale, but the performance difference is very noticeable.
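The edit itself is one property on the function resource in the generated template (the logical ID varies per project):

```json
{
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "MemorySize": 1024
  }
}
```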
This is a very opinionated post on how AWS Amplify fits into my current web stack - at the moment I only use AWS Amplify for:
I wrote an article where I go more in depth on the Auth-related services - AWS Amplify & Cognito - Review. It went kind of viral on Reddit (~350 upvotes), so check it out.