The Popsa Platform: a rock solid foundation for growth

The Popsa Platform is the beating heart of the Popsa technology stack. It is a collection of databases, microservices, serverless functions and batch processing tasks that every day processes thousands of user signups, orders and photo uploads, and manages the fulfilment of our beautiful printed products.

We make use of over 30 Amazon Web Services (AWS) services with our core stack built on Elastic Container Service (ECS), DynamoDB, Lambda and S3.


We currently run around 15 Dockerised microservices orchestrated using ECS. Our microservices closely align with core Popsa concepts: “users”, “prints” (the digital representation of the photobook, Christmas ornament or loose prints a user designs inside our app), “orders” (the purchase of a print) and “print jobs” (the fulfilment of an order) are each managed by their own service - “the user service”, “the print service”, and so on. These entity services encapsulate all the logic required to read and update their individual objects.
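As a sketch of the entity-service pattern (the type and function names below are illustrative, not our production code), each service exposes a small interface that owns all reads and updates for its object type; here an in-memory map stands in for the real storage layer:

```go
package main

import (
	"errors"
	"fmt"
)

// User is the core entity a hypothetical "user service" would manage.
type User struct {
	ID    string
	Email string
}

// UserService encapsulates all reads and updates for User objects;
// no other service touches user storage directly.
type UserService interface {
	Get(id string) (*User, error)
	Update(u *User) error
}

// memoryUserService is an in-memory stand-in for a real,
// database-backed implementation.
type memoryUserService struct {
	users map[string]*User
}

func newMemoryUserService() *memoryUserService {
	return &memoryUserService{users: make(map[string]*User)}
}

func (s *memoryUserService) Get(id string) (*User, error) {
	u, ok := s.users[id]
	if !ok {
		return nil, errors.New("user not found")
	}
	return u, nil
}

func (s *memoryUserService) Update(u *User) error {
	s.users[u.ID] = u
	return nil
}

func main() {
	var svc UserService = newMemoryUserService()
	svc.Update(&User{ID: "u1", Email: "jane@example.com"})
	u, _ := svc.Get("u1")
	fmt.Println(u.Email) // jane@example.com
}
```

Because callers only see the interface, the storage behind a service can change without touching its consumers.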

The other type of microservice we run facades third-party services - we have two services that wrap the Stripe and Braintree SDKs, providing common interfaces for tasks such as creating or refunding a transaction.
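In sketch form (the interface and names are ours for illustration, not the production API), each payment facade implements a common interface so the rest of the platform never calls Stripe or Braintree directly:

```go
package main

import "fmt"

// Charge is a provider-agnostic payment result.
type Charge struct {
	ID          string
	AmountPence int64
}

// PaymentProvider is the common interface both facade services expose.
type PaymentProvider interface {
	CreateCharge(amountPence int64, token string) (*Charge, error)
	Refund(chargeID string) error
}

// stripeFacade would wrap the Stripe SDK in the real service;
// here it is stubbed out for illustration.
type stripeFacade struct{}

func (stripeFacade) CreateCharge(amountPence int64, token string) (*Charge, error) {
	// A real implementation would call the Stripe SDK here.
	return &Charge{ID: "ch_stub", AmountPence: amountPence}, nil
}

func (stripeFacade) Refund(chargeID string) error {
	// A real implementation would issue a refund via the Stripe SDK.
	return nil
}

func main() {
	var p PaymentProvider = stripeFacade{}
	c, _ := p.CreateCharge(2999, "tok_test")
	fmt.Println(c.ID, c.AmountPence)
}
```

Swapping providers - or adding a third - then only means writing another implementation of the same interface.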

All of our microservices are written in Go and make use of Micro, which provides a number of libraries to help with service discovery (via Consul), circuit breaking, inter-service communication and more. Logs are streamed out of the Docker containers through Fluentd into an Elasticsearch cluster. We use Prometheus for collecting metrics and Grafana for visualising them.

All inter-service communication happens over HTTP/2 and uses Protocol Buffers (see our earlier blog post on Protocol Buffers and how valuable we’ve found them).

The Micro project provides a proxy service which we’ve containerised and also run on ECS, allowing Lambda functions and other tooling to talk directly to our microservices without having to build in service discovery, circuit breaking and other boilerplate.


We make extensive use of serverless “function as a service” technologies such as AWS Lambda. Lambda is like a Swiss Army knife - you essentially write a short piece of code that does one thing in response to an event - which makes it extremely versatile. Every time a customer photo is uploaded to S3, a Lambda function is triggered which downloads the photo, ensures it really is a photo (and not, say, a Word document), performs some machine learning magic, removes the record of the photo from the “missing photos” table and adds it to the “uploaded photos” table.
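The “is it really a photo” check can be done cheaply by sniffing the file’s leading bytes rather than trusting its name or extension. A minimal sketch using only Go’s standard library (the function name is ours):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// isPhoto sniffs the start of an uploaded file and reports whether it
// looks like an image, regardless of the file's claimed extension.
func isPhoto(data []byte) bool {
	// DetectContentType inspects at most the first 512 bytes and
	// matches them against well-known magic-byte signatures.
	contentType := http.DetectContentType(data)
	return strings.HasPrefix(contentType, "image/")
}

func main() {
	jpegHeader := []byte{0xFF, 0xD8, 0xFF, 0xE0} // JPEG magic bytes
	wordDoc := []byte("Not really a photo")
	fmt.Println(isPhoto(jpegHeader), isPhoto(wordDoc)) // true false
}
```

A real pipeline would go further (decode the image, check dimensions), but magic-byte sniffing rejects obvious non-photos before any expensive work happens.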

Other Lambda functions sit behind an API Gateway and perform tasks such as creating on-demand thumbnails or responding to print job updates from our fulfilment partners.

Coupled with S3’s ability to host static websites, we also host fundamental tools such as our internal customer management system on S3, which talks to an API Gateway/Lambda backend, as does our “forgot password” page that users visit via an email link.

The other advantage of Lambda is that it’s extremely cost effective: we’re only billed for code when it runs, as opposed to paying for it regardless of whether or not the code is actively doing anything. This comes with a few caveats, such as initialisation latency, but in our testing and experience these are tradeoffs we’re happy to accept.
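To see why pay-per-use is attractive, it helps to sketch the billing arithmetic. Lambda charges per invocation plus per GB-second of compute; the constants and workload below are illustrative assumptions, not AWS’s current price list:

```go
package main

import "fmt"

// Illustrative pricing assumptions (not AWS's current rates):
const (
	pricePerMillionRequests = 0.20         // USD per 1M invocations
	pricePerGBSecond        = 0.0000166667 // USD per GB-second
)

// lambdaMonthlyCost estimates the monthly bill for a function invoked
// `invocations` times, each running for `durationMS` milliseconds with
// `memoryGB` of memory allocated.
func lambdaMonthlyCost(invocations, durationMS int64, memoryGB float64) float64 {
	requestCost := float64(invocations) / 1e6 * pricePerMillionRequests
	gbSeconds := float64(invocations) * float64(durationMS) / 1000 * memoryGB
	return requestCost + gbSeconds*pricePerGBSecond
}

func main() {
	// A hypothetical photo-upload handler: 5M events a month,
	// 200ms each, at 512MB of memory.
	fmt.Printf("%.2f USD/month\n", lambdaMonthlyCost(5000000, 200, 0.5))
}
```

The key property is that an idle function costs nothing at all, whereas an always-on container is billed around the clock.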


Our previous monolithic platform application was backed by a single high-availability MySQL database. This was great in the early days as it made adding new features and querying data relatively simple.

However, as we've evolved the Popsa app and its capabilities, and evolved the Platform stack to support these features, we've moved to more specialist databases.

We now use just over 20 DynamoDB tables to store our live application data. DynamoDB gives us low-millisecond responses, is entirely serverless (no instances to run!) and automatically scales our storage and capacity as we need it. DynamoDB also has a feature called Streams which allows us to react (using Lambda functions) to changes in tables - very similar to the triggers offered by traditional relational databases. DynamoDB Streams (along with traditional queues) allows us to have a highly event-driven architecture, and in the future will allow us to easily deploy regionally, as Streams powers DynamoDB Global Tables (master-master regional database tables).
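To illustrate the shape of a Streams consumer, the sketch below uses a simplified record type (real handlers receive the DynamoDB event types from the aws-lambda-go library); the handler reacts to table changes much as a database trigger would:

```go
package main

import "fmt"

// StreamRecord is a simplified stand-in for a DynamoDB Streams record;
// production handlers use the event types from the aws-lambda-go library.
type StreamRecord struct {
	EventName string            // "INSERT", "MODIFY" or "REMOVE"
	Keys      map[string]string // primary key of the changed item
}

// handleStream reacts only to newly inserted items, returning their
// keys - e.g. to kick off downstream processing for each new order.
func handleStream(records []StreamRecord) []string {
	var inserted []string
	for _, r := range records {
		if r.EventName == "INSERT" {
			inserted = append(inserted, r.Keys["id"])
		}
	}
	return inserted
}

func main() {
	records := []StreamRecord{
		{EventName: "INSERT", Keys: map[string]string{"id": "order-123"}},
		{EventName: "MODIFY", Keys: map[string]string{"id": "order-456"}},
	}
	fmt.Println(handleStream(records)) // [order-123]
}
```

Because the stream delivers every table change as an event, adding a new reaction is just deploying another small consumer - the writing service never needs to know.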

Whilst DynamoDB is a great solution for our application APIs, it isn't great at OLAP workloads (i.e. analytics). To provide this functionality we run serverless ETL jobs that extract data from our DynamoDB tables and store it as Parquet in S3. Once it's there we can use Amazon Athena, a very inexpensive hosted Presto service, to run any analytical queries we want.
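Once the data sits in S3 as Parquet, queries are plain Presto SQL. A hypothetical example (the table and column names are illustrative, not our actual schema) counting orders per month:

```sql
-- Illustrative Athena query over a Parquet-backed "orders" table:
SELECT date_trunc('month', created_at) AS month,
       count(*)                        AS orders
FROM   orders
GROUP  BY 1
ORDER  BY 1;
```

Because Athena charges per data scanned and Parquet is columnar, a query like this only reads the columns it touches.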

The final database we're making use of is Elasticsearch. As previously mentioned, we run one Elasticsearch cluster that supports searching our application logs. We also run a second cluster that provides search functionality for our internal customer management applications and supplies the data for some Grafana-powered metrics dashboards.

The Future

2019-2020 is going to be another very busy year for Popsa. We're confident that the stack described above will form a rock solid foundation to build upon; however, it's important that we continue to learn about and evaluate other technologies that may unlock new opportunities as we scale Popsa to bring our products from the 50 countries we currently serve to every country.

Some of the things we're looking into at the moment include graph databases, Kubernetes, and new techniques to reduce the size of - and speed up - the data sent from mobile devices to the platform.

If the Popsa Platform sounds like something you want to build on, please email us and either our CTO Tom or our Head of Engineering Alex will get right back to you.