The backend team at Popsa, the UK's fastest-growing software company, is responsible for the entirety of the backend, from the underlying AWS infrastructure to the microservices and code that run on it.
What we do day-to-day
As guardians of Popsa’s infrastructure we are involved in the design and development of features from the get-go, supporting with our domain knowledge in API design, security and infrastructure. This enables us to bring exciting features from inception to implementation.
The team is cross-discipline, operating across both the backend services and platform/DevOps domains; this richness keeps the workload varied and exciting.
We are a highly collaborative team
We often find ourselves in Google Meet calls or pairing in person (lockdown allowing) to keep ourselves moving when faced with challenging features or complex debugging. This collaboration is core to our team’s principles and creates a rich learning environment in which each of us is always improving.
We support several other teams
Outside the backend team, we work closely with our Customer Success team and the wider engineering team (Data Science, Product Design, iOS and Android). We help debug reported customer issues and assist with the design and development of new features within the product.
Working so closely with other teams creates opportunities to learn about the more intricate parts of our stack (especially our rendering system, image upload stack and template-based design system) and gives us deep insight into how other parts of the business operate.
Our customers come first
The backend is more than just a collection of microservices and APIs. We face a variety of challenges in bringing users' designs to life: rendering them into printable formats, integrating with third-party printers around the world, tracking delivery progress, and giving our Customer Success team the overview they need to assist with customer queries. We are integral to ensuring the customer journey remains a delightful and memorable experience.
Our core technologies...
The main programming language we use as a team is Go. Because Go compiles directly into a single binary it is ideal for containerisation, and it is well suited to quickly developing server applications that speak either HTTP or gRPC.
The language's simplicity and lack of boilerplate allow us to seek out the best software engineers rather than targeting Go developers specifically, as Go is quick to pick up for anyone well versed in the art of software engineering.
Elastic Kubernetes Service (EKS)
Last summer we migrated from Elastic Container Service (ECS) to EKS to provide a more flexible hosting environment with more reactive autoscaling. Since the move our microservices have been more reliable, especially under our Q4 load, and our deployment process has far less friction.
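The reactive autoscaling mentioned above is typically expressed in Kubernetes as a HorizontalPodAutoscaler. This manifest is a sketch only; the service name, replica bounds and CPU threshold are illustrative rather than our production values.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service
  minReplicas: 2
  maxReplicas: 20          # headroom for Q4-style load spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```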
In situations where a microservice would be too heavyweight we use Lambda to build small, contained, message-driven systems, alongside singular functions that field interactions with our third-party printers and propagate changes to data across multiple DynamoDB tables, amongst many other things.
We trigger our Lambdas using a number of different mechanisms including API Gateway, Step Functions and Simple Queue Service.
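The shape of an SQS-triggered function looks roughly like this. To keep the sketch self-contained it decodes a simplified stand-in for the SQS event; in a real Lambda the handler would instead be registered with `lambda.Start` from the `aws-lambda-go` SDK, and the "processing" step here is purely illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sqsEvent is a simplified stand-in for the event structure SQS delivers;
// the real type lives in the aws-lambda-go events package.
type sqsEvent struct {
	Records []struct {
		Body string `json:"body"`
	} `json:"Records"`
}

// handleEvent processes each queued message. In practice this might call a
// third-party printer API or update a DynamoDB table.
func handleEvent(raw []byte) ([]string, error) {
	var event sqsEvent
	if err := json.Unmarshal(raw, &event); err != nil {
		return nil, fmt.Errorf("decoding event: %w", err)
	}
	processed := make([]string, 0, len(event.Records))
	for _, r := range event.Records {
		processed = append(processed, r.Body) // illustrative "work"
	}
	return processed, nil
}

func main() {
	out, err := handleEvent([]byte(`{"Records":[{"body":"order-123"}]}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // [order-123]
}
```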
DynamoDB is the main database technology used by the team. Not only is it fast, it also provides Streams, a mechanism we rely on heavily to propagate data changes across multiple parts of our stack, keeping the data we serve fresh and up to date with minimal delay.
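Conceptually, a Streams consumer sees each change as a before/after record and fans it out to downstream stores. This sketch uses a simplified stand-in for the real stream record type, and the "cache" it builds is a placeholder for whatever downstream view is being kept fresh.

```go
package main

import "fmt"

// streamRecord is a simplified stand-in for a DynamoDB Streams record,
// which carries the item's state before and after a change.
type streamRecord struct {
	EventName string            // INSERT, MODIFY or REMOVE
	NewImage  map[string]string // new item attributes (nil on REMOVE)
}

// propagate fans changes out to a downstream view; here it just builds
// the updates that would be applied.
func propagate(records []streamRecord) map[string]string {
	view := map[string]string{}
	for _, r := range records {
		switch r.EventName {
		case "INSERT", "MODIFY":
			for k, v := range r.NewImage {
				view[k] = v
			}
		case "REMOVE":
			// deletions would evict downstream entries here
		}
	}
	return view
}

func main() {
	updates := propagate([]streamRecord{
		{EventName: "MODIFY", NewImage: map[string]string{"order-123": "printed"}},
	})
	fmt.Println(updates["order-123"]) // printed
}
```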
Protocol Buffers were a new concept to many of us when we started using them in 2017, but they have fundamentally improved our network stack. In fact, we’ve reduced API transmission overhead by 70%. Win.
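Part of that saving comes from the wire format itself: only compact field numbers go over the wire, never repeated string keys as in JSON. The message below is purely illustrative, not Popsa's actual schema.

```proto
syntax = "proto3";

package popsa.example;

// Illustrative message only. On the wire each field is identified by its
// number (1-5), not its name, which is part of the size win over JSON.
message PhotoPlacement {
  string photo_id = 1;
  int32 page = 2;
  float x = 3;
  float y = 4;
  float scale = 5;
}
```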
On the infrastructure side we use Terraform to manage our stack that, in addition to the AWS services above, includes Cognito, Simple Email Service, Athena, S3, Batch, IAM and many more.
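To give a sense of what that Terraform looks like, here is a hypothetical fragment; the resource names and settings are illustrative, though note how a table can enable the Streams feature described above directly in its definition.

```hcl
# Illustrative only — resource names and settings are hypothetical.
resource "aws_s3_bucket" "renders" {
  bucket = "example-rendered-output"
}

resource "aws_dynamodb_table" "orders" {
  name         = "example-orders"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "order_id"

  attribute {
    name = "order_id"
    type = "S"
  }

  # Enable Streams so changes can be propagated downstream.
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"
}
```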
Alongside the Go code, Terraform and generated Protocol Buffers in our monorepo you can find small amounts of Python (for use with Lambda@Edge), Kubernetes manifests (both hand-written and Helm-generated), and Serverless manifests that we use to manage many of our Lambdas.
A variety of architecture patterns can be found in our codebase: microservices, serverless, finite state machines, producers and consumers, and sidecars, to name a few. We aren’t married to a single architecture pattern and always look for the most appropriate one for the task at hand, be that something we are already familiar with or the introduction and exploration of a new pattern.
Read more about our stack here.
On the DevOps side we are early adopters of GitHub Actions. We use it to run tests, build and push Docker images, deploy services, update Kubernetes manifests, apply infrastructure changes and much more. With access to GitHub Environments as part of the Enterprise package, we currently give our friends on the Data Science team the ability to test interactions between their infrastructure and our own. Looking ahead, we plan to use Environments alongside Terraform Workspaces to provide a seamless development experience with support for ephemeral environments.
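A typical workflow covering several of those responsibilities might be shaped like this. It is a sketch only: the job names, the `build-and-push.sh` helper and the `infra` directory are hypothetical, not our actual pipeline.

```yaml
# Sketch of a deployment workflow — names and steps are illustrative.
name: deploy-service
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # GitHub Environments gate this deployment
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: go test ./...
      - name: Build and push image
        run: ./scripts/build-and-push.sh   # hypothetical helper script
      - name: Apply infrastructure changes
        run: terraform -chdir=infra apply -auto-approve
```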
As our Data Science colleagues look to productionise much of their great work to enrich the user experience, we are well placed to support them. As the team with the richest microservice domain knowledge and as guardians of our AWS infrastructure, we help in a number of ways: providing sidecar services that add authentication and metrics without porting much of our Go middleware to Python, and providing CI/CD pipelines that let them iterate on their infrastructure autonomously alongside our own. We’ve created a novel system that requires only an ‘approval’ sign-off from the backend team to trigger automated deployments as part of a GitHub Actions workflow.