How We Designed Our Scalable Microservice Architecture

A look at how we architected our codebase to meet our ambitious goals and work well for our distributed team.


At Carted, we’re building a universal commerce API that will enable developers to embed commerce functionality into any app or website through a single API. You can read more about our vision if you’re curious, but in this article we’re taking a look at how we architected our codebase to meet our ambitious goals and work well for our distributed team.

At every stage, we followed a process of first defining our requirements, and then evaluating various approaches to find the best fit for our purposes. While this approach doesn’t guarantee that we — or your team — will make the right long-term decisions, it does ensure that architectural choices are being made for well-defined reasons that are less influenced by preferences and personal biases.

From microservices, big things grow

Like most projects, Carted’s codebase started as a series of small proofs of concept to better understand the main technical challenges that needed solving, as well as to provide some starting points for what would evolve into our production architecture.

As we transitioned into our development phase, one of the key decisions was whether to use a monolithic or microservice-based architecture. Given the long-term nature of this decision, it’s important that business context is appropriately considered and reflected in the requirements list. Our requirements were:

  • Flexibility to add, remove, and maintain multiple versions of similar functionality (such as platform integrations).
  • Ability to easily scale certain parts of the product from 0 to a million requests/second.
  • ‘Cloud-native’ — taking advantage of usage-based pricing to minimize costs.
  • Ease of onboarding new team members, and quickly building their understanding of the codebase so that they can confidently make changes.
  • Alignment with a fully distributed team, empowering asynchronous work.

There is a school of thought that startup teams should set out with a monolithic codebase and iteratively transition to a microservice architecture if the limitations of the monolithic approach begin to cause too much friction. This thinking is mostly grounded in the idea that managing the build, deployment, and monitoring systems needed for a successful microservice architecture is too cumbersome for a small team and may hinder its ability to move quickly.

While it’s true that microservices need some coordination to work well, the benefits to us far outweighed this extra work and made the decision relatively easy. We’ve included some of those main benefits below:

  • Clear separation of concerns: Microservices divide much more cleanly along both functional and team lines, and enable our distributed engineering team to wholly own and make changes to the implementation of various parts of our system without impacting the work of others.
  • Scale just what we need, when we need it: By having our system broken down into microservices, we’re able to scale up just the right services to meet current demand. This means we can cut down on over-provisioned compute resources and, importantly, avoid wasted cloud spend.
  • Smaller, faster deployments: Startups move quickly, and for developers, that means a codebase that’s constantly in flux. A microservice architecture means that deployments — and associated tests — can be focused on just those elements that have changed. This results in faster iterations and a more productive team.

Tie it all together with gRPC

A microservice that can’t communicate with other services is not very useful, and establishing a method of inter-service communication is a critical part of making these disparate systems behave as if they are one larger unit. REST, RPC, and brokered messaging are three of the most common approaches, and there are a number of great resources on the web that deep-dive into the benefits and disadvantages of each.

With a few possible solutions to the same broad problem, we once again turned to defining our requirements as our first step:

  • A standardized method of inter-service communication with support for our preferred languages (more on this below).
  • Speed! Low-latency messaging, especially on hot paths.
  • Bidirectional communication between ‘clients’ and ‘servers’.
  • Great DX (if code is not business logic, we’d rather not be maintaining it).

Of the various solutions we examined, gRPC stood out as the obvious choice. For those unfamiliar, gRPC is a Google-led, open source RPC framework that abstracts away much of the boilerplate and infrastructure of inter-service message passing and lets developers focus on what’s most important.

In terms of our desired outcomes, gRPC is built atop HTTP/2 and uses long-lived connections. This delivers an automatic reduction in per-request latency, since a connection doesn’t have to be re-established for each request, and HTTP/2 multiplexing allows multiple messages to be in flight simultaneously and, importantly, to complete out of order, avoiding request waterfalls.
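To make this concrete, here is a minimal Go sketch of that connection model. The address and the use of gRPC’s standard health-check service are illustrative stand-ins for real services; the key point is that the connection is dialled once and then shared by many concurrent calls:

```go
package main

import (
	"context"
	"log"
	"sync"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Create the client once: gRPC maintains a single long-lived HTTP/2
	// connection that every subsequent call shares, avoiding a fresh
	// handshake per request. The address is hypothetical.
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	defer conn.Close()

	client := healthpb.NewHealthClient(conn)

	// Fire several RPCs concurrently. HTTP/2 multiplexes them over the
	// same connection, and responses are free to arrive out of order.
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			ctx, cancel := context.WithTimeout(context.Background(), time.Second)
			defer cancel()
			resp, err := client.Check(ctx, &healthpb.HealthCheckRequest{})
			if err != nil {
				log.Printf("call %d failed: %v", n, err)
				return
			}
			log.Printf("call %d: %v", n, resp.GetStatus())
		}(i)
	}
	wg.Wait()
}
```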

gRPC’s message format is itself part of the solution. Protocol buffers are used to serialize and deserialize the structured data that forms the message body, and a clever compiler turns those language-neutral message definitions into application code for a number of supported languages.
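As a tiny illustration of that serialization step, here’s a round trip in Go using protobuf’s well-known Struct type (purely for convenience; real services would use their own generated message types, and the field values here are made up):

```go
package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	// Build a message from plain Go values. The fields are hypothetical.
	msg, err := structpb.NewStruct(map[string]interface{}{
		"product_id": "p123",
		"quantity":   2,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Serialize to the compact protobuf wire format...
	data, err := proto.Marshal(msg)
	if err != nil {
		log.Fatal(err)
	}

	// ...and deserialize back into a typed message.
	decoded := &structpb.Struct{}
	if err := proto.Unmarshal(data, decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(data), "bytes on the wire:", decoded.AsMap())
}
```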

Protocol buffers (protobufs) act as a ‘contract’ that defines the interface for requests and responses for a given service. While the relative permanence of these definitions (compared to traditional JSON messages) can cause some friction, it encourages more deliberate API design and enables API type definitions to be generated from the code itself without additional manual documentation.
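For example, the contract for a hypothetical cart service might look like the definition below. None of these names come from Carted’s actual API; the point is that running a file like this through protoc yields typed client and server code in each supported language.

```proto
syntax = "proto3";

package example.cart.v1;

// A hypothetical service definition, purely for illustration.
service CartService {
  // Adds a product to a shopper's cart.
  rpc AddItem(AddItemRequest) returns (AddItemResponse);
}

message AddItemRequest {
  string cart_id = 1;
  string product_id = 2;
  int32 quantity = 3;
}

message AddItemResponse {
  string cart_id = 1;
  int32 total_items = 2;
}
```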

It is certainly true that the advantages of gRPC we’ve discussed are achievable with other communication architectures. However, gRPC makes this much easier by packaging up the various technologies and by providing ongoing maintenance from a well-established company.

For a startup, these advantages mean that we’re able to offload the tech behind our inter-service communication stack entirely, and instead spend our time where it matters most — writing code our customers care about.

Go with Golang

Language choice wasn’t at the top of our list of priorities when planning our architecture, but our choice to use Go wasn’t accidental. Carted’s product vision encompasses enormous volumes of data, and equally high throughput for our indexing, search, and checkout operations.

With that in mind, we defined a list of characteristics for our ideal language, which are listed below in a loose order of importance:

  • Strongly-typed
  • Robust multi-threading capability
  • Fast
  • Plays nicely with containerization
  • Relatively easy to onboard devs with little or no language-specific experience
  • Wide industry usage

Go ticked all of the boxes on our list and has been a natural fit for our team. Go’s compiled binaries are also very lightweight and ideally suited to running in a containerized environment with very few dependencies. The resulting lean images start up much faster than comparable services written in, for example, PHP or Java, and they reduce the surface area for bugs and other maintenance burdens.
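As a small taste of the multi-threading point above, here’s a sketch of the goroutines-and-channels worker pool pattern that Go makes almost trivial (indexProduct and the product IDs are placeholders):

```go
package main

import (
	"fmt"
	"sync"
)

// indexProduct is a placeholder for real indexing work.
func indexProduct(id string) string {
	return "indexed:" + id
}

func main() {
	jobs := make(chan string)
	results := make(chan string)

	// Spin up a small pool of workers; goroutines are cheap enough that
	// pools of thousands are routine in Go services.
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range jobs {
				results <- indexProduct(id)
			}
		}()
	}

	// Close the results channel once every worker has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Feed work into the pool, then signal that no more is coming.
	go func() {
		for _, id := range []string{"p1", "p2", "p3", "p4", "p5"} {
			jobs <- id
		}
		close(jobs)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```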

It almost goes without saying, but Go also has the advantage of being a first-class supported language for Google Cloud, with all manner of client libraries and other resources available for the full gamut of GCP services that we rely on.

Go wide with Kubernetes

We’ll admit — this choice was a bit of a no-brainer. Containerized microservices using an orchestration platform for deployment, management, and scaling are an industry-accepted best practice, and we agree.

Our specific choice of Docker and Kubernetes stems from familiarity with those technologies within the team, as well as from their position as leaders in their respective fields. These choices have served us well so far, and we don’t anticipate any reason to re-evaluate them any time soon.

Conclusion

As any engineer will attest, even small architectural decisions can have a huge impact down the line as new features need building or teams grow and evolve. For Carted, ensuring that our architecture could support the breadth and scale of our vision whilst being flexible as we build our way towards it was a critical part of our early engineering journey.

We hope that this look behind the curtain at how we made our decisions proves helpful when you’re making similar decisions of your own, whether at your company or even on a hobby project.