Microservices

Scalable microservices for almost any need

How It Works


Having us build your microservice is a straightforward process designed to deliver maximum value in minimal time.

  1. Fill out our Contact form.
  2. We send you a detailed form covering the information we need from your product team.
  3. You fill out the form and return it to us within a few business days.
    1. We evaluate the request and email you with any specific questions we may have.
  4. We build a scope and statement of work contract for you to review and sign.
  5. You review the contract and sign it if you’re satisfied.
  6. You deliver 50% of the payment upfront.
  7. We begin work on your microservice.
  8. When the work is complete and the microservice passes all of its specifications, we deliver it to you as a Git repository.

That’s it! If you like this process, keep reading to see what we offer. The good news is what you see is what you get, so there are no surprises.

Goals


The goal of this service is to provide a high-quality microservice implementation that is scalable and production-ready as quickly as possible. These microservices are designed as baseline implementations: performant, observable data abstractions that contain minimal business logic (i.e. workflows).

Timelines


For small microservices which contain no business logic but are simply data abstractions, you can expect a production-grade microservice to be delivered within 4 weeks of the contracted start date.

For microservices which contain basic business logic - i.e. workflows - the timeline will generally be 6-8 weeks at a minimum. It is important to note that microservices which contain business logic generally don’t operate alone, so expect to contract several microservices with us.

Languages


Not all languages are created equal, and microservices are no exception. To meet performance expectations of the services we provide, we curate the languages that we offer:

  • Go
  • C#

For each of these languages, we choose ecosystem-specific web frameworks that provide performance, scalability, and stability. Each language allows us to offer different types of backend microservices with different features, functionality, and runtime options.

Interfaces


There are many different ways to interface with a service, and we provide different types of connectivity for each language, summarized in the table below.

Language   Framework   Connectivity   HTTP Version        Message Type
Go         Connect     HTTP & gRPC    HTTP/2 or HTTP/3    JSON and Protobuf
Go         gRPC        gRPC           HTTP/2              Protobuf
Go         HTTP        HTTP           HTTP/2 or HTTP/3    JSON
C#         gRPC        gRPC           HTTP/2 or HTTP/3    Protobuf
C#         HTTP        HTTP           HTTP/2 or HTTP/3    JSON

*For HTTP-only connectivity, we can offer path-based routing so that resources can be scoped individually.
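
As a rough illustration of the plain HTTP + JSON option above, here is a minimal Go handler sketch. The route, response shape, and port are placeholders rather than part of any delivered service; the Connect and gRPC options follow the same shape but rely on code generated from your Protobuf definitions, so they are not shown inline.

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // healthResponse is a hypothetical response body used only for illustration.
    type healthResponse struct {
        Status string `json:"status"`
    }

    func main() {
        mux := http.NewServeMux()

        // Path-based routing: each resource gets its own scoped route prefix,
        // which is what the HTTP-only footnote above refers to.
        mux.HandleFunc("/v1/health", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            _ = json.NewEncoder(w).Encode(healthResponse{Status: "ok"})
        })

        // net/http negotiates HTTP/2 automatically when TLS is configured;
        // plain HTTP/1.1 keeps this sketch minimal.
        log.Fatal(http.ListenAndServe(":8080", mux))
    }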

Data Storage


For data storage, we currently offer three options:

  • PostgreSQL
  • CockroachDB
  • S3-compatible Object Storage

PostgreSQL & CockroachDB


While many types of SQL databases exist to solve a myriad of problems, we've found over the years that nothing beats PostgreSQL in performance, simplicity, and features. PostgreSQL is highly performant and scalable in single-server mode, but we do not recommend attempting to replicate it. For distributed workloads, we recommend CockroachDB as the core database provider.

After your microservice has been delivered, you are welcome to replace the database provider with one of your own preference. It is important to note, however, that we will not support any database other than PostgreSQL or CockroachDB due to complexity and/or cost.
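
Because CockroachDB speaks the PostgreSQL wire protocol, switching between the two is usually just a connection-string change. Below is a minimal Go sketch of that idea, assuming the pgx driver and a hypothetical DATABASE_URL environment variable.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "github.com/jackc/pgx/v5/pgxpool"
    )

    func main() {
        // DATABASE_URL is a hypothetical variable name used for illustration.
        // Pointing it at PostgreSQL or CockroachDB requires no code changes,
        // since both speak the same wire protocol.
        dsn := os.Getenv("DATABASE_URL")

        pool, err := pgxpool.New(context.Background(), dsn)
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer pool.Close()

        // A trivial round-trip query to confirm connectivity.
        var now time.Time
        if err := pool.QueryRow(context.Background(), "SELECT now()").Scan(&now); err != nil {
            log.Fatalf("query: %v", err)
        }
        log.Printf("database time: %s", now)
    }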

S3-compatible Object Storage


If your microservice needs to store or access files, binaries, and the like, we will provide an S3-compatible implementation. Given the sheer number of object stores on the market, it is not feasible for us to support more than one while still providing a high-quality service within the strict timelines you need.
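
As a sketch of what the S3-compatible layer can look like from Go, assuming the MinIO client library with placeholder endpoint, credentials, and bucket names:

    package main

    import (
        "context"
        "log"
        "strings"

        "github.com/minio/minio-go/v7"
        "github.com/minio/minio-go/v7/pkg/credentials"
    )

    func main() {
        // Endpoint and credentials are placeholders; any S3-compatible store
        // (MinIO, Ceph RGW, AWS S3, and so on) can sit behind the same client.
        client, err := minio.New("objects.example.com", &minio.Options{
            Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
            Secure: true,
        })
        if err != nil {
            log.Fatalf("client: %v", err)
        }

        // Upload a small object; the bucket and object names are hypothetical.
        body := strings.NewReader("hello from the microservice")
        _, err = client.PutObject(context.Background(), "service-artifacts", "greeting.txt",
            body, int64(body.Len()), minio.PutObjectOptions{ContentType: "text/plain"})
        if err != nil {
            log.Fatalf("put: %v", err)
        }
        log.Println("object stored")
    }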

Caching


Working with lots of data is expensive, and we can offset the financial and performance costs with caching! We currently support two caching options:

  • In-memory
  • Redis

Our in-memory caching solution depends on the language you select, but it will be highly performant. For Redis, we develop against a single-node deployment; if you have a Redis cluster, just let us know and we will leave the necessary connection hooks in place for you to consume out of the box.
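
Here is a minimal sketch of the cache-aside pattern against a single Redis node, using the go-redis client; the key, TTL, and loadFromDatabase helper are hypothetical stand-ins.

    package main

    import (
        "context"
        "log"
        "time"

        "github.com/redis/go-redis/v9"
    )

    // loadFromDatabase stands in for an expensive data operation; it is a
    // placeholder, not real service logic.
    func loadFromDatabase(id string) string {
        return "value-for-" + id
    }

    func main() {
        ctx := context.Background()

        // Single-node Redis, as described above.
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        key := "user:42"

        // Cache-aside: try the cache first, fall back to the source of truth,
        // then populate the cache with a TTL.
        val, err := rdb.Get(ctx, key).Result()
        if err == redis.Nil {
            val = loadFromDatabase("42")
            if err := rdb.Set(ctx, key, val, 5*time.Minute).Err(); err != nil {
                log.Printf("cache set failed: %v", err)
            }
        } else if err != nil {
            log.Fatalf("cache get: %v", err)
        }

        log.Printf("value: %s", val)
    }

A cluster deployment would swap redis.NewClient for redis.NewClusterClient; the pattern itself stays the same.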

Observability


Regardless of language, each microservice delivered to you will use an OpenTelemetry-compliant SDK to provide full observability for metrics and monitoring of the microservice.
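
As an example of what that wiring can look like in Go, the sketch below sets up an OpenTelemetry tracer provider that exports spans over OTLP/gRPC; the service and span names are placeholders, and the collector endpoint is taken from the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable.

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    func main() {
        ctx := context.Background()

        // Export spans over OTLP/gRPC to whatever collector the
        // OTEL_EXPORTER_OTLP_ENDPOINT environment variable points at.
        exporter, err := otlptracegrpc.New(ctx)
        if err != nil {
            log.Fatalf("exporter: %v", err)
        }

        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
        defer func() { _ = tp.Shutdown(ctx) }()
        otel.SetTracerProvider(tp)

        // Instrument a unit of work with a span; the names are illustrative.
        tracer := otel.Tracer("example-service")
        _, span := tracer.Start(ctx, "handle-request")
        // ... the actual work would happen here ...
        span.End()
    }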

Logging


All of your microservices will log in JSON format. This allows for the easiest integration with your specific logging infrastructure.
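
With Go's standard log/slog package, for instance, JSON output is a one-line handler choice; the field names below are illustrative only.

    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // Structured JSON logs on stdout, ready for whatever log shipper or
        // aggregation stack you already run.
        logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

        // Hypothetical fields for a request log line.
        logger.Info("request handled",
            "method", "GET",
            "path", "/v1/health",
            "status", 200,
        )
    }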

Configuration


Configuration will be handled through environment variables and/or a JSON-formatted file. JSON keeps the format declarative while remaining easily readable by both humans and machines.
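
A minimal Go sketch of that layering, with a hypothetical Config struct and environment variable names (CONFIG_FILE, LISTEN_ADDR, DATABASE_URL) used purely for illustration:

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    // Config is a hypothetical shape used only to illustrate the approach.
    type Config struct {
        ListenAddr  string `json:"listen_addr"`
        DatabaseURL string `json:"database_url"`
    }

    func main() {
        // Start from defaults, then load a JSON file if one is provided...
        cfg := Config{ListenAddr: ":8080"}
        if path := os.Getenv("CONFIG_FILE"); path != "" {
            data, err := os.ReadFile(path)
            if err != nil {
                log.Fatalf("read config: %v", err)
            }
            if err := json.Unmarshal(data, &cfg); err != nil {
                log.Fatalf("parse config: %v", err)
            }
        }

        // ...and let environment variables override individual fields.
        if v := os.Getenv("LISTEN_ADDR"); v != "" {
            cfg.ListenAddr = v
        }
        if v := os.Getenv("DATABASE_URL"); v != "" {
            cfg.DatabaseURL = v
        }

        log.Printf("listening on %s", cfg.ListenAddr)
    }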

Runtime


Once your service has been built, we will package it as both a single binary and a container image. We only package for Linux and do not offer Windows-based containers or binaries due to licensing restrictions.

For containers, we will package your application with Helm, optionally including the necessary dependencies if desired. While we do not deploy your services into your environment, we will ensure that each microservice's Helm chart is fully tested against a certified Kubernetes distribution.

Testing


Unit and functional tests will be provided with each microservice. We cannot guarantee a specific coverage percentage of tests due to several factors, but we aim for as much test coverage as is feasible.
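
As an illustration of the style, here is a table-driven unit test using Go's standard testing package; the Greet function is a trivial stand-in for real service logic, and both pieces live in a single _test.go file only to keep the sketch self-contained.

    package greeting

    import "testing"

    // Greet is a placeholder function so the test below has something to exercise.
    func Greet(name string) string {
        if name == "" {
            name = "world"
        }
        return "hello, " + name
    }

    // TestGreet runs a set of table-driven cases with `go test`.
    func TestGreet(t *testing.T) {
        cases := []struct {
            name  string
            input string
            want  string
        }{
            {"named", "Ada", "hello, Ada"},
            {"empty falls back to default", "", "hello, world"},
        }

        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                if got := Greet(tc.input); got != tc.want {
                    t.Errorf("Greet(%q) = %q, want %q", tc.input, got, tc.want)
                }
            })
        }
    }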

Still Interested?