Go Food Delivery Microservices
is an imaginary but practical food delivery microservices application, built with Golang and different software architectures and technologies like Microservices Architecture, Vertical Slice Architecture, CQRS Pattern, Domain-Driven Design (DDD), Event Sourcing, Event-Driven Architecture, and Dependency Injection. For communication between independent services, we use asynchronous messaging over RabbitMQ, and sometimes we use synchronous communication for real-time needs using REST and gRPC calls.
You can use this project as a template to build your own backend microservice project in the Go language.
💡 This application is not business-oriented
and my focus is mostly on the technical part; I just want to implement a sample using different technologies, software architecture designs, principles, and all the things we need for creating a microservices app.
🚀 This application is in progress
and I will add new features and technologies over time.
For your simplest Golang projects, you can use my go-vertical-slice-template project. For more advanced projects, with two microservices and a modular monolith architecture, check the C# versions:
- https://github.com/mehdihadeli/food-delivery-microservices
- https://github.com/mehdihadeli/food-delivery-modular-monolith
- ✅ Using Vertical Slice Architecture as a high-level architecture
- ✅ Using Event-Driven Architecture on top of RabbitMQ Message Broker with a custom Event Bus
- ✅ Using Data-Centric Architecture based on CRUD in Catalogs Read Service
- ✅ Using Event Sourcing in Audit-based services like Orders Service
- ✅ Using CQRS Pattern and Mediator Pattern on top of the Go-MediatR library
- ✅ Using Dependency Injection and Inversion of Control on top of the uber-go/fx library
- ✅ Using RESTful APIs with the Echo framework and Swagger with the swaggo/swag library
- ✅ Using gRPC for internal service communication
- ✅ Using go-playground/validator and go-ozzo/ozzo-validation for validating input data in the REST and gRPC calls
- ✅ Using Postgres and EventStoreDB as write databases with fully supported transactions (ACID)
- ✅ Using MongoDB and Elastic Search as read databases (NoSQL)
- ✅ Using OpenTelemetry for collecting Distributed Tracing with Jaeger and Zipkin
- ✅ Using OpenTelemetry for collecting Metrics with Prometheus and Grafana
- ✅ Using Unit Tests for testing small units, with Mockery for mocking dependencies
- ✅ Using End-to-End Tests and Integration Tests for testing features with all of their real dependencies using docker containers (cleanup tests) and the testcontainers-go library
- ✅ Using Zap and structured logging
- ✅ Using Viper for configuration management
- ✅ Using docker and docker-compose for deployment
- 🚧 Using Domain-Driven Design in some of the services like Catalogs Write Service and Orders Service
- 🚧 Using Helm and Kubernetes for deployment
- 🚧 Using the Outbox Pattern in all microservices for Guaranteed Delivery or At-least-once Delivery
- 🚧 Using the Inbox Pattern for handling idempotency on the receiver side and Exactly-once Delivery
- ✔️ labstack/echo - High performance, minimalist Go web framework
- ✔️ uber-go/zap - Blazing fast, structured, leveled logging in Go
- ✔️ emperror/errors - Drop-in replacement for the standard library errors package and github.com/pkg/errors
- ✔️ open-telemetry/opentelemetry-go - OpenTelemetry Go API and SDK
- ✔️ open-telemetry/opentelemetry-go-contrib - Collection of extensions for OpenTelemetry-Go
- ✔️ rabbitmq/amqp091-go - An AMQP 0-9-1 Go client maintained by the RabbitMQ team. Originally by @streadway: streadway/amqp
- ✔️ stretchr/testify - A toolkit with common assertions and mocks that plays nicely with the standard library
- ✔️ mehdihadeli/go-mediatr - Mediator pattern implementation in Golang, helpful in creating CQRS-based applications
- ✔️ grpc-ecosystem/go-grpc-middleware - Golang gRPC middlewares: interceptor chaining, auth, logging, retries and more
- ✔️ grpc/grpc-go - The Go language implementation of gRPC. HTTP/2 based RPC
- ✔️ elastic/go-elasticsearch - The official Go client for Elasticsearch
- ✔️ avast/retry-go - Simple golang library for retry mechanism
- ✔️ ahmetb/go-linq - .NET LINQ capabilities in Go
- ✔️ EventStore/EventStore-Client-Go - Go client for Event Store version 20 and above
- ✔️ olivere/elastic/v7 - Deprecated: use the official Elasticsearch client for Go instead
- ✔️ swaggo/swag - Automatically generate RESTful API documentation with Swagger 2.0 for Go
- ✔️ prometheus/client_golang - Prometheus instrumentation library for Go applications
- ✔️ mongodb/mongo-go-driver - The Go driver for MongoDB
- ✔️ go-redis/redis - Type-safe Redis client for Golang
- ✔️ go-gorm/gorm - The fantastic ORM library for Golang, aims to be developer friendly
- ✔️ go-playground/validator - Go Struct and Field validation, including Cross Field, Cross Struct, Map, Slice and Array diving
- ✔️ go-ozzo/ozzo-validation - Validate data of different types and provide a rich set of validation rules right out of the box
- ✔️ spf13/viper - Go configuration with fangs
- ✔️ caarlos0/env - A simple and zero-dependencies library to parse environment variables into structs
- ✔️ joho/godotenv - A Go port of Ruby's dotenv library (loads environment variables from .env files)
- ✔️ mcuadros/go-defaults - Go structures with default values using tags
- ✔️ uber-go/fx - A dependency injection based application framework for Go
- ✔️ testcontainers/testcontainers-go - A Go package that makes it simple to create and clean up container-based dependencies for automated integration/smoke tests
Each microservice is based on these project structures:
In this project I used Vertical Slice Architecture (see Restructuring to a Vertical Slice Architecture) together with a feature folder structure.
- We treat each request as a distinct use case or slice, encapsulating and grouping all concerns from front end to back end.
- When we add or change a feature in an n-tier architecture, we typically touch many different "layers" of the application: we change the user interface, add fields to models, modify validation, and so on. Instead of coupling across layers, we couple vertically along a slice, and each change affects only one slice.
- We minimize coupling between slices and maximize coupling within a slice.
- With this approach, each of our vertical slices can decide for itself how to best fulfill the request. New features only add code; we're not changing shared code and worrying about side effects. For implementing Vertical Slice Architecture, the CQRS pattern is a good match.
Also, here I used CQRS to decompose my features into very small parts, which makes our application:
- maximize performance, scalability, and simplicity
- easy to extend: adding new features is very easy without any breaking changes in other parts of the code; new features only add code, and we're not changing shared code or worrying about side effects
- easy to maintain: any change only affects one command or query (one slice) and avoids breaking changes in other parts
- better at separating concerns and cross-cutting concerns (with the help of MediatR behavior pipelines), instead of one big service class doing a lot of things
By using CQRS, our code will be more aligned with SOLID principles, especially with:
- Single Responsibility rule - because logic responsible for a given operation is enclosed in its own type.
- Open-Closed rule - because to add a new operation you don't need to edit any of the existing types; instead, you add a new file with a new type representing that operation.
Here, instead of some technical splitting (for example, a folder or layer for our services, controllers, and data models), which increases dependencies between these technical groupings and forces jumping between layers or folders, we cut each business functionality into vertical slices, and inside each of these slices we have a technical folder structure specific to that feature (command, handlers, infrastructure, repository, controllers, data models, ...).
Usually, when we work on a given functionality we need some technical things for example:
- API endpoint (Controller)
- Request Input (Dto)
- Request Output (Dto)
- Some class to handle the request, for example a Command and Command Handler or a Query and Query Handler
- Data Model
Now we can have all of these things beside each other, which decreases jumping between and dependencies across layers or folders.
Keeping such a split works great with CQRS. It segregates our operations and slices the application code vertically instead of horizontally. In our CQRS pattern, each command/query handler is a separate slice. This is where we can reduce coupling between layers. Each handler can be a separate code unit, even copy/pasted. Thanks to that, we can tune a specific handler so it doesn't follow general conventions (e.g. use a custom SQL query or even different storage). In a traditional layered architecture, when we change a core generic mechanism in one layer, it can impact all methods.
TODO
In this app, I use Conventional Commits, and to enforce its rules I use conventional-changelog/commitlint and typicode/husky with a pre-commit hook. To read more about the setup, see the commitlint docs and these articles.
For applying golangci-lint in IDE level I use intellij-plugin-golangci-lint plugin.
For formatting, I use mvdan/gofumpt, goimports-reviser, golines, and golangci-lint in my GoLand, and for each package there is a guide on how to set it up in your IDE; for example, here is the configuration for goimports-reviser.
Also, you can run this formatting automatically with husky before any commit by installing husky in your dev environment:
- Install tools:
make install-tools
- Initialize the NPM package:
npm init
- Install CommitLint:
npm install --save-dev @commitlint/config-conventional @commitlint/cli
- Create the commitlint.config.js file with this content:
module.exports = { extends: ['@commitlint/config-conventional'] };
- Install Husky:
npm install husky --save-dev
- Add a prepare script to the package.json file for installing and activating the husky hooks that we will add in the next steps:
npm pkg set scripts.prepare="husky install"
- Create the Husky folder:
mkdir .husky
- Adding hooks for linting and formatting before commit:
npx husky add .husky/pre-commit "make format && git add -A ."
npx husky add .husky/pre-commit "make lint && git add -A ."
- Adding CommitLint to the husky before commit:
npx husky add .husky/commit-msg 'npx --no -- commitlint --edit ${1}'
- Activate and install all husky hooks with this command:
npm run prepare
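After the steps above, the generated .husky/pre-commit hook should look roughly like this (layout assumed from husky v8; your husky version may produce a slightly different header):

```shell
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

make format && git add -A .
make lint && git add -A .
```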
For live reloading in dev mode I use the air library. For a guide on using these tools, you can read this article.
To run each microservice in live-reload mode, go inside each service folder and type the below command after installing air:
air
The application is in development status. Feel free to submit a pull request or create an issue according to the Contribution Guide.
The project is under MIT license.