The book presents strategies for migrating from a monolithic architecture to microservices. It introduces both styles, discusses how to make the migration work inside the company, and describes patterns that support this goal. It stresses that the vision and strategy should be shared with the whole company and embraced by the people involved. The book also raises the question of whether a migration is really necessary and uses DDD concepts to find the Bounded Contexts.

Some principles summarize the first part:

  • Make the most of your monolith
  • Adopt microservices for the right reasons
  • It’s not just architecture
  • Get the support of the business (identify the components, groups of components, and dependencies)
  • Migrate incrementally
  • Know your starting point
  • Begin with the end in mind
  • Migrate high-value modules first
  • Success is improved velocity and reliability
  • If it hurts, don’t do it


Patterns

Strangler Fig Application
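
The idea is to put a proxy (or routing layer) in front of the monolith and redirect the already migrated calls to the new service, while everything else still hits the monolith. A minimal sketch using only the Python standard library; the /orders path and the MONOLITH/NEW_SERVICE addresses are hypothetical, not from the book:

```python
# Strangler-fig routing sketch (standard library only).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

MONOLITH = "http://localhost:8080"     # hypothetical: the existing monolith
NEW_SERVICE = "http://localhost:9090"  # hypothetical: the extracted microservice

class StranglerProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Already migrated functionality is redirected; the rest still goes to the monolith.
        target = NEW_SERVICE if self.path.startswith("/orders") else MONOLITH
        with urlopen(target + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StranglerProxy).serve_forever()
```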

UI Decomposition

  • The UI should reflect the migration
  • It can be done at widget or page level; it also works with micro frontends.
  • It allows vertical slices of the functionality to be migrated

Branch by Abstraction

  • An alternative when the code inside the monolith is still being changed or improved.
  • Summary: create an abstraction over the existing functionality and change the clients to use it; then create a new implementation that calls the microservice and switch the abstraction to it; when the process is finished, clean up so that only the microservice implementation remains (see the sketch after this list).
  • Links:
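
A minimal sketch of those steps; all class and function names (NotificationSender and friends) are hypothetical:

```python
# Branch by abstraction sketch; all names are hypothetical.
from abc import ABC, abstractmethod

class NotificationSender(ABC):
    """Step 1: the abstraction that clients now depend on."""
    @abstractmethod
    def send(self, user_id: str, message: str) -> None: ...

class MonolithNotificationSender(NotificationSender):
    """The existing in-process implementation."""
    def send(self, user_id: str, message: str) -> None:
        print(f"[monolith] notify {user_id}: {message}")

class MicroserviceNotificationSender(NotificationSender):
    """Step 2: new implementation that would call the extracted service."""
    def send(self, user_id: str, message: str) -> None:
        print(f"[microservice] POST /notifications for {user_id}: {message}")

USE_MICROSERVICE = True  # Step 3: switch clients via configuration / feature flag

def notification_sender() -> NotificationSender:
    return MicroserviceNotificationSender() if USE_MICROSERVICE else MonolithNotificationSender()

# Step 4: once the microservice is stable, delete MonolithNotificationSender and the flag.
notification_sender().send("42", "your order has shipped")
```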

Parallel Run

  • An alternative when it is necessary to verify that the new solution produces the same results as the old one.
  • Used when the changes are high risk
  • Techniques: Spies and GitHub Scientist
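
A minimal sketch in the spirit of GitHub Scientist: the old implementation stays authoritative and the new one runs only for comparison. The pricing functions are hypothetical and intentionally round differently so the mismatch log fires:

```python
# Parallel run sketch: the old path stays authoritative, the new one is only compared.
import logging

logging.basicConfig(level=logging.WARNING)

def old_price(order_total: float) -> float:
    return round(order_total * 1.1, 2)       # hypothetical legacy calculation

def new_price(order_total: float) -> float:
    return float(round(order_total * 1.1))   # hypothetical new calculation (rounds differently)

def price_with_parallel_run(order_total: float) -> float:
    control = old_price(order_total)         # the result callers actually receive
    try:
        candidate = new_price(order_total)
        if candidate != control:
            logging.warning("parallel run mismatch: old=%s new=%s", control, candidate)
    except Exception:
        logging.exception("new implementation failed during parallel run")
    return control

print(price_with_parallel_run(99.99))        # logs a mismatch, still returns 109.99
```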

Decorating Collaborator

  • It can be used when it is necessary to add a feature but the monolith cannot (or should not) be changed
  • Use a proxy to redirect to the new functionality.
  • It works best when the information required is available in the request and response
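
A minimal sketch, assuming a hypothetical order flow: the monolith handles the call unchanged, and the "decoration" uses only data already present in the request and response to trigger the new behavior:

```python
# Decorating collaborator sketch: the monolith call is untouched; the proxy adds behavior
# around it using only data present in the request/response. Names are hypothetical.
def place_order_in_monolith(customer_id: str, total: float) -> dict:
    return {"order_id": "o-123", "customer_id": customer_id, "total": total}

def award_loyalty_points(customer_id: str, total: float) -> None:
    print(f"[loyalty service] {customer_id} earns {int(total)} points")

def place_order(customer_id: str, total: float) -> dict:
    response = place_order_in_monolith(customer_id, total)   # monolith stays unchanged
    award_loyalty_points(response["customer_id"], response["total"])  # the added feature
    return response

print(place_order("c-7", 42.0))
```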

Change Data Capture

  • React to changes in the monolith's data when the required behavior cannot be hooked into the monolith itself
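
A minimal sketch of trigger-based change data capture using sqlite3; in practice a tool such as Debezium would tail the database's transaction log instead, and the table names here are hypothetical:

```python
# Change data capture via a trigger-fed changelog table (sqlite3); names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE customers_changelog (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    customer_id INTEGER,
    op TEXT);
CREATE TRIGGER capture_customer_insert AFTER INSERT ON customers
BEGIN
    INSERT INTO customers_changelog (customer_id, op) VALUES (NEW.id, 'INSERT');
END;
""")

# The monolith keeps writing as it always did...
conn.execute("INSERT INTO customers (name) VALUES ('Ada')")

# ...and a separate process reacts to the captured changes without touching monolith code.
for customer_id, op in conn.execute("SELECT customer_id, op FROM customers_changelog"):
    print(f"propagate {op} of customer {customer_id} to the new service")
```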


Database Patterns

Shared Database: best limited to read-only static reference data or used through the Database-as-a-Service pattern [1][2][3]

Database View: (1) create a view over the legacy schema; (2) useful for read-only access [1][2]
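
A minimal sqlite3 sketch (hypothetical schema): the view exposes only the columns external consumers are allowed to read:

```python
# Database view sketch (sqlite3, hypothetical schema): a read-only projection of legacy data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE legacy_customers (id INTEGER, name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO legacy_customers VALUES (1, 'Ada', 'not-for-sharing')")

# The view hides the columns external consumers must not depend on.
conn.execute("CREATE VIEW shared_customers AS SELECT id, name FROM legacy_customers")
print(conn.execute("SELECT * FROM shared_customers").fetchall())  # [(1, 'Ada')]
```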

Database Wrapping Service[1][2]

  • Allows controlling what can be shared and what should be hidden
  • Useful with a complex schema
  • Similar to a view, but the logic lives in code and can include write logic.
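
A minimal sketch of a wrapping service (hypothetical schema and method names): callers go through the class instead of the schema, so what is shared is controlled and writes can carry extra logic:

```python
# Database wrapping service sketch (hypothetical schema and method names).
import sqlite3
from typing import Optional

class CustomerDataService:
    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn

    def customer_name(self, customer_id: int) -> Optional[str]:
        # Only the name is shared; the rest of the schema stays hidden.
        row = self._conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()
        return row[0] if row else None

    def rename_customer(self, customer_id: int, new_name: str) -> None:
        # Write logic lives in code, which a plain view could not offer.
        if not new_name.strip():
            raise ValueError("name must not be blank")
        self._conn.execute(
            "UPDATE customers SET name = ? WHERE id = ?", (new_name, customer_id))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
service = CustomerDataService(conn)
service.rename_customer(1, "Ada Lovelace")
print(service.customer_name(1))
```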

Database as a Service[1][2]

  • Create a dedicated read-only database for a specific purpose, exposed externally behind an API service
  • Key point: how to keep it updated when the source changes: a change data capture system (e.g., Debezium), a batch process that copies the data, or listening to events fired by the monolith

Aggregate Exposing Monolith [1]

  • A microservice needs to interact with aggregate data that still lives in the monolith
  • Access that data through an API exposed by the monolith

Change Data Ownership: a microservice needs data that is still in the monolith but should be under the control of the newly extracted service, so ownership moves to that service [1]

Synchronize Data in Application: data synchronization jobs are by far the most common type of integration job, in which data is taken from one or more systems and moved into another, keeping the data in sync (in the cited source's case, Salesforce). Data can flow from one system to another and back, which sometimes causes data conflicts that must be dealt with. Getting synchronization right can be tricky, particularly when heavy transformations and/or summarizations are involved. [1]

    Steps:
  • 1. Bulk synchronize data
  • 2. Synchronize on write, read from old schema.
  • 3. Synchronize on write, read from new schema.
  • 4. Decommission old schema.

Use it when splitting the schema before splitting the application code, and make sure data stays synchronized between the microservice and the monolith. [2]
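
A minimal sketch of steps 2-4 after the initial bulk copy, assuming hypothetical in-memory "schemas": writes go to both, and a flag controls which side serves reads until the old schema is decommissioned:

```python
# Synchronize-on-write sketch (hypothetical in-memory schemas).
old_schema = {}   # monolith's schema
new_schema = {}   # microservice's schema

READ_FROM_NEW = False   # flip to True when moving from step 2 to step 3

def save_invoice(invoice_id: str, invoice: dict) -> None:
    old_schema[invoice_id] = invoice   # drop this write at step 4 (decommission old schema)
    new_schema[invoice_id] = invoice   # synchronize on write

def load_invoice(invoice_id: str) -> dict:
    source = new_schema if READ_FROM_NEW else old_schema
    return source[invoice_id]

save_invoice("inv-1", {"total": 99.0})
print(load_invoice("inv-1"))
```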

Tracer Write

  • Both databases exist during the transition, and a new service owns the new database. The old database can still be read, but writes go only to the new database, which reduces the maintenance cost.
  • It is necessary to take care of consistency between the two databases.

Split Database: Repository per bounded context

  • This pattern is used when you decide to split the database before the code
  • It is a logical way to separate the data by context using the repository pattern; the application can have many repository interfaces accessing the same database.
  • It helps to focus on the monolith and understand the best way to split it; the repository layer makes it easier to see which microservices could exist.
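
A minimal sketch with hypothetical contexts and tables: one physical database, but each bounded context talks only to its own repository, which hints at future service boundaries:

```python
# Repository-per-bounded-context sketch (hypothetical contexts: catalog and finance).
import sqlite3

class CatalogRepository:
    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn
    def find_item(self, sku: str):
        return self._conn.execute(
            "SELECT sku, name FROM items WHERE sku = ?", (sku,)).fetchone()

class FinanceRepository:
    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn
    def ledger_total(self) -> float:
        return self._conn.execute("SELECT COALESCE(SUM(amount), 0) FROM ledger").fetchone()[0]

conn = sqlite3.connect(":memory:")   # still a single physical database
conn.executescript("""
CREATE TABLE items (sku TEXT, name TEXT);
CREATE TABLE ledger (amount REAL);
INSERT INTO items VALUES ('sku-1', 'Blue album');
INSERT INTO ledger VALUES (12.5), (7.5);
""")
print(CatalogRepository(conn).find_item("sku-1"))
print(FinanceRepository(conn).ledger_total())
```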

Split Database: Database per bounded context

  • This pattern is used when you decide to split the database before the code
  • It is a physical way to separate the data by context: the decomposition happens before the code is separated, and each bounded context gets its own database.
  • One point to keep an eye on is the maintenance of that structure if the monolith stays around for a long time.

Split the code first: Monolith as data access layer

  • Create a service (API) in the monolith through which the data is accessed.
  • Best when the code that manages the data is still in the monolith and has not been migrated

Split the code first: Multischema storage

  • Persist new data in a new schema in the new database, while the existing data stays in the monolith's schema.
  • One consequence is that an entity can have part of its data in one schema and the rest in the other database. It works well when the application is adding new functionality

Split Table: when different bounded contexts use different parts of a table, it can be useful to split that table along those lines before extracting microservices.

Move Foreign-Key Relationship to Code

  • Split two tables that have a relationship between them.
  • Moving the foreign key out of the database raises concerns about consistency (deletes) and performance (the join now happens in code).
  • Handle it either by avoiding deletion or by making sure the other system can survive without the referenced information.
  • Patterns:
    • duplicate static data -> each service keeps its own copy of the data
    • dedicated reference data schema -> relocate the reference data into a dedicated schema that both services can access
    • static reference data library -> ship the data in a shared library; tricky with multiple technologies, because different versions of the library become necessary; better for small volumes of data.
    • static reference data service -> create a dedicated service for the reference data; the main discussion is its cost.
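
A minimal sketch of the join moving into code, with a hypothetical catalog client: the order side looks the item up by key and copes when it is missing instead of relying on a database foreign key:

```python
# Foreign key moved into code (hypothetical catalog client and order data).
from typing import Optional

CATALOG = {"sku-1": "Blue album"}   # stand-in for data owned by the catalog service

def fetch_item_name(sku: str) -> Optional[str]:
    # In reality this would be an HTTP call to the catalog microservice.
    return CATALOG.get(sku)

def describe_order(order: dict) -> str:
    name = fetch_item_name(order["sku"])
    # The join and the consistency handling now happen in code.
    return f"order {order['id']}: {name if name else 'item no longer available'}"

print(describe_order({"id": "o-1", "sku": "sku-1"}))
print(describe_order({"id": "o-2", "sku": "sku-gone"}))
```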

Transaction

Two-Phase Commit algorithm: transactional changes in a distributed system where many processes need to be executed as part of a bigger operation.

  • 1. Voting phase: a coordinator contacts the participants and asks each one to confirm it can make the change
  • 2. Commit phase: if all participants agreed to change state, the commit goes ahead; if at least one of them fails or refuses, the whole operation is aborted.

It cannot guarantee that the commits happen at exactly the same time.

Use it only for short-lived operations, to avoid blocking the system with big operations.
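
A minimal sketch of the two phases with simulated participants (all names hypothetical):

```python
# Two-phase commit sketch with simulated participants (hypothetical names).
class Participant:
    def __init__(self, name: str, can_commit: bool):
        self.name, self.can_commit = name, can_commit
    def prepare(self) -> bool:   # voting phase: answer whether the change can be applied
        return self.can_commit
    def commit(self) -> None:
        print(f"{self.name}: committed")
    def abort(self) -> None:
        print(f"{self.name}: rolled back")

def two_phase_commit(participants) -> bool:
    if all(p.prepare() for p in participants):   # phase 1: voting
        for p in participants:                   # phase 2: commit everywhere
            p.commit()
        return True
    for p in participants:                       # any negative vote aborts the whole operation
        p.abort()
    return False

two_phase_commit([Participant("orders-db", True), Participant("loyalty-db", False)])
```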

Distributed transactions, just say no: keep the state in a single database and let a single service manage that state.

Saga [1][2][3]

  • It is used when you need to break the data apart but do not want to manage distributed transactions, and when multiple operations must be coordinated without locking the system
  • Long Lived Transactions (LLT): transactions that can take a long time and persist state in the database.
  • LLTs should be broken into a sequence of transactions, each one short-lived and changing a small part of the database.
  • In case of failure:
    • backward recovery: roll back by cleaning up / undoing previously committed steps; the undo is itself a new (compensating) transaction
    • forward recovery: continue the saga from the point of failure
  • Implementation: [4]
    • Orchestrated Saga [5][6]: centralized coordination and tracking; a BPM tool can support it [7]
    • Choreographed Saga [8]: no central coordinator; a loosely coupled model; tracking is more complicated.
    • It's possible to mix the strategies.
    • The choreographed strategy can be better when multiple teams are working together.
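
A minimal sketch of an orchestrated saga with backward recovery, using hypothetical order-processing steps: each step is a short transaction, and on failure the orchestrator runs the compensations for the steps already completed:

```python
# Orchestrated saga sketch with backward recovery (hypothetical steps and failure).
def reserve_stock(order): print("stock reserved")
def release_stock(order): print("stock released (compensation)")
def charge_payment(order): raise RuntimeError("payment declined")
def refund_payment(order): print("payment refunded (compensation)")
def ship_order(order): print("order shipped")
def cancel_shipment(order): print("shipment cancelled (compensation)")

SAGA_STEPS = [
    (reserve_stock, release_stock),
    (charge_payment, refund_payment),
    (ship_order, cancel_shipment),
]

def run_saga(order: dict) -> bool:
    completed = []
    for action, compensation in SAGA_STEPS:
        try:
            action(order)
            completed.append(compensation)
        except Exception as failure:
            print(f"saga failed: {failure}")
            for undo in reversed(completed):   # backward recovery: compensate finished steps
                undo(order)
            return False
    return True

run_saga({"id": "o-1"})
```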


Reference