Modern software requires the use of modern application architectures. Modern application architectures require moving away from monolithic applications and embracing service-based architectures.
Monolithic applications are extremely hard to scale, both in terms of handling growing traffic and in terms of growing the size of the organization that works on the application. The larger the monolith, the slower it is to make changes to the application, the fewer the people who can work on it and manage it effectively, and the greater the likelihood that traffic variations and growth will negatively impact availability.
Service-oriented architectures solve these problems by providing greater flexibility in scaling based on traffic needs, as well as providing a scalable framework to allow larger development organizations to work on the application, thus allowing the applications themselves to get larger and more complex.
So what is involved in a modern application architecture? One of the keys to architecting highly scaled and highly available applications is to use service- or microservice-based architectures. Legacy monolithic application development processes do not provide the capabilities you need to keep your application running at scale while maintaining availability.
Historically, most applications have been built as single, large, distinct monoliths. The single monolith encompasses all business activities for a single application. To implement an improved piece of business functionality, developers must make their changes within that one application, and all developers making changes are working within the same code base. Developers can easily step on one another’s toes and make conflicting changes that result in problems and outages.
In a service-oriented architecture, individual services are created that encompass a specific subset of business logic. These individual services are interconnected to provide the entire set of business logic for the application.
Let’s compare monolith and service-oriented architectures and see why service-oriented architectures provide better organizational scalability and application scalability.
The Monolith Application Versus the Service-Based Application
A traditional large monolithic application contains all logic and functionality within a single component, with individual code segments intertwined and dependent on each other. It is built from a single code base compiled into a single executable that contains most or all aspects of the application. Figure 3-1 shows an application that represents a monolith.
Figure 3-1. A large, complex monolithic application
This is how most applications begin to look if they are constructed and grow as monolithic applications. In Figure 3-1, you see that there are five independent development teams working on overlapping areas of the application. It is impossible to know who is working on what piece of the application at any point in time, and code-change collisions and problems are easy to imagine. Code quality and hence application quality and availability suffer. Additionally, it becomes more and more difficult for individual development teams to make changes without having to deal with the effect of other teams, incompatible changes, and a molasses effect on the organization as a whole.
Figure 3-2 presents the same application constructed as a series of services. Each service has a clear owner. Each team has a clear, nonoverlapping set of responsibilities.
Service-oriented architectures provide the ability to split an application into distinct domains that are each managed by individual groups within your organization. They enable the separation of responsibilities that are critical for building highly scaled applications, allowing work to be done independently on individual services without affecting the work of other groups working on the same overall application.
Figure 3-2. A large, complex service-based application
When building highly scaled applications, service-based application architectures provide the following benefits:
Scaling decisions
Service-based architectures make it possible for scaling decisions to be made at a more granular level, which fosters more efficient system optimization and organization.
Team assignment and focus
Service-based architectures let you assign capabilities to individual teams in such a way that teams can focus on the specific scaling and availability requirements of their system “in the small” and feel confident that their decisions will have the appropriate impact at the larger scale.
Complexity localization
Using service-based architectures, you can treat services as black boxes, so that only the owners of a particular service need to understand the complexity within that service. Other developers need to know only the capabilities the service provides, without knowing anything about how it works internally. This compartmentalization of knowledge and complexity facilitates the creation of larger applications, since individual teams need to understand only their own subsets of the application. This lets you manage these larger applications more effectively.
Testing
Service-based architectures are easier to test than monolithic applications, which increases your reliability.
Service-oriented architectures can, however, increase the complexity of your system as a whole if the service boundaries are not designed properly. This complexity can lead to lower scalability and decreased system availability. So picking the appropriate service and service boundaries is critical.
The Ownership Benefit
Let’s take a look at a pair of services.
In Figure 3-3, we see two services owned by two distinct teams. The Left Service is consuming the capabilities exposed by the Right Service.
Figure 3-3. A pair of services
Let’s look at this diagram from the perspective of the Left Service owner. Obviously that team needs to know the entire structure, complexity, connectedness, interactions, code, and so on for its service. But what does it need to know about the Right Service? As a start, the team needs to know the following:
· The capabilities provided by the service
· How to call those capabilities (the API syntax)
· The meanings and results of calling those capabilities (the API semantics)
That’s the basic information that the Left Service team needs to know. What doesn’t it need to know about the Right Service? Lots of things—for example:
· The Left Service team does not need to know whether the Right Service is a single service or a construction of many subservices.
· It does not need to know what services the Right Service depends on to perform its responsibilities.
· It does not need to know what language(s) the Right Service is written in.
· It does not need to know what hardware or system infrastructure is needed to operate the Right Service.
· It does not even need to know who is operating the Right Service (however, it does need to know how to contact the owner of the Right Service in case there are issues with it).
The Right Service can be as complex or as simple as needed, as shown in Figure 3-4. But to the owners of the Left Service, the Right Service can be thought of as nothing more than a black box, as shown in Figure 3-5. As long as the Left Service owners know what the interface to the box is (the API), they can use the capabilities the black box provides.
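For instance, the Left Service team might capture everything it knows about the Right Service in a thin client wrapper. The sketch below (in Python) is illustrative only; the base URL, the /orders/<order_id>/status path, and the status field are hypothetical stand-ins for whatever the Right Service's API actually defines.

```python
# A minimal sketch of the Left Service's view of the Right Service: just the
# API syntax and semantics, nothing about the Right Service's internals.
import json
import urllib.request


class RightServiceClient:
    """Everything the Left Service knows about the Right Service: its API."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url  # e.g., "https://right-service.internal" (hypothetical)

    def get_order_status(self, order_id: str) -> str:
        # API syntax: GET /orders/<order_id>/status
        # API semantics: returns the current status of the given order.
        url = f"{self.base_url}/orders/{order_id}/status"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["status"]
```

Whether the Right Service is one process or fifty, written in Python or Java, this wrapper is all the Left Service ever touches.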
Figure 3-4. What’s inside the Right Service
Figure 3-5. Right Service complexity hidden from dependencies
To manage this, the Left Service must be able to depend on a contract that the Right Service provides. This contract describes everything the Left Service needs to use the Right Service.
The contract contains two parts:
The capabilities of the service (the API)
What the service does
How to call it and what each call means
The responsiveness of the service
How often can the API be used?
When can it be used?
How fast will the API respond?
Is the API dependable?
Together, this information forms the contract that the owners of the Right Service provide to the Left Service, specifying how the Right Service operates. As long as the Right Service behaves according to this contract, the Left Service doesn’t have to know or care how the Right Service fulfills those commitments.
The responsiveness part of the contract is called a service-level agreement, or SLA. It is a critical component in allowing the Left Service to depend on the Right Service without knowing anything about how the Right Service works. We discuss SLAs in great detail in Chapter 8.
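One way to make such a contract concrete is to write it down as data. The following sketch is only an illustration of the idea; the field names and the numbers are hypothetical examples, not values drawn from any particular service.

```python
# A minimal sketch of a service contract: the API capabilities plus the SLA.
from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceSLA:
    max_requests_per_second: int   # how often the API may be called
    availability_target: float     # e.g., 0.999 means 99.9% uptime
    p99_latency_ms: int            # how fast the API will respond (99th percentile)


@dataclass(frozen=True)
class ServiceContract:
    capabilities: tuple[str, ...]  # the API calls the service provides
    sla: ServiceSLA


# Hypothetical contract the Right Service owners publish to their consumers.
right_service_contract = ServiceContract(
    capabilities=("get_order_status", "create_order"),
    sla=ServiceSLA(max_requests_per_second=500,
                   availability_target=0.999,
                   p99_latency_ms=200),
)
```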
By having clear ownership of each service, teams can focus only on those portions of the system for which they are responsible, along with the API contracts provided by the owners of the services they depend on. This separation of responsibility makes it easier to scale your organization to many more teams; because the coupling between teams is substantially looser, it matters much less how far away (organizationally or physically) one team is from another. As long as the contracts are maintained, you can scale your organization as needed to build larger and more complicated applications.
The Scaling Benefit
Different parts of your application have different scaling needs. The component that generates the home page of your application will be used much more often than the component that generates the user settings page.
By using services with clear APIs and API contracts between them, you can determine and implement the scaling needs required for each service independently. This means that if your home page is the most frequently called page, you can provide more hardware to run that service than you provide for the service that manages your user settings page.
Managing the scaling needs of each service independently enables you to do the following (a brief sizing sketch follows this list):
· Provide more accurate scaling by having the team that owns the specific capability involved closely in the scaling decision.
· Save system resources by not scaling one component simply because another component requires it.
· Provide ownership of scaling decisions to the team that knows the most about the needs of the service (the service owner).
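As a small illustration of independent scaling decisions, each service owner can size their own service from its own traffic. The request rates and per-instance capacity figures below are made-up numbers, not measurements from any real system.

```python
# A minimal sketch: each service is sized independently from its own traffic.
services = {
    "home-page":     {"requests_per_sec": 4000, "capacity_per_instance": 200},
    "user-settings": {"requests_per_sec": 50,   "capacity_per_instance": 200},
}

for name, svc in services.items():
    # Ceiling division: enough instances to cover the load, never fewer than one.
    # One service's growth never forces the other service to scale.
    instances = max(1, -(-svc["requests_per_sec"] // svc["capacity_per_instance"]))
    print(f"{name}: {instances} instance(s)")
```

Here the home page service ends up with 20 instances while the user settings service runs on a single instance, and each team can revisit its own numbers without involving the other.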
Service-based architectures make scaling your organization and your application easier, allowing you to scale to a greater level. In the next chapter, we examine services in greater detail.
Splitting into Services
A service provides some capability that is needed by the rest of the application. Examples include a billing service (which bills customers), an account creation service (which creates and manages customer accounts), and a notification service (which notifies users of events and conditions).
A service is a standalone component. The word standalone is critical. A service meets the following criteria (a minimal code sketch follows the list):
Maintains its own code base
A service has its own code base that is distinct from the rest of your code base.
Manages its own data
A service that requires maintaining state has its own data that is stored in its own data store. The only access to this separated data is via the service’s defined API. No other service may directly touch another service’s data or state information.
Provides capabilities to others
A service has a well-defined set of capabilities, and it provides these capabilities to other services in your application. In other words, it provides an API.
Consumes capabilities from others
A service consumes a well-defined set of capabilities provided by others, in a standard, supported manner. In other words, it uses other services’ APIs.
Single owner
A service is owned and maintained by a single development team within your organization. A single team may own and maintain more than one service, but a single service can have only one team that owns and maintains it.
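The following is a minimal, in-process sketch of the "standalone" criteria: the service owns its own data, and other code reaches that data only through the service's API. The AccountService name and its methods are hypothetical; a real service would expose its API over the network rather than as in-process method calls.

```python
# A minimal sketch of a standalone service: private data, public API.
class AccountService:
    def __init__(self) -> None:
        # The service's own data store (here just a dict); no other service
        # may read or write it directly.
        self._accounts: dict[str, dict] = {}

    # --- the service's API: the only way in ---
    def create_account(self, account_id: str, email: str) -> None:
        self._accounts[account_id] = {"email": email}

    def get_email(self, account_id: str) -> str:
        return self._accounts[account_id]["email"]


# A consuming service uses the API, never the underlying data store.
accounts = AccountService()
accounts.create_account("a-123", "user@example.com")
print(accounts.get_email("a-123"))
```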
What Should Be a Service?
How do you decide when a piece of an application or system should be separated out into its own service?
This is a good question, and it’s one that does not have a single correct answer. Some companies that “service-ize” split their application into many very tiny microservices (hundreds or thousands of them). Others split their application into only a handful of larger services. However, the industry is trending toward smaller microservices, and more of them. Technologies such as Docker and Kubernetes have made this increased number of microservices a more viable system topology by providing an infrastructure for managing a large number of small services.
NOTE
We use the terms services and microservices interchangeably in this book.
Dividing into Services
So how do you decide where service boundaries should be? Company organization, culture, and the type of application can play a major role in determining service boundaries.
Following is a set of guidelines that you can use to determine where service boundaries can be. These are guidelines, not rules, and they are likely to change over time as our industry progresses. They are useful to help individuals begin thinking about services and about what should be a service.
Here at a high level are the guidelines (in order of priority):
Specific business requirements
Are there any specific business requirements (such as accounting, security, or regulatory) that drive where a service boundary should be?
Distinct and separable team ownership
Is the team that owns the functionality distinct and separable (such as in another city, on another floor, or even just under a different manager)? If so, that separation can help specify where a boundary should be.
Naturally separable data
Is the data the service manages naturally separable from other data used in the system? Does putting data in a separate data store overly burden the system?
Shared capabilities/data
Does the candidate service provide shared capabilities that are used by many other services, and do those capabilities require shared data?
Let’s now look at each of these individually and figure out what they mean.
Guideline #1: Specific Business Requirements
In some cases, there will be specific business requirements that dictate where a service boundary should be. These might be regulatory, legal, or security requirements, or some critical business need.
Imagine your system accepts online credit card payments from your customers. How should you collect, process, and store these credit cards and the payments they represent? A good business strategy would be to put the credit card processing in a different service, separate from the rest of the system.
Putting critical business logic into its own service can be a valuable separation to make. For credit card processing, for example, this may be true for several reasons:
Legal/regulatory requirements
There are legal and regulatory requirements around how you store credit card data that require you to treat it differently from other business logic and other business data. Separating credit card processing into a distinct service makes it easier to treat this data differently from the rest of your business data.
Security
You might need additional firewalls around these servers for security reasons.
Validation
You might need to perform additional production testing to verify the security of these capabilities, to a significantly higher standard than for other parts of your system.
Restricting access
You will typically want to restrict access to these servers so that only necessary personnel have access to highly sensitive payment information such as credit cards. You typically do not want or need to provide access to these systems to your entire engineering organization.
Understanding the needs of critical business logic is an important consideration for deciding where service boundaries should be.
Guideline #2: Distinct and Separable Team Ownership
Applications are becoming more and more complicated, and typically larger groups of developers are working on them, often with more specialized responsibilities. Coordination among teams becomes substantially harder as the number of developers, the number of teams, and the number of development locations grow.
Services are a way to give ownership of smaller, distinct, separable modules to different teams.
NOTE
A single service should be owned and operated by a single team that is typically no larger than three to eight developers. That team should be responsible for all aspects of that service.
By doing this, you loosen up the interteam dependencies and make it much easier for individual teams to operate and innovate independently from one another.
As previously stated, a single service should be owned and operated by a single team. The key is to make sure that all aspects of a single service are under the control of a single team. This means that the team is responsible for all development, testing, deployment, performance, and availability aspects of that service.
A single team can successfully manage more than one service, depending on the complexity and activity involved in those services. Additionally, if several services are very similar in nature, it might be easier for a single team to manage all of them.
Separate team for security reasons
Sometimes you want to restrict the number and scope of individuals who have access to the code and data stored within a given service. This is especially true for services that have regulatory or legal constraints, such as the credit card payment processing discussed before. Limiting access to a service with sensitive data can decrease your exposure to issues involved in the compromising of that data. In cases like this, you might physically limit access to the code, the data, and the systems hosting the service to only the key personnel required to support that service.
Additionally, splitting related sensitive data across two or more services, each owned by a distinct team, can reduce the chance of that data being compromised: to assemble the complete data set, an attacker would have to compromise multiple independently owned services.
SPLITTING DATA FOR SECURITY REASONS
When you are processing credit card payments, the credit card numbers themselves can be stored in one service. The secondary information necessary to use those credit cards (such as the billing address and CVV code) could be stored in a second service. By splitting this information across two services, each owned and operated by a separate team, you limit the chance that any one employee can inadvertently or intentionally expose enough data for a rogue agent to use one of your customers’ credit cards inappropriately.
You might even choose not to store the credit card numbers in your services at all and instead store them with a third-party credit card processing company. This helps ensure that, even if one of your services is compromised, the credit card numbers themselves will not be.
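A common way to do this is tokenization: exchange the raw card number for an opaque token issued by the processor, and persist only the token. The sketch below is purely illustrative; the processor URL, endpoint, and response fields are hypothetical, and a real integration would follow the processor's documented API and PCI requirements.

```python
# A minimal sketch of keeping card numbers out of your own services by using a
# third-party processor's tokenization API (hypothetical endpoint and fields).
import json
import urllib.request


def tokenize_card(card_number: str) -> str:
    """Exchange a raw card number for an opaque token; store only the token."""
    payload = json.dumps({"card_number": card_number}).encode()
    req = urllib.request.Request(
        "https://payments.example.com/v1/tokens",  # hypothetical processor endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]  # the only value your service persists
```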
Guideline #3: Naturally Separable Data
One of the requirements for a service is that its managed state and data need to be separate from other data. For a variety of reasons, it is problematic to have multiple independent code bases operating on the same set of data. Separating the code and the ownership is effective only if you also separate the data.
Figure 3-6 shows a service (Service A) that is trying to access data stored in another service (Service B). It illustrates the correct way for Service A to access data stored in Service B, which is for Service A to make an API call to Service B and then let Service B access the data in its database itself.
Figure 3-6. Correct way to share data
If Service A instead tries to access the data for Service B directly without going through Service B’s API, as shown in Figure 3-7, all sorts of problems can occur. This sort of data integration would require tighter coordination between Service A and Service B than is desired, and it can cause problems when data maintenance and schema migration activities need to occur. In general, the accessing of Service B’s data directly by Service A without involving Service B’s business logic in that process can cause serious data versioning and data corruption issues. It should be strictly avoided.
As you can see, determining data division lines is an important characteristic in determining service division lines. Does it make sense for a given service to be the “owner” of its data and provide access to that data only via external service interfaces? If the answer is “yes,” this is a good candidate for a service boundary. If the answer is “no,” it is not a good service boundary.
A service that needs to operate on data owned by another service must do so via published interfaces (APIs) provided by the service that owns that data.
Figure 3-7. Incorrect way to share data
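To make the contrast between Figures 3-6 and 3-7 concrete, here is a small sketch. The URL, path, and field names for Service B's API are hypothetical, and the "incorrect" approach is shown only as a comment.

```python
# A minimal sketch contrasting the two approaches in Figures 3-6 and 3-7.
import json
import urllib.request

SERVICE_B_API = "https://service-b.internal"  # hypothetical internal endpoint


def get_customer_correct(customer_id: str) -> dict:
    # Correct (Figure 3-6): go through Service B's published API, so Service B's
    # business logic and data schema stay encapsulated behind the interface.
    with urllib.request.urlopen(f"{SERVICE_B_API}/customers/{customer_id}") as resp:
        return json.load(resp)


# Incorrect (Figure 3-7): reaching into Service B's database directly, e.g.
#   conn = db_driver.connect("dbname=service_b ...")  # Service B's private database
#   conn.cursor().execute("SELECT * FROM customers WHERE id = ...")
# This couples Service A to Service B's schema and breaks whenever Service B
# migrates or restructures its data.
```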
Guideline #4: Shared Capabilities/Data
Sometimes a service can be created simply because it is responsible for a set of capabilities and its data. These capabilities and data might need to be shared by a variety of other services.
A prime example of this principle is a user identity service, which simply provides information about specific users of the system. This is illustrated in Figure 3-8.
Figure 3-8. Using services to share common data with other services
There might be no complex business logic involved with this data service, but it’s ultimately responsible for all the general information associated with individual users. This information often is used by a large number of other services.
Having a centralized service that provides and manages this single piece of information is highly useful.
Mixed Reasons
The preceding guidelines outline some basic criteria for determining where service boundaries should be. Often, though, it is a combination of reasons that can ultimately make the decision for you.
For example, having a single user identity service makes sense from a data ownership and shared capabilities perspective, but it might not make sense from a team ownership standpoint. Some data that seems naturally to belong in the user identity database might be better stored in a separate service or services.
As a specific example, a user may have search preferences that are nominally part of a user profile but are not used by anything outside of the search infrastructure. It might therefore make sense to store this data in a search identity service that is distinct from the user identity service, whether for data complexity reasons or for performance reasons.
Ultimately, you must use your judgment while also taking the preceding criteria into account. And of course, you must also consider the business logic and requirements dictated by your company and your specific business needs.
Going Too Far
While splitting applications into services has many benefits, it is possible to go too far. Creating service boundaries using the previously discussed criteria can be taken to the extreme, resulting in too many services.
For example, rather than providing a simple user identity service, you might decide to take that simple service and further divide it into several smaller services, such as the following:
· User human-readable name service
· User physical address management service
· User email address management service
· User hometown management service
Doing this is most likely splitting things up too much.
There are several problems with splitting services too finely, including degraded overall application performance. But at the most fundamental level, every time you split a piece of functionality into multiple services, you do the following:
· Decrease the complexity of the individual services (usually)
· Increase the complexity of your application as a whole
The smaller service size typically makes individual services less complicated. However, the more services you have, the larger the number of independent services that need to be coordinated, and the more complex your overall application architecture becomes.
Having a system with an excessively large number of services tends to create the following problems in your application:
Big picture
It becomes more difficult to keep the entire application architecture in mind, because the application is becoming more complicated.
More failure opportunities
More independent components need to work together, creating more opportunity for interservice failures to occur.
Harder to change services
Each individual service tends to have more consumers of that service (other services that depend on it). Having more service consumers increases the likelihood of changes to your service negatively affecting one of your consumers.
More dependencies
Each individual service tends to have more dependencies on other services. More dependencies means more places for problems to occur.
Many of these problems can be mitigated by defining solid interface boundaries between services, but this is not a complete solution. Instead, it’s important to find the right balance between the number of services and the size of those services.
Finding the Right Balance
Ultimately, deciding on the proper number of services and the proper size of each service is a complicated problem to solve. It requires careful consideration of the balance between the advantages of creating more services and the disadvantages of creating a more complex system as a whole.
Building too few services will create problems similar to those of a monolithic application: too many developers working on a single service, and individual services that become overly complicated.
Building too many services will cause individual services to become trivially simple, while the overall application becomes overly complicated by complex interactions between the services. I’ve actually heard of an application built with microservices that defined a “Yes” service and a “No” service that simply returned those boolean results; that is taking things to the extreme. It would be great to define exactly what the right size is for a service, but it depends on your application and your company culture. The best advice is to keep this complexity trade-off in mind as you define your services and your architecture.
Finding the appropriate balance for your specific application, organization, and company culture is key to making the most of a service-based environment. It yields an application architecture that is optimized for operation and management, and it keeps your application highly available and scalable.