diff --git a/.openpublishing.redirection.json b/.openpublishing.redirection.json index add97bb7064f9..d1ecec325f3b1 100644 --- a/.openpublishing.redirection.json +++ b/.openpublishing.redirection.json @@ -904,6 +904,11 @@ "redirect_url": "/dotnet/standard/net-standard", "redirect_document_id": true }, + { + "source_path": "docs/standard/microservices-architecture/architect-microservice-container-applications/communication-between-microservices.md", + "redirect_url": "/dotnet/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture", + "redirect_document_id": true + }, { "source_path": "docs/standard/serialization/marshal-by-value.md", "redirect_url": "/dotnet/standard/serialization-concepts" diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md index d8026ecaa88cb..416d586c2e938 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md @@ -27,13 +27,13 @@ There are two kinds of asynchronous messaging communication: single receiver mes Message-based asynchronous communication with a single receiver means there is point-to-point communication that delivers a message to exactly one of the consumers that is reading from the channel, and that the message is processed just once. However, there are special situations. For instance, in a cloud system that tries to automatically recover from failures, the same message could be sent multiple times. 
Due to network or other failures, the client has to be able to retry sending messages, and the server has to implement the operation to be idempotent so that each particular message is processed just once. -Single-receiver message-based communication is especially well suited for sending asynchronous commands from one microservice to another as shown in Figure 4-17 that illustrates this approach. +Single-receiver message-based communication is especially well suited for sending asynchronous commands from one microservice to another, as shown in Figure 4-18. Once you start sending message-based communication (either with commands or events), you should avoid mixing message-based communication with synchronous HTTP communication. -![](./media/image17.PNG) +![](./media/image18.PNG) -**Figure 4-17**. A single microservice receiving an asynchronous message +**Figure 4-18**. A single microservice receiving an asynchronous message Note that when the commands come from client applications, they can be implemented as HTTP synchronous commands. You should use message-based commands when you need higher scalability or when you are already in a message-based business process.
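To make the idempotency requirement concrete, the following minimal sketch (the `CreateOrderCommand` and `IdempotentCommandHandler` names are illustrative, not taken from eShopOnContainers) shows one way a receiver could deduplicate redelivered messages by tracking message IDs:

```csharp
using System;
using System.Collections.Generic;

// A command message carries a unique message ID so the receiver
// can recognize a redelivered copy of the same message.
public class CreateOrderCommand
{
    public Guid MessageId { get; set; }
    public string ProductId { get; set; }
}

public class IdempotentCommandHandler
{
    // In production this would be a durable store (for example, a database
    // table keyed by message ID), not an in-memory set.
    private readonly HashSet<Guid> _processed = new HashSet<Guid>();

    public int OrdersCreated { get; private set; }

    public void Handle(CreateOrderCommand command)
    {
        // HashSet.Add returns false when the ID was already recorded,
        // so a redelivered message is acknowledged but not applied twice.
        if (!_processed.Add(command.MessageId))
            return;

        OrdersCreated++; // The actual business action runs exactly once.
    }
}
```

Delivering the same message twice leaves the business state unchanged after the first delivery, which is what lets the client retry safely.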
-An important point is that you might want to communicate to multiple microservices that are subscribed to the same event. To do so, you can use publish/subscribe messaging based on event-driven communication, as shown in Figure 4-18. This publish/subscribe mechanism is not exclusive to the microservice architecture. It is similar to the way [Bounded Contexts](http://martinfowler.com/bliki/BoundedContext.html) in DDD should communicate, or to the way you propagate updates from the write database to the read database in the [Command and Query Responsibility Segregation (CQRS)](http://martinfowler.com/bliki/CQRS.html) architecture pattern. The goal is to have eventual consistency between multiple data sources across your distributed system. +An important point is that you might want to communicate to multiple microservices that are subscribed to the same event. To do so, you can use publish/subscribe messaging based on event-driven communication, as shown in Figure 4-19. This publish/subscribe mechanism is not exclusive to the microservice architecture. It is similar to the way [Bounded Contexts](http://martinfowler.com/bliki/BoundedContext.html) in DDD should communicate, or to the way you propagate updates from the write database to the read database in the [Command and Query Responsibility Segregation (CQRS)](http://martinfowler.com/bliki/CQRS.html) architecture pattern. The goal is to have eventual consistency between multiple data sources across your distributed system. -![](./media/image18.png) +![](./media/image19.png) -**Figure 4-18**. Asynchronous event-driven message communication +**Figure 4-19**. Asynchronous event-driven message communication Your implementation will determine what protocol to use for event-driven, message-based communications. [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol) can help achieve reliable queued communication. 
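As a minimal illustration of the publish/subscribe mechanism (the event type and the in-process bus below are a simplified sketch for clarity, not a real broker such as RabbitMQ or Azure Service Bus over AMQP):

```csharp
using System;
using System.Collections.Generic;

// An integration event describing a fact that already happened
// in the publishing microservice.
public class ProductPriceChangedIntegrationEvent
{
    public string ProductId { get; set; }
    public decimal NewPrice { get; set; }
}

public class EventBus
{
    private readonly List<Action<ProductPriceChangedIntegrationEvent>> _subscribers
        = new List<Action<ProductPriceChangedIntegrationEvent>>();

    public void Subscribe(Action<ProductPriceChangedIntegrationEvent> handler)
        => _subscribers.Add(handler);

    public void Publish(ProductPriceChangedIntegrationEvent evt)
    {
        // Every subscribed handler (each representing a different
        // microservice) receives the event and updates its own data.
        foreach (var handler in _subscribers)
            handler(evt);
    }
}
```

For example, both the Basket and the Ordering microservices could subscribe to `ProductPriceChangedIntegrationEvent` and each update its own copy of the price, achieving eventual consistency without a shared database.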
@@ -106,5 +106,5 @@ Additional topics to consider when using asynchronous communication are message >[!div class="step-by-step"] -[Previous] (communication-between-microservices.md) +[Previous] (communication-in-microservice-architecture.md) [Next] (maintain-microservice-apis.md) diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-between-microservices.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture.md similarity index 85% rename from docs/standard/microservices-architecture/architect-microservice-container-applications/communication-between-microservices.md rename to docs/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture.md index af5d6444040d7..3baac3f834164 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-between-microservices.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture.md @@ -1,17 +1,17 @@ --- -title: Communication between microservices -description: .NET Microservices Architecture for Containerized .NET Applications | Communication between microservices -keywords: Docker, Microservices, ASP.NET, Container +title: Communication in a microservice architecture +description: .NET Microservices Architecture for Containerized .NET Applications | Communication in a microservice architecture +keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article --- -# Communication between microservices +# Communication in a microservice architecture -In a monolithic application running on a single process, components invoke one another using
language-level method or function calls. These can be strongly coupled if you are creating objects with code (for example, new ClassName()), or can be invoked in a decoupled way if you are using Dependency Injection by referencing abstractions rather than concrete object instances. Either way, the objects are running within the same process. The biggest challenge when changing from a monolithic application to a microservices-based application lies in changing the communication mechanism. A direct conversion from in-process method calls into RPC calls to services will cause a chatty and not efficient communication that will not perform well in distributed environments. The challenges of designing distributed system properly are well enough known that there is even a canon known as the [The fallacies of distributed computing](https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing) that lists assumptions that developers often make when moving from monolithic to distributed designs. +In a monolithic application running on a single process, components invoke one another using language-level method or function calls. These can be strongly coupled if you are creating objects with code (for example, `new ClassName()`), or can be invoked in a decoupled way if you are using Dependency Injection by referencing abstractions rather than concrete object instances. Either way, the objects are running within the same process. The biggest challenge when changing from a monolithic application to a microservices-based application lies in changing the communication mechanism. A direct conversion from in-process method calls into RPC calls to services will cause chatty, inefficient communication that will not perform well in distributed environments.
The challenges of designing distributed systems properly are well enough known that there is even a canon known as [The fallacies of distributed computing](https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing) that lists assumptions that developers often make when moving from monolithic to distributed designs. There is not one solution, but several. One solution involves isolating the business microservices as much as possible. You then use asynchronous communication between the internal microservices and replace fine-grained communication that is typical in intra-process communication between objects with coarser-grained communication. You can do this by grouping calls, and by returning data that aggregates the results of multiple internal calls, to the client. @@ -41,13 +41,19 @@ A microservice-based application will often use a combination of these communica These axes are good to know so you have clarity on the possible communication mechanisms, but they are not the important concerns when building microservices. Neither the asynchronous nature of client thread execution nor the asynchronous nature of the selected protocol is the important point when integrating microservices. What *is* important is being able to integrate your microservices asynchronously while maintaining the independence of microservices, as explained in the following section. -## Asynchronous microservice integration enforce microservice’s autonomy +## Asynchronous microservice integration enforces microservice’s autonomy As mentioned, the important point when building a microservices-based application is the way you integrate your microservices. Ideally, you should try to minimize the communication between the internal microservices. The less communication between microservices, the better. But of course, in many cases you will have to somehow integrate the microservices.
When you need to do that, the critical rule here is that the communication between the microservices should be asynchronous. That does not mean that you have to use a specific protocol (for example, asynchronous messaging versus synchronous HTTP). It just means that the communication between microservices should be done only by propagating data asynchronously, and should try not to depend on other internal microservices as part of the initial service’s HTTP request/response operation. If possible, never depend on synchronous communication (request/response) between multiple microservices, not even for queries. The goal of each microservice is to be autonomous and available to the client consumer, even if the other services that are part of the end-to-end application are down or unhealthy. If you think you need to make a call from one microservice to other microservices (like performing an HTTP request for a data query) in order to be able to provide a response to a client application, you have an architecture that will not be resilient when some microservices fail. -Moreover, having dependencies between microservices (like performing HTTP requests between them for querying data) not only makes your microservices not autonomous. In addition, their performance will be impacted. The more you add synchronous dependencies (like query requests) between microservices, the worse the overall response time will get for the client apps. +Moreover, having HTTP dependencies between microservices, like when creating long request/response cycles with HTTP request chains, as shown in the first part of Figure 4-15, not only makes your microservices non-autonomous, but also impacts their performance as soon as one of the services in that chain is not performing well. + +The more you add synchronous dependencies between microservices, such as query requests, the worse the overall response time gets for the client apps. + +![](./media/image15.png) + +**Figure 4-15**.
Anti-patterns and patterns in communication between microservices If your microservice needs to raise an additional action in another microservice, if possible, do not perform that action synchronously as part of the original microservice request and reply operation. Instead, do it asynchronously (using asynchronous messaging or integration events, queues, etc.). But, as much as possible, do not invoke the action synchronously as part of the original synchronous request and reply operation. @@ -67,11 +73,11 @@ There are also multiple message formats like JSON or XML, or even binary formats ### Request/response communication with HTTP and REST -When a client uses request/response communication, it sends a request to a service, then the service processes the request and sends back a response. Request/response communication is especially well suited for querying data for a real-time UI (a live user interface) from client apps. Therefore, in a microservice architecture you will probably use this communication mechanism for most queries, as shown in Figure 4-15. +When a client uses request/response communication, it sends a request to a service, then the service processes the request and sends back a response. Request/response communication is especially well suited for querying data for a real-time UI (a live user interface) from client apps. Therefore, in a microservice architecture you will probably use this communication mechanism for most queries, as shown in Figure 4-16. -![](./media/image15.png) +![](./media/image16.png) -**Figure 4-15**. Using HTTP request/response communication (synchronous or asynchronous) +**Figure 4-16**. Using HTTP request/response communication (synchronous or asynchronous) When a client uses request/response communication, it assumes that the response will arrive in a short time, typically less than a second, or a few seconds at most.
For delayed responses, you need to implement asynchronous communication based on [messaging patterns](https://docs.microsoft.com/azure/architecture/patterns/category/messaging) and [messaging technologies](https://en.wikipedia.org/wiki/Message-oriented_middleware), which is a different approach that we explain in the next section. @@ -91,15 +97,15 @@ There is additional value when using HTTP REST services as your interface defini Another possibility (usually for different purposes than REST) is a real-time and one-to-many communication with higher-level frameworks such as [ASP.NET SignalR](https://www.asp.net/signalr) and protocols such as [WebSockets](https://en.wikipedia.org/wiki/WebSocket). -As Figure 4-16 shows, real-time HTTP communication means that you can have server code pushing content to connected clients as the data becomes available, rather than having the server wait for a client to request new data. +As Figure 4-17 shows, real-time HTTP communication means that you can have server code pushing content to connected clients as the data becomes available, rather than having the server wait for a client to request new data. -![](./media/image16.png) +![](./media/image17.png) -**Figure 4-16**. One-to-one real-time asynchronous message communication +**Figure 4-17**. One-to-one real-time asynchronous message communication Since communication is in real time, client apps show the changes almost instantly. This is usually handled by a protocol such as WebSockets, using many WebSockets connections (one per client). A typical example is when a service communicates a change in the score of a sports game to many client web apps simultaneously. 
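As a hedged sketch of this server-push model with ASP.NET SignalR (the hub class, method name, and client callback below are illustrative; the `Microsoft.AspNet.SignalR` package is assumed, so this fragment is not runnable on its own):

```csharp
using Microsoft.AspNet.SignalR;

// A SignalR hub that pushes a score change to every connected client
// as soon as the data becomes available, instead of waiting for
// clients to poll for new data.
public class ScoreHub : Hub
{
    public void NotifyScoreChanged(string game, int score)
    {
        // Clients.All is dynamic: "scoreChanged" is the callback name
        // that each connected client app registers to handle the push.
        Clients.All.scoreChanged(game, score);
    }
}
```

Each connected browser holds its own WebSocket connection to the hub, so one call to `NotifyScoreChanged` fans out to all clients almost instantly.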
>[!div class="step-by-step"] -[Previous] (identify-microservice-domain-model-boundaries.md) +[Previous] (direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md) [Next] (asynchronous-message-based-communication.md) diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md index d27bf45a14cd2..65805498cbfcf 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md @@ -15,7 +15,7 @@ An important rule for microservices architecture is that each microservice must This means that the conceptual model of the domain will differ between subsystems or microservices. Consider enterprise applications, where customer relationship management (CRM) applications, transactional purchase subsystems, and customer support subsystems each call on unique customer entity attributes and data, and where each employs a different Bounded Context (BC). -This principle is similar in [domain-driven design (DDD)](https://en.wikipedia.org/wiki/Domain-driven_design), where each [Bounded Context](https://martinfowler.com/bliki/BoundedContext.html) or autonomous subsystem or service must own its domain model (data plus logic and behavior). Each DDD Bounded Context correlates to one business microservice (one or several services). (We expand on this point about the Bounded Context pattern in the next section.) 
+This principle is similar in [Domain-driven design (DDD)](https://en.wikipedia.org/wiki/Domain-driven_design), where each [Bounded Context](https://martinfowler.com/bliki/BoundedContext.html) or autonomous subsystem or service must own its domain model (data plus logic and behavior). Each DDD Bounded Context correlates to one business microservice (one or several services). (We expand on this point about the Bounded Context pattern in the next section.) On the other hand, the traditional (monolithic data) approach used in many applications is to have a single centralized database or just a few databases. This is often a normalized SQL database that is used for the whole application and all its internal subsystems, as shown in Figure 4-7. diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md new file mode 100644 index 0000000000000..fe5ca96a56b4f --- /dev/null +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md @@ -0,0 +1,121 @@ +--- +title: Direct client-to-microservice communication versus the API Gateway pattern +description: .NET Microservices Architecture for Containerized .NET Applications | Direct client-to-microservice communication versus the API Gateway pattern +keywords: Docker, Microservices, ASP.NET, Container, API Gateway +author: CESARDELATORRE +ms.author: wiwagn +ms.date: 10/18/2017 +ms.prod: .net-core +ms.technology: dotnet-docker +ms.topic: article +--- +# Direct client-to-microservice communication versus the API Gateway pattern + +In a microservices architecture, each microservice exposes a set of (typically) fine‑grained endpoints. 
This fact can impact the client-to-microservice communication, as explained in this section. + +## Direct client-to-microservice communication + +A possible approach is to use a direct client-to-microservice communication architecture. In this approach, a client app can make requests directly to some of the microservices, as shown in Figure 4-12. + +![](./media/image12.png) + +**Figure 4-12**. Using a direct client-to-microservice communication architecture + +In this approach, each microservice has a public endpoint, sometimes with a different TCP port for each microservice. An example of a URL for a particular service could be the following URL in Azure: + + + +In a production environment based on a cluster, that URL would map to the load balancer used in the cluster, which in turn distributes the requests across the microservices. In production environments, you could have an Application Delivery Controller (ADC) like [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/application-gateway-introduction) between your microservices and the Internet. This acts as a transparent tier that not only performs load balancing, but secures your services by offering SSL termination. This reduces the load on your hosts by offloading CPU-intensive SSL termination and other routing duties to the Azure Application Gateway. In any case, a load balancer and ADC are transparent from a logical application architecture point of view. + +A direct client-to-microservice communication architecture could be good enough for a small microservice-based application, especially if the client app is a server-side web application like an ASP.NET MVC app. However, when you build large and complex microservice-based applications (for example, when handling dozens of microservice types), and especially when the client apps are remote mobile apps or SPA web applications, that approach faces a few issues.
+ +Consider the following questions when developing a large application based on microservices: + +- *How can client apps minimize the number of requests to the backend and reduce chatty communication to multiple microservices?* + +Interacting with multiple microservices to build a single UI screen increases the number of roundtrips across the Internet. This increases latency and complexity on the UI side. Ideally, responses should be efficiently aggregated on the server side; this reduces latency, since multiple pieces of data come back in parallel and some UI can show data as soon as it is ready. + +- *How can you handle cross-cutting concerns such as authorization, data transformations, and dynamic request dispatching?* + +Implementing cross-cutting concerns like security and authorization on every microservice can require significant development effort. A possible approach is to have those services within the Docker host or internal cluster, in order to restrict direct access to them from the outside, and to implement those cross-cutting concerns in a centralized place, like an API Gateway. + +- *How can client apps communicate with services that use non-Internet-friendly protocols?* + +Protocols used on the server side (like AMQP or binary protocols) are usually not supported in client apps. Therefore, requests must be performed through protocols like HTTP/HTTPS and translated to the other protocols afterwards. A *man-in-the-middle* approach can help in this situation. + +- *How can you shape a façade especially made for mobile apps?* + +The API of multiple microservices might not be well designed for the needs of different client applications. For instance, the needs of a mobile app might be different than the needs of a web app. For mobile apps, you might need to optimize even further so that data responses can be more efficient.
You might do this by aggregating data from multiple microservices and returning a single set of data, and sometimes eliminating any data in the response that is not needed by the mobile app. And, of course, you might compress that data. Again, a façade or API in between the mobile app and the microservices can be convenient for this scenario. + +## Using an API Gateway + +When you design and build large or complex microservice-based applications with multiple client apps, a good approach to consider can be an [API Gateway](http://microservices.io/patterns/apigateway.html). This is a service that provides a single entry point for certain groups of microservices. It is similar to the [Facade pattern](https://en.wikipedia.org/wiki/Facade_pattern) from object-oriented design, but in this case, it is part of a distributed system. +The API Gateway pattern is also sometimes known as the “backend for frontend” [(BFF)](http://samnewman.io/patterns/architectural/bff/) because you build it while thinking about the needs of the client app. + +Figure 4-13 shows how a custom API Gateway can fit into a microservice-based architecture. +It is important to highlight that in that diagram, you would be using a single custom API Gateway service facing multiple and different client apps. That fact can be an important risk because your API Gateway service will be growing and evolving based on many different requirements from the client apps. Eventually, it will be bloated because of those different needs, and effectively it could be pretty similar to a monolithic application or monolithic service. That is why it is highly recommended to split the API Gateway into multiple services or multiple smaller API Gateways, one per form-factor type, for instance. + +![](./media/image13.png) + +**Figure 4-13**. Using an API Gateway implemented as a custom Web API service + +In this example, the API Gateway would be implemented as a custom Web API service running as a container.
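As an illustrative sketch of the aggregation role such a custom gateway can play (the internal service URLs, class name, and response shape below are hypothetical, not the actual eShopOnContainers implementation), a gateway endpoint can fan out to several internal microservices in parallel and return a single combined payload to the client app:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// A custom API Gateway component that aggregates two internal
// microservice calls into one response, issuing the internal requests
// in parallel to reduce client round trips and latency.
public class CatalogAndPricingGateway
{
    private readonly HttpClient _http = new HttpClient();

    public async Task<string> GetProductViewAsync(string productId)
    {
        // Hypothetical internal endpoints, reachable only inside the cluster.
        var catalogTask = _http.GetStringAsync($"http://catalog-api/api/products/{productId}");
        var pricingTask = _http.GetStringAsync($"http://pricing-api/api/prices/{productId}");

        // Both internal calls run concurrently; the gateway waits for both.
        await Task.WhenAll(catalogTask, pricingTask);

        // Combine both internal responses into a single payload for the client app.
        return $"{{ \"catalog\": {catalogTask.Result}, \"pricing\": {pricingTask.Result} }}";
    }
}
```

The client app makes one request instead of two, and the gateway can also strip fields a mobile client does not need before returning the combined result.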
As mentioned, you should implement several API Gateways so that you can have a different façade for the needs of each client app. Each API Gateway can provide a different API tailored for each client app, possibly even based on the client form factor by implementing specific adapter code which underneath calls multiple internal microservices. + +Since a custom API Gateway is usually a data aggregator, you need to be careful with it. Usually it isn't a good idea to have a single API Gateway aggregating all the internal microservices of your application. If it does, it acts as a monolithic aggregator or orchestrator and violates microservice autonomy by coupling all the microservices. Therefore, the API Gateways should be segregated based on business boundaries and not act as an aggregator for the whole application. + +Sometimes a granular API Gateway can also be a microservice by itself, and even have a domain or business name and related data. Having the API Gateway’s boundaries dictated by the business or domain will help you to get a better design. + +Granularity in the API Gateway tier can be especially useful for more advanced composite UI applications based on microservices, because the concept of a fine-grained API Gateway is similar to a UI composition service. We discuss this later in the [Creating composite UI based on microservices](#creating-composite-ui-based-on-microservices-including-visual-ui-shape-and-layout-generated-by-multiple-microservices) section. + +Therefore, for many medium- and large-size applications, using a custom-built API Gateway is usually a good approach, but not as a single monolithic aggregator or unique central custom API Gateway. + +Another approach is to use a product like [Azure API Management](https://azure.microsoft.com/services/api-management/) as shown in Figure 4-14. This approach not only solves your API Gateway needs, but provides features like gathering insights from your APIs.
If you are using an API management solution, an API Gateway is only a component within that full API management solution. + +![](./media/image14.png) + +**Figure 4-14**. Using Azure API Management for your API Gateway + +In this case, when using a product like Azure API Management, the fact that you might have a single API Gateway is not so risky because these kinds of API Gateways are "thinner", meaning that you don't implement custom C# code that could evolve towards a monolithic component. + +This type of product acts more like a reverse proxy for ingress communication, where you can also filter the APIs from the internal microservices plus apply authorization to the published APIs in this single tier. + +The insights available from an API Management system help you get an understanding of how your APIs are being used and how they are performing. They do this by letting you view near real-time analytics reports and identify trends that might impact your business. Plus, you can have logs about request and response activity for further online and offline analysis. + +With Azure API Management, you can secure your APIs using a key, a token, and IP filtering. These features let you enforce flexible and fine-grained quotas and rate limits, modify the shape and behavior of your APIs using policies, and improve performance with response caching. + +In this guide and the reference sample application (eShopOnContainers), we are limiting the architecture to a simpler and custom-made containerized architecture in order to focus on plain containers without using PaaS products like Azure API Management. But for large microservice-based applications that are deployed into Microsoft Azure, we encourage you to review and adopt Azure API Management as the base for your API Gateways. + +## Drawbacks of the API Gateway pattern + +- The most important drawback is that when you implement an API Gateway, you are coupling that tier with the internal microservices.
Coupling like this might introduce serious difficulties for your application. Clemens Vasters, architect at the Azure Service Bus team, refers to this potential difficulty as “the new ESB” in his "[Messaging and Microservices](https://www.youtube.com/watch?v=rXi5CLjIQ9k)" session at GOTO 2016. + +- Using a microservices API Gateway creates an additional possible single point of failure. + +- An API Gateway can introduce increased response time due to the additional network call. However, this extra call usually has less impact than having a client interface that is too chatty directly calling the internal microservices. + +- If not scaled out properly, the API Gateway can become a bottleneck. + +- An API Gateway requires additional development cost and future maintenance if it includes custom logic and data aggregation. Developers must update the API Gateway in order to expose each microservice’s endpoints. Moreover, implementation changes in the internal microservices might cause code changes at the API Gateway level. However, if the API Gateway is just applying security, logging, and versioning (as when using Azure API Management), this additional development cost might not apply. + +- If the API Gateway is developed by a single team, there can be a development bottleneck. This is another reason why a better approach is to have several fine-grained API Gateways that respond to different client needs. You could also segregate the API Gateway internally into multiple areas or layers that are owned by the different teams working on the internal microservices. + +## Additional resources + +- **Chris Richardson. Pattern: API Gateway / Backend for Front-End** + [*http://microservices.io/patterns/apigateway.html*](http://microservices.io/patterns/apigateway.html) + +- **Azure API Management** + [*https://azure.microsoft.com/services/api-management/*](https://azure.microsoft.com/services/api-management/) + +- **Udi Dahan.
Service Oriented Composition**\ + [*http://udidahan.com/2014/07/30/service-oriented-composition-with-video/*](http://udidahan.com/2014/07/30/service-oriented-composition-with-video/) + +- **Clemens Vasters. Messaging and Microservices at GOTO 2016** (video) + [*https://www.youtube.com/watch?v=rXi5CLjIQ9k*](https://www.youtube.com/watch?v=rXi5CLjIQ9k) + + +>[!div class="step-by-step"] +[Previous](identify-microservice-domain-model-boundaries.md) +[Next](communication-in-microservice-architecture.md) diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md index d883c3b90a6df..a6bc51673c3c5 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md @@ -53,7 +53,7 @@ The Ordering microservice should not update the Products table directly, because As stated by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), you need to choose between availability and ACID strong consistency. Most microservice-based scenarios demand availability and high scalability as opposed to strong consistency. Mission-critical applications must remain up and running, and developers can work around strong consistency by using techniques for working with weak or eventual consistency. This is the approach taken by most microservice-based architectures. -Moreover, ACID-style or two-phase commit transactions are not just against microservices principles; most NoSQL databases (like Azure Document DB, MongoDB, etc.) do not support two-phase commit transactions. However, maintaining data consistency across services and databases is essential. 
This challenge is also related to the question of how to propagate changes across multiple microservices when certain data needs to be redundant—for example, when you need to have the product’s name or description in the Catalog microservice and the Basket microservice. +Moreover, ACID-style or two-phase commit transactions are not just against microservices principles; most NoSQL databases (like Azure Cosmos DB, MongoDB, etc.) do not support two-phase commit transactions. However, maintaining data consistency across services and databases is essential. This challenge is also related to the question of how to propagate changes across multiple microservices when certain data needs to be redundant—for example, when you need to have the product’s name or description in the Catalog microservice and the Basket microservice. A good solution for this problem is to use eventual consistency between microservices articulated through event-driven communication and a publish-and-subscribe system. These topics are covered in the section [Asynchronous event-driven communication](#async_event_driven_communication) later in this guide. @@ -63,7 +63,7 @@ Communicating across microservice boundaries is a real challenge. In this contex In a distributed system like a microservices-based application, with so many artifacts moving around and with distributed services across many servers or hosts, components will eventually fail. Partial failure and even larger outages will occur, so you need to design your microservices and the communication across them taking into account the risks common in this type of distributed system. -A popular approach is to implement HTTP (REST)- based microservices, due to their simplicity. An HTTP-based approach is perfectly acceptable; the issue here is related to how you use it. If you use HTTP requests and responses just to interact with your microservices from client applications or from API Gateways, that is fine. 
But if create long chains of synchronous HTTP calls across microservices, communicating across their boundaries as if the microservices were objects in a monolithic application, your application will eventually run into problems. +A popular approach is to implement HTTP (REST)-based microservices, due to their simplicity. An HTTP-based approach is perfectly acceptable; the issue here is related to how you use it. If you use HTTP requests and responses just to interact with your microservices from client applications or from API Gateways, that is fine. But if you create long chains of synchronous HTTP calls across microservices, communicating across their boundaries as if the microservices were objects in a monolithic application, your application will eventually run into problems. For instance, imagine that your client application makes an HTTP API call to an individual microservice like the Ordering microservice. If the Ordering microservice in turn calls additional microservices using HTTP within the same request/response cycle, you are creating a chain of HTTP calls. It might sound reasonable initially.
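Why such chains become fragile can be sketched with simple arithmetic; the per-call availability and latency figures below are illustrative assumptions, not measurements:

```python
# Sketch: sequential HTTP calls multiply failure probability and add latency.
# The figures (99.9% per-call availability, 50 ms per hop) are hypothetical.

def chain_availability(per_call_availability: float, calls: int) -> float:
    """Every call in the chain must succeed, so availabilities multiply."""
    return per_call_availability ** calls

def chain_latency_ms(per_call_latency_ms: float, calls: int) -> float:
    """Sequential calls add their latencies within one request/response cycle."""
    return per_call_latency_ms * calls

for calls in (1, 3, 10):
    print(f"{calls} chained call(s): "
          f"availability {chain_availability(0.999, calls):.4f}, "
          f"added latency {chain_latency_ms(50, calls):.0f} ms")
```

Ten chained calls at 99.9% availability each drop the end-to-end figure to roughly 99%, which is one reason to keep synchronous chains short or replace them with asynchronous integration.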
However, there are important points to consider when going down this path: diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md index 954af6dc63e9d..5f2b1c6cc2386 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md @@ -1,10 +1,10 @@ --- title: State and data in Docker applications description: .NET Microservices Architecture for Containerized .NET Applications | State and data in Docker applications -keywords: Docker, Microservices, ASP.NET, Container +keywords: Docker, Microservices, ASP.NET, Container, SQL, CosmosDB, Docker author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article @@ -23,10 +23,10 @@ The following solutions are used to manage persistent data in Docker application - [Volume plugins](https://docs.docker.com/engine/tutorials/dockervolumes/) that mount volumes to remote services, providing long-term persistence. -- Remote data sources like SQL or NoSQL databases, or cache services like [Redis](https://redis.io/). - - [Azure Storage](https://docs.microsoft.com/azure/storage/), which provides geo-distributable storage, providing a good long-term persistence solution for containers. +- Remote relational databases like [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) or NoSQL databases like [Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/introduction), or cache services like [Redis](https://redis.io/). + The following provides more detail about these options. 
**Data volumes** are directories mapped from the host OS to directories in containers. When code in the container has access to the directory, that access is actually to a directory on the host OS. This directory is not tied to the lifetime of the container itself, and the directory can be accessed from code running directly on the host OS or by another container that maps the same host directory to itself. Thus, data volumes are designed to persist data independently of the life of the container. If you delete a container or an image from the Docker host, the data persisted in the data volume is not deleted. The data in a volume can be accessed from the host OS as well. @@ -43,9 +43,9 @@ In addition, when Docker containers are managed by an orchestrator, containers m **Volume plugins** like [Flocker](https://clusterhq.com/flocker/) provide data access across all hosts in a cluster. While not all volume plugins are created equally, volume plugins typically provide externalized persistent reliable storage from immutable containers. -**Remote data sources and cache** tools like Azure SQL Database, Azure Document DB, or a remote cache like Redis can be used in containerized applications the same way they are used when developing without containers. This is a proven way to store business application data. +**Remote data sources and cache** tools like Azure SQL Database, Azure Cosmos DB, or a remote cache like Redis can be used in containerized applications the same way they are used when developing without containers. This is a proven way to store business application data. -**Azure Storage.** Business data usually will need to be placed in external resources or databases, like relational databases or NoSQL databases like Azure Storage and DocDB. Azure Storage, in concrete, provides the following services in the cloud: +**Azure Storage.** Business data usually will need to be placed in external resources or databases, like Azure Storage. 
Azure Storage, in particular, provides the following services in the cloud: - Blob storage stores unstructured object data. A blob can be any type of text or binary data, such as document or media files (images, audio, and video files). Blob storage is also referred to as Object storage. @@ -53,7 +53,7 @@ In addition, when Docker containers are managed by an orchestrator, containers m - Table storage stores structured datasets. Table storage is a NoSQL key-attribute data store, which allows rapid development and fast access to large quantities of data. -**Relational databases and NoSQL databases.** There are many choices for external databases, from relational databases like SQL Server, PostgreSQL, Oracle, or NoSQL databases like Azure DocDB, MongoDB, etc. These databases are not going to be explained as part of this guide since they are in a completely different subject. +**Relational databases and NoSQL databases.** There are many choices for external databases, from relational databases like SQL Server, PostgreSQL, Oracle, or NoSQL databases like Azure Cosmos DB, MongoDB, etc. These databases are not going to be explained as part of this guide since they are a completely different subject. 
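The reason all of these stores sit outside the container can be illustrated with a small, self-contained sketch; the `ContainerInstance` class and the plain dict standing in for a remote store (such as Redis or Azure SQL Database) are hypothetical, for illustration only:

```python
# Illustrative sketch: why business state belongs in an external store.
# The plain dict stands in for a remote database or cache; the class and
# names are hypothetical, not part of any real API.

class ContainerInstance:
    """Simulates one replaceable container instance."""

    def __init__(self, shared_store):
        self.local_state = {}      # dies with the instance
        self.store = shared_store  # survives instance replacement

    def handle_request(self, key, value):
        self.local_state[key] = value  # fast, but ephemeral
        self.store[key] = value        # durable source of truth

external_store = {}  # stands in for Azure SQL Database, Cosmos DB, Redis, ...

first = ContainerInstance(external_store)
first.handle_request("basket:42", ["item-a"])
del first  # the orchestrator kills and replaces the instance

replacement = ContainerInstance(external_store)
print(replacement.store.get("basket:42"))        # ['item-a'] — external state survived
print(replacement.local_state.get("basket:42"))  # None — local state did not
```

The replacement instance recovers the basket only because the state was written to the shared store; anything kept in the instance itself disappeared with it.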
>[!div class="step-by-step"] diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md index c89380a911517..d61bc2f310d9c 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md @@ -9,7 +9,7 @@ ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article --- -# Identifying domain-model boundaries for each microservice +# Identify domain-model boundaries for each microservice The goal when identifying model boundaries and size for each microservice is not to get to the most granular separation possible, although you should tend toward small microservices if possible. Instead, your goal should be to get to the most meaningful separation guided by your domain knowledge. The emphasis is not on the size, but instead on business capabilities. In addition, if there is clear cohesion needed for a certain area of the application based on a high number of dependencies, that indicates the need for a single microservice, too. Cohesion is a way to identify how to break apart or group together microservices. Ultimately, while you gain more knowledge about the domain, you should adapt the size of your microservice, iteratively. Finding the right size is not a one-shot process. @@ -50,106 +50,6 @@ Basically, there is a shared concept of a user that exists in multiple services There are several benefits to not sharing the same user entity with the same number of attributes across domains. One benefit is to reduce duplication, so that microservice models do not have any data that they do not need. 
Another benefit is having a master microservice that owns a certain type of data per entity so that updates and queries for that type of data are driven only by that microservice. - -## Direct client-to-microservice communication versus the API Gateway pattern - -In a microservices architecture, each microservice exposes a set of (typically) fine‑grained endpoints. This fact can impact the client‑to‑microservice communication, as explained in this section. - -### Direct client-to-microservice communication - -A possible approach is to use a direct client-to-microservice communication architecture. In this approach, a client app can make requests directly to some of the microservices, as shown in Figure 4-12. - -![](./media/image12.png) - -**Figure 4-12**. Using a direct client-to-microservice communication architecture - -In this approach. each microservice has a public endpoint, sometimes with a different TCP port for each microservice. An example of an URL for a particular service could be the following URL in Azure: - - - -In a production environment based on a cluster, that URL would map to the load balancer used in the cluster, which in turn distributes the requests across the microservices. In production environments, you could have an Application Delivery Controller (ADC) like [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/application-gateway-introduction) between your microservices and the Internet. This acts as a transparent tier that not only performs load balancing, but secures your services by offering SSL termination. This improves the load of your hosts by offloading CPU-intensive SSL termination and other routing duties to the Azure Application Gateway. In any case, a load balancer and ADC are transparent from a logical application architecture point of view. - -A direct client-to-microservice communication architecture is good enough for a small microservice-based application. 
However, when you build large and complex microservice-based applications (for example, when handling dozens of microservice types), that approach faces possible issues. You need to consider the following questions when developing a large application based on microservices: - -- *How can client apps minimize the number of requests to the backend and reduce chatty communication to multiple microservices?* - -Interacting with multiple microservices to build a single UI screen increases the number of roundtrips across the Internet. This increases latency and complexity on the UI side. Ideally, responses should be efficiently aggregated in the server side—this reduces latency, since multiple pieces of data come back in parallel and some UI can show data as soon as it is ready. - -- *How can you handle cross-cutting concerns such as authorization, data transformations, and dynamic request dispatching?* - -Implementing security and cross-cutting concerns like security and authorization on every microservice can require significant development effort. A possible approach is to have those services within the Docker host or internal cluster, in order to restrict direct access to them from the outside, and to implement those cross-cutting concerns in a centralized place, like an API Gateway. - -- *How can client apps communicate with services that use non-Internet-friendly protocols?* - -Protocols used on the server side (like AMQP or binary protocols) are usually not supported in client apps. Therefore, requests must be performed through protocols like HTTP/HTTPS and translated to the other protocols afterwards. A *man-in-the-middle* approach can help in this situation. - -- *How can you shape a façade especially made for mobile apps? * - -The API of multiple microservices might not be well designed for the needs of different client applications. For instance, the needs of a mobile app might be different than the needs of a web app. 
For mobile apps, you might need to optimize even further so that data responses can be more efficient. You might do this by aggregating data from multiple microservices and returning a single set of data, and sometimes eliminating any data in the response that is not needed by the mobile app. And, of course, you might compress that data. Again, a façade or API in between the mobile app and the microservices can be convenient for this scenario. - -### Using an API Gateway - -When you design and build large or complex microservice-based applications with multiple client apps, a good approach to consider can be an [API Gateway](http://microservices.io/patterns/apigateway.html). This is a service that provides a single entry point for certain groups of microservices. It is similar to the [Facade pattern](https://en.wikipedia.org/wiki/Facade_pattern) from object‑oriented design, but in this case, it is part of a distributed system. The API Gateway pattern is also sometimes known as the “back end for the front end,” because you build it while thinking about the needs of the client app. - -Figure 4-13 shows how an API Gateway can fit into a microservice-based architecture. - -![](./media/image13.png) - -**Figure 4-13**. Using the API Gateway pattern in a microservice-based architecture - -In this example, the API Gateway would be implemented as a custom Web API service running as a container. - -You should implement several API Gateways so that you can have a different façade for the needs of each client app. Each API Gateway can provide a different API tailored for each client app, possibly even based on the client form factor or device by implementing specific adapter code which underneath calls multiple internal microservices. - -Since the API Gateway is actually an aggregator, you need to be careful with it. Usually it is not a good idea to have a single API Gateway aggregating all the internal microservices of your application. 
If it does, it acts as a monolithic aggregator or orchestrator and violates microservice autonomy by coupling all the microservices. Therefore, the API Gateways should be segregated based on business boundaries and not act as an aggregator for the whole application. - -Sometimes a granular API Gateway can also be a microservice by itself, and even have a domain or business name and related data. Having the API Gateway’s boundaries dictated by the business or domain will help you to get a better design. - -Granularity in the API Gateway tier can be especially useful for more advanced composite UI applications based on microservices, because the concept of a fine-grained API Gateway is similar to an UI composition service. We discuss this later in the [Creating composite UI based on microservices](#creating-composite-ui-based-on-microservices-including-visual-ui-shape-and-layout-generated-by-multiple-microservices). - -Therefore, for many medium- and large-size applications, using a custom-built API Gateway is usually a good approach, but not as a single monolithic aggregator or unique central API Gateway. - -Another approach is to use a product like [Azure API Management](https://azure.microsoft.com/services/api-management/) as shown in Figure 4-14. This approach not only solves your API Gateway needs, but provides features like gathering insights from your APIs. If you are using an API management solution, an API Gateway is only a component within that full API management solution. - -![](./media/image14.png) - -**Figure 4-14**. Using Azure API Management for your API Gateway - -The insights available from an API Management system help you get an understanding of how your APIs are being used and how they are performing. They do this by letting you view near real-time analytics reports and identifying trends that might impact your business. Plus you can have logs about request and response activity for further online and offline analysis. 
- -With Azure API Management, you can secure your APIs using a key, a token, and IP filtering. These features let you enforce flexible and fine-grained quotas and rate limits, modify the shape and behavior of your APIs using policies, and improve performance with response caching. - -In this guide and the reference sample application (eShopOnContainers) we are limiting the architecture to a simpler and custom-made containerized architecture in order to focus on plain containers without using PaaS products like Azure API Management. But for large microservice-based applications that are deployed into Microsoft Azure, we encourage you to review and adopt Azure API Management as the base for your API Gateways. - -### Drawbacks of the API Gateway pattern - -- The most important drawback is that when you implement an API Gateway, you are coupling that tier with the internal microservices. Coupling like this might introduce serious difficulties for your application. (The cloud architect Clemens Vaster refers to this potential difficulty as “the new ESB” in his "[Messaging and Microservices](https://www.youtube.com/watch?v=rXi5CLjIQ9k)" session from at GOTO 2016.) - -- Using a microservices API Gateway creates an additional possible point of failure. - -- An API Gateway can introduce increased response time due to the additional network call. However, this extra call usually has less impact than having a client interface that is too chatty directly calling the internal microservices. - -- The API Gateway can represent a possible bottleneck if it is not scaled out properly - -- An API Gateway requires additional development cost and future maintenance if it includes custom logic and data aggregation. Developers must update the API Gateway in order to expose each microservice’s endpoints. Moreover, implementation changes in the internal microservices might cause code changes at the API Gateway level. 
However, if the API Gateway is just applying security, logging, and versioning (as when using Azure API Management), this additional development cost might not apply. - -- If the API Gateway is developed by a single team, there can be a development bottleneck. This is another reason why a better approach is to have several fined-grained API Gateways that respond to different client needs. You could also segregate the API Gateway internally into multiple areas or layers that are owned by the different teams working on the internal microservices. - -## Additional resources - -- **Charles Richardson. Pattern: API Gateway / Backend for Front-End** - [*http://microservices.io/patterns/apigateway.html*](http://microservices.io/patterns/apigateway.html) - -- **Azure API Management** - [*https://azure.microsoft.com/services/api-management/*](https://azure.microsoft.com/services/api-management/) - -- **Udi Dahan. Service Oriented Composition**\ - [*http://udidahan.com/2014/07/30/service-oriented-composition-with-video/*](http://udidahan.com/2014/07/30/service-oriented-composition-with-video/) - -- **Clemens Vasters. 
Messaging and Microservices at GOTO 2016** (video) - [*https://www.youtube.com/watch?v=rXi5CLjIQ9k*](https://www.youtube.com/watch?v=rXi5CLjIQ9k) - - >[!div class="step-by-step"] [Previous] (distributed-data-management.md) -[Next] (communication-between-microservices.md) +[Next] (direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md) diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md index 4a3a002e93c35..5cfaaf7265b86 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md @@ -15,7 +15,7 @@ ms.topic: article Earlier in this guide, you learned basic concepts about containers and Docker. That was the minimum information you need in order to get started with containers. Although, even when containers are enablers and a great fit for microservices, they are not mandatory for a microservice architecture and many architectural concepts in this architecture section could be applied without containers, too. However, this guidance focuses on the intersection of both due to the already introduced importance of containers. -Enterprise applications can be complex and are often composed of multiple services instead of a single service-based application. For those cases, you need to understand additional architectural approaches, such as the microservices and certain domain-driven design (DDD) patterns plus container orchestration concepts. Note that this chapter describes not just microservices on containers, but any containerized application, as well. +Enterprise applications can be complex and are often composed of multiple services instead of a single service-based application. 
For those cases, you need to understand additional architectural approaches, such as microservices and certain Domain-Driven Design (DDD) patterns plus container orchestration concepts. Note that this chapter describes not just microservices on containers, but any containerized application, as well. ## Container design principles @@ -23,7 +23,7 @@ In the container model, a container image instance represents a single process. When you design a container image, you will see an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/) definition in the Dockerfile. This defines the process whose lifetime controls the lifetime of the container. When the process completes, the container lifecycle ends. Containers might represent long-running processes like web servers, but can also represent short-lived processes like batch jobs, which formerly might have been implemented as Azure [WebJobs](https://docs.microsoft.com/azure/app-service-web/websites-webjobs-resources). -If the process fails, the container ends, and the orchestrator takes over. If the orchestrator was configured to keep five instances running and one fails, the orchestrator will create another container instance to replace the failed process. In a batch job, the process is started with parameters. When the process completes, the work is complete. +If the process fails, the container ends, and the orchestrator takes over. If the orchestrator was configured to keep five instances running and one fails, the orchestrator will create another container instance to replace the failed process. In a batch job, the process is started with parameters. When the process completes, the work is complete. This guidance drills down into orchestrators later on. You might find a scenario where you want multiple processes running in a single container. For that scenario, since there can be only one entry point per container, you could run a script within the container that launches as many programs as needed. 
For example, you can use [Supervisor](http://supervisord.org/) or a similar tool to take care of launching multiple processes inside a single container. However, even though you can find architectures that hold multiple processes per container, that approach it is not very common. diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png index 9262d68e033f0..0148310091f99 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png index 6968a6fb0ee48..b9b1bd81db4b1 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG index b30c9fc0a06be..6968a6fb0ee48 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png 
b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png index 6cbd702cbc5b2..b30c9fc0a06be 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png index d405de9d209da..6cbd702cbc5b2 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png index 15adb740f6655..97b3c62f1c0c3 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png index 0ccfef1efba15..15adb740f6655 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png differ diff --git 
a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG index 70b69020c6d9e..0ccfef1efba15 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png index eb88b15e6e5ef..70b69020c6d9e 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png index 40ab685febc8b..b315bb3a8bb29 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png index b315bb3a8bb29..eb88b15e6e5ef 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png and 
b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png index 9bd8222f11182..b6f3e33ceae1b 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png index 8b4a4bd957745..93c08c333de26 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png index 38a888ec50b0d..8b4a4bd957745 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png index 52279c3be5739..38a888ec50b0d 100644 Binary files 
a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png index 6678228fe7a03..52279c3be5739 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png index ab7a5614da4fc..6678228fe7a03 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png index e59c1f6f2d9f3..ab7a5614da4fc 100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png index e45712e2b056e..e59c1f6f2d9f3 
100644 Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image34.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image34.png new file mode 100644 index 0000000000000..e45712e2b056e Binary files /dev/null and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image34.png differ diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md index 4b1d0cbf08e50..1d652f887dea0 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md @@ -13,21 +13,21 @@ ms.topic: article Microservices architecture often starts with the server side handling data and logic. However, a more advanced approach is to design your application UI based on microservices as well. That means having a composite UI produced by the microservices, instead of having microservices on the server and just a monolithic client app consuming the microservices. With this approach, the microservices you build can be complete with both logic and visual representation. -Figure 4-19 shows the simpler approach of just consuming microservices from a monolithic client application. Of course, you could have an ASP.NET MVC service in between producing the HTML and JavaScript. 
The figure is a simplification that highlights that you have a single (monolithic) client UI consuming the microservices, which just focus on logic and data and not on the UI shape (HTML and JavaScript). +Figure 4-20 shows the simpler approach of just consuming microservices from a monolithic client application. Of course, you could have an ASP.NET MVC service in between producing the HTML and JavaScript. The figure is a simplification that highlights that you have a single (monolithic) client UI consuming the microservices, which just focus on logic and data and not on the UI shape (HTML and JavaScript). -![](./media/image19.png) +![](./media/image20.png) -**Figure 4-19**. A monolithic UI application consuming back-end microservices +**Figure 4-20**. A monolithic UI application consuming back-end microservices In contrast, a composite UI is precisely generated and composed by the microservices themselves. Some of the microservices drive the visual shape of specific areas of the UI. The key difference is that you have client UI components (TS classes, for example) based on templates, and the data-shaping-UI ViewModel for those templates comes from each microservice. At client application start-up time, each of the client UI components (TypeScript classes, for example) registers itself with an infrastructure microservice capable of providing ViewModels for a given scenario. If the microservice changes the shape, the UI changes also. -Figure 4-20 shows a version of this composite UI approach. This is simplified, because you might have other microservices that are aggregating granular parts based on different techniques—it depends on whether you are building a traditional web approach (ASP.NET MVC) or an SPA (Single Page Application). +Figure 4-21 shows a version of this composite UI approach. 
This is simplified, because you might have other microservices that are aggregating granular parts based on different techniques—it depends on whether you are building a traditional web approach (ASP.NET MVC) or an SPA (Single Page Application). -![](./media/image20.png) +![](./media/image21.png) -**Figure 4-20**. Example of a composite UI application shaped by back-end microservices +**Figure 4-21**. Example of a composite UI application shaped by back-end microservices Each of those UI composition microservices would be similar to a small API Gateway. But in this case each is responsible for a small UI area. diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md index f5c32e9a40a9a..c838652287698 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md @@ -41,11 +41,11 @@ A microservice-based application should not try to store the output stream of ev When you create a microservice-based application, you need to deal with complexity. Of course, a single microservice is simple to deal with, but dozens or hundreds of types and thousands of instances of microservices is a complex problem. It is not just about building your microservice architecture—you also need high availability, addressability, resiliency, health, and diagnostics if you intend to have a stable and cohesive system. -![](./media/image21.png) +![](./media/image22.png) -**Figure 4-21**. A Microservice Platform is fundamental for an application’s health management +**Figure 4-22**. 
A Microservice Platform is fundamental for an application’s health management -The complex problems shown in Figure 4-21 are very hard to solve by yourself. Development teams should focus on solving business problems and building custom applications with microservice-based approaches. They should not focus on solving complex infrastructure problems; if they did, the cost of any microservice-based application would be huge. Therefore, there are microservice-oriented platforms, referred to as orchestrators or microservice clusters, that try to solve the hard problems of building and running a service and using infrastructure resources efficiently. This reduces the complexities of building applications that use a microservices approach. +The complex problems shown in Figure 4-22 are very hard to solve by yourself. Development teams should focus on solving business problems and building custom applications with microservice-based approaches. They should not focus on solving complex infrastructure problems; if they did, the cost of any microservice-based application would be huge. Therefore, there are microservice-oriented platforms, referred to as orchestrators or microservice clusters, that try to solve the hard problems of building and running a service and using infrastructure resources efficiently. This reduces the complexities of building applications that use a microservices approach. Different orchestrators might sound similar, but the diagnostics and health checks offered by each of them differ in features and state of maturity, sometimes depending on the OS platform, as explained in the next section. 
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md index da9e4b128ba57..b01445fc8d444 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md @@ -4,40 +4,50 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core -ms.technology: dotnet-docker +ms.technology: dotnet-docker, service fabric, kubernetes, azure container service, docker swarm, dc/os ms.topic: article --- # Orchestrating microservices and multi-container applications for high scalability and availability Using orchestrators for production-ready applications is essential if your application is based on microservices or simply split across multiple containers. As introduced previously, in a microservice-based approach, each microservice owns its model and data so that it will be autonomous from a development and deployment point of view. But even if you have a more traditional application that is composed of multiple services (like SOA), you will also have multiple containers or services comprising a single business application that need to be deployed as a distributed system. These kinds of systems are complex to scale out and manage; therefore, you absolutely need an orchestrator if you want to have a production-ready and scalable multi-container application. 
-Figure 4-22 illustrates deployment into a cluster of an application composed of multiple microservices (containers). +Figure 4-23 illustrates deployment into a cluster of an application composed of multiple microservices (containers). -![](./media/image22.PNG) +![](./media/image23.PNG) -**Figure 4-22**. A cluster of containers +**Figure 4-23**. A cluster of containers It looks like a logical approach. But how are you handling load balancing, routing, and orchestration for these composed applications? -The Docker CLI meets the needs of managing one container on one host, but it falls short when it comes to managing multiple containers deployed on multiple hosts for more complex distributed applications. In most cases, you need a management platform that will automatically start containers, suspend them or shut them down when needed, and ideally also control how they access resources like the network and data storage. +The plain Docker Engine on single Docker hosts meets the needs of managing single image instances on one host, but it falls short when it comes to managing multiple containers deployed on multiple hosts for more complex distributed applications. In most cases, you need a management platform that will automatically start containers, scale out containers with multiple instances per image, suspend them or shut them down when needed, and ideally also control how they access resources like the network and data storage. To go beyond the management of individual containers or very simple composed apps and move toward larger enterprise applications with microservices, you must turn to orchestration and clustering platforms. From an architecture and development point of view, if you are building large enterprise applications composed of microservices, it is important to understand the following platforms and products that support advanced scenarios: -**Clusters and orchestrators**.
When you need to scale out applications across many Docker hosts, as when a large microservice-based application, it is critical to be able to manage all those hosts as a single cluster by abstracting the complexity of the underlying platform. That is what the container clusters and orchestrators provide. Examples of orchestrators are Docker Swarm, Mesosphere DC/OS, Kubernetes (the first three available through Azure Container Service) and Azure Service Fabric. +**Clusters and orchestrators**. When you need to scale out applications across many Docker hosts, as with a large microservice-based application, it is critical to be able to manage all those hosts as a single cluster by abstracting the complexity of the underlying platform. That is what the container clusters and orchestrators provide. Examples of orchestrators are Azure Service Fabric, Kubernetes, Docker Swarm, and Mesosphere DC/OS. The last three open-source orchestrators are available in Azure through Azure Container Service. **Schedulers**. *Scheduling* means having the capability for an administrator to launch containers in a cluster, so these tools also provide a UI. A cluster scheduler has several responsibilities: to use the cluster’s resources efficiently, to set the constraints provided by the user, to efficiently load-balance containers across nodes or hosts, and to be robust against errors while providing high availability. -The concepts of a cluster and a scheduler are closely related, so the products provided by different vendors often provide both sets of capabilities. +The concepts of a cluster and a scheduler are closely related, so the products provided by different vendors often provide both sets of capabilities. 
The following list shows the most important platform and software choices you have for clusters and schedulers. These orchestrators are generally offered in public clouds like Azure. ## Software platforms for container clustering, orchestration, and scheduling +Kubernetes + +![Kubernetes logo](./media/image24.png) + +> Kubernetes is an open-source product that provides functionality that ranges from cluster infrastructure and container scheduling to orchestrating capabilities. It lets you automate deployment, scaling, and operations of application containers across clusters of hosts. +> +> Kubernetes provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery. +> +> Kubernetes is mature on Linux, less mature on Windows. + Docker Swarm -![http://rancher.com/wp-content/themes/rancher-2016/assets/images/swarm.png?v=2016-07-10-am](./media/image23.png) +![Docker Swarm logo](./media/image25.png) > Docker Swarm lets you cluster and schedule Docker containers. By using Swarm, you can turn a pool of Docker hosts into a single, virtual Docker host. Clients can make API requests to Swarm the same way they do to hosts, meaning that Swarm makes it easy for applications to scale to multiple hosts. > @@ -47,29 +57,24 @@ Docker Swarm Mesosphere DC/OS
- -Google Kubernetes - -![https://pbs.twimg.com/media/Bt\_pEfqCAAAiVyz.png](./media/image25.png) - -> Kubernetes is an open-source product that provides functionality that ranges from cluster infrastructure and container scheduling to orchestrating capabilities. It lets you automate deployment, scaling, and operations of application containers across clusters of hosts. > -> Kubernetes provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery. +> DC/OS is mature in Linux, less mature in Windows. Azure Service Fabric -![https://azure.microsoft.com/svghandler/service-fabric?width=600&height=315](./media/image26.png) +![https://azure.microsoft.com/svghandler/service-fabric?width=600&height=315](./media/image27.png) -> [Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview) is a Microsoft microservices platform for building applications. It is an [orchestrator](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-resource-manager-introduction) of services and creates clusters of machines. By default, Service Fabric deploys and activates services as processes, but Service Fabric can deploy services in Docker container images. More importantly, you can mix services in processes with services in containers in the same application. +> [Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview) is a Microsoft microservices platform for building applications. It is an [orchestrator](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-resource-manager-introduction) of services and creates clusters of machines. Service Fabric can deploy services as containers or as plain processes. It can even mix services in processes with services in containers within the same application and cluster. 
> -> As of May 2017, the feature of Service Fabric that supports deploying services as Docker containers is in preview state. +> Service Fabric provides additional and optional prescriptive [Service Fabric programming models](https://docs.microsoft.com/azure/service-fabric/service-fabric-choose-framework) like [stateful services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) and [Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction). > -> Service Fabric services can be developed in many ways, from using the [Service Fabric programming models ](https://docs.microsoft.com/azure/service-fabric/service-fabric-choose-framework)to deploying [guest executables](https://docs.microsoft.com/azure/service-fabric/service-fabric-deploy-existing-app) as well as containers. Service Fabric supports prescriptive application models like [stateful services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) and [Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction). +> Service Fabric is mature on Windows (it has evolved there for years), less mature on Linux. +> Both Linux Containers and Windows Containers have been released as GA since 2017. ## Using container-based orchestrators in Microsoft Azure @@ -79,25 +84,25 @@ Another choice is to use Microsoft Azure Service Fabric (a microservices platfor ## Using Azure Container Service -A Docker cluster pools multiple Docker hosts and exposes them as a single virtual Docker host, so you can deploy multiple containers into the cluster. The cluster will handle all the complex management plumbing, like scalability, health, and so forth. Figure 4-23 represents how a Docker cluster for composed applications maps to Azure Container Service (ACS). 
+A Docker cluster pools multiple Docker hosts and exposes them as a single virtual Docker host, so you can deploy multiple containers into the cluster. The cluster will handle all the complex management plumbing, like scalability, health, and so forth. Figure 4-24 represents how a Docker cluster for composed applications maps to Azure Container Service (ACS). ACS provides a way to simplify the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run containerized applications. Using an optimized configuration of popular open-source scheduling and orchestration tools, ACS enables you to use your existing skills or draw on a large and growing body of community expertise to deploy and manage container-based applications on Microsoft Azure. Azure Container Service optimizes the configuration of popular Docker clustering open source tools and technologies specifically for Azure. You get an open solution that offers portability for both your containers and your application configuration. You select the size, the number of hosts, and the orchestrator tools, and Container Service handles everything else. -![](./media/image27.png) +![](./media/image28.png) -**Figure 4-23**. Clustering choices in Azure Container Service +**Figure 4-24**. Clustering choices in Azure Container Service ACS leverages Docker images to ensure that your application containers are fully portable. It supports your choice of open-source orchestration platforms like DC/OS (powered by Apache Mesos), Kubernetes (originally created by Google), and Docker Swarm, to ensure that these applications can be scaled to thousands or even tens of thousands of containers. Azure Container Service enables you to take advantage of the enterprise-grade features of Azure while still maintaining application portability, including at the orchestration layers. -![](./media/image28.png) +![](./media/image29.png) -**Figure 4-24**. Orchestrators in ACS +**Figure 4-25**. 
Orchestrators in ACS -As shown in Figure 4-24, Azure Container Service is simply the infrastructure provided by Azure in order to deploy DC/OS, Kubernetes or Docker Swarm, but ACS does not implement any additional orchestrator. Therefore, ACS is not an orchestrator as such, only an infrastructure that leverages existing open-source orchestrators for containers. +As shown in Figure 4-25, Azure Container Service is simply the infrastructure provided by Azure in order to deploy DC/OS, Kubernetes or Docker Swarm, but ACS does not implement any additional orchestrator. Therefore, ACS is not an orchestrator as such, only an infrastructure that leverages existing open-source orchestrators for containers. From a usage perspective, the goal of Azure Container Service is to provide a container hosting environment by using popular open-source tools and technologies. To this end, it exposes the standard API endpoints for your chosen orchestrator. By using these endpoints, you can leverage any software that can talk to those endpoints. For example, in the case of the Docker Swarm endpoint, you might choose to use the Docker command-line interface (CLI). For DC/OS, you might choose to use the DC/OS CLI. 
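The declarative scale-out that these orchestrators provide can be sketched with a hypothetical Kubernetes Deployment manifest (the service name, image, and replica count below are illustrative assumptions, not taken from this guide):

```yaml
# Hypothetical manifest: the orchestrator keeps three instances of the
# container image running, spread across the cluster's hosts, and
# restarts them if a host or container fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service            # illustrative service name
spec:
  replicas: 3                      # scheduler places instances across nodes
  selector:
    matchLabels:
      app: catalog-service
  template:
    metadata:
      labels:
        app: catalog-service
    spec:
      containers:
      - name: catalog-service
        image: myregistry/catalog-service:1.0   # illustrative image
        ports:
        - containerPort: 80
```

The point of the sketch is that you declare the desired state (three instances of an image) and the cluster's scheduler, not the Docker CLI on a single host, is responsible for making it so.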
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md index 6e704bbac9bcc..c2c5366191e66 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md @@ -17,9 +17,9 @@ Those services can now be deployed as Docker containers, which solves deployment Docker containers are useful (but not required) for both traditional service-oriented architectures and the more advanced microservices architectures. -Microservices derive from SOA, but SOA is different from microservices architecture. Features like big central brokers, central orchestrators at the organization level, and the [Enterprise Service Bus (ESB)](https://en.wikipedia.org/wiki/Enterprise_service_bus) are typical in SOA. But in most cases these are anti-patterns in the microservice community. In fact, some people argue that “The microservice architecture is SOA done right.” +Microservices derive from SOA, but SOA is different from microservices architecture. Features like big central brokers, central orchestrators at the organization level, and the [Enterprise Service Bus (ESB)](https://en.wikipedia.org/wiki/Enterprise_service_bus) are typical in SOA. But in most cases, these are anti-patterns in the microservice community. In fact, some people argue that “The microservice architecture is SOA done right.” -This guide focuses on microservices, because an SOA approach is less prescriptive than the requirements and techniques used in a microservice architecture. If you know how to build a microservice-based application, you also know how to build a simpler service-oriented application. 
+This guide focuses on microservices, because an SOA approach is less prescriptive than the requirements and techniques used in a microservice architecture. If you know how to build a microservice-based application, you also know how to build a simpler service-oriented application. diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md index d4d1e8f7ac254..c2ef9945e86d0 100644 --- a/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md +++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md @@ -4,14 +4,14 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article --- # Using Azure Service Fabric -Azure Service Fabric arose from Microsoft’s transition from delivering box products, which were typically monolithic in style, to delivering services. The experience of building and operating large services at scale, such as Azure SQL Database, Azure Document DB, Azure Service Bus, or Cortana’s Backend, shaped Service Fabric. The platform evolved over time as more and more services adopted it. Importantly, Service Fabric had to run not only in Azure but also in standalone Windows Server deployments. +Azure Service Fabric arose from Microsoft’s transition from delivering box products, which were typically monolithic in style, to delivering services. The experience of building and operating large services at scale, such as Azure SQL Database, Azure Cosmos DB, Azure Service Bus, or Cortana’s Backend, shaped Service Fabric. 
The platform evolved over time as more and more services adopted it. Importantly, Service Fabric had to run not only in Azure but also in standalone Windows Server deployments. The aim of Service Fabric is to solve the hard problems of building and running a service and utilizing infrastructure resources efficiently, so that teams can solve business problems using a microservices approach. @@ -23,67 +23,68 @@ Service Fabric provides two broad areas to help you build applications that use Service Fabric is agnostic with respect to how you build your service, and you can use any technology. However, it provides built-in programming APIs that make it easier to build microservices. -As shown in Figure 4-25, you can create and run microservices in Service Fabric either as simple processes or as Docker containers. It is also possible to mix container-based microservices with process-based microservices within the same Service Fabric cluster. +As shown in Figure 4-26, you can create and run microservices in Service Fabric either as simple processes or as Docker containers. It is also possible to mix container-based microservices with process-based microservices within the same Service Fabric cluster. -![](./media/image29.png) +![](./media/image30.png) -**Figure 4-25**. Deploying microservices as processes or as containers in Azure Service Fabric +**Figure 4-26**. Deploying microservices as processes or as containers in Azure Service Fabric -Service Fabric clusters based on Linux and Windows hosts can run Docker Linux containers and Windows Containers. +Service Fabric clusters based on Linux and Windows hosts can run Docker Linux containers and Windows Containers, respectively. For up-to-date information about container support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview). 
-Service Fabric is a good example of a platform where you can define a different logical architecture (business microservices or Bounded Contexts) than the physical implementation that were introduced in the [Logical architecture versus physical architecture](#logical-architecture-versus-physical-architecture) section. For example, if you implement [Stateful Reliable Services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) in [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview), which are introduced in the section [Stateless versus stateful microservices](#stateless-versus-stateful-microservices) later, you have a business microservice concept with multiple physical services. +Service Fabric is a good example of a platform where you can define a logical architecture (business microservices or Bounded Contexts) that differs from the physical implementation, as introduced in the [Logical architecture versus physical architecture](#logical-architecture-versus-physical-architecture) section. For example, if you implement [Stateful Reliable Services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) in [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview), which are introduced in the section [Stateless versus stateful microservices](#stateless-versus-stateful-microservices) later, you can have a business microservice concept with multiple physical services. -As shown in Figure 4-26, and thinking from a logical/business microservice perspective, when implementing a Service Fabric Stateful Reliable Service, you usually will need to implement two tiers of services. The first is the back-end stateful reliable service, which handles multiple partitions. 
The second is the front-end service, or Gateway service, in charge of routing and data aggregation across multiple partitions or stateful service instances. That Gateway service also handles client-side communication with retry loops accessing the backend service used in conjunction with the Service Fabric [reverse proxy](https://docs.microsoft.com/azure/service-fabric/service-fabric-reverseproxy). +As shown in Figure 4-27, and thinking from a logical/business microservice perspective, when implementing a Service Fabric Stateful Reliable Service, you usually will need to implement two tiers of services. The first is the back-end stateful reliable service, which handles multiple partitions (each partition is a stateful service). The second is the front-end service, or Gateway service, in charge of routing and data aggregation across multiple partitions or stateful service instances. That Gateway service also handles client-side communication with retry loops accessing the backend service. +It is called a Gateway service if you implement a custom service; alternatively, you can use the out-of-the-box Service Fabric [Reverse Proxy service](https://docs.microsoft.com/azure/service-fabric/service-fabric-reverseproxy). -![](./media/image30.png) +![](./media/image31.png) -**Figure 4-26**. Business microservice with several stateful and stateless services in Service Fabric +**Figure 4-27**. Business microservice with several stateful service instances and a custom gateway front-end In any case, when you use Service Fabric Stateful Reliable Services, you also have a logical or business microservice (Bounded Context) that is usually composed of multiple physical services. Each of them, the Gateway service and the Partition service, could be implemented as ASP.NET Web API services, as shown in Figure 4-27.
-In Service Fabric, you can group and deploy groups of services as a [Service Fabric Application](https://docs.microsoft.com/azure/service-fabric/service-fabric-application-model), which is the unit of packaging and deployment for the orchestrator or cluster. Therefore, the Service Fabric Application could be mapped to this autonomous business and logical microservice boundary or Bounded Context, as well. +In Service Fabric, you can group and deploy groups of services as a [Service Fabric Application](https://docs.microsoft.com/azure/service-fabric/service-fabric-application-model), which is the unit of packaging and deployment for the orchestrator or cluster. Therefore, the Service Fabric Application could be mapped to this autonomous business and logical microservice boundary or Bounded Context, as well, so you could deploy these services autonomously. ## Service Fabric and containers -With regard to containers in Service Fabric, you can also deploy services in container images within a Service Fabric cluster. As Figure 4-27 shows, most of the time there will only be one container per service. +With regard to containers in Service Fabric, you can also deploy services in container images within a Service Fabric cluster. As Figure 4-28 shows, most of the time there will only be one container per service. -![](./media/image31.png) +![](./media/image32.png) -**Figure 4-27**. Business microservice with several services (containers) in Service Fabric +**Figure 4-28**. Business microservice with several services (containers) in Service Fabric However, so-called “sidecar” containers (two containers that must be deployed together as part of a logical service) are also possible in Service Fabric. The important thing is that a business microservice is the logical boundary around several cohesive elements. In many cases, it might be a single service with a single data model, but in some other cases you might have several physical services as well.
-As of this writing (April 2017), in Service Fabric you cannot deploy SF Reliable Stateful Services on containers—you can only deploy guest containers, stateless services, or actor services in containers. But note that you can mix services in processes and services in containers in the same Service Fabric application, as shown in Figure 4-28. +As of mid-2017, in Service Fabric you cannot deploy SF Reliable Stateful Services on containers—you can only deploy stateless services and actor services in containers. But note that you can mix services in processes and services in containers in the same Service Fabric application, as shown in Figure 4-29. -![](./media/image32.png) +![](./media/image33.png) -**Figure 4-28**. Business microservice mapped to a Service Fabric application with containers and stateful services +**Figure 4-29**. Business microservice mapped to a Service Fabric application with containers and stateful services -Support is also different depending on whether you are using Docker containers on Linux or Windows Containers. Support for containers in Service Fabric will be expanding in upcoming releases. For up-to-date news about container support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview) on the Azure website. +For up-to-date news about container support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview). ## Stateless versus stateful microservices -As mentioned earlier, each microservice (logical Bounded Context) must own its domain model (data and logic). In the case of stateless microservices, the databases will be external, employing relational options like SQL Server, or NoSQL options like MongoDB or Azure Document DB. +As mentioned earlier, each microservice (logical Bounded Context) must own its domain model (data and logic). 
In the case of stateless microservices, the databases will be external, employing relational options like SQL Server, or NoSQL options like MongoDB or Azure Cosmos DB. -But the services themselves can also be stateful, which means that the data resides within the microservice. This data might exist not just on the same server, but within the microservice process, in memory and persisted on hard drives and replicated to other nodes. Figure 4-29 shows the different approaches. +But the services themselves can also be stateful in Service Fabric, which means that the data resides within the microservice. This data might exist not just on the same server, but within the microservice process, in memory and persisted on hard drives and replicated to other nodes. Figure 4-30 shows the different approaches. -![](./media/image33.png) +![](./media/image34.png) -**Figure 4-29**. Stateless versus stateful microservices +**Figure 4-30**. Stateless versus stateful microservices A stateless approach is perfectly valid and is easier to implement than stateful microservices, since the approach is similar to traditional and well-known patterns. But stateless microservices impose latency between the process and data sources. They also involve more moving pieces when you are trying to improve performance with additional cache and queues. The result is that you can end up with complex architectures that have too many tiers. In contrast, [stateful microservices](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can excel in advanced scenarios, because there is no latency between the domain logic and data. Heavy data processing, gaming back ends, databases as a service, and other low-latency scenarios all benefit from stateful services, which enable local state for faster access. -Stateless and stateful services are complementary. 
For instance, you can see in Figure 4-20 that a stateful service could be split into multiple partitions. To access those partitions, you might need a stateless service acting as a gateway service that knows how to address each partition based on partition keys. +Stateless and stateful services are complementary. For instance, you can see in the diagram on the right in Figure 4-30 that a stateful service could be split into multiple partitions. To access those partitions, you might need a stateless service acting as a gateway service that knows how to address each partition based on partition keys. Stateful services do have drawbacks. They impose a level of complexity in order to scale out. Functionality that would usually be implemented by external database systems must be addressed for tasks such as data replication across stateful microservices and data partitioning. However, this is one of the areas where an orchestrator like [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-platform-architecture) with its [stateful reliable services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can help the most—by simplifying the development and lifecycle of stateful microservices using the [Reliable Services API](https://docs.microsoft.com/azure/service-fabric/service-fabric-work-with-reliable-collections) and [Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction). Other microservice frameworks that allow stateful services, that support the Actor pattern, and that improve fault tolerance and latency between business logic and data are Microsoft [Orleans](https://github.com/dotnet/orleans), from Microsoft Research, and [Akka.NET](http://getakka.net/). Both frameworks are currently improving their support for Docker. -Note that Docker containers are themselves stateless.
If you want to implement a stateful service, you need one of the additional prescriptive and higher-level frameworks noted earlier. However, at the time of this writing, stateful services in Azure Service Fabric are not supported as containers, only as plain microservices. Reliable services support in containers will be available in upcoming versions of Service Fabric. +Note that Docker containers are themselves stateless. If you want to implement a stateful service, you need one of the additional prescriptive and higher-level frameworks noted earlier. >[!div class="step-by-step"] [Previous](scalable-available-multi-container-microservice-applications.md) diff --git a/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md b/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md index 38f30923ed4dc..2af6c92feeb70 100644 --- a/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md +++ b/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md @@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article @@ -43,7 +43,7 @@ However, just because Visual Studio makes those steps automatic does not mean th ## Step 1. Start coding and create your initial application or service baseline -Developing a Docker application is similar to the way you develop an application without Docker. The difference is that while developing for Docker, you are deploying and testing your application or services running within Docker containers in your local environment (either a Linux VM or a Windows VM).
+Developing a Docker application is similar to the way you develop an application without Docker. The difference is that while developing for Docker, you are deploying and testing your application or services running within Docker containers in your local environment. The container can be either a Linux container or a Windows container. ### Set up your local environment with Visual Studio @@ -93,14 +93,14 @@ This action on a project (like an ASP.NET Web application or Web API service) ad You usually build a custom image for your container on top of a base image you can get from an official repository at the [Docker Hub](https://hub.docker.com/) registry. That is precisely what happens under the covers when you enable Docker support in Visual Studio. Your Dockerfile will use an existing aspnetcore image. -Earlier we explained which Docker images and repos you can use, depending on the framework and OS you have chosen. For instance, if you want to use ASP.NET Core and Linux, the image to use is microsoft/aspnetcore:1.1. Therefore, you just need to specify what base Docker image you will use for your container. You do that by adding FROM microsoft/aspnetcore:1.1 to your Dockerfile. This will be automatically performed by Visual Studio, but if you were to update the version, you update this value. +Earlier we explained which Docker images and repos you can use, depending on the framework and OS you have chosen. For instance, if you want to use ASP.NET Core (Linux or Windows), the image to use is microsoft/aspnetcore:2.0. Therefore, you just need to specify what base Docker image you will use for your container. You do that by adding FROM microsoft/aspnetcore:2.0 to your Dockerfile. This will be automatically performed by Visual Studio, but if you were to update the version, you update this value. 
Using an official .NET image repository from Docker Hub with a version number ensures that the same language features are available on all machines (including development, testing, and production). The following example shows a sample Dockerfile for an ASP.NET Core container. ``` -FROM microsoft/aspnetcore:1.1 +FROM microsoft/aspnetcore:2.0 ARG source @@ -113,7 +113,7 @@ COPY ${source:-obj/Docker/publish} . ENTRYPOINT ["dotnet", "MySingleContainerWebApp.dll"] ``` -In this case, the container is based on version 1.1 of the official ASP.NET Core Docker image for Linux; this is the setting FROM microsoft/aspnetcore:1.1. (For further details about this base image, see the [ASP.NET Core Docker Image](https://hub.docker.com/r/microsoft/aspnetcore/) page and the [.NET Core Docker Image](https://hub.docker.com/r/microsoft/dotnet/) page.) In the Dockerfile, you also need to instruct Docker to listen on the TCP port you will use at runtime (in this case, port 80, as configured with the EXPOSE setting). +In this case, the container is based on version 2.0 of the official ASP.NET Core Docker image (multi-arch for Linux and Windows). This is the setting `FROM microsoft/aspnetcore:2.0`. (For further details about this base image, see the [ASP.NET Core Docker Image](https://hub.docker.com/r/microsoft/aspnetcore/) page and the [.NET Core Docker Image](https://hub.docker.com/r/microsoft/dotnet/) page.) In the Dockerfile, you also need to instruct Docker to listen on the TCP port you will use at runtime (in this case, port 80, as configured with the EXPOSE setting). You can specify additional configuration settings in the Dockerfile, depending on the language and framework you are using. For instance, the ENTRYPOINT line with \["dotnet", "MySingleContainerWebApp.dll"\] tells Docker to run a .NET Core application. If you are using the SDK and the .NET Core CLI (dotnet CLI) to build and run the .NET application, this setting would be different.
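To illustrate that difference, the following Dockerfile is a hedged sketch of a multi-stage build that compiles and publishes the app inside the container with the dotnet CLI instead of copying pre-published output; the microsoft/aspnetcore-build image tag and the project name are illustrative assumptions, not the file that Visual Studio generates:

```
# Illustrative sketch only (not generated by Visual Studio)
# Build stage: the SDK/build image restores and publishes the app
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app

# Runtime stage: the smaller runtime-only image runs the published output
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app .
ENTRYPOINT ["dotnet", "MySingleContainerWebApp.dll"]
```

Note that multi-stage Dockerfiles like this one require Docker 17.05 or later.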
The bottom line is that the ENTRYPOINT line and other settings will be different depending on the language and platform you choose for your application. @@ -125,17 +125,27 @@ You can specify additional configuration settings in the Dockerfile, depending o - **Build your own image**. In the official Docker documentation. [*https://docs.docker.com/engine/tutorials/dockerimages/*](https://docs.docker.com/engine/tutorials/dockerimages/) -### Using multi-platform image repositories +### Using multi-arch image repositories -A single repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo to cover multiple platforms. For example, the [microsoft/dotnet](https://hub.docker.com/r/microsoft/aspnetcore/) repository available in the Docker Hub registry provides support for Linux and Windows Nano Server by using the same repo name with different tags, as shown in the following examples: +A single repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo to cover multiple platforms (that is, Linux and Windows). For example, the [microsoft/dotnet](https://hub.docker.com/r/microsoft/aspnetcore/) repository available in the Docker Hub registry provides support for Linux and Windows Nano Server by using the same repo name. -- microsoft/dotnet:1.1-runtime - .NET Core 1.1 runtime-only on Linux Debian +If you specify a tag that explicitly targets a platform, as in the following cases: -- microsoft/dotnet:1.1-runtime-nanoserver - .NET Core 1.1 runtime-only on Windows Nano Server +- **microsoft/aspnetcore:2.0.0-jessie** -In the future, it will be possible to use the same repo name and tag targeting multiple operating systems.
That way, when you pull an image from a Windows host, it will pull the Windows variant, and pulling the same image name from a Linux host will pull the Linux variant. + ASP.NET Core 2.0 runtime-only on Linux + +- **microsoft/aspnetcore:2.0.0-nanoserver** + + ASP.NET Core 2.0 runtime-only on Windows Nano Server + +But, starting in mid-2017, if you specify the same image name, even with the same tag, the new multi-arch images (like the aspnetcore image, which supports multi-arch) will use the Linux or Windows version depending on the Docker host OS you are deploying to, as shown in the following example: + +- **microsoft/aspnetcore:2.0** + + Multi-arch: ASP.NET Core 2.0 runtime-only on Linux or Windows Nano Server, depending on the Docker host OS + +This way, when you pull an image from a Windows host, it will pull the Windows variant, and pulling the same image name from a Linux host will pull the Linux variant. ### Option B: Creating your base image from scratch @@ -143,6 +153,8 @@ You can create your own Docker base image from scratch. This scenario is not rec ### Additional resources +- **Multi-arch .NET Core images**. +https://github.com/dotnet/announcements/issues/14 - **Create a base image**. Official Docker documentation.
[*https://docs.docker.com/engine/userguide/eng-image/baseimages/*](https://docs.docker.com/engine/userguide/eng-image/baseimages/) @@ -187,10 +199,10 @@ The [docker-compose.yml](https://docs.docker.com/compose/compose-file/) file let To use a docker-compose.yml file, you need to create the file in your main or root solution folder, with content similar to that in the following example: ```yml -version: '2' +version: '3' services: - + webmvc: image: eshop/web environment: @@ -204,16 +216,17 @@ services: catalog.api: image: eshop/catalog.api - environment: ConnectionString=Server=catalogdata;Port=5432;Database=postgres;… + environment: + - ConnectionString=Server=sql.data;Database=CatalogDB;… ports: - "81:80" depends_on: - - postgres.data + - sql.data ordering.api: image: eshop/ordering.api environment: - - ConnectionString=Server=ordering.data;Database=OrderingDb;… + - ConnectionString=Server=sql.data;Database=OrderingDb;… ports: - "82:80" extra_hosts: @@ -229,25 +242,21 @@ services: ports: - "5433:1433" - postgres.data: - image: postgres:latest - environment: - POSTGRES_PASSWORD: tempPwd ``` Note that this docker-compose.yml file is a simplified and merged version. It contains static configuration data for each container (like the name of the custom image), which always applies, plus configuration information that might depend on the deployment environment, like the connection string. In later sections, you will learn how you can split the docker-compose.yml configuration into multiple docker-compose files and override values depending on the environment and execution type (debug or release). -The docker-compose.yml file example defines five services: the webmvc service (a web application), two microservices (catalog.api and ordering.api), and two data source containers (sql.data based on SQL Server for Linux running as a container and postgres.data as a Postgres database). Each service is deployed as a container, so a Docker image is required for each. 
+The docker-compose.yml file example defines four services: the webmvc service (a web application), two microservices (catalog.api and ordering.api), and one data source container, sql.data, based on SQL Server for Linux running as a container. Each service is deployed as a container, so a Docker image is required for each. The docker-compose.yml file specifies not only what containers are being used, but how they are individually configured. For instance, the webmvc container definition in the .yml file: -- Uses the pre-built eshop/web:latest image. However, you could also configure the image to be built as part of the docker-compose execution with an additional configuration based on a build: section in the docker-compose file. +- Uses a pre-built eshop/web:latest image. However, you could also configure the image to be built as part of the docker-compose execution with an additional configuration based on a build: section in the docker-compose file. - Initializes two environment variables (CatalogUrl and OrderingUrl). - Forwards the exposed port 80 on the container to the external port 80 on the host machine. -- Links the web service to the catalog and ordering service with the depends\_on setting. This causes the service to wait until those services are started. +- Links the web app to the catalog and ordering service with the depends\_on setting. This causes the service to wait until those services are started. We will revisit the docker-compose.yml file in a later section when we cover how to implement microservices and multi-container apps. @@ -313,7 +322,7 @@ Running a multi-container application using Visual Studio 2017 cannot get simple As mentioned before, each time you add Docker solution support to a project within a solution, that project is configured in the global (solution-level) docker-compose.yml file, which lets you run or debug the whole solution at once.
Visual Studio will start one container for each project that has Docker solution support enabled, and perform all the internal steps for you (dotnet publish, docker build, etc.). -The important point here is that, as shown in Figure 5-12, in Visual Studio 2017 there is an additional **Docker** command under the F5 key. This option lets you run or debug a multi-container application by running all the containers that are defined in the docker-compose.yml files at the solution level. The ability to debug multiple-container solutions means that you can set several breakpoints, each breakpoint in a different project (container), and while debugging from Visual Studio you will stop at breakpoints defined in different projects and running on different containers. +The important point here is that, as shown in Figure 5-12, in Visual Studio 2017 there is an additional **Docker** command for the F5 key action. This option lets you run or debug a multi-container application by running all the containers that are defined in the docker-compose.yml files at the solution level. The ability to debug multiple-container solutions means that you can set several breakpoints, each breakpoint in a different project (container), and while debugging from Visual Studio you will stop at breakpoints defined in different projects and running on different containers. 
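The solution-level docker-compose files that Visual Studio uses can also be combined manually from the command line with `docker-compose -f docker-compose.yml -f docker-compose.override.yml up`. As a hedged sketch (the service names follow the earlier example, and the environment values are illustrative assumptions), an override file typically carries only the environment-dependent settings, while the base file keeps the static configuration:

```yml
version: '3'

services:
  webmvc:
    environment:
      # Development-only values; the base docker-compose.yml keeps the static settings
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80:80"

  catalog.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "81:80"
```

When multiple files are passed to docker-compose, values in later files override or extend values from earlier ones, which is similar to what Visual Studio orchestrates for debug versus release runs.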
![](./media/image16.png) diff --git a/docs/standard/microservices-architecture/docker-application-development-process/index.md b/docs/standard/microservices-architecture/docker-application-development-process/index.md index 09b79e0260f09..3eda39a9e50f6 100644 --- a/docs/standard/microservices-architecture/docker-application-development-process/index.md +++ b/docs/standard/microservices-architecture/docker-application-development-process/index.md @@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article @@ -19,16 +19,18 @@ ms.topic: article Whether you prefer a full and powerful IDE or a lightweight and agile editor, Microsoft has tools that you can use for developing Docker applications. -**Visual Studio with Tools for Docker**. If you are using Visual Studio 2015, you can install the [Visual Studio Tools for Docker](https://marketplace.visualstudio.com/items?itemName=MicrosoftCloudExplorer.VisualStudioToolsforDocker-Preview) add-in. If you are using Visual Studio 2017, tools for Docker are already built-in. In either case, the tools for Docker let you develop, run, and validate your applications directly in the target Docker environment. You can press F5 to run and debug your application (single container or multiple containers) directly into a Docker host, or press CTRL+F5 to edit and refresh your application without having to rebuild the container. This is the simplest and most powerful choice for Windows developers targeting Docker containers for Linux or Windows. +**Visual Studio (for Windows)**. To develop Docker-based applications, use Visual Studio 2017 or a later version, which comes with tools for Docker already built in.
The tools for Docker let you develop, run, and validate your applications directly in the target Docker environment. You can press F5 to run and debug your application (single container or multiple containers) directly in a Docker host, or press CTRL+F5 to edit and refresh your application without having to rebuild the container. This is the most powerful development choice for Docker-based apps. + +**Visual Studio for Mac**. It is an IDE, the evolution of Xamarin Studio, that runs on macOS and supports Docker-based application development. It should be the preferred choice for developers working on Mac machines who also want to use a powerful IDE. **Visual Studio Code and Docker CLI**. If you prefer a lightweight and cross-platform editor that supports any development language, you can use Microsoft Visual Studio Code (VS Code) and the Docker CLI. This is a cross-platform development approach for Mac, Linux, and Windows. -These products provide a simple but robust experience that streamlines the developer workflow. By installing [Docker Community Edition (CE)](https://www.docker.com/community-edition) tools, you can use a single Docker CLI to build apps for both Windows and Linux. Additionally, Visual Studio Code supports extensions for Docker such as IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor. +By installing [Docker Community Edition (CE)](https://www.docker.com/community-edition) tools, you can use a single Docker CLI to build apps for both Windows and Linux. Additionally, Visual Studio Code supports extensions for Docker such as IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor.
### Additional resources - **Visual Studio Tools for Docker** - [*https://visualstudiogallery.msdn.microsoft.com/0f5b2caa-ea00-41c8-b8a2-058c7da0b3e4*](https://visualstudiogallery.msdn.microsoft.com/0f5b2caa-ea00-41c8-b8a2-058c7da0b3e4) + [*https://docs.microsoft.com/en-us/aspnet/core/publishing/visual-studio-tools-for-docker*](https://docs.microsoft.com/en-us/aspnet/core/publishing/visual-studio-tools-for-docker) - **Visual Studio Code**. Official site. [*https://code.visualstudio.com/download*](https://code.visualstudio.com/download) diff --git a/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md b/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md index 2b8d70d28f14d..6c5d2cd2d0431 100644 --- a/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md +++ b/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md @@ -11,9 +11,9 @@ ms.topic: article --- # Using NoSQL databases as a persistence infrastructure -When you use NoSQL databases for your infrastructure data tier, you typically do not use an ORM like Entity Framework Core. Instead you use the API provided by the NoSQL engine, such as Azure Document DB, MongoDB, Cassandra, RavenDB, CouchDB, or Azure Storage Tables. +When you use NoSQL databases for your infrastructure data tier, you typically do not use an ORM like Entity Framework Core. Instead you use the API provided by the NoSQL engine, such as Azure Cosmos DB, MongoDB, Cassandra, RavenDB, CouchDB, or Azure Storage Tables. 
-However, when you use a NoSQL database, especially a document-oriented database like Azure Document DB, CouchDB, or RavenDB, the way you design your model with DDD aggregates is partially similar to how you can do it in EF Core, in regards to the identification of aggregate roots, child entity classes, and value object classes. But, ultimately, the database selection will impact in your design. +However, when you use a NoSQL database, especially a document-oriented database like Azure Cosmos DB, CouchDB, or RavenDB, the way you design your model with DDD aggregates is partially similar to how you can do it in EF Core, in regards to the identification of aggregate roots, child entity classes, and value object classes. But, ultimately, the database selection will impact in your design. When you use a document-oriented database, you implement an aggregate as a single document, serialized in JSON or another format. However, the use of the database is transparent from a domain model code point of view. When using a NoSQL database, you still are using entity classes and aggregate root classes, but with more flexibility than when using EF Core because the persistence is not relational. @@ -50,7 +50,7 @@ For instance, the following JSON code is a sample implementation of an order agg } ``` -When you use a C\# model to implement the aggregate to be used by something like the Azure Document DB SDK, the aggregate is similar to the C\# POCO classes used with EF Core. The difference is in the way to use them from the application and infrastructure layers, as in the following code: +When you use a C\# model to implement the aggregate to be used by something like the Azure Cosmos DB SDK, the aggregate is similar to the C\# POCO classes used with EF Core. 
The difference is in the way to use them from the application and infrastructure layers, as in the following code: ```csharp // C# EXAMPLE OF AN ORDER AGGREGATE BEING PERSISTED WITH DOCUMENTDB API @@ -103,7 +103,7 @@ orderAggregate.AddOrderItem(orderItem2); // *** End of Domain Model Code *** //... -// *** Infrastructure Code using Document DB Client API *** +// *** Infrastructure Code using Cosmos DB Client API *** Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName, collectionName); diff --git a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md index cd56670b6b238..21c98d4fd022c 100644 --- a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md +++ b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md @@ -53,11 +53,11 @@ What should the application deployment architecture be? The specifications for t In this approach, each service (container) implements a set of cohesive and narrowly related functions. For example, an application might consist of services such as the catalog service, ordering service, basket service, user profile service, etc. -Microservices communicate using protocols such as HTTP (REST), asynchronously whenever possible, especially when propagating updates. +Microservices communicate using protocols such as HTTP (REST), but also asynchronously (for example, AMQP) whenever possible, especially when propagating updates with integration events. Microservices are developed and deployed as containers independently of one another. This means that a development team can be developing and deploying a certain microservice without impacting other subsystems.
-Each microservice has its own database, allowing it to be fully decoupled from other microservices. When necessary, consistency between databases from different microservices is achieved using application-level events (through a logical event bus), as handled in Command and Query Responsibility Segregation (CQRS). Because of that, the business constraints must embrace eventual consistency between the multiple microservices and related databases.
+Each microservice has its own database, allowing it to be fully decoupled from other microservices. When necessary, consistency between databases from different microservices is achieved using application-level integration events (through a logical event bus), as handled in Command and Query Responsibility Segregation (CQRS). Because of that, the business constraints must embrace eventual consistency between the multiple microservices and related databases.

### eShopOnContainers: A reference application for .NET Core and microservices deployed using containers

@@ -67,7 +67,7 @@ The application consists of multiple subsystems, including several store UI fron

![](./media/image1.png)

-**Figure 8-1**. The eShopOnContainers reference application, showing the direct client-to-microservice communication and the event bus
+**Figure 8-1**. The eShopOnContainers reference application, showing direct client-to-microservice communication and the event bus

**Hosting environment**. In Figure 8-1, you see several containers deployed within a single Docker host. That would be the case when deploying to a single Docker host with the docker-compose up command. However, if you are using an orchestrator or container cluster, each container could be running in a different host (node), and any node could be running any number of containers, as we explained earlier in the architecture section.
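The application-level integration events mentioned above can be sketched in C# as follows. This is a minimal illustration under assumed names (`IntegrationEvent`, `IEventBus`, and `ProductPriceChangedIntegrationEvent` are hypothetical here, not a specific library API):

```csharp
using System;

// Hypothetical base class: every integration event carries an identity
// and a creation timestamp so subscribers can deduplicate and order them.
public abstract class IntegrationEvent
{
    public Guid Id { get; } = Guid.NewGuid();
    public DateTime CreationDate { get; } = DateTime.UtcNow;
}

// Example event published by a catalog microservice when a price changes.
public class ProductPriceChangedIntegrationEvent : IntegrationEvent
{
    public int ProductId { get; }
    public decimal NewPrice { get; }

    public ProductPriceChangedIntegrationEvent(int productId, decimal newPrice)
    {
        ProductId = productId;
        NewPrice = newPrice;
    }
}

// Abstraction over the logical event bus (RabbitMQ, Azure Service Bus, etc.).
public interface IEventBus
{
    void Publish(IntegrationEvent @event);
}
```

After committing its local transaction, the publishing microservice would call something like `eventBus.Publish(new ProductPriceChangedIntegrationEvent(productId, newPrice))`, and each subscriber would update its own database when handling the event, which is how eventual consistency between the services' databases is reached.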
@@ -79,9 +79,12 @@ The application consists of multiple subsystems, including several store UI fron

The application is deployed as a set of microservices in the form of containers. Client apps can communicate with those containers as well as communicate between microservices. As mentioned, this initial architecture is using a direct client-to-microservice communication architecture, which means that a client application can make requests to each of the microservices directly. Each microservice has a public endpoint like https://servicename.applicationname.companyname. If required, each microservice can use a different TCP port. In production, that URL would map to the microservices’ load balancer, which distributes requests across the available microservice instances.

-As explained in the architecture section of this guide, the direct client-to-microservice communication architecture can have drawbacks when you are building a large and complex microservice-based application. But it can be good enough for a small application, such as in the eShopOnContainers application, where the goal is to focus on the microservices deployed as Docker containers.
+**Important note on API Gateway vs. Direct Communication in eShopOnContainers.** As explained in the architecture section of this guide, the direct client-to-microservice communication architecture can have drawbacks when you are building a large and complex microservice-based application. But it can be good enough for a small application, such as the eShopOnContainers application, where the goal is to focus on a simpler, getting-started application based on Docker containers, and where we didn’t want to create a single monolithic API Gateway that could impact the microservices’ development autonomy.

-However, if you are going to design a large microservice-based application with dozens of microservices, we strongly recommend that you consider the API Gateway pattern, as we explained in the architecture section.
+But, if you are going to design a large microservice-based application with dozens of microservices, we strongly recommend that you consider the API Gateway pattern, as we explained in the architecture section.
+This architectural decision could be revisited when you design production-ready applications and purpose-built facades for remote clients. Having multiple custom API Gateways, depending on the client apps' form factor, can provide benefits with regard to different data aggregation per client app. In addition, you can hide internal microservices or APIs from the client apps and handle authorization in that single tier.
+
+However, and as mentioned, beware of large and monolithic API Gateways that might kill your microservices' development autonomy.

### Data sovereignty per microservice

diff --git a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md
index 610147e8817f4..78608c72e5ebf 100644
--- a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md
+++ b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md
@@ -98,7 +98,7 @@ As mentioned earlier in the architecture section, you can have several approache

- Using the [Outbox pattern](http://gistlabs.com/2014/05/the-outbox/). This is a transactional table to store the integration events (extending the local transaction).

-For this scenario, using the full Event Sourcing (ES) pattern is one of the best approaches, if not *the* best. However, in many application scenarios, you might not be able to implement a full ES system. ES means storing only domain events in your transactional database, instead of storing current state data.
Storing only domain events can have great benefits, such as having the history of your system available and being able to determine the state of your system at any moment in the past. However, implementing a full ES system requires you to rearchitect most of your system and introduces many other complexities and requirements. For example, you would want to use a database specifically made for event sourcing, such as [Event Store](https://geteventstore.com/), or a document-oriented database such as Azure Document DB, MongoDB, Cassandra, CouchDB, or RavenDB. ES is a great approach for this problem, but not the easiest solution unless you are already familiar with event sourcing. +For this scenario, using the full Event Sourcing (ES) pattern is one of the best approaches, if not *the* best. However, in many application scenarios, you might not be able to implement a full ES system. ES means storing only domain events in your transactional database, instead of storing current state data. Storing only domain events can have great benefits, such as having the history of your system available and being able to determine the state of your system at any moment in the past. However, implementing a full ES system requires you to rearchitect most of your system and introduces many other complexities and requirements. For example, you would want to use a database specifically made for event sourcing, such as [Event Store](https://geteventstore.com/), or a document-oriented database such as Azure Cosmos DB, MongoDB, Cassandra, CouchDB, or RavenDB. ES is a great approach for this problem, but not the easiest solution unless you are already familiar with event sourcing. The option to use transaction log mining initially looks very transparent. However, to use this approach, the microservice has to be coupled to your RDBMS transaction log, such as the SQL Server transaction log. This is probably not desirable. 
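The Outbox pattern listed among the approaches above boils down to one transactional table. As a rough sketch (the table and column names here are assumptions for illustration, not the schema used by eShopOnContainers):

```sql
-- Hypothetical outbox table. The row is inserted in the same local
-- transaction that updates the business entities; a separate publisher
-- process later reads pending rows and forwards them to the event bus.
CREATE TABLE IntegrationEventOutbox (
    EventId       UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    EventTypeName NVARCHAR(200)    NOT NULL,
    Content       NVARCHAR(MAX)    NOT NULL, -- serialized event payload (JSON)
    CreationTime  DATETIME2        NOT NULL,
    State         INT              NOT NULL  -- e.g. 0 = NotPublished, 1 = Published
);
```

Because the event row commits or rolls back atomically with the business data, no update can be lost before its event is eventually published.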
Another drawback is that the low-level updates recorded in the transaction log might not be at the same level as your high-level integration events. If so, the process of reverse-engineering those transaction log operations can be difficult. diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md index 654031a915bd9..fef6980f0c4a5 100644 --- a/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md +++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md @@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 07/13/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article @@ -43,17 +43,16 @@ There are several features of your application that affect your decision. You sh - Your .NET implementation choice is *.NET Framework* based on framework dependency. - Your container platform choice must be *Windows containers* because of the .NET Framework dependency. * Your application uses **SignalR services**. - - Your .NET implementation choice is *.NET Framework*, or *.NET Core (future release)*. - - Your container platform choice must be *Windows containers* because of the .NET Framework dependency. - - When **SignalR services** run on *.NET Core*, you can also choose *Linux containers*. + - Your .NET implementation choice is *.NET Framework*, or *.NET Core 2.1 (when released) or later*. + - Your container platform choice must be *Windows containers* if you chose the .NET Framework dependency. + - When **SignalR services** run on *.NET Core*, you can use *Linux containers or Windows Containers*. 
* Your application uses **WCF, WF, and other legacy frameworks**. - Your .NET implementation choice is *.NET Framework*, or *.NET Core (in the roadmap for a future release)*. - Your container platform choice must be *Windows containers* because of the .NET Framework dependency. - - When the dependency runs on *.NET Core*, you can also choose *Linux containers*. * Your application involves **Consumption of Azure services**. - Your .NET implementation choice is *.NET Framework*, or *.NET Core (eventually all Azure services will provide client SDKs for .NET Core)*. - - Your container platform choice must be *Windows containers* because of the .NET Framework dependency. - - When the dependency runs on *.NET Core*, you can also choose *Linux containers*. + - Your container platform choice must be *Windows containers* if you use .NET Framework client APIs. + - If you use client APIs available for *.NET Core*, you can also choose between *Linux containers and Windows containers*. >[!div class="step-by-step"] [Previous] (net-framework-container-scenarios.md) diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md index f5522a35ba74c..bd1128ad7d435 100644 --- a/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md +++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md @@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article @@ -13,7 +13,7 @@ ms.topic: article This section provides a summary of when to choose .NET Core or .NET Framework. 
We provide more details about these choices in the sections that follow.

-You should use .NET Core for your containerized Docker server application when:
+You should use .NET Core, with Linux or Windows Containers, for your containerized Docker server application when:

- You have cross-platform needs. For example, you want to use both Linux and Windows Containers.

@@ -25,7 +25,7 @@ In short, when you create new containerized .NET applications, you should consid

An additional benefit of using .NET Core is that you can run side by side .NET versions for applications within the same machine. This benefit is more important for servers or VMs that do not use containers, because containers isolate the versions of .NET that the app needs. (As long as they are compatible with the underlying OS.)

-You should use .NET Framework for your containerized Docker server application when:
+You should use .NET Framework, with Windows Containers, for your containerized Docker server application when:

- Your application currently uses .NET Framework and has strong dependencies on Windows.

@@ -33,7 +33,15 @@ You should use .NET Framework for your containerized Docker server application w

- You need to use third-party .NET libraries or NuGet packages that are not available for .NET Core.

-Using .NET Framework on Docker can improve your deployment experiences by minimizing deployment issues. This "lift and shift" scenario is important for "dockerizing" legacy applications (at least, those that are not based on microservices).
+Using .NET Framework on Docker can improve your deployment experiences by minimizing deployment issues. This [*"lift and shift" scenario*](https://aka.ms/liftandshiftwithcontainersebook) is important for containerizing legacy applications that were originally developed with the traditional .NET Framework, like ASP.NET Web Forms, ASP.NET MVC web apps, or WCF (Windows Communication Foundation) services.
+ +### Additional resources + +- **eBook: Modernize existing .NET Framework applications with Azure and Windows Containers** + [*https://aka.ms/liftandshiftwithcontainersebook*](https://aka.ms/liftandshiftwithcontainersebook) + +- **Sample apps: Modernization of legacy ASP.NET web apps by using Windows Containers** + [*https://aka.ms/eshopmodernizing*](https://aka.ms/eshopmodernizing) >[!div class="step-by-step"] diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md index 012240ffc58c6..632d04f1ffcc9 100644 --- a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md +++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md @@ -4,14 +4,18 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article --- # What OS to target with .NET containers -Given the diversity of operating systems supported by Docker and the differences between .NET Framework and .NET Core, you should target a specific OS and specific versions depending on the framework you are using. For instance, in Linux there are many distros available, but only few of them are supported in the official .NET Docker images (like Debian and Alpine). For Windows you can use Windows Server Core or Nano Server; these versions of Windows provide different characteristics (like IIS versus a self-hosted web server like Kestrel) that might be needed by .NET Framework or NET Core. 
+Given the diversity of operating systems supported by Docker and the differences between .NET Framework and .NET Core, you should target a specific OS and specific versions depending on the framework you are using. + +For Windows, you can use Windows Server Core or Windows Nano Server. These Windows versions provide different characteristics (IIS in Windows Server Core versus a self-hosted web server like Kestrel in Nano Server) that might be needed by .NET Framework or .NET Core, respectively. + +For Linux, multiple distros are available and supported in official .NET Docker images (like Debian). In Figure 3-1 you can see the possible OS version depending on the .NET framework used. @@ -23,11 +27,19 @@ You can also create your own Docker image in cases where you want to use a diffe When you add the image name to your Dockerfile file, you can select the operating system and version depending on the tag you use, as in the following examples: -- microsoft/dotnet:**1.1-runtime** - .NET Core 1.1 runtime-only on Linux +- microsoft/**dotnet:2.0.0-runtime-jessie** + + .NET Core 2.0 runtime-only on Linux + +- microsoft/**dotnet:2.0.0-runtime-nanoserver-1709** + + .NET Core 2.0 runtime-only on Windows Nano Server (Windows Server 2016 Fall Creators Update version 1709) + +- microsoft/**aspnetcore:2.0** + + .NET Core 2.0 multi-architecture: Supports Linux and Windows Nano Server depending on the Docker host. + The aspnetcore image has a few optimizations for ASP.NET Core. 
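As a sketch of how one of these tags is consumed, a Dockerfile for an ASP.NET Core 2.0 service could start from the multi-arch image. The project name `MyService` and the `publish` folder are assumptions for illustration:

```Dockerfile
# Multi-arch base image: resolves to the Linux or the Windows Nano Server
# variant depending on the Docker host that builds and runs the image.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
# Copy the already-published output of the hypothetical MyService project.
COPY ./publish .
ENTRYPOINT ["dotnet", "MyService.dll"]
```

Changing only the `FROM` tag (for example, to `microsoft/dotnet:2.0.0-runtime-nanoserver-1709`) is what pins the image to a specific OS and version.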
-- microsoft/dotnet:**1.1-runtime-nanoserver** - .NET Core 1.1 runtime-only on Nano Server diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md index 0462aab22cac2..503ca8c5b9d91 100644 --- a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md +++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md @@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article @@ -23,7 +23,11 @@ Clearly, if your goal is to have an application (web application or service) tha .NET Core also supports macOS as a development platform. However, when you deploy containers to a Docker host, that host must (currently) be based on Linux or Windows. For example, in a development environment, you could use a Linux VM running on a Mac. -[Visual Studio](https://www.visualstudio.com/) provides an integrated development environment (IDE) for Windows. [Visual Studio for Mac](https://www.visualstudio.com/vs/visual-studio-mac/) is an evolution of Xamarin Studio running in macOS, but as of the time of this writing, it still does not support Docker development. You can also use [Visual Studio Code](https://code.visualstudio.com/) (VS Code) on macOS, Linux, and Windows. VS Code fully supports .NET Core, including IntelliSense and debugging. Because VS Code is a lightweight editor, you can use it to develop containerized apps on the Mac in conjunction with the Docker CLI and the .NET Core CLI (dotnet cli). 
You can also target .NET Core with most third-party editors like Sublime, Emacs, vi, and the open-source OmniSharp project, which also provides IntelliSense support. In addition to the IDEs and editors, you can use the .NET Core command-line tools (dotnet CLI) for all supported platforms.
+[Visual Studio](https://www.visualstudio.com/) provides an integrated development environment (IDE) for Windows and supports Docker development.
+
+[Visual Studio for Mac](https://www.visualstudio.com/vs/visual-studio-mac/) is an IDE, an evolution of Xamarin Studio, that runs on macOS and has supported Docker development since mid-2017.
+
+You can also use [Visual Studio Code](https://code.visualstudio.com/) (VS Code) on macOS, Linux, and Windows. VS Code fully supports .NET Core, including IntelliSense and debugging. Because VS Code is a lightweight editor, you can use it to develop containerized apps on the Mac in conjunction with the Docker CLI and the .NET Core CLI (dotnet cli). You can also target .NET Core with most third-party editors like Sublime Text, Emacs, vi, and the open-source OmniSharp project, which provides IntelliSense support for .NET languages. In addition to the IDEs and editors, you can use the [.NET Core command-line interface (CLI) tools](https://docs.microsoft.com/dotnet/core/tools/?tabs=netcore2x) for all supported platforms.

## Using containers for new ("green-field") projects

@@ -31,20 +35,14 @@ Containers are commonly used in conjunction with a microservices architecture, a

## Creating and deploying microservices on containers

-You could use the full .NET framework for microservices-based applications (without containers) when using plain processes, because .NET Framework is already installed and shared across processes. However, if you are using containers, the image for .NET Framework (Windows Server Core plus the full .NET Framework within each image) is probably too heavy for a microservices-on-containers approach.
+You could use the traditional .NET Framework for building microservices-based applications (without containers) by using plain processes. Because the .NET Framework is already installed and shared across processes, those processes are light and fast to start. However, if you are using containers, the image for the traditional .NET Framework is also based on Windows Server Core, which makes it too heavy for a microservices-on-containers approach.

-In contrast, .NET Core is the best candidate if you are embracing a microservices-oriented system that is based on containers, because .NET Core is lightweight. In addition, its related container images, either the Linux image or the Windows Nano image, are lean and small.
+In contrast, .NET Core is the best candidate if you are embracing a microservices-oriented system that is based on containers, because .NET Core is lightweight. In addition, its related container images, either the Linux image or the Windows Nano image, are lean and small, making containers light and fast to start.

A microservice is meant to be as small as possible: to be light when spinning up, to have a small footprint, to have a small Bounded Context, to represent a small area of concerns, and to be able to start and stop fast. For those requirements, you will want to use small and fast-to-instantiate container images like the .NET Core container image.

A microservices architecture also allows you to mix technologies across a service boundary. This enables a gradual migration to .NET Core for new microservices that work in conjunction with other microservices or with services developed with Node.js, Python, Java, GoLang, or other technologies.

-There are many orchestrators you can use when targeting microservices and containers.
For large and complex microservice systems being deployed as Linux containers, [Azure Container Service](https://azure.microsoft.com/services/container-service/) has multiple orchestrator offerings (Mesos DC/OS, Kubernetes, and Docker Swarm), which makes it a good choice. You can also use Azure Service Fabric for Linux, which supports Docker Linux containers. (At the time of this writing, this offering was still in [preview](https://docs.microsoft.com/azure/service-fabric/service-fabric-linux-overview). Check the [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) for the latest status.) - -For large and complex microservice systems being deployed as Windows Containers, most orchestrators are currently in a less mature state. However, you currently can use Azure Service Fabric for Windows Containers, as well as Azure Container Service. Azure Service Fabric is well established for running mission-critical Windows applications. - -All these platforms support .NET Core and make them ideal for hosting your microservices. - ## Deploying high density in scalable systems When your container-based system needs the best possible density, granularity, and performance, .NET Core and ASP.NET Core are your best options. ASP.NET Core is up to ten times faster than ASP.NET in the full .NET Framework, and it leads other popular industry technologies for microservices, such as Java servlets, Go, and Node.js. 
diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md index bb2005ea088c8..5fdd7af620889 100644 --- a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md +++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md @@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications keywords: Docker, Microservices, ASP.NET, Container author: CESARDELATORRE ms.author: wiwagn -ms.date: 05/26/2017 +ms.date: 10/18/2017 ms.prod: .net-core ms.technology: dotnet-docker ms.topic: article @@ -13,43 +13,37 @@ ms.topic: article While .NET Core offers significant benefits for new applications and application patterns, .NET Framework will continue to be a good choice for many existing scenarios. -## Migrating existing applications directly to a Docker container +## Migrating existing applications directly to a Windows Server container You might want to use Docker containers just to simplify deployment, even if you are not creating microservices. For example, perhaps you want to improve your DevOps workflow with Docker—containers can give you better isolated test environments and can also eliminate deployment issues caused by missing dependencies when you move to a production environment. In cases like these, even if you are deploying a monolithic application, it makes sense to use Docker and Windows Containers for your current .NET Framework applications. -In most cases, you will not need to migrate your existing applications to .NET Core; you can use Docker containers that include the full .NET Framework. However, a recommended approach is to use .NET Core as you extend an existing application, such as writing a new service in ASP.NET Core. 
+In most cases for this scenario, you will not need to migrate your existing applications to .NET Core; you can use Docker containers that include the full .NET Framework. However, a recommended approach is to use .NET Core as you extend an existing application, such as writing a new service in ASP.NET Core.

## Using third-party .NET libraries or NuGet packages not available for .NET Core

-Third-party libraries are quickly embracing the [.NET Standard](https://docs.microsoft.com/dotnet/standard/net-standard), which enables code sharing across all .NET flavors, including .NET Core. With the .NET Standard version 2.0, this will be even easier, because the .NET Core API surface will become significantly bigger. Your .NET Core applications will be able to directly use existing .NET Framework libraries.
+Third-party libraries are quickly embracing the [.NET Standard](https://docs.microsoft.com/dotnet/standard/net-standard), which enables code sharing across all .NET flavors, including .NET Core. With .NET Standard 2.0 and beyond, the API surface compatibility across different frameworks has become significantly larger, and .NET Core 2.0 applications can also directly reference existing .NET Framework libraries (see the [compat shim](https://github.com/dotnet/standard/blob/master/docs/faq.md#how-does-net-standard-versioning-work)).

-Be aware that whenever you run a library or process based on the full .NET Framework, because of its dependencies on Windows, the container image used for that application or service will need to be based on a Windows Container image.
+However, even with that significant progress since .NET Standard 2.0 and .NET Core 2.0, there might be cases where certain NuGet packages require Windows to run and might not support .NET Core. If those packages are critical for your application, then you will need to use .NET Framework on Windows Containers.
## Using.NET technologies not available for .NET Core -Some .NET Framework technologies are not available in the current version of .NET Core (version 1.1 as of this writing). Some of them will be available in later .NET Core releases (.NET Core 2.0), but others do not apply to the new application patterns targeted by .NET Core and might never be available. +Some .NET Framework technologies are not available in the current version of .NET Core (version 2.0 as of this writing). Some of them will be available in later .NET Core releases (.NET Core 2.x), but others do not apply to the new application patterns targeted by .NET Core and might never be available. -The following list shows most of the technologies that are not available in .NET Core 1.1: +The following list shows most of the technologies that are not available in .NET Core 2.0: - ASP.NET Web Forms. This technology is only available on .NET Framework. Currently there are no plans to bring ASP.NET Web Forms to .NET Core. -- ASP.NET Web Pages. This technology is slated to be included in a future .NET Core release, as explained in the [.NET Core roadmap.](https://github.com/aspnet/Home/wiki/Roadmap) - -- ASP.NET SignalR. As of the .NET Core 1.1 release (November 2016), ASP.NET SignalR is not available for ASP.NET Core (neither client nor server). There are plans to include it in a future release, as explained in the .NET Core roadmap. A preview is available at the [Server-side](https://github.com/aspnet/SignalR-Server) and [Client Library](https://github.com/aspnet/SignalR-Client-Net) GitHub repositories. - -- WCF services. Even when a [WCF-Client library](https://github.com/dotnet/wcf) is available to consume WCF services from .NET Core (as of early 2017), the WCF server implementation is only available on .NET Framework. This scenario is being considered for future releases of .NET Core. +- WCF services. 
Even though a [WCF client library](https://github.com/dotnet/wcf) is available to consume WCF services from .NET Core, as of mid-2017 the WCF server implementation is only available on .NET Framework. This scenario might be considered for future releases of .NET Core.

- Workflow-related services. Windows Workflow Foundation (WF), Workflow Services (WCF + WF in a single service), and WCF Data Services (formerly known as ADO.NET Data Services) are only available on .NET Framework. There are currently no plans to bring them to .NET Core.

-- Language support. As of the release of Visual Studio 2017, Visual Basic and F\# do not have tooling support for .NET Core, but this support is planned for updated versions of Visual Studio.
-
In addition to the technologies listed in the official [.NET Core roadmap](https://github.com/aspnet/Home/wiki/Roadmap), other features might be ported to .NET Core. For a full list, look at the items tagged as [port-to-core](https://github.com/dotnet/corefx/issues?q=is%3Aopen+is%3Aissue+label%3Aport-to-core) on the CoreFX GitHub site. Note that this list does not represent a commitment from Microsoft to bring those components to .NET Core—the items simply capture requests from the community. If you care about any of the components listed above, consider participating in the discussions on GitHub so that your voice can be heard. And if you think something is missing, please [file a new issue in the CoreFX repository](https://github.com/dotnet/corefx/issues/new).

## Using a platform or API that does not support .NET Core

Some Microsoft or third-party platforms do not support .NET Core. For example, some Azure services provide an SDK that is not yet available for consumption on .NET Core. This is temporary, because all Azure services will eventually use .NET Core.
For example, the [Azure DocumentDB SDK for .NET Core](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/1.2.1) was released as a preview on November 16, 2016, but it is now generally available (GA) as a stable version.

-In the meantime, you can always use the equivalent REST API from the Azure service instead of the client SDK.
+In the meantime, if any platform or service in Azure still doesn’t support .NET Core with its client API, you can use the equivalent REST API from the Azure service or the client SDK for the full .NET Framework.

### Additional resources

diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md
index 7b4ca3b149bda..20835f6adbb5d 100644
--- a/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md
+++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -33,7 +33,7 @@ Why multiple images? When developing, building, and running containerized applic

### During development and build

-During development, what is important is how fast you can iterate changes, and the ability to debug the changes. The size of the image is not as important as the ability to make changes to your code and see the changes quickly. Some of our tools, like [yo docker](https://github.com/Microsoft/generator-docker) for Visual Studio Code, use the development ASP.NET Core image (microsoft/aspnetcore-build) during development; you could even use that image as a build container. When building inside a Docker container, the important aspects are the elements that are needed in order to compile your app. This includes the compiler and any other .NET dependencies, plus web development dependencies like npm, Gulp, and Bower.
+During development, what is important is how fast you can iterate changes, and the ability to debug the changes. The size of the image is not as important as the ability to make changes to your code and see the changes quickly. Some tools and "build-agent containers" use the development ASP.NET Core image (microsoft/aspnetcore-build) during the development and build process. When building inside a Docker container, the important aspects are the elements that are needed in order to compile your app. This includes the compiler and any other .NET dependencies, plus web development dependencies like npm, Gulp, and Bower.

Why is this type of build image important? You do not deploy this image to production. Instead, it is an image you use to build the content you place into a production image. This image would be used in your continuous integration (CI) environment or build environment. For instance, rather than manually installing all your application dependencies directly on a build agent host (a VM, for example), the build agent would instantiate a .NET Core build image with all the dependencies required to build the application. Your build agent only needs to know how to run this Docker image. This simplifies your CI environment and makes it much more predictable.

@@ -45,22 +45,16 @@ In this optimized image you put only the binaries and other content needed to ru

Although there are multiple versions of the .NET Core and ASP.NET Core images, they all share one or more layers, including the base layer. Therefore, the amount of disk space needed to store an image is small; it consists only of the delta between your custom image and its base image. The result is that it is quick to pull the image from your registry.
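The build-image/runtime-image split described above can be sketched as a multi-stage Dockerfile, where the SDK image compiles the app and only the published output lands in the small runtime image. This is an illustrative sketch, not part of this PR; the project name `WebApp` is an assumption, and the multi-stage syntax requires Docker 17.05 or later:

```Dockerfile
# Build stage: SDK, npm, and other build dependencies live only here.
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app

# Runtime stage: only the published binaries go into the optimized runtime image.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "WebApp.dll"]
```

A CI build agent then only needs `docker build`; every compiler and dependency is supplied by the build image rather than installed on the agent host.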
-When you explore the .NET image repositories at Docker Hub, you will find multiple image versions classified or marked with tags. These help decide which one to use, depending on the version you need, like those in the following list::
+When you explore the .NET image repositories at Docker Hub, you will find multiple image versions classified or marked with tags. These tags help you decide which one to use, depending on the version you need, like those in the following list:

-- microsoft/aspnetcore:**1.1** - ASP.NET Core, with runtime only and ASP.NET Core optimizations, on Linux
+- microsoft/**aspnetcore:2.0**

-- microsoft/aspnetcore-build:**1.0-1.1** - ASP.NET Core, with SDKs included, on Linux
+  ASP.NET Core, with runtime only and ASP.NET Core optimizations, on Linux and Windows (multi-arch)

-- microsoft/dotnet:**1.1-runtime** - .NET Core 1.1, with runtime only, on Linux
+- microsoft/**aspnetcore-build:2.0**

-- microsoft/dotnet:**1.1-runtime-deps** - .NET Core 1.1, with runtime and framework dependencies for self-contained apps, on Linux
+  ASP.NET Core, with SDKs included, on Linux and Windows (multi-arch)

-- microsoft/dotnet**:1.1.0-sdk-msbuild** - .NET Core 1.1 with SDKs included, on Linux

>[!div class="step-by-step"]
[Previous](net-container-os-targets.md)

diff --git a/docs/standard/microservices-architecture/toc.md b/docs/standard/microservices-architecture/toc.md
index 7c85da195a85f..f6cf7627e3bb5 100644
--- a/docs/standard/microservices-architecture/toc.md
+++ b/docs/standard/microservices-architecture/toc.md
@@ -19,7 +19,8 @@
### [Logical architecture versus physical architecture](architect-microservice-container-applications/logical-versus-physical-architecture.md)
### [Challenges and solutions for distributed data management](architect-microservice-container-applications/distributed-data-management.md)
### [Identifying domain-model boundaries for each microservice](architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md)
-### [Communication between microservices](architect-microservice-container-applications/communication-between-microservices.md)
+### [Direct client-to-microservice communication versus the API Gateway pattern](architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md)
+### [Communication between microservices](architect-microservice-container-applications/communication-in-microservice-architecture.md)
### [Asynchronous message-based communication](architect-microservice-container-applications/asynchronous-message-based-communication.md)
### [Creating, evolving, and versioning microservice APIs and contracts](architect-microservice-container-applications/maintain-microservice-apis.md)
### [Microservices addressability and the service registry](architect-microservice-container-applications/microservices-addressability-service-registry.md)