attribute to any program element, including assemblies, modules, types (classes, structures, enumerations, interfaces, and delegates), type members (constructors, methods, properties, fields, and events), parameters, generic parameters, and return values. However, in practice, you should apply the attribute only to assemblies, types, and type members. Otherwise, compilers ignore the attribute and continue to generate compiler warnings whenever they encounter a non-compliant parameter, generic parameter, or return value in your library's public interface.
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md
index d8026ecaa88cb..416d586c2e938 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication.md
@@ -27,13 +27,13 @@ There are two kinds of asynchronous messaging communication: single receiver mes
Message-based asynchronous communication with a single receiver means there is point-to-point communication that delivers a message to exactly one of the consumers that is reading from the channel, and that the message is processed just once. However, there are special situations. For instance, in a cloud system that tries to automatically recover from failures, the same message could be sent multiple times. Due to network or other failures, the client has to be able to retry sending messages, and the server has to implement the operation in an idempotent way in order to process a particular message just once.
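As a minimal sketch of that idempotency requirement (the types and names here are illustrative, not taken from the reference application), a command handler can record the IDs of messages it has already processed and skip redelivered copies:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical command message; the message ID travels with the message so redeliveries can be detected.
public record CreateOrderCommand(Guid MessageId, Guid OrderId);

public class CreateOrderCommandHandler
{
    // In a real service this would be durable storage (for example, a table keyed by message ID).
    private static readonly ConcurrentDictionary<Guid, bool> ProcessedMessages =
        new ConcurrentDictionary<Guid, bool>();

    public Task HandleAsync(CreateOrderCommand command)
    {
        // Idempotency check: if the broker redelivers the same message, acknowledge it without repeating the work.
        if (!ProcessedMessages.TryAdd(command.MessageId, true))
        {
            return Task.CompletedTask;
        }

        Console.WriteLine($"Creating order {command.OrderId} exactly once.");
        return Task.CompletedTask;
    }
}
```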
-Single-receiver message-based communication is especially well suited for sending asynchronous commands from one microservice to another as shown in Figure 4-17 that illustrates this approach.
+Single-receiver message-based communication is especially well suited for sending asynchronous commands from one microservice to another, as shown in Figure 4-18.
Once you start sending message-based communication (either with commands or events), you should avoid mixing message-based communication with synchronous HTTP communication.
-
+
-**Figure 4-17**. A single microservice receiving an asynchronous message
+**Figure 4-18**. A single microservice receiving an asynchronous message
Note that when the commands come from client applications, they can be implemented as HTTP synchronous commands. You should use message-based commands when you need higher scalability or when you are already in a message-based business process.
@@ -51,11 +51,11 @@ If a system uses eventual consistency driven by integration events, it is recomm
As noted earlier in the [Challenges and solutions for distributed data management](#challenges-and-solutions-for-distributed-data-management) section, you can use integration events to implement business tasks that span multiple microservices. Thus you will have eventual consistency between those services. An eventually consistent transaction is made up of a collection of distributed actions. At each action, the related microservice updates a domain entity and publishes another integration event that raises the next action within the same end-to-end business task.
-An important point is that you might want to communicate to multiple microservices that are subscribed to the same event. To do so, you can use publish/subscribe messaging based on event-driven communication, as shown in Figure 4-18. This publish/subscribe mechanism is not exclusive to the microservice architecture. It is similar to the way [Bounded Contexts](http://martinfowler.com/bliki/BoundedContext.html) in DDD should communicate, or to the way you propagate updates from the write database to the read database in the [Command and Query Responsibility Segregation (CQRS)](http://martinfowler.com/bliki/CQRS.html) architecture pattern. The goal is to have eventual consistency between multiple data sources across your distributed system.
+An important point is that you might want to communicate to multiple microservices that are subscribed to the same event. To do so, you can use publish/subscribe messaging based on event-driven communication, as shown in Figure 4-19. This publish/subscribe mechanism is not exclusive to the microservice architecture. It is similar to the way [Bounded Contexts](http://martinfowler.com/bliki/BoundedContext.html) in DDD should communicate, or to the way you propagate updates from the write database to the read database in the [Command and Query Responsibility Segregation (CQRS)](http://martinfowler.com/bliki/CQRS.html) architecture pattern. The goal is to have eventual consistency between multiple data sources across your distributed system.
-
+
-**Figure 4-18**. Asynchronous event-driven message communication
+**Figure 4-19**. Asynchronous event-driven message communication
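To make the mechanism concrete, the following is a minimal sketch only (the `IEventBus` abstraction, the event, and the handler names are assumptions for illustration, not the APIs of the reference application). A microservice publishes an integration event after updating its own data, and any subscribed microservice reacts to it to keep its own data eventually consistent:

```csharp
using System;
using System.Threading.Tasks;

// Illustrative integration event published by the Catalog microservice after a price update.
public record ProductPriceChangedIntegrationEvent(int ProductId, decimal NewPrice);

// Hypothetical handler contract for integration events.
public interface IIntegrationEventHandler<in TEvent>
{
    Task HandleAsync(TEvent integrationEvent);
}

// Hypothetical event-bus abstraction over a broker such as RabbitMQ or Azure Service Bus.
public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent integrationEvent);
    void Subscribe<TEvent, THandler>() where THandler : IIntegrationEventHandler<TEvent>;
}

// The Basket microservice subscribes and updates its own copy of the price (eventual consistency).
public class ProductPriceChangedHandler : IIntegrationEventHandler<ProductPriceChangedIntegrationEvent>
{
    public Task HandleAsync(ProductPriceChangedIntegrationEvent e)
    {
        Console.WriteLine($"Updating the cached price of product {e.ProductId} to {e.NewPrice}.");
        return Task.CompletedTask;
    }
}
```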
Your implementation will determine what protocol to use for event-driven, message-based communications. [AMQP](https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol) can help achieve reliable queued communication.
@@ -106,5 +106,5 @@ Additional topics to consider when using asynchronous communication are message
>[!div class="step-by-step"]
-[Previous] (communication-between-microservices.md)
+[Previous] (communication-in-microservice-architecture.md)
[Next] (maintain-microservice-apis.md)
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-between-microservices.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture.md
similarity index 85%
rename from docs/standard/microservices-architecture/architect-microservice-container-applications/communication-between-microservices.md
rename to docs/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture.md
index af5d6444040d7..3baac3f834164 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-between-microservices.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/communication-in-microservice-architecture.md
@@ -1,17 +1,17 @@
---
-title: Communication between microservices
-description: .NET Microservices Architecture for Containerized .NET Applications | Communication between microservices
-keywords: Docker, Microservices, ASP.NET, Container
+title: Communication in a microservice architecture
+description: .NET Microservices Architecture for Containerized .NET Applications | Communication in a microservice architecture
+keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
---
-# Communication between microservices
+# Communication in a microservice architecture
-In a monolithic application running on a single process, components invoke one another using language-level method or function calls. These can be strongly coupled if you are creating objects with code (for example, new ClassName()), or can be invoked in a decoupled way if you are using Dependency Injection by referencing abstractions rather than concrete object instances. Either way, the objects are running within the same process. The biggest challenge when changing from a monolithic application to a microservices-based application lies in changing the communication mechanism. A direct conversion from in-process method calls into RPC calls to services will cause a chatty and not efficient communication that will not perform well in distributed environments. The challenges of designing distributed system properly are well enough known that there is even a canon known as the [The fallacies of distributed computing](https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing) that lists assumptions that developers often make when moving from monolithic to distributed designs.
+In a monolithic application running on a single process, components invoke one another using language-level method or function calls. These can be strongly coupled if you are creating objects with code (for example, `new ClassName()`), or can be invoked in a decoupled way if you are using Dependency Injection by referencing abstractions rather than concrete object instances. Either way, the objects are running within the same process. The biggest challenge when changing from a monolithic application to a microservices-based application lies in changing the communication mechanism. A direct conversion from in-process method calls into RPC calls to services will cause a chatty and inefficient communication that will not perform well in distributed environments. The challenges of designing distributed systems properly are well enough known that there is even a canon known as [The fallacies of distributed computing](https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing) that lists assumptions that developers often make when moving from monolithic to distributed designs.
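Just to illustrate the two in-process styles mentioned above (a generic sketch; the repository names are invented for the example), note that both variants below still run within the same process, which is exactly what changes when you move to microservices:

```csharp
public interface IOrderRepository
{
    void Add(string orderId);
}

public class SqlOrderRepository : IOrderRepository
{
    public void Add(string orderId) { /* persist the order */ }
}

// Strongly coupled: the caller creates the concrete type itself with new.
public class CheckoutService
{
    private readonly SqlOrderRepository _orders = new SqlOrderRepository();

    public void PlaceOrder(string orderId) => _orders.Add(orderId);
}

// Decoupled: the caller depends on an abstraction that a Dependency Injection container provides.
public class CheckoutServiceWithDi
{
    private readonly IOrderRepository _orders;

    public CheckoutServiceWithDi(IOrderRepository orders) => _orders = orders;

    public void PlaceOrder(string orderId) => _orders.Add(orderId);
}
```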
There is not one solution, but several. One solution involves isolating the business microservices as much as possible. You then use asynchronous communication between the internal microservices and replace fine-grained communication that is typical in intra-process communication between objects with coarser-grained communication. You can do this by grouping calls, and by returning data that aggregates the results of multiple internal calls, to the client.
@@ -41,13 +41,19 @@ A microservice-based application will often use a combination of these communica
These axes are good to know so you have clarity on the possible communication mechanisms, but they are not the important concerns when building microservices. Neither the asynchronous nature of client thread execution nor the asynchronous nature of the selected protocol is the important point when integrating microservices. What *is* important is being able to integrate your microservices asynchronously while maintaining the independence of microservices, as explained in the following section.
-## Asynchronous microservice integration enforce microservice’s autonomy
+## Asynchronous microservice integration enforces microservice’s autonomy
As mentioned, the important point when building a microservices-based application is the way you integrate your microservices. Ideally, you should try to minimize the communication between the internal microservices. The less communication between microservices, the better. But of course, in many cases you will have to somehow integrate the microservices. When you need to do that, the critical rule here is that the communication between the microservices should be asynchronous. That does not mean that you have to use a specific protocol (for example, asynchronous messaging versus synchronous HTTP). It just means that the communication between microservices should be done only by propagating data asynchronously, and that you should try not to depend on other internal microservices as part of the initial service’s HTTP request/response operation.
If possible, never depend on synchronous communication (request/response) between multiple microservices, not even for queries. The goal of each microservice is to be autonomous and available to the client consumer, even if the other services that are part of the end-to-end application are down or unhealthy. If you think you need to make a call from one microservice to other microservices (like performing an HTTP request for a data query) in order to be able to provide a response to a client application, you have an architecture that will not be resilient when some microservices fail.
-Moreover, having dependencies between microservices (like performing HTTP requests between them for querying data) not only makes your microservices not autonomous. In addition, their performance will be impacted. The more you add synchronous dependencies (like query requests) between microservices, the worse the overall response time will get for the client apps.
+Moreover, having HTTP dependencies between microservices, like when creating long request/response cycles with HTTP request chains, as shown in the first part of Figure 4-15, not only makes your microservices not autonomous, but also means their performance is impacted as soon as one of the services in that chain is not performing well.
+
+The more you add synchronous dependencies between microservices, such as query requests, the worse the overall response time gets for the client apps.
+
+
+
+**Figure 4-15**. Anti-patterns and patterns in communication between microservices
If your microservice needs to raise an additional action in another microservice, if possible, do not perform that action synchronously and as part of the original microservice request and reply operation. Instead, do it asynchronously (using asynchronous messaging or integration events, queues, etc.), outside of the original synchronous request and reply operation.
@@ -67,11 +73,11 @@ There are also multiple message formats like JSON or XML, or even binary formats
### Request/response communication with HTTP and REST
-When a client uses request/response communication, it sends a request to a service, then the service processes the request and sends back a response. Request/response communication is especially well suited for querying data for a real-time UI (a live user interface) from client apps. Therefore, in a microservice architecture you will probably use this communication mechanism for most queries, as shown in Figure 4-15.
+When a client uses request/response communication, it sends a request to a service, then the service processes the request and sends back a response. Request/response communication is especially well suited for querying data for a real-time UI (a live user interface) from client apps. Therefore, in a microservice architecture you will probably use this communication mechanism for most queries, as shown in Figure 4-16.
-
+
-**Figure 4-15**. Using HTTP request/response communication (synchronous or asynchronous)
+**Figure 4-16**. Using HTTP request/response communication (synchronous or asynchronous)
When a client uses request/response communication, it assumes that the response will arrive in a short time, typically less than a second, or a few seconds at most. For delayed responses, you need to implement asynchronous communication based on [messaging patterns](https://docs.microsoft.com/azure/architecture/patterns/category/messaging) and [messaging technologies](https://en.wikipedia.org/wiki/Message-oriented_middleware), which is a different approach that we explain in the next section.
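For most of those queries, a minimal client-side sketch looks like the following (the endpoint URL, route, and DTO are assumptions for illustration, not the reference application's contract):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Illustrative DTO returned by a hypothetical Catalog microservice endpoint.
public record CatalogItemDto(int Id, string Name, decimal Price);

public class CatalogQueryClient
{
    private readonly HttpClient _http = new HttpClient { BaseAddress = new Uri("http://catalog-api/") };

    // A typical request/response query: one request, one response, expected well under a second.
    public Task<CatalogItemDto> GetItemAsync(int id) =>
        _http.GetFromJsonAsync<CatalogItemDto>($"api/v1/catalog/items/{id}");
}
```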
@@ -91,15 +97,15 @@ There is additional value when using HTTP REST services as your interface defini
Another possibility (usually for different purposes than REST) is a real-time and one-to-many communication with higher-level frameworks such as [ASP.NET SignalR](https://www.asp.net/signalr) and protocols such as [WebSockets](https://en.wikipedia.org/wiki/WebSocket).
-As Figure 4-16 shows, real-time HTTP communication means that you can have server code pushing content to connected clients as the data becomes available, rather than having the server wait for a client to request new data.
+As Figure 4-17 shows, real-time HTTP communication means that you can have server code pushing content to connected clients as the data becomes available, rather than having the server wait for a client to request new data.
-
+
-**Figure 4-16**. One-to-one real-time asynchronous message communication
+**Figure 4-17**. One-to-one real-time asynchronous message communication
Since communication is in real time, client apps show the changes almost instantly. This is usually handled by a protocol such as WebSockets, using many WebSockets connections (one per client). A typical example is when a service communicates a change in the score of a sports game to many client web apps simultaneously.
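As a minimal sketch of that scenario using ASP.NET Core SignalR (the hub and method names are assumptions for illustration), server-side code can push the new score to every connected client as soon as it changes:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub; clients connect to it (typically over WebSockets) to receive pushed updates.
public class ScoreHub : Hub
{
}

public class ScoreNotifier
{
    private readonly IHubContext<ScoreHub> _hubContext;

    public ScoreNotifier(IHubContext<ScoreHub> hubContext) => _hubContext = hubContext;

    // Pushes the change to all connected client apps simultaneously, without waiting for them to ask.
    public Task PublishScoreAsync(string game, int homeScore, int awayScore) =>
        _hubContext.Clients.All.SendAsync("scoreChanged", game, homeScore, awayScore);
}
```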
>[!div class="step-by-step"]
-[Previous] (identify-microservice-domain-model-boundaries.md)
+[Previous] (direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md)
[Next] (asynchronous-message-based-communication.md)
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md
index d27bf45a14cd2..65805498cbfcf 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/data-sovereignty-per-microservice.md
@@ -15,7 +15,7 @@ An important rule for microservices architecture is that each microservice must
This means that the conceptual model of the domain will differ between subsystems or microservices. Consider enterprise applications, where customer relationship management (CRM) applications, transactional purchase subsystems, and customer support subsystems each call on unique customer entity attributes and data, and where each employs a different Bounded Context (BC).
-This principle is similar in [domain-driven design (DDD)](https://en.wikipedia.org/wiki/Domain-driven_design), where each [Bounded Context](https://martinfowler.com/bliki/BoundedContext.html) or autonomous subsystem or service must own its domain model (data plus logic and behavior). Each DDD Bounded Context correlates to one business microservice (one or several services). (We expand on this point about the Bounded Context pattern in the next section.)
+This principle is similar in [Domain-driven design (DDD)](https://en.wikipedia.org/wiki/Domain-driven_design), where each [Bounded Context](https://martinfowler.com/bliki/BoundedContext.html) or autonomous subsystem or service must own its domain model (data plus logic and behavior). Each DDD Bounded Context correlates to one business microservice (one or several services). (We expand on this point about the Bounded Context pattern in the next section.)
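As a simplified sketch of that idea (these classes are illustrative only, not the reference application's actual model), the same real-world person can be modeled differently in each Bounded Context, with each microservice owning only the attributes it needs:

```csharp
using System;
using System.Collections.Generic;

// CRM Bounded Context: rich profile data for relationship management.
namespace CrmService
{
    public class Customer
    {
        public Guid Id { get; set; }
        public string FullName { get; set; }
        public string Email { get; set; }
        public string PreferredContactChannel { get; set; }
    }
}

// Purchase (Ordering) Bounded Context: only what is needed to place and pay for an order.
namespace OrderingService
{
    public class Buyer
    {
        public Guid Id { get; set; }
        public string FullName { get; set; }
        public List<PaymentMethod> PaymentMethods { get; set; } = new List<PaymentMethod>();
    }

    public class PaymentMethod
    {
        public string CardAlias { get; set; }
        public string CardNumberLast4 { get; set; }
    }
}
```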
On the other hand, the traditional (monolithic data) approach used in many applications is to have a single centralized database or just a few databases. This is often a normalized SQL database that is used for the whole application and all its internal subsystems, as shown in Figure 4-7.
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md
new file mode 100644
index 0000000000000..fe5ca96a56b4f
--- /dev/null
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-API-Gateway-pattern.md
@@ -0,0 +1,121 @@
+---
+title: Direct client-to-microservice communication versus the API Gateway pattern
+description: .NET Microservices Architecture for Containerized .NET Applications | Direct client-to-microservice communication versus the API Gateway pattern
+keywords: Docker, Microservices, ASP.NET, Container, API Gateway
+author: CESARDELATORRE
+ms.author: wiwagn
+ms.date: 10/18/2017
+ms.prod: .net-core
+ms.technology: dotnet-docker
+ms.topic: article
+---
+# Direct client-to-microservice communication versus the API Gateway pattern
+
+In a microservices architecture, each microservice exposes a set of (typically) fine‑grained endpoints. This fact can impact the client‑to‑microservice communication, as explained in this section.
+
+## Direct client-to-microservice communication
+
+A possible approach is to use a direct client-to-microservice communication architecture. In this approach, a client app can make requests directly to some of the microservices, as shown in Figure 4-12.
+
+
+
+**Figure 4-12**. Using a direct client-to-microservice communication architecture
+
+In this approach, each microservice has a public endpoint, sometimes with a different TCP port for each microservice. An example of a URL for a particular service could be the following URL in Azure:
+
+
+
+In a production environment based on a cluster, that URL would map to the load balancer used in the cluster, which in turn distributes the requests across the microservices. In production environments, you could have an Application Delivery Controller (ADC) like [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/application-gateway-introduction) between your microservices and the Internet. This acts as a transparent tier that not only performs load balancing, but secures your services by offering SSL termination. This reduces the load on your hosts by offloading CPU-intensive SSL termination and other routing duties to the Azure Application Gateway. In any case, a load balancer and ADC are transparent from a logical application architecture point of view.
+
+A direct client-to-microservice communication architecture could be good enough for a small microservice-based application, especially if the client app is a server-side web application like an ASP.NET MVC app. However, when you build large and complex microservice-based applications (for example, when handling dozens of microservice types), and especially when the client apps are remote mobile apps or SPA web applications, that approach faces a few issues.
+
+Consider the following questions when developing a large application based on microservices:
+
+- *How can client apps minimize the number of requests to the backend and reduce chatty communication to multiple microservices?*
+
+Interacting with multiple microservices to build a single UI screen increases the number of roundtrips across the Internet. This increases latency and complexity on the UI side. Ideally, responses should be efficiently aggregated on the server side—this reduces latency, since multiple pieces of data come back in parallel and some UI can show data as soon as it is ready.
+
+- *How can you handle cross-cutting concerns such as authorization, data transformations, and dynamic request dispatching?*
+
+Implementing cross-cutting concerns such as security and authorization on every microservice can require significant development effort. A possible approach is to have those services within the Docker host or internal cluster, in order to restrict direct access to them from the outside, and to implement those cross-cutting concerns in a centralized place, like an API Gateway.
+
+- *How can client apps communicate with services that use non-Internet-friendly protocols?*
+
+Protocols used on the server side (like AMQP or binary protocols) are usually not supported in client apps. Therefore, requests must be performed through protocols like HTTP/HTTPS and translated to the other protocols afterwards. A *man-in-the-middle* approach can help in this situation.
+
+- *How can you shape a façade especially made for mobile apps?*
+
+The API of multiple microservices might not be well designed for the needs of different client applications. For instance, the needs of a mobile app might be different than the needs of a web app. For mobile apps, you might need to optimize even further so that data responses can be more efficient. You might do this by aggregating data from multiple microservices and returning a single set of data, and sometimes eliminating any data in the response that is not needed by the mobile app. And, of course, you might compress that data. Again, a façade or API in between the mobile app and the microservices can be convenient for this scenario.
+
+## Using an API Gateway
+
+When you design and build large or complex microservice-based applications with multiple client apps, a good approach to consider can be an [API Gateway](http://microservices.io/patterns/apigateway.html). This is a service that provides a single entry point for certain groups of microservices. It is similar to the [Facade pattern](https://en.wikipedia.org/wiki/Facade_pattern) from object‑oriented design, but in this case, it is part of a distributed system.
+The API Gateway pattern is also sometimes known as the “backend for frontend” [(BFF)](http://samnewman.io/patterns/architectural/bff/) because you build it while thinking about the needs of the client app.
+
+Figure 4-13 shows how a custom API Gateway can fit into a microservice-based architecture.
+It is important to highlight that in that diagram, you would be using a single custom API Gateway service facing multiple and different client apps. That fact can be an important risk because your API Gateway service will grow and evolve based on many different requirements from the client apps. Eventually, it will be bloated by those different needs, and it could effectively end up being quite similar to a monolithic application or monolithic service. That is why it is highly recommended to split the API Gateway into multiple services or multiple smaller API Gateways, one per client form-factor type, for instance.
+
+
+
+**Figure 4-13**. Using an API Gateway implemented as a custom Web API service
+
+In this example, the API Gateway would be implemented as a custom Web API service running as a container.
+
+As mentioned, you should implement several API Gateways so that you can have a different façade for the needs of each client app. Each API Gateway can provide a different API tailored for each client app, possibly even based on the client form factor by implementing specific adapter code which underneath calls multiple internal microservices.
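+
+The following is a minimal sketch of that adapter idea only (the routes, named clients, and DTOs are assumptions for illustration; this is not code from eShopOnContainers). A mobile-facing API Gateway endpoint calls two internal microservices in parallel and returns one aggregated, trimmed response to the device:
+
+```csharp
+using System.Net.Http;
+using System.Net.Http.Json;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc;
+
+// Hypothetical BFF endpoint shaped for the mobile app: one roundtrip from the device,
+// multiple internal calls aggregated behind the gateway.
+[ApiController]
+[Route("api/mobile/orders")]
+public class MobileOrdersController : ControllerBase
+{
+    private readonly IHttpClientFactory _clients;
+
+    public MobileOrdersController(IHttpClientFactory clients) => _clients = clients;
+
+    [HttpGet("{orderId}")]
+    public async Task<IActionResult> GetOrderSummary(string orderId)
+    {
+        // The named clients are assumed to be registered with the base addresses of the internal microservices.
+        var ordering = _clients.CreateClient("ordering");
+        var catalog = _clients.CreateClient("catalog");
+
+        // The internal calls run in parallel; only the aggregated result crosses the Internet.
+        var orderTask = ordering.GetFromJsonAsync<OrderDto>($"api/v1/orders/{orderId}");
+        var itemsTask = catalog.GetFromJsonAsync<CatalogItemDto[]>($"api/v1/catalog/items?orderId={orderId}");
+        await Task.WhenAll(orderTask, itemsTask);
+
+        return Ok(new { Order = orderTask.Result, Items = itemsTask.Result });
+    }
+}
+
+public record OrderDto(string Id, decimal Total);
+public record CatalogItemDto(int Id, string Name);
+```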
+
+Since a custom API Gateway is usually a data aggregator, you need to be careful with it. Usually it isn't a good idea to have a single API Gateway aggregating all the internal microservices of your application. If it does, it acts as a monolithic aggregator or orchestrator and violates microservice autonomy by coupling all the microservices. Therefore, the API Gateways should be segregated based on business boundaries and not act as an aggregator for the whole application.
+
+Sometimes a granular API Gateway can also be a microservice by itself, and even have a domain or business name and related data. Having the API Gateway’s boundaries dictated by the business or domain will help you to get a better design.
+
+Granularity in the API Gateway tier can be especially useful for more advanced composite UI applications based on microservices, because the concept of a fine-grained API Gateway is similar to a UI composition service. We discuss this later in the [Creating composite UI based on microservices](#creating-composite-ui-based-on-microservices-including-visual-ui-shape-and-layout-generated-by-multiple-microservices) section.
+
+Therefore, for many medium- and large-size applications, using a custom-built API Gateway is usually a good approach, but not as a single monolithic aggregator or unique central custom API Gateway.
+
+Another approach is to use a product like [Azure API Management](https://azure.microsoft.com/services/api-management/) as shown in Figure 4-14. This approach not only solves your API Gateway needs, but provides features like gathering insights from your APIs. If you are using an API management solution, an API Gateway is only a component within that full API management solution.
+
+
+
+**Figure 4-14**. Using Azure API Management for your API Gateway
+
+In this case, when using a product like Azure API Management, the fact that you might have a single API Gateway is not so risky because these kinds of API Gateways are "thinner", meaning that you don't implement custom C# code that could evolve towards a monolithic component.
+
+This type of product acts more like a reverse proxy for ingress communication, where you can also filter the APIs from the internal microservices plus apply authorization to the published APIs in this single tier.
+
+The insights available from an API Management system help you get an understanding of how your APIs are being used and how they are performing. They do this by letting you view near real-time analytics reports and identify trends that might impact your business. Plus, you can have logs about request and response activity for further online and offline analysis.
+
+With Azure API Management, you can secure your APIs using a key, a token, and IP filtering. These features let you enforce flexible and fine-grained quotas and rate limits, modify the shape and behavior of your APIs using policies, and improve performance with response caching.
+
+In this guide and the reference sample application (eShopOnContainers), we are limiting the architecture to a simpler and custom-made containerized architecture in order to focus on plain containers without using PaaS products like Azure API Management. But for large microservice-based applications that are deployed into Microsoft Azure, we encourage you to review and adopt Azure API Management as the base for your API Gateways.
+
+## Drawbacks of the API Gateway pattern
+
+- The most important drawback is that when you implement an API Gateway, you are coupling that tier with the internal microservices. Coupling like this might introduce serious difficulties for your application. Clemens Vasters, architect on the Azure Service Bus team, refers to this potential difficulty as “the new ESB” in his "[Messaging and Microservices](https://www.youtube.com/watch?v=rXi5CLjIQ9k)" session at GOTO 2016.
+
+- Using a microservices API Gateway creates an additional possible single point of failure.
+
+- An API Gateway can introduce increased response time due to the additional network call. However, this extra call usually has less impact than having a client interface that is too chatty directly calling the internal microservices.
+
+- If not scaled out properly, the API Gateway can become a bottleneck.
+
+- An API Gateway requires additional development cost and future maintenance if it includes custom logic and data aggregation. Developers must update the API Gateway in order to expose each microservice’s endpoints. Moreover, implementation changes in the internal microservices might cause code changes at the API Gateway level. However, if the API Gateway is just applying security, logging, and versioning (as when using Azure API Management), this additional development cost might not apply.
+
+- If the API Gateway is developed by a single team, there can be a development bottleneck. This is another reason why a better approach is to have several fine-grained API Gateways that respond to different client needs. You could also segregate the API Gateway internally into multiple areas or layers that are owned by the different teams working on the internal microservices.
+
+## Additional resources
+
+- **Chris Richardson. Pattern: API Gateway / Backend for Front-End**
+ [*http://microservices.io/patterns/apigateway.html*](http://microservices.io/patterns/apigateway.html)
+
+- **Azure API Management**
+ [*https://azure.microsoft.com/services/api-management/*](https://azure.microsoft.com/services/api-management/)
+
+- **Udi Dahan. Service Oriented Composition**
+ [*http://udidahan.com/2014/07/30/service-oriented-composition-with-video/*](http://udidahan.com/2014/07/30/service-oriented-composition-with-video/)
+
+- **Clemens Vasters. Messaging and Microservices at GOTO 2016** (video)
+ [*https://www.youtube.com/watch?v=rXi5CLjIQ9k*](https://www.youtube.com/watch?v=rXi5CLjIQ9k)
+
+
+>[!div class="step-by-step"]
+[Previous] (identify-microservice-domain-model-boundaries.md)
+[Next] (communication-in-microservice-architecture.md)
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md
index d883c3b90a6df..a6bc51673c3c5 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/distributed-data-management.md
@@ -53,7 +53,7 @@ The Ordering microservice should not update the Products table directly, because
As stated by the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), you need to choose between availability and ACID strong consistency. Most microservice-based scenarios demand availability and high scalability as opposed to strong consistency. Mission-critical applications must remain up and running, and developers can work around strong consistency by using techniques for working with weak or eventual consistency. This is the approach taken by most microservice-based architectures.
-Moreover, ACID-style or two-phase commit transactions are not just against microservices principles; most NoSQL databases (like Azure Document DB, MongoDB, etc.) do not support two-phase commit transactions. However, maintaining data consistency across services and databases is essential. This challenge is also related to the question of how to propagate changes across multiple microservices when certain data needs to be redundant—for example, when you need to have the product’s name or description in the Catalog microservice and the Basket microservice.
+Moreover, ACID-style or two-phase commit transactions are not just against microservices principles; most NoSQL databases (like Azure Cosmos DB, MongoDB, etc.) do not support two-phase commit transactions. However, maintaining data consistency across services and databases is essential. This challenge is also related to the question of how to propagate changes across multiple microservices when certain data needs to be redundant—for example, when you need to have the product’s name or description in the Catalog microservice and the Basket microservice.
A good solution for this problem is to use eventual consistency between microservices articulated through event-driven communication and a publish-and-subscribe system. These topics are covered in the section [Asynchronous event-driven communication](#async_event_driven_communication) later in this guide.
@@ -63,7 +63,7 @@ Communicating across microservice boundaries is a real challenge. In this contex
In a distributed system like a microservices-based application, with so many artifacts moving around and with distributed services across many servers or hosts, components will eventually fail. Partial failure and even larger outages will occur, so you need to design your microservices and the communication across them taking into account the risks common in this type of distributed system.
-A popular approach is to implement HTTP (REST)- based microservices, due to their simplicity. An HTTP-based approach is perfectly acceptable; the issue here is related to how you use it. If you use HTTP requests and responses just to interact with your microservices from client applications or from API Gateways, that is fine. But if create long chains of synchronous HTTP calls across microservices, communicating across their boundaries as if the microservices were objects in a monolithic application, your application will eventually run into problems.
+A popular approach is to implement HTTP (REST)-based microservices, due to their simplicity. An HTTP-based approach is perfectly acceptable; the issue here is related to how you use it. If you use HTTP requests and responses just to interact with your microservices from client applications or from API Gateways, that is fine. But if you create long chains of synchronous HTTP calls across microservices, communicating across their boundaries as if the microservices were objects in a monolithic application, your application will eventually run into problems.
For instance, imagine that your client application makes an HTTP API call to an individual microservice like the Ordering microservice. If the Ordering microservice in turn calls additional microservices using HTTP within the same request/response cycle, you are creating a chain of HTTP calls. It might sound reasonable initially. However, there are important points to consider when going down this path:
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md
index 954af6dc63e9d..5f2b1c6cc2386 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/docker-application-state-data.md
@@ -1,10 +1,10 @@
---
title: State and data in Docker applications
description: .NET Microservices Architecture for Containerized .NET Applications | State and data in Docker applications
-keywords: Docker, Microservices, ASP.NET, Container
+keywords: Docker, Microservices, ASP.NET, Container, SQL, CosmosDB, Docker
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -23,10 +23,10 @@ The following solutions are used to manage persistent data in Docker application
- [Volume plugins](https://docs.docker.com/engine/tutorials/dockervolumes/) that mount volumes to remote services, providing long-term persistence.
-- Remote data sources like SQL or NoSQL databases, or cache services like [Redis](https://redis.io/).
-
- [Azure Storage](https://docs.microsoft.com/azure/storage/), which provides geo-distributable storage, providing a good long-term persistence solution for containers.
+- Remote relational databases like [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) or NoSQL databases like [Azure Cosmos DB](https://docs.microsoft.com/azure/cosmos-db/introduction), or cache services like [Redis](https://redis.io/).
+
The following provides more detail about these options.
**Data volumes** are directories mapped from the host OS to directories in containers. When code in the container has access to the directory, that access is actually to a directory on the host OS. This directory is not tied to the lifetime of the container itself, and the directory can be accessed from code running directly on the host OS or by another container that maps the same host directory to itself. Thus, data volumes are designed to persist data independently of the life of the container. If you delete a container or an image from the Docker host, the data persisted in the data volume is not deleted. The data in a volume can be accessed from the host OS as well.
@@ -43,9 +43,9 @@ In addition, when Docker containers are managed by an orchestrator, containers m
**Volume plugins** like [Flocker](https://clusterhq.com/flocker/) provide data access across all hosts in a cluster. While not all volume plugins are created equally, volume plugins typically provide externalized persistent reliable storage from immutable containers.
-**Remote data sources and cache** tools like Azure SQL Database, Azure Document DB, or a remote cache like Redis can be used in containerized applications the same way they are used when developing without containers. This is a proven way to store business application data.
+**Remote data sources and cache** tools like Azure SQL Database, Azure Cosmos DB, or a remote cache like Redis can be used in containerized applications the same way they are used when developing without containers. This is a proven way to store business application data.
-**Azure Storage.** Business data usually will need to be placed in external resources or databases, like relational databases or NoSQL databases like Azure Storage and DocDB. Azure Storage, in concrete, provides the following services in the cloud:
+**Azure Storage.** Business data usually needs to be placed in external resources or databases, like Azure Storage. Specifically, Azure Storage provides the following services in the cloud:
- Blob storage stores unstructured object data. A blob can be any type of text or binary data, such as document or media files (images, audio, and video files). Blob storage is also referred to as Object storage.
@@ -53,7 +53,7 @@ In addition, when Docker containers are managed by an orchestrator, containers m
- Table storage stores structured datasets. Table storage is a NoSQL key-attribute data store, which allows rapid development and fast access to large quantities of data.
-**Relational databases and NoSQL databases.** There are many choices for external databases, from relational databases like SQL Server, PostgreSQL, Oracle, or NoSQL databases like Azure DocDB, MongoDB, etc. These databases are not going to be explained as part of this guide since they are in a completely different subject.
+**Relational databases and NoSQL databases.** There are many choices for external databases, from relational databases like SQL Server, PostgreSQL, and Oracle, to NoSQL databases like Azure Cosmos DB, MongoDB, and others. These databases are not explained as part of this guide, since they are a completely different subject.
>[!div class="step-by-step"]
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md
index c89380a911517..d61bc2f310d9c 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md
@@ -9,7 +9,7 @@ ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
---
-# Identifying domain-model boundaries for each microservice
+# Identify domain-model boundaries for each microservice
The goal when identifying model boundaries and size for each microservice is not to get to the most granular separation possible, although you should tend toward small microservices if possible. Instead, your goal should be to get to the most meaningful separation guided by your domain knowledge. The emphasis is not on the size, but instead on business capabilities. In addition, if there is clear cohesion needed for a certain area of the application based on a high number of dependencies, that indicates the need for a single microservice, too. Cohesion is a way to identify how to break apart or group together microservices. Ultimately, while you gain more knowledge about the domain, you should adapt the size of your microservice, iteratively. Finding the right size is not a one-shot process.
@@ -50,106 +50,6 @@ Basically, there is a shared concept of a user that exists in multiple services
There are several benefits to not sharing the same user entity with the same number of attributes across domains. One benefit is to reduce duplication, so that microservice models do not have any data that they do not need. Another benefit is having a master microservice that owns a certain type of data per entity so that updates and queries for that type of data are driven only by that microservice.
-
-## Direct client-to-microservice communication versus the API Gateway pattern
-
-In a microservices architecture, each microservice exposes a set of (typically) fine‑grained endpoints. This fact can impact the client‑to‑microservice communication, as explained in this section.
-
-### Direct client-to-microservice communication
-
-A possible approach is to use a direct client-to-microservice communication architecture. In this approach, a client app can make requests directly to some of the microservices, as shown in Figure 4-12.
-
-
-
-**Figure 4-12**. Using a direct client-to-microservice communication architecture
-
-In this approach. each microservice has a public endpoint, sometimes with a different TCP port for each microservice. An example of an URL for a particular service could be the following URL in Azure:
-
-
-
-In a production environment based on a cluster, that URL would map to the load balancer used in the cluster, which in turn distributes the requests across the microservices. In production environments, you could have an Application Delivery Controller (ADC) like [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/application-gateway-introduction) between your microservices and the Internet. This acts as a transparent tier that not only performs load balancing, but secures your services by offering SSL termination. This improves the load of your hosts by offloading CPU-intensive SSL termination and other routing duties to the Azure Application Gateway. In any case, a load balancer and ADC are transparent from a logical application architecture point of view.
-
-A direct client-to-microservice communication architecture is good enough for a small microservice-based application. However, when you build large and complex microservice-based applications (for example, when handling dozens of microservice types), that approach faces possible issues. You need to consider the following questions when developing a large application based on microservices:
-
-- *How can client apps minimize the number of requests to the backend and reduce chatty communication to multiple microservices?*
-
-Interacting with multiple microservices to build a single UI screen increases the number of roundtrips across the Internet. This increases latency and complexity on the UI side. Ideally, responses should be efficiently aggregated in the server side—this reduces latency, since multiple pieces of data come back in parallel and some UI can show data as soon as it is ready.
-
-- *How can you handle cross-cutting concerns such as authorization, data transformations, and dynamic request dispatching?*
-
-Implementing security and cross-cutting concerns like security and authorization on every microservice can require significant development effort. A possible approach is to have those services within the Docker host or internal cluster, in order to restrict direct access to them from the outside, and to implement those cross-cutting concerns in a centralized place, like an API Gateway.
-
-- *How can client apps communicate with services that use non-Internet-friendly protocols?*
-
-Protocols used on the server side (like AMQP or binary protocols) are usually not supported in client apps. Therefore, requests must be performed through protocols like HTTP/HTTPS and translated to the other protocols afterwards. A *man-in-the-middle* approach can help in this situation.
-
-- *How can you shape a façade especially made for mobile apps? *
-
-The API of multiple microservices might not be well designed for the needs of different client applications. For instance, the needs of a mobile app might be different than the needs of a web app. For mobile apps, you might need to optimize even further so that data responses can be more efficient. You might do this by aggregating data from multiple microservices and returning a single set of data, and sometimes eliminating any data in the response that is not needed by the mobile app. And, of course, you might compress that data. Again, a façade or API in between the mobile app and the microservices can be convenient for this scenario.
-
-### Using an API Gateway
-
-When you design and build large or complex microservice-based applications with multiple client apps, a good approach to consider can be an [API Gateway](http://microservices.io/patterns/apigateway.html). This is a service that provides a single entry point for certain groups of microservices. It is similar to the [Facade pattern](https://en.wikipedia.org/wiki/Facade_pattern) from object‑oriented design, but in this case, it is part of a distributed system. The API Gateway pattern is also sometimes known as the “back end for the front end,” because you build it while thinking about the needs of the client app.
-
-Figure 4-13 shows how an API Gateway can fit into a microservice-based architecture.
-
-
-
-**Figure 4-13**. Using the API Gateway pattern in a microservice-based architecture
-
-In this example, the API Gateway would be implemented as a custom Web API service running as a container.
-
-You should implement several API Gateways so that you can have a different façade for the needs of each client app. Each API Gateway can provide a different API tailored for each client app, possibly even based on the client form factor or device by implementing specific adapter code which underneath calls multiple internal microservices.
-
-Since the API Gateway is actually an aggregator, you need to be careful with it. Usually it is not a good idea to have a single API Gateway aggregating all the internal microservices of your application. If it does, it acts as a monolithic aggregator or orchestrator and violates microservice autonomy by coupling all the microservices. Therefore, the API Gateways should be segregated based on business boundaries and not act as an aggregator for the whole application.
-
-Sometimes a granular API Gateway can also be a microservice by itself, and even have a domain or business name and related data. Having the API Gateway’s boundaries dictated by the business or domain will help you to get a better design.
-
-Granularity in the API Gateway tier can be especially useful for more advanced composite UI applications based on microservices, because the concept of a fine-grained API Gateway is similar to an UI composition service. We discuss this later in the [Creating composite UI based on microservices](#creating-composite-ui-based-on-microservices-including-visual-ui-shape-and-layout-generated-by-multiple-microservices).
-
-Therefore, for many medium- and large-size applications, using a custom-built API Gateway is usually a good approach, but not as a single monolithic aggregator or unique central API Gateway.
-
-Another approach is to use a product like [Azure API Management](https://azure.microsoft.com/services/api-management/) as shown in Figure 4-14. This approach not only solves your API Gateway needs, but provides features like gathering insights from your APIs. If you are using an API management solution, an API Gateway is only a component within that full API management solution.
-
-
-
-**Figure 4-14**. Using Azure API Management for your API Gateway
-
-The insights available from an API Management system help you get an understanding of how your APIs are being used and how they are performing. They do this by letting you view near real-time analytics reports and identifying trends that might impact your business. Plus you can have logs about request and response activity for further online and offline analysis.
-
-With Azure API Management, you can secure your APIs using a key, a token, and IP filtering. These features let you enforce flexible and fine-grained quotas and rate limits, modify the shape and behavior of your APIs using policies, and improve performance with response caching.
-
-In this guide and the reference sample application (eShopOnContainers) we are limiting the architecture to a simpler and custom-made containerized architecture in order to focus on plain containers without using PaaS products like Azure API Management. But for large microservice-based applications that are deployed into Microsoft Azure, we encourage you to review and adopt Azure API Management as the base for your API Gateways.
-
-### Drawbacks of the API Gateway pattern
-
-- The most important drawback is that when you implement an API Gateway, you are coupling that tier with the internal microservices. Coupling like this might introduce serious difficulties for your application. (The cloud architect Clemens Vaster refers to this potential difficulty as “the new ESB” in his "[Messaging and Microservices](https://www.youtube.com/watch?v=rXi5CLjIQ9k)" session from at GOTO 2016.)
-
-- Using a microservices API Gateway creates an additional possible point of failure.
-
-- An API Gateway can introduce increased response time due to the additional network call. However, this extra call usually has less impact than having a client interface that is too chatty directly calling the internal microservices.
-
-- The API Gateway can represent a possible bottleneck if it is not scaled out properly
-
-- An API Gateway requires additional development cost and future maintenance if it includes custom logic and data aggregation. Developers must update the API Gateway in order to expose each microservice’s endpoints. Moreover, implementation changes in the internal microservices might cause code changes at the API Gateway level. However, if the API Gateway is just applying security, logging, and versioning (as when using Azure API Management), this additional development cost might not apply.
-
-- If the API Gateway is developed by a single team, there can be a development bottleneck. This is another reason why a better approach is to have several fined-grained API Gateways that respond to different client needs. You could also segregate the API Gateway internally into multiple areas or layers that are owned by the different teams working on the internal microservices.
-
-## Additional resources
-
-- **Charles Richardson. Pattern: API Gateway / Backend for Front-End**
- [*http://microservices.io/patterns/apigateway.html*](http://microservices.io/patterns/apigateway.html)
-
-- **Azure API Management**
- [*https://azure.microsoft.com/services/api-management/*](https://azure.microsoft.com/services/api-management/)
-
-- **Udi Dahan. Service Oriented Composition**\
- [*http://udidahan.com/2014/07/30/service-oriented-composition-with-video/*](http://udidahan.com/2014/07/30/service-oriented-composition-with-video/)
-
-- **Clemens Vasters. Messaging and Microservices at GOTO 2016** (video)
- [*https://www.youtube.com/watch?v=rXi5CLjIQ9k*](https://www.youtube.com/watch?v=rXi5CLjIQ9k)
-
-
>[!div class="step-by-step"]
[Previous] (distributed-data-management.md)
-[Next] (communication-between-microservices.md)
+[Next] (direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md)
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md
index 4a3a002e93c35..5cfaaf7265b86 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/index.md
@@ -15,7 +15,7 @@ ms.topic: article
Earlier in this guide, you learned basic concepts about containers and Docker. That was the minimum information you need in order to get started with containers. Even though containers are enablers of and a great fit for microservices, they are not mandatory for a microservice architecture, and many architectural concepts in this section could be applied without containers, too. However, this guidance focuses on the intersection of both, due to the importance of containers introduced earlier.
-Enterprise applications can be complex and are often composed of multiple services instead of a single service-based application. For those cases, you need to understand additional architectural approaches, such as the microservices and certain domain-driven design (DDD) patterns plus container orchestration concepts. Note that this chapter describes not just microservices on containers, but any containerized application, as well.
+Enterprise applications can be complex and are often composed of multiple services instead of a single service-based application. For those cases, you need to understand additional architectural approaches, such as the microservices and certain Domain-Driven Design (DDD) patterns plus container orchestration concepts. Note that this chapter describes not just microservices on containers, but any containerized application, as well.
## Container design principles
@@ -23,7 +23,7 @@ In the container model, a container image instance represents a single process.
When you design a container image, you will see an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/) definition in the Dockerfile. This defines the process whose lifetime controls the lifetime of the container. When the process completes, the container lifecycle ends. Containers might represent long-running processes like web servers, but can also represent short-lived processes like batch jobs, which formerly might have been implemented as Azure [WebJobs](https://docs.microsoft.com/azure/app-service-web/websites-webjobs-resources).
-If the process fails, the container ends, and the orchestrator takes over. If the orchestrator was configured to keep five instances running and one fails, the orchestrator will create another container instance to replace the failed process. In a batch job, the process is started with parameters. When the process completes, the work is complete.
+If the process fails, the container ends, and the orchestrator takes over. If the orchestrator was configured to keep five instances running and one fails, the orchestrator will create another container instance to replace the failed process. In a batch job, the process is started with parameters. When the process completes, the work is complete. This guidance drills down on orchestrators later on.
You might find a scenario where you want multiple processes running in a single container. For that scenario, since there can be only one entry point per container, you could run a script within the container that launches as many programs as needed. For example, you can use [Supervisor](http://supervisord.org/) or a similar tool to take care of launching multiple processes inside a single container. However, even though you can find architectures that hold multiple processes per container, that approach is not very common.
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png
index 9262d68e033f0..0148310091f99 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image15.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png
index 6968a6fb0ee48..b9b1bd81db4b1 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image16.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG
index b30c9fc0a06be..6968a6fb0ee48 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image17.PNG differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png
index 6cbd702cbc5b2..b30c9fc0a06be 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image18.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png
index d405de9d209da..6cbd702cbc5b2 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image19.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png
index 15adb740f6655..97b3c62f1c0c3 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image20.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png
index 0ccfef1efba15..15adb740f6655 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image21.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG
index 70b69020c6d9e..0ccfef1efba15 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image22.PNG differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png
index eb88b15e6e5ef..70b69020c6d9e 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image23.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png
index 40ab685febc8b..b315bb3a8bb29 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image24.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png
index b315bb3a8bb29..eb88b15e6e5ef 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image25.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png
index 9bd8222f11182..b6f3e33ceae1b 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image26.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png
index 8b4a4bd957745..93c08c333de26 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image27.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png
index 38a888ec50b0d..8b4a4bd957745 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image28.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png
index 52279c3be5739..38a888ec50b0d 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image29.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png
index 6678228fe7a03..52279c3be5739 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image30.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png
index ab7a5614da4fc..6678228fe7a03 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image31.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png
index e59c1f6f2d9f3..ab7a5614da4fc 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image32.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png
index e45712e2b056e..e59c1f6f2d9f3 100644
Binary files a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image33.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image34.png b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image34.png
new file mode 100644
index 0000000000000..e45712e2b056e
Binary files /dev/null and b/docs/standard/microservices-architecture/architect-microservice-container-applications/media/image34.png differ
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md
index 4b1d0cbf08e50..1d652f887dea0 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/microservice-based-composite-ui-shape-layout.md
@@ -13,21 +13,21 @@ ms.topic: article
Microservices architecture often starts with the server side handling data and logic. However, a more advanced approach is to design your application UI based on microservices as well. That means having a composite UI produced by the microservices, instead of having microservices on the server and just a monolithic client app consuming the microservices. With this approach, the microservices you build can be complete with both logic and visual representation.
-Figure 4-19 shows the simpler approach of just consuming microservices from a monolithic client application. Of course, you could have an ASP.NET MVC service in between producing the HTML and JavaScript. The figure is a simplification that highlights that you have a single (monolithic) client UI consuming the microservices, which just focus on logic and data and not on the UI shape (HTML and JavaScript).
+Figure 4-20 shows the simpler approach of just consuming microservices from a monolithic client application. Of course, you could have an ASP.NET MVC service in between producing the HTML and JavaScript. The figure is a simplification that highlights that you have a single (monolithic) client UI consuming the microservices, which just focus on logic and data and not on the UI shape (HTML and JavaScript).
-
+
-**Figure 4-19**. A monolithic UI application consuming back-end microservices
+**Figure 4-20**. A monolithic UI application consuming back-end microservices
In contrast, a composite UI is precisely generated and composed by the microservices themselves. Some of the microservices drive the visual shape of specific areas of the UI. The key difference is that you have client UI components (TS classes, for example) based on templates, and the data-shaping-UI ViewModel for those templates comes from each microservice.
At client application start-up time, each of the client UI components (TypeScript classes, for example) registers itself with an infrastructure microservice capable of providing ViewModels for a given scenario. If the microservice changes the shape, the UI changes also.
-Figure 4-20 shows a version of this composite UI approach. This is simplified, because you might have other microservices that are aggregating granular parts based on different techniques—it depends on whether you are building a traditional web approach (ASP.NET MVC) or an SPA (Single Page Application).
+Figure 4-21 shows a version of this composite UI approach. This is simplified, because you might have other microservices that are aggregating granular parts based on different techniques—it depends on whether you are building a traditional web approach (ASP.NET MVC) or an SPA (Single Page Application).
-
+
-**Figure 4-20**. Example of a composite UI application shaped by back-end microservices
+**Figure 4-21**. Example of a composite UI application shaped by back-end microservices
Each of those UI composition microservices would be similar to a small API Gateway. But in this case each is responsible for a small UI area.
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md
index f5c32e9a40a9a..c838652287698 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/resilient-high-availability-microservices.md
@@ -41,11 +41,11 @@ A microservice-based application should not try to store the output stream of ev
When you create a microservice-based application, you need to deal with complexity. Of course, a single microservice is simple to deal with, but dozens or hundreds of types and thousands of instances of microservices is a complex problem. It is not just about building your microservice architecture—you also need high availability, addressability, resiliency, health, and diagnostics if you intend to have a stable and cohesive system.
-
+
-**Figure 4-21**. A Microservice Platform is fundamental for an application’s health management
+**Figure 4-22**. A Microservice Platform is fundamental for an application’s health management
-The complex problems shown in Figure 4-21 are very hard to solve by yourself. Development teams should focus on solving business problems and building custom applications with microservice-based approaches. They should not focus on solving complex infrastructure problems; if they did, the cost of any microservice-based application would be huge. Therefore, there are microservice-oriented platforms, referred to as orchestrators or microservice clusters, that try to solve the hard problems of building and running a service and using infrastructure resources efficiently. This reduces the complexities of building applications that use a microservices approach.
+The complex problems shown in Figure 4-22 are very hard to solve by yourself. Development teams should focus on solving business problems and building custom applications with microservice-based approaches. They should not focus on solving complex infrastructure problems; if they did, the cost of any microservice-based application would be huge. Therefore, there are microservice-oriented platforms, referred to as orchestrators or microservice clusters, that try to solve the hard problems of building and running a service and using infrastructure resources efficiently. This reduces the complexities of building applications that use a microservices approach.
Different orchestrators might sound similar, but the diagnostics and health checks offered by each of them differ in features and state of maturity, sometimes depending on the OS platform, as explained in the next section.
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md
index da9e4b128ba57..b01445fc8d444 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/scalable-available-multi-container-microservice-applications.md
@@ -4,40 +4,50 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
-ms.technology: dotnet-docker
+ms.technology: dotnet-docker, service fabric, kubernetes, azure container service, docker swarm, dc/os
ms.topic: article
---
# Orchestrating microservices and multi-container applications for high scalability and availability
Using orchestrators for production-ready applications is essential if your application is based on microservices or simply split across multiple containers. As introduced previously, in a microservice-based approach, each microservice owns its model and data so that it will be autonomous from a development and deployment point of view. But even if you have a more traditional application that is composed of multiple services (like SOA), you will also have multiple containers or services comprising a single business application that need to be deployed as a distributed system. These kinds of systems are complex to scale out and manage; therefore, you absolutely need an orchestrator if you want to have a production-ready and scalable multi-container application.
-Figure 4-22 illustrates deployment into a cluster of an application composed of multiple microservices (containers).
+Figure 4-23 illustrates deployment into a cluster of an application composed of multiple microservices (containers).
-
+
-**Figure 4-22**. A cluster of containers
+**Figure 4-23**. A cluster of containers
It looks like a logical approach. But how are you handling load-balancing, routing, and orchestrating these composed applications?
-The Docker CLI meets the needs of managing one container on one host, but it falls short when it comes to managing multiple containers deployed on multiple hosts for more complex distributed applications. In most cases, you need a management platform that will automatically start containers, suspend them or shut them down when needed, and ideally also control how they access resources like the network and data storage.
+The plain Docker Engine on single Docker hosts meets the needs of managing single image instances on one host, but it falls short when it comes to managing multiple containers deployed on multiple hosts for more complex distributed applications. In most cases, you need a management platform that will automatically start containers, scale out containers with multiple instances per image, suspend them or shut them down when needed, and ideally also control how they access resources like the network and data storage.
To go beyond the management of individual containers or very simple composed apps and move toward larger enterprise applications with microservices, you must turn to orchestration and clustering platforms.
From an architecture and development point of view, if you are building large enterprise applications composed of multiple microservices, it is important to understand the following platforms and products that support advanced scenarios:
-**Clusters and orchestrators**. When you need to scale out applications across many Docker hosts, as when a large microservice-based application, it is critical to be able to manage all those hosts as a single cluster by abstracting the complexity of the underlying platform. That is what the container clusters and orchestrators provide. Examples of orchestrators are Docker Swarm, Mesosphere DC/OS, Kubernetes (the first three available through Azure Container Service) and Azure Service Fabric.
+**Clusters and orchestrators**. When you need to scale out applications across many Docker hosts, as with a large microservice-based application, it is critical to be able to manage all those hosts as a single cluster by abstracting the complexity of the underlying platform. That is what the container clusters and orchestrators provide. Examples of orchestrators are Azure Service Fabric, Kubernetes, Docker Swarm, and Mesosphere DC/OS. The last three open-source orchestrators are available in Azure through Azure Container Service.
**Schedulers**. *Scheduling* means having the capability for an administrator to launch containers in a cluster; schedulers typically also provide a UI for doing so. A cluster scheduler has several responsibilities: to use the cluster’s resources efficiently, to enforce the constraints provided by the user, to efficiently load-balance containers across nodes or hosts, and to be robust against errors while providing high availability.
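To make those scheduler responsibilities more concrete, the following is a hedged sketch (not part of the original guidance; the service name, image, and values are illustrative) of how such concerns can be declared in a version 3 Docker compose file, whose deploy section is honored by Docker in swarm mode:

```yml
# Illustrative sketch only: a compose file v3 fragment used with swarm mode (docker stack deploy).
# The deploy section states the scheduling intent declaratively: how many instances to keep,
# where they may be placed, and what to do when a container fails.
version: '3'
services:
  webmvc:
    image: eshop/web                # illustrative image name, reused from the compose example later in this chapter
    ports:
      - "80:80"
    deploy:
      replicas: 4                   # keep four instances running across the cluster
      placement:
        constraints:
          - node.role == worker     # a constraint provided by the user
      restart_policy:
        condition: on-failure       # restart failed containers, for high availability
```

Kubernetes and DC/OS express equivalent intent in their own manifest formats.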
-The concepts of a cluster and a scheduler are closely related, so the products provided by different vendors often provide both sets of capabilities. The following list shows the most important platform and software choices you have for clusters and schedulers. These clusters are generally offered in public clouds like Azure.
+The concepts of a cluster and a scheduler are closely related, so the products provided by different vendors often provide both sets of capabilities. The following list shows the most important platform and software choices you have for clusters and schedulers. These orchestrators are generally offered in public clouds like Azure.
## Software platforms for container clustering, orchestration, and scheduling
+Kubernetes
+
+
+
+> Kubernetes is an open-source product that provides functionality that ranges from cluster infrastructure and container scheduling to orchestrating capabilities. It lets you automate deployment, scaling, and operations of application containers across clusters of hosts.
+>
+> Kubernetes provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery.
+>
+> Kubernetes is mature in Linux, less mature in Windows.
+
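As an illustration only (this manifest is not part of the original text, and the image name is reused from the eShop compose example later in this chapter), a minimal Kubernetes Deployment that groups containers into a logical unit and asks the cluster to keep a fixed number of replicas running could look like the following:

```yml
# Hedged sketch of a minimal Kubernetes Deployment. Kubernetes keeps three replicas of the
# container running across the cluster and replaces any instance that fails.
# (The apiVersion shown is the current stable one; older clusters used beta versions.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: catalog-api              # label used to group the containers into a logical unit
  template:
    metadata:
      labels:
        app: catalog-api
    spec:
      containers:
      - name: catalog-api
        image: eshop/catalog.api    # illustrative image; any container image works here
        ports:
        - containerPort: 80
```

Applying a manifest like this (for example, with kubectl apply) is enough for the cluster to schedule, monitor, and replace the instances.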
Docker Swarm
-
+
> Docker Swarm lets you cluster and schedule Docker containers. By using Swarm, you can turn a pool of Docker hosts into a single, virtual Docker host. Clients can make API requests to Swarm the same way they do to hosts, meaning that Swarm makes it easy for applications to scale to multiple hosts.
>
@@ -47,29 +57,24 @@ Docker Swarm
Mesosphere DC/OS
-
+
> Mesosphere Enterprise DC/OS (based on Apache Mesos) is a production-ready platform for running containers and distributed applications.
>
> DC/OS works by abstracting a collection of the resources available in the cluster and making those resources available to components built on top of it. Marathon is usually used as a scheduler integrated with DC/OS.
-
-Google Kubernetes
-
-
-
-> Kubernetes is an open-source product that provides functionality that ranges from cluster infrastructure and container scheduling to orchestrating capabilities. It lets you automate deployment, scaling, and operations of application containers across clusters of hosts.
>
-> Kubernetes provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery.
+> DC/OS is mature in Linux, less mature in Windows.
Azure Service Fabric
-
+
-> [Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview) is a Microsoft microservices platform for building applications. It is an [orchestrator](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-resource-manager-introduction) of services and creates clusters of machines. By default, Service Fabric deploys and activates services as processes, but Service Fabric can deploy services in Docker container images. More importantly, you can mix services in processes with services in containers in the same application.
+> [Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview) is a Microsoft microservices platform for building applications. It is an [orchestrator](https://docs.microsoft.com/azure/service-fabric/service-fabric-cluster-resource-manager-introduction) of services and creates clusters of machines. Service Fabric can deploy services as containers or as plain processes. It can even mix services in processes with services in containers within the same application and cluster.
>
-> As of May 2017, the feature of Service Fabric that supports deploying services as Docker containers is in preview state.
+> Service Fabric provides additional and optional prescriptive [Service Fabric programming models ](https://docs.microsoft.com/azure/service-fabric/service-fabric-choose-framework) like [stateful services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) and [Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction).
>
-> Service Fabric services can be developed in many ways, from using the [Service Fabric programming models ](https://docs.microsoft.com/azure/service-fabric/service-fabric-choose-framework)to deploying [guest executables](https://docs.microsoft.com/azure/service-fabric/service-fabric-deploy-existing-app) as well as containers. Service Fabric supports prescriptive application models like [stateful services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) and [Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction).
+> Service Fabric is mature in Windows (after years of evolution in Windows), less mature in Linux.
+> Both Linux containers and Windows Containers have been generally available (GA) since 2017.
## Using container-based orchestrators in Microsoft Azure
@@ -79,25 +84,25 @@ Another choice is to use Microsoft Azure Service Fabric (a microservices platfor
## Using Azure Container Service
-A Docker cluster pools multiple Docker hosts and exposes them as a single virtual Docker host, so you can deploy multiple containers into the cluster. The cluster will handle all the complex management plumbing, like scalability, health, and so forth. Figure 4-23 represents how a Docker cluster for composed applications maps to Azure Container Service (ACS).
+A Docker cluster pools multiple Docker hosts and exposes them as a single virtual Docker host, so you can deploy multiple containers into the cluster. The cluster will handle all the complex management plumbing, like scalability, health, and so forth. Figure 4-24 represents how a Docker cluster for composed applications maps to Azure Container Service (ACS).
ACS provides a way to simplify the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run containerized applications. Using an optimized configuration of popular open-source scheduling and orchestration tools, ACS enables you to use your existing skills or draw on a large and growing body of community expertise to deploy and manage container-based applications on Microsoft Azure.
Azure Container Service optimizes the configuration of popular Docker clustering open source tools and technologies specifically for Azure. You get an open solution that offers portability for both your containers and your application configuration. You select the size, the number of hosts, and the orchestrator tools, and Container Service handles everything else.
-
+
-**Figure 4-23**. Clustering choices in Azure Container Service
+**Figure 4-24**. Clustering choices in Azure Container Service
ACS leverages Docker images to ensure that your application containers are fully portable. It supports your choice of open-source orchestration platforms like DC/OS (powered by Apache Mesos), Kubernetes (originally created by Google), and Docker Swarm, to ensure that these applications can be scaled to thousands or even tens of thousands of containers.
The Azure Container service enables you to take advantage of the enterprise-grade features of Azure while still maintaining application portability, including at the orchestration layers.
-
+
-**Figure 4-24**. Orchestrators in ACS
+**Figure 4-25**. Orchestrators in ACS
-As shown in Figure 4-24, Azure Container Service is simply the infrastructure provided by Azure in order to deploy DC/OS, Kubernetes or Docker Swarm, but ACS does not implement any additional orchestrator. Therefore, ACS is not an orchestrator as such, only an infrastructure that leverages existing open-source orchestrators for containers.
+As shown in Figure 4-25, Azure Container Service is simply the infrastructure provided by Azure in order to deploy DC/OS, Kubernetes or Docker Swarm, but ACS does not implement any additional orchestrator. Therefore, ACS is not an orchestrator as such, only an infrastructure that leverages existing open-source orchestrators for containers.
From a usage perspective, the goal of Azure Container Service is to provide a container hosting environment by using popular open-source tools and technologies. To this end, it exposes the standard API endpoints for your chosen orchestrator. By using these endpoints, you can leverage any software that can talk to those endpoints. For example, in the case of the Docker Swarm endpoint, you might choose to use the Docker command-line interface (CLI). For DC/OS, you might choose to use the DC/OS CLI.
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md
index 6e704bbac9bcc..c2c5366191e66 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/service-oriented-architecture.md
@@ -17,9 +17,9 @@ Those services can now be deployed as Docker containers, which solves deployment
Docker containers are useful (but not required) for both traditional service-oriented architectures and the more advanced microservices architectures.
-Microservices derive from SOA, but SOA is different from microservices architecture. Features like big central brokers, central orchestrators at the organization level, and the [Enterprise Service Bus (ESB)](https://en.wikipedia.org/wiki/Enterprise_service_bus) are typical in SOA. But in most cases these are anti-patterns in the microservice community. In fact, some people argue that “The microservice architecture is SOA done right.”
+Microservices derive from SOA, but SOA is different from microservices architecture. Features like big central brokers, central orchestrators at the organization level, and the [Enterprise Service Bus (ESB)](https://en.wikipedia.org/wiki/Enterprise_service_bus) are typical in SOA. But in most cases, these are anti-patterns in the microservice community. In fact, some people argue that “The microservice architecture is SOA done right.”
-This guide focuses on microservices, because an SOA approach is less prescriptive than the requirements and techniques used in a microservice architecture. If you know how to build a microservice-based application, you also know how to build a simpler service-oriented application.
+This guide focuses on microservices, because a SOA approach is less prescriptive than the requirements and techniques used in a microservice architecture. If you know how to build a microservice-based application, you also know how to build a simpler service-oriented application.
diff --git a/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md b/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md
index d4d1e8f7ac254..c2ef9945e86d0 100644
--- a/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md
+++ b/docs/standard/microservices-architecture/architect-microservice-container-applications/using-azure-service-fabric.md
@@ -4,14 +4,14 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
---
# Using Azure Service Fabric
-Azure Service Fabric arose from Microsoft’s transition from delivering box products, which were typically monolithic in style, to delivering services. The experience of building and operating large services at scale, such as Azure SQL Database, Azure Document DB, Azure Service Bus, or Cortana’s Backend, shaped Service Fabric. The platform evolved over time as more and more services adopted it. Importantly, Service Fabric had to run not only in Azure but also in standalone Windows Server deployments.
+Azure Service Fabric arose from Microsoft’s transition from delivering box products, which were typically monolithic in style, to delivering services. The experience of building and operating large services at scale, such as Azure SQL Database, Azure Cosmos DB, Azure Service Bus, or Cortana’s Backend, shaped Service Fabric. The platform evolved over time as more and more services adopted it. Importantly, Service Fabric had to run not only in Azure but also in standalone Windows Server deployments.
The aim of Service Fabric is to solve the hard problems of building and running a service and utilizing infrastructure resources efficiently, so that teams can solve business problems using a microservices approach.
@@ -23,67 +23,68 @@ Service Fabric provides two broad areas to help you build applications that use
Service Fabric is agnostic with respect to how you build your service, and you can use any technology. However, it provides built-in programming APIs that make it easier to build microservices.
-As shown in Figure 4-25, you can create and run microservices in Service Fabric either as simple processes or as Docker containers. It is also possible to mix container-based microservices with process-based microservices within the same Service Fabric cluster.
+As shown in Figure 4-26, you can create and run microservices in Service Fabric either as simple processes or as Docker containers. It is also possible to mix container-based microservices with process-based microservices within the same Service Fabric cluster.
-
+
-**Figure 4-25**. Deploying microservices as processes or as containers in Azure Service Fabric
+**Figure 4-26**. Deploying microservices as processes or as containers in Azure Service Fabric
-Service Fabric clusters based on Linux and Windows hosts can run Docker Linux containers and Windows Containers.
+Service Fabric clusters based on Linux and Windows hosts can run Docker Linux containers and Windows Containers, respectively.
For up-to-date information about containers support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview).
-Service Fabric is a good example of a platform where you can define a different logical architecture (business microservices or Bounded Contexts) than the physical implementation that were introduced in the [Logical architecture versus physical architecture](#logical-architecture-versus-physical-architecture) section. For example, if you implement [Stateful Reliable Services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) in [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview), which are introduced in the section [Stateless versus stateful microservices](#stateless-versus-stateful-microservices) later, you have a business microservice concept with multiple physical services.
+Service Fabric is a good example of a platform where you can define a different logical architecture (business microservices or Bounded Contexts) than the physical implementation, as introduced in the [Logical architecture versus physical architecture](#logical-architecture-versus-physical-architecture) section. For example, if you implement [Stateful Reliable Services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction) in [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-overview), which are introduced in the section [Stateless versus stateful microservices](#stateless-versus-stateful-microservices) later, you can have a business microservice concept with multiple physical services.
-As shown in Figure 4-26, and thinking from a logical/business microservice perspective, when implementing a Service Fabric Stateful Reliable Service, you usually will need to implement two tiers of services. The first is the back-end stateful reliable service, which handles multiple partitions. The second is the front-end service, or Gateway service, in charge of routing and data aggregation across multiple partitions or stateful service instances. That Gateway service also handles client-side communication with retry loops accessing the backend service used in conjunction with the Service Fabric [reverse proxy](https://docs.microsoft.com/azure/service-fabric/service-fabric-reverseproxy).
+As shown in Figure 4-27, and thinking from a logical/business microservice perspective, when implementing a Service Fabric Stateful Reliable Service, you usually will need to implement two tiers of services. The first is the back-end stateful reliable service, which handles multiple partitions (each partition is a stateful service). The second is the front-end service, or Gateway service, in charge of routing and data aggregation across multiple partitions or stateful service instances. That Gateway service also handles client-side communication with retry loops accessing the backend service.
+This front-end service is called a Gateway service if you implement your own custom service, or alternatively you can use the out-of-the-box Service Fabric [Reverse Proxy service](https://docs.microsoft.com/azure/service-fabric/service-fabric-reverseproxy).
-
+
-**Figure 4-26**. Business microservice with several stateful and stateless services in Service Fabric
+**Figure 4-27**. Business microservice with several stateful service instances and a custom gateway front-end
In any case, when you use Service Fabric Stateful Reliable Services, you also have a logical or business microservice (Bounded Context) that is usually composed of multiple physical services. Each of them (the Gateway service and the Partition service) could be implemented as ASP.NET Web API services, as shown in Figure 4-27.
-In Service Fabric, you can group and deploy groups of services as a [Service Fabric Application](https://docs.microsoft.com/azure/service-fabric/service-fabric-application-model), which is the unit of packaging and deployment for the orchestrator or cluster. Therefore, the Service Fabric Application could be mapped to this autonomous business and logical microservice boundary or Bounded Context, as well.
+In Service Fabric, you can group and deploy groups of services as a [Service Fabric Application](https://docs.microsoft.com/azure/service-fabric/service-fabric-application-model), which is the unit of packaging and deployment for the orchestrator or cluster. Therefore, the Service Fabric Application could be mapped to this autonomous business and logical microservice boundary or Bounded Context, as well, so you could deploy these services autonomously.
## Service Fabric and containers
-With regard to containers in Service Fabric, you can also deploy services in container images within a Service Fabric cluster. As Figure 4-27 shows, most of the time there will only be one container per service.
+With regard to containers in Service Fabric, you can also deploy services in container images within a Service Fabric cluster. As Figure 4-28 shows, most of the time there will only be one container per service.
-
+
-**Figure 4-27**. Business microservice with several services (containers) in Service Fabric
+**Figure 4-28**. Business microservice with several services (containers) in Service Fabric
However, so-called “sidecar” containers (two containers that must be deployed together as part of a logical service) are also possible in Service Fabric. The important thing is that a business microservice is the logical boundary around several cohesive elements. In many cases, it might be a single service with a single data model, but in some other cases you might have several physical services as well.
-As of this writing (April 2017), in Service Fabric you cannot deploy SF Reliable Stateful Services on containers—you can only deploy guest containers, stateless services, or actor services in containers. But note that you can mix services in processes and services in containers in the same Service Fabric application, as shown in Figure 4-28.
+As of mid-2017, in Service Fabric you cannot deploy SF Reliable Stateful Services on containers—you can only deploy stateless services and actor services in containers. But note that you can mix services in processes and services in containers in the same Service Fabric application, as shown in Figure 4-29.
-
+
-**Figure 4-28**. Business microservice mapped to a Service Fabric application with containers and stateful services
+**Figure 4-29**. Business microservice mapped to a Service Fabric application with containers and stateful services
-Support is also different depending on whether you are using Docker containers on Linux or Windows Containers. Support for containers in Service Fabric will be expanding in upcoming releases. For up-to-date news about container support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview) on the Azure website.
+For up-to-date news about container support in Azure Service Fabric, see [Service Fabric and containers](https://docs.microsoft.com/azure/service-fabric/service-fabric-containers-overview).
## Stateless versus stateful microservices
-As mentioned earlier, each microservice (logical Bounded Context) must own its domain model (data and logic). In the case of stateless microservices, the databases will be external, employing relational options like SQL Server, or NoSQL options like MongoDB or Azure Document DB.
+As mentioned earlier, each microservice (logical Bounded Context) must own its domain model (data and logic). In the case of stateless microservices, the databases will be external, employing relational options like SQL Server, or NoSQL options like MongoDB or Azure Cosmos DB.
-But the services themselves can also be stateful, which means that the data resides within the microservice. This data might exist not just on the same server, but within the microservice process, in memory and persisted on hard drives and replicated to other nodes. Figure 4-29 shows the different approaches.
+But the services themselves can also be stateful in Service Fabric, which means that the data resides within the microservice. This data might exist not just on the same server, but within the microservice process, in memory and persisted on hard drives and replicated to other nodes. Figure 4-30 shows the different approaches.
-
+
-**Figure 4-29**. Stateless versus stateful microservices
+**Figure 4-30**. Stateless versus stateful microservices
A stateless approach is perfectly valid and is easier to implement than stateful microservices, since the approach is similar to traditional and well-known patterns. But stateless microservices impose latency between the process and data sources. They also involve more moving pieces when you are trying to improve performance with additional cache and queues. The result is that you can end up with complex architectures that have too many tiers.
In contrast, [stateful microservices](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can excel in advanced scenarios, because there is no latency between the domain logic and data. Heavy data processing, gaming back ends, databases as a service, and other low-latency scenarios all benefit from stateful services, which enable local state for faster access.
-Stateless and stateful services are complementary. For instance, you can see in Figure 4-20 that a stateful service could be split into multiple partitions. To access those partitions, you might need a stateless service acting as a gateway service that knows how to address each partition based on partition keys.
+Stateless and stateful services are complementary. For instance, in the diagram on the right in Figure 4-30, you can see that a stateful service could be split into multiple partitions. To access those partitions, you might need a stateless service acting as a gateway service that knows how to address each partition based on partition keys.
Stateful services do have drawbacks. They impose a level of complexity in order to scale out. Functionality that would usually be implemented by external database systems must be addressed for tasks such as data replication across stateful microservices and data partitioning. However, this is one of the areas where an orchestrator like [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-platform-architecture) with its [stateful reliable services](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-introduction#when-to-use-reliable-services-apis) can help the most—by simplifying the development and lifecycle of stateful microservices using the [Reliable Services API](https://docs.microsoft.com/azure/service-fabric/service-fabric-work-with-reliable-collections) and [Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction).
Other microservice frameworks that allow stateful services, that support the Actor pattern, and that improve fault tolerance and latency between business logic and data are Microsoft [Orleans](https://github.com/dotnet/orleans), from Microsoft Research, and [Akka.NET](http://getakka.net/). Both frameworks are currently improving their support for Docker.
-Note that Docker containers are themselves stateless. If you want to implement a stateful service, you need one of the additional prescriptive and higher-level frameworks noted earlier. However, at the time of this writing, stateful services in Azure Service Fabric are not supported as containers, only as plain microservices. Reliable services support in containers will be available in upcoming versions of Service Fabric.
+Note that Docker containers are themselves stateless. If you want to implement a stateful service, you need one of the additional prescriptive and higher-level frameworks noted earlier.
>[!div class="step-by-step"]
[Previous](scalable-available-multi-container-microservice-applications.md)
diff --git a/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md b/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md
index 38f30923ed4dc..2af6c92feeb70 100644
--- a/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md
+++ b/docs/standard/microservices-architecture/docker-application-development-process/docker-app-development-workflow.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -43,7 +43,7 @@ However, just because Visual Studio makes those steps automatic does not mean th
## Step 1. Start coding and create your initial application or service baseline
-Developing a Docker application is similar to the way you develop an application without Docker. The difference is that while developing for Docker, you are deploying and testing your application or services running within Docker containers in your local environment (either a Linux VM or a Windows VM).
+Developing a Docker application is similar to the way you develop an application without Docker. The difference is that while developing for Docker, you are deploying and testing your application or services running within Docker containers in your local environment. The container can be either a Linux container or a Windows container.
### Set up your local environment with Visual Studio
@@ -93,14 +93,14 @@ This action on a project (like an ASP.NET Web application or Web API service) ad
You usually build a custom image for your container on top of a base image you can get from an official repository at the [Docker Hub](https://hub.docker.com/) registry. That is precisely what happens under the covers when you enable Docker support in Visual Studio. Your Dockerfile will use an existing aspnetcore image.
-Earlier we explained which Docker images and repos you can use, depending on the framework and OS you have chosen. For instance, if you want to use ASP.NET Core and Linux, the image to use is microsoft/aspnetcore:1.1. Therefore, you just need to specify what base Docker image you will use for your container. You do that by adding FROM microsoft/aspnetcore:1.1 to your Dockerfile. This will be automatically performed by Visual Studio, but if you were to update the version, you update this value.
+Earlier we explained which Docker images and repos you can use, depending on the framework and OS you have chosen. For instance, if you want to use ASP.NET Core (Linux or Windows), the image to use is microsoft/aspnetcore:2.0. Therefore, you just need to specify what base Docker image you will use for your container. You do that by adding FROM microsoft/aspnetcore:2.0 to your Dockerfile. This will be automatically performed by Visual Studio, but if you were to update the version, you update this value.
Using an official .NET image repository from Docker Hub with a version number ensures that the same language features are available on all machines (including development, testing, and production).
The following example shows a sample Dockerfile for an ASP.NET Core container.
```
-FROM microsoft/aspnetcore:1.1
+FROM microsoft/aspnetcore:2.0
ARG source
@@ -113,7 +113,7 @@ COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", " MySingleContainerWebApp.dll "]
```
-In this case, the container is based on version 1.1 of the official ASP.NET Core Docker image for Linux; this is the setting FROM microsoft/aspnetcore:1.1. (For further details about this base image, see the [ASP.NET Core Docker Image](https://hub.docker.com/r/microsoft/aspnetcore/) page and the [.NET Core Docker Image](https://hub.docker.com/r/microsoft/dotnet/) page.) In the Dockerfile, you also need to instruct Docker to listen on the TCP port you will use at runtime (in this case, port 80, as configured with the EXPOSE setting).
+In this case, the container is based on version 2.0 of the official ASP.NET Core Docker image (multi-arch for Linux and Windows). This is the setting `FROM microsoft/aspnetcore:2.0`. (For further details about this base image, see the [ASP.NET Core Docker Image](https://hub.docker.com/r/microsoft/aspnetcore/) page and the [.NET Core Docker Image](https://hub.docker.com/r/microsoft/dotnet/) page.) In the Dockerfile, you also need to instruct Docker to listen on the TCP port you will use at runtime (in this case, port 80, as configured with the EXPOSE setting).
You can specify additional configuration settings in the Dockerfile, depending on the language and framework you are using. For instance, the ENTRYPOINT line with \["dotnet", "MySingleContainerWebApp.dll"\] tells Docker to run a .NET Core application. If you are using the SDK and the .NET Core CLI (dotnet CLI) to build and run the .NET application, this setting would be different. The bottom line is that the ENTRYPOINT line and other settings will be different depending on the language and platform you choose for your application.
@@ -125,17 +125,27 @@ You can specify additional configuration settings in the Dockerfile, depending o
- **Build your own image**. In the official Docker documentation.
[*https://docs.docker.com/engine/tutorials/dockerimages/*](https://docs.docker.com/engine/tutorials/dockerimages/)
-### Using multi-platform image repositories
+### Using multi-arch image repositories
-A single repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo to cover multiple platforms. For example, the [microsoft/dotnet](https://hub.docker.com/r/microsoft/aspnetcore/) repository available in the Docker Hub registry provides support for Linux and Windows Nano Server by using the same repo name with different tags, as shown in the following examples:
+A single repo can contain platform variants, such as a Linux image and a Windows image. This feature allows vendors like Microsoft (base image creators) to create a single repo to cover multiple platforms (that is Linux and Windows). For example, the [microsoft/dotnet](https://hub.docker.com/r/microsoft/aspnetcore/) repository available in the Docker Hub registry provides support for Linux and Windows Nano Server by using the same repo name.
-- microsoft/dotnet:1.1-runtime
- .NET Core 1.1 runtime-only on Linux Debian
+If you specify a tag that explicitly targets a platform, as in the following cases:
-- microsoft/dotnet:1.1-runtime-nanoserver
- .NET Core 1.1 runtime-only on Windows Nano Server
+- **microsoft/aspnetcore:2.0.0-jessie**
-In the future, it will be possible to use the same repo name and tag targeting multiple operating systems. That way, when you pull an image from a Windows host, it will pull the Windows variant, and pulling the same image name from a Linux host will pull the Linux variant.
+ .NET Core 2.0 runtime-only on Linux
+
+- **microsoft/aspnetcore:2.0.0-nanoserver**
+
+ .NET Core 2.0 runtime-only on Windows Nano Server
+
+However (and this is new since mid-2017), if you specify the same image name, even with the same tag, the new multi-arch images (like the aspnetcore image, which supports multi-arch) will use the Linux or Windows variant depending on the Docker host OS you are deploying to, as shown in the following example:
+
+- **microsoft/aspnetcore:2.0**
+
+ Multi-arch: .NET Core 2.0 runtime-only on Linux or Windows Nano Server depending on the Docker host OS
+
+This way, when you pull an image from a Windows host, it will pull the Windows variant, and pulling the same image name from a Linux host will pull the Linux variant.
### Option B: Creating your base image from scratch
@@ -143,6 +153,8 @@ You can create your own Docker base image from scratch. This scenario is not rec
### Additional resources
+- **Multi-arch .NET Core images**.
+ [*https://github.com/dotnet/announcements/issues/14*](https://github.com/dotnet/announcements/issues/14)
- **Create a base image**. Official Docker documentation.
[*https://docs.docker.com/engine/userguide/eng-image/baseimages/*](https://docs.docker.com/engine/userguide/eng-image/baseimages/)
@@ -187,10 +199,10 @@ The [docker-compose.yml](https://docs.docker.com/compose/compose-file/) file let
To use a docker-compose.yml file, you need to create the file in your main or root solution folder, with content similar to that in the following example:
```yml
-version: '2'
+version: '3'
services:
-
+
webmvc:
image: eshop/web
environment:
@@ -204,16 +216,17 @@ services:
catalog.api:
image: eshop/catalog.api
- environment: ConnectionString=Server=catalogdata;Port=5432;Database=postgres;…
+ environment:
+ - ConnectionString=Server=sql.data;Database=CatalogDB;…
ports:
- "81:80"
depends_on:
- - postgres.data
+ - sql.data
ordering.api:
image: eshop/ordering.api
environment:
- - ConnectionString=Server=ordering.data;Database=OrderingDb;…
+ - ConnectionString=Server=sql.data;Database=OrderingDb;…
ports:
- "82:80"
extra_hosts:
@@ -229,25 +242,21 @@ services:
ports:
- "5433:1433"
- postgres.data:
- image: postgres:latest
- environment:
- POSTGRES_PASSWORD: tempPwd
```
Note that this docker-compose.yml file is a simplified and merged version. It contains static configuration data for each container (like the name of the custom image), which always applies, plus configuration information that might depend on the deployment environment, like the connection string. In later sections, you will learn how you can split the docker-compose.yml configuration into multiple docker-compose files and override values depending on the environment and execution type (debug or release).
-The docker-compose.yml file example defines five services: the webmvc service (a web application), two microservices (catalog.api and ordering.api), and two data source containers (sql.data based on SQL Server for Linux running as a container and postgres.data as a Postgres database). Each service is deployed as a container, so a Docker image is required for each.
+The docker-compose.yml file example defines four services: the webmvc service (a web application), two microservices (catalog.api and ordering.api), and one data source container, sql.data, based on SQL Server for Linux running as a container. Each service is deployed as a container, so a Docker image is required for each.
The docker-compose.yml file specifies not only what containers are being used, but how they are individually configured. For instance, the webmvc container definition in the .yml file:
-- Uses the pre-built eshop/web:latest image. However, you could also configure the image to be built as part of the docker-compose execution with an additional configuration based on a build: section in the docker-compose file.
+- Uses a pre-built eshop/web:latest image. However, you could also configure the image to be built as part of the docker-compose execution with an additional configuration based on a build: section in the docker-compose file.
- Initializes two environment variables (CatalogUrl and OrderingUrl).
- Forwards the exposed port 80 on the container to the external port 80 on the host machine.
-- Links the web service to the catalog and ordering service with the depends\_on setting. This causes the service to wait until those services are started.
+- Links the web app to the catalog and ordering service with the depends\_on setting. This causes the service to wait until those services are started.
We will revisit the docker-compose.yml file in a later section when we cover how to implement microservices and multi-container apps.
@@ -313,7 +322,7 @@ Running a multi-container application using Visual Studio 2017 cannot get simple
As mentioned before, each time you add Docker solution support to a project within a solution, that project is configured in the global (solution-level) docker-compose.yml file, which lets you run or debug the whole solution at once. Visual Studio will start one container for each project that has Docker solution support enabled, and perform all the internal steps for you (dotnet publish, docker build, etc.).
-The important point here is that, as shown in Figure 5-12, in Visual Studio 2017 there is an additional **Docker** command under the F5 key. This option lets you run or debug a multi-container application by running all the containers that are defined in the docker-compose.yml files at the solution level. The ability to debug multiple-container solutions means that you can set several breakpoints, each breakpoint in a different project (container), and while debugging from Visual Studio you will stop at breakpoints defined in different projects and running on different containers.
+The important point here is that, as shown in Figure 5-12, in Visual Studio 2017 there is an additional **Docker** command for the F5 key action. This option lets you run or debug a multi-container application by running all the containers that are defined in the docker-compose.yml files at the solution level. The ability to debug multiple-container solutions means that you can set several breakpoints, each breakpoint in a different project (container), and while debugging from Visual Studio you will stop at breakpoints defined in different projects and running on different containers.

diff --git a/docs/standard/microservices-architecture/docker-application-development-process/index.md b/docs/standard/microservices-architecture/docker-application-development-process/index.md
index 09b79e0260f09..3eda39a9e50f6 100644
--- a/docs/standard/microservices-architecture/docker-application-development-process/index.md
+++ b/docs/standard/microservices-architecture/docker-application-development-process/index.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -19,16 +19,18 @@ ms.topic: article
Whether you prefer a full and powerful IDE or a lightweight and agile editor, Microsoft has tools that you can use for developing Docker applications.
-**Visual Studio with Tools for Docker**. If you are using Visual Studio 2015, you can install the [Visual Studio Tools for Docker](https://marketplace.visualstudio.com/items?itemName=MicrosoftCloudExplorer.VisualStudioToolsforDocker-Preview) add-in. If you are using Visual Studio 2017, tools for Docker are already built-in. In either case, the tools for Docker let you develop, run, and validate your applications directly in the target Docker environment. You can press F5 to run and debug your application (single container or multiple containers) directly into a Docker host, or press CTRL+F5 to edit and refresh your application without having to rebuild the container. This is the simplest and most powerful choice for Windows developers targeting Docker containers for Linux or Windows.
+**Visual Studio (for Windows)**. To develop Docker-based applications, use Visual Studio 2017 or later versions, which come with tools for Docker already built in. The tools for Docker let you develop, run, and validate your applications directly in the target Docker environment. You can press F5 to run and debug your application (single container or multiple containers) directly in a Docker host, or press CTRL+F5 to edit and refresh your application without having to rebuild the container. This is the most powerful development choice for Docker-based apps.
+
+**Visual Studio for Mac**. This IDE, an evolution of Xamarin Studio, runs on macOS and supports Docker-based application development. It should be the preferred choice for developers working on Mac machines who also want to use a powerful IDE.
**Visual Studio Code and Docker CLI**. If you prefer a lightweight and cross-platform editor that supports any development language, you can use Microsoft Visual Studio Code (VS Code) and the Docker CLI. This is a cross-platform development approach for Mac, Linux, and Windows.
-These products provide a simple but robust experience that streamlines the developer workflow. By installing [Docker Community Edition (CE)](https://www.docker.com/community-edition) tools, you can use a single Docker CLI to build apps for both Windows and Linux. Additionally, Visual Studio Code supports extensions for Docker such as IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor.
+By installing [Docker Community Edition (CE)](https://www.docker.com/community-edition) tools, you can use a single Docker CLI to build apps for both Windows and Linux. Additionally, Visual Studio Code supports extensions for Docker such as IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor.
### Additional resources
- **Visual Studio Tools for Docker**
- [*https://visualstudiogallery.msdn.microsoft.com/0f5b2caa-ea00-41c8-b8a2-058c7da0b3e4*](https://visualstudiogallery.msdn.microsoft.com/0f5b2caa-ea00-41c8-b8a2-058c7da0b3e4)
+ [*https://docs.microsoft.com/en-us/aspnet/core/publishing/visual-studio-tools-for-docker*](https://docs.microsoft.com/en-us/aspnet/core/publishing/visual-studio-tools-for-docker)
- **Visual Studio Code**. Official site.
[*https://code.visualstudio.com/download*](https://code.visualstudio.com/download)
diff --git a/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md b/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md
index 2b8d70d28f14d..6c5d2cd2d0431 100644
--- a/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md
+++ b/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/nosql-database-persistence-infrastructure.md
@@ -11,9 +11,9 @@ ms.topic: article
---
# Using NoSQL databases as a persistence infrastructure
-When you use NoSQL databases for your infrastructure data tier, you typically do not use an ORM like Entity Framework Core. Instead you use the API provided by the NoSQL engine, such as Azure Document DB, MongoDB, Cassandra, RavenDB, CouchDB, or Azure Storage Tables.
+When you use NoSQL databases for your infrastructure data tier, you typically do not use an ORM like Entity Framework Core. Instead you use the API provided by the NoSQL engine, such as Azure Cosmos DB, MongoDB, Cassandra, RavenDB, CouchDB, or Azure Storage Tables.
-However, when you use a NoSQL database, especially a document-oriented database like Azure Document DB, CouchDB, or RavenDB, the way you design your model with DDD aggregates is partially similar to how you can do it in EF Core, in regards to the identification of aggregate roots, child entity classes, and value object classes. But, ultimately, the database selection will impact in your design.
+However, when you use a NoSQL database, especially a document-oriented database like Azure Cosmos DB, CouchDB, or RavenDB, the way you design your model with DDD aggregates is partially similar to how you can do it in EF Core, with regard to the identification of aggregate roots, child entity classes, and value object classes. But, ultimately, the database selection will impact your design.
When you use a document-oriented database, you implement an aggregate as a single document, serialized in JSON or another format. However, the use of the database is transparent from a domain model code point of view. When using a NoSQL database, you still are using entity classes and aggregate root classes, but with more flexibility than when using EF Core because the persistence is not relational.
@@ -50,7 +50,7 @@ For instance, the following JSON code is a sample implementation of an order agg
}
```
-When you use a C\# model to implement the aggregate to be used by something like the Azure Document DB SDK, the aggregate is similar to the C\# POCO classes used with EF Core. The difference is in the way to use them from the application and infrastructure layers, as in the following code:
+When you use a C\# model to implement the aggregate to be used by something like the Azure Cosmos DB SDK, the aggregate is similar to the C\# POCO classes used with EF Core. The difference is in the way to use them from the application and infrastructure layers, as in the following code:
```csharp
// C# EXAMPLE OF AN ORDER AGGREGATE BEING PERSISTED WITH DOCUMENTDB API
@@ -103,7 +103,7 @@ orderAggregate.AddOrderItem(orderItem2);
// *** End of Domain Model Code ***
//...
-// *** Infrastructure Code using Document DB Client API ***
+// *** Infrastructure Code using Cosmos DB Client API ***
Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName,
collectionName);
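// The following lines are an illustrative sketch (not necessarily the exact code in this
// guide's sample) of how the same aggregate could then be saved as a single JSON document
// with the DocumentDB/Cosmos DB .NET client (Microsoft.Azure.Documents.Client).
// The endpoint URI and authorization key below are placeholder values.
DocumentClient client = new DocumentClient(
    new Uri("https://your-account.documents.azure.com:443/"), "your-auth-key");

// The aggregate root (including its child order items) is persisted as one document.
await client.CreateDocumentAsync(collectionUri, orderAggregate);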
diff --git a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md
index cd56670b6b238..21c98d4fd022c 100644
--- a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md
+++ b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/microservice-application-design.md
@@ -53,11 +53,11 @@ What should the application deployment architecture be? The specifications for t
In this approach, each service (container) implements a set of cohesive and narrowly related functions. For example, an application might consist of services such as the catalog service, ordering service, basket service, user profile service, etc.
-Microservices communicate using protocols such as HTTP (REST), asynchronously whenever possible, especially when propagating updates.
+Microservices communicate using protocols such as HTTP (REST), but also asynchronously (for example, using AMQP) whenever possible, especially when propagating updates with integration events.
Microservices are developed and deployed as containers independently of one another. This means that a development team can be developing and deploying a certain microservice without impacting other subsystems.
-Each microservice has its own database, allowing it to be fully decoupled from other microservices. When necessary, consistency between databases from different microservices is achieved using application-level events (through a logical event bus), as handled in Command and Query Responsibility Segregation (CQRS). Because of that, the business constraints must embrace eventual consistency between the multiple microservices and related databases.
+Each microservice has its own database, allowing it to be fully decoupled from other microservices. When necessary, consistency between databases from different microservices is achieved using application-level integration events (through a logical event bus), as handled in Command and Query Responsibility Segregation (CQRS). Because of that, the business constraints must embrace eventual consistency between the multiple microservices and related databases.
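A minimal sketch can make this integration-event flow more concrete. The `IntegrationEvent` base class, the `ProductPriceChangedIntegrationEvent` example, and the `IEventBus` abstraction below are illustrative assumptions for this approach, not the exact types of any specific library:

```csharp
using System;

// Base class for application-level integration events exchanged between microservices.
public abstract class IntegrationEvent
{
    public Guid Id { get; } = Guid.NewGuid();
    public DateTime CreationDate { get; } = DateTime.UtcNow;
}

// Example event raised by one microservice and consumed by any subscribed microservice.
public class ProductPriceChangedIntegrationEvent : IntegrationEvent
{
    public int ProductId { get; }
    public decimal NewPrice { get; }

    public ProductPriceChangedIntegrationEvent(int productId, decimal newPrice)
    {
        ProductId = productId;
        NewPrice = newPrice;
    }
}

// Logical event bus abstraction; publishing an event lets other microservices update
// their own databases, achieving eventual consistency across the system.
public interface IEventBus
{
    void Publish(IntegrationEvent integrationEvent);
}
```

A microservice that changes a product's price would publish an event like this right after committing its local database update, and subscribed microservices would react to it asynchronously.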
### eShopOnContainers: A reference application for .NET Core and microservices deployed using containers
@@ -67,7 +67,7 @@ The application consists of multiple subsystems, including several store UI fron

-**Figure 8-1**. The eShopOnContainers reference application, showing the direct client-to-microservice communication and the event bus
+**Figure 8-1**. The eShopOnContainers reference application, showing a direct client-to-microservice communication and the event bus
**Hosting environment**. In Figure 8-1, you see several containers deployed within a single Docker host. That would be the case when deploying to a single Docker host with the docker-compose up command. However, if you are using an orchestrator or container cluster, each container could be running in a different host (node), and any node could be running any number of containers, as we explained earlier in the architecture section.
@@ -79,9 +79,12 @@ The application consists of multiple subsystems, including several store UI fron
The application is deployed as a set of microservices in the form of containers. Client apps can communicate with those containers as well as communicate between microservices. As mentioned, this initial architecture is using a direct client-to-microservice communication architecture, which means that a client application can make requests to each of the microservices directly. Each microservice has a public endpoint like https://servicename.applicationname.companyname. If required, each microservice can use a different TCP port. In production, that URL would map to the microservices’ load balancer, which distributes requests across the available microservice instances.
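As a simple illustration of that direct communication style, a client app could call a microservice's public endpoint over plain HTTP, as in the following sketch (the URL and route are placeholders, not the actual eShopOnContainers contract):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class CatalogClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Calls the catalog microservice directly; in production the URL would resolve
    // to the microservice's load balancer.
    public static Task<string> GetCatalogItemsJsonAsync() =>
        Http.GetStringAsync("https://catalog.api.myapp.mycompany.com/api/v1/catalog/items");
}
```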
-As explained in the architecture section of this guide, the direct client-to-microservice communication architecture can have drawbacks when you are building a large and complex microservice-based application. But it can be good enough for a small application, such as in the eShopOnContainers application, where the goal is to focus on the microservices deployed as Docker containers.
+**Important note on API Gateway vs. Direct Communication in eShopOnContainers.** As explained in the architecture section of this guide, the direct client-to-microservice communication architecture can have drawbacks when you are building a large and complex microservice-based application. But it can be good enough for a small application, such as eShopOnContainers, where the goal is to focus on a simpler, getting-started, container-based application and where we didn’t want to create a single monolithic API Gateway that could impact the microservices’ development autonomy.
-However, if you are going to design a large microservice-based application with dozens of microservices, we strongly recommend that you consider the API Gateway pattern, as we explained in the architecture section.
+However, if you are going to design a large microservice-based application with dozens of microservices, we strongly recommend that you consider the API Gateway pattern, as we explained in the architecture section.
+This architectural decision could be revisited when designing production-ready applications and purpose-built facades for remote clients. Having multiple custom API Gateways, depending on the client apps' form factor, can provide benefits with regard to aggregating data differently for each client app; in addition, you can hide internal microservices or APIs from the client apps and handle authorization in that single tier.
+
+However, as mentioned, beware of large and monolithic API Gateways that might undermine your microservices' development autonomy.
### Data sovereignty per microservice
diff --git a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md
index 610147e8817f4..78608c72e5ebf 100644
--- a/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md
+++ b/docs/standard/microservices-architecture/multi-container-microservice-net-applications/subscribe-events.md
@@ -98,7 +98,7 @@ As mentioned earlier in the architecture section, you can have several approache
- Using the [Outbox pattern](http://gistlabs.com/2014/05/the-outbox/). This is a transactional table to store the integration events (extending the local transaction).
-For this scenario, using the full Event Sourcing (ES) pattern is one of the best approaches, if not *the* best. However, in many application scenarios, you might not be able to implement a full ES system. ES means storing only domain events in your transactional database, instead of storing current state data. Storing only domain events can have great benefits, such as having the history of your system available and being able to determine the state of your system at any moment in the past. However, implementing a full ES system requires you to rearchitect most of your system and introduces many other complexities and requirements. For example, you would want to use a database specifically made for event sourcing, such as [Event Store](https://geteventstore.com/), or a document-oriented database such as Azure Document DB, MongoDB, Cassandra, CouchDB, or RavenDB. ES is a great approach for this problem, but not the easiest solution unless you are already familiar with event sourcing.
+For this scenario, using the full Event Sourcing (ES) pattern is one of the best approaches, if not *the* best. However, in many application scenarios, you might not be able to implement a full ES system. ES means storing only domain events in your transactional database, instead of storing current state data. Storing only domain events can have great benefits, such as having the history of your system available and being able to determine the state of your system at any moment in the past. However, implementing a full ES system requires you to rearchitect most of your system and introduces many other complexities and requirements. For example, you would want to use a database specifically made for event sourcing, such as [Event Store](https://geteventstore.com/), or a document-oriented database such as Azure Cosmos DB, MongoDB, Cassandra, CouchDB, or RavenDB. ES is a great approach for this problem, but not the easiest solution unless you are already familiar with event sourcing.
The option to use transaction log mining initially looks very transparent. However, to use this approach, the microservice has to be coupled to your RDBMS transaction log, such as the SQL Server transaction log. This is probably not desirable. Another drawback is that the low-level updates recorded in the transaction log might not be at the same level as your high-level integration events. If so, the process of reverse-engineering those transaction log operations can be difficult.
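As a rough sketch of the Outbox pattern mentioned above, the following code stores the domain change and the pending integration event in the same local EF Core transaction; a separate worker process would later read the outbox table and publish the events to the event bus. The `Order`, `OutboxMessage`, and context types are illustrative assumptions, not the code of a specific sample:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string EventType { get; set; }
    public string Payload { get; set; }   // Serialized integration event (for example, JSON)
    public bool Published { get; set; }   // Set to true once the worker publishes it
}

public class Order
{
    public int Id { get; set; }
    public DateTime Date { get; set; } = DateTime.UtcNow;
}

public class OrderingService
{
    private readonly DbContext _context;  // EF Core context mapping Orders and OutboxMessages

    public OrderingService(DbContext context) => _context = context;

    public async Task PlaceOrderAsync(Order order, OutboxMessage pendingEvent)
    {
        using (var transaction = await _context.Database.BeginTransactionAsync())
        {
            _context.Add(order);          // The state change of the aggregate
            _context.Add(pendingEvent);   // The integration event stored in the outbox table
            await _context.SaveChangesAsync();
            transaction.Commit();         // Both are committed atomically, so no events are lost
        }
    }
}
```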
diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md
index 654031a915bd9..fef6980f0c4a5 100644
--- a/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md
+++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/container-framework-choice-factors.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 07/13/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -43,17 +43,16 @@ There are several features of your application that affect your decision. You sh
- Your .NET implementation choice is *.NET Framework* based on framework dependency.
- Your container platform choice must be *Windows containers* because of the .NET Framework dependency.
* Your application uses **SignalR services**.
- - Your .NET implementation choice is *.NET Framework*, or *.NET Core (future release)*.
- - Your container platform choice must be *Windows containers* because of the .NET Framework dependency.
- - When **SignalR services** run on *.NET Core*, you can also choose *Linux containers*.
+ - Your .NET implementation choice is *.NET Framework*, or *.NET Core 2.1 (when released) or later*.
+ - Your container platform choice must be *Windows containers* if you chose the .NET Framework dependency.
+ - When **SignalR services** run on *.NET Core*, you can use *Linux containers or Windows Containers*.
* Your application uses **WCF, WF, and other legacy frameworks**.
- Your .NET implementation choice is *.NET Framework*, or *.NET Core (in the roadmap for a future release)*.
- Your container platform choice must be *Windows containers* because of the .NET Framework dependency.
- - When the dependency runs on *.NET Core*, you can also choose *Linux containers*.
* Your application involves **Consumption of Azure services**.
- Your .NET implementation choice is *.NET Framework*, or *.NET Core (eventually all Azure services will provide client SDKs for .NET Core)*.
- - Your container platform choice must be *Windows containers* because of the .NET Framework dependency.
- - When the dependency runs on *.NET Core*, you can also choose *Linux containers*.
+ - Your container platform choice must be *Windows containers* if you use .NET Framework client APIs.
+ - If you use client APIs available for *.NET Core*, you can also choose between *Linux containers and Windows containers*.
>[!div class="step-by-step"]
[Previous](net-framework-container-scenarios.md)
diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md
index f5522a35ba74c..bd1128ad7d435 100644
--- a/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md
+++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/general-guidance.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -13,7 +13,7 @@ ms.topic: article
This section provides a summary of when to choose .NET Core or .NET Framework. We provide more details about these choices in the sections that follow.
-You should use .NET Core for your containerized Docker server application when:
+You should use .NET Core, with Linux or Windows Containers, for your containerized Docker server application when:
- You have cross-platform needs. For example, you want to use both Linux and Windows Containers.
@@ -25,7 +25,7 @@ In short, when you create new containerized .NET applications, you should consid
An additional benefit of using .NET Core is that you can run side by side .NET versions for applications within the same machine. This benefit is more important for servers or VMs that do not use containers, because containers isolate the versions of .NET that the app needs. (As long as they are compatible with the underlying OS.)
-You should use .NET Framework for your containerized Docker server application when:
+You should use .NET Framework, with Windows Containers, for your containerized Docker server application when:
- Your application currently uses .NET Framework and has strong dependencies on Windows.
@@ -33,7 +33,15 @@ You should use .NET Framework for your containerized Docker server application w
- You need to use third-party .NET libraries or NuGet packages that are not available for .NET Core.
-Using .NET Framework on Docker can improve your deployment experiences by minimizing deployment issues. This "lift and shift" scenario is important for "dockerizing" legacy applications (at least, those that are not based on microservices).
+Using .NET Framework on Docker can improve your deployment experiences by minimizing deployment issues. This [*"lift and shift" scenario*](https://aka.ms/liftandshiftwithcontainersebook) is important for containerizing legacy applications that were originally developed with the traditional .NET Framework, like ASP.NET Web Forms, ASP.NET MVC web apps, or WCF (Windows Communication Foundation) services.
+
+### Additional resources
+
+- **eBook: Modernize existing .NET Framework applications with Azure and Windows Containers**
+ [*https://aka.ms/liftandshiftwithcontainersebook*](https://aka.ms/liftandshiftwithcontainersebook)
+
+- **Sample apps: Modernization of legacy ASP.NET web apps by using Windows Containers**
+ [*https://aka.ms/eshopmodernizing*](https://aka.ms/eshopmodernizing)
>[!div class="step-by-step"]
diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md
index 96008565897e0..632d04f1ffcc9 100644
--- a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md
+++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-container-os-targets.md
@@ -4,14 +4,18 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
---
# What OS to target with .NET containers
-Given the diversity of operating systems supported by Docker and the differences between .NET Framework and .NET Core, you should target a specific OS and specific versions depending on the framework you are using. For instance, in Linux there are many distros available, but only few of them are supported in the official .NET Docker images (like Debian and Alpine). For Windows you can use Windows Server Core or Nano Server; these versions of Windows provide different characteristics (like IIS versus a self-hosted web server like Kestrel) that might be needed by .NET Framework or NET Core.
+Given the diversity of operating systems supported by Docker and the differences between .NET Framework and .NET Core, you should target a specific OS and specific versions depending on the framework you are using.
+
+For Windows, you can use Windows Server Core or Windows Nano Server. These Windows versions provide different characteristics (IIS in Windows Server Core versus a self-hosted web server like Kestrel in Nano Server) that might be needed by .NET Framework or .NET Core, respectively.
+
+For Linux, multiple distros are available and supported in official .NET Docker images (like Debian).
In Figure 3-1 you can see the possible OS version depending on the .NET framework used.
@@ -23,11 +27,19 @@ You can also create your own Docker image in cases where you want to use a diffe
When you add the image name to your Dockerfile file, you can select the operating system and version depending on the tag you use, as in the following examples:
-- microsoft/dotnet**:1.1-runtime**
- .NET Core 1.1 runtime-only on Linux
+- microsoft/**dotnet:2.0.0-runtime-jessie**
+
+ .NET Core 2.0 runtime-only on Linux
+
+- microsoft/**dotnet:2.0.0-runtime-nanoserver-1709**
+
+ .NET Core 2.0 runtime-only on Windows Nano Server (Windows Server 2016 Fall Creators Update version 1709)
+
+- microsoft/**aspnetcore:2.0**
+
+ .NET Core 2.0 multi-architecture: Supports Linux and Windows Nano Server depending on the Docker host.
+ The aspnetcore image has a few optimizations for ASP.NET Core.
-- microsoft/dotnet:**1.1-runtime-nanoserver**
- .NET Core 1.1 runtime-only on Windows Nano Server
diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md
index 0462aab22cac2..503ca8c5b9d91 100644
--- a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md
+++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-core-container-scenarios.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -23,7 +23,11 @@ Clearly, if your goal is to have an application (web application or service) tha
.NET Core also supports macOS as a development platform. However, when you deploy containers to a Docker host, that host must (currently) be based on Linux or Windows. For example, in a development environment, you could use a Linux VM running on a Mac.
-[Visual Studio](https://www.visualstudio.com/) provides an integrated development environment (IDE) for Windows. [Visual Studio for Mac](https://www.visualstudio.com/vs/visual-studio-mac/) is an evolution of Xamarin Studio running in macOS, but as of the time of this writing, it still does not support Docker development. You can also use [Visual Studio Code](https://code.visualstudio.com/) (VS Code) on macOS, Linux, and Windows. VS Code fully supports .NET Core, including IntelliSense and debugging. Because VS Code is a lightweight editor, you can use it to develop containerized apps on the Mac in conjunction with the Docker CLI and the .NET Core CLI (dotnet cli). You can also target .NET Core with most third-party editors like Sublime, Emacs, vi, and the open-source OmniSharp project, which also provides IntelliSense support. In addition to the IDEs and editors, you can use the .NET Core command-line tools (dotnet CLI) for all supported platforms.
+[Visual Studio](https://www.visualstudio.com/) provides an integrated development environment (IDE) for Windows and supports Docker development.
+
+[Visual Studio for Mac](https://www.visualstudio.com/vs/visual-studio-mac/) is an IDE, an evolution of Xamarin Studio, that runs on macOS and has supported Docker-based development since mid-2017.
+
+You can also use [Visual Studio Code](https://code.visualstudio.com/) (VS Code) on macOS, Linux, and Windows. VS Code fully supports .NET Core, including IntelliSense and debugging. Because VS Code is a lightweight editor, you can use it to develop containerized apps on the Mac in conjunction with the Docker CLI and the .NET Core CLI (dotnet cli). You can also target .NET Core with most third-party editors like Sublime Text, Emacs, vi, and the open-source OmniSharp project, which provides IntelliSense support for .NET languages. In addition to the IDEs and editors, you can use the [.NET Core command-line interface (CLI) tools](https://docs.microsoft.com/dotnet/core/tools/?tabs=netcore2x) for all supported platforms.
## Using containers for new ("green-field") projects
@@ -31,20 +35,14 @@ Containers are commonly used in conjunction with a microservices architecture, a
## Creating and deploying microservices on containers
-You could use the full .NET framework for microservices-based applications (without containers) when using plain processes, because .NET Framework is already installed and shared across processes. However, if you are using containers, the image for .NET Framework (Windows Server Core plus the full .NET Framework within each image) is probably too heavy for a microservices-on-containers approach.
+You could use the traditional .NET Framework for building microservices-based applications (without containers) by using plain processes. That way, because the .NET Framework is already installed and shared across processes, processes are light and fast to start. However, if you are using containers, the image for the traditional .NET Framework is also based on Windows Server Core and that makes it too heavy for a microservices-on-containers approach.
-In contrast, .NET Core is the best candidate if you are embracing a microservices-oriented system that is based on containers, because .NET Core is lightweight. In addition, its related container images, either the Linux image or the Windows Nano image, are lean and small.
+In contrast, .NET Core is the best candidate if you are embracing a microservices-oriented system that is based on containers, because .NET Core is lightweight. In addition, its related container images, either the Linux image or the Windows Nano image, are lean and small, making containers light and fast to start.
A microservice is meant to be as small as possible: to be light when spinning up, to have a small footprint, to have a small Bounded Context, to represent a small area of concerns, and to be able to start and stop fast. For those requirements, you will want to use small and fast-to-instantiate container images like the .NET Core container image.
A microservices architecture also allows you to mix technologies across a service boundary. This enables a gradual migration to .NET Core for new microservices that work in conjunction with other microservices or with services developed with Node.js, Python, Java, GoLang, or other technologies.
-There are many orchestrators you can use when targeting microservices and containers. For large and complex microservice systems being deployed as Linux containers, [Azure Container Service](https://azure.microsoft.com/services/container-service/) has multiple orchestrator offerings (Mesos DC/OS, Kubernetes, and Docker Swarm), which makes it a good choice. You can also use Azure Service Fabric for Linux, which supports Docker Linux containers. (At the time of this writing, this offering was still in [preview](https://docs.microsoft.com/azure/service-fabric/service-fabric-linux-overview). Check the [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) for the latest status.)
-
-For large and complex microservice systems being deployed as Windows Containers, most orchestrators are currently in a less mature state. However, you currently can use Azure Service Fabric for Windows Containers, as well as Azure Container Service. Azure Service Fabric is well established for running mission-critical Windows applications.
-
-All these platforms support .NET Core and make them ideal for hosting your microservices.
-
## Deploying high density in scalable systems
When your container-based system needs the best possible density, granularity, and performance, .NET Core and ASP.NET Core are your best options. ASP.NET Core is up to ten times faster than ASP.NET in the full .NET Framework, and it leads other popular industry technologies for microservices, such as Java servlets, Go, and Node.js.
diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md
index bb2005ea088c8..5fdd7af620889 100644
--- a/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md
+++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/net-framework-container-scenarios.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -13,43 +13,37 @@ ms.topic: article
While .NET Core offers significant benefits for new applications and application patterns, .NET Framework will continue to be a good choice for many existing scenarios.
-## Migrating existing applications directly to a Docker container
+## Migrating existing applications directly to a Windows Server container
You might want to use Docker containers just to simplify deployment, even if you are not creating microservices. For example, perhaps you want to improve your DevOps workflow with Docker—containers can give you better isolated test environments and can also eliminate deployment issues caused by missing dependencies when you move to a production environment. In cases like these, even if you are deploying a monolithic application, it makes sense to use Docker and Windows Containers for your current .NET Framework applications.
-In most cases, you will not need to migrate your existing applications to .NET Core; you can use Docker containers that include the full .NET Framework. However, a recommended approach is to use .NET Core as you extend an existing application, such as writing a new service in ASP.NET Core.
+In most cases for this scenario, you will not need to migrate your existing applications to .NET Core; you can use Docker containers that include the full .NET Framework. However, a recommended approach is to use .NET Core as you extend an existing application, such as writing a new service in ASP.NET Core.
## Using third-party .NET libraries or NuGet packages not available for .NET Core
-Third-party libraries are quickly embracing the [.NET Standard](https://docs.microsoft.com/dotnet/standard/net-standard), which enables code sharing across all .NET flavors, including .NET Core. With the .NET Standard version 2.0, this will be even easier, because the .NET Core API surface will become significantly bigger. Your .NET Core applications will be able to directly use existing .NET Framework libraries.
+Third-party libraries are quickly embracing the [.NET Standard](https://docs.microsoft.com/dotnet/standard/net-standard), which enables code sharing across all .NET flavors, including .NET Core. With .NET Standard 2.0 and beyond, the API surface compatibility across different frameworks has become significantly larger, and .NET Core 2.0 applications can also directly reference existing .NET Framework libraries (see [compat shim](https://github.com/dotnet/standard/blob/master/docs/faq.md#how-does-net-standard-versioning-work)).
-Be aware that whenever you run a library or process based on the full .NET Framework, because of its dependencies on Windows, the container image used for that application or service will need to be based on a Windows Container image.
+However, even with that significant progress since .NET Standard 2.0 and .NET Core 2.0, there might be cases where certain NuGet packages need Windows to run and might not support .NET Core. If those packages are critical for your application, then you will need to use .NET Framework on Windows Containers.
## Using .NET technologies not available for .NET Core
-Some .NET Framework technologies are not available in the current version of .NET Core (version 1.1 as of this writing). Some of them will be available in later .NET Core releases (.NET Core 2.0), but others do not apply to the new application patterns targeted by .NET Core and might never be available.
+Some .NET Framework technologies are not available in the current version of .NET Core (version 2.0 as of this writing). Some of them will be available in later .NET Core releases (.NET Core 2.x), but others do not apply to the new application patterns targeted by .NET Core and might never be available.
-The following list shows most of the technologies that are not available in .NET Core 1.1:
+The following list shows most of the technologies that are not available in .NET Core 2.0:
- ASP.NET Web Forms. This technology is only available on .NET Framework. Currently there are no plans to bring ASP.NET Web Forms to .NET Core.
-- ASP.NET Web Pages. This technology is slated to be included in a future .NET Core release, as explained in the [.NET Core roadmap.](https://github.com/aspnet/Home/wiki/Roadmap)
-
-- ASP.NET SignalR. As of the .NET Core 1.1 release (November 2016), ASP.NET SignalR is not available for ASP.NET Core (neither client nor server). There are plans to include it in a future release, as explained in the .NET Core roadmap. A preview is available at the [Server-side](https://github.com/aspnet/SignalR-Server) and [Client Library](https://github.com/aspnet/SignalR-Client-Net) GitHub repositories.
-
-- WCF services. Even when a [WCF-Client library](https://github.com/dotnet/wcf) is available to consume WCF services from .NET Core (as of early 2017), the WCF server implementation is only available on .NET Framework. This scenario is being considered for future releases of .NET Core.
+- WCF services. Even though a [WCF-Client library](https://github.com/dotnet/wcf) is available to consume WCF services from .NET Core, as of mid-2017 the WCF server implementation is only available on .NET Framework. This scenario might be considered for future releases of .NET Core.
- Workflow-related services. Windows Workflow Foundation (WF), Workflow Services (WCF + WF in a single service), and WCF Data Services (formerly known as ADO.NET Data Services) are only available on .NET Framework. There are currently no plans to bring them to .NET Core.
-- Language support. As of the release of Visual Studio 2017, Visual Basic and F\# do not have tooling support for .NET Core, but this support is planned for updated versions of Visual Studio.
-
In addition to the technologies listed in the official [.NET Core roadmap](https://github.com/aspnet/Home/wiki/Roadmap), other features might be ported to .NET Core. For a full list, look at the items tagged as [port-to-core](https://github.com/dotnet/corefx/issues?q=is%3Aopen+is%3Aissue+label%3Aport-to-core) on the CoreFX GitHub site. Note that this list does not represent a commitment from Microsoft to bring those components to .NET Core—the items simply capture requests from the community. If you care about any of the components listed above, consider participating in the discussions on GitHub so that your voice can be heard. And if you think something is missing, please [file a new issue in the CoreFX repository](https://github.com/dotnet/corefx/issues/new).
## Using a platform or API that does not support .NET Core
Some Microsoft or third-party platforms do not support .NET Core. For example, some Azure services provide an SDK that is not yet available for consumption on .NET Core. This is temporary, because all Azure services will eventually use .NET Core. For example, the [Azure DocumentDB SDK for .NET Core](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/1.2.1) was released as a preview on November 16, 2016, but it is now generally available (GA) as a stable version.
-In the meantime, you can always use the equivalent REST API from the Azure service instead of the client SDK.
+In the meantime, if any platform or service in Azure still doesn’t support .NET Core with its client API, you can use the equivalent REST API from the Azure service or the client SDK for full .NET Framework.
### Additional resources
diff --git a/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md b/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md
index 7b4ca3b149bda..20835f6adbb5d 100644
--- a/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md
+++ b/docs/standard/microservices-architecture/net-core-net-framework-containers/official-net-docker-images.md
@@ -4,7 +4,7 @@ description: .NET Microservices Architecture for Containerized .NET Applications
keywords: Docker, Microservices, ASP.NET, Container
author: CESARDELATORRE
ms.author: wiwagn
-ms.date: 05/26/2017
+ms.date: 10/18/2017
ms.prod: .net-core
ms.technology: dotnet-docker
ms.topic: article
@@ -33,7 +33,7 @@ Why multiple images? When developing, building, and running containerized applic
### During development and build
-During development, what is important is how fast you can iterate changes, and the ability to debug the changes. The size of the image is not as important as the ability to make changes to your code and see the changes quickly. Some of our tools, like [yo docker](https://github.com/Microsoft/generator-docker) for Visual Studio Code, use the development ASP.NET Core image (microsoft/aspnetcore-build) during development; you could even use that image as a build container. When building inside a Docker container, the important aspects are the elements that are needed in order to compile your app. This includes the compiler and any other .NET dependencies, plus web development dependencies like npm, Gulp, and Bower.
+During development, what is important is how fast you can iterate changes, and the ability to debug the changes. The size of the image is not as important as the ability to make changes to your code and see the changes quickly. Some tools and "build-agent containers" use the development ASP.NET Core image (microsoft/aspnetcore-build) during the development and build process. When building inside a Docker container, the important aspects are the elements that are needed in order to compile your app. This includes the compiler and any other .NET dependencies, plus web development dependencies like npm, Gulp, and Bower.
Why is this type of build image important? You do not deploy this image to production. Instead, it is an image you use to build the content you place into a production image. This image would be used in your continuous integration (CI) environment or build environment. For instance, rather than manually installing all your application dependencies directly on a build agent host (a VM, for example), the build agent would instantiate a .NET Core build image with all the dependencies required to build the application. Your build agent only needs to know how to run this Docker image. This simplifies your CI environment and makes it much more predictable.
@@ -45,22 +45,16 @@ In this optimized image you put only the binaries and other content needed to ru
Although there are multiple versions of the .NET Core and ASP.NET Core images, they all share one or more layers, including the base layer. Therefore, the amount of disk space needed to store an image is small; it consists only of the delta between your custom image and its base image. The result is that it is quick to pull the image from your registry.
-When you explore the .NET image repositories at Docker Hub, you will find multiple image versions classified or marked with tags. These help decide which one to use, depending on the version you need, like those in the following list::
+When you explore the .NET image repositories at Docker Hub, you will find multiple image versions classified or marked with tags. These tags help to decide which one to use, depending on the version you need, like those in the following list:
-- microsoft/aspnetcore:**1.1**
- ASP.NET Core, with runtime only and ASP.NET Core optimizations, on Linux
+- microsoft/**aspnetcore:2.0**
-- microsoft/aspnetcore-build:**1.0-1.1**
- ASP.NET Core, with SDKs included, on Linux
+ ASP.NET Core, with runtime only and ASP.NET Core optimizations, on Linux and Windows (multi-arch)
-- microsoft/dotnet:**1.1-runtime**
- .NET Core 1.1, with runtime only, on Linux
+- microsoft/**aspnetcore-build:2.0**
-- microsoft/dotnet:**1.1-runtime-deps**
- .NET Core 1.1, with runtime and framework dependencies for self-contained apps, on Linux
+ ASP.NET Core, with SDKs included, on Linux and Windows (multi-arch)
-- microsoft/dotnet**:1.1.0-sdk-msbuild**
- .NET Core 1.1 with SDKs included, on Linux
>[!div class="step-by-step"]
[Previous](net-container-os-targets.md)
diff --git a/docs/standard/microservices-architecture/toc.md b/docs/standard/microservices-architecture/toc.md
index 7c85da195a85f..f6cf7627e3bb5 100644
--- a/docs/standard/microservices-architecture/toc.md
+++ b/docs/standard/microservices-architecture/toc.md
@@ -19,7 +19,8 @@
### [Logical architecture versus physical architecture](architect-microservice-container-applications/logical-versus-physical-architecture.md)
### [Challenges and solutions for distributed data management](architect-microservice-container-applications/distributed-data-management.md)
### [Identifying domain-model boundaries for each microservice](architect-microservice-container-applications/identify-microservice-domain-model-boundaries.md)
-### [Communication between microservices](architect-microservice-container-applications/communication-between-microservices.md)
+### [Direct client-to-microservice communication versus the API Gateway pattern](architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern.md)
+### [Communication between microservices](architect-microservice-container-applications/communication-in-microservice-architecture.md)
### [Asynchronous message-based communication](architect-microservice-container-applications/asynchronous-message-based-communication.md)
### [Creating, evolving, and versioning microservice APIs and contracts](architect-microservice-container-applications/maintain-microservice-apis.md)
### [Microservices addressability and the service registry](architect-microservice-container-applications/microservices-addressability-service-registry.md)
diff --git a/docs/toc.md b/docs/toc.md
index 4d0018f470a8c..1ec307bc3bf2f 100644
--- a/docs/toc.md
+++ b/docs/toc.md
@@ -100,7 +100,7 @@
### [F# unit testing with dotnet test and xUnit]()
### [F# unit testing with dotnet test and MSTest]()
### [Running selective unit tests](core/testing/selective-unit-tests.md)
-### [Live unit testing .NET Core projects with Visual Studio]()
+### [Live unit testing .NET Core projects with Visual Studio](/visualstudio/test/live-unit-testing-start)
## [Versioning](core/versions/index.md)
### [.NET Core Support](core/versions/lts-current.md)
diff --git a/includes/migration-guide/retargeting/introduction.md b/includes/migration-guide/retargeting/introduction.md
index 57fe62ac1cac1..e3dc350cf9331 100644
--- a/includes/migration-guide/retargeting/introduction.md
+++ b/includes/migration-guide/retargeting/introduction.md
@@ -5,7 +5,7 @@ Retargeting changes affect apps that are recompiled to target a different .NET F
* Changes in the runtime environment. These affect only apps that specifically target the retargeted .NET Framework. Apps that target previous versions of the .NET Framework behave as they did when running under those versions.
-In the topics that describe etargeting changes, we have classified individual items by their expected impact, as follows:
+In the topics that describe retargeting changes, we have classified individual items by their expected impact, as follows:
**Major**
This is a significant change that affects a large number of apps or that requires substantial modification of code.
diff --git a/includes/migration-guide/retargeting/winforms/accessibility-improvements-windows-forms-controls.md b/includes/migration-guide/retargeting/winforms/accessibility-improvements-windows-forms-controls.md
index 26e3288869401..7cc437cae30d6 100644
--- a/includes/migration-guide/retargeting/winforms/accessibility-improvements-windows-forms-controls.md
+++ b/includes/migration-guide/retargeting/winforms/accessibility-improvements-windows-forms-controls.md
@@ -3,7 +3,7 @@
| | |
|---|---|
|Details|Windows Forms Framework is improving how it works with accessibility technologies to better support Windows Forms customers. These include the following changes:- Changes to improve display during High Contrast mode.
- Changes to improve the property browser experience. Property browser improvements include:
- Better keyboard navigation through the various drop-down selection windows.
- Reduced unnecessary tab stops.
- Better reporting of control types.
- Improved narrator behavior.
- Changes to implement missing UI accessibility patterns in controls.
|
-|Suggestion|**How to opt in or out of these changes** In order for the application to benefit from these changes, it must run on the .NET Framework 4.7.1 or later. The application can benefit from these changes in either of the following ways:- It is recompiled to target the .NET Framework 4.7.1. These accessibility changes are enabled by default on Windows Forms applications that target the .NET Framework 4.7.1 or later.
- It opts out of the legacy accessibility behaviors by adding the following [AppContext Switch](~/docs/framework/configure-apps/file-schema/runtime/appcontextswitchoverrides-element.md) to the `` section of the *app.config* file and setting it to false, as the following example shows.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7"/>
</startup>
<runtime>
|
+|Suggestion|**How to opt in or out of these changes** In order for the application to benefit from these changes, it must run on the .NET Framework 4.7.1 or later. The application can benefit from these changes in either of the following ways:- It is recompiled to target the .NET Framework 4.7.1. These accessibility changes are enabled by default on Windows Forms applications that target the .NET Framework 4.7.1 or later.
- It opts out of the legacy accessibility behaviors by adding the following [AppContext Switch](~/docs/framework/configure-apps/file-schema/runtime/appcontextswitchoverrides-element.md) to the `` section of the *app.config* file and setting it to `false`, as the following example shows:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7"/>
</startup>
<runtime>
|
|Scope|Major|
|Version|4.7.1|
|Type|Retargeting|
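The app.config fragment in the change above is cut off before the switch element itself, so the opt-out value is not visible here. As a hedged illustration only (the switch name `Switch.UseLegacyAccessibilityFeatures` is an assumption, not taken from this change), the same opt-out can be expressed programmatically before any Windows Forms control is created:

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Assumed switch name for the .NET Framework 4.7.1 accessibility changes;
        // setting it to false opts out of the legacy behaviors. It must be set
        // before the first Windows Forms control is created to take effect.
        AppContext.SetSwitch("Switch.UseLegacyAccessibilityFeatures", false);

        Application.EnableVisualStyles();
        Application.Run(new Form());
    }
}
```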
diff --git a/includes/migration-guide/runtime/security/rsacng-dsacng-are-once-again-usable-partial-trust-scenarios.md b/includes/migration-guide/runtime/security/rsacng-dsacng-are-once-again-usable-partial-trust-scenarios.md
index f1896424bf794..6066eeff6a358 100644
--- a/includes/migration-guide/runtime/security/rsacng-dsacng-are-once-again-usable-partial-trust-scenarios.md
+++ b/includes/migration-guide/runtime/security/rsacng-dsacng-are-once-again-usable-partial-trust-scenarios.md
@@ -2,7 +2,7 @@
| | |
|---|---|
-|Details|CngLightup (used in several higher-level crypto apis, such as T:System.Security.Cryptography.Xml.EncryptedXml
) and T:System.Security.Cryptography.RSACng
in some cases rely on full trust. These include P/Invokes without asserting F:System.Security.Permissions.SecurityPermissionFlag.UnmanagedCode
permissions, and code paths where T:System.Security.Cryptography.CngKey
has permission demands for F:System.Security.Permissions.SecurityPermissionFlag.UnmanagedCode
. Starting with the .NET Framework 4.6.2, CngLightup was used to switch to T:System.Security.Cryptography.RSACng
wherever possible. As a result, partial trust apps that successfully used T:System.Security.Cryptography.Xml.EncryptedXml
began to fail and throw T:System.Security.SecurityException
exceptions.This change adds the required asserts so that all functions using CngLightup have the required permissions.|
+|Details|CngLightup (used in several higher-level crypto APIs, such as ) and in some cases rely on full trust. These include P/Invokes without asserting permissions, and code paths where has permission demands for . Starting with the .NET Framework 4.6.2, CngLightup was used to switch to wherever possible. As a result, partial trust apps that successfully used began to fail and throw exceptions. This change adds the required asserts so that all functions using CngLightup have the required permissions.|
|Suggestion|If this change in the .NET Framework 4.6.2 has negatively impacted your partial trust apps, upgrade to the .NET Framework 4.7.1.|
|Scope|Edge|
|Version|4.6.2|
diff --git a/samples/snippets/csharp/VS_Snippets_CLR_System/system.Random/cs/Random2.cs b/samples/snippets/csharp/VS_Snippets_CLR_System/system.Random/cs/Random2.cs
index 2bfc60597032e..247893f1cacc5 100644
--- a/samples/snippets/csharp/VS_Snippets_CLR_System/system.Random/cs/Random2.cs
+++ b/samples/snippets/csharp/VS_Snippets_CLR_System/system.Random/cs/Random2.cs
@@ -8,7 +8,7 @@ public static void Main()
// Instantiate random number generator using system-supplied value as seed.
Random rand = new Random();
// Generate and display 5 random byte (integer) values.
- byte[] bytes = new byte[4];
+ byte[] bytes = new byte[5];
rand.NextBytes(bytes);
Console.WriteLine("Five random byte values:");
foreach (byte byteValue in bytes)
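The buffer is resized above because `Random.NextBytes` fills every element of the array it receives, so the array length alone decides how many random values appear; a `byte[4]` buffer would print only four of the announced five. A minimal, self-contained version of the corrected snippet:

```csharp
using System;

class RandomBytesExample
{
    static void Main()
    {
        // System-supplied seed, as in the snippet being patched.
        Random rand = new Random();

        // NextBytes fills the whole array; its length determines
        // how many random byte values are produced.
        byte[] bytes = new byte[5];
        rand.NextBytes(bytes);

        Console.WriteLine("Five random byte values:");
        foreach (byte byteValue in bytes)
            Console.Write("{0, 5}", byteValue);
        Console.WriteLine();
    }
}
```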
diff --git a/samples/snippets/csharp/VS_Snippets_CLR_System/system.guid.newguid/cs/ng.cs b/samples/snippets/csharp/VS_Snippets_CLR_System/system.guid.newguid/cs/ng.cs
index d047a3a55f895..c9e4ccd0ca5c5 100644
--- a/samples/snippets/csharp/VS_Snippets_CLR_System/system.guid.newguid/cs/ng.cs
+++ b/samples/snippets/csharp/VS_Snippets_CLR_System/system.guid.newguid/cs/ng.cs
@@ -3,15 +3,15 @@
using System;
-class Sample
+class Sample
{
- public static void Main()
+ public static void Main()
{
- Guid g;
-// Create and display the value of two GUIDs.
- g = Guid.NewGuid();
- Console.WriteLine(g);
- Console.WriteLine(Guid.NewGuid());
+ Guid g;
+ // Create and display the value of two GUIDs.
+ g = Guid.NewGuid();
+ Console.WriteLine(g);
+ Console.WriteLine(Guid.NewGuid());
}
}
diff --git a/samples/snippets/csharp/VS_Snippets_Winforms/System.Drawing.Icon.ExtractAssociatedIconEx/CS/Form1.cs b/samples/snippets/csharp/VS_Snippets_Winforms/System.Drawing.Icon.ExtractAssociatedIconEx/CS/Form1.cs
index 9b2a50ddcbcff..3e4e58899bf68 100644
--- a/samples/snippets/csharp/VS_Snippets_Winforms/System.Drawing.Icon.ExtractAssociatedIconEx/CS/Form1.cs
+++ b/samples/snippets/csharp/VS_Snippets_Winforms/System.Drawing.Icon.ExtractAssociatedIconEx/CS/Form1.cs
@@ -47,8 +47,7 @@ public void ExtractAssociatedIconEx()
Icon iconForFile = SystemIcons.WinLogo;
item = new ListViewItem(file.Name, 1);
- iconForFile = Icon.ExtractAssociatedIcon(file.FullName);
-
+
// Check to see if the image collection contains an image
// for this extension, using the extension as a key.
if (!imageList1.Images.ContainsKey(file.Extension))
@@ -75,4 +74,4 @@ static void Main()
Application.Run(new Form1());
}
}
-}
\ No newline at end of file
+}
diff --git a/samples/snippets/visualbasic/VS_Snippets_CLR_System/system.guid.newguid/vb/ng.vb b/samples/snippets/visualbasic/VS_Snippets_CLR_System/system.guid.newguid/vb/ng.vb
index c5d232402656c..bb174c05cd855 100644
--- a/samples/snippets/visualbasic/VS_Snippets_CLR_System/system.guid.newguid/vb/ng.vb
+++ b/samples/snippets/visualbasic/VS_Snippets_CLR_System/system.guid.newguid/vb/ng.vb
@@ -3,9 +3,9 @@
Imports System
Class Sample
- Public Shared Sub Main()
+ Public Shared Sub Main()
Dim g As Guid
-' Create and display the value of two GUIDs.
+ ' Create and display the value of two GUIDs.
g = Guid.NewGuid()
Console.WriteLine(g)
Console.WriteLine(Guid.NewGuid())
diff --git a/xml/Microsoft.VisualBasic/Interaction.xml b/xml/Microsoft.VisualBasic/Interaction.xml
index aada969ce3fb2..d6dab3aee0bd0 100644
--- a/xml/Microsoft.VisualBasic/Interaction.xml
+++ b/xml/Microsoft.VisualBasic/Interaction.xml
@@ -26,7 +26,7 @@
## Examples
- The following example uses the `Shell` function to run an application specified by the user. Specifying as the second argument opens the application in normal size and gives it the focus.
+ The following example uses the `Shell` function to run an application specified by the user. Specifying as the second argument opens the application in normal size and gives it the focus.
```
Dim procID As Integer
@@ -1200,7 +1200,7 @@ ID = Shell("""C:\Program Files\display.exe"" -a -q", , True, 100000)
## Examples
- The following example uses the `Shell` function to run an application specified by the user. Specifying as the second argument opens the application in normal size and gives it the focus.
+ The following example uses the `Shell` function to run an application specified by the user. Specifying as the second argument opens the application in normal size and gives it the focus.
[!code-vb[VbVbalrFunctions#47](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrFunctions/VB/Class1.vb#47)]
diff --git a/xml/Microsoft.Win32/Registry.xml b/xml/Microsoft.Win32/Registry.xml
index 0ce324a7d4622..61a19e42fac40 100644
--- a/xml/Microsoft.Win32/Registry.xml
+++ b/xml/Microsoft.Win32/Registry.xml
@@ -291,7 +291,7 @@
Valid root names are HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_CLASSES_ROOT, HKEY_USERS, HKEY_PERFORMANCE_DATA, HKEY_CURRENT_CONFIG, and HKEY_DYN_DATA. For example, in Visual Basic the string "HKEY_CURRENT_USER\MyTestKey" accesses key/value pairs for the subkey "MyTestKey" in the HKEY_CURRENT_USER root.
- When the method retrieves expandable string values (), it expands environment strings using data from the local environment. If a value containing expandable references to environment variables has been stored as a string (), rather than as an expandable string (), does not expand it. You can expand such a string after it has been retrieved by calling the method.
+ When the method retrieves expandable string values (), it expands environment strings using data from the local environment. If a value containing expandable references to environment variables has been stored as a string (), rather than as an expandable string (), does not expand it. You can expand such a string after it has been retrieved by calling the method.
> [!NOTE]
> The recommended way to retrieve data from HKEY_PERFORMANCE_DATA is to use the class rather than the method.
@@ -492,11 +492,11 @@
> [!NOTE]
> The method opens a registry key, sets the value, and closes the key each time it is called. If you need to modify a large number of values, the method might provide better performance. The class also provides methods that allow you to add an access control list (ACL) to a registry key, to test the data type of a value before retrieving it, and to delete keys.
- This overload of stores 64-bit integers as strings (). To store 64-bit numbers as values, use the method overload.
+ This overload of stores 64-bit integers as strings (). To store 64-bit numbers as values, use the method overload.
- This overload of stores all string values as objects, even if they contain expandable references to environment variables. To save string values as expandable strings (), use the method overload.
+ This overload of stores all string values as objects, even if they contain expandable references to environment variables. To save string values as expandable strings (), use the method overload.
- This overload is equivalent to calling the method overload with .
+ This overload is equivalent to calling the method overload with .
> [!NOTE]
> On Windows 98 and Windows Millennium Edition (Windows Me), the registry is not Unicode, and not all Unicode characters are valid for all code pages. A Unicode character that is invalid for the current code page is replaced by the best available match. No exception is thrown.
@@ -575,10 +575,10 @@
> [!NOTE]
> The method opens a registry key, sets the value, and closes the key each time it is called. If you need to modify a large number of values, the method might provide better performance. The class also provides methods that allow you to add an access control list (ACL) to a registry key, to test the data type of a value before retrieving it, and to delete keys.
- If the type of the specified `value` does not match the specified `valueKind`, and the data cannot be converted, is thrown. For example, you can store a as a , but only if its value is less than the maximum value of a . You cannot store a single string value as a .
+ If the type of the specified `value` does not match the specified `valueKind`, and the data cannot be converted, is thrown. For example, you can store a as a , but only if its value is less than the maximum value of a . You cannot store a single string value as a .
> [!NOTE]
-> If boxed values are passed for or , the conversion is done using the invariant culture.
+> If boxed values are passed for or , the conversion is done using the invariant culture.
> [!NOTE]
> On Windows 98 and Windows Millennium Edition (Windows Me), the registry is not Unicode, and not all Unicode characters are valid for all code pages. A Unicode character that is invalid for the current code page is replaced by the best available match. No exception is thrown.
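The Registry remarks above distinguish plain strings (REG_SZ) from expandable strings (REG_EXPAND_SZ) and note that only the latter are expanded on retrieval. The following sketch, reusing the illustrative key name from the remarks, shows that difference with the static `Registry.SetValue`/`GetValue` methods; it is an illustration, not the article's own sample:

```csharp
using System;
using Microsoft.Win32;

class RegistryValueKindExample
{
    static void Main()
    {
        const string keyName = @"HKEY_CURRENT_USER\MyTestKey";

        // Stored as REG_SZ: the %PATH% reference is kept literally and is
        // not expanded when the value is read back.
        Registry.SetValue(keyName, "PlainString", "%PATH%", RegistryValueKind.String);

        // Stored as REG_EXPAND_SZ: GetValue expands %PATH% using the local
        // environment when the value is retrieved.
        Registry.SetValue(keyName, "ExpandableString", "%PATH%", RegistryValueKind.ExpandString);

        Console.WriteLine(Registry.GetValue(keyName, "PlainString", null));      // prints %PATH% literally
        Console.WriteLine(Registry.GetValue(keyName, "ExpandableString", null)); // prints the expanded path
    }
}
```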
diff --git a/xml/Microsoft.Win32/RegistryKey.xml b/xml/Microsoft.Win32/RegistryKey.xml
index 9b627c5745667..4dfca3273c1f1 100644
--- a/xml/Microsoft.Win32/RegistryKey.xml
+++ b/xml/Microsoft.Win32/RegistryKey.xml
@@ -368,9 +368,9 @@
## Remarks
The method creates a registry key that has the access control specified by the `registrySecurity` parameter. The object that is returned represents the registry key, but that object is not restricted by the access control specified in the `registrySecurity` parameter.
- If `permissionCheck` is , the key is opened for read/write access. If `permissionCheck` is , the key is opened for read access.
+ If `permissionCheck` is , the key is opened for read/write access. If `permissionCheck` is , the key is opened for read access.
- For backward compatibility, the key is opened for reading and writing if `permissionCheck` is and the parent key also has . If the parent key has any other setting, read/write status is controlled by the parent key's setting.
+ For backward compatibility, the key is opened for reading and writing if `permissionCheck` is and the parent key also has . If the parent key has any other setting, read/write status is controlled by the parent key's setting.
In order to perform this action, the user must have permissions at this level and below in the registry hierarchy.
@@ -1002,9 +1002,9 @@
method overload with the bitwise combination of the following flags: , , and . You can use that overload to search for other permissions.
+ This method overload is equivalent to calling the method overload with the bitwise combination of the following flags: , , and . You can use that overload to search for other permissions.
- The user must have rights to call this method.
+ The user must have rights to call this method.
]]>
@@ -1040,9 +1040,9 @@
, , and . Alternatively, you can use the method overload, which specifies exactly that combination of values.
+ To request the access permissions currently granted to users, specify the bitwise combination of the following flags: , , and . Alternatively, you can use the method overload, which specifies exactly that combination of values.
- The user must have rights to call this method.
+ The user must have rights to call this method.
]]>
@@ -1130,10 +1130,10 @@
> [!NOTE]
> A registry key can have one value that is not associated with any name. When this unnamed value is displayed in the registry editor, the string "(Default)" appears instead of a name. To retrieve this unnamed value, specify either `null` or the empty string ("") for `name`.
- When the method retrieves expandable string values (), it expands environment strings using data from the local environment. To retrieve expandable string values from the registry on a remote computer, use the method overload to specify that you do not want environment strings expanded.
+ When the method retrieves expandable string values (), it expands environment strings using data from the local environment. To retrieve expandable string values from the registry on a remote computer, use the method overload to specify that you do not want environment strings expanded.
> [!NOTE]
-> If a value containing expandable references to environment variables has been stored as a string (), rather than as an expandable string (), does not expand it. You can expand such a string after it has been retrieved by calling the method.
+> If a value containing expandable references to environment variables has been stored as a string (), rather than as an expandable string (), does not expand it. You can expand such a string after it has been retrieved by calling the method.
> [!NOTE]
> The recommended way to retrieve data from the key is to use the class rather than the method.
@@ -1195,10 +1195,10 @@
> [!NOTE]
> A registry key can have one value that is not associated with any name. When this unnamed value is displayed in the registry editor, the string "(Default)" appears instead of a name. To retrieve this unnamed value, specify either `null` or the empty string ("") for `name`.
- When the method retrieves expandable string values (), it expands environment strings using data from the local environment. To retrieve expandable string values from the registry on a remote computer, use the overload to specify that you do not want environment strings expanded.
+ When the method retrieves expandable string values (), it expands environment strings using data from the local environment. To retrieve expandable string values from the registry on a remote computer, use the overload to specify that you do not want environment strings expanded.
> [!NOTE]
-> If a value containing expandable references to environment variables has been stored as a string (), rather than as an expandable string (), the method does not expand it. You can expand such a string after it has been retrieved by calling the method.
+> If a value containing expandable references to environment variables has been stored as a string (), rather than as an expandable string (), the method does not expand it. You can expand such a string after it has been retrieved by calling the method.
> [!NOTE]
> The recommended way to retrieve data from the key is to use the class rather than the method.
@@ -1262,7 +1262,7 @@
when retrieving a registry value of type to retrieve the string without expanding embedded environment variables.
+ Use this overload to specify special processing of the retrieved value. For example, you can specify when retrieving a registry value of type to retrieve the string without expanding embedded environment variables.
Use the `defaultValue` parameter to specify the value to return if `name` does not exist.
@@ -1548,7 +1548,7 @@
. The requested key must be a root key on the remote machine, and is identified by the appropriate value.
+ The local machine registry is opened if `machineName` is . The requested key must be a root key on the remote machine, and is identified by the appropriate value.
In order for a key to be opened remotely, both the server and client machines must be running the remote registry service, and have remote administration enabled.
@@ -1610,7 +1610,7 @@
. The requested key must be a root key on the remote machine, and is identified by the appropriate value.
+ The local machine registry is opened if `machineName` is . The requested key must be a root key on the remote machine, and is identified by the appropriate value.
In order for a key to be opened remotely, both the server and client machines must be running the remote registry service, and have remote administration enabled.
@@ -1718,7 +1718,7 @@
## Remarks
Rather than throwing an exception, this method returns `null` if the requested key does not exist.
- If `permissionCheck` is , the key is opened for reading and writing; if `permissionCheck` is or , the key is opened for reading unless the parent key was opened with .
+ If `permissionCheck` is , the key is opened for reading and writing; if `permissionCheck` is or , the key is opened for reading unless the parent key was opened with .
In order to use the method, you must have an instance of the class. To get an instance of , use one of the static members of the class.
@@ -1884,9 +1884,9 @@
## Remarks
Rather than throwing an exception, this method returns `null` if the requested key does not exist.
- If `permissionCheck` is , the key is opened for reading and writing; if `permissionCheck` is or , the key is opened for reading unless the parent key was opened with .
+ If `permissionCheck` is , the key is opened for reading and writing; if `permissionCheck` is or , the key is opened for reading unless the parent key was opened with .
- The access specified for `permissionCheck` takes precedence over the access specified for `rights`. For example, if you specify for `permissionCheck` and for `rights`, an attempt to write to the subkey throws an exception.
+ The access specified for `permissionCheck` takes precedence over the access specified for `rights`. For example, if you specify for `permissionCheck` and for `rights`, an attempt to write to the subkey throws an exception.
In order to use the method, you must have an instance of the class. To get an instance of , use one of the static members of the class.
@@ -1996,9 +1996,9 @@
If the specified `name` does not exist in the key, it is created and the associated value is set to `value`.
- This overload of stores 64-bit integers as strings (). To store 64-bit numbers as values, use the overload that specifies .
+ This overload of stores 64-bit integers as strings (). To store 64-bit numbers as values, use the overload that specifies .
- This overload of stores all string values as , even if they contain expandable references to environment variables. To save string values as expandable strings (), use the overload that specifies .
+ This overload of stores all string values as , even if they contain expandable references to environment variables. To save string values as expandable strings (), use the overload that specifies .
Numeric types other than 32-bit integers are stored as strings by this method overload. Enumeration elements are stored as strings containing the element names.
@@ -2084,10 +2084,10 @@
> [!NOTE]
> Specifying the registry data type is the same as using the overload.
- If the type of the specified `value` does not match the specified `valueKind`, and the data cannot be converted, is thrown. For example, you can store a as a , but only if its value is less than the maximum value of a . You cannot store a single string value as a .
+ If the type of the specified `value` does not match the specified `valueKind`, and the data cannot be converted, is thrown. For example, you can store a as a , but only if its value is less than the maximum value of a . You cannot store a single string value as a .
> [!NOTE]
-> If boxed values are passed for or , the conversion is done using the invariant culture.
+> If boxed values are passed for or , the conversion is done using the invariant culture.
> [!CAUTION]
> Do not expose objects in such a way that a malicious program could create thousands of meaningless subkeys or key/value pairs. For example, do not allow callers to enter arbitrary keys or values.
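The RegistryKey remarks cover the same expansion behavior at the instance level, where `RegistryValueOptions.DoNotExpandEnvironmentNames` suppresses expansion on retrieval. A short sketch under those assumptions (key and value names are illustrative):

```csharp
using System;
using Microsoft.Win32;

class RegistryKeyOptionsExample
{
    static void Main()
    {
        // Open or create a writable subkey, then store an expandable string value.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey("MyTestKey"))
        {
            key.SetValue("Dir", "%windir%", RegistryValueKind.ExpandString);

            // Default retrieval expands environment variables from the local environment.
            Console.WriteLine(key.GetValue("Dir"));

            // DoNotExpandEnvironmentNames returns the raw, unexpanded string,
            // which is useful when the value comes from a remote machine.
            Console.WriteLine(
                key.GetValue("Dir", null, RegistryValueOptions.DoNotExpandEnvironmentNames));
        }
    }
}
```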
diff --git a/xml/Microsoft.Win32/SystemEvents.xml b/xml/Microsoft.Win32/SystemEvents.xml
index 6c81cdaa4d7b8..e371dd488b6f0 100644
--- a/xml/Microsoft.Win32/SystemEvents.xml
+++ b/xml/Microsoft.Win32/SystemEvents.xml
@@ -49,7 +49,7 @@
The service in this example starts a thread that runs an instance of `HiddenForm`. The events are hooked up and handled in the form. The events must be hooked up in the load event of the form, to make sure that the form is completely loaded first; otherwise the events will not be raised.
> [!NOTE]
-> The example provides all the necessary code, including the form initialization code typically generated by [!INCLUDE[vsprvs](~/includes/vsprvs-md.md)] designers. If you are developing your service in [!INCLUDE[vsprvs](~/includes/vsprvs-md.md)], you can omit the second partial class and use the **Properties** window to set the height and width of the hidden form to zero, the border style to , and the window state to .
+> The example provides all the necessary code, including the form initialization code typically generated by [!INCLUDE[vsprvs](~/includes/vsprvs-md.md)] designers. If you are developing your service in [!INCLUDE[vsprvs](~/includes/vsprvs-md.md)], you can omit the second partial class and use the **Properties** window to set the height and width of the hidden form to zero, the border style to , and the window state to .
To run the example:
diff --git a/xml/System.AddIn.Contract.Automation/IRemoteMethodInfoContract.xml b/xml/System.AddIn.Contract.Automation/IRemoteMethodInfoContract.xml
index d4ea8594f315e..05c215b265c81 100644
--- a/xml/System.AddIn.Contract.Automation/IRemoteMethodInfoContract.xml
+++ b/xml/System.AddIn.Contract.Automation/IRemoteMethodInfoContract.xml
@@ -80,7 +80,7 @@
returns a default in which the property is set to the value and the property is set to the value .
+ If the invoked method does not have a return value (for example, the method is a constructor), returns a default in which the property is set to the value and the property is set to the value .
]]>
diff --git a/xml/System.AddIn.Contract.Automation/RemoteTypeData.xml b/xml/System.AddIn.Contract.Automation/RemoteTypeData.xml
index e3838b4c6356a..b96c061962469 100644
--- a/xml/System.AddIn.Contract.Automation/RemoteTypeData.xml
+++ b/xml/System.AddIn.Contract.Automation/RemoteTypeData.xml
@@ -259,7 +259,7 @@
, the value of this field is .
+ If the remote type is an , the value of this field is .
]]>
diff --git a/xml/System.AddIn.Contract/RemoteArgument.xml b/xml/System.AddIn.Contract/RemoteArgument.xml
index ce2ebe4641e44..0836c4e06b2f1 100644
--- a/xml/System.AddIn.Contract/RemoteArgument.xml
+++ b/xml/System.AddIn.Contract/RemoteArgument.xml
@@ -32,7 +32,7 @@
provides constructors for each of the types that it supports. You can also use the methods to create objects. The methods automatically call the appropriate constructor for your argument type.
- If you create a using the default parameterless constructor, the property is set to the value and the property is set to the value .
+ If you create a using the default parameterless constructor, the property is set to the value and the property is set to the value .
]]>
@@ -57,7 +57,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -110,7 +110,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -135,7 +135,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -160,7 +160,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -185,7 +185,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -210,7 +210,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -235,7 +235,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -260,7 +260,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -285,7 +285,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -310,7 +310,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -335,7 +335,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -365,7 +365,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -390,7 +390,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -415,7 +415,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -445,7 +445,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -475,7 +475,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -505,7 +505,7 @@
property to , the property to , and the property to `false`.
+ This constructor sets the property to , the property to , and the property to `false`.
]]>
@@ -533,7 +533,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -631,7 +631,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -659,7 +659,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -687,7 +687,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -715,7 +715,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -743,7 +743,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -771,7 +771,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -799,7 +799,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -827,7 +827,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -855,7 +855,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -883,7 +883,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -916,7 +916,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -944,7 +944,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -972,7 +972,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -1005,7 +1005,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -1038,7 +1038,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
@@ -1071,7 +1071,7 @@
property to , the property to , and the property to the value of the `isByRef` parameter.
+ This constructor sets the property to , the property to , and the property to the value of the `isByRef` parameter.
]]>
diff --git a/xml/System.AddIn.Hosting/AddInProcess.xml b/xml/System.AddIn.Hosting/AddInProcess.xml
index 20042a8de76ce..a542bf11cd14e 100644
--- a/xml/System.AddIn.Hosting/AddInProcess.xml
+++ b/xml/System.AddIn.Hosting/AddInProcess.xml
@@ -53,7 +53,7 @@
constructor with the flag, to specify that the process that runs the add-in will have the same bits-per-word as the host process.
+ This constructor has the same effect as using the constructor with the flag, to specify that the process that runs the add-in will have the same bits-per-word as the host process.
]]>
diff --git a/xml/System.AddIn.Hosting/AddInToken.xml b/xml/System.AddIn.Hosting/AddInToken.xml
index f354236051c82..f9661c5893d91 100644
--- a/xml/System.AddIn.Hosting/AddInToken.xml
+++ b/xml/System.AddIn.Hosting/AddInToken.xml
@@ -524,7 +524,7 @@
Use the enumerator returned by this method to iterate through the qualification data items of the pipeline segments associated with the current token. Each item of qualification data is a structure that identifies the pipeline segment and contains a name/value pair from a attribute applied to that segment.
> [!NOTE]
-> The add-in model does not use qualification data that is applied to the host view of the add-in. As a result, when you enumerate qualification data you will not find any items whose property is .
+> The add-in model does not use qualification data that is applied to the host view of the add-in. As a result, when you enumerate qualification data you will not find any items whose property is .
Alternatively, you can use the property to get a nested set of dictionaries that contain the qualification data of the pipeline segments.
@@ -651,7 +651,7 @@
The keys and values of these inner dictionaries are the names and values specified in the attributes for the segments. If no qualification data has been applied to a segment, its dictionary is empty.
> [!NOTE]
-> The add-in model does not use qualification data that is applied to the host view of the add-in. As a result, the dictionary for is always empty.
+> The add-in model does not use qualification data that is applied to the host view of the add-in. As a result, the dictionary for is always empty.
Alternatively, you can obtain qualification data by enumerating an as if it were a collection of structures, using a `foreach` statement (`For Each` in Visual Basic, `for each` in Visual C++). See the example provided for the structure.
@@ -692,7 +692,7 @@
Use the enumerator returned by this method to iterate through the qualification data items of the pipeline segments associated with the current token. Each item of qualification data is a structure that identifies the pipeline segment and contains the name/value pair from a attribute applied to that segment.
> [!NOTE]
-> The add-in model does not use qualification data that is applied to the host view of the add-in. As a result, when you enumerate qualification data you will not find any items whose property is .
+> The add-in model does not use qualification data that is applied to the host view of the add-in. As a result, when you enumerate qualification data you will not find any items whose property is .
Alternatively, you can use the property to get a nested set of dictionaries containing the qualification data of the pipeline segments.
diff --git a/xml/System.CodeDom/CodeTryCatchFinallyStatement.xml b/xml/System.CodeDom/CodeTryCatchFinallyStatement.xml
index e40bc5962a9f0..b13b3c3119b09 100644
--- a/xml/System.CodeDom/CodeTryCatchFinallyStatement.xml
+++ b/xml/System.CodeDom/CodeTryCatchFinallyStatement.xml
@@ -33,7 +33,7 @@
The property contains the statements to execute within a `try` block. The property contains the `catch` clauses to handle caught exceptions. The property contains the statements to execute within a `finally` block.
> [!NOTE]
-> Not all languages support `try`/`catch` blocks. Call the method with the flag to determine whether a code generator supports `try`/`catch` blocks.
+> Not all languages support `try`/`catch` blocks. Call the method with the flag to determine whether a code generator supports `try`/`catch` blocks.
diff --git a/xml/System.Collections/IStructuralComparable.xml b/xml/System.Collections/IStructuralComparable.xml
index 5386f7e255f96..4d9d942c5f133 100644
--- a/xml/System.Collections/IStructuralComparable.xml
+++ b/xml/System.Collections/IStructuralComparable.xml
@@ -19,11 +19,6 @@
netstandard
2.0.0.0
-
- FSharp.Core
- 2.3.98.1
- 3.98.4.0
-
Supports the structural comparison of collection objects.
@@ -80,11 +75,6 @@
netstandard
2.0.0.0
-
- FSharp.Core
- 2.3.98.1
- 3.98.4.0
-
System.Int32
diff --git a/xml/System.Collections/IStructuralEquatable.xml b/xml/System.Collections/IStructuralEquatable.xml
index 7af7b0f91e353..0a0de35e2c268 100644
--- a/xml/System.Collections/IStructuralEquatable.xml
+++ b/xml/System.Collections/IStructuralEquatable.xml
@@ -19,11 +19,6 @@
netstandard
2.0.0.0
-
- FSharp.Core
- 2.3.98.1
- 3.98.4.0
-
Defines methods to support the comparison of objects for structural equality.
@@ -79,11 +74,6 @@
netstandard
2.0.0.0
-
- FSharp.Core
- 2.3.98.1
- 3.98.4.0
-
System.Boolean
@@ -142,11 +132,6 @@
netstandard
2.0.0.0
-
- FSharp.Core
- 2.3.98.1
- 3.98.4.0
-
System.Int32
diff --git a/xml/System.ComponentModel.Design.Serialization/CollectionCodeDomSerializer.xml b/xml/System.ComponentModel.Design.Serialization/CollectionCodeDomSerializer.xml
index e91d3e759a547..90e08aeecf41b 100644
--- a/xml/System.ComponentModel.Design.Serialization/CollectionCodeDomSerializer.xml
+++ b/xml/System.ComponentModel.Design.Serialization/CollectionCodeDomSerializer.xml
@@ -104,7 +104,7 @@
2. If the collection is an , the method will cast the collection to an and add through that interface.
- 1. If the collection has no *Add* method, but is marked with , will enumerate the collection and serialize each element.
+ 1. If the collection has no *Add* method, but is marked with , will enumerate the collection and serialize each element.
]]>
diff --git a/xml/System.ComponentModel.Design/IRootDesigner.xml b/xml/System.ComponentModel.Design/IRootDesigner.xml
index c511e1fac53b1..6900f2fe00e66 100644
--- a/xml/System.ComponentModel.Design/IRootDesigner.xml
+++ b/xml/System.ComponentModel.Design/IRootDesigner.xml
@@ -86,7 +86,7 @@
This method returns a view object that can present a user interface to the user. The returned data type is an , because there can be a variety of different user interface technologies. Development environments typically support more than one technology.
> [!NOTE]
-> The and fields are obsolete. Use for new designer implementations.
+> The and fields are obsolete. Use for new designer implementations.
]]>
@@ -124,7 +124,7 @@
The enumeration indicates the supported view technologies.
> [!NOTE]
-> The and fields are obsolete. Use for new designer implementations.
+> The and fields are obsolete. Use for new designer implementations.
]]>
diff --git a/xml/System.ComponentModel/BindableAttribute.xml b/xml/System.ComponentModel/BindableAttribute.xml
index a56db792ff35b..893c4eb284922 100644
--- a/xml/System.ComponentModel/BindableAttribute.xml
+++ b/xml/System.ComponentModel/BindableAttribute.xml
@@ -291,7 +291,7 @@
is set to the constant member . Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
+ When you mark a property with this value, the is set to the constant member . Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
]]>
diff --git a/xml/System.ComponentModel/DesignerSerializationVisibilityAttribute.xml b/xml/System.ComponentModel/DesignerSerializationVisibilityAttribute.xml
index 3a766f7cad5d1..313964cff48a2 100644
--- a/xml/System.ComponentModel/DesignerSerializationVisibilityAttribute.xml
+++ b/xml/System.ComponentModel/DesignerSerializationVisibilityAttribute.xml
@@ -83,7 +83,7 @@
and sets its value to .
+ The following code example specifies how a property on a component is saved by a designer. This code creates a new and sets its value to .
[!code-cpp[Classic DesignerSerializationVisibilityAttribute.DesignerSerializationVisibilityAttribute Example#1](~/samples/snippets/cpp/VS_Snippets_Winforms/Classic DesignerSerializationVisibilityAttribute.DesignerSerializationVisibilityAttribute Example/CPP/source.cpp#1)]
[!code-csharp[Classic DesignerSerializationVisibilityAttribute.DesignerSerializationVisibilityAttribute Example#1](~/samples/snippets/csharp/VS_Snippets_Winforms/Classic DesignerSerializationVisibilityAttribute.DesignerSerializationVisibilityAttribute Example/CS/source.cs#1)]
@@ -155,7 +155,7 @@
. Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
+ When you mark a property with this value, this attribute is set to the constant member . Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
]]>
diff --git a/xml/System.ComponentModel/EditorBrowsableAttribute.xml b/xml/System.ComponentModel/EditorBrowsableAttribute.xml
index 19d7b1c6e1891..eb46ba670d6ac 100644
--- a/xml/System.ComponentModel/EditorBrowsableAttribute.xml
+++ b/xml/System.ComponentModel/EditorBrowsableAttribute.xml
@@ -81,7 +81,7 @@
.
+ The default for this property is .
]]>
@@ -218,7 +218,7 @@
.
+ The default for this property is .
]]>
diff --git a/xml/System.ComponentModel/ICustomTypeDescriptor.xml b/xml/System.ComponentModel/ICustomTypeDescriptor.xml
index 66039d5ed0336..56182010658b1 100644
--- a/xml/System.ComponentModel/ICustomTypeDescriptor.xml
+++ b/xml/System.ComponentModel/ICustomTypeDescriptor.xml
@@ -311,7 +311,7 @@
## Remarks
The events for this instance can differ from the set of events that the class provides. For example, if the component is site-based, the site can add or remove additional events.
- Implementors can return if no properties are specified. This method should never return `null`.
+ Implementors can return if no properties are specified. This method should never return `null`.
]]>
@@ -390,7 +390,7 @@
## Remarks
The properties for this instance can differ from the set of properties that the class provides. For example, if the component is sited, the site can add or remove additional properties.
- Implementers can return if no properties are specified. This method should never return `null`.
+ Implementers can return if no properties are specified. This method should never return `null`.
]]>
diff --git a/xml/System.ComponentModel/LocalizableAttribute.xml b/xml/System.ComponentModel/LocalizableAttribute.xml
index 2793cb748f00a..53ac3b3620236 100644
--- a/xml/System.ComponentModel/LocalizableAttribute.xml
+++ b/xml/System.ComponentModel/LocalizableAttribute.xml
@@ -91,7 +91,7 @@
, sets its value to , and binds it to the property.
+ The following example marks a property as needing to be localized. This code creates a new , sets its value to , and binds it to the property.
[!code-cpp[Classic LocalizableAttribute.LocalizableAttribute Example#1](~/samples/snippets/cpp/VS_Snippets_Winforms/Classic LocalizableAttribute.LocalizableAttribute Example/CPP/source.cpp#1)]
[!code-csharp[Classic LocalizableAttribute.LocalizableAttribute Example#1](~/samples/snippets/csharp/VS_Snippets_Winforms/Classic LocalizableAttribute.LocalizableAttribute Example/CS/source.cs#1)]
@@ -129,7 +129,7 @@
. Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
+ When you mark a property with this value, this attribute is set to the constant member . Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
]]>
diff --git a/xml/System.ComponentModel/MergablePropertyAttribute.xml b/xml/System.ComponentModel/MergablePropertyAttribute.xml
index 70b9795ec1860..6fa515d5e67d1 100644
--- a/xml/System.ComponentModel/MergablePropertyAttribute.xml
+++ b/xml/System.ComponentModel/MergablePropertyAttribute.xml
@@ -95,7 +95,7 @@
, sets its value to , and binds it to the property.
+ The following example marks a property as appropriate to merge. This code creates a new , sets its value to , and binds it to the property.
[!code-cpp[Classic MergablePropertyAttribute.MergablePropertyAttribute Example#1](~/samples/snippets/cpp/VS_Snippets_Winforms/Classic MergablePropertyAttribute.MergablePropertyAttribute Example/CPP/source.cpp#1)]
[!code-csharp[Classic MergablePropertyAttribute.MergablePropertyAttribute Example#1](~/samples/snippets/csharp/VS_Snippets_Winforms/Classic MergablePropertyAttribute.MergablePropertyAttribute Example/CS/source.cs#1)]
@@ -181,7 +181,7 @@
. Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
+ When you mark a property with this value, this attribute is set to the constant member . Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
]]>
diff --git a/xml/System.ComponentModel/RecommendedAsConfigurableAttribute.xml b/xml/System.ComponentModel/RecommendedAsConfigurableAttribute.xml
index 6695ca66208ab..44965d598981f 100644
--- a/xml/System.ComponentModel/RecommendedAsConfigurableAttribute.xml
+++ b/xml/System.ComponentModel/RecommendedAsConfigurableAttribute.xml
@@ -100,7 +100,7 @@
, sets its value to , and binds it to the property.
+ The following example marks a property as usable as an application setting. This code creates a new , sets its value to , and binds it to the property.
[!code-cpp[Classic RecommendedAsConfigurableAttribute.RecommendedAsConfigurableAttribute Example#1](~/samples/snippets/cpp/VS_Snippets_Winforms/Classic RecommendedAsConfigurableAttribute.RecommendedAsConfigurableAttribute Example/CPP/source.cpp#1)]
[!code-csharp[Classic RecommendedAsConfigurableAttribute.RecommendedAsConfigurableAttribute Example#1](~/samples/snippets/csharp/VS_Snippets_Winforms/Classic RecommendedAsConfigurableAttribute.RecommendedAsConfigurableAttribute Example/CS/source.cs#1)]
diff --git a/xml/System.ComponentModel/RunInstallerAttribute.xml b/xml/System.ComponentModel/RunInstallerAttribute.xml
index b1e147355696e..9ab988f207375 100644
--- a/xml/System.ComponentModel/RunInstallerAttribute.xml
+++ b/xml/System.ComponentModel/RunInstallerAttribute.xml
@@ -111,7 +111,7 @@
. Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
+ When you mark a property with this value, this attribute is set to the constant member . Therefore, when you want to check whether the attribute is set to this value in your code, you must specify the attribute as .
]]>
diff --git a/xml/System.Configuration.Internal/DelegatingConfigHost.xml b/xml/System.Configuration.Internal/DelegatingConfigHost.xml
index 56a23675f16ab..e65b875537cef 100644
--- a/xml/System.Configuration.Internal/DelegatingConfigHost.xml
+++ b/xml/System.Configuration.Internal/DelegatingConfigHost.xml
@@ -57,8 +57,8 @@
System.Configuration.Internal.IInternalConfigurationBuilderHost
- To be added.
- To be added.
+ Gets the object if the delegated host provides the functionality required by that interface.
+ An object.
To be added.
@@ -1002,10 +1002,10 @@
- To be added.
- To be added.
- To be added.
- To be added.
+ The to process.
+ to use to process the .
+ Processes a object using the provided .
+ The processed .
To be added.
@@ -1026,10 +1026,10 @@
- To be added.
- To be added.
- To be added.
- To be added.
+ The to process.
+ to use to process the .
+ Processes the markup of a configuration section using the provided .
+ The processed .
To be added.
diff --git a/xml/System.Configuration.Internal/IInternalConfigurationBuilderHost.xml b/xml/System.Configuration.Internal/IInternalConfigurationBuilderHost.xml
index 8369dbcc597ee..8d4131429bf46 100644
--- a/xml/System.Configuration.Internal/IInternalConfigurationBuilderHost.xml
+++ b/xml/System.Configuration.Internal/IInternalConfigurationBuilderHost.xml
@@ -13,7 +13,7 @@
- To be added.
+ Defines the supplemental interface to for configuration hosts that wish to support the application of objects.
To be added.
@@ -34,10 +34,11 @@
- To be added.
- To be added.
- To be added.
- To be added.
+ The to process.
+
+ to use to process the configSection.
+ Processes a object using the provided .
+ The processed .
To be added.
@@ -58,10 +59,11 @@
- To be added.
- To be added.
- To be added.
- To be added.
+ The to process.
+
+ to use to process the rawXml.
+ Processes the markup of a configuration section using the provided .
+ The processed .
To be added.
diff --git a/xml/System.Configuration/ConfigurationBuilder.xml b/xml/System.Configuration/ConfigurationBuilder.xml
index 62ea82ed1bbd4..714e6182daea3 100644
--- a/xml/System.Configuration/ConfigurationBuilder.xml
+++ b/xml/System.Configuration/ConfigurationBuilder.xml
@@ -11,7 +11,7 @@
- To be added.
+ Represents the base class to be extended by custom configuration builder implementations.
To be added.
@@ -26,7 +26,7 @@
- To be added.
+ Initializes a new instance of the class.
To be added.
@@ -46,9 +46,9 @@
- To be added.
- To be added.
- To be added.
+ The to process.
+ Accepts a object from the configuration system and returns a modified or new object for further use.
+ The processed .
To be added.
@@ -68,9 +68,9 @@
- To be added.
- To be added.
- To be added.
+ The to process.
+ Accepts an representing the raw configuration section from a config file and returns a modified or new for further use.
+ The processed .
To be added.
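The summaries added above describe the two processing hooks on `ConfigurationBuilder`: one receives the raw XML of a section, the other the deserialized section object. A skeletal custom builder, offered only as a sketch of how those hooks are typically overridden (the class name is hypothetical):

```csharp
using System.Configuration;
using System.Xml;

// A pass-through builder; a real implementation would rewrite values here,
// for example expanding tokens or pulling secrets from another store.
public class PassThroughConfigBuilder : ConfigurationBuilder
{
    public override XmlNode ProcessRawXml(XmlNode rawXml)
    {
        // Called with the raw section markup before it is deserialized.
        return base.ProcessRawXml(rawXml);
    }

    public override ConfigurationSection ProcessConfigurationSection(ConfigurationSection configSection)
    {
        // Called with the deserialized section; return it, or a modified/new one.
        return base.ProcessConfigurationSection(configSection);
    }
}
```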
diff --git a/xml/System.Configuration/ConfigurationBuilderCollection.xml b/xml/System.Configuration/ConfigurationBuilderCollection.xml
index 115b288b7e6e5..643d458dc5a39 100644
--- a/xml/System.Configuration/ConfigurationBuilderCollection.xml
+++ b/xml/System.Configuration/ConfigurationBuilderCollection.xml
@@ -11,7 +11,7 @@
- To be added.
+ Maintains a collection of objects by name.
To be added.
@@ -26,7 +26,7 @@
- To be added.
+ Initializes a new instance of the class.
To be added.
@@ -46,9 +46,12 @@
- To be added.
- To be added.
+ The object to add to the collection.
+ Adds a object to the object.
To be added.
+
+ is .
+ The configuration provider in must implement the class .
@@ -67,9 +70,9 @@
- To be added.
- To be added.
- To be added.
+ A configuration builder name.
+ Gets the object from the that is configured with the provided name.
+ The object that is configured with the provided .
To be added.
diff --git a/xml/System.Configuration/ConfigurationBuilderSettings.xml b/xml/System.Configuration/ConfigurationBuilderSettings.xml
index 610b4908ff8c8..e59e062cc4436 100644
--- a/xml/System.Configuration/ConfigurationBuilderSettings.xml
+++ b/xml/System.Configuration/ConfigurationBuilderSettings.xml
@@ -11,7 +11,7 @@
- To be added.
+ Represents a group of configuration elements that configure the providers for the configuration section.
To be added.
@@ -26,7 +26,7 @@
- To be added.
+ Initializes a new instance of the class.
To be added.
@@ -48,8 +48,8 @@
System.Configuration.ProviderSettingsCollection
- To be added.
- To be added.
+ Gets a collection of objects that represent the properties of configuration builders.
+ The objects.
To be added.
@@ -66,8 +66,8 @@
System.Configuration.ConfigurationPropertyCollection
- To be added.
- To be added.
+ Gets the of a .
+ A of a .
To be added.
diff --git a/xml/System.Configuration/ConfigurationBuildersSection.xml b/xml/System.Configuration/ConfigurationBuildersSection.xml
index b86f3310d4566..b782608050377 100644
--- a/xml/System.Configuration/ConfigurationBuildersSection.xml
+++ b/xml/System.Configuration/ConfigurationBuildersSection.xml
@@ -11,7 +11,7 @@
- To be added.
+ Provides programmatic access to the section. This class can't be inherited.
To be added.
@@ -26,7 +26,7 @@
- To be added.
+ Initializes a new instance of the class.
To be added.
@@ -48,8 +48,8 @@
System.Configuration.ProviderSettingsCollection
- To be added.
- To be added.
+ Gets a of all objects in all participating configuration files.
+ The objects in all participating configuration files.
To be added.
@@ -69,10 +69,13 @@
- To be added.
- To be added.
- To be added.
+ A configuration builder name or a comma-separated list of names. If builderName is a comma-separated list of names, a special aggregate object that references and applies all named configuration builders is returned.
+ Returns a object that has the provided configuration builder name.
+ A object that has the provided configuration .
To be added.
+ A configuration provider type can't be instantiated under a partially trusted security policy ( is not present on the target assembly).
+ ConfigurationBuilders.IgnoreLoadFailure is disabled by default. If a bin-deployed configuration builder can't be found or instantiated for one of the sections read from the configuration file, a is trapped and reported. If you wish to ignore load failures, enable ConfigurationBuilders.IgnoreLoadFailure.
+ ConfigurationBuilders.IgnoreLoadFailure is disabled by default. While loading a configuration builder if a occurs for one of the sections read from the configuration file, a is trapped and reported. If you wish to ignore load failures, enable ConfigurationBuilders.IgnoreLoadFailure.
diff --git a/xml/System.Configuration/SchemeSettingElement.xml b/xml/System.Configuration/SchemeSettingElement.xml
index 7829061ff9644..8657502349ac2 100644
--- a/xml/System.Configuration/SchemeSettingElement.xml
+++ b/xml/System.Configuration/SchemeSettingElement.xml
@@ -22,7 +22,7 @@
## Remarks
The class represents the \ element under the Uri section within a configuration file. The class represents an instance of an element in the class.
- The class and the \ section in a configuration file looks generic, implying that an application can specify any enumeration value for any scheme. In fact, only the flag for HTTP and HTTPS schemes are supported. All other settings are ignored.
+ The class and the \ section in a configuration file look generic, implying that an application can specify any enumeration value for any scheme. In fact, only the flag for HTTP and HTTPS schemes is supported. All other settings are ignored.
]]>
diff --git a/xml/System.Configuration/SchemeSettingElementCollection.xml b/xml/System.Configuration/SchemeSettingElementCollection.xml
index 8786cca7654cb..3b89b799610d1 100644
--- a/xml/System.Configuration/SchemeSettingElementCollection.xml
+++ b/xml/System.Configuration/SchemeSettingElementCollection.xml
@@ -27,7 +27,7 @@
## Remarks
The class represents the \ element under the Uri section within a configuration file.
- The class and the \ section in a configuration file looks generic, implying that an application can specify any enumeration value for any scheme. In fact, only the flag for HTTP and HTTPS schemes are supported. All other settings are ignored.
+ The class and the \ section in a configuration file look generic, implying that an application can specify any enumeration value for any scheme. In fact, only the flag for HTTP and HTTPS schemes is supported. All other settings are ignored.
By default, the class un-escapes percent encoded path delimiters before executing path compression. This was implemented as a security mechanism against attacks like the following:
diff --git a/xml/System.Configuration/SectionInformation.xml b/xml/System.Configuration/SectionInformation.xml
index 85c0530194408..ce75136c5a94c 100644
--- a/xml/System.Configuration/SectionInformation.xml
+++ b/xml/System.Configuration/SectionInformation.xml
@@ -290,8 +290,8 @@
System.Configuration.ConfigurationBuilder
- To be added.
- To be added.
+ Gets the object for this configuration section.
+ The object for this configuration section.
To be added.
diff --git a/xml/System.Data.SqlClient/SqlDataReader.xml b/xml/System.Data.SqlClient/SqlDataReader.xml
index 80993945986b4..4b7364c1aa8af 100644
--- a/xml/System.Data.SqlClient/SqlDataReader.xml
+++ b/xml/System.Data.SqlClient/SqlDataReader.xml
@@ -38,7 +38,7 @@
and are the only properties that you can call after the is closed. Although the property may be accessed while the exists, always call before returning the value of to guarantee an accurate return value.
- When using sequential access (), an will be raised if the position is advanced and another read operation is attempted on the previous column.
+ When using sequential access (), an will be raised if the position is advanced and another read operation is attempted on the previous column.
> [!NOTE]
> For optimal performance, avoids creating unnecessary objects or making unnecessary copies of data. Therefore, multiple calls to methods such as return a reference to the same object. Use caution if you are modifying the underlying value of the objects returned by methods such as .
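 A minimal sketch of the sequential-access rule described above; the connection string, table, and column names are hypothetical.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class SequentialReadDemo
{
    static void Main()
    {
        using (var connection = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT Id, Payload FROM Documents", connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                while (reader.Read())
                {
                    int id = reader.GetInt32(0);          // read column 0 first
                    string payload = reader.GetString(1); // then column 1, in order
                    // Reading column 0 again at this point would fail, because
                    // the reader has already advanced past it.
                    Console.WriteLine($"{id}: {payload.Length} chars");
                }
            }
        }
    }
}
```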
diff --git a/xml/System.Data.SqlTypes/SqlBoolean.xml b/xml/System.Data.SqlTypes/SqlBoolean.xml
index edb74a7d04288..0f549b15c5fb2 100644
--- a/xml/System.Data.SqlTypes/SqlBoolean.xml
+++ b/xml/System.Data.SqlTypes/SqlBoolean.xml
@@ -1485,7 +1485,7 @@
The false operator can be used to test the of the to determine whether it is false.
Returns if the supplied parameter is false, otherwise.
- .]]>
+ .]]>
@@ -1844,7 +1844,7 @@
The true operator can be used to test the of the to determine whether it is true.
Returns if the supplied parameter is true, otherwise.
- .]]>
+ .]]>
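 As a rough illustration of how the true and false operators let a SqlBoolean be used directly in a conditional, here is a small sketch; the variable names and values are illustrative.

```csharp
using System;
using System.Data.SqlTypes;

class SqlBooleanOperatorDemo
{
    static void Main()
    {
        SqlBoolean flag = SqlBoolean.False;

        // The true/false operators let SqlBoolean drive an if statement
        // without an explicit conversion to bool.
        if (flag)
        {
            Console.WriteLine("flag is true");
        }
        else
        {
            Console.WriteLine("flag is false (or null)");
        }
    }
}
```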
diff --git a/xml/System.Diagnostics/BooleanSwitch.xml b/xml/System.Diagnostics/BooleanSwitch.xml
index be92787212b29..bef5665c99423 100644
--- a/xml/System.Diagnostics/BooleanSwitch.xml
+++ b/xml/System.Diagnostics/BooleanSwitch.xml
@@ -218,7 +218,7 @@
By default, this field is set to `false` (disabled). To enable the switch, assign this field the value `true`. To disable the switch, assign it the value `false`. The value of this property is determined by the value of the base class property .
> [!NOTE]
-> This method uses the flag to prevent being called from untrusted code; only the immediate caller is required to have permission. If your code can be called from partially trusted code, do not pass the user input to class methods without validation.
+> This method uses the flag to prevent being called from untrusted code; only the immediate caller is required to have permission. If your code can be called from partially trusted code, do not pass user input to class methods without validation.
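 A small sketch of toggling the switch described above programmatically; the switch name and description are illustrative, and in practice the value is usually supplied by the configuration file.

```csharp
using System;
using System.Diagnostics;

class BooleanSwitchDemo
{
    static readonly BooleanSwitch dataSwitch =
        new BooleanSwitch("DataSwitch", "Data access tracing");

    static void Main()
    {
        // Enable the switch in code instead of through configuration.
        dataSwitch.Enabled = true;

        if (dataSwitch.Enabled)
        {
            Console.WriteLine("Data tracing is enabled.");
        }
    }
}
```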
diff --git a/xml/System.Diagnostics/EventSchemaTraceListener.xml b/xml/System.Diagnostics/EventSchemaTraceListener.xml
index ac2e51c133ab7..6c32cd30e029e 100644
--- a/xml/System.Diagnostics/EventSchemaTraceListener.xml
+++ b/xml/System.Diagnostics/EventSchemaTraceListener.xml
@@ -72,7 +72,7 @@
|`EventID`|None|Always present.|This element represents parameter input (`id`).|
|`Execution`|`ProcessID`|Depends on the presence of the flag in the property.|The `ProcessID` attribute is specified in the . On the Microsoft Windows 98 and Windows Millennium Edition operating systems, if `ProcessID` is larger than 2,147,483,647, it is a positive representation of a negative number and should be converted to obtain the correct process identifier.|
||`ThreadID`|Present when `ProcessID` is present.|The `ThreadID` attribute is specified in the .|
-|`Level`|None|Always present.|This element represents parameter input (the numeric value of `eventType`). Parameter values that are larger than 255 are output as a level 8, which represents . Trace event types , , , , and are output as levels 1, 2, 4, 8, and 10, respectively.|
+|`Level`|None|Always present.|This element represents parameter input (the numeric value of `eventType`). Parameter values that are larger than 255 are output as a level 8, which represents . Trace event types , , , , and are output as levels 1, 2, 4, 8, and 10, respectively.|
|`LogicalOperationStack`|None|Depends on the presence of the flag in the property.|Only one logical operation can exist. Therefore, the values are written as `LogicalOperation` nodes under the `LogicalOperationStack` element.|
|`OpCode`|None|Present when `Level` is greater than 255.|This element represents Trace event types that have numeric values greater than 255. , , , , or are output as levels 1, 2, 4, 8, and 10, respectively.|
|`Provider`|`GUID`|Always present.|Always empty.|
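 A rough sketch of producing output containing the elements listed in the table above; the file name, source name, and option flags are assumptions.

```csharp
using System.Diagnostics;

class EventSchemaDemo
{
    static void Main()
    {
        var listener = new EventSchemaTraceListener("TraceOutput.xml")
        {
            // Include ProcessID/ThreadID and the logical operation stack.
            TraceOutputOptions = TraceOptions.ProcessId | TraceOptions.LogicalOperationStack
        };

        var source = new TraceSource("DemoSource", SourceLevels.All);
        source.Listeners.Add(listener);

        // The event type maps to the Level element; Error is written as level 2.
        source.TraceEvent(TraceEventType.Error, 1, "Something failed.");
        source.Flush();
    }
}
```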
diff --git a/xml/System.Diagnostics/TraceFilter.xml b/xml/System.Diagnostics/TraceFilter.xml
index 8698d813286a8..028436efc0b58 100644
--- a/xml/System.Diagnostics/TraceFilter.xml
+++ b/xml/System.Diagnostics/TraceFilter.xml
@@ -107,7 +107,7 @@
method to indicate tracing should occur when the trace event type of the event is equal to .
+ The following code example shows how to override the method to indicate tracing should occur when the trace event type of the event is equal to .
[!code-cpp[System.Diagnostics.TraceFilter#2](~/samples/snippets/cpp/VS_Snippets_CLR_System/system.diagnostics.tracefilter/cpp/source.cpp#2)]
[!code-csharp[System.Diagnostics.TraceFilter#2](~/samples/snippets/csharp/VS_Snippets_CLR_System/system.diagnostics.tracefilter/cs/source.cs#2)]
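 A minimal sketch of such an override is shown below; filtering on TraceEventType.Error is an assumption, since the original cross-reference is not preserved here.

```csharp
using System.Diagnostics;

// Allows tracing only when the event being traced is an Error event.
public class ErrorFilter : TraceFilter
{
    public override bool ShouldTrace(TraceEventCache cache, string source,
        TraceEventType eventType, int id, string formatOrMessage,
        object[] args, object data1, object[] data)
    {
        return eventType == TraceEventType.Error;
    }
}
```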
diff --git a/xml/System.Diagnostics/TraceListener.xml b/xml/System.Diagnostics/TraceListener.xml
index 57e95dfea5bed..85cfae63cd8fd 100644
--- a/xml/System.Diagnostics/TraceListener.xml
+++ b/xml/System.Diagnostics/TraceListener.xml
@@ -992,7 +992,7 @@
property determines the optional content of trace output. The property can be set in the configuration file or programmatically during execution to include additional data specifically for a section of code. For example, you can set the property for the console trace listener to to add call stack information to the trace output.
+ The property determines the optional content of trace output. The property can be set in the configuration file or programmatically during execution to include additional data specifically for a section of code. For example, you can set the property for the console trace listener to to add call stack information to the trace output.
The enumeration is not used by the following classes and methods:
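 As a brief sketch of the call-stack example mentioned above; the listener type and message are illustrative.

```csharp
using System.Diagnostics;

class TraceOptionsDemo
{
    static void Main()
    {
        var listener = new ConsoleTraceListener
        {
            // Add call stack information to the trace output.
            TraceOutputOptions = TraceOptions.Callstack
        };

        Trace.Listeners.Add(listener);
        Trace.TraceInformation("Configured console tracing with call stacks.");
        Trace.Flush();
    }
}
```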
diff --git a/xml/System.Diagnostics/TraceSource.xml b/xml/System.Diagnostics/TraceSource.xml
index 31d4b9365ef84..cc71528a5f3b9 100644
--- a/xml/System.Diagnostics/TraceSource.xml
+++ b/xml/System.Diagnostics/TraceSource.xml
@@ -882,7 +882,7 @@
## Remarks
The method provides an informational message intended to be read by users and not by tools.
- calls the method, setting `eventType` to and passing the informative message as the message string. The method in turn calls the method of each trace listener.
+ calls the method, setting `eventType` to and passing the informative message as the message string. The method in turn calls the method of each trace listener.
]]>
@@ -938,7 +938,7 @@
The method provides an informational message intended to be read by users and not by tools.
- calls the method, setting `eventType` to and passing the message content as an object array with formatting information. The method in turn calls the method of each trace listener.
+ calls the method, setting `eventType` to and passing the message content as an object array with formatting information. The method in turn calls the method of each trace listener.
]]>
@@ -993,7 +993,7 @@
method calls the method of each trace listener in the property to write the trace information. The default method in the base class calls the method to process the call, setting `eventType` to and appending a string representation of the `relatedActivityId` GUID to `message`.
+ The method calls the method of each trace listener in the property to write the trace information. The default method in the base class calls the method to process the call, setting `eventType` to and appending a string representation of the `relatedActivityId` GUID to `message`.
is intended to be used with the logical operations of a . The `relatedActivityId` parameter relates to the property of a object. If a logical operation begins in one activity and transfers to another, the second activity logs the transfer by calling the method. The call relates the new activity identity to the previous identity. The most likely consumer of this functionality is a trace viewer that can report logical operations that span multiple activities.
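 A compact sketch combining the two calls described above; the source name, event id, and activity handling are illustrative.

```csharp
using System;
using System.Diagnostics;

class TraceTransferDemo
{
    static void Main()
    {
        var source = new TraceSource("DemoSource", SourceLevels.All);
        source.Listeners.Add(new ConsoleTraceListener());

        // TraceInformation forwards to TraceEvent as an Information event.
        source.TraceInformation("Starting activity {0}", "checkout");

        // TraceTransfer relates the current activity to a new activity id.
        Guid newActivity = Guid.NewGuid();
        source.TraceTransfer(2, "Handing off to payment activity", newActivity);
        Trace.CorrelationManager.ActivityId = newActivity;

        source.Flush();
    }
}
```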
diff --git a/xml/System.Diagnostics/TraceSwitch.xml b/xml/System.Diagnostics/TraceSwitch.xml
index 5160ec545b0f0..fcf29d1653f73 100644
--- a/xml/System.Diagnostics/TraceSwitch.xml
+++ b/xml/System.Diagnostics/TraceSwitch.xml
@@ -47,7 +47,7 @@
```
- This configuration section defines a with the set to `mySwitch`, and the set to 1, which corresponds to the enumeration value .
+ This configuration section defines a with the set to `mySwitch`, and the set to 1, which corresponds to the enumeration value .
> [!NOTE]
> In the .NET Framework version 2.0, you can use text to specify the value for a switch. For example, `true` for a or the text representing an enumeration value, such as `Error` for a . The line `` is equivalent to ``.
@@ -58,7 +58,7 @@
[!code-csharp[Classic TraceSwitch.TraceError Example#3](~/samples/snippets/csharp/VS_Snippets_CLR_Classic/classic TraceSwitch.TraceError Example/CS/remarks.cs#3)]
[!code-vb[Classic TraceSwitch.TraceError Example#3](~/samples/snippets/visualbasic/VS_Snippets_CLR_Classic/classic TraceSwitch.TraceError Example/VB/remarks.vb#3)]
- By default, the switch property is set using the value specified in the configuration file. If the constructor cannot find initial switch settings in the configuration file, the of the new switch defaults to .
+ By default, the switch property is set using the value specified in the configuration file. If the constructor cannot find initial switch settings in the configuration file, the of the new switch defaults to .
You must enable tracing or debugging to use a switch. The following syntax is compiler specific. If you use compilers other than C# or Visual Basic, refer to the documentation for your compiler.
@@ -77,7 +77,7 @@
## Examples
- The following code example creates a new and uses the switch to determine whether to print error messages. The switch is created at the class level. `MyMethod` writes the first error message if the property is set to or higher. However, `MyMethod` does not write the second error message if the is less than .
+ The following code example creates a new and uses the switch to determine whether to print error messages. The switch is created at the class level. `MyMethod` writes the first error message if the property is set to or higher. However, `MyMethod` does not write the second error message if the is less than .
[!code-cpp[Classic TraceSwitch.TraceError Example#1](~/samples/snippets/cpp/VS_Snippets_CLR_Classic/classic TraceSwitch.TraceError Example/CPP/source.cpp#1)]
[!code-csharp[Classic TraceSwitch.TraceError Example#1](~/samples/snippets/csharp/VS_Snippets_CLR_Classic/classic TraceSwitch.TraceError Example/CS/source.cs#1)]
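 A minimal sketch of the pattern the example describes; the switch name, description, and messages are illustrative.

```csharp
using System.Diagnostics;

class TraceSwitchDemo
{
    // The level would normally come from the configuration file shown earlier;
    // a value of 1 corresponds to TraceLevel.Error.
    static readonly TraceSwitch mySwitch =
        new TraceSwitch("mySwitch", "Demo trace switch");

    static void Main()
    {
        // Written when the switch level is Error (1) or higher.
        Trace.WriteLineIf(mySwitch.TraceError, "Error message.");

        // Not written unless the switch level is Verbose (4).
        Trace.WriteLineIf(mySwitch.TraceVerbose, "Verbose message.");
    }
}
```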
@@ -139,7 +139,7 @@
[!code-csharp[Classic TraceSwitch.TraceError Example#3](~/samples/snippets/csharp/VS_Snippets_CLR_Classic/classic TraceSwitch.TraceError Example/CS/remarks.cs#3)]
[!code-vb[Classic TraceSwitch.TraceError Example#3](~/samples/snippets/visualbasic/VS_Snippets_CLR_Classic/classic TraceSwitch.TraceError Example/VB/remarks.vb#3)]
- When the constructor cannot find initial switch settings in the configuration file, the property of the new switch is set to .
+ When the constructor cannot find initial switch settings in the configuration file, the property of the new switch is set to