

felix-b edited this page Oct 10, 2016 · 2 revisions

NOTE: This page is still being written

DISCLAIMER: This documentation refers to milestone Boda, which is currently in early stages of development. Some or all of the features mentioned here may not yet exist, or be unstable.

High level architecture overview

Here we explain the architecture of the platform without considering any specific application. Nevertheless, we do our best to discuss the platform and the potential applications built on top of it as a whole.

N-Tier architecture

In general, NWheels-based applications follow the N-tier architecture:

  • Databases (data tier)
  • Microservices (business tier)
  • User interaction engines (presentation tier)

[Figure: N-tier architecture]

Server side and client side

Traditionally, the data and business tiers are said to belong to the server side, and the presentation tier to the client side. However, as we will see below, these associations don't limit the variety of supported deployment scenarios.

Typically, the server side runs on a cluster of physical or virtual machines in public clouds, in proprietary data centers, or on premises. The client side can potentially run on any device capable of user interaction, e.g. a PC, a mobile device, or a TV.

There are a couple of exceptions, though:

  • IVR apps: the client side of IVR apps runs on top of an IVR platform, which usually runs in the data centers of the IVR platform vendor.

  • Fat clients: it is possible to deploy all or some of the business tier microservices together with the presentation tier (and optionally with a local database) on a client PC. In this kind of deployment, all deployed tiers run in a single process. The fat client architecture makes sense for peer-to-peer applications, and when there is a need to offload server machines by decentralizing part of the server-side processing.

User interaction engines

The presentation tier of NWheels-based applications is implemented automatically by user interaction engines. The engine-based approach to UI is beneficial because it slashes a great deal of effort and reduces the technical risk of getting it wrong (and the impact of UI implementation on project outcome is often underestimated).

Native implementations of the UI engine exist for every supported presentation platform. The engines execute the user interface according to the UIDL (User Interface Definition Language) specification of a client app.

  • As opposed to Application, which means the whole enterprise application as a software system, a client app (or simply app) means a thin program in the presentation tier, whose responsibility is interaction with the end user, delegating the rest to the business tier.

Client apps can run on any presentation platform for which an implementation of the UI engine exists. This includes (but is not limited to): an SPA (Single-Page Application) in a web browser, desktop GUI applications on Windows/Linux/OS X, native mobile apps on Android/iOS/Windows Phone, as well as less common kinds of platforms, like Smart TV and IVR (Interactive Voice Response).

The UIDL specification is explained in detail in the User interface page. It is a conceptual-level, technology-agnostic specification, which is powerful enough to capture application-specific UI compositions, behaviors, and user workflows, as well as bindings to data and business capabilities of the application.

UI themes

It is worth noting that on visual presentation platforms, UIDL apps can have stunning and unique-looking GUIs. Yet the GUIs have consistent looks and behaviors, native to the platform they execute on. This is possible because UIDL refrains from specifying any visual aspects of the user interface. A one-size-fits-all approach clearly fails when applied to the diversity of GUI platforms, and not all UI platforms are visual at all (for example, IVR platforms use the phone keypad and voice as input, and only voice as output).

Instead, the visual aspects are captured in pluggable UI themes. The themes are specialized to concrete UI engines, so their actual structure depends on concrete engine implementation. Sets of tweakable stock themes will be available in the Marketplace, for every supported UI engine. In addition, it will be possible to modify stock themes, or to develop brand new custom themes.

Custom/modified UI themes can include new conceptual widgets (explained in User interface page) with their UI implementations. Such themes can also include specialized UI implementations of widgets provided out of the box.

The pluggability of UI themes also makes rebranding a trivial feature.

Specialized apps

An enterprise application can have multiple client apps. Though the same UIDL specification (client app) can run on different presentation platforms, often there will be specialized apps for different kinds of platforms. For example, desktop PC apps and web apps intended to be accessed from a PC will usually include more elaborate user workflows and expose more business functions, compared to mobile apps and IVR.

UIDL lifecycle

The UIDL specifications of client apps are authored in C# (or another .NET language) using the UIDL framework API, and are an integral part of the application codebase. In fact, each UIDL app belongs to a microservice. When a UIDL specification executes, its calls to the UIDL API compose a UIDL document object behind the scenes. Depending on the UI engine implementation, the UIDL document can either be built into the client-side deployment, or be downloaded from a backend API endpoint provided by the microservice.
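To illustrate the idea (a sketch with invented names, not the actual NWheels UIDL API), a fluent specification whose calls compose a document object behind the scenes might look like this:

```python
class UidlDocument:
    # Hypothetical container for a composed UIDL spec.
    def __init__(self, app_name):
        self.app_name = app_name
        self.screens = []

class AppBuilder:
    # Invented fluent builder: each call records structure
    # into the document being built behind the scenes.
    def __init__(self, app_name):
        self.document = UidlDocument(app_name)

    def screen(self, name):
        self.document.screens.append({"name": name, "widgets": []})
        return self

    def widget(self, kind, binding=None):
        self.document.screens[-1]["widgets"].append(
            {"kind": kind, "binding": binding})
        return self

# Executing the "specification" yields the document object.
doc = (AppBuilder("CrmApp")
       .screen("CustomerList")
       .widget("Grid", binding="Customer.All")
       .document)
```

Whether the resulting document ships with the client-side deployment or is served from a backend endpoint is then purely a deployment decision.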

Microservices and composite UI

A typical way of structuring business tier microservices is relating each to a business capability of the application, or to a logically complete set of closely related capabilities.

As we mentioned above, each client app belongs to one of the microservices. Does this mean that a client app can only represent the capabilities of one microservice? Obviously, the answer is no. There are several options.

  1. Bindings in a client app can relate to data and functions from other microservices as well. If an app is mainly related to the capabilities of one microservice, and partially uses capabilities of other microservices, it makes sense to make that app part of the microservice it mainly belongs to.

  2. If an app equally uses capabilities of multiple microservices, and doesn't clearly belong to any of them, it's OK to have a separate microservice responsible for the UIDL spec of that app, or for all apps with no "home" microservice.

  3. A client app is not a monolith. As explained in User interface, a UIDL spec is composed of UI parts. Instead of providing a complete app, a microservice can provide UI parts that represent its data and capabilities. Then a complete app, possibly provided by a UIDL-spec-only microservice, can include UI parts from multiple capability-related microservices. This approach complies with the Composite UI pattern, and provides many benefits of loose coupling.
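The Composite UI option (3) can be sketched as follows; the part structure and merge logic are hypothetical, purely to show the shape of the idea:

```python
# Each capability microservice contributes UI parts describing
# its own data and functions.
billing_parts = {"InvoiceGrid": {"binding": "Billing.Invoices"}}
shipping_parts = {"ShipmentTracker": {"binding": "Shipping.Shipments"}}

def compose_app(name, *part_sources):
    # A UIDL-spec-only microservice merges UI parts from many
    # capability-related microservices into one complete app.
    app = {"name": name, "parts": {}}
    for parts in part_sources:
        app["parts"].update(parts)
    return app

back_office = compose_app("BackOffice", billing_parts, shipping_parts)
```

The composing microservice never needs to know how each part is implemented, which is where the loose coupling comes from.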

Backend APIs

The purpose of backend API endpoints is to allow client apps and external systems to query data and request business functions from microservices in the business tier. Backend API endpoints are a subset of communication endpoints. Communication endpoints are explained in detail on the Communication endpoints page.

Protocols

The communication protocols of the endpoints may vary greatly. They may be (including but not limited to) HTTP, WebSockets, or connections to messaging middleware. In fact, a protocol is a pluggable component of an endpoint (in compliance with the binding of the endpoint ABC pattern). For the purpose of interoperability, protocols of well-established standards will be supported, e.g. OData.

Fat clients

A special case of microservices embedded inside a fat client app is supported for PC desktop presentation platforms. In a fat client, all or some of the microservices reside together, embedded in-process with the UI engine. Communication with backend APIs of embedded microservices is translated into in-memory object invocations.
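A minimal sketch of the translation, assuming a hypothetical transport abstraction: in a fat client, the same call shape resolves to a direct in-process invocation instead of a network round-trip:

```python
class OrdersService:
    # An embedded microservice object living in the client process.
    def get_order(self, order_id):
        return {"id": order_id, "status": "shipped"}

class InMemoryTransport:
    # In a fat client, a "backend API call" degenerates into a
    # plain method invocation on the in-process service object.
    def __init__(self, service):
        self.service = service

    def invoke(self, operation, **args):
        return getattr(self.service, operation)(**args)

transport = InMemoryTransport(OrdersService())
reply = transport.invoke("get_order", order_id=42)
```

A network-backed transport with the same `invoke` signature could be substituted without changing the calling code.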

APIs are automatic

Backend APIs are provided automatically, with no application-specific code involved. The exact URLs and their semantics depend on the application frameworks used and the concrete models in the application. For example, the DDD framework provides URLs for CRUD operations on aggregates in bounded contexts owned by a microservice. It also provides URLs for sending domain commands, and for publishing domain events.
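As a rough illustration of convention-driven URLs (an invented convention; the actual URL scheme of the DDD framework may differ), routes could be derived from model names like this:

```python
def crud_routes(bounded_context, aggregate):
    # Invented convention: derive URLs from the names of a
    # bounded context and one of its aggregates.
    base = f"/api/{bounded_context.lower()}/{aggregate.lower()}"
    return {
        "query":    f"GET {base}",
        "retrieve": f"GET {base}/{{id}}",
        "create":   f"POST {base}",
        "update":   f"PUT {base}/{{id}}",
        "delete":   f"DELETE {base}/{{id}}",
        "command":  f"POST {base}/commands/{{name}}",
    }

routes = crud_routes("Sales", "Order")
```

Because the routes are derived purely from the models, adding an aggregate to a bounded context would expose its API with no hand-written endpoint code.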

Backend APIs for UIDL apps are provided automatically as well, by the UIDL framework. The exact URLs depend on the UIDL spec of a concrete app. According to the UIDL spec, UI engines access API endpoints to request data and business functions, or to query completion status of previous requests.

Custom APIs

However, the above doesn't limit your ability to provide custom APIs. TODO: provide link to more details on building custom APIs.

API requests are checked for authorization

Inside microservices, all backend API requests automatically pass an authorization check before they are granted the requested access to the resource. The rules are combined from the access control list of the client principal, together with authorization requirements specified by the resources themselves.

  • For example, with DDD framework, entities, transactions, queries, commands, and events - all have the ability to define authorization requirements.

TODO: provide link to more details on authorization.
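A minimal sketch of the combination rule, under the assumption that ACLs and resource requirements can be modeled as sets of permission strings (an assumption for illustration, not the actual implementation):

```python
def is_authorized(principal_acl, resource_requirements):
    # Access is granted only when the principal's access control
    # list satisfies every requirement declared by the resource.
    return resource_requirements <= principal_acl

# A command declares its requirements; the caller's ACL is
# checked before the command handler ever runs.
acl = {"orders:read", "orders:approve"}
approve_ok = is_authorized(acl, {"orders:approve"})
delete_ok = is_authorized(acl, {"orders:delete"})
```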

Business tier structure, scalability, and high availability

The business tier comprises one or more microservices, and the Service Locator component (explained in the next section).

Microservices are responsible for:

  • processing of incoming requests
  • execution of the business logic
  • retrieval and persistence of data
  • consumption and publishing of events

Since microservices relate to business capabilities, the exact set of microservices is an application-specific decision. It is also possible to have a monolithic business tier, which is a special case of having only one microservice.

Instances

  • Each microservice runs in one or more instances. One cannot run a microservice without specifying an instance. The instances are either uniquely named, or sequentially numbered.

  • Multiple instances of one microservice can have identical or different configurations. Identical configurations are good for elastic scalability with load balancing and map/reduce partitioning. Different configurations are typically employed for integrations with different external systems, or for serving different tenants.

Run modes

Each microservice can run in one of two modes:

  • Daemon: the microservice is started by the deployment/provisioning infrastructure, and runs until it is requested to stop. It opens communication endpoints and subscribes to incoming events of interest.

    • Daemon mode is beneficial for low-latency and high-throughput scenarios, for constantly running background processing, or when valuable in-memory state is accumulated that serves subsequent operations.
    • In daemon mode, only one process in the entire environment simultaneously executes the business logic of a microservice instance (an active replica, explained later in this section).
  • Batch: the microservice is invoked from the command line. It performs the requested operation, or a script of operations, and exits. It does not open communication endpoints and does not subscribe to future events. If specified, the microservice will process queued events and communication messages received up to the moment of invocation.

    • Batch mode is beneficial for ad-hoc or scheduled operations, when no valuable in-memory state is accumulated that serves subsequent operations, and when additional latency due to process startup delay is acceptable.
    • In batch mode, it is possible for a microservice instance to execute simultaneously in multiple processes and on multiple machines. The microservice must be stateless and have no affinity to anything on any specific machine.
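The two run modes can be caricatured in a few lines (a simplified, single-threaded sketch; a real daemon would open endpoints and subscribe to events):

```python
def run_batch(pending, handler):
    # Batch mode: process what was queued up to the moment of
    # invocation, then return so the process can exit.
    return [handler(item) for item in pending]

def run_daemon(inbox, handler, stop):
    # Daemon mode: keep serving until asked to stop; in-memory
    # state could be accumulated across iterations here.
    served = []
    while not stop():
        served.append(handler(inbox.pop(0)))
    return served

batch_out = run_batch(["reindex", "report"], handler=str.upper)

inbox = ["ping"]
daemon_out = run_daemon(inbox, str.upper, stop=lambda: not inbox)
```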

Choosing the right mode

In general, daemon mode is more complex, fragile, and resource-demanding, compared to batch mode, which is simpler and thus more reliable. Daemon mode also potentially translates into higher hosting costs on public clouds.

For these reasons, batch mode should be the default choice for a microservice, unless it fails to satisfy the requirements. Only then should daemon mode be chosen.

High availability of daemons

Daemon-mode microservices are hosted by daemon processes, or simply daemons. The deployment/provisioning technology stack is responsible for guaranteeing that a sufficient number of daemon processes are up and running to host the daemon microservices (a typical option is a Docker cluster in Swarm mode).

Thus, if a daemon process dies, another process starts (possibly on a different server) to take over from the failed one. Yet this is not enough: the valuable in-memory state of daemon microservice instances must survive the failovers. For that, daemon microservice instances run in multiple replicas.

  • Replicas: each microservice instance in daemon mode runs in one or more replicas. Multiple replicas of one microservice instance form an active/passive failover cluster. One replica is active, and the rest are passive. During normal operation, changes to in-memory state performed by the active replica are replicated through network communication channels to the passive replicas. In this way, the passive replicas are ready to take over when the active replica fails, without losing the in-memory state.

The above multi-replica redundancy is implemented by a pluggable distributed consensus algorithm, with Raft being the first candidate for implementation.
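A much-simplified sketch of the active/passive idea, without the consensus machinery (leader election, log replication) that a real Raft-based implementation requires:

```python
class Replica:
    # Holds a microservice instance's in-memory state.
    def __init__(self):
        self.state = {}

    def apply(self, key, value):
        self.state[key] = value

class ActiveReplica(Replica):
    # Every state change on the active replica is pushed to the
    # passive ones, so any of them can take over without losing
    # the in-memory state.
    def __init__(self, passives):
        super().__init__()
        self.passives = passives

    def apply(self, key, value):
        super().apply(key, value)
        for passive in self.passives:
            passive.apply(key, value)

passives = [Replica(), Replica()]
active = ActiveReplica(passives)
active.apply("session:42", "open")
# On failover, a passive replica already holds the same state.
```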

The figure below demonstrates three daemon microservices with different sets of instances and replicas:

[Figure: Microservice daemons]

More details on high availability can be found in TODO: provide link to Scalability & Availability framework.

Microservice commands, REST API, and scripting

Besides event-driven and scheduled processing inside microservices, interaction with microservices takes the form of microservice commands.

Commands can be received in a number of different ways: as incoming requests from UI apps and external systems, as commands read from scripts, or as commands typed by an administrator in a command-line shell.

The way commands are submitted to a microservice instance differs depending on microservice run mode:

  • daemon-mode microservices expose a REST API endpoint, on which they receive commands
  • batch-mode microservices read commands from the standard input

The set of commands supported by a microservice instance includes, at minimum, administration commands for reporting on status and configuration. In addition, the set of commands is typically extended by frameworks and application components inside the instance.
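One way to picture that command set is a registry seeded with the built-in admin commands and extended by frameworks and application components (a hypothetical structure, for illustration):

```python
class CommandRegistry:
    # Seeded with the minimal admin command set; frameworks and
    # application components register additional commands.
    def __init__(self):
        self.commands = {}
        self.register("status", lambda: "OK")
        self.register("config", lambda: {"mode": "batch"})

    def register(self, name, handler):
        self.commands[name] = handler

    def execute(self, name):
        return self.commands[name]()

registry = CommandRegistry()
# An application component extends the set with its own command.
registry.register("reindex-products", lambda: "reindex started")
```

The same registry could back both submission paths: a REST endpoint in daemon mode, or lines read from standard input in batch mode.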

More details on REST API and CLI can be found in TODO: provide link to REST API and CLI.

Service Locator, load balancing, and map/reduce partitioning

The Service Locator is a daemon-mode microservice, which is responsible for:

  • availability of the environment's communication endpoints to the outside world
  • dispatching of requests received through Backend API communication endpoints, to microservices
  • load balancing and map/reduce partitioning among multiple instances of microservices

From the Service Locator point of view, a backend API request can be one of two kinds:

  • a single-instance request is a request which should be routed to exactly one instance of a microservice.
  • a map/reduce request is a request which should be mapped to multiple instances of a microservice. For request/reply endpoints (as one-way endpoints provide no reply), the reply is reduced from the replies of the multiple instances.

With both kinds, the routing logic includes load balancing and/or data partitioning.

  • load balancing routes a request to a machine according to a strategy, which can be driven by resource usage metrics, or just by a round-robin or random algorithm.
  • data partitioning routes a request to the instance which serves the matching partition of the data. The logic that determines the destination partition is either application-specific, or implemented by a convention. TODO: provide link to more details on routing logic implementation.
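Both routing strategies can be sketched in a few lines (round-robin for load balancing, stable hashing for data partitioning; the actual convention may differ):

```python
import itertools
import zlib

def round_robin(instances):
    # Load balancing: cycle through identically-configured instances.
    return itertools.cycle(instances)

def partition_of(key, instances):
    # Data partitioning: a stable hash of the partition key always
    # lands on the same instance, which owns that slice of the data.
    return instances[zlib.crc32(key.encode()) % len(instances)]

instances = ["orders-1", "orders-2", "orders-3"]

balancer = round_robin(instances)
first_two = [next(balancer), next(balancer)]

owner = partition_of("customer:1001", instances)
```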

The way the Service Locator serves a request to a microservice differs depending on whether the microservice is deployed in daemon or batch mode.

  • With a daemon-mode microservice, the Service Locator acts as a reverse proxy. Depending on whether it is a single-instance or a map/reduce request, it chooses one or more instances of the microservice to serve the request, and establishes connections with the currently active replicas of the chosen instances.

  • With a batch-mode microservice, the Service Locator invokes one or more instances of the microservice, in batch mode.
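A toy model of the two request kinds, with microservice instances stubbed as callables:

```python
def single_instance_request(choose, instances, request):
    # Route the request to exactly one instance and return its reply.
    return choose(instances)(request)

def map_reduce_request(instances, request, reduce_fn):
    # Fan the request out to every instance, then reduce the
    # replies into one (request/reply endpoints only).
    return reduce_fn(instance(request) for instance in instances)

# Each stub instance serves its own partition of the data.
instances = [lambda q: 3, lambda q: 5, lambda q: 1]

total = map_reduce_request(instances, "count-open-orders", sum)
one = single_instance_request(lambda xs: xs[0], instances,
                              "count-open-orders")
```

Here the reduce step is a plain `sum`; in general the reduction would be request-specific.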

REST API and remote shell

As a microservice, the Service Locator provides a REST API endpoint with commands that cover the Service Locator capabilities, including reporting on the status of the environment, and sending commands to microservices.

The remote shell is implemented by consuming the Service Locator REST API, which is exposed through the HTTPS protocol only. In addition, IP filtering can be applied, as well as X.509 or username/password authentication.

More information on Service Locator API and the command-line interface is available here TODO: provide link to REST API and CLI.

Databases

Applications can work with zero or more database instances and schemas. It is possible to combine databases of different vendors and technologies (relational, No-SQL, event-oriented).

This is not limited to full-fledged database servers like MS SQL Server or MongoDB; it may also include lightweight in-process engines like LiteDB and SQLite. Even a flat set of disk files can be adapted to serve as a database.

By default, application code is not exposed to specifics of any concrete database technology. Instead, application code works with an application framework that covers data retrieval and persistence (e.g., DDD framework).

As discussed in [[Doing one thing well|architecture-doing-one-thing-well]], application frameworks that cover data retrieval and persistence have ports for adapters to concrete database engines. The adapters are packaged in the form of pluggable technology stack modules (the concept of modules is explained in Micro-service anatomy).

For many good architectural reasons, it is best that business tier microservices be the only clients of the databases. Moreover, each table (or document collection, or event stream) in a database should be accessed by exactly one microservice. This is naturally achieved, for example, when using the DDD framework.
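The ports-and-adapters arrangement can be sketched as follows (all names invented; a real adapter module would target MongoDB, SQLite, or another concrete engine):

```python
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    # The port: application code depends only on this abstraction,
    # never on a concrete database technology.
    @abstractmethod
    def save(self, customer): ...

    @abstractmethod
    def find(self, customer_id): ...

class InMemoryAdapter(CustomerRepository):
    # One pluggable adapter; swapping it for another changes the
    # storage technology without touching application code.
    def __init__(self):
        self._rows = {}

    def save(self, customer):
        self._rows[customer["id"]] = customer

    def find(self, customer_id):
        return self._rows.get(customer_id)

repo = InMemoryAdapter()
repo.save({"id": 7, "name": "ACME"})
```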

Environments and deployment

Server-side deployments are organized in environments.

An environment is a set of machines and roles. Each machine is assigned one or more roles. Different roles correspond to different requirements, which deployment containers (explained below) have for target machines, typically for installed hardware and operating system. The set of roles is arbitrary and application-specific. For example, an application can define these roles: db, web, filesvr, processor.

An environment is identified by type, and optionally, by name. Though both are arbitrary and application-specific, they usually follow these common semantics:

  • the type identifies the purpose of the environment, e.g. dev, test, uat, prod.
  • the name, when specified, distinguishes between multiple environments of the same type.

Before an application can be deployed, it must be packaged. An application package consists of a set of containers, a configuration of SLA constraints, and a version number. The exact format of the containers depends on the concrete technology stack picked for deployment. Typically those will be Docker containers.

Once an application package is created, it can be pushed to an environment. Both the Create Package and Push to Environment procedures are fully automated.

If a previously deployed version of the application exists in the target environment, it will be replaced by the version being deployed (except for the data tier). Besides the container deployment, it is possible to define Initialize, Upgrade, and Downgrade sequences of steps for each microservice, which handle versioning of resources outside the containers (the typical example is database schema migration). Depending on whether another version was deployed, and whether it was a higher or a lower version, the right sequence of steps will be executed:

  • Initialize - if there was no other version deployed
  • Upgrade - if a lower version was deployed
  • Downgrade - if a higher version was deployed
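The selection rule maps directly to a version comparison. A sketch, with the same-version case handled as a no-op (an assumption, since the text lists only the three cases above):

```python
def migration_sequence(deployed, incoming):
    # Compare the previously deployed version (None if nothing
    # was deployed) to the version being pushed.
    if deployed is None:
        return "Initialize"
    if deployed < incoming:
        return "Upgrade"
    if deployed > incoming:
        return "Downgrade"
    return "NoOp"  # same version redeployed (assumed behavior)

assert migration_sequence(None, (2, 0)) == "Initialize"
assert migration_sequence((1, 4), (2, 0)) == "Upgrade"
assert migration_sequence((2, 1), (2, 0)) == "Downgrade"
```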

TODO: provide link to more details on Deployment

Function-as-a-Service deployments

At the microservice level, every microservice deployed in batch mode is enabled for the function-as-a-service approach:

  • every microservice provides URLs for operations on its data and invocation of its business functions.
  • in batch mode, a microservice is invoked on demand, triggered by incoming requests, and exits once the processing is done.

However, in order to actually lower hosting costs on a public cloud, one has to pay per activation, in contrast to paying for always-on servers. This means that the invocation of a microservice cannot be performed by the Service Locator daemon, because a daemon requires an always-on server.

Thus, a special implementation of Service Locator is required to integrate with FaaS platforms of cloud vendors (e.g. AWS Lambda).
