Updated definition links of 15 terms #206

Merged
merged 13 commits on Aug 17, 2021
4 changes: 2 additions & 2 deletions definitions/api_gateway.md
@@ -6,13 +6,13 @@ category: technology
## API Gateway

### What it is
An API gateway is a tool that aggregates unique application APIs, making them all available in one place. It allows organizations to move key functions, such as authentication and authorization or limiting the number of requests between applications, to a centrally managed location. An API gateway functions as a common interface to (often external) API consumers.
An [API](https://github.com/cncf/glossary/blob/main/definitions/application_programming_interface.md) gateway is a tool that aggregates unique application APIs, making them all available in one place. It allows organizations to move key functions, such as authentication and authorization or limiting the number of requests between applications, to a centrally managed location. An API gateway functions as a common interface to (often external) API consumers.

### Problem it addresses
If you’re making APIs available to external consumers, you'll want one entry point to manage and control all access. Additionally, if you need to apply functionality on those interactions, an API gateway allows you to uniformly apply it to all traffic without requiring any app code changes.

### How it helps
Providing one single access point for various APIs in an application, API gateways make it easier for organizations to apply cross-cutting business or security logic in a central location. They also allow application consumers to go to a single address for all their needs. An API gateway can simplify operational concerns like security and observability by providing a single access point for requests to all web services in a system. As all requests flow through the API gateway, it presents a single place to add functionality like metrics-gathering, rate-limiting, and authorization.
Providing one single access point for various APIs in an application, API gateways make it easier for organizations to apply cross-cutting business or security logic in a central location. They also allow application consumers to go to a single address for all their needs. An API gateway can simplify operational concerns like security and [observability](https://github.com/cncf/glossary/blob/main/definitions/observability.md) by providing a single access point for requests to all web services in a system. As all requests flow through the API gateway, it presents a single place to add functionality like metrics-gathering, rate-limiting, and authorization.
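
To make "single access point for cross-cutting logic" concrete, here is a minimal gateway sketch in Go. It is not modeled on any particular gateway product; the backend address, the listening port, and the bare `Authorization` header check are illustrative placeholders.

```go
// Minimal API-gateway sketch: one entry point that applies cross-cutting
// logic (metrics, authorization) before proxying requests to a backend API.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

var requestCount uint64 // naive metrics counter shared by all requests

func main() {
	// Hypothetical upstream service; a real gateway would route to many APIs.
	backend, _ := url.Parse("http://localhost:9000")
	proxy := httputil.NewSingleHostReverseProxy(backend)

	gateway := func(w http.ResponseWriter, r *http.Request) {
		atomic.AddUint64(&requestCount, 1) // metrics-gathering in one place
		if r.Header.Get("Authorization") == "" { // centralized authorization
			http.Error(w, "missing credentials", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r) // forward to the upstream API
	}

	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(gateway)))
}
```

Because every request passes through this one handler, rate limiting or request logging could be added here once rather than in each backing service.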



4 changes: 2 additions & 2 deletions definitions/application_programming_interface.md
@@ -9,8 +9,8 @@ category: technology
An API is a way for computer programs to interact with each other. Just as humans interact with a website via a web page, an API allows computer programs to interact with each other. Unlike human interactions, APIs have limitations on what can and cannot be asked of them. The limitation on interaction helps to create stable and functional communication between programs.

### Problem it Addresses
As applications become more complex, small code changes can have drastic effects on other functionality. Applications need to take a modular approach to their functionality if they are to grow and maintain stability simultaneously. Without APIs, there is no framework for interaction between applications. Without a shared framework, it is challenging for applications to scale and integrate.
As applications become more complex, small code changes can have drastic effects on other functionality. Applications need to take a modular approach to their functionality if they are to grow and maintain stability simultaneously. Without APIs, there is no framework for interaction between applications. Without a shared framework, it is challenging for applications to [scale](https://github.com/cncf/glossary/blob/main/definitions/scalability.md) and integrate.

### How it helps
APIs allow computer programs or applications to interact and share information in a defined and understandable manner. They are the building blocks for modern applications and they provide developers with a way to integrate applications together. Whenever you hear about microservices working together, you can infer that they interact via an API.
APIs allow computer programs or applications to interact and share information in a defined and understandable manner. They are the building blocks for modern applications and they provide developers with a way to integrate applications together. Whenever you hear about [microservices](https://github.com/cncf/glossary/blob/main/definitions/microservices.md) working together, you can infer that they interact via an API.
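
As an illustration only, the sketch below exposes a tiny HTTP API in Go; the `/v1/status` path, the port, and the field names are invented for this example. The point is that the route, method, and JSON shape form the contract other programs code against.

```go
// A tiny HTTP API: the URL, method, and JSON field names are the
// stable contract that consuming programs depend on.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Status is the response shape; renaming these fields would break consumers.
type Status struct {
	Service string `json:"service"`
	Healthy bool   `json:"healthy"`
}

func main() {
	http.HandleFunc("/v1/status", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(Status{Service: "orders", Healthy: true})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A client written in any language can call `GET /v1/status` and rely on those two fields without knowing anything about the server's internals.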

4 changes: 2 additions & 2 deletions definitions/auto_scaling.md
@@ -5,10 +5,10 @@ category: property
---
## Autoscaling

Autoscaling is the ability of a system to scale automatically, typically in terms of computing resources. With an autoscaling system, resources are automatically added when needed and can scale to meet fluctuating user demands. The autoscaling process varies and is configurable to scale based on different metrics, such as memory or process time. Managed cloud services are typically associated with autoscaling functionality as there are more options and implementations available than with most on-premise deployments.
Autoscaling is the ability of a system to [scale](https://github.com/cncf/glossary/blob/main/definitions/scalability.md) automatically, typically in terms of computing resources. With an autoscaling system, resources are automatically added when needed and can scale to meet fluctuating user demands. The autoscaling process varies and is configurable to scale based on different metrics, such as memory or process time. Managed cloud services are typically associated with autoscaling functionality as there are more options and implementations available than with most on-premise deployments.

Previously, infrastructure and applications were architected to consider peak system usage. This architecture meant that more resources were underutilized and inelastic to changing consumer demand. The inelasticity meant higher costs to the business and lost business from outages due to overdemand.

By leveraging the cloud, virtualizing, and containerizing applications and their dependencies, organizations can build applications that scale according to user demands. They can monitor application demand and automatically scale them, providing an optimal user experience. Take the increase in viewership Netflix experiences every Friday evening. Autoscaling out means dynamically adding more resources: for example, increasing the number of servers allowing for more video streaming and scaling back once consumption has normalized.
By leveraging the cloud, [virtualizing](https://github.com/cncf/glossary/blob/main/definitions/virtualization.md), and [containerizing](https://github.com/cncf/glossary/blob/main/definitions/containerization.md) applications and their dependencies, organizations can build applications that scale according to user demands. They can monitor application demand and automatically scale them, providing an optimal user experience. Take the increase in viewership Netflix experiences every Friday evening. Autoscaling out means dynamically adding more resources: for example, increasing the number of servers allowing for more video streaming and scaling back once consumption has normalized.
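
A rough sketch of the control loop behind such a system is shown below in Go. The `getCPUUtilization` and `setReplicas` functions are hypothetical stand-ins for a real metrics source and orchestrator API, and the proportional rule is just one common way to compute the desired size (it resembles the formula Kubernetes documents for its Horizontal Pod Autoscaler).

```go
// Naive autoscaler loop: periodically compare an observed metric to a
// target and resize the replica pool proportionally, within bounds.
package autoscaler

import (
	"math"
	"time"
)

// Scale runs indefinitely, reconciling the replica count every 30 seconds.
// getCPUUtilization and setReplicas are hypothetical hooks into a real system.
func Scale(target float64, min, max, current int,
	getCPUUtilization func() float64, setReplicas func(int)) {
	for range time.Tick(30 * time.Second) {
		observed := getCPUUtilization() // e.g. average CPU across current replicas
		// Proportional rule: desired = ceil(current * observed / target).
		desired := int(math.Ceil(float64(current) * observed / target))
		if desired < min {
			desired = min
		}
		if desired > max {
			desired = max
		}
		if desired != current {
			setReplicas(desired) // scale out or back in
			current = desired
		}
	}
}
```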


4 changes: 2 additions & 2 deletions definitions/bare_metal_machine.md
@@ -7,7 +7,7 @@ category: technology

### What it is

Bare metal refers to a physical computer, more specifically a server, that has one, and only one, operating system. The distinction is important in modern computing because many, if not most, servers are virtual machines. A physical server is typically a fairly large computer with powerful hardware built-in. Installing an operating system and running applications directly on that physical hardware, without virtualization, is referred to as running on “bare metal.”
Bare metal refers to a physical computer, more specifically a server, that has one, and only one, operating system. The distinction is important in modern computing because many, if not most, servers are [virtual machines](https://github.com/cncf/glossary/blob/main/definitions/virtual_machine.md). A physical server is typically a fairly large computer with powerful hardware built-in. Installing an operating system and running applications directly on that physical hardware, without [virtualization](https://github.com/cncf/glossary/blob/main/definitions/virtualization.md), is referred to as running on “bare metal.”

### The problem it addresses

Expand All @@ -17,4 +17,4 @@ Pairing one operating system with one physical computer is the original pattern

By dedicating all compute resources of a computer to a single operating system, you potentially provide the best possible performance to the operating system. If you need to run a workload that must have extremely fast access to hardware resources, bare metal may be the right solution.

In the context of cloud native apps, we generally think of performance in terms of scaling to a large number of concurrent events, which can be handled by horizontal scaling (adding more machines to your resource pool). But some workloads may require vertical scaling (adding more power to an existing physical machine) and/or an extremely fast physical hardware response in which case bare metal is better suited. Bare metal also allows you to tune the physical hardware and possibly even hardware drivers to help accomplish your task.
In the context of [cloud native apps](https://github.com/cncf/glossary/blob/main/definitions/cloud_native_apps.md), we generally think of performance in terms of [scaling](https://github.com/cncf/glossary/blob/main/definitions/scalability.md) to a large number of concurrent events, which can be handled by [horizontal scaling](https://github.com/cncf/glossary/blob/main/definitions/horizontal_scaling.md) (adding more machines to your resource pool). But some workloads may require vertical scaling (adding more power to an existing physical machine) and/or an extremely fast physical hardware response in which case bare metal is better suited. Bare metal also allows you to tune the physical hardware and possibly even hardware drivers to help accomplish your task.
2 changes: 1 addition & 1 deletion definitions/cloud_computing.md
@@ -12,6 +12,6 @@ Cloud computing is a model that offers compute resources like CPU, network, and
Organizations traditionally faced two main problems when attempting to expand their use of computing power. They either had to acquire, support, design, and pay for facilities to host their physical servers and network, or expand and maintain those facilities. Cloud computing allows organizations to outsource some portion of their computing needs to another organization.

### How it Helps
Cloud providers offer organizations the ability to rent compute resources on-demand and pay for usage. This allows for two major innovations: organizations can try things without wasting time planning and spending CAPEX on new physical infrastructure, and they can scale as needed and on-demand. Cloud computing allows organizations to adopt as much or as little infrastructure as they need.
Cloud providers offer organizations the ability to rent compute resources on-demand and pay for usage. This allows for two major innovations: organizations can try things without wasting time planning and spending CAPEX on new physical infrastructure, and they can [scale](https://github.com/cncf/glossary/blob/main/definitions/scalability.md) as needed and on-demand. Cloud computing allows organizations to adopt as much or as little infrastructure as they need.


6 changes: 3 additions & 3 deletions definitions/cloud_native_apps.md
@@ -6,12 +6,12 @@ category: concept
## Cloud Native Apps

### What it is
Cloud native applications are specifically designed to take advantage of innovations in cloud computing. These applications integrate easily with their respective cloud architectures, taking advantage of the cloud’s resources and scaling capabilities. It also refers to applications that take advantage of innovations in infrastructure driven by cloud computing. Cloud native applications today include apps that run in a cloud provider’s datacenter and on cloud native platforms on-premise.
Cloud native applications are specifically designed to take advantage of innovations in [cloud computing](https://github.com/cncf/glossary/blob/main/definitions/cloud_computing.md). These applications integrate easily with their respective cloud architectures, taking advantage of the cloud’s resources and [scaling](https://github.com/cncf/glossary/blob/main/definitions/scalability.md) capabilities. It also refers to applications that take advantage of innovations in infrastructure driven by cloud computing. Cloud native applications today include apps that run in a cloud provider’s datacenter and on cloud native platforms on-premise.

### Problem it Addresses
Traditionally, on-premise environments provided compute resources in a fairly bespoke way. Each datacenter had services that tightly coupled applications to specific environments, often relying heavily on manual provisioning for infrastructure, like virtual machines and services. This, in turn, constrained developers and their applications to that specific datacenter. Applications that weren't designed for the cloud couldn't take advantage of a cloud environment’s resiliency and scaling capabilities. For example, apps that require manual intervention to start correctly cannot scale automatically, nor can they be automatically restarted in the event of a failure.
Traditionally, on-premise environments provided compute resources in a fairly bespoke way. Each datacenter had services that [tightly coupled](https://github.com/cncf/glossary/blob/main/definitions/tightly_coupled_architectures.md) applications to specific environments, often relying heavily on manual provisioning for infrastructure, like [virtual machines](https://github.com/cncf/glossary/blob/main/definitions/virtual_machine.md) and services. This, in turn, constrained developers and their applications to that specific datacenter. Applications that weren't designed for the cloud couldn't take advantage of a cloud environment’s resiliency and scaling capabilities. For example, apps that require manual intervention to start correctly cannot scale automatically, nor can they be automatically restarted in the event of a failure.

### How it Helps
While there is no “one size fits all” path to cloud native applications, they do have some commonalities. Cloud native apps are resilient, manageable, and aided by the suite of cloud services that accompany them. The various cloud services enable a high degree of observability, enabling users to detect and address issues before they escalate. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
While there is no “one size fits all” path to cloud native applications, they do have some commonalities. Cloud native apps are resilient, manageable, and aided by the suite of cloud services that accompany them. The various cloud services enable a high degree of [observability](https://github.com/cncf/glossary/blob/main/definitions/observability.md), enabling users to detect and address issues before they escalate. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.


2 changes: 1 addition & 1 deletion definitions/cloud_native_security.md
@@ -7,7 +7,7 @@ category: concept

### What it is

Cloud native security is an approach that builds security into cloud native applications. It ensures that security is part of the entire application lifecycle from development to production. Cloud native security seeks to ensure the same standards as traditional security models while adapting to the particulars of cloud native environments, namely rapid code changes and highly ephemeral infrastructure. Cloud native security is highly related to the practice called [DevSecOps](https://github.com/cncf/glossary/blob/main/definitions/devsecops.md).
Cloud native security is an approach that builds security into [cloud native applications](https://github.com/cncf/glossary/blob/main/definitions/cloud_native_apps.md). It ensures that security is part of the entire application lifecycle from development to production. Cloud native security seeks to ensure the same standards as traditional security models while adapting to the particulars of cloud native environments, namely rapid code changes and highly ephemeral infrastructure. Cloud native security is highly related to the practice called [DevSecOps](https://github.com/cncf/glossary/blob/main/definitions/devsecops.md).

### The problem it addresses

2 changes: 1 addition & 1 deletion definitions/cluster.md
@@ -7,7 +7,7 @@ category: Concept

### What it is

A cluster is a group of computers or applications that work together towards a common goal. In the context of cloud native computing, the term is most often applied to Kubernetes. A Kubernetes cluster is a set of services (or workloads) that run in their own containers, usually on different machines. The collection of all these containerized services, connected over a network, represents a cluster.
A cluster is a group of computers or applications that work together towards a common goal. In the context of cloud native computing, the term is most often applied to Kubernetes. A Kubernetes cluster is a set of services (or workloads) that run in their own containers, usually on different machines. The collection of all these [containerized](https://github.com/cncf/glossary/blob/main/definitions/containerization.md) services, connected over a network, represents a cluster.

### The problem it addresses

4 changes: 2 additions & 2 deletions definitions/continuous_delivery.md
@@ -9,9 +9,9 @@ category: concept
Continuous delivery, often abbreviated as CD, is a set of practices in which code changes are automatically deployed into an acceptance environment (or, in the case of continuous deployment, into production). CD crucially includes procedures to ensure that software is adequately tested before deployment and provides a way to roll back changes if deemed necessary. Continuous integration (CI) is the first step towards continuous delivery (i.e., changes have to merge cleanly before being tested and deployed).

### Problem it Addresses
Deploying reliable updates becomes a problem at scale. Ideally, we'd deploy more frequently to deliver better value to end-users. However, doing it manually translates into high transaction costs for every change. Historically, to avoid these costs, organizations have released less frequently, deploying more changes at once and increasing the risk that something goes wrong.
Deploying [reliable](https://github.com/cncf/glossary/blob/main/definitions/reliability.md) updates becomes a problem at scale. Ideally, we'd deploy more frequently to deliver better value to end-users. However, doing it manually translates into high transaction costs for every change. Historically, to avoid these costs, organizations have released less frequently, deploying more changes at once and increasing the risk that something goes wrong.

### How it Helps
CD strategies create a fully automated path to production that tests and deploys the software using various deployment strategies such as canary or blue-green releases. This allows developers to deploy code frequently, giving them peace of mind that the new revision has been tested. Typically, trunk-based development is used in CD strategies as opposed to feature branching or pull requests.
CD strategies create a fully automated path to production that tests and deploys the software using various deployment strategies such as [canary](https://github.com/cncf/glossary/blob/main/definitions/canary_deployment.md) or [blue-green](https://github.com/cncf/glossary/blob/main/definitions/blue_green_deployment.md) releases. This allows developers to deploy code frequently, giving them peace of mind that the new revision has been tested. Typically, trunk-based development is used in CD strategies as opposed to feature branching or pull requests.
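
To illustrate the canary idea only (this is not how any particular CD tool implements it), the Go sketch below splits incoming traffic between a stable release and a new revision; the backend URLs and the 5% weight are arbitrary placeholders.

```go
// Canary routing sketch: send a small share of traffic to the new
// revision and the rest to the stable one.
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	stable, _ := url.Parse("http://localhost:9000") // current release
	canary, _ := url.Parse("http://localhost:9001") // new revision under test
	stableProxy := httputil.NewSingleHostReverseProxy(stable)
	canaryProxy := httputil.NewSingleHostReverseProxy(canary)
	canaryWeight := 0.05 // 5% of requests exercise the new code

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if rand.Float64() < canaryWeight {
			canaryProxy.ServeHTTP(w, r)
			return
		}
		stableProxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A real rollout would also watch error rates and latency from the canary and shift the weight up, or roll back, based on what it sees.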

