diff --git a/docs/aws/concepts/autoscaling.md b/docs/aws/concepts/autoscaling.md index c0bd074..9ab1b39 100644 --- a/docs/aws/concepts/autoscaling.md +++ b/docs/aws/concepts/autoscaling.md @@ -1,37 +1,43 @@ -title: Autoscaling +title: Auto Scaling -The best way to optimize costs in the cloud is to not spend it in the first place. Enter Autoscaling. Autoscaling leverages the elasticity of the cloud to dynamically provision and remove capacity based on demand. That means that as demands decrease autoscaling will automatically scale down resources and allow you to save on costs accordingly. +The best way to optimize costs in the cloud is not to spend money in the first place. Enter Auto Scaling. Auto Scaling leverages the elasticity of the cloud to dynamically provision and remove capacity based on demand. That means that as demand decreases, Auto Scaling will automatically scale down resources and enable you to save on costs accordingly. -Autoscaling applies to a variety of different services, some of which are described in more detail below. If you're looking for EC2 autoscaling concepts, please see the AWS EC2 service page for the [autoscaling section](/aws/services/ec2-pricing/#autoscaling). +Auto Scaling applies to a variety of different services, some of which are described in more detail below. If you're looking for EC2 Auto Scaling concepts, please see the AWS EC2 service page for the [Auto Scaling section](/aws/services/ec2-pricing/#auto-scaling). -## Application Autoscaling +## Application Auto Scaling -For other resources in AWS, [Application Autoscaling](https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html) provides the ability to adjust provisioned resources. +For other resources in AWS, [Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html) provides the ability to adjust provisioned resources.
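Putting one of these resources under Application Auto Scaling control starts by registering it as a scalable target. A minimal sketch of the request parameters for a DynamoDB table's read capacity (the table name and capacity bounds are hypothetical, and the parameters are only built locally here rather than sent to AWS):

```python
# Parameters for Application Auto Scaling's RegisterScalableTarget action.
# "MyTable" and the capacity bounds are hypothetical values for illustration.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/MyTable",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,
    "MaxCapacity": 100,
}

# With credentials configured, these could be passed to boto3, e.g.:
#   boto3.client("application-autoscaling").register_scalable_target(**scalable_target)
print(scalable_target["ResourceId"])
```

A scaling policy (for example, target tracking on consumed read capacity) is then attached to the registered target.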
-Application Autoscaling supports the following services: +Application Auto Scaling supports the following services: * AppStream 2.0 fleets * Aurora replicas * Amazon Comprehend document classification and entity recognizer endpoints * [DynamoDB](/aws/services/dynamodb-pricing/) tables and global secondary indexes * [Amazon Elastic Container Service (ECS)](/aws/services/ecs-and-fargate-pricing/) services +* ElastiCache for Redis clusters (replication groups) * Amazon EMR clusters * Amazon Keyspaces (for Apache Cassandra) tables * [Lambda](/aws/services/lambda-pricing/) function provisioned concurrency * Amazon Managed Streaming for Apache Kafka (MSK) broker storage +* Amazon Neptune clusters * SageMaker endpoint variants +* SageMaker inference components +* SageMaker Serverless provisioned concurrency +* Spot Fleet requests +* Custom resources that are provided by your own applications or services. -## Autoscaling Strategies +## Auto Scaling Strategies -There are various methods by which autoscaling can occur. These are listed below in no particular order: +There are various methods by which Auto Scaling can occur. These are listed below in no particular order: -* **Target Scaling** adds or removes capacity to keep a metric as near a specific value as possible. For example, target average CPU utilization of 50% across a set of ECS Tasks. If CPU utilization gets too high, add nodes. If CPU utilization gets too low, remove nodes. -* **Step Scaling** will adjust capacity up and down by dynamic amounts, depending on the magnitude of a metric. +* **Target Scaling** adds or removes capacity to keep a metric as close to a specific value as possible. For example, a target average CPU utilization of 50% across a set of ECS Tasks. If CPU utilization gets too high, nodes are added. If CPU utilization gets too low, nodes are removed. +* **Step Scaling** will adjust capacity up and down by dynamic amounts depending on the magnitude of a metric. 
* **Scheduled Scaling** will adjust minimum and maximum capacity settings on a schedule. -* **Simple Scaling** will add or remove EC2 instances from an Auto Scaling Group when an alarm is in alert state. +* **Simple Scaling** will add or remove EC2 instances from an Auto Scaling Group when a CloudWatch alarm is in the `ALARM` state. * **Predictive Scaling** can leverage historical metrics to preemptively scale EC2 workloads based on daily or weekly trends. -* **Manual Scaling** is possible with EC2 instances if teams need to intervene with an autoscaling group. This allows you to manually adjust the autoscaling target without any automation. +* **Manual Scaling** is possible with EC2 instances if teams need to intervene with an Auto Scaling Group. This allows you to manually adjust the Auto Scaling target without any automation. ## Other Considerations @@ -39,7 +45,7 @@ Adding capacity is generally an easy process. For compute, it's just a matter of Reducing capacity can be tricky depending on the application. Web applications generally have their requests clean up to prepare for termination within 30 seconds. Load balancers are often used to drain requests off instances and then terminate the instances "cleanly". Queue/batch workers, on the other hand, need to be done with their work, or stash their work somewhere before the node can be terminated. Otherwise, requests and/or data can be lost or incomplete. -DynamoDB Provisioned Capacity has [restrictions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) regarding how frequently it can be reduced (4 times per day at any time, plus any time when there hasn't been a reduction in the last hour). There are no restrictions regarding increasing capacity. Tables and Secondary Indexes are managed/scaled independently.
+DynamoDB Provisioned Capacity has [restrictions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html) regarding how frequently it can be reduced (four times per day at any time, plus any time when there hasn't been a reduction in the last hour). There are no restrictions regarding increasing capacity. Tables and Secondary Indexes are managed/scaled independently. Scaling cooldown can be the trickiest part of the process. It's generally best to aggressively scale up/out and conservatively scale down/in. A long cooldown process might be necessary when scaling out an application with a long startup process, but it can also block future scale out events, resulting in application instability. Scaling policies should be regularly evaluated and tuned. diff --git a/docs/aws/concepts/credits.md b/docs/aws/concepts/credits.md index 7fb63e2..b098c94 100644 --- a/docs/aws/concepts/credits.md +++ b/docs/aws/concepts/credits.md @@ -1,18 +1,18 @@ title: Credits on AWS -Most public cloud infrastructure and service providers have a concept of credits. Credits are incentives typically given to customers opening up new accounts to attract them to build upon their platform. They allow you you to build, learn and get integrated into providers without have to spend money right from the beginning. +Most public cloud infrastructure and service providers have a concept of credits. Credits are incentives typically given to customers opening up new accounts to attract them to build upon their platform. They allow you to build, learn, and integrate into providers without having to spend money right away. -Credit allotments usually are around $5,000 or $10,000 depending on the provider but can be as high as $100,000. +Credit allotments usually are around $5,000 or $10,000 depending on the provider, but can be as high as $100,000. 
## Startup Credits Across Clouds -One strategy that is often used for especially cost-conscious startups for public cloud infrastructure providers who have the ability to easily move workloads is to receive credits from multiple providers and run workloads across different providers until credits expire across all of them. So for example, a startup may get $10,000 of AWS credits and $10,000 of GCP credits. A subset of customers will run their application on AWS until their $10,000 is completely utilized then migrate to GCP to use up $10,000 worth of credits there to get $20,000 in total free usage. +One strategy often used by especially cost-conscious startups that can easily move their workloads is to receive credits from multiple public cloud infrastructure providers and run workloads on each provider until the credits expire across all of them. For example, a startup may get $10,000 of AWS credits and $10,000 of GCP credits. A subset of customers will run their application on AWS until their $10,000 is completely utilized, then migrate to GCP to use up $10,000 worth of credits there, for $20,000 in total free usage. -Typically this is advised against because the operational overhead of running workloads across multiple clouds typically isn't worth it. The use-cases that this tends to work for is for very transferable or ephemeral workloads such as training models on GPUs or running containers with no associated state. +Typically, this is advised against because the operational overhead of running workloads across multiple clouds usually isn't worth it. The use cases that this tends to work for are very transferable or ephemeral workloads such as training models on GPUs or running containers with no associated state. ## Credit Expiration -It's important to note that credits typically have a lifecycle tied to them that causes them to expire. Oftentimes this catches customers by surprise.
Usually credits are granted on a 1 year basis which means if you have remaining credits that aren't utilized by the expiration term, they're automatically removed from your account. It's important to keep track of your credit expiration dates as to not be caught off-guard. +It's important to note that credits typically have a lifecycle tied to them that causes them to expire. Oftentimes, this catches customers by surprise. Usually, credits are granted on a one-year basis, which means if you have remaining credits that aren't utilized by the expiration term, they're automatically removed from your account. It's important to keep track of your credit expiration dates, so you are not caught off-guard. !!! Contribute diff --git a/docs/aws/concepts/io-operations.md b/docs/aws/concepts/io-operations.md index 15b9b6f..35ce2b2 100644 --- a/docs/aws/concepts/io-operations.md +++ b/docs/aws/concepts/io-operations.md @@ -1,24 +1,24 @@ title: I/O Operations (IOPS) on AWS | Cloud Cost Handbook -## I/O Operations +## Input/Output Operations -I/O Operations (IOPS) are a relatively low level unit in AWS for measuring disk performance. The maximum size of an IOP is 256 KiB for SSD volumes and 1 GiB for HDD volumes. 1 GiB of storage is worth 3 IOPS so a 1,000 GiB EBS Volume has 3,000 IOPS available. When using these volume types you are charged for the amount of provisioned iops even if you don't fully utilize them. +Input/output operations per second (IOPS) are a relatively low-level unit in AWS for measuring disk performance. The maximum size of a single I/O operation is 256 KiB for SSD volumes and 1,024 KiB for HDD volumes. For `gp2` volumes, 1 GiB of storage is worth 3 IOPS, so a 1,000 GiB EBS Volume has 3,000 IOPS available. When using these volume types, you are charged for the amount of provisioned IOPS even if you don't fully utilize them.
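The 3 IOPS per GiB figure above is the `gp2` baseline, and it is bounded: the baseline never drops below 100 IOPS and is capped at 16,000 IOPS. The arithmetic can be sketched as:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
    floored at 100 IOPS and capped at 16,000 IOPS."""
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(1_000))   # the 1,000 GiB example above: 3000
print(gp2_baseline_iops(10))      # small volumes still get the 100 IOPS floor: 100
print(gp2_baseline_iops(10_000))  # large volumes hit the cap: 16000
```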
As indicated on the [EBS](/aws/services/ebs-pricing) page: -> Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time. +> Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9% of the time. -The ["performance consistency"](https://blog.maskalik.com/blog/2020/05/31/aws-rds-you-may-not-need-provisioned-iops/) between a Provisioned IOPS volume and a general purpose (`gp2`, `gp3`), throughput optimized (`st1`), or cold HDD (`sc1`) is going to be better for both random and sequential disk access. Note that for operations with "large and sequential" accesses, provisioned iops are likely less efficient than an `st1` volume. +A Provisioned IOPS volume offers better [performance consistency](https://blog.maskalik.com/blog/2020/05/31/aws-rds-you-may-not-need-provisioned-iops/) than a general purpose (`gp2`, `gp3`), throughput optimized (`st1`), or cold HDD (`sc1`) volume for both random and sequential disk access. Note that for operations with large and sequential accesses, provisioned IOPS are likely less efficient than an `st1` volume. ## IOPS Considerations -- **Volume Type** There are multiple volume types with different impacts on IOPS. -- **I/O Demand** Most likely the workload has a bursty demand pattern, where consistently high throughput is not as important as meeting spikes of demand. As the workload deviates from this, provisioned IOPS become more important. -- **Throughput Limits** The instance will have an upper limit of throughput it can support. For example, an [i2.xlarge](https://instances.vantage.sh/aws/ec2/i2.xlarge.html) can support up to 62,500 IOPS. If the number of Provisioned IOPS is even higher than this limit, it is a waste because the instance cannot use them all up.
+- **Volume Type:** There are multiple volume types with different impacts on IOPS. +- **I/O Demand:** Most likely the workload has a bursty demand pattern, where consistently high throughput is not as important as meeting spikes of demand. As the workload deviates from this, provisioned IOPS become more important. +- **Throughput Limits:** The instance will have an upper limit of throughput it can support. For example, an [i2.xlarge](https://instances.vantage.sh/aws/ec2/i2.xlarge.html) can support up to 62,500 IOPS. Provisioning more IOPS than this limit is wasteful, because the instance cannot consume them. ## Optimal Provisioned IOPS -The most common cost waste with IOPS is having too many of them. It is commonly believed that the key to [RDS](/aws/services/rds-pricing/) is to have some amount of Provisioned IOPS. Happily, we do not have to guess. +The most common cost waste with IOPS is having too many of them. It is commonly believed that the key to [RDS](/aws/services/rds-pricing/) is to have some amount of Provisioned IOPS. Luckily, we don't have to guess. AWS [suggests](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html) inspecting the `VolumeQueueLength` metric for [CloudWatch](/aws/services/cloudwatch-pricing/). This metric is reported as IOPS, which means the formula is simple: if `VolumeQueueLength` is greater than the number of provisioned IOPS and latency is an issue, then you should consider increasing the number of provisioned IOPS. diff --git a/docs/aws/concepts/regions.md b/docs/aws/concepts/regions.md index acf6f17..09e8b57 100644 --- a/docs/aws/concepts/regions.md +++ b/docs/aws/concepts/regions.md @@ -1,6 +1,6 @@ title: Regions Pricing -Pricing for public cloud infrastructure providers typically varies by geographic region. Depending on the nature of your applications, you may not have a choice but to be located as close to your users as possible for latency purposes.
That being said, it is worth looking at pricing on a per region basis as there can be significant discounts on a per-region basis. +Pricing for public cloud infrastructure providers typically varies by geographic region. Depending on the nature of your applications, you may not have a choice but to be located as close to your users as possible for latency purposes. That being said, it is worth comparing pricing across regions, as some regions offer significant discounts. The [Instances](https://instances.vantage.sh/) pricing tool has prices for popular AWS services in all regions. To see a list of AWS regions, consult this reference [list of AWS regions](/aws/reference/aws-regions). diff --git a/docs/aws/concepts/reserved-instances.md b/docs/aws/concepts/reserved-instances.md index cb7b1a0..4bbcf1b 100644 --- a/docs/aws/concepts/reserved-instances.md +++ b/docs/aws/concepts/reserved-instances.md @@ -1,14 +1,14 @@ title: Reserved Instances -Reserved Instances (oftentimes referred to as their abbreviation of RIs) are one of the most popular and high-impact cost reduction methods you can leverage for cutting your bill. Reserved Instances give you the ability to pay upfront for certain AWS services to receive a discount. As a result, if you are able to profile usage across your AWS account and know that you'll hit certain usage levels, Reserved Instances can typically save you money. +Reserved Instances (RIs) are one of the most popular and high-impact cost-reduction methods you can leverage for cutting your bill. Reserved Instances give you the ability to pay upfront for certain AWS services to receive a discount. As a result, if you are able to profile usage across your AWS account and know that you'll hit certain usage levels, Reserved Instances can typically save you money.
-Reserved Instances are available to a variety of AWS services such as [EC2](../services/ec2-pricing.md), [ElastiCache](../services/elasticache-pricing.md) and [RDS](../services/rds-pricing.md). AWS Billing automatically applies your Reserved Instance discounted rate when attributes of your instance usage match attributes of an active Reserved Instance. For general compute usage (EC2, Fargate, etc.), [Savings Plans](savings-plans.md) are _always_ preferred to Reserved Instances as they give you the same discount but are more flexible across all compute. +Reserved Instances are available to a variety of AWS services such as [EC2](../services/ec2-pricing.md), [ElastiCache](../services/elasticache-pricing.md), and [RDS](../services/rds-pricing.md). AWS Billing automatically applies your Reserved Instance discounted rate when attributes of your instance usage match attributes of an active Reserved Instance. For general compute usage (EC2, Fargate, etc.), [Savings Plans](savings-plans.md) are _always_ preferred to Reserved Instances, since they give you the same discount but are more flexible across all compute. -It's important to note that Reserved Instances aren't actually separate instances. They are merely financial instruments that you buy and are automatically applied to your account. As a result, you can continue to spin up and use on-demand instances and purchase Reserved Instances concurrently. As on-demand instances match your Reserved Instance attributes, you'll automatically receive discounts. +It's important to note that Reserved Instances aren't actually separate instances. They are merely financial instruments that you buy and are automatically applied to your account. As a result, you can continue to spin up and use On-Demand Instances and purchase Reserved Instances concurrently. As On-Demand Instances match your Reserved Instance attributes, you'll automatically receive discounts. 
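Whether an RI pays off depends on how much of the term the matched instances actually run. A back-of-the-envelope sketch, using hypothetical rates (real RI pricing varies by instance type, region, term, and payment option):

```python
# Hypothetical rates for illustration only.
on_demand_hourly = 0.10      # $/hour at on-demand rates
ri_effective_hourly = 0.062  # effective $/hour over a 1-year RI term
hours_per_year = 8760

savings_pct = 100 * (1 - ri_effective_hourly / on_demand_hourly)
annual_savings = (on_demand_hourly - ri_effective_hourly) * hours_per_year

# The RI is billed whether or not a matching instance runs, so it only
# beats on-demand above a break-even utilization for the term:
break_even_utilization = ri_effective_hourly / on_demand_hourly

print(f"savings at full utilization: {savings_pct:.0f}%")
print(f"annual savings if fully utilized: ${annual_savings:.2f}")
print(f"break-even utilization: {break_even_utilization:.0%}")
```

With these example rates, an instance that runs less than about 62% of the year would have been cheaper left on demand.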
## Reserved Instance Term -AWS gives different discounts depending on the term that you pay upfront for. You can yield greater savings for paying upfront for longer terms but lose flexibility as a result. We find that smaller customers just getting started in their infrastructure journey tend to prefer 1-Year Reserved Instances whereas more mature organizations will leverage 3-Year Reserved Instances for the greatest savings as they can more accurately model and predict their usage. +AWS gives different discounts depending on the term that you pay upfront for. You can yield greater savings by paying upfront for longer terms, but you lose flexibility as a result. We find that smaller customers just getting started in their infrastructure journey tend to prefer 1-Year Reserved Instances, whereas more mature organizations will leverage 3-Year Reserved Instances for the greatest savings as they can more accurately model and predict their usage. !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file diff --git a/docs/aws/concepts/rightsizing.md b/docs/aws/concepts/rightsizing.md index 137d069..bac5bc6 100644 --- a/docs/aws/concepts/rightsizing.md +++ b/docs/aws/concepts/rightsizing.md @@ -1,14 +1,14 @@ title: Rightsizing -Rightsizing is a term used for identifying and augmenting certain resources for greater utilization and potential cost savings. Typically rightsizing occurs when you're over-provisioned and can apply to a variety of services with some examples below: +Rightsizing is a term used for identifying and augmenting certain resources for greater utilization and potential cost savings.
Typically, rightsizing occurs when you're over-provisioned, and can apply to a variety of services, with some examples below: -* **EC2 Instances**: Oftentimes customers will choose one EC2 Instance that is over-allocated in terms the amount of vCPU and GB of RAM it is allocated. As a result, customers may be paying more on a per EC2-Instance basis. Customers who are able to identify opportunities for rightsizing EC2 Instances can typically save significantly, especially if the EC2 Instance type chosen represents a large pool of instances. -* **EBS Volumes**: EBS Volumes are typically a large cost driver for many organizations and are often heavily under-utilized. EBS charges you for the amount of storage you have allocated, not what you use, so it's important to keep an eye on Volume utilization to rightsize and save accordingly. -* **RDS Instances**: RDS Instances are similar to EC2 Instances in that they're typically overprovisioned but rarely utilized appropriately. While RDS rightsizing can result in significant cost savings, databases tend to be one of the services that makes sense to leave overprovisioned that you can grow into as downtime for a database during a rightsizing process may not ultimately be worth the organization cost. -* **Container Services**: ECS, Fargate and EKS allow you to run services of containers on a pool of underlying EC2 instances either managed by you or managed by AWS if you're using Fargate. Container Services are some of the hardest services to appropriately rightsize but can represent significant saving opportunities, especially for AWS Fargate. +* **EC2 Instances**: Oftentimes, customers will choose an EC2 Instance type with more vCPUs and GBs of RAM than the workload needs. As a result, customers may be paying more on a per EC2 Instance basis. Customers who are able to identify opportunities for rightsizing EC2 Instances can typically save significantly, especially if the EC2 Instance type chosen represents a large pool of instances. +* **EBS Volumes**: EBS Volumes are frequently a large cost driver for many organizations, and are often heavily under-utilized. EBS charges you for the amount of storage you have allocated, not what you use, so it's important to keep an eye on volume utilization to rightsize and save accordingly. +* **RDS Instances**: RDS Instances are similar to EC2 Instances in that they're often overprovisioned, but rarely utilized appropriately. While RDS rightsizing can result in significant cost savings, databases tend to be one of the services that make sense to leave overprovisioned to accommodate growth. Also, downtime for a database during a rightsizing process may ultimately not be worth the cost to your organization. +* **Container Services**: ECS, Fargate, and EKS allow you to run containerized services on a pool of underlying EC2 instances, either managed by you or managed by AWS if you're using Fargate. Container Services are some of the hardest services to appropriately rightsize but can represent significant saving opportunities, especially for AWS Fargate. -The first step in rightsizing is to have monitoring and observability in place to even know what your utilization is for these various services. Assuming you feel confident in your usage patterns and how they relate to utilization, your organization can begin to make some decision for potential area to rightsize. +The first step in rightsizing is to have monitoring and observability in place to even know what your utilization is for these various services. Assuming you feel confident in your usage patterns and how they relate to utilization, your organization can begin to make some decisions for potential areas to rightsize. !!!
Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file diff --git a/docs/aws/concepts/savings-plans.md b/docs/aws/concepts/savings-plans.md index d7a7ca5..3301c29 100644 --- a/docs/aws/concepts/savings-plans.md +++ b/docs/aws/concepts/savings-plans.md @@ -1,8 +1,8 @@ title: Savings Plans -Savings Plans are a flexible pricing model offering discounted prices compared to On-Demand pricing, in exchange for a specific usage commitment. Savings Plans are typically the highest impact, lowest effort way of realizing savings on your AWS account. They are roughly the same concept as [Reserved Instances](../reserved-instances) but offer greater flexibility as they 1) can be utilized across multiple compute services (i.e., EC2 _and_ Fargate) and 2) you aren't locked into a specific instance family. Similar to Reserved Instances, there are greater discounts for prepaying for a longer term. +Savings Plans are a flexible pricing model offering discounted prices compared to On-Demand pricing, in exchange for a specific usage commitment. Savings Plans are typically the highest-impact, lowest-effort way of realizing savings on your AWS account. They are roughly the same concept as [Reserved Instances](../reserved-instances) but offer greater flexibility as they (1) can be utilized across multiple compute services (e.g., EC2 _and_ Fargate) and (2) you aren't locked into a specific instance family. Similar to Reserved Instances, there are greater discounts for prepaying for a longer term.
Machine Learning Savings Plans (sometimes called SageMaker Savings Plans) are available for Sagemaker. Typically, customers will use Savings Plans for these services and [Reserved Instances](/aws/concepts/reserved-instances/) for other services that aren't covered such as RDS and ElastiCache. +After purchasing a Savings Plan, AWS Billing will automatically apply savings as corresponding on-demand resources match the conditions of your Savings Plans. Savings Plans are exclusive to Amazon EC2, AWS Lambda, and AWS Fargate usage. Machine Learning Savings Plans (sometimes called SageMaker Savings Plans) are available for SageMaker. Typically, customers will use Savings Plans for these services and [Reserved Instances](/aws/concepts/reserved-instances/) for other services that aren't covered, such as RDS and ElastiCache. !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). diff --git a/docs/aws/concepts/tags.md b/docs/aws/concepts/tags.md index cfe8d93..704b4fc 100644 --- a/docs/aws/concepts/tags.md +++ b/docs/aws/concepts/tags.md @@ -1,39 +1,39 @@ title: Tagging Resources -Tags are one of the most powerful (though often overlooked) tools that can assist with your ability to observe and allocate cloud costs as it relates to public cloud infrastructure providers like AWS, Azure and GCP. While different accounts can be useful for separating resources and costs across different environments (production, staging, qa, test, etc) or teams/business-units, tags are helpful for segmenting costs as it relates to your application. We encourage customers to adopt tagging strategies as early on at their organizations as possible. Similar to an effective unit testing suite, over time tags can give you confidence in understanding where your costs are coming from.
+Tags are one of the most powerful (though often overlooked) tools that can assist with your ability to observe and allocate cloud costs related to public cloud infrastructure providers like AWS, Azure, and GCP. While different accounts can be useful for separating resources and costs across different environments (production, staging, QA, test, etc.) or teams/business units, tags are helpful for segmenting costs related to your application. We encourage customers to adopt tagging strategies as early on as possible in their organizations. Similar to an effective unit testing suite, over time tags can give you confidence in understanding where your costs are coming from. -Tags on AWS consist of two different parts: a `key` and a `value`. As a basic example you can imagine an example `key` with the value of "service" and a `value` which could be "front-end", "back-end", "search" or "cache". Upon assigning tags to resources, you can get greater visibility into where your costs are coming from. Instead of seeing how your costs are trending in aggregate, you can see how each part of your application is growing assuming you've leveraged tags correctly. Additionally, tags can be part of your existing workflows and are typically very easy to accommodate in infrastructure-as-code configuration files such as CloudFormation or Terraform. +Tags in AWS consist of two different parts: a `key` and a `value`. As a basic example, you can imagine a `key` of "service" with a `value` of "front-end", "back-end", "search", or "cache". Upon assigning tags to resources, you can get greater visibility into where your costs are coming from. Instead of seeing how your costs are trending in aggregate, you can see how each part of your application is growing, assuming you've leveraged tags correctly.
Additionally, tags can be part of your existing workflows and are typically very easy to accommodate in infrastructure-as-code configuration files such as CloudFormation or Terraform. Tags, at their core, are metadata attached to cloud resources. They serve as markers, providing context and categorization. Beyond just identification, tags play a pivotal role in: -**Cost Allocation.** Understand which department, project, or application consumes resources and incurs costs. +**Cost Allocation:** Understand which department, project, or application consumes resources and incurs costs. -**Cost Optimization.** Identify underutilized resources and make informed decisions about scaling or termination. +**Cost Optimization:** Identify underutilized resources and make informed decisions about scaling or termination. -**Forecasting.** Predict future expenses by analyzing tagged resource consumption. +**Forecasting:** Predict future expenses by analyzing tagged resource consumption. -**Resource Management.** Efficiently manage, search, and filter resources based on specific criteria. +**Resource Management:** Efficiently manage, search, and filter resources based on specific criteria. -**Security and Compliance.** Ensure resources meet specific security standards or compliance requirements. +**Security and Compliance:** Ensure resources meet specific security standards or compliance requirements. -**Operational Clarity.** Quickly identify resources during troubleshooting or operational tasks. +**Operational Clarity:** Quickly identify resources during troubleshooting or operational tasks. -**Alerting.** Set event-based notifications for specific resources. +**Alerting:** Set event-based notifications for specific resources. -**Automation.** Automate lifecycle management or schedule shutdowns. +**Automation:** Automate lifecycle management or schedule shutdowns. 
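The per-service cost visibility described above boils down to grouping spend by a tag value. A toy sketch with made-up resources and costs (any real pipeline would pull these from billing data):

```python
from collections import defaultdict

# Made-up resources with a "service" cost allocation tag and a monthly cost.
resources = [
    {"id": "i-0a1", "tags": {"service": "front-end"}, "monthly_cost": 310.0},
    {"id": "i-0b2", "tags": {"service": "back-end"},  "monthly_cost": 540.0},
    {"id": "db-1",  "tags": {"service": "back-end"},  "monthly_cost": 890.0},
    {"id": "vol-9", "tags": {},                       "monthly_cost": 75.0},
]

cost_by_service = defaultdict(float)
for r in resources:
    # Untagged resources land in a bucket of their own, which is itself
    # a useful signal for tagging hygiene.
    service = r["tags"].get("service", "(untagged)")
    cost_by_service[service] += r["monthly_cost"]

for service, cost in sorted(cost_by_service.items()):
    print(f"{service}: ${cost:.2f}")
```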
## Activating Cost Allocation Tags -One of the more generally confusing experiences that customers experience on AWS is that tags are not incorporated into billing reports by default and need to be "activated". After you have assigned resources tags, here are the steps to "activate" the tags for them to be incorporated into billing data: +One of the more confusing aspects of AWS is that tags are not incorporated into billing reports by default and need to be activated. After you have assigned tags to resources, here are the steps to activate the tags so they are incorporated into billing data: -To activate your tags +To activate your tags: -- Sign in to the AWS Management Console and open the Billing and Cost Management console at [https://console.aws.amazon.com/billing/home?#/tags](https://console.aws.amazon.com/billing/home?#/tags). -- In the navigation pane, choose Cost Allocation Tags. +- Sign in to the AWS Management Console and open the [Billing and Cost Management Console](https://console.aws.amazon.com/billing/home?#/tags). +- In the navigation pane, choose `Cost Allocation Tags`. - Select the tags that you want to activate. -- Choose Activate. +- Choose `Activate`. -After you create and apply tags to your resources, it can take up to 24 hours for the tags to appear in your reports. After you select your tags for activation, it can take up to 24 hours for tags to activate as well. +After you create and apply tags to your resources, it can take up to 24 hours for the tags to appear in your reports. Then, after you select your tags for activation, it can take up to 24 hours for the tags to activate. ## Types of Tags @@ -41,13 +41,13 @@ Distinguishing between tag types can help in understanding their origin and purp While customization is key, starting with commonly used tags can provide a foundational framework: -- Environment: Differentiate between Development, Testing, and Production.
-- Owner: Pinpoint responsibility, aiding in accountability and management. -- Project: Allocate resources to specific initiatives or campaigns. -- Cost Center / Business Unit: Facilitate financial reporting and budget allocation. -- Service: Categorize resources based on the service they support or belong to. -- Customer: Especially for SaaS providers, understand resource consumption per client. -- Function: Understand the role or purpose of a resource in the ecosystem. +- **Environment:** Differentiate between Development, Testing, and Production. +- **Owner:** Pinpoint responsibility, aiding in accountability and management. +- **Project:** Allocate resources to specific initiatives or campaigns. +- **Cost Center / Business Unit:** Facilitate financial reporting and budget allocation. +- **Service:** Categorize resources based on the service they support or belong to. +- **Customer:** Especially for SaaS providers, understand resource consumption per client. +- **Function:** Understand the role or purpose of a resource in the ecosystem. ## Tagging Strategy @@ -55,15 +55,15 @@ A tagging strategy is not a one-size-fits-all solution. It requires careful cons A multi-cloud or hybrid cloud environment might require a more nuanced approach for tagging. For example, you may want to align tag values across Datadog and AWS so that you can group costs across providers for a single service together. -Utilize tools and scripts to automate tagging for consistency and efficiency. Several types of reports can be built in cloud cost management tools which will show you which resources are not tagged so you can make progress. +Utilize tools and scripts to automate tagging for consistency and efficiency. Several types of reports can be built with cloud cost management tools to show you which resources are not tagged so you can make progress. -To harness the full potential of tags, maintain consistency with a clear naming convention and stick to it. 
Tools like AWS Tag Editor or infrastructure-as-code solutions like Terraform, Pulumi, or CloudFormation can help enforce tagging. After the initial setup, make sure to review regularly. As the organization evolves, so will its tagging needs. Periodic reviews ensure relevance. To ensure stickiness of the strategy, educate and train team members so they understand the importance of tagging and how to do it correctly. +To harness the full potential of tags, maintain consistency with a clear naming convention and stick to it. Tools like AWS Tag Editor or infrastructure-as-code solutions like Terraform, Pulumi, or CloudFormation can help enforce tagging. After the initial setup, make sure to review regularly. As the organization evolves, so will its tagging needs. Periodic reviews ensure relevance. To ensure the strategy sticks, educate and train team members so they understand the importance of tagging and how to do it correctly. ## Implementing Tagging -It’s rare to plan and launch a tagging strategy from scratch. More likely than not, a company already has some infrastructure tagging, and a need to improve this visibility for deeper cost visibility. When implementing a new tagging program that adds to or replaces existing tags, we recommend a few collaborative approaches. +It’s rare to plan and launch a tagging strategy from scratch. More likely than not, a company already has some infrastructure tagging and a need to improve it for deeper cost visibility. When implementing a new tagging program that adds to or replaces existing tags, we recommend a few collaborative approaches. -Firstly, there should be an audit of existing tags. Decide what tags to keep, which to ignore, and measure the accuracy of what tags exist. Then, identify untagged resources. Measure how much of your infrastructure is untagged, and use that to track progress as the program progresses. In the process you will want to partner with engineering.
Clearly communicate new tagging guidelines, and support engineers owning the work to tag infrastructure. Finally, gamify the process. Find ways to recognize or reward teams as tagging work is completed. +Firstly, there should be an audit of existing tags. Decide what tags to keep and which to ignore, and measure the accuracy of what tags exist. Then, identify untagged resources. Measure how much of your infrastructure is untagged, and use that to track progress as the program progresses. In the process, you will want to partner with engineering. Clearly communicate new tagging guidelines, and support engineers owning the work to tag infrastructure. Finally, gamify the process. Find ways to recognize or reward teams as tagging work is completed. !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). diff --git a/docs/aws/services/batch-pricing.md b/docs/aws/services/batch-pricing.md index 89ef0b7..335ff6d 100644 --- a/docs/aws/services/batch-pricing.md +++ b/docs/aws/services/batch-pricing.md @@ -1,22 +1,22 @@ title: Batch Pricing | Cloud Cost Handbook -[AWS Batch Pricing Page](https://aws.amazon.com/batch/pricing/){ .md-button } +[AWS Batch Pricing Page](https://aws.amazon.com/batch/pricing/){ .md-button target="_blank" } ## Summary AWS Batch combines job scheduling and job execution into one managed service. Example Batch workloads include video rendering, log file ingestion, model training, simulation, and cosmology. Under the hood, Batch provisions [EC2](/aws/services/ec2-pricing/) or [Fargate](/aws/services/ecs-and-fargate-pricing/) instances and executes containerized jobs on them. -Users set a range of vCPUs and memory that are needed to execute the job. You can also choose specific instance types, which can be helpful for cost optimizations. 
For both EC2 and Fargate jobs it is possible to select on-demand or spot instances. Job execution itself can be managed with scheduling, allocation, and parallelization parameters. +Users set a range of vCPUs and memory that are needed to execute the job. You can also choose specific instance types, which can be helpful for cost optimizations. For both EC2 and Fargate jobs, it is possible to select on-demand or spot instances. Job execution itself can be managed with scheduling, allocation, and parallelization parameters. ## Pricing Dimensions Batch is free! -AWS Batch only consumes the underlying EC2 or Fargate resources, however it does not break these out in the bill. If Batch is pointed at existing EC2 instances, in other words the Batch Compute Environment is `UNMANAGED`, the cost of the Batch jobs will be the elapsed time they run on the instance. +AWS Batch only consumes the underlying EC2 or Fargate resources; however, it does not break these out in the bill. If Batch is pointed at existing EC2 instances, or in other words the Batch Compute Environment is `UNMANAGED`, the cost of the Batch jobs corresponds to the elapsed time they run on the instance. To view the cost of a job in a `MANAGED` Batch environment, you must inspect the ECS tasks that are associated with it. You can view the cost of AWS batch jobs through ECS in [Cost Reports](/tools/cost-reports/). -Another technique is to add a [tag](/aws/concepts/tags/) to the [Compute Environment](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) that is created to run the Batch job in. This does not allow you to track costs down to the job level. +Another technique is to add a [tag](/aws/concepts/tags/) to the [compute environment](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) that is created to run the Batch job. This does not allow you to track costs down to the job level.
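For `UNMANAGED` environments, attributing cost by elapsed time is simple arithmetic. A sketch, where the hourly rate is a placeholder rather than a quoted EC2 price:

```python
def batch_job_cost(elapsed_seconds, instance_hourly_rate):
    """Attribute cost to a job as its elapsed run time times the instance's hourly rate."""
    return (elapsed_seconds / 3600) * instance_hourly_rate

# A 45-minute job on an instance billed at a placeholder $0.40/hour:
print(round(batch_job_cost(45 * 60, 0.40), 2))  # 0.3
```

Summing this over all jobs that shared an instance approximates how the instance's bill splits across them.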
## Where Jobs Should Run @@ -24,18 +24,20 @@ One consideration is whether Batch is even the right tool for the job, and if so | | Lambda | Fargate (Spot) | Fargate | EC2 (Spot) | EC2 | | ------------------------- | ---------------------------------------------------- | -------------------------------- | ----------------------------- | ----------- | ----------- | -| **Job Length** | <15 mins | 5 - 10 mins | 5 - 10 mins | 5 - 45 mins | Hours | -| **Compute Limits** | [Lambda](/aws/services/lambda-pricing/) Runtime Only | <4 vCPUs, <30 GiB memory, no GPU | <4 vCPUS, <30 GiB mem, no GPU | None | None | -| **Startup Time** | <1 sec | 30 - 90 secs | 30 - 90 secs | 5 - 15 mins | 5 - 15 mins | -| **Job is Fault Tolerant** | No | Yes | No | Yes | No | +| Job Length | <15 mins | 5 - 10 mins | 5 - 10 mins | 5 - 45 mins | Hours | +| Compute Limits | [Lambda](/aws/services/lambda-pricing/) Runtime Only | <4 vCPUs, <30 GiB memory, no GPU | <4 vCPUs, <30 GiB memory, no GPU | None | None | +| Startup Time | <1 sec | 30 - 90 secs | 30 - 90 secs | 5 - 15 mins | 5 - 15 mins | +| Job is Fault Tolerant | No | Yes | No | Yes | No | ## Batch Cost Optimization Tips [AWS recommends](https://aws.amazon.com/blogs/hpc/aws-batch-best-practices/) a few techniques to lower costs for Batch jobs: -- The most cost effective [allocation strategy](https://aws.amazon.com/blogs/compute/optimizing-for-cost-availability-and-throughput-by-selecting-your-aws-batch-allocation-strategy/) for non interruptible workloads is `BEST_FIT`. This strategy is sensitive to capacity constraints however and so an entire workload may have to wait for available machines. To avoid this, `BEST_FIT_PROGRESSIVE` tries to find the best instances but falls back to less cost efficient instances that will still complete the job (e.g. have the minimum required number of vCPUs). For Fargate and EC2 Spot workloads, `SPOT_CAPACITY_OPTIMIZED` uses the same auto scaling algorithm as Spot Fleets to get the best price.
+- The most cost-effective [allocation strategy](https://aws.amazon.com/blogs/compute/optimizing-for-cost-availability-and-throughput-by-selecting-your-aws-batch-allocation-strategy/) for non-interruptible workloads is `BEST_FIT`. This strategy is sensitive to capacity constraints, however, so an entire workload may have to wait for available machines. To avoid this, `BEST_FIT_PROGRESSIVE` tries to find the best instances but falls back to less cost-efficient instances that will still complete the job (e.g. have the minimum required number of vCPUs). For Fargate and EC2 Spot workloads, `SPOT_CAPACITY_OPTIMIZED` uses the same Auto Scaling algorithm as Spot Fleets to get the best price. - Use smaller containers and image layers. Loading each container consumes compute time for the job. Furthermore, pulling containers across [NAT Gateways](/aws/services/vpc-pricing/#nat-gateway) will rack up data transfer charges. Prefer PrivateLink for pulling containers. - Use multiple availability zones. All things considered, `BEST_FIT_PROGRESSIVE` will find the cheapest AZ to run the workload in, so do not artificially limit yourself here. !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack).
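The fallback behavior described for `BEST_FIT_PROGRESSIVE` boils down to "cheapest qualifying instance first, then progressively costlier ones when capacity is short." The sketch below illustrates that idea only; it is not AWS's actual algorithm, and the instance names, prices, and capacity flags are hypothetical:

```python
def pick_instance(required_vcpus, instances):
    """Choose the cheapest available instance type meeting the vCPU requirement,
    falling back to more expensive types when cheaper ones have no capacity."""
    candidates = [i for i in instances if i["vcpus"] >= required_vcpus]
    for inst in sorted(candidates, key=lambda i: i["hourly_price"]):
        if inst["capacity_available"]:
            return inst["name"]
    return None  # no capacity anywhere; the workload waits

fleet = [
    {"name": "c5.large",  "vcpus": 2, "hourly_price": 0.085, "capacity_available": False},
    {"name": "c5.xlarge", "vcpus": 4, "hourly_price": 0.170, "capacity_available": True},
    {"name": "m5.xlarge", "vcpus": 4, "hourly_price": 0.192, "capacity_available": True},
]
print(pick_instance(2, fleet))  # c5.xlarge (falls back past the exhausted c5.large)
```

With strict `BEST_FIT` semantics, the same job would instead wait for `c5.large` capacity, which is the availability tradeoff the bullet above describes.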
+ +_Last updated Sep 1, 2022_ \ No newline at end of file diff --git a/docs/aws/services/cloudfront-pricing.md b/docs/aws/services/cloudfront-pricing.md index 4346b48..0211acc 100644 --- a/docs/aws/services/cloudfront-pricing.md +++ b/docs/aws/services/cloudfront-pricing.md @@ -1,10 +1,10 @@ title: Cloudfront Pricing | Cloud Cost Handbook -[Amazon CloudFront Pricing Page](https://aws.amazon.com/cloudfront/pricing/){ .md-button } +[Amazon CloudFront Pricing Page](https://aws.amazon.com/cloudfront/pricing/){ .md-button target="_blank" } ## Summary -Amazon CloudFront is a content delivery network (CDN) service used to distribute and cache traffic from one region to multiple geographic endpoints globally. Every CloudFront distribution includes an origin which is used to pull the original data from. An origin will typically be an S3 bucket or Load Balancer Endpoint. The traffic is distributed globally to speed up the access to an application which receives visitors from across the globe. CloudFront Distributions are billed based on the amount of traffic they request from the origin, distribute out to the internet as well as per request processed. Distribution out to the internet is priced differently depending on the region which it is accessed. Regions are grouped into geographic regions. When creating a distribution it is possible to select which regions CloudFront will serve traffic from. +Amazon CloudFront is a content delivery network (CDN) service used to distribute and cache traffic from one region to multiple geographic endpoints globally. Every CloudFront distribution includes an origin, from which the original content is pulled. An origin will typically be an S3 bucket or Load Balancer endpoint. The traffic is distributed globally to speed up access to an application that receives visitors from across the globe.
CloudFront distributions are billed based on the amount of traffic they request from the origin and distribute to the internet, as well as the number of requests processed. Distribution to the internet is priced differently depending on the region where it is accessed; locations are grouped into geographic regions for pricing. When creating a distribution, it is possible to select which regions CloudFront will serve traffic from. ## Pricing Dimensions @@ -20,57 +20,59 @@ Origin Shield can be enabled in order to reduce the amount of traffic being serv ## CloudFront Security Savings Bundle -The CloudFront Security Savings Bundle is a simple way to save up to 30% on the CloudFront charges on your AWS bill when you make a 1-year upfront commitment with no service-level configuration changes needed. You're billed in equal installments over the 12 months, starting from the time you purchase the security savings bundle. Once you purchase the CloudFront Security Savings Bundle, the savings are automatically applied to your bill. If you're familiar with [Savings Plans](/aws/concepts/savings-plans) or [Reserved Instances](/aws/concepts/reserved-instances), this is essentially the CloudFront equivalent to those conceptually speaking. +The CloudFront Security Savings Bundle is a simple way to save up to 30% on the CloudFront charges on your AWS bill when you make a 1-year upfront commitment with no service-level configuration changes needed. You're billed in equal installments over the 12 months, starting from the time of purchase. Once you purchase the CloudFront Security Savings Bundle, the savings are automatically applied to your bill. If you're familiar with [Savings Plans](/aws/concepts/savings-plans) or [Reserved Instances](/aws/concepts/reserved-instances), this is essentially the CloudFront equivalent to those, conceptually speaking. -The reason for this being named a "bundle" is that by making this purchase you also get credits towards the AWS Web Application Firewall (WAF) service.
Ten percent of the amount you pay in committed use for a CloudFront Security Savings Bundle will be granted toward AWS WAF. So for example if you pay $500 for a CloudFront Security Savings Bundle, $50 will also be applied towards AWS WAF. +The reason for this being named a bundle is that by making this purchase you also get credits towards the AWS Web Application Firewall (WAF) service. Ten percent of the amount you pay in committed use for a CloudFront Security Savings Bundle will be granted toward AWS WAF. So for example, if you pay $500 for a CloudFront Security Savings Bundle, $50 will also be applied towards AWS WAF. ## Custom Pricing -For customers who are willing to make certain minimum traffic commits (typically 10 TB/month or higher) they can contact AWS and negotiate custom discounted rates. +Customers who are willing to make certain minimum traffic commitments (typically 10 TB/month or more) can contact AWS and negotiate custom discounted rates. -## CloudFront Versus Cloudflare +## CloudFront vs. Cloudflare -Cloudflare[^whynoothervendors] is an edge network that offers a number of different performance, availability and security services. One of those services is an edge caching service that offer effectively the same service as Amazon CloudFront. The most important distinction between CloudFront and Cloudflare is not a technical differentiation but a business model differentiation. CloudFront utilizes a metered pricing model whereby you pay based on the amount of traffic that is served via the CloudFront service.[^cloudfrontpricing] Cloudflare, on the other hand, offers flat-rate pricing for its service without any bandwidth caps.[^cloudflaretos] +Cloudflare[^whynoothervendors] is an edge network that offers a number of different performance, availability, and security services. One of those services is an edge caching service that offers effectively the same service as Amazon CloudFront.
The most important distinction between CloudFront and Cloudflare is not a technical differentiation, but a business model differentiation. CloudFront utilizes a metered pricing model whereby you pay based on the amount of traffic that is served via the CloudFront service.[^cloudfrontpricing] Cloudflare, on the other hand, offers flat-rate pricing for its service without any bandwidth caps.[^cloudflaretos] -What this means is that as a customer of [Cloudflare's Business plan](https://www.cloudflare.com/plans/business/), you can pay $200 per month and delivery unlimited traffic via the Cloudflare CDN. Seems too good to be true? Feel free to browse the official Cloudflare community where this question is [asked](https://community.cloudflare.com/t/to-support-about-cdn-plan/166219) and [answered](https://community.cloudflare.com/t/cloudflare-doesnt-mention-in-plans-that-how-much-monthly-bandwidth-will-provides/161097) multiple times. +What this means is that as a customer of [Cloudflare's Business plan](https://www.cloudflare.com/plans/business/), you can pay $200 per month and deliver unlimited traffic via the Cloudflare CDN. Seems too good to be true? Feel free to browse the official Cloudflare community where this question is [asked](https://community.cloudflare.com/t/to-support-about-cdn-plan/166219) and [answered](https://community.cloudflare.com/t/cloudflare-doesnt-mention-in-plans-that-how-much-monthly-bandwidth-will-provides/161097) multiple times. ### Considerations -Price is not the only consideration that goes into making a decision about whether to utilize CloudFront or a competing CDN service. Performance, availability, user experience, support and legal compliance are other factors that will factor into the decision to utilize one service over another. +Price is not the only consideration that goes into making a decision about whether to utilize CloudFront or a competing CDN service. 
Performance, availability, user experience, support, and legal compliance are other considerations that factor into the decision to utilize one service over another. #### Availability -In order to offer customers unlimited bandwidth, Cloudflare utilizes service degradation based on their plan levels to prioritize higher tier customers in the event of a service degradation. The two most common service degradations for Cloudflare are either a DDoS attack that is overwhelming one or more points-of-presence (PoP) in the network or a legitimate surge in traffic due to any number of events. +In order to offer customers unlimited bandwidth, Cloudflare prioritizes higher-tier customers by plan level in the event of service degradation. The two most common service degradations for Cloudflare are either a distributed denial-of-service (DDoS) attack that is overwhelming one or more points-of-presence (PoP) in the network, or a legitimate surge in traffic due to any number of events. -When the resources for a PoP are being depleted and service is being degraded, Cloudflare will choose to route traffic for customers out of that location based on the plan level they are subscribed to. +When the resources for a PoP are being depleted and service is being degraded, Cloudflare will choose to route traffic for customers out of that location based on the plan level they are subscribed to.
Free traffic will be routed away from the PoP first, then Pro, Business, etc. The effect of having traffic routed out of a specific PoP is that users who are closest to the PoP will have some level of service degradation, since they will instead have their traffic served from a PoP that is farther away than their most ideal PoP. In locations where the next nearest available PoP is close, this degradation will be practically unnoticeable. In locations where the next available PoP is topologically distant, service degradation can potentially be significant. #### Technical -In the scenario that you are utilizing CloudFront and have an Amazon service designated as the origin for the content being served, typically this would be an S3 bucket or maybe EC2 with an attached EBS volume, you should consider that by switch from CloudFront as your CDN to Cloudflare you will incur egress charges for data transfer from AWS to Cloudflare. AWS does not charge customers any egress fees when moving content from an AWS service like S3 or EC2 to CloudFront.[^freeoriginegress] The amount of charges will largely be dependent on your particular services cache hit ratio. The higher the cache hit ratio, the less cache misses that will incur AWS egress charges. +In the scenario that you are utilizing CloudFront and have an Amazon service designated as the origin for the content being served, typically an S3 bucket or maybe EC2 with an attached EBS volume, you should consider that by switching from CloudFront as your CDN to Cloudflare you will incur egress charges for data transfer from AWS to Cloudflare. AWS does not charge customers any egress fees when moving content from an AWS service like S3 or EC2 to CloudFront.[^freeoriginegress] The amount of charges will largely be dependent on your particular service's cache hit ratio. The higher the cache hit ratio, the fewer cache misses that will incur AWS egress charges. -This practice favors pairing CloudFront with an AWS service as origin.
That being said, for most customers with significant CloudFront traffic they will still come out on top by considering a flat-rate priced CDN plan. +This practice favors pairing CloudFront with an AWS service as the origin. That being said, most customers with significant CloudFront traffic will still come out ahead with a flat-rate priced CDN plan. -On top of this, you can also consider moving your content off of an AWS service to a provider in the [Bandwidth Alliance](https://www.cloudflare.com/bandwidth-alliance/). By utilizing the Cloudflare CDN service and a Bandwidth Alliance partner as the content origin, you can take advantage of the flat-rate pricing of the Cloudflare self-serve plans and eliminate all egress costs between Cloudflare and your origin provider of choice. This effectively gives you the same benefit that AWS offers customer of no egress charges between an AWS service and CloudFront but with the power of the flat-rate pricing that is available via the Cloudflare self-serve plans. Further details can be found at in the [S3 service article](https://handbook.vantage.sh/aws/services/s3-pricing/#s3-versus-bandwidth-alliance-partner) of the Cloud Cost Handbook. +On top of this, you can also consider moving your content off of an AWS service to a provider in the [Bandwidth Alliance](https://www.cloudflare.com/bandwidth-alliance/). By utilizing the Cloudflare CDN service and a Bandwidth Alliance partner as the content origin, you can take advantage of the flat-rate pricing of the Cloudflare self-serve plans and eliminate all egress costs between Cloudflare and your origin provider of choice. This effectively gives you the same benefit that AWS offers customers, of no egress charges between an AWS service and CloudFront, but with the power of the flat-rate pricing that is available via the Cloudflare self-serve plans.
Further details can be found in the [S3 service article](https://handbook.vantage.sh/aws/services/s3-pricing/#s3-vs-bandwidth-alliance-partner) of the Cloud Cost Handbook. - + #### Sales -Another side effect of subscribing to a self-serve plan from Cloudflare is that users of these plans are used as part of the sales funnel for the Cloudflare sales team. What this means is that by signing up for Cloudflare you are giving your contact information that can be utilized by the sales team in order for them to contact you about other Cloudflare services and offerings. +Another side effect of subscribing to a self-serve plan from Cloudflare is that users of these plans are used as part of the sales funnel for the Cloudflare sales team. What this means is that by signing up for Cloudflare you are providing contact information that the sales team can use to contact you about other Cloudflare services and offerings. -The important thing to remember is that, as long as you aren't breaking the Cloudflare Terms of Service (ToS) they cannot force you to purchase any additional services. +The important thing to remember is that as long as you aren't breaking the Cloudflare Terms of Service (ToS), they cannot force you to purchase any additional services. -[^whynoothervendors]: This guide is calling special attention to Cloudflare and no other vendors in this space due to the unique offerings that Cloudflare has that no other provider offers. Specifically, that they offer self-serve plans with flat-rate pricing and no bandwidth caps. +[^whynoothervendors]: This guide is calling special attention to Cloudflare and no other vendors in this space due to the unique offerings that Cloudflare has that no other provider offers. Specifically, they offer self-serve plans with flat-rate pricing and no bandwidth caps.
If you are aware of any other services with a similar offering, please submit an issue or pull request and we will update the guide. [^cloudfrontpricing]: Direct link to CloudFront pricing that details metered pricing model: https://aws.amazon.com/cloudfront/pricing/ -[^cloudflaretos]: Direct link to the Cloudflare Terms of Service for the self-serve plans (i.e. the Free, Pro and Business plans):https://www.cloudflare.com/terms/ +[^cloudflaretos]: Direct link to the Cloudflare Terms of Service for the self-serve plans (i.e. the Free, Pro, and Business plans): https://www.cloudflare.com/terms/ [^freeoriginegress]: "If you are using an AWS origin, effective December 1, 2014, data transferred from origin to edge locations (Amazon CloudFront "origin fetches") will be free of charge." https://aws.amazon.com/cloudfront/pricing/
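The metered-versus-flat-rate tradeoff above can be sanity-checked with a back-of-the-envelope calculation. The per-GB rate here is a placeholder for illustration, not a quoted CloudFront price:

```python
def breakeven_gb(flat_monthly_fee, metered_rate_per_gb):
    """Monthly traffic volume above which a flat-rate plan beats metered pricing."""
    return flat_monthly_fee / metered_rate_per_gb

# At a placeholder $0.085/GB metered rate, a $200/month flat-rate plan
# breaks even around 2,353 GB (roughly 2.3 TB) of monthly traffic:
print(round(breakeven_gb(200, 0.085)))  # 2353
```

Below the break-even volume, metered pricing is cheaper; well above it, a flat-rate plan wins, which is why the comparison matters most for high-traffic distributions.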
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Aug 8, 2021_ \ No newline at end of file diff --git a/docs/aws/services/cloudtrail-pricing.md b/docs/aws/services/cloudtrail-pricing.md index 296de6a..4700e92 100644 --- a/docs/aws/services/cloudtrail-pricing.md +++ b/docs/aws/services/cloudtrail-pricing.md @@ -11,23 +11,23 @@ AWS CloudTrail maintains logs and records of actions and events that occur in yo | Dimension | Description | | ------------- |-------------| -|Ingestion and storage| For CloudTrail Lakes, you pay for both ingesting and storing of logs/events from AWS sources and non-AWS sources. Pricing does not differ between source. This pricing includes 7 years of storage. | +|Ingestion and Storage| For CloudTrail Lakes, you pay for both ingestion and storage of logs/events from AWS sources and non-AWS sources. Pricing does not differ by source. This pricing includes seven years of storage. | +|Analysis| Analysis charges for CloudTrail Lakes are based on the volume of logs you analyze. You are charged per GB of scanned data. CloudTrail Insights analysis charges are based on the Insight type.| -| Events delivered| For CloudTrail Trails, pricing is based on the number of data events and management events delivered to S3. Your first management event delivery to S3 is free. | +| Events Delivered| For CloudTrail trails, pricing is based on the number of data events and management events delivered to S3. Your first management event delivery to S3 is free. | ## Event History -Use the event history feature directly in the CloudTrail console to view and search historical event and log data.
The event history captures only management events (for example, if you create or delete S3 buckets). The event history does not include data events (for example, if you read or write an S3 object). Event history shows only a 90-day history of the account's activity. You can query across only one [Region](/aws/concepts/regions/){target="_blank"} and a single attribute. Event history has no additional charge. +Use the Event history feature directly in the CloudTrail console to view and search historical event and log data. The Event history captures only management events (e.g. if you create or delete S3 buckets). The Event history does not include data events (e.g. if you read or write an S3 object). Event history shows only a 90-day history of the account's activity. You can query across only one [Region](/aws/concepts/regions/) and a single attribute. Event history has no additional charge. ## CloudTrail Trails -Trails collect and store AWS account activity. Trails support the delivery of both management and data events. Unlike the basic event history, Trails contain an event record history that can be greater than 90 days. Additionally, you can specify where to send this activity to in S3 buckets, CloudWatch Logs, or Amazon EventBridge. You have the option to [set up Amazon SNS notifications](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/configure-sns-notifications-for-cloudtrail.html){target="_blank"} to alert you when CloudTrail adds a new log file to an S3 bucket. +Trails collect and store AWS account activity. Trails support the delivery of both management and data events. Unlike the basic Event history, trails contain an event record history that can be greater than 90 days. Additionally, you can specify where to send this activity: S3 buckets, CloudWatch Logs, or Amazon EventBridge.
You have the option to [set up Amazon SNS notifications](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/configure-sns-notifications-for-cloudtrail.html) to alert you when CloudTrail adds a new log file to an S3 bucket. ## CloudTrail Lake -A CloudTrail Lake allows you to store and analyze API activity and data logs for up to 7 years. You have the ability to view log data from multiple sources and query on numerous records. Compared to the event history, you can create more customized views and run queries for multiple Regions and attributes. You pay for both ingestion and storage based on "uncompressed data ingested during the month." AWS offers [a few suggestions for reducing usage costs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake-manage-costs.html#cloudtrail-lake-manage-costs-tools){target="_blank"}, including configuring your options to not ingest future events. +A CloudTrail Lake allows you to store and analyze API activity and data logs for up to seven years. You have the ability to view log data from multiple sources and query on numerous records. Compared to the Event history, you can create more customized views and run queries for multiple Regions and attributes. You pay for both ingestion and storage based on "uncompressed data ingested during the month." AWS offers [a few suggestions for reducing usage costs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake-manage-costs.html#cloudtrail-lake-manage-costs-tools), including configuring your options to not ingest future events. ## CloudTrail Insights @@ -38,3 +38,5 @@ CloudTrail Insights analyzes management events and reports on unusual or suspici !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). 
+ +_Last updated Oct 4, 2023_ \ No newline at end of file diff --git a/docs/aws/services/cloudwatch-pricing.md b/docs/aws/services/cloudwatch-pricing.md index f5e8e02..58634c0 100644 --- a/docs/aws/services/cloudwatch-pricing.md +++ b/docs/aws/services/cloudwatch-pricing.md @@ -1,39 +1,41 @@ title: CloudWatch Pricing | Cloud Cost Handbook -[Amazon CloudWatch Pricing Page](https://aws.amazon.com/cloudwatch/pricing/){ .md-button } +[Amazon CloudWatch Pricing Page](https://aws.amazon.com/cloudwatch/pricing/){ .md-button target="_blank"} ## Summary -Amazon CloudWatch is a logging, monitoring and observability service. As with most monitoring/observability tools the cost of service is based on the amount of data that is collected and stored as well as a number of other factors. CloudWatch is no different. For most use-cases, the largest CloudWatch costs are made up of the number of metrics and logs that a user is ingesting and storing. +Amazon CloudWatch is a logging, monitoring, and observability service. As with most monitoring and observability tools, the cost of service is based on the amount of data that is collected and stored, as well as a number of other factors. For most use cases, the largest CloudWatch costs are made up of the number of metrics and logs that a user is ingesting and storing. -Oftentimes CloudWatch is leveraged automatically by other AWS services for metric and log storage. Users are sometimes surprised when they spin up a number of unrelated services which they accounted for during planning but are then greeted with an accompanying spike in CloudWatch costs that they didn't account for. +Oftentimes, CloudWatch is leveraged automatically by other AWS services for metric and log storage. Users are sometimes surprised when they spin up a number of unrelated services that they accounted for during planning, but are then greeted with an accompanying spike in CloudWatch costs that they didn't anticipate.
-CloudWatch stores and process data from an umbrella of different AWS services which means that sometimes it isn't obvious why the overall CloudWatch bill has increased. Diving into subcategory costs can help shed light on which other AWS services are causing CloudWatch costs to increase. +CloudWatch stores and processes data from an umbrella of different AWS services, which means that sometimes it isn't obvious why the overall CloudWatch bill has increased. Diving into subcategory costs can help shed light on which other AWS services are causing CloudWatch costs to increase. ## Pricing Dimensions | Dimension | Description | | -------- | -------- | -| Custom Metric Storage | AWS charges you for the number of custom metrics you store with them per month. CloudWatch's unit pricing is progressive; the first 10,000 metrics tracked is $0.30 per metric per month, the next 240,000 costs $0.10 and so on.[^1] This gives users with large number of metrics automatic economies of scale as they grow the number of metrics tracked. Note: Pricing is dependant on the region where you store your metrics.[^2] | +| Custom Metric Storage | AWS charges you for the number of custom metrics you store with them per month. CloudWatch's unit pricing is progressive; the first 10,000 metrics tracked cost $0.30 per metric per month, the next 240,000 cost $0.10, and so on.[^1] This gives users with large numbers of metrics automatic economies of scale as they grow the number of metrics tracked. Note: Pricing is dependent on the region where you store your metrics.[^2] | | CloudWatch API Requests | AWS charges you for the following API requests: `GetMetricData`, `GetInsightRuleReport`, `GetMetricWidgetImage`, `GetMetricStatistics`, `ListMetrics`, `PutMetricData`, `GetDashboard`, `ListDashboards`, `PutDashboard` and `DeleteDashboards`. For most AWS services there is no API charge for sending metrics.
A user would normally be charged by the CloudWatch API for ingesting metrics from a non-AWS service or for a third-party infrastructure monitoring/observability tool reading from the API to collect metrics. Pricing is dependent on the region where CloudWatch is deployed.[^3] | | CloudWatch Dashboards | AWS charges $3.00 per month per CloudWatch dashboard. | -| CloudWatch Alarms | CloudWatch alarms are priced based on the resolution of the alarm (60 seconds versus 10 seconds) and if you need to combine multiple alarms together into a more complex alarm like anomaly detection or a composite alarm. Pricing is dependant on the region where CloudWatch is deployed.[^3] | +| CloudWatch Alarms | CloudWatch alarms are priced based on the resolution of the alarm (e.g. 60 seconds vs 10 seconds) and whether you need to combine multiple alarms together into a more complex alarm like anomaly detection or a composite alarm. Pricing is dependent on the region where CloudWatch is deployed.[^3] | | CloudWatch Logs | AWS charges you for two components as it relates to CloudWatch Logs: (1) ingestion and (2) storage. | | CloudWatch Events | AWS charges you for CloudWatch Events which are changes in your AWS environment. For example, you can trigger an event whenever an EC2 instance is created. You are charged a rate per one million events. | -| CloudWatch Contributor Insights | Contributor Insights are only available for CloudWatch Logs and DynamoDB. For CloudWatch Logs, Contributor Insights are priced per-rule per-month, and for every million log events per month that match your rule. For DynamoDB, Contributor Insights are priced per-rule per-month and for every million DynamoDB Events, which occur when items are read from or written to your DynamoDB table. | +| CloudWatch Contributor Insights | Contributor Insights are only available for CloudWatch Logs and DynamoDB.
For CloudWatch Logs, Contributor Insights are priced per rule per month, and for every million log events per month that match your rule. For DynamoDB, Contributor Insights are priced per rule per month and for every million DynamoDB Events, which occur when items are read from or written to your DynamoDB table. | | CloudWatch Canaries | Canaries are priced based on the number of runs. Pricing is very specific to region. Be sure to check where you are running your CloudWatch Canaries to be aware of the price for that region. | ## CloudWatch Cost Optimizations -By default, CloudWatch Log Groups retain logs _indefinitely_. However, you can choose a retention period of anywhere from [1 day to 10 years](https://docs.aws.amazon.com/managedservices/latest/userguide/log-customize-retention.html). After logs expire, they will be deleted and reduce storage costs. To set a retention period, chose "Edit Retention" in the CloudWatch console by following [these instructions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention). +By default, CloudWatch Log Groups retain logs _indefinitely_. However, you can choose a retention period of anywhere from [one day to 10 years](https://docs.aws.amazon.com/managedservices/latest/userguide/log-customize-retention.html). After logs expire, they will be deleted, which will reduce storage costs. To set a retention period, choose `Edit Retention` in the CloudWatch console by following [these instructions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention).
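The progressive custom-metric pricing described in the table above can be turned into a quick monthly estimator. This is only a sketch: it models just the two tiers quoted in the table (US East rates per the footnotes), and deeper tiers, which are cheaper still, are left out.

```python
def custom_metric_cost(metric_count: int) -> float:
    """Estimate monthly CloudWatch custom-metric cost (USD) using the
    progressive tiers quoted above. Tiers beyond 250,000 metrics exist
    at lower rates and are not modeled in this sketch."""
    tiers = [(10_000, 0.30), (240_000, 0.10)]  # (tier size, $/metric/month)
    cost, remaining = 0.0, metric_count
    for size, rate in tiers:
        in_tier = min(remaining, size)  # metrics billed at this tier's rate
        cost += in_tier * rate
        remaining -= in_tier
    return cost
```

For example, 15,000 tracked metrics would cost roughly 10,000 × $0.30 + 5,000 × $0.10 = $3,500 per month, which is why pruning unused custom metrics pays off quickly.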
-[^1]: Price is based on US East (Ohio) region as of July 28, 2021. See footnote below for comment about pricing per region. +[^1]: Price is based on US East (Ohio) region as of July 28, 2021. See the footnote below for a comment about pricing per region. [^2]: Or at least this is what the [CloudWatch pricing page](https://aws.amazon.com/cloudwatch/pricing/) states. If you click through all of the regions the prices are all the same for custom metric storage as of July 28, 2021. [^3]: Pricing is fairly uniform across regions. Special regions like Sao Paulo and GovCloud diverge from the standard pricing. !!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Sep 26, 2021_ \ No newline at end of file diff --git a/docs/aws/services/config-pricing.md b/docs/aws/services/config-pricing.md index b749ebf..019f9d3 100644 --- a/docs/aws/services/config-pricing.md +++ b/docs/aws/services/config-pricing.md @@ -3,7 +3,7 @@ title: Config Pricing | Cloud Cost Handbook [Amazon Config Pricing Page](https://aws.amazon.com/config/pricing/){ .md-button target="_blank"} ## Summary -AWS Config is a service used to record and monitor configuration items in your AWS environment, and then evaluate them using Config Rule Evaluations. A configuration item is recorded when a monitored resource changes. Resources are AWS entities like EC2 instances, S3 buckets, and IAM roles. You can then apply compliance checks and policies using Config rules. +AWS Config is a service used to record and monitor configuration items in your AWS environment, and then evaluate them using Config rule evaluations. 
A configuration item is recorded when a monitored resource changes. Resources are AWS entities like EC2 instances, S3 buckets, and IAM roles. You can then apply compliance checks and policies using Config rules. Its most common use case is for compliance monitoring to ensure that your AWS resources adhere to security and operational best practices. However, it is also used for resource administration, managing and troubleshooting configuration changes, and security analysis. @@ -11,28 +11,18 @@ Its most common use case is for compliance monitoring to ensure that your AWS re | Dimension | Description | | ------------------------------ | ----------- | | Configuration Item Recordings | The rate is $0.003 for each configuration item recorded. | -| Config Rule Evaluations | See the [Config Rule Evaluations Pricing](#config-rule-evaluations-pricing) section for more information. | -| Conformance Pack Evaluations | See the [Conformance Pack Evaluations Pricing](#conformance-pack-evaluations-pricing) section for more information. | +| Config Rule Evaluations | See the [Config Rule Evaluations](#config-rule-evaluations) section for more information. | +| Conformance Pack Evaluations | See the [Conformance Pack Evaluations](#conformance-pack-evaluations) section for more information. | | S3 Storage | Additional [S3](/aws/services/s3-pricing) costs could occur for snapshots and history files. | | SNS Charges | You will incur additional charges if you opt into SNS notifications. | | Lambda Charges | If you create custom rules that use [Lambda](/aws/services/lambda-pricing) functions for evaluation, you will incur Lambda charges based on usage. 
| -## Config Rule Evaluations Pricing +## Config Rule Evaluations -| AWS Config Rules Evaluations | Price Per Rule Evaluation Per Region | -| --- | --- | -| First 100,000 | $0.001 | -| Next 400,000 (100,001-500,000) | $0.0008 | -| Over 500,000 | $0.00 | +With Config, you can create Config rules or use the predefined rules (called managed rules) that reflect your desired configurations. Config continuously evaluates your resources against these rules and flags any deviations as noncompliant. You are charged per Config rule evaluation on a tiered model, with evaluations getting less expensive the more there are. After 500,000 evaluations in a region, you are no longer charged. -## Conformance Pack Evaluations Pricing -Conformance Pack Evaluations occur when a resource is evaluated by a Config rule within a Conformance Pack. - -| Conformance Pack Evaluations | Price Per Conformance Pack Evaluation Per Region | -| --- | --- | -| First 100,000 | $0.001 | -| Next 400,000 (100,001-500,000) | $0.0008 | -| Over 500,000 | $0.0005 | +## Conformance Pack Evaluations +Conformance pack evaluations occur when a resource is evaluated by a Config rule within a conformance pack. A conformance pack is a "collection of AWS Config rules and remediation actions." Charges are per conformance pack evaluation on a tiered basis, with the cost decreasing as more evaluations occur. ## Config Cost Optimization Tips [AWS recommends](https://aws.amazon.com/blogs/mt/cost-optimization-recommendations-for-aws-config/) a few tips for cost optimization: @@ -40,7 +30,9 @@ Conformance Pack Evaluations occur when a resource is evaluated by a Config rule - Only select from the resources needed when configuring Config. - Only monitor resources in the regions that are necessary for your specific use case. - Set up a lifecycle policy to auto-delete configuration history records after a specified number of days in order to reduce storage costs. -- Customize Conformance Packs to ensure there are no duplicate rules.
+- Customize conformance packs to ensure there are no duplicate rules. !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Oct 6, 2023_ diff --git a/docs/aws/services/dynamodb-pricing.md b/docs/aws/services/dynamodb-pricing.md index 8b82598..0eb0e17 100644 --- a/docs/aws/services/dynamodb-pricing.md +++ b/docs/aws/services/dynamodb-pricing.md @@ -1,25 +1,24 @@ title: DynamoDB Pricing | Cloud Cost Handbook -[Amazon DynamoDB Pricing Page](https://aws.amazon.com/dynamodb/pricing/){ .md-button } +[Amazon DynamoDB Pricing Page](https://aws.amazon.com/dynamodb/pricing/){ .md-button target="_blank"} ## Summary DynamoDB is Amazon's primary managed NoSQL database service. -It offers single-digit-millisecond latency, scales to effectively unlimited requests-per-second & storage, and has (largely) predictable pricing. +It offers single-digit millisecond latency, scales to effectively unlimited requests per second and storage, and has (largely) predictable pricing. -DynamoDB, like most NoSQL datastores, differs substantially from relational databases - it can only be queried via primary key attributes on the base table & indexes. +DynamoDB, like most NoSQL datastores, differs substantially from relational databases—it can only be queried via primary key attributes on the base table and indexes.
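The "queried via primary key attributes" constraint is easier to see with a toy model. This is plain Python with hypothetical item data, not the DynamoDB API: key-based access is a single lookup regardless of table size, while any non-key question forces an examination of every item.

```python
# Toy model of a key-value table: items are stored under their partition key.
table = {
    "user#1": {"name": "Ada", "plan": "pro"},
    "user#2": {"name": "Grace", "plan": "free"},
    "user#3": {"name": "Alan", "plan": "pro"},
}

def get_item(pk):
    """Key-based access: one lookup, no matter how large the table is."""
    return table.get(pk)

def scan(predicate):
    """Non-key access: every item must be examined to answer the question."""
    return [item for item in table.values() if predicate(item)]

# get_item("user#2") touches 1 item; scan(...) for "pro" users touches all 3.
```

In DynamoDB the same asymmetry shows up directly on the bill, since a scan reads (and charges for) the whole table.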
## Pricing Dimensions -| Dimension | Options | Description | Docs | -|----------------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------| -| Billing Mode | `On Demand, Provisioned Throughput` | Choose between paying per read/write or per allocated requests-per-second-throughput | [Docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand) | -| Write Type | `Standard, Transactional` | Transactional operations allow *ACID guarantees* at twice the standard cost | [Docs](https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-transactions/) | -| Read Type | `Eventually Consistent, Strongly Consistent, Transactional` | Dynamo reads are by default *Eventually Consistent* - when you read from a table, the response might not reflect the results of a recently completed write. 
| [Docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html) | -| Read Operation | `GetItem, Scan, Query` | Scans return the entire contents of a table; Queries allow a much faster & cheaper read of a subsection of the table | [Docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-query-scan.html) | -| Indexes | `None, Local Secondary Index, Global Secondary Index` | Indexes allow an alternate set of `partition_key` + `sort_key` to be used for queries | [Overview](https://www.dynamodbguide.com/secondary-indexes/) | - +| Dimension | Options | Description | |----------------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------| +| [Billing Mode](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand) | `On Demand, Provisioned Throughput` | Choose between paying per read/write or per allocated requests-per-second of throughput. | +| [Write Type](https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-transactions/) | `Standard, Transactional` | Transactional operations allow *ACID guarantees* at twice the standard cost. | +| [Read Type](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html) | `Eventually Consistent, Strongly Consistent, Transactional` | Dynamo reads are by default *Eventually Consistent* - when you read from a table, the response might not reflect the results of a recently completed write. | +| [Read Operation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-query-scan.html) | `GetItem, Scan, Query` | Scans return the entire contents of a table; Queries allow a much faster & cheaper read of a subsection of the table.
| +| [Indexes](https://www.dynamodbguide.com/secondary-indexes/) | `None, Local Secondary Index, Global Secondary Index` | Indexes allow an alternate set of `partition_key` + `sort_key` to be used for queries. | ## Billing Mode @@ -27,43 +26,43 @@ DynamoDB, like most NoSQL datastores, differs substantially from relational data Provisioned Throughput is cheaper if you have a meaningful number of reads/writes *distributed evenly across time*. Any reads/writes above the provisioned threshold *will fail*, so it is not well suited to bursty or unpredictable workloads. -Provisioned Throughput includes optional Auto-Scaling if throughput thresholds are being exceeded ([Docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html)). +Provisioned Throughput includes optional Auto Scaling if throughput thresholds are being exceeded ([docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html)). -[AWS Free Tier](https://aws.amazon.com/free) includes 25 reads/writes-per-second of Provisioned Throughput (across any # of tables), but *does not* include any On Demand mode usage. +[AWS Free Tier](https://aws.amazon.com/free) includes 25 reads/writes per second of Provisioned Throughput (across any number of tables) but *does not* include any on-demand mode usage. 
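Choosing a Provisioned Throughput level comes down to the standard capacity-unit math: one WCU covers a 1KB write per second, one RCU covers a 4KB strongly consistent read per second, and eventually consistent reads cost half. A back-of-the-envelope sizing sketch (the workload numbers in the usage comments are hypothetical):

```python
import math

def required_wcu(writes_per_sec: int, item_kb: float) -> int:
    """WCUs needed: each write consumes ceil(item_kb / 1KB) write units."""
    return math.ceil(writes_per_sec * math.ceil(item_kb / 1.0))

def required_rcu(reads_per_sec: int, item_kb: float,
                 eventually_consistent: bool = True) -> int:
    """RCUs needed: each read consumes ceil(item_kb / 4KB) read units,
    halved when eventually consistent reads are acceptable."""
    units = reads_per_sec * math.ceil(item_kb / 4.0)
    return math.ceil(units / 2 if eventually_consistent else units)

# e.g. 100 writes/sec of 0.5KB items -> 100 WCU;
# 100 eventually consistent reads/sec of 4KB items -> 50 RCU.
```

Note the ceilings: a 1.1KB item costs a full 2 WCU per write, which is why oversized items get disproportionately expensive.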
| Billing Mode | Unit | Unit Definition | |------------------------|-------------------------------|------------------------------------------------------------| -| On Demand | **Read Request Unit (RRU)** | read two <4KB items, eventually consistent | -| On Demand | **Write Request Unit (WRU)** | write one <1KB item | -| Provisioned Throughput | **Read Capacity Unit (RCU)** | two reads *per second* (<4KB items), eventually consistent | -| Provisioned Throughput | **Write Capacity Unit (WCU)** | one write *per second* (<1KB item) | +| On-Demand | Read Request Unit (RRU) | Read two <4KB items, eventually consistent | +| On-Demand | Write Request Unit (WRU) | Write one <1KB item | +| Provisioned Throughput | Read Capacity Unit (RCU) | Two reads *per second* (<4KB items), eventually consistent | +| Provisioned Throughput | Write Capacity Unit (WCU) | One write *per second* (<1KB item) | ## Write Type -Standard writes are relatively straightforward and include single item writes (`table.put_item`) and batch writes (`batch.put_item`). +Standard writes are relatively straightforward and include single-item writes (`table.put_item`) and batch writes (`batch.put_item`). Transactions (`client.execute_transaction`) group up to 25 writes (or reads, updates, or deletes) together and *guarantee that they succeed or fail together.* -For a given write of an item up to *1 KB* in size: +For a given write of an item up to *1KB* in size: | Type | Cost | |----------------------|----------------------------| -| Standard single item | 1 WRU | -| Standard batch | 1 WRU (per item) | +| Standard Single Item | 1 WRU | +| Standard Batch | 1 WRU (per item) | | Transactional | 2 WRU (2x) | -| Oversize 4 KB item | 4 WRU (4x, size dependent) | +| Oversize 4KB Item | 4 WRU (4x, size dependent) | ## Read Type -Part of what makes DynamoDB a compelling offering is its hybrid approach to the CAP theorem[^1] - it can adjust between eventually and strongly consistent as needed. 
+Part of what makes DynamoDB a compelling offering is its hybrid approach to the CAP theorem[^1]—it can adjust between eventually and strongly consistent as needed. Wherever acceptable to the business needs and current data modeling, it is faster and cheaper to use eventually consistent reads. -That said, some business logic unequivocally dictates strongly consistent reads (for example: an ATM reading a customer's balance). +That said, some business logic unequivocally dictates strongly consistent reads (e.g. an ATM reading a customer's balance). -For a given read of an item up to *4 KB* in size: +For a given read of an item up to *4KB* in size: | Type | Cost | |-----------------------|------------| @@ -75,11 +74,11 @@ For a given read of an item up to *4 KB* in size: ## Read Operation -Getting a single item is as simple as providing its `partition_key` (and `sort_key`, if the table has one) +Getting a single item is as simple as providing its `partition_key` (and `sort_key` if the table has one). Queries, however, are much more involved. NoSQL databases like DynamoDB can require significant upfront data modeling work to enable the query flexibility that SQL-based databases have by default. -Scans require reading the entire table, and are correspondingly slow and expensive. Wherever possible, *avoid scanning Dynamo tables.* +Scans require reading the entire table and are correspondingly slow and expensive. Wherever possible, *avoid scanning Dynamo tables.* | Type | Cost | |-----------|--------------------------------------| @@ -96,29 +95,29 @@ Indexes allow alternate partition and sort keys to be used to query items. They Indexes can help control costs in two primary ways: -1. Queries on a new index return less unnecessary items (than the alternative/existing query) and thus cost less RRUs. +1. Queries on a new index return fewer unnecessary items (than the alternative/existing query) and thus cost fewer RRUs. -2.
Each index optionally allows a subset of item attributes to be **projected** to that index. Projecting a subset can save on read costs if items are regularly >4 KB, but the projected attribute names+values sum to <4KB.[^2] +2. Each index optionally allows a subset of item attributes to be **projected** to that index. Projecting a subset can save on read costs if items are regularly >4KB, but the projected attribute names+values sum to <4KB.[^2] | Type | Primary Key Attributes | |----------------------------------|----------------------------------------------| | Base table | Initial `pk` + optional `sk` | -| **Local Secondary Index (LSI)** | Initial `pk` + *different* `sk` | -| **Global Secondary Index (GSI)** | *Different* `pk` + optional *different* `sk` | +| Local Secondary Index (LSI) | Initial `pk` + *different* `sk` | +| Global Secondary Index (GSI) | *Different* `pk` + optional *different* `sk` | !!! info - The provisioned throughput settings of a global secondary index are separate from those of its base table + The provisioned throughput settings of a global secondary index are separate from those of its base table. ## Other DynamoDB tables can optionally enforce a **Time To Live (TTL)** on items in the table, such that they expire after that amount of time (guaranteed within +48 hours). -Dynamo exposes the time-ordered sequence of item-level changes on a given table via **DynamoDB Streams**. Reading change-data from Streams is slightly cheaper per request than reading the table itself (on pay per use BillingMode). The first 2.5M reads per month are free.
## Further Reading * An overview of the architecture of DynamoDB can be found in the [DynamoDB Paper](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) -* You can, if you so choose, use a SQL-like syntax to interface with DynamoDB via the recently-launched [PartiSQL support](https://aws.amazon.com/about-aws/whats-new/2020/11/you-now-can-use-a-sql-compatible-query-language-to-query-insert-update-and-delete-table-data-in-amazon-dynamodb/) +* You can, if you so choose, use a SQL-like syntax to interface with DynamoDB via [PartiSQL support](https://aws.amazon.com/about-aws/whats-new/2020/11/you-now-can-use-a-sql-compatible-query-language-to-query-insert-update-and-delete-table-data-in-amazon-dynamodb/)
@@ -126,7 +125,9 @@ Dynamo exposes the time-ordered sequence of item-level changes on a given table [^1]: The [CAP Theorem](https://en.wikipedia.org/wiki/CAP_theorem) is a computer science theorem that observes that a distributed datastore cannot guarantee all three of Consistency, Availability, and Partition Tolerance. -[^2]: If a query to an index with projection requests attribute values not in the projected values, it will incur twice the normal read cost, as the remaining attribute values must be fetched from the base table +[^2]: If a query to an index with projection requests attribute values not in the projected values, it will incur twice the normal read cost, as the remaining attribute values must be fetched from the base table. !!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 31, 2021_ diff --git a/docs/aws/services/ebs-pricing.md b/docs/aws/services/ebs-pricing.md index a13c9bc..d4f89be 100644 --- a/docs/aws/services/ebs-pricing.md +++ b/docs/aws/services/ebs-pricing.md @@ -1,10 +1,10 @@ title: EBS Pricing | Cloud Cost Handbook -[Amazon EBS Pricing Page](https://aws.amazon.com/ebs/pricing/){ .md-button } +[Amazon EBS Pricing Page](https://aws.amazon.com/ebs/pricing/){ .md-button target="_blank"} ## Summary -Amazon Elastic Block Storage (EBS) is Amazon's block storage offering that allows you to create "Volumes" which is the base primitive of everything related to EBS. There are multiple EBS "Volume Types" that offer different capabilities and have their own set of pricing. 
EBS costs are factored into the cost category of "EC2-Other" on your AWS bill which can oftentimes complicate understanding where these costs are coming from. +Amazon Elastic Block Store (EBS) is Amazon's block storage offering that allows you to create volumes, the base primitive of everything related to EBS. There are several EBS volume types, each with different capabilities and its own set of pricing. EBS costs are factored into the cost category of [EC2-Other](/aws/services/ec2-other-pricing) on your AWS bill, which can oftentimes complicate understanding where these costs are coming from. It is important to note that you are charged for the amount of _provisioned_ storage, not _utilized_ storage. So, for example, if you create a 20GB EBS Volume and only utilize 1GB of it, you are still charged for all 20GB. @@ -12,31 +12,33 @@ It is important to note that you are charged for the amount of _provisioned_ sto | Dimension | Description | |--------|--------| -| Volume Storage Hours | When you create an EBS Volume you allocate a certain amount of storage to it. Ultimately, the main cost of an EBS Volume is the result of the amount of hours you're using an EBS Volume and the size you allocate. | -| Volume Type | EBS has different types of Volume Types which are documented below. Each Volume Type has different rates. | -| Provisioned IOPS | Certain EBS Volume types (io1, io2) allow you to specify an amount of provisioned input/output operations per second which is abbreviated as IOPS and pronounced as "eye-ops". When using these volume types you are charged for the amount of provisioned IOPS even if you don't fully utilize them. | +| Volume Storage Hours | When you create an EBS volume, you allocate a certain amount of storage to it. Ultimately, the main cost of an EBS volume is the result of the number of hours you're using the volume and the size you allocate. | +| Volume Type | EBS has several volume types, which are documented below.
Each volume type has different rates. | +| Provisioned IOPS | Certain EBS volume types (io1, io2) allow you to specify an amount of provisioned input/output operations per second ([IOPS](/aws/concepts/io-operations/)). When using these volume types you are charged for the amount of provisioned IOPS even if you don't fully utilize them. | | Amazon EBS Snapshots | Amazon EBS Snapshots are a point-in-time copy of your block volume data. EBS Snapshots are stored incrementally, which means you are billed only for the changed blocks stored. | | EBS Snapshot API Requests | EBS charges you for the amount of API calls you make for snapshots. These are charged in increments of thousands of API requests. | ## Volume Types -Amazon EBS offers different Volume Types that have different pricing rates and functionality. Each EBS Volume Type is described below: +Amazon EBS offers different volume types that have different pricing rates and functionality. Each EBS volume type is described below: | Volume Type | Description | |------|-----| | General Purpose SSD (gp2, gp3) | General Purpose SSD (gp3) volumes offer cost-effective storage that is ideal for a broad range of workloads. | -| Provisioned IOPS (io1, io2) | Provisioned IOPS SSD (io1 and io2) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time. | +| Provisioned IOPS (io1, io2) | Provisioned IOPS SSD (io1 and io2) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9% of the time. 
| | Throughput Optimized HDD (st1) | Throughput Optimized HDD (st1) volumes offer magnetic storage for frequently accessed data. They are a good fit for large, sequential workloads. | | Cold HDD (sc1) | Cold HDD (sc1) volumes are the cheapest in comparison to other volume types. They are intended for infrequently accessed, large, sequential workloads. | ## Stranded Volumes -Oftentimes, EBS Volumes are created in conjunction with other AWS resources such as EC2 instances but are de-coupled from the lifecycle of those other resources. One common pattern we see is that developers will create EC2 instances with EBS Volumes attached but when they delete the EC2 instance, they assume that the EBS Volume is destroyed accordingly. In larger-scale environments with autoscaling, this problem can grow significantly as a part of an AWS bill. +Oftentimes, EBS volumes are created in conjunction with other AWS resources such as EC2 instances but are decoupled from the lifecycle of those other resources. One common pattern we see is that developers will create EC2 instances with EBS volumes attached but when they delete the EC2 instance, they assume that the EBS volume is destroyed accordingly. In larger-scale environments with Auto Scaling, this problem can grow significantly as a part of an AWS bill. -We recommend that you periodically profile for unattached or stranded EBS Volumes. +We recommend that you periodically profile for unattached or stranded EBS volumes.
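The provisioned-versus-utilized distinction above is simple arithmetic; here is a minimal sketch, using a hypothetical $0.08/GB-month rate (an illustrative placeholder, not a quoted AWS price):

```
# EBS bills on provisioned storage, not utilized storage.
# The rate below is a hypothetical placeholder, not a quoted AWS price.
def ebs_monthly_cost(provisioned_gb, rate_per_gb_month=0.08):
    return provisioned_gb * rate_per_gb_month

# A 20GB volume costs the same whether it holds 1GB or 20GB of data.
print(ebs_monthly_cost(20))
```

The function takes no "utilized" argument at all, which is exactly the point: utilization never enters the bill.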
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Sep 26, 2023_ diff --git a/docs/aws/services/ec2-other-pricing.md b/docs/aws/services/ec2-other-pricing.md index ed699ce..dba2be7 100644 --- a/docs/aws/services/ec2-other-pricing.md +++ b/docs/aws/services/ec2-other-pricing.md @@ -8,20 +8,22 @@ EC2-Other is a category of AWS costs that typically causes the greatest amount o |-----|-----| |EBS Volume Usage|Usage for [EBS Volumes](ebs-pricing.md).| |EBS Snapshot Usage|Usage for [EBS Snapshots](ebs-pricing.md).| -|CPU Credits from t2/t3/t4g EC2 instances|T-family EC2 Instances can carry potential CPU credit charges as described more below.| +|CPU Credits from t2/t3/t4g EC2 Instances|T-family EC2 Instances can carry potential CPU credit charges as described more below.| |NAT Gateway Usage|Hourly usage for NAT Gateways.| -|Data Transfer| | -|Idle Elastic IP Address usage|AWS charges you for unattached IP addresses. It's typically good hygiene to occasionally monitor for stranded resources and clean them up.| +|Data Transfer|Associated with transferring data in and out of EC2 instances.| +|Idle Elastic IP Address Usage|AWS charges you for unattached IP addresses. It's typically good hygiene to occasionally monitor for stranded resources and clean them up.| ## Stranded Resources -Unused or stranded EBS Volumes and IP Addresses can add up over time especially if these resources are created automatically as part of an autoscaling service where they're spun up but not down. You should consider occasionally auditing your unattached EBS Volumes and IP addresses to see if you can clean them up to save costs. 
+Unused or stranded EBS Volumes and IP Addresses can add up over time, especially if these resources are created automatically as part of an Auto Scaling service where they're spun up but not down. You should consider occasionally auditing your unattached EBS Volumes and IP addresses to see if you can clean them up to save costs. -## What are t2/t3/T4g CPU credit charges? +## What Are T2/T3/T4g CPU Credit Charges? -T2, T3 and T4g instances have a concept of "Unlimited mode" whereby you are charged a per-vCPU hour for bursting into this CPU usage. If you are leveraging these EC2 Instance Types with `unlimited` mode enabled, you should consider keeping an eye on these costs. Depending on how much your costs trend here, you may want to consider "[rightsizing](../concepts/rightsizing.md)" to a different instance type that is allocated additional CPU. +T2, T3, and T4g instances have a concept of Unlimited mode, whereby you are charged a per-vCPU-hour rate for bursting into this CPU usage. If you are leveraging these EC2 Instance Types with `unlimited` mode enabled, you should consider keeping an eye on these costs. Depending on how much your costs trend here, you may want to consider [rightsizing](../concepts/rightsizing.md) to a different instance type that is allocated additional CPU.
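A rough sketch of how an Unlimited mode charge accrues, assuming a hypothetical flat rate per surplus vCPU-hour (the function names, baseline figures, and $0.05 rate below are illustrative assumptions, not AWS's published values):

```
# Surplus vCPU-hours accrue when average utilization exceeds the baseline.
# The $0.05/vCPU-hour rate is a hypothetical placeholder, not a quoted AWS price.
def surplus_vcpu_hours(avg_util, baseline_util, vcpus, hours):
    return max(0.0, (avg_util - baseline_util) * vcpus * hours)

def unlimited_mode_charge(avg_util, baseline_util, vcpus, hours, rate=0.05):
    return surplus_vcpu_hours(avg_util, baseline_util, vcpus, hours) * rate

# 2 vCPUs averaging 30% against a 20% baseline over a 730-hour month:
print(round(unlimited_mode_charge(0.30, 0.20, 2, 730), 2))
```

If average utilization stays at or below the baseline, no surplus accrues and the charge is zero.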
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 11, 2021_ diff --git a/docs/aws/services/ec2-pricing.md b/docs/aws/services/ec2-pricing.md index c071771..e1ae7fc 100644 --- a/docs/aws/services/ec2-pricing.md +++ b/docs/aws/services/ec2-pricing.md @@ -1,10 +1,10 @@ title: EC2 Pricing | Cloud Cost Handbook -[Amazon EC2 Pricing Page](https://aws.amazon.com/ec2/pricing/){ .md-button } +[Amazon EC2 Pricing Page](https://aws.amazon.com/ec2/pricing/){ .md-button target="_blank"} ## Summary -Amazon EC2 (Elastic Cloud Compute) is Amazon’s most popular service and usually one of the top cost centers for most companies. Amazon EC2 allows customers to create virtual private servers and has different pricing depending on the “instance type” you use. Instance types are grouped into families with varying generations. Each instance type has a different mix of underlying hardware, allocated resources and as a result: pricing. Additionally, depending on the underlying software running on the EC2 instance you may be charged different rates. +Amazon EC2 (Elastic Compute Cloud) is Amazon’s most popular service and usually one of the top cost centers for most companies. Amazon EC2 allows customers to create virtual private servers and has different pricing depending on the instance type you use. Instance types are grouped into families with varying generations. Each instance type has a different mix of underlying hardware, allocated resources, and, as a result, pricing. Additionally, depending on the underlying software running on the EC2 instance you may be charged different rates.
## Pricing Dimensions @@ -12,32 +12,29 @@ Amazon EC2 (Elastic Cloud Compute) is Amazon’s most popular service and usuall |------|-------| | Instance Type Usage | EC2 instance types are billed on one second increments, with a minimum of 60 seconds. For certain instance types with pre-installed software, you are billed in increments of hours. | | Instance Type Lifecycle | EC2 has different lifecycle types - the two most often used are on-demand and spot. These concepts are discussed more in-depth below. | -| AMI | AMI stands for Amazon Machine Images. Depending on the AMI you use (i.e., Linux vs Windows) you potentially pay an additional amount of money on top of the instance type base usage. | +| Amazon Machine Images (AMI) | Depending on the AMI you use (e.g., Linux vs Windows) you potentially pay an additional amount of money on top of the instance type base usage. | ## On-Demand vs Spot -By default, EC2 instances are launched in "on-demand" mode and charged accompanying on-demand rates which are the most expensive. AWS also offers "Spot" instances which can offer significant cost savings by using unused additional compute capacity. However, Spot tends to only work for fault-tolerant workloads as AWS can pre-empt and terminate these instances within two minutes if need be. +By default, EC2 instances are launched in On-Demand mode and charged accompanying on-demand rates (which are the most expensive). AWS also offers Spot Instances, which can offer significant cost savings by using unused additional compute capacity. However, Spot tends to only work for fault-tolerant workloads as AWS can preempt and terminate these instances within two minutes if need be. -Depending on your application's needs, you can consider using Spot instances for significant cost savings in the event you are comfortable with these instances being terminated.
In general, your application's architecture should be comfortable with either 1) there being no spot instances available or 2) these instances being terminated. +Depending on your application's needs, you can consider using Spot Instances for significant cost savings in the event you are comfortable with these instances being terminated. In general, your application's architecture should be comfortable with either (1) there being no Spot Instances available or (2) these instances being terminated. -## Autoscaling -EC2 [autoscaling](../../concepts/autoscaling/) is provided by a primitive named [Autoscaling Groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html). Autoscaling Groups have [lifecycle hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) to accommodate complex workflows regarding instance creation or termination and can support multiple instance types or spot instances using a [Mixed Instance Policy](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-purchase-options.html). New instances are added based on a launch template (or launch config). This can be a challenge for organizations without good practices around creating machine images or automation for standing up applications. +## Auto Scaling +EC2 [Auto Scaling](../../concepts/autoscaling/) is provided by a primitive named [Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html). Auto Scaling groups have [lifecycle hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) to accommodate complex workflows regarding instance creation or termination and can support multiple instance types or Spot Instances using a [Mixed Instance Policy](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-purchase-options.html). New instances are added based on a launch template (or launch config). 
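As noted in the pricing dimensions above, EC2 meters usage per second with a 60-second minimum per run; a minimal sketch of that billing floor (the hourly rate is a hypothetical placeholder, not a quoted AWS price):

```
# EC2 bills per second with a 60-second minimum per instance run.
def billed_seconds(run_seconds, minimum=60):
    return max(run_seconds, minimum)

def on_demand_cost(run_seconds, hourly_rate):
    # hourly_rate is a hypothetical placeholder, not a quoted AWS price
    return billed_seconds(run_seconds) / 3600 * hourly_rate

print(billed_seconds(45))   # a 45-second run is billed as 60 seconds
print(billed_seconds(120))  # beyond the minimum, actual seconds are billed
```

Note that instance types with certain pre-installed software bill in hourly increments instead, so this per-second floor does not apply to them.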
This can be a challenge for organizations without good practices around creating machine images or automation for standing up applications. ## Rightsizing -Rightsizing refers to the process of ensuring that you're using the proper instance type suited for your application or workload. For example if you're using the largest instance type in a particular family but not using the CPU, Storage and Memory allocated to it fully you may be overpaying for what you need. Rightsizing is usually a manual process that involves engineering time for looking at a combination of application-level performance metrics like application CPU and Memory consumption and infrastructure-related attributes like what kind of underlying CPU powers an instance type. +Rightsizing refers to the process of ensuring that you're using the proper instance type suited for your application or workload. For example, if you're using the largest instance type in a particular family but not using the CPU, storage, and memory allocated to it fully, you may be overpaying for what you need. Rightsizing is usually a manual process that involves engineering time for looking at a combination of application-level performance metrics like application CPU and memory consumption and infrastructure-related attributes like what kind of underlying CPU powers an instance type. ## Savings Plans -EC2 Instances are covered by AWS Savings Plans. Savings Plans are covered more in depth as a general concept [here](/aws/concepts/savings-plans/). As it relates to EC2, Savings Plans are preferable as they present the same savings as Reserved Instances but aren't constrained to a single instance type. +EC2 instances are covered by AWS Savings Plans. Savings Plans are covered more in-depth as a general concept [here](/aws/concepts/savings-plans/). As it relates to EC2, Savings Plans are preferable as they present the same savings as Reserved Instances but aren't constrained to a single instance type. 
## Reserved Instances -EC2 Instances are covered by AWS Reserved Instances. Reserved Instances are covered more in depth as a general concept [here](/aws/concepts/reserved-instances/). As it relates to EC2, Reserved Instances aren't preferred as they present the same savings as Savings Plans but are constrained to a single instance type where as Savings Plans give greater flexibility. - -## Generational Upgrades -EC2 instance types are grouped into families (discussed below) with multiple generations. For example a `m4.4xlarge` is of family type `m` and generation `4`. The next generation for the same instance type would be `m5.4xlarge`. Typically, as cloud infrastructure providers release new families it's cheaper and more performant to run the later generation instance types. Upgrade instances from one generation to another can be a major area of cost savings. Generation upgrades usually result in between 5% and 10% cost savings per generation and varies per family. +EC2 instances are covered by AWS Reserved Instances. Reserved Instances are covered more in-depth as a general concept [here](/aws/concepts/reserved-instances/). As it relates to EC2, Reserved Instances aren't preferred as they present the same savings as Savings Plans but are constrained to a single instance type whereas Savings Plans give greater flexibility. ## Instance Type Families -EC2 Instance Types are organized into "Families" and each family can have multiple "Generations". By looking at each instance type you can infer its Family and Generation from the instance type name. For example, a `c5.4xlarge` is the `c` Family and `5th` Generation. Below is a table of EC2 Instance Families and simple descriptions: +EC2 Instance Types are organized into families and each family can have multiple generations. By looking at each instance type you can infer its family and generation from the instance type name. For example, a `c5.4xlarge` is the `c` family and `5th` generation. 
Below is a table of EC2 Instance Families and simple descriptions: | Family | Description | @@ -58,7 +55,12 @@ EC2 Instance Types are organized into "Families" and each family can have multip | x | Lowest price-per-GB RAM instances | | z | Highest core frequency | +## Generational Upgrades +Typically, as cloud infrastructure providers release new generations for families, it's cheaper and more performant to run the later generation instance types. Upgrading instances from one generation to another can be a major area of cost savings. Generation upgrades usually result in between 5% and 10% cost savings per generation and vary per family. +
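The family/generation naming convention described above is mechanical enough to parse; a minimal sketch (the regex is an illustration and may not cover every instance type AWS offers):

```
import re

# Parse "c5.4xlarge" into family "c", generation 5, size "4xlarge".
# Some types carry attribute suffixes after the generation (e.g. the "g" in t4g).
def parse_instance_type(name):
    match = re.match(r"([a-z]+)(\d+)([a-z0-9-]*)\.(.+)", name)
    if not match:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, _attrs, size = match.groups()
    return family, int(generation), size

print(parse_instance_type("c5.4xlarge"))  # ('c', 5, '4xlarge')
print(parse_instance_type("m4.4xlarge"))  # ('m', 4, '4xlarge')
```

Comparing the generation number across an otherwise identical family and size is the starting point for the generational upgrades discussed above.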
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Aug 14, 2021_ diff --git a/docs/aws/services/ecr-pricing.md b/docs/aws/services/ecr-pricing.md index 038a048..e2167c3 100644 --- a/docs/aws/services/ecr-pricing.md +++ b/docs/aws/services/ecr-pricing.md @@ -1,42 +1,43 @@ title: ECR Pricing | Cloud Cost Handbook -[Amazon ECR Pricing Page](https://aws.amazon.com/ecr/pricing/){ .md-button } +[Amazon ECR Pricing Page](https://aws.amazon.com/ecr/pricing/){ .md-button target="_blank"} ## Summary -Amazon Elastic Container Registry (ECR) is a fully managed container registry that allows you to store container images. You can create as many "Repositories" as you'd like that are free. As you push container "Images" to your repository, you're charged for the storage of these images which can accrue over time. Additionally, ECR charges different data transfer rates for private versus public repositories. +Amazon Elastic Container Registry (ECR) is a fully managed container registry that allows you to store container images. You can create as many repositories as you'd like, free of charge. As you push container images to your repository, you're charged for the storage of these images, which can accrue over time. Additionally, ECR charges different data transfer rates for private vs public repositories.
## Pricing Dimensions |Dimension|Description| |----|----| -|Container Image Storage|Amazon ECR charges a rate per month for the amount of storage per GB you store for container images.| -|Data Transfer|Amazon ECR charges different rates for data transfer from public and private repositories.| +|Container Image Storage|ECR charges a monthly rate per GB of storage used by your container images.| +|Data Transfer|ECR charges different rates for data transfer from public and private repositories.| ## Storage Costs per ECR Repository Determining the cost per Container Repository can be a lot of effort, especially if you have a large quantity of images. To calculate the storage cost per container repository: -* List all of your container images -* Collect the image digest from each -* Determine just the unique digests of the layers in your container repository +* List all of your container images. +* Collect the image digest from each. +* Determine just the unique digests of the layers in your container repository. * Get the size of each unique digest. If you prefer not to do this manually yourself, [Vantage](https://www.vantage.sh/) will compute the size and corresponding cost of all repositories automatically when you connect an AWS account. ## Lifecycle Policies -ECR stores every container image you push to a registry by default. Over time, the storage of all of these images can add up. Amazon offers a primitive called a "Lifecycle Policy" that allows you to set conditions for having Amazon clean up images on your behalf.
There are two types of lifecycle policies: +ECR stores every container image you push to a registry by default. Over time, the storage of all of these images can add up. Amazon offers a primitive called a lifecycle policy that allows you to set conditions for having Amazon clean up images on your behalf. There are two types of lifecycle policies: |Lifecycle Policy|Description| |--|--| -|imageCountMoreThan|ECR allows you to define a certain number of images to retain and anything over that count will be cleaned up. For example if you set a Lifecycle Policy with a imageCountMoreThan value of 10, your most recent 10 images will always be kept.| -|sinceImagePushed|ECR allows you to set lifecycle policies with a value of sinceImagePushed which has a value of a certain number of days. So for example if you have a Lifecycle Policy applied with a sinceImagePushed value of 7, ECR will delete images as often as they are older than 7 days.| +|`imageCountMoreThan`|ECR allows you to define a certain number of images to retain and anything over that count will be cleaned up. For example, if you set a lifecycle policy with an `imageCountMoreThan` value of 10, your most recent 10 images will always be kept.| +|`sinceImagePushed`|ECR allows you to set lifecycle policies with a value of `sinceImagePushed`, which has a value of a certain number of days. So, for example, if you have a lifecycle policy applied with a `sinceImagePushed` value of seven, ECR will delete images when they are older than seven days.| -__Note__: that when you apply a Lifecycle Policy, it is evaluated immediately. So if you have 500 images in a repository and impose a lifecycle policy of 10 as soon as that policy is applied ECR will delete the 490 oldest images. +!!! Note + When you apply a lifecycle policy, it is evaluated immediately. So, if you have 500 images in a repository and impose a lifecycle policy of 10, as soon as that policy is applied ECR will delete the 490 oldest images.
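The per-repository storage steps listed earlier reduce to summing the sizes of unique layer digests, since images in a repository share layers; a minimal sketch with made-up digests and sizes:

```
# Billable repository storage is the sum of unique layer sizes.
# Digests and sizes below are made-up illustrative values.
def repository_storage_bytes(images):
    unique_layers = {}  # digest -> size in bytes
    for layers in images:
        for digest, size in layers:
            unique_layers[digest] = size
    return sum(unique_layers.values())

image_a = [("sha256:aaa", 50_000_000), ("sha256:bbb", 10_000_000)]
image_b = [("sha256:aaa", 50_000_000), ("sha256:ccc", 5_000_000)]  # shares a layer
print(repository_storage_bytes([image_a, image_b]))  # 65000000, not 115000000
```

Because the shared layer is counted once, the repository's billable storage is smaller than the sum of the individual image sizes.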
### Example `imageCountMoreThan` Lifecycle Policy -Here's an example of how to impose a Lifecycle Policy via the AWS CLI using the value of imageCountMoreThan: +Here's an example of how to impose a lifecycle policy via the AWS CLI using the value of `imageCountMoreThan`: ``` aws ecr put-lifecycle-policy \
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 11, 2021_ diff --git a/docs/aws/services/ecs-and-fargate-pricing.md b/docs/aws/services/ecs-and-fargate-pricing.md index c106d12..5353491 100644 --- a/docs/aws/services/ecs-and-fargate-pricing.md +++ b/docs/aws/services/ecs-and-fargate-pricing.md @@ -1,34 +1,36 @@ title: ECS & Fargate Pricing | Cloud Cost Handbook [ECS Pricing Page](https://aws.amazon.com/ecs/pricing/){ .md-button } -[Fargate Pricing Page](https://aws.amazon.com/fargate/pricing/){ .md-button } +[Fargate Pricing Page](https://aws.amazon.com/fargate/pricing/){ .md-button target="_blank"} ## Summary -Elastic Container Service (ECS) allows you to run docker containers through a primitive named a "Task". Tasks ultimately run on EC2 instances which are either managed by you (ECS on EC2) or fully managed by AWS (Fargate). +Amazon Elastic Container Service (ECS) allows you to run Docker containers through a primitive named a task. Tasks ultimately run on EC2 instances which are either managed by you (ECS on EC2) or fully managed by AWS (Fargate). -There is no additional charge to you when using ECS on self-managed EC2 as you're just paying for EC2 instances that you create and manage.
Fargate charges you for the vCPU and Memory for an ECS task or EKS Pod and you pay a premium for managing the underlying EC2 instances. ## Fargate Pricing Dimensions |Dimension|Description| |---|---| -|vCPU Hours|When configuring a Fargate Task or EKS Pod you assign a certain amount vCPU and are charged a corresponding per-hour vCPU rate.| -|GB Memory Hours|When configuring a Fargate Task or EKS Pod you assign a certain amount GB of Memory and are charged a corresponding per-hour GB of Memory rate.| +|vCPU Hours|When configuring a Fargate task or EKS Pod you assign a certain amount of vCPU and are charged a corresponding per hour vCPU rate.| +|GB Memory Hours|When configuring a Fargate task or EKS Pod you assign a certain amount of GB of memory and are charged a corresponding per hour GB of memory rate.| ### Fargate Spot -Fargate has the ability to run in a Spot capacity which is conceptually the same premise as [EC2 Spot](/aws/services/ec2-pricing/#on-demand-vs-spot) - allowing you to run Tasks at up to a 70% discount off the Fargate on-demand price. +Fargate has the ability to run in a Spot capacity which is conceptually the same premise as [EC2 Spot](/aws/services/ec2-pricing/#on-demand-vs-spot), allowing you to run tasks at up to a 70% discount off the Fargate on-demand price. When the capacity for Fargate Spot is available, you will be able to launch tasks based on your specified request. When AWS needs the capacity back, tasks running on Fargate Spot will be interrupted with two minutes of notification. If the capacity for Fargate Spot stops being available, Fargate will scale down tasks running on Fargate Spot while maintaining any regular tasks you are running. -### Fargate vs self-managed EC2 on ECS or EKS +### Fargate vs Self-Managed EC2 on ECS or EKS -Fargate charges a significant premium for managing the underlying nodes. Additionally, Fargate has varying degrees of vCPU performance that differ depending on the Task. 
As a result, Fargate can have pitfalls relative to self-managed ECS or EKS on EC2 beyond just the additional costs. +Fargate charges a significant premium for managing the underlying nodes. Additionally, Fargate has varying degrees of vCPU performance that differ depending on the task. As a result, Fargate can have pitfalls relative to self-managed ECS or EKS on EC2 beyond just the additional costs. -For a more in-depth article for seeing how Fargate is priced relative to self-managed EC2, please read the following blog post for [understanding Fargate pricing](https://www.vantage.sh/blog/fargate-pricing). +For a more in-depth look at how Fargate is priced relative to self-managed EC2, please read the following blog post: [AWS Fargate Pricing Explained](https://www.vantage.sh/blog/fargate-pricing).
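Fargate's two pricing dimensions combine per task; a minimal sketch of the math, with placeholder rates (illustrative assumptions, not quoted AWS prices):

```
# A Fargate task is billed for vCPU-hours plus GB-hours of memory.
# Rates below are hypothetical placeholders, not quoted AWS prices.
def fargate_task_cost(vcpus, memory_gb, hours,
                      vcpu_hour_rate=0.04, gb_hour_rate=0.004):
    return hours * (vcpus * vcpu_hour_rate + memory_gb * gb_hour_rate)

# A 1 vCPU / 2GB task running for 24 hours:
print(round(fargate_task_cost(1, 2, 24), 3))
```

Comparing this figure against the cost of the EC2 instances the same tasks would occupy is one way to quantify the Fargate management premium discussed above.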
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Aug 8, 2021_ diff --git a/docs/aws/services/efs-pricing.md b/docs/aws/services/efs-pricing.md new file mode 100644 index 0000000..588d6c1 --- /dev/null +++ b/docs/aws/services/efs-pricing.md @@ -0,0 +1,36 @@ +title: EFS Pricing | Cloud Cost Handbook + +[Amazon EFS Pricing Page](https://aws.amazon.com/efs/pricing/){ .md-button target="_blank"} + +## Summary + +Amazon Elastic File System (EFS) is a scalable elastic file storage system, where storage capacity scales up and down automatically as files are added and removed. Some use cases include containerized and serverless applications, big data analytics, development and testing, database backups, and machine learning training. + +## Pricing Dimensions + +| Dimension | Description | +|--------|--------| +|Storage Classes|See the [Storage Classes](#storage-classes) section for more information.| +|File Systems|There are two file systems to choose from—EFS Regional File System (Multi-AZ) and EFS One Zone. With the Regional File System option, files are stored across at least three Availability Zones (AZ). For files where availability and durability are less important, One Zone stores files in just one AZ within an AWS region, at a much lower storage price.| +|Throughput Modes|There are two throughput modes—Elastic Throughput mode and Provisioned Throughput mode. Elastic Throughput mode is recommended for unpredictable peak throughput needs or spiky throughput usage. Use Provisioned Throughput for high peak throughput capacity.
Elastic Throughput mode charges for reads and writes per GB transferred whereas Provisioned Throughput charges are based on MB/s. Also, Infrequent Access storage is more expensive using Provisioned Throughput.| +|Storage|Charges for storage vary depending on your region, as well as your choice of storage class, file system, and throughput mode.| +|Data Transfer|With Elastic Throughput mode, you are charged for reads and writes per GB transferred. You are also charged for tiering between storage classes. Charges for writes are more expensive in the cost-optimized storage classes.| + +## Storage Classes + +Amazon EFS offers three storage classes, each with different pricing rates and functionality. Availability and cost are the tradeoffs: the more available the data is, the higher the storage costs. Each EFS storage class is described below: + +| Storage Class | Description | +|------|-----| +|EFS Standard|Standard is the high-speed, low-latency option for regularly accessed or modified data workloads.| +|EFS Infrequent Access|Providing the same features, durability, throughput, and IOPS scalability as Standard, the Infrequent Access class is ideal for workloads where the “sub-millisecond latencies” of Standard are not needed. Use this class for data that is accessed a few times a quarter.| +|EFS Archive|Just like Infrequent Access, Archive provides the same features, durability, throughput, and IOPS scalability as Standard. Archive is recommended for workloads where data is accessed a few times a year.| + +Make use of EFS Lifecycle Management to set lifecycle policies to transition files into lower-cost storage classes after periods of no use. + 
+ +!!! Contribute + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Feb 5, 2024_ diff --git a/docs/aws/services/elasticache-pricing.md b/docs/aws/services/elasticache-pricing.md index 3cac37f..715ab91 100644 --- a/docs/aws/services/elasticache-pricing.md +++ b/docs/aws/services/elasticache-pricing.md @@ -1,10 +1,10 @@ title: ElastiCache Pricing | Cloud Cost Handbook -[Amazon ElastiCache Pricing Page](https://aws.amazon.com/elasticache/pricing/){ .md-button } +[Amazon ElastiCache Pricing Page](https://aws.amazon.com/elasticache/pricing/){ .md-button target="_blank"} ## Summary -Amazon ElastiCache allows you to set up, run, and scale popular open-source compatible in-memory data stores like Redis or Memcached. ElastiCache ultimately runs atop EC2 instances with pre-configured software and are prefixed with `"cache."` and are referred to as Nodes. +Amazon ElastiCache allows you to set up, run, and scale popular open-source compatible in-memory data stores, like Redis or Memcached. ElastiCache ultimately runs atop EC2 instances with pre-configured software; these instances are prefixed with `cache.` and are referred to as Nodes. ## Pricing Dimensions @@ -16,7 +16,7 @@ Amazon ElastiCache allows you to set up, run, and scale popular open-source comp ## Reserved Instances ElastiCache Nodes do have Reserved Instances that can give you significant savings. Reserved Instances are covered as a general concept found [here](../concepts/reserved-instances.md). -Typically, as ElastiCache nodes remain on for longer durations and aren't members of auto-scaling groups, they are good candidates for cost savings via Reserved Instances. +Typically, as ElastiCache Nodes remain on for longer durations and aren't members of Auto Scaling groups, they are good candidates for cost savings via Reserved Instances.
## Savings Plans @@ -25,4 +25,6 @@ ElastiCache Nodes are **not** covered under AWS Savings Plans.
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 11, 2021_ diff --git a/docs/aws/services/elasticsearch-pricing.md b/docs/aws/services/elasticsearch-pricing.md index 933a8b9..3860033 100644 --- a/docs/aws/services/elasticsearch-pricing.md +++ b/docs/aws/services/elasticsearch-pricing.md @@ -1,16 +1,16 @@ title: Elasticsearch Service Pricing | Cloud Cost Handbook -[Amazon Elasticsearch Service Pricing Page](https://aws.amazon.com/elasticsearch-service/pricing/){ .md-button } +[Amazon Elasticsearch Service Pricing Page](https://aws.amazon.com/elasticsearch-service/pricing/){ .md-button target="_blank"} ## Summary -Amazon Elasticsearch Service is a full-managed service which runs [Elasticsearch](https://www.elastic.co/elastic-stack/) which is used primarily for querying JSOn based search and analytics data. Amazon Elasticsearch Service is billed per instance for the amount of EBS storage attached to the instance and the type of instance which is used to run the service. +Amazon Elasticsearch Service is a fully managed service that runs [Elasticsearch](https://www.elastic.co/elastic-stack/), which is used primarily for querying JSON-based search and analytics data. Amazon Elasticsearch Service is billed per instance for the amount of EBS storage attached to the instance and the type of instance that is used to run the service.
## Pricing Dimensions |Dimension|Description| |----|----| -|Instance Type Usage|Elasticsearch instance types are billed at an hourly rate and charged that hourly rate on a per-second basis for your usage.| +|Instance Type Usage|Elasticsearch instance types are billed at an hourly rate and charged that hourly rate on a per second basis for your usage.| |Attached Storage|Elasticsearch allows you to attach storage to the instances either as General Purpose Storage or Provisioned IOPS storage. Behind the scenes these are just managed EBS Volumes that Elasticsearch orchestrates on your behalf. However, on your monthly AWS bill these charges will show up under the Elasticsearch service and not under the EBS service. There are also options for high performance local SSD disks for storage optimized instances.| ## Storage Optimized Instances @@ -23,4 +23,6 @@ As Elasticsearch instances are **not** covered by [AWS Savings Plans](/aws/conce
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 19, 2021_ diff --git a/docs/aws/services/elb-pricing.md b/docs/aws/services/elb-pricing.md index c990c06..e78b5d6 100644 --- a/docs/aws/services/elb-pricing.md +++ b/docs/aws/services/elb-pricing.md @@ -1,10 +1,10 @@ title: Elastic Load Balancer Pricing | Cloud Cost Handbook -[Amazon ELB Pricing Page](https://aws.amazon.com/elasticloadbalancing/pricing/){ .md-button } +[Amazon ELB Pricing Page](https://aws.amazon.com/elasticloadbalancing/pricing/){ .md-button target="_blank"} ## Summary -Amazon Elastic Load Balancer (ELB) is a service which distributes traffic from a single endpoint (public or private) to one or many private resources. Most commonly an Elastic Load Balancer will be exposed to the public internet and will distribute the incoming traffic to several app servers (usually running on EC2 or ECS). Elastic Load Balancers can also be used to distribute private traffic from one service to another. There are different options for the type of ELB and they are priced differently and come with different feature sets. +Amazon Elastic Load Balancer (ELB) is a service that distributes traffic from a single endpoint (public or private) to one or many private resources. Most commonly, an Elastic Load Balancer will be exposed to the public internet and will distribute the incoming traffic to several app servers (usually running on EC2 or ECS). Elastic Load Balancers can also be used to distribute private traffic from one service to another. 
There are different options for the type of ELB and they are priced differently and come with different feature sets. ## Pricing Dimensions @@ -14,35 +14,35 @@ Amazon Elastic Load Balancer (ELB) is a service which distributes traffic from a | Load Balancer Data Processed | Each type of load balancer has a formula for how the data processed by the load balancer is turned into an additional hourly charge. | ## Application Load Balancer -Application Load Balancers (ALB) are useful for distributing layer 7 (HTTP, HTTPS, gRPC) traffic to application servers or other backends. ALBs have a standard hourly rate per region and a formula for calculating "LCU"-hours. The dimensions for calculating LCU are: +Application Load Balancers (ALB) are useful for distributing layer 7 (HTTP, HTTPS, gRPC) traffic to application servers or other backends. ALBs have a standard hourly rate per region and a formula for calculating LCU-hours. The dimensions for calculating LCU are: | Dimension | Description | | ---------- | -- | | New Connections | A single LCU is 25 new connections per second. | -| Active connections | A single LCU is 3,000 active connections per minute. | -| Processed bytes | A single LCU is 1 GB per hour for EC2 instances, containers and IP addresses as targets and 0.4 GB per hour for Lambda functions as targets. | -| Rule evaluations | A single LCU is 1,000 rule evaluations per second. | +| Active Connections | A single LCU is 3,000 active connections per minute. | +| Processed Bytes | A single LCU is 1GB per hour for EC2 instances, containers, and IP addresses as targets and 0.4GB per hour for Lambda functions as targets. | +| Rule Evaluations | A single LCU is 1,000 rule evaluations per second. | Whichever of these dimensions produces the highest LCU for an hour is what is used to create the charge for LCU-hour. ## Network Load Balancer -Network Load Balancers (NLB) are used for forwarding layer 4 traffic (TCP, UDP, TLS) to any other resource with an IP address.
NLBs have a standard hourly rate per region and a formula for calculating "NLCU"-hours depending on the type of network traffic. The dimensions for calculating NCLU are: +Network Load Balancers (NLB) are used for forwarding layer 4 traffic (TCP, UDP, TLS) to any other resource with an IP address. NLBs have a standard hourly rate per region and a formula for calculating NLCU-hours depending on the type of network traffic. The dimensions for calculating NLCU are: | Dimension | TCP | UDP | TLS | | ----------- | ----------- |-----|-----| | New Connection or Flow | 800 | 400 | 50 | | Active Connection or Flow | 100,000 | 50,000 | 3,000 | -| Processed bytes | 1GB | 1GB | 1GB | +| Processed Bytes | 1GB | 1GB | 1GB | ## Gateway Load Balancer -Gateway Load Balancers are used to proxy traffic through third-party virtual appliances which support GENEVE. GLBs have a standard hourly rate per region and a formula for calculating "GLCU"-hours. The dimensions for calculating GLCU are: +Gateway Load Balancers are used to proxy traffic through third-party virtual appliances that support GENEVE. GLBs have a standard hourly rate per region and a formula for calculating GLCU-hours. The dimensions for calculating GLCU are: | Dimension | Description | | ---------- | -- | | New Connections | A single GLCU is 600 new connections per second. | -| Active connections | A single LCU is 60,000 active connections per minute. | -| Processed bytes | A single LCU is 1 GB per hour for EC2 instances, containers and IP addresses as targets and 0.4 GB per hour for Lambda functions as targets. | +| Active Connections | A single GLCU is 60,000 active connections per minute. | +| Processed Bytes | A single GLCU is 1GB of traffic processed by the load balancer per hour. | ## Classic Load Balancer Classic load balancers are the original type of load balancer which has since been superseded by ALB and NLB.
CLBs support both layer 7 and layer 4 traffic. CLBs have a standard hourly rate per region and a standard per GB rate per region for traffic processed. @@ -50,4 +50,6 @@ Classic load balancers are the original type of load balancer which has since be
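The LCU-hour mechanics above have the same shape for every load balancer type: convert each dimension into capacity units and bill only the highest. A minimal sketch using the ALB dimensions from the table (the per-LCU rates come from the table; the usage numbers are made up for illustration):

```python
# Illustrative ALB LCU math: each dimension is converted into LCUs for the
# hour and whichever dimension yields the most LCUs is what gets billed.
# Dimension rates are from the table above; usage figures are invented.

ALB_LCU_DIMENSIONS = {
    "new_connections_per_sec": 25,       # 1 LCU = 25 new connections/second
    "active_connections_per_min": 3000,  # 1 LCU = 3,000 active connections/minute
    "processed_gb_per_hour": 1,          # 1 LCU = 1GB/hour (EC2/container/IP targets)
    "rule_evaluations_per_sec": 1000,    # 1 LCU = 1,000 rule evaluations/second
}

def alb_lcus_for_hour(usage: dict) -> float:
    """Return billed LCUs for the hour: the max across all dimensions."""
    return max(usage[dim] / per_lcu for dim, per_lcu in ALB_LCU_DIMENSIONS.items())

usage = {
    "new_connections_per_sec": 100,      # 100 / 25    = 4.0 LCUs
    "active_connections_per_min": 4500,  # 4500 / 3000 = 1.5 LCUs
    "processed_gb_per_hour": 2.5,        # 2.5 / 1     = 2.5 LCUs
    "rule_evaluations_per_sec": 500,     # 500 / 1000  = 0.5 LCUs
}
print(alb_lcus_for_hour(usage))  # 4.0 (new connections dominate this hour)
```

The same max-over-dimensions approach applies to NLCU and GLCU; only the per-unit dimension rates differ.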
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 30, 2021_ diff --git a/docs/aws/services/emr-pricing.md b/docs/aws/services/emr-pricing.md index ad0ba96..69dd043 100644 --- a/docs/aws/services/emr-pricing.md +++ b/docs/aws/services/emr-pricing.md @@ -1,12 +1,12 @@ title: EMR Pricing | Cloud Cost Handbook -[Amazon EMR Pricing Page](https://aws.amazon.com/emr/pricing/){ .md-button } +[Amazon EMR Pricing Page](https://aws.amazon.com/emr/pricing/){ .md-button target="_blank"} ## Summary Amazon Elastic Map Reduce (EMR) is software infrastructure for running map reduce and other big data workloads. It supports open-source frameworks like Apache Spark, projects like Hadoop, and SQL tools like Presto. -EMR runs on top of EC2 or EKS instances and also has a serverless option. EMR is available for a wide variety of instances which allows for tight optimization of workloads, for example choosing a compute optimized vs. a memory optimized instance for Spark vs. Hive. +EMR runs on top of EC2 or EKS instances and also has a serverless option. EMR is available for a wide variety of instances which allows for tight optimization of workloads, for example choosing a compute-optimized vs a memory-optimized instance for Spark vs Hive. To see which EC2 instances are available for EMR, you can add the `On EMR` and `EMR Cost` columns on [ec2instances.info](https://instances.vantage.sh). @@ -17,12 +17,14 @@ EMR is billed differently based on the underlying compute service. 
| Dimension | Description | | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Running on EC2 | EMR is billed as an additional cost per hour for the instance. For example, a [m6g.16xlarge](https://instances.vantage.sh/aws/ec2/m6g.16xlarge.html) has an EMR cost of ~$0.60 per hour. | -| Running on EKS | Running on EKS involves 2 dimensions: vCPUs and GiB of memory, with a minimum charge of 1 minute. | +| Running on EKS | Running on EKS involves 2 dimensions: vCPUs and GiB of memory, with a minimum charge of one minute. | | Serverless | Serverless has Compute, Memory, and Storage dimensions. | ## EMR Optimization -Every EMR instance above can also be run as a spot instance, which is likely to be appropriate for "fault tolerant" workloads on EMR. As of 2023, it is also possible to use Spot Fleets with the [price-capacity-optimized allocation strategy](https://aws.amazon.com/about-aws/whats-new/2023/06/amazon-emr-price-allocation-ec2-spot-instances/) for running EMR workloads. Lastly, data transfer charges are likely accumulating from the movement of your big data through the EMR system. You can dramatically reduce these charges, or even eliminate them, by connecting to EMR using [interface VPC endpoints](https://docs.aws.amazon.com/emr/latest/ManagementGuide/interface-vpc-endpoint.html). +Every EMR instance above can also be run as a Spot Instance, which is likely to be appropriate for fault-tolerant workloads on EMR. As of 2023, it is also possible to use Spot Fleets with the [price-capacity-optimized allocation strategy](https://aws.amazon.com/about-aws/whats-new/2023/06/amazon-emr-price-allocation-ec2-spot-instances/) for running EMR workloads. Lastly, data transfer charges are likely accumulating from the movement of your big data through the EMR system. 
You can dramatically reduce these charges, or even eliminate them, by connecting to EMR using [interface VPC endpoints](https://docs.aws.amazon.com/emr/latest/ManagementGuide/interface-vpc-endpoint.html). !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Aug 22, 2023_ diff --git a/docs/aws/services/lambda-pricing.md b/docs/aws/services/lambda-pricing.md index c08e1eb..ec578a5 100644 --- a/docs/aws/services/lambda-pricing.md +++ b/docs/aws/services/lambda-pricing.md @@ -1,10 +1,10 @@ title: Lambda Pricing | Cloud Cost Handbook -[Lambda Pricing Page](https://aws.amazon.com/lambda/pricing/){ .md-button } +[Lambda Pricing Page](https://aws.amazon.com/lambda/pricing/){ .md-button target="_blank"} ## Summary -AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute. +AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You are charged based on the number of requests for your functions and the duration (the time it takes for your code to execute). ## Pricing Dimensions @@ -25,3 +25,5 @@ If you have existing saving plans in use on Lambda and are looking to make the s !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). 
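The two Lambda billing dimensions described above (requests and duration) compose like this. The rates below are illustrative examples rather than quotes from the pricing page, and the helper function is hypothetical:

```python
# Hedged sketch of Lambda's two billing dimensions. Rates are illustrative
# examples; check the Lambda pricing page for current figures per region
# and architecture.

PRICE_PER_MILLION_REQUESTS = 0.20   # example rate, USD
PRICE_PER_GB_SECOND = 0.0000166667  # example rate, USD

def lambda_monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Requests cost + duration cost, where duration is billed in GB-seconds."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    duration_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + duration_cost

# 10M invocations at 120 ms average on 512 MB: ~$2 of requests, ~$10 of duration
print(round(lambda_monthly_cost(10_000_000, 120, 512), 2))  # 12.0
```

Note how duration cost scales with both memory size and run time, which is why right-sizing memory is the usual Lambda optimization lever.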
+ +_Last updated Sep 26, 2022_ diff --git a/docs/aws/services/rds-pricing.md b/docs/aws/services/rds-pricing.md index cc9a5cf..77bd3b3 100644 --- a/docs/aws/services/rds-pricing.md +++ b/docs/aws/services/rds-pricing.md @@ -1,28 +1,28 @@ title: RDS Pricing | Cloud Cost Handbook -[Amazon RDS Pricing Page](https://aws.amazon.com/rds/pricing/){ .md-button } +[Amazon RDS Pricing Page](https://aws.amazon.com/rds/pricing/){ .md-button target="_blank"} ## Summary -Amazon Relational Database Service (RDS) provides you with the ability to create databases running certain software such as MySQL, Postgres, SQL Server and more. RDS instances ultimately are preconfigured EC2 instances running certain managed database software. As a result, you'll see similarities between instance types for RDS and EC2 where RDS instances are prefixed with "db." +Amazon Relational Database Service (RDS) provides you with the ability to create databases running certain software such as MySQL, Postgres, SQL Server, and more. RDS instances ultimately are preconfigured EC2 instances running certain managed database software. As a result, you'll see similarities between instance types for RDS and EC2 where RDS instances are prefixed with `db`. ## Pricing Dimensions |Dimension|Description| |----|----| -|Instance Type Usage|RDS instance types are billed at an hourly rate and charged that hourly rate on a per-second basis for your usage.| -|Database Software|As RDS allows you to run different types of database software there are varying costs depending on which database software you choose to use. For example you can run Oracle and MySQL database on the same RDS instance types but they have different pricing as Oracle licensing contributes a higher cost than MySQL.| -|Availability|RDS allows you to run RDS instances in either "Single AZ" or "Multi AZ" deployments. 
"Multi AZ" deployments are more highly available but carry a larger cost.| -|Attached Storage|RDS allows you to attach storage to RDS instances either as General Purpose Storage or Provisioned IOPS storage. Behind the scenes these are just managed EBS Volumes that RDS orchestrates on your behalf. However, on your monthly AWS bill these charges will show up under the RDS service and not under the EBS service.| +|Instance Type Usage|RDS instance types are billed at an hourly rate and charged that hourly rate on a per second basis for your usage.| +|Database Software|As RDS allows you to run different types of database software, there are varying costs depending on which database software you choose to use. For example, you can run Oracle and MySQL databases on the same RDS instance types, but they have different pricing, as Oracle licensing contributes a higher cost than MySQL.| +|Availability Zones (AZ)|RDS allows you to run RDS instances in either Single-AZ or Multi-AZ deployments. Multi-AZ deployments are more highly available but carry a larger cost.| +|Attached Storage|RDS allows you to attach storage to RDS instances either as General Purpose storage or Provisioned IOPS storage. Behind the scenes, these are just managed EBS Volumes that RDS orchestrates on your behalf. However, on your monthly AWS bill these charges will show up under the RDS service and not under the EBS service.| +|Backup Storage|You have the ability to turn on backups for your RDS instances and are charged an accompanying storage rate for backups.| ## Reserved Instances As RDS instances are **not** covered by [AWS Savings Plans](/aws/concepts/savings-plans/), you must rely on procuring [Reserved Instances](/aws/concepts/reserved-instances/) specifically for RDS. Reserved Instances are covered in depth under General Concepts and we encourage you to read up more on them there for the most up-to-date information.
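A back-of-envelope estimate combining the instance-hours, availability, and attached-storage dimensions above might look like the following sketch. All rates here are hypothetical placeholders, and RDS actually bills instance usage per second rather than in whole months:

```python
# Rough RDS monthly estimate. Every rate below is a hypothetical placeholder;
# look up your engine/instance/storage rates on the RDS pricing page.

HOURS_PER_MONTH = 730  # common approximation (365 * 24 / 12)

def rds_monthly_estimate(hourly_rate: float, storage_gb: int,
                         gb_month_rate: float, multi_az: bool) -> float:
    instance = hourly_rate * HOURS_PER_MONTH
    if multi_az:
        instance *= 2  # Multi-AZ maintains a standby, doubling the instance rate
    return instance + storage_gb * gb_month_rate

# Hypothetical example: $0.171/hr instance, 100 GB at $0.115/GB-month, Multi-AZ
print(round(rds_monthly_estimate(0.171, 100, 0.115, multi_az=True), 2))  # 261.16
```

The doubling for Multi-AZ reflects the standby instance Amazon runs on your behalf, as covered in the Single-AZ vs Multi-AZ section.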
-## Single vs Multi Availability Zones +## Single-AZ vs Multi-AZ -RDS allows you to deploy instances in either a single availability zone or across multiple availability zones. Shorthand, this is referenced as either "single-AZ" or "multi-AZ". The benefit of being multi-AZ is that you're provided with enhanced availability and durability for your database as Amazon provisions and maintains a standby in a different availability zone for automatic failover in the event of a scheduled or unplanned outage. +With RDS, you can choose between a single-AZ or multi-AZ. The benefit of being multi-AZ is that you're provided with enhanced availability and durability for your database as Amazon provisions and maintains a standby in a different availability zone for automatic failover in the event of a scheduled or unplanned outage. From a cost consideration perspective, multi-AZ rates are double what single-AZ rates are for the added durability that you're provided. @@ -30,3 +30,5 @@ From a cost consideration perspective, multi-AZ rates are double what single-AZ !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Aug 10, 2021_ diff --git a/docs/aws/services/redshift-pricing.md b/docs/aws/services/redshift-pricing.md index 1bd5128..7959efe 100644 --- a/docs/aws/services/redshift-pricing.md +++ b/docs/aws/services/redshift-pricing.md @@ -3,12 +3,12 @@ title: Redshift Pricing | Cloud Cost Handbook [Amazon Redshift Pricing Page](https://aws.amazon.com/redshift/pricing/){ .md-button target="_blank"} ## Summary -Redshift is a cloud data warehouse that enables organizations to analyze large volumes of data using SQL queries. The data can be structured and semi-structured across data warehouses, operational databases, and data lakes. With Redshift you can share and query live data across organizations, accounts, and regions. 
+Redshift is a cloud data warehouse that enables organizations to analyze large volumes of data using SQL queries. The data can be structured and semi-structured across data warehouses, operational databases, and data lakes. With Redshift, you can share and query live data across organizations, accounts, and regions. ## Pricing Dimensions | Dimension | Description | | ------------- |-------------| -|[Node Type](https://instances.vantage.sh/redshift/){ target="_blank" }| You are billed an hourly rate based on your selected node type and node quantity for the duration your cluster is active. The recommended node types for Redshift are RA3 and DC2. Choose based on data size to ensure the best price and performance. If your data is under 1TB uncompressed it is recommended to use DC2 Node. If your data is currently over 1TB uncompressed or will exceed 1TB in the future, it is recommended to use RA3.| +|[Node Type](https://instances.vantage.sh/redshift/){ target="_blank" }| You are billed at an hourly rate based on your selected node type and node quantity for the duration your cluster is active. The recommended node types for Redshift are RA3 and DC2. Choose based on data size to ensure the best price and performance. If your data is under 1TB uncompressed, it is recommended to use DC2 nodes. If your data is currently over 1TB uncompressed or will exceed 1TB in the future, it is recommended to use RA3.| |[Paid Features](#paid-features)| Additional features can accrue additional costs.| |Data Transfer|Data transfers between Redshift and S3 within the same AWS Region for tasks like backup, restore, load, and unload operations are free of charge. However, any other data transfers into and out of Redshift incur standard AWS data transfer rates.| |Backup Storage|Redshift charges for manual snapshots taken using the console, API, or CLI. This includes manual snapshots taken for RA3 clusters.
Storing backups beyond the allocated storage capacity on DC and DS clusters results in additional charges based on the standard S3 storage rates. Should you retain recovery points beyond the initial free 24-hour period, they will lead to additional charges as part of RMS.| @@ -16,12 +16,12 @@ Redshift is a cloud data warehouse that enables organizations to analyze large v ## Paid Features ### Redshift Serverless -With Redshift Serverless you can run analytics and scale without setting up and managing warehouse infrastructure. It is ideal for difficult to predict compute needs, immediately needed ad-hoc analytics, and test and development environments. +With Redshift Serverless, you can run analytics and scale without setting up and managing warehouse infrastructure. It is ideal for difficult-to-predict compute needs, immediately needed ad-hoc analytics, and test and development environments. -You only pay for the capacity used and capacity is automatically scaled up and down depending on need, as well as shutting off during inactivity. Data warehouse capacity is measured in Redshift Processing Units (RPUs). You are billed in RPU-hours on a per-second basis. Since Redshift Serverless automatically provisions the appropriate resources, you do not need to choose a node type. The features concurrency scaling and Redshift Spectrum are included in the cost. +You only pay for the capacity used; capacity is automatically scaled up and down with demand and shuts off during inactivity. Data warehouse capacity is measured in Redshift Processing Units (RPUs). You are billed in RPU hours on a per second basis. Since Redshift Serverless automatically provisions the appropriate resources, you do not need to choose a node type. The features concurrency scaling and Redshift Spectrum are included in the cost. ### Redshift Spectrum -This feature enables you to execute SQL queries on data stored in [S3](/aws/services/s3-pricing).
The billing is based on the volume of data scanned by Redshift Spectrum, which will be rounded up to the nearest megabyte, with a minimum fee of 10 MB per query. +This feature enables you to execute SQL queries on data stored in [S3](/aws/services/s3-pricing). The billing is based on the volume of data scanned by Redshift Spectrum, which will be rounded up to the nearest megabyte, with a minimum fee of 10MB per query. ### Redshift Managed Storage Redshift Managed Storage is a feature that allows you to store and manage data within your cluster. It is exclusively available for RA3 node types. Billing is a fixed GB-month rate regardless of data size. Usage of managed storage is computed on an hourly basis, taking into account the total amount of data stored. @@ -37,3 +37,5 @@ This functionality enables you to create, train, and deploy machine learning (ML !!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Sep 19, 2023_ diff --git a/docs/aws/services/route-53-pricing.md b/docs/aws/services/route-53-pricing.md index da809cf..cba75e5 100644 --- a/docs/aws/services/route-53-pricing.md +++ b/docs/aws/services/route-53-pricing.md @@ -1,10 +1,10 @@ title: Route53 Pricing | Cloud Cost Handbook -[Route53 Pricing Page](https://aws.amazon.com/route53/pricing/){ .md-button } +[Route53 Pricing Page](https://aws.amazon.com/route53/pricing/){ .md-button target="_blank"} ## Summary -Amazon Route 53 is a Domain Name System (DNS) web service. Typically Route 53 doesn't tend to be a large cost center for the vast majority of companies. +Amazon Route 53 is a Domain Name System (DNS) web service. Typically, Route 53 doesn't tend to be a large cost center for the vast majority of companies. ## Pricing Dimensions @@ -17,4 +17,6 @@ Amazon Route 53 is a Domain Name System (DNS) web service. Typically Route 53 do
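The Spectrum rounding and minimum-fee rule described above can be sketched as follows. The per-TB rate is an assumed example, not a quoted price; check the pricing page for the current figure:

```python
# Sketch of Redshift Spectrum's scan billing: bytes scanned are rounded up
# to the nearest MB, with a 10 MB minimum per query. The $/TB rate below
# is an assumed example.
import math

MB = 1024 ** 2
TB = 1024 ** 4
EXAMPLE_PRICE_PER_TB = 5.00  # USD, illustrative only

def spectrum_query_cost(bytes_scanned: int) -> float:
    mb_scanned = math.ceil(bytes_scanned / MB)  # round up to the nearest MB
    mb_billed = max(mb_scanned, 10)             # 10 MB minimum per query
    return mb_billed * MB / TB * EXAMPLE_PRICE_PER_TB

# A tiny 3 MB scan still bills the 10 MB minimum
print(spectrum_query_cost(3 * MB) == spectrum_query_cost(1))  # True
```

The practical takeaway is that many small queries against S3 can cost more than a few consolidated ones, because each query pays the minimum.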
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 11, 2021_ diff --git a/docs/aws/services/s3-pricing.md b/docs/aws/services/s3-pricing.md index 2ef7d34..126243b 100644 --- a/docs/aws/services/s3-pricing.md +++ b/docs/aws/services/s3-pricing.md @@ -1,51 +1,60 @@ title: S3 Pricing | Cloud Cost Handbook -[Amazon S3 Pricing Page](https://aws.amazon.com/s3/pricing/){ .md-button } +[Amazon S3 Pricing Page](https://aws.amazon.com/s3/pricing/){ .md-button target="_blank"} ## Summary -Amazon Simple Storage Service (S3) is an object storage service that allows customers to store files called *objects*. Objects are organized into namespaces called *buckets* at no additional cost. Ultimately, you are charged on the dimensions below, which are a mix of how much you store with specific storage types, the bandwidth for accessing those files, and the requests you make to the S3 service. +Amazon Simple Storage Service (S3) is an object storage service that allows customers to store files called objects. Objects are organized into namespaces called buckets at no additional cost. Ultimately, you are charged on the dimensions below, which are a mix of how much you store with specific storage types, the bandwidth for accessing those files, and the requests you make to the S3 service. ## Pricing Dimensions |Dimension|Description| |----|----| -|Object Storage Amount|AWS charges you for how much you store across all objects and across all buckets. 
Each region has a different pricing rate on a per-GB basis, and as you store more data on S3, you get discounts on a per-GB basis.| -|Object Storage Class|Amazon S3 has many different storage classes, further discussed below. S3 Standard is the default storage class, but you can get discounts for other tiers.| -|Bandwidth|AWS charges you for the amount of egress you consume for accessing S3 objects. You should keep an eye on how much bandwidth is being consumed—where you can potentially have runaway costs with significant use.| -|Request Metrics|AWS charges you for GET, SELECT, PUT, COPY, POST, and LIST requests. S3 also charges you different rates depending on which of these request types you're using. This is oftentimes an unknown cost that occurs and that you should keep an eye on.| +|Object Storage Amount|AWS charges you for how much you store across all objects and across all buckets. Each region has a different pricing rate on a per GB basis, and as you store more data on S3, you may get discounts on a per GB basis.| +|Object Storage Class|Amazon S3 has many different storage classes, further discussed below. Standard is the default storage class, but you can get lower rates for some other tiers.| +|Bandwidth|AWS charges you for the amount of egress you consume for accessing S3 objects. You should keep an eye on how much bandwidth is being consumed—that's where you can potentially have runaway costs with significant use.| +|Request Metrics|AWS charges you for `GET`, `SELECT`, `PUT`, `COPY`, `POST`, and `LIST` requests. S3 also charges you different rates depending on which of these request types you're using. This is oftentimes an unknown cost that occurs and that you should keep an eye on.| ## Intelligent-Tiering -S3 Intelligent-Tiering is an Amazon S3 storage class that will automatically optimize storage costs on your behalf. 
S3 Intelligent-Tiering will monitor access patterns of S3 objects and shift them between four different storage classes to deliver you with automatic savings. +S3 Intelligent-Tiering is an Amazon S3 storage class that will automatically optimize storage costs on your behalf. S3 Intelligent-Tiering will monitor access patterns of S3 objects and shift them between five different storage classes to deliver you automatic savings. -Typically, customers have files stored in S3 Standard storage, but they may not think to ever optimize these costs and overpay for the number of files they're storing in S3. By using Intelligent-Tiering, you can focus on your application development and allow S3 Intelligent-Tiering to manage shifting their objects' storage classes on their behalf. +Typically, customers have files stored in S3 Standard storage, but they may not think to ever optimize these costs and overpay for the number of files they're storing in S3. By using Intelligent-Tiering, you can focus on your application development and allow S3 Intelligent-Tiering to manage shifting your objects' storage classes on your behalf. ## Understanding Storage Classes -S3 currently supports 19 different object storage types within an S3 bucket. Each bucket is capable of holding objects from a single class or multiple classes. A light overview of these storage types is included below: - -|Storage Type|Description| -|----|----| -|Standard Storage‍|Standard Storage (StandardStorage) is for general purpose storage for any type of data, typically used for frequently accessed data. Standard Storage is priced on a tiered basis, where it gets incrementally cheaper to store data as you store more.| -|Express One Zone|This storage class is a high-performance storage class that is built to provide consistent single-digit millisecond data access. It provides 10 times faster access than S3 Standard; however, the storage pricing is a lot higher—at almost 7 times the rate of Standard storage pricing.
Request pricing is charged at a flat rate that's half the rate of Standard pricing, for request sizes up to 512KB. Additional per-GB charges apply for request sizes greater than 512KB | -|Intelligent Tiering - Frequent Access (IntelligentTieringFAStorage)|Objects uploaded to S3 Intelligent Tiering are automatically stored in the frequent access tier which has the same rates as Standard Storage.| -|Intelligent-Tiering - Infrequent Access (IntelligentTieringIAStorage)| Objects in Frequent Access that haven't been accessed in 30 consecutive days are moved to this tier, where prices drop significantly.| -|Intelligent-Tiering - Archive Access (IntelligentTieringAAStorage)|Upon activating the archive access tier for intelligent tiering, S3 will automatically move objects that haven't been accessed for 90 days to archive access, where the pricing is the same as Glacier.| -|Intelligent Tiering - Deep Archive Access (IntelligentTieringDAAStorage)|Upon activating the deep archive access tier for Intelligent-Tiering, S3 will automatically move objects that haven't been accessed for 180 days to deep archive access.| -|S3 Standard - Infrequent Access (StandardIAStorage)|S3 Standard Infrequent Access is for data that is accessed less frequently, but requires rapid access when needed. It offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and per-GB retrieval fee. This combination of low cost and high performance make S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.| -|Standard Infrequently Access Overhead (StandardIASizeOverhead)|There is a minimum billable size of 128KB. , if you stored an object at 28KB, the StandardIASizeOverhead rate would increase by 128KB–28KB or 100KB and represented by this metric.| -|S3 Standard - Infrequent Access (One Zone)|S3 Infrequent Access One Zone is for data that is accessed less frequently, but requires rapid access when needed. 
Unlike other S3 Storage classes that store data in a minimum of three Availability Zones, S3 Infrequent Access One Zone stores data in a single AZ and costs 20% less than S3 Standard Infrequent Access.| -|One Zone Size Overhead (OneZoneIASizeOverhead)|There is a minimum billable size of 128KB. For example, if you stored an object at 28KB, the StandardIASizeOverhead rate would increase by 128KB–28KB or 100KB and represented by this metric.| -|S3 Glacier (GlacierStorage)|S3 Glacier is a secure, durable, and low-cost storage class for data archiving. You can reliably store any amount of data at costs that are competitive with or cheaper than on-premises solutions. To keep costs low yet suitable for varying needs, S3 Glacier provides three retrieval options that range from a few minutes to hours.| -|S3 Glacier Overhead (GlacierObjectOverhead)|For each object that is stored in S3 Glacier, 40 KB of chargeable overhead is added for metadata| -|S3 Glacier Object Overhead (GlacierObjectOverhead)|Amazon S3 Glacier also requires an additional 32KB of data per object for S3 Glacier’s index and metadata.| -|S3 Glacier Deep Archive (DeepArchiveStorage)|S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. It is designed for customers—particularly those in highly-regulated industries, such as the Financial Services, Healthcare, and Public Sectors–that retain data sets for 7–10 years or longer to meet regulatory compliance requirements. 
S3 Glacier Deep Archive can also be used for backup and disaster recovery use cases, and is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether they are on-premises libraries or off-premises services.| -|Deep Archive Object Overhead (DeepArchiveObjectOverhead)|For each object that is stored in S3 Glacier, 40 KB of chargeable overhead is added for metadata.| -|Deep Archive S3 Object Overhead (DeepArchiveS3ObjectOverhead)|Amazon S3 Deep Archive also requires an additional 32KB of data per object for S3 Deep Archive index and metadata.| -|Deep Archive Staging Storage (DeepArchiveStagingStorage)|Staging storage is where the parts of Multipart Upload are staged until the CompleteMultipart request is issued. The parts are staged in S3 Standard, and storage is charged at the S3 Standard price.| -|S3 Reduced Redundancy Storage|Reduced Redundancy Storage is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3 Standard storage. It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The Reduced Redundancy option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage.| - +S3 currently supports 28 different object storage types within an S3 bucket. Each bucket is capable of holding objects from a single class or multiple classes. A light overview of these storage types is included below: + +| Storage Type | Description | +| --- | --- | +| Standard Storage (StandardStorage) | Standard Storage is for general purpose storage of any type of data, typically used for frequently accessed data. 
Standard Storage is priced on a tiered basis where it gets incrementally cheaper to store data as you store more. | +| Intelligent-Tiering - Frequent Access (IntelligentTieringFAStorage) | Objects uploaded to Intelligent-Tiering are automatically stored in the Frequent Access tier, which has the same rates as Standard Storage. | +| Intelligent-Tiering - Infrequent Access (IntelligentTieringIAStorage) | Objects in Frequent Access that haven't been accessed in 30 consecutive days are moved to this tier, where prices drop significantly. | +| Intelligent-Tiering - Archive Instant Access (IntelligentTieringAIAStorage) | Objects that haven’t been accessed in 90 consecutive days are moved to this tier, where prices drop even more. | +| Intelligent-Tiering - Archive Access (IntelligentTieringAAStorage) | Upon activating the Archive Access tier for Intelligent-Tiering, S3 will automatically move objects that haven’t been accessed for 90 days (or more depending on your configuration) to Archive Access, where the pricing is the same as Glacier. | +| Intelligent-Tiering - Deep Archive Access (IntelligentTieringDAAStorage) | Upon activating the Deep Archive Access tier for Intelligent-Tiering, S3 will automatically move objects that haven’t been accessed for 180 days (or more depending on your configuration) to Deep Archive Access. | +| Intelligent-Tiering - Archive Access Object Overhead (IntAAObjectOverhead) | For each object that is stored in the Intelligent-Tiering - Archive Access tier, 32KB of chargeable overhead is added for index and related metadata, charged at Glacier Flexible Retrieval rates. 
| +| Intelligent-Tiering - Archive Access S3 Object Overhead (IntAAS3ObjectOverhead) | Intelligent-Tiering - Archive Access also requires an additional 8KB of data per object for the name of the object and other metadata, charged at Standard Storage rates. | +| Intelligent-Tiering - Deep Archive Access Object Overhead (IntDAAObjectOverhead) | For each object that is stored in the Intelligent-Tiering - Deep Archive Access tier, 32KB of chargeable overhead is added for index and related metadata, charged at Glacier Flexible Retrieval rates. | +| Intelligent-Tiering - Deep Archive Access S3 Object Overhead (IntDAAS3ObjectOverhead) | Intelligent-Tiering - Deep Archive Access also requires an additional 8KB of data per object for the name of the object and other metadata, charged at Standard Storage rates. | +| Standard - Infrequent Access (StandardIAStorage) | Standard - Infrequent Access is for data that is accessed less frequently but requires rapid access when needed. It offers the high durability, high throughput, and low latency of Standard Storage, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes Standard - Infrequent Access ideal for long-term storage, backups, and as a data store for disaster recovery files. | +| Standard - Infrequent Access Size Overhead (StandardIASizeOverhead) | There is a minimum billable object size of 128KB. For example, if you stored an object at 28KB, the rate would increase by 100KB (128KB - 28KB), which is represented by this metric. | +| Standard - Infrequent Access Object Overhead (StandardIAObjectOverhead) | For each object stored in Standard - Infrequent Access, 32KB of chargeable overhead is added for metadata. | +| Standard - Infrequent Access (One Zone) (OneZoneIAStorage) | Standard - Infrequent Access (One Zone) is for data that is accessed less frequently but requires rapid access when needed. 
Unlike other S3 storage classes, which store data in a minimum of three Availability Zones (AZ), Standard - Infrequent Access (One Zone) stores data in a single AZ and costs much less than Standard - Infrequent Access. | +| One Zone Size Overhead (OneZoneIASizeOverhead) | There is a minimum billable object size of 128KB. For example, if you stored an object at 28KB, the rate would increase by 100KB (128KB - 28KB), which is represented by this metric. | +| Glacier Instant Retrieval (GlacierInstantRetrievalStorage) | Glacier Instant Retrieval is a low-cost, durable archive storage class that still offers millisecond retrieval. It is ideal for data that requires long-term storage and is only accessed about once per quarter. | +| Glacier Instant Retrieval Size Overhead (GlacierInstantRetrievalSizeOverhead) | There is a minimum billable object size of 128KB. For example, if you stored an object at 28KB, the rate would increase by 100KB (128KB - 28KB), which is represented by this metric. | +| Glacier Flexible Retrieval (GlacierStorage) | Glacier Flexible Retrieval (formerly called Glacier) is a secure, durable, and low-cost storage class for data archiving. You can reliably store any amount of data at costs that are competitive with or cheaper than on-premises solutions. To keep costs low yet suitable for varying needs, Glacier provides three retrieval options that range from a few minutes to hours. | +| Glacier Overhead (GlacierObjectOverhead) | For each object that is stored in Glacier, 32KB of chargeable overhead is added for index and related metadata, charged at Glacier Flexible Retrieval rates. | +| Glacier S3 Object Overhead (GlacierS3ObjectOverhead) | Glacier also requires an additional 8KB of data per object for the name of the object and other metadata, charged at Standard Storage rates. 
| +| Glacier Staging Storage (GlacierStagingStorage) | Staging storage serves as the temporary holding space for the parts of a Multipart Upload until the CompleteMultipartUpload request is initiated. These parts are temporarily stored in Standard Storage and charged at Standard Storage pricing. | +| Glacier Deep Archive (DeepArchiveStorage) | Glacier Deep Archive is tied with Intelligent-Tiering - Deep Archive Access as Amazon S3’s lowest-cost storage class. It supports long-term retention and digital preservation of data that may be accessed once or twice a year. It is designed for customers, particularly those in highly-regulated industries, such as the Financial Services, Healthcare, and Public Sectors, that retain data sets for 7-10 years or longer to meet regulatory compliance requirements. Glacier Deep Archive can also be used for backup and disaster recovery use cases, and is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether they are on-premises libraries or off-premises services. | +| Deep Archive Object Overhead (DeepArchiveObjectOverhead) | For each object that is stored in Glacier Deep Archive, 32KB of chargeable overhead is added for index and related metadata, charged at Glacier Deep Archive rates. | +| Deep Archive S3 Object Overhead (DeepArchiveS3ObjectOverhead) | Glacier Deep Archive also requires an additional 8KB of data per object for the name of the object and other metadata, charged at Standard Storage rates. | +| Deep Archive Staging Storage (DeepArchiveStagingStorage) | Staging storage is where the parts of a Multipart Upload are staged until the CompleteMultipartUpload request is issued. The parts are staged in Standard Storage, and storage is charged at the Standard Storage price. | +| S3 Reduced Redundancy Storage | Reduced Redundancy Storage is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Standard Storage. 
It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The Reduced Redundancy Storage option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as Standard Storage. | +| Express One Zone (ExpressOneZone) | Express One Zone, like Standard - Infrequent Access (One Zone), is a single-AZ storage class. It can provide extremely quick, single-digit millisecond access to your data, with request costs lower than Standard Storage. Some examples of use cases are Machine Learning and Financial Modeling. | +| Outposts (Outposts) | AWS Outposts extends AWS services, tools, and APIs to your on-premises environment. Ideal for data that must remain local, S3 on Outposts lets you reliably store and access data on your Outpost. | ## S3 Bucket Request Metrics @@ -53,7 +62,6 @@ S3 does not have ingress, egress, or request metrics turned on by default, leavi Below is an example of how to enable these metrics for a S3 bucket via the AWS CLI. Just be sure to replace `YOUR_BUCKET_NAME` with your actual bucket name and `YOUR_BUCKET_REGION` with the appropriate bucket region. - ``` aws s3api put-bucket-metrics-configuration --bucket YOUR_BUCKET_NAME @@ -65,12 +73,11 @@ aws s3api put-bucket-metrics-configuration !!! Note It takes roughly 15 minutes for AWS to begin delivering these metrics after being enabled. +## S3 vs. Cloudflare Bandwidth Alliance Partner -## S3 Versus Cloudflare Bandwidth Alliance Partner +The [Cloudflare Bandwidth Alliance](https://www.cloudflare.com/bandwidth-alliance/) is a group of infrastructure providers that have decided to either completely waive or massively discount egress fees for shared customers. 
This can be a huge source of savings for customers who have an AWS bill where S3 egress costs make up a large portion of that bill. -The [Cloudflare Bandwidth Alliance](https://www.cloudflare.com/bandwidth-alliance/) is a group of infrastructure providers that have decided to either completely waive or massively discount egress fees for shared customers. This can be a huge source of savings for customers that have an AWS bill where S3 egress costs make up a large portion of the aforementioned bill. - -By moving S3 content to Cloudflare's content delivery network (CDN) service in tandem with a Bandwidth Alliance provider, you can get no-cost content transit from their Cloudflare origin server to Cloudflare servers distributed around the world. This effectively reproduces the cost benefit that users get for pairing CloudFront with an AWS service, like S3.[^noegressfees] Utilizing one of Cloudflare's self-serve plans, you can also cap their cost to deliver content via flat-rate pricing. Further details can be found at in the [CloudFront service article](https://handbook.vantage.sh/aws/services/cloudfront-pricing/#cloudfront-versus-cloudflare) of the Cloud Cost Handbook. +By moving S3 content to Cloudflare's content delivery network (CDN) service in tandem with a Bandwidth Alliance provider, you can get no-cost content transit from your origin server to Cloudflare servers distributed around the world. This effectively reproduces the cost benefit that users get for pairing CloudFront with an AWS service, like S3.[^noegressfees] Utilizing one of Cloudflare's self-serve plans, you can also cap your cost to deliver content via flat-rate pricing. Further details can be found in the [CloudFront section](/aws/services/cloudfront-pricing/) of the Cloud Cost Handbook. 
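To put the potential savings in perspective, here is a rough sketch of what direct S3 internet egress can cost. The $0.09/GB rate and the helper name are illustrative assumptions, not a quote for your region:

```python
# Rough sketch: cost of serving static content directly from S3 over the
# internet. $0.09/GB is an illustrative first-tier egress rate -- check the
# S3 pricing page for your region and tier.
S3_EGRESS_PER_GB = 0.09

def monthly_egress_cost(gb_served: float, rate: float = S3_EGRESS_PER_GB) -> float:
    """Estimated monthly internet egress charge for the given volume."""
    return gb_served * rate

# e.g. 10 TB/month served directly from S3:
print(round(monthly_egress_cost(10 * 1024), 2))  # 921.6
```

With free origin fetches from a CDN partner, that per-GB egress line item largely disappears, which is the savings the Bandwidth Alliance targets.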
### Considerations @@ -78,16 +85,17 @@ Price is not the only consideration that goes into making a decision about wheth #### Complexity -AWS had made it exceedingly easy for customers to utilize other AWS services in tandem, but there is a non-trivial cost for an organization to decide to split their infrastructure over multiple service providers. Developers will need to learn and understand both systems and when to choose one design pattern over the other. There will be two sets of documentation that will need to be addressed when designing or troubleshooting systems. - -#### Use-cases +AWS has made it exceedingly easy for customers to utilize other AWS services in tandem, but there is a non-trivial cost for an organization to decide to split their infrastructure over multiple service providers. Developers will need to learn and understand both systems and when to choose one design pattern over the other. There will be two sets of documentation to consult when designing or troubleshooting systems. -The primary use-case in favor of utilizing this cost efficiency architecture strategy is if a user has a large amount of static content that is stored on S3 and being served to end-users via the internet. +#### Use Cases -[^noegressfees]: "If you are using an AWS origin, effective December 1, 2014, data transferred from origin to edge locations (Amazon CloudFront "origin fetches") will be free of charge." https://aws.amazon.com/cloudfront/pricing/ +The primary use case for this cost-efficiency strategy is a large amount of static content that is stored on S3 and served to end-users via the internet. +[^noegressfees]: "If you are using an AWS origin, effective December 1, 2014, data transferred from origin to edge locations (Amazon CloudFront origin fetches) will be free of charge." https://aws.amazon.com/cloudfront/pricing/
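The minimum-billable-size and per-object overhead rules from the storage class table above can be sketched as a quick estimate. The helper names are hypothetical and this is an illustration of the charging rules described, not AWS's actual billing logic:

```python
# Illustrative sketch (not AWS billing code) of two charging rules from the
# storage class table: the 128KB minimum billable object size for
# Standard - Infrequent Access, and the 32KB + 8KB per-object metadata
# overhead for Glacier-class storage.
KB = 1024

def standard_ia_billable_bytes(object_size: int) -> int:
    """Standard-IA bills at least 128KB per object; the shortfall is what
    the StandardIASizeOverhead metric represents."""
    return max(object_size, 128 * KB)

def glacier_billable_bytes(object_size: int) -> dict:
    """Glacier adds 32KB (billed at Glacier rates) plus 8KB (billed at
    Standard Storage rates) of metadata overhead per object."""
    return {
        "glacier_rate_bytes": object_size + 32 * KB,
        "standard_rate_bytes": 8 * KB,
    }

# A 28KB object is billed as 128KB in Standard-IA: 100KB of overhead.
print(standard_ia_billable_bytes(28 * KB) // KB)  # 128
```

Because of these per-object floors and overheads, very large numbers of very small objects can make the infrequent-access and archive classes less economical than their headline per-GB rates suggest.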
!!! Contribute Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Feb 2, 2024_ diff --git a/docs/aws/services/vpc-pricing.md index 0b3ce06..649c5fe 100644 --- a/docs/aws/services/vpc-pricing.md +++ b/docs/aws/services/vpc-pricing.md @@ -1,10 +1,10 @@ title: VPC Pricing | Cloud Cost Handbook -[Amazon VPC Pricing Page](https://aws.amazon.com/vpc/pricing/){ .md-button } +[Amazon VPC Pricing Page](https://aws.amazon.com/vpc/pricing/){ .md-button target="_blank"} ## Summary -Amazon Virtual Private Cloud (VPC) is a service which allows customers to logically isolate their resources into different networks. Unless explicitly configured every VPC is completely isolated from every other VPC. There is no charge for a VPC in itself, however some optional sub-components of a VPC can incur charges. +Amazon Virtual Private Cloud (VPC) is a service that allows customers to logically isolate their resources into different networks. Unless explicitly configured, every VPC is completely isolated from every other VPC. There is no charge for a VPC in itself; however, some optional sub-components of a VPC can incur charges. ## Pricing Dimensions @@ -14,7 +14,7 @@ Amazon Virtual Private Cloud (VPC) is a service which allows customers to logica |NAT Gateway Transfer|NAT Gateways are billed per GB which is processed by the gateway regardless of where the data is being transferred to or from.| ## NAT Gateway -NAT (Network Address Translation) Gateways enable resources running inside of VPCs to connect to services outside of the VPC without needing to have those resources exposed to the public internet. Besides the standard usage and transfer charges on NAT Gateways you will also be charged standard bandwidth transfer charges on top of that depending on where the traffic is going. 
+NAT (Network Address Translation) Gateways enable resources running inside of VPCs to connect to services outside of the VPC without needing to expose those resources to the public internet. Besides the standard usage and transfer charges on NAT Gateways, you will also be charged standard bandwidth transfer charges depending on where the traffic is going. ## Amazon VPC Endpoints VPC Endpoints allow resources to connect to other AWS services outside of a VPC, such as S3, without the need for a NAT Gateway. This is a good way to prevent NAT Gateway usage and transfer charges. @@ -22,4 +22,6 @@ VPC Endpoints allow resources to connect to other AWS services outside of a VPC,
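A back-of-the-envelope sketch of how the NAT Gateway charges combine. Both rates and the function name are illustrative placeholders, not a quote; check the VPC pricing page for your region:

```python
# Back-of-the-envelope NAT Gateway cost estimate. Both rates below are
# illustrative placeholders -- consult the VPC pricing page for your region.
HOURLY_RATE = 0.045   # $ per NAT Gateway hour (example rate)
PER_GB_RATE = 0.045   # $ per GB processed by the gateway (example rate)

def nat_gateway_monthly_cost(gb_processed: float, hours: float = 730) -> float:
    """Hourly usage charge plus data-processing charge. Any standard
    bandwidth transfer charges for the destination are billed on top."""
    return hours * HOURLY_RATE + gb_processed * PER_GB_RATE

# e.g. 1 TB processed in a 730-hour month:
print(round(nat_gateway_monthly_cost(1024), 2))  # 78.93
```

Note that the per-GB processing charge applies even to traffic that a VPC Endpoint could carry for free, which is why routing S3 traffic through an endpoint instead of the NAT Gateway is a common savings lever.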
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 19, 2021_ diff --git a/docs/aws/services/workspaces-pricing.md index 6d29600..9352aa8 100644 --- a/docs/aws/services/workspaces-pricing.md +++ b/docs/aws/services/workspaces-pricing.md @@ -1,25 +1,27 @@ title: WorkSpaces Pricing | Cloud Cost Handbook -[Amazon WorkSpaces Pricing Page](https://aws.amazon.com/workspaces/pricing){ .md-button } +[Amazon WorkSpaces Pricing Page](https://aws.amazon.com/workspaces/pricing){ .md-button target="_blank"} ## Summary -Amazon WorkSpaces is a fully managed, persistent desktop virtualization service. You can use Amazon WorkSpaces to provision either Windows or Linux desktops - each come with their own set of pricing implications discussed below. WorkSpace pricing can either be done in a monthly or hourly fashion. +Amazon WorkSpaces is a fully managed, persistent desktop virtualization service. You can use Amazon WorkSpaces to provision either Windows or Linux desktops, each with its own set of pricing implications discussed below. WorkSpaces can be billed either monthly or hourly. ## Pricing Dimensions |Dimension|Description| |----|----| -|Compute Type| WorkSpaces offers seven different types of compute types. They are `Value`, `Standard`, `Performance`, `Power` and `PowerPro`, `Graphics`, `GraphicsPro`. Each of these classes has a different set of underlying resources that contribute to costs differently. The order that these classes are listed in are from cheapest to most expensive. 
| +|Compute Type| WorkSpaces offers seven different compute types: `Value`, `Standard`, `Performance`, `Power`, `PowerPro`, `Graphics`, and `GraphicsPro`. Each of these classes has a different set of underlying resources that contribute to costs differently. These classes are listed from cheapest to most expensive. | |Platform Type| Linux or Windows. You are charged an additional amount of money for running on Windows vs Linux. You may also bring your own license for Windows WorkSpaces to reduce Windows licensing costs if you have that available. | -|Running Mode| `AUTO_STOP` or `ALWAYS_ON`. When you choose `AUTO_STOP` you are choosing to create a WorkSpace that has a pre-determined expiration time in which that WorkSpace will terminate and is billed per hour. When you choose `ALWAYS_ON` you are charged on a monthly rate basis and the WorkSpace will persist being on until you take action to terminate it. | +|Running Mode|`AUTO_STOP` or `ALWAYS_ON`. With `AUTO_STOP`, the WorkSpace automatically stops after a pre-determined period of disconnection and is billed per hour. With `ALWAYS_ON`, you are charged a monthly rate and the WorkSpace stays running until you take action to terminate it. | -|WorkSpace Size| Each Compute Type offers four different configurations with different amounts of vCPU and GB of Memory. `Graphic` and `GraphicsPro` only offer one size. Depending on the size you choose, you will pay a more expensive rate. | +|WorkSpace Size| Each compute type offers four different configurations with different amounts of vCPU and GB of memory. `Graphics` and `GraphicsPro` only offer one size. Larger sizes are billed at higher rates. 
| ## Monitoring Unused WorkSpaces -WorkSpaces have an attribute named `last_known_user_connection_timestamp` that maintains a timestamp of when was the last time a user has accessed this WorkSpace. You should periodically audit WorkSpaces to ensure that they're being used as otherwise it can be wasteful and a contributor to costs. In the event that this timestamp isn't present, it means that a user has never actually connected to this instance. Additionally, you can look for a certain amount of time that has progressed since a user has accessed it - in the event that a user hasn't accessed a WorkSpace in over a few weeks, it may be a good candidate for clean up and cost savings. +WorkSpaces have an attribute named `last_known_user_connection_timestamp` that maintains a timestamp of the last time a user accessed a specific WorkSpace. You should periodically audit WorkSpaces to ensure that they're being used; otherwise, they can be wasteful and a contributor to costs. If this timestamp isn't present, a user has never actually connected to the WorkSpace. Additionally, you can check how much time has passed since a user last accessed a WorkSpace; if one hasn't been accessed in a few weeks, it may be a good candidate for cleanup and cost savings.
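An audit like the one described can be sketched as follows. This is a minimal illustration: the `statuses` dictionaries mimic the shape of entries returned by the WorkSpaces `DescribeWorkspacesConnectionStatus` API, and the helper name and 21-day threshold are our own assumptions:

```python
from datetime import datetime, timedelta, timezone

def find_idle_workspaces(statuses, now, max_idle_days=21):
    """Return IDs of WorkSpaces that have no recorded user connection, or
    whose last connection is older than max_idle_days. Entries mimic the
    shape of DescribeWorkspacesConnectionStatus results (an assumption)."""
    cutoff = now - timedelta(days=max_idle_days)
    idle = []
    for s in statuses:
        last = s.get("LastKnownUserConnectionTimestamp")
        if last is None or last < cutoff:
            idle.append(s["WorkspaceId"])
    return idle

now = datetime(2024, 2, 1, tzinfo=timezone.utc)
statuses = [
    {"WorkspaceId": "ws-111", "LastKnownUserConnectionTimestamp": now - timedelta(days=2)},
    {"WorkspaceId": "ws-222"},  # no timestamp: never connected
    {"WorkspaceId": "ws-333", "LastKnownUserConnectionTimestamp": now - timedelta(days=40)},
]
print(find_idle_workspaces(statuses, now))  # ['ws-222', 'ws-333']
```

In practice you would feed this from the WorkSpaces API and review the flagged IDs before terminating anything, since a missing timestamp can also mean a WorkSpace was only just provisioned.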
!!! Contribute - Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). \ No newline at end of file + Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). + +_Last updated Jul 28, 2021_ diff --git a/docs/datadog/committed-use-discounts.md index c35a05a..e42c512 100644 --- a/docs/datadog/committed-use-discounts.md +++ b/docs/datadog/committed-use-discounts.md @@ -12,11 +12,11 @@ You may contact your Datadog account manager to realize discounts from on-demand 20-50% savings from your variable usage plans are possible. Note that variable usage plans are still billed annually but a minimum commitment will result in greater savings. Datadog does not publicly share all of the rates for minimum commitments. -One example for container monitoring states: +One example, for container monitoring, states: > Additional containers will be billed at $0.002 per container per hour. In addition, you can purchase prepaid containers at $1 per container per month. -In a month that has 744 hours, the “on-demand” cost of a container will be $1.488 whereas a committed container would be $1.00 which represents a 32.8% discount. +In a month that has 744 hours, the on-demand cost of a container will be $1.488, whereas a committed container would be $1.00, which represents a 32.8% discount.
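The arithmetic behind that container example can be checked directly:

```python
# Verifying the container-monitoring example: on-demand vs prepaid.
hours_in_month = 744      # a 31-day month
on_demand_rate = 0.002    # $ per container per hour
committed_monthly = 1.00  # $ per prepaid container per month

on_demand_monthly = hours_in_month * on_demand_rate   # 1.488
discount = 1 - committed_monthly / on_demand_monthly  # ~0.328

print(f"${on_demand_monthly:.3f} on-demand vs ${committed_monthly:.2f} committed "
      f"({discount:.1%} discount)")
# $1.488 on-demand vs $1.00 committed (32.8% discount)
```

The same calculation generalizes to any on-demand hourly rate versus a prepaid monthly rate, so it is a quick way to sanity-check whether a commitment is worth it for your usage.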
diff --git a/docs/index.md index f1296b8..29d6fb3 100644 --- a/docs/index.md +++ b/docs/index.md @@ -12,11 +12,11 @@ These are general concepts that don't necessarily map directly to a particular s ### Provider Services -Provider services are meant to be the source of truth for explaining not only the pricing mechanics of the service but also to explain potentially nuanced concepts related to costs for that service. The focus of these pages is meant to be for the _pricing_ of these services and not related to the actual management or orchestration of the service itself. +Provider services are meant to be the source of truth for explaining not only the pricing mechanics of the service but also potentially nuanced concepts related to costs for that service. The focus of these pages is on the _pricing_ of these services, not the actual management or orchestration of the service itself. !!! contribute "A Note About Currency" - Any listed prices are provided in US Dollars (USD). This is also the [default currency](https://repost.aws/knowledge-center/supported-aws-currencies){ target="_blank"} AWS uses for billing. + Any listed prices are provided in US Dollars (USD). This is also the [default currency](https://repost.aws/knowledge-center/supported-aws-currencies) AWS uses for billing. ## Contributing diff --git a/docs/snowflake/adjustments-for-included-cloud-services.md index 563183e..dfe9279 100644 --- a/docs/snowflake/adjustments-for-included-cloud-services.md +++ b/docs/snowflake/adjustments-for-included-cloud-services.md @@ -20,10 +20,10 @@ Snowflake starts billing for cloud services only after they exceed 10% of your w The following common data operations consume cloud services on Snowflake. You can follow recommended patterns to avoid them. -* **Full clones.** Consider selectively cloning your databases for development, ETL, or backup purposes. 
Cloning consumes only cloud services credits, so if you run a large clone operation on the same day when fewer queries are run, you will pay. Instead, you can clone only the tables you need to stay under the 10% threshold. -* **Fragmented schemas.** Snowflake does not recommend using schema design techniques from Hadoop, OLTP, or NoSQL databases where you may have denormalized data spread out across multiple schemas. Instead, use one schema to minimize metadata lookups. -* **Very complex queries.** The query optimization software Snowflake runs is broken out into cloud services. So if you write SQL queries that are thousands of lines long, or contain many joins or excessive recursion, you may find yourself with higher cloud services costs. -* **Excessively frequent queries.** Lastly, the SQL API handles the ingestion of each SQL query internally. Requesting this API (running queries) tens of thousands of times per day will start to result in charges. +* **Full clones:** Consider selectively cloning your databases for development, ETL, or backup purposes. Cloning consumes only cloud services credits, so a large clone operation run on a day with few warehouse queries can push you over the 10% threshold and incur charges. Instead, you can clone only the tables you need to stay under the 10% threshold. +* **Fragmented schemas:** Snowflake does not recommend using schema design techniques from Hadoop, OLTP, or NoSQL databases where you may have denormalized data spread out across multiple schemas. Instead, use one schema to minimize metadata lookups. +* **Very complex queries:** The query optimization software Snowflake runs is broken out into cloud services. So if you write SQL queries that are thousands of lines long, or contain many joins or excessive recursion, you may find yourself with higher cloud services costs. +* **Excessively frequent queries:** Lastly, the SQL API handles the ingestion of each SQL query internally. 
Requesting this API (running queries) tens of thousands of times per day will start to result in charges. It's possible that these issues may be caused by third-party services running on Snowflake and not your team itself. You can explicitly monitor the queries your company is running by adding [query tagging](https://www.vantage.sh/blog/snowflake-costs-per-query-using-query-tags). You can also [review additional tips](https://www.vantage.sh/blog/snowflake-compute-costs) for saving on your Snowflake compute bills. diff --git a/docs/tools/cost-reports.md b/docs/tools/cost-reports.md index d18bde9..65fee54 100644 --- a/docs/tools/cost-reports.md +++ b/docs/tools/cost-reports.md @@ -10,13 +10,13 @@ These use cases have come up repeatedly in the cloud costs community. Contributo ![Untagged](https://assets.vantage.sh/blog/governance/untagged-resources.png) -Many organizations use [tags](/aws/concepts/tags) to keep track of all their cloud resources. For practioners, keeping the percentage of untagged resources low means greater visibility inside cost tools. [Tags must be enabled](https://www.vantage.sh/blog/aws-cost-explorer#cost-by-tagged-resources) to be used in Cost Reports. +Many organizations use [tags](/aws/concepts/tags) to keep track of all their cloud resources. For practitioners, keeping the percentage of untagged resources low means greater visibility inside cost tools. [Tags must be enabled](https://www.vantage.sh/blog/aws-cost-explorer#cost-by-tagged-resources) to be used in Cost Reports. ### Showback Report ![Showback](https://assets.vantage.sh/blog/showback-cost-allocation/showback-cost-allocation-2.png) -Shared resources like support or a database cluster make divyying up costs among teams difficult. Use the Cost Allocation tool to create a Showback or Chargeback report for transparent reporting. +Shared resources like support or a database cluster make divvying up costs among teams difficult. 
Use the Cost Allocation tool to create a Showback or Chargeback report for transparent reporting. ### Compute Costs without Data diff --git a/docs/tools/instances.md b/docs/tools/instances.md index 95ffc71..72be988 100644 --- a/docs/tools/instances.md +++ b/docs/tools/instances.md @@ -2,7 +2,7 @@ title: Instances Pricing Documentation | Cloud Cost Handbook [AWS Instance Types Comparison](https:/instances.vantage.sh/){ .md-button } -[EC2Instances.info](https://instances.vantage.sh) shows current pricing for AWS EC2, RDS, and ElastiCache instances. The tool is completely [open source](https://github.com/vantage-sh/ec2instances.info) and uses the same Amazon APIs available to everyone. Development for Instances is coordinated through the [Vantage Slack](https://vantage.sh/slack/) as well as on [Github](https://github.com/vantage-sh/ec2instances.info). +[EC2Instances.info](https://instances.vantage.sh) shows current pricing for AWS EC2, RDS, and ElastiCache instances. The tool is completely [open source](https://github.com/vantage-sh/ec2instances.info) and uses the same Amazon APIs available to everyone. Development for Instances is coordinated through the [Vantage Slack](https://vantage.sh/slack) as well as on [Github](https://github.com/vantage-sh/ec2instances.info). ## Why? @@ -12,7 +12,7 @@ Because it's frustrating to compare instances using Amazon's own [instance type] ![Columns and Filters](/img/tools/instances/column_selector.png) -Nearly every service attribute available for a specific instance is available, although most are hidden by default. You can add more attributes, for example GPUs, in by clicking the `Columns` dropdown. Other dropdowns allow for selecting the `Region`, changing the per-unit basis of calculation (e.g. for vCPUs), and changing the term of the `Reserved` instance purchase. +Nearly every service attribute available for a specific instance is available, although most are hidden by default. 
You can add more attributes, for example, GPUs, by clicking the `Columns` dropdown. Other dropdowns allow for selecting the `Region`, changing the per-unit basis of calculation (e.g. vCPUs), and changing the term of the `Reserved` instance purchase. Each column that is shown can be further filtered using simple glob matching, and the entire table can be searched using the top-right search box. @@ -32,7 +32,7 @@ By clicking on an individual row in the table, you can select it to be compared ![Detail Pages](/img/tools/instances/detail-pages.png) -For EC2 and RDS, the "API Name" column contains clickable links to each instance type. The Detail Page for the instance is essentially a pivot of the main table, with some additional tools to make the information more digestible. +For EC2 and RDS, the `API Name` column contains clickable links to each instance type. The Detail Page for the instance is essentially a pivot of the main table, with some additional tools to make the information more digestible. ### Pricing Widget @@ -40,11 +40,11 @@ In the upper left, a pricing widget has selectors for calculating the estimated ### Instance Attributes -In the middle of each Detail Page are the major categories of attributes and their values. These attributes are all selectable as columns in the main Instances pages. To request more attributes, click "Open a ticket" in the bottom right. +In the middle of each Detail Page are the major categories of attributes and their values. These attributes are all selectable as columns in the main Instances pages. To request more attributes, click `Open a ticket` in the bottom right. ## Saving and Clearing Filters -Instances automatically saves the filters and selections that are applied to local storage. This means that when you open a new session you will be greeted with the most recent set of filters and columns. This can be helpful for working on services which mostly use the same types of instances.
+Instances automatically saves the applied filters and selections to local storage. This means that when you open a new session you will be shown the most recent set of filters and columns. This can be helpful for working on services that mostly use the same types of instances. To reset the table, click `Clear Filters`. @@ -58,11 +58,11 @@ The table, with its filters applied, sorted, and with columns shown and hidden, ## Contributors -EC2Instances.info was started by [@powdahound](http://twitter.com/powdahound), contributed to by [many](https://github.com/vantage-sh/ec2instances.info/contributors), is now managed and maintained by [Vantage](https://vantage.sh/) and awaits your improvements on [GitHub](https://github.com/vantage-sh/ec2instances.info). In the development of Detail Pages, we used components of designs from [cloudhw.info](https://cloudhw.info/) with permission from [Joshua Powers](https://powersj.io/). +EC2Instances.info was started by [@powdahound](http://twitter.com/powdahound), contributed to by [many](https://github.com/vantage-sh/ec2instances.info/contributors), is now managed and maintained by [Vantage](https://vantage.sh/), and awaits your improvements on [GitHub](https://github.com/vantage-sh/ec2instances.info). In the development of Detail Pages, we used components of designs from [cloudhw.info](https://cloudhw.info/) with permission from [Joshua Powers](https://powersj.io/). ## Warning EC2Instances.info is not maintained by or affiliated with Amazon. The data shown is not guaranteed to be accurate or current. Please [report issues](http://github.com/vantage-sh/ec2instances.info/issues) you see. !!! Contribute -Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack).
+ Contribute to this page on [GitHub](https://github.com/vantage-sh/handbook) or join the `#cloud-costs-handbook` channel in the [Vantage Community Slack](https://vantage.sh/slack). diff --git a/mkdocs.yml b/mkdocs.yml index 4039fe2..904bad0 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -38,7 +38,7 @@ nav: - 'Instances': 'tools/instances.md' - 'General Concepts': - - 'Autoscaling': 'aws/concepts/autoscaling.md' + - 'Auto Scaling': 'aws/concepts/autoscaling.md' - 'Credits': 'aws/concepts/credits.md' - 'IOPS': 'aws/concepts/io-operations.md' - 'Regions' : 'aws/concepts/regions.md' @@ -59,8 +59,8 @@ nav: - 'EC2-Other': 'aws/services/ec2-other-pricing.md' - 'ECR': 'aws/services/ecr-pricing.md' - 'ECS & Fargate': 'aws/services/ecs-and-fargate-pricing.md' + - 'EFS': 'aws/services/efs-pricing.md' - 'ElastiCache': 'aws/services/elasticache-pricing.md' - - 'Elasticsearch': 'aws/services/elasticsearch-pricing.md' - 'ELB': 'aws/services/elb-pricing.md' - 'EMR': 'aws/services/emr-pricing.md' - 'Lambda': 'aws/services/lambda-pricing.md'
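The "simple glob matching" described in the instances.md column-filter docs above can be sketched with shell-style patterns. This is only an illustration of the filtering behavior, not the actual ec2instances.info implementation; the `filter_column` helper and the sample instance names are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical sample of values from the API Name column.
api_names = ["m5.large", "m5.xlarge", "c5.large", "r6g.medium", "t3.micro"]

def filter_column(values, pattern):
    """Keep only the values matching a shell-style glob pattern."""
    return [v for v in values if fnmatch(v, pattern)]

# "m5.*" keeps only the m5 family; "*.large" keeps "large" across families.
print(filter_column(api_names, "m5.*"))
print(filter_column(api_names, "*.large"))
```

A pattern filters one column at a time, mirroring the per-column filter boxes in the table; combining a family pattern on `API Name` with a size pattern would narrow the table the same way stacking two filters does in the UI.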