Cost optimization in AWS Cloud

Cost optimization is one of the five pillars of the AWS Well-Architected Framework.

AWS's variety of services and pricing options provides the flexibility to manage your costs effectively while maintaining the performance and capacity you need.

AWS, through its partner BigCheese, helps customers achieve the highest performance from AWS services at the lowest cost.

In what follows, expanding on what we have already written on this topic in our blog, we describe the five actions you should take to optimize the cost of using AWS services. You can implement these practices with your IT team, or you can rely on BigCheese's experience to get started on this path.

Cost optimization in AWS:

Objectives: by the end of this document, you should:

  • Know the large-scale actions to take into account to optimize the cost of using AWS.
  • Know what actions to take to optimize the performance of your AWS infrastructure.
  • Be able to tell your team what to watch out for, or hire someone to do it knowing the scope of the work.
  • Be able to put together your own “Cost Optimization Plan”. BigCheese can help at this point with either the definition or the implementation: we will give you a roadmap of the next steps to optimize costs in your AWS account, with the goal that you understand that roadmap and the importance of each step. Once the consultancy is completed, we hand over the know-how so you can keep applying what you have learned, creating a process of continuous improvement across the rest of your company's products and areas.
  • Get to know BigCheese as experts and strategic partners of AWS in Uruguay.

Capacity must match demand

One of the main benefits of AWS is the ability to use services elastically, matching capacity to demand and thereby optimizing costs.

If you are already using services in a non-optimized way, the first step is to identify wasted resources: remove unused infrastructure, then move to a plan of elastic resource use.

The following are concrete actions you can take to detect cost optimization opportunities in the most frequently used services.

Identify Amazon EC2 instances with low utilization and reduce costs by stopping or downsizing them.

We recommend using the AWS Cost Explorer Resource Optimization tool to report on EC2 instances that are idle or have low utilization. Costs can be reduced by stopping or reducing the size of these oversized instances.

It is possible to automate the process of stopping idle instances using AWS Instance Scheduler, for example to shut down unused instances outside of office hours, or on weekends (saving up to 68%).
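If you prefer not to deploy the full Instance Scheduler solution, a small Lambda function can cover the basic case. Below is a minimal, hedged sketch (not the actual Instance Scheduler): it stops every running instance carrying a hypothetical Schedule=office-hours tag, and would be triggered by an EventBridge cron rule in the evening, with a mirror-image function starting them in the morning.

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Find running instances carrying the (hypothetical) Schedule=office-hours tag.
    pages = ec2.get_paginator("describe_instances").paginate(Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    ids = [i["InstanceId"]
           for page in pages
           for r in page["Reservations"]
           for i in r["Instances"]]
    if ids:
        # Stopped instances stop accruing compute charges (EBS storage still bills).
        ec2.stop_instances(InstanceIds=ids)
    return {"stopped": ids}
```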

It is also possible to automate the resizing of EC2 instances using AWS Operations Conductor.

Another interesting alternative is to use EC2 Auto Scaling groups to start or shut down instances according to your application's load peaks: fewer idle servers during periods of low use, while still scaling to support the traffic bursts your business sees, without losing performance or overspending.
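As an illustration, here is a minimal sketch of attaching a target-tracking scaling policy to an existing Auto Scaling group; the group name web-asg and the 50% CPU target are assumptions for the example.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU near 50%; the ASG adds or removes
# instances automatically as load rises and falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # assumed existing group
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```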

Identify Amazon RDS and Amazon Redshift instances with low utilization and reduce costs.

From AWS Trusted Advisor we can run the Amazon RDS Idle DB Instances check to identify database instances that have had no connections during the last 7 days.

To reduce the costs associated with this idle time, these instances can be stopped in an automated way, as explained step by step in this AWS blog post.
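For a one-off case, the stop itself is a single API call. A minimal sketch, assuming an idle instance called reports-db flagged by the Trusted Advisor check:

```python
import boto3

rds = boto3.client("rds")

# Stop the idle instance; note that AWS restarts a stopped RDS instance
# automatically after 7 days, so pair this with a scheduled check.
rds.stop_db_instance(DBInstanceIdentifier="reports-db")  # placeholder identifier
```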

For Redshift, use Trusted Advisor's Underutilized Amazon Redshift Clusters check to identify clusters that have had no connections in the last 7 days and whose cluster-wide average CPU utilization was below 5% for 99% of the last 7 days.

To reduce costs, we can pause these clusters in an automated way, following the step-by-step guide in this blog post.
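The pause itself is also a single API call. A minimal sketch, with analytics-cluster as a placeholder identifier:

```python
import boto3

redshift = boto3.client("redshift")

# While paused, the cluster bills for storage only, not compute.
redshift.pause_cluster(ClusterIdentifier="analytics-cluster")  # placeholder
# Later: redshift.resume_cluster(ClusterIdentifier="analytics-cluster")
```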

Analyze Amazon DynamoDB usage and reduce costs by leveraging the auto scaling and on-demand capacity modes.

First, analyze DynamoDB usage by monitoring two CloudWatch metrics: ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits.

Then, we can automatically scale DynamoDB tables by configuring auto scaling, following the steps explained here.
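For reference, DynamoDB auto scaling is configured through the Application Auto Scaling service. The sketch below, using a hypothetical orders table, registers read capacity as a scalable target and attaches a target-tracking policy (write capacity is configured the same way):

```python
import boto3

aas = boto3.client("application-autoscaling")

# Allow reads on the (hypothetical) "orders" table to scale between 5 and 100 RCUs.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Scale so that roughly 70% of the provisioned read capacity is in use.
aas.put_scaling_policy(
    PolicyName="orders-read-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```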

Another alternative is the on-demand mode, which lets you pay only for what you use, with no need to provision capacity. Naturally, on-demand capacity is somewhat more expensive per request than provisioned capacity, but for certain spiky or unpredictable workloads it is a viable option.

Analyze the use of Amazon EBS volumes.

  • EBS volumes with very low activity (less than 1 IOPS per day) over a 7-day period are probably not in use. Identify them with Trusted Advisor's Underutilized Amazon EBS Volumes check.
  • To reduce costs, first take a snapshot of the volume (in case you need it later), then delete it; a sketch follows this list. Snapshot creation can be automated using Amazon Data Lifecycle Manager. Follow these steps to remove EBS volumes.
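A minimal sketch of that snapshot-then-delete sequence, with a placeholder volume ID:

```python
import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0123456789abcdef0"  # placeholder from the Trusted Advisor report

# Keep a snapshot in case the data is ever needed again...
snapshot = ec2.create_snapshot(
    VolumeId=volume_id,
    Description="backup before deleting underutilized volume",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# ...then delete the volume (it must be detached, i.e. in the "available" state).
ec2.delete_volume(VolumeId=volume_id)
```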

Analyze Amazon S3 usage and reduce costs by leveraging lower-cost storage.

  • Use S3 Analytics to analyze storage access patterns on an object dataset for 30 days or more. It produces recommendations on where you can leverage S3 Infrequent Access (S3 IA) to reduce costs.
  • We can automate the movement of these objects to a lower-cost storage tier through lifecycle policies (a sketch follows this list).
  • Alternatively, you can use S3 Intelligent-Tiering, which automatically analyzes your access patterns and moves objects to the appropriate storage tier.
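A minimal sketch of such a lifecycle policy; the bucket name, prefix and transition days are assumptions for the example:

```python
import boto3

s3 = boto3.client("s3")

# Objects under logs/ move to Standard-IA after 30 days and to Glacier after 90.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```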

Review your network resources and reduce costs by eliminating idle load balancers.

  • Use Trusted Advisor's Idle Load Balancers check to get a report of load balancers with a RequestCount of less than 100 over the last 7 days.
  • Then, follow these steps to eliminate those load balancers and reduce costs (a sketch follows this list).
  • In addition, you can follow these steps to check your data transfer costs using Cost Explorer.
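A hedged sketch applying the same criterion by hand to a single Application Load Balancer: sum RequestCount over the last 7 days and delete the balancer if it stayed under 100 (the names, account and ARN are placeholders):

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")
elbv2 = boto3.client("elbv2")

# The CloudWatch dimension is the ARN suffix: app/<name>/<id> (placeholder here).
lb_dimension = "app/my-alb/0123456789abcdef"
lb_arn = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
          "loadbalancer/" + lb_dimension)  # placeholder account/region

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": lb_dimension}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=7 * 24 * 3600,  # one datapoint covering the whole week
    Statistics=["Sum"],
)
total_requests = sum(p["Sum"] for p in stats["Datapoints"])

if total_requests < 100:  # the Trusted Advisor idleness threshold
    elbv2.delete_load_balancer(LoadBalancerArn=lb_arn)
```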

Keep your instances on the latest generation

Each new generation of instances that AWS releases tends to improve efficiency and lower costs. For example, a t3.medium instance performs better and costs less than a t2.medium instance (AWS claims that T3 improves the price-performance ratio by 30% over T2). Upgrading your instances to the latest generation therefore gives you more value for your money.
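For a single instance, the upgrade is a stop/modify/start sequence. A minimal sketch with a placeholder instance ID (note that T3 instances require ENA networking support, so check the AMI before switching):

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3.medium"},  # T3 needs ENA support in the AMI
)
ec2.start_instances(InstanceIds=[instance_id])
```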

Choose the pricing model according to your needs for cost optimization in AWS.

AWS allows us to use Reserved Instances (RIs) to reduce costs for services such as Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache and Amazon Elasticsearch.

For certain services such as Amazon EC2 or Amazon RDS, we can commit to usage by reserving capacity for specific periods of time (1 or 3 years).

While an RI does not require upfront payment (that is optional), it does commit us to paying for a given service for a fixed period. For example, if we reserve a Linux t3.medium EC2 instance for a year, then no matter how we use it (we can even shut one down and create another), that usage is discounted from the on-demand cost on our bill and charged at the reduced RI rate on the same bill for 12 months.

With Reserved Instances, we can cut costs by up to 72% compared to the equivalent on-demand capacity.

Reserved Instances are available in three payment options:

  • All Upfront (AURI),
  • Partial Upfront (PURI),
  • No Upfront (NURI).

Obviously, the greater the advance payment, the greater the discount.

Use the suggestions provided by AWS Cost Explorer's Reserved Instance purchase recommendations, which are based on your on-demand usage of Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache and Amazon Elasticsearch.
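These recommendations can also be pulled programmatically through the Cost Explorer API (note that AWS bills each Cost Explorer API request). A sketch, assuming no-upfront, 1-year EC2 reservations:

```python
import boto3

ce = boto3.client("ce")

resp = ce.get_reservation_purchase_recommendation(
    Service="Amazon Elastic Compute Cloud - Compute",
    LookbackPeriodInDays="THIRTY_DAYS",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
)
# Print the recommended instance types and their estimated monthly savings.
for rec in resp["Recommendations"]:
    for detail in rec["RecommendationDetails"]:
        print(detail["InstanceDetails"]["EC2InstanceDetails"]["InstanceType"],
              detail["EstimatedMonthlySavingsAmount"])
```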

In any case, BigCheese suggests moving to a reserved instance scheme only after the workload has stabilized on the defined infrastructure.

If you are considering long-term reservations (3 years), we suggest analyzing Convertible Reserved Instances: 3 years is a long time in technology, and otherwise we could lose the ability to adopt newer instance types, possibly at a better price, at some point during that period.

Analyze your AWS service bill on a regular basis

AWS provides us with several tools to analyze the consumption of the services we are using and thus understand what we are paying for.

BigCheese suggests systematizing the analysis of your AWS invoice to find cost optimization opportunities. Specifically, we suggest the following steps:

  • Analyze the “Cost and Usage” report to see whether any services are candidates for resizing (a sketch follows this list).
  • Use Trusted Advisor to find unused services.
  • Create budget plans to monitor billing changes.
  • Share billing information with the different teams/areas of your company that use AWS services, and involve them in the cost optimization plan.
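As a starting point for the first step, the data behind the “Cost and Usage” report can be queried directly. A minimal sketch that prints one example month's unblended cost per service:

```python
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
# One line per service: name and dollar amount for the month.
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```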

Modernize your applications

This section is a whole topic in itself, which probably deserves its own whitepaper.

By way of summary, consider that many of our systems were conceived in different contexts and, in some cases, were simply not designed to consume the cloud as a service.

Some of the items involved in the systems modernization analysis are:

Licensing

Sometimes we run systems that require licensing even when nothing specific forces us to use those options. For the particular case of databases, AWS helps us execute the migration through the AWS Database Migration Service.

Evaluating whether you can do without licensing when it is not strictly required can free up significant resources.

Serverless architectures

Migrating an entire monolithic system to a serverless or microservices architecture can be very costly and can involve many risks. But there are less disruptive strategies that deliver concrete benefits in both performance and cost.

Perhaps we can decouple parts of our systems, starting with the functionalities that demand the most resources or that become the bottleneck when competing for resources on our servers.

It is increasingly common to see traditional or monolithic architectures that integrate Lambda functions and DynamoDB into their current solutions, producing hybrid architectures that keep the reliability of the systems already in operation while reducing the need for infrastructure that is generally oversized to handle specific workloads or particular functionalities.

Consider using Aurora

Aurora is AWS's relational database, compatible with MySQL and PostgreSQL. It is up to five times faster than standard MySQL and three times faster than standard PostgreSQL, and it offers the security, availability and reliability of commercial-grade databases at one-tenth the cost.

If you have no requirements tying you specifically to MySQL or PostgreSQL, Aurora can be an option that delivers higher performance at lower cost, without requiring major technical changes to your systems.

AWS cost optimization

An overview of how to get the most out of your current AWS Cloud resources:

In short, we can often get a little more for what we are already paying, or pay a little less for the same service/usage.

There are many angles of attack when it comes to cost optimization. BigCheese suggests analyzing costs by type of AWS service used and applying the Pareto principle: first, execute some of the proposed strategies on the most significant consumption.

In bullet form, here is our summary of concrete actions that can be executed with little effort and serve as first steps in cost optimization:

  • Create S3 lifecycle policies to move infrequently accessed files to cold storage.
  • Use Reserved Instances whenever possible (when you have been paying for the same instance on-demand for some time).
  • Size your servers according to current demand and do not oversize. You can always adjust automatically when you need more power.
  • Create schedules to shut down/start servers that are not used outside office hours (e.g. testing servers!).
  • Deliver your static content through a CDN rather than directly from its origin. Images, stylesheets and static sites are good candidates for lowering compute usage.
  • Whenever possible, choose horizontal scaling. Fewer, larger servers generally deliver less performance (and cost more) than a larger number of smaller servers.
  • Upgrade to the latest generation of AWS instances, whenever possible.
  • Be sure to analyze your AWS bill.
  • When you have completed these steps, start again!
