Getting FinOps Insight Using ML on AWS CUR Report

Written by Mansi Bora | Jun 18, 2024

Introduction

A prominent financial institution faced significant fluctuations in AWS service costs, challenging their budgeting and resource allocation efforts. To enhance their FinOps practices, I worked with them as a consultant, leveraging Machine Learning (ML) on AWS Cost and Usage Reports (CUR) to gain detailed insights into their AWS expenditures. This approach aimed to identify key cost drivers and implement effective resource management strategies.

The Challenge

The institution’s substantial and varying AWS costs made it difficult to maintain financial stability and operational efficiency. Identifying the primary cost contributors and understanding their usage patterns was crucial to devising optimization strategies.

The Solution: Implementing ML for Cost Insights

Data Collection and Preparation
The process began with collecting hourly AWS CUR data, encompassing costs for multiple AWS services such as EC2, S3, Glue, and CloudWatch. The data was structured into a pandas DataFrame to facilitate comprehensive analysis. Pandas, a powerful data manipulation library in Python, was employed for efficient handling and processing of the large dataset.
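As a minimal sketch of this preparation step, the snippet below loads hourly cost line items into a pandas DataFrame and pivots them into one column per service. The sample data and column names are illustrative assumptions, not the exact CUR schema:

```python
import pandas as pd

# Hypothetical CUR-style line items: one row per service per hour.
# Column names and values are assumptions for illustration only.
raw = pd.DataFrame({
    "usage_start": pd.to_datetime([
        "2024-05-01 00:00", "2024-05-01 00:00",
        "2024-05-01 01:00", "2024-05-01 01:00",
    ]),
    "service": ["AmazonEC2", "AmazonS3", "AmazonEC2", "AWSGlue"],
    "unblended_cost": [12.40, 1.15, 11.90, 4.75],
})

# Pivot to one column per service, indexed by hour -- a convenient shape
# for the time-series and correlation analysis that follows.
costs = raw.pivot_table(index="usage_start", columns="service",
                        values="unblended_cost", aggfunc="sum").fillna(0.0)
print(costs.round(2))
```

In practice the same pivot applies to a full CUR export read with `pd.read_csv` or `pd.read_parquet`.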

Exploratory Data Analysis
Exploratory Data Analysis (EDA) was conducted to understand the distribution and trends within the data. The EDA involved several steps and visualizations:

  1. Descriptive Statistics: Summary statistics provided an overview of the data’s central tendency, dispersion, and shape. This included mean, median, standard deviation, and quartiles for each service’s cost.
  2. Time Series Analysis: Line plots were used to visualize the hourly costs of different services over time, revealing patterns and trends.
  3. Distribution Analysis: Histograms and density plots highlighted the distribution of costs for each service, identifying any skewness or outliers.
  4. Correlation Analysis: Heatmaps and correlation matrices were employed to identify relationships between the costs of different services, helping to pinpoint potential dependencies.
  5. Service Comparison: Bar charts and box plots were used to compare the costs of various services, emphasizing the major cost drivers.

The total cost for each service was calculated, and services were ranked accordingly, revealing the primary cost drivers.
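The descriptive statistics, correlation analysis, and cost ranking described above can be sketched in a few lines of pandas. The hourly figures here are toy numbers chosen for illustration:

```python
import pandas as pd

# Toy hourly costs per service (assumed numbers, for illustration only).
costs = pd.DataFrame({
    "AmazonEC2": [12.4, 11.9, 13.1],
    "AmazonS3": [1.2, 1.1, 1.3],
    "AWSGlue": [4.8, 5.2, 4.6],
    "AmazonCloudWatch": [0.4, 0.4, 0.5],
})

# Descriptive statistics: central tendency and dispersion per service.
print(costs.describe().loc[["mean", "std", "50%"]])

# Correlation matrix between service costs (input to a heatmap).
print(costs.corr().round(2))

# Rank services by total spend to surface the primary cost drivers.
totals = costs.sum().sort_values(ascending=False)
print(totals)
```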

Results and Optimization Strategies

The ML analysis identified services like EC2, S3, and Glue as significant cost contributors. Specific optimization strategies were suggested, including:

EC2 Instances

Amazon EC2 (Elastic Compute Cloud) provides resizable compute capacity in the cloud. Here’s how you can optimize costs:

  • Spot Instances: These are spare AWS EC2 instances available at a discounted rate compared to On-Demand instances. They are ideal for workloads with flexible start and end times, allowing you to take advantage of unused capacity. You can save significantly on costs by using Spot Instances for fault-tolerant applications, batch processing, and jobs that can be interrupted and resumed.
  • Optimized Instance Types: AWS offers a wide range of EC2 instance types optimized for different use cases such as compute, memory, storage, and GPU-intensive tasks. Choosing the right instance type based on your application’s requirements can optimize costs by ensuring you are not over-provisioning resources.
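A back-of-the-envelope calculation makes the Spot case concrete. The prices below are assumptions for illustration, not current AWS rates:

```python
# Rough estimate of Spot savings for an interruptible batch workload.
# All prices and counts are illustrative assumptions.
on_demand_hourly = 0.17   # assumed On-Demand price per instance-hour
spot_hourly = 0.05        # assumed average Spot price for the same type
instances = 20
hours_per_month = 300     # batch jobs only, not running 24/7

on_demand_cost = on_demand_hourly * instances * hours_per_month
spot_cost = spot_hourly * instances * hours_per_month
savings_pct = 100 * (1 - spot_cost / on_demand_cost)
print(f"On-Demand: ${on_demand_cost:,.2f}, Spot: ${spot_cost:,.2f}, "
      f"savings: {savings_pct:.0f}%")
```

Actual Spot discounts vary by instance type, region, and time; current prices can be checked in the EC2 console or via the pricing APIs.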

S3 Storage

Amazon S3 (Simple Storage Service) is a scalable object storage service. Here’s how you can optimize costs and manage data efficiently:

  • Lifecycle Policies: Lifecycle policies in S3 enable you to automatically transition objects between different storage classes (e.g., from Standard to Standard-IA or Glacier) or expire them after a specified time. This helps in reducing storage costs by moving less frequently accessed data to cheaper storage tiers.
  • Intelligent-Tiering: This S3 storage class is designed to optimize costs by automatically moving data between two access tiers: frequent access and infrequent access. It monitors access patterns and moves objects that haven’t been accessed for 30 consecutive days to the infrequent access tier, saving costs without performance impact.
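As a sketch, a lifecycle policy of the kind described above can be expressed as the configuration below. The prefix, day counts, and rule ID are placeholder assumptions; this dict matches the shape accepted by boto3's `put_bucket_lifecycle_configuration`:

```python
# Hypothetical S3 lifecycle configuration: transition log objects to
# cheaper storage classes over time, then expire them after a year.
# Prefix and day thresholds are assumptions to adapt per workload.
lifecycle = {
    "Rules": [
        {
            "ID": "tier-down-logs",           # placeholder rule name
            "Filter": {"Prefix": "logs/"},    # assumed key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}
print(lifecycle["Rules"][0]["ID"])
```

Applied via `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle)`, this moves objects to Standard-IA after 30 days, Glacier after 90, and deletes them after a year.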

Glue Jobs

AWS Glue is a serverless data integration service. Here’s how to optimize costs related to Glue jobs:

  • Adjust Worker Types: AWS Glue allows you to specify the type and number of workers for your ETL (Extract, Transform, Load) jobs. By adjusting the worker types based on the job requirements, you can optimize costs by using the right amount of compute power without over-provisioning.
  • Scheduled Jobs during Off-Peak Hours: Running Glue jobs during off-peak hours (when compute resources are typically cheaper) can reduce costs. AWS Glue supports scheduling jobs based on time triggers, allowing you to schedule jobs to run during specific times when compute costs are lower.

By implementing these strategies, you can effectively optimize costs across EC2 instances, S3 storage, and Glue jobs while ensuring efficient resource utilization based on your workload requirements.
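The worker-type trade-off is easy to quantify, since Glue bills per DPU-hour and G.1X/G.2X workers map to 1 and 2 DPUs respectively. The per-DPU-hour rate below is an assumption that varies by region:

```python
# Rough Glue job cost comparison across worker types.
# The DPU mapping reflects the documented G.1X/G.2X worker sizes;
# the per-DPU-hour rate is an assumed figure, not a quoted price.
RATE_PER_DPU_HOUR = 0.44
DPU_PER_WORKER = {"G.1X": 1, "G.2X": 2}

def glue_job_cost(worker_type: str, workers: int, hours: float) -> float:
    """Estimated cost = workers * DPUs per worker * hours * rate."""
    return workers * DPU_PER_WORKER[worker_type] * hours * RATE_PER_DPU_HOUR

# An over-provisioned run vs. a right-sized run of the same job.
print(glue_job_cost("G.2X", 10, 1.0))  # 10 G.2X workers for 1 hour
print(glue_job_cost("G.1X", 5, 1.2))   # 5 G.1X workers, slightly longer
```

Even with a 20% longer runtime, the right-sized job here costs a fraction of the over-provisioned one, which is the kind of gap worker-type tuning is meant to close.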

Detailed Monitoring and Automated Alerts

To maintain continuous cost management, I recommended establishing detailed monitoring and automated alerts using AWS Cost Explorer and CloudWatch, enabling real-time tracking of costs and an immediate response to any anomalies.
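One concrete form such an alert can take is a CloudWatch billing alarm on the `EstimatedCharges` metric. The sketch below builds the alarm definition as a dict; the threshold and SNS topic ARN are placeholders, and the dict would be passed to boto3's `put_metric_alarm` once billing metrics are enabled in the account:

```python
# Sketch of a CloudWatch billing alarm definition. Threshold and SNS
# topic ARN are placeholder assumptions; apply with
# boto3.client("cloudwatch").put_metric_alarm(**alarm).
alarm = {
    "AlarmName": "monthly-spend-over-budget",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,          # billing metrics update roughly every 6 hours
    "EvaluationPeriods": 1,
    "Threshold": 10000.0,     # assumed monthly budget in USD
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:finops-alerts"],
}
print(alarm["AlarmName"])
```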

Conclusion

By applying Machine Learning to AWS CUR data, the financial institution gained valuable FinOps insights, identifying its major cost drivers and the optimization strategies to address them. This approach surfaced significant cost-saving opportunities and better resource utilization techniques, aligning with the institution’s financial goals. Continuous monitoring and Machine Learning models, along with automation, will ensure long-term cost management and operational efficiency.
