S3 Cleanup Script

Getting an early jump on spring cleaning in the new year? As you may well know, you cannot simply delete a non-empty S3 bucket. I’ve created a handy little Python script to help. Simply save the script below as s3Deleter.py and run it with python s3Deleter.py <bucketname> <list|delete>

import boto3
import sys

# Validate the command-line arguments before touching AWS.
if len(sys.argv) < 2:
    print("you failed to enter a bucket name, please enter a valid bucket name as the first argument")
    sys.exit(1)
bucketname = sys.argv[1]

if len(sys.argv) < 3:
    print("you failed to enter a bucket action, please enter a valid bucket action (list or delete) as the second argument")
    sys.exit(1)
action = sys.argv[2].lower()

allowed_actions = ['list', 'delete']

if action not in allowed_actions:
    print("you provided an invalid action argument. Please enter either list or delete as the argument")
    sys.exit(1)

s3 = boto3.resource('s3')
bucket = s3.Bucket(bucketname)

def bucket_list():
    print("listing all files in the following S3 Bucket: " + str(bucketname))
    for key in bucket.objects.all():
        print(key.key)

def delete_bucket_contents():
    print("deleting all files in the following S3 Bucket: " + str(bucketname))
    # A bucket must be emptied before it can be deleted.
    bucket.objects.all().delete()

def delete_bucket():
    print("Deleting the bucket: " + str(bucketname))
    bucket.delete()

def main():
    if action == "list":
        bucket_list()
    if action == "delete":
        delete_bucket_contents()
        delete_bucket()

if __name__ == "__main__":
    main()


Maximizing Savings in AWS

The world most certainly looks different than it did when I last posted. With the COVID-19 virus spreading throughout the world and many countries under stay-at-home orders, the world economy has cratered in many industries. This has of course caused companies to figure out where they can reduce costs and save. Here are a few of my tips for saving money in AWS.


Reserve, reserve, reserve. For instances that you will be running 24×7 for at least a year, you should purchase reserved capacity. Whether you choose up-front or no up-front reservations is up to you and your accounting team, but either will save you drastic amounts of money over on-demand pricing. If you may need to scale up the size of instances, I recommend convertible reservations. Just know that if you pay any amount up front and have premium AWS support, which is charged as a percentage of your bill, the up-front reservation will increase your support charge in the month you buy it.

So what should you reserve?
– RDS Instances
– EC2 Instances
– Redshift
– ElasticSearch
– ElastiCache
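To see why reservations matter, here is a rough back-of-the-envelope comparison. The hourly rates below are illustrative assumptions for a single instance, not current AWS prices; check the pricing pages for real numbers.

```python
# Rough comparison of on-demand vs. 1-year reserved pricing for one
# instance running 24x7. Rates are illustrative assumptions only.

HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate, hours=HOURS_PER_YEAR):
    """Total yearly cost of running an instance at a given hourly rate."""
    return hourly_rate * hours

on_demand = annual_cost(0.096)   # hypothetical on-demand $/hour
reserved = annual_cost(0.060)    # hypothetical no-upfront reserved $/hour

savings = on_demand - reserved
percent = savings / on_demand * 100

print(f"on-demand: ${on_demand:,.2f}/year")
print(f"reserved:  ${reserved:,.2f}/year")
print(f"savings:   ${savings:,.2f} ({percent:.0f}%)")
```

Even with no money down, the gap compounds quickly across a fleet of always-on instances.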

Spot Instances and AutoScaling

If your application has components that can scale up and down rapidly, autoscaling will help drastically. It allows you to automatically spin instances up or down as traffic increases or decreases. For non-production parts of your application, or ones that can tolerate faults, you can also look at using spot instances. Spot instance pricing can be a fraction of on-demand pricing. In my organization we use this on an application that processes queued S3 items that aren’t time sensitive.
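As a sketch of the idea, a scale-out decision can be as simple as sizing worker count to the queue backlog. The function and thresholds below are hypothetical; a real deployment would use an Auto Scaling group with a target-tracking policy rather than hand-rolled logic.

```python
# Hypothetical scaling policy: choose a desired worker count from the
# current queue backlog, clamped between a minimum and maximum size.

def desired_capacity(backlog, items_per_instance=100, min_size=1, max_size=10):
    """Return how many workers we want for the given backlog."""
    needed = -(-backlog // items_per_instance)  # ceiling division
    return max(min_size, min(max_size, needed))

print(desired_capacity(0))      # stays at the floor of min_size
print(desired_capacity(450))    # scales out to cover the backlog
print(desired_capacity(5000))   # capped at max_size
```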

S3 Lifecycle Policies

You might be surprised how much you are spending on S3 if you look closely at your bill, particularly if you store media or backups in S3 and never really think about them once they are there. Using lifecycle policies you can move items to the S3 Infrequent Access tier after a period of time, or into Glacier for long-term retention. You can also enable policies to expire content beyond a certain age, which can be helpful for deleting data that is older than your organization’s contractual obligations and data retention policies. For some S3 storage heavy orgs this can be a huge source of savings.
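Here is one way such a policy might look with boto3. The rule name, day counts, and bucket name are arbitrary examples, and the actual API call is left commented out since it requires credentials.

```python
# Example lifecycle configuration: transition objects to Infrequent
# Access after 30 days, to Glacier after 90, and expire them after 365.
# All day counts are illustrative; tune them to your retention policy.

def build_lifecycle_config(ia_days=30, glacier_days=90, expire_days=365):
    return {
        "Rules": [{
            "ID": "archive-and-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = whole bucket
            "Transitions": [
                {"Days": ia_days, "StorageClass": "STANDARD_IA"},
                {"Days": glacier_days, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": expire_days},
        }]
    }

# To apply it (needs AWS credentials, so commented out here):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket",
#     LifecycleConfiguration=build_lifecycle_config())

print(build_lifecycle_config()["Rules"][0]["Expiration"])
```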

Right Sizing Provisioned IOPS

One thing that can be easy to forget is how expensive provisioned IOPS can get when you are using them for RDS and EC2. This is particularly easy to miss if, say, you’re using an AMI that was created with a large amount of provisioned IOPS but the instance is either non-production or not IO intensive. I’ve been able to save a substantial amount on AWS bills by reducing over-provisioned PIOPS.
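The bite is easy to quantify. The per-IOPS rate below is an illustrative assumption, not a quoted AWS price; check the EBS pricing page for your region.

```python
# Back-of-the-envelope cost of over-provisioned IOPS. The rate per
# provisioned IOPS-month is an illustrative assumption only.

PRICE_PER_PIOPS_MONTH = 0.065

def monthly_piops_cost(provisioned_iops):
    """Monthly charge for a volume's provisioned IOPS alone."""
    return provisioned_iops * PRICE_PER_PIOPS_MONTH

over_provisioned = monthly_piops_cost(10_000)  # inherited from an AMI
right_sized = monthly_piops_cost(1_000)        # what the workload needs

print(f"over-provisioned: ${over_provisioned:,.2f}/month")
print(f"right-sized:      ${right_sized:,.2f}/month")
print(f"monthly savings:  ${over_provisioned - right_sized:,.2f}")
```

Multiply that across every volume stamped out from the same AMI and the waste adds up fast.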

Consolidation of Resources

Consider an organization that has dev, QA, staging, and production environments. Staging should mirror production, but the dev and QA environments may each be able to get by with a single RDS instance instead of the multiple instances staging and production require. Your mileage will vary greatly depending on the number of systems you have.

Shutting Down Non-Production Environments Off Hours

If you have non-production instances that aren’t covered by reservations or running on spot instances, you can also save by shutting these environments down overnight and on weekends. Of course this depends on the hours and distribution of your teams, and may not be feasible in global organizations spread across multiple timezones. An easy way to automate this is to write Python boto3 startup and shutdown scripts, deploy them in a Lambda, and set them to run on a cron schedule.
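A minimal sketch of the shutdown half might look like this. The "Schedule: off-hours" tag is a naming convention I’m assuming for the example; the tag-filtering logic is split out so it can be exercised without AWS credentials.

```python
# Sketch of a shutdown Lambda: find running instances tagged for
# off-hours shutdown and stop them. The tag key/value pair is an
# assumed convention, not anything AWS defines.

def instance_ids_to_stop(reservations, tag_key="Schedule", tag_value="off-hours"):
    """Pull matching instance IDs out of a describe_instances response."""
    ids = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if tags.get(tag_key) == tag_value:
                ids.append(instance["InstanceId"])
    return ids

def handler(event, context):
    import boto3  # available in the Lambda runtime
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
    ids = instance_ids_to_stop(resp["Reservations"])
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

Schedule it with a CloudWatch Events (EventBridge) cron rule for the evening, and pair it with a mirror-image startup function for the morning.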

Contact Your AWS Account Rep

If your organization does over $1 million per year in AWS business, you may qualify for some pricing breaks; contact your AWS rep to find out. Additionally, if you are starting to deploy new AWS services as part of a proof of concept, you can contact your account manager and ask for proof-of-concept credits, which may help offset or waive some of the costs during your POC phase.

Use the Tools

AWS Cost Explorer and the billing console have a ton of different reports you can run, including your daily cost trends over time, your reservation coverage by service, and a breakdown of where your spend is going by service.

Other Thoughts?

If I missed anything or you have any other great tips please feel free to leave them in the comments. Hopefully this was a helpful post and may you find lots of savings in your AWS bill!

Python 2.7 EOL

Well, it’s finally happened: Python 2.7 has gone end of life. This may not mean anything immediate for you on your local servers; however, it can pose a problem with AWS Lambdas or other cloud platform functions.

Fortunately Python has a remarkably easy solution for making this transition seamless. 2to3 is a code translation tool that converts your Python 2.x code to 3.x code.

To view the incompatible code and what the updated code would look like in a diff-style output, run the following:

2to3 <yourPathToFile.py>

If you would like to write the changes to the file, run the same command with the -w flag:

2to3 -w <yourPathToFile.py>

This made my AWS Python Lambdas quick and easy to update: I simply updated my CloudFormation templates with the converted code after running 2to3 and changed the runtime in the YAML file to Runtime: “python3.6”.
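To give a flavor of what 2to3 actually rewrites, here are a few of the most common fixes. The "before" forms are shown in comments because they are not valid Python 3.

```python
# Common 2to3 rewrites, shown as before/after pairs.

# Before: print "total is", total   (print statement)
total = 3
print("total is", total)            # print() is a function in Python 3

# Before: for k, v in d.iteritems(): ...
d = {"a": 1}
for k, v in d.items():              # iteritems() is gone in Python 3
    print(k, v)

# Before: result = 7 / 2   (integer division in Python 2, giving 3)
result = 7 // 2                     # // keeps the old floor-division behavior
print(result)
```

Note that 2to3 handles the mechanical rewrites, but division semantics and bytes/str changes still deserve a manual review before you flip the Lambda runtime.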