This blog kicks off a series in which we’ll address topics aligned with the AWS Well-Architected Framework. If you were at AWS re:Invent, you probably heard AWS CTO Werner Vogels talk about well-architected cloud architecture. The AWS Well-Architected Framework provides a consistent approach and guidance to evaluate architectures and implement scalable designs, with a focus on operational excellence, security, reliability, performance efficiency, and cost optimization.

Rather than calling them inactive access keys, we should refer to them as ticking time bombs. An exaggeration maybe, but not by much.

Hackers were able to steal the information of 57 million Uber riders by hacking into a GitHub account, and there they were: AWS access keys, “securely” saved in a repo. We don’t know whether those keys were even used in app development, but it’s not uncommon to see unused keys on AWS accounts.

AWS keys are a necessary evil; sometimes we need to generate them. But there’s a way to limit the risk. I believe that perfection is the enemy of good, so rather than taking a big-bang approach to solving your AWS key issues, I’ll share a pragmatic approach that you can start on now.

1. Disable or remove inactive keys.

The AWS Identity and Access Management (IAM) console shows each key’s status and when it was last used. Ideally, keys that are no longer used get deleted or disabled. The other possibility is that they’re saved in code somewhere, or sitting in clear text on employees’ machines. It’s a best practice to disable these keys, and the console makes their status easy to check.

You earn bonus points if you find inactive keys belonging to inactive employees; remove those users while you’re at it. Here is an AWS best practice for managing keys.

We use nOps Change Management to track inactive keys across all AWS project accounts. We set the alert for 30 days as a default.
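If you’d rather script that check yourself, here’s a minimal sketch using boto3 that flags access keys unused for 30+ days (the same threshold as above); disabling is left commented out so you can review the list first.

```python
# A rough sketch: report IAM access keys that haven't been used in 30+ days.
# Requires iam:ListUsers, iam:ListAccessKeys, iam:GetAccessKeyLastUsed
# (and iam:UpdateAccessKey if you uncomment the disable call).
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last_used = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate")
            if last_used is None or last_used < cutoff:
                print(f"Stale key {key['AccessKeyId']} for user {user['UserName']}")
                # Uncomment to disable instead of just reporting:
                # iam.update_access_key(
                #     UserName=user["UserName"],
                #     AccessKeyId=key["AccessKeyId"],
                #     Status="Inactive",
                # )
```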

2. Don’t store keys in GitHub.

If your app depends on AWS keys and you need time to switch to roles, that’s understandable, but it’s still never a good idea to store keys in GitHub. Even if you remove the keys later, they may still be visible in the commit history.

You can use tools like truffleHog, which scans a Git repo and its commit history to find checked-in keys. If you find keys in Git, rotate or disable them, depending on whether they are still active.
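If you want a quick first pass before (or alongside) a proper scanner, even a crude grep of the commit history for the telltale AKIA access-key-ID prefix catches a lot. A rough sketch, run from inside a clone of the repo; it won’t catch secret access keys or cleverly encoded strings, so don’t treat it as a substitute for truffleHog.

```python
# Rough first-pass scan: grep the full git history for strings that look like
# AWS access key IDs (the familiar "AKIA..." prefix). Run from inside a repo.
import re
import subprocess

# Full patch output for every commit on every branch.
history = subprocess.run(
    ["git", "log", "-p", "--all"],
    capture_output=True, text=True, check=True, errors="replace",
).stdout

# Access key IDs are 20 characters starting with "AKIA".
hits = set(re.findall(r"AKIA[0-9A-Z]{16}", history))
for key_id in sorted(hits):
    print(f"Possible AWS access key ID in history: {key_id}")
```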

If you have to use AWS keys, there are secure ways to store them, like AWS Systems Manager Parameter Store, HashiCorp Vault, or AWS Key Management Service (KMS). Anywhere other than GitHub is fine. Well, maybe not: sharing keys on Slack is not fine, and sharing one key across different users is also not fine. The point is, keep them super secure. If you have a burning reason to share a key with someone, ask yourself: can I use a role instead?

This is one of the few cases where sharing is not caring.
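For the apps that genuinely need a long-lived secret, pulling it from Parameter Store at runtime keeps it out of the repo entirely. A minimal sketch, assuming a SecureString parameter named /myapp/api-key (a made-up name for illustration):

```python
# Fetch a secret from SSM Parameter Store at runtime instead of hardcoding it.
# "/myapp/api-key" is a hypothetical parameter name; use your own.
import boto3

ssm = boto3.client("ssm")
secret = ssm.get_parameter(
    Name="/myapp/api-key", WithDecryption=True
)["Parameter"]["Value"]
# Use `secret` in memory; don't log it or write it to disk.
```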

3. Use roles.

The idea behind IAM roles is simple. Rather than using keys, you grant permission to an EC2 instance to make API calls, so there are no long-term credentials stored in code anywhere. This eliminates the risk of someone compromising your keys. Compromising instances is far more difficult, especially if you follow best practices like allowing access to servers only through a VPN.
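In practice, that means application code running on an instance with a role attached never references credentials at all; the SDK picks up short-lived credentials from the instance profile automatically. For example:

```python
# On an EC2 instance with an IAM role attached, boto3 resolves temporary
# credentials from the instance profile automatically: no keys in code,
# no keys in config files, nothing to leak into a repo.
import boto3

s3 = boto3.client("s3")  # no access key or secret key passed anywhere
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```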

Here’s a little-known practice: don’t only use roles in your app to make API calls; you can also use roles to let different employees provision resources.
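The mechanics look like this: the employee (or a tool acting on their behalf) assumes a provisioning role and gets short-lived credentials from STS. The role ARN and session name below are made up for illustration.

```python
# Assume a provisioning role and get short-lived credentials from STS.
# The role ARN and session name are hypothetical; substitute your own.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/EngineerProvisioningRole",
    RoleSessionName="jane-provisioning",
    DurationSeconds=3600,  # credentials expire after an hour
)["Credentials"]

# A session backed by the temporary credentials.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])
```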

4. Control access.

One of my favorite services on AWS is Service Catalog. It allows you to create and manage catalogs of IT products and services that are approved for use on AWS, and share them with other engineers. You can share something as simple as an Amazon S3 bucket, or something as complex as a multi-tier application with an Auto Scaling group, Amazon RDS, and Amazon DynamoDB tables.

Here’s the amazing thing: the user can provision resources without needing write permissions in AWS. The benefits are twofold. Engineers can still provision resources at the same speed, and IT can easily share services that already follow security best practices, like encryption of data at rest and versioning on S3 buckets.
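To make that concrete, here’s roughly what self-service provisioning looks like through the Service Catalog API; the product ID, artifact ID, and parameters are placeholders for whatever your catalog actually exposes.

```python
# Launch an approved product from Service Catalog. The engineer only needs
# Service Catalog permissions, not write access to the underlying resources.
# Product, artifact, and parameter values here are hypothetical placeholders.
import boto3

sc = boto3.client("servicecatalog")
response = sc.provision_product(
    ProductId="prod-abcd1234example",
    ProvisioningArtifactId="pa-abcd1234example",
    ProvisionedProductName="team-analytics-bucket",
    ProvisioningParameters=[
        {"Key": "BucketName", "Value": "team-analytics-data"},
    ],
)
print(response["RecordDetail"]["Status"])
```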

What’s more, you can use a tool like nOps to create custom workflows, so for a specific service catalog, nOps can trigger a workflow and notify users with an approval email.

5. Monitor continuously.

It’s hard to eliminate keys completely since there are times developers need to create keys to test things locally. Which is totally fine. But you should add monitoring for inactive keys because you can’t control what you don’t track.

We use nOps. It not only tracks AWS infrastructure changes but also provides real-time alerts. It’s as simple as getting a reminder for a meeting. nOps provides preset workflows that let you track keys that have been inactive for a particular time period. In a way, it helps with security and change management compliance without a whole lot of manual work.
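If you want a scriptable check to run alongside (or before you adopt) a tool like nOps, IAM’s credential report puts every user’s key status and last-used date in a single CSV. A minimal sketch you could drop into a cron job or Lambda:

```python
# Pull the IAM credential report and print each user's access key usage;
# a simple building block for a scheduled stale-key check.
import csv
import io
import time

import boto3

iam = boto3.client("iam")

# Kick off report generation and wait until it's ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(report)):
    print(
        row["user"],
        row["access_key_1_active"],
        row["access_key_1_last_used_date"],
    )
```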

6. Run a Well-Architected review.

As noted at the top of this blog, AWS has provided the Well-Architected Framework as a consistent approach and guidance for creating and managing cloud architecture. The framework has five pillars, and security is one of them. Anyone who manages AWS infrastructure should familiarize themselves with the framework.

Even more practical, AWS offers an assessment that evaluates your architecture against the different pillars to ensure modern infrastructures are well architected. (nClouds is one of a select number of AWS partners approved to execute Well-Architected reviews.) The assessment covers operational excellence, security, reliability, performance efficiency, and cost optimization, and it’s sure to surface issues like inactive access keys.

We’ll cover other topics addressed by Well-Architected in future blogs. Review the steps outlined in this blog and get started on defusing those ticking time bombs. You can get started with nOps here. Or contact us if you’re interested in a Well-Architected review.