Welcome to the last part of my AWS security specialty notes, with one simple goal: to give an overview and the main points of where to look and what to expect. This part is a final revision for AWS certification knowledge.
Even if you are not aiming to pass the exam, feel free to read through the major AWS security technology points. At least as of today, because cloud tech changes at the speed of light.
AWS Web application firewall (WAF)
WAF stands for Web Application Firewall, which helps you protect layer 7 of the ISO/OSI model.
You can integrate WAF with several other AWS resources, such as API Gateway, Application Load Balancers, or CloudFront. It is not possible to integrate WAF with EC2 or RDS directly.
You can build your own rules and use also managed rules from marketplace.
WAF version 1
WAF version one is known as WAF Classic. WAF contains web ACLs, which can be allocated to a resource. An ACL contains rules, and rules contain conditions. Each rule evaluates your request and can perform one of the following actions:
- allow – passes the request to the target AWS resource
- block – blocks the incoming request that matches the conditions
- count – counts matching requests
WAF Classic supports two types of rules:
- Rate-based rule – rate limiting added on top of a regular rule
- Regular rule – a combination of conditions is evaluated against the incoming request.
Available condition types match requests that:
- Originate from an IP address or a range of IP addresses
- Originate from a specific country or set of countries
- Contain a specified string or match a regular expression (regex)
- Are longer than a specified length
- Contain SQL code (SQL injection pattern)
- Contain malicious scripts (XSS injection pattern)
Important note: the default action determines whether AWS WAF Classic allows or blocks a request that doesn't match the conditions in any rule.
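To make the rate-based rule concrete, here is a toy sketch of the idea behind it: requests are counted per source IP over a time window, and an IP over the limit gets blocked. The 2,000-request / 5-minute figures mirror WAF Classic's minimum rate limit and fixed window; everything else (class names, the sliding-window mechanics) is illustrative, not AWS code.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # WAF Classic evaluates rate over a 5-minute window
RATE_LIMIT = 2000      # minimum configurable rate limit in WAF Classic

class RateBasedRule:
    def __init__(self, limit=RATE_LIMIT, window=WINDOW_SECONDS):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # source IP -> request timestamps

    def evaluate(self, source_ip, now):
        """Return 'allow' or 'block' for one incoming request."""
        q = self.hits[source_ip]
        q.append(now)
        # drop timestamps that fell out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        return "block" if len(q) > self.limit else "allow"

rule = RateBasedRule(limit=3, window=60)  # tiny limit for demonstration
actions = [rule.evaluate("203.0.113.7", t) for t in range(5)]
print(actions)  # the 4th request within the window exceeds the limit
```

Note that a real rate-based rule still applies its regular conditions first; only matching requests count toward the limit.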
There is a state-of-the-art reference architecture for WAF automation with a feedback loop available here:
WAF Security Automations | AWS Solutions
The target architecture looks like:
WAF Version 2
AWS WAF version 2 removes some of the limits on the number of rules available to the web application firewall. WAF v2 adds capacity units and rule groups.
- Web ACLs – Work the same way as in the classic firewall and contain the rules.
- Rules – Each rule contains a statement that defines the inspection criteria. The available actions are allow, block, or count.
- Rule groups – You can use rules individually or in reusable rule groups.
AWS provides AWS Managed Rules, and AWS Marketplace vendors sell managed rule groups for enterprise usage. These can vary in capacity unit consumption.
Capacity for rules:
AWS WAF manages capacity for rules, rule groups, and web ACLs:
- Rule capacity – AWS WAF calculates rule capacity when you create or update a rule. The AWS console displays the capacity units consumed as you add rules.
- Rule group capacity – AWS WAF requires that each rule group is assigned an immutable capacity at creation.
- Web ACL capacity – The maximum capacity for a web ACL is 1,500 capacity units. Can be increased by AWS on request.
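A quick sketch of how the capacity budget plays out in practice: before attaching rules and rule groups to a web ACL, you can add up their capacity units against the ACL's limit. The 1,500 WCU default is from the AWS docs; the per-rule costs below are illustrative placeholders, not authoritative values.

```python
# Planned rules and rule groups with their capacity unit (WCU) cost.
# Rule names and WCU figures here are illustrative assumptions.
WEB_ACL_CAPACITY = 1500  # default web ACL maximum, raisable by AWS request

planned_rules = {
    "managed-common-rule-set": 700,   # hypothetical managed group cost
    "managed-sqli-rule-set": 200,     # hypothetical managed group cost
    "custom-geo-block": 1,            # hypothetical custom rule cost
}

total = sum(planned_rules.values())
if total > WEB_ACL_CAPACITY:
    raise ValueError(f"over budget: {total} WCU > {WEB_ACL_CAPACITY}")
print(f"{total} of {WEB_ACL_CAPACITY} WCU used")
```

Since a rule group's capacity is immutable after creation, doing this arithmetic up front avoids surprises when you later try to add rules to a nearly full web ACL.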
In the classic "WAF sandwich" architecture, WAF instances are placed in an Auto Scaling group between two ELBs (Elastic Load Balancers). This helps separate the public-facing environment from internal load balancing. If you use a managed firewall such as FortiWeb or F5, you can even scale your firewall instances to absorb incoming attacks.
DynamoDB is AWS's NoSQL (key-value and document) database as a service.
To ensure that the database is encrypted at rest, DynamoDB server-side encryption (SSE) offers the following key types:
- Default – AWS owned CMK. The key is owned by DynamoDB (no additional charge).
- KMS – Customer managed CMK. (AWS KMS charges apply).
- KMS – AWS managed CMK. (AWS KMS charges apply).
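As a sketch, the three SSE key types map to different shapes of the SSESpecification parameter you would pass to DynamoDB's CreateTable/UpdateTable APIs (e.g. via boto3). The key ARN below is a placeholder; this only builds the request fragment, it does not call AWS.

```python
# Build the SSESpecification fragment for each DynamoDB SSE key type.
# The three branches mirror the bullet list above; the ARN is fake.
def sse_spec(key_type, kms_key_arn=None):
    if key_type == "aws-owned":
        # AWS owned CMK: the default, nothing extra to specify
        return {}
    if key_type == "aws-managed":
        # AWS managed CMK in your account (aws/dynamodb)
        return {"Enabled": True, "SSEType": "KMS"}
    if key_type == "customer-managed":
        # Customer managed CMK: you supply the key
        return {"Enabled": True, "SSEType": "KMS",
                "KMSMasterKeyId": kms_key_arn}
    raise ValueError(f"unknown key type: {key_type}")

spec = sse_spec("customer-managed",
                "arn:aws:kms:eu-west-1:111122223333:key/example-key-id")
print(spec["SSEType"])  # KMS
```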
DynamoDB Encryption Client – protects the data before it goes to the DynamoDB table.
- Can be used with AWS KMS or CloudHSM, and supports custom cryptographic keys.
- Encrypts data on the client side; this can be combined with server-side encryption (AES-256) at rest.
Controlling access to DynamoDB via a VPC endpoint is also a good point to remember. A DynamoDB VPC endpoint has a policy that controls use of the endpoint to access DynamoDB resources.
Example when you want to restrict the access to specific table:
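One possible shape for such an endpoint policy (a sketch only: the account ID, region, table name, and the exact action list are placeholder assumptions) restricts the endpoint to read-only access to a single table:

```python
import json

# Sketch of a VPC endpoint policy allowing only read access to one
# DynamoDB table. All identifiers below are illustrative placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyOneTable",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:eu-west-1:111122223333:table/MyTable"
    }]
}
print(json.dumps(policy, indent=2))
```

Requests through the endpoint to any other table, or with any other action, would then be denied by the endpoint policy even if the caller's IAM policy allows them.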
Reading: Endpoints for Amazon DynamoDB – Amazon Virtual Private Cloud
AWS documentation says: DAX – Amazon DynamoDB Accelerator (DAX) encryption at rest provides an additional layer of data protection by helping secure your data from unauthorized access to the underlying storage.
DAX Encryption at Rest – Amazon DynamoDB
Managing Encrypted Tables in DynamoDB – Amazon DynamoDB
On-Demand Backup and Restore for DynamoDB – Amazon DynamoDB
RDS stands for Relational Database Service. There is a lot to know, from encryption to the public IP of an RDS instance. We cannot cover everything here.
Best practices for protection include:
- Create an individual IAM user for each person who manages Amazon RDS resources
- Do not use root credentials to manage the DB.
- Grant each user the minimum set of permissions (Least privilege)
- Use security groups to allow access only to the resources you need.
- Mark DB as public only when it is needed.
- Use IAM groups for multiple users.
- Rotate your IAM credentials regularly. Ideally, use a secrets management service such as AWS Secrets Manager.
- Choose the correct deployment (Multi-AZ or single instance) based on your needs (prod/dev).
- Enable enhanced monitoring for production systems.
- For logging and monitoring, check: Logging and Monitoring in Amazon RDS – Amazon Relational Database Service
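The least-privilege bullet above can be made concrete with an IAM policy. This is a sketch under assumptions: the ARN is a placeholder, and the action list (describe and reboot only) is just one example of a narrowly scoped operator role.

```python
import json

# Sketch of a least-privilege IAM policy for a user who only needs to
# view and reboot one specific RDS instance. The ARN is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["rds:DescribeDBInstances", "rds:RebootDBInstance"],
        "Resource": "arn:aws:rds:eu-west-1:111122223333:db:my-prod-db"
    }]
}
print(json.dumps(policy))
```

Attaching this to an IAM group (per the best practices above) rather than to individual users keeps permission management tidy as the team grows.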
AWS Redshift is basically a data warehouse offered by AWS. Redshift uses four-tier encryption. How to implement the KMS integration can be found in the reading link below.
Amazon Redshift uses a four-tier, key-based architecture for encryption. The architecture consists of data encryption keys, a database key, a cluster key, and a master key.
Redshift encryption key tiers:
- Data encryption keys encrypt data blocks in the Redshift cluster.
- The database key encrypts data encryption keys in the Redshift cluster.
- The cluster key encrypts the database key for the Amazon Redshift cluster.
- The master key encrypts the cluster key.
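The hierarchy is easier to remember as a chain of key-wrapping: each tier encrypts the key below it, and only the top tier (the master key, held by KMS or an HSM) is stored in plaintext-equivalent form outside the cluster. The toy sketch below illustrates just the chain structure; the XOR-with-hash "wrap" is a stand-in for real AES/KMS encryption and is in no way secure.

```python
import hashlib
import secrets

# Toy key-wrapping: XOR the key material with a hash of the wrapping
# key. Conceptual illustration only -- NOT real cryptography.
def toy_wrap(wrapping_key, key_material):
    pad = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(key_material, pad))

toy_unwrap = toy_wrap  # XOR is its own inverse

master_key = secrets.token_bytes(32)    # top tier (KMS/HSM in Redshift)
cluster_key = secrets.token_bytes(32)
database_key = secrets.token_bytes(32)
data_key = secrets.token_bytes(32)      # encrypts actual data blocks

# Each tier wraps the key one level down, matching the bullets above.
wrapped_cluster = toy_wrap(master_key, cluster_key)
wrapped_database = toy_wrap(cluster_key, database_key)
wrapped_data = toy_wrap(database_key, data_key)

# Unwrapping from the top recovers every key down to the data key.
assert toy_unwrap(master_key, wrapped_cluster) == cluster_key
assert toy_unwrap(cluster_key, wrapped_database) == database_key
assert toy_unwrap(database_key, wrapped_data) == data_key
```

The practical upshot of this layering is key rotation: rotating an upper-tier key only requires re-wrapping the key below it, not re-encrypting all the data blocks.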
AWS Redshift security (as AWS documentation states) :
- Sign-in credentials — Access to the Amazon Redshift console is controlled by your AWS account privileges.
- Access management — To control access to specific Amazon Redshift resources, you define AWS Identity and Access Management (IAM) accounts.
- Cluster security groups — Users must create a cluster security group and associate it with a cluster.
- Cluster encryption — To encrypt the data in all user-created tables, the administrator must enable cluster encryption when launching the cluster. This uses server-side encryption (SSE) with the account's default key managed by the AWS Key Management Service (KMS).
- SSL connections — To encrypt the connection between the SQL client and the cluster, the admin must use SSL encryption.
- Load data encryption — S3 encryption protection.
- Data in transit — Amazon Redshift uses hardware-accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations.
- Column-level access control — To have column-level access control for data in Amazon Redshift, the administrator must create column-level grants.
Logging: to enable audit logging, you must choose yes in the Configure Audit Logging dialog box.
Reading: How Amazon Redshift uses AWS KMS – AWS Key Management Service
AWS ElastiCache (Redis): encryption in transit can be enabled on a replication group by setting the parameter TransitEncryptionEnabled to true. Encryption in transit is optional.
Encryption at-rest is enabled by explicitly setting the parameter AtRestEncryptionEnabled to true.
Redis AUTH – Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands (include the parameter --auth-token).
Redis is also compliant with FedRAMP, HIPAA, PCI DSS. Memcached is not.
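Putting the three settings together, here is a sketch of the parameters you would pass to ElastiCache's CreateReplicationGroup API (e.g. via boto3) to get encryption in transit, encryption at rest, and Redis AUTH. The group ID, description, and token are placeholders, and only the request fragment is built; no AWS call is made.

```python
# Sketch of a CreateReplicationGroup request fragment combining the
# three ElastiCache security settings discussed above. Values are fake.
params = {
    "ReplicationGroupId": "secure-redis",
    "ReplicationGroupDescription": "Redis with encryption and AUTH",
    "Engine": "redis",
    "TransitEncryptionEnabled": True,   # encryption in transit
    "AtRestEncryptionEnabled": True,    # encryption at rest
    "AuthToken": "replace-with-a-long-random-token",  # Redis AUTH
}
# With a real boto3 ElastiCache client you would then call:
# client.create_replication_group(**params)
print(sorted(k for k in params if k.endswith("Enabled")))
```

Note that an auth token requires encryption in transit to be enabled, so these settings tend to come as a package.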
Reading: Data Protection in Amazon ElastiCache – Amazon ElastiCache for Redis
Memcached does not provide features as good as Redis in the context of security. For better security you need to combine best practices from IAM, security groups, and overall VPC security.
There are multiple services that give the customer (user) a container platform (Fargate, ECS, EKS).
If you want to know more and have few minutes check the 101 video about containers.
Some good things to remember:
- When creating a cluster, assign well-defined security groups.
- Create a container instance security group.
- When using ECS, configure the ECS agent properly.
- The task execution role is required by tasks to pull container images and publish container logs to Amazon CloudWatch on your behalf.
There are two types of task definitions:
- Fargate – serverless
- EC2 based
Container security itself:
- Port mapping – Port mappings allow containers to access ports on the host container instance to send or receive traffic.
- Privileged or unprivileged containers.
- Private repository authentication.
- Well defined mount points.
- Docker security options – provided by SELinux and AppArmor multi-level security systems.
- Log driver and log offloading to central logging service.
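Most of the bullets above surface as fields in an ECS container definition. The fragment below is a sketch (image URI, log group, region, and paths are placeholders) showing where each security control lives in the task definition JSON.

```python
import json

# Sketch of the security-related fields of one ECS container definition,
# mapping to the bullets above. All concrete values are placeholders.
container_def = {
    "name": "web",
    "image": "111122223333.dkr.ecr.eu-west-1.amazonaws.com/web:latest",
    # port mapping: expose only what is needed
    "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    # run unprivileged unless there is a strong reason not to
    "privileged": False,
    # docker security options (SELinux/AppArmor style hardening)
    "dockerSecurityOptions": ["no-new-privileges"],
    # well-defined, read-only mount point
    "mountPoints": [{"sourceVolume": "data",
                     "containerPath": "/var/app/data",
                     "readOnly": True}],
    # log driver offloading to a central logging service
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {"awslogs-group": "/ecs/web",
                    "awslogs-region": "eu-west-1"}
    },
}
print(container_def["logConfiguration"]["logDriver"])  # awslogs
```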
If you want to use a managed Kubernetes service, then you need to focus on EKS. There is one very important point, and that is the integration with App Mesh.
Getting started with AWS App Mesh and Amazon ECS – Amazon Elastic Container Service
Amazon Elasticsearch Service
Basically a managed ELK stack. This service represents the Elasticsearch, Logstash, and Kibana triad. This log management and visualization platform can be used to gather logs and analyze the data from your instances.
And that's all for my humble AWS notes for the security specialty. It's not everything, but I hope that even if you are not going for the certification, it helped you understand this very complex topic, which can take years to learn in all its depth. I will make some minor updates after these blog posts are released, so I reserve the right to make minor changes to the content.
I hope that you enjoyed reading, and I look forward to the next blog post!