3. AWS Security Model Overview
Certifications & Accreditations
• Sarbanes-Oxley (SOX) compliance
• ISO 27001 Certification
• PCI DSS Level I Certification
• HIPAA compliant architecture
• SAS 70 (SOC 1) Type II Audit
• FISMA Low & Moderate ATOs
• DIACAP MAC III-Sensitive Systems
Shared Responsibility Model — Customer/SI Partner/ISV controls:
• Guest OS-level security, including patching and maintenance
• Application-level security, including password and role-based access
• Host-based firewalls, including Intrusion Detection/Prevention
• Separation of Access
Physical Security
• Multi-level, multi-factor controlled access environment
• Controlled, need-based access for AWS employees (least privilege)
• Management Plane Administrative Access: multi-factor, controlled, need-based access to administrative hosts
• All access logged, monitored, reviewed
• AWS Administrators DO NOT have logical access inside a customer’s VMs, including applications and data
VM Security
• Multi-factor access to Amazon Account
• Instance Isolation: customer-controlled firewall at the hypervisor level; neighboring instances prevented access; virtualized disk management layer ensures only account owners can access storage disks (EBS)
Network Security
• Instance firewalls can be configured in security groups; traffic may be restricted by protocol, by service port, as well as by source IP address (individual IP or Classless Inter-Domain Routing (CIDR) block)
• Virtual Private Cloud (VPC) provides IPSec VPN access from an existing enterprise data center to a set of logically isolated AWS resources
• Support for SSL end point encryption for API calls
6. AWS Certifications
Sarbanes-Oxley (SOX)
ISO 27001 Certification
Payment Card Industry Data Security
Standard (PCI DSS) Level 1 Compliant
SSAE 16 (SOC 1) Type II Audit
FISMA A&As
• Multiple NIST Low Approvals to Operate (ATO)
• NIST Moderate, GSA issued ATO
• FedRAMP
DIACAP MAC III Sensitive IATO
Customers have deployed various compliant applications, such as HIPAA-compliant healthcare workloads
7. SOC 1 Type II
Amazon Web Services now publishes a Service Organization Controls 1 (SOC 1), Type 2
report every six months and maintains a favorable, unqualified opinion from its
independent auditors. AWS identifies the controls relating to operational performance
and security that safeguard customer data. The SOC 1 audit attests that AWS’ control
objectives are appropriately designed and that the individual controls defined to
safeguard customer data are operating effectively. Our commitment to the SOC 1
report is ongoing, and we plan to continue our process of periodic audits.
The audit for this report is conducted in accordance with the Statement on Standards for
Attestation Engagements No. 16 (SSAE 16) and the International Standards for Assurance
Engagements No. 3402 (ISAE 3402) professional standards. This dual-standard report can
meet a broad range of auditing requirements for U.S. and international auditing bodies. This
audit is the replacement of the Statement on Auditing Standards No. 70 (SAS 70) Type II
report.
This report is available to customers under NDA.
8. SOC 1 Type II – Control Objectives
Control Objective 1: Security Organization
Control Objective 2: Amazon Employee Lifecycle
Control Objective 3: Logical Security
Control Objective 4: Secure Data Handling
Control Objective 5: Physical Security
Control Objective 6: Environmental Safeguards
Control Objective 7: Change Management
Control Objective 8: Data Integrity, Availability and Redundancy
Control Objective 9: Incident Handling
9. ISO 27001
AWS has achieved ISO 27001 certification of our
Information Security Management System (ISMS)
covering AWS infrastructure, data centers in all regions
worldwide, and services including Amazon Elastic
Compute Cloud (Amazon EC2), Amazon Simple Storage
Service (Amazon S3) and Amazon Virtual Private Cloud
(Amazon VPC). We have established a formal program
to maintain the certification.
10. PCI DSS Level 1 Service Provider
PCI DSS 2.0 compliant
Covers core infrastructure & services
• EC2, VPC, S3, EBS, RDS, ELB, and IAM
Use normally, no special configuration
Leverage the work of our QSA
AWS will work with merchants and designated Qualified
Incident Response Assessors (QIRA)
• can support forensic investigations
Certified in all regions
11. Physical Security
Amazon has been building large-scale data centers for
many years
Important attributes:
• Non-descript facilities
• Robust perimeter controls
• Strictly controlled physical access
• 2 or more levels of two-factor auth
Controlled, need-based access for
AWS employees (least privilege)
All access is logged and reviewed
12. AWS Regions and Edge Locations
AWS Regions: GovCloud (US ITAR Region), US West (Northern California), US West (Oregon), US East (Northern Virginia), South America (Sao Paulo), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo)
AWS Edge Locations
13. AWS Regions and Availability Zones
Customer Decides Where Applications and Data Reside
14. AWS Identity and Access Management
Enables a customer to create multiple Users and
manage the permissions for each of these Users.
Secure by default; new Users have no access to
AWS until permissions are explicitly granted.
AWS IAM enables customers to minimize the
use of their AWS Account credentials. Instead,
all interactions with AWS Services and resources
should be with AWS IAM User security
credentials.
Customers can enable MFA devices for their
AWS Account as well as for the Users they have
created under their AWS Account with AWS
IAM.
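The least-privilege idea above can be made concrete with a policy document. Below is a minimal sketch of the JSON policy format IAM uses to grant a User only the API actions they need; the bucket name is hypothetical, chosen for illustration:

```python
import json

# Minimal least-privilege policy sketch: the IAM User it is attached to may
# only read objects from one S3 bucket, and nothing else.
# "example-bucket" is a hypothetical name, not from the presentation.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Because new Users start with no permissions, anything not explicitly allowed by a statement like this is denied.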
16. AWS MFA Benefits
Helps prevent anyone with unauthorized
knowledge of your e-mail address and password
from impersonating you
Requires a device in your physical possession to
gain access to secure pages on the AWS Portal or
to gain access to the AWS Management Console
Adds an extra layer of protection to sensitive
information, such as your AWS access identifiers
Extends protection to your AWS resources such as
Amazon EC2 instances and Amazon S3 data
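Soft-token MFA devices of the kind described here typically implement the standard time-based one-time password (TOTP) algorithm from RFC 6238. As a sketch of what the token computes, here is a minimal stdlib-only implementation (illustrative, not AWS code):

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password, as most MFA soft tokens use."""
    counter = for_time // step                            # 30-second time window
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

The server performs the same computation, so a stolen password alone is useless without the device that holds the shared secret.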
17. Amazon EC2 Security
Host operating system
• Individual SSH keyed logins via bastion host for AWS admins
• All accesses logged and audited
Guest operating system
• Customer controlled at root level
• AWS admins cannot log in
• Customer-generated keypairs
Firewall
• Mandatory inbound instance firewall, default deny mode
• Outbound instance firewall available in VPC
• VPC subnet ACLs
Signed API calls
• Require X.509 certificate or customer’s secret AWS key
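To illustrate the signed-API-call idea, here is a deliberately simplified sketch of HMAC-based request signing. It is not the exact AWS signature algorithm (which canonicalizes requests in a specific, versioned way); the canonical string and key below are hypothetical:

```python
import hashlib
import hmac

def sign_request(secret_key: str, string_to_sign: str) -> str:
    """Simplified sketch: derive a signature from the customer's secret key and
    the canonicalized request, so the request cannot be forged or altered
    without knowledge of the key."""
    mac = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha256)
    return mac.hexdigest()

# Hypothetical canonical request string and secret key, for illustration only.
canonical = "GET\nec2.amazonaws.com\n/\nAction=DescribeInstances"
sig = sign_request("EXAMPLE-SECRET-KEY", canonical)
print(sig)
```

The service recomputes the signature with its copy of the secret; a mismatch (wrong key, or a tampered request) causes the call to be rejected.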
18. Amazon EC2 Instance Isolation
[Diagram: instances for Customer 1 … Customer n run on the hypervisor; each customer’s virtual interfaces pass through that customer’s own Security Groups and a firewall before reaching the physical interfaces.]
19. Virtual Memory & Local Disk
[Diagram: Amazon EC2 instances with a customer-encrypted file system and encrypted swap file on local disk.]
• Proprietary Amazon disk management prevents one Instance from
reading the disk contents of another
• Local disk storage can also be encrypted by the customer for an added
layer of security
20. EBS Wiping / Data Destruction
Blocks Zeroed Out Upon Provisioning
Logical-to-Physical Block Mapping
• Created during provisioning
• Destroyed during de-provisioning
Failed or Decommissioned Hardware
• Degaussed
• Physically destroyed
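The block-zeroing step above can be illustrated at file level. This sketch overwrites a throwaway file with zeros; real volumes are wiped below the filesystem layer, so this is an analogy only:

```python
import os
import tempfile

def zero_wipe(path: str, block_size: int = 4096) -> None:
    """Overwrite every block of a file with zeros in place — an illustrative
    stand-in for the block zeroing AWS describes for storage provisioning."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            chunk = min(block_size, remaining)
            f.write(b"\x00" * chunk)
            remaining -= chunk

# Demo on a temp file standing in for a deprovisioned volume.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"sensitive customer data")
    name = tmp.name
zero_wipe(name)
with open(name, "rb") as f:
    data = f.read()
os.unlink(name)
print(data == b"\x00" * 23)
```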
21. Network Security Considerations
DDoS (Distributed Denial of Service):
• Standard mitigation techniques in effect
MITM (Man in the Middle):
• All endpoints protected by SSL
• Fresh EC2 host keys generated at boot
IP Spoofing:
• Prohibited at host OS level
Unauthorized Port Scanning:
• Violation of AWS TOS
• Detected, stopped, and blocked
• Ineffective anyway since inbound ports are blocked by default
Packet Sniffing:
• Promiscuous mode is ineffective
• Protection at hypervisor level
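The "inbound ports blocked by default" point amounts to default-deny rule evaluation: a packet is admitted only if some rule matches its protocol, port, and source address. A minimal sketch (the rule shapes are illustrative, not the real security-group data model):

```python
import ipaddress

# Illustrative inbound rules: HTTPS from anywhere, SSH from one office block.
rules = [
    ("tcp", 443, ipaddress.ip_network("0.0.0.0/0")),
    ("tcp", 22, ipaddress.ip_network("203.0.113.0/24")),
]

def allowed(proto: str, port: int, src_ip: str) -> bool:
    """Default deny: admit traffic only when an explicit rule matches."""
    src = ipaddress.ip_address(src_ip)
    return any(proto == p and port == prt and src in net
               for p, prt, net in rules)

print(allowed("tcp", 22, "203.0.113.10"))   # inside the permitted CIDR
print(allowed("tcp", 22, "198.51.100.7"))   # no rule matches: denied
```

With nothing open by default, a port scan against a fresh instance finds no listening surface at all.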
22. Amazon Virtual Private Cloud (VPC)
Create a logically isolated environment in Amazon’s highly
scalable infrastructure
Specify your private IP address range into one or more public or private
subnets
Control inbound and outbound access to and from individual subnets using
stateless Network Access Control Lists
Protect your Instances with stateful filters for inbound and outbound traffic using
Security Groups
Attach an Elastic IP address to any instance in your VPC so it can be reached
directly from the Internet
Bridge your VPC and your onsite IT infrastructure with an industry standard
encrypted VPN connection and/or AWS Direct Connect
Use a wizard to easily create your VPC in 4 different topologies
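Carving a private IP address range into subnets, as described above, is plain CIDR arithmetic. A sketch using an illustrative 10.0.0.0/16 VPC range:

```python
import ipaddress

# Illustrative VPC address range, split into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

public = subnets[0]    # e.g. a public subnet behind an Internet gateway
private = subnets[1]   # e.g. a private subnet reached only over the VPN
print(public, private, len(subnets))
```

Each subnet then gets its own route table and network ACLs, which is what makes the public/private topologies in the wizard possible.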
25. Amazon VPC - Dedicated Instances
New option to ensure physical hosts are not shared with
other customers
$10/hr flat fee per Region + small hourly charge
Can identify specific Instances as dedicated
Optionally configure entire VPC as dedicated
26. AWS Deployment Models
Isolation dimensions: logical server and application isolation; granular information access policy; logical network isolation; physical server isolation; physical network and facility isolation; government only; ITAR compliant (US persons only).
• Commercial Cloud — sample workloads: public-facing apps, web sites, dev/test, etc.
• Virtual Private Cloud (VPC) — sample workloads: data center extension, TIC environment, email, FISMA Low and Moderate
• AWS GovCloud (US) — sample workloads: US-persons-compliant and government-specific apps
27. Thanks!
Remember to visit
https://aws.amazon.com/security
Editor’s notes
Good afternoon. I’m Ryan Holland, a Solutions Architect on our partner team, and today’s presentation is about security and compliance in the AWS environment. Specifically, we will cover many of the security controls and the compliance and certification regimes in place at AWS, so let’s get started.
When we think about AWS, I find people often immediately think of EC2 or maybe S3, but it’s important to understand there’s quite a bit more to the platform. When we discuss the security controls, they will apply to many parts of the AWS computing platform, and on top of this platform will be your application. So to arrive at a comprehensive solution, we have to have security that starts at the infrastructure and extends into your application.
This slide is a real eye chart, but at a high level it covers the areas of security that we will cover in the presentation: physical security, network security, the security of our management plane, and how we demonstrate the security measures we have in place to our customers, which is through certifications and accreditations. We will also focus on the Shared Responsibility Model, up in the right-hand corner. This shared responsibility model for security is very important because it outlines who is responsible for the many areas of security.
In a bit more detail, you can see here we’ve outlined the specific areas that are the responsibility of AWS to secure and those that are the customer’s responsibility. AWS is responsible for a large portion of the security of the platform, but there are areas that are out of our control, and these need to be properly secured by the customer. Looking at what is AWS’s responsibility, you can think of it as everything from the ground up to the hypervisor, which includes the physical security of the data centers, the infrastructure in the DCs, and the virtualization components. On the customer side of this model, you can see that it’s focused on the application and the operating system, but there are also some tools provided as part of the infrastructure that the customer must configure properly, such as Security Groups and NACLs.
Security is a broad topic, so one thing I would encourage everyone to do is visit our security and compliance website at /security. It has a good deal of information, including security bulletins and best-practice documentation that will help you evaluate your implementation in AWS and ensure you’re operating in a way that is most effective in protecting your infrastructure and information. There are also two whitepapers I’d like to specifically mention. The first is the Security Whitepaper, which goes into pretty good detail on many of the controls we have in place at AWS and how we approach them. The second is the Risk and Compliance Whitepaper, which covers specific compliance certifications that customers might be subject to, such as PCI or HIPAA. Also make sure you check regularly for updates to these; the Risk and Compliance Whitepaper was recently updated in July to include the CSA questionnaire, for example, so these whitepapers will be updated fairly regularly.
So let’s look at many of the certifications and audits that we have at AWS. Some of these, such as ISO 27001, are certifications where there’s a prescribed set of controls that must be met, and others are audits of controls we have defined. The first one there, SOX, probably doesn’t mean much to many folks here in Brazil, but it is important for any company that is traded on a US exchange, so for them it is very important. The second one, ISO 27001, tends to be very important for customers outside North America since it’s an international standard, along with PCI, which also has a set of requirements that businesses need to meet regardless of geography. There’s also the SSAE 16 (SOC 1) audit that we will look at in some detail shortly; this replaced the SAS 70 a while back. And two others, FISMA and DIACAP, are US federal government civilian and military certifications that we won’t really address today.
Let’s first take a look at the SSAE 16 SOC 1 report. This is an audit rather than a certification, which means we have defined a set of controls describing how we operate our services and protect customer data; an auditor then reviews these controls to ensure they are effective, and then comes into our environment and ensures the controls are being followed. This gives our customers good insight into our security controls. This report replaces the SAS 70 Type II report that many people might be familiar with. We’ll now look at the control objectives covered in our SOC 1 audit, and if you’d like to see the full SOC 1 report, we make that available to our customers under a non-disclosure agreement.
Amazon Web Services publishes a Statement on Auditing Standards No. 70 (SAS 70) Type II Audit report every six months and maintains a favorable opinion from its independent auditors. AWS identifies those controls relating to the operational performance and security of its services. Through the SAS 70 Type II report, an auditor evaluates the design of the stated control objectives and control activities and attests to the effectiveness of their design. The auditors also verify the operation of those controls, attesting that the controls are operating as designed. Provided a customer has signed a non-disclosure agreement with AWS, this report is available to customers who require a SAS 70 to meet their own audit and compliance needs. The AWS SAS 70 control objectives are provided here. The report itself identifies the control activities that support each of these objectives.
Security Organization: Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization.
Amazon User Access: Controls provide reasonable assurance that procedures have been established so that Amazon user accounts are added, modified and deleted in a timely manner and are reviewed on a periodic basis.
Logical Security: Controls provide reasonable assurance that unauthorized internal and external access to data is appropriately restricted and access to customer data is appropriately segregated from other customers.
Secure Data Handling: Controls provide reasonable assurance that data handling between the customer’s point of initiation to an AWS storage location is secured and mapped accurately.
Physical Security: Controls provide reasonable assurance that physical access to Amazon’s operations building and the data centers is restricted to authorized personnel.
Environmental Safeguards: Controls provide reasonable assurance that procedures exist to minimize the effect of a malfunction or physical disaster to the computer and data center facilities.
Change Management: Controls provide reasonable assurance that changes (including emergency / non-routine and configuration) to existing IT resources are logged, authorized, tested, approved and documented.
Data Integrity, Availability and Redundancy: Controls provide reasonable assurance that data integrity is maintained through all phases including transmission, storage and processing.
Incident Handling: Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved.
AWS’ commitment to SAS 70 is on-going, and AWS will continue the process of periodic audits. In addition, in 2011 AWS plans to convert the SAS 70 to the new Statement on Standards for Attestation Engagements (SSAE) 16 format (equivalent to the International Standard on Assurance Engagements [ISAE] 3402).
The SSAE 16 standard replaces the existing SAS 70 standard, and implementation is currently expected to be required by all SAS 70 publishers in 2011. This new report will be similar to the SAS 70 Type II report, but with additional required disclosures and a modified format.
Next let’s look at ISO 27001. Many of the control objectives we cover in the SOC 1 are also relevant to ISO 27001; the main difference is that rather than being an audit where we define the control objectives and an auditor validates that we are following those controls, ISO 27001 is a certification which outlines a set of requirements that our Information Security Management System must meet.
The next certification I want to discuss is PCI. AWS satisfies the requirements under PCI DSS for shared hosting providers, and AWS has also been successfully validated against standards applicable to a Level 1 service provider under PCI DSS Version 2.0. Now, that doesn’t mean that by using AWS you are automatically PCI compliant; it means merchants and other PCI service providers can use the AWS infrastructure for storing, processing, and transmitting credit card information in the cloud, as long as those customers create a PCI-compliant environment for their part of the shared responsibility. This is another area where our partners, such as SafeNet, can help customers build compliant architectures in AWS, as part of the customer responsibility for PCI will be to ensure that cardholder data is encrypted at rest.
So now that we’ve covered quite a bit on compliance and certifications, let’s dig a bit deeper in a few areas, the first one being the physical security of our data centers. This is an area we have a lot of experience in, as Amazon has been building and maintaining large data centers for many years now; even before AWS, Amazon.com obviously needed quite a bit of infrastructure. Some key points about how we design these: first off, they are all designed to the same standards, so our data centers here in Sao Paulo have the same sets of controls in place as the DCs in Northern Virginia. Looking at the buildings themselves, they are non-descript; what we mean by that is you’ll never see a big Amazon or AWS sign on a building. In fact, the vast majority of AWS employees don’t even know where they are beyond the regional description; they know we have some here in Sao Paulo, but not which buildings they are. The main driver for that is related to the third bullet there, and that’s ensuring that access to the DCs is needs-based and tightly controlled: if you don’t need access to a data center to perform your job function, then you’re not allowed to go in.
As we build these data centers, we are doing so in many different geographies; you can see on this slide the many regions we have throughout the world. The orange squares are where we have regions; these are where we have all the common services within AWS, such as EC2, S3, etc. The red circles are CloudFront edge locations, which are used to cache content so you can have it distributed as close to your customers as possible. This expansion not only allows customers to achieve lower latency and higher throughput, but can also be very important for meeting regulatory or compliance mandates around data sovereignty, where you might not be allowed to have data exist outside certain countries or regions.
There are several examples of this: Europe is pretty well known for mandating that PII of EU citizens not be stored outside the EU, the US Government is another example, and many other countries and individual companies have similar policies. We handle this by defining our Regions as the boundaries within which any of our services can replicate data. Inside a region we have several Availability Zones, each designed to contain any failure within that AZ; because of this, many of our services replicate data to multiple AZs to provide redundancy and durability, and it’s also a recommended best practice for customers to architect their applications in the same way. But we will never replicate or copy your data outside the region you place it in. Now, you as the customer and owner of that data are free to copy your data, but we won’t do it for you.
Another key part of security is understanding who can access your infrastructure and what actions they can perform when doing so. This isn’t something that’s new to the cloud; it’s something I’m sure everyone who runs their own infrastructure today is already doing in some fashion. To allow customers to have role-based access control over their infrastructure and control what actions an administrator can take within the AWS environment, we have a service called IAM. This allows you to create multiple users and groups and assign them the set of permissions needed for their role. This extends to applications as well: we see more and more applications written to support our APIs, which is great for enabling automation and scalability, but we need to make sure that the keys issued for those applications meet the test of least privilege, and IAM allows us to do that by specifically allowing access, for both people and applications, only to the API calls they need.
Another feature of IAM is the ability to enable MFA; this is a good security practice and really simple to use. You can use hardware tokens like the Gemalto device you see there, but soft tokens are also supported, so realistically anyone with a smartphone has access to an MFA device.
The obvious benefit of MFA is to add additional security to a username/password authentication scheme, so that if someone has their password compromised, the attacker cannot use it without also obtaining their token. Another benefit in AWS is being able to add an extra layer of protection for critical items in S3, where an MFA code can be required to delete an object.
Let’s take a deeper look into EC2 itself now. We look at security here from two perspectives: the host operating system and the guest OS. The host OS is the part we control; that’s what has our code on it and sits on the physical hardware. Every individual uses their own SSH keys, no passwords, and must use the bastion hosts we discussed earlier. Again, all their access is audited and logged, and the audited piece is really important: just logging things doesn’t accomplish much if you don’t look at the logs. The guest OS is controlled by you; you have full root or administrator access to your instances in EC2. We don’t have any access to this OS, we don’t have copies of your private keys, and we don’t have your Windows administrator passwords. And each of these instances will have a firewall, because we require it. The firewall is called a security group, and it is an inbound firewall in EC2 standard; in VPC, which we’ll talk about more in a minute, you also get the ability to do egress filtering with security groups. By default the security groups deny all traffic, so administrators have to specifically open the ports needed for specific sources; this helps ensure as small an attack surface on your OS or application as possible.
These security groups also play a key role in providing instance isolation. In a multi-tenant environment, it’s natural that customers would want to make sure another customer’s instance cannot send traffic to theirs unless they want to allow it. This slide gives a good graphical explanation of the traffic flow between instances, which is to say there can be no direct communication from instance to instance without going through that firewall layer and having the traffic pass through the security groups. Note this firewall lives outside the OS, which allows for some separation of duties: perhaps the server admin has full privileges to the OS but not the AWS infrastructure, so they cannot unilaterally make changes to those firewall rules.
Virtual memory and local disks: this relates to the ephemeral or instance storage available to each instance type except micro instances. This storage is presented as block devices, but customer instances have no access to raw disk devices; instead they are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer’s data is never unintentionally exposed to another. AWS recommends customers further protect their data; one common solution is to run an encrypted file system on top of the virtualized disk device.
Next I want to cover some of the procedures we have in place for data remanence and data destruction. First, when a new storage volume is provisioned, we zero out all the blocks. Also, the volume presented to the user has a logical-to-physical mapping in our infrastructure that is created at the time the volume is provisioned and destroyed during de-provisioning. Ultimately, when the drives are replaced, they are first degaussed and then physically destroyed. Additionally, encryption is another layer of protection that can be employed on EBS as well; one of our partners, Trend Micro, will be doing a session later today on their EBS encryption product.
Network security is something that comes up quite a bit with customers, so let’s look at the common threats, starting with DDoS attacks. This is something we have good experience in dealing with, and we do have mitigation techniques in place to help identify and remove attack packets, but it is something that you will have to be involved with as well.
Point of slide: to explain VPC’s high-level architecture, walking through the discrete elements of a VPC and a specific data flow to exemplify 1) data-in-transit security and 2) continued AAA control by the enterprise.
AWS (“orange cloud”): What everybody knows of AWS today.
Customer’s Network (“blue square”): The customer’s internal IT infrastructure.
VPC (“blue square on top of orange cloud”): Secure container for other object types; includes a Border Router for external connectivity. The isolated resources that customers have in the AWS cloud.
Cloud Router (“orange router surrounded by clouds”): Lives within a VPC; anchors an AZ; presents stateful filtering.
Cloud Subnet (“blue squares” inside VPC): Connects instances to a Cloud Router.
VPN Connection: The Customer Gateway and VPN Gateway anchor the two sides of the VPN Connection and enable secure connectivity, implemented using industry-standard mechanisms. Please note that we currently require that whatever customer gateway device is used supports BGP. We actually terminate two (2) tunnels, one tunnel per VPN Gateway, on our side. Besides providing high availability, this lets us service one device while maintaining service. As such, we can connect to one of the customer’s BGP-supporting devices (preferably running JunOS or IOS).
Multiple Levels of Security
Virtual Private Cloud: Each VPC is a distinct, isolated network within the cloud. At creation time, an IP address range for each VPC is selected by the customer. Network traffic within each VPC is isolated from all other VPCs; therefore, multiple VPCs may use overlapping (even identical) IP address ranges without loss of this isolation. By default, VPCs have no external connectivity. Customers may create and attach an Internet Gateway, VPN Gateway, or both to establish external connectivity, subject to the controls below.
API: Calls to create and delete VPCs, change routing, security group, and network ACL parameters, and perform other functions are all signed by the customer’s Amazon Secret Access Key, which could be either the AWS Account’s Secret Access Key or the Secret Access Key of a user created with AWS IAM. Without access to the customer’s Secret Access Key, Amazon VPC API calls cannot be made on the customer’s behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. Amazon recommends always using SSL-protected API endpoints. AWS IAM also enables a customer to further control what APIs a newly created user has permission to call.
Subnets: Customers create one or more subnets within each VPC; each instance launched in the VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked.
Route Tables and Routes: Each subnet in a VPC is associated with a routing table, and all network traffic leaving a subnet is processed by the routing table to determine the destination.
VPN Gateway: A VPN Gateway enables private connectivity between the VPC and another network. Network traffic within each VPN Gateway is isolated from network traffic within all other VPN Gateways. Customers may establish VPN Connections to the VPN Gateway from gateway devices at the customer premise. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.
Internet Gateway: An Internet Gateway may be attached to a VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet Gateway. AWS provides reference NAT AMIs that can be extended by customers to perform network logging, deep packet inspection, application-layer filtering, or other security controls. This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet Gateway, therefore enabling the customer to implement additional security through separation of duties.
Amazon EC2 Instances: Amazon EC2 instances running within an Amazon VPC retain all of the benefits described earlier related to the host operating system, guest operating system, hypervisor, instance isolation, and protection against packet sniffing.
Tenancy: VPC allows customers to launch Amazon EC2 instances that are physically isolated at the host hardware level; they will run on single-tenant hardware. A VPC can be created with ‘dedicated’ tenancy, in which case all instances launched into the VPC will utilize this feature. Alternatively, a VPC may be created with ‘default’ tenancy, but customers may specify ‘dedicated’ tenancy for particular instances launched into the VPC.
Firewall (Security Groups): Like Amazon EC2, Amazon VPC supports a complete firewall solution enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, as well as by source/destination IP address (individual IP or Classless Inter-Domain Routing (CIDR) block). The firewall isn’t controlled through the guest OS; rather, it can be modified only through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling the customer to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports are opened by the customer, and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages customers to apply additional per-instance filters with host-based firewalls such as iptables or the Windows Firewall.
Network Access Control Lists: To add a further layer of security within Amazon VPC, customers can configure network ACLs. These are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within a VPC. These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol, service port, and source/destination IP address. Like security groups, network ACLs are managed through Amazon VPC APIs, adding an additional layer of protection and enabling additional security through separation of duties.
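The stateless, ordered evaluation of network ACL rules described above can be sketched like this (rule numbers and values are illustrative):

```python
import ipaddress

# Illustrative network ACL: rules are evaluated in rule-number order and the
# first match wins. Unlike a stateful security group, there is no connection
# tracking, so return traffic would need its own rule.
acl = [
    (100, "deny",  "tcp", 22,  "0.0.0.0/0"),   # block SSH from everywhere
    (200, "allow", "tcp", 443, "0.0.0.0/0"),   # allow HTTPS
]

def evaluate(proto: str, port: int, src_ip: str) -> str:
    src = ipaddress.ip_address(src_ip)
    for _, action, p, prt, cidr in sorted(acl):    # lowest rule number first
        if p == proto and prt == port and src in ipaddress.ip_network(cidr):
            return action
    return "deny"                                  # implicit default deny

print(evaluate("tcp", 443, "198.51.100.7"))
print(evaluate("tcp", 22, "198.51.100.7"))
```

Layering these stateless subnet filters under the stateful security groups is what gives the defense-in-depth described in this section.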
Amazon SimpleDB Security
Amazon SimpleDB APIs provide domain-level controls that only permit authenticated access by the domain creator; therefore the customer maintains full control over who has access to their data. Amazon SimpleDB access can be granted based on an AWS Account ID. Once authenticated, an AWS Account has full access to all operations. Access to each individual domain is controlled by an independent Access Control List that maps authenticated users to the domains they own. A user created with AWS IAM only has access to the operations and domains for which they have been granted permission via policy. Amazon SimpleDB is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SimpleDB is not encrypted by AWS; however, the customer can encrypt data before it is uploaded to Amazon SimpleDB. These encrypted attributes would be retrievable as part of a Get operation only; they could not be used as part of a query filtering condition. Encrypting before sending data to Amazon SimpleDB helps protect against access to sensitive customer data by anyone, including AWS.
Amazon SimpleDB Data Management
When a domain is deleted from Amazon SimpleDB, removal of the domain mapping starts immediately, and is generally processed across the distributed system within seconds. Once the mapping is removed, there is no remote access to the deleted domain. When item and attribute data are deleted within a domain, removal of the mapping within the domain starts immediately, and is also generally complete within seconds. Once the mapping is removed, there is no remote access to the deleted data. That storage area is then made available only for write operations, and the data are overwritten by newly stored data.