The document summarizes different compute options on AWS including Amazon EC2, ECS, Lambda, and Lightsail. It provides an overview of each service, discussing how they compare in terms of what AWS manages versus what customers manage. EC2 is infrastructure as a service, ECS is container management as a service, Lambda is serverless functions, and Lightsail offers simple virtual private servers. The document aims to help customers understand which compute option is best for different types of applications and workload needs.
Anatomy of a Lambda function

## Dependencies, configuration information, common helper functions
Import sdk
Import http-lib
Import ham-sandwich
Pre-handler-secret-getter()
Pre-handler-db-connect()

## Your handler
Function myhandler(event, context) {
    if (<event handling logic>) {
        result = SubfunctionA(thing)
    } else {
        result = SubfunctionB(thing)
    }
    return result
}

## Common helper functions
Function Pre-handler-secret-getter() {
}
Function Pre-handler-db-connect() {
}

## Business logic sub-functions
Function SubfunctionA(thing) {
    ## logic here
}
Function SubfunctionB(thing) {
    ## logic here
}
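The pseudocode above can be sketched as a real Python Lambda function. This is a minimal illustration; the helper names (get_secret, connect_db, the sub-functions) and the event shape are made up for the example. The key point is that module-level code runs once per cold start, while the handler runs on every invocation.

```python
import json  # dependency imports live at module scope

def get_secret():
    # pre-handler work: fetch credentials once, outside the handler
    return "dummy-secret"

def connect_db(secret):
    # pre-handler work: open a connection that warm invocations can reuse
    return {"conn": "db", "secret": secret}

# runs once per cold start; reused across warm invocations
SECRET = get_secret()
DB = connect_db(SECRET)

def subfunction_a(thing):
    # business logic sub-function
    return {"route": "A", "thing": thing}

def subfunction_b(thing):
    # business logic sub-function
    return {"route": "B", "thing": thing}

def handler(event, context):
    # event handling logic: route based on the incoming event
    if event.get("type") == "a":
        result = subfunction_a(event.get("payload"))
    else:
        result = subfunction_b(event.get("payload"))
    return {"statusCode": 200, "body": json.dumps(result)}
```

Keeping the secret fetch and database connection outside the handler is what makes them "pre-handler" work: warm invocations skip them entirely.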
Here we’ve configured 172.31.0.0/16 as the VPC CIDR and created two public subnets (172.31.0.0/24, 172.31.1.0/24) and two private subnets (172.31.128.0/24, 172.31.129.0/24).
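The CIDR layout above can be checked with Python's standard-library ipaddress module: all four /24 subnets must fall inside the /16 VPC range and must not overlap one another.

```python
import ipaddress

# the VPC CIDR and subnets from the layout above
vpc = ipaddress.ip_network("172.31.0.0/16")
subnets = [ipaddress.ip_network(c) for c in
           ["172.31.0.0/24", "172.31.1.0/24",       # public
            "172.31.128.0/24", "172.31.129.0/24"]]  # private

# every subnet is contained in the VPC range
assert all(s.subnet_of(vpc) for s in subnets)
# no two subnets overlap
assert all(not a.overlaps(b) for a in subnets for b in subnets if a is not b)
print(f"each /24 holds {subnets[0].num_addresses} addresses")
```

Note that a /24 holds 256 addresses, but AWS reserves five addresses in each subnet (the first four and the last one), so the usable capacity is 251 per /24.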
As I mentioned earlier, EC2 stands for Elastic Compute Cloud.
We have racks of EC2 servers deployed across all of our regions, with each AWS region consisting of multiple Availability Zones, or AZs as we call them, and each AZ typically comprising multiple data centers.
Within these racks, we sometimes have dozens of servers that each contain processors, memory, networking, and sometimes local storage. As part of the EC2 stack, we have a hypervisor that partitions these resources into virtual machines, or guests, which we call EC2 instances.
EBS is a distributed system.
Your EBS volume is a logical volume comprised of MANY PHYSICAL DEVICES.
Because the service is distributed across many physical devices, EBS can deliver better performance and durability than a simple one-to-one mapping of volume to disk.
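A toy illustration of that point, not the real EBS design: if each logical block is replicated across distinct physical devices, reads can be parallelized and no single device failure loses data. The device count and placement function here are made up for the sketch.

```python
DEVICES = 8  # physical devices backing the logical volume (illustrative)

def place(block):
    # map a logical block to two distinct physical devices
    first = block % DEVICES
    return [first, (first + DEVICES // 2) % DEVICES]

def survives_failure(failed_device, nblocks=64):
    # every block still has at least one replica on a healthy device
    return all(any(d != failed_device for d in place(b))
               for b in range(nblocks))

# losing any single device loses no data
assert all(survives_failure(d) for d in range(DEVICES))
```

A 1:1 volume-to-disk mapping is the degenerate case where `place` returns a single device, and any failure of that device loses the whole volume.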
gp2: General Purpose SSD
io1: Provisioned IOPS SSD
st1: Throughput Optimized HDD
sc1: Cold HDD
Snapshots
Point-in-time backups of modified volume blocks, stored in S3 and accessed via the EBS APIs
The first snapshot copies every modified block to S3
Subsequent snapshots are incremental; only blocks changed since the previous snapshot are backed up
Deleting a snapshot removes only the data exclusive to that snapshot
Snapshots are crash consistent
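The incremental behavior above can be modeled in a few lines of Python. This is a toy model, not the EBS implementation: each snapshot stores only the blocks changed since the previous one, and deleting a snapshot drops only the blocks that a later snapshot has superseded.

```python
class VolumeWithSnapshots:
    def __init__(self):
        self.blocks = {}     # current volume state: block -> data
        self.snapshots = []  # each snapshot: {block: data}, changed blocks only
        self._last = {}      # volume state at the time of the last snapshot

    def write(self, block, data):
        self.blocks[block] = data

    def snapshot(self):
        # store only blocks that changed since the previous snapshot
        delta = {b: d for b, d in self.blocks.items()
                 if self._last.get(b) != d}
        self.snapshots.append(delta)
        self._last = dict(self.blocks)
        return len(self.snapshots) - 1

    def restore(self, snap_id):
        # replay incremental deltas up to and including snap_id
        state = {}
        for delta in self.snapshots[: snap_id + 1]:
            state.update(delta)
        return state

    def delete(self, snap_id):
        # blocks superseded by the next snapshot are exclusive to this one
        # and can be dropped; the rest are folded into the next snapshot
        delta = self.snapshots[snap_id]
        if snap_id + 1 < len(self.snapshots):
            self.snapshots[snap_id + 1] = {**delta,
                                           **self.snapshots[snap_id + 1]}
        self.snapshots[snap_id] = {}  # keep snapshot ids stable in this toy
```

Restores of the remaining snapshots still work after a deletion, because the non-exclusive blocks were merged forward before the old delta was dropped.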
I mentioned earlier that we have a little over 270 instance types in our portfolio and that they are targeted at different types of workloads.
Let's break that down.
Every server has four key computing resources: CPU, memory, storage, and networking.
Some workloads are more CPU intensive, others more memory intensive,
so we created different SKUs, or families – that's the first letter of the instance type name.
As we added new technology to our instances, we realized we wanted to expose these innovations, so we introduced generations, which indicate the CPU, chipset, and network capabilities of the instance.
The last part is size – simple t-shirt sizing. Each size keeps the same resource ratios and chipset, but has twice the CPU, memory, and storage of the previous size, enabling you to scale up your workloads.
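The naming scheme above (family letter, generation digit, optional attribute letters, then a t-shirt size after the dot) can be sketched as a small parser. This is a simplified illustration and does not cover every name in the catalog (e.g. bare-metal or u- family types).

```python
import re

def parse_instance_type(name):
    # family letters + generation digit(s) + optional attribute letters,
    # then "." and the t-shirt size, e.g. c5n.xlarge
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z]*)\.(\w+)", name)
    if not m:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, attrs, size = m.groups()
    return {"family": family, "generation": int(generation),
            "attributes": attrs, "size": size}

print(parse_instance_type("m5.2xlarge"))
# {'family': 'm', 'generation': 5, 'attributes': '', 'size': '2xlarge'}
```

So m5.2xlarge is the general-purpose (m) family, fifth generation, 2xlarge size, and c5n.xlarge adds an attribute letter (n) for enhanced networking.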
So we talked about ECS, Fargate, and Lambda, and the serverless operations model looks like this:
1/ You can start at the very bottom with EC2 and have access to all the knobs you want to manage, or you can go completely serverless with Lambda and Fargate, where you focus just on your application.
2/ The layers of abstraction available to you with AWS are super empowering, because your teams have the choice to pick the layer of abstraction they're most comfortable with, and we provide the tools, services, and APIs necessary to help you build your application.
Add in an ASG with an instance as a teaser for the next webinar.
1/ Our customers tell us that most of their effort goes into characterizing the low-level scaling and operating characteristics of the infrastructure.
2/ If you're building in small teams and these teams have ownership of the pieces of software they're working on, they may end up with a large number of infrastructure components to manage. How should you think about running your applications on such a distributed architecture?
3/ We spend a lot of time thinking about how we can help remove this complexity. These days the question we increasingly ask ourselves is: what do developers really need to build their applications?
1/ And this is what your layers of management end up looking like. You have this completely managed orchestration or container management layer, but you also have these software management layers just to run your application.
2/ And all you really want here is to run your containers. Fargate enables you to do just that. Notice that there is no management of instances; your infrastructure is ready to scale as your application does.
3/ There are no longer two levels of scaling to manage. You only define your application's requirements in terms of a task: how the service should scale, which metrics you care about, and how many more such containers or tasks you want Fargate to launch.
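As a minimal sketch of what "defining your application in terms of a task" looks like, here is an ECS task definition for Fargate. The family name, image, and CPU/memory values are illustrative placeholders.

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
      "essential": true
    }
  ]
}
```

Note there is no instance type anywhere in the definition: you declare the CPU and memory the task needs (awsvpc networking is required for Fargate), and Fargate finds the capacity.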
Show how to build a cluster and deploy a container. Also show Application load balancer.
1/ AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
2/ Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
3/ Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning or paying for additional servers.
4/ Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission critical applications on Fargate.
What does this mean? Based on what we've learned from our customers, it means identifying the right abstractions at every layer of the stack and removing accidental complexity wherever we can. Specifically, this means:
1. Hiding infrastructure behind abstractions. You want to build your business logic, not have entire teams trying to figure out how to manage infrastructure.
2. You want to focus on the desired behavior of the application and let us manage the undifferentiated heavy lifting of the infrastructure.
3. You don’t want to spend time thinking about how applications should be deployed onto your infrastructure, which ones should run next to each other, etc.