We are a lean team consisting of developers, lead architects, business analysts, and a project manager. To scale our applications and optimize costs, we need to reduce the amount of undifferentiated heavy lifting (e.g., patching, server management) from our projects. We have identified AWS serverless services that we will use. However, we need approval from a security and cost perspective. We need to build a business case to justify this paradigm shift for our entire technology organization. In this session, we learn to migrate existing applications and build a strategy and financial model to lay the foundation to build everything in a truly serverless way on AWS.
To gain agility, you need to free up resources by looking for opportunities to remove undifferentiated heavy lifting.
Things like server and cluster provisioning, OS maintenance, patching, and so on - none of that falls into the category of your core competencies.
So you can make some choices about what control you have to have, versus what you’re comfortable letting someone else take care of.
You can partner for noncore capabilities to rapidly create new products, free up capital, and shift risk. If a partner can do nonstrategic things better, faster & cheaper, why not? That keeps you lean & nimble. And if it already exists, do I really need to build it again?
Using serverless technologies frees up resources, and lets you focus on what really matters, creating new differentiation for your business.
You don’t have to provision or manage infrastructure, you always have just enough resources to get the job done, you don’t pay for what you’re not using, and your environment is inherently available and durable. All of these capabilities are characteristics of serverless infrastructure.
By reducing operational overhead, your developers reclaim time and energy that can be spent on delivering new and better customer outcomes.
More and more, we’re seeing modern apps that are designed, architected, deployed, managed, and secured with serverless technologies.
The microservice architectural style is an approach to developing a single application as a suite of small services. Each runs in its own process and communicates with lightweight mechanisms, often an HTTP resource API.
Key parameters when choosing between serverless and containers:
Language support - Lambda natively supports C#, Python, Go, Node.js, Java, and others…maybe you have PHP, COBOL, or old Visual Basic that would need a custom runtime or containers
The app is stateful
Workloads are long-running
Workloads require more predictable performance
Workloads run at significant scale constantly, and the pay-per-invocation pricing model becomes too costly
Containers have no memory or time limitations like there are with serverless
On-premises hybrid cloud – Kubernetes/microservices
Asynchronous workloads
Articles
https://logz.io/blog/serverless-vs-containers/
https://www.thorntech.com/2018/08/containers-vs-serverless/
https://www.cloudflare.com/learning/serverless/serverless-vs-containers/
Use serverless for workloads where it meets your needs
Use containers where it doesn’t, for example, for workloads that:
are long-running
require more predictable performance
require more resilience than can be easily achieved with serverless
run at significant scale constantly, and the pay-per-invocation pricing model becomes too costly
Dynatrace or New Relic
Microservices do have distinct advantages:
Better Organization: Microservice architectures are typically better organized. Each microservice has a very specific job, and it is not concerned with the jobs of other components.
Decoupled: Decoupled services are also easier to recompose and reconfigure to serve the purposes of different apps (for example, serving both the web clients and public API). They also allow for fast, independent delivery of individual parts within a larger, integrated system.
Performance: Under the right circumstances, microservices can also have performance advantages depending on how they’re organized. It’s possible to isolate hot services and scale them independently of the rest of the app.
Fewer Mistakes: Microservices enable parallel development by establishing a hard-to-cross boundary between different parts of your system. By doing this, you make it hard — or at least harder — to do the wrong thing: namely, connecting parts that shouldn’t be connected, and coupling too tightly those that need to be connected.
Recap the common use cases for serverless
Web Applications: By combining AWS Lambda with other AWS services, developers can build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers – with zero administrative effort required for scalability, back-ups or multi-data center redundancy.
Mention Flask and Express
Backends: You can build serverless backends using AWS Lambda, Amazon API Gateway, and Amazon DynamoDB to handle web, mobile, and Internet of Things (IoT) requests.
Data Processing: You can build a variety of real-time data processing systems using AWS Lambda, Amazon Kinesis, Amazon S3, and Amazon DynamoDB.
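The backends pattern above can be sketched as a single Lambda handler behind API Gateway writing to DynamoDB. This is a minimal illustration, not production code: the `orders` table name and the API Gateway proxy event shape are assumptions, and the `table` parameter exists only so the logic can be exercised without AWS credentials.

```python
import json

# Hypothetical table name, used for illustration only.
TABLE_NAME = "orders"

def handler(event, context, table=None):
    """Minimal API Gateway proxy handler that writes a JSON body to DynamoDB.

    `table` can be injected for offline testing; in Lambda it defaults to
    the real DynamoDB table resource.
    """
    if table is None:
        import boto3  # imported lazily so the core logic is testable offline
        table = boto3.resource("dynamodb").Table(TABLE_NAME)

    body = json.loads(event.get("body") or "{}")
    if "id" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "id is required"})}

    table.put_item(Item=body)
    return {"statusCode": 201, "body": json.dumps({"created": body["id"]})}
```

In a real deployment, API Gateway handles authentication, throttling, and request validation before the event ever reaches this function.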
This is my architecture….
Nordstrom. Hello, Retail! API Gateway → Lambda → Kinesis → S3 (Kinesis Firehose), and Lambda to DynamoDB, Redshift, and Aurora (why three data stores? – different users and use cases: simple reads, analytics (Redshift)). You'll learn how Rob and his team are leveraging Kinesis and many other AWS services to build experiments such as the Hello, Retail! proof of concept. This project is 100% serverless and 100% event-sourced, designed around an immutable, ordered, and distributed ledger. We'll dive into the architecture, explain the difference between event-driven and event-sourced solutions, and show you how to build your own with Kinesis, Kinesis Firehose, S3, API Gateway, and Lambda. https://www.youtube.com/watch?v=O7PTtm_3Os4
Kbb.com / Cox Automotive – monolithic to Lambda microservices using the strangler pattern.
Capital One Cloud Custodian - Custodian is an open-source rules engine for fleet management in AWS…compliance as code…S3, Lambda, CloudWatch…http://aws-de-media.s3.amazonaws.com/images/TransformationDay/TDay_Slides/Capital_One_AWS.pdf
Accenture ACP - https://www.youtube.com/watch?v=-RjGE-bnEjI - discovers changes across hundreds of AWS accounts by using CloudWatch → API Gateway → Lambda → S3 → Lambda → Elasticsearch, with thousands of simultaneous Lambda functions. Accenture Cloud Platform helps large customers with their governance, cost analytics, resource visibility, and infrastructure management needs in the cloud.
https://www.slideshare.net/AmazonWebServices/wild-rydes-serverless-devops-to-the-rescue – Wild Rydes Serverless DevOps
Edmunds - https://www.youtube.com/watch?v=snuKfIaufP0
https://aws.amazon.com/solutions/case-studies/thomson-reuters/ - (user experience) Using Amazon Kinesis, our solution delivers new events to user dashboards in less than 10 seconds – Kinesis → S3 (Kinesis Firehose) and AWS Lambda → Elasticsearch, with Kibana, an open-source data analytics and visualization tool…hundreds of digital products and services for customers ranging from law firms to banks to consumers. In 2016, Thomson Reuters decided to build a solution that would enable it to capture, analyze, and visualize analytics data generated by its offerings, providing insights to help product teams continuously improve the user experience.
FINRA – File Transfer Protocol (FTP), Amazon Simple Storage Service (Amazon S3), AWS Lambda, and EMR functions process half a trillion validations of stock trades daily - https://aws.amazon.com/solutions/case-studies/finra-data-validation/. EMR and Lambda.
Hearst – website clickstream analysis. Kinesis and Kinesis Firehose (S3), Lambda, EMR https://aws.amazon.com/solutions/case-studies/hearst-data-analytics/ …. https://www.slideshare.net/AmazonWebServices/bdt306-how-hearst-publishing-manages-clickstream-analytics-with-aws
Vevo – monolithic / .NET…Lambda and DynamoDB…video hosting services…bursting. https://www.slideshare.net/AwsReinventSlides/aws-reinvent-2016-content-and-data-platforms-at-vevo-rebuilding-and-scaling-from-zero-svr308
Expedia – API Gateway → Lambda. https://acloud.guru/series/serverlessconf-nyc-2017/view/patterns-architectures-expedia, not for doing a booking. CI/CD, operations (CloudTrail to S3 → Lambda → DynamoDB), test and run, auto scaling, chatbots. At Expedia we run hundreds of different Lambda & API Gateway services which are executed more than 4 billion times a month. As part of the talk, I would like to cover different patterns & architectures showing how we are using Lambda & API Gateway at Expedia.
Thomson Reuters - The initial event ingestion layer is composed of Elastic Load Balancing and customized NGINX web servers in an Auto Scaling group. After SSL/TLS termination, the ingestion layer augments events with metadata and encrypts them using AWS Key Management Service (KMS).
The ingestion layer hands off secured data to a streaming data pipeline composed of Amazon Kinesis Streams, Amazon Kinesis Firehose, and AWS Lambda serverless compute. Thomson Reuters evaluated other streaming data tools, including Apache Kafka, but found them difficult to manage and scale. The company did not want to worry about managing the software stack and a fleet of servers, so instead chose Amazon Kinesis because it is fully managed.
The Amazon Kinesis streaming-data pipeline automatically batches data and delivers it cost effectively into a master data set for permanent storage in an Amazon Simple Storage Service (Amazon S3) bucket, replicated across regions. The master data set enables Thomson Reuters to apply additional transformation steps, recover data in the event of a system loss of state, and support new business cases. If events cannot immediately be dispatched from the ingestion layer to the data pipelines, a failover mechanism delivers them to Amazon S3 to be replayed when the system returns to normal operations.
AWS Lambda allows Thomson Reuters to load and process the streaming data cost effectively and without needing to provision or manage any servers. Lambda collects data from the Kinesis pipeline and loads it into the master dataset in Amazon S3. Lambda is also triggered by Amazon S3’s data notifications whenever new data is stored, and performs the additional transformations on the master dataset. Lambda runs code only when triggered by data via integrations with Kinesis and Amazon S3, and it charges for compute processing only when the code is running.
A parallel real-time pipeline attached to the Amazon Kinesis stream delivers the events to a secure, multi-tenant Elasticsearch cluster through a custom extract, transform, and load (ETL) server connected to the Thomson Reuters Services platform, all hosted on AWS. The real-time data is made available to authorized Thomson Reuters product teams through Kibana, an open-source data analytics and visualization tool.
The Thomson Reuters Services platform also provides the authentication and authorization layer using AWS Identity and Access Management (IAM) and Amazon S3 cross-account access features. To monitor the solution, the company uses Amazon CloudWatch.
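The Kinesis → Lambda → S3 hand-off that Thomson Reuters describes can be sketched roughly as follows. This is not their code: the bucket and key names are invented, and `s3_put` is injectable purely so the decoding and batching logic can run offline. Kinesis delivers record payloads to Lambda base64-encoded, which is what the decode step handles.

```python
import base64
import json

def decode_kinesis_records(event):
    """Decode base64-encoded Kinesis records from a Lambda event into dicts."""
    events = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        events.append(json.loads(payload))
    return events

def handler(event, context, s3_put=None):
    """Batch decoded events into one newline-delimited JSON object for S3.

    `s3_put` can be injected for offline testing; in Lambda it would wrap
    boto3's s3.put_object. The bucket and key below are hypothetical.
    """
    events = decode_kinesis_records(event)
    body = "\n".join(json.dumps(e) for e in events)
    if s3_put is None:
        import boto3  # only needed when running inside Lambda
        s3 = boto3.client("s3")
        s3_put = lambda Body: s3.put_object(
            Bucket="master-dataset", Key="batch.json", Body=Body
        )
    s3_put(Body=body)
    return {"decoded": len(events)}
```

In the real pipeline, Kinesis Firehose can do this batching and S3 delivery as a managed service; a Lambda function like this is only needed when the batch requires custom transformation.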
Two-pizza teams and flow - https://agile2018.sched.com/event/EU94/two-pizza-team-heartburn-relief-solutions-to-team-dependencies-mike-Griffiths
Mainframe DB2 → FTP → S3 → Lambda function triggered by S3 file arrival (RDS for state) → Lambda → EC2 batch job to load into DynamoDB (mobile app, Alexa) → Redshift (data analytics) using EC2 batch jobs.
Capital One
https://medium.com/capitalonetech/serverless-transactions-serve-customers-e4a279940707
Millions of customer transactions.
https://www.youtube.com/watch?v=7plkSUN6DAE#t=31m18s - AWS re:Invent 2017: Optimizing Serverless Application Data Tiers with Amazon DynamoD (SRV301)
A mainframe is a complex system where any change requires analysis of a deep web of dependencies. We determined that in our legacy systems, close to 80% of the traffic was related to reading transactions. This insight gave us our focus: implement a system in the Cloud that would serve the read-only traffic and be fed by the mainframe in batch and in near real-time modes.
Second, establish success criteria – data modernization, mobile access/digital. Here’s what our team agreed that serverless needed to deliver:
Consumer accounts and financial transactions on modern cloud-based serverless infrastructure, within a system that is scalable, reliable, and extensible
Built in the Cloud and served via scalable APIs
Handles large-scale traffic as well as any spikes in traffic (e.g., Black Friday)
Follows DevOps best practices
Seamlessly scales when demand increases
Supports batch and real-time workloads
Provides consistent performance at any load (equal to or better than legacy systems)
Cost effective (cheaper than mainframe per transaction)
Does not compromise on security (Data at rest and in motion encryption)
Easily integrates with other services (real-time fraud analytics, etc.)
With the high-level challenge set, we identified the following additional problems that had to be solved:
Choosing a datastore in the cloud (we opted for DynamoDB)
Loading billions of transactional records into DynamoDB
Creating a messaging infrastructure to keep transactions available on the mainframe and in our cloud-based system in sync near real-time
Building a new version of the getTransactions API in the cloud
Migrating traffic from all channels from the Mainframe system to our cloud-based system
As you can see in the picture above, there are various serverless components in the architecture.
Lambda functions triggered by the arrival of transaction files from the mainframe
Lambda functions to maintain the state of the files
Lambda functions to read and ingest data into DynamoDB
Lambda functions to provision WCUs and RCUs (write/read capacity units) dynamically on DynamoDB
DynamoDB as the read-only datastore
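The "read and ingest data into DynamoDB" function in the list above might look roughly like this sketch. This is not Capital One's actual code: the newline-delimited JSON file format and field names are assumptions, and `batch_writer` stands in for boto3's `table.batch_writer()` so the parsing logic is testable offline.

```python
import json

def parse_transaction_file(text):
    """Parse a (hypothetical) newline-delimited JSON transaction file
    exported from the mainframe into a list of DynamoDB items."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

def load_transactions(text, batch_writer):
    """Write parsed transactions through a DynamoDB batch writer.

    `batch_writer` is anything exposing put_item(Item=...); in Lambda you
    would use `with table.batch_writer() as bw:` from boto3, which handles
    batching into 25-item BatchWriteItem calls and retries automatically.
    """
    items = parse_transaction_file(text)
    for item in items:
        batch_writer.put_item(Item=item)
    return len(items)
```

At billions of records, the real migration would also need to shard files across parallel Lambda invocations and manage provisioned write capacity, which is what the WCU/RCU provisioning functions in the list handle.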
So after talking to many customers, we have noticed a trend.
We see these as the capabilities of a modern app
Secure == through every part of the app lifecycle
Automated == everything is codified and programmatic
About me –
Moved from consulting to Fender 9 years ago
I now lead our B2B eCommerce practice and like to solve problems
For those of you who have never heard OF Fender, you have definitely heard A Fender
PAUSE 5 SEC THEN SPEAK: Play first ~12 seconds of Jimi Woodstock Star Spangled Banner
You’ve probably also seen them with: Bruno Mars, Brad Paisley, Flea and of course Hendrix.
Fender was started 70 years ago. Leo Fender was a radio technician actually, not a guitarist, but he also liked to solve problems.
He wanted to get the guitar out of churches, where it played for only a few dozen, and onto stages for thousands.
He used inexpensive woods, a modular design and worked with musicians to develop the instrument.
With easy-to-assemble instruments, he was able to mass-produce the guitar and scale it to be used by most working musicians today.
He was a CREATIVE BUILDER…
Building Blocks
Modular design including bolt-on necks
Build, Measure and learn
Introduced a prototype of the iconic Telecaster in 1951, P-bass later that year
Leo collected user feedback, which heavily influenced his design process, resulting in the Stratocaster in 1954
Scale and simplify – streamline process and do it on a mass scale
Simplify the design to solid-body
Easier to assemble…and repair
Streamlined the process for mass production
Leo launched the Telecaster and P-Bass in 1951.
Feedback from musicians lead to the Stratocaster in 1954
You can go to a guitar store today and buy all three of these. And they look the same.
Fast forward 70 years, Fender has:
Over 2000 employees
Factories in Corona, CA and Ensenada, Mexico
Regional HQ in Arizona, London, and Tokyo
98% of business is B2B
Fender has a global network of dealers and distributors
We had been serving them primarily with field sales and phone support, but in 2009 we introduced a B2B eCommerce system
As many companies do, we acquired software licenses, customized the system, and deployed on physical infrastructure.
In our case, we were running the ATG eCommerce system on physical servers at a data center in Boston.
A few years later, we physically moved the servers to our data center in Scottsdale, and then later virtualized the system.
We followed more or less a classic architecture – user activity passed through a firewall, a load balancer in front of multiple Apache servers, with the core system running on ATG, which was connected to a SQL server and its own SAN. We had actually already replaced the built-in search functionality with SOLR, and directly connected to our ERP systems.
<pause>
This system worked well for a while, but it was difficult to support new customizations, we suffered from sporadic downtimes, and deployments were frequently difficult.
Overall, monolith was a good term to refer to our B2B eCommerce system.
Complete with scaling issues, tight coupling to ERP systems, and failures where one part of the application could take down the whole system (e.g., a failed product launch)
Of course many sites which were designed in 2009 will feel dated now, but looks aside, one of our biggest challenges was just the performance of the site. Because we were tightly coupled to the ERP systems, page load times of 10 or more seconds were not unheard of.
We had limited control of product information and couldn’t connect to our new PIM system because the products component was core to the eCommerce system.
One of the most used features, order status, was difficult to get to, partly due to the ‘sales-first’ approach we took.
As I mentioned, customizations were becoming increasingly difficult, and deployments were problematic and required coordination across multiple teams.
So when it was time to develop a business case for a new approach, we focused on several things:
Cost Savings
By custom-building our eCommerce system, we could remove the need for software license fees
By building in the cloud, we could avoid hardware purchase and support, as well as data center expenses
Employee productivity
Efficiencies in our development team by moving to a DevOps model
We could say “yes” to requests for customizations to support our internal sales and service teams
Focus on value-adding work
Not to mention continuous learning and new opportunities for growth for our employees
Operational Improvements
To run a global eCommerce system, there is no convenient window for downtime
We wanted to be able to scale to meet performance expectations
Run as a suite of microservices instead of a single product, loosely coupled to ERP and other systems
Business agility
deploy new features and functionality quickly and consistently
support customization / personalization
and provide access via mobile or any device the user chooses
Once our business case was approved, we mapped out how we could use AWS for our key data loads.
Product info was stored in our ERP systems, and then moved to a PIM system where we added all product images and loaded them to Amazon CloudFront. This content was then sent to S3 in XML format; we used SNS to trigger a Lambda function to load the data into DynamoDB, which was then indexed into Elasticsearch.
For order info, we followed a similar path, extracting from ERP and loading into S3, with Lambda to move the data into DynamoDB, but then used EC2 to support DynamoDB Streams and Logstash to index into Elasticsearch.
We process inventory updates every 30 minutes from both ERP systems, so we opted to use Redis here for faster retrieval, and also to use the same source to feed additional systems like B2C and internal applications.
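The Redis piece of the inventory flow can be sketched as a read-through cache. This is illustrative only: the key scheme, TTL, and `load_from_erp` fallback are assumptions, and `cache` is any object exposing `get`/`setex` (redis-py's client matches that shape).

```python
import json

# TTL matches the 30-minute feed cadence described above (an assumption,
# not Fender's actual configuration).
INVENTORY_TTL = 30 * 60

def get_inventory(sku, cache, load_from_erp):
    """Read-through cache: serve inventory from Redis when present,
    otherwise load from the ERP source and cache the result."""
    cached = cache.get(sku)
    if cached is not None:
        return json.loads(cached)
    fresh = load_from_erp(sku)
    cache.setex(sku, INVENTORY_TTL, json.dumps(fresh))
    return fresh
```

Because the cached value is plain JSON keyed by SKU, the same Redis instance can feed B2C and internal applications without each one calling the ERP systems directly.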
For the architecture to support the actual site:
Users download static content – images and JavaScript – from CloudFront
Activity is routed through a Classic Load Balancer to control API calls to the core framework, which was built in Angular and Node
Amazon ElastiCache is used for session management
A PostgreSQL database stores the dealer repository
We run Lambda functions inside the VPC for order creation and invoice generation, because we use live calls to ERP via Direct Connect
API Gateway connects to Lambda functions for online payments, warranty claims, finance details, and others, which then connect to additional services
And then we use Amazon SES for email management, including order confirmations and support ticket generation
Here are a few shots of the new eCommerce system, which we completed the global rollout for earlier this year.
The dashboard can be personalized for each user, so they can set their most used features front and center
Product search
Product details
Showcase
Mobile
Customizations
Hardware eliminated
Performance
Software license
Customer satisfaction
Order volume
Lambda functions running in production
Avoid using Lambda inside VPC if possible
Log everything to CloudWatch and to an ELK stack, or use a service like Epsagon or IOpipe
API Gateway for better security handling
AWS KMS for storing keys
Consider a service like Snyk for scanning vulnerable libraries inside your code
SQL injection is still a problem, use DynamoDB if possible
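The "log everything to CloudWatch" advice pairs well with structured JSON log lines, since anything a Lambda function prints to stdout lands in CloudWatch Logs, where JSON is easy to filter or ship onward to an ELK stack. A minimal sketch (the field names are just examples, not our actual schema):

```python
import json
import time

def log_event(event_name, **fields):
    """Emit one structured JSON log line to stdout.

    In Lambda, printed lines end up in CloudWatch Logs; emitting JSON
    makes them queryable instead of free-form text.
    """
    line = json.dumps({"event": event_name, "ts": int(time.time()), **fields})
    print(line)
    return line
```

Usage inside a handler is just `log_event("order_created", order_id=order_id, amount=total)`.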
Team structure changes / change management
Blueprint / foundation for future projects
Continue on the path of moving to a serverless architecture; this will further reduce costs and bring other improvements
Implement security enhancements identified during our Well-Architected review, and a service like Snyk for scanning for vulnerable libraries inside code
New functionality like credit card processing and global online payment methods with a company like Stripe
Ongoing customer feedback loop so we can continue to make improvements for both our external and internal users
Data and analytics
And maybe looking at Alexa skill integration – here is a short video of a proof of concept we just completed
Thank you for your time, Paras coming back up…
<play song>
So how do we build modern apps?
These are abstract and philosophical discussions of the how