2. What to Expect from the Session
1. What we learned as we evolved our release processes
2. Overview of release process terminology
3. A tour of AWS CodePipeline
4. Look under the hood of AWS CodePipeline
5. Extending AWS CodePipeline
13. Release processes have four major phases
Source
• Check in source code, such as .java files
• Peer review new code
Build
• Compile code
• Unit tests
• Style checkers – FindBugs, CheckStyle
• Code metrics – Cobertura, EMMA
• Packaging – e.g. Docker
Test
• Integration tests with other systems
• Load testing
• UI tests
• Penetration testing
Production
• Incremental rollout to production environments
15. A real pipeline of a simple service
Build and Unit Test → Deployments → Validation
With increased confidence we increase the blast radius:
• Does it compile and pass unit tests?
• Does it integrate in an isolated stack?
• Does it integrate against prod?
• Does it integrate in production region 1?
• Does it integrate in production region 2?
• Deploy to prod
18. A build service is not enough
• Our release processes emphasize safety, so we have more steps
• Many CI systems hide the release process, making failures hard to find
• CI systems don’t provide the modeling primitives we need:
• Serial and parallel execution
• Easily adding a new step to your process
• Pausing for manual approvals
• Multiple deployment actions
• Multiple source actions
• CI systems don’t allow multiple changes to move concurrently through the release process
23. CodePipeline concepts on the pipeline page
Pipeline
Stage
Action
Pipeline Run
Source change:
• starts a run; and
• creates an artifact to be used by other actions.
Manual Approval
24. CI is a great start. CD with CodePipeline is better.
• Visualizes your release process so it can be understood
• Allows powerful modeling of your release process
• Serial and parallel execution
• Easily add a new step to your process
• Pause for manual approvals
• Multiple deployment steps
• Allows multiple changes to be processed concurrently
28. Extend AWS CodePipeline Using Custom Actions
• Update tickets
• Provision resources
• Update dashboards
• Mobile testing
• Send notifications
• Security scan
29. How would we send a message to Slack?
CodePipeline – App Pipeline:
• Source stage: Source action (GitHub)
• Build stage: JenkinsForReinvent action (Jenkins)
• Deploy stage: RailsApp action (Elastic Beanstalk)
30. AWS CodePipeline extension options
Per-account extensions – for customers
• Option 1: AWS Lambda function
• Option 2: Custom Actions
Global extensions – for AWS partners
• Option 3: Third-Party Actions
32. Extend AWS CodePipeline with Lambda
Push a message to Slack when our pipeline run completes:
1. Add in a Lambda Invoke stage
2. Select a “send message to Slack” function
3. Run the pipeline
35. Lambda example – include libs and handler
var AWS = require('aws-sdk');
var https = require('https');

exports.handler = function(event, context) {
  var cp = new AWS.CodePipeline();
  …
};
36. Lambda example – set up HTTP config
var httpParams = {
  hostname: 'slack.com',
  path: '/api/chat.postMessage?token=MYTOKEN&text=Hello&channel=%23testing',
  method: 'GET'
};
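The hand-encoded `%23` for the `#` in the channel parameter is easy to get wrong. As a sketch (the helper name is my own; `MYTOKEN` and the channel remain placeholders), the path can be built with `encodeURIComponent` instead:

```javascript
// Sketch: build the Slack chat.postMessage path with URL-encoded parameters
// instead of hand-encoding '#' as '%23'. Token/text/channel are placeholders.
function buildSlackPath(token, text, channel) {
  return '/api/chat.postMessage' +
    '?token=' + encodeURIComponent(token) +
    '&text=' + encodeURIComponent(text) +
    '&channel=' + encodeURIComponent(channel);
}
```

Calling `buildSlackPath('MYTOKEN', 'Hello', '#testing')` yields the same path shown above, with the `#` encoded for us.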
37. Lambda example – send message to Slack
// Send message to Slack
var req = https.request(httpParams, function(response) {
  response.on('data', function(c) {});
  response.on('end', sendResultToCodePipeline);
  response.resume();
});
req.end();
38. Lambda example – notify AWS CodePipeline
var sendResultToCodePipeline = function() {
  var jobId = event["CodePipeline.job"].id;
  cp.putJobSuccessResult({ jobId: jobId }, function(err, data) {
    if (err) { context.fail(err); }
    else { context.succeed("Passed"); }
  });
};
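The success path above has a natural counterpart. Here is a hedged sketch of a failure reporter (the helper name `reportFailure` and its client-injection shape are my own; `putJobFailureResult` and its `failureDetails` fields are the real CodePipeline API):

```javascript
// Sketch: report a failed job back to CodePipeline so the pipeline run stops
// with a visible error instead of timing out. `cp` is assumed to be an
// AWS.CodePipeline client (anything with the same method shape works).
function reportFailure(cp, event, context, message) {
  var jobId = event['CodePipeline.job'].id;
  cp.putJobFailureResult({
    jobId: jobId,
    failureDetails: {
      type: 'JobFailed',
      message: message,
      externalExecutionId: context.awsRequestId
    }
  }, function(err) {
    // Fail the Lambda invocation either way; prefer the original message.
    context.fail(err || message);
  });
}
```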
40. Custom Actions and Job Workers collaborate
Stage → Action (Custom Action)
Job Worker (running on an EC2 instance):
1. Poll for Job
2. Acknowledge Job
3. Put Success
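The three steps above can be sketched as one polling pass. The function name and callback shape are my own; `pollForJobs`, `acknowledgeJob` and `putJobSuccessResult` are the real CodePipeline API calls, and `cp` is assumed to be an AWS.CodePipeline client:

```javascript
// Sketch of the three-step loop from the diagram above: poll for a job,
// acknowledge it, run the custom logic, then report success.
// `doWork` is the worker's own task (e.g. posting to Slack).
function pollOnce(cp, actionTypeId, doWork, done) {
  // 1. Poll for a job matching our registered action type.
  cp.pollForJobs({ actionTypeId: actionTypeId, maxBatchSize: 1 }, function(err, data) {
    if (err || !data.jobs || data.jobs.length === 0) { return done(err); }
    var job = data.jobs[0];
    // 2. Acknowledge the job so CodePipeline knows a worker has claimed it.
    cp.acknowledgeJob({ jobId: job.id, nonce: job.nonce }, function(err2) {
      if (err2) { return done(err2); }
      // Custom logic goes here.
      doWork(job);
      // 3. Report success so the pipeline run can continue.
      cp.putJobSuccessResult({ jobId: job.id }, done);
    });
  });
}
```

A real worker would run this in a loop on its compute instance and handle failures with `putJobFailureResult`.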
41. Creating a custom action and job worker takes 3 easy steps
1. Register your custom action in CodePipeline
2. Write your Custom Action
• Integrate with an external service
• Write a standalone Custom Action
3. Deploy your custom action
43. Make your Custom Action available to users
1. Register your custom action in CodePipeline
2. Write your Custom Action
• Integrate with an external service
• Combine the Custom Action and processing task
3. Deploy your custom action
44. CodePipeline – App Pipeline:
• Source stage: Source action (GitHub)
• Build stage: JenkinsOnEc2 action (Jenkins)
• Deploy stage: Custom Action, plus RailsApp action (Elastic Beanstalk)
RegisterCustomAction.json
{
  "category": "Deploy",
  "provider": "Slack-Notifier",
  "version": "2",
  "settings": {
    "entityUrlTemplate": "https://codepipeline-demo.slack.com/messages/general/",
    "executionUrlTemplate": "https://codepipeline-demo.slack.com/archives/general/{ExternalExecutionId}"
  },
  "inputArtifactDetails": {
    "maximumCount": 0,
    "minimumCount": 0
  },
  "outputArtifactDetails": {
    "maximumCount": 0,
    "minimumCount": 0
  }
}
category/provider/version: unique identifier information
inputArtifactDetails: files to consume during the action
outputArtifactDetails: files to produce during the action
45. Use the AWS CLI to register the Custom Action
$ aws codepipeline create-custom-action-type \
    --cli-input-json file://lib/custom_action/RegisterCustomAction.json
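Once registered, the action is referenced from a pipeline's stage definition by its action type. As a sketch (the function and action names are my own; for custom actions the `owner` field is always `"Custom"`, and category/provider/version must match RegisterCustomAction.json):

```javascript
// Sketch: build the action declaration that references the custom action
// registered above, as it would appear inside a pipeline stage definition.
function slackNotifierAction(name) {
  return {
    name: name,
    actionTypeId: {
      category: 'Deploy',
      owner: 'Custom',        // custom actions are always owner "Custom"
      provider: 'Slack-Notifier',
      version: '2'
    },
    inputArtifacts: [],       // matches inputArtifactDetails (0 artifacts)
    outputArtifacts: [],      // matches outputArtifactDetails (0 artifacts)
    runOrder: 1
  };
}
```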
46. Write the code to talk to AWS CodePipeline and Slack
1. Register your custom action in CodePipeline
2. Write your Job Worker
• Integrate with an external service, e.g. Slack
• Combine with a processing task
3. Deploy your Job Worker
51. Deploy Job Worker code to a compute instance
1. Register your custom action in CodePipeline
2. Write your Job Worker
• Integrate with an external service
• Combine the Job Worker and processing task
3. Deploy your Job Worker
52. Recap: creating a custom action and job worker
1. Register your custom action in CodePipeline
2. Write your Custom Action
• Integrate with an external service
• Combine the Custom Action and processing task
3. Deploy your custom action
53. What extension method should I use?
Lambda:
• Short-running tasks are easy to build
• Long-running tasks need more work
• Node.js, Python and Java support
• Runs on AWS
• No servers to provision or manage
Custom Action:
• Can perform any type of workload
• Control over links displayed in the console
• Any language support
• Can run on-premises
• Requires compute resources
54. What did we cover today?
• The benefits of moving to Continuous Delivery
• We can get our software out in front of our users much more rapidly
• By moving faster we can actually ensure better quality
• CodePipeline allows for integration with almost any service or tool you can think of!
• Plus visualization of what’s going on!
55. How you can try AWS CodePipeline
• Use your AWS account to create a free pipeline
• We have examples and a tutorial
• There is thorough documentation too
• We provide support in the forums
• More CodePipeline code in awslabs on github.com
59. Images
Haystack rock - https://commons.wikimedia.org/wiki/File:Haystack_rock_00022.jpg
Heatpipe tunnel copenhagen 2009 -
https://commons.wikimedia.org/wiki/File:Heatpipe_tunnel_copenhagen_2009.jpg
Lewis Hine, Boy studying -
https://commons.wikimedia.org/wiki/File:Lewis_Hine,_Boy_studying,_ca._1924.jpg
Cells – https://pixabay.com/en/stem-cell-sphere-163711/
Speaker notes
- welcome everyone
- my name is Rob Brigham, and I'm here with members from the AWS Developer Tools group
- we build the tools that developers inside of Amazon use, as well as a new set of AWS tools that all of customers can use
- today, we're going to talk about DevOps at Amazon, and give you an inside peek at how Amazon develops our web applications and services
I’ll share what Amazon learned as we adopted continuous delivery practices.
Next we’ll take a tour of AWS CodePipeline. I’ll start with an overview of CodePipeline concepts and then we’ll walk through the product in the console.
We’ll then look under the hood of how work is coordinated and executed in CodePipeline.
Finally, we’ll show the extensibility and flexibility of CodePipeline by integrating a new service into AWS CodePipeline.
- now to make this more concrete, let's look at the story of Amazon's transformation to DevOps
- like most companies, we did not start out this way
In 2001, Amazon had already been a successful company for quite a few years.
We were growing fast and we were listening to what customers wanted, resulting in a continuous stream of new functionality to the Amazon.com website.
As the retail website continued to become more feature rich, we experienced an increasing number of issues when building and maintaining the site.
Building the website became hard, testing the website became hard and deploying also became hard.
Over time we had lost the agility of a startup.
We decided to divide
We broke into small teams
We broke our software into small pieces that could be fully owned and managed by each of our small teams
And We invented very flexible Build, Test and Deployment tools, that put control back in the hands of each team.
This meant that teams could now manage the end-to-end software development cycle by themselves and at their own pace.
These structural changes removed a bottleneck in our processes, and we continued to grow.
8 years later
In 2009, we had continued to grow rapidly
Teams had long since moved to being fully independent, small teams with the ability to prioritize their own work and deliver software at their own cadence.
We’d come a long way, but we felt we could be more efficient.
The problem is, that we weren’t sure of where our bottlenecks were.
So we conducted a study. “Understanding Amazon’s Software Development Process Through Data” in 2009 by John Rauser
We wanted to find out the steps, and timing of the steps, that were taken from code check-in through to code being available in production. This included the time it took to build, test and deploy our software.
We learned that this was taking a long time – on the order of weeks. And we didn’t want it to take weeks to get a code change out to production.
What we did discover was that our processes had a lot of human, manual work in them, which was taking most of the time. Developers would use tickets or emails to track their release process. Developers would ticket or email other developers to run a build, at which point a bunch of requests would batch up before being run. Once the build was done, new tickets were cut to deploy their software. Those requests might also batch up, increasing the time it took for a change to reach production.
This was the problem we needed to solve. We needed to automate the production line of developer work so that humans were no longer causing developers to wait on work that could be automated away.
To solve our problem with manual coordination of software delivery we created Pipelines.
Pipelines automates the orchestration of work in the software development life cycle. The software production line.
Pipelines allowed teams to model their software release process. We built pipelines to be incredibly flexible. It is flexible enough for some of our largest products to use including S3, EC2 and the Amazon.com website.
With Pipelines, we now had a platform that could automate the coordination of build, test and deployments of software for all of Amazon.
Very successful internally. Used by over 90% of the teams.
The combination of:
- the organizational changes;
- architectural changes; and
- new tools like Pipelines
meant Amazon was able to perform 50 million deployments last year, or 1 every 1.6 seconds.
We learned a lot when moving to an automated release process.
We delivered software to customers Faster:
Pipelines allowed us to automate away the waiting time between tasks that had been present.
We reduced the amount of boring, error prone and repetitive work that humans had to do. And instead gave the repetitive work to computers. Computers are great at this type of work because they don’t get bored and they don’t make mistakes.
Teams that adopted pipelines saw the time it took from code check-in to seeing that code in customers’ hands drop to the order of minutes instead of weeks.
We found that automated release processes were Safer:
In theory, continuous delivery does not reduce the number of mistakes that developers make, so we do not expect the rate of bugs per line of code checked in to change when teams adopt continuous delivery.
Generally, teams that automated their processes with Pipelines were seeing a reduction in customer facing errors
Because it now took minutes or hours for a change to get to customers, teams were pushing out smaller changes, more often. Smaller changes carry less risk of introducing new defects, which contributes to safer releases.
Another reason teams saw fewer errors was that many had decided to automate their existing test processes. The automated tests were integrated into the teams’ pipelines and were continuously improved over time.
Visualizing your release process was key to improving it.
PROBLEM: Documenting your process is important, but sometimes words can be confusing.
Visualizing the processes made it easier to understand.
Once a team had modelled their process in pipelines, they could iterate on it. Inefficiencies could easily be identified as they were usually the manual steps, and teams could work on automating each manual step, one at a time until there were no manual steps left.
Release processes could now be inspected. More experienced engineers could help make other people’s release processes better. This process built trust within the company that a team was following good deployment practices.
// The Agile community has been using visualization techniques, known as “Big Visible Charts”, such as burndown charts, story walls, parking lot diagram and story mapping to achieve the same results.
Much of the benefit of CD comes from process simplification and standardization
PROBLEM: In the past, teams would have one process for a bug fix and another for a feature release. This could lead a team to push out a bug fix that passed its targeted tests but failed in some other part of the system, causing a customer-impacting event. Teams often developed not just two but many ways to release code, each with different quality standards, some of which would lead to outages as different people typed different commands to test and deploy software.
In Pipelines there is one release process for one application. Whether you’re shipping a bug fix or a large new feature, the process for releasing software for an application is always the same.
(simplification) This caused many teams to revisit their release processes as they now needed a single, standardized process in order to automatically release software. A common outcome for teams was that their release processes were simplified, often dramatically.
(standardization) Standardizing a release process meant that all software releases now went through the same quality checks every time. This was a contributor to increased quality as any confusion around the processes was removed.
(consistency) Automating the process also meant there were fewer opportunities for people to make errors. Builds, tests and deployments were always triggered the same way with the same parameters. Teams wouldn’t build the wrong software or make mistakes due to humans typing in different information from one release to the next. This was another factor in increasing quality.
I want to take a moment to talk about different release processes.
Each team’s release process takes a different shape to accommodate the needs of each team.
Nearly all release processes can be simplified down to four phases – source, build, test and production. Each phase of the process provides increased confidence that the code being made available to customers will work in the way that was intended.
During the source phase, developers check changes into a source code repository. Many teams require peer feedback on code changes before shipping code into production. Some teams use code reviews to provide peer feedback on the quality of code change. Others use pair programming as a way to provide real time peer feedback.
During the Build phase an application’s source code is built and the quality of the code is tested on the build machine. The most common type of quality check is automated tests that do not require a server to execute and can be initiated from a test harness. Some teams extend their quality checks to include code metrics and style checks. There is an opportunity for automation any time a human is needed to make a decision on the code.
The goal of the test phase is to perform tests that cannot be done during the build phase and require the software to be deployed to production-like stages. Often these tests include integration testing with other live systems, load testing, UI testing and penetration testing. At Amazon we have many different pre-production stages we deploy to. A common pattern is for engineers to deploy builds to a personal development stage, where they can poke and prod their software running in a mini prod-like stage to check that their automated tests are working correctly. Teams then deploy to pre-production stages where their application interacts with other systems to ensure that the newly changed software works in an integrated environment.
Finally code gets deployed to production. Different teams have different deployment strategies though we all share a goal of reducing risk when deploying new changes and minimizing the impact if a bad change does get out to production.
Each of these steps can be automated without the entire release process being automated. There are several levels of release automation that I’ll step through.
Continuous Integration
Continuous Integration is the practice of checking in your code to the mainline branch on a daily basis and verifying each change with an automated build and test process. Over the past 10 years Continuous Integration has gained popularity in the software community. In the past developers were working in isolation for an extended period of time and only attempting to merge their changes into the mainline of their code once their feature was completed. Batching up changes to merge back into the mainline made not only merging the business logic hard, but it also made merging the test logic difficult. Continuous Integration practices have made teams more productive and allowed them to develop new features faster. Continuous Integration requires teams to write automated tests which, as we learned, improve the quality of the software being released and reduce the time it takes to validate that the new version of the software is good.
There are different definitions of Continuous Integration, but the one we hear from our customers is that CI stops at the build stage, so I’m going to use that definition.
Continuous Delivery
Continuous Delivery extends Continuous Integration to include testing out to production-like stages and running verification tests against those deployments. Continuous Delivery may extend all the way to a production deployment, but teams keep some form of manual intervention between a code check-in and when that code is available for customers to use.
Continuous Delivery is a big step forward over Continuous Integration, allowing teams to gain a greater level of certainty that their software will work in production.
Continuous Deployment
Continuous Deployment extends Continuous Delivery: it is the automated release of software to customers, from check-in through to production, without human intervention. Many teams at Amazon have reached a state of continuous deployment. Continuous Deployment reduces the time for your customers to get value from the code your team has just written, and gives the team faster feedback on the changes made. This fast feedback loop lets you iterate quickly and deliver more valuable software to your customers, sooner.
Let’s look at a real pipeline at Amazon. This pipeline deploys code to production.
Our goal is to release software to production quickly and safely. The pipeline is designed to gain increased confidence that our change will be safe.
The first check completed is to ensure the change builds and the unit tests pass.
The next check is to ensure the new artifact works when integrated with other, dependent services. We use integration tests on the isolated stack to achieve this goal. Engineers use this stack to debug integration issues.
We then check that each of the services to be deployed to production works against the current production stack. Here we combine the new code with the production configuration to identify both code and config bugs.
We then deploy to a subset of production, a OneBox. We always run more than one server in production so that we have redundancy. We always strive to provide the best experience for our customers.
Once we gain confidence we then deploy to the rest of the production fleet within the region.
Repeat the OneBox and production deployments for each region.
AWS has wide pipelines as we incrementally roll out to each region.
CI servers have increased their features beyond simple compilation and unit tests execution. Build Servers, or Continuous Integration servers, have pluggable deployment functionality too.
But… CI systems made it hard for us to see the root cause of a release failure because it was all hidden inside the build logs.
When our AWS customers asked what we do that may be useful to them, we looked internally and realized that we had a lot of great tools that allowed us to move quickly and safely. We realized that we’d made something very special with Pipelines and we wanted to make it available to you. CodePipeline is the externalization of Pipelines, and it allows you to release software like Amazon does.
CodePipeline builds, tests and deploys your code, every time there is a code change, based on the release process you define.
We have partnered with popular source, build, test and deployment systems to provide out-of-the-box integrations.
Jenkins, CloudBees and Solano offer CI services for build stages
BlazeMeter, Apica, HP StormRunner and Runscope are load testing partners.
GhostInspector is a User Interface Testing partner
GitHub is a source code partner
Xebia Labs is a deployment partner.
(only show this screen briefly while I bring up the console)
https://console.aws.amazon.com/codepipeline/home?region=us-east-1#/view/SampleAppPipeline
OPEN BROWSER
Dashboard page:
The CodePipeline homepage shows the pipelines that your team has already built. You can also create a new Pipeline from this page. Let’s take a closer look at a pipeline.
Pipelines:
A pipeline represents the workflow of your release process. We’ve built CodePipeline to be very flexible in the way you can configure your workflow.
Artifact:
Artifacts are the files that are passed through a pipeline. For instance, when a pipeline is first triggered, a source artifact is created and placed in an S3 bucket. I’ll talk more about these when we extend CodePipeline.
Pipeline Revisions/Run:
Each time a new change is committed to your source location, a new revision is triggered. The new code change passes through all steps in the pipeline. A pipeline can have multiple revisions flowing through it at the same time. Pipeline runs can also be started manually by releasing a change.
Stage:
A stage is a collection of one or more actions.
Transitions:
Stages in a pipeline are connected by transitions and are represented by arrows on the console. Transitions can be disabled or enabled between stages.
Action:
An action, or plugin, is a task that will act upon the current revision running through the pipeline. You can configure actions to be executed in a specific order, either in serial or in parallel.
Each action has two links. The first link is underneath your action name and links back to the action’s webpage. CodePipeline provides a summary of the action’s information, but for a more detailed look at the configuration of your action you can go to the action’s page. For example, if I had a test action then this link would take me to my test suite definition.
The second link shows the details of the last pipeline run. Here you can get details on what occurred the last time the action performed its task. Keeping with the previous example, if I had a test action then this link would be to the results of the last execution of my test suite.
Configurable Workflow
CodePipeline is also easy to configure. We can edit this pipeline and modify an existing action or add in a new one. You’ll see we have actions categorized into source, build, test and deployment actions, with many partners to choose from. You can also add your own actions to these lists, as I’ll show when we extend CodePipeline.
I just showed you what CodePipeline looks like from the outside
Let’s look inside and see how CodePipeline processes a run.
Let’s take a look at an example pipeline. I’ve created a simple 3-stage pipeline to talk through my example.
Source actions are special actions. They continuously poll the source providers, such as GitHub and S3, in order to detect changes. Once a change is detected, a new pipeline run is created and begins. The source actions retrieve a copy of the source information and place it into a customer-owned S3 bucket.
Once the source action is completed, the Source stage is marked as successful and we transition to the Build stage.
In the Build Stage we have one action, Jenkins. Jenkins was integrated into CodePipeline as a CustomAction and has the same lifecycle as all custom actions. Talk through interaction
Once the build action is completed, the Build stage is marked as successful and we transition to the Deploy stage
The Deploy stage contains one action, an AWS Elastic Beanstalk deployment action. The Beanstalk action retrieves the build artifact from the customer’s S3 bucket and deploys it to the Elastic Beanstalk web container.
Talk about why we want to extend CodePipeline – provide some reasoning.
This is the user experience
Why?
Shows how easy it is to integrate into AWS CodePipeline
It’s fun
The user experience that we’ll get when using a Lambda Function
Let’s quickly run through what occurs in a custom action.
Here is a Custom Action as shown in CodePipeline.
CLICK.
Here is an EC2 instance with a service that processes an artifact in a pipeline
Poll for job
Ack job
Do custom logic, the magic.
Put Success
When the custom action polls for a job, the job contains information on the input and output artifacts, if there are any. The custom action can then download a copy of the input artifact and produce an output artifact as specified in the action definition through the console.
That’s a quick run through. We’ll revisit this again in a few moments when we build our own custom action.
What do we need to do in order to build a custom action?
Register an Action
Write the code to post a message to our messaging app
Deploy the code
We’re going to add a new custom action in the deploy stage that will send a message to our messaging App, Slack.
The Custom Action will
poll for jobs
acknowledge the job
send a motivational message to our messaging app
and then return successfully.
This is the high level architecture of what we’re going to build.
What do we need to do in order to build a custom action that keeps our boys at Bespoke Suits for Dogs?
Let’s talk through what we’re going to build
Register an Action
Write the code to post a message to our messaging app
Deploy the code
OPEN BROWSER – start codepipeline run
aws codepipeline create-custom-action-type --cli-input-json file://lib/custom_action/RegisterCustomAction.json
Go to CodePipeline and add in the custom action
Start a run.
What do we need to do in order to build a custom action that keeps our boys at Bespoke Suits for Dogs?
Let’s talk through what we’re going to build
Register an Action
Write the code to post a message to our messaging app
Deploy the code
The dogbot_says_hi method contains the heart of the logic of the custom action. It’s not important what occurs in there, so I’m going to skip over it and keep our focus on the work that’s needed for CodePipeline integration.
The execution_id is passed back to CodePipeline and is used to render the URL for the pipeline run on the action.
Deployed to an Elastic Beanstalk worker. I’m using the EB worker because it has built-in pollers and is a good fit for custom actions that do not need a UI.
I’m actually deploying the custom action with another pipeline that I’ve previously setup. I won’t show it today as I don’t want to introduce more complexity to the talk.
Wrap it up:
Showed you how easy it is to extend CodePipeline
How straightforward it is to integrate with an existing service
We’ve had one customer write their custom action in cron and bash.
What we learned as we evolved our release processes
Overview of release processes
A tour of AWS CodePipeline
Look under the hood of AWS CodePipeline
Extending AWS CodePipeline
Give CodePipeline a try. The first pipeline is free.
We have good documentation online on how our product works, getting started and diving deeper into building custom actions
Come and talk to us in the forums. We’re active in the AWS forums and we’re always happy to help.
We have more code samples in awslabs on github.com, including a custom action example.