You've heard about Continuous Integration and Continuous Delivery, but how do you get code from your machine to production in a rapid, repeatable manner? Let a build pipeline do the work for you! Sam Brown will walk through the how, the when, and the why of the various aspects of a Continuous Delivery build pipeline, and how you can start implementing changes tomorrow to realize build automation. This talk starts with an example pipeline, then goes into depth on each section, detailing the pros and cons of the different steps and why you should include them in your build process.
2. Thanks to Mike McGarr and
Excella Consulting for hosting!!
3. Sam Brown
11+ Years as a Java developer with commercial and
federal clients
Practicing continuous integration/continuous delivery
for ~6 years
DevOps Evangelist at Excella (www.excella.com)
Certified Scrum Master
Puppet Certified Professional
4. Basic components of an automated enterprise
Continuous integration
Dependency management
Automated build tools
to build...
Shared API libraries
Custom web applications
Products
5. “The purpose of a pipeline is to transport some resource
from point A to point B quickly and effectively with minimal
upkeep or attention required once built” – me
So how did 'pipelines' get applied to software? Let's
try a few changes to this statement...
“The purpose of a pipeline is to transport ____ from
____ to ____ quickly and
effectively with minimal upkeep or attention required once
built” – me
6. Build pipelines require measurements and verification of
the code to ensure:
Adherence to standards
Quality proven through testing
A product that meets the user's needs
The purpose is not just transport, but to ensure that our
product is high-quality, prepared for the environment it will
reach, and satisfies the end-user.
7. “An automated manifestation of the process required to get your
team’s application code to the end-user, typically implemented via
continuous integration server, with emphasis on eliminating
defects” – me (again)
8. One Size Fits All!
…in fact, NONE ARE!
Build pipelines will vary as much as
applications
Different teams have different needs
Simplicity is key
11. System of record
Just do it!
Take advantage of commit hooks
Build from trunk and reduce server-side
branches
Tag often
Don't check in broken code!
12. Purpose: Integrate, build and unit test code for quick
feedback
Best Practices
Runs in under 10 minutes (rapid feedback)
Unit tests do not require external resources
Run on EVERY developer check-in
Fixing broken builds is the top priority!
Gamification to drive adoption
80% test coverage or BETTER
Challenges
LOTS of builds
False sense of security
Writing tests is hard
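A unit test in the spirit of slide 12 touches no database, network, or filesystem, so the whole suite stays well under the 10-minute budget. A minimal sketch in Python; the `order_total` function and its rules are illustrative, not from the talk:

```python
import unittest

def order_total(prices, discount=0.0):
    """Sum line-item prices and apply a fractional discount (hypothetical logic)."""
    if not 0.0 <= discount < 1.0:
        raise ValueError("discount must be in [0.0, 1.0)")
    return round(sum(prices) * (1.0 - discount), 2)

class OrderTotalTest(unittest.TestCase):
    # Pure in-memory tests: no external resources, so they run in milliseconds.
    def test_total_without_discount(self):
        self.assertEqual(order_total([10.0, 5.5]), 15.5)

    def test_total_with_discount(self):
        self.assertEqual(order_total([100.0], discount=0.25), 75.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            order_total([10.0], discount=1.5)

# Run with: python -m unittest <this_file>
```

Hundreds of tests of this shape are what make "runs in under 10 minutes" and "run on EVERY developer check-in" realistic.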
13. Purpose: Test component and/or external resource
integration
Best Practices
Test connectivity with external resources
Test frameworks load correctly
Test application components work together
Test configuration
Fewer integration tests than unit tests
Challenges
External resources may not be available in all environments
○ Mock locally
Can be time consuming
○ Use local resources
○ Separate short/long running tests
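The "mock locally" tip above can be sketched with Python's `unittest.mock`: when the external resource is unavailable in an environment, patch the transport layer and test the integration logic anyway. `fetch_rate` and the `http_get` helper are hypothetical stand-ins for a real service client:

```python
import unittest
from unittest import mock

def http_get(url):
    # In a real client this would perform an HTTP request; here it stands in
    # for an external resource that may not be reachable in every environment.
    raise RuntimeError("external service not available in this environment")

def fetch_rate(currency):
    """Look up an exchange rate from a (pretend) remote service."""
    body = http_get("https://rates.example.com/" + currency)
    return float(body)

class FetchRateIntegrationTest(unittest.TestCase):
    def test_rate_parsed_from_service_response(self):
        # Patch the transport so the test runs without the external service.
        with mock.patch(__name__ + ".http_get", return_value="1.09"):
            self.assertEqual(fetch_rate("EUR"), 1.09)
```

The slow, real-resource variant of this test would live in the separate long-running suite.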
14. Purpose: Use automated tools to inspect code
Best Practices
Check syntax
Find security vulnerabilities
Record test coverage
Discover complexity
Optional: Fail based on a metric
Optional: View technical debt
Challenges
Not all code analysis tools are free
Learning/installing new tools
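The optional "fail based on a metric" step above amounts to a quality gate the CI server checks after analysis runs. A minimal sketch; the metric names and thresholds are illustrative assumptions, not specific to any tool:

```python
def quality_gate(metrics, min_coverage=80.0, max_complexity=10):
    """Return a list of violations; an empty list means the gate passes.

    `metrics` is a dict such as an analysis step might report, e.g.
    {"coverage": 85.0, "worst_complexity": 7} (hypothetical shape).
    """
    violations = []
    coverage = metrics.get("coverage", 0.0)
    if coverage < min_coverage:
        violations.append("coverage %.1f%% is below the %.1f%% floor"
                          % (coverage, min_coverage))
    if metrics.get("worst_complexity", 0) > max_complexity:
        violations.append("cyclomatic complexity %d exceeds the limit of %d"
                          % (metrics["worst_complexity"], max_complexity))
    return violations

# A CI step would fail the build when violations are reported:
#   if quality_gate(report): sys.exit(1)
```

The 80% default mirrors the coverage target from the continuous-integration slide.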
15. Purpose: Label code and package as
deployable
Best Practices
Labeling allows you to go back in time
Packaging code for deployment
Reduce complexity by combining steps
NO configuration in package -> Package once,
deploy multiple
Challenges
Labeling can be resource intensive
Many packaging options
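"Package once, deploy multiple" works because the artifact name carries a unique, traceable label and the package contains no environment configuration. One possible naming scheme, sketched in Python (the scheme itself is an assumption, not from the talk):

```python
def artifact_label(app, version, vcs_revision, build_number):
    """Build a unique artifact label from the VCS revision and build number,
    so any deployed package can be traced back to the exact code it was
    built from (illustrative naming convention)."""
    return "%s-%s+build.%d.r%s" % (app, version, build_number, vcs_revision[:8])

label = artifact_label("webapp", "1.4.0", "9f3c2a1d7e", 57)
# The single package carrying this label is promoted unchanged through
# DEV, TEST and PROD; per-environment configuration is applied at deploy time.
```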
16. Purpose: Make artifacts available for
deployment or available to other teams
Best Practices
Publish a versioned artifact
Make repository available
Reduce complexity by combining steps
Challenges
Requires initial complex setup
Security requirements around exposing artifacts
○ Use a tool with security built-in like Nexus
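Publishing a versioned artifact usually means writing it to a well-known, versioned path in a repository manager such as Nexus. This sketches the standard Maven-style repository layout; the coordinates are illustrative:

```python
def repo_path(group_id, artifact_id, version, packaging="jar"):
    """Return the repository-relative path for a Maven-style artifact:
    groupId with dots as slashes, then artifactId/version/file."""
    return "%s/%s/%s/%s-%s.%s" % (
        group_id.replace(".", "/"), artifact_id, version,
        artifact_id, version, packaging)

path = repo_path("com.example", "webapp", "1.4.0", packaging="war")
# -> com/example/webapp/1.4.0/webapp-1.4.0.war
```

Because the path is predictable, other teams (and later pipeline stages) can resolve a published version without asking the building team for it.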
19. Purpose: Check infrastructure code syntax and compile
manifests prior to applying them
Puppet Lint – Static format checker for
Puppet manifests
No-op Test Run – Ensure that manifest
compiles
Challenges
Puppet-lint requires a ruby-based environment
No-op test needs production-like VM
Long feedback loop
20. Purpose: Test infrastructure in a prod-like environment
Puppet Apply – Apply Puppet manifests against a VM that
mimics DEV/TEST/PROD
Infrastructure Tests – Test your environment!
Example tests:
Users and groups created
Packages installed
Services running
Firewall configured
Challenges
Long feedback loop
Yet another language (cucumber/rspec/other)
VM must be up to date with DEV/TEST/PROD
21. cucumber-puppet

    Feature: Services
      Scenario Outline: Service should be running and bind to port
        When I run `lsof -i :<port>`
        Then the output should match /<service>.*<user>/
      Examples:
        | service | user     | port |
        | master  | root     | 25   |
        | apache2 | www-data | 80   |
        | dovecot | root     | 110  |
        | mysqld  | mysql    | 3306 |

    http://projects.puppetlabs.com/projects/cucumber-puppet/wiki

    rspec-puppet

    require 'spec_helper'

    describe 'logrotate::rule' do
      let(:title) { 'nginx' }

      it { should include_class('logrotate::rule') }

      it do
        should contain_file('/etc/logrotate.d/nginx').with({
          'ensure' => 'present',
          'owner'  => 'root',
          'group'  => 'root',
          'mode'   => '0444',
        })
      end
    end

    http://rspec-puppet.com/
22.
23. Repeatable, automated, process to ensure that our application is
properly installed in the target environment and that the application
meets acceptance criteria.
24. Purpose: Test acceptance criteria in a prod-like
environment
Puppet Apply – Apply Puppet manifests including
deploying application
Run Acceptance Tests – “End-to-end” testing
End-user perspective
Meets user-defined acceptance criteria
Possible tools: Cucumber, Selenium, Geb, Sikuli
Challenges
Maintain a production-like VM
Acceptance tests brittle
○ Test at the right level
Acceptance tests long running
○ Run nightly
25. Purpose: Label application and infrastructure
code, deploy to DEV environment
Label Release Candidate – Known
“accepted” versions will be deployed
together
Deploy to DEV – Automated deployment
Infrastructure AND application
Challenges
DEV updating, not deployed from scratch
○ Create tests for ALL possible scenarios
Security
○ Work with security early and often!
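Labeling a release candidate means recording the accepted application and infrastructure versions as a single unit, so they always deploy together. A minimal sketch; the record shape and field names are illustrative assumptions:

```python
def release_candidate(app_tag, infra_tag, rc_number):
    """Pair the accepted application and infrastructure tags under one
    release-candidate label (hypothetical record shape)."""
    return {
        "label": "rc-%d" % rc_number,
        "application": app_tag,
        "infrastructure": infra_tag,
    }

rc = release_candidate("webapp-1.4.0+build.57", "puppet-manifests-2.1.3", 12)
# Deployments to DEV and beyond reference rc["label"], never the two tags
# independently, so application and infrastructure versions cannot drift apart.
```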
28. Purpose: Enable the test team to pull the
latest code
Pull-based deployment
Manual Testing/Approval
Challenges
Enabling test team is a paradigm shift
Producing changes too fast
○ Create good release notes
○ Not every build needs manual testing
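In pull-based deployment the test team decides when to pull, and what they pull is the newest build that passed every automated stage. The selection rule can be sketched as a small function; the build-record shape is an illustrative assumption:

```python
def latest_pullable(builds):
    """Return the newest build that passed all automated pipeline stages,
    or None if no build is currently eligible."""
    passed = [b for b in builds if b["stages_passed"]]
    if not passed:
        return None
    return max(passed, key=lambda b: b["number"])

builds = [
    {"number": 55, "stages_passed": True},
    {"number": 56, "stages_passed": False},  # broken build: never offered
    {"number": 57, "stages_passed": True},
]
# On request, the test environment would deploy build 57.
```

Because broken builds are simply never offered, the test team can pull at their own pace without coordinating with developers.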
29. Purpose: Enable operations team to pull the
latest code into production
“Push-button” deployment to production
Requires testing approval
Challenges
Audit/security check before deployment
○ Discuss with operations
○ Automate as much as possible and prudent
Paradigm shift for operations, TOO EASY!
○ Engage the operations team as early and often
Rollback/Roll forward strategy
○ Easier with RPMs, I prefer roll forward
30. Remove human error
Repeatability tests and improves the
process
Visibility from code to deployment
Baked-in quality
Metrics, metrics, metrics
Rapid and constant feedback
Releases are non-events
31. Why do we store old/obsolete versions?
Rollback
Auditing
History?
Any other reason?
My view: Store only the latest build and current production release
Bugs fixed in latest version
(Almost) impossible to reproduce environments
Version control has history
Exception: Other teams dependent on a previous version
Store major/minor revisions
Reasoning: In a continuous delivery environment, delivering frequently
allows you to keep moving forward with new features AND bug fixes!
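The retention policy above can be expressed as code: keep only the latest build and the current production release, plus any major/minor versions other teams still depend on. A sketch with an illustrative artifact-record shape:

```python
def prune(artifacts, production_version, pinned_versions=()):
    """Return the artifacts to keep: the latest build, the current
    production release, and any versions pinned for dependent teams."""
    latest = max(artifacts, key=lambda a: a["build_number"])
    keep = {latest["version"], production_version}
    keep.update(pinned_versions)  # exception: versions other teams still use
    return [a for a in artifacts if a["version"] in keep]

artifacts = [
    {"version": "1.2.0", "build_number": 40},
    {"version": "1.3.0", "build_number": 51},  # current production
    {"version": "1.4.0", "build_number": 57},  # latest build
]
kept = prune(artifacts, production_version="1.3.0")
# 1.2.0 is discarded; version control still holds its full history.
```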
32. Put EVERYTHING in version control
Start simple, up your unit test coverage.
Analyze your code to learn where to focus your effort
Install CI and start with two build steps
Start and maintain a wiki
And lastly…
Some assumptions about enterprises tackling automation:
They possess some standard components to automate, building shared APIs, products and/or custom web applications
Building software is mostly at a very micro level when viewed through the enterprise
Ignoring business logic, there are still a LOT of places software could fail in this view
Eliminate defects in:
The process
The product
Our use-case pipeline:
- Building a web-services based web application
- Has an environment build
- Fork/Join
- Does NOT flow all of the way to Production
Infrastructure as code!
- Puppet, Chef, CFEngine and batch scripts should all be in version control, just like application code
Flipped which side seems simple and which side seems hard!