Many organizations have already taken the leap into DevOps. Luckily, this daunting path is now well lit with best practices drawn from customer experience, toolkits for success, and warning signs of ugly DevOps practices.
Guest host Amy DeMartine, Senior Analyst at Forrester Research, and Perforce invite you to a live broadcast on using DevOps to break your team's bad habits and increase your business value through speed, fewer errors, and new ways to collaborate.
In this broadcast, you'll learn to:
- Use Agile and DevOps to improve collaboration and simplify delivery
- Avoid bad DevOps habits
- Build a toolkit for success and embrace uncertainty
- Reference a use case from one of Perforce's largest customers when setting up your own release cycles
The pendulum of history swings, particularly in computing history.
DevOps can be seen as the natural outcome of what should be a familiar pattern.
By tracing this pattern, we can draw an analogy that highlights some useful truths.
The developer-operator distinction was born with the first computers: dev and ops were necessarily opposed, owing to the nature of the machines and to their respective goals and roles.
Being so scarce, expensive, and hard to maintain, the first generation of computers were universally shared resources, supported (and protected) by a dedicated IT "priesthood" of operators. Developers had their source on cards, tapes, shared storage, disks, etc., and requested runs, sometimes through an entirely separate language (e.g., IBM's JCL). The notion of an "install set" really wasn't a thing: programs had to be compiled, executed, and the resulting output (or list of errors) dumped as a single unit of work.
The first major inflection point (roughly 1975 – 1992) was the advent of the personal computer (PC), which changed everything: for the first time, individuals could own a computer and thus load and execute their own software, sometimes without relying on dedicated operations personnel at all. This generation of computing was characterized by developers producing a single directory structure and often installing the new version on any office servers/machines themselves. The individual programmer, or small team, was responsible for producing the release, and often had to invoke the various steps manually. For example, it wasn't uncommon to execute a make utility to build object modules from C files and subsequently link them into an executable image. The subsequent "release process" was often little more than copying that entire folder tree to the appropriate local folder, network share, or appropriate media (floppy disks, tapes, etc.) for distribution.
PC software grew exponentially more complicated as computing power became increasingly ubiquitous (roughly 1992 – 2007). Challenges included supporting various client/server deployment models, network topologies, multiple platforms, etc. The burdens on developers grew correspondingly and complicated the release process to the point where installers became essential tools for delivering software to the masses. Increasingly, what developers produced was handed off through a relatively primitive pipeline stitched together from custom processes, all typically managed by build/release engineers distinctly separate from developers. The individual programmer, or small team, was often responsible only for producing working builds, which were then handed off to others. For example, it wasn't uncommon for developers to execute a make utility to build object modules from C/C++ source files and subsequently link them into an executable image. That image could then be included in a directory structure, along with other files and folders (often hand copied), to form an overall release target. Said "golden build" was then handed off to others, who invoked subsequent tools to transform that target into working installers for appropriate distribution media (CD, DVD, etc.).
What characterizes the third wave of software development is the rectification of the second wave's error of separating developers from operations. The Agile movement's focus on iterating with ever-shorter release cycles made the old ways impossible to sustain. The early pipeline and its processes had to be reimagined with both developers and operations specialists involved, alongside other stakeholders as well. For example, circa 2016 it isn't uncommon for the entire pipeline to be driven by a Continuous Integration tool like Jenkins. Such CI tools often execute a make utility to build object modules from *.c files and subsequently link them into an executable image. That image can then be included in a directory structure, along with other files and folders, to form an overall release target, which subsequent automated tools transform into appropriate distribution media.
TODO: The logos I copied/pasted need some help. It would be nice if there were some way to make this consistent with the original diagram, like the previous slides, but I’m out of ideas on that point.
For details see: http://devops.com/2016/06/28/scania-adopting-devops-auto-production/
TODO: Need the link to the Scania video from MERGE 2016