We deal with the impact of applying DevOps practices (infrastructure as code, continuous integration, release and deployment on demand) in the development of embedded projects spanning the full spectrum of mechanics, electronics and software.
Incorporating DevOps concepts into the regular test flow enables collaboration between engineers of all three disciplines with very fast feedback cycles.
Software is in a position to provide tools and data to electronics/mechanics to verify design decisions while moving forward with the implementation.
Deliverables are available at any point during development, providing a constant stream of functioning prototypes from the very beginning and an almost seamless transition from a 3D-printed proof of concept using evaluation boards to the integrated device.
In the talk we present the basic DevOps principles and concepts and show how they map to established procedures like EOL tests, affect production considerations and ease hardware-software integration.
Applying devops principles to testing embedded systems | var | 2. September 2015
Deal with the deployment story from the beginning.
Deep Water Dive
The tools are part of the system.
Great Power…
• Easy transition between hardware revisions
• Consistency between development, test,
production & manufacturing
• Shippable at all times
• Re-usable
Let's start with the elephant in the room: DEVOPS - the definition!
The one thing I am sure about when it comes to DevOps
For all the discussions, debates and Google searches, I have not been able to reduce DevOps to a single-sentence definition.
Basically, looking it up on the net you can come up with "DevOps is a group of concepts following a (small) set of principles using a cornucopia of tools". You will also see it referred to as a mindset or a movement.
I know, right? Broad, vague, entirely...fuzzy.
So let us progressively make it better.
Basic Principles
Holistic System Thinking: Look at the system as a whole. Think about developing, deploying, operating and retiring it.
No Silos: Do not isolate roles within a team (dev, test, ops). Everyone should be aware of how the system fits together and how the tools work.
Short and fast feedback loops: The art of collecting metrics and delivering them to those who need them.
Infrastructure-as-Code: Treat your servers and instances as code. This means tests and version control.
Automate Everything: Codify the team's expertise so that everyone can use it.
Consistency and Communication in Infrastructure: The two key words when dealing with the complexity of modern SW development.
I’ll make a small detour to establish common vocabulary:
Developer infrastructure is the everyday environment used for software development. This includes editors, compilers and any settings or hardware required to test at your desk. Key word: consistency.
Project infrastructure is every application used to enable and support collaboration between team members and stakeholders: version control, issue tracking, FTP servers etc. Key word: communication.
Solution infrastructure is the production environment, where software goes to work. Key word: money.
Now for embedded development we have to deal with hardware at every point.
So what do these principles and concepts mean for embedded?
I'm saying just "embedded" and not "embedded testing" because, you know, holistic system view and no silos…
Thinking about the system means I need to think about how to test the system, which starts with getting the software ON the system.
Package and deployment stories need to be told early, and figuring out the state of the system (from version to operational status) is the first question we ask.
Start with the update/upgrade story, which leads to the compatibility story, which forces you to think about the packaging story, which gives you release management!
You can’t be holistic if you leave vital parts out.
We don't just deliver the firmware binaries; we provide a whole environment with tools (flash, update, debug, script etc.).
The tools we develop are *part* of the system: they need to be tested as well and they participate in the compatibility and update/upgrade stories, which also means they are part of our deliverables, the release package. They form part of our contract for system operation.
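The compatibility story for shipped tools can be sketched with a version check; this is a minimal illustration, not our actual implementation, and the version scheme and range are invented. It leans on `Gem::Version`/`Gem::Requirement` from Ruby's standard library:

```ruby
require 'rubygems' # Gem::Version / Gem::Requirement ship with Ruby

# Hypothetical: the flash/update tools in a release package declare
# which firmware versions they know how to talk to.
TOOL_SUPPORTS = Gem::Requirement.new(">= 1.2", "< 2.0")

# Check a firmware version string against the declared range.
def compatible?(firmware_version)
  TOOL_SUPPORTS.satisfied_by?(Gem::Version.new(firmware_version))
end
```

With a declaration like this in the release package, a mismatch between tool and firmware fails loudly instead of producing confusing behavior on the target.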
We follow the pull-request workflow (also known as GitHub flow) and have a bunch of test stations with HW attached.
Build Server, test stations and developer VMs are all managed with Chef for consistency.
Our CI aims to mitigate the single biggest problem in embedded software: Build times.
Builds are triggered for branches for which a PR has been created. These builds also run a set of smoke tests to prevent major breakage. Once that is green we can merge.
A merge triggers a build in our master branch which is a pure software build (so unit tests but no hardware) and ends successfully with the creation of a release package that is uploaded to our depot server.
All integration jobs after the master build operate on release packages. Each job ensures that further time investment is justified.
So each job tags the release and each tag says something about the quality of the release.
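The tagging idea can be sketched in a few lines of Ruby; the class and gate names here are invented for illustration, not taken from our pipeline:

```ruby
# Hypothetical sketch: each CI job stamps the release package with a
# quality tag once its gate passes; the set of tags tells you how far
# the package has come.
class ReleasePackage
  attr_reader :version, :tags

  def initialize(version)
    @version = version
    @tags = []
  end

  # A gate runs its checks and, on success, tags the package.
  def pass_gate(gate_name)
    @tags << gate_name
    self
  end

  # A package is shippable once it carries every required tag.
  def shippable?(required_gates)
    (required_gates - @tags).empty?
  end
end

pkg = ReleasePackage.new("1.4.0-rc2")
pkg.pass_gate("smoke").pass_gate("hw-integration")
```

The point is that quality is a property attached to an immutable release package, not something re-derived by rebuilding.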
The important point to take from that busy slide is that the first deliverable of our build pipeline is a complete release package containing everything we need to run and test the system.
These packages then pass through the quality gates.
Quick and dirty early on, long and thorough the further we go.
Once a release package passes all the gates, it is ready to ship.
This methodology has two advantages: the more time we invest, the more our confidence in the quality of the system grows, and the better justified the unavoidable investment in manual testing becomes.
It also helps us deal with long build times by breaking them up into stages, shortening the feedback loop in case of severe breakage.
How do we make it through the gates?
Automate provisioning of environments using tools like Chef, Puppet or Ansible.
Aim for parity between development and testing environments. The CI system should just replicate the things that developers do (or can do) in their day-to-day working environment.
When this is not possible aim for incremental additions, not radical differences.
Isolate environments - this is with respect to multiple projects: keep them separate, ideally each with its own VM.
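A provisioning recipe in this spirit might look like the following Chef sketch; the cookbook contents, package names, paths and depot URL are all assumptions for illustration, not our actual setup:

```ruby
# Hypothetical Chef recipe: the same recipe provisions build server,
# test stations and developer VMs, which is where the consistency
# between environments comes from.
package %w[gcc-arm-none-eabi cmake openocd] do
  action :install
end

directory '/opt/toolchain' do
  owner 'ci'
  mode  '0755'
end

# Pin the toolchain configuration from a single versioned source so
# every environment resolves to the same compiler settings.
remote_file '/opt/toolchain/toolchain.cmake' do
  source 'https://depot.example.com/toolchain/toolchain.cmake'
  checksum node['toolchain']['checksum']
end
```

Because the recipe is code, a change to the environment goes through review and version control like any other change.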
Single source of authority means we have one place with our compiler and linker settings and assorted toolchain configuration.
Once we have these in one place, any IDE will just be in the way. Settings tend to drift, they are not diffable or viewable together, it is not easy to determine what needs to be checked in, and by default they destroy out-of-band builds.
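The single source of authority can be as simple as one versioned Ruby file that every build front end reads; the toolchain names and flags below are invented for illustration:

```ruby
# Hypothetical sketch: one diffable, versioned hash holds the toolchain
# settings. Rake tasks, CI jobs and editor integrations all read from
# here instead of keeping their own copies.
TOOLCHAIN = {
  cc:      "arm-none-eabi-gcc",
  cflags:  %w[-mcpu=cortex-m4 -mthumb -Os -Wall -Werror],
  ldflags: %w[-Wl,--gc-sections],
}.freeze

# Render a compile invocation from the shared settings; a diff of this
# file shows exactly what changed between revisions.
def compile_command(source, object)
  [TOOLCHAIN[:cc], *TOOLCHAIN[:cflags], "-c", source, "-o", object].join(" ")
end
```

Any consumer that bypasses this file is, by construction, outside the contract.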
Don’t put test specifications in Word documents if possible.
Keep them by the code, versioned and executable.
This is a big subject, worthy of its own presentation. For the moment I'll just show you how we approach this at Zuehlke.
We have not eliminated Word documents unfortunately but we are better able to manage fuzziness in specifications.
The real value is the associated script at the bottom, which builds on a DSL implemented in Ruby, the same code that is driven from the Rake tasks. In this script the verbs and commands used match the way we talk about the project.
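To give a flavor of what such a DSL can look like, here is a small sketch; the `Station` class, the verbs (`flash`, `reboot`, `query`) and the scenario are all invented for illustration, not our actual DSL:

```ruby
# Hypothetical test station abstraction; a real one would talk to the
# attached hardware over serial/JTAG instead of logging.
class Station
  attr_reader :log

  def initialize
    @log = []
  end

  def flash(image)
    @log << "flash #{image}"
  end

  def reboot
    @log << "reboot"
  end

  def query(_item)
    "1.4.0" # stub: would read the value back from the target
  end
end

# Run the block with the station's verbs in scope, so the spec reads
# the way the team talks about the device.
def scenario(_name, station: Station.new, &steps)
  station.instance_eval(&steps)
  station
end

scenario "update leaves device on the new version" do
  flash "firmware-1.4.0.bin"
  reboot
  raise "wrong version" unless query(:version) == "1.4.0"
end
```

Because the spec is plain Ruby, it lives next to the code, is versioned with it, and runs from the same Rake tasks as everything else.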
What you see in the screenshot are the tasks available on the command line in a release package.
These are also available in the development environment, next to dozens more that are development-specific.
Don't stop at build/test/deploy/release. Add them all. Build a vocabulary.
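Such a task vocabulary might look like the following Rakefile sketch (meant to run under `rake`, not plain `ruby`); the task names, board config and commands are assumptions for illustration:

```ruby
# Hypothetical Rakefile: every recurring activity gets a named task,
# so the project vocabulary is executable from the command line.
task :build do
  sh "cmake --build build"
end

task :flash => :build do
  sh "openocd -f board.cfg -c 'program build/firmware.elf verify reset exit'"
end

task :logs do
  sh "screen /dev/ttyUSB0 115200" # attach to the device console
end

task :release => [:build] do
  sh "tar czf release.tgz build/firmware.elf tools/"
end
```

`rake -T` then doubles as living documentation of everything the team does with the system.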
Command Line is King! (but not everything)
An information radiator gets the information to the team ASAP.
Away from isolated tasks, toward a holistic view of the system, with intensive communication and coordination across disciplines.