If testers sit passively through agile planning, important testing activities will be missed or glossed over. Testing late in the sprint becomes a bottleneck, quickly diminishing the advantages of agile development. However, testers can actively advocate for customers’ concerns while helping the team implement robust solutions. Rob Sabourin shows how testers contribute to the estimation, task definition, clarification, and the scoping work required to implement user stories. Testers apply their elicitation skills to understand what users need, collecting great examples that explore typical, alternate, and error scenarios. Rob shares many examples of how agile stories can be broken into a variety of test-related tasks for implementing infrastructure, data, non-functional attributes, privacy, security, robustness, exploration, regression, and business rules. Rob shares his experiences helping transform agile testers from passive planning participants into dynamic advocates who address the product owner’s critical business concerns, the team’s limited resources, and the project’s technical risks.
The first "run" of this at StarWest 2011 went quite well. There were lots of questions around how automation fits into this model, and in fact how all types of testing fit. So it might be a useful extension for future versions.
The other thing that came up were questions around setting up charters/sessions, i.e., how to "begin." In a workshop variant of this I'd like to decompose a real app and do some iterative, simulated session execution.
Another focus point would be what the "gap" between sessions looks like: debrief, note reviews, bug triage, re-planning charters, setting up the next session's strategies, etc.
Bob: I get asked all the time about ratios. They no longer matter. If you have zero testers, you still need to test, but now you don't have any expertise for that role.
MT: (Challenge: you should set a baseline of expectations; the best scenario I have seen work is 1 manual tester and 1 automator per 3-4 developers.) Only once you get a ton of your automation completed can someone play both roles. I'm currently fighting the old waterfall stereotype of the 80/20 rule for testers on a team; it does NOT work. In the end, the whole team is responsible for the completion of the story/sprint, so if all your testers are on vacation, devs have to pick up the tasks.
Bob: Might want to tell the Sharda story here? Solid manual tester who made a huge difference.
Bob: Might want to explore whether "soft skills" are more important than "technical skills." That being said, I would want some SDET-like folks.
MT: Agree. Saradha turned into a bad egg, and that will kill a Scrum team.
Bob: You hear this a lot, always from the agile pundits, purists, and most coaches. Many of these folks have only done "agile coaching" for too many years, OR they're working in small start-up teams or on greenfield projects.
Bob: Talk about the first false start. The reality is that it takes TIME to build up automation in a legacy-based system, and you need a consistent strategy. Might share the story of iContact, where we had two false starts on automation before we really clicked.
Mary: Talk about the second restart. Dedicated focus and the solid skill of Mike were a huge help! It did spoil the automation team, though; afterward they only wanted to build frameworks.
Bob: And making the "Business Case" is always a challenge. I like the Salesforce example of an organization-wide commitment to coverage!
Mary: iContact story on all the "promotion" I had to do to make people care. But it worked.
Bob/Mary: Explore the anti-pattern of treating ATDD/BDD as a test automation activity first.
The QA Manager should partner with the Dev Manager to explore unit testing standards and make sure everyone is on the same page.
Bob: One of the challenges here is skill. In many organizations I've seen, there isn't a strong level of experience in writing solid unit tests. Or there are negative reactions to retrofitting legacy systems with unit tests.
Mary: Total believer in writing the right test for the right reason, not in having 100% code coverage. QA can code review unit tests; devs can code review integration/regression tests. So training and mentoring are a strong part of your improvement play.
TRUST also comes into play. I usually ask "testers" if they trust developer-based tests & testing (whether manual or automated). Nearly 90% of respondents (not a scientific poll ;-) say no, and will create replicated tests to cover the same things.
Might also mention the connection this has to Done-ness criteria; for automation, incremental completeness.
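To make "the right test for the right reason" concrete for attendees, a minimal sketch might help. Everything here is invented for illustration (the `apply_discount` function is hypothetical, not from any real codebase): the point is that tests chosen for risk, boundaries, and error cases document intent better than tests chasing a coverage number.

```python
# Hypothetical example: tests chosen for risk, not coverage.
# apply_discount is an invented function for illustration only.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject nonsense inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The "right tests": a typical case, a boundary, and an error case.
# These are the places real bugs hide, and they read as a spec
# for anyone (QA included) code-reviewing the unit tests.
def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_boundary_no_discount():
    assert apply_discount(100.0, 0) == 100.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

A handful of intention-revealing tests like these also give testers a concrete artifact to review, which is one practical way to build the trust discussed above.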
In the shorter "versions," this slide is "hidden." In the ½-day workshop, we'll cover it.
Mary: Agreed; comments from slide 3A could go here as well.
Bob: Mary, be prepared to comment on this one as an "extension" to slide #3.
Mary: Discuss Test Ideas and Mind Maps from iContact. Here the story of our journey at iContact would be helpful: explaining our use of a "Test Plan" for each Sprint to drive team-based conversations around how they'll attack testing (quality) within each sprint for the work they've signed up to do. Notice I didn't say for each story, but for the "body of work" in support of their Sprint Goal.
And if you can't (or it doesn't make sense to) automate everything first, you do need a record of test cases. But "agilify" them. What do I mean by that? Perhaps put that question out to attendees...
In the shorter "versions," this slide is "hidden." In the ½-day workshop, we'll cover it.
Bob: I'm not sure if Mary saw it this way, but I've always felt that ET was a key to our whole-team attitudes toward quality and testing. Yes, we had to coach it into people by setting the standard. But getting folks to pair, collaborate, and test together was key: the dynamic pairing, the inclusion of other functions (customer support), using it on release nights as our "regression mechanism," and having the whole team take responsibility for product quality.
Bob: I always remember a discussion that Lisa Crispin and I had at a conference. She was away from her team, and if you know Lisa's context, she's a tester on a very small XP/agile team. My question was: what happens while she's "away"? Her answer was that her whole team tested and was responsible for quality; the burden didn't just fall to her. So she felt "comfortable" traveling, doing what she could to test remotely, because her team understood their role.
The other point here is that Testing doesn't mean Quality. Let's broaden that view in this discussion, particularly speaking to "Finding the Customer."
Mary: Talk about the QA Manager and Director at iContact leading/championing this, even to the point of salary equity. Might want to share how our testers at iContact take a very vital and interactive role in our sprint reviews: not only at a feature level, but in making our testing & quality efforts visible.
Bob: This is the place where I want to speak to HEALTHY, day-by-day dynamics between testers and developers within highly productive agile teams. Topics:
Mary: Pairing, code reviews, paired demos with the PO
Mary: Micro-handoffs
Mary: Bug reporting ;-) Report OR fix? DO NOT WRITE UP EVERY BUG
Bob: Power of WIP, power of collaboration – sitting together
Bob: New hires... learning curve
Bob: First point of emphasis is the balancing act of supporting self-directed teams. So, introduce the notion of self-direction.
Bob: Perhaps tell the Teradata story where the managers went quiet as "Chickens."
Mary: Emphasize the importance of Done-ness in guiding agile teams; also the notion of Guard-Rails. When I got to iContact, the QA team could not recite what the Done-ness criteria were.
Mary: Servant leadership... Information Radiators, transparency, and agile-centric metrics would be good to discuss: moving from QA- and test-centric metrics toward team-centric metrics (value, throughput, quality, team).
These are all "exit criteria." What might be interesting here is to introduce the discussion around "Readiness Criteria," particularly for user stories. What would be some of the aspects of that? And what behaviors or changes in sprint execution would it influence?
In the shorter "versions," this slide is "hidden." In the ½-day workshop, we'll cover it.
Mary: It would be interesting if Mary could tell a story around our KPI attempts at iContact: some of the things we tried and the variations. Also, recounting our struggles to get the metrics, no matter what we measured, to drive action and improvement across our teams. One example: retrospect if there are more than 2 Sev 1s per release.
Bob: Perhaps share my ChannelAdvisor retrospective tale. Not only talking about trivial changes vs. impactful changes, but it leads into:
Courage
Team trust
WIP and co-location
Collaboration
Whole team
Mary: Quarterly QA-only retrospective.
Bob: Then perhaps discuss my blog post regarding failure... chat with the iContact Scrum Masters regarding not failing enough ;-)
In the shorter "versions," this slide is "hidden." In the ½-day workshop, we'll cover it.
Bob: Lots of agile teams "go flat" over time. Under the banner of quality, I usually ask testers to influence the continual improvement practices within their teams. Not that it's solely their job, but I'd like them to consider the opportunity to "lead" in this area.
The stage for this has a few parts:
How the team is executing and delivering value
How the team is increasing productivity
How the team is holding to their quality agreements (Done-ness, testing as much as possible, no escapes, etc.)
How the team is challenging themselves in the retrospectives
Bob: I don't have any direct stories to tell here. I wonder if Mary does?
Mary: A place to start here is the "partnership" I expect between testers and their Product Owners: helping to make the backlogs clearer and more testable. The KEY role of Acceptance Tests in providing clarity.
Mary: Talk about BDD again if we have not already done so.
Bob: Clarity leading toward better solutions, better designs, simpler solutions.
Testers "taking the stage" as part of Sprint Reviews: showing the product, but also showing "Quality" in action!
In the shorter "versions," this slide is "hidden." In the ½-day workshop, we'll cover it.
Mary: Talk about the leadership meetings I hold with the Dev/QA Managers and SM/PO.
Bob: We should look at this in other contexts with attendees. For example, in regulated environments where requirements need to be verified/traced, what do you do? I think the key thing is to no longer look at requirements completeness as an "entry" vehicle. They are now complete upon exit of working code from an iteration, or (better yet) in the customer's hands via a release. AND the customer determines completeness by "usage."
Bob: Let’s try to gather some feedback here. What did we do well as a group? What could we get better at? And what did we miss entirely?