Presented by Guillaume Maron, Dashlane (talk given in French)
An experience report on Dashlane's implementation of the Software Development Life Cycle, centered on features.
To support the company's growth (+100% headcount in 1 year) and the shift to remote work, we scaled our processes and created guidelines and tools to help teams be more efficient and deliver higher-quality features. Each time a team develops a feature, it usually requires similar steps: defining the goal, scoping, getting input from sales and marketing, speccing, designing, technical and design reviews, estimation, planning, release date communication, developing, testing, documentation, feature flipping, monitoring, retrospectives, user feedback review, and so on.
The mindset: increasing autonomy by clarifying guidelines, and improving quality and efficiency by sharing knowledge and tools from one team to another. Creating a playbook so all teams share the same processes but can adapt them to their needs.
We will discuss the challenges Dashlane faced while growing its team, what we tried, the guidelines we created for the different steps of the Development Cycle, and how they positively impacted our development predictability, quality, and speed.
Guillaume Maron, Co-Founder & VP Engineering, Dashlane
Guillaume met his co-founders while finishing his master's degree in Engineering at Ecole Centrale Paris. With Alexis Fogel, Jean Guillou and Bernard Liautaud, he started Dashlane a few months after graduating and grew the Engineering team from 2 to 100+ people. Guillaume wants to build a sustainable company based on strong human values and continuous learning. Passionate about software development, his main goal as VP Engineering is to help the Engineering team deliver high-quality software, at the right speed, within a safe and enjoyable working environment. He tries to achieve that through management, processes, and technology practices. Outside of work, Guillaume is a DIY'er and is often thinking about the next thing he will improve in the house, or build for his son (or his cat!).
4. Confidential Information 4
• Founded in 2009 by Bernard Liautaud and 3 Centrale students
• ~270 employees in Paris, New York & Lisbon (~120 in the Engineering team)
• Consumer product (B2C) + Enterprise offer (B2B)
• 15 “Product & Engineering” teams
• Feature flip
• Automated tests
• Estimation and planning
• Release dates communication
• Monitoring
• Documentation
• Go to market plan
• Dogfooding
• Oral communication doesn’t scale
• In a remote world, writing things down is critical
• Adapting the Software Development Life Cycle to our reality
• Organizing a feature-centric process
• Creating tools and guidelines to help teams improve predictability, quality, and time to market
• Confluence, what did you expect?
• A template used for each feature
• With lots of tips, tools and guidelines
• Pointing to in-depth resources when needed
• Build and share a high-level scope to get feedback
• Split projects into phases of 2-3 sprints
• Describe elements that need to be modified
• Not final, can evolve
• Get feedback from stakeholders before estimating
• Avoid de-prioritizing a fully prepared project
• Get feedback from sales and marketing
• Get early feedback from engineering
• Identify early on things that are technically not possible
• Ensure PMK is involved on day 1
• Why are we building this feature?
  Added value for the user
  Competitive advantage (product and technical)
• High-level marketing plan
  How do we want to talk about the feature?
  What marketing actions? (Email, Blog, etc.)
  What technical capabilities? (Deep link, feature flip, etc.)
• Not final, can evolve
• Increase quality and speed by having consistent documentation
• Technical and product documentation regrouped in a dedicated Confluence Space
• Templates to have consistent format and clear expectations
  Scope, Specifications and Design
• Needed when modifying the business logic
• Increase quality by sharing knowledge and being challenged on technical choices
• Equivalent of Architecture Review for the UI
• Feedback loop to review UI choices
  E.g. generic component versus specific
• Estimations are precise enough to know if we are on track
• Only estimate what you know
  Complete specs and design
  Estimate only parts without unknowns
  Create phases for the unknowns
  If you don’t have a date, don’t share a date
• Sharing estimations outside of the team is recommended
• Based on estimation, plan the work of the team
• Think about:
  • Cooldown periods
  • Hackathons
  • Vacations
  • Dependencies
  • Potential hotfixes
  • Rollout period
• Plan based on current staffing, not future staffing
• Dependencies should be avoided: scrum teams must be autonomous
• When not avoidable, have a clear & documented agreement
• For big dependencies, build a way to follow progress versus plan
• Use release ranges to have ambitious but realistic goals
• Best and conservative cases
• Have ranges for Code Freeze and End of the Feature Rollout
  “Release” is ambiguous: code release? Feature release? Start or end of the rollout?
• Don’ts
  Don’t use “worst case”
  Don’t use “current estimate”
[Roadmap chart: team delivery plan, January through June. Each feature carries a release range with best and conservative release dates (e.g. 01/22-02/05); confidence is color-coded (green: high, orange: medium). Sections: accomplished/delivered to date, work in process with release ranges, and on the horizon: to be scoped and estimated.]
[Bar chart: “Delivery - Actual versus estimates (weeks)” for Features 1-10, showing for each feature whether it was delivered sooner than the best case, sooner than the conservative case, or later than the conservative case.]
• Use feature flips when possible to secure the release train
• Keep the feature flip code for 6 weeks
• Decreases the number of hotfixes
• Share knowledge with newcomers
“Also wanted to mention that I've found the info on archiving feature flips in the FDC page to be really good, it literally made my day yesterday.”
• Automation in teams: teams implement the Happy Path automated test
• The Happy Path is the nominal usage scenario for a feature
• This needs to be accounted for during estimation
• Identify lacking guidelines
• We recommend dogfooding as much as possible, even for small/non-critical features
• Teams can communicate in Slack in #general
• Confluence template to gather company’s feedback
• Specs to add logs and monitor the usage of the features
• Defining the feature rollout monitoring: technical logs, dashboards, etc.
• We recommend teams to step back and share learnings after finishing a feature
• Learning loop across teams:
  What did we do well?
  What will we do differently going forward?
  What would have allowed the team to be more efficient? (think about both internal and external topics)
• Too much bureaucracy?
  Minimum Viable Bureaucracy is not 0 bureaucracy
  Test, learn, adapt
• Will using ranges decrease the sense of urgency?
  Is there any incentive to finish sooner than the best case?
  Estimations have helped identify opportunities to improve speed and time to market
• Not lean enough?
  This can prevent a team from developing a simple feature quickly
  Following the FDC can turn a 1-week feature into a 2-week feature
  1 additional week for good documentation, feature flip, and automated tests seems worth it in most cases
  Guidelines versus rules: teams can decide not to follow guidelines
• Better integration of sales, marketing and user support
• Work more on speed
• Experiment and adapt guidelines for:
  Small features
  Discovery teams
  R&D teams
• Present as an experiment, and get feedback
• Avoid blocking steps: use quality tools, not quality gates
• Find champions within teams and departments
• Start small, then expand
  Topics
  Teams
• Show results
• We need teams to be autonomous and creative, not blocked within a process
• Create tools, not templates
• Let teams experiment outside of the guidelines
  But let’s not fail twice on the same issue
Speaker notes
Just a word on Dashlane.
We are a password manager, helping customers and businesses manage their digital identity in a safe and user-friendly way.
Dashlane was created 12 years ago, and we have offices in Paris, Lisbon and New York.
Our product addresses consumers (B2C) as well as businesses (B2B).
To give you an idea, we have 15 “Agile” Product & Engineering teams.
***
We were founded 12 years ago and we now have 3 offices in Paris, Lisbon and New York.
We are a B2C and B2B product.
To give you a sense of scale, we have 270 employees – 120 in the engineering team – and we have around 15 “Product & Engineering” agile teams.
***
Let’s talk about headcount for a second.
We raised our D round in 2019.
With this new capital, we doubled our team size over 1 year.
We were able to tackle more projects at the same time for sure, but we also started to see many projects being late in 2020.
To be efficient despite this very fast growth of the team, we started to revisit our development processes.
We tried to take a step back and understand why we were constantly late in our delivery.
We identified, of course, some technical elements of our tech stack that could be improved and simplified.
But another thing that became clear was that we had a lot of assumptions about how our teams were supposed to operate, and those assumptions were not clearly communicated to the teams. We were missing guidelines on many things such as feature flipping, automated tests, estimations, planning, release dates, etc.
Another thing that became clear was that, between doubling our size and the new remote world, oral communication clearly didn’t scale.
So we decided to create a Feature Development Cycle.
It is basically an adaptation of the SDLC to our reality.
It would be organized around features.
It would be a set of tools and guidelines to help teams improve predictability, quality and time to market.
We organized it in 6 parts:
- anticipating
- requirements
- organizing
- developing
- learning
I will come back to each part in a minute.
Before looking at each part of the process, a quick word about how we implemented it.
We decided to go with a simple Confluence page, which is quite long and contains tips, tools, and guidelines.
We ask teams to duplicate this page for each feature they work on.
Let’s go through each part of the FDC. My goal here is to present the guidelines we are using, and how they helped us.
Of course, there is no one-size-fits-all, and what works for us might not work for you for many reasons, but I hope you will find some elements interesting.
Before we start working on a feature, we recommend teams build and share a high-level scope to get early feedback from stakeholders.
We also recommend teams phase their projects into 2-3 sprint phases, to encourage incremental delivery or at least ease future planning work.
This is not a final product or technical spec, but a high-level scope that allows feedback loops to happen.
The main goal of building this high-level scope and phasing is to get feedback earlier from stakeholders and from engineering.
We had a project of redesigning a part of our app. We built the spec, we built the design, and we estimated the work. After the estimations, the project got de-prioritized because it was simply too big. From my perspective, this was the result of stakeholders expecting a simple refresh of the UI, while the actual work to be done was way more significant and required significant UI and UX changes. I believe that a high-level scoping would have avoided wasting days and weeks speccing, designing and estimating.
Another recent example is how high-level scoping started a discussion between our sales department and a team on the topic of discoverability. The sales team argued that we needed to invest more in discoverability, while the team planned to work on it later. Without this exercise, the feedback would have happened after the release of the feature.
One last example is about getting early feedback from engineering. While migrating our 2FA feature to our Web Extension, the product team was planning to display the list of authenticators activated by a user, such as Google Authenticator. As those authenticators are offline and have no link with our app, the engineering team was able to warn in advance that this was not possible, saving time in spec and design.
Let’s talk about marketing and how it is integrated into our development process.
We usually say that we want teams to be autonomous in delivering software.
Actually, this is not true. We don't want to create a feature factory.
We want teams to be autonomous in delivering value.
Too often, marketing or selling is perceived as something that happens “after the feature development”.
I believe we need to change this mindset and make marketing and sales part of the feature development process.
So we introduced a PMK context step, so the marketing team can, early on, provide a high-level view of the marketing plan: why are we building the feature? What will be the marketing plan? Same here: this is not final and it can evolve.
Why? So the dev team knows in advance if we need to build a deep link, a feature flip, an in-app screen to discover the feature, etc.
They can take it into account in their estimation.
The last step we have in the Anticipating bucket is identifying the knowledge gap.
Often, when discussing with teams and trying to identify how we can go faster, 2 things come up:
- devs in the team didn’t know this part of the software
- existing documentation was not good enough
The goal here is to try to identify those knowledge gaps in advance, when possible, so we help teams hit the ground running by organizing training or creating missing documentation.
Let's talk about the requirements now.
First, documentation.
Documentation is something everybody knows we need to do, but what happened at Dashlane was that, without clear organization and guidelines, we ended up with a messy situation: incomplete documentation scattered in many places.
We created:
- one Confluence space to hold technical & product documentation
- guidelines describing what we expect teams to create in terms of documentation
We integrated that into the FDC so the guidelines are clear for everyone.
And we ask teams to link the technical and product specs and the design in the FDC page, so it is easy for anyone, including stakeholders, to find them during development.
Similarly with reviews: we were used to doing them, but as we grew, it became more and more difficult for people to know when they were expected to do reviews.
So we created a simple guideline describing when we should do a review, basically when we modify the business logic, and we linked additional documentation to the FDC page to explain architecture reviews more in depth for newcomers.
Architecture reviews usually focus on the backend part.
What is interesting is that, when reviewing the estimations on a big project that involved a lot of ciphering and complicated business logic, we realized the UI was as costly as the backend, which was very counterintuitive.
We invested some time to understand why, and to simplify, the conclusion was that we were not able to leverage our design system and our component library to develop UI quickly enough. An example is that we too often build specific components instead of re-using generic ones.
So, we are currently working on a concept similar to the architecture review but targeted at the UI. The goal is to have a feedback loop between the design, the devs of the team, and developers with expertise in our UI, to give feedback on the design: can we be more efficient by using existing components? Are there choices in the designs that are costly, and are there alternatives?
Now that we have requirements, we can talk about organizing.
Let’s talk about estimation.
I want to tell you the story of a project that was supposed to ship in February. 2 weeks before the release date, the team communicated that they wouldn’t be able to release before April. 2 weeks before April, the team communicated that they wouldn’t be able to release before July. I am sure this sounds familiar to many people.
We invested a lot over the past few months in trying to improve our predictability.
One thing we realized is that we didn’t have any guidelines about estimation: what you should take into account, how precise an estimation should be. So when we asked teams to do a better job at estimation, we were not really able to describe what a “good estimation” is.
The first thing we tried to clarify is how precise an estimation should be. The guideline now is that an estimation should allow you to know approximately when you will release, but most importantly should be precise enough to know if you are on track.
This allowed us, for instance, to detect quite early that we were not on track with one of our projects. Because we detected it early, we were able to adjust our staffing and bring in more people to support the team.
Using these guidelines, teams started to spend more time estimating, and we were able to gather, as we went, some do’s and don’ts. Remember the project I was telling you about, with ciphering and heavy business logic, but also very costly UI? The team estimated the work, but in the first estimation the UI wasn’t actually a big part of it. A few weeks after starting the project, the team realized they were off track and re-estimated. The reason they were off track was that the initial estimation was done without the design being ready.
So among the many guidelines we’ve gathered while looking at past causes of mis-estimation, there is one that is very important in my opinion: only estimate what you know, based on complete specs and design. If something is unclear or unknown, either wait to clarify it before estimating, or create 2 phases in your project and only estimate the first one.
I know it is easier said than done, but I believe that if you don’t have a date, you should not share a date.
Another important thing we recommend teams do is share their estimates with people outside of the team for feedback; this usually triggers very useful discussions.
Recently, we estimated a big feature. We had 2 ways of implementing it: a simple way, less ideal for users, and a difficult way, better for our users.
We decided to start with the simple way and to iterate later.
The team estimated their work and shared the estimation with other engineers. The main feedback was: “Wait, we chose the simple way, but your estimation is just so big. Is the simple way really simpler than the difficult way?” We are now comparing the 2 scenarios again. Without this feedback loop, we would not have triggered this discussion.
Initially we were doing estimation and planning at the same time.
But we learnt the hard way that they are very distinct things.
Estimating is focused on “how much time will this task need”, while planning is supposed to anchor that in the reality of a team:
- when will engineers start working on this feature?
- all together, or will one of the engineers stay focused on the previous feature to handle leftovers and bugfixes?
- what about hackathons, vacations, dependencies, etc.?
- how do you account for the fact that you sometimes have to stop the feature rollout and do a hotfix?
We tried to gather all of these tips and guidelines in one place so they are easily accessible by teams and project managers.
One important thing I want to call out is to plan based on current staffing. In my opinion, teams should only commit to things they control, and a team never controls the arrival of an additional developer.
If we think a developer will join the team in 2 months, once they are here and onboarded, we can always revisit the planning to see if we can release earlier.
At Dashlane we use ProductPlan to build and present the actual plannings of the teams.
OK, so dependencies are easy: ideally they should not exist.
Teams should only commit to things they control, and by nature they don’t control dependencies.
So we recommend having a clear and documented agreement on the dependency date.
For significant dependencies, we should have a way to follow progress versus the plan.
Now that we have the estimations and the planning, and dependencies are clear, we can determine and communicate a release date to our stakeholders, right?
We actually started to use release ranges instead of release dates because:
- if the release date is too optimistic, we keep failing. This impacts trust in the engineering team, and prevents sales and marketing from doing their job
- if the release date is too conservative, there is high pressure from stakeholders to set more aggressive goals
On top of that, some elements are beyond the control of the team, such as a release blocked in the release train because another team needs to do a hotfix.
So we ask teams to come up with release ranges: best and conservative cases.
We recommend using separate ranges for code freeze and end of the feature rollout. Code freeze allows us to know if we are on track, and end of the feature rollout lets us communicate efficiently with stakeholders so they know when all users will have the feature.
2 things I want to highlight if you want to try release ranges.
We recommend not using the terminology “worst case”, because things can always get worse.
Also, as we get closer to releasing, what I have noticed is that the team is asked: “We are so close, can’t we get rid of the range and communicate a release date, or can we at least have a current estimate?”
I recommend not doing so.
I have a story to illustrate why. Last year, we were working on a very important feature, involving a lot of people from marketing, user support, sales, etc. We were migrating our users from our native desktop experience to our web desktop experience.
We had a weekly steerco meeting with 30 people, including the CPO and the CTO.
We had done good estimation and planning work, and we were on track. As we approached our best case scenario, which was still achievable, we started getting the same question each week: “Now that we are very close to the best case, can we zero in on a date?” And I repeated each time: “No, sorry, we are on track to release between Nov 14th and Dec 15th.” At the end it almost became a joke.
At the last steerco, the Wednesday before the planned release on Monday, we were still on track and still able to achieve our best case.
So we were asked if we could now say with certainty that we would release on Monday. I started saying, “OK, now I can say we will for sure release on Monday.” I mean, there were absolutely no issues at all.
I guess you know what happened. On Monday, we spotted something weird in our tracking, and we had to postpone the release by 1 week.
TL;DR, we recommend not using a “current estimate” on top of the best and conservative range.
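As a small illustration of the release-range idea, a range can be carried around as a pair of dates, and "on track" simply means the current projection still falls inside the window. Everything below (class, feature name, dates) is a hypothetical sketch, not Dashlane's tooling:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ReleaseRange:
    """A release window instead of a single date (hypothetical helper).

    We communicate a 'best case' and a 'conservative case', never a
    'worst case' and never a single 'current estimate'."""
    feature: str
    best: date
    conservative: date

    def __str__(self):
        return (f"{self.feature}: between {self.best:%b %d} "
                f"and {self.conservative:%b %d}")

    def on_track(self, projected: date) -> bool:
        # On track as long as the projected date stays within the window.
        return projected <= self.conservative

r = ReleaseRange("Web migration", date(2021, 11, 14), date(2021, 12, 15))
print(r)                               # Web migration: between Nov 14 and Dec 15
print(r.on_track(date(2021, 11, 20)))  # True
```

The point of the pair is that stakeholders always see both dates; collapsing it to one date is exactly the failure mode the story above describes.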
OUTSIDE PRES
Here is how we show the release ranges on a delivery plan.
On the left, in grey, you have the features that were released, with their initial ranges.
In the middle, in green and orange, you have the release ranges of features to be released. The color indicates the level of confidence.
On the right you have the features still to be scoped and estimated.
OUTSIDE PRES
Concretely, we have seen a material difference in terms of predictability.
Since we rolled out this process with the new estimation guidelines, most features have been released within their release range.
In this graph, for the last features we released, you have the difference in weeks between the initial release range and the actual release date: in red when we were later than the conservative case, and in green when we were sooner than the conservative case.
In orange you would see releases before the best case, which is quite unlikely.
Let's talk about the developing section.
Of course, the goal here is not to describe all guidelines related to development, such as coding guidelines, but to highlight some guidelines on items that involve several teams: QA & dev, product & dev, etc.
We push our apps to stores: the extension store and the mobile stores.
Because of that, when we have an issue, it can take days to get another build into production.
As a result, we recommend teams use feature flips whenever possible. So the default is to use a feature flip, unless there is a good reason not to (not possible, too costly, etc.).
We keep the feature flip code for 6 weeks, and then remove it to keep the code base readable.
As a result of this and other guidelines, we have seen a noticeable decrease in the number of hotfixes on our Web Extension: when there is an issue with a feature, we can stop its rollout instead of rushing out a hotfix.
Also, we have received good feedback on the fact that the information was easily accessible.
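The notes above treat a feature flip as a switch that can stop a rollout without shipping a hotfix. As a rough sketch of the idea (the class, flag names, and percentage bucketing below are hypothetical, not Dashlane's implementation; a real system would fetch flag state from a remote config service):

```python
import hashlib

class FeatureFlags:
    """Minimal in-memory feature-flip store (hypothetical API)."""

    def __init__(self):
        self._flags = {}  # flag name -> rollout percentage (0-100)

    def set_rollout(self, name, percent):
        self._flags[name] = percent

    def is_enabled(self, name, user_id):
        percent = self._flags.get(name, 0)  # unknown flags default to off
        # Hash the user id so each user lands in a stable bucket in [0, 100),
        # giving a deterministic gradual rollout.
        digest = hashlib.sha256(f"{name}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < percent

flags = FeatureFlags()
flags.set_rollout("web-autofill-v2", 25)  # gradual rollout to 25% of users
flags.set_rollout("web-autofill-v2", 0)   # kill switch: stop the rollout, no hotfix
```

Because the default is "off", a bug behind a flip can be contained by flipping it back to 0 while the fix rides the next regular release train.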
---
Removed:
On top of that, the Chrome Extension Store doesn’t allow cancelling a version once it has been submitted. If there is a critical bug, all new users will get it until we submit a new version and it is approved.
Of course we need tests:
- unit tests
- integration tests
- automated tests
- manual tests
For automated tests, we have:
- an automation engineering team that provides tooling and support
- mission teams that implement the happy-path automated tests themselves
This is new; before, the automation engineers implemented all the automated tests, and of course that doesn’t scale.
One interesting thing here is that, while working on the guidelines for test automation, we realized that we had no clear guideline to decide whether a test belongs in the smoke tests, the regression tests, or the acceptance tests.
So writing down the guidelines surfaced this issue.
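To make "happy path" concrete: it is one end-to-end test of the nominal scenario, nothing more. A sketch in pytest style; the vault API here is entirely hypothetical, with an in-memory fake standing in for the real service:

```python
# Hypothetical happy-path test: the nominal "save and retrieve a credential"
# scenario. Edge cases (bad passwords, sync conflicts, ...) belong to other
# test suites; the happy path covers only the main flow.

class FakeVault:
    """In-memory stand-in for the real vault service (hypothetical)."""

    def __init__(self):
        self._items = {}

    def save_credential(self, site, username, password):
        self._items[site] = (username, password)

    def get_credential(self, site):
        return self._items[site]

def test_happy_path_save_and_autofill():
    vault = FakeVault()
    vault.save_credential("example.com", "alice", "s3cret")
    username, password = vault.get_credential("example.com")
    assert (username, password) == ("alice", "s3cret")

test_happy_path_save_and_autofill()
```

Since the mission team writes this test itself, the cost of writing it is part of the feature estimate, as the slide above notes.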
In an effort to be more user-focused, we identified that we were not doing enough dogfooding; we were not testing our own software enough.
So we recommend dogfooding as much as possible, even for small features.
One thing we clarified in our guidelines is that it is OK, and actually expected, to communicate on our #general Slack channel, where the whole company is, which is something some people can be reluctant to do.
We also created a template to gather the company’s feedback on a feature, so starting a dogfooding session is quick and simple.
We have been able to find more bugs as a result.
As we approach the release date, it is important that the team collaborates with the marketing team to make sure everything is ready for the go-to-market plan.
The goal here is not to give guidelines describing how marketing should build their go-to-market plan: that is up to marketing to decide.
The goal is simply to integrate this as a step of our Feature Development Cycle, so the marketing plan is not considered external to the team, but something that happens as part of the development process.
In order to allow all team members to know the impact of their work, we recommend communicating the results of the marketing plan back to the team.
And of course, we need to monitor our features.
We need to clarify what logs we want to add or modify to monitor the usage of the feature.
We also need to anticipate and determine how concretely we will track the feature rollout: are there technical logs we need to look at? Are there dashboards we can follow?
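One common shape for such usage logs is a structured event per feature interaction, which rollout dashboards can aggregate directly. The schema, event names, and helper below are hypothetical, not Dashlane's actual logging:

```python
import json
import logging
import time

logger = logging.getLogger("feature_rollout")

def log_feature_event(feature, event, user_id, **details):
    """Emit one structured usage event for a flipped feature (hypothetical schema).

    JSON lines with a stable set of keys are easy to aggregate into
    per-feature dashboards during a rollout."""
    record = {
        "ts": time.time(),
        "feature": feature,
        "event": event,   # e.g. "exposed", "used", "error"
        "user": user_id,
        **details,
    }
    logger.info(json.dumps(record))
    return record

log_feature_event("web-autofill-v2", "exposed", "u123", variant="new")
log_feature_event("web-autofill-v2", "error", "u456", code="TIMEOUT")
```

Deciding these event names while writing the spec, rather than after release, is what the "specs to add logs" guideline asks for.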
And finally, as with any iterative process, we need to end with a feedback and learning loop.
We recommend teams step back and share their learnings.
What went well?
What could we have done better?
What could the company have done better so we can be more efficient?
Etc.
Of course, in what's next, we need to address concerns and feedback received on the process.
One of them is about bureaucracy. My view is that we need a minimum viable bureaucracy, not 0 bureaucracy, and we need to adapt and iterate.
If the process starts slowing down the teams without an associated upside, on quality for instance, then we need to change it. So it is important to monitor the impact on the teams.
Might using ranges decrease the sense of urgency?
From my experience, the best case scenario given by a developer is always too optimistic, so delivering sooner than the best case should not be a goal, in my opinion.
It is still important to keep the best case so the team retains strong ambitions.
Also, this is why it is important to use the estimation work to understand where we need to invest so we can develop faster.
Another point, somewhat related to bureaucracy, is that doing all of this can turn a 1-week feature into a 2-week feature, and prevent teams from moving fast on small product improvements. It is a company choice, and I believe that if the additional time buys proper documentation, feature flips, automated tests and so on, it is worth it, but of course it is important to keep an eye on that.
We still have plenty of things to do.
We want to use the FDC to better integrate sales, marketing and user support in the development cycle. We have started, but we need to do more.
We need to focus more on speed, now that we are better at predictability.
We need to work and experiment with teams to adapt the guidelines that might not be applicable to small features, discovery teams, and R&D teams.
Thanks for your time, I hope it was interesting; if you have any questions, I believe we have 10-15 minutes for questions.
Also, please find a promotion code to get 6 months of free Dashlane Premium.
INTERNAL PREZ
-----
We still have plenty of things to do.
We want to use the FDC to better integrate sales, marketing and user support in the development cycle.
We need to focus more on speed, now that we are better at predictability.
We need to work and experiment with teams to adapt the guidelines that might not be applicable to small features, discovery teams, and R&D teams.
We need to build a clearer expected timeline, so project managers know when to start each step.
We need to clarify our testing strategy, as more feature flipping also has downsides.
Then, we need to define better how this FDC fits into the Product Development Cycle.
DASHLANE SLIDE
Some tips I can give you if you want to deploy this kind of process.
Change is difficult.
I recommend using an experimental and iterative approach.
Take feedback, act on feedback, and show the team you are acting on feedback.
Build quality tools, not quality gates. Quality gates are great for coding, not for the development process.
Start small, then expand; don’t deploy everything at once.
Show results.
Creating guidelines, not rules, may be the most important guideline I can recommend.
You need creativity and autonomy.
If a team decides not to follow the guidelines, that is fine. Ask them to explain why, let them experiment, and share the learnings at the end.