When my friends or new colleagues ask what my job entails, I usually explain that I help teams to perform at their best by helping them streamline their processes and ways of working.
What I usually avoid telling them is that a substantial part of my career to date seems to have revolved around discussing 'Sprint length'. I've lost count of the number of discussions & debates that I have been involved in over teams being 'required' to work in a particular cadence, usually one week.
The logic behind one-week Sprints usually revolves around a few assumptions:
- If Sprints are good, more Sprints must be better
- Shorter planning cycles must be better than longer ones
- Teams can plan a shorter period easier than a longer one
- If something unexpected happens, like an incident in production or a requirement from another team, the team can act on it sooner.
Before going further, let's address that last one because it is a bit separate from the others. As long as they are not endangering their Sprint Goal, a team is always free to adjust their Sprint Backlog. There is no issue with them reacting to issues in production or to requests from another team. If such work would endanger their Sprint Goal, then that's invaluable transparency that potentially warrants an adaptation to the process.
A few years ago I blogged about the purpose of a Sprint.
> In Scrum, the Sprint serves many purposes. Above all though, it’s a time box within which the team can creatively and ingeniously craft solutions to complex problems. It gives the team an uninterrupted period to ‘get their heads down’ and work together to build something awesome, and it gives the business a commitment that within a certain period their agreed goal will have been met.
When I talk to teams about their Sprint Length, one of the key criteria that I need to understand is how long it typically takes them to solve a [meaningful] complex problem. I frame everything around this, because if the team is not using the time-box to solve such a problem then the value that Scrum is likely to bring the team -- and business -- is easily diminished.
If the Sprint length is notably longer than the time it actually takes to solve the problem, then the time-box doesn't offer as powerful a risk control as it otherwise could, and the team is potentially missing opportunities to adapt their approach.
If the Sprint length is shorter than it takes to solve a meaningful problem, it ends up taking multiple Sprints to solve the problem. Within this, there tend to be two failure modes:
1) The team allows tasks to roll over the Sprint boundary, and the notion of a Sprint provides no real benefit. I've previously described this as 'continuous chunky vomit': value does come through eventually, but it isn't pretty.
2) The team plans excessively up front to ensure that they still achieve a goal each Sprint. But that goal is typically devoid of any meaningful value to the customer, the business, or the team itself.
Whilst neither failure mode is good, the second one is far worse because it eradicates transparency and moves a team's focus from 'satisfying the customer through early and continuous delivery of valuable software', toward simply delivering 'something'.
There are warning signs of a team falling into this trap:
- They 'spike' almost everything because they are afraid of uncertainty and are afraid of taking risks, so they never dare venture into the unknown. This means that almost everything takes at least two Sprints (one Sprint for the spike, and one Sprint for the work).
- Everything has to be planned carefully. Look out for Sprints where the goal is creating a detailed plan of how the work will be carried out. This could be in the form of a technical architecture, or simply a large backlog of 'tickets' on the Product Backlog that don't change.
- It is tricky to identify the start and end points for a notable piece of work, like a new feature or service.
But why is this really a problem?
Empirical process control can only work when one can inspect the delivery of genuinely valuable, working product. If a team is unable to deliver product value, then what are they able to inspect? Of course, that value can be small, but it has to exist and be meaningful. A thin slice that brings no value to anyone cannot be declared good enough.
If teams are framing their work around meaningful problems, then it is infinitely easier to inspect and adapt everything, because the entire process, the tools, the people and their ambitions are framed around something that tangibly matters to the customer, and by extension the business. If the team's work is framed in almost any other way, then they are essentially surrounded by fog, guided by their best-guess plan and an unhealthy amount of blind faith that, if they keep heading down this path, their destination will eventually emerge.
If a team can typically solve a meaningful problem in a week, then that's probably how long their Sprint should be. If it takes them two weeks, then their Sprint should be a couple of weeks long. Nobody understands the work that is required more than the team itself, and although they may need some help and support from somebody like a Scrum Master, their judgement and opinion are what matter.
The other angle that mustn't be forgotten is that the Sprint is the container for teams to creatively solve a complex problem. A complex problem, by definition, will have unknowns and unknown-unknowns. The team needs to have the space and capacity to explore the problem space and try out different approaches. They need to expect some ideas not to work, or to prove harder than they first thought, and have time to either change their approach or simply persevere.
Lean startup encourages teams to 'fail fast' and learn as quickly as possible, and for teams to be able to do that but still uphold a meaningful commitment, they need to have enough time and enough flexibility of scope in their Sprint.