When is a Sprint a success?

Let me cut to the chase and offer the following as a definition of a successful Sprint:

A successful Sprint is one where an increment of significant customer/user value has been sustainably delivered, in broad alignment with what the team forecast they would deliver. As a consequence of their work, they now have a better understanding of their problem domain and of their next steps. Additionally, the team have agreed what they will do differently (improve) to try to increase the likelihood of success in their subsequent Sprint(s).

Success criteria for a Sprint is something that I have discussed more times than I can keep track of. From interns and graduates fresh out of university to seasoned C-level managers, everybody seems to have their own idea of what 'success' looks like in Scrum.

Probably the most commonly cited definition is that 'every card brought into the Sprint in Sprint Planning is done'. I think that this viewpoint revolves around the common misunderstanding that the Sprint Backlog is a task list, and therefore the definition of success is coupled to that list being completed.

That's a kinda scary definition, because it inverts one of the core values of agile software development (responding to change over following a plan) by suggesting that sticking to the plan is the most important thing.

Another common angle is that 'success' means that the Sprint Goal is done. I greatly prefer this over the previous definition; a well-crafted Sprint Goal ensures that there is flexibility in scope, but also provides a 'guarantee' that value is being delivered to the user/customer. If a team is achieving this every Sprint, that is typically a pretty good sign for the health of the team, and the project. But it is definitely not the only sign of success.

For example, imagine a team that takes on a hugely complicated/complex Sprint Goal, and delivers all bar one very small part of it. Perhaps it is a global product and they managed to release an industry-changing new feature in all bar the smallest of their supported markets. This is probably still a triumphant victory for not just the team but their wider business. If they had taken on a smaller goal which they did get completely 'done' but made no material difference to the business or industry, would that have been more successful? I think that we'd all agree not.

The purpose of a Sprint is to give teams a chance to creatively solve complex problems. Scrum is an empirical process, which means it all revolves around inspection and adaptation.

This starts to highlight the problem with having black and white 'success' criteria. In a complex domain, we cannot [reliably] predict the relationship between cause and effect. In other words, we can't know before we start exactly how things will unfold. It's like trying to define success for each and every child, before they are even born.

Our definition of success really needs to acknowledge the uncertainty that has brought us towards an empirical agile process, rather than a tightly defined (more waterfall) one. This takes us back to where we started this post, with a definition of success which can be applied uniformly and, crucially, explained to key stakeholders like management.

Your Sprint Goal still matters, but I think it is for the team to define and understand exactly what it means to them. The Scrum Guide says it "…is an objective set for the Sprint that… provides guidance to the Development Team on why it is building the Increment", and I think that is a pretty good starting point.

If you've got a different (or more concise) definition of success, please comment below.

Job titles in Agile, and why they are bad

95% of teams/companies that claim to be Agile, aren't actually practicing Agile.

I don't know how true that statistic is, but I could easily believe it from conversations that I've had.

There are a lot of things that teams and companies struggle with when it comes to Agile, but one that stands out above the rest is that teams get caught up with particular people having particular jobs to do. At the most basic level, this means that developers develop, testers test, designers design and managers manage.

But then, why should anybody be surprised by that - almost everybody has a job title, which pretty much draws a box around what work is theirs and what work is somebody else's. It's one of the most fundamental differences between a truly agile startup and a lethargic mid-size company.

In Agile it is skill-sets rather than job titles that matter. Anybody with the right skills should be able to pick up a task, and job titles create a significant barrier to that.

The two friends who are fresh out of Uni' and are each working sixteen hours a day to try to make it big with their startup do anything that they can, whether that is developing, designing, testing or sales, etc.

It doesn't take a genius to work out why companies continue to assign job titles - it makes a lot of things, not least recruitment and HR, a lot easier. But do the advantages outweigh the disadvantages? It's a difficult question to answer, but I'd be inclined to say that they probably don't.

There are a lot of cultural challenges around Agile which make recruiting the right people critical, yet many companies have inadvertently stopped people from becoming truly performant before they've even started.

The Scrum Guide makes reference to the 'Development Team', which encompasses everybody who isn't the Product Owner or the Scrum Master. It's one of the less subtle parts of Scrum that people nonetheless get wrong, sacrificing a big benefit.

Is perfection really so hard to achieve that you shouldn't even try?

There's a new fad in the world of Software Development, and every company that's worth its salt seems to be jumping on the bandwagon. The exact phrasing differs slightly from place to place, but it usually goes something like this:

Done is better than Perfect

The idea is quite simple and I fully understand the sentiment behind it, but I completely disagree with the message that it sends out.

Are "Done" and "Perfect" mutually exclusive? The inference is that we can only have one and not the other; that the pursuit of perfection is so challenging that we just shouldn't waste time trying to achieve it. You could analyse the slogan endlessly.

Some will argue that I'm missing the point they are trying to make about the need to release, and maybe I am. But when you communicate with a message as strong as this, on at least some level people will over-analyse it and reach potentially alarming conclusions.

The IT industry does not have a problem with people sitting on software, fettling with every last detail in a never-ending pursuit for perfection. You don't have to look hard to find significant flaws in any website or application, regardless of platform or company size.

I think that the majority of users would rather our posters read things like: "Fix one more defect, it will make my experience so much better".

I think that the problem that the industry has mis-identified is actually one of an abundance of indecision, rather than an unnecessary pursuit of perfection.

Companies like Facebook are strong proponents of 'Done is better than perfect' - there are posters all over their office walls saying just that. While there is no denying Facebook's success as a company, correlation does not equal causation, and I ask you whether another company in another industry could get away with the decisions and mistakes that Facebook makes.

You can afford to get a lot of things wrong when you've got a monopoly, or at least a significant market share, but that's something that most companies don't have the luxury of; there's almost always a viable competitor waiting to capitalise on every imperfection in their product and process.

So what would my posters say?

Strive for perfection without becoming overly obsessed by it

It sends out a more positive message and comes closer to addressing the root cause, while simultaneously promoting its own cause. It isn't a perfect slogan, but it serves the intended purpose, and any more time spent on it would arguably be wasted.

But while it is closer, it still doesn't really address the root cause. Nobody wants to be the person who brings down the site, costing the company money; a degree of cautiousness, indecision and fear is somewhat expected. With great power comes great responsibility.

If a company is keen to achieve frequent releases with a degree of controlled risk, that shouldn't be achieved by saying 'imperfect releases are good', but rather by having a culture where 'mistakes are ok, as long as we learn from them'. It's not a problem that can be solved by posters; it should be an ingrained culture that is fully supported and embraced from the bottom to the top of the organisation.

Sometimes the biggest critics aren't management, but rather a team's peers who openly criticise release decisions. Granted, this is sometimes warranted, but their energy would be better spent creating a more supportive and collaborative environment where mistakes don't happen in the first place - as the Rogers Commission sought to do after the Challenger disaster.

One of the biggest threats that comes from accepting imperfect software is that the line between what is acceptable and what is unacceptable becomes increasingly blurred. If we imagine an arbitrary quality (or user-satisfaction) scale of 0-100%, when you start accepting quality of 95% then you'll soon start accepting 93%, then 92%, etc., until ultimately your users choose a competitor. When you strive for perfection, it's still unlikely that you'll actually achieve it (in everybody's eyes at least) but you have at least a chance of getting close.

Imperfections are like broken windows - when you start accepting them they quickly appear everywhere

A/B testing, canary deployments (deploying to a subset of traffic as a [final] verification that the release is ok) and many other tools available to us today make perfection easier than ever to achieve, and yet the industry seems content to use these tools as justification for releasing things that just aren't right. There are only so many times that you can frustrate and disappoint 0.1% of your active users before you've done irreparable damage. Imagine if a restaurant poisoned 0.1% of its customers - how long would it stay in business?
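To make the canary idea concrete, here is a minimal sketch of the traffic split at its heart; the 1% fraction and the 'stable'/'canary' names are illustrative assumptions, not any particular platform's API:

```python
import random

def choose_backend(canary_fraction=0.01):
    """Send a small, random fraction of requests to the canary build
    and the rest to the stable build. Monitoring the canary's error
    rate lets you halt a bad release before most users ever see it."""
    return "canary" if random.random() < canary_fraction else "stable"

# Simulate routing 100,000 requests: roughly 1% should hit the canary.
random.seed(42)  # seeded only to make the simulation repeatable
counts = {"stable": 0, "canary": 0}
for _ in range(100_000):
    counts[choose_backend()] += 1
print(counts)
```

The safety comes from the monitoring step, not the split itself: if the canary misbehaves you roll back, and the other 99% of users never notice - which is precisely why it should be a verification gate rather than an excuse for lower quality.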

If my case wasn't strong enough on its own, Facebook is down as I write this...