Constraints and Requirements

I’d been mulling over this post for a while, thinking about self-organising teams and what constraints are needed to help direct their self-organising efforts. Then, a couple of weeks ago, I noticed an exchange on Twitter that spurred me on to write it:

@ourfounder (Jim Benson): Professionals hate constraints. Often people we think are lazy or angry have simply given up on the system. #agile #kanban
Retweeted by @flowchainsensei (Bob Marshall)
@nrcantor (Noah Cantor): @flowchainsensei @ourfounder Whose constraints? @jasonfried says 'Creativity loves constraint' in #Rework. YMMV?
@ourfounder: minimal constraints are loved. Beyond that is undue burden.
@flowchainsensei: Two meanings for “constraint”?
@nrcantor: That’s what I was wondering.
@flowchainsensei: Violent constraints vs nonviolent constraints?
@nrcantor: Boss imposed vs circumstantially imposed?
@flowchainsensei: Something like that

All those involved in this exchange seem to agree that ‘constraints’ are not always a burden; they can be a positive influence. But what makes a positive constraint? Are acceptable constraints minimal, non-violent, circumstantial, or something else?

There’s a fair amount of blog writing on the ‘myths’ of self-organising teams, the most common myth being that self-organising teams need no management and just get on with their jobs without external direction. Countering that myth, Jeffrey Palermo points out that “Self-organisation does not require self-directing”: “A team that is self-directing will likely not accomplish anything significant. A software team within a company cannot make every decision. It can only make certain ones. For instance, what market segment to compete in is a decision that has likely already been made; however, what unit testing framework to use might be up to them.”

The point Jeffrey Palermo doesn’t make, and nor did the tweeters @ourfounder, @flowchainsensei and @nrcantor, is the distinction between constraints on ‘what’ the team is trying to do and constraints on ‘how’ the team should do it. Which market segment to compete in is a ‘what’ constraint; which unit testing framework to use is a ‘how’. I suggest that some constraints on ‘what’ to do are essential for a team to achieve cohesion and a sense of purpose. ‘How’ constraints, on the other hand, are the ones likely to chafe and irritate smart, capable team members, especially where the constraint relates to business process. Many businesses have ‘how’ constraints relating to technology choice, e.g. for compatibility with existing systems. ‘How’ constraints relating to process, however, must demonstrate their benefits to the team, as well as to the wider business, or teams will resist them.

If you’re familiar with Tom Gilb’s work, you may have heard him talk about the distinction between ‘Requirements’ and ‘Design’. In Gilb’s view, a function requirement only qualifies as a ‘requirement’ if it describes something the business exists to do. A cornerstone of a bank’s business model (if we can leave cynicism aside!) is to lend money and charge interest, so “Calculate interest” qualifies as a requirement. Anything less fundamental than that is a design decision. Requirements and Design should be clearly separated, but they are often confused and mistaken for one another, and many people dive into design too soon, before the top-level requirements are clearly understood. My distinction between ‘what’ and ‘how’ constraints corresponds with Gilb’s definitions of Requirements and Design: ‘what’ to develop is a requirement; ‘how’ to develop it is design.

Following Gilb’s logic, decisions about ‘what’ a business is trying to do should be taken by the business leaders, and provided as requirements (aka constraints) to development teams. Those are healthy constraints, without which the team has no purchase and will probably struggle to achieve a shared understanding of their goal. Decisions about ‘how’ to meet those requirements should be devolved to teams as far as possible. Going back to that Twitter conversation: I’d suggest that excessive ‘how’ constraints are the kind that @ourfounder’s professionals hate.

Agility and the ‘Alignment Trap’

I enjoyed Allan Kelly’s whistle-stop ‘90-minute Guide to Agile’ at SyncConf in February. I was particularly interested in the ‘Alignment Trap’, which suggests that IT can support a business better by getting things done effectively (x-axis below) than by alignment to business objectives (y-axis below).

[Figure: the ‘Alignment Trap’ chart, with IT effectiveness on the x-axis and alignment to business objectives on the y-axis]
This sounded to me like: “First do things right, then do the right things”. I was curious; this advice seemed to run counter to systems thinking, in particular Russell Ackoff’s claim that “it’s much better to do the right thing wrong than the wrong thing right”. So I read the original study by Bain & Co (full article only available to subscribers). It turned out that my interpretation of ‘alignment’ was quite different from the researchers’. They defined alignment as “the degree to which the IT group understands the priorities of the business and expends its resources, pursues projects and provides information consistent with them”. In these terms, ‘well-aligned’ IT organisations could look like this:

“…various divisions were driving independent initiatives, each one designed to address its own competitive needs. IT’s effort to satisfy its various (and sometimes conflicting) business constituencies created a set of Byzantine overlapping systems that might satisfy individual units for a while but did not advance the company’s business as a whole.”

If this is alignment, the results of the study don’t defy the principles of systems thinking after all. I’d expected that ‘alignment’ would mean alignment to overall business goals, not departmental or special-interest goals. Perhaps I should have foreseen this; after all, I have seen plenty of evidence in my own working life of how difficult it can be to establish and communicate an overarching mission. With competing interests and siloed IT initiatives, this company’s problems look like local optimisations resulting in a negative impact on the end-to-end system.

The Bain researchers advise companies to build IT effectiveness – reliable, streamlined systems, predictable deliveries – as a necessary step before alignment. This is where Agile coaching (e.g. Allan Kelly) comes in, and it’s an approach I support 100%. The report identified three principles for building effectiveness:

  1. Emphasising Simplicity (though this reduces scope for tactical case-by-case fixes);
  2. ‘Rightsourcing’ Capabilities, i.e. if outsourcing, do so only after a function is very well understood internally;
  3. Creating end-to-end accountability (the most important, in my view), i.e. shoring up IT’s responsibility to the business as a whole, not just to component units.

The report stumbles in its conclusions, in my opinion. It lists the three principles for building effectiveness, and follows up with:

“Then, and only then, the best performers tightly align (my emphasis) their IT organization to the strategic objectives of the overall business, using governance principles that cross organizational lines and making business executives responsible for key IT initiatives”,

which sounds to me exactly like a repeat of Principle 3 of effectiveness. Effectiveness and alignment blur together. Or have I missed something?

Perhaps they mean that, once ‘effectiveness’ is achieved, IT can again respond to requests from individual departments and special interests, as long as the effectiveness principles of simplicity and end-to-end accountability are respected. Who would argue with that?

This topic also sparked another question for me. The focus of the Alignment Trap report is on big corporates where IT is an enabler: a supporting act, not the main act. Would the conclusions hold true in an environment where software is the product? In particular, would they apply in a startup environment? This echoes the debate about the merits of TDD in a start-up environment, with Uncle Bob Martin pro-TDD and Nate Kohari sceptical. Can early-stage startups afford to focus on simplification and effectiveness when what they are doing has not yet been validated from a business standpoint? In a startup, is it more essential to ‘do the right thing’ before doing things right?

Estimates are storytelling

Are stories more useful than maths? English civil court cases are decided on “the balance of probabilities”, but legal cases rarely involve probabilistic reasoning. Imagine a court that is trying to determine how a fire was caused. It doesn’t help to tell the judge that “25% of similar fires were caused by a discarded cigarette, 20% by arson and 40% by electrical failure”. In this scenario, past patterns are of little help. The court wants to find a single truth: what happened in this particular instance?

John Kay points out that the way courts do this is by narrative. Witnesses tell stories. Persuasive stories convince judges and juries. In this narrative context, the mathematical term “balance of probabilities” occasionally confuses even experienced judges. When the ship Popi M sank with no known cause, the owners claimed from their insurers, suggesting that the ship had been hit by a submarine. The insurers argued that the ship had fallen apart through wear and tear. In court, the judge accepted the submarine theory, saying that it was extremely unlikely, but the less unlikely of the two scenarios. It was only when the appeal reached the House of Lords that the meaning of “balance of probabilities” was questioned. The Lords pointed out that a story can only reach the “balance of probabilities” standard of proof if it is more likely than not that it really happened. If a judge finds that an event – a submarine crash – is extremely improbable, it does not make sense also to find that it is more likely to have occurred than not. A rap across the knuckles for the judges in the lower courts!

In Kay’s words: “narrative reasoning is the most effective means humans have developed of handling complex and ill-defined problems”. But there’s a clear distinction between trying to establish the truth of a past event and anticipating the likelihood of something that hasn’t happened yet. When we’re trying to anticipate how long software development will take, it is helpful to express chances in percentage terms, as long as we are also ready to make adjustments as circumstances change. Using narrative alone, without historic data, to anticipate the future is like looking into a crystal ball.
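To make that concrete, here’s a minimal sketch (in Python, with invented lead times) of what ‘chances in percentage terms’ can look like when they come from historic data rather than from a story:

```python
# A minimal sketch, not a prescribed method: turning historic lead times
# (days from starting an item to finishing it) into the percentage chance
# of finishing a similar item within a deadline. The numbers are invented.
lead_times_days = [3, 5, 5, 6, 8, 9, 9, 11, 12, 14, 15, 21]

def chance_within(deadline_days, history):
    """Fraction of past items that finished within the deadline."""
    return sum(1 for t in history if t <= deadline_days) / len(history)

print(f"Chance of finishing within 10 days: {chance_within(10, lead_times_days):.0%}")
# As new items finish, append them to the history and the answer adjusts
# itself as circumstances change.
```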

I think John Kay has helped me to see a key reason why estimation for software development is still so widely used and so popular. Despite its shortcomings, many people seem willing, even eager, to turn to their mental crystal balls. Why? Is it because estimation is a narrative style of thinking? David Anderson characterises estimation as ‘Cartesian-style decomposition’; I suggest there is a large dose of narrative involved too. When we estimate, we tell ourselves a story about the future. Storytelling has been central to human communication for thousands of years. The narrative approach to legal cases has probably held sway for almost as long.

If people are drawn to estimation because they are drawn to narrative, does that help to explain why the lure can be so strong? If so, efforts to wean an industry off estimating might be harder than either David Anderson or Ron Jeffries realises. Perhaps the answer is to offer an alternative narrative: time to collect stories of successful, happy outcomes achieved with probabilistic approaches, and to weave them into compelling narratives … stories that stick in people’s minds.

Beyond Estimation

I gave a short talk last week at Agile on the Rocks, an informal event organised by Pivotal Labs and Wazoku. We had a lively audience and some tremendous other speakers in the lineup, and everyone had fun.

I chose to talk about Estimation, a topic I’ve been thinking more about since XP Day last November. I talked about the anti-estimation backlash that’s in the air. On the one hand we have luminaries like Ron Jeffries writing entertaining tirades urging developers to stop estimating, and apologising for the existence of proxy units like story points. On the other hand the Kanban method, which emphasises studying past rates of feature delivery to provide a probabilistic understanding of what may happen in the future, appears to be growing in popularity all the time. The anti-estimation movement has the wind behind it.

But turning away from estimation can be tough. Perhaps if you have the stature and the track record of Ron Jeffries, it’s easy to convince customers and stakeholders to “Skip Estimating. Just Build Something Now”. But many of us still work for businesses where the buying side won’t finance software development projects without estimates. Nader Talai rightly points out that when buyers demand estimates it is often predictability that they really crave. However, the metrics that enable predictability – lead time and its distribution – rely on a strong track record of delivery data, and building that record takes time. How can you start weaning a team off an addiction to estimates while you build the data to offer them something better?
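While that delivery record accumulates, even a few weeks of throughput data can be pressed into service. Here’s a rough sketch (in Python, with invented throughput figures and a hypothetical backlog) of the kind of probabilistic forecast that can stand in for per-item estimates:

```python
import random

# A rough sketch, not anyone's prescribed method: forecast "when will these
# 30 items be done?" by resampling observed weekly throughput, Monte Carlo
# style. The throughput history and backlog size are invented.
weekly_throughput = [2, 4, 3, 0, 5, 3, 2, 4]   # items finished in each observed week
backlog = 30                                   # items still to deliver
trials = 10_000

def weeks_to_finish(items_remaining, history):
    weeks = 0
    while items_remaining > 0:
        items_remaining -= random.choice(history)  # assume a future week resembles a random past week
        weeks += 1
    return weeks

results = sorted(weeks_to_finish(backlog, weekly_throughput) for _ in range(trials))
print(f"50% confidence: done within {results[len(results) // 2]} weeks")
print(f"85% confidence: done within {results[int(len(results) * 0.85)]} weeks")
```

The point isn’t the specific numbers; it’s that the answer comes with a confidence level attached, and it improves automatically as real delivery data replaces the early samples.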

This got me thinking about some of the fundamental practices that worked well at a company I was part of five years ago. We didn’t identify ourselves as an Agile shop; we weren’t using Scrum or XP. But we did a pretty good job of being agile with a small ‘a’: responsive, able to deliver frequently and ready to embrace changing requirements.

I think there were two fundamental reasons for this. Number 1: technical practices that supported frequent releases – most importantly, extensive automated regression testing. Number 2: product and business development people who stayed focused on solving a problem instead of chasing a predefined set of features. That mindset helped the product people to be more comfortable with uncertainty, and to accept changes while development was under way.

I suggest that these are the two fundamental building blocks of the kind of trust between tech team and product team that takes them beyond the estimation battleground. If the tech team can deliver frequently and keep quality high, and the buying side can accept that they don’t know the future either, you’re on the right track.

Forecasting the Future: Philosophy or Sport?

I follow two thinkers and commentators on management and the future of work who have both recently written articles that draw on very different areas of knowledge to make the same point. David Anderson, thought leader in the Kanban method for knowledge work, has identified himself as a member of the Pragmatic school of philosophy. John Kay is an economist, and has been referring to sports science. They are both urging us to stop speculating and to stop claiming that we can know the future.

From John Kay: “Physicists studying sport have established that many fieldsmen are very good at catching balls, but bad at answering the question: “Where in the park will the ball land?” Good players don’t forecast the future, but adapt to it. That is the origin of the saying “keep your eye on the ball”.

“The skill of the sports player is not the result of superior knowledge of the future, but of an ability to employ and execute good strategies for making decisions in a complex and changing world. The same qualities are characteristic of the successful executive.”

From David Anderson: “We must abandon imaginary crystal ball gazing, and the use of made up numbers in deterministic plans. Instead we must study what has happened in the past and use it to provide a probabilistic understanding of what may happen in the future. Doing so is counter-intuitive. It’s hard for us to think and act this way but we simply must make this shift!”

“Study actual lead time distributions for work completed in the same environment. When someone asks you, “how long will this [next piece of work] take?” Answer it by quoting the data from the distribution. Do not speculate! Discard that crystal ball in your mind’s eye and embrace your Pragmatic future!”

These two articles make a great pair. It’s fascinating to see how such varied disciplines are used to deliver the same message.
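In practice, ‘quoting the data from the distribution’ can be as plain as looking up a percentile. A small sketch, with invented lead times, of what that answer might look like:

```python
import math

# One way to answer "how long will this take?" from observed data rather than
# a crystal ball. The lead times (in days) are invented for illustration.
lead_times_days = sorted([4, 6, 7, 7, 9, 10, 12, 13, 15, 18, 22, 30])

def nearest_rank_percentile(data, pct):
    """Observation at rank ceil(pct% of N) in the sorted data (nearest-rank method)."""
    rank = max(1, math.ceil(pct / 100 * len(data)))
    return data[rank - 1]

# "How long will this take?" -> "85% of similar items took this long or less."
print(f"85th percentile lead time: {nearest_rank_percentile(lead_times_days, 85)} days")
```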

Certification: could it make Kanban Complicated?

After contrasting Scrum and Kanban approaches to ‘invisible tech tasks’ in my last post, I came across Liz Keogh’s far more thorough and highly readable review of differences and similarities between Scrum and Kanban. Liz wrote her post over a year ago, but most of what she wrote is just as true today. One notable change is the introduction, by Lean Kanban University, of a professional designation for Kanban coaches and managers.

I’ve written a couple of posts on this blog expressing scepticism about certification as an indicator of effective performance, and about the “qualification spiral”. I’ve also spent the last 4 years of my working life involved to varying degrees with the Scrum framework, and have been proud to remain un-Scrum-certified. In knowledge work, I felt that even highly-regarded, hard-to-achieve qualifications could not substitute for practical problem-solving experience, so I was never convinced that two days of training and study for a Scrum certification would make a meaningful difference to my practice.

Despite that default scepticism, the Kanban program has a certain appeal. It involves a peer review approval process, and an expectation that designated professionals will play an active role in the Kanban community. This suggests that the leadership is committed to maintaining the standards, and perhaps the collaborative, experimental feel, of a small, intimate community. Of course the real attraction is the Kanban approach itself: the emphasis on intelligent metrics with some counter-intuitive practices on the side is very compelling.

But … I wonder whether a professional designation edges the Kanban community towards the Complicated domain, as opposed to the Complex, to use Cynefin-speak? I have a theory that professions and industries tend to migrate from handling Complex problems to Complicated ones as they mature. More practitioners, widely dispersed, possibly alternative certification bodies, financial imperatives … might this lead to mechanistic solution-seeking and an unwillingness to embrace complexity? I had a thought-provoking discussion with some people on the LinkedIn Systems Thinking group on this topic, and most agreed with the general idea. Or at least I think they did – there were a couple of contributions I didn’t understand ;). Is professional accreditation the first step along that road for Kanban? Perhaps not. Perhaps, as Kim B suggests, the field of practice will ‘pulse’ between mostly-stable and mostly-emergent states.

At the moment the Kanban community seems to be embracing learning and making major strides forward in terms of influencing how people approach knowledge work. It’s a community I’d be privileged to join. I wonder whether I will build up enough Kanban experience to make a real contribution to that professional community, and perhaps become accredited. If so, will I find a community that has stayed true to its founding principles? Kept both its intellectual rigour and its appreciation for complex environments and human challenges? I hope so.

Kanban: supporting the necessary stuff the user doesn’t care about

One of the things I like best about the Kanban approach to software development is its practical handling of ‘tech tasks’. These are tasks that don’t create obvious user value. They often involve the technology infrastructure: tasks such as updating an operating system to a newer version, or building a ‘like-live’ environment for acceptance testing. Every development team has to do those tasks sometimes, and they can be a source of tension when we’re talking about priorities. Product managers, understandably, want to focus on the features the users are asking for. Developers often know that things will go badly wrong if the infrastructure doesn’t get enough attention, but struggle to articulate their creeping sense of doom in terms that sway product managers.

Most Agile approaches acknowledge this tension – with the concept of a ‘chore’ that has no ‘story points’ and therefore doesn’t contribute to ‘velocity’ – but don’t offer many practical tips on how to handle it. Some implementations of the Scrum framework, heavy on the principle that the Product Owner is responsible for priority decisions, may even exacerbate the tension. The key point about tech tasks is that the product owner (and the client) is not qualified to make decisions about them: the Product Owner has no frame of reference for comparing a tech task with a user request. Henrik Kniberg talks about how difficult it is to compare apples with oranges, and suggests instead wrapping (concealing?) a tech task in a task that does have demonstrable user value, or reducing a team’s commitments in order to carve out time in a sprint for tech tasks. Luckily for me, I’ve worked with some fair-minded and reasonable product owners who have agreed to create space for tech tasks, but it’s always felt like a fragile accommodation, and I’ve wished we had a mutually-agreed principle to underpin it.

David Anderson’s ‘Kanban’ book suggests ways to edge towards that mutually-agreed principle. Instead of starting a fresh negotiation for each tech task as it arises, Kanban suggests reframing the conversation as “What % of the team’s capacity shall we allocate to different types of work?” Part of that conversation is the question: “How much capacity will we give to tech tasks?” Once the decision is made, the Kanban concept of limiting work in progress (limited WIP) supports a team in following through on the decision. It’s far simpler to check that 20% of tickets on a card wall are in the “infrastructure” swimlane than to use estimates to forecast that tech tasks will account for 20% of effort in a given sprint.
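To illustrate how mechanical that check can be, here’s a hypothetical sketch; the swimlane names, cards and percentages are placeholders for whatever a real team agrees, not a record of any particular board:

```python
from collections import Counter

# A hypothetical sketch: compare the cards currently in progress against an
# agreed capacity-allocation policy. All names and percentages are invented.
work_in_progress = [
    ("Upgrade OS on build agents", "infrastructure"),
    ("Build like-live acceptance test environment", "infrastructure"),
    ("Checkout: support gift vouchers", "feature"),
    ("Search results paging", "feature"),
    ("Password reset emails", "feature"),
]
agreed_allocation = {"infrastructure": 0.20, "feature": 0.80}

counts = Counter(swimlane for _, swimlane in work_in_progress)
for swimlane, target in agreed_allocation.items():
    actual = counts[swimlane] / len(work_in_progress)
    note = " <- over the agreed allocation" if actual > target else ""
    print(f"{swimlane}: {actual:.0%} of WIP (agreed {target:.0%}){note}")
```

No estimates and no forecasting of effort: just counting tickets against the policy the team has already agreed.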

No system or framework can dodge awkward conversations, and Scrum users and Kanban users have very similar conversations at some stages. Both need to agree, in principle, that time is needed for tech tasks. Kanban supports the agreement over the longer term, by offering practical tools and mechanisms for keeping on track with the decision. The principle may need to be revisited at times – the tension created by WIP limits is positive tension, after all – but there is no need for developers to go cap-in-hand to the product owner about each tweak they want to make to the infrastructure. If the agreement is going to change, the team registers that it is changing its process policies, and if the system is running smoothly, the team will have access to metrics that reveal the impact of a policy change on the system as a whole.

Do you know of any examples of policies like this? What tech task capacity allocation has worked for you, and in what circumstances?

Metaphors for Agile practices?

While reading about Clean Language, I’ve been thinking about the importance of metaphor in helping ideas become sticky. I’ve written a bit on this blog about the Complex and Complicated domains in the Cynefin framework, defining the terms in dry, wordy ways. Then I saw a tweet – I think it was by Paul Klipp – along the lines of: “Complicated is filing your company’s annual accounts. Complex is coming home to find your wife (husband, partner, housemate) crying into a glass of wine”. And it struck me how much more powerful that metaphor was than my static definitions. The metaphor has stuck with me, it comes to mind easily, and I’m sure I’ll borrow it in future when I need a shorthand for those Cynefin concepts.

My most successful experience of using metaphor to influence and persuade in a work situation was with a developer who had written a heap of code but was nervous about committing his changes into the main branch of the code repository. He happened to like cycle racing, so I suggested that frequent commits were like riding at the front of the peloton – a way to keep out of trouble and reduce the chances of being held up behind other people’s crashes. That seemed to resonate with him – more than a lecture about coding practices would have done, in any case.

So now I’m trying to think of metaphors to help me convey some other ideas more deftly and memorably, instead of delivering them all straight-up and earnest:

  • Why slack is so important;
  • Why big chunks of work are best broken down into small batches;
  • Why we learn more about capacity by measuring what we can do than by trying to improve our estimates.

I have some ideas forming, and would love suggestions.

Arms races and peacock tails

I enjoyed John Kay’s ‘Parable of the Ox’: an engaging story with a message about financial markets. Kay thinks that analysts and professional investors have stopped paying attention to whether the companies they research generate value. All that now matters is second-guessing what other people’s opinions will do to price trends, and timing those predictions to make a profit. Corporate investors behave as though the underlying health of businesses, financial and otherwise, doesn’t matter any more. This makes our entire economic system fragile.

Meanwhile, Peter Wilby highlights the dangers of the qualification spiral, where ever-longer periods of full-time education are needed for entry to established professions (law, engineering, accountancy). Other occupations – trades and specialisms such as nursing – have sought to raise their status by becoming all-graduate entry. I suspect (and this is just a hunch) that paper qualifications have become less useful as a guide to likely effectiveness on the job as this trend has intensified. Employers complain about school leavers and graduates not being ‘work-ready’, yet they sift applications using paper qualifications which are often irrelevant to the work. Students accumulate more qualifications without necessarily becoming better prepared, and the spiral goes on.

I think there’s an interesting parallel between raising investment and gaining qualifications: both are arms races. It’s like ‘costly display’ in animal behaviour. Some animals show off elaborate plumage, like a peacock’s tail. The tail isn’t useful or valuable in itself, but it shows that the peacock has enough spare energy to grow and display it. Peahens select as mates the males with the finest tails, so the next generation is likely to do the same, and the behaviour perpetuates itself. Investors prefer companies which are already attractive to other investors, so businesses devote energy to appearing attractive to the markets rather than generating value for customers. Teachers, students and parents focus on exam success, even when it doesn’t demonstrate meaningful learning. The arms race is on and people feel compelled to join.

Now, I don’t have much to say about investment markets, or about how they could change for the better. But on the qualifications side, it’s extremely encouraging to see that new routes will be opened up for non-graduates into traditional professions. There are other possibilities too. Douglas Adams had it right:

“Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.”

Learn stuff. Fundamental stuff about how the world works. Learn in order to understand, not just to pass exams. To get paid work, find something newly invented and get really good at it. Making it, adapting it, managing it, teaching it, selling it – there could be a career in any of those, and more. Use it to other people’s advantage, and your own. Do this before the new technology has had time to become an ‘established profession’; before formal qualifications in it even exist. Exam systems are buckling under their own weight, and because of that they are ripe for disruptive innovation. Parents and teachers will probably tell you that the routes that worked for them – academic qualifications and university – will also work for you. They may well be wrong.

XP Day 2012

Here’s a bulleted list of some of the key things I learned at XP Day last week. I’m really pleased I went: I’d never been to an unconference before, and I enjoyed the format very much; the atmosphere felt genuinely open and receptive to all perspectives. I’d have liked to be able to go to the pub in the evenings and continue the conversation, but couldn’t, unfortunately. I’ll hope to bump into people and continue our conversations at the Extreme Tuesday Club instead.

So, key learnings. I hope to expand on some of these in future posts.

  • Alternative Key Performance Indicators (KPIs). I was struck by some extremely smart ideas about KPIs in a superb talk by Özlem Yüce about an Agile transformation at Maersk. They established their KPIs on the central principle of short feedback loops.
  • Cost of Delay. More impressive stuff from Maersk: this time the speaker was Joshua Arnold. He walked his audience through the steps they took to assign estimated dollar values to each proposed feature: the cost per week of not having it implemented. This was a powerful tool for prioritisation and for balancing the needs of different stakeholders, and it sounded as though the process of getting to the dollar value wasn’t as daunting as expected. (There’s a rough sketch of the arithmetic after this list.)
  • Estimation. We heard from people who simply choose not to do it, and still deliver results as effective, high-performing teams. There’s a fair amount of psychological research that suggests human beings are pretty bad at estimating how long tasks will take. I’ve known for a long time that estimates can get a bit better with experience and attention to detail – but can they improve enough to justify the time and effort? These days I think not. There are other, better options.
  • ‘Strangler Figs’: the ‘strangler fig’ is a powerful analogy for how new systems can replace legacy systems. Strangler figs grow all around their host plants, eventually taking over all their resources. With software, a greenfield application is built to encompass the old one, strangling (replacing) the legacy app one feature at a time. This is especially attractive because it limits risk: no ‘big bang’ cutover.
  • Bridging roles between developers and managers. Several conversations pointed to a lasting distrust between developer and manager communities, or between developer and manager roles in large organisations. ‘Bridging roles’ – often ‘Coach’ or ‘Scrum Master’ roles, whose purpose is to smooth and enhance the flow of communication across that gap – can help to some extent, but, interestingly, a couple of people in such roles pointed out the danger of the role itself becoming a bottleneck. My personal take is that these roles can be hugely helpful, but their purpose and content need to be defined upfront. In a very general sense, bridging roles exist to inform developers about business context, and to give managers visibility into what’s happening within the development team. But precisely what information do managers need, and when? How do we think ‘additional context’ will affect developers’ choices? How will we know if the information is getting through, and having positive effects? Focusing on those questions before engaging with the role – perhaps designing some experiments to track progress – reduces the risk of the most important information getting ‘stuck on the bridge’.
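The Cost of Delay bullet above deserves a tiny worked example. This is a rough sketch of the arithmetic only, not a record of the Maersk process: the feature names, dollar figures and durations are invented, and ordering by cost of delay divided by duration (sometimes called CD3) is one common way to turn such numbers into a priority order.

```python
# A rough sketch of cost-of-delay arithmetic. All figures are invented.
features = [
    # (name, cost of delay in $ per week, estimated duration in weeks)
    ("Feature A", 80_000, 4),
    ("Feature B", 30_000, 1),
    ("Feature C", 120_000, 8),
]

def cd3(feature):
    """Cost of Delay Divided by Duration: higher means 'do sooner'."""
    _, cost_per_week, weeks = feature
    return cost_per_week / weeks

for name, cost_per_week, weeks in sorted(features, key=cd3, reverse=True):
    print(f"{name}: ${cost_per_week:,}/week of delay, {weeks} weeks to build, "
          f"CD3 = {cost_per_week / weeks:,.0f}")
```

The interesting result is that the smallest feature, with the lowest weekly cost of delay, comes out on top: exactly the kind of counter-intuitive ordering that makes putting numbers on delay worth the effort.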