Did somebody say agile?

What can we learn from the latest trends in software design?

 

Many of the success stories to come out of the new economy have been generated by small businesses that started out in a “garage” using extremely agile processes. “Small is beautiful” was their motto.

What we’re interested in finding out is whether these are simply exceptions that were built on their founders’ merits, or whether organizational lessons can be learnt from them and then applied to big businesses. It is interesting to note that firms such as Google, Facebook and even Amazon continue to be flexible and innovative despite their substantial size (32,000 / 3,000 / 22,000 employees respectively).

At a time when “offshoring” is just as popular as “lean management”, what changes can be made to organisations and processes to make them as efficient as possible?

When 1 is better than 10

The iPhone’s famous software (its operating system) was created by 60 developers, whereas Motorola put 1,500 people on the job and still didn’t manage to develop a rival system. Not only is it impossible to compensate for developer quality by boosting quantity, the opposite is in fact true: an overly large team is actually counter-productive. We have seen projects fall several months behind for just 15 man-days of development work.

Why? It all comes down to the “basic unit”, the developer: “A single good developer is worth 10 mediocre ones” (report by Sackman, Erickson and Grant). Good developers not only produce code of better quality, they produce it faster. The quality of the code has a knock-on effect all the way down the line, from the testing and debugging phases right through to maintenance and later upgrades. At team level, each piece of code is not a standalone component but is interdependent with everything else: the end result is not simply the sum of the good and the not-so-good work, but something closer to a product of the whole, which means that a single mistake can end up wasting time for all concerned.
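To picture this multiplicative effect, here is a deliberately crude, purely illustrative calculation (the figures are assumptions, not measurements): if a project is split into ten interdependent pieces and each piece is 95% right, the chance that the whole works is only about

    0.95^{10} \approx 0.60

i.e. roughly 60%. A single weak link drags the whole team’s result down with it.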

These are, in fact, age-old laws that apply to all sectors whose work involves more than simply carrying out orders: craftsmanship, creation, analysis, etc. A developer is a “creator” of code, not a simple executor who repeats the same task over and over again.

That means that management must be adapted accordingly in order to encourage initiative, and “military-style” management must be avoided: dictating executable orders to one and all is effective if tasks simply need to be carried out, but less so if staff are expected to use a certain amount of initiative. This applies to managers and technicians, as well as to other members of staff whose work involves more than simply carrying out orders. More generally speaking, it also applies to any major project.

Simple and light

Now let’s turn our attention back to software in order to determine what makes these efficient teams so successful. For a good many years, one of the defining features of software development has been the fundamental split between writing functional specifications and doing the development itself.

It is important to understand that describing what the software will do (writing the specifications) can be just as complicated as actually doing the coding itself. What’s more, specifications are always subject to interpretation, so by their very nature they will always be questionable.

Based on this observation, new development methods tend to keep the specification phase as short as possible and merge it with the development phase. Don’t be fooled, though: it’s not a case of “acting first and thinking later”, but of restricting the specification phase to the overall architecture and the interfaces, and breaking the work down into logical subsets.
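In practice, a “light” specification of this kind often boils down to fixing the interfaces and the data exchanged, and leaving the implementation free to evolve. Here is a minimal sketch in Python, built around an entirely hypothetical invoicing module:

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        customer_id: str
        amount_cents: int

    class InvoiceService(ABC):
        # The "specification": only the interface and the data it exchanges are fixed up front.
        @abstractmethod
        def issue(self, invoice: Invoice) -> str:
            """Issue the invoice and return its reference."""

    class FirstIteration(InvoiceService):
        # A deliberately simple first implementation; the details can evolve sprint by sprint.
        def __init__(self) -> None:
            self._count = 0

        def issue(self, invoice: Invoice) -> str:
            self._count += 1
            return f"INV-{self._count:05d}"

    print(FirstIteration().issue(Invoice("C042", 12900)))  # INV-00001

As long as the interface holds, the code behind it can be rewritten as often as necessary without breaking anyone else’s work.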

After having adopted this kind of method, Facebook is now capable of publishing a brand new version every single week, and Firefox has cut its new-version turnaround time from several months, if not years, to just 6 weeks.

So, what tools have been implemented in order to achieve these results?

Teams are small and accountable, comprising:

  • a functional manager (the product owner), who is the sole representative of the client;
  • a Scrum master, who resolves non-technical problems and supports the team when liaising with people outside it;
  • a small group of experienced developers who have a global vision of the project and are capable of showing initiative.

Quite the opposite of so many projects in which more people are involved in making decisions, managing the project and approving the end product than in actually working on the software itself. We have already seen projects with €35K earmarked for project management compared to just €5K for development costs!

Dare to be flexible

Specifications consequently need to be light and flexible in order to accommodate development constraints and the inevitable changes to operational needs: they are just one of the tools in the software production process. They must be kept to a minimum to stop the development process from grinding to a halt and it must be possible to adapt them as the project progresses to reflect each player’s contributions.

It’s time to say goodbye to detailed, contract-style specifications, which may have been reassuring but were practically set in stone. Specifications of this type are never perfect, and they generally end up being nothing more than a burden that triggers the usual client/supplier disagreements about delivery deadlines and such. The time has come for specifications to serve as a support for continual two-way discussions, one of the tools that can help developers understand the vision of the product, with discussions only ending once the product has been successfully delivered.

Details do not guarantee success

This is a lesson that needs to be learnt in almost everything we do: how many highly detailed schedules go off-course on day one? How many barely-read 50-page contracts end up in the bin when we finally realize the whole thing was based on a fundamental misunderstanding?

Details can be reassuring, but there is always a risk we will end up losing sight of what’s really important. Is the 25th decimal of this value of pi correct?

Pi = 31.41592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067 ???

Have a think and get back to me…1

It is extremely tricky to attain a culture of simplicity. Simplicity is exceedingly complicated… ½gt²: perhaps you remember this formula for falling bodies? The human race had to wait for Galileo and Newton to arrive at such a simple formula, which may not take drag into account but is basically correct. We may not all be Newton, but if we aim for simple and concise results instead of producing a 200-page essay, we are more likely to stay on track.
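For anyone who has forgotten it, the formula gives the distance fallen from rest after a time t, neglecting drag:

    d = \tfrac{1}{2} g t^{2}, \qquad g \approx 9.81\ \mathrm{m/s^{2}}

so an object dropped for two seconds has fallen roughly 19.6 m. Two symbols and a constant, and it is basically right.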

The client/supplier culture

The internal client/supplier culture clearly has numerous benefits. One such benefit is to make internal procedures far more understandable, and to break things down into intermediate “deliverables”. However, in a good many cases it generates extra costs due to the accumulation of requests and poor understanding of the other parties’ constraints. This is what we refer to as technical complexity:

  • “The product is too expensive because those idiots in the purchasing department failed to find my part at the right price. They didn’t respect the SLA (Service Level Agreement)!” Although I forgot to mention that 3 m was an approximate measurement, and that the standard-sized 3.02 m part would have been absolutely fine.
  • “I said I wanted a column for the month and another for the date. I didn’t really get what I wanted, it set the project back by a whole month, and now it’s too late to change it.”

Sound familiar? Getting the marketing and development teams to sit down and define the product together is absolutely vital to the project’s overall efficiency. When two teams discuss interfaces (software or physical), the conversation should always be a two-way street and not a one-sided dictatorship.

Iterations

Alongside these flexible specifications, it is important to introduce iterative processes that can be adapted as circumstances change: software is developed in successive and periodic increments known as sprints.

Work is approved at the end of each sprint (rather than at the end of the project), and “something” is always delivered when the sprint closes. New features can be added during each sprint, and the existing code is continuously cleaned up and optimized, a practice known as re-engineering (refactoring). A sprint analysis (retrospective) is then carried out to identify training needs where appropriate, improve the processes and foster the transfer of knowledge within the firm.
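Here is what this can look like at code level, in a purely illustrative Python sketch (the total_price functions are hypothetical): sprint 1 ships something simple that works; sprint 2 adds a feature and re-engineers the existing code without changing its behaviour.

    def total_price_v1(items):
        # Sprint 1: good enough to demonstrate and ship.
        total = 0
        for price, quantity in items:
            total = total + price * quantity
        return total

    def total_price_v2(items, discount=0.0):
        # Sprint 2: a new feature (discount) plus re-engineered, cleaner code.
        subtotal = sum(price * quantity for price, quantity in items)
        return subtotal * (1 - discount)

    # The behaviour of the existing code is preserved while it is being cleaned up.
    assert total_price_v1([(10, 2), (5, 1)]) == total_price_v2([(10, 2), (5, 1)])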

Each sprint generally lasts a fortnight, but that timeframe can be changed to suit the project:

  • Two hours to prepare a presentation.
  • Several weeks for a complex industrial project.

It’s a case of striking the right balance: the sprint must be long enough to produce something concrete, yet short enough for the needs to remain stable over its duration, whilst nonetheless allowing a certain freedom of interpretation.

These methods can be truly inspirational for a wide range of projects, from preparing a seminar to launching a new product. When organizing a 3-day seminar, a client once asked us to break the schedule down into half-hour segments, although it was clear that the schedule would need to remain flexible in order to fine-tune things according to the participants’ reactions.

Projected figures are always wrong

When a project is launched, making projections is absolutely vital, but the figures will never be right: the project boundaries still have to be defined, no-one knows what setbacks will be encountered, the projections are made by the management or marketing teams rather than operational staff, the results generated by a project/team are transposed to another project/team without taking account of the various points of difference…

Acknowledging that a project will have its ups and downs goes against our rational sense and our human optimism. But technical challenges, changes, setbacks and mistakes are all inevitable. If there aren’t any, we really ought to ask ourselves whether the project is actually creating any value! It is consequently important to accept that unpredictability is only natural and handle it by projecting figures for manageable chunks of a project. Those projections should then be reviewed on a regular basis.
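One simple way of projecting figures for manageable chunks is to re-forecast after every sprint from what has actually been observed, rather than defending the initial estimate. A minimal Python sketch, with entirely hypothetical numbers:

    def remaining_sprints(remaining_points: float, observed_velocities: list[float]) -> float:
        # Re-forecast using the average velocity actually observed so far.
        average = sum(observed_velocities) / len(observed_velocities)
        return remaining_points / average

    # The initial plan assumed 20 points per sprint; the first three sprints say otherwise.
    print(remaining_sprints(remaining_points=120, observed_velocities=[14, 11, 16]))  # ≈ 8.8 sprints left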

Getting it right first time

Then comes the issue of quality. The “right first time” approach is always far superior to “statistical quality”.  If a bottle of shampoo doesn’t make it off the production line because a simple crossbar has detected that the cap hasn’t been screwed on properly, you can address the problem immediately and guarantee a quality level of 100%, all for very little cost. This kind of quality level would be impossible to achieve with an end-of-line control.

How can this be operationalized?

  • Firstly, by making a simple cultural change: avoid the “we’ll correct it later” attitude, whether it concerns typing errors left for a proofreader (if there is one) or serious bugs left for the validation and quality-control phase.
  • Protect your teams from “crunch mode”: having to work 24/7 simply to meet the deadline. Stepping up the pace comes at the expense of quality, which can itself end up causing further setbacks.
  • Introduce instant checks: get the client to check the coherence of the results and validate each iteration immediately (see the sketch after this list).
  • Have clean and legible tools: the code must be as clean as Toyota’s workstations!
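Transposed to software, the crossbar on the bottling line and the “instant checks” above amount to the same thing: verify each unit of work at the moment it is produced, not at the end of the line. A minimal, purely illustrative Python sketch of the bottling example:

    from dataclasses import dataclass

    @dataclass
    class Bottle:
        cap_screwed_on: bool

    def filling_line(bottles):
        # In-line check: a defective bottle never gets past its own station,
        # so the problem is corrected immediately and cheaply.
        shipped = []
        for bottle in bottles:
            if not bottle.cap_screwed_on:
                continue  # reject (or fix) right here, not during an end-of-line inspection
            shipped.append(bottle)
        return shipped

    print(len(filling_line([Bottle(True), Bottle(False), Bottle(True)])))  # 2 bottles shipped, 1 caught in-line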

Get into the habit of asking your peers to check your work (plans, code, presentations, business cases, etc.) as everyone has a tendency to concentrate more when they know their work is going to be proofed. We deliver clear, neat and well-presented work out of respect for our peers, and the return on the investment is usually instantaneous.

Market intelligence and investment

Finally, the golden rule that features in all management and personal organisation handbooks worth their salt: start with what’s important, not with what’s urgent. If we look to developers once again for inspiration, those who manage to stay ahead of the game are those who constantly keep one eye on the latest state-of-the-art developments and can consequently choose the model, tool, language or framework that best suits the matter in hand.

It is often tempting and reassuring to spend time dealing with day-to-day affairs, whilst market intelligence and investment seem somewhat more abstract. And yet it is generally more efficient to find a way of resolving a problem for good by identifying an existing solution, automating or delegating (“teach a man to fish and you feed him for life”) than by resolving the problem yourself. Finding a lasting solution may take two or three times longer than finding a quick-fix remedy, but as a general rule it pays very quickly.

 

The “agile” revolution in the IT world can clearly teach us a lot about managing all kinds of projects. Some would say it simply follows in the footsteps of the “quality” methods launched back in the 1970s. But that “quality” was so poorly interpreted, and generated so many reassuring files and handbooks now lying dormant in cupboards, that we thought it would nonetheless be useful to take another look at these “disruptive” methods, which allow a new product to be launched every week rather than every year, all at no extra cost.

 


1 The 25th decimal is correct, but the decimal point was in the wrong place. It goes without saying that you noticed that, though, didn’t you?