How to Manage Test Delivery within Large Complex Programmes

Senior Test Director Ivan Grice shares his perspective on the causes of test delivery failure, the key success factors for projects and programmes, and what it takes to manage large teams effectively

[ Aimed at the intermediate to senior tester ]

Large, complex programmes face problems similar to those of other programmes in terms of finances, resources, processes and people, except on a much larger scale. What do you believe it is about such programmes that makes them prone to test delivery failure?

Ivan : There are several factors, including:

  • poor initial estimation. Every project is different and most initial estimates are wildly inaccurate based on assumptions that are not adequately validated;
  • lack of appreciation of the investment required in test environments and test data;
  • a basic misunderstanding of how the test basis will be used to define the relevant test cases. In many projects, I have seen teams not able to really understand the requirements, user stories etc. and be too afraid to say so;
  • an ‘underbaked’ approach to the complexity of the end-to-end integration testing and particularly where to build the validation points;
  • regression impacts in a long, multiple-drop programme; and
  • the wrong test focus leading to wasted effort.


What are the factors you take into account when classifying a particular programme as large or complex?

Ivan : These are the ones I would consider important:

  • functionality – how broad and deep is the functional footprint?;
  • the number of integrated systems – more systems mean more interfaces and it always surprises me how two development teams can differently interpret one interface specification;
  • the length of the programme – the longer it is, the more likely key people will churn;
  • distributed teams – nothing beats the whole project team in one room but with recent trends in outsourcing, this rarely happens;
  • a multi-vendor environment with software vendors conducting the initial testing of customisations. This very rarely goes well;
  • technology – people always want to use the latest technology. This can complicate the solution especially when it is realised part-way through the programme that a poor technology decision has been made;
  • test data, in general, and the need for data migration activities in particular;
  • multiple drops/iterative approach leading to significant regression requirements;
  • high level of configuration by people who don’t really understand the integrated solution;
  • need to integrate with external parties, and in particular, when sending sensitive data (e.g. financial systems);
  • requirements captured in an unsuitable form: sometimes unnecessarily complex, sometimes too simple (e.g. a plain Word document) when the requirements would be better captured, for example, diagrammatically.

You would have had some ideas about how things work when you started on your first large-scale project. What did you think then that you know now is utterly wrong/incorrect?

Ivan : Two things come to mind. First, that most experienced people know what they are doing.  Whilst some do, many are using techniques that may have worked on previous projects but don’t work on the current one.  Many ‘experienced’ testers have surprising gaps in their knowledge, particularly around test data.

Second, that software vendors know how to test their own products.  There tends to be quite a bit of churn in vendor quality teams (where they even exist), which means I always err on the side of caution when a vendor tells me that software customisations have been ‘fully tested’.

Of people, process and money, which would you hold as the single most important element of programme success?

Ivan : With the obvious caveat that they’re all incredibly important, the single most important element has to be ‘people’ – it’s people that make the decisions on how to spend the money, which processes to follow etc. Talented people are so important in all areas of project delivery, both in terms of their knowledge (in their area of specialisation and beyond) and how they contribute to the project team.  Committed, engaged, open people with no hidden agendas make for a delivery environment that is supportive and trusting, both of which I believe are essential to delivery success.

Many such large programmes run over in terms of budget and time. They also underdeliver on value. Can you talk about one experience of managing budget, time and value and the issues you had to face? How did you deal with those challenges?

Ivan : A classic problem in testing is how to limit manual test execution so that the testing executed maximises ‘bang for buck’ in terms of finding defects (at the simplest level) and allowing an accurate risk assessment to be made prior to go-live.

There are obvious techniques for prioritising requirements for testing; test automation is another lever for getting more from the execution budget.  Whilst it isn’t rocket science to ensure that test automation can be seen to deliver value, it always surprises me how often it is treated as an end in itself.  Automation of functional regression testing delivers value in particular circumstances (multiple drops, incremental delivery etc.).  I always make sure any automation approach is underpinned by a solid cost/benefit analysis and that the assumptions haven’t been ‘massaged’ to make the outcome look positive.
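The cost/benefit analysis described above can be made concrete with a simple break-even calculation. The sketch below is illustrative only: the function name and every figure in it are hypothetical assumptions, not numbers from this interview.

```python
# Minimal break-even sketch for regression automation.
# All figures are hypothetical effort units (e.g. hours); they are
# illustrative assumptions, not numbers from the interview.

def automation_breakeven(build_cost, maintain_per_run, manual_per_run):
    """Return the number of regression runs after which automation
    pays for itself, or None if it never does."""
    saving_per_run = manual_per_run - maintain_per_run
    if saving_per_run <= 0:
        return None  # upkeep costs as much as manual execution
    return build_cost / saving_per_run

# 400 hours to build the suite, 5 hours upkeep per run vs 45 hours manual:
runs = automation_breakeven(build_cost=400, maintain_per_run=5,
                            manual_per_run=45)
print(f"break-even after {runs:.0f} regression runs")  # 10 runs
```

A multi-drop programme with, say, 15 planned regression cycles would clear this hypothetical break-even point; a single-release project would not, which is exactly the circumstance-dependence described above.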

In one test programme I led, we developed a large automation suite that we used to sanity check new releases and ensure the general quality of what was being delivered especially given the changes being made to shared services on the enterprise service bus.  The key to setting up a successful automation exercise is to show how automation addresses the problem statement in a more cost effective way than the alternatives.  I try to avoid business cases that rely on the automation scripts being used in the post-project, ‘business as usual’ environment as in my experience this very rarely happens.

However, with the growing maturity of Agile and DevOps delivery techniques, automation is becoming more of a hygiene factor and as organisations (so very slowly) restructure to blend delivery and operations teams, automation resulting from project delivery becomes valuable across the product life cycle.

From your years of experience, what do you believe is the single biggest success factor in managing large teams and sub-teams across multiple sites?

Ivan : Leadership.  For some lucky people, being a great leader comes naturally.  For me, the concepts around leadership are not hard but they can be hard work.  When managing large, distributed teams I think most of us know what we ought to do – build a team spirit, ensure the team receives regular communications, ensure everyone knows what they are doing, recognise people for their performance etc.  However, in practice, I see people in leadership positions not following the basic disciplines:

  • no team activities when building a new team to get people used to working with each other;
  • no communications except to their reports (and a surprising number of people seem to feel that a 30-minute catch up every month is more than enough);
  • no ‘all hands’ meetings on a regular basis for teams based across multiple sites – even if this has to be by video conference;
  • not preparing well thought-out, engaging updates when team meetings are held – it takes time to prepare these but it’s worth it to ensure attendees get something from the meeting;
  • no meaningful methods of celebrating success (and sending a ‘reply all’ email stating ‘well done to everyone for the release’ doesn’t count, in my opinion);
  • meetings not minuted and actions not tracked – I know this can be a pain but it’s basic team leadership and gives structure and continuity to meetings; and
  • for me, most egregious of all is where the leader doesn’t give the necessary air cover to members of the team – the team needs to feel supported by the leader even if someone has screwed up (this doesn’t mean that anyone who screws up won’t be held to account; it just means that as a leader I take responsibility for what the team does).

Having visibility of the programmes in other divisions and being able to assess potential impact on your programme may be important. How have you typically managed this aspect so that you have enough access and visibility into other programmes that you can make assessments and decisions on your own programme?

Ivan : At its simplest level, I rely on the organisation’s delivery portfolio management function to put a framework in place to analyse and understand cross-programme delivery impacts.  However, many organisations don’t do this well.

In some organisations, there will be an effective enterprise release management framework in place where cross-programme impacts are assessed from early in the delivery life cycle.  In large integration programmes, there can be multiple projects making changes to the same systems and it is testing that tends to drive the impact analysis.  In organisations that do not have effective enterprise release management, cross-programme impacts can be identified via test environment management (where projects share test environments).

However, as organisations move to fully virtualised and cloud test environments, it becomes easier for individual projects to get the environments they need in isolation and if there is no effective cross-programme dependency analysis framework in place then code contention issues can arise.  If I was in an organisation that had no effective framework in place, I would leverage the centralised testing function to set up a ‘test delivery forum’ to get test teams talking about what they were doing.  If there wasn’t a centralised testing function in place, then I’d set the forum up as a standalone entity.

What steps do you take to find out potentially negative impact your programme may have should it be delayed so that you can minimise it?

Ivan : Not sure that I entirely understand this question but I’ll assume you mean ‘how do I prevent my testing activities from over-running’…

Testing on large, waterfall projects always suffers from being one of the final activities – upstream delays mean that integration testing rarely starts on time but the project plan usually has a hard end date and as a result testing gets squeezed.

Early in my career, that meant lots of long days and weekend work.  Now I tend to work closely with the project/programme manager to ensure he or she has a clear idea of how upstream delays will impact the test window and what can be done to address these delays.

Of course, sometimes (not very often!) testing starts on time and delays arise solely in the testing space.  The most common reason is that testing discovers a lot of defects that take a long time to fix and retest.  This can be mitigated by reviewing testing prior to integration to ensure the right level of quality is present from the start.  I find that taking the estimation model’s parameters (number of test cases, time taken to execute, defect rate, number of testers etc.) and substituting real data from early execution gives me a predictive model of how long test execution will take.  If there are going to be delays, then conversations are had with the project team and stakeholders about increasing test resources, pushing out the execution window or reducing scope.
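The predictive approach outlined above – feeding observed early-execution data back into the estimation model’s parameters – could be sketched along these lines. All parameter names and numbers here are hypothetical, introduced purely for illustration.

```python
# Sketch of a predictive test-execution model of the kind described above.
# Parameter names and all numbers are illustrative assumptions.

def projected_days(remaining_cases, cases_per_tester_day, testers,
                   defect_rate, retest_factor):
    """Estimate the remaining execution window in working days.

    remaining_cases      -- test cases still to be executed
    cases_per_tester_day -- execution rate (use observed early-run data)
    testers              -- testers available for execution
    defect_rate          -- defects raised per executed test case
    retest_factor        -- extra executions each defect triggers (retests)
    """
    # Defects inflate the workload: each one adds retest executions.
    effective_cases = remaining_cases * (1 + defect_rate * retest_factor)
    return effective_cases / (cases_per_tester_day * testers)

# Planning assumptions vs real data substituted from early execution:
planned = projected_days(800, cases_per_tester_day=10, testers=8,
                         defect_rate=0.10, retest_factor=1.0)
observed = projected_days(800, cases_per_tester_day=7, testers=8,
                          defect_rate=0.25, retest_factor=1.5)
print(f"planned: {planned:.1f} days, trending: {observed:.1f} days")
```

A gap between the two figures is the trigger for the conversations mentioned above: add testers, push out the execution window, or reduce scope.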

Another reason for delays in the testing phase is if the test team is just not ready e.g. the test environments are not in place, test data has not been created, the test cases/scripts have not been developed etc.  A test project is just a project and preparatory test activities need to be tracked.  Entry gate meetings focus the mind just prior to the start of test execution but by then it’s all a bit too late; it is better to have addressed these issues through the entire life cycle of the test effort.

For someone new to developing and then implementing a division-wide programme testing control framework, what do you suggest is a good place to start? What are some key elements that should be considered?

Ivan :  Assuming that the reference to ‘programme testing control framework’ is to a ‘test QA framework’, the  main thing is to ensure that the test methodology and approach is appropriate for the overall delivery framework.  Is the programme going to be delivered as a Waterfall project, fully ‘Agile’ or following some hybrid, iterative approach? Nowadays, it’s usually the third.  What documentation will be produced that will inform the test cases?  How will business acceptance be gained?

Testers need a fair degree of experience to write a test strategy that is more than just a boilerplate re-hash of another project’s test strategy.  Basic questions to start with include:

  • are multiple functional test levels required?;
  • is non-functional testing required?;
  • what project documents will be produced, how do they relate to each other and how do they impact testing?;
  • what are the project’s critical goals that determine whether it has been successful – and how does testing help realise those goals?;
  • what is the test data strategy?;
  • what is the test environment strategy?;
  • what test accelerators will be used (tools, techniques etc.)?;
  • what test deliverables will be produced?;
  • how will the test team be structured?; and
  • how will test results be reported?

Junior testers tend to use existing templates for test strategies and can sometimes miss answering some basic questions.  In my opinion, a good test strategy answers the basic questions of why, what, how, who, when and where…and pictures always help 🙂

What do you believe to be the top three most important elements of a project governance process?

Ivan : First, benefits realisation (which never happens in my experience). Second, effective identification and then resolution of risks and issues. Third, managed flexibility – processes should be adhered to where they add value and changed where they don’t.

Do you believe that it is necessary for a person in your role to have a deep understanding of change management principles? Either way, can you share why?

Ivan : Obviously, yes.  Project delivery (and testing as a subset of it) is all about implementing (hopefully well-thought-out and well-executed) change.  As delivery professionals, we must never lose sight of how challenging people find change and of the fact that change doesn’t ‘just happen’.  At all times, we must think about how we can make the change easier both for the end user/customer and within the organisation in which the change is being made.

For an organisation looking to hire a strong technical lead, what do you believe are the most important characteristics to look out for?

Ivan : A strong technical test lead?  Obviously at the simplest level, you need someone who possesses the required technical skills to the required level.  Once that basic hurdle has been jumped, it then comes down to their soft skills – can they work within a team, provide guidance to junior members, manage project stakeholders, form relationships etc.?

How do you judge the technical expertise of a strong technical lead?

Ivan : I ask them to explain complex concepts in their areas to me as though I were a business person.  I want them to be able to do so in a simple, engaging manner.  I also ask them to give me examples of where they have used their technical knowledge to deliver value on previous projects.

The success of large programmes is heavily dependent on having solid project management processes in place. How do you validate the project management processes with colleagues to ensure they serve their purpose?

Ivan : Experienced project delivery people tend to know very quickly when project management processes aren’t working.  At all times I ask myself:

  • do I understand what the PM wants me and my team to do?;
  • are the project documents of the right quality? And not knowing how to assess their quality is a big problem…;
  • is too little/too much effort being put into the project plan? Does it allow the accurate tracking of activities?;
  • are meetings well managed?;
  • are project activities routinely slipping leading to multiple re-baselining of the schedule?;
  • are risks and issues being appropriately managed?; and
  • do project reports accurately reflect where we are (acknowledging that all PMs put a slight positive spin on things)?

At the end of the day, it’s clear to all experienced delivery people when a project is out of control and the project management processes either aren’t in place or key people aren’t following them.

What’s your take on making full use of Agile practices when it comes to large, complex test programmes? Are you for/against it? Do you believe in going Agile for this? What method do you find works best?

Ivan : I think that if you asked three people to define ‘Agile’, you’d get four different answers.  Agile is like any other delivery approach in that it is better for some things as opposed to others but it can only work when the people on the project all agree on what it is and are experienced in using it.

There is no reason for Agile practices not to be used for large, complex test programmes; taken in isolation, they all make sense in some circumstances.  However, what tends to happen is that the bigger the programme, the more ‘hybrid’ the approach becomes and the required rigour of the approach starts to break down.

It all comes back to the people in the project team – if they can maintain clarity on the process and adherence to it, there is no reason not to go with Agile…as long as the technical underpinnings (especially integrated test environments) are in place. In short, I don’t think there is one single method that works in all cases. It’s more about getting the right practices in place for that particular project…and sticking to them.

Can you share a good decision-making structure you’ve been part of? What makes it effective?

Ivan : I don’t tend to get too introspective regarding decision making. I find that as long as you have the right people involved and there is a base level of respect between them, then a reasonable conversation can ensue.  As long as the problem is clearly defined, options are analysed and agreed decision criteria are in place, then it shouldn’t be a big drama.

What do you do to build effective teams?

Ivan :  First of all, ensure that every team member possesses the skills for the role they have been hired for.  People may be useful in other areas, too, but they need to be able to do their day job well.

Second, make the effort to run team-building sessions at the start – get the team used to working with each other and used to having fun with each other.  Also, organise cross-team events (I find that a few drinks with the business analysts, developers etc. can really help prevent the test team becoming ‘dehumanised’).

Third, encourage everyone to accept accountability for their actions.

Fourth, have a clear team structure so everyone understands what their role is and who they report to.

Fifth, make an effort with your team meetings – prepare engaging presentations, get people interacting, minute the meetings and track actions to show progress.

Last, celebrate success, both individually and as a team.

A McKinsey report indicated that one way that companies can maximise the chances their projects deliver on time and on budget is to focus on managing strategy and stakeholders instead of just focusing on budget and scheduling. Do you agree, based on your experiences so far?

Ivan : I’m not a huge strategist as I tend to be the one who executes against a strategy.  Stakeholders are obviously critical to any project, but then so are budgets and schedules.  Whilst strategy is essential to set the direction, project managers are paid to manage to a defined scope, timeline and budget.  In my experience, a tightly managed budget and schedule doesn’t guarantee project success, but a loose budget and schedule guarantees failure.

At a high level, how would you suggest the first 90 days of a complex programme be managed?

Ivan : From a testing perspective:

  • understand the project’s goals and structure – and where the power lies;
  • define your test strategy and be clear on what will be needed to be successful;
  • conduct regular risk and issues management meetings;
  • build a living integrated test strategy that links to the overall project schedule;
  • focus on communication – upwards, outwards and downwards;
  • implement a formal change management framework; and
  • conduct team building exercises.

Ivan Grice has worked in the testing domain for nearly 20 years, specialising in managing large test functions, the delivery of large, complex integration programmes and test consultancy.

He has worked in the UK, Europe, Australia and Hong Kong and was educated at Oxford University, the Australian Graduate School of Management and Kellogg School of Management.




