How to develop a culture of sustainable development
As an organisation, your teams should be delivering regularly with frequent feedback loops to enable them to innovate while also maintaining quality. This article provides an easy-to-follow framework that allows your teams to assess where they are in adopting continuous deployment practices and what they should be aiming for next.
Véronique Skelsey and Mike Collins from Liqueo’s Ways of Working team recently published an article on Agile and DevOps. It provides a useful overview of how Agile and DevOps work together to enable teams to deliver value predictably and to high quality.
Even today, with the proliferation of Agile literature and armies of Agile Coaches, not many teams manage to reach true continuous delivery maturity. We define this as the ability to continuously and seamlessly integrate and deploy, thereby enabling rapid iteration and value delivery to the customer.
Neglecting your continuous deployment pipeline is like investing in a state-of-the-art bullet train without upgrading the tracks, and expecting the same result.
So where is your organisation on its continuous deployment journey and how rickety are your tracks?
At Liqueo we’ve devised a rating system which takes five key areas into consideration:
– how change is introduced;
– how teams identify and introduce improvement;
– the stages teams go through to get an idea implemented;
– how resilience and organisational support are managed;
– how manual steps are automated where possible.
Our rating system explained:

1/5 – Early Journey: You are still very early on your journey towards enabling continuous delivery. Focus on working towards “making progress”.

2/5 – Making Progress: You have started enabling your teams to improve their continuous delivery, but should now focus on getting them “ready to race” next.

3/5 – Getting Ready to Race: You have made a great deal of progress; now build on what you have done to date to enable your teams to ramp up to “running”.

4/5 – Running: You have achieved more than many organisations that would call themselves “mature”. Now for the final push towards enabling true continuous delivery, rapid feedback and value delivery.

5/5 – Flying: Your teams have enabled true continuous delivery, rapid feedback and value iteration. Your main enemy now is complacency: ensure you have active champions so your teams don’t stagnate and keep pushing and leveraging the abilities you have enabled.
N.B. The word “requirements” is used in this article to refer broadly to needs, enablers, features & business value
Change – How change is introduced

1/5 – Your team’s requirements are vague and gathered late.

2/5 – Your team’s requirements only describe audience-facing change. Requirements are defined by the Product Owner and Business Analysts only. Requirements only describe a high-level vision and cover many features.

3/5 – Your team’s requirements explicitly include some separate component behaviour. Requirements describe small, discrete, separate (atomic) changes. Release early and release often.

4/5 – Your requirements can have input from all team members. There are some non-functional requirements covering performance, resilience, security and so on. Acceptance criteria are clearly defined. The Definition of Done is agreed with the team. Product features are built on iteratively.

5/5 – All change is associated with a product feature, prioritised, agreed with the Product Owner and captured in requirements. All non-functional requirements are included. Changes to individual components are decoupled and can be raised separately. Functionality is released in vertical slices where possible to reduce dependencies and encourage early integration.
Feedback – How teams identify and introduce improvement

1/5 – Gut feeling is used to identify and prioritise improvement.

2/5 – Mostly vanity measures, such as site visits, are collected, with no correlation to timescale or system metrics. Unused features are being maintained and/or delivered. The product roadmap is not widely visible or communicated.

3/5 – Usage or KPIs are visible, but there is no real analysis or tool usage. A mechanism to capture qualitative user feedback is in place, but analysis is haphazard and lightweight. The product roadmap focuses mostly on imminent and current changes, and is regularly reviewed and reprioritised.

4/5 – Definition and analysis of usage KPIs are captured as part of requirements. The team accurately monitors and reviews usage KPIs and uses the insights to iterate. Product features are implemented as a result of feedback gathered from data and metrics. Some qualitative user feedback is analysed and used to iterate.

5/5 – Your requirements include the systems and tools used for KPI analysis. Analysis and follow-up of all feedback (qualitative and quantitative) and KPI statistics inform future requirements. The team actively measures, builds and iterates, and is prepared to pivot. Requirements are defined to test hypotheses and inform future features. Trialling (e.g. A/B testing) is used to compare new and existing solutions. The team’s requirements are visible to other areas of the business, and change and strategy are aligned across the business.
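The trialling mentioned at the top feedback level can be kept very simple. Below is a minimal sketch, not a prescribed implementation: users are bucketed deterministically into variants with a stable hash (so the same user always sees the same experience), and a basic conversion-rate KPI is computed for each variant. The function names and the “control”/“treatment” labels are illustrative assumptions.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant via a stable hash."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rate(conversions: int, exposures: int) -> float:
    """A simple quantitative KPI: conversions per exposure."""
    return conversions / exposures if exposures else 0.0
```

Hashing on experiment name plus user id keeps assignment stable across sessions without storing state, and lets different experiments bucket the same user independently.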
Pipeline – The stages teams go through to get an idea implemented

1/5 – No defined process.

2/5 – Partial definition of local stages and processes. Local environments have few controls and can get out of step with production.

3/5 – Some stages and processes are defined. Some ad hoc environments are maintained by a central team who exert strict controls on releases.

4/5 – Stages and processes are defined with clear criteria for moving from one stage to the next, including fixed times (e.g. overnight). Local environments are well managed and represent production configurations.

5/5 – Local stages and processes are defined and followed. All local environments are maintained by the team and their state can be guaranteed. Local stages and processes can be refactored at will. Environment setup and configuration is largely automated and can be reset easily. All members of the team can release changes at any time.
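“Clear criteria for moving from one stage to the next” can be made explicit in code. The sketch below, under assumed stage names and checks (build, test, staging, production are illustrative, not a prescribed pipeline), walks a change through each stage and stops at the first gate that fails, reporting how far it got.

```python
# A minimal sketch of stage promotion gates: each stage defines a check
# that must pass before a change may advance to the next stage.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    gate: Callable[[dict], bool]  # True if the change may enter this stage

def promote(change: dict, stages: list) -> str:
    """Walk a change through each stage; stop at the first failing gate."""
    reached = "not started"
    for stage in stages:
        if not stage.gate(change):
            return reached
        reached = stage.name
    return reached

# Illustrative pipeline definition (the keys of `change` are assumptions).
pipeline = [
    Stage("build", lambda c: c["compiles"]),
    Stage("test", lambda c: c["tests_passed"]),
    Stage("staging", lambda c: c["config_matches_prod"]),
    Stage("production", lambda c: c["approved"]),
]

change = {"compiles": True, "tests_passed": True,
          "config_matches_prod": True, "approved": False}
# promote(change, pipeline) -> "staging": the change stalls at approval
```

Making the gates data rather than tribal knowledge is what lets teams at the top level refactor their stages at will without losing control.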
Business Continuity – How resilience and organisational support are managed

1/5 – No defined process or clear ownership.

2/5 – Low awareness of business continuity. It is seen as the responsibility of a specific person or department, with no real integration into the everyday functioning of teams.

3/5 – There is awareness of, and some guidelines on, incident management, technology recovery, security management and business recovery. However, this is not yet ingrained as a way of working within the organisation, with regular updating, testing, feedback and improvement.

4/5 – Business continuity plans and strategies are accessible, widely known and kept up to date. Continuity plans are updated along with any relevant product changes. Business continuity strategies are regularly tested.

5/5 – Change control methods and continuous process improvement are continually updated and maintained. Business continuity capabilities are understood and measured.
Automation – How manual steps are automated where possible

1/5 – All manual.

2/5 – Some manual steps exist in the build process. All testing is manual or triggered manually.

3/5 – Predominantly manual testing at product level. Unit testing is automated and triggered by the build. Some automated deployment or testing tools are in place. The build is fully automated when code is committed.

4/5 – Extensive but not comprehensive automated testing, including regression testing. Partial automation of deployment to test environments. Automated build and test cycle when code is committed. Coding standards and quality checks are automated and measured. The current build status is visible to the development team. Tests and test cases are refactored regularly.

5/5 – All automatable tests are automated, including regression tests and non-functional testing (performance, load, security, etc.). Deployment of environments is fully automated, and environments are automatically provisioned. The place of manual testing is fully understood by everyone involved, with its main focus on exploratory testing. Virtualisation is used. Production releases are largely automated. Full insight into the status of the whole pipeline is always visible to the development team.
The above checklist will help you to assess your teams’ maturity in continuous deployment.
But however well you are doing, always keep asking yourself:
How are you enabling your teams to achieve a culture of early delivery, regular feedback and rapid innovation with high quality? Are you continuously improving and investing in your pipeline to production?
If you would like more help with assessing your teams’ maturity or implementing continuous deployment practices, contact us.