Predicting the future of enterprise software development and delivery can often feel like nailing Jell-O to a wall. The industry is a perpetually evolving beast, awash with nuance and disruption. Just when you think you’ve got your head around it all, new technology and ideas can gatecrash the party overnight.
That said, we’ve dusted off our crystal ball to help give you an idea of the year or so ahead. Based on conversations with customers, analysts, partners and a cross-section of Tasktopians, we’ve identified some recurring themes that could impact the industry over the next 12-18 months.
1) The final nail in the coffin of the ‘One Tool Fallacy’
Having ‘one tool to rule them all’ is naturally appealing in terms of perceived simplicity and cost benefits. The idea, however, is inherently flawed. History speaks for itself: any tool that has attempted to be ‘everything for everyone’ has failed miserably. There are just too many nuances within the different workflows across the various specialty teams in the software value stream.
When you plug too much workflow into one tool (say Jira), you will inevitably flood the tool and hit a wall, undoing all the productivity achievements that Jira and other leading development and delivery tools have brought to the process.
Instead, organizations will continue to introduce domain-specific tools to optimize productivity and functionality at key stages. The priority now is finding the best way to make these multiple tools work together as a single dynamic system for better visibility, traceability and control over the process.
2) Rise of the ‘One Tool Vendor’
If the ‘One Tool Fallacy’ is dead in the water, the ‘One Tool Vendor’ is still swimming strong. The trend of vendor consolidation shows no signs of abating. Industry heavyweights continue to acquire younger upstarts to try to manage more aspects of the development cycle – see CA and Rally, Micro Focus and HPE QC, Planview and LeanKit, etc.
But that doesn’t mean the best-of-breed tool trend that Agile and DevOps have fostered will slow down – far from it! When a tool vendor purchases another tool to extend its portfolio, there’s no guarantee that the newly acquired tool will be the right tool for a customer’s specific business. There will always be innovative new tools offering new, different and better functionality to meet the diverse needs of the market.
What organizations will be looking for now is how to make all these best-of-breed tools work together. Large-scale integration, however, is a specialty in which tool vendors lack expertise. Customers may be lured into a lightweight integration between two tools owned by the same company, but that integration is unlikely to be robust and sophisticated enough to handle the complexity of maintaining a strong integration fabric as tools, teams and projects scale.
Nor will these lightweight integration solutions be able to connect to all the other leading tools in the market. This is significant because point-to-point, two-way synchronization covers only one portion of the whole value stream. Instead, organizations are looking to integrate multiple tools across the whole value stream for better visibility, traceability and control over the way they deliver end products and services.
3) Value Stream Thinking
We will continue to see organizations working even more closely with customers and implementing processes that enable faster user feedback, always focusing on the one question that really matters: “what do our discerning customers want from their software?”
The answer is a seamless digital experience at all times, delivered with speed, innovation, reliability and predictability. What this means is listening, in real time, to customer feedback. Each product feature and release is a new experience, and if an organization isn’t listening, it’s highly likely that it won’t be building the right thing. And it will definitely lose customers if it doesn’t swiftly adjust its approach.
This focus on customer value is driving a shift in mindset. Organizations are beginning to see their software delivery process as a network of linked activities that provide tangible value for their customers. Or in other words, their software delivery is a ‘value stream’.
They’re starting to analyze where value is created and lost, and how work flows from ideation to production. This type of thinking is forcing them to define their value stream, and seek ways to obtain end-to-end visibility and traceability into the process to improve it.
4) Value Stream Networks and integration
While the legacy of the failed ‘Waterfall’ approach to software still lingers, most organizations recognize that software delivery isn’t a linear, sequential process. Instead, they see it as a latent network of critical interactions between key collaborators united by their work (via artifacts such as features, epics, stories and defects).
These critical interactions are bilateral working relationships at key stages of the software delivery process – such as Developer and Tester, Developer and Ops, PMO and Product, and so on. Organizations are gradually realizing that if you take a step back and “zoom out” of the software delivery process, the whole system resembles a social network of communication.
This realization begets another: how do we connect this network? How do we enable it to operate as one? How do we automatically flow artifacts? To address this, organizations are looking into robust and sophisticated integration solutions that can handle the scaling of their software delivery, ensuring they keep their eye on the prize: supporting their customers’ business initiatives.
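As a minimal sketch of what ‘automatically flowing artifacts’ can mean in practice, the snippet below copies one defect from a development tracker to a QA tracker. The endpoints, field names and sync_defect helper are hypothetical stand-ins for whatever tools sit in your value stream; a production-grade integration fabric would also handle field mapping rules, conflict resolution, retries and attachments.

```python
import requests

# Hypothetical endpoints; substitute your own trackers' REST APIs.
SOURCE_API = "https://dev-tracker.example.com/api/issues"
TARGET_API = "https://qa-tracker.example.com/api/defects"

def sync_defect(issue_id: str) -> str:
    """Copy one defect artifact from the dev tool to the QA tool."""
    issue = requests.get(f"{SOURCE_API}/{issue_id}", timeout=10).json()

    # Map fields between the two tools' (assumed) schemas.
    payload = {
        "title": issue["summary"],
        "description": issue["description"],
        "severity": issue.get("priority", "medium"),
        "source_ref": issue_id,  # preserve traceability to the origin
    }
    created = requests.post(TARGET_API, json=payload, timeout=10)
    created.raise_for_status()
    return created.json()["id"]

if __name__ == "__main__":
    print(sync_defect("DEV-1042"))
```

Multiply this by every tool pair and artifact type in the value stream, and it becomes clear why organizations look for a dedicated integration layer rather than a pile of hand-rolled scripts.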
5) Measurement
Another major trend will be how organizations measure the success of their software delivery at scale.
In 2018, organizations will look at flow time as the key measure of delivery speed, measuring the time it takes to deliver a new feature or product from the first customer request through to completion.
Previous measures like lead time and cycle time have tended to focus on the window from code commit to deploy. They have helped to increase speed in sections of the delivery process, but they fall short when organizations try to become more predictable with customers.
Flow time lets organizations make probabilistic commitments, quantifying, for example, the number of days within which X% of work will be completed.
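To make the arithmetic concrete, here is a minimal sketch that computes flow times from invented request and completion dates and derives a percentile-based forecast. The sample data and the 85% threshold are illustrative assumptions, not a standard.

```python
from datetime import date
from statistics import quantiles

# Hypothetical sample: (first customer request, completion) per feature.
features = [
    (date(2018, 1, 3), date(2018, 1, 20)),
    (date(2018, 1, 8), date(2018, 2, 14)),
    (date(2018, 1, 15), date(2018, 1, 29)),
    (date(2018, 2, 1), date(2018, 3, 10)),
    (date(2018, 2, 5), date(2018, 2, 26)),
]

# Flow time = request to completion, in days (not just commit to deploy).
flow_times = [(done - requested).days for requested, done in features]

# The 85th percentile answers: "85% of features complete within N days."
p85 = quantiles(flow_times, n=100)[84]
print(f"Flow times (days): {sorted(flow_times)}")
print(f"85% of features complete within {p85:.0f} days")
```

Fed with real request and completion timestamps from across the toolchain, the same percentile becomes a defensible delivery commitment rather than a gut feel.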
6) Taking DevOps to the next level
You only have to type “DevOps” into Google, and be swept away by the 9.5 million search results, to realize that DevOps is still all the rage. But what does DevOps really mean?
Continuous Integration (CI), Continuous Deployment and Release Automation have optimized the ‘build and deploy’ phase, helping developers and operations collaborate better to deliver products faster. But how do you know whether your organization is delivering the right products?
Continuous delivery counts for little if it’s not serving customers’ needs. What we’ve discovered is that there’s a distinct disconnect between upstream and downstream teams because their tools aren’t interoperable, meaning the productivity benefits of any DevOps transformation are only felt downstream.
As the pressure mounts on CIOs to justify investments in DevOps, Agile, Release Automation and the like, we’ll see organizations begin to bridge the gulf between these two key phases. By tightening collaboration and feedback loops, they will focus on improving both end-to-end speed of delivery and end product quality, all the while implementing an infrastructure to scale their DevOps and other IT transformations.
7) Project to product
As part of this evolution in DevOps thinking, we will see a shift in mindset as software delivery organizations begin to think in terms of “product”, not “project”. Traditional project concepts, like fixed start and end dates, yearly budget cycles and deadlines, will quickly become irrelevant for organizations that iterate in short cycles and deliver continuously.
In customer-obsessed companies, it no longer matters if you met a milestone or finished on time. Instead what matters is whether customers (external or internal) liked what you delivered, and if you achieved the desired business outcome. When thinking “product”, or even better “feature”, you ask a different set of questions:
- “Did this feature move the needle on revenue or a proxy to revenue?”
- “How fast did this feature get delivered?”
- “How much new business value did we create through these features?”
These types of questions sharpen individual and team focus, closely aligning practitioners to the end goal of their work (the product), not just their functional responsibilities.
8) Increased autonomy
As more elements get poured into the software development pot, process improvement is more important than ever. Enterprises at both the organizational and team level are looking at ways to increase autonomy, reduce maintenance and make better use of their resources. We see organizations achieving this through:
- Cloud-based services: many of our customers are transitioning to cloud-hosted ALM tools, moving from on-premises systems to virtual environments and freeing up time to focus on building products
- Self-sufficiency: teams want more control over their processes as they seek to reduce dependencies on other teams. More and more customers want to use their own tools to manage different elements of the software delivery process, e.g. preferring to manage their own MySQL database rather than depend on an Oracle database maintained by a separate central database team.
9) Refreshed view on testing
Just as Agile and DevOps have transformed the dynamics of the development team, testing teams are getting a shake-up too. Testing is moving away from large QA groups performing manual tests in large, clunky tools, as testers move into development teams as functional members to support Agile and DevOps initiatives.
True to the tenets of both methodologies, testing is becoming increasingly automated as teams seek to trim as much manual fat as possible. For any manual testing that is still required, there is a conveyor belt of new-breed, lightweight tools that can be added to the mix.
However, as a word of caution, all these tools and automation come with their own set of challenges, such as maintaining accurate traceability across the process. For instance, when an automated test run via a tool such as Selenium or JUnit fails, it generally happens deep in the bowels of a build pipeline, and developers then manually create a defect to track the failure.
Unfortunately, this still leaves a disconnect between the test, its failures and the original requirement, obscuring the cost and risk profile of the feature associated with that requirement. This level of traceability will be critical as continuous integration and delivery become the norm and speed ratchets up thanks to organizational investment in automation.
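One way to close that gap, sketched below under stated assumptions, is a post-build step that parses the JUnit XML report and automatically files a defect linked back to the originating requirement, so the trace link is created by the pipeline rather than by hand. The tracker endpoint, payload fields and requirement-ID convention are all hypothetical.

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical defect-tracker endpoint; substitute your own tool's API.
TRACKER_API = "https://tracker.example.com/api/defects"

def file_defects(junit_report: str, requirement_id: str) -> None:
    """Parse a JUnit XML report and file a linked defect per failure."""
    root = ET.parse(junit_report).getroot()
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is None:
            continue
        response = requests.post(TRACKER_API, json={
            "title": f"Automated test failed: {case.get('name')}",
            "description": (failure.text or "")[:2000],
            "requirement": requirement_id,  # keeps the trace link intact
        }, timeout=10)
        response.raise_for_status()

if __name__ == "__main__":
    # Assumes each suite is tagged with the requirement it verifies.
    file_defects("build/test-results/report.xml", "REQ-231")
```

With the link in place, a failing build surfaces not just as red in the pipeline, but as quantifiable risk against a specific requirement.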
10) The emergence of practical applications for AI/ML
AI/ML (artificial intelligence / machine learning) continues to intrigue across the industry, but we’ve yet to see any real practical applications emerge. That could be about to change.
A major reason for slow adoption is the lack of data consistency caused by a fragmented toolchain; you can’t teach what you don’t know. Without tool integration there is no automated flow of artifacts between tools, and therefore no consistency or traceability between the data in each tool.
However, as organizations continue to connect their multiple tools to create a consistent flow of data for end-to-end measurement and reporting, they can gain access to a treasure trove of historical data from which to train AI algorithms to measure risk, predict delivery times, classify defects and more.
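As a toy illustration of what that training could look like once the data is connected, the sketch below fits a regression model to predict flow time from a handful of artifact features. The feature set and numbers are entirely invented; a real model would draw on far richer, integrated toolchain data.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Invented features per delivered artifact:
# (story points, linked defects, teams touched) -> flow time in days.
X = [
    [3, 0, 1], [8, 2, 3], [5, 1, 2], [13, 4, 4],
    [2, 0, 1], [8, 1, 2], [5, 0, 2], [13, 3, 3],
]
y = [6, 30, 14, 55, 4, 25, 12, 48]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Forecast delivery time for a new feature request.
print(f"Predicted flow time: {model.predict([[5, 1, 2]])[0]:.0f} days")
```

The hard part isn’t the model; it’s the consistent, traceable dataset behind it, which is exactly what a connected toolchain provides.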
Agree? Disagree? Think we’ve missed anything? Let us know below!