Research — 3 May, 2022
Introduction
Observability is a hot topic among enterprise IT practitioners. The prospect of leveraging observability tools aided by artificial intelligence or machine learning to simplify IT management tasks, including managing more workloads with fewer employees, is undeniably attractive. In practice, extracting this value is challenging. A sprawling array of tools from a growing list of vendors — with overlapping feature sets and use cases, and inconsistent paths toward integration — has resulted in a confusing patchwork of software.
While many observability tools have definable utility for a specific product or service stakeholder, if the operational goal is to reduce cost and complexity, it would be optimal to have a smaller assortment of tools with broader functionality. On the other hand, specialized tools, such as those for serverless application monitoring, may offer features not available in more general-purpose application and infrastructure performance management, or AIPM, offerings. This presents a significant challenge, since choosing the wrong tool for the job may result in slower incident detection and longer remediation times. Consequently, many of these software tools will end up marginalized or shelved if they do not meet real business needs. Half of all respondents to 451 Research's Voice of the Enterprise: Storage, Transformation 2021 survey indicate that at least one of the management or monitoring tools their organization purchased has since turned into "shelfware" — software that is not deployed or not widely used.
Extracting the full value of an observability tool requires thoughtful planning and integration. This includes defining or detecting the environment you wish to monitor, ensuring that the tool in question integrates well with other resources in the existing service catalog — yet also provides differentiated value — and ensuring that the operational budget supports use of the tool. These are all vital considerations in the discovery and evaluation phase. Successful deployment of observability software also requires conscious work from the vendors to ensure that the ongoing experience meets practitioners at the level of technical skill and in the environments with which they have experience. Such efforts could include improving the breadth and depth of product documentation, providing integration and plug-in capabilities for other products in the customer's ecosystem and ensuring that pricing models scale equitably as companies grow.
Observe your budget before observing your workloads
The cost of using an observability tool is the most frequently cited cause of a tool winding up as shelfware. Thirty-seven percent of our Voice of the Enterprise survey respondents indicated that a purchased tool wound up as shelfware because it was simply too expensive to use. While some observability vendors are perceived as comparatively expensive, the right fit for a given organization is likely to change with growth. Cloud computing adoption has essentially broken the linear relationship between company size and IT spending. For example, a startup with fewer than 250 employees managing over $500,000 in monthly cloud spending may find per-user pricing advantageous. On the other end of the spectrum, an enterprise with over 10,000 employees may find per-user pricing unpalatable.
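To make that break in linearity concrete, the sketch below compares a flat per-user fee against consumption-based pricing tied to monitored cloud spend for the two profiles described above. All rates, head counts and spend figures are hypothetical illustrations, not drawn from any vendor's price list.

```python
# Hypothetical comparison of per-user vs. consumption-based observability
# pricing. The rates below are illustrative assumptions, not vendor quotes.
PER_USER_MONTHLY = 70.0   # assumed flat monthly fee per licensed user
CONSUMPTION_RATE = 0.03   # assumed 3% of monitored monthly cloud spend

def monthly_cost(licensed_users: int, cloud_spend: float) -> dict:
    """Return both pricing outcomes for a given organization profile."""
    return {
        "per_user": licensed_users * PER_USER_MONTHLY,
        "consumption": cloud_spend * CONSUMPTION_RATE,
    }

# Startup: small head count, heavy cloud usage.
print(monthly_cost(licensed_users=50, cloud_spend=500_000))
# {'per_user': 3500.0, 'consumption': 15000.0}

# Large enterprise: many licensed users, proportionally lower cloud spend per seat.
print(monthly_cost(licensed_users=5_000, cloud_spend=2_000_000))
# {'per_user': 350000.0, 'consumption': 60000.0}
```

Under these assumed rates, the cloud-heavy startup comes out well ahead on per-user licensing, while the large enterprise would see the per-user bill dwarf a consumption-based one.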
Naturally, there are contextual factors to this. The proportion of stakeholders requiring a license varies greatly between startups and enterprises. Likewise, variance in that proportion is natural between industries — for example, retail versus software — assuming equivalent head counts. There is also the truism of "there are no bad products, there are only bad prices." While disadvantageous pricing does not make a product bad, it can effectively curtail adoption among small startups or particularly cost-sensitive enterprises, limiting the overall potential for adoption.
Technology analysis meets utilitarianism
Given a general shift among enterprises toward vendor consolidation, the risk of and motivation for "feature creep" among observability vendors is all too real. A quarter of our survey respondents indicated that a management/monitoring tool became shelfware because it did not deliver the value promised by the vendor. The potentially uncomfortable reality is that building a single perfect solution that uniformly fits the needs of all enterprise users is a practical impossibility — a product that prioritizes feature richness inevitably suffers from issues in feature discoverability and usability.
Accordingly, 28% of survey respondents cited tools being too difficult to use as a reason for shelving them. From a practitioner's viewpoint, there is a vast difference in utility between a feature built to satisfy a marketing requirement and a feature built as part of a strategic direction for the product. This can also have an invisible effect on the product itself: the morale of a developer tasked with building a feature in a purely perfunctory manner is unlikely to be high. In other words, if a product was ungratifying to build, it is likely ungratifying to use.
Thus, the guiding principle for vendors is essentially a form of utilitarianism — build the product that achieves the greatest good for the greatest proportion of your target audience. This target audience may be easy to define for startups but can become muddled at greater scale. The target audience may also reasonably differ for different products within a portfolio.
Principles of multitool integration
Supporting integration with tools, observability or otherwise, already in an organization's service catalog is a reality that observability vendors must face. Typically, observability vendors offer some degree of integration capability — at a bare minimum, AIPM tools usually offer some integration with alerting tools such as PagerDuty Inc. or Atlassian Corporation PLC's Opsgenie, or with a ticketing system such as ServiceNow Inc., to convey that an issue is occurring and requires attention.
In our survey, 28% of respondents indicated that a newly purchased tool's lack of integration with those already in use led to the tool becoming shelfware. There is a complex web of open-source, proprietary — either self-managed or cloud-delivered — and internally developed software within any given organization. While out-of-the-box integration with popular tools is important and perhaps preferable, application programming interfaces that empower developers to build their own integrations are also important.
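As a rough sketch of the "bare minimum" alerting hand-off described above, the example below pushes a trigger event to PagerDuty's Events API v2 from an internally developed monitor. The routing key, summary and source values are placeholders; in practice this hand-off is usually provided by the observability vendor's built-in connector rather than hand-rolled code.

```python
# Minimal sketch: forwarding an alert to PagerDuty's Events API v2.
# The routing key and alert details below are placeholders.
import requests

def trigger_pagerduty_alert(routing_key: str, summary: str, source: str) -> None:
    response = requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": routing_key,  # key from a PagerDuty service integration
            "event_action": "trigger",   # open a new incident
            "payload": {
                "summary": summary,      # human-readable description of the issue
                "source": source,        # affected host or service
                "severity": "critical",
            },
        },
        timeout=10,
    )
    response.raise_for_status()

trigger_pagerduty_alert(
    routing_key="YOUR_INTEGRATION_KEY",
    summary="p99 latency above SLO on checkout service",
    source="checkout-prod",
)
```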
Conclusion
Building focused, interoperable products for practitioners, with their needs in mind, is the surest way to convert users into product evangelists. Product documentation is as important as the underlying code powering the product: if the code works but is difficult to integrate with an existing environment, it will not get far. Therefore, the depth and breadth of documentation are significant factors. While a product can appear to meet requirements during evaluation, poor or out-of-date documentation can significantly hinder utilization after purchase.
Adoption of OpenTelemetry, the open-source observability standard, among AIPM vendors makes it easier for practitioners to test and evaluate various vendors and easily adopt complementary products in tandem with their existing environments without needing to integrate a vendor-specific proprietary agent. While there are other positive outcomes associated with OpenTelemetry, support for the standard can act as a foot in the door for small and large vendors alike, reducing the time needed to begin evaluating a product.
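As a brief illustration of that "foot in the door," the sketch below instruments a service with the OpenTelemetry Python SDK and exports spans over OTLP. The service name and collector endpoint are placeholder assumptions; the point is that evaluating a different AIPM vendor is largely a matter of repointing the exporter rather than installing a proprietary agent.

```python
# Minimal sketch: vendor-neutral tracing with the OpenTelemetry Python SDK.
# The endpoint and service name are placeholders for this illustration.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Point the OTLP exporter at whichever vendor or self-hosted collector is
# under evaluation; swapping vendors means changing only this endpoint.
exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("process-order"):
    # Application logic would run here; the span is batched and exported
    # to the configured backend when it completes.
    pass
```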
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.
451 Research is part of S&P Global Market Intelligence.