Beating the 4%: A Pragmatic Approach to AI Success

This blog is a summary of a session moderated by Andy Brown, CEO of Sand Hill East, and delivered at the recent AI Networking Summit in NYC. To view the entire presentation, visit here.

The world of Artificial Intelligence is abuzz with transformative potential, yet a stark reality looms large: an MIT paper claiming a 96% failure rate for AI projects. This panel discussion, “Beating the 4%,” featuring industry veterans James Walker (Chief Admin Officer, DXC), Phil (Head of AI Innovations, Zscaler), Sean Finnerty (Merck), and Sean O’Donohue (CTO, Security Benefit), moderated by Andy Brown (CEO, Sand Hill East), offers a refreshing and practical perspective on navigating this landscape. The core message: success in AI isn’t about chasing shiny new tech, but about meticulous planning, data rigor, and a clear focus on measurable outcomes.

The Undeniable Primacy of Data

The panel unanimously stressed the foundational role of data. Sean O’Donohue emphasized that data is “certainly fundamental to everything about AI,” envisioning a future in which the application layer is disintermediated and AI tools interact directly with a robust data layer. Getting there demands strong technical capability with APIs and other external data sources, which is what keeps organizations adaptable.

Sean Finnerty echoed this concern, stating that data is his “number one thing that I’m worried about.” He highlighted a common organizational pitfall: “reinventing the wheel” by spending heavily to reassemble data for each new AI project instead of investing once in well-prepared data structures that every project can reuse. His priority is to invest more in getting data right to maximize the return on every other AI investment.

Phil brought the discussion to scale, noting that Zscaler processes “a quarter of a quadrillion data points a day.” That immense volume underscores the challenge of data engineering, which he posits is “harder than data science.” Beyond sheer scale, Phil emphasized data quality, safety, and privacy, particularly when dealing with customer data and PII. The need to “take the friction out of…access[ing] that data” for AI initiatives is paramount.
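To make that friction-and-safety point concrete, here is a minimal Python sketch of one common pattern: scrubbing obvious PII from records before they reach any AI tooling. The regex patterns, field names, and placeholder tokens are illustrative assumptions, not Zscaler’s actual pipeline; production systems would rely on dedicated PII-detection services.

```python
import re

# Hypothetical illustration only: mask obvious PII before records reach an AI
# pipeline. The patterns below are simple assumptions for the sketch; real
# deployments use dedicated detection tooling with far broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SSN.sub("[SSN]", text)
    return text

def prepare_record(record: dict) -> dict:
    """Return a copy of the record that is safer to hand to downstream AI tools."""
    return {key: redact(value) if isinstance(value, str) else value
            for key, value in record.items()}

print(prepare_record({"note": "Reach Susie at susie@example.com, SSN 123-45-6789"}))
# {'note': 'Reach Susie at [EMAIL], SSN [SSN]'}
```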

James Walker illustrated the pervasive issue of “Susie’s spreadsheet” – critical business processes relying on undocumented, brittle data chains. This lack of data provenance, reliability, governance, and a single source of truth is a major reason why AI projects fail; “they’re not AI problems,” but rather fundamental data challenges.

Outcomes Over Optics: The Path to Value

The panel strongly advocated for an outcome-driven mindset. Sean O’Donohue argued against a scattergun approach of “10, 20, a hundred projects,” urging instead a focused domain approach, such as operations or sales, to demonstrably “show the additive value.” He differentiated AI automation from traditional RPA by its adaptive and learning capabilities, which enable continuous process improvement. AI can also significantly enhance knowledge access, as seen in their call center project, where agents get “complete answer[s] to a question straight away” in real time.

James Walker noted the shift from traditional AI/ML to generative AI, highlighting its creative nature. However, he cautioned against using Gen AI for processes requiring “a hundred percent deterministic output,” such as invoicing. The successful internal projects at DXC have been in creative areas like marketing videos, where some level of randomness is acceptable. For critical workflows, “you have to be absolutely rigorous about understanding how the process works… and what the outcome that you want is.”

Sean Finnerty reinforced the adage: “You shouldn’t automate a process that sucks.” Prioritizing process optimization before applying AI is crucial. His team at Merck focuses on “lean[ing] out the teams and structur[ing] the teams” to reduce friction, bringing everyone along on the journey. He also emphasized the value in “smaller, easier stuff,” like AI-powered AIOps, which can significantly reduce mean time to resolution and unlock millions in operational value. James added that Gen AI itself can be a powerful tool for process mapping and identifying inefficiencies.

Andy Brown concluded by illustrating the “Venn diagram” of AI project failure: overstretching technology capability due to “the enthusiasm of the CEO” or the “shininess of the project,” without sufficient focus on real-world impact. The collective wisdom: getting into the “bowels of the organization” to understand processes and data is half the battle.

Ensuring AI Delivers the Right Answers

To ensure AI provides accurate answers, the panel stressed the importance of rigorous testing. James Walker highlighted the advantage of AI coding assistants in generating “many, many, many more test cases” and synthetic data, enabling higher confidence than human-driven testing alone. Phil reiterated the value of “test-driven development” and measuring the “quality of the outputs” against gold standard data. Sean O’Donohue pointed to the use of knowledge graphs to improve the “accuracy of the output” in knowledge-based solutions, an essential factor in regulated industries.
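To make the “gold standard” idea concrete, here is a minimal Python sketch of that kind of evaluation harness: score a model’s answers against known-good responses before trusting it in production. The sample questions, expected answers, and exact-match metric are assumptions for illustration, not any panelist’s actual test suite.

```python
# Illustrative gold-standard evaluation: the questions, answers, and metric
# below are invented for this sketch; real suites use much larger sets and
# fuzzier semantic scoring.
GOLD_SET = [
    {"question": "What is the claim filing deadline?", "expected": "90 days"},
    {"question": "Which form starts a rollover?", "expected": "Form IR-7"},
]

def exact_match(predicted: str, expected: str) -> bool:
    """Simplest possible check: does the expected answer appear in the output?"""
    return expected.lower() in predicted.lower()

def evaluate(model, gold_set=GOLD_SET) -> float:
    """Return the fraction of gold-standard questions the model answers correctly."""
    hits = sum(exact_match(model(item["question"]), item["expected"])
               for item in gold_set)
    return hits / len(gold_set)

# Usage with a stub standing in for any deployed assistant:
stub = lambda q: "The deadline is 90 days from the date of service."
print(f"accuracy: {evaluate(stub):.0%}")  # 50% on this two-item set
```

AI coding assistants can expand such a set quickly by generating additional test cases and synthetic data, which is exactly the confidence boost James Walker described.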

The One Thing: Ask the Right Question

As a parting thought, Phil urged the audience to “ask the right question”: is AI truly the solution, or are you just seeking a use for it? Sean Finnerty advised contemplating value from the start, keeping it simple, and building on quick wins. Sean O’Donohue encouraged a “one bite at a time” approach, avoiding the temptation to “boil the ocean.” Finally, James Walker channeled Abraham Lincoln’s advice on preparation: “Spend 20% of the time defining the outcome and the success criteria. Spend 60% of the time on the plan and 20% of the time on… having fun with AI.”

By prioritizing data integrity, focusing on measurable outcomes, and employing a disciplined, iterative approach, organizations can move beyond the 96% failure rate and join the successful 4% in harnessing the true power of AI.

Author's Bio

Joann Varello

Head of Marketing, ONUG