Does Your Track Record Beat AI Project Success?

Blimey. I hope so.

I read, fingers over my eyes, through stats published on AI project implementation results, drawn from IBM's survey of 2,000 chief execs.

Their press release announced eleven bulleted findings. Here's the first of them:

"Surveyed CEOs report that only 25% of AI initiatives have delivered expected ROI over the last few years, and only 16% have scaled enterprise wide".

Return on investment from only one in four projects. Further rollout deemed apt in a mere one in 6¼ cases.

Not much cop, then. As those from my hometown would say.

There's your instant benchmark for how any AI project involving your outfit may fare.

Beyond this, there's the broader question for projects in general.

The gaping lacuna in the Big Blue analysis is how fresh AI initiatives compare with all others.

Given that they trumpet the finding that the 'expected growth rate of AI investments will more than double over the next two years', are chief execs wantonly throwing mud at walls, knowing full well three-quarters of it won't stick?

Have we paused to ask our customers what ROI they gained from partnering with us?

Of the subsidiary-style environments we've sold into, how many have seen our work bloom across the wider organisation?

We really ought to show far more than half yielding positive numbers, with growth from a bare minimum of two in five.

Whether you have a shiny new AI angle to sell or not though, these figures bear repeating.

Surely we better them. If so, that's a significant balm for the cavils of prospective buyers.

And how might we do so?

Consider piloting.

How many such projects does your prospect try? How many see broader adoption? Do they even track such things?

I'd actually be surprised if companies had these kinds of figures to hand. For one, project success depends on a plethora of factors, some completely out of your control, especially where subjective skew creeps in. Such a tally might also be shunned for the blame culture it encourages. Or dismissed as data for data's sake.

With AI, the number of initiatives is likely to be few, visible, and fresh in the memory, making figures easier to cite. Yet does their inevitably short half-life permit true conclusions?

I knew I was onto something with my own business creation when my client's sister companies would get in touch.

Consider starting small.

Could you sell with a 'start-small' approach? Closely aligned to the vaunted #FailFast movement, perhaps.

It's separate from piloting in that, rather than a shrunken version of the full wares in a single place, here we start everywhere, with a subset of features alone.

With tech-cum-apps, this can be an overlooked yet viable option. It brings implementation intricacies, and if your appeal rests on unleashing hitherto untapped synergies, the attraction can dim. But choose the sole function or workflow right, and you gain much-welcome control preceding the desired incremental spread.

Consider an actual joined-up improvement.

It almost goes without saying, but in whichever aspect you pitch progress, it must be connected to the whole in some real way.

Particularly given their own chosen secondary headline:

"Half of surveyed CEOs report that rapid investment has resulted in disconnected technology within their organization".

This might well put flesh on where AI purchases presently fall down. The media's full of projects greenlit purely to tick the AI box. I read those pieces and wonder which projects got canned so that a flashy AI one could be paid for. With finite resource available, what are the priority rankings for authorisation? And how do you map, at the very least, the up- and downstream impacts?

It holds echoes of the ol' 'nobody ever buys efficiency' lament.

How many such buyers ask themselves, 'how will this help my customers?'

You could find yourself asking them the very same.
