
When we talk about AI, we typically focus on one metric: productivity. That metric has been used in every technology announcement since the beginning of the current tech age.

Going back to when I first became an external tech analyst, during the ramp-up to the launch of Windows 95, the argument was that the product would improve productivity so much that it would deliver a return on investment (ROI) within a year of purchase. It turned out that during the first year, the product broke so often that its initial impact on productivity was negative rather than positive.

AI’s ROI is potentially far worse. Ironically, much of our problem this century has not been a lack of productivity or performance but poor decision support.

Last week, I attended a Computex prep event. As I watched the presentations, I noticed a familiar undercurrent: productivity. I remain concerned that if we improve speeds significantly but do not also improve the quality of the related decisions, we’ll be making mistakes at machine speeds, which may not be survivable.

Let’s talk about that this week, and we’ll close with my Product of the Week, which is the airline I just took to Taiwan. It was so much better than United, which I usually use for international trips, that I figured I’d point out why so many non-U.S. airlines are significantly better than U.S. carriers.

Productivity vs. Quality

I am ex-IBM. During my tenure there, I was one of a small group that went through IBM’s executive training program. One of the principles that was driven into all the employees was that quality mattered.

The most memorable class I took in this regard was not from IBM but from the Society of Competitive Intelligence Professionals (SCIP). Its focus was speed vs. direction. The instructor argued that most companies focus on speed first when it comes to new processes and technology.

He maintained that if you do not focus first on direction, you will end up going in the wrong direction at an ever-faster pace. If you do not first focus on defining the goal, speed will not help you. It will make things worse.

While at both IBM and Siemens as a competitive analyst, I had the frustrating experience of providing decision support only to have our recommendations not just ignored but actively fought. The result was catastrophic losses and the failure of several groups I worked for.

The reason was that executives would rather appear to be right than actually be right. After a time, my unit was disbanded (a trend that cut across the industry) because executives didn’t like the embarrassment of being called on the carpet after a catastrophic failure for ignoring well-founded advice. They had dismissed that advice because their “gut” told them their predetermined direction must be better, even though it repeatedly wasn’t.

After I stopped working inside companies and became an external analyst, I was amazed to find that my advice was more likely to be followed because executives didn’t feel my being right created a threat to their careers.

From inside the company, they considered me a risk. From outside, I wasn’t, so they were more willing to listen and follow a different strategy because they didn’t feel like they were competing with me.

Executives have access to massive amounts of data that should enable them to make better decisions. However, I still see too many who make poorly founded decisions that result in catastrophic outcomes.

Therefore, AI should be focused on helping companies make better decisions, and only then should it be focused on productivity and performance. If you focus on speed without ensuring the decision behind the direction is the right one, you are more likely to go in the wrong direction much faster, resulting in both more frequent and more expensive mistakes.

Decision-Making Challenges

From our personal to our professional lives, we can make decisions faster with AI, but the quality of those decisions is degrading. If you look back at Microsoft and Intel, two of the principal backers of the current AI technology wave, you’ll see that for much of their existence, particularly this century, both firms made bad decisions that cost each of them one or more CEOs.

My old friend Steve Ballmer was cursed by bad decision after bad decision, which I still think was as much the result of the person or people supporting him as anything inherent to the man himself.

The guy was top of his class at Harvard and arguably the smartest person I’ve ever met. He is credited with the success of the Xbox. Still, after that, despite husbanding Microsoft’s financial performance well, he failed with the Zune, Windows Phone, and the attempted Yahoo acquisition, crippling Microsoft’s valuation and resulting in his being fired.

Along with a few other analysts, I was initially assigned to help him make better decisions. However, we were all sidelined almost immediately, even though I wrote email after email arguing that if he did not improve the quality of his decisions, he was going to get fired. Sadly, he just got angry at my attempts. I still think of his failure as my own, and it will haunt me for the rest of my life.

This problem mirrors what happened to John Akers at IBM, who was surrounded by people who filtered out information from those of us closer to the problems. While my efforts inside IBM to address the company’s problems were rewarded, the influence of people like me, and there were a lot of us, was so diminished that Akers lost his job. It was not because he was stupid or didn’t listen. It was because we were blocked by executives who had his ear and didn’t want to lose the status that came with that access.

Thus, the information that both CEOs needed to be successful was withheld from them by people they trusted, people who were more focused on their own status and access than on ensuring the success of the companies they worked for.

The AI Decision Problem Is Two-Fold

First, we know that the results from AI efforts, while impressive in capability, are also frequently inaccurate or incomplete. The Wall Street Journal recently evaluated the top AI products and found that both Google’s Gemini and Microsoft’s Copilot were, with some exceptions, the lowest quality, even though they should be the most widely used.

In addition, as I pointed out above, even if they were far more accurate, given past behavior, executives might not use them, preferring their gut to anything a system told them. Although this may reduce the impact of the quality issues with these products, the result is a system that either cannot or will not be trusted.

The current quality issues help support and reinforce the bad behavior that existed prior to the current generation of AI, so even if the quality problems with AI are corrected, it will still underperform its potential to make businesses and governments more successful.

Wrapping Up

Right now, our need for speed (productivity, performance) is far less pressing than our need for the technology providing this benefit to be both trusted and worthy of that trust. But even if we were to fix this problem, Argumentative Theory suggests that the technology will not be used for better decision support, given our general tendency to see internal advice as a threat to our jobs, status, and image.

There is some truth to this position: if people know your decisions are based on AI advice, they might eventually conclude that you are redundant.

We need to stop focusing on AI with productivity as a primary goal and focus instead on far higher quality and providing better decision support so that we are not overwhelmed with bad decisions and advice at machine speeds.

Then, we need to actively train people to accept valid advice, which will more effectively allow us to advance at machine speeds rather than be buried by bad decisions at that same speed. We also need to reward people for their effective use of AI, not make them feel that this use will put their jobs and careers at risk.

AI can help make a better world, but only if it provides quality results and we use those results to make our decisions.

Starlux Airlines

I’ve nearly stopped flying United Airlines due to bad experiences that have ranged from being stranded in remote airports after canceled flights to paying for first-class tickets only to end up in coach, the result of poor operations and an unwillingness to ensure that passengers delayed by the airline’s own mistakes still reach their destinations on time.

My experience with non-U.S. carriers has been far better. On my trip to Computex last week, I took Starlux, a Taiwanese carrier. The experience on this airline was far superior.

In business class on United, I often feel like less of a customer and more of an annoyance. On Starlux, the crew went out of their way to make sure my trip was comfortable and made my personal care a priority. When I asked for a special meal, they went out of their way to supply it. When I struggled with the Wi-Fi, they helped me until the problem was resolved and seemed genuinely intent on making my experience exemplary.

I travel a lot in my career and dread it, which is sad because when I was a kid, I looked forward to every time I flew on an airplane. When I traveled on Starlux, I regained some of that love for flying and found I was looking forward to the flight home rather than dreading getting on the plane.

Starlux made my 13-hour flight fun, and I should point out that I’ve noticed the same thing with other non-U.S. carriers, like Singapore Airlines and Emirates, among others. So, Starlux Airlines is my Product of the Week.
