
Why AI should augment, not replace, humans

Rob Enderle | March 27, 2017
Your workforce may not yet be ready for artificial intelligence. First comes trust, education and training.


One of the more interesting exchanges at IBM InterConnect 2017 was between IBM CEO Ginni Rometty and Salesforce CEO Marc Benioff [Disclosure: IBM is a client of the author]. Benioff noted that both executives had recently gone to Washington to address a shared concern: the U.S. workforce isn't ready for artificial intelligence (AI). The two companies' AI platforms, IBM Watson and Salesforce Einstein, are now partnered. The problem is twofold. Both firms are currently focused on augmenting people, but if people aren't trained to work with AI, replacement may become the easier path. That path creates a massive unemployment problem, and unemployed people not only don't buy products, they tend to revolt.

Let’s chat a bit about what it might mean to prepare the workforce for AI.  

 

A matter of trust

At the heart of the problem is the fact that we simply don't trust systems to the degree we'll need to for AI assistants to be truly helpful. We came into the workforce with concepts like intuition and "gut" driving our decisions. Even though we are surrounded by data, the actual use of information grounded in valid data seems to decrease, not increase. I've seen this from executive after executive, and it's showcased by our current U.S. President: ignore the data and make an orthogonal decision that seldom ends well, largely because the decision-maker doesn't trust the data underlying the advice they've been given.

There have actually been good reasons for this distrust, because the quality of the data has been all over the map. In addition, those supplying the data may have their own agendas, which often have little to do with where the unaltered data would otherwise point.

Addressing this is a two-step process. First, there has to be an increased effort to ensure that the data is both complete and unbiased, and that the analysis is based entirely on this reliable data. The foundation for trust has to be trustworthy results. The second, and equally critical, step is to reeducate decision-makers to this new reality in which the resulting information can be trusted. If the second step is done before the first, it will only deepen decision-makers' distrust of the advice these AIs provide and move the ball in the wrong direction.

 

Effective coupling

This was partially showcased in H&R Block's on-stage presentation at InterConnect. A tax preparer at Block now works with two monitors. One displays the information that is typically part of the tax preparation interview. The other runs Watson, acting almost as a peer, providing real-time advice as the form is filled out and suggesting items that will improve the return, either by increasing the deduction or by assuring accuracy. The result is a team of a human and an AI that collectively provides a service better than either could deliver separately.

 
