Talking to Myself with AI in Two-Way SMS

[Image: A man stares thoughtfully at a miniature version of his own head in his open hand, question marks floating in the background.]

Recently, I had a conversation with myself about AI. I say that a bit tongue in cheek, but in reality I used our conversational SMS platform (Lead Sticker) and had a two-way text discussion in which I was the person responding on both sides. In fairness, I used the AI function of Lead Sticker to see how I might respond to a customer asking me questions about AI via text. In the interest of full disclosure, the responses below, in white, are AI generated without edits other than the first prompt. Lead Sticker applies a “human in the middle” process that allows me, as the human, to fully edit, modify and change AI responses before the text is sent to the customer. In this case, however, I have posted the exact AI response. Follow along if you will…

So I can talk to myself, right? Big deal. Actually, I think this is more interesting than it appears at first glance.

• Notice that the topic is fairly meaty. The first prompt is open-ended, not the yes/no kind of question or response you might typically experience with a chatbot.

• It’s conversational. This exchange is the antithesis of the type of interactions you might have with your bank, e.g., “Would you like to check your account balance?”

• My responses (in green) are lengthier than one-word answers. By comparison, if you’re talking to a chatbot and give the expected answer of “Yes” to a question, the chatbot may then continue down its preformatted dialogue. Or, if the chatbot doesn’t understand, you might get a response like this: “I’m sorry I didn’t quite get that, please tell me again.”

The SMS text dialogue captured above is obviously very different from the predictable, decision-tree style found in many chatbot responses (sketched below). It demonstrates the value of AI when the subject and conversation are lengthy and/or meaty. In contrast, in the lead-up to this conversation, I discovered that short answers and short conversations led to little or no response, even when AI was prompted or initiated.
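
To make the contrast concrete, here is a minimal sketch of that decision-tree style. The script and keywords are hypothetical illustrations, not taken from any real chatbot:

```python
# A hypothetical decision-tree bot: every incoming text must match a scripted
# branch, and anything unexpected falls through to the canned apology.
SCRIPT = {
    "yes": "Great! Would you like to check your account balance?",
    "balance": "Your balance is $1,234.56. Anything else?",
    "hours": "We're open 9am-5pm, Monday through Friday. Anything else?",
}

def decision_tree_reply(incoming_text: str) -> str:
    key = incoming_text.strip().lower()
    # No understanding here, only exact keyword matching against the script.
    return SCRIPT.get(key, "I'm sorry I didn't quite get that, please tell me again.")

print(decision_tree_reply("Yes"))                      # follows the preformatted dialogue
print(decision_tree_reply("What do you think of AI?")) # falls through to the apology
```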

On a recent trip to Utah I stopped and visited with Dr. David Wood of the BYU Marriott School of Business. Along with over 1,000 reviewers and professors from Arizona State University, the University of Duisburg-Essen and the Connor Group, Dr. Wood and friends have published a Generative AI Governance Framework. This Governance Framework is a very useful document, not overly long or technical. It’s well researched and contains the input and thinking of a wide range of individuals and organizations. It addresses many of the issues of AI, including data repositories, operational management, technology issues and transparency. It even brings to the forefront some of the human, ethical and societal issues that should be considered. So why is this relevant to SMS text messaging and my thoughts today?

It’s pretty straightforward – of the multiple recommendations the Generative AI Governance Framework makes, I’ll highlight two:

• Implement “human-in-the-middle” policies for sensitive disclosures

• Ensure transparent and traceable GenAI decision making

In the context of the two-way text messaging conversation highlighted above, do we have a “human-in-the-middle” experience? Yes. Lead Sticker uses AI to generate responses and presents those responses to the user on his or her screen. The user can then edit at will before hitting Send. In fact, the user maintains full control of the conversation, and AI is only invoked when the user initiates it. It is not automatic. If the user does not initiate, AI remains silent.
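
A minimal sketch of that flow might look like the following. The function names are hypothetical illustrations of the process described above, not Lead Sticker’s actual API:

```python
from typing import Callable

def generate_ai_draft(customer_message: str) -> str:
    """Stand-in for a call to a GenAI provider (OpenAI, Meta, Google, etc.)."""
    return f"Great question about {customer_message!r} -- here are my thoughts..."

def reply_to_customer(
    customer_message: str,
    agent_wants_ai: bool,
    agent_edit: Callable[[str], str],
    send_sms: Callable[[str], None],
) -> None:
    # Step 1: AI runs only if the agent initiates it; otherwise it stays silent.
    draft = generate_ai_draft(customer_message) if agent_wants_ai else ""
    # Step 2: the agent can fully edit, modify, or replace the draft on screen.
    final_text = agent_edit(draft)
    # Step 3: nothing reaches the customer until the human hits Send.
    send_sms(final_text)

# Example: the agent accepts the AI draft unchanged, as in the exchange above.
reply_to_customer(
    "AI in two-way SMS",
    agent_wants_ai=True,
    agent_edit=lambda draft: draft,            # no edits this time
    send_sms=lambda text: print("SENT:", text),
)
```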

Does Lead Sticker provide transparency and traceable records of the decisions the GenAI is making? That’s a bit of a tricky question. Do we capture the full two-way conversation between customer and agent in the app? Yes. Can we identify what portion of that conversation is AI generated and what is human generated? Maybe. If AI is initiated, as in the example above, and the response is not edited, then yes, we have a traceable record. However, if the agent edits or changes the AI response before sending it to the customer, then it is a blended response and no, at present we are not able to track those changes. At the very least, the changes are not visible to the agent and are not recorded in the agent-facing customer record; tracking them would require something akin to the red-line editing available in Microsoft Word. In addition, the GenAI engine is provided to us by Meta, Google, OpenAI, Microsoft and a host of other tech companies. Do they give us access to the large language models, algorithms, code base and other details that would give us visibility and a traceable record of the decisions GenAI is making? No, definitely not.
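
One way to close part of that gap, assuming the platform stored the unedited AI draft alongside the message actually sent, would be an audit record like the hypothetical sketch below, which uses a diff to approximate the red-line record described above:

```python
import difflib
from typing import Optional

def classify_provenance(ai_draft: Optional[str], sent_text: str) -> dict:
    """Hypothetical audit record: was the sent message AI, human, or blended?"""
    if ai_draft is None:
        return {"source": "human", "diff": None}   # AI was never initiated
    if sent_text == ai_draft:
        return {"source": "ai", "diff": None}      # unedited AI draft: fully traceable
    # A blended response: keep a red-line style diff, akin to Track Changes in Word.
    diff = list(difflib.unified_diff(
        ai_draft.splitlines(), sent_text.splitlines(),
        fromfile="ai_draft", tofile="sent", lineterm=""))
    return {"source": "blended", "diff": diff}

print(classify_provenance("Thanks for reaching out!", "Thanks for reaching out!")["source"])        # ai
print(classify_provenance("Thanks for reaching out!", "Thanks again for reaching out!")["source"])  # blended
```

Of course, a record like this only addresses the agent-side trail; it does nothing for the visibility the model providers withhold.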

In light of the above governance recommendations, we’re in pretty good shape on the “human-in-the-middle” recommendation, but there are serious gaps in ensuring transparent and traceable GenAI decision making. In fairness, this is likely the same circumstance for every company using GenAI tools developed by OpenAI, Microsoft, Meta, Google and others. And what about the SMS platform you’re using today? How well does it match up with the recommendations of the Generative AI Governance Framework?

In some ways the idea of transparent and traceable decision making is contradictory to the fundamental basis of artificial intelligence. In theory, artificial intelligence is ever evolving and learning on its own, and we have limited purview into how it learns and what decisions it makes. Not unlike human decision making, some might argue. (Of note, Meta’s Llama 3.1 has 405 billion parameters.)

The two-way SMS example above is simple, slow and easy to understand. However, when you step into the highly technical world of marketing automation, variable content, and the dynamic generation and assembly of content from real-time click-stream data and customer responses on e-commerce sites, the effort to provide transparent and traceable GenAI decision making becomes a daunting challenge. Plug this same type of GenAI adoption into medical practice, industrial plants like oil and gas refineries, or even air traffic control, and you end up with a bit of a black-box scenario.

I know many people are uncomfortable with artificial intelligence. I for one am not. I believe AI will bring about tremendous benefits for humanity. It will also bring about challenges, circumstances and nefarious uses that will test even the best that society has to offer.

Dr. Wood made an astute comment in our conversation: as a society, we want finite, predictable answers, yet we have to get comfortable with ambiguity. AI will challenge that very notion and expectation. In the meantime, I plan to embrace and admire the new tools that AI will bring forward.

Your thoughts? Text me now: 206-312-2129
