Can companies embed human values into chatbots?

Let me get this straight: leading AI experts want us to align chatbots and other AI systems with human values, but they don’t want us to create artificial intelligence that appears to be human.  Isn’t that a bit of a contradiction?

I recently listened to a podcast from an AI consulting group in which the hosts strongly argued that it is necessary to align AI with the values of humanity.  Apparently this was a sticking point for Elon Musk and part of why he parted ways with OpenAI.  However, their point about aligning human values raised multiple questions for me, and I began to think about how companies might embed company values into chatbots.

Over the years I have been involved in defining and rolling out brand identities, repositioning companies and leading management teams in brand definition and development work.  Sometimes this has taken months of effort to get the leadership team into the same space with the same understanding.  Common areas of work include company Vision, Mission, Personality and Values, to name just a few, and then translating that work into Messaging.  You know: Just Do It!

As I listened to the podcast on AI and the need to align the values of humanity with AI, two thoughts came to mind:

·        First, this is a pipe dream.  Putin, Trump, Xi Jinping and Kim Jong Un may share common values, at some level, but their ambitions and uses of AI likely fall into very different camps.  The whole idea put forth by Isaac Asimov years ago that robots [read: AI] should “do no harm” runs harshly into the reality of military use of AI.

·        Any company that wants to embed company values into AI must develop and use proprietary models to shape the chatbot experience.  If not, then a company using chatbot services from a third party has likely outsourced, or abdicated, its values to that outside party.

Setting those two points aside, let me suggest that training an AI chatbot with company values might create more consistency, but you will also lose some of the human touch.

As I have worked with two-way SMS embedded with AI, using a human-in-the-middle process, it has been brilliantly demonstrated that consistency is not necessarily a strong suit of humanity.  We get angry, we show compassion, we use sarcasm and humor, and we pause to emphasize a point.  The very human nuances of conversation make the experience, well, human.  Humans do not execute flawlessly every time.  We are filled with errors and gaffes and apologies, and that is part of the warmth, compassion and attraction we feel with each other.

Having said that, here are a few thoughts on what to consider when embedding or aligning company values with AI-dependent chatbots:

1.      Define your company values.  If your company has not yet done the hard work of defining, publishing, training and communicating company values, this is a must.

2.      Articulate your company personality.  Are you built to win at all costs?  Do you want fairness to reign supreme?  Or is transparency the one thing that sets you apart?  These may sound like values, but talking about the company personality in this context is very helpful and translates into chatbot language, tone and response.  Here are three examples from my own career, in the words people have used to describe companies I’ve worked with or worked for: “those guys are Utah nice,” “they’re brusque, like New Yorkers,” and “they’ll never tell you no, but that doesn’t mean yes.  It just means you won’t get a direct answer.”  How do you build each of those personality differences into a chatbot?  Well, that takes real effort.

3.      Model training data.  This is pretty critical, and I hope you get it right.  Generative AI models are typically built by ingesting massive amounts of data, and then we are surprised when bias shows up in the output.  Well, humanity is biased, and values differ across cultures.  They also differ across companies.  Consider Nordstrom’s reputation for taking any return, for any reason, without a receipt, and contrast that with companies that require a receipt, original packaging and a return within 30 days.  Now try to bake “the customer is always right” into a chatbot for a company that operates on a 30-day return window.

4.      Monitoring and review.  When it comes to monitoring the model, I’ve seen the following descriptors used: supervised, unsupervised and reinforcement learning.  These are obviously very broad terms, but deciding how closely you will review, monitor and calibrate the model is a significant decision.  And that decision will affect model accuracy, relevance, cost and customer experience, to name a few.
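To make the personality and policy points above concrete, here is a minimal sketch of how brand definition work might be translated into chatbot instructions.  This is illustrative only: the brand values, policy text and function names are my own assumptions, not any particular vendor’s API.

```python
# Sketch: composing a chatbot system prompt from company values,
# personality and policy.  All names and policy text are illustrative.

BRAND = {
    "values": ["fairness", "transparency"],
    "personality": "warm and direct; never leaves a question unanswered",
    "policies": {
        "returns": "Returns accepted within 30 days with a receipt.",
    },
}

def build_system_prompt(brand: dict) -> str:
    """Turn brand-definition work into instructions a chatbot can follow."""
    lines = [
        "You are a customer-service assistant for our company.",
        "Company values: " + ", ".join(brand["values"]) + ".",
        "Tone: " + brand["personality"] + ".",
        "Policies you must state accurately:",
    ]
    for name, text in brand["policies"].items():
        lines.append(f"- {name}: {text}")
    return "\n".join(lines)

prompt = build_system_prompt(BRAND)
print(prompt)
```

The point of the sketch is that values, personality and policy live in one reviewable place, rather than being scattered across prompts or left to a third-party default.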

This is not an exhaustive list, but it should prompt some thoughts on how your company can address the issue of humanity, company values and what to do when deploying AI chatbots.

Years ago, in the early days of email marketing, I worked at a company that was transitioning from print catalogs to email marketing.  The basic assumption was that email is free, right?  It doesn’t cost any more to send a million emails than a thousand.  Well, not exactly.  There are “hidden” costs, and they are very real: deliverability, send rates, database size and responsiveness, response histories, the cost and time to get off of blacklists, and so on.  We see the same thing in AI today.  It may look easy and very accessible to simply deploy ChatGPT, Grok or another conversational model and leverage the latest and greatest technology, but there are tradeoffs, and the deployment of AI chatbots carries very real, perpetual backend costs.  We may think it is less costly and more efficient to deploy bots instead of humans and… you know, the siren song is real.  However, most companies use third-party chatbot solutions, and those solutions will have very real impacts on company brand, reputation and customer experience.  And all of these things translate into sales, customer loyalty and profit.

So here’s the question: wouldn’t it be much better to keep real people in the middle of those conversations?  Real people having real conversations, supercharged with the benefits of AI, but never leaving the chatbot alone to do its thing.
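One way to keep real people in the middle is a simple escalation gate: the model drafts a reply, but a human reviews or takes over whenever confidence is low or the topic is sensitive.  A minimal sketch, where the threshold, keyword list and function name are all illustrative assumptions rather than a prescribed design:

```python
# Sketch: human-in-the-middle gating for an AI chatbot.
# The model drafts a reply; this gate decides whether the draft
# goes out directly or to a person first.  Thresholds are illustrative.

SENSITIVE = {"refund", "cancel", "complaint", "lawyer"}

def route(draft_reply: str, confidence: float, customer_msg: str) -> str:
    """Decide whether the AI draft goes out directly or to a human."""
    words = set(customer_msg.lower().split())
    if confidence < 0.8 or words & SENSITIVE:
        return "human_review"   # a person edits or approves the draft
    return "auto_send"          # low-risk: the draft goes out as-is

print(route("Sure, happy to help!", 0.95, "What are your store hours?"))
print(route("I can process that.", 0.95, "I want to cancel my order"))
```

The interesting decisions are in the gate itself: how low does confidence have to fall, and which topics must a person always touch?  Those are company-values questions as much as engineering ones.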

I am biased in this opinion, and I recognize in part where that bias comes from.  I worked at Direct Alliance (acquired by TTEC) years ago and was in the thick of implementing marketing automation with variable content, triggered digital transactions and personalization.  One of our account managers went on vacation for multiple weeks and still delivered 90%+ of his quota because his customers had become accustomed to ordering via digital channels.  Based in part on that result, we debated how much of our sales activity could be automated.  We could potentially eliminate the salesperson and the associated high costs, right?  Our head of sales made a definitive statement that has stuck with me: “People buy from people.”  In the end, we did not eliminate any salespeople.  We simply gave them better tools and helped them be smarter, more efficient and more effective in their jobs.

Will the same hold true today with artificial intelligence?  Time will tell.  However, let me leave you with the following thoughts:

“The future of business is personalized, measurable, transactional and integrated.  It is only through ongoing relationships that we create value for the companies and individuals that we serve.  We believe an automated approach that optimizes rich media and technology will deliver relationships with the most value.  Companies must always remember the customer is king and personal choice is the foundation of all relationships.  Technology is simply an enabler of making choices.”
