AI, Chatbots and Face-to-Face Verification

Let me share a little story… something that transpired over the last two days.

On Monday, I received a text message from a phone number in the United Kingdom…

“Hello, I’m Jessy from Oyster Recruitment Company USA. Are you interested in a remote position/flexible role? Can I share brief details with you?”

My Answer: Sure… and send me your LinkedIn profile.

And here we go, into the conversation… via the screenshots below:

And this is where the conversation ended… or so I thought. The next day, via WhatsApp, it went like this:

And this, my friends, is an AI chatbot for recruiting. Unsolicited, via text messaging. Constructed to appear as if you are conversing with a real person, and yet you are not, or so it appears to me.

Click here to read the full press release from Oyster, the global employment platform.

Personally, I think this particular chatbot is built on deception. You’re supposed to think, or feel, like you’re talking to a live person, but in reality you are not.

The combination of AI and text messaging is powerful and can support very rich, robust customer service applications. However, it’s important that the AI agent readily identify itself as artificial intelligence, rather than present itself deceptively, as if it were a real person.
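To make that principle concrete, here is a minimal sketch of a “disclosure-first” messaging handler. The company name, function name, and HUMAN keyword are hypothetical illustrations, not any real platform’s API; the point is simply that the very first outbound message identifies the sender as an AI before any recruiting pitch.

```python
# A hypothetical illustration of "disclose AI up front."
# Names here (Acme Recruiting, build_reply, the HUMAN keyword)
# are invented for this sketch, not taken from any real product.

AI_DISCLOSURE = (
    "Hi! I'm an automated assistant (an AI chatbot) for Acme Recruiting. "
    "Reply HUMAN at any time to reach a live recruiter."
)

def build_reply(body: str, is_first_message: bool) -> str:
    """Prepend the AI disclosure to the first message of a conversation."""
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{body}"
    return body

# First contact leads with the disclosure, not the recruiting pitch:
print(build_reply("Are you interested in a remote, flexible role?", True))
```

Pairing the disclosure with an escape hatch to a live human is one natural design choice: it keeps the conversation honest and gives the customer an immediate way out of the automation.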

In my request to “Jessy,” I asked for his/her LinkedIn profile. And what did Jessy send? The name, and eventually the profile, of the CEO of Oyster Partnership. Here is the actual LinkedIn profile URL when I copy it myself:

https://www.linkedin.com/in/saszabandiera/

And the profile link below suggests that Jessy, the chatbot, was perhaps following the link from a page rather than looking it up on LinkedIn and copying and pasting it, as a live person might do.

Remember, Jessy said he/she was with Oyster Recruitment USA. The phone number, however, was a U.K. phone number, and the LinkedIn profile provided is clearly identified as a U.K. profile.

I asked for a copy of “Jessy’s” LinkedIn profile for several reasons:

· LinkedIn profiles often have pictures. Chatbots do not.

· LinkedIn is moving to verified users, and one simple method is to upload a government-issued photo ID or a live selfie.

· In order to obtain a driver’s license, at least in many places, a live person must walk into the Department of Motor Vehicles (DMV), stand in line, get their picture taken, and talk to another live person. It is, in effect, a first-person, face-to-face verification experience.

Neither “Jessy” nor “Charlotte” (the persona in the WhatsApp exchange) could provide their own LinkedIn profile.

Artificial intelligence is a pervasive technology. It is being built into major systems and processes, and it will become increasingly difficult for people to know when they are dealing with something or someone that is “artificial” versus something or someone that is real.

AI is here to stay, and it will certainly be combined with personalization, text messaging, and a host of other consumer-facing “touch points.” If you’re in marketing, sales, or any kind of leadership position, decide today that your use of AI will include notifying your customers up front that they are interacting with artificial intelligence. Deception is a lousy way to build trust with your customers. Brand equity will surely take a hit if customers feel they have been duped. And clearly, the idea of opt-in, permission-based marketing requires knowing that the system or person you’re interacting with is not attempting to deceive, dupe, or head-fake you into thinking you’re talking to a real person when in fact you are not.
