Everybody is talking about how artificial intelligence (AI) is changing the world and how it is the future of just about everything. Even communications professionals are abuzz with their desire to jump on the AI bandwagon for their media analytics.
It’s true; AI can be pretty impressive. It is already recommending products to consumers, catching credit card fraud, and targeting advertising with uncanny accuracy and effectiveness. Even doctors are starting to get assistance from AI in diagnosing disease through analysis of symptoms and lab results.
So how does AI “play nice” in the communications world? In one word – carefully.
Let’s start by digging into what AI does well…
Most experts would agree that AI can be great at handling scenarios that involve a yes/no answer, particularly when there is a robust feedback loop to tell the system when its prediction is right or wrong on an ongoing basis.
Let’s look at three examples of AI usage that illustrate this strength:
Determining Fraudulent and Legitimate Charges
When a credit card is stolen, it can create a perfect learning opportunity for AI. A bank customer reports their card stolen and then identifies which charges were legitimate and which were fraudulent. Any uncontested charges are implicitly used as confirmation of valid transactions, thereby reinforcing the characteristics of legitimate transactions. Multiply that information across millions of cardholders, and AI can use those data points to predict with uncanny accuracy when a change in a purchasing pattern is likely fraudulent.
AI also uses “geofenced” data to protect your credit card account – knowing the locations (geographies) where you normally visit and spend. In addition, AI “learns” how you (and thousands of other similar travelers) travel, using historical patterns of purchases – hotels, restaurants, etc. – to approve or flag as suspicious any out-of-town spending. Why does that work? Because AI has perfect data from a feedback loop with thousands or millions of data points and is being taught the right answer every day. Even if you move to a new city, AI can use a real-time feedback loop to generate new data and adjust its predictions with no human input required other than normal card use in your new hometown.
Credit card transaction validation is a very effective use of a yes/no feedback loop that drives powerful AI learning and effectiveness.
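The feedback loop described above can be sketched in a few lines of code. This is a toy model, not any bank’s real system; the feature names (city, purchase category) are made up for illustration:

```python
from collections import defaultdict

class FraudFeedbackModel:
    """Toy yes/no feedback loop: each confirmed cardholder label updates
    per-feature counts, and a transaction is scored by how often its
    features have been seen on fraudulent charges."""

    def __init__(self):
        self.fraud = defaultdict(int)  # feature -> times seen on fraud
        self.legit = defaultdict(int)  # feature -> times seen on legit charges

    def feedback(self, features, is_fraud):
        # The cardholder's yes/no answer is the "perfect" training signal.
        counts = self.fraud if is_fraud else self.legit
        for f in features:
            counts[f] += 1

    def fraud_score(self, features):
        # Fraction of feature observations that came from fraudulent charges.
        f = sum(self.fraud[x] for x in features)
        l = sum(self.legit[x] for x in features)
        return f / (f + l) if f + l else 0.5  # unseen pattern -> uncertain

model = FraudFeedbackModel()
# Normal hometown spending, implicitly confirmed legitimate by the cardholder.
for _ in range(20):
    model.feedback({"city:portland", "category:grocery"}, is_fraud=False)
# A disputed charge from an unusual location, confirmed fraudulent.
model.feedback({"city:lagos", "category:electronics"}, is_fraud=True)

print(model.fraud_score({"city:portland", "category:grocery"}))   # low
print(model.fraud_score({"city:lagos", "category:electronics"}))  # high
```

Note how the model never needs a human rule-writer: every confirmed or disputed charge is one more labeled data point, which is exactly the robust feedback loop the text describes.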
Online Advertising and Product Recommendations
When you see ads on the internet, most of the advertisers are using you to help test thousands of variants of different advertising attributes such as the type and size of ad, time of day delivery, pricing deals, and even the words shown to you. They might even target specific product ads based on what you have shopped for in the past. (Who hasn’t been chased by ads across the Internet for that new pair of shoes you browsed one time on an e-commerce site?)
How are they doing this? The advertising companies are improving their targeting by using AI reinforced with perfect information. When you click on an ad and go to an e-commerce site, you either buy, or you don’t. YES vote. NO vote. AI will constantly learn which advertising attributes work and cause people to click. With millions of interactions to learn from – all tagged with reliable, fact-based results – computers can learn very quickly what works best in just about any situation.
In a similar scenario, Amazon, the e-commerce giant, uses AI to drive product recommendations. For example, when buying a shirt on Amazon, you see a set of product photos (slacks, belts, etc.) with the headline, “People who bought this also bought these:” What the AI technology at Amazon and similar online companies does is look for patterns in people’s purchasing behavior to suggest additional items that follow that same pattern. Of course, if you then click and buy the recommended product, that’s one more ‘plus’ vote for that AI recommendation.
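The “people who bought this also bought” pattern can be illustrated with a toy co-purchase counter. The order data below is hypothetical, and this is a sketch of the general idea, not Amazon’s actual algorithm:

```python
from collections import Counter, defaultdict

# Hypothetical order history; each set is one customer's basket.
orders = [
    {"shirt", "slacks", "belt"},
    {"shirt", "belt"},
    {"shirt", "slacks"},
    {"socks", "shoes"},
]

# Count how often each pair of items appears in the same order.
co_bought = defaultdict(Counter)
for order in orders:
    for item in order:
        for other in order - {item}:
            co_bought[item][other] += 1

def recommend(item, n=2):
    """'People who bought this also bought' = top co-purchased items."""
    return [other for other, _ in co_bought[item].most_common(n)]

print(recommend("shirt"))  # slacks and belt, never shoes
```

Each time a shopper clicks and buys a recommended item, that purchase lands in a future order set, strengthening the pairing – the “plus vote” described above.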
Advertising customization and targeting and Amazon online shopping are more good examples where AI is learning from actual transactions. You either clicked on the ad and bought the recommended product, or you didn’t. It’s a yes/no answer providing perfect feedback.
Spam Email Identification
Identifying whether email is legitimate or spam is one of the best mass applications of classification in supervised machine learning. Often called ‘ham or spam’, the AI uses patterns of words, characters, and senders to identify likely spam. Yet the system can only improve if humans flag emails as spam – or go to their spam folder to retrieve legitimate emails and flag them as ‘ham’ (not spam).
Early spam identification systems used the feedback of the masses to apply standardized, mass email filters to individual users. In recent years, some spam filters have begun to allow for additional customized spam determination based on individual user preferences and feedback. This approach becomes especially helpful as some people flag legitimate e-commerce offerings as spam – offers that they perhaps opted into months or years before but no longer want to receive. Other users may want to keep those very same emails coming to their normal inbox.
The human yes/no feedback loop is critical to the ongoing, evolving effectiveness of these spam-filtering AI tools.
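A minimal sketch of such a ‘ham or spam’ word classifier, using a simple naive-Bayes-style score with add-one smoothing (an illustration of the technique, not any real provider’s filter):

```python
import math
from collections import Counter

class SpamFilter:
    """Toy word-frequency spam classifier trained by human yes/no flags."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.total = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # A human flagging a message as spam or ham is the training signal.
        words = text.lower().split()
        self.counts[label].update(words)
        self.total[label] += len(words)

    def classify(self, text):
        # Sum log-likelihood ratios per word, with add-one smoothing so
        # unseen words don't zero out the score.
        score = 0.0
        for w in text.lower().split():
            p_spam = (self.counts["spam"][w] + 1) / (self.total["spam"] + 2)
            p_ham = (self.counts["ham"][w] + 1) / (self.total["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return "spam" if score > 0 else "ham"

f = SpamFilter()
f.train("win free money now click here", "spam")
f.train("meeting notes attached see agenda", "ham")
print(f.classify("free money click now"))    # spam
print(f.classify("agenda for the meeting"))  # ham
```

Every additional flag from a user shifts the word counts, which is why the human feedback loop is what keeps these filters accurate as spammers change tactics.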
Here are some examples of where AI is not ready for prime time.
If the answer is not known, it cannot be fed back to the computer
For example, say you’re looking to hire a new employee, and the AI system says you should make an offer to a candidate based on the data. If you hire that person and it either works out or doesn’t, that’s one piece of data. But what about the people you didn’t hire? You will never know whether they would have worked out, and AI cannot confirm your rejections. It is hard for AI to determine the best hire when it only gets feedback on the people you chose to hire.
This is the challenge of what are called Type I vs. Type II errors. A Type I error is a false positive: someone AI recommended who turned out to be a bad hire. We can learn from that type of error. A Type II error, on the other hand, is a false negative: someone AI passed on who would have been good – but you’ll never know that for sure. We cannot learn from that type of error. So when AI cannot be given information on Type II errors, it has only half the necessary learning set to train the system properly.
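The asymmetry is easy to see with a toy data set (the candidates below are hypothetical): only the hired half ever receives an outcome label, so only Type I errors are visible to the learner.

```python
# Hypothetical hiring records: the model only ever observes outcomes
# for people who were actually hired.
candidates = [
    {"name": "A", "hired": True,  "worked_out": True},   # confirmed good hire
    {"name": "B", "hired": True,  "worked_out": False},  # Type I error: visible, learnable
    {"name": "C", "hired": False, "worked_out": None},   # possible Type II error: never observed
    {"name": "D", "hired": False, "worked_out": None},   # outcome unknowable
]

# The feedback loop can only use labeled examples -- the hired half.
training_set = [c for c in candidates if c["hired"]]
unlabeled = [c for c in candidates if not c["hired"]]

print(len(training_set))  # labeled outcomes the AI can learn from
print(len(unlabeled))     # rejections with no outcome, lost to the feedback loop
```

However many resumes flow through the system, the training set stays biased toward the people you chose to hire – the “fatally incomplete training set” described below.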
Another variation of the AI challenge in hiring arises when the AI system is exposed to all-new data. For example, if your resumes to date have all been from East Coast schools and for applicants with engineering degrees, what does your system predict when exposed to a Stanford graduate with a physics degree? AI struggles to reach a conclusion when exposed to vastly new, deviant data points that it has not seen before.
Can AI still learn in these circumstances? Yes, to a degree, but it does not see (and cannot learn from) the missed opportunities, and it needs enough of the new data points to begin to model and predict outcomes. The data that is collected from the hiring decision represents a fatally incomplete training set.
If the data sets are small
For example, if you are making a one-time life decision such as which house to buy (not the price to pay, which AI is good at, but rather which house will work for you and your family), the data set is not large. The data might suggest you will like the house for its community and its features. If you buy the house, whether it works out or not, you still have only a single piece of feedback to learn from. It is hard to learn from tiny data sets; machine learning typically needs thousands, if not tens of thousands, of data points to train a model to make informed decisions.
If the answer is indeterminate rather than a clear yes/no
This is probably the biggest area where unassisted AI fails at proper classification. And it is the problem that most affects those of us seeking trustworthy media analytics.
How a person sees content frequently depends on their perspective. ‘Good’ things can in fact be ‘bad’ and vice versa. And computers can’t be taught one-answer-fits-all approaches, which is what most AI-powered automated media intelligence solutions attempt today. Two people can read the same story and come away with very different opinions of its sentiment. Their take may depend on their political or educational background, their role in a company, or even the message the company wants heard in public – for example, when positive discussion of a taboo topic is seen as a bad thing.
In addition, AI can’t reliably interpret many language structures and usages, including even simple phrases like “I love it,” which can be sincere or sarcastic. AI also struggles with double meanings and plays on words. And AI cannot address the contextual and temporal nature of text – how the words, topics, and phrases used in content change over time. For example, a comparison to Tiger Woods might be positive when referring to his early career, less positive in his later career, and perhaps quite negative in a comparison to him as a husband.
If the subject matter is evolving
Most AI solutions being applied to media analysis today use what can be called a ‘static dictionary’ approach. They choose a defined set of topical words (or Booleans) and a defined set of semantically linked emotional trigger words. The AI determines the topic and the sentiment by comparing the words in the content to the static dictionary. Recent studies such as “The Future of Coding: A Comparison of Hand-Coding and Three Types of Computer-Assisted Text Analysis Methods” (Nelson et al., 2017) have shown that the dictionary methodology does not work reliably and that its error increases over time.
The fundamental flaw in this AI method is that the static dictionary doesn’t evolve as topics and concepts shift over time and new veins of discussion are introduced. Unless there is a way to regularly provide feedback to the AI solution, it cannot learn, and the margin of error grows and compounds quickly. It is a bit like trying to talk about Facebook to someone transported from the year 2004 who only knows structured publishing – they simply cannot understand what you are talking about in any meaningful way because mass social media did not yet exist.
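A toy version of the static-dictionary approach makes the failure mode concrete: words outside the fixed lists contribute nothing, so new phrasing reads as neutral no matter how charged it is. The word lists below are invented for illustration:

```python
# A 'static dictionary' sentiment scorer, frozen on the day it was built
# (hypothetical word lists for illustration).
POSITIVE = {"great", "win", "growth"}
NEGATIVE = {"scandal", "loss", "decline"}

def dictionary_sentiment(text):
    """Score text purely by counting fixed positive and negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(dictionary_sentiment("record growth and a big win"))  # positive
# New slang the dictionary has never seen scores as neutral,
# even though a human reader sees clearly negative sentiment.
print(dictionary_sentiment("the brand got ratioed after the rollout flopped"))  # neutral
```

Without a feedback loop to add words like “ratioed” or “flopped” to the dictionary, every shift in popular phraseology silently widens the error – exactly the compounding drift the Nelson et al. study documents.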
As these examples show, AI struggles to interpret complex situations involving small data sets or indeterminate answers that evolve over time. So what does this mean for you as a professional communicator?
AI applied to media analytics needs to be guided to succeed. There are three specific areas where AI needs a boost when analyzing media:
- Changing Conversations: As seen in the Cal Berkeley research, for an AI system analyzing media content to remain accurate and relevant, it needs to be constantly trained as conversations and popular phraseology shift over time. You need enough consistently superior-quality analysis to feed back to the computer and train it.
- Perspective: You need to tune AI specifically to understand your perspective. A solution tuned to someone else or all companies blended together just won’t work. This is because the phrase that one person (or company) determines is relevant and positive might be viewed differently by another person with different priorities or messaging goals.
- Context: The conversation ecosystem needs to be taken into account. Often coverage is bookended by events, public discourse, and related coverage outside the sample set of coverage. In his article just a few weeks ago for MIT Sloan Management Review, Sam Ransbotham writes, “While the pace of business may be ever-accelerating, many business decisions still have time for a second opinion where human general knowledge of context can add value.”
This doesn’t mean you have to analyze everything to train an AI system, but you do need to analyze enough data that the computer can learn robustly from it. AI alone can’t teach itself about changing social conversations, perspective, or context.
On the bright side, humans can work with AI by defining, training, and maintaining a dynamic, accurate, and reliable human feedback loop. This means persistent training, unique to each individual company, with human attention to help AI bridge the gap between what it’s trained on and what the customer is trying to know. Supervised machine learning is widely considered the leading approach to solving the human content analytics AI problem for the foreseeable future.
So how will you use AI? Smartly, I hope.
PublicRelay delivers a world-class media intelligence solution to big brands worldwide by leveraging both technology and highly-trained analysts. It is a leader on the path to superior AI analytics through supervised machine learning. Contact PublicRelay to learn more.