
In today’s competitive marketplace, brand awareness matters more for companies than ever before. Social media platforms not only provide a means to boost your brand, but they also generate data that, when used correctly, can steer your PR campaigns towards success.
Read: The Role of Social Media in Public Relations
What is Brand Awareness?
Brand awareness is the level of consumer familiarity with a particular brand’s products, services, or image. Familiarity is what motivates consumers to choose Coca-Cola over other soft drinks. It is the economic moat that wards off competition and ensures customers remain loyal. This is the first stage of the marketing funnel and the key to promoting new products, establishing loyalties, and reviving old brands.
When developing branding campaigns, companies must consider their values, reputation, and the levels of engagement their key messages receive on social platforms. Beyond engagement, brand boosts are about establishing positive relations between a company and its target audience.
Why is Brand Awareness Important?
Brand awareness is important because consumers rely on research and social proof to inform their purchasing decisions. In his TED talk, “The Post-Crisis Consumer,” John Gerzema labels this phenomenon the “rise of the mindful consumer.” With an abundance of information at their fingertips, consumers can now sift through online reviews and compare influencer testimonials on social media before every purchase. In fact, 67% of the consumer journey now occurs digitally.
For this reason, companies need to use social media channels to build brand awareness and positively influence their target audience’s consumer journey.
Social Media Metrics to Measure Brand Awareness
PR professionals’ branding strategies are most effective when informed by reliable data. Social media provides companies with access to a wealth of information on consumer engagement with and awareness of new campaigns, and influencer pick-up of key messages. With approximately 501 million tweets sent per day, companies are confronted with both a goldmine and a headache when it comes to analyzing the available data. MGP head of digital Eamonn Carey explains, “you can almost get data overload: the challenge is picking out the metrics that matter[…] The smarter brands are taking a step back from the tsunami of data.”
Amid such large quantities of data, PublicRelay has witnessed an explosion of AI-based tools that help track the key metrics used to measure brand awareness. These metrics can be broken down into four standardized categories:
Exposure and Potential Reach
Exposure and potential reach, which estimate the number of unique viewers a post could reach, are the first data points to consider when attempting to improve your brand recognition across social media platforms.
However, exposure and potential reach should be utilized as baseline metrics as they do not provide enough insight on their own to help steer a PR strategy. For instance, a post could have high impressions, but also receive low or negative engagement. Using exposure and potential reach in conjunction with other metrics, such as engagement, will help you glean more insight from the data at hand.
Engagement
Put simply, engagement is the number of users who interact with a campaign and the degree of that interaction over time. Engagement can be measured in multiple ways. Often, data analytics tools will create a metric that is a combination of likes, retweets, comments, and shares.
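As a simple illustration of such a composite metric, an engagement score is often just a weighted sum of interaction counts. The sketch below is a minimal example in Python; the weights and field names are purely illustrative assumptions, not an industry standard.

```python
# A minimal sketch of a composite engagement score.
# The weights are illustrative assumptions, not a standard formula.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "shares": 3.0, "retweets": 3.0}

def engagement_score(interactions: dict) -> float:
    """Weighted sum of interaction counts for a single post."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in interactions.items())

post = {"likes": 120, "comments": 8, "shares": 15, "retweets": 22}
print(engagement_score(post))  # 247.0
```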
High levels of positive engagement often indicate a healthy relationship with your target audience and a successful branding campaign. However, these metrics also need to be understood within their context. For instance, a post may receive a high number of likes and shares but relatively few comments, indicating that the topic doesn’t stimulate discussion. On the other hand, the motivations for sharing or retweeting your company’s coverage may be negative. For this reason, engagement metrics are just one part of a wide range of data points that need to be taken into consideration.
Impact
Measuring impact on social media refers to the overall changes in consumer behavior and sentiment towards your brand as the result of your PR campaigns. Often companies will compare their initial rates of engagement and exposure to those during and after a campaign. In this case, social media analytics can provide a useful benchmark to inform your next steps.
Further, social media platforms are data treasure troves when it comes to evaluating your brand awareness relative to your rivals. A key goal for any awareness strategy should be to establish your brand as the central player among its competitors.
Advocacy
In every campaign, influencers are vital to swaying opinions and increasing awareness. Strategists must understand industry influencers’ topical interests and the degree of engagement they can generate.
The appropriate metrics can reveal the extent to which influencers promoted your key messages and help your team to identify the major influencers in the industry. Ultimately, boosting your brand recognition is about intelligent engagement. Find the right influencers and you can reach the audiences that truly matter.
Ensuring Accurate Assessment and Intelligent Engagement
PR teams that engage with this standardized model lay a strong analytical foundation upon which to develop their brand awareness strategies. As Neil Kleiner, former head of social media at Golin argues, “exposure, engagement, impact, [and] advocacy are important, and there are demographic elements to it as well: it’s about reaching 10,000 of the right people, not ten million.”
Indeed, PR strategists should be looking for intelligent engagement when improving their brands, and that means going beyond the base-level metrics supplied by AI. Although AI is useful for processing large amounts of data, it struggles with accuracy and meaning when it comes to nuance, developing the insights that matter, and especially gauging sentiment.
How is Sentiment Analysis Used for Brand Management?
Sentiment analysis is crucial to brand management when staging a brand awareness campaign. Applying sentiment to the four baseline metrics is the final element in a PR strategist’s information armory. High volumes of mentions, retweets, and influencer traction are all signs of growth. However, without an awareness of the sentiment of social engagement, your campaign assessment may be deceptive.
Sentiment refers to the tone or emotion attached to social media posts or engagement. With the rise of Artificial Intelligence, many products are appearing on the market that offer sentiment analysis with unlimited data pools.
While the tone of your coverage is important, sentiment also extends to social media coverage of your company values and reputational drivers. However, AI cannot identify reputational drivers and value systems as accurately as human analysts can. By combining tone with the reputational drivers and values that emerge during a campaign, PR strategists can see how their brand awareness strategies are performing in real time. Applying sentiment to the four baseline metrics through a synthesis of human intelligence and AI tools allows PR teams to pinpoint the positives and negatives of their campaigns to increase brand awareness.
Using Social Media Analytics the Right Way
When measuring boosts in brand awareness using social media analytics, employing a variety of baseline metrics paired with accurate sentiment analysis will yield the most reliable results.
At PublicRelay, we utilize the four baseline metrics for a holistic approach that complements AI systems with purposeful, human-generated insights. Our human-AI hybrid approach focuses on intelligent engagement, whereby we filter out the noise and pinpoint the most valuable insights to help you increase your brand awareness. Click here to learn more.

Sentiment analysis is a term that most PR practitioners and communications professionals have heard of, and perhaps even a tool they use as a part of their strategy. However, many industry pros struggle to fully understand the concept and what it can do for them when implemented effectively.
The applications of sentiment analysis are wide-ranging and impactful. For instance, Brandwatch asserts that “shifts in sentiment on social media have been shown to correlate with shifts in the stock market.” British political magazine New Statesman even used the process to determine that President Joe Biden’s 2021 inaugural address was “the angriest ever,” based on key linguistic choices.
What is Sentiment Analysis?
Sentiment analysis is the process of identifying the tone or emotion attached to a communication. It is also referred to as “opinion mining” or Emotion AI. Communication can be analyzed for tone in both nonverbal forms, like facial expressions and body language, and linguistic forms.
Analyzing the sentiment of linguistic forms of communication starts with examining a sample of text, which is then assigned a value based on the perceived attitude or tone of the communicator. Usually, the values are coded as positive, neutral, or negative so the data can be easily sorted and later visualized and studied for trends.
Why is Sentiment Analysis Important?
Sentiment analysis is important because it can provide you with a better understanding of your earned media coverage and help you reach your messaging goals. The analysis is part of an integral feedback loop that allows communicators to gauge the success of their communications tactics and identify opportunities for improvement.
Measuring the volume of media coverage by topic can only tell you so much. Without knowing the tone of that coverage, teams can’t determine whether their campaign is a success or a failure. For example, if your company experiences a spike in mentions related to product quality, how can you respond appropriately without first knowing whether that coverage is positive or signals a potential PR crisis? It all comes down to sentiment.
Lexalytics explains that sentiment analysis can help companies to gauge “public opinion, conduct nuanced market research, monitor brand and product reputation, and understand customer experiences.” Once you have identified your strengths, weaknesses, and opportunities, you and your team can take advantage of all the practice has to offer.
Using AI for Sentiment Analysis
When analyzing text, computers deploy natural language processing and machine learning techniques to attach sentiment to words, phrases, topics, and themes. When an analysis program runs on an article, it breaks the text down into these units. The program then identifies components that have been assigned sentiment in the program’s sentiment library (which stores the system’s human-coded values) – or the library entries they are closest to – and assigns a score to each unit. Finally, the system combines the individual scores to generate a multi-layered analysis score that represents the whole article.
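A heavily simplified sketch of that lexicon look-up step is shown below. The tiny sentiment library, the word-splitting, and the averaging rule are all assumptions made for illustration; real systems use far larger, human-coded libraries and more sophisticated aggregation.

```python
# Minimal sketch of lexicon-based sentiment scoring: split text into units,
# look each up in a (human-coded) sentiment library, then combine the scores.
SENTIMENT_LIBRARY = {          # toy library; real ones hold thousands of entries
    "excellent": 1.0, "love": 0.8, "improved": 0.5,
    "slow": -0.5, "failure": -0.8, "terrible": -1.0,
}

def score_article(text: str) -> str:
    words = text.lower().split()
    scores = [SENTIMENT_LIBRARY[w] for w in words if w in SENTIMENT_LIBRARY]
    if not scores:
        return "neutral"
    avg = sum(scores) / len(scores)
    return "positive" if avg > 0.1 else "negative" if avg < -0.1 else "neutral"

print(score_article("Customers love the improved dashboard"))  # positive
print(score_article("The rollout was a terrible failure"))     # negative
```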
As smooth as this process sounds, there are many areas where problems can arise along the way.
The Accuracy of AI Sentiment Analysis
Because AI uses natural language processing and machine learning to automate the process, it’s a useful tool for freeing up your team’s valuable time. However, fully automating your sentiment analysis can compromise its accuracy.
According to the Institute for Public Relations, no method of sentiment analysis will ever be 100% accurate. However, they argue that relying solely on a tech tool to measure sentiment “can be like flipping a coin, or only 50% accurate, since these platforms often struggle to measure more nuanced posts or are unable to filter and interpret the information through the lens of a company or brand.” Similarly, 5WPR estimates that sentiment algorithms are only about 60 percent accurate.
Linguistic Challenges for AI
Toptal has identified four major pitfalls of AI sentiment analysis: irony and sarcasm, negations, word ambiguity, and multipolarity. Some of these pitfalls can be addressed with approaches like machine learning algorithms or deep learning, but no solution is guaranteed to be fully effective.
Sarcasm is an especially deep pitfall, and its prevalence in consumer-generated content, like social media posts, makes it even more important in many sentiment analysis projects. Even humans struggle to comprehend sarcasm sometimes, so it’s no surprise that computers are often tricked by superficially positive statements like, “I love the way [company’s] customer service team put me on hold for two hours.” Research shows that numerical sarcasm like this is especially challenging for AI to comprehend due to its effect on a statement’s polarity.
As a media analyst, I often see articles that dive into complex subjects in detail. The more detailed the article, however, the higher the chances that an AI program will be tripped up by common traps like negations, ambiguity surrounding entities, or articles that weigh both the pros and cons of a single idea.
These issues demonstrate some of the imperfections of using AI, which can drastically change the narrative of your media analysis and your subsequent tactical decisions.
Adding a human element to your approach can be the solution to avoiding these major data hazards.
Using Humans to Detect Sentiment
Although using an AI program can help save time, its imperfections can lead to inaccurate results that can impact your communications strategy. Because of these shortcomings, it is essential to include a human perspective to analyze the more linguistically complex elements of your media coverage.
While computers need to be trained to detect subtle context clues, humans have been ‘programmed’ to find them throughout their entire socialized lives, which makes identifying common language tools like irony and negations quite simple. Using human analysts to identify these contexts and AI to automate the basic, time-consuming tasks can be beneficial for PR professionals as they work to improve the accuracy of their sentiment analysis insights.
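One way such a division of labor could look in practice is a simple triage step: confident machine calls are accepted automatically, while low-confidence or sarcasm-prone items are queued for a human analyst. The sketch below is an illustration only; the confidence threshold and the cue phrases are invented assumptions, not any vendor’s actual rules.

```python
# Sketch of a hybrid triage step: accept confident machine calls,
# send ambiguous or sarcasm-prone items to a human analyst.
SARCASM_CUES = {"yeah right", "thanks a lot", "love the way"}  # illustrative cues only

def route(item: dict, threshold: float = 0.85) -> str:
    text = item["text"].lower()
    if item["model_confidence"] < threshold or any(cue in text for cue in SARCASM_CUES):
        return "human_review"
    return "auto_accept"

items = [
    {"text": "Great quarterly results for the company.", "model_confidence": 0.95},
    {"text": "I love the way their support put me on hold for two hours.", "model_confidence": 0.91},
]
for item in items:
    print(route(item), "->", item["text"])
```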
The Value of a Hybrid Approach
Both AI and human analyst approaches to sentiment analysis have benefits: AI programs save time with automation, and humans decipher context and increase accuracy. Ultimately, utilizing a combined approach can offer the best of both worlds.
At PublicRelay, our human-AI hybrid approach to media monitoring makes conceptual insights possible. To learn more about using PublicRelay for accurate sentiment analysis, contact us here.

Every communicator knows that when a big story is published, every minute matters. Yet many earned media articles leave little impact – while a few pieces drive the entire conversation. But what if you could know in advance which stories will catch fire?
With PublicRelay’s Predictive Alerts, you can. This feature gives your team an email alert that a particular story is likely to take off over social media – hours before it actually does. You and your team gain valuable time with which to craft the perfect response or engage key advocates to amplify the coverage.
The Details
Predictive Alerts uses industry-leading AI to predict whether an article will go viral on social media. If an article is likely to be widely shared, we deliver an email alert straight to your inbox. This gives you time to craft the appropriate media strategy.
The alert operates within a set of search terms defined by you and your analyst, depending on the topic of interest. The scope is completely up to you – search terms are not limited to tracked themes and brand drivers. Your team can keep an eye on important company announcements, key influencers, or monitor major articles on industry topics more broadly.
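As a rough illustration of the workflow described above (and emphatically not PublicRelay’s actual implementation), the idea amounts to scoping incoming articles by user-defined search terms and raising an alert when a model predicts heavy social sharing. Everything in the sketch below – the search terms, the threshold, and the stand-in "model" – is hypothetical.

```python
# Illustrative sketch only: scope articles by user-defined search terms and
# alert when a (hypothetical) virality model predicts heavy social sharing.
SEARCH_TERMS = ["acme earnings", "ceo appointment"]   # defined by you and your analyst
SHARE_THRESHOLD = 10_000                              # illustrative cutoff

def predicted_shares(article: dict) -> int:
    """Stand-in for a trained virality model; returns an expected share count."""
    return article["early_shares"] * 12               # toy heuristic, not a real model

def check_article(article: dict) -> None:
    headline = article["headline"].lower()
    if not any(term in headline for term in SEARCH_TERMS):
        return
    if predicted_shares(article) >= SHARE_THRESHOLD:
        print(f"ALERT: {article['headline']!r} is likely to take off on social media")

check_article({"headline": "Acme earnings beat expectations", "early_shares": 1_200})
```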
Strategic Value
Social sharing is a crucial gauge of which topics, outlets, authors, and stories garner the most attention. By knowing about these news hits in advance, your team can:
- Enhance positive news by engaging company advocates and employees to share the story.
- Get ahead of negative coverage with a clear, compelling media response.
- Calm worried executives by demonstrating that an unfavorable piece is unlikely to receive much attention.
- Keep track of what’s generating real buzz for competitors or peer companies.
Earned Media in the Social Age
Social media has become a cornerstone of the brand landscape. In a surprising twist, however, this shift to social has only increased the importance of viral earned media. Only half of consumers say they trust paid advertisements – but 92% trust earned media. This trust, coupled with the fact that a majority of social sharing is generated by only a few earned articles, makes identifying viral earned media paramount to staying ahead in brand awareness. Predictive Alerts are the best way to glimpse into your media future – what will you do with the extra time?

With the rise of artificial intelligence and ensuing hype, many companies in the media intelligence industry and beyond began touting their use of AI. But the story often stops there without further explanation.
Communicators don’t have to be data scientists, but it is worth asking your media intelligence provider how they employ AI. In the world of textual media analytics, there are best practices as in any other industry. If your provider is not following them, it could have serious consequences for the accuracy of your communications data.
Media Analytics Best Practices
Use Ongoing Supervised Machine Learning
Cultural conversation changes quickly. The meaning and connotation of words are situational and evolve over time. This is why several studies have found that artificial intelligence employed in the media analytics space must be supervised. One study from communications experts at the University of Wisconsin-Madison and the University of Georgia found that “the combination of computational processing power with human intelligence ensures high levels of reliability and validity for the analysis of latent content.” Another from researchers at the University of California at Berkeley and Northwestern University found that unsupervised machine learning “does not perform well in picking up themes that may be buried within discussions of different topics” and therefore missed several mentions of the topic the researchers were tracking: economic inequality. The concept of inequality, whether in the economy or in the workplace, might very well be something a communicator would want to track – and certainly other nebulous concepts like it.
Computers can improve at processing language, but they need to be told what’s right and wrong. A computer cannot tell when the use of sarcasm in an article contradicts the normal sentiment of a word it has already learned to label positive, so it will continue incorrectly analyzing your content until it is corrected.
That’s why media intelligence providers cannot take a “set it and forget it” approach to AI. A constant feedback loop is required to educate the computer in the nuances of language. If the data set remains static, your analysis will become inaccurate and irrelevant.
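To make the feedback-loop idea concrete, here is a minimal sketch in which analyst corrections are periodically folded back into the training set and the model is refit, so the classifier keeps up as language shifts. The scikit-learn pipeline, example texts, and labels are illustrative assumptions, not PublicRelay’s system.

```python
# Sketch of an ongoing supervised feedback loop: refit the model whenever
# human analysts correct its labels, so it tracks shifting language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["record profits announced", "regulator opens investigation"]
labels = ["positive", "negative"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Analysts review new coverage and correct the machine's calls...
corrected_texts = ["'record' profits, if you ignore the layoffs"]
corrected_labels = ["negative"]

# ...and the corrections are folded back into the training set for a refit.
texts += corrected_texts
labels += corrected_labels
model.fit(texts, labels)
print(model.predict(["profits announced amid layoffs"]))
```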
Target Analysis Specifically to Your Company and Your Perspective
The most accurate communications analysis comes from ongoing supervised machine learning targeted specifically to your business. Every organization has different goals, challenges, and perspectives on the world. Two companies can read the same news article or social post and analyze it completely differently based on their point of view. A solar energy company and an electric utility would categorize and tone the same article about energy regulation very differently. If a provider uses the same data set across clients, the same problem recurs: the computer continues analyzing content as it originally learned, without accounting for the context of what a specific organization cares about.
At PublicRelay, we perform client-specific media analysis leveraging ongoing supervised machine learning to ensure that our clients are getting the most accurate data possible. This accurate, contextual analysis tailored to their business goals enables them not only to understand what they’ve done but also to see what to do next.

No matter what terminology you prefer – driverless car, autonomous vehicle, autonomous driving, self-driving car, or something else – the future of human and non-human driving is a hot topic. And we kicked off 2018 with two major consumer events that had the media abuzz about it. PublicRelay analyzed traditional and social media coverage of autonomous vehicles at both events, and here are some of the interesting nuggets that we uncovered.
Read Next: “6 Steps for Measuring PR at Your Next Event“
CES wins hands down for sheer volume of coverage, with 7X the number of articles written on the subject. Autonomous driving was among the top five products discussed at CES 2018, along with gaming products, computers, and smart home technology. At the Detroit Auto Show, it placed in the top four, along with vehicle infotainment systems and electric cars.
Social sharing of autonomous driving articles also put CES in the lead by 8X – Detroit Auto Show articles were shared 19,416 times across Facebook, Twitter, and LinkedIn and CES articles were shared 151,662 times. The topic as a percentage of traditional coverage for each show was much closer – the topic appeared in 13% of total CES event coverage, and in 11% of Detroit Auto Show coverage. The chart below outlines the top topics by percentage for both events.
The most obvious reason for the gap is that CES comes first on the calendar. But more probable is that CES’ focus on technology makes it the perfect platform for showing exactly HOW these vehicles are coming to market. The CES audience is extremely interested in technology and innovation that they can use today or in the future. How a car can drive itself certainly fits that bill.
The self-driving topic appeared in more than half (54%) of Nvidia’s traditional coverage at CES. The company released a new platform for self-driving vehicles and announced partnerships with VW and Uber. At a press conference, Nvidia’s CEO stated, “the complexity of future cars is incredible.”
While autonomous driving was covered at the Detroit Auto Show, that show’s focus is on what is available to consumers today. For instance, articles about self-driving features within traditional cars, rather than fully autonomous vehicles, were popular.
A hot self-driving sub-topic in Detroit was government regulations and guidelines, with many articles citing quotes from U.S. Transportation Secretary Elaine Chao and other legislators. Chao appeared in 16% of the Detroit Auto Show’s autonomous driving coverage.
Learn more about PublicRelay’s coverage of CES: Understanding the Success of CES.

The world of data and measurement in PR is consistently a mixed bag. In a perfect world, organizations would have a useful bounty of accurate data and insights. Unfortunately, many times this aspirational goal is unrealistic. The major downside to data-driven analytics right now is that people simply don’t trust the data.
According to recent (2017) studies by MIT, Cal Berkeley, and Northwestern University, technology alone, including artificial intelligence (AI), is not even close to delivering the needed insights. It’s going to take some hefty shifts to gain the right information along with the industry’s trust. According to a recent survey by PRNews, over 60% of communications professionals are asked by their CEO and executive boards for data-driven analysis and metrics. Gone are the days of measurement simply to prove one’s worth – measurement is now necessary as organizations look to tie the efficacy of campaigns to broader business goals.
While it’s great that executives and board members are seeing the value that strategic data and intelligence can bring to their business, more than 75% of communicators find the data to be unreliable. This number is staggering considering that both communicators and executives want media analysis to drive both reactive and proactive strategies.
So what do you do when the data just isn’t up to par? When practitioners are wasting time cleaning up data to figure out what insights can be derived? Here are some things that communicators can do now to take steps toward this ultimate data utopia.
Be Your Own Advocate
55% of communicators use media analysis to drive both reactive and proactive strategies…BUT only if they can trust it. It’s essential to prove to board members and business leaders that smart communications data is in fact available and deserves a seat at the table. Mapping results to business goals is the best way to advocate for a strong measurement function.
Historically, the data executives saw wasn’t smart, helpful, or insightful. In fact, it was rather surface-level, including things like mentions, reach, and impressions. Today we have copious amounts of relevant, helpful, and strategic data at our fingertips that should be used to inform business strategy. Communicators don’t need to use data to prove worth and ensure job security anymore; they can instead turn it into something useful and become a strategic partner to the business.
Implement Efficiency
Nearly 40% of communicators find it difficult to understand the media data they receive and are spending way too much time hunting for insights instead of developing strategic campaigns. In fact, 69% of communicators said they’d rather spend time building strategic messaging plans, and 65% said they’d prefer to put more effort into pitching or influencer outreach rather than media analysis. Basic metrics aren’t helpful, and automated tools don’t add enough valuable context for business leaders. Operating efficiently is paramount.
It’s essential to know that you don’t need to go overboard on measurement, especially when you clearly understand your business goals. For both B2C and B2B, it’s the context, not the counts, that will move the needle.
Similarly, we often see sarcasm used on social media and even in traditional media, which can easily be misinterpreted when assessing tone. This can be as challenging to detect as contextualizing a positive statement about you in an otherwise negatively-toned article. Analysts expertly trained at picking up these nuances are extremely efficient and accurate, giving you back time for more strategic endeavors.
Gain More Insight
Over 60% of communicators would like media intelligence to be more “insightful.” Now that is an aspirational concept that could mean different things for every business, but in reality it goes deeper than surface level measurement.
Say goodbye to content overload. In today’s media landscape, business leaders would much rather have fewer pieces of very insightful data, than a mass amount of media coverage without much context.
Obtaining this data can provide insight into share of voice and how your company is performing against competitors, let you dig into sentiment for brand and reputation drivers, and show how your organization is perceived on social media. Communicators also need to illustrate how corporate social responsibility efforts are progressing, along with sentiment at conferences and trade shows. At the end of the day, this is the type of aspirational data that is going to put businesses ahead of the competition.
As it stands, there needs to be a fundamental shift in the way communicators generate trustworthy media intelligence. With CEOs and boards demanding data-driven analytics to help make decisions, communicators need to take the next step with their analytics solutions. The industry needs to rise to the measurement challenge so the information it delivers is insightful, reliable, and exceeds expectations.

Everybody is talking about how artificial intelligence (AI) is changing the world and how it is the future of just about everything. Even communications professionals are abuzz with their desire to jump on the AI bandwagon for their media analytics.
It’s true; AI can be pretty impressive. It is already recommending products to consumers, catching credit card fraud, and targeting advertising with uncanny accuracy and effectiveness. Even doctors are starting to get assistance from AI in diagnosing disease through analysis of symptoms and lab results.
So how does AI “play nice” in the communications world? In one word – carefully.
Let’s start by digging into what AI does well…
The Good
Most experts would agree that AI can be great at handling scenarios that involve a yes/no answer, particularly when there is a robust feedback loop to tell the system when its prediction is right or wrong on an ongoing basis.
Let’s look at three examples of AI usage that illustrate this strength:
Determining Fraudulent and Legitimate Charges
When a credit card is stolen, it can create a perfect learning opportunity for AI. A bank customer reports their card stolen and then identifies which charges were legitimate and which were fraudulent. Any uncontested charges are implicitly used as confirmation of valid transactions, thereby reinforcing the characteristics of legitimate transactions. Take that information from millions of cardholders, and AI can use those data points to predict with uncanny accuracy when a change in a purchasing pattern is likely fraudulent.
AI also uses “geofenced” data to protect your credit card account – knowing the locations (geographies) you normally visit and spend in. In addition, AI “learns” how you (and thousands of other similar travelers) travel, using historical patterns of purchases – hotels, restaurants, etc. – to approve out-of-town spending or flag it as suspicious. Why does that work? Because AI has perfect data from a feedback loop with thousands or millions of data points and is being taught the right answer every day. Even if you move to a new city, AI can use a real-time feedback loop to generate new data and adjust its predictions with no human input required other than normal card use in your new hometown.
Credit card transaction validation is a very effective use of a yes/no feedback loop that drives powerful AI learning and effectiveness.
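A toy sketch of that yes/no feedback loop: every confirmed outcome (legitimate or fraudulent) becomes a labeled example for a classifier. The features, amounts, and data below are invented for illustration; real systems learn from millions of confirmed transactions.

```python
# Toy sketch: confirmed card outcomes become yes/no labels that train a classifier.
from sklearn.linear_model import LogisticRegression

# Features per transaction: [amount_usd, miles_from_home, hour_of_day]
X = [
    [35.0,     2, 12],   # cardholder confirmed legitimate
    [60.0,     5, 19],   # legitimate
    [900.0, 4200,  3],   # cardholder reported fraudulent
    [750.0, 3900,  4],   # fraudulent
]
y = [0, 0, 1, 1]         # 0 = legitimate, 1 = fraudulent (the yes/no feedback)

model = LogisticRegression().fit(X, y)
new_charge = [[820.0, 4100, 2]]
print(model.predict(new_charge))        # likely [1] -> flag as suspicious
print(model.predict_proba(new_charge))  # class probabilities
```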
Online Advertising and Product Recommendations
When you see ads on the internet, most of the advertisers are using you to help test thousands of variants of different advertising attributes such as the type and size of ad, time of day delivery, pricing deals, and even the words shown to you. They might even target specific product ads based on what you have shopped for in the past. (Who hasn’t been chased by ads across the Internet for that new pair of shoes you browsed one time on an e-commerce site?)
How are they doing this? The advertising companies are improving their targeting by using AI reinforced with perfect information. When you click on an ad and go to an e-commerce site, you either buy, or you don’t. YES vote. NO vote. AI will constantly learn which advertising attributes work and cause people to click. With millions of interactions to learn from – all tagged with reliable, fact-based results – computers can learn very quickly what works best in just about any situation.
In a similar scenario, Amazon, the e-commerce giant, uses AI to drive product recommendations. For example, when buying a shirt on Amazon, you see a set of product photos (slacks, belts, etc.) with the headline, “People who bought this also bought these:” What the AI technology at Amazon and similar online companies does is look for patterns in people’s purchasing behavior to suggest additional items that follow that same pattern. Of course, if you then click and buy the recommended product, that’s one more ‘plus’ vote for that AI recommendation.
Advertising customization and targeting and Amazon’s product recommendations are two more good examples of AI learning from actual transactions. You either clicked on the ad and bought the recommended product, or you didn’t. It’s a yes/no answer providing perfect feedback.
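As a simplified illustration of the “people who bought this also bought” pattern, even plain co-purchase counting over order histories yields usable recommendations. The order data below is invented, and real recommenders use far richer models than this sketch.

```python
# Simplified sketch of co-purchase counting behind "people who bought this also bought".
from collections import Counter
from itertools import combinations

orders = [                          # invented order histories
    {"shirt", "belt", "slacks"},
    {"shirt", "belt"},
    {"shirt", "socks"},
]

co_purchases = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_purchases[(a, b)] += 1   # count the pair in both directions
        co_purchases[(b, a)] += 1

def also_bought(item: str, top_n: int = 2):
    pairs = ((other, n) for (first, other), n in co_purchases.items() if first == item)
    return sorted(pairs, key=lambda p: -p[1])[:top_n]

print(also_bought("shirt"))   # e.g. [('belt', 2), ('slacks', 1)]
```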
Spam Email Identification
Identifying whether an email is legitimate or spam is one of the best mass applications of classification in supervised machine learning. In this task, often called ‘Ham or Spam’, AI uses patterns of words, characters, and senders to identify likely spam. Yet the system can only improve if humans flag emails as spam – or go to their spam folder to retrieve legitimate emails and flag them as ‘Ham’ (not spam).
Early spam identification systems used the feedback of the masses to apply standardized, mass email filters to individual users. In recent years, some spam filters have begun to allow for additional customized spam determination based on individual user preferences and feedback. This approach becomes especially helpful as some people flag legitimate e-commerce offerings as spam – offers that they perhaps opted into months or years before but no longer want to receive. Other users will want to keep those very same emails coming to their normal inbox.
The human yes/no feedback loop is critical to the ongoing, evolving effectiveness of these spam-filtering AI tools.
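A compact sketch of the ‘Ham or Spam’ setup: human flags supply the labels, and a simple word-pattern classifier learns from them. The messages are invented, and real filters use many more signals (senders, headers, per-user feedback) than this illustration.

```python
# Compact "Ham or Spam" sketch: human-flagged messages train a word-based classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WIN a FREE prize claim now",        # flagged spam by users
    "limited offer act now free money",  # spam
    "meeting moved to 3pm tomorrow",     # ham
    "here is the report you requested",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)
print(clf.predict(["claim your free prize now"]))   # expected: ['spam']
```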
The Bad
Here are some examples of where AI is not ready for prime time.
If the answer is not known, it can’t be fed back to the computer
For example, say you’re looking to hire a new employee, and the AI says you should make an offer to a particular candidate based on the data. If you hire that person and it either works out or doesn’t, that’s one piece of data. But what about the people you didn’t hire? You will never know whether they would have worked out, so the AI never gets feedback on your rejections. It is hard for AI to determine the best hire when it only gets feedback on the people you chose to hire.
This is the challenge of what are called Type I vs. Type II errors. A Type I error is a false positive: someone the AI recommended who turned out to be a bad hire. We can learn from that type of error. A Type II error, on the other hand, is a false negative: someone the AI passed on who would have been good, but you’ll never know that for sure. We cannot learn from that type of error. So when AI cannot be given information on Type II errors, it has only half the necessary learning set to advance properly.
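A small illustration of why this yields a fatally incomplete training set: outcome labels exist only for candidates who were hired, so possible Type II errors never enter the data. The records below are invented.

```python
# Illustration: only hired candidates ever receive an outcome label,
# so rejected candidates (possible Type II errors) never enter the training set.
candidates = [
    {"name": "A", "hired": True,  "worked_out": True},
    {"name": "B", "hired": True,  "worked_out": False},  # Type I error: observable
    {"name": "C", "hired": False, "worked_out": None},   # outcome unknowable
    {"name": "D", "hired": False, "worked_out": None},   # possible Type II error, never learned
]

training_set = [c for c in candidates if c["hired"]]
print(f"{len(training_set)} of {len(candidates)} candidates have usable labels")
```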
Another variation of the AI challenges in hiring is when the AI system is exposed to all-new data. For example, if your resumes to date have all been from East Coast schools and for applicants with engineering degrees, what does your system predict when exposed to a Stanford graduate with a physics degree? AI struggles to reach a conclusion when exposed to vastly new, deviant data points that it has not seen before.
Can AI still learn in these circumstances? Yes, to a degree, but it does not see (and cannot learn from) the missed opportunities, and it needs enough of the new data points to begin to model and predict outcomes. The data that is collected from the hiring decision represents a fatally incomplete training set.
If the data sets are small
For example, if you are making a one-time life decision such as which house to buy (not the price to pay, which AI is good at, but rather which house will work for you and your family), the data set is not large. The data might suggest you will like the house for its community and features. Whether or not the purchase works out, you still have only a single piece of feedback to learn from. It is hard to learn from tiny data sets; you need thousands if not tens of thousands of data points to train a machine learning model to make informed decisions.
If the answer is indeterminate rather than a clear yes/no
This is probably the biggest area where unassisted AI fails at proper classification. And it is the problem that most affects those of us seeking trustworthy media analytics.
How a person sees content frequently depends on their perspective. ‘Good’ things can in fact be ‘bad’ and vice-versa. Computers can’t be taught with a one-answer-fits-all approach, yet that is what most AI-powered automated media intelligence solutions attempt today. Two people can read the same story and have a very different opinion of the sentiment. Their take may depend on their political or educational background, their role in a company, or even the message the company wants heard in public – positive discussion of a taboo topic, for instance, may be seen as a bad thing.
In addition, AI can’t reliably interpret many language structures and usages, including even simple phrases like “I love it,” since they can be serious or sarcastic. AI also struggles with double meanings and plays on words. And AI is unable to address the contextual and temporal nature of text and how the words, topics, and phrases used in content change over time. For example, a comparison to Tiger Woods might be positive when referring to his early career, less positive in his later career, and perhaps quite negative in a comparison to him as a husband.
If the subject matter is evolving
Most AI solutions being applied to media analysis today use what can be called a ‘static dictionary’ approach. They choose a defined set of topical words (or Booleans) and a defined set of semantically linked emotional trigger words. The AI determines the topic and the sentiment by comparing the words in the content to the static dictionary. Current studies like “The Future of Coding: A Comparison of Hand-Coding and Three Types of Computer-Assisted Text Analysis Methods” (Nelson et al., 2017) have shown that the dictionary methodology does not work reliably and that its error increases over time.
The fundamental flaw in this AI method is that the static dictionary doesn’t evolve as rapidly as topics and concepts shift over time and new veins of discussion are introduced. Unless there is a way to regularly provide feedback to the AI solution, it cannot learn, and the margin of error grows and compounds quickly. It is a bit like trying to talk about Facebook to someone transported from the year 2004 who only understands structured publishing – they simply cannot understand what you are talking about in any meaningful way because mass social media did not yet exist.
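A toy demonstration of the static-dictionary flaw: if the topical word list was frozen years ago, coverage of a platform or concept that did not exist then is simply missed. The dictionary and headlines below are invented for illustration.

```python
# Toy demonstration: a static dictionary frozen years ago misses new discussion.
STATIC_TOPIC_DICTIONARY = {"newspaper", "magazine", "broadcast", "publisher"}  # frozen in 2004

headlines = [
    "Publisher expands broadcast lineup",
    "Brand backlash spreads on TikTok",   # new platform, not in the dictionary
]

for headline in headlines:
    words = set(headline.lower().split())
    matched = words & STATIC_TOPIC_DICTIONARY
    print(f"{'MATCHED' if matched else 'MISSED '}: {headline}")
```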
As these examples show, AI struggles with interpreting complex situations with either small data sets or indeterminate answers that evolve over time. So what does this mean to me as a professional communicator?
AI applied to media analytics needs to be guided to be successful. There are three specific areas where AI needs a boost to be successful analyzing media:
- Changing Conversations: As seen in the Cal Berkeley research, for an AI system analyzing media content to remain accurate and relevant, it needs to be constantly trained as conversations and popular phraseology shift over time. You need enough consistently superior-quality analysis to feed back to the computer and train it.
- Perspective: You need to tune AI specifically to understand your perspective. A solution tuned to someone else or all companies blended together just won’t work. This is because the phrase that one person (or company) determines is relevant and positive might be viewed differently by another person with different priorities or messaging goals.
- Context: The conversation ecosystem needs to be taken into account. Often coverage is bookended by events, public discourse, and related coverage outside the sample set of coverage. In his article just a few weeks ago for MIT Sloan Management Review, Sam Ransbotham writes, “While the pace of business may be ever-accelerating, many business decisions still have time for a second opinion where human general knowledge of context can add value.”
It doesn’t mean you have to analyze everything to train an AI system, but you need to analyze enough data so that your computer can learn robustly from it. AI alone can’t teach itself about changing social conversations, perspective, or context.
On the bright side, humans can work with AI by defining, training, and maintaining a dynamic, accurate, and reliable human feedback loop. This means persistent training, unique for each individual company, with human attention to help AI bridge the gap between what it’s trained on, and what the customer is trying to know. Supervised machine learning is almost universally considered to be the leading approach to solving the human content analytics AI problem for the foreseeable future.
So how will you use AI? Smartly, I hope.
PublicRelay delivers a world-class media intelligence solution to big brands worldwide by leveraging both technology and highly-trained analysts. It is a leader on the path to superior AI analytics through supervised machine learning. Contact PublicRelay to learn more.

A recent Venture Beat article titled “AI will turn PR people into superheroes within one year” predicts that artificial intelligence and machine learning will explode within the public relations industry over the next three decades.
Data found through machine learning, combined with professional hunches and experience, can help PR experts with real-life applications like steering companies clear of future communications catastrophes. We’ll explore more about AI and media intelligence in this Q&A exchange with PublicRelay’s Bill Mitchell, Chief Technology Officer.
Q: Can artificial intelligence sometimes be too artificial?
A: I agree with the position taken in the Venture Beat article that sometimes people are too quick to jump on the artificial intelligence bandwagon. With any highly nuanced, context-based industry like media intelligence, it’s important to consistently apply machine learning and validate our results – paving the way for further gains in accuracy and efficiency. I’m hesitant to start wheeling out the term artificial intelligence when we’re not trying to use machines to replace human thought. I prefer to call it human-assisted AI because we’re really using it to give us superpowers and visibility into a much broader range than ever before.
Supervised machine learning does need to be supervised, however. You’re often limited by the training data that you send in, and a system will miss new emerging topics or trends that were not in the original training data. It’s not a “set it and forget it” type of solution, especially when used to analyze data from highly variable inputs like traditional and social media.
Q: When can too much AI be a bad thing?
A: When you use GPS in a car, you still need to remain aware of your surroundings. Relying on a purely automated solution is like driving a car while looking solely at the GPS and not the road. There will be unexpected turns and situations that the GPS (or the automated media intelligence solution) has not been trained for. When it comes to using an unsupervised system for media intelligence and analysis, you’re “driving without windows,” yet it’s still up to you to spot the dangers. That’s why human-assisted AI solutions deliver the best of both worlds – GPS and windows!
Q: How will human-assisted AI give superpowers to the PR industry?
A: Imagine the power of Superman or Wonder Woman if they could simultaneously read newspapers from coast to coast, listen to TV broadcasts in all local markets at once, read all tweets, and give you only what’s relevant to your brand. Human-assisted AI supercharges a company’s media monitoring capabilities by delivering the intelligence previously hidden in the context of what you are collecting.
Our systems at PublicRelay take in tens of millions of articles per week. It would be simply impossible for a human to review content at that sort of scale. Using supervised machine learning, we’re able to make intelligent routing decisions for relevant content. The result is that our analysts don’t miss important stories, yet our customers don’t miss out on the business insights that individuals uniquely attuned to their business are able to make.
From there, our analysts are able to dig deep to extract insights on the topics that you have requested. As an added bonus, they also uncover topics that you might care about (something machines cannot do). For example, one recent issue that we blogged about was the ability to extract contextual meaning from a population of articles that were summarily categorized as “negative tone” by completely automated solutions. Most news articles, by the way, strive to be neutral. Just because the article’s overall tone is neutral doesn’t mean that the tone for your company, product, service, or competitors was neutral. Wouldn’t that information be more useful to you?
Using machine learning to isolate a population of articles and shortlist them for more detailed inspection and rich content labeling by analysts is a great example of a complementary AI-human hybrid approach.
Q: Can human-assisted AI predict crises before they happen?
A: The ability to move very quickly around social media content can bring a real competitive advantage to media pros. Imagine if you had an extra two hours to prepare a well-researched, thought-out response to a particularly nasty allegation by a competitor before it went viral. That would be huge!
When measuring media coverage impact the first 48 hours are critical. In fact, within the first 24 hours, 99% of the social media coverage (be it through Facebook, LinkedIn, Twitter, etc.) is going to come in. The ability to receive a “heads up” notification just a couple of hours in advance of what’s to come would definitely feel like a superpower. Without accurately identifying content, you could be making predictions of virality for content that isn’t even relevant – yet another reason why having a supervised machine learning system with trusted results is the bedrock for this approach.
Q: Why is supervised machine learning more relevant for media intelligence?
A: A simple answer is because of the supervised learning component. At their core, both traditional and social media are driven to avoid the type of “sameness” that machines find easy to learn from. While brands and communicators strive to “get their message picked up” and repeated — outlets, authors and influencers will rarely repeat that message verbatim and in a predictable pattern. The same happens with spokespeople who are told to “stay on message”.
Supervised machine learning, on the other hand, actually gets better when it gets something wrong. It delivers the best results when it’s allowed to be expansive enough to find terms that sit at the decision boundary and require human interpretation. A system that is too tightly constrained to prior content also leaves you vulnerable to missing important content.
Human-assisted AI also provides a mechanism for fast and accurate discovery of new concepts. The human analysts doing the final quality assurance will catch anomalies that AI would need to see repeated multiple times before recognizing a pattern. For example, a new influencer or expert may be getting covered in outlets or by authors that you care about. If an analyst sees the name or company appear within a short time span or across multiple outlets, they alert the client immediately. Or perhaps authors are mentioning you when they actually meant to attribute the activity to a competitor. If it’s negative, you might want to get that corrected as soon as possible.
Q: What are your personal predictions for PR and machine learning in the next few years?
A: I think that we’re going to see more companies revise their initial thinking that artificial intelligence is going to transform their business. Instead, I think they’ll focus on where machine learning can help them be more intelligent about deploying their resources. Customers are going to start demanding more of their vendors – with media mentions and initial keyword/phrase tracking becoming “table stakes.” Teasing out conceptual tags and aligning the data analysis with KPIs and real business goals will be the norm. In the future of AI, solutions that give leaders the answers they need will win out over mere tools.

A recent Bloomberg Businessweek article titled “The Future of AI Depends on a Huge Workforce of Human Teachers” discussed what to expect in the future of artificial intelligence.
It’s clear from this article that the tech industry and professional investors are betting big on AI. But what the Bloomberg article shows is that they are specifically putting their money (more than $50 million in 2017 alone) towards data-labeling startups that collectively employ over 1 million people worldwide to train AI software. The bet they’re placing is on a technique known as supervised machine learning.
Supervised machine learning is a subset of artificial intelligence and one of the two most popular ways computers learn (a brief sketch follows the list below):
- In supervised machine learning, a computer model is fed raw inputs that are already labeled. This method is the equivalent of technology learning like a student working with a teacher. The teacher (or human) provides examples to the student (a computer) of what is right and wrong, correcting the student’s work along the way. In this method, the student learns the general rules of a subject and applies these lessons to predict the right answer to a new question in the future.
- In unsupervised machine learning, a computer is fed raw inputs without any labels to analyze. The result is that the computer must find patterns in that data on its own without any human assistance. This method is the equivalent of trying to learn a foreign language by reading millions of pages of untranslated text without the assistance of an instructor, verb conjugations, or vocabulary dictionary.
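For a concrete side-by-side comparison, the sketch below pairs a supervised classifier (trained on human labels) with an unsupervised clustering step (no labels at all) using scikit-learn. The example texts and labels are invented and purely illustrative.

```python
# Side-by-side sketch: supervised learning uses human labels; unsupervised does not.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "ceo praises record quarterly results",
    "analysts applaud strong earnings",
    "regulators investigate data breach",
    "customers complain about outage",
]
X = TfidfVectorizer().fit_transform(texts)

# Supervised: a human "teacher" provides the right answers up front.
labels = ["positive", "positive", "negative", "negative"]
supervised = MultinomialNB().fit(X, labels)
print(supervised.predict(X[:1]))            # learns to reproduce the human labels

# Unsupervised: no labels; the model must find structure on its own.
unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(unsupervised.labels_)                 # cluster ids with no inherent meaning
```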
If you look at the best applications of AI, they need humans to provide feedback on what is right and wrong. For example, Tesla Autopilot, a driver-assist feature offered by Tesla Motors, uses supervised machine learning to train its self-driving technology. In this case, Tesla Autopilot is taught how to drive by the human owners who operate their cars every day. As Tesla owners drive, sensors in their vehicles collect hundreds of millions of data points, from GPS coordinates to actual videos captured by the car’s front and rear cameras. The vehicles then wirelessly send this data back to Tesla’s Autopilot data model, creating a massive library of knowledge about how to drive. The human feedback loop is essential here because if there is an error, the damage could be catastrophic.
By creating this library with the help of human drivers, Autopilot can use visual techniques to break down the videos and understand why drivers reacted the way they did. So, when a ball or child crosses the self-driving car’s path, Autopilot recognizes the pattern and reacts accordingly—stop!
Utilizing AI in Media Intelligence
Many automated media monitoring solutions say they use artificial intelligence and machine learning. But if there is no dedicated human analyst anywhere in the process to check or label the input data (i.e. your organization’s media), it means one of two things:
- The results you’re getting from your company’s media intelligence solution are inaccurate because it uses unsupervised machine learning. The AI is teaching itself, and without labeled data you probably shouldn’t count on it to inform your decisions.
- You, the customer, are expected to be like the coders discussed in the Bloomberg article. This means taking the time out of your own schedule to train the algorithms on what is right and wrong or hiring someone to do so.
Some people say a supervised machine learning solution is expensive, but supervised machine learning boosts media intelligence accuracy and helps communications pros make better decisions. To get accurate data, you need to either hire an analyst to train the algorithm so that it learns the right way, or spend time doing it yourself. Otherwise, you may end up paying in a different way with inaccurate results.
How AI Can Affect Media Intelligence in the Future
AI and machine learning are going to have an enormous impact on public relations and media intelligence. AI will give you more than just analytics, it will give you answers to what is happening in your media coverage and why – all on demand. Not only that, but it will forecast what topics could be a problem in the future, where those problems will occur, and how long the problems may last. These predictive analytics will make you proactive rather than just reactive.
As AI gets “smarter”, it will make you better and more prepared. But AI will only become as smart as the human teachers who train it along the way. It’s a virtuous upward cycle of humans and technology making each other better. Humans can train the machines and pick up the last pieces of the most complex analyses that AI can’t handle, like idiomatic phrasing and sarcasm. If done right, the benefits are truly worth the investment.
PublicRelay continues to invest in AI. In fact, supervised machine learning is embedded into our solution both in terms of making the analysis better and faster as well as in the outputs to the client.
Contact PublicRelay to learn more about how accurate data can help you with media intelligence.