
Everybody is talking about how artificial intelligence (AI) is changing the world and how it is the future of just about everything. Even communications professionals are abuzz with their desire to jump on the AI bandwagon for their media analytics.
It’s true; AI can be pretty impressive. It is already recommending products to consumers, catching credit card fraud, and targeting advertising with uncanny accuracy and effectiveness. Even doctors are starting to get assistance from AI in diagnosing disease through analysis of symptoms and lab results.
So how does AI “play nice” in the communications world? In one word – carefully.
Let’s start by digging into what AI does well…
The Good
Most experts would agree that AI can be great at handling scenarios that involve a yes/no answer, particularly when there is a robust feedback loop to tell the system when its prediction is right or wrong on an ongoing basis.
Let’s look at three examples of AI usage that illustrate this strength:
Determining Fraudulent and Legitimate Charges
When a credit card is stolen, it can create a perfect learning opportunity for AI. A bank customer reports their card stolen and then identifies which charges were legitimate and which were fraudulent. Any uncontested charges are implicitly used as confirmation of valid transactions, thereby reinforcing the characteristics of legitimate transactions. If you take that information from millions of cardholders, AI can use those data points to predict with uncanny accuracy when a change in a purchasing pattern is likely fraudulent.
AI also uses “geofenced” data to protect your credit card account – knowing the locations (geographies) where you normally visit and spend. In addition, AI “learns” how you (and thousands of other similar travelers) travel, using historical patterns of purchases – hotels, restaurants, etc. – to approve or flag as suspicious any out-of-town spending. Why does that work? Because AI has perfect data from a feedback loop with thousands or millions of data points and is being taught the right answer every day. Even if you move to a new city, AI can use a real-time feedback loop to generate new data and adjust its predictions with no human input required other than normal card use in your new hometown.
Credit card transaction validation is a very effective use of a yes/no feedback loop that drives powerful AI learning and effectiveness.
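The yes/no feedback loop described above can be sketched as a tiny online classifier that updates every time a cardholder confirms or disputes a charge. This is a minimal illustration with invented feature names and synthetic data, not how any real bank’s system works:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class FraudModel:
    """Minimal online logistic regression: one yes/no label per update."""

    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        """Return an estimated probability that a transaction is fraudulent."""
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)

    def feedback(self, x, is_fraud):
        """A single confirmed/disputed charge adjusts the model (one SGD step)."""
        err = self.predict(x) - (1.0 if is_fraud else 0.0)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Hypothetical features: [amount_vs_typical, distance_from_home, merchant_is_new]
model = FraudModel(n_features=3)
random.seed(0)
for _ in range(2000):
    if random.random() < 0.5:   # legitimate: small, local, familiar merchant
        x, label = [random.random() * 0.3, random.random() * 0.2, 0.0], False
    else:                       # fraudulent: large, far away, new merchant
        x, label = [0.7 + random.random() * 0.3, 0.8 + random.random() * 0.2, 1.0], True
    model.feedback(x, label)    # the cardholder's yes/no answer is the feedback loop

print(round(model.predict([0.1, 0.1, 0.0]), 2))  # familiar local charge: low fraud score
print(round(model.predict([0.9, 0.9, 1.0]), 2))  # unusual distant charge: high fraud score
```

The point of the sketch is the feedback mechanism: every labeled transaction nudges the model, which is why millions of daily yes/no answers make this class of problem such a natural fit for AI.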
Online Advertising and Product Recommendations
When you see ads on the internet, most of the advertisers are using you to help test thousands of variants of different advertising attributes such as the type and size of ad, time of day delivery, pricing deals, and even the words shown to you. They might even target specific product ads based on what you have shopped for in the past. (Who hasn’t been chased by ads across the Internet for that new pair of shoes you browsed one time on an e-commerce site?)
How are they doing this? The advertising companies are improving their targeting by using AI reinforced with perfect information. When you click on an ad and go to an e-commerce site, you either buy, or you don’t. YES vote. NO vote. AI will constantly learn which advertising attributes work and cause people to click. With millions of interactions to learn from – all tagged with reliable, fact-based results – computers can learn very quickly what works best in just about any situation.
In a similar scenario, Amazon, the e-commerce giant, uses AI to drive product recommendations. For example, when buying a shirt on Amazon, you see a set of product photos (slacks, belts, etc.) with the headline, “People who bought this also bought these:” What the AI technology at Amazon and similar online companies does is look for patterns in people’s purchasing behavior to suggest additional items that follow that same pattern. Of course, if you then click and buy the recommended product, that’s one more ‘plus’ vote for that AI recommendation.
Advertising customization and targeting and Amazon online shopping are more good examples where AI is learning from actual transactions. You either clicked on the ad and bought the recommended product, or you didn’t. It’s a yes/no answer providing perfect feedback.
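One common way to learn from click feedback, in the spirit of the ad testing described above, is an epsilon-greedy bandit: mostly show the variant with the best observed click rate, but occasionally explore the others. The variant names and click rates below are made up for illustration:

```python
import random

random.seed(1)
# Hypothetical true click-through rates the system does NOT know in advance:
true_click_rate = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.11}
shows = {v: 0 for v in true_click_rate}
clicks = {v: 0 for v in true_click_rate}

def choose_variant(epsilon=0.1):
    if random.random() < epsilon:                 # explore: try a random variant
        return random.choice(list(true_click_rate))
    # exploit: show the variant with the best observed click rate so far
    return max(true_click_rate,
               key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)

for _ in range(20000):
    v = choose_variant()
    shows[v] += 1
    if random.random() < true_click_rate[v]:      # the yes/no feedback: click or no click
        clicks[v] += 1

# Over time the system concentrates impressions on the variant people actually click
print(max(true_click_rate, key=lambda v: shows[v]))
```

Real ad platforms are far more sophisticated, but the core loop is the same: each impression produces a yes/no outcome, and the allocation shifts toward what works.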
Spam Email Identification
Identifying when email is legitimate or spam is one of the best mass-applications of classification in the supervised machine learning field. Often called ‘Ham or Spam’, AI uses patterns of words, characters, and senders to identify likely spam. Yet the system can only improve if humans flag emails as spam – or go to their spam folder to retrieve legitimate emails and flag them as ‘Ham’ (not spam).
Early spam identification systems used the feedback of the masses to apply standardized, mass email filters to individual users. In recent years, some spam filters have begun to allow for additional customized spam determination based on individual user preferences and feedback. This approach becomes especially helpful as some people flag legitimate e-commerce offerings as spam – offers that they perhaps opted into months or years before but no longer want to receive. Other users will want those very same emails to keep coming to their normal inbox.
The human yes/no feedback loop is critical to the ongoing, evolving effectiveness of these spam-filtering AI tools.
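The classic ‘Ham or Spam’ setup can be sketched with a tiny naive Bayes classifier trained on human yes/no flags. The six-message corpus below is invented; real filters train on millions of user-labeled emails:

```python
import math
from collections import Counter

# Toy training set: each message carries a human-supplied 'spam' or 'ham' flag.
train = [
    ("win free money now claim prize", "spam"),
    ("free prize click now winner", "spam"),
    ("limited offer free cash win", "spam"),
    ("meeting agenda for tuesday attached", "ham"),
    ("lunch tomorrow with the project team", "ham"),
    ("quarterly report draft attached for review", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    scores = {}
    for label in ("spam", "ham"):
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("claim your free prize now"))          # spam
print(classify("draft agenda for the team meeting"))  # ham
```

A user flagging a message as spam (or rescuing one from the spam folder) simply adds another labeled example to `train` and the counts are updated – which is exactly the human feedback loop the section describes.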
The Bad
Here are some examples of where AI is not ready for prime time.
If the answer is not known, it can’t be fed back to the computer
For example, say you’re looking to hire a new employee, and the AI system says you should make an offer to a particular candidate based on the data. If you hire that person and it either works out or doesn’t, that’s one piece of data. But what about the people you didn’t hire? You will never know whether they would have worked out, and AI cannot confirm your rejections. It is hard for AI to determine the best hire when it only gets feedback on the people you chose to hire.
This is the challenge of what are called Type I vs. Type II errors. A Type I error is a false positive: someone AI recommended who turned out to be a bad hire. We can learn from that type of error. A Type II error, on the other hand, is a false negative: someone AI passed on who would have been good – an outcome that can never be observed. We cannot learn from that type of error. When AI cannot be given information on Type II errors, it has only half the learning set it needs to improve.
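The feedback gap in hiring can be made concrete with a few lines of code. The candidates and outcomes below are entirely hypothetical; the point is which rows ever produce a training label:

```python
# Hypothetical candidates: the model's recommendation vs. the (unknowable)
# truth of whether each person would have succeeded in the job.
candidates = [
    {"name": "A", "model_says_hire": True,  "would_succeed": True},
    {"name": "B", "model_says_hire": True,  "would_succeed": False},  # Type I error: observed
    {"name": "C", "model_says_hire": False, "would_succeed": True},   # Type II error: never observed
    {"name": "D", "model_says_hire": False, "would_succeed": False},
]

# Only hired candidates ever produce an outcome the model can train on:
training_labels = [
    (c["name"], c["would_succeed"]) for c in candidates if c["model_says_hire"]
]
print(training_labels)  # [('A', True), ('B', False)]

# Rejected candidates generate no label at all, so the model can never
# learn that passing on C was a mistake:
unobserved = [c["name"] for c in candidates if not c["model_says_hire"]]
print(unobserved)  # ['C', 'D']
```

This is what makes the hiring training set “fatally incomplete”: half of the error types the model commits simply never show up in its data.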
Another variation of this hiring challenge arises when the AI system is exposed to all-new data. For example, if your resumes to date have all been from East Coast schools and from applicants with engineering degrees, what does your system predict when exposed to a Stanford graduate with a physics degree? AI struggles to reach a conclusion when exposed to vastly new data points that it has not seen before.
Can AI still learn in these circumstances? Yes, to a degree, but it does not see (and cannot learn from) the missed opportunities, and it needs enough of the new data points to begin to model and predict outcomes. The data that is collected from the hiring decision represents a fatally incomplete training set.
If the data sets are small
For example, if you are making a one-time life decision such as which house to buy (not the price to pay, which AI is good at, but rather which house will work for you and your family), the data set is not large. The data might suggest you will like the house for its community and features. Whether or not the purchase works out, you still have only a single piece of feedback to learn from. It is hard to learn from tiny data sets; you need thousands if not tens of thousands of data points to train a machine learning system to make informed decisions.
If the answer is indeterminate compared to the yes/no answer
This is probably the biggest area where unassisted AI fails at proper classification. And it is the problem that most affects those of us seeking trustworthy media analytics.
How a person sees content frequently depends on their perspective. ‘Good’ things can in fact be ‘bad’ and vice versa, and computers can’t be taught one-answer-fits-all approaches – which is what most AI-powered automated media intelligence solutions are doing today. Two people can read the same story and have very different opinions of its sentiment. Their take may depend on their political or educational background, their role in a company, or even the message the company wants heard in public – for instance, when positive discussion of a taboo topic is seen as a bad thing.
In addition, AI can’t reliably interpret many language structures and usages, including even simple phrases like “I love it,” which can be serious or sarcastic. AI also struggles with double meanings and plays on words. And AI is unable to address the contextual and temporal nature of text – how the words, topics, and phrases used in content change over time. For example, a comparison to Tiger Woods might be positive when referring to his early career, less positive in his later career, and perhaps quite negative as a comparison to him as a husband.
If the subject matter is evolving
Most AI solutions being applied to media analysis today use what can be called a ‘static dictionary’ approach. They choose a defined set of topical words (or Booleans) and a defined set of semantically linked emotional trigger words. The AI determines the topic and the sentiment by comparing the words in the content to the static dictionary. Recent studies such as “The Future of Coding: A Comparison of Hand-Coding and Three Types of Computer-Assisted Text Analysis Methods” (Nelson et al., 2017) have shown that the dictionary methodology does not work reliably and that its error increases over time.
The fundamental flaw in this AI method is that the static dictionary doesn’t evolve rapidly as topics and concepts shift over time and new veins of discussion are introduced. Unless there is a way to regularly provide feedback to the AI solution, it cannot learn and the margin of error grows and compounds quickly. It is a bit like trying to talk about Facebook to someone transported from the year 2004 who only understands structured publishing – they just cannot understand what you are talking about in any meaningful manner because mass social media was not yet developed.
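A toy version of the static-dictionary approach shows the decay directly: any phrasing that postdates the word lists scores as neutral. The dictionaries below are invented examples, not any vendor’s actual lexicon:

```python
# Fixed-in-time sentiment word lists, in the style of a 'static dictionary'.
POSITIVE = {"great", "excellent", "love", "win", "strong"}
NEGATIVE = {"bad", "terrible", "hate", "loss", "weak"}

def dictionary_sentiment(text):
    """Score text purely by counting dictionary hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Vocabulary the dictionary anticipated is handled fine:
print(dictionary_sentiment("a strong quarter, investors love it"))  # positive
print(dictionary_sentiment("this rollout was a terrible loss"))     # negative

# Newer slang the dictionary never saw scores as neutral, even though a
# human reader would call it clearly negative:
print(dictionary_sentiment("the launch got ratioed and memed into oblivion"))  # neutral
```

Without a feedback mechanism to add new trigger words and retire stale ones, every shift in how people talk silently widens the error – which is the compounding drift the section describes.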
As these examples show, AI struggles with interpreting complex situations with either small data sets or indeterminate answers that evolve over time. So what does this mean to me as a professional communicator?
AI applied to media analytics needs to be guided to be successful. There are three specific areas where AI needs a boost to be successful analyzing media:
- Changing Conversations: As seen in the Cal Berkeley research, for an AI system analyzing media content to remain accurate and relevant, it needs to be constantly trained as conversations and popular phraseology shift over time. You need enough consistently superior-quality analysis to feed back to the computer and train it.
- Perspective: You need to tune AI specifically to understand your perspective. A solution tuned to someone else or all companies blended together just won’t work. This is because the phrase that one person (or company) determines is relevant and positive might be viewed differently by another person with different priorities or messaging goals.
- Context: The conversation ecosystem needs to be taken into account. Often coverage is bookended by events, public discourse, and related coverage outside the sample set of coverage. In his article just a few weeks ago for MIT Sloan Management Review, Sam Ransbotham writes, “While the pace of business may be ever-accelerating, many business decisions still have time for a second opinion where human general knowledge of context can add value.”
It doesn’t mean you have to analyze everything to train an AI system, but you do need to analyze enough data that your computer can learn robustly from it. AI alone can’t teach itself about changing social conversations, perspective, or context.
On the bright side, humans can work with AI by defining, training, and maintaining a dynamic, accurate, and reliable human feedback loop. This means persistent training, unique for each individual company, with human attention to help AI bridge the gap between what it’s trained on, and what the customer is trying to know. Supervised machine learning is almost universally considered to be the leading approach to solving the human content analytics AI problem for the foreseeable future.
So how will you use AI? Smartly, I hope.
PublicRelay delivers a world-class media intelligence solution to big brands worldwide by leveraging both technology and highly-trained analysts. It is a leader on the path to superior AI analytics through supervised machine learning. Contact PublicRelay to learn more.

With accurate media intelligence, your team can create data-driven plans, go after goals, and evaluate the impact their communications efforts are having on the company’s KPIs. But first, you need to make sure you’re asking the right questions about your media campaigns and messaging.
Some of the questions that PR/communications teams should be asking include the following:
Questions to Ask About Your Messages and Campaigns:
Is our media message pulling through?
Communicators should determine what outcomes they are trying to accomplish with their messaging and measure campaigns accordingly. For instance, are you trying to improve customer perceptions, strengthen investor relations, or demonstrate thought leadership? Collecting accurate data from both traditional and social media is an important first step. But understanding message pull through means you need to analyze the context of the stories you or your peers are appearing in. Did the author or our spokesperson convey the thought leadership message clearly or favorably? Do we need to adjust our strategy if it’s not working?
Are we over-allocating team resources to push an already successful message or topic?
This will be particularly important for teams that are resource-constrained or experiencing “initiative overload”. You need to accurately allocate resources to the campaigns that need them the most. Still not gaining traction with your efforts around increasing positive SOV for your workplace environment topics? Use media intelligence to determine new publications and influencers who have not yet covered your brand but have written about a competitor. Even with limited resources, this approach of using data to adjust allocation helps your communications team realize the value of using data to drive outcomes and not just output.
How successful was the media campaign or event? Who did we reach?
If you’re not already asking how your campaigns are performing, chances are someone else will ask you. Top-performing communicators track how their efforts are impacting the goals of the business. What are the brand and reputation drivers that the business is hoping to impact this year? Executives want to see real results – outcomes such as whether you not only improved SOV in your industry but also increased positive tone for your brand, products, or executives. Have you earned more positive coverage from authors not previously talking about you? Being able to show a report with data backing up the success of your efforts makes all the difference to leadership.
Is this a PR crisis? Are we helping or hurting the situation?
You know all too well that dealing with a potential PR crisis can be stressful because the crisis can widen if it’s not responded to correctly and in a timely way. Figuring out how and when to respond comes with experience, but wouldn’t it be helpful to have a proactive way to set strategy? The cornerstone of every crisis response strategy is strong data. Good media intelligence not only brings potential PR crises to your attention early, but can also provide a roadmap based on any previous crises. Does this crisis look the same regarding volume and tone? How quickly can we gauge media reaction as we respond so we can course correct? Sometimes a quick look backward can help chart your forward course.
Which messages or topics should we include or avoid in our content creation?
When developing content, you already know it’s important to be in tune with industry keywords as well as current news, trends, coverage, and social media sentiment relating to your industry or brand. Analyzing the tone and sentiment of various topics and subtopics will provide guidance as to what messages are resonating in the market. Adding in data points also available in each article like authors, outlets and influencers will help you more narrowly target your efforts.
Where is there whitespace for message expansion?
This question becomes critical in noisy or volatile industries. Finding a topic in your industry that your audience cares about but isn’t yet “owned” by any of your competitors can be a game changer. You get to jump-start a new conversation with a receptive audience – and offering up new content always gets the attention of the media. A win-win for everyone.
Feel Confident in Your Media Campaigns and Messages with On-Point Answers
The intelligence your media analytics provides should give you confidence that you’re presenting senior leadership with credible reports documenting accurate data tied to outcomes.
Pairing human analysis with technology gets you the story behind the story. Your communications team gets the benefit of accurate and timely media intelligence to inform strategy and measure progress without wasting precious effort on manual clean-up tasks. More importantly, you get to make data-driven decisions that will make your CEO take notice.
PublicRelay developed a human-hybrid media intelligence solution that gets you the data you need. Contact PublicRelay to learn more.

There is an old adage that bad news for your competitors is good news for you, but in today’s 24X7 news channel and social media world, this may no longer be true. Case in point – the Equifax breach.
On Sept. 7, 2017, Equifax made headlines when it announced a data breach that affected 145.5 million consumers in the United States, making it one of the largest cybersecurity breaches of all time. In an industry where companies like Equifax, Experian and TransUnion track virtually every piece of our financial lives, this news made waves across media outlets everywhere.
At PublicRelay, we dove in headfirst and examined the number of mentions with positive and negative tone across Equifax, Experian and TransUnion versus the previous quarter. Understandably, the total number of relevant mentions for this quarter for all three credit bureaus increased substantially to 2,303 relevant mentions in Q3 from 313 in Q2 – a more than 600 percent increase.
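The quarter-over-quarter jump above works out as straightforward percent-change arithmetic:

```python
# Relevant mentions across all three credit bureaus, per the analysis above.
q2_mentions = 313
q3_mentions = 2303

pct_increase = (q3_mentions - q2_mentions) / q2_mentions * 100
print(round(pct_increase))  # 636 -> "a more than 600 percent increase"
```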
In the wake of this major breach, how did each company fare? How truly bad was it for Equifax? Did negative perceptions of Equifax bring the other companies down? Here’s a quick summary of our findings about mentions of Equifax and two competitors:
- Equifax – Negative mentions increased by 180X (1,459 vs. 7)
- Experian – Negative mentions increased by 4X (23 vs. 6)
- TransUnion – Negative mentions increased by 2X (24 vs. 10)
Now, let’s dive into that data to see how a competitor’s PR crisis can impact perception of your brand.
How Bad Was the Negative Media Coverage of Equifax?
Just how bad was the coverage for Equifax? The company had 180X more negative coverage in Q3 vs. Q2. In Q2, Equifax had the smallest Share of Voice (SOV) of the three companies at 27 percent overall. Understandably, that number increased dramatically to 68 percent in the quarter with the breach – the largest SOV of the three.
Impact of Equifax’s PR Crisis on Competitor Experian
Experian’s negative mentions increased by 4X; however, we found that only 8.4 percent of breach articles that included Experian mentioned the company negatively. The rest of the Experian mentions were neutral or positive. Additionally, only 52 percent of articles mentioning Experian received any negative social sharing.
Interestingly enough, Experian was the only company that saw an uptick in its positive Share of Voice, with a 7 percent increase due to thought leadership coverage.
Impact of Equifax’s Data Breach on TransUnion’s Media Coverage
TransUnion’s negative mentions increased by 2X, yet only 6.3 percent of breach-related articles mentioning TransUnion did so negatively. And only 52 percent of articles mentioning TransUnion negatively received any social sharing (the same percentage as Experian).
Most Shared Outlet for Breach Coverage
The online outlet Krebs on Security (a cybersecurity blog with a reach of 200k) was the #1 most-shared outlet for TransUnion and #3 for Experian. Typically, when big news breaks, the most-shared outlets are large mainstream publications such as The New York Times, The Wall Street Journal, and The Washington Post. Interestingly, in this case the public turned to a niche specialty outlet for the most accurate and up-to-date information on the facts and on ways to protect themselves.
Looking at this data, all three companies suffered an increase in negative sentiment during Q3. Equifax suffered the greatest reputational damage, but its competitors also faced negative repercussions. This shows that when crisis strikes your industry, it’s essential to monitor your media mentions, competitors, and consumer sentiment to ensure you’re taking the right steps to gain or keep trust.
Want to be ready to react before negative mentions of a competitor hurt your PR? PublicRelay uses a combination of cutting edge technology and expert human analysis to deliver media intelligence that helps you nip PR crises in the bud. Top brand communications rely on us to track the progress of their media strategies and compare themselves to competitors. To learn more, contact PublicRelay today.

For today’s corporations, a commitment to diversity and inclusion isn’t simply the right thing to do – it’s a competitive advantage, especially when it comes to brand reputation, customer loyalty, and recruitment. In fact, companies ranked in the top quartile for ethnic diversity are 35 percent more likely to financially outperform their national industry medians, according to research from McKinsey. Those ranked in the top quartile for gender diversity fare 15 percent better.
From a recruitment perspective, no less than one-third of employees – and nearly one-half of Millennials – consider diversity and inclusion when assessing a new job opportunity, according to research from the Institute for Public Relations (IPR) and Weber Shandwick.
Such perspectives also extend to what a company does externally – with its ad campaigns, marketing strategy, public statements, etc. We’ve noticed an uptick in analysis around inclusion of minority groups with our clients in the banking and insurance industries. In the banking sector, diversity is a concept growing in importance, particularly in understanding a key brand driver like workplace environment. In insurance, more industry leaders are launching campaigns to communicate the importance of minority inclusion in their customer base.
But analyzing concepts like diversity in traditional and social media coverage is tricky. The topic may be one of many in a piece about your brand, your mentions may be passing ones in coverage primarily about competitors, you may be actively pitching stories on this topic, or customers may rake you over the coals on social media. Whether it’s reactive or proactive, you still need to know the sentiment about your brand, your executives, your products or your services to understand what to do (or not do) next.
4 Steps for Tracking Perception of Factors Like Your Brand’s Inclusivity
So how do you build a report card that shows whether your brand is perceived as inclusive?
Here are 4 steps to follow if you want to see how your brand scores when it comes to diversity and inclusion or a similar topic:
Ask yourselves.
Are we trying to increase positive SOV in our industry? Earn more positive coverage from authors currently not talking about us? Create more opportunities for our spokespeople to weigh in on the topic?
Any time you are trying to move the needle, you’ll need an accurate baseline from which to work.
Collect accurate data that relates to sentiment around your brand.
You don’t want to miss anything meaningful, but you also don’t need passing mentions or your digital ads in the mix. The most strategic information about how you are doing is often revealed through comparisons against your competitors and peers, which significantly increases the sheer volume of data you need to collect and clean up. In some cases, an outlet list can help you home in on the content that matters most. Is it raw counts or concrete results from your efforts that chart your course?
Correctly categorizing topics and subtopics during this step is how you glean the intelligence from your analysis. This is also the step where humans are essential. The more complex the topics (like hot-button social issues), especially where keywords are nearly non-existent, the more critical it is to correctly identify sentiment. Was that social media post sarcastic? Did the author make positive statements about our brand but negative statements about our peers?
The analysis – time to answer the questions.
Some of these questions should relate back to your goals and progress measuring. Others should help define or re-define strategy. Popular ones that our clients ask include:
- Did we move the needle on the topics we put our efforts toward?
- Are we over-allocating resources on goals we are already crushing?
- Which authors or outlets produce high social sharing?
- What’s the SOV of our spokespeople vs our peers?
- Are there other industry influencers we should be engaging with?
- Where is the whitespace for message expansion? Can we own the conversation in another market segment?
Answering “Now What?”
With accurate answers in hand, this is one of the easier questions. The answer could be as simple as staying the course. In other cases, you may uncover that you need to redirect resources or change course altogether.
Regardless of industry, communicators responsible for topics like diversity must be enabled to build and react to programs with trustworthy media intelligence. Technology with just the right dose of human input delivers the accurate research you need to drive impact when it comes to corporate goals.

As technology brings stakeholders closer to their beloved brands, the immediate and direct communication platforms intended to nurture the brand-stakeholder relationship are leaving little room for miscalculated brand communication strategies. Add to that the propagation of social media newsfeeds that are extending the lifecycle of news, which can spotlight and prolong any misalignments.
Just in the past year, we’ve seen an iconic beverage brand miss its mark with a controversial commercial, a groundbreaking international restaurant chain still reeling from a food-safety crisis, and a consumer credit reporting company suffering reputational damage from a security breach. To respond effectively to such instances and navigate tactfully through news cycles, it’s essential that communications strategies are insight-driven and that business executives are maintaining a focus on stakeholder sentiment. This way of thinking is essential to the success of media communicators tasked with cultivating a brand’s voice and building a positive brand perception.
Below are four goals from best-in-class brand communicators. Aim for these goals in order to effectively act and react to evolving stakeholder sentiment and news cycles:
Influence, Align and Incorporate Business Objectives into Brand Communications
Good brand communicators don’t just create positive publicity. They create meaningful media coverage, real relationships and stakeholder sentiment that aligns with and supports broader business objectives. By tying communications strategies back to corporate goals (such as growing thought leadership in a key area, improving corporate social responsibility or improving customer/investor perceptions), proactive professionals can improve brand value and deliver relevant and impactful results. In today’s environment, this requires business objectives and media communications strategies to work in synergy, influencing one another. Business leaders must recognize these strategies as a means to empower the company brand and meet corporate goals.
Get to Know ALL of Your Audiences
With a business mindset in hand, a successful communicator can turn their attention to stakeholder engagement. The first step is for the media communications team to utilize all available market and customer intelligence to gather as much intel as possible about their most important audiences. Recognizing “who” is the recipient of the intended message is crucial to crafting a strategic campaign.
In the past, media communicators relied merely on demographic data. Today, we must be more granular – not only to identify stakeholder drivers, but also to unveil and pinpoint niche audience groups that weren’t traceable before. Simply relying on generic characteristics isn’t enough to determine factors that influence stakeholder behavior and interests, which are essential for any strategy to be truly targeted. Armed with this intelligence, PR strategists can now “deconstruct” the audience profile – what’s important to them, what their brand demands are, what the customer concerns are – thus creating a more intimate experience with consumers. By mining the data in such a way, intelligence morphs into targeted action that can drive results or mitigate damage should a crisis break.
Hack the Media Landscape by Tracking Sentiment and Adjusting
Once you visualize the end goal and understand all of the characteristics that make up your most important stakeholders, you need to assess “the battlefield.” The 24×7 news cycle is constantly reacting, responding and keeping up with viewers’ and readers’ attention span. Therefore, to penetrate the morphing cycle, it’s essential to assess the opportunities and most effective strategies and activities to align with media narratives.
This is where media sentiment is essential to detect the lifecycle of news. Deciphering the tone and sentiment of coverage provides context to news cycles, as it provides a more granular understanding of authors and outlets – what issues are spiking their interest or what type of stories are prompting them to veer from their traditional reporting style. For example, detecting sarcasm in a reporter’s story is an indicator of the level of interest and engagement attaching them to the issue and the potential of follow-up or additional stories that your brand can pursue. By contextualizing content and media interest, we can envision how a brand’s intended message fits in, adjusting its delivery accordingly.
Rethink How You Measure the Efficacy of Your Media Communications Strategy
How does a good brand communicator determine if their media strategy hit the mark? Or if they reacted adequately to or recovered from a publicity issue? Once media campaigns are complete, it’s time to provide the boardroom with a clear and concise view of the effect on corporate strategy. If campaigns are data-driven, data-based evaluation is built in from the beginning, making it easier and more accurate.
For many years, communicators would measure their outcomes in a simple numeric form – a number of hits, a percentage of the industry narrative, etc. But as PR strategies work in lockstep with the boardroom, it’s not enough to simply deliver a metrics report that’s overwhelmed with disparate numbers. Instead, it must present measurable outcomes, weaving data points into context and correlating the data with the stated business goals. In other words, “yesterday” one would present a coverage report; today they must explain the value of the results, translate the report into a context of corporate goals and measure a campaign’s impact on the brand-stakeholder relationship. By adding context to these metrics, they become relevant to business leaders and support the board’s decision-making process.
However, measurement is more than “a look backward” after the completion of a campaign to evaluate the outcomes; it’s an “active” tool during a campaign that helps determine if a course correction is needed, identifying what’s working and what isn’t, in order to readjust your strategy in real time. The longer you measure against business goals like brand and reputation drivers, the better able you become to predict outcomes.
If “moving the needle” is the expectation a good brand communicator starts with, then measuring the “ground gained” once a media campaign crosses the finish line is the qualifier of its success.
Get Real Media Intelligence About Your Brand Communications
Ready to take your media intelligence to the next level? PublicRelay uses a combination of cutting-edge technology and expert human analysis to deliver media intelligence you can count on. Top brand communicators rely on us to track the progress of their media strategies.
Contact PublicRelay today to get started!

A recent Venture Beat article titled “AI will turn PR people into superheroes within one year” predicts that artificial intelligence and machine learning will explode within the public relations industry in as little as a year.
Data found through machine learning, combined with professional hunches and experience, can help PR experts with real-life applications like steering companies clear of future communications catastrophes. We’ll explore more about AI and media intelligence in this Q&A exchange with PublicRelay’s Bill Mitchell, Chief Technology Officer.
Q: Can artificial intelligence sometimes be too artificial?
A: I agree with the position taken in the Venture Beat article that sometimes people are too quick to jump on the bandwagon of artificial intelligence. With any highly nuanced, context-based industry like media intelligence, it’s important to consistently apply machine learning and validate our results – paving the way for further gains in accuracy and efficiency. I’m hesitant to start wheeling out the term artificial intelligence when we’re not trying to use machines to replace human thought. I prefer to call it human-assisted AI because we’re really using it to give us superpowers and visibility into a much broader range than ever before.
Supervised machine learning does need to be supervised, however. You’re often limited by the training data that you send in, and a system will miss new emerging topics or trends that were not in the original training data. It’s not a “set it and forget it” type of solution, especially when used to analyze data from highly variable inputs like traditional and social media.
Q: When can too much AI be a bad thing?
A: When you use GPS in a car, you still need to remain aware of your surroundings. Relying on a purely automated solution is like driving a car while looking solely at the GPS and not the road. There will be unexpected turns and situations that the GPS (or the automated media intelligence solution) has not been trained for. Using an unsupervised system for media intelligence and analysis means you’re “driving without windows” even though it’s still up to you to spot the dangers. That’s why human-assisted AI solutions deliver the best of both worlds – GPS and windows!
Q: How will human-assisted AI give superpowers to the PR industry?
A: Imagine the power of Superman or Wonder Woman if they could simultaneously read newspapers from coast to coast, listen to TV broadcasts in all local markets at once, read all tweets, and give you only what’s relevant to your brand. Human-assisted AI supercharges a company’s media monitoring capabilities by delivering the intelligence previously hidden in the context of what you are collecting.
Our systems at PublicRelay take in tens of millions of articles per week. It would be simply impossible for a human to review content at that sort of scale. Using supervised machine learning, we’re able to make intelligent routing decisions for relevant content. The result is that our analysts don’t miss important stories, and our customers don’t miss out on the business insights that only individuals uniquely attuned to their business can provide.
From there, they are able to dig deep to extract insights on the topics you have requested. As an added bonus, they also uncover topics that you might care about (something machines cannot do). For example, one recent issue we blogged about was the ability to extract contextual meaning from a population of articles that had been summarily categorized as “negative tone” by completely automated solutions. Most news articles, by the way, strive to be neutral overall. But just because the overall article tone is neutral doesn’t mean that the tone for your company, product, service, or competitors was neutral. Wouldn’t that information be more useful to you?
Using machine learning to isolate a population of articles and shortlist them for more detailed inspection and rich content labeling by analysts is a great example of a complementary AI-human hybrid approach.
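To make this hybrid approach concrete, here is a minimal, hypothetical sketch in Python of that routing step: a model scores each article’s relevance, confidently irrelevant or relevant items are handled automatically, and anything near the decision boundary is shortlisted for a human analyst. The keyword list, threshold values, and scoring function below are invented purely for illustration and are not PublicRelay’s actual system.

```python
def route(articles, score_fn, low=0.2, high=0.8):
    """Split articles into auto-discard, human-review, and auto-accept lists."""
    discard, review, accept = [], [], []
    for article in articles:
        p = score_fn(article)          # model's estimated relevance probability
        if p < low:
            discard.append(article)    # confidently irrelevant
        elif p > high:
            accept.append(article)     # confidently relevant
        else:
            review.append(article)     # near the decision boundary: send to analyst
    return discard, review, accept

# Toy score function: fraction of tracked keywords present in the text.
KEYWORDS = {"merck", "frazier", "pharma"}

def toy_score(article):
    words = set(article.lower().split())
    return len(words & KEYWORDS) / len(KEYWORDS)

articles = [
    "Merck CEO Frazier discusses pharma pricing",   # all keywords: score 1.0
    "Local bakery wins award",                      # no keywords: score 0.0
    "Merck mentioned briefly in market roundup",    # one keyword: score ~0.33
]
discard, review, accept = route(articles, toy_score)
```

Only the middle band reaches an analyst, so human attention is spent exactly where the machine is least certain.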
Q: Can human-assisted AI predict crises before they happen?
A: The ability to move very quickly around social media content can bring a real competitive advantage to media pros. Imagine if you had an extra two hours to prepare a well-researched, thought-out response to a particularly nasty allegation by a competitor before it went viral. That would be huge!
When measuring media coverage impact, the first 48 hours are critical. In fact, within the first 24 hours, 99% of the social media coverage (be it through Facebook, LinkedIn, Twitter, etc.) is going to come in. The ability to receive a “heads up” notification just a couple of hours in advance of what’s to come would definitely feel like a superpower. Without accurately identifying content, you could be making predictions of virality for content that isn’t even relevant – yet another reason why having a supervised machine learning system with trusted results is the bedrock for this approach.
Q: Why is supervised machine learning more relevant for media intelligence?
A: A simple answer is because of the supervised learning component. At their core, both traditional and social media are driven to avoid the type of “sameness” that machines find easy to learn from. While brands and communicators strive to “get their message picked up” and repeated — outlets, authors and influencers will rarely repeat that message verbatim and in a predictable pattern. The same happens with spokespeople who are told to “stay on message”.
Supervised machine learning, on the other hand, actually gets better when it gets something wrong. It delivers the best results when it’s allowed to be expansive enough to find terms that sit at the decision boundary and require human interpretation. Having a system that is too tightly constrained to prior content also leaves you vulnerable to missing important content.
Human-assisted AI also provides a mechanism for fast and accurate discovery of new concepts. The human analysts who do the final quality assurance will catch anomalies that AI would need to see repeated multiple times before they emerged as a pattern. For example, suppose a new influencer or expert is getting covered in outlets or by authors that you care about. If an analyst sees the name or company appear in a short time span or in multiple outlets, they alert the client immediately. Or perhaps an author (or authors) mentions you when they actually meant to attribute the activity to a competitor. If the mention is negative, you might want to get it corrected as soon as possible.
Q: What are your personal predictions for PR and machine learning in the next few years?
A: I think that we’re going to see more companies revise their initial thinking that artificial intelligence is going to transform their business. Instead, I think they’ll focus on where machine learning can help them be more intelligent about deploying their resources. Customers are going to start demanding more of their vendors – with media mentions and initial keyword/phrase tracking becoming “table stakes.” Teasing out conceptual tags and aligning the data analysis with KPIs and real business goals will be the norm. In the future of AI, solutions that give leaders the answers they need will matter more than mere tools.

A recent Bloomberg Businessweek article titled “The Future of AI Depends on a Huge Workforce of Human Teachers” discussed what to expect in the future of artificial intelligence.
It’s clear from this article that the tech industry and professional investors are betting big on AI. But what the Bloomberg article shows is that they are specifically putting their money (more than $50 million in 2017 alone) towards data labeling startups that collectively employ over 1 million people worldwide to train AI software. The bet they’re placing is on a technique known as supervised machine learning.
Supervised machine learning is a subset of artificial intelligence and one of the two popular ways in which computers learn:
- In supervised machine learning, a computer model is fed raw inputs that are already labeled. This method is the equivalent of technology learning like a student working with a teacher. The teacher (or human) provides examples to the student (a computer) of what is right and wrong, correcting the student’s work along the way. In this method, the student learns the general rules of a subject and applies these lessons to predict the right answer to a new question in the future.
- In unsupervised machine learning, a computer is fed raw inputs without any labels to analyze. The result is that the computer must find patterns in that data on its own without any human assistance. This method is the equivalent of trying to learn a foreign language by reading millions of pages of untranslated text without the assistance of an instructor, verb conjugations, or vocabulary dictionary.
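The supervised "student and teacher" dynamic above can be sketched in a few lines of Python. This is a deliberately tiny, hypothetical word-count classifier (a naive-Bayes-style model built from scratch): the labeled headlines play the role of the teacher, and the model then generalizes to a new headline it has never seen. The examples and labels are invented for illustration; real systems use far richer models and training data.

```python
from collections import Counter, defaultdict
import math

def train(examples):
    """examples: list of (text, label) pairs supplied by the human 'teacher'.
    Returns per-label word counts learned from the labeled data."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Score each label by log-likelihood with add-one smoothing; pick the best."""
    words = text.lower().split()
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)
        score = sum(math.log((c[w] + 1) / total) for w in words)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# The supervision step: a human labels which "Sprint" mentions are relevant.
model = train([
    ("Sprint announces new wireless network plans", "relevant"),
    ("Sprint launches 5G service in new markets", "relevant"),
    ("Runners sprint to the finish line at marathon", "irrelevant"),
    ("Athletes sprint in final lap of the race", "irrelevant"),
])
print(predict(model, "Sprint expands wireless coverage"))  # → relevant
```

An unsupervised system, by contrast, would receive the same headlines with no labels at all and would have to guess on its own which cluster of "sprint" usages the brand actually cares about.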
If you look at the best applications of AI, they need humans to provide feedback on what is right and wrong. For example, Tesla Autopilot, a driver assist feature offered by Tesla Motors, uses supervised machine learning to train its self-driving technology. In this case, Tesla Autopilot is taught how to drive by the human owners who are operating their cars every day. As Tesla owners drive their cars, sensors in their vehicles collect hundreds of millions of data points on driving, from GPS coordinates to actual videos captured from the car’s front and rear cameras. The vehicles then wirelessly send this data back to Tesla’s Autopilot data model, creating a massive library of knowledge around how to drive. The human feedback loop is essential here because if there is an error, the damage could be catastrophic.
By creating this library with the help of human drivers, Autopilot can use visual techniques to break down the videos and understand why drivers reacted the way they did. So, when a ball or child crosses the self-driving car’s path, Autopilot recognizes the pattern and reacts accordingly—stop!
Utilizing AI in Media Intelligence
Many automated media monitoring solutions say they use artificial intelligence and machine learning. But if there is no dedicated human analyst anywhere in the process to check or label the input data (i.e. your organization’s media), it means one of two things:
- The results you’re getting from your company’s media intelligence solution are inaccurate because it uses unsupervised machine learning. The solution is teaching itself, and without labeled data you probably shouldn’t be counting on it to make informed decisions.
- You, the customer, are expected to be like the coders discussed in the Bloomberg article. This means taking the time out of your own schedule to train the algorithms on what is right and wrong or hiring someone to do so.
Some people say a supervised machine learning solution is expensive, but supervised machine learning boosts media intelligence accuracy and helps communications pros make better decisions. To get accurate data, you need to either hire an analyst to train the algorithm so that it learns the right way, or spend time doing it yourself. Otherwise, you may end up paying in a different way with inaccurate results.
How AI Can Affect Media Intelligence in the Future
AI and machine learning are going to have an enormous impact on public relations and media intelligence. AI will give you more than just analytics, it will give you answers to what is happening in your media coverage and why – all on demand. Not only that, but it will forecast what topics could be a problem in the future, where those problems will occur, and how long the problems may last. These predictive analytics will make you proactive rather than just reactive.
As AI gets “smarter”, it will make you better and more prepared. But AI will only become as smart as the human teachers who train it along the way. It’s a virtuous upward cycle of humans and technology making each other better. Humans can train the machines, and pick up the last pieces of the most complex analyses that AI can’t handle, like idiomatic phrasing and sarcasm. If done right, the benefits are truly worth the investment.
PublicRelay continues to invest in AI. In fact, supervised machine learning is embedded into our solution both in terms of making the analysis better and faster as well as in the outputs to the client.
Contact PublicRelay to learn more about how accurate data can help you with media intelligence.

Unfortunately, most of our clients have lived through the following scenario. The Head of Communications is delivering a presentation to the CEO (or the Board), showing graphs with various data points like spikes in positive coverage and then someone in the room challenges them. “What was that uptick from again?” “Really? Can you dig in deeper on that?” “That just seems off, based on what I’ve seen in the media.” And they go back to their team to get more detail.
Then it happens – they uncover that not only was that spike based on irrelevant media hits, but they’ve been inaccurately analyzing the content in their data set. Now what?
- (a) Go ream out the staff you put in charge of not only finding, but correctly using, a media intelligence tool
- (b) Own this problem and make sure it never happens again
We all know that (b) is the “right” choice, but how are you supposed to accomplish that, with so much on your plate? There is no way you are going to sit through vendor meetings if they are just demos of “tools” that you aren’t going to use yourself. But YOU own the outcome and YOU are going to have to go into future executive meetings and tell your stories with confidence. That means this process must entail more than just hiring a “tool” provider to solve the issues; it must get you the answers that make a difference in executive-level conversations.
Many of our clients have been in this situation and never want to go through it again. And when we dive into how the problem materialized, we uncover the same underlying issue – no one on the PR team was hired to be a data scientist – they were hired to be communications pros, helping to execute strategy in their area of expertise.
While this may seem overly dramatic, it’s an unfortunate truth. The tools most communications teams use simply collect and generally categorize content and tone from keywords found in the text. If anything is irrelevant or incorrect at this point in the process, the analysis is useless and can send your team in a completely wrong direction.
So, your teams spend valuable hours constantly training the system, trying over and over again to get the perfect mix of keywords and Boolean strings. And what is perfection? Never missing a single article or post AND never filtering out relevant mentions? Is this really what you are paying your team to do? (And by the way, if your agency is managing this on your behalf, the same frustration is happening on their end – and you’re paying the bill.)
How important is demonstrating to your Board and CEO that you’re making strategic decisions? Is it as crucial (or even more crucial) than choosing an agency of record? Would you leave your agency decisions entirely up to your staff? If not, then why would you entrust them to run your analytics strategy and hire business partners on their own without your guidance?
Now is the time to take the reins on your measurement and analysis. Focus less on finding “tools” to track your programs and more on ways to deliver the answers your business leaders expect. This way, you can confidently make data-driven decisions that tie to the company goals. Never again will you worry about the perceptions of the C-Suite – YOU can come equipped with key insights when they start asking hard questions – and the hard questions will come. Fortunately, hard questions are easy to answer when you have the right approach and reliable data to back it up.
9 Questions You Need to Ask Your Media Intelligence Solution Provider >>
This blog was written by Darren Sleeger, SVP of Enterprise Solutions at PublicRelay.

Media Intelligence is crucial to high-performing communications teams – but how accurate is this reporting?
The reality is that many PR/communications teams today remain in the Dark Ages when it comes to performance measurement and, unfortunately, most CCOs don’t realize they have bad data until it’s too late.
In this blog, we’ll explore what it really means to have accurate data and, more importantly, how the accuracy of the data you start with affects objectives you track and the outcomes you present to your board of directors.
Common Roadblocks with Media Monitoring
Typical media monitoring tools are, at best, only 80% accurate. Most vendors have an out-of-the-box approach to data collection that centers only on keyword tracking, and that yields many problems.
Analyzing Irrelevant Information
Most media analysis tools provide counts of keyword mentions and overall tone, which are not insightful on their own. Brands need to vet their media mentions carefully, as there is strong potential that a significant number of those mentions are stock quotes or auto-published press releases on spam sites.
Additionally, mentions of product or company names may appear in counts, even when they have no meaning to the business. For example, impression counts might include the use of “sprint” as a verb even though the PR team is looking to track references to Sprint, the telecommunications company. Or they might include a news article that refers to an employee who once worked for the company. Automated tools don’t catch these things, and this can really skew your metrics.
Missing Out On Valuable Information
To filter out irrelevant articles or mentions, Boolean or smart technologies are used to narrow down keyword results. But this also yields inaccuracies: what if you accidentally filter out something that is important?
Poor Sentiment Identification
Although technology has come a long way, media monitoring systems still have a difficult time understanding human emotions like sarcasm. What if your brand is mentioned in a publication such as The Onion? Most tools could never analyze the real tone of that article. They also can’t catch if your company is wrongly named – think Equifax and Experian in the recent data breach crisis. Because there are only three major credit bureaus, Experian is often mentioned in articles about Equifax that are negatively toned.
Misinterpreting sarcasm and catching false mentions are just some of the many downfalls of automated solutions. Overall, automated sentiment analysis is widely considered to only be 60% to 80% correct. For example, a pharmaceutical company that produces cancer treatments found that their automated media intelligence solution marked all mentions of the word cancer in articles as “negative.” Adding the element of professional human analysis to your media monitoring can help avoid potential pitfalls like these.
Lack of Context
Automated monitoring systems give every mention the same weight. Yet a reference in the Wall Street Journal, or by influencers in the company’s niche who can amplify content they care about, can mean more than a remark in a local media outlet.
Locating Non-Keyword Concepts
Automated tools can’t analyze concepts in your media intelligence reports that don’t show up in a keyword search. High-level concepts like “brand quality,” “thought leadership” or “workplace experience” are critical for building strategic plans and measuring goals. Yet these terms are rarely stated explicitly in written text and can only be recognized by a person who has read the article.
Emphasis on Outputs Not Outcomes
Automated technologies that many PR teams use to monitor media coverage track KPIs—such as reach and impressions—that quantify outputs, not outcomes. To be taken seriously by leadership, PR/communications teams must track metrics that are accurate and meaningful to business performance.
What Can Two-Pronged Media Intelligence Do for CCOs?
A two-pronged approach to data collection and analysis helps CCOs answer high-level questions focused on custom strategic outcomes. Confidence in what you are presenting to your C-Suite or board of directors is critical – you must trust that the data your team is sourcing is (a) accurate and (b) being analyzed in a meaningful way that you can tie to your objectives.
By combining technology with human analysis, organizations can benefit from accurate data and actionable analytics surrounding outcome-based metrics. Only then can you move from simply monitoring output to strategically accomplishing objectives such as the following, accurately and confidently:
- Increase thought leadership among your key audiences and regions
- Identify opportunities for media coverage that your competitors do not already dominate
- Determine key subtopics that are impacting your reputation
- Confirm that your messaging is accurately reaching your target audience as intended
Contact PublicRelay to find out how our media intelligence accuracy and human analysis sets us apart and can help you make data-driven decisions.

Recently, a Business Insider article caught my attention when it sought to quantify the brand boosts that Intel, Under Armour and Merck experienced when their CEOs resigned from the White House Manufacturing Advisory Council. To measure the impact of the walkouts, the piece relied on mentions and tonality data provided by a well-known social analytics tool.
The quote that stuck out was about Merck and their very high percentage of negative tone. The provider clarified by stating “the only reason it is negative is because people are criticizing Trump for singling out Merck and Frazier and not the other CEOs, and the algorithm can’t decipher that context.”
Here’s the problem: algorithms can mine data well and even analyze its tone using keywords, but only a human can interpret the results in the context of current events. But what happens when you ask bigger questions? Such as what was the negativity in the social data referring to? This is where our approach gives the insights that machines can’t.
Because the overall tone of a post is not an accurate measure of effect on reputation and brand, our analysis focuses on getting to the “so what?” answers. And these are the answers we uncovered when the context of the Merck posts was analyzed for our test:
- What was the overall sentiment toward Merck? Mostly positive – less than 30% of the posts were negative about the brand – a sharp contrast to the statistic pulled by an algorithm in the Business Insider article.
- Were the posts about any of the brand drivers that the business cares about? (see chart) Yes – and the tone of each subtopic was analyzed for more clarity.
- Did traditional media have any impact on social? Traditional media sharing activity concentrated largely on positive coverage. A Los Angeles Times editorial praising Ken Frazier for his courage was among the most shared articles, generating over a quarter of a million Facebook shares. This is exactly the type of insight communicators can use to bring the right perspective to their executive teams.

Reporting on counts and tonality in social media can be a starting point. But to deliver true value to your organization, you need to uncover context. Pairing human analysis with technology gets you the story behind the story. Communicators need specific, timely, and trustworthy conclusions to track their company’s brand and reputation when a major (or any) event occurs.