No matter what terminology you prefer – driverless car, autonomous vehicle, autonomous driving, self-driving car or something else – the future of human and non-human driving is a hot topic. And we kicked off 2018 with two major consumer events that had the media abuzz about it. PublicRelay analyzed traditional and social media around autonomous vehicles at both events and here are some of the interesting nuggets that we uncovered.

Read Next: “6 Steps for Measuring PR at Your Next Event”

CES wins hands down for sheer volume of coverage, generating 7X the number of articles written on the subject. Autonomous driving was among the top five products discussed at CES 2018, along with gaming products, computers, and smart home technology. At the Detroit Auto Show, it placed in the top four alongside vehicle infotainment systems and electric cars.

Social sharing of autonomous driving articles also put CES in the lead, by 8X – Detroit Auto Show articles were shared 19,416 times across Facebook, Twitter, and LinkedIn, while CES articles were shared 151,662 times. As a percentage of each show’s traditional coverage, though, the two were much closer – the topic appeared in 13% of total CES event coverage and 11% of Detroit Auto Show coverage. The chart below outlines the top topics by percentage for both events.

Why the gap? The most obvious reason is that CES comes first on the calendar. More likely, though, it is that CES’ focus on technology makes it the perfect platform for showing exactly HOW these vehicles are coming to market. The CES audience is extremely interested in technology and innovation that they can use today or in the future. How a car can drive itself certainly fits that bill.

The self-driving topic appeared in more than half (54%) of Nvidia’s traditional coverage at CES. The company released a new platform for self-driving vehicles and announced partnerships with VW and Uber. At a press conference, Nvidia’s CEO stated, “the complexity of future cars is incredible.”

While autonomous driving was covered at the Detroit Auto Show, that show’s focus is on what is available to consumers today. For instance, articles about self-driving features within traditional cars, rather than fully autonomous vehicles, were popular.

A hot self-driving sub-topic in Detroit was government regulations and guidelines, with many articles citing quotes from U.S. Transportation Secretary Elaine Chao and other legislators. Chao appeared in 16% of the Detroit Auto Show’s autonomous driving coverage.

Learn more about PublicRelay’s coverage of CES: Understanding the Success of CES.


Everybody is talking about how artificial intelligence (AI) is changing the world and how it is the future of just about everything. Even communications professionals are abuzz with their desire to jump on the AI bandwagon for their media analytics.

It’s true; AI can be pretty impressive. It is already recommending products to consumers, catching credit card fraud, and targeting advertising with uncanny accuracy and effectiveness. Even doctors are starting to get assistance from AI in diagnosing disease through analysis of symptoms and lab results.

So how does AI “play nice” in the communications world?  In one word – carefully.

Let’s start by digging into what AI does well…

The Good

Most experts would agree that AI can be great at handling scenarios that involve a yes/no answer, particularly when there is a robust feedback loop to tell the system when its prediction is right or wrong on an ongoing basis.

Let’s look at three examples of AI usage that illustrate this strength:

Determining Fraudulent and Legitimate Charges

When a credit card is stolen, it can create a perfect learning opportunity for AI. A bank customer reports their card stolen and then identifies which charges were legitimate and which were fraudulent. Any uncontested charges are implicitly used as confirmation of valid transactions, thereby reinforcing the characteristics of legitimate transactions. Multiply that information across millions of cardholders, and AI can use those data points to predict with uncanny accuracy when a change in a purchasing pattern is likely fraudulent.

AI also uses “geofenced” data to protect your credit card account – knowing the locations (geographies) where you normally visit and spend.  In addition, AI “learns” how you (and thousands of other similar travelers) travel, using historical patterns of purchases – hotels, restaurants, etc. – to approve or flag as suspicious any out-of-town spending.  Why does that work? Because AI has perfect data from a feedback loop with thousands or millions of data points and is being taught the right answer every day.  Even if you move to a new city, AI can use a real-time feedback loop to generate new data and adjust its predictions with no human input required other than normal card use in your new hometown.

Credit card transaction validation is a very effective use of a yes/no feedback loop that drives powerful AI learning and effectiveness.
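
To make that feedback loop concrete, here is a minimal sketch – with invented feature names and data, not any bank’s actual model – of a classifier that keeps learning as cardholder feedback arrives:

```python
# Minimal sketch of the yes/no fraud feedback loop (illustrative only).
import numpy as np
from sklearn.linear_model import SGDClassifier

# Each transaction becomes a feature vector, e.g.:
# [amount_usd, miles_from_home, hour_of_day, is_new_merchant]
# (a real system would scale these features and use many more of them)
X_history = np.array([
    [42.0,     3.0, 18, 0],  # groceries near home -> legitimate
    [980.0, 2100.0,  3, 1],  # large overnight charge far away -> fraud
    [12.5,     1.0, 12, 0],  # lunch near home -> legitimate
])
y_history = np.array([0, 1, 0])  # 0 = legitimate, 1 = fraud (cardholder feedback)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_history, y_history, classes=[0, 1])

# Score a new out-of-pattern charge...
new_tx = np.array([[650.0, 1800.0, 2, 1]])
print("P(fraud) =", model.predict_proba(new_tx)[0, 1])

# ...and when the cardholder later confirms it was fraudulent, that answer
# becomes one more labeled example. The loop never stops teaching the model.
model.partial_fit(new_tx, np.array([1]))
```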

Online Advertising and Product Recommendations

When you see ads on the internet, most of the advertisers are using you to help test thousands of variants of different advertising attributes such as the type and size of ad, time of day delivery, pricing deals, and even the words shown to you. They might even target specific product ads based on what you have shopped for in the past. (Who hasn’t been chased by ads across the Internet for that new pair of shoes you browsed one time on an e-commerce site?)

How are they doing this? The advertising companies are improving their targeting by using AI reinforced with perfect information. When you click on an ad and go to an e-commerce site, you either buy, or you don’t.  YES vote.  NO vote.  AI will constantly learn which advertising attributes work and cause people to click.  With millions of interactions to learn from – all tagged with reliable, fact-based results – computers can learn very quickly what works best in just about any situation.

In a similar scenario, Amazon, the e-commerce giant, uses AI to drive product recommendations.  For example, when buying a shirt on Amazon, you see a set of product photos (slacks, belts, etc.) with the headline, “People who bought this also bought these:” What the AI technology at Amazon and similar online companies does is look for patterns in people’s purchasing behavior to suggest additional items that follow that same pattern.  Of course, if you then click and buy the recommended product, that’s one more ‘plus’ vote for that AI recommendation.

Advertising customization and targeting, along with Amazon’s shopping recommendations, are two more good examples of AI learning from actual transactions. You either clicked on the ad and bought the recommended product, or you didn’t. It’s a yes/no answer providing perfect feedback.
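
As a rough illustration of that click-feedback loop, here is a toy epsilon-greedy test of hypothetical ad variants – the variant names and click rates are invented:

```python
# Toy epsilon-greedy ad test driven by yes/no click feedback (illustrative).
import random

variants = ["red_banner", "blue_banner", "discount_copy"]
clicks = {v: 0 for v in variants}
shows = {v: 0 for v in variants}

def pick_variant(epsilon=0.1):
    # Mostly exploit the best-performing variant, occasionally explore.
    if random.random() < epsilon or all(shows[v] == 0 for v in variants):
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)

def record_feedback(variant, clicked):
    # Every impression yields a perfect yes/no signal: clicked or not.
    shows[variant] += 1
    clicks[variant] += int(clicked)

# Simulated traffic in which "blue_banner" secretly converts best.
true_rate = {"red_banner": 0.02, "blue_banner": 0.08, "discount_copy": 0.04}
for _ in range(10_000):
    v = pick_variant()
    record_feedback(v, random.random() < true_rate[v])

# The observed click-through rates converge on the hidden truth.
print({v: round(clicks[v] / shows[v], 3) for v in variants if shows[v]})
```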

Spam Email Identification

Identifying whether an email is legitimate or spam is one of the best mass applications of classification in supervised machine learning. Often called ‘Ham or Spam’, the task has AI use patterns of words, characters, and senders to identify likely spam. Yet the system can only improve if humans flag emails as spam – or go to their spam folder to retrieve legitimate emails and flag them as ‘Ham’ (not spam).

Early spam identification systems used the feedback of the masses to apply standardized, mass email filters to individual users.  In recent years, some spam filters have begun to allow for additional customized spam determination based on individual user preferences and feedback.  This approach becomes especially helpful as some people flag legitimate e-commerce offerings as spam – offers that they perhaps opted into months or years before but no longer want to receive.  Other users will want to keep those very same emails coming to their normal inbox.

The human yes/no feedback loop is critical to the ongoing, evolving effectiveness of these spam-filtering AI tools.
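
Here is a minimal ‘Ham or Spam’ sketch along those lines – a naive Bayes filter that keeps updating as users flag messages. The example emails are invented:

```python
# Minimal 'Ham or Spam' filter with a human feedback loop (illustrative).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
model = MultinomialNB()

emails = [
    "WIN a FREE cruise!!! click now",        # spam
    "Quarterly report attached for review",  # ham
    "cheap meds no prescription needed",     # spam
    "Lunch tomorrow at noon?",               # ham
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham
model.partial_fit(vectorizer.transform(emails), labels, classes=[0, 1])

incoming = "exclusive offer: free cruise tickets"
print("spam" if model.predict(vectorizer.transform([incoming]))[0] else "ham")

# When a user rescues a message from the spam folder (a 'Ham' flag), that
# correction becomes one more labeled example for the filter to learn from.
model.partial_fit(vectorizer.transform([incoming]), [0])
```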

The Bad

Here are some examples of where AI is not ready for prime time.

If the answer is not known, it can’t be fed back to the computer

For example, say you’re looking to hire a new employee, and the AI system says you should make an offer to a particular person based on the data. If you hire that person and it either works out or doesn’t, that’s one piece of data. But what about the people you didn’t hire? You will never know whether they would have worked out, and AI cannot confirm your rejections. It is hard for AI to determine the best hire when it only gets feedback on the people you chose to hire.

This is the challenge of what are called Type I vs. Type II errors. A Type I error is a false positive: someone AI recommended who turned out to be a bad hire.  We can learn from that type of error. A Type II error, on the other hand, is a false negative: someone AI passed on who would have been good – but you’ll never know that for sure. We cannot learn from that type of error.  So when AI cannot be given information on Type II errors, it has only half the learning set it needs to improve properly.

Another variation of the AI challenge in hiring arises when the system is exposed to all-new data.  For example, if your resumes to date have all been from East Coast schools and from applicants with engineering degrees, what does your system predict when shown a Stanford graduate with a physics degree?  AI struggles to reach a conclusion when exposed to vastly new data points that it has not seen before.

Can AI still learn in these circumstances? Yes, to a degree, but it does not see (and cannot learn from) the missed opportunities, and it needs enough of the new data points to begin to model and predict outcomes.  The data collected from hiring decisions represents a fatally incomplete training set.
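
A tiny simulation makes that censoring visible. Assuming a hypothetical score-based hiring rule, only the candidates we hire ever produce a label:

```python
# Illustrative simulation of censored feedback in hiring decisions.
import random

random.seed(0)
candidates = [{"score": random.random()} for _ in range(1000)]

training_set = []
for c in candidates:
    hired = c["score"] > 0.7  # the model's own recommendation
    if hired:
        # Outcome observed -> usable label (Type I errors are visible here).
        c["worked_out"] = random.random() < 0.5 + c["score"] / 4
        training_set.append(c)
    # else: no outcome is ever observed, so potential Type II errors
    # (good candidates we passed on) never enter the training set.

print(f"{len(training_set)} labeled examples out of {len(candidates)} candidates")
```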

If the data sets are small

For example, if you are making a one-time life decision such as which house to buy (not the price to pay, which AI is good at, but rather which house will work for you and your family), the data set is not large. The data might suggest you will like the house for its community and its features. Whether or not the purchase works out, you still have only a single piece of feedback to learn from. It is hard to learn from tiny data sets: machine learning needs thousands, if not tens of thousands, of data points to be trained to make informed decisions.

If the answer is indeterminate rather than a clear yes/no

This is probably the biggest area where unassisted AI fails at proper classification. And it is the problem that most affects those of us seeking trustworthy media analytics.

How a person sees content frequently depends on their perspective. ‘Good’ things can in fact be ‘bad’ and vice-versa, and computers can’t be taught one-answer-fits-all approaches – which is what most AI-powered automated media intelligence solutions attempt today.  Two people can read the same story and come away with very different opinions of its sentiment.  Their take may depend on their political or educational background, their role in a company, or even the message the company wants heard in public – for example, when positive discussion of a taboo topic is seen as a bad thing.

In addition, AI can’t reliably interpret many language structures and usages, including even simple phrases like “I love it,” which can be serious or sarcastic.  AI also struggles with double meanings and plays on words.  And AI cannot address the contextual and temporal nature of text – how the words, topics, and phrases used in content change over time. For example, a comparison to Tiger Woods might be positive when referring to his early career, less positive in his later career, and perhaps quite negative as a comparison to him as a husband.

If the subject matter is evolving

Most AI solutions applied to media analysis today use what can be called a ‘static dictionary’ approach. They choose a defined set of topical words (or Booleans) and a defined set of semantically linked emotional trigger words. The AI determines topic and sentiment by comparing the words in the content to the static dictionary. Recent studies such as “The Future of Coding: A Comparison of Hand-Coding and Three Types of Computer-Assisted Text Analysis Methods” (Nelson et al., 2017) have shown that the dictionary methodology does not work reliably and that its error increases over time.

The fundamental flaw in this method is that the static dictionary doesn’t evolve as topics and concepts shift over time and new veins of discussion are introduced. Unless there is a way to regularly provide feedback to the AI solution, it cannot learn, and the margin of error grows and compounds quickly.  It is a bit like trying to talk about Facebook with someone transported from the year 2004 who only understands structured publishing – they cannot understand what you are talking about in any meaningful way, because mass social media did not yet exist.
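
A toy version of the static dictionary approach shows the flaw directly – the illustrative word lists below score familiar language but return zero for phrasing they have never seen:

```python
# Toy 'static dictionary' sentiment scorer (word lists are illustrative).
POSITIVE = {"great", "innovative", "praised", "strong"}
NEGATIVE = {"scandal", "weak", "criticized", "recall"}

def dictionary_sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(dictionary_sentiment("Analysts praised the strong launch"))    # +2
# New slang, sarcasm, or emerging topics score zero, because the
# dictionary never evolves with the conversation:
print(dictionary_sentiment("The launch was a total dumpster fire"))  # 0
```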

As these examples show, AI struggles with interpreting complex situations with either small data sets or indeterminate answers that evolve over time.  So what does this mean to me as a professional communicator?

AI applied to media analytics needs to be guided to be successful. There are three specific areas where AI needs a boost to be successful analyzing media:

  • Changing Conversations: As seen in the Cal Berkeley research, for an AI system analyzing media content to remain accurate and relevant, it needs to be constantly trained as conversations and popular phraseology shift over time. You need enough consistently superior-quality analysis to feed back to the computer and train it.
  • Perspective: You need to tune AI specifically to understand your perspective. A solution tuned to someone else or all companies blended together just won’t work.  This is because the phrase that one person (or company) determines is relevant and positive might be viewed differently by another person with different priorities or messaging goals.
  • Context: The conversation ecosystem needs to be taken into account. Often coverage is bookended by events, public discourse, and related coverage outside the sample set of coverage.  In his article just a few weeks ago for MIT Sloan Management Review, Sam Ransbotham writes, “While the pace of business may be ever-accelerating, many business decisions still have time for a second opinion where human general knowledge of context can add value.”

This doesn’t mean you have to analyze everything to train an AI system, but you do need to analyze enough data that the computer can learn robustly from it. AI alone can’t teach itself about changing social conversations, perspective, or context.

On the bright side, humans can work with AI by defining, training, and maintaining a dynamic, accurate, and reliable human feedback loop. This means persistent training, unique to each individual company, with human attention that helps AI bridge the gap between what it’s trained on and what the customer is trying to learn.  Supervised machine learning is almost universally considered the leading approach to solving the AI content analytics problem for the foreseeable future.

So how will you use AI?  Smartly, I hope.

PublicRelay delivers a world-class media intelligence solution to big brands worldwide by leveraging both technology and highly-trained analysts.  It is a leader on the path to superior AI analytics through supervised machine learning. Contact PublicRelay to learn more.


A recent VentureBeat article titled “AI will turn PR people into superheroes within one year” predicts that artificial intelligence and machine learning will explode within the public relations industry in the near future.

Data found through machine learning, combined with professional hunches and experience, can help PR experts with real-life applications like steering companies clear of future communications catastrophes. We’ll explore more about AI and media intelligence in this Q&A exchange with PublicRelay’s Bill Mitchell, Chief Technology Officer.

Q: Can artificial intelligence sometimes be too artificial?

A: I agree with the position in the VentureBeat article that people are sometimes too quick to jump on the artificial intelligence bandwagon. With any highly nuanced, context-based industry like media intelligence, it’s important to consistently apply machine learning and validate the results – paving the way for further gains in accuracy and efficiency. I’m hesitant to wheel out the term artificial intelligence when we’re not trying to use machines to replace human thought. I prefer to call it human-assisted AI, because we’re really using it to give us superpowers and visibility into a much broader range of coverage than ever before.

Supervised machine learning does need to be supervised, however. You’re often limited by the training data that you send in, and a system will miss new emerging topics or trends that were not in the original training data. It’s not a “set it and forget it” type of solution, especially when used to analyze data from highly variable inputs like traditional and social media.

Q: When can too much AI be a bad thing?

A: When you use GPS in a car, you still need to remain aware of your surroundings. Relying on a purely automated solution is like driving while looking only at the GPS and not the road. There will be unexpected turns and situations that the GPS (or the automated media intelligence solution) has not been trained for. With an unsupervised system for media intelligence and analysis, you’re “driving without windows” – yet it’s still up to you to spot the dangers. That’s why human-assisted AI solutions deliver the best of both worlds: GPS and windows!

Q: How will human-assisted AI give superpowers to the PR industry?

A: Imagine the power of Superman or Wonder Woman if they could simultaneously read newspapers from coast to coast, listen to TV broadcasts in all local markets at once, read all tweets, and give you only what’s relevant to your brand. Human-assisted AI supercharges a company’s media monitoring capabilities by delivering the intelligence previously hidden in the context of what you are collecting.

Our systems at PublicRelay take in tens of millions of articles per week. It would be simply impossible for a human to review content at that sort of scale. Using supervised machine learning, we’re able to make intelligent routing decisions for relevant content. The result is that our analysts don’t miss important stories, yet our customers don’t miss out on the business insights that individuals uniquely attuned to their business are able to make.

From there, they are able to dig deep to extract insights on the topics that you have requested. As an added bonus, they also uncover topics that you might care about (something machines cannot do). For example, one recent issue we blogged about was the ability to extract contextual meaning from a population of articles that were summarily categorized as “negative tone” by fully automated solutions. Most news articles strive to be neutral, by the way; just because an article’s overall tone is neutral doesn’t mean the tone toward your company, product, service, or competitors was neutral. Wouldn’t that information be more useful to you?

Using machine learning to isolate a population of articles and shortlist them for more detailed inspection and rich content labeling by analysts is a great example of a complementary AI-human hybrid approach.
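
As a rough sketch of that shortlisting step – the training articles, model choice, and routing threshold are all illustrative, not PublicRelay’s actual pipeline:

```python
# Illustrative relevance model that shortlists articles for human analysts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "Acme Corp announces new product line",   # relevant
    "Local sports team wins championship",    # irrelevant
    "Acme Corp CEO interviewed on strategy",  # relevant
    "Recipe: weeknight pasta in 20 minutes",  # irrelevant
]
train_labels = [1, 0, 1, 0]  # labels supplied by trained analysts

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

incoming = [
    "Acme Corp faces questions over supply chain",
    "Ten gardening tips for spring",
]
scores = clf.predict_proba(vec.transform(incoming))[:, 1]
for article, score in zip(incoming, scores):
    route = "analyst queue" if score > 0.5 else "discard"
    print(f"{score:.2f} -> {route}: {article}")
```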

Q: Can human-assisted AI predict crises before they happen?

A: The ability to move very quickly around social media content can bring a real competitive advantage to media pros. Imagine if you had an extra two hours to prepare a well-researched, thought-out response to a particularly nasty allegation by a competitor before it went viral. That would be huge!

When measuring media coverage impact, the first 48 hours are critical. In fact, 99% of the social media coverage (be it through Facebook, LinkedIn, Twitter, etc.) is going to come in within the first 24 hours. The ability to receive a “heads up” notification just a couple of hours in advance of what’s to come would definitely feel like a superpower. Without accurately identified content, though, you could be predicting virality for content that isn’t even relevant – yet another reason why a supervised machine learning system with trusted results is the bedrock of this approach.
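
As a back-of-the-envelope illustration of such a “heads up,” one could flag posts whose early share velocity looks abnormal – the threshold and data below are invented:

```python
# Illustrative early-warning check based on first-hour share velocity.
from datetime import datetime, timedelta

def early_viral_alert(share_timestamps, posted_at, threshold_per_hour=500):
    """Return True if first-hour shares exceed the (invented) threshold."""
    first_hour = [t for t in share_timestamps
                  if t <= posted_at + timedelta(hours=1)]
    return len(first_hour) >= threshold_per_hour

posted = datetime(2018, 3, 1, 9, 0)
shares = [posted + timedelta(seconds=5 * i) for i in range(720)]  # 720 shares in 1h
print(early_viral_alert(shares, posted))  # True -> notify the comms team early
```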

Q: Why is supervised machine learning more relevant for media intelligence?

A: A simple answer is because of the supervised learning component. At their core, both traditional and social media are driven to avoid the type of “sameness” that machines find easy to learn from. While brands and communicators strive to “get their message picked up” and repeated — outlets, authors and influencers will rarely repeat that message verbatim and in a predictable pattern. The same happens with spokespeople who are told to “stay on message”.

Supervised machine learning, on the other hand, actually gets better when it gets something wrong. It delivers the best results when it’s allowed to be expansive enough to surface terms at the decision boundary that require human interpretation. A system that is too tightly constrained to prior content leaves you vulnerable to missing important coverage.

Human-assisted AI also provides a mechanism for fast and accurate discovery of new concepts. The human analysts doing the final quality assurance will catch anomalies that AI would need to see repeated multiple times before recognizing as a pattern. For example, suppose a new influencer or expert starts getting covered in outlets or by authors that you care about. If an analyst sees the name or company appear several times in a short span or across multiple outlets, they can alert the client immediately. Or perhaps an author mentions you when they actually meant to attribute the activity to a competitor – if it’s negative, you want that corrected as soon as possible.

Q: What are your personal predictions for PR and machine learning in the next few years?

A: I think that we’re going to see more companies revise their initial assumption that artificial intelligence alone is going to transform their business. Instead, I think they’ll focus on where machine learning can help them be more intelligent about deploying their resources. Customers are going to start demanding more of their vendors – with media mentions and initial keyword/phrase tracking becoming “table stakes.” Teasing out conceptual tags and aligning the data analysis with KPIs and real business goals will be the norm. In the future of AI, solutions that give leaders the answers they need will win out over mere tools.

Contact PublicRelay to learn more about how a supervised machine learning solution with human analysis can help you and your PR team with media intelligence.


A recent Bloomberg Businessweek article titled “The Future of AI Depends on a Huge Workforce of Human Teachers” discussed what to expect in the future of artificial intelligence.

It’s clear from this article that the tech industry and professional investors are betting big on AI. But what the Bloomberg article shows is that they are specifically putting their money (more than $50 million in 2017 alone) toward data-labeling startups that collectively employ over 1 million people worldwide to train AI software. The bet they’re placing is on a technique known as supervised machine learning.

Supervised machine learning is a subset of artificial intelligence and one of the two popular ways in which computers learn:

  • In supervised machine learning, a computer model is fed raw inputs that are already labeled. This method is the equivalent of technology learning like a student working with a teacher. The teacher (or human) provides examples to the student (a computer) of what is right and wrong, correcting the student’s work along the way. In this method, the student learns the general rules of a subject and applies these lessons to predict the right answer to a new question in the future.
  • In unsupervised machine learning, a computer is fed raw inputs without any labels to analyze. The result is that the computer must find patterns in that data on its own without any human assistance. This method is the equivalent of trying to learn a foreign language by reading millions of pages of untranslated text without the assistance of an instructor, verb conjugations, or vocabulary dictionary.
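
To make the contrast concrete, here is a compact sketch – toy data, with scikit-learn standing in for both styles of learning:

```python
# Supervised vs. unsupervised learning on the same toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[1, 1], [1, 2], [8, 8], [9, 8]])

# Supervised: the labels act as the 'teacher'.
y = np.array([0, 0, 1, 1])
student = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(student.predict([[2, 1]]))  # applies the rule the teacher labeled

# Unsupervised: no labels, so the model must invent its own groupings.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # groups emerge, but nothing says what they *mean*
```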

If you look at the best applications of AI, they need humans to provide feedback on what is right and wrong. For example, Tesla Autopilot, a driver-assist feature offered by Tesla Motors, uses supervised machine learning to train its self-driving technology. In this case, Autopilot is taught how to drive by the human owners who operate their cars every day. As Tesla owners drive, sensors in their vehicles collect hundreds of millions of data points, from GPS coordinates to actual video captured by the cars’ front and rear cameras. The vehicles then wirelessly send this data back to Tesla’s Autopilot data model, creating a massive library of knowledge about how to drive. The human feedback loop is essential here because if there is an error, the damage could be catastrophic.

By creating this library with the help of human drivers, Autopilot can use visual techniques to break down the videos and understand why drivers reacted the way they did. So, when a ball or child crosses the self-driving car’s path, Autopilot recognizes the pattern and reacts accordingly—stop!

Utilizing AI in Media Intelligence

Many automated media monitoring solutions say they use artificial intelligence and machine learning. But if there is no dedicated human analyst anywhere in the process to check or label the input data (i.e. your organization’s media), it means one of two things:

  1. The results you’re getting from your company’s media intelligence solution are inaccurate because it uses unsupervised machine learning. The AI is teaching itself, and without labeled data you probably shouldn’t count on it to make informed decisions.
  2. You, the customer, are expected to be like the coders discussed in the Bloomberg article. This means taking the time out of your own schedule to train the algorithms on what is right and wrong or hiring someone to do so.

Some people say a supervised machine learning solution is expensive, but supervised machine learning boosts media intelligence accuracy and helps communications pros make better decisions. To get accurate data, you either hire analysts to train the algorithm so it learns the right way, or spend the time doing it yourself. Otherwise, you may end up paying in a different way – with inaccurate results.

How AI Can Affect Media Intelligence in the Future

AI and machine learning are going to have an enormous impact on public relations and media intelligence. AI will give you more than just analytics, it will give you answers to what is happening in your media coverage and why – all on demand. Not only that, but it will forecast what topics could be a problem in the future, where those problems will occur, and how long the problems may last. These predictive analytics will make you proactive rather than just reactive.

As AI gets “smarter,” it will make you better and more prepared. But AI will only become as smart as the human teachers who train it along the way. It’s a virtuous upward cycle of humans and technology making each other better. Humans can train the machines and pick up the last pieces of the most complex analyses that AI can’t handle, like idiomatic phrasing and sarcasm. If done right, the benefits are truly worth the investment.

PublicRelay continues to invest in AI. In fact, supervised machine learning is embedded into our solution both in terms of making the analysis better and faster as well as in the outputs to the client.

Contact PublicRelay to learn more about how accurate data can help you with media intelligence.


Media Intelligence is crucial to high-performing communications teams – but how accurate is this reporting?

The reality is that many PR/communications teams today remain in the Dark Ages when it comes to performance measurement and, unfortunately, most CCOs don’t realize they have bad data until it’s too late.

In this blog, we’ll explore what it really means to have accurate data and, more importantly, how the accuracy of the data you start with affects the objectives you track and the outcomes you present to your board of directors.

Common Roadblocks with Media Monitoring  

Typical media monitoring tools are, at best, only 80% accurate. Most vendors have an out-of-the-box approach to data collection that centers only on keyword tracking, and that yields many problems.

Analyzing Irrelevant Information

Most media analysis tools provide counts of keyword mentions and overall tone, which is not insightful.  Brands cannot simply report raw mention counts, because a significant number of those mentions are likely to be stock quotes or auto-published press releases on spam sites.

Additionally, mentions of product or company names may appear in counts even when they have no meaning to the business. For example, impression counts might include the use of “sprint” as a verb when the PR team is actually tracking references to Sprint, the telecommunications company. Or they might include a news article that refers to an employee who once worked for the company. Automated tools don’t catch these things, and that can really skew your metrics.
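
A tiny example of how naive keyword matching inflates counts – the posts and the deliberately crude context check are invented:

```python
# Illustrative keyword false positives: 'sprint' the verb vs. Sprint the brand.
import re

posts = [
    "Sprint reports strong quarterly earnings",
    "I had to sprint to catch the bus",
    "Sprint expands its 5G coverage",
]

naive_hits = [p for p in posts if re.search(r"sprint", p, re.IGNORECASE)]
# Case-sensitive matching drops the verb here, but it would also drop
# lowercase brand mentions - real accuracy needs human (or trained) review.
cased_hits = [p for p in posts if re.search(r"\bSprint\b", p)]

print(len(naive_hits), "naive matches;", len(cased_hits), "case-aware matches")
```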

Missing Out On Valuable Information

To filter out irrelevant articles or mentions, Boolean or “smart” technologies are used to narrow down keyword results. But this, too, yields inaccuracies: what if you accidentally filter out something important?

Poor Sentiment Identification

Although technology has come a long way, media monitoring systems still have a difficult time understanding human expression like sarcasm. What if your brand is mentioned in a publication such as The Onion? Most tools could never analyze the real tone of that article. They also can’t catch when your company is wrongly named – think of Equifax and Experian in the recent data breach crisis. Because there are only three major credit bureaus, Experian is often mentioned in negatively toned articles about Equifax.

Misinterpreting sarcasm and catching false mentions are just some of the many downfalls of automated solutions. Overall, automated sentiment analysis is widely considered to be only 60% to 80% correct.  For example, a pharmaceutical company that produces cancer treatments found that its automated media intelligence solution marked every mention of the word cancer as “negative.” Adding professional human analysis to your media monitoring can help avoid pitfalls like these.

Lack of Context

Automated monitoring systems give every mention the same weight. Yet a reference in the Wall Street Journal, or by influencers in the company’s niche who can amplify content they care about, can mean more than a remark in a local media outlet.

Locating Non-Keyword Concepts

Automated tools can’t analyze concepts in your media intelligence reports that don’t show up in a keyword search. High-level concepts like “brand quality,” “thought leadership” or “workplace experience” are critical for building strategic plans and measuring goals. Yet these terms are rarely stated explicitly in written text and can only be recognized by a person who has read the article.

Emphasis on Outputs Not Outcomes

Automated technologies that many PR teams use to monitor media coverage track KPIs—such as reach and impressions—that quantify outputs, not outcomes. To be taken seriously by leadership, PR/communications teams must track metrics that are accurate and meaningful to business performance.

What Can Two-Pronged Media Intelligence Do for CCOs?

A two-pronged approach to data collection and analysis helps CCOs answer high-level questions focused on custom strategic outcomes. Confidence in what you are presenting to your C-Suite or board of directors is critical – you must trust that the data your team is sourcing is (a) accurate and (b) analyzed in a meaningful way that you can tie to your objectives.

By combining technology with human analysis, organizations can benefit from accurate data and actionable analytics surrounding outcome-based metrics. Only then can you move from simply monitoring output to accurately and confidently accomplishing strategic objectives such as the following:

  • Increase thought leadership among your key audiences and regions
  • Identify opportunities for media coverage that your competitors do not already dominate
  • Determine key subtopics that are impacting your reputation
  • Confirm that your messaging is accurately reaching your target audience as intended

Contact PublicRelay to find out how our media intelligence accuracy and human analysis sets us apart and can help you make data-driven decisions.


Recently, a Business Insider article caught my attention when it sought to quantify the brand boosts that Intel, Under Armour and Merck experienced when their CEOs resigned from the White House Manufacturing Advisory Council. To measure the impact of the walkouts, the piece relied on mentions and tonality data provided by a well-known social analytics tool.

The quote that stuck out was about Merck and their very high percentage of negative tone. The provider clarified by stating “the only reason it is negative is because people are criticizing Trump for singling out Merck and Frazier and not the other CEOs, and the algorithm can’t decipher that context.”

Here’s the problem: algorithms can mine data well and even analyze tone using keywords, but only a human can interpret the results in the context of current events. And what happens when you ask bigger questions, such as what the negativity in the social data was actually referring to? This is where our approach delivers the insights that machines can’t.

Because the overall tone of a post is not an accurate measure of its effect on reputation and brand, our analysis focuses on getting to the “so what?” answers. These are the answers we uncovered when we analyzed the context of the Merck posts for our test:

  • What was the overall sentiment toward Merck? Mostly positive – less than 30% of the posts were negative about the brand, a sharp contrast to the statistic pulled by the algorithm in the Business Insider article.
  • Were the posts about any of the brand drivers that the business cares about? (see chart) Yes – and the tone of each subtopic was analyzed for more clarity.
  • Did traditional media have any impact on social? Traditional media sharing activity concentrated largely on positive coverage. A Los Angeles Times editorial praising Ken Frazier for his courage was among the most shared articles, generating over a quarter of a million Facebook shares. This is exactly the type of insight communicators can use to bring the right perspective to their executive teams.

Reporting on counts and tonality in social media can be a starting point. But to deliver true value to your organization, you need to uncover context. Pairing human analysis with technology gets you the story behind the story. Communicators need specific, timely, and trustworthy conclusions to track their company’s brand and reputation when a major (or any) event occurs.


We are in a new world of public relations management, where missing a story or tweet from an influencer can spell disaster for your brand.  As communicators, it is essential to perform ongoing media monitoring and social listening for mentions of your brand and the key topics you’re tracking. This can be especially daunting when your brand name is cited thousands of times a day in social and traditional media. Picture how difficult it is to sift through all that information.

On average, we’ve found that less than 20% of all social media data analyzed for clients is relevant to their brand and key messages. A lot of the irrelevant data is generated because brands have common names.

Take Chipotle, for instance, whose brand name is also a common dictionary word for a type of sauce or pepper. How can Chipotle’s communications team monitor thousands of tweets and news articles that use the word ‘chipotle’ and isolate only those referring to the brand? Merely using keywords to track online mentions is not enough.

Most media monitoring solutions encourage users to apply Boolean logic to pare down the volume of data collected, but applying too many filters puts brands at risk of missing valuable information. For instance: what if a tweet discusses a bad batch of chipotle peppers used at a Chipotle location? If you are filtering out “pepper,” you would likely have missed it.

Tide is another example of a brand with a common-name problem. How can Tide’s communications team focus on articles about the Tide brand rather than changes in sea levels, Roll Tide Bama, or any number of other uses? A typical media monitoring solution would suggest filtering out articles that contain keywords like ‘tide’ and ‘ocean’ together. But that could still cause you to miss essential coverage, such as an article claiming that Tide detergent is causing water pollution.
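
The sketch below mimics a ‘chipotle AND NOT pepper’-style rule on invented tweets, and shows how it throws out exactly the coverage you most need to see:

```python
# Illustrative Boolean filter that discards brand-critical coverage.
tweets = [
    "Chipotle announces new loyalty program",
    "This chipotle pepper salsa recipe is amazing",
    "A bad batch of chipotle peppers was used at a Chipotle location",
]

def boolean_filter(text: str) -> bool:
    # Mimics a 'chipotle AND NOT pepper' rule.
    t = text.lower()
    return "chipotle" in t and "pepper" not in t

# The recipe noise is gone, but so is the genuinely brand-critical third
# tweet - exactly the missed coverage the examples above warn about.
for t in tweets:
    if boolean_filter(t):
        print("KEPT:", t)
```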

The bottom line is that simply automating keyword tracking and pairing it with Boolean logic is not a strong enough media monitoring solution. You need to pair automated media monitoring with high-quality human analysis to ensure that you are working with data that you can trust.


As you dissect the landscape of media monitoring and analytics solutions available to your team, it’s important to understand how best-in-class Communications professionals are tying measurement to business outcomes.

In our last blog post, we shared the results of an MIT study testing how untrained machines fare when it comes to measuring a brand’s key messages, identifying appropriate sentiment, and determining customer experience. The findings were less than stellar at 9% correct for key messages, 20% on sentiment and 33% on customer experience.

The focus in that post was on human vs. machine, but what happens when you pair the two together?

A hybrid approach to media analysis, one where a highly-trained human analyst utilizes top technology to track and monitor your media coverage and then correlates that data back to the stated business goals, produces exponentially better results.

The outcomes outlined below are achieved with a partnership of top technology and a level of human analysis that understands the context and nuances of media coverage specific to each unique company.

  • Identify a new competitor – The C-Suite at a manufacturing corporation was specifically concerned about its coverage in high-stature industry publications, especially as it pertained to competitors. Although the team was initially monitoring a predetermined list of competing companies, a dedicated analyst noticed that a young, untracked company was emerging and taking a share of attention from important outlets. Proactively tracking that young company allowed the C-Suite to immediately pull powerful share-of-voice (SOV) reports and keep an eye on the new business, keeping them in the know about what was most impacting their business before it was too late.
  • Stay in the know in a volatile market – A major telecommunications provider frequently needs timely media insights and reporting regarding product announcements, competitive announcements, or for ad-hoc internal presentations. They do not have the time to rework “directionally correct” data to get to the coverage analysis they need. Utilizing a dedicated media analyst with access to state-of-the-art reporting allows the team to be responsive and deliver their CEO the analytics they trust to make important decisions in the moment.

Ultimately, the power of pairing human with technology when it comes to media analysis is that it gets to the story behind the story, giving you the specific, timely, and trustworthy conclusions you need to be a brand hero with your C-suite.


Measuring media coverage is a complicated task. It’s a no-brainer for your team to want both accuracy and efficiency at the forefront of any monitoring and analysis solution, but it can be difficult to find a tool that succeeds at both.

With a plethora of different options to choose from, including many automated solutions that use technology alone to provide media metrics, it’s important to ask – is this analysis something we can trust?

The Problem with Data

For many leading companies, accurate analysis of text remains a problem yet to be solved. IBM has estimated that one-third of business leaders don’t trust their own data and, even more staggering, that bad data costs the U.S. economy $3.1 trillion a year.

Researching Humans v. Machines

Given the high risk that inaccurate data poses across industries, a recent collaboration between Toyota Motor North America’s Energy & Environmental Research Group, PublicRelay, and students at the Massachusetts Institute of Technology Sloan School of Management (MIT Sloan) Analytics Lab set out to answer the question: can computers analyze media coverage as well as humans?

After a series of analytical tests on social media postings conducted over the course of three months, the team determined that machine-based analysis could detect only 9% of key messages, identify appropriate sentiment only 20% of the time, and determine customer experience correctly only 33% of the time.

Conducting the Study

In order to conduct their research, the MIT Sloan Analytics Lab tested various technologies to:

  • Understand which topics car enthusiasts are discussing in relation to Toyota’s alternate-fuel vehicles on Twitter; and
  • Identify “significant” tweets based on topic inference from the above.

The goal was to improve the Toyota team’s understanding of consumer behavior, improve consumer perceptions and sentiment, identify tweets that demand further tracking or direct engagement, and formulate messaging that Toyota might use to drive social media conversations itself.

Answering the Data Question

Rather than accurately providing the analysis itself, the students found that the technology was most useful for eliminating irrelevant posts before humans analyze them.

The results of this student project showed that rapidly emerging issues and breaking news were slow to be reflected in the automated model – the machines had to “learn” how to categorize them, and that took a fair amount of time. The results also showed that 80% of the machine-learning work was purely data clean-up.

These days, data and analytics are critical to decision making in every business function, and Communications and PR is no exception.  But teams need to select solutions that don’t just provide data – they must provide quality data.  And based on the MIT/Toyota study, it seems that computers alone are not yet up to that task.

Click here to learn more about PublicRelay’s partnership with Toyota and MIT regarding machine analysis.


Businesses love information, data, and statistics.  Show me a successful CEO, and I will put money on the fact that they rely on stats to plan, react, focus, and strategize smartly.  And compared with 20 or even 10 years ago, the wide availability of vast troves of information has changed the face of business.

Data for Communications

Those of us in the communications field are caught up in the same tidal wave.  We are expected to collect media intelligence to support our decisions and prove our impact to the CEO and the Board.

The days of thick binders of media clips and clip counts are long gone.  And even the old standby “Ad Value Equivalent” is under increasing scrutiny by skeptical CEOs, who find its almost arbitrary assignment of the value of a story hard to justify in the days of internet distribution, mobile consumption, and social media sound bites.

So what have we done?

  • We’ve changed with the times and turned to technology and data analytics to achieve those important goals.
  • We’ve begun analyzing stories by the thousands – even by the millions in the case of social media monitoring – to extract trends and patterns in the coverage.
  • We’ve captured sentiment through complex algorithms that look for descriptive and emotional words and tie them to subjects gleaned through entity extraction.
  • And we’ve collected all these data into vast databases that are then crunched to create impressive-looking charts and graphs that tell us what is happening in the media as it relates to us and our brand.

More Data vs. More Information

But let’s take a step back and ask an important question:

Do we have more information, or do we really have just more data?  Is this vast database – at its core – based on solid analytics that are structured on how we, as communications professionals, do our jobs?

Frequently the answer is no, and therein lies the rub.

Just because you can analyze everything with a computer does not mean that you should.  As more and more people become publishers through blogs and social media, they talk about anything that comes to mind.  Brand names are tossed about in casual conversations that have nothing to do with the products themselves.  Sarcasm and double meanings are commonplace, so often what was written is not literally what was meant.

Less Knowledge

What increasingly happens is this: as you expand your data set to pick up every single mention of a topic, issue, brand, executive, or company, you will – by the nature of your search terms – always be casting a very wide net.  The data set becomes huge, though software can handle the volume easily.

But what software continues to struggle with is the increasingly poor signal-to-noise ratio.  The wider you cast the net, the more likely you are to ingest, process, and analyze irrelevant coverage.  And what corporate communications or product strategy is ever driven by an unknown, low-influence blogger?  It is important not to dilute information and insight from the authors and outlets that really matter with volumes of what amounts essentially to just noise.

What you have is more and more information – but actually less and less knowledge. 

And so while we are enjoying the pretty charts and statistics, we are actually not looking at more insight.

For communications to be part of the data revolution, we need solutions and approaches that leverage data, but do so based on quality data from the start.  Only then will we be able to participate in the C-Level executive discussions as a respected peer and prove our impact.
