A sentiment experiment: This week’s #BBCQT panellists on Twitter


Sentiment is a complex beast, even for humans to decode. How often are you not quite sure whether somebody is being positive or negative about something? How often do you have to rely on non-verbal cues like body language or facial expression? Cultural and linguistic factors play a huge role in our ability to understand what is meant. And this is why sentiment is a difficult process to automate.

This is why sentiment reporting for social media discussions is problematic - giving a single ‘positive’ or ‘negative’ rating to a comment risks missing the real nuance. Even at the Tweet level, assessing 140 characters as positive or negative can be wrong as often as it is right.

There is another way of looking at sentiment. Not to look at the comment, but to identify known elements within the sentence and look at how they are discussed. For example, let’s consider the following Tweet:

As much as I absolutely adore BBC Question Time, Steve Coogan is making me hate this week’s show so much. He’s making me really angry #bbcqt

As we read this, we know that it is neither positive nor negative. It expresses both, depending on the object being discussed:

  • ‘BBC Question Time’ is clearly being described positively (‘I absolutely adore [it]’)
  • ‘Steve Coogan’ is clearly negative (‘making me really angry’)
  • ‘This week’s show’ is also clearly negative (‘[I] hate [it]’)

So by breaking down the Tweet into elements like this we get a much more nuanced view of sentiment, and probably a much more useful one. If I am analysing what people say about an episode of BBC Question Time, for example, I might be more interested in comparing how people talk on Twitter about the issues raised or the guests on the panel than in the general tone of discussion during the show. Looking at sentiment at this object level is more insightful and more actionable.
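As a toy illustration of this object-level approach, the sketch below scores each known entity by the opinion cues in the clauses that mention it, rather than scoring the whole Tweet. The cue lists and clause splitting are simplistic assumptions of ours, not how a commercial tool such as Semantria works, and it misses the pronoun reference in the final clause (‘He’s making me…’):

```python
import re

# Hypothetical cue lexicons - real tools use far richer trained models.
POSITIVE_CUES = {"adore", "love", "great"}
NEGATIVE_CUES = {"hate", "angry", "awful"}

def entity_sentiment(text, entities):
    """Score each entity by the cue words in the clauses mentioning it."""
    scores = {e: 0 for e in entities}
    # Split into rough clauses on full stops and commas.
    for clause in re.split(r"[.,]", text.lower()):
        tokens = set(re.findall(r"[a-z']+", clause))
        clause_score = (sum(t in POSITIVE_CUES for t in tokens)
                        - sum(t in NEGATIVE_CUES for t in tokens))
        for entity in entities:
            if entity.lower() in clause:
                scores[entity] += clause_score
    return scores

tweet = ("As much as I absolutely adore BBC Question Time, Steve Coogan is "
         "making me hate this week's show so much. He's making me really "
         "angry #bbcqt")

print(entity_sentiment(tweet, ["BBC Question Time", "Steve Coogan"]))
# {'BBC Question Time': 1, 'Steve Coogan': -1}
```

A real pipeline would also need to handle negation and coreference, which is exactly where the nuance described above gets hard.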

So for the episode first broadcast on 27 September 2012, we conducted an experiment to explore sentiment. Not of the show or general discussions but specifically to investigate people’s sentiment towards the five guest panellists.

What we analysed

  • Using DataSift, we recorded all Tweets including the hashtag #bbcqt during the time the show was on air. This was a total of 21,651 tweets.
  • A random sample of approximately 20% of these was then taken, giving us a total sample of 4,266 tweets.
  • This sample of Tweets was analysed using Semantria - this identifies the things (they call them ‘entities’) discussed in the Tweet and then gives a positive or negative score based on the context in which that entity is discussed.
  • We isolated entities that were the five guests on the show - using all possible spellings of the following names:
    • Danny Alexander
    • Harriet Harman
    • Jacob Rees-Mogg
    • Kirstie Allsopp
    • Steve Coogan
  • We then took a mean score for how positive or negative the context is in which each of these entities is discussed.
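The sampling and averaging steps above can be sketched in miniature. The panellist names are from the post; the per-mention scores and the scoring scale are invented for illustration:

```python
import random
from statistics import mean

def sample_tweets(tweets, fraction=0.2, seed=42):
    """Step two: a random sample, without replacement."""
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    return rng.sample(tweets, round(len(tweets) * fraction))

def mean_scores(mentions):
    """Step four: mean sentiment score per entity."""
    by_entity = {}
    for entity, score in mentions:
        by_entity.setdefault(entity, []).append(score)
    return {entity: mean(scores) for entity, scores in by_entity.items()}

# Made-up per-mention scores on a -1..+1 scale.
mentions = [
    ("Steve Coogan", -0.6), ("Steve Coogan", -0.2), ("Steve Coogan", 0.1),
    ("Harriet Harman", 0.4), ("Harriet Harman", 0.2),
    ("Danny Alexander", -0.3),
]

print(len(sample_tweets(list(range(21651)))))  # ~20% of the capture
print(mean_scores(mentions))
```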

What we found

  • The most discussed guest panellist this week was writer and comedian Steve Coogan who was explicitly mentioned in almost 7% of all Tweets about #bbcqt. But he was also discussed most negatively.
  • The most positively discussed panellist was Labour MP Harriet Harman. She was also the only panellist who had a positive sentiment score overall.
  • Liberal Democrat MP and Minister Danny Alexander was the second most negatively discussed panellist, with Jacob Rees-Mogg and then Kirstie Allsopp scoring above him.

What we can learn about sentiment analysis

What can we learn from this? As with all research, it is important to understand the biases of our sample. It could be that the audience who watch the programme and discuss it on Twitter are more left-wing and so more sympathetic to Harman’s point of view. It may be that the discussions about Steve Coogan were coordinated by a small group of individuals with an agenda against him, biasing his score downwards. And it may be that the relatively small number of mentions of Kirstie Allsopp makes her score less reliable.

All of these are areas of potential bias that should be explored. But analysing sentiment at the object level like this gives us a much more nuanced understanding of how people were discussing BBC Question Time last night. And it allows us to have much more valuable discussions than just knowing that Tweets during the show were positive or negative.

Sentiment is a complex beast, as are the humans that are expressing it. To inform a real discussion and to have a real understanding of what may be happening in discussions online we need to stop thinking in terms of Tweets and posts and comments, and to start disaggregating the individual objects discussed and explore those instead.

What’s hot in social media – February 2012 round up


February was a busy month in social media: Pinterest rocketed in popularity so much that some are (wrongly) calling it “the next Facebook”, while Facebook itself announced the roll-out of Timeline for brand pages. Here’s a few other things that have caught our eye this month, which you may have missed:

Twitter sentiment analysis heats up

  • Twitter and UK-based data technology company DataSift came to an agreement to release Tweets going back two years. Until now, marketers have only been allowed to see Tweets from up to 30 days ago. DataSift will be taking in about 250 million Tweets every 24 hours and analysing them for sentiment, location and influence. This arrangement to access the Twitter archive has led to concerns about privacy, as well as conjecture that it could be a step towards being able to predict future events.
  • And speaking of predicting the future, HP and Organic took advantage of this month’s Oscars to play with some real-time sentiment analysis. Similar to XFactorTracker from Professor Noreena Hertz, The Awards Meter used language analysis to monitor Twitter during the run-up to the Oscars and ranked nominees according to positive or negative opinion on Twitter. At FreshNetworks we believe that you can’t necessarily take sentiment analysis at face value - automated tools need deeper analysis and an understanding of their inherent biases to really dig in for insights. However, simple tools like the Awards Meter do hint at how useful social media can be for viewing overall trends, and are a great way to demonstrate the technology.

Social influencers are the new darlings of social media

  • PeerIndex, the social influence company, has released a service targeted towards people who are ranked highly in specific subjects to offer them related discounts. Essentially a free sampling service, ‘PeerPerks’ aims to differentiate itself by ensuring that free samples only go to people who are real influencers in their product fields – the aim being that if they then talk about the products in their social circles, uptake will be much greater. As Ian Carrington, mobile sales director at Google UK, said during Social Media Week, consumers are 300% more likely to buy something when it is recommended by a friend, so it will be interesting to see whether PeerPerks takes off.
  • And as we’re involved with Park Bench, a community for dog owners, we like to keep a handle on the non-human influencers in social media too – and with almost 3.5 million fans, Boo is possibly the most famous dog on the planet. Interestingly, it looks like even he is now endorsing products in social media, with the recent mention on his Facebook page of a new American Apparel hoodie. Will other brands be jumping on the Boo bandwagon?

SoDash: bringing artificial intelligence to social media monitoring


Keeping track of discussions surrounding your brand or competitors is crucial for successful social media monitoring and listening. One challenge is the sheer range and volume of conversations that take place online, and determining what to do with them.

Sentiment analysis is a difficult task to automate as irony and sarcasm can generate false results, affecting accuracy. Being able to identify what action a posting needs, if any, is also difficult, as spam or bot messages might drown out genuine users.

We spoke to Simon Campbell about the exciting approach SoDash has taken to social media monitoring. SoDash uses artificial intelligence which means the tool can be “trained” into determining the sentiment and category of a social media posting. This advanced approach to social media monitoring can potentially result in greater effectiveness at gathering intelligence from online conversations, and reacting to them appropriately.

What do you feel is the most accurate definition of a social media management tool?

I think the key is how you define “management”. There are a lot of social media monitoring and reporting tools, but I think the real value comes from engagement, which is something the traditional offerings in the market do quite poorly. I think the most accurate definition of the perfect social media management tool would be: a tool that helps you “monitor”, “filter”, “engage with” and “report on” social media in the most efficient way, by automatically identifying opportunities and reducing workload through improved workflow.

Why do you think they are valuable to brands or businesses (ie, time savers etc)?

Tools by definition are there to make life easier and the good ones will cleverly filter all of the noise out there in social media, deliver just the relevant messages and provide a much improved work flow so that social media can be managed with minimal resource and maximum efficiency. Businesses need to be able to simply monitor and engage with their customers and prospects within social media as this represents how their brand is viewed and can relate directly to the bottom line.

What do you think is the most accurate way of tracking social media activity without using a tool?

It is a fairly laborious task without using any tools at all, as it involves creating individual searches in things like Twitter and manually monitoring them, and setting up Google Alerts for numerous phrases and, again, manually checking them all in the different places. There are some free tools that go some of the way to help monitor (such as TweetDeck) but they still rely on someone sitting in front of them all the time, and they do not do anything especially clever except pull the information into one place.

Explain how SoDash works and why it is an effective tool for social media management.

SoDash is a social media dashboard for brands and organisations to monitor and interact with the market. It is unique because of its artificial intelligence algorithms, which learn what is important to your business through tagging. Once trained, it will automatically tag messages that are sales leads, positive or negative comments about your brand or competitors, deliver market information, ghost-write and send responses, and much more. Whilst some tools out there are good for monitoring social media, SoDash enables you to take control of social media and make it work for you with minimal resources.

What platforms does SoDash cover?

SoDash currently covers Twitter and Facebook, with full monitoring of blogs, forums, YouTube, LinkedIn and others coming in September. We can also currently link to any specific source if requested. It is important to understand how engagement works on the different platforms. Twitter is by far the most engaging, as it is an open platform. Facebook is great if you have a page with lots of fans that you need to manage, but you cannot access and engage with private profiles.

How are you different from other social media management tools on the market?

SoDash is unique because it has in-built artificial intelligence, which enables it to be trained to filter, recognise and tag messages based upon the criteria that are important to your business. Thanks to the artificial intelligence algorithms, it is also much more accurate when looking at things like sentiment analysis, as it is trained on all aspects of the messages - including the structure, punctuation and the person messaging - not just positive or negative words as with other tools. Essentially, other tools on the market have been developed to focus on monitoring, whereas SoDash is built for engagement with monitoring as a given.

Who do you see as your main competitors?

Companies that use SoDash might also look at Radian6 or CoTweet. Both were built initially with monitoring and reporting in mind and, as with other tools on the market, they do not incorporate artificial intelligence so are reliant on manual filtering and responses. We have come across agencies who might continue to use something like Radian6 alongside SoDash although SoDash will soon be able to offer the full breadth of monitoring and reporting to cover all angles. Another of the features that customers are highlighting as a strong aspect of SoDash in comparison to other tools is the ease of use.

What sort of future developments can we expect to see from SoDash?

With the core functionality in place, the SoDash roadmap now focuses upon bringing on more channels/platforms and the automation of more specific reporting, especially to cover internal factors such as response times to messages (all of which can be provided now if requested). There are also some really cool advances that no one else has on the radar right now, but you will have to wait to see those!

FreshNetworks Blog: Top five posts in June



As a social media agency, FreshNetworks aims to bring you the best posts on social media, online communities, marketing and customer engagement. In case you missed them, below are our top five posts in June.

1. Social media monitoring review 2010 – download the final report

Over the first few months of 2010 we conducted an in-depth review of the leading social media monitoring tools in conjunction with our sister company, FreshMinds Research. We compared how Alterian, Brandwatch, Biz360, Nielsen BuzzMetrics, Radian6, Scoutlabs and Sysomos performed when monitoring conversations about global coffee brand Starbucks, analysing over 19,000 online conversations.

Many thousands of you have already read our posts about the review and downloaded the final whitepaper. If you haven’t yet, you can find a more detailed analysis of all these tools and more in our final report – Turning Conversations into Insights: a Comparison of Social Media Monitoring Tools.

2. Why a museum is the UK’s top brand on Twitter

The Famecount dataset is, like much data, not perfect but it does highlight some surprises that we can all learn from. The brand it has as the top Twitter brand in the UK is one such surprise. Rather than the big FMCG, fashion and media firms they include in their brands ranking, the top UK brand on Twitter for them is a museum, @Tate.

There are some structural reasons why the Tate will attract followers. Twitter is great for events and experiences and a museum has lots of these. But the success and popularity of the Tate is about much more than this. It’s thanks to the way they use Twitter. In this post we look at the three simple characteristics of the way the Tate uses Twitter that all brands can learn from, and that contribute to their success.

3. The most beautiful tweet ever written (as judged by @stephenfry)

In June, Stephen Fry declared the most beautiful Tweet ever written at the Hay Festival. The winning tweet, from Marc MacKenzie, is a concise but informative tweet and perhaps is a great example of how people are using this new medium. But what makes this tweet the most beautiful ever written?

The beauty of Twitter, and of the tweets people send, is that they convey emotion, opinion, information and expression in a very small space, and they, broadly speaking, do so in public. Unlike other conversational forms, Twitter, even when you direct a tweet at a specific person, has a broader audience and often an audience you don’t know. And of course you only have 140 characters with which to express yourself. Marc MacKenzie’s tweet is a good example of this new medium – the audience is unclear and the tweet manages to convey information, opinion, belief and also humour. All in 140 characters.

4. The top ten brands on Facebook

Starbucks is the most popular brand on Facebook when ranked by the number of people who ‘Like’ a brand (‘Fans’ as they used to be called). Over 7.5 million people like the coffee chain on Facebook, almost 2 million more than like the second most popular brand, Coca-Cola.

This data comes from Famecount which ranks brands (and people) based on the number of people who follow, like or friend them in social networks. It shows that food and drink brands are in each of the top five places, with fashion brands making up most of the remaining places in the top ten. Consumers are interested in what these brands are doing, or at least want to flag their interest in the brand or product on their own Facebook profile.

5. The problem with automated sentiment analysis

As part of our review of social media monitoring tools we compared their automated sentiment analysis with the findings of a human analyst, looking at seven of the leading social media monitoring tools – Alterian, Brandwatch, Biz360, Nielsen BuzzMetrics, Radian6, Scoutlabs and Sysomos. And the outcome suggests that automated sentiment analysis cannot be trusted to accurately reflect and report on the sentiment of conversations online.

In our tests, when compared with a human analyst, the tools were typically about 30% accurate at deciding whether a statement was positive or negative. In one case the accuracy was as low as 7%, and the best tool was still only 48% accurate when compared to a human. For any brand looking to use social media monitoring to help them interact with and respond to positive or negative comments this is disastrous. More often than not, a positive comment will be classified as negative or vice versa. In fact no tool managed to classify all the positive statements correctly. And no tool got all the negative statements right either. Automated sentiment analysis does not work, and relying on it can cause real problems for businesses.

The problem with automated sentiment analysis


Sentiment analysis is a complex beast. Even for humans. Consider this statement: “The hotel room is on the ground floor right by the reception”. Is that neutral, or is it positive or negative? Well, the answer is probably that it is different things to different people. If you want a high room with a view, away from the noise of the reception, the review is negative. If you have mobility issues and need a room with easy access, it is positive. And for many people it would just be information, and so neutral. Sentiment analysis is difficult even for human analysts in ambiguous or more complex situations. For social media monitoring tools it is also complicated and not always as simple or as clear-cut as we might like or expect.

As part of our review of social media monitoring tools we compared their automated sentiment analysis with the findings of a human analyst, looking at seven of the leading social media monitoring tools – Alterian, Brandwatch, Biz360, Nielsen BuzzMetrics, Radian6, Scoutlabs and Sysomos. And the outcome suggests that automated sentiment analysis cannot be trusted to accurately reflect and report on the sentiment of conversations online.

Understanding where automated sentiment analysis fails

On aggregate, automated sentiment analysis looks good, with accuracy levels of between 70% and 80%, which compares favourably with the levels of accuracy we would expect from a human analyst. However, this masks what is really going on. In our test case on the Starbucks brand, approximately 80% of all comments we found were neutral in nature: mere statements of fact or information, expressing neither positivity nor negativity. This is common to many of the brands and terms we have analysed; we would typically expect the majority of discussions online to be neutral. These discussions are typically of less interest to a brand that wants to make a decision or take action on the basis of what is being said online. For brands, the positive and negative conversations are of most importance, and it is here that automated sentiment analysis really fails.

No tool consistently distinguishes between positive and negative conversations

When you remove the neutral statements, automated tools typically analyse sentiment incorrectly. In our tests, when compared with a human analyst, the tools were typically about 30% accurate at deciding whether a statement was positive or negative. In one case the accuracy was as low as 7%, and the best tool was still only 48% accurate when compared to a human. For any brand looking to use social media monitoring to help them interact with and respond to positive or negative comments this is disastrous. More often than not, a positive comment will be classified as negative or vice versa. In fact no tool managed to classify all the positive statements correctly. And no tool got all the negative statements right either.
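The effect is easy to demonstrate: aggregate accuracy looks healthy because most statements are neutral, but accuracy on the positive/negative subset collapses. The labels below are invented to show the effect, not data from the study:

```python
def polar_accuracy(human_labels, tool_labels):
    """Tool accuracy measured only where the human coded positive/negative."""
    pairs = [(h, t) for h, t in zip(human_labels, tool_labels)
             if h in ("positive", "negative")]
    return sum(h == t for h, t in pairs) / len(pairs)

# Illustrative codings: human analyst vs automated tool.
human = ["neutral", "positive", "negative", "neutral", "negative"]
tool  = ["neutral", "negative", "negative", "neutral", "positive"]

aggregate = sum(h == t for h, t in zip(human, tool)) / len(human)
print(aggregate)                    # 0.6 - looks tolerable on aggregate
print(polar_accuracy(human, tool))  # only 1 of 3 polar calls right
```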

Why this failing matters to brands

This real failing of automated sentiment analysis can cause real problems for brands, especially if they are basing internal workflows or processes on their social media monitoring. For example, imagine that you send all your negative conversations to your Customer Care team to respond to where relevant. If two-thirds (or maybe more) of the ‘negative’ conversations sent over are actually positive, this process starts to break down. Perhaps more importantly, a lot of the genuinely negative conversations will never make it to the Customer Care team in the first place, having been incorrectly classified as positive. Unhappy customers don’t get routed to the right people and don’t get their problems dealt with. The complete opposite of why many of our clients want to use social media monitoring in the first place.
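A back-of-envelope sketch makes the routing problem concrete. It treats misclassification symmetrically as a simple binary flip, and the volumes and accuracy figure are illustrative assumptions, not data from the study:

```python
def care_queue_quality(n_negative, n_positive, accuracy):
    """Return (real complaints routed, complaints missed, noise in queue)."""
    routed = n_negative * accuracy        # negatives correctly labelled
    missed = n_negative * (1 - accuracy)  # negatives mislabelled positive
    noise = n_positive * (1 - accuracy)   # positives mislabelled negative
    return routed, missed, noise

# 100 genuinely negative and 100 genuinely positive comments, with a
# tool that is ~30% accurate on polar statements.
routed, missed, noise = care_queue_quality(100, 100, 0.30)
print(f"{routed:.0f} real complaints routed, "
      f"{missed:.0f} missed, {noise:.0f} noise in the queue")
```

At that accuracy, most genuine complaints never reach Customer Care, while the queue fills with mislabelled positive comments.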

So what can we do?

As with any test, our experiment with the Starbucks brand won’t necessarily reflect findings for every brand and term monitored online. Our test was for a relatively short time period and we only put a randomised, but relatively representative, sample of conversations through human analysis. However, even with these limitations, we were surprised by the very high level of inaccuracy shown by the social media monitoring tools investigated. For businesses looking to make decisions or perform actions on the basis of a conversation being positive or negative this is potentially quite dangerous.

Of course there is much that can be done here, and over time the tools can be trained to learn and to improve how they assess conversations about a given brand. But the overall message remains: automated sentiment analysis fails in its role of helping brands to make real decisions and to react to conversations about them online.

Read the other posts from our social media monitoring review 2010.