Social media monitoring review 2010: Test 1 results


This is our second post from the Social Media Monitoring - 2010 review series.

In it we’ll give you an insight into how we set up the comparison of tools (which proved quite a challenge) and the volume of online conversations each social media monitoring tool was able to uncover.

Setting up the search string
We decided to use Starbucks as a test brand for our social media monitoring because it’s a global brand that is frequently discussed online, and because the word ‘Starbucks’ has no meaning other than as a brand/company name.

As well as tracking the word ‘Starbucks’, we also tracked the phrase ‘Flat White’, a new addition to the Starbucks coffee range which launched in December 2009. We also tracked their new ready-brew coffee, ‘Via’, which was released in the US in the autumn of last year and in the UK in March 2010. We wanted to see what impact these new products were having on online conversations about the brand. Finally, because Starbucks is associated with its ‘Reward Card’ and the phrase ‘Fair Trade’, we tracked these subjects too. To keep things fair we created a similar search string for each tool.

It is important to note that some tools support more sophisticated search strings than others, so in this sense we were testing to the lowest common denominator.
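To make this concrete, here is a hedged sketch of the kind of lowest-common-denominator search strings we mean: one broad brand query plus a brand-and-topic query per phrase. The AND/OR syntax is illustrative only - each tool has its own query format:

```python
# Sketch only: the AND/OR operators here are illustrative, not any
# vendor's actual query syntax. Terms are the ones used in the test.
BRAND = '"Starbucks"'
TOPICS = ["Flat White", "Via", "Reward Card", "Fair Trade"]

def build_queries(brand, topics):
    """One broad brand query, plus one brand-and-topic query per phrase."""
    queries = {"brand": brand}
    for topic in topics:
        queries[topic] = f'{brand} AND "{topic}"'
    return queries

for name, query in build_queries(BRAND, TOPICS).items():
    print(f"{name}: {query}")
```

Keeping every query this simple is what lets the same search run on all seven tools.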

Comparison challenges
Although the tools are very different, we wanted to evaluate them all as fairly as possible, so we used each tool ‘out of the box’, as it comes. Again, there are limitations to this approach. Some of the more sophisticated options offered by some tools are only relevant to more experienced users, and some providers (e.g. Nielsen) are set up to provide a much greater level of analyst support than more technology-focused firms such as Radian6.

Our sense of the market is that most firms are still learning the art of social media monitoring, and that tools are often managed day-to-day by people with only limited training in using them in anger. This drove our approach to the research.

Coverage
As the tools all have different coverage, whether it’s for different media or markets, we set up the same filters for each tool to create a comparable ‘universe’ of conversations for Starbucks. Our test was carried out using only the English language and for the same time period on each tool.

Sentiment analysis
One of the areas we wanted to test was the accuracy of each tool’s sentiment analysis. To compare the automated sentiment (i.e. sentiment that the tool automatically codes as positive or negative) with our own analysis, we had to extract the conversations and code them manually. Some tools allow you to extract conversations; others don’t. Where we weren’t able to extract sentiment for some reason, we’ve marked the tool:

[Chart: sentiment extraction availability by tool]
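As a hedged illustration of how the manual comparison works (this is not any tool’s API, just the arithmetic of agreement between automated and human codes):

```python
def sentiment_accuracy(automated, manual):
    """Fraction of conversations where the tool's automated sentiment
    code matches the human-assigned code for the same conversation."""
    if len(automated) != len(manual):
        raise ValueError("code lists must cover the same conversations")
    matches = sum(a == m for a, m in zip(automated, manual))
    return matches / len(manual)

# Toy data for illustration - not results from the review
tool_codes  = ["positive", "negative", "neutral", "positive"]
human_codes = ["positive", "neutral",  "neutral", "positive"]
print(sentiment_accuracy(tool_codes, human_codes))  # 0.75
```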

Number of conversations
The seven tools gave very different results for overall conversation volumes - the smallest number of conversations was found by Biz360 and the largest by Radian6, over 11x the difference! But remember, more conversations is not necessarily better - there is often duplication.

[Chart: number of conversations found by each tool]

*You can usually make arrangements with your account manager if you need more data.
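A minimal sketch of why duplication inflates headline volumes, assuming only trivial case and whitespace duplicates (real duplicates across syndicated sites are much harder to detect):

```python
def unique_conversations(conversations):
    """Count conversations after collapsing trivial duplicates:
    the same text up to case and whitespace differences."""
    seen = set()
    for text in conversations:
        seen.add(" ".join(text.lower().split()))
    return len(seen)

raw = [
    "Love the new Flat White!",
    "love the  new Flat White!",  # duplicate up to case/spacing
    "Via is pretty good",
]
print(unique_conversations(raw))  # 2
```

Two tools reporting 1,000 and 11,000 conversations may be much closer once duplicates are collapsed.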

Conversation types
When you compare the conversations by media type, again each of the tools shows quite a different result:

[Chart: conversations by media type for each tool]

*Scoutlabs doesn’t allow you to extract Twitter conversations with sentiment, though the tool does let you browse the latest Twitter conversations.

At this top-level, it’s clear the tools are each doing something quite different…

Next…
More detail on these tests, and the results, can be found in our final report, which will be available to download on Friday 16th April. We’re also holding a free social media monitoring breakfast seminar on 15th April in London, where we’ll be presenting the findings of our report, as well as giving practical tips and advice about social media monitoring and the best way to analyse results. You can register for the event by clicking on the button below:

Register for Social media monitoring in London, United Kingdom on Eventbrite

Read the other posts from our social media monitoring review 2010.

Social Media Monitoring Tools - 2010 Review (intro)


Over the last few weeks we’ve been carrying out detailed tests and analysis on 7 of the leading social media monitoring tools - Alterian, Brandwatch, Biz360, Nielsen BuzzMetrics, Radian6, Scoutlabs and Sysomos. Our aim is to provide an in-depth comparison of buzz-tracking tools that accurately depicts their individual pros and cons.

We’ve put the tools to the test by tracking well-known international coffee company Starbucks. We compared over 19,000 online conversations, giving us some really unexpected results and highlighting some staggering differences in the way each tool performs.

Over the next few weeks we’ll be blogging about our findings, and at the end of the series you’ll be able to download the full report for free. We’ll cover:

  • The basics of social media monitoring
  • An overview of our results
  • The location of online conversations
  • Social media monitoring and duplication
  • Data latency
  • Sentiment analysis
  • International/multilingual social media monitoring

We’re also holding a free social media monitoring breakfast seminar on 15th April in London, where we’ll be presenting our findings, as well as giving practical tips and advice about social media monitoring and the best way to analyse results. You can register for the event by clicking on the button below:

Register for Social media monitoring in London, United Kingdom on Eventbrite


So what are social media monitoring tools?

In a nutshell, social media monitoring tools pretty much do what they say on the tin - they monitor online conversations taking place through social media. They track anything that’s being said about your business or your brand on blogs, forums, Twitter and other social spaces. Each tool is different, varying in complexity and in the way it gathers and processes information, as we will show you over the next few weeks.

Our sister company, FreshMinds Research, has been using social media tools to generate customer insights for years. As we’re a social media agency, we usually work with FreshMinds Research to conduct social media audits or monitoring when establishing a social media strategy for clients. So over the next few weeks you’ll benefit from the unique findings of a research company working in collaboration with a social media agency.

We’ll start with the basics and work through our research step-by-step. If at any time you want us to explore a certain aspect in more detail, please let us know. Our next post will explore the basics of social media monitoring.

Read the other posts from our social media monitoring review 2010.

Facebook, Gross National Happiness and the power of buzz tracking


Image by BenSpark via Flickr

Facebook is a great source of information on how people are feeling. I can tell if my friends are happy or sad on a given day based on the updates that appear in my feed. Just imagine the potential of analysing what everybody says on Facebook on a given day - the ability to measure how happy or sad the world’s Facebook users are based on what they say on the social network. This is exactly what Facebook is doing with its Gross National Happiness index, based on an analysis of the positive and negative words people use when updating their Facebook status.

This is an example of buzz tracking and analysis: looking at the words and phrases people use in social media and then using sentiment analysis to assess how positively or negatively they feel about something. With Facebook, the opportunity is huge. If you combine the ability to analyse the sentiment in status updates with the vast amount of profiling data, the potential for insight into consumer behaviour is enormous. Sentiment could be analysed at a macro level, then segmented. What is the impact on male students in New York of a new advertising campaign on the subway, for example? Or how does a government policy aimed at mums affect women in London? And if you add the ability to analyse the networks people sit in on Facebook, and the impact an event has on them and on their friends, this could be a rich resource for brands and organisations to learn from.
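A minimal sketch of the word-list approach behind an index like Gross National Happiness: score each update by positive words minus negative words. The word lists and weighting here are invented for illustration; Facebook’s actual method is not public at this level of detail:

```python
# Invented word lists for illustration only
POSITIVE = {"happy", "great", "awesome", "love"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def happiness_score(status):
    """Positive-word count minus negative-word count for one update."""
    words = status.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def gross_happiness(statuses):
    """Average per-update score across a day's status updates."""
    return sum(happiness_score(s) for s in statuses) / len(statuses)

day = ["so happy today love it", "awful commute", "great coffee"]
print(round(gross_happiness(day), 2))  # 0.67
```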

It is, however, a shame that Facebook hasn’t yet produced segmented data like this. The initial analysis of Gross National Happiness, for the US, shows two things: people are least happy when public figures die, and most happy during public holidays. Informative stuff.

The real opportunity of the Gross National Happiness analysis, and of buzz-tracking more generally, is not to understand what a large mass of people think and do, but to combine this data with more detailed profiling information to analyse what different segments of customers and stakeholders really think. This is where buzz-tracking starts to add real value - comparing the discussions that different people have and analysing their sentiment based on other things we know about them. Are women more likely to be positive about a brand than men, for example? Are customers of a certain value more likely to respond positively to announced product changes than those who spend less per annum?
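The segment comparisons above could be sketched like this, assuming you had sentiment scores joined to profile data (the field names and scoring are invented for illustration):

```python
from collections import defaultdict

def sentiment_by_segment(records, segment_field):
    """Average sentiment score per segment value (e.g. per gender)."""
    totals = defaultdict(lambda: [0.0, 0])  # segment -> [sum, count]
    for record in records:
        totals[record[segment_field]][0] += record["sentiment"]
        totals[record[segment_field]][1] += 1
    return {seg: total / count for seg, (total, count) in totals.items()}

# Toy records: +1 positive, 0 neutral, -1 negative
records = [
    {"gender": "female", "sentiment": 1},
    {"gender": "female", "sentiment": 0},
    {"gender": "male", "sentiment": -1},
]
print(sentiment_by_segment(records, "gender"))  # {'female': 0.5, 'male': -1.0}
```

The same grouping works for any profile attribute - location, age band, or customer value.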

The Gross National Happiness index really does miss out on the real insight you can get from buzz-tracking. Across the universe of Facebook users there are distinctions and differences, hidden in the data, that could start to provide real insight into the way people think and behave. Buzz tracking offers a really valuable source of insight for brands and organisations, especially when it compares what people say (the buzz and sentiment) with other profiling data we have about them.

Customers sometimes do not know what they want


Image by Darren Hester via Flickr

The promise of co-creation is that getting customers involved in the innovation process, and letting them inform the design of new products, will mean that you develop a product that is better suited to their needs and will ultimately perform better in the market. Of course, it is not always this simple. Often customers don’t know what they want. They can’t necessarily articulate how they would design the ideal product, nor can they say what is wrong with the existing product. They may never have articulated what they like or dislike, but this doesn’t mean that the product is perfect.

Over the weekend, the New York Times looked at this very subject following revelations from ex-Google visual designer, Douglas Bowman. In an unusual move, Bowman explained on his blog the reason he had left Google. As the New York Times discussed, his description of the design process at Google raises a number of questions:

Can a company blunt its innovation edge if it listens to its customers too closely? Can its products become dull if they are tailored to match exactly what users say they want?

Bowman’s suggestion is that the answer to both of these questions is “yes”: that Google relies too much on data, as a proxy for customer input, and not enough on design skills alone. As the New York Times article reports:

Mr. Bowman’s main complaint is that in Google’s engineering-driven culture, data trumps everything else. When he would come up with a design decision, no matter how minute, he was asked to back it up with data. Before he could decide whether a line on a Web page should be three, four or five pixels wide, for example, he had to put up test versions of all three pages on the Web. Different groups of users would see different versions, and their clicking behavior, or the amount of time they spent on a page, would help pick a winner.
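The selection mechanism described in that quote can be sketched as a simple click-through comparison (a toy version with invented numbers; a real test would also check statistical significance before declaring a winner):

```python
def pick_winner(variants):
    """Return the variant with the highest click-through rate."""
    return max(variants, key=lambda v: v["clicks"] / v["impressions"])

# Invented numbers for the three line widths in the example
variants = [
    {"name": "3px", "clicks": 120, "impressions": 10000},
    {"name": "4px", "clicks": 150, "impressions": 10000},
    {"name": "5px", "clicks": 130, "impressions": 10000},
]
print(pick_winner(variants)["name"])  # 4px
```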

This kind of user input into the design process is what many think of when they think of working with customers on new product development and design: presenting a number of options to customers (or indeed to potential customers) and asking them to evaluate each one and choose the one they prefer (or, in this case, taking their use of a particular design as a proxy for this choice). Of course, this is not necessarily the best way of co-creating with your customers.

Rather than asking people which of a particular set of designs they prefer (or which they use most), you can often get a more useful level of insight by engaging with them. Don’t ask them about solutions to a problem; observe what they discuss and say about the problems themselves.

Imagine you are a company designing kitchen equipment. You could involve your customers in the design and innovation process in one of three ways:

  1. Ask them what they want - what new equipment, tools or gadgets would make their life in the kitchen easier or allow them to do new things.
  2. Ask them to choose between a set of prototypes - present a set of potential new products and ask them to choose which they want.
  3. Ask them to talk about what they do in the kitchen, what equipment they use and what problems they have.

The last of these is most likely to produce the most insightful outcomes. Rather than asking people to get involved with prototype products themselves, or to tell you what they want, engage them further up the innovation funnel. Talk to them about what they use in the kitchen - what makes their lives easier, what they would like to be able to prepare and cook but can’t. Don’t talk to them about the equipment that, you hope, will solve their problems; talk to them about the problems themselves.

By watching what people do, you can interpret this and begin a design process based on this information and engagement. Then, rather than just presenting three potential new designs to people, you can approach them based on what they have discussed before: “there was a lot of discussion about x; here are some ways we think we could help with that. What do you think?”

This kind of engagement is where online communities really come into their own. They let you engage your customers in a sustainable way. You can get to know them, their lives and the problems and challenges they face. It isn’t just a short-term process to “do some co-creation”; rather, it is long-term engagement that fundamentally changes the way you innovate and develop new products.

Customers sometimes do not know what they want. It’s a fact. They do, however, know how they use what they have, the problems they face and the things they would like to be simplified. Understand what they do know rather than forcing them to make choices about things they don’t.


Design matters. Understand who you are designing for.


We’ve posted before about how and why good design matters in online communities. We spend a lot of time at FreshNetworks understanding the audience an online community is aimed at, so that we can design a community that will appeal to them and encourage them to do what we want them to do.

This process of understanding who you are trying to attract and how you want to engage them is a critical step in designing an online community. It’s just as critical when designing any content you want people to engage with, even a PowerPoint presentation.

Last week I came across this great presentation on good presentation design from Alex Osterwalder. It’s required reading at FreshNetworks this week, and looks at a process for designing an engaging PowerPoint presentation. I see real parallels with the way we design our communities to engage the relevant audience.
