OPINION | 27 July 2023

Overly positive? Rethinking sentiment analysis


In the face of artificial intelligence’s text and language analysis potential, how should sentiment analysis evolve? By Mike Tapp.


The recent large language model (LLM) boom has opened up a swathe of possibilities for analysing text and language data. This makes it a perfect time to revisit one language-based measurement that has become ingrained in modern market research – sentiment.

Sentiment analysis has been employed across a host of methods, including surveys, social listening and qualitative research, and traditionally involves categorising documents as positive, negative or neutral. Yet, if we probe how this information is used in practice, is this level of analysis actually insightful? When was the last time you were able to make a business decision based on knowing that X% of posts are positive? Does this information truly help to build strategies in any meaningful way?

There are a few core problems with this historical way of viewing sentiment. The first is that coarse categorisation doesn’t capture the nuance of language. Consider the phrases “I like brand X” versus “I love brand X”. Both could be categorised as positive but, clearly, one of them carries more emotional weight than the other. Forcing language data into a handful of buckets oversimplifies it and misses a lot of useful distinction.

The second is that positive and negative as constructs are easy to measure but seldom relate to what businesses or marketing teams are trying to achieve. Quite often, the goal for brands is to build perceptions along the conversion funnel (awareness and consideration) or around deeper emotive concepts like love, trust or satisfaction. While sentiment can form part of these constructs, it clearly isn’t sufficient by itself.

What can we do as practitioners to turn sentiment analyses into something more impactful? To address the first core issue, we could try viewing sentiment as a scale rather than a set of categories. There are various open-source models that score sentiment on a continuum (for example, from -1 to +1, with +1 being entirely positive, 0 being truly neutral and -1 being entirely negative). This approach captures more nuance and enables more powerful analytics.
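To make the idea concrete, here is a minimal sketch of continuous sentiment scoring in the [-1, +1] range. The lexicon and weights below are invented purely for illustration – a real project would use an open-source model (such as a VADER-style lexicon or a fine-tuned transformer) rather than this toy:

```python
# Toy sketch of scaled sentiment scoring in [-1, +1].
# The word weights below are invented for illustration only;
# a real analysis would use an established open-source model.

LEXICON = {
    "love": 0.9,
    "like": 0.4,
    "fine": 0.1,
    "dislike": -0.4,
    "hate": -0.9,
}

def sentiment_score(text: str) -> float:
    """Average the lexicon weights of matched words; 0.0 if none match."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love brand X"))  # stronger positive score
print(sentiment_score("I like brand X"))  # milder positive score
```

On a scale like this, “I love brand X” and “I like brand X” receive different scores rather than collapsing into the same “positive” bucket.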

To address the second issue, we can focus on measuring what matters instead of using sentiment as a surrogate. If you are interested in brand love, create a measurement for it. LLMs, and machine learning in general, make this possible at low cost, and you’ll end up with a scale that genuinely relates to the thinking of the business, which will be immediately more impactful.
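As a sketch of what a bespoke measurement might look like, one approach is to ask an LLM to score each document directly against the construct you care about. The prompt wording, the 0–10 scale and the client call here are all assumptions for illustration; the actual model call is left as a hypothetical:

```python
# Sketch of a bespoke "brand love" measurement using an LLM.
# The prompt wording and 0-10 scale are illustrative assumptions;
# the model call itself (any LLM client) is omitted as hypothetical.

def build_brand_love_prompt(document: str) -> str:
    """Build a prompt asking an LLM to rate brand love from 0 to 10."""
    return (
        "Rate how much affection for the brand this text expresses, "
        "on a scale from 0 (none) to 10 (intense brand love). "
        "Reply with a single integer.\n\n"
        f"Text: {document}"
    )

def parse_score(reply: str) -> int:
    """Parse the model's integer reply, clamped to the 0-10 scale."""
    return max(0, min(10, int(reply.strip())))

# reply = llm_client.complete(build_brand_love_prompt(post))  # hypothetical call
print(parse_score("8"))
```

The result is a scale built around the construct the business actually discusses, rather than a generic positive/negative proxy.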

There’s also a middle ground between these two approaches, which involves bootstrapping sentiment measurement to other metrics or between data sources. For example, if you are measuring sentiment towards a brand or product in a survey, start doing the same on social media and formally quantify the relationship between the two.

Or, if you already have a firm measurement of a key business indicator, such as love or satisfaction, bootstrap sentiment to it to understand how the two are related and whether sentiment really does have an impact on what matters to the business. The more sources and metrics we add to our analyses, the more comprehensive our view of the world becomes.
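Quantifying that relationship can start as simply as a correlation between the two paired measurements. A minimal sketch, with invented per-respondent values standing in for real survey data:

```python
# Sketch: quantify the relationship between scaled sentiment and a
# separate satisfaction measurement via Pearson correlation.
# The paired sample values below are invented for illustration.

from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sentiment    = [-0.8, -0.2, 0.1, 0.5, 0.9]  # scaled sentiment, per respondent
satisfaction = [2.0, 4.0, 5.0, 7.0, 9.0]    # same respondents, 0-10 scale

print(round(pearson(sentiment, satisfaction), 3))
```

A strong correlation suggests sentiment is a reasonable proxy for the indicator; a weak one is evidence that sentiment alone isn’t telling you what matters.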

Ultimately, since the start of 2023, our ability to handle and analyse text data has fundamentally changed. But this doesn’t mean we should drink the Kool-Aid and rush to integrate LLMs into our workstreams without due diligence. It means that, to start with, we owe it to ourselves as researchers to scrutinise the methods we’ve used to date and reshape how we think about them, in order to move forward and make the most of the exciting new opportunities.

Beyond this, it also means we keep grounded in the value of human context. You can have the best model in the world, but if you can’t articulate why its outputs are important, it won’t achieve its full potential. Whether it’s traditional or modern approaches, we need to invest in the human layer that bridges the gap between technology and meaning.

Mike Tapp is data director at Capture Intelligence