Event: TransAsia Plane Crash

TWEET 1: Dashcams capture apparent footage of Taiwanese plane crash. Crash video may hold crucial clues.

TWEET 2: Hard to believe photos purporting to show #TransAsia plane crash in Taiwan are real. But maybe. Working to verify.

TWEET 3: If you haven’t seen this plane crash video yet, it’s chilling.

Event: Giants v Royals World Series baseball game

TWEET 1: Wow #Royals shut out the Giants 10-0. Bring on game 7, the atmosphere at The K will be insane. #WorldSeries

TWEET 2: The #Royals evened up the #WorldSeries in convincing fashion.

TWEET 3: @marisolchavez switching between the Spurs game and the Royals-Giants game. I agree! SO GOOD!!! #WorldSeries.

***

Which feed do you think is more credible? The measured, careful and somewhat withdrawn feed from the TransAsia plane crash, or the excited and emotional tweets from the second event?

In a new study, Georgia Institute of Technology researchers developed a language model that scanned 66 million tweets from almost 1400 events and found that, in general, emotionally charged tweets were seen as highly credible by the average Twitter user.

On the other hand, hedge words — those aimed at reducing the impact of a statement, like “allegedly”, “maybe”, or “may” — reduced the credibility of a tweet.

University of Melbourne language-processing expert Professor Tim Baldwin, who was not involved in the study, told Crikey that words signposting trauma, fear and anxiety, in particular, were seen as highly credible.

“You’ve got someone who’s so worked up about an event, so perhaps they’re an eye-witness, or they have some direct line to the action to explain why they’re so worked up,” Baldwin said. “The fact that they could get much of a signal at all out of what is a very noisy data source is certainly impressive.”

The researchers sorted tweet language into 15 different linguistic categories, which together produced a credibility score. That score matched human judgements of credibility almost 68% of the time.

The classifications include: negative or positive emotion, anxiety, subjectivity, hedge words, questions and booster words (e.g. "undeniable").
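To make the idea concrete, here is a minimal sketch of what lexicon-based scoring along these lines might look like. The word lists and weights below are illustrative inventions for a handful of the categories, not the researchers' actual lexicons or model coefficients; the study's real model was far richer.

```python
# A hypothetical sketch of lexicon-based credibility scoring.
# Lexicons and weights are made up for illustration only; they are not
# the Georgia Tech study's actual categories, word lists or coefficients.
import re

# Toy lexicons standing in for a few of the 15 linguistic categories.
LEXICONS = {
    "hedges":   {"allegedly", "maybe", "may", "apparently", "purportedly"},
    "boosters": {"undeniable", "definitely", "certainly", "absolutely"},
    "anxiety":  {"chilling", "terrifying", "scary", "afraid"},
    "positive": {"wow", "insane", "amazing", "good"},
}

# Illustrative weights: per the study's findings, emotional and anxious
# language raises perceived credibility, hedges and questions lower it.
WEIGHTS = {
    "hedges":   -1.0,
    "boosters": +0.5,
    "anxiety":  +1.0,
    "positive": +0.5,
    "question": -0.5,
}

def credibility_score(tweet: str) -> float:
    """Count lexicon hits per category and sum the weighted counts."""
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    score = 0.0
    for category, lexicon in LEXICONS.items():
        score += WEIGHTS[category] * len(words & lexicon)
    if "?" in tweet:  # a question mark signals uncertainty
        score += WEIGHTS["question"]
    return score

if __name__ == "__main__":
    tweets = [
        "Hard to believe these crash photos are real. But maybe.",
        "Wow the Royals shut out the Giants 10-0. The atmosphere will be insane!",
    ]
    for t in tweets:
        print(f"{credibility_score(t):+.1f}  {t}")
```

Run on the two sample tweets, the hedged crash tweet scores negative and the excited baseball tweet scores positive, mirroring the pattern the study describes.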

So how much does sounding credible translate to actual credibility? Again, it depends, Baldwin says.

“The two can be very different — can you establish whether someone was an eye-witness or not? If there’s photographic evidence, for example, and you’re getting multiple images of the same thing, then you have much stronger accuracy beyond people just saying things.” 

While the study is still preliminary, Baldwin says it could add to the arsenal of bots being developed to tackle fake news. In the meantime, the study's linguistic categories can be used as accuracy cues.

Of course, the source of the tweet will make a difference to its trustworthiness as well — we’re more likely to trust a tweet coming from a government source or credible publication.

The researchers, however, treated every author as equally reliable and based their model solely on linguistic signals. They write that while this is a limitation, previous studies have shown people tend to look at the content of a tweet rather than its author when assessing credibility anyway.

Their findings were presented at the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Oregon.