The World Wide Web turns 25 today. As Sir Tim Berners-Lee campaigns to preserve the “web we want”, the social media revolution remains the main motor behind the spread of the web worldwide. The availability of immediate information and the empowerment of citizens have gained such momentum that the amount of data exchanged about unfolding events has grown exponentially.
Ellen DeGeneres’ selfie at the last Oscar ceremony threatened Twitter’s whole infrastructure – an information overload. And it is not just celebrity gossip that is doing it. Breaking worldwide news is now seasoned with input from literally millions of netizens, all giving their slant or take on what is going on. We are used to seeing major news sites asking for “on the ground” information – cue the BBC’s now-standard box on a news item asking whether you are on site and can provide immediate information.
Social media has also been at the core of the revolutions that swept across the Arab world and, more recently, Ukraine. Whether it is a natural disaster such as a tsunami or an earthquake, or a human tragedy – a shootout, a crash – social media is on the front line. There is a problem, though, and it is becoming more and more dangerous.
The lack of control over what is and is not published when it comes to netizen input means that a rumour or a conjecture can spread rapidly across the net and be treated as truth. We are already familiar with fake celebrity deaths that go viral and, before you know it, are taken as true. The problem is exacerbated with news from trouble zones such as we have recently seen in Syria or Ukraine, where crucial information – the presence of snipers or attackers, say – is at stake.
This problem is now being studied by researchers at five European universities, who are developing an algorithm that filters online rumours and separates the true (or potentially true) from the false.
Five European universities are working on a social media lie detector in an attempt to verify online rumours. The technology, developed in the wake of the London riots, is set to help not only journalists and the private sector but also governments.
Researchers, led by the University of Sheffield in England, are cooperating on the system, which could automatically ascertain whether a rumour can be verified and whether it originates from a reliable source. It will attempt to filter reliable factual information from social media sites such as Twitter and Facebook.
The project, called PHEME, is funded by the European Union and has already been in development for three years. It is named after Pheme, the figure in Greek mythology famed for spreading rumours.
The filter will try to label information as either speculation, controversy, disinformation or misinformation. The system will use three factors to establish the accuracy of a nugget of information: it will examine the information itself (lexically, syntactically and semantically), cross-reference it against a trustworthy data source, and analyse how the information is disseminated.
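To make the three-factor idea concrete, here is a minimal sketch of how such signals might be combined into a single credibility score. This is a hypothetical illustration only, not PHEME’s actual method: the marker words, weights, and the `Post` fields are all invented for the example.

```python
from dataclasses import dataclass

# Illustrative list of words that often mark unconfirmed claims
# (an assumption for this sketch, not PHEME's lexicon).
SPECULATION_MARKERS = {"allegedly", "reportedly", "rumour", "unconfirmed", "claims"}

@dataclass
class Post:
    text: str                   # the message itself
    author_verified: bool       # crude proxy for source reliability
    retweets: int               # crude proxy for dissemination
    confirmed_by_trusted: bool  # cross-referenced against a trusted data source

def credibility_score(post: Post) -> float:
    """Combine the three factors into a 0..1 credibility estimate."""
    words = set(post.text.lower().split())
    # 1. Content: speculative wording lowers credibility.
    content = 1.0 - min(len(words & SPECULATION_MARKERS) * 0.25, 0.75)
    # 2. Source: confirmation by a trusted source dominates this signal.
    source = 1.0 if post.confirmed_by_trusted else (0.6 if post.author_verified else 0.3)
    # 3. Dissemination: rapid spread alone does not imply truth, so cap its weight.
    spread = min(post.retweets / 10_000, 1.0)
    # Weights are arbitrary choices for the sketch.
    return 0.5 * source + 0.4 * content + 0.1 * spread

post = Post("Reportedly snipers allegedly seen downtown", False, 50_000, False)
print(round(credibility_score(post), 2))  # → 0.45
```

A real system would of course learn these weights from labelled data and model dissemination as a graph rather than a retweet count, but the sketch shows why no single factor suffices: a widely shared post from an unverified source still scores low.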
In other words, PHEME promises to be a first line of defence against online fraud and misconceptions, although it will not entirely replace human judgement. The ultimate arbiters of what can or cannot be considered potentially true will remain the gatekeepers at the news desks. What PHEME does is simplify their task – particularly when the news is live and it would be more time-consuming to follow leads – and provide a probability.
Mark Twain, Winston Churchill and Terry Pratchett have all been attributed different versions of the quote “A lie can travel halfway around the world before the truth can put its shoes/pants/boots on.” With PHEME’s help, the time gap might become shorter…
The truth, if I lie.