Misinformation is insidious, whether it is the latest scaremongering in the tabloid press, ill-advised commentary from celebrities on social media, or TV anchors evangelising biased untruths about climate, guns, vaccines, quackery, terrorism, fundamentalism or any of countless other points of contention.
It is the spread of misinformation through online social networks, such as Twitter and Facebook, that represents perhaps the greatest challenge to society. Propaganda propagation is rife. Bias and bigotry bifurcate with every status update. Last year, a team at BITS-Pilani, Hyderabad Campus, in Andhra Pradesh, India (The spread of misinformation online), reported a taxonomy of misinformation to help them model how non-facts and economies of truth might spread through online social networks, the traditional media and into the offline world. Krishna Kumar and G. Geethakumari explained that not only are we increasingly reliant on online sources of information, but our faith in the validity of that information is also increasingly counterproductive, making individuals and organisations susceptible to exploitation through what the team refers to as a “semantic attack”. This, they suggest, represents the “soft underbelly of the internet”.
The detection of a semantic attack, the deliberate falsification of facts, would require a clear understanding of how a given network functions, where its nodes and hubs reside, their reach and their susceptibility to manipulation, or even social engineering. More importantly, it requires insight into the meaning, the semantics, of a given attack. The researchers have now proposed a computer algorithm that, with the assistance of their taxonomy, can look for patterns of propagation in the myriad streams of information, detect instances of misinformation and pinpoint the user nodes that are spreading the problem content. The concept of re-propagation is critical to the identification of misinformation being spread. They have tested their prototype algorithm against publicly available Twitter data sets.
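The paper gives the formal details, but the role of re-propagation can be conveyed with a minimal sketch. The fragment below assumes a stream of (user, message, original message) tuples and a set of messages already flagged as misinformation; the function name, data layout and threshold are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter


def rank_spreaders(posts, flagged_messages, min_reposts=3):
    """Rank users by how often they re-propagate flagged content.

    `posts` is an iterable of (user, message_id, original_message_id)
    tuples, where original_message_id is None for an original post and
    set for a repost/retweet. Purely illustrative -- a sketch of the
    re-propagation idea, not the algorithm described in the paper.
    """
    repost_counts = Counter()
    for user, message_id, original_id in posts:
        # Only re-propagation of known misinformation counts towards the score.
        if original_id is not None and original_id in flagged_messages:
            repost_counts[user] += 1
    # Users who repeatedly re-propagate flagged content are candidate spreaders.
    return [(user, n) for user, n in repost_counts.most_common() if n >= min_reposts]


if __name__ == "__main__":
    posts = [
        ("alice", "t1", None),   # original (flagged) post
        ("bob",   "t2", "t1"),   # bob reposts it three times
        ("bob",   "t3", "t1"),
        ("bob",   "t4", "t1"),
        ("carol", "t5", "t1"),   # carol reposts it once, below threshold
    ]
    print(rank_spreaders(posts, flagged_messages={"t1"}))  # [('bob', 3)]
```

In this toy example, the one user who repeatedly re-propagates a flagged message rises to the top, which is the intuition behind tracing problem content back to the nodes spreading it.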
“Malicious use of social networks to spread rumours, propaganda and deliberate false information or disinformation has been on the rise,” the team explains. “In this scenario, we argue the importance of a trust based reputation system for OSNs not only to rate the sources for providing accurate and timely information but also to identify malicious users forming clusters to spread false information.”
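A trust-based reputation system of the kind the team describes would, at its simplest, reward sources that post accurate, timely information and penalise those that repeatedly spread falsehoods. The sketch below is only a guess at one possible scoring rule, not the system proposed in the paper; the smoothing factor and score range are assumptions.

```python
def update_reputation(reputation, accurate, alpha=0.1):
    """Exponentially weighted trust score in [0, 1].

    Accurate posts pull the score towards 1, flagged posts towards 0.
    Illustrative only -- not the reputation model from the paper.
    """
    target = 1.0 if accurate else 0.0
    return (1 - alpha) * reputation + alpha * target


# Example: a source starting at 0.5 that posts two falsehoods in a row.
score = 0.5
for accurate in (False, False):
    score = update_reputation(score, accurate)
print(round(score, 3))  # 0.405 -- the score drifts downwards
```

Persistently low-scoring accounts that cluster together could then be examined as candidate groups spreading false information, in the spirit of the malicious clusters the authors mention.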
Kumar, K.P.K. and Geethakumari, G. (2014) ‘Analysis and modelling of semantic attacks in online social networks’, Int. J. Trust Management in Computing and Communications, Vol. 2, No. 3, pp.207–228.