A YouTube case study published in the International Journal of Web Based Communities suggests that trolling, cyberbullying, flaming and profanity are rife in the comments sections of countless videos uploaded to the site. The study points to a theory of the “acceptability of language use” that might be employed in addressing the problems of flaming and hate speech. However, the study’s author suggests that the theory raises more questions than it answers.
Michael Nycyk of the Department of Internet Studies at Curtin University in Bentley, Western Australia, suggests that web-based communities are under increasing pressure from governments and the public to enforce community guidelines that discourage and prevent negative member behavior. Hostile, threatening and profane comments are common in many such communities and indeed seem to be the raison d’ĂȘtre of some. Most mainstream users, however, do not indulge in such behavior and would prefer neither to see it nor to be on the receiving end of insults and threats. That said, the concepts of free speech and an uncensored internet are important to those very same users.
The international video-sharing website YouTube, now owned and operated by Google and headquartered in San Bruno, California, USA, is a hugely popular online destination for a billion users. Collectively, these users watch hundreds of millions of hours of video footage every day, much of it user generated. Most videos are set to allow other users to comment on and critique the content, and it is in this sphere that problems can arise.
Nycyk points out that, in common with other large online communities such as the microblogging platform Twitter, it is difficult, if not impossible, to police content efficiently, effectively and in a timely manner. As such, countless offensive and even illegal comments may be read by large numbers of people before the comments, and the people making them, are blocked for breaching the site’s terms and conditions or applicable laws. This is particularly pertinent when it comes to threats, hate speech or incitement. Flames may fall into these categories, or they may amount to unpleasant cyberbullying or trolling, which may or may not be illegal in a given jurisdiction but which most reasonable users of the service would consider inappropriate.
“The key issues for YouTube, or any web-based community, are the interpretation and subjectivity of the words used, their tone, and the intent behind these,” explains Nycyk. However, whether a given inflammatory comment is simply that, inflammatory, or amounts to something worse depends entirely on context and content, and on the target and the perpetrator. Indeed, comments deemed highly offensive by one group might be perceived as nothing more than friendly banter by another. After all, language and behavior considered acceptable in a social setting among friends and family may not be quite so acceptable in the workplace or among strangers.
“While the YouTube community guidelines seem clear on what is acceptable or not, they do not specifically state words or sentences that may be considered offensive,” says Nycyk. “This, combined with cultural and individual attitudes towards certain language, especially swear words, makes it difficult to decide who to ban or warn in the community and for what reasons.” To explore this almost opaque issue, Nycyk suggests, there is a need for a qualitative analysis of the content of flame comments, and his research endeavors to establish a framework for such an analysis. Such a framework should allow online communities such as YouTube to build a clearer, more defined policy for managing flaming and other negative behavior that might otherwise damage the community’s reputation in the long term.
Nycyk, M. (2016) ‘Enforcing community guidelines in web-based communities: the case of flame comments on YouTube’, Int. J. Web Based Communities, Vol. 12, No. 2, pp.131–146.