Nikhil Pahwa is an Indian journalist, digital rights activist, and founder of MediaNama, a mobile and digital news portal. He has been a key commentator on stories and debates around Indian digital media companies, censorship and Internet and mobile regulation in India.
On the eve of India’s 2019 general election, Nalaka Gunawardene spoke to him in an email interview to find out how disinformation spread on social media and chat app platforms figures in election campaigning. Excerpts of this interview were quoted in Nalaka’s #OnlineOffline column in the Sunday Morning newspaper of Sri Lanka on 7 April 2019.
Nalaka: Which social media and chat app platforms are most widely used for spreading mis- and disinformation in the current election campaign in India?
Nikhil: In India, it’s as if we’ve been in campaigning mode ever since the 2014 elections got over: the political party in power, the BJP, which leveraged social media extensively in 2014 to get elected, has continued to build its base on various platforms and has been campaigning either directly or, allegedly, through affiliates, ever since. They’re using online advertising, chat apps, videos, live streaming, and Twitter and Facebook to campaign. Much of the campaigning happens on WhatsApp in India, and messages move from person to person and group to group. Last elections we saw a fair amount of humour: jokes were used as a campaigning tool, but there was a fair amount of misinformation then, as there has been ever since.
Are platforms sufficiently aware of these many misuses — and are they doing enough (besides issuing lofty statements) to tackle the problem?
Platforms are aware of the misuse: a WhatsApp video was used to incite a riot as far back as 2013. India has the highest number of internet shutdowns in the world: 134 last year, as per sflc.in. Much of this is attributable to the spread of misinformation, and to the inability of local administrations to deal with it.
Platforms are trying to do what they can. WhatsApp has, so far, limited the forwarding of messages to five chats at a time; earlier the limit was 256. People are now able to control whether they can be added to a group without their consent. Forwarded messages are marked as forwarded, so recipients know that the sender didn’t create the message. Facebook has taken down groups and pages for inauthentic behaviour, costing some parties pages with a reach of over 240,000 fans. Google and Facebook are monitoring election advertising and reporting expenditure to the Election Commission. They are also supporting the training of journalists in fact checking, and funding fact checking and research on fake news. These are all steps in the right direction, but given the scale of usage of these platforms and how organised parties are, they can only mitigate some of the impact.
Does the Election Commission have the powers and capacity to effectively address this problem?
Incorrect speech isn’t illegal. The Election Commission has announced a series of measures, including a code of conduct for platforms, approvals for political advertising, and takedowns of inauthentic content. I’m not sure what else they can do, because they also have to prevent misinformation without censoring legitimate campaigning and legitimate political speech.
What more can and must be done to minimise the misleading of voters through online content?
I wish I knew! There’s no silver bullet here, and it will always be an arms race against misinformation. There is great incentive for political parties to create misinformation, and very little for platforms to control it.