Nikhil Pahwa, journalist, digital activist and founder of Medianama.com
Nikhil Pahwa is an Indian journalist, digital rights activist, and founder of MediaNama, a mobile and digital news portal. He has been a key commentator on stories and debates around Indian digital media companies, censorship and Internet and mobile regulation in India.
On the eve of India’s 2019 general election, Nalaka Gunawardene spoke to him in an email interview to find out how disinformation spread via social media and chat app platforms figures in election campaigning. Excerpts from this interview were quoted in Nalaka’s #OnlineOffline column in Sri Lanka’s Sunday Morning newspaper on 7 April 2019.
Nalaka: What social media and chat app platforms are most widely used for spreading mis and disinformation in the current election campaign in India?
Nikhil: In India, it’s as if we’ve been in campaigning mode ever since the 2014 elections got over: the political party in power, the BJP, which leveraged social media extensively in 2014 to get elected, has continued to build its base on various platforms and has been campaigning either directly or, allegedly, through affiliates, ever since. They’re using online advertising, chat apps, videos, live streaming, and Twitter and Facebook to campaign. Much of the campaigning happens on WhatsApp in India, and messages move from person to person and group to group. In the last elections we saw a fair amount of humour: jokes were used as a campaigning tool, but there was a fair amount of misinformation then too, as there has been ever since.
Are platforms sufficiently aware of these many misuses — and are they doing enough (besides issuing lofty statements) to tackle the problem?
Platforms are aware of the misuse: a WhatsApp video was used to incite a riot as far back as 2013. India has the highest number of internet shutdowns in the world: 134 last year, as per sflc.in. Much of this is attributable to the spread of misinformation, and the inability of local administrations to deal with it.
Platforms are trying to do what they can. WhatsApp has, so far, reduced the ability to forward messages to no more than 5 chats at a time; earlier it was 256. People are now able to control whether they can be added to a group without their consent. Forwarded messages are marked as forwarded, so people know that the sender hasn’t created the message. Facebook has taken down groups and pages for inauthentic behavior, robbing some parties of a reach of over 240,000 fans on some pages. Google and Facebook are monitoring election advertising and reporting expenditure to the Election Commission. They are also supporting the training of journalists in fact-checking, and funding fact-checking and research on fake news. These are all steps in the right direction, but given the scale of usage of these platforms and how organised parties are, they can only mitigate some of the impact.
Does the Election Commission have the powers and capacity to effectively address this problem?
Incorrect speech isn’t illegal. The Election Commission has announced a series of measures, including a code of conduct for platforms, approvals for political advertising, and take-downs of inauthentic content. I’m not sure what else they can do, because they also have to prevent misinformation without censoring legitimate campaigning and legitimate political speech.
What more can and must be done to minimise the misleading of voters through online content?
I wish I knew! There’s no silver bullet here, and it will always be an arms race against misinformation. There is great incentive for political parties to create misinformation, and very little for platforms to control it.
WhatsApp 2019 commercial against Fake News in India
Today, I was interviewed on video by the BBC Sinhala service for my views on hate speech and fake news. Given below are my remarks in Sinhala, excerpts from which are to be used.
In summary, I said these phenomena predate social media and the web itself, but cyberspace has enabled easier and faster dissemination of falsehoods and hatred. Additionally, anonymity and pseudonymity, fundamental qualities of the web, seem to embolden some to behave badly without revealing their identities.
The societal and state responses must be measured, proportionate and cautious, so as not to restrict everybody’s freedom of expression for the misdeeds of a numerical minority of web users. I urged a multi-pronged response including:
– adopting clear legal definitions of hate speech and fake news;
– enforcing the existing laws, without fear or favour, against those peddling hatred and falsehoods;
– mobilising the community of web users to voluntarily monitor and report misuses online; and
– promoting digital literacy at all levels in society, to nurture responsible web use and social media use.
Drawing from my recent interactions with the IGF Academy, as well as several academic and civil society groups, I position the current public debates on the web’s socio-cultural impacts in the context of freedom of expression.
With 30 per cent of our population now using the Internet, it is no longer a peripheral pursuit. Neither is it limited to cities or rich people. So we urgently need more accurate insights into how society and economy are being transformed by these modern tools.
My basic premise: many well-meaning persons who urge greater regulation of the web and social media overlook that governments in Sri Lanka have a terrible track record of stifling dissent in the name of safeguarding the public.
Cartoon by John Jonik
I argue: “As a democracy recovering from a decade of authoritarianism, we need to be especially careful how public sentiments based on fear or populism can push policymakers to restrict freedom of expression online. The web has become the last frontier for free speech when it is under pressure elsewhere.
“When our politicians look up to academics and researchers for policy guidance, the advice they often get is control or block these new media. Instead, what we need is more study, deeper reflection and – after that, if really required – some light-touch regulation.”
I acknowledge that there indeed are problems arising from these new technologies – some predictable, and others not. They include cyber-bullying, hate speech, identity theft through account hijacking, trolling (deliberately offensive or provocative online postings) and sexting (sending and receiving sexually explicit messages, primarily via mobile phones).
I cite some research findings from the work done by non-profit groups or media activists. These findings are not pretty, and some of them outright damning. But bans, blocks and penalties alone cannot deal with these or other abuses, I argue.
I end with these words: “We can and must shape the new cyber frontier to be safer and more inclusive. But a safer web experience would lose its meaning if the heavy hand of government or social orthodoxy tries to make it a sanitized, lame or sycophantic environment at the same time. We sure don’t need a cyber nanny state.”
This is the Sinhala text of my weekly column in the Ravaya newspaper of 20 Nov 2011. This week, I continue our discussion on Internet freedom: what can, and must, be regulated online, and how regulation is fundamentally different from control and censorship. I insist that conceptual clarity is as important as technical understanding of how the Internet works.