Nikhil Pahwa is an Indian journalist, digital rights activist, and founder of MediaNama, a mobile and digital news portal. He has been a key commentator on stories and debates around Indian digital media companies, censorship and Internet and mobile regulation in India.
On the eve of India’s general election 2019, Nalaka Gunawardene spoke to him in an email interview to find out how disinformation spread via social media and chat app platforms figures in election campaigning. Excerpts of this interview were quoted in Nalaka’s #OnlineOffline column in the Sunday Morning newspaper of Sri Lanka on 7 April 2019.
Nalaka: What social media and chat app platforms are most widely used for spreading mis and disinformation in the current election campaign in India?
Nikhil: In India, it’s as if we’ve been in campaigning mode ever since the 2014 elections got over: the political party in power, the BJP, which leveraged social media extensively in 2014 to get elected, has continued to build its base on various platforms and has been campaigning either directly or, allegedly, through affiliates, ever since. They’re using online advertising, chat apps, videos, live streaming, and Twitter and Facebook to campaign. Much of the campaigning happens on WhatsApp in India, and messages move from person to person and group to group. In the last elections we saw a fair amount of humour: jokes were used as a campaigning tool, but there was a fair amount of misinformation then, as there has been ever since.
Are platforms sufficiently aware of these many misuses — and are they doing enough (besides issuing lofty statements) to tackle the problem?
Platforms are aware of the misuse: a WhatsApp video was used to incite a riot as far back as 2013. India has the highest number of internet shutdowns in the world: 134 last year, as per sflc.in. Much of this is attributable to the spread of misinformation, and the inability of local administrations to deal with it.
Platforms are trying to do what they can. WhatsApp has, so far, reduced the ability to forward messages to no more than 5 chats at a time; earlier it was 256. People are now able to control whether they can be added to a group without their consent. Forwarded messages are marked as forwarded, so people know that the sender hasn’t created the message. Facebook has taken down groups and pages for inauthentic behaviour, robbing some parties of a reach of over 240,000 fans for some pages. Google and Facebook are monitoring election advertising and reporting expenditure to the Election Commission. They are also supporting training of journalists in fact checking, and funding fact checking and research on fake news. These are all steps in the right direction, but given the scale of the usage of these platforms and how organised parties are, they can only mitigate some of the impact.
Does the Elections Commission have powers and capacity to effectively address this problem?
Incorrect speech isn’t illegal. The Election Commission has announced a series of measures, including a code of conduct for platforms, approvals for political advertising, and takedown of inauthentic content. I’m not sure what else they can do, because they also have to prevent misinformation without censoring legitimate campaigning and legitimate political speech.
What more can and must be done to minimise the misleading of voters through online content?
I wish I knew! There’s no silver bullet here, and it will always be an arms race versus misinformation. There is great political incentive for political parties to create misinformation, and very little from platforms to control it.
Trends like ultra-nationalistic media, hate speech and fake news have all been around for decades — certainly well before the web emerged in the 1990s. What digital tools and the web have done is to ‘turbo-charge’ these trends.
This is the main thrust of this week’s Ravaya column, published on 1 July 2018, where I capture some discussions and debates at the 11th Deutsche Welle Global Media Forum (GMF), held in Bonn, Germany, from 11 to 13 June 2018.
I was among the 2,000+ media professionals and experts from over 100 countries who participated in the event. Across many plenaries and parallel sessions, we discussed a whole range of issues related to politics and human rights, media development and innovative journalism concepts.
Some say that blogging is in decline and claim that the days are numbered for this art of web-based writing and sharing of all kinds of content. But not (yet?) in Sri Lanka’s local languages of Sinhala and Tamil, where vibrant blogospheres exist, sustaining their own subcultures and dynamics.
In this article (in Sinhala) written in April 2017 and published in Desathiya magazine of November 2017, I look around the Sinhala language blogosphere in Sri Lanka, and offer a few glimpses of how the myriad conversations are unfolding. In that process, I also try to demystify blogs — about which popular myths and misconceptions persist in Lankan society (some of them peddled by the mainstream media or uninformed mass media academics).
This is an annotated Sinhala language adaptation of Facebook’s Community Standards as they stood on 25 March 2018. Note this is not a verbatim translation and also not an officially sanctioned translation. It has been adapted and annotated as a public service by Nalaka Gunawardene, new media analyst and activist.
I discuss Facebook’s Community Standards and the complaints mechanism currently in place, and the difficulties that non-English language content poses for Facebook’s designated monitors looking out for violations of these standards. Hate speech and other objectionable content produced in local languages like Sinhala sometimes pass through FB’s scrutiny. This indicates more needs to be done both by the platform’s administrators, as well as by concerned FB users who spot such content.
But I sound a caution about introducing new Sri Lankan laws to regulate social media, as that can easily stifle citizens’ right to freedom of expression to question, challenge and criticise politicians and officials. Of course, FoE can have reasonable and proportionate limits, and our challenge is to have a public dialogue on what these limits are for online speech and self-expression that social media enables.
Sri Lanka’s first ever social media blocking lasted from 7 to 15 March 2018. During that time, Facebook and Instagram were completely blocked while chat apps WhatsApp and Viber were restricted (no images, audio or video, but text allowed).
On 7 March 2018, the country’s telecom regulator, Telecommunications Regulatory Commission (TRCSL), ordered all telecom operators to impose this blocking across the country for three days, Reuters reported. This was “to prevent the spread of communal violence”, the news agency quoted an unnamed government official as saying. In the end, the blocking lasted 8 days.
Both actions are unprecedented. In the 23 years Sri Lanka has had commercial Internet services, it has never imposed complete network shutdowns (although during the last phase of the civil war between 2005 and 2009, the government periodically shut down telephone services in the Northern and Eastern Provinces). Nor had any social media or messaging platform been blocked before.
I protested this course of action from the very outset. Restricting public communications networks is ill-advised at any time — and especially bad during an emergency when people are frantically seeking reliable situation updates and/or sharing information about the safety of loved ones.
Blocking selected websites or platforms is a self-defeating exercise in any case, since those who are more digitally savvy – many hate peddlers among them – can and will use proxy servers to get around it. It is the average web user who will be deprived of news, views and updates.
While the blocking was on, I gave many interviews to local and international media. I urged the government to “police the streets, not the web!”
At the same time, I acknowledged and explained how a few political and religious extremist groups have systematically ‘weaponised’ social media in Sri Lanka during recent years. These groups have been peddling racially charged hate speech online and offline. A law to deal with hate speech has been in the country’s law books for over a decade. The International Covenant on Civil and Political Rights (ICCPR) Act No 56 of 2007 prohibits the advocacy of ‘religious hatred that constitutes incitement to discrimination, hostility or violence’. This law, fully compliant with international human rights standards, has not been enforced.
On 14 March 2018, I took part in the ‘Aluth Parlimenthuwa’ TV talk show of TV Derana on this topic, where I articulated the above and related views. The other panelists were Deputy Minister Karu Paranawithana, presidential advisor Shiral Lakthilaka, Bar Association of Sri Lanka chairman U R de Silva, and media commentator Mohan Samaranayake.
This comment on Sri Lanka’s social media blocking that commenced on 7 March 2018, was written on 8 March 2018 at the request of Irida Lakbima Sunday broadsheet newspaper, which carried excerpts from it in their issue of 11 March 2018. The full text is shared here, for the record.
On 1 March 2018, Facebook announced that it was ending its six-nation experiment known as ‘Explore Feed’. The idea was to create a version of Facebook with two different News Feeds: one as a dedicated place with posts from friends and family and another as a dedicated place for posts from Pages.
Adam Mosseri, Head of News Feed at Facebook wrote: “People don’t want two separate feeds. In surveys, people told us they were less satisfied with the posts they were seeing, and having two separate feeds didn’t actually help them connect more with friends and family.”
An international news agency asked me to write a comment on this from Sri Lanka, one of the six countries where the Explore feed was tried out from October 2017 to February 2018. Here is my full text, for the record:
Did Facebook’s “Explore” experiment increase our exposure to fake news?
Comment by Nalaka Gunawardene, researcher and commentator on online and digital media; Fellow, Internet Governance Academy in Germany
Despite its mammoth size and reach, Facebook is still a young company only 14 years old this year. As it evolves, it keeps experimenting – mistakes and missteps are all part of that learning process.
But given how large the company’s reach is – with over 2 billion users worldwide – there can be far reaching and unintended consequences.
Last October, Facebook split its News Feed into two automatically sorted streams: one for non-promoted posts from FB Pages and publishers (which was called “Explore”), and the other for contents posted by each user’s friends and family.
Sri Lanka was one of six countries where this trial was conducted, without much notice to users. (The other countries were Bolivia, Cambodia, Guatemala, Serbia and Slovakia.)
Five months on, Facebook has found that such a separation did not increase connections with friends and family as it had hoped. So the separation will end — in my view, not a moment too soon!
What can we make of this experiment and its outcome?
Humans are complex creatures when it comes to how we consume information and how we relate to online content. While many among us like to look up what our social media ‘friends’ have recommended or shared, we remain curious about, and open to, content coming from other sources too.
I personally found it tiresome to keep switching back and forth between my main news feed and what FB’s algorithms sorted under the ‘Explore’ feed. Especially on mobile devices – through which 80% of Lankan web users go online – most people simply overlooked or forgot to look up the Explore feed. As a result, they missed out on a great deal of interesting and diverse content.
For me as an individual user, a key part of the social media user experience is what is known as Serendipity – accidentally making happy discoveries. The Explore feed reduced my chances of Serendipity on Facebook, and as a result, in recent months I found myself using Facebook less often and for shorter periods of time.
For publishers of online newspapers, magazines and blogs, Facebook’s unilateral decision to cluster their content in the Explore feed meant significantly less visibility and click-through traffic. Fewer Facebook users were looking at Explore feed and then going on to such publishers’ content.
I am aware of mainstream media houses as well as bloggers in Sri Lanka who suffered as a result. Publishers in the other five countries reported similar experiences.
For the overall information landscape too, the Explore feed separation was bad news. When updates or posts from mainstream news media and socially engaged organisations were coming through on a single, consolidated news feed, our eyes and ears were kept more open. We were less prone to being confined to the chatter of our friends or family, or being trapped in ‘echo chambers’ of the likeminded.
Content from reputed news media outlets and bloggers sometimes comes with their own biases, for sure, but these act as a useful ‘bulwark’ against fake news and mind-rotting nonsense that is increasing in Sri Lanka’s social media.
It was thus ill-advised of Facebook to have taken such content away and tucked it in a place called Explore that few of us bothered to visit regularly.
The Explore experiment may have failed, but I hope Facebook administrators learn from it to fine-tune their platform to be a more responsive and responsible place for global cacophony to evolve.
Indeed, Facebook as a whole is an ongoing, planetary-level experiment in which all its 2 billion plus members are participating. Our common challenge is to balance our urge for self-expression and sharing with responsibility and restraint. The justified limitations on free speech continue to apply to new media too.
Some are urging national governments to ‘regulate’ social media in ways similar to how newspapers, television and radio are regulated. This is easier said than done where globalized social media platforms like Facebook, Twitter and Instagram are concerned, because national governments don’t have jurisdiction over them.
But does this mean that globalized media companies are above the law? Short of blocking entire platforms from being accessed within their territories, what other options do governments have? Do ‘user community standards’ that some social media platforms have adopted offer a sufficient defence against hate speech, cyber bullying and other excesses?
In this conversation, Lankan science writer Nalaka Gunawardene discusses these and related issues with Toby Mendel, a human rights lawyer specialising in freedom of expression, the right to information and democracy rights.
Mendel is the executive director of the Centre for Law and Democracy (CLD) in Canada. Prior to founding CLD in 2010, Mendel was for over 12 years Senior Director for Law at ARTICLE 19, a human rights NGO focusing on freedom of expression and the right to information.
The interview was recorded in Colombo, Sri Lanka, on 5 July 2017.
The German “Forum on Media and Development” (Forum Medien und Entwicklung, FOME) is a network of institutions and individuals active in the field of media development cooperation. I was invited to participate in, and moderate a panel at FoME Symposium 2017 held in Berlin on 16 – 17 November 2017.
This year’s symposium theme was Power Shifts – Media Freedom and the Internet. It explored how Internet governance issues are becoming more and more important for those who want to develop media (both mainstream media and social media) as democratic platforms.
On 17 November 2017, I moderated an international panel on Fake News: Tackling the phenomena respecting freedom of expression. It brought together representatives from government, civil society and a global media platform to discuss their roles and how they can interact to tackle the issue – all within the framework of Freedom of Expression (FOE).
Panelists included Miriam Estrin, Public Policy Manager for Europe, Middle East and Africa, Google.
Here are my opening remarks that set the context for our discussion:
Just as there are many definitions of Fake News, there can also be many perspectives on the topic. We all recognise Fake News as a problem, so let’s focus on how it can be countered. What are the local, national and global level strategies? What alliances, tools and resources are needed for such countering? What cautions and alarms can we raise?
To respond to any problem, we need to understand its contours.
Fake News is not new. The phenomenon has been around, in one form or another, for decades! Many of us in the global South have grown up amidst intentionally fake news stories in our media, some of them coming from governments, no less. And developing world governments don’t have a monopoly over Fake News either: for over half a century, the erstwhile Soviet Union and Eastern Bloc countries manufactured a vast amount of disinformation (i.e. deliberately wrong information) that was fed to their own citizens and spread overseas in sustained propaganda efforts.
Sitting here, within a few kilometres from where the Berlin Wall once stood, we need to acknowledge that veritable factory of lies that operated on the other side!
So what’s new? During the past decade, as broadband Internet spread worldwide, fake news peddlers found an easy and fast medium online. From websites to social media accounts (many hiding behind pseudonyms), the web has provided a globalised playing field where dubious content could go ‘viral’.
Yesterday at this Symposium, Mark Nelson from CIMA said “We live in a world where lies are very cheap, and much easier to disseminate than the truth.”
Which reminded me of one of my favourite quotes: “A lie can travel halfway around the world while the truth is putting on its shoes!”
Variations of this quote have been attributed to several persons including Jonathan Swift and Mark Twain. Whoever said it first, these words neatly sum up a long standing challenge to modern societies: how to cope with the spread of deliberate falsehoods.
As Mark Nelson asked us yesterday, how can we “make the Internet a place where truth is valued and spread – instead of disinformation?” This is the crux of our challenge.
So what is to be done? Among the options available, which ones are most desirable?
In searching for solutions to the Fake News crisis, we must recognise it is a nuanced, complex and variable phenomenon. There cannot be one global solution or quick fix.
Indeed, any ‘medicine’ prescribed for the malady of Fake News should not be worse than the ailment itself! We must proceed with caution, safeguarding the principles of Freedom of Expression and applying its reasonable limitations.
As human rights defenders caution, there is a danger that governments in their zeal to counter fake news could impose direct or indirect censorships, suppress critical thinking, or take other steps that violate international human rights law. This is NOT the way to deal with Fake News.
In my view, Fake News is a symptom of a wider and deeper crisis. It is a crisis of public trust in journalism and the media that has been building up over the years in many countries. Some call this a ‘Journalism Deficit’, or a gulf between what journalism ought to be, and what it has (mostly) become today.
In my view, a free press is not an automatic guarantee against Fake News. In other words, media freedom is necessary — but not sufficient — to ensure that media content is trusted by the public. We need to better measure public trust in media and what the current trust levels mean for those producing media content professionally.
I would argue that the medium to long term response to Fake News is to narrow and bridge the Journalism Deficit by nurturing quality journalism and critical consumption of media. If you agree with this premise, what specific measures can we recommend and advocate?
Let us explore how media development can counter Fake News by exposing it, undermining it, and equipping media consumers with the knowledge and skills to spot it – and not spread it inadvertently.
For this, we need everyone’s cooperation.
We need global social media platforms and digital gatekeepers like Google to join with all their might (and what might!).
We need governments to thoughtfully and carefully evaluate the optimum responses.
We need civil society to go beyond mere hand waving and finger pointing to help enhance media and information literacy.
We need researchers to keep studying and discerning trends that can influence policy and regulation (where appropriate).
We are not going to solve the problem in an hour. But we can at least ask the right questions, and clarify the issues in our minds. Onward!