It seems impossible for Facebook-owned WhatsApp to stay out of trouble. In the past few months, India has witnessed several lynching cases triggered by fake messages circulated on WhatsApp. In the latest occurrence, five people were lynched by a mob in Maharashtra on suspicion of being child kidnappers. On similar grounds, the Indian states of Tripura, Assam, and Karnataka also faced a series of lynching incidents last month.
“Instances of the lynching of innocent people have been noticed recently because a large number of irresponsible and explosive messages filled with rumors and provocation are being circulated on WhatsApp. The unfortunate killings in many states such as Assam, Maharashtra, Karnataka, Tripura and West Bengal are deeply painful and regrettable,” said a statement issued by the Indian Ministry of Electronics and Information Technology.
After the government’s grave warning, WhatsApp responded a day later in a letter detailing the steps it has undertaken to stop the crisis at hand. The letter reads, “We are also horrified by these terrible acts of violence and wanted to respond quickly to the very important issues raised. We believe this is a challenge that requires government, civil society, and technology companies to work together.”
The full text of WhatsApp’s response to MeitY’s letter on the issue of misinformation is given below:
Our strategy has been twofold:
First, to give people the controls and information they need to stay safe; and
Second, to work proactively to prevent misuse on WhatsApp.
WhatsApp cares deeply about people’s safety, which is why we designed our app with security in mind from the get-go. For example, you can block anyone from messaging you with just one tap. And if someone who is not in your address book sends you a message, WhatsApp automatically asks if you want to block or report that user. We’ve also recently made a number of changes to group chats to prevent the spread of unwanted information, which we believe will address some of the specific issues you raise.
In mid-May, we added new protections to prevent people from adding others back into groups which they had left — a form of misuse we think it is important to correct. And last week, we launched a new setting that enables administrators to decide who gets to send messages within individual groups. This will help reduce the spread of unwanted messages into important group conversations — as well as the forwarding of hoaxes and other content.
In addition, we have been testing a new label in India that highlights when a message has been forwarded versus composed by the sender. This could serve as an important signal for recipients to think twice before forwarding messages because it lets a user know if content they received was written by the person they know or a potential rumor from someone else. We plan to launch this new feature soon.
Finally, just yesterday we announced a new project to work with leading academic experts in India to learn more about the spread of misinformation, which will help inform additional product improvements going forward — as well as help our efforts to block bad actors (see below).
Digital Literacy and Fact-checking
We are also working hard to educate people about how to stay safe online. For example, we regularly put out information that explains how to spot fake news and hoaxes — and we plan to run long-term public safety ad campaigns in India, given its importance to us at WhatsApp. As a starting point, we will soon publish new educational materials around misinformation and conduct our news literacy workshops.
This year, for the first time, we also started working with fact checking organizations to identify rumors and false news — and respond to them — using WhatsApp. For example, during the recent Presidential election in Mexico, we worked closely with the news consortium Verificado. Users sent thousands of rumors to Verificado’s WhatsApp account and in turn, were provided regular updates on what was accurate and what was false.