On Thursday, Mark Zuckerberg, CEO of social networking giant Facebook, announced that the company would rebrand its holding company as Meta, from the Greek for ‘beyond’. The move is reminiscent of Google restructuring itself under a holding company, Alphabet, in October 2015.
The rebranding also included a new logo resembling a slightly askew infinity symbol. The change applies only to the parent company, not to individual platforms such as Facebook, Instagram, WhatsApp and Oculus.
Zuckerberg says the company aims to unify diverse digital worlds into the metaverse, which he calls the future of the internet. This digital world would be built on virtual reality headsets and augmented reality. He also said the company would be going beyond social networking, and that Facebook as a corporate name would no longer fit that broader ambition.
At the same time, the rebrand could help distance the company from its recent controversies. Facebook has been under fire for spreading misinformation and hate speech, and has recently come under scrutiny from many governments.
Internal documents first leaked to the Wall Street Journal by former employee Frances Haugen last month revealed how the company struggled to regulate hate speech and misinformation. She also shed light on the extent of its research into its platforms’ adverse effects on young people’s mental health and body image.
The latest controversy came when The Washington Post¹ reported that the company had withheld essential data regarding vaccine misinformation. There have also been reports claiming that Facebook knew about Instagram’s harmful effects on teenagers’ mental health. Zuckerberg, however, denied these claims and argued that the media had misused the leaked documents to paint a negative image of the company.
The Washington Post reported:
Facebook researchers had deep knowledge of how coronavirus and vaccine misinformation moved through the company’s apps, running multiple studies and producing large internal reports on what kinds of users were most likely to share falsehoods about the deadly virus, according to documents disclosed by Facebook whistleblower Frances Haugen.
But even as academics, lawmakers and the White House urged Facebook for months to be more transparent about the misinformation and its effects on the behavior of its users, the company refused to share much of this information publicly, resulting in a showdown with the Biden administration.
Internally, Facebook employees showed that coronavirus misinformation was dominating small sections of its platform, creating “echo-chamber-like effects” and reinforcing vaccine hesitancy. Other researchers documented how posts by medical authorities, like the World Health Organization, were often swarmed by anti-vaccine commenters, hijacking a pro-vaccine message and wrapping it in a stream of falsehoods.
Facebook has been subjected to numerous congressional hearings alongside broader legal and regulatory scrutiny. In 2016, the British firm Cambridge Analytica was accused of using data harvested from Facebook for various political campaigns, including the 2016 US presidential campaigns of Ted Cruz and Donald Trump, the Brexit campaign in the UK, and elections held in India and Brazil. Following the Cambridge Analytica data scandal, the Federal Trade Commission fined Facebook $5bn in July 2019 for privacy violations. In October 2019, Facebook agreed to pay a £500,000 fine to the UK Information Commissioner’s Office for exposing its users’ data to a “serious risk of harm”. The UK’s Competition and Markets Authority also fined Facebook £50 million for breaching an enforcement order. Antitrust lawsuits were piling up worldwide even as third parties continued to harvest the personally identifiable data of millions of Facebook users.
Governments across the globe have attempted to create laws forcing Facebook and other tech giants to pay media companies for the audience traffic and advertising revenue generated on these platforms. So far, only Australia has been willing to mandate such payments, a law Facebook initially resisted by blocking news feeds in the country².
Governments have also been trying to regulate Facebook to reduce misinformation and hate speech. A declaration by the United Nations³ states that social media companies should ensure transparency in their content moderation rules, and that these rules should reflect international human rights standards.
Social media companies should ensure that their content moderation rules reflect international human rights standards including the importance of inclusive debate about matters of public interest, and elaborate clearly when, how and what measures may be taken against content posted by politicians and public officials.
Governments have also been concerned about the safety of the citizen data that Facebook and its other applications collect. This concern has raised many questions about internet security and data collection. While such scrutiny focuses on Facebook’s negative impact, many governments have also used the platform for community awareness and social campaigns.
The good and bad of the company aside, this move by Zuckerberg may not be fully embraced by the public, as the metaverse does not yet exist; it is a long-term product. It also looks like the company is trying to move past the negative stories and rebuild its image. Whether this move will help the company remains to be seen.