Social Media

Facebook is now banning COVID-19 vaccine misinformation

Conspiracies are out

As the healthcare industry finally develops and approves a COVID-19 vaccine, information dissemination and the fight against misinformation will become the next staging ground against the pandemic. In a revised policy, Facebook is now banning COVID-19 vaccine misinformation on the platform.

Facebook had already committed to more rigorous moderation of COVID-19 misinformation in general, though its earlier stance against anti-vaccine ads was comparatively lax.

Now, in a Thursday announcement, the social media platform said it will also police COVID-19 vaccine misinformation, especially claims already debunked by medical experts.

Facebook will target “false claims about the safety, efficacy, ingredients or side effects of the vaccines.” This includes ill-informed conspiracy theories, such as the claim that the vaccines contain microchips.

Enforcement will remain a problem for the platform, though. Facebook admits that moderation cannot happen overnight, especially since new information comes out practically every day.

Most recently, the United Kingdom approved widespread use of the Pfizer vaccine candidate, marking the country as the first to do so. Elsewhere, Pfizer and Moderna are still seeking approvals from other countries.

Besides Facebook, Twitter and YouTube have also committed to more moderation policies against COVID-19 and COVID-19 vaccine misinformation. However, despite being months into the pandemic, misinformation still finds a way to rise to the top. One can only hope that ongoing moderation policies can effectively cut through falsities.

SEE ALSO: Facebook took down pro-China, pro-Duterte accounts

Dozens of attorney generals ask Facebook to ditch Instagram Kids

But Facebook has other plans, obviously

Facebook confirmed a few weeks ago that it’s working on a new project — Instagram Kids. Experts have warned that it could be a terrible idea, but Facebook has shown no interest in turning back.

Now, attorneys general from 44 US states and territories have urged Facebook to abandon its plans to create an Instagram service for kids under 13, citing the detrimental health effects of social media on children and Facebook’s murky record of protecting children on its platform.

The letter also noted that social media sites exploit users’ fear of missing out (FOMO) and can lead to body dissatisfaction and low self-esteem.

“Further, Facebook has historically failed to protect the welfare of children on its platforms. The attorneys general have an interest in protecting our youngest citizens, and Facebook’s plans to create a platform where kids under the age of 13 are encouraged to share content online is contrary to that interest,” the letter said.

The AGs also cited a UK study that found more cases of “sexual grooming on Instagram than any other platform” and noted that in 2020 Facebook and Instagram reported 20 million child sexual abuse images.

Facebook response

In response to the letter, a Facebook spokesperson said, “We agree that any experience we develop must prioritize their safety and privacy, and we will consult with experts in child development, child safety, and mental health, and privacy advocates to inform it.”

In a nutshell, Facebook doesn’t seem to get the fundamental fact that kids under 13 don’t need to be on social media. The company defends the move by saying children often lie about their age to create Instagram accounts, where child-safety features have limited reach. To prevent this, it’s creating Instagram Kids to offer a safe, controlled environment exclusively for kids.

The spokesperson also noted that Facebook is a founding sponsor of a new Digital Wellness Lab at Boston Children’s Hospital, established to better understand the effects of digital technology on kids.

realme India’s official Twitter was compromised, ran a crypto scam

Elon Musk is the new Nigerian Prince

realme India’s official Twitter account was compromised for a short period on May 6, 2021. Scammers were able to control the verified brand account and change its name to Tesla, trying to mislead gullible users via a crypto scam.

After the Twitter account was taken over, the scammers replied to Elon Musk’s tweets and said they’re organizing an “airdrop” of 5000 bitcoins. The tweets pretended to be an official Tesla undertaking and suggested users sign up on obscure websites.

The account’s handle remained unchanged, but the Tesla display name misled users into believing the tweets were genuine. The breach was short-lived, as realme India’s team soon regained control of the account, but it has yet to issue any clarification or apology.

The modus operandi of these scams is quite simple — take over a verified account and rename it, share malicious or misleading links that claim to be a cryptocurrency giveaway, ask users to enter a raffle, or contribute a small token sum, and vanish with the monies.

Unsurprisingly, one of the mentioned domains contains a message from the Nigerian Prince-like Elon Musk, saying, “Our marketing department here at Tesla HQ came up with an idea: to hold a special giveaway event for all crypto fans out there.”

The scam asks users to contribute 1 BTC (bitcoin) to win back 10 BTC. It’s not supposed to make logical sense because it’s an outright scam. Twitter is filled to the brim with verified accounts that are compromised and used to cheat unsuspecting users. It isn’t known yet whether anyone fell for the crypto scam.

The boom in cryptocurrency as a decentralized, unregulated system is controversial. The lack of regulation and law enforcement makes transactions difficult to trace, creating a haven for scam artists and the black market. While cryptocurrency’s growth is consistent, many experts still question its reliability and the lack of redress mechanisms in real-world scenarios.

Read Also: Basics of cryptocurrency: Risks and benefits

TikTok, Reels clone YouTube Shorts launches in the US

Everyone wants a piece of the pie

YouTube unveiled its short-video-making tool, Shorts, last year, but it was in beta and limited to India. After testing with select creators, Shorts is now available to all creators in the US.

The initial release was quite hasty, meant to fill the vacuum left by TikTok’s ban in India. However, Instagram was faster and better prepared to take on the challenge, dominating the turf over local apps like Chingari, Roposo, and MX TakaTak.

YouTube is also giving Shorts a dedicated space in the bottom tab bar, replacing the Explore button. In India, Shorts has a dedicated space on the top bar of the app. YouTube also surfaces Shorts in the home feed after every two to three videos.

The goal is to incorporate a short-video format into the existing app. While watching a Short, users can tap the music option to hear the full song on YouTube. Soon, the feature will also work the other way: from a YouTube music video, you will be able to tap a “create” button right from the video to make your own Short.

Shorts will expand

The video platform’s music team has signed licensing agreements to use snippets of millions of songs from over 250 labels and publishers. It plans to expand Shorts to more markets later this year but it hasn’t specified which ones.

Ahead of the US launch, a number of new features have been added as well. There’s now an option to record 60-second clips in addition to the 15-second option, though users will not be able to add music from the YouTube library to 60-second Shorts. There are also new filters and effects in the Shorts camera.

In its most recent earnings report, YouTube confirmed that Shorts was generating 6.5 billion daily views, a substantial uptick from the 3.5 billion daily views the feature was generating in late January.
