UK To Impose Tougher Rules On Google, Facebook

This file illustration taken on October 1, 2019 shows the logos of mobile apps Facebook and Google displayed on a tablet in Lille, France. DENIS CHARLET / AFP

Britain announced Friday it will set up a watchdog to regulate tech giants such as Facebook and Google and improve their transparency on using people’s data and personalised advertising.

The Department for Culture, Media and Sport said in a statement that the new regulator, the Digital Markets Unit, will “govern the behaviour of platforms that currently dominate the market, such as Google and Facebook”.

The aim is “to ensure consumers and small businesses aren’t disadvantaged”, it said.

The unit is being created after the UK Competition and Markets Authority (CMA) said in July that existing laws were not effective and a new regulatory regime was needed to control internet giants that earn from digital advertising.

The CMA has backed the new rules but has so far taken no direct action against Facebook and Google.

“Our new, pro-competition regime for digital markets will ensure consumers have choice and mean smaller firms aren’t pushed out,” said Business Secretary Alok Sharma.

Britain acknowledged the online platforms bring “huge benefits for businesses and society” but said the “concentration of power amongst a small number of tech companies” was curbing growth and innovation in the industry, which could have “negative impacts” for the public.

A new statutory code will aim to make the tech giants “more transparent about the services they provide and how they are using consumers’ data”, it said.

Consumers will be able to choose whether to see personalised advertising, the government said.

The new regulator will be launched in April and could have powers to “suspend, block and reverse decisions of tech giants”, order them to take actions and impose fines.

The new code could also mean online platforms have to offer fairer terms to news publications.

There have been calls for Facebook and Google to give a larger share of their advertising revenue to media organisations whose content they use.

According to the CMA, last year around 80 percent of the £14 billion ($18.7 billion, 15.7 billion euros) spent on digital advertising went to Google and Facebook.

Newspapers are dependent on the online giants for traffic, with around 40 percent of visits to their sites coming via Facebook and Google.

Google reacted by saying it wants to “work constructively” with the new regulator.

Facebook is preparing to launch its Facebook News service in the UK, which works with news media and includes original reporting.

It said it remains “committed to working with our UK industry partners to find ways to support journalism and help the long-term sustainability of news organisations”.

-AFP

Apple To Press Ahead On Mobile Privacy, Despite Facebook Protests

(FILES) In this file photo taken on September 28, 2020, a person walks past the Apple store on Fifth Avenue in New York City. Angela Weiss / AFP

Apple said Thursday it would press ahead with mobile software changes that limit tracking for targeted advertising — a move that has prompted complaints from Facebook and others.

The iPhone maker said updates to its mobile operating system would give users more information and control on the tracking of their online activity by apps on Apple devices.

Apple earlier this year delayed the changes to give online advertisers time to adapt.

But in a letter to the nonprofit group Ranking Digital Rights, Apple said it planned to move forward next year “because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.”

The letter from Apple privacy chief Jane Horvath noted that Apple intends to support online ads but without “unfettered data collection” and noted a split with Facebook, which had expressed concerns about the new policy.

“Facebook and others have a very different approach to targeting,” Horvath said in the letter, verified by AFP.

“Not only do they allow the grouping of users into smaller segments, they use detailed data about online browsing activity to target ads.

“Facebook executives have made clear their intent is to collect as much data as possible across both first and third party products to develop and monetize detailed profiles of their users, and this disregard for user privacy continues to expand to include more of their products.”

Facebook responded that Apple was trying to shift attention from the fact that it was collecting more data on its users.

“Apple is being accused of monitoring and tracking people’s private data from their personal computers without their customers’ knowledge through its latest update to macOS – and today’s letter is a distraction from that,” the company said in a statement sent to AFP.

“The truth is Apple has expanded its business into advertising and through its upcoming iOS14 changes is trying to move the free internet into paid apps and services where they profit,” said Facebook, referring to the latest update to Apple’s mobile operating system.

The social media giant said the changes would allow Apple to collect user data while making it nearly impossible for competitors to do so.

“They claim it’s about privacy, but it’s about profit,” said Facebook.

Facebook Moderators Press For Pandemic Safety Protections

Mark Zuckerberg, Chief Executive Officer of Facebook, testifies remotely during the Senate Judiciary Committee hearing on “Breaking the News: Censorship, Suppression, and the 2020 Election” on November 17, 2020 in Washington, DC. Photo by Bill Clark-Pool/Getty Images

More than 200 Facebook content moderators demanded better health and safety protections Wednesday as the social media giant called the workers back to the office during the pandemic.

A petition signed by the contract workers living in various countries said Facebook should guarantee better conditions or allow the workers to continue their jobs from home.

“After months of allowing content moderators to work from home, faced with intense pressure to keep Facebook free of hate and disinformation, you have forced us back to the office,” said the open letter released by the British-based legal activist firm Foxglove.

The letter called on Facebook to “keep moderators and their families safe” by maintaining remote work as much as possible and offering “hazard pay” to those who do come into the office.

When the pandemic hit, Facebook sent home most of its content moderators — those responsible for filtering violent and hateful images as well as other content which violates platform rules.

But the social platform discovered limits on what remote employees could do and turned to automated systems using artificial intelligence, which had other shortcomings.

“We appreciate the valuable work content reviewers do and we prioritize their health and safety,” a Facebook spokesperson said in a statement to AFP.

“The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic,” the spokesperson said.

The workers’ letter said the current environment highlights the need for human moderators.

“The AI wasn’t up to the job. Important speech got swept into the maw of the Facebook filter — and risky content, like self-harm, stayed up,” the letter said.

“The lesson is clear. Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there.”

The petition said Facebook should consider making the moderators full employees — who in most cases may continue working remotely through mid-2021.

“By outsourcing our jobs, Facebook implies that the 35,000 of us who work in moderation are somehow peripheral to social media,” the letter said, referring to a broader group of moderators that includes the 15,000 content reviewers.

“Yet we are so integral to Facebook’s viability that we must risk our lives to come into work.”

Fake News: Facebook, Twitter Defend Actions Over US Election At Congress Hearing

Mark Zuckerberg, Chief Executive Officer of Facebook, testifies remotely during the Senate Judiciary Committee hearing on “Breaking the News: Censorship, Suppression, and the 2020 Election” on November 17, 2020 in Washington, DC. Photo by Bill Clark-Pool/Getty Images

Facebook and Twitter defended their handling of US election misinformation at a heated congressional hearing Tuesday where one key senator assailed the platforms for being the “ultimate editor” of political news.

The hearing, the second in less than a month, came with social media under fire from both the left and the right for their handling of political content during a bitter US presidential campaign.

Facebook chief Mark Zuckerberg and Twitter CEO Jack Dorsey testified remotely to the session, which was called to discuss “censorship and suppression of news articles” and the “handling of the 2020 election” by the platforms.

Republican Senator Lindsey Graham, chairing the Judiciary Committee hearing, warned the CEOs that new regulations are needed to ensure the social media giants are held responsible for decisions on removing, filtering or allowing content to remain.

“It seems like you’re the ultimate editor,” Graham said at the opening as he took aim at decisions by both platforms to limit the distribution of a New York Post article claiming to expose malfeasance involving the son of President-elect Joe Biden during the campaign.

“When you have companies that have the power of governments (and) have far more power than traditional media outlets, something has to give.”

Graham said the law known as Section 230 that gives immunity to online services for content posted by others “needs to be changed.”

Megaphone for falsehoods

Democratic Senator Richard Blumenthal also called for reform of Section 230 while rebuking the companies for what he said was inadequate action against political misinformation by President Donald Trump.

“The president has used this megaphone to spread vicious falsehoods in an apparent attempt to overturn the will of voters,” Blumenthal said.

Blumenthal said the social media firms had “power far exceeding the robber barons of the last Gilded Age” and have “profited hugely by strip mining data about our private lives and promoting hate speech and voter suppression.”

Republican Senator Mike Lee meanwhile denounced what he called “instances in which your platforms are taking a very distinctively partisan approach and not a neutral one to election related content moderation… just days before the election.”

Jack Dorsey, Chief Executive Officer of Twitter, testifies remotely as Senator John Kennedy (R-LA) looks on during the Senate Judiciary Committee hearing on 'Breaking the News: Censorship, Suppression, and the 2020 Election' on Capitol Hill on November 17, 2020 in Washington, DC. Bill Clark / POOL / AFP

From the other side, Blumenthal said that “Facebook seems to have a record of making accommodations and caving to conservative pressure” on content policies.

Democrat Dianne Feinstein questioned the adequacy of Twitter’s labeling of unverified tweets such as those of Trump claiming an election victory.

“Does that label do enough to prevent the tweet’s harms when the tweet is still visible and is not accurate?” the California senator asked.

230 rules

Both Dorsey and Zuckerberg said they were open to reform on Section 230 but cautioned that the platforms should not be treated as “publishers” or traditional media.

“We do have to be very careful and thoughtful about changes... because going one direction might box out new competitors and new startups,” Dorsey said.

“Going another might create a demand for an impossible amount of resources to handle it. Going yet another might encourage even more blocking of voices… I believe that we can build upon (Section) 230.”

Defending the filters

Both CEOs defended their efforts to curb harmful misinformation during the election campaign.

“We strengthened our enforcement against militias, conspiracy networks, and other groups to help prevent them from using our platform to organize violence or civil unrest in the period after the election,” Zuckerberg said.

He said Facebook removed false claims about polling conditions and displayed warnings on more than 150 million pieces of content flagged by independent fact-checkers.

Both CEOs said they would study the spread of election misinformation while allowing independent academics to carry out similar research.

Dorsey meanwhile said filtering at Twitter was not a result of bias, despite claims to the contrary by conservatives.

In filtering content, “all decisions are made without using political viewpoints, party affiliation, or political ideology,” Dorsey said in his testimony.

“Our Twitter rules are not based on ideology or a particular set of beliefs. We believe strongly in being impartial, and we strive to enforce our Twitter rules fairly.”

Both platforms have begun limiting the reach of many of Trump’s tweets, notably those in which the president rejected his election defeat or questioned the integrity of the voting process.

Twitter and Facebook have been facing pressure to remove what many see as harmful misinformation around the elections, while also fighting claims of suppression of certain political views.

 

AFP

Zuckerberg, Dorsey To Appear Before US Congress On Tuesday

This combination of file photos created on October 1, 2020 shows Facebook founder Mark Zuckerberg (L) in Washington, DC on October 17, 2019, and Twitter CEO Jack Dorsey in Washington, DC, on September 5, 2018. AFP

 

The top executives of Facebook and Twitter were set to appear before US lawmakers for the second time in less than a month for a fresh hearing on the hotly disputed role of social networks in US political debate.

Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey were scheduled to appear remotely at the hearing before the Senate Judiciary Committee.

Both executives were testifying “voluntarily,” according to the panel.

Committee chair Senator Lindsey Graham called the session to address what he called “censorship and suppression of news articles” and the “handling of the 2020 election” by the platforms.

Graham said the panel would notably address the decision by both social platforms to limit circulation of New York Post articles which claimed to have exposed malfeasance by Democrat Joe Biden ahead of his election victory.

President Donald Trump and his allies have claimed the major social platforms have suppressed conservative voices, despite his own large following and prolific posting.

The session follows an October 28 hearing at a different Senate committee — which included Zuckerberg, Dorsey and Google CEO Sundar Pichai — on the legal immunity of online services for content posted by others on their platforms.

Some lawmakers last month complained that the platforms were becoming biased “arbiters” of content and unfairly suppressing conservative voices.

The large networks have been struggling with ways to filter out abusive content and misinformation while keeping their platforms open to all voices.

AFP

2.2 Million Facebook And Instagram Ads Rejected Ahead Of US Vote – Facebook

In this file illustration photo taken on March 25, 2020, a Facebook app logo is displayed on a smartphone in Arlington, Virginia. Olivier DOULIERY / AFP

A total of 2.2 million ads on Facebook and Instagram have been rejected and 120,000 posts withdrawn for attempting to “obstruct voting” in the upcoming US presidential election, Facebook’s vice president Nick Clegg said in an interview published Sunday.

In addition, warnings were posted on 150 million examples of false information posted online, the former British deputy prime minister told the French weekly Journal du Dimanche.

Facebook has been increasing its efforts to avoid a repeat of events leading up to the 2016 US presidential election, won by Donald Trump, when its network was used for attempts at voter manipulation, carried out from Russia.

There were similar problems ahead of Britain’s 2016 referendum on leaving the European Union.

“Thirty-five thousand employees take care of the security of our platforms and contribute for elections,” said Clegg, who is vice president of global affairs and communications at Facebook.

“We have established partnerships with 70 specialised media, including five in France, on the verification of information”, he added.

AFP is one of those partners.

Clegg added that the company also uses artificial intelligence which has “made it possible to delete billions of posts and fake accounts, even before they are reported by users”.

Facebook also stores all advertisements and information on their funding and provenance for seven years “to ensure transparency,” he said.

Clegg acknowledged that in 2016, Facebook had not identified or suppressed a single foreign network interfering in the US election.

On Wednesday President Trump rebuked Facebook and Twitter for blocking links to a New York Post article purporting to expose corrupt dealings by election rival Joe Biden and his son Hunter in Ukraine.

A day earlier Facebook announced a ban on ads that discourage people from getting vaccinated, in light of the coronavirus pandemic which the social media giant said has “highlighted the importance of preventive health behaviors.”

AFP

Twitter Alters Policy On Hacked Content After Blocked Biden Story

Employees walk past a lighted Twitter logo as they leave the company’s headquarters in San Francisco on August 13, 2019. Glenn CHAPMAN / AFP

Twitter has altered its policy on hacked content after its decision to block a news report critical of Democratic presidential candidate Joe Biden provoked Republican fury.

The social media behemoth — used by hundreds of millions worldwide — said late Thursday it would in future only block stolen information which was posted directly by hackers, and label any other information of questionable provenance.

Senate Republicans earlier said they would subpoena Twitter chief executive Jack Dorsey to testify before two different committees on why the company blocked links to an article in the New York Post alleging corruption by Biden in Ukraine.

Republican senator Ted Cruz called the decision “election interference,” while President Donald Trump — who trails Biden in polls 19 days before the presidential poll — decried the blockage by both Twitter and Facebook.

Twitter’s legal and policy executive Vijaya Gadde tweeted late Thursday that under changes to its two-year-old Hacked Materials Policy, the platform would “no longer remove hacked content unless it is directly shared by hackers or those acting in concert with them.”

She said the social media giant would also label tweets to provide context “instead of blocking links from being shared on Twitter.”

Gadde said the firm made the change to address “concerns that there could be many unintended consequences to journalists, whistleblowers and others in ways that are contrary to Twitter’s purpose of serving the public conversation.”

The furor came as social media companies grapple with questions of bias and misinformation during an election campaign characterized by controversy and divisiveness.

That Trump is a famously prolific and unfiltered user of Twitter — boasting more than 87 million followers — makes things even more complicated for the platform.

The Post’s story purported to expose corrupt dealings by Biden and his son Hunter Biden in Ukraine.

The newspaper claimed that the former vice-president, who was in charge of US policy toward Ukraine, took actions to help his son, who in 2014-2017 sat on the board of controversial Ukraine energy company Burisma.

But the outlet’s source for the information raised questions.

It cited records from a hard drive allegedly copied from a computer said to have been abandoned by Hunter Biden, which Trump lawyer Rudy Giuliani gave to the Post.

The report also made claims about Joe Biden’s actions in Ukraine which were contrary to the record.

Wary of “fake news” campaigns, both Facebook and Twitter said they took action out of caution over the article and its sourcing.

The Biden campaign rejected the assertions of corruption in the report, but has not denied the veracity of the underlying materials, mostly emails between Hunter Biden and business partners.

 

 

-AFP

Facebook Deletes Trump’s Post For Downplaying COVID-19 Danger

In this file illustration photo taken on March 25, 2020, a Facebook app logo is displayed on a smartphone in Arlington, Virginia. Olivier DOULIERY / AFP

Facebook on Tuesday removed a post by US President Donald Trump for downplaying Covid-19 danger by saying the seasonal flu is more deadly, in a rare step against the American leader by the leading social network.

A day after checking out of a hospital where he received first-class treatment for Covid-19, Trump used Twitter and Facebook to post messages inaccurately contending that people have more to fear from the flu.

“We remove incorrect information about the severity of Covid-19, and have now removed this post,” Facebook said in reply to an AFP inquiry.

Twitter added a notice to the tweeted version of the Trump post, saying the message was left up due to public interest but that it violated rules about spreading misleading and potentially harmful information related to Covid-19.

Twitter also added a link to reliable Covid-19 information.

Trump checked out of hospital Monday after four days of emergency treatment for Covid-19, pulling off his mask the moment he reached the White House and vowing to quickly get back on the campaign trail.

Shortly beforehand, Trump had tweeted that Americans, who have lost nearly 210,000 people to the pandemic, should not be afraid of the coronavirus.

Facebook in August removed a video post by Trump in which he contended that children are “almost immune” to the coronavirus, a claim the social network called “harmful COVID misinformation.”

That was the first time the leading social network pulled a post from the president’s page for being dangerously incorrect.

Facebook faces pressure to prevent the spread of misinformation while simultaneously being accused of silencing viewpoints by calling for posts to be truthful.

Health officials have urged people of all age groups to protect themselves against exposure to the virus, saying everyone is at risk.

Trump has unleashed an array of misleading medical speculation, criticism of his own top virus expert and praise for an eccentric preacher-doctor touting conspiracy theories.

Facebook has largely held firm to a policy that it would not fact-check political leaders, but it has pledged to take down any post which could lead to violence or mislead people about the voting process.

A coalition of activists has pressed Facebook to be more aggressive in removing hateful content and misinformation, including from the president and political leaders.

 

AFP

Tech Giants Strike Deal With Advertisers Over Hate Speech

Web giants including Facebook struck a deal Wednesday with global advertisers to get on the same page on defining hate speech, a move aimed at helping companies steer clear of being associated with vile content.

The agreement — which also includes Twitter and YouTube — lays out for the first time a common set of definitions for hateful posts.

Facebook and others have long grappled with how to purge toxic content from online platforms while fending off accusations they stifle free expression in the process.

In July, hundreds of brands suspended advertising at Facebook as part of a #StopHateForProfit campaign, saying the social-media titan should do more to stamp out hatred and misinformation on its platform.

Earlier this month a group of celebrities — including Kim Kardashian, Leonardo DiCaprio and Katy Perry — stopped using Facebook and Instagram for 24 hours, to push a similar message.

The announcement came from the Global Alliance for Responsible Media, a group which includes tech platforms and the major brands in the World Federation of Advertisers.

No mention was made of tightening rules on hateful posts at any of the social media platforms; rather, the platforms will now share a common standard for determining what constitutes such content.

The impact of the change, set to be implemented in 2021, was not clear amid a range of interpretations of what constitutes hate speech.

“We shouldn’t get our hopes up though as there are other content issues that are already universally agreed to be harmful, such as terrorism and suicide, but social media companies continue to fail here,” said Syracuse University assistant professor of communications Jennifer Grygiel.

– Empty promises? –
Key areas of the agreement were said to include applying common definitions of harmful content; developing reporting standards; establishing independent oversight; and rolling out tools for keeping advertisements away from harmful content.

“We welcome today’s announcement that these social media platforms have finally committed to doing a better job tracking and auditing hateful content,” said Anti-Defamation League chief executive Jonathan Greenblatt, a longtime critic of Facebook’s content moderation efforts.

“These commitments must be followed in a timely and comprehensive manner, to ensure they are not the kind of empty promises that we have seen too often from Facebook.”

The WFA said that defining online hate speech solves the problem of different platforms using their own definitions, which it said makes it difficult for companies to decide where to put ads.

“As funders of the online ecosystem, advertisers have a critical role to play in driving positive change,” said Stephan Loerke, chief executive of the WFA.

Luis Di Como, executive vice president of global media at Unilever, a major advertiser, sounded a note of cautious optimism.

“The issues within the online ecosystem are complicated, and whilst change doesn’t happen overnight, today marks an important step in the right direction,” Di Como said.

Facebook founder and chief executive Mark Zuckerberg has been adamant that the company did not want hate speech on the social network.

On Wednesday, the leading social network’s vice president for global marketing solutions, Carolyn Everson, said the “uncommon collaboration” gave all parties “a unified language to move forward on the fight against hate online.”

The deal is not enough to reduce societal risks posed by online social media platforms, contended Grygiel.

“We need social media companies to collectively engage in self-regulation at the industry level like the advertising industry does,” Grygiel said.

-AFP

Tech Giants Strike Deal With Advertisers Over Hate Speech

This file illustration taken on October 1, 2019 shows the logos of mobile apps Facebook and Google displayed on a tablet in Lille, France. DENIS CHARLET / AFP

Web giants including Facebook have struck a deal with advertisers on how to identify harmful content such as hate speech, after an impasse over the issue which led to boycotts of the platform.

The agreement — which also included Twitter and YouTube — laid out for the first time a common set of definitions for hateful statements online.

In July, hundreds of advertisers including big-name consumer brands suspended advertising with Facebook as part of the #StopHateForProfit campaign, saying the social-media titan should do more to stamp out hatred and misinformation on its platform.

And earlier this month a group of celebrities — including Kim Kardashian, Leonardo DiCaprio and Katy Perry — stopped using Facebook and Instagram for 24 hours, to push a similar message.

The World Federation of Advertisers (WFA) said in a statement Wednesday: “Facebook, YouTube and Twitter, in collaboration with marketers and agencies through the Global Alliance for Responsible Media have agreed to adopt a common set of definitions for hate speech and other harmful content and to collaborate with a view to monitoring industry efforts to improve in this critical area.”

The alliance was founded by the WFA and includes other major trade bodies.

According to the WFA, key areas of agreement included applying the alliance’s common definitions of harmful content; developing reporting standards for such content; establishing independent oversight; and rolling out tools for keeping advertisements away from harmful content.

The WFA said that properly defining online hate speech would remove the current problem of different platforms using their own definitions, which it said made it difficult for companies to decide where to put their ads.

“As funders of the online ecosystem, advertisers have a critical role to play in driving positive change and we are pleased to have reached agreement with the platforms on an action plan and timeline in order to make the necessary improvements,” said Stephan Loerke, chief executive of the WFA.

Luis Di Como, executive vice-president of global media at Unilever, a major advertiser, sounded a note of cautious optimism.

He said: “The issues within the online ecosystem are complicated, and whilst change doesn’t happen overnight, today marks an important step in the right direction.”

Speaking in July, Facebook’s founder and chief executive Mark Zuckerberg said he remained adamant that the company did not want hate speech on the social network.

On Wednesday, the company’s vice-president for global marketing solutions, Carolyn Everson, said the agreement gave all parties “a unified language to move forward on the fight against hate online.”

AFP

Facebook Blocks Terminally Ill Frenchman From Streaming His Death

(FILES) In this file photo taken on August 12, 2020, Alain Cocq, suffering from an orphan disease of the blood, rests on his medical bed in his flat in Dijon, northeastern France. PHILIPPE DESMAZES / AFP

Facebook said Saturday it would block the live stream of a Frenchman suffering from an incurable condition who wanted to broadcast his death on the social media platform.

Earlier, Alain Cocq announced that he was now refusing all food, drink, and medicine after President Emmanuel Macron turned down his request for euthanasia.

Cocq, 57, who suffers from a rare condition that causes the walls of his arteries to stick together, said he believed he had less than a week to live and would broadcast his death from Saturday morning.

“The road to deliverance begins and believe me, I am happy,” he wrote on Facebook shortly after midnight in a post announcing he had “finished his last meal”.

“I know the days ahead are going to be difficult but I have made my decision and I am calm,” he added.

Facebook has been increasingly criticised over the way it polices the content it carries, and said Saturday its rules did not allow it to show suicide attempts.

“Although we respect (Cocq’s) decision to want to draw attention to this complex question, following expert advice we have taken measures to prevent the live broadcast on Alain’s account,” a Facebook spokesman told AFP.

“Our rules do not allow us to show suicide attempts.”

– Calls on supporters –

Cocq had been trying to post another video earlier Saturday when he messaged: “Facebook is blocking my video broadcast until September 8.”

“It is up to you now,” he said in a message to supporters before giving out Facebook’s French address “so you can let them know what you think about their methods of restricting free speech”.

“There will be a back-up within 24 hours” to run the video, he added.

Cocq had written to Macron asking to be given a substance that would allow him to die in peace but the president wrote back to him explaining this was not allowed under French law.

Cocq has used his plight to draw attention to the situation of terminally ill patients in France who are unable to be allowed to die in line with their wishes.

“Because I am not above the law, I am not able to comply with your request,” Macron said in a letter to Cocq, which the patient published on his Facebook page.

“I cannot ask anyone to go beyond our current legal framework… Your wish is to request active assistance in dying which is not currently permitted in our country.”

– ‘With profound respect’ –

In order to show France the “agony” caused by the law in its current state, Cocq planned to broadcast the end of his life — which he believed would come in “four to five days” — on his Facebook page, he told AFP.

Cocq said he hoped his struggle would be remembered and “go down in the long term” as a step towards changing the law.

Macron said in his letter that “with emotion, I respect your action”.

The president added a handwritten postscript: “With all my personal support and profound respect.”

An official from the president’s office told AFP that Macron wanted to hail Cocq’s commitment to the rights of people with disabilities.

Right-to-die cases have long been an emotive issue in France.

Most polarising was the case of Vincent Lambert, who was left in a vegetative state after a traffic accident in 2008 and died in July last year after doctors removed life support following a long legal battle.

The case divided the country as well as Lambert’s own family, with his parents using every legal avenue to keep him alive but his wife and nephew insisting he must be allowed to die.

AFP

Facebook To Ban Political Ads In Week Before US Election

 

Facebook says it will ban political advertising the week before the US election, one of its most sweeping moves against disinformation yet as CEO Mark Zuckerberg warned of a “risk of civil unrest” after the vote.

The social media giant also vowed to fact check any premature claims of victory, stating that if a candidate tries to declare himself the winner before final votes are tallied “we’ll add a label to their posts directing people to the official results.”

And it promised to “add an informational label” to any content seeking to delegitimize the results or claim that “lawful voting methods” will lead to fraud.

“I’m concerned about the challenges people could face when voting. I’m also worried that with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country,” Zuckerberg said in a post.

Democrats have long warned that President Donald Trump and his supporters may try to sow chaos with false claims on November 3, when the vote will take place amid unprecedented health and economic crises, social unrest and protests for racial justice.

The US remains the epicenter of the world’s worst coronavirus outbreak, and voters are expected to shift to mail-in voting, with an estimated three-quarters of the population eligible to do so.

As a result officials are warning that the final tally may not be revealed until well after voting day — spurring fears that paranoia and rumor-mongering could hit an all-time high.

Trump — a prolific user of social media who is trailing Democratic challenger Joe Biden in the polls — has recently hurtled down a rabbit hole of conspiracy theories filled with claims that he is victim of a coup and/or plans to rig the polls.

Almost daily, he claims that increased mail-in voting is a gambit to “rig” the election against him, and he has refused to say whether he will accept the results.

He has also opposed more funding for the cash-strapped US Postal Service (USPS), acknowledging the money would be used to help process ballots.

And he has refused to condemn the presence of armed vigilantes in the streets during a wave of social justice protests across America this summer, spurring fears of unrest if there is not a clear result immediately after November 3.

Opponents say Trump’s increasingly extreme resistance to expanded mail-in voting — a method already used widely in the United States — is an attempt to suppress voter turnout, while setting up an excuse to challenge the result if he is defeated.

“This election is not going to be business as usual,” Zuckerberg, who has come under increasing pressure to do more to combat conspiracy theories at Facebook, said.

“We all have a responsibility to protect our democracy. That means helping people register and vote, clearing up confusion about how this election will work, and taking steps to reduce the chances of violence and unrest.”

AFP