Saturday, October 31, 2020

Tech Giants Want EU ‘Safeguard’ to Proactively Remove Pirated Content | TorrentFreak

Prominent tech companies such as Twitter, Facebook, and Google all respond to takedown notices, as they are legally required to do.

Major copyright holder groups believe this is not enough. They have repeatedly called on these platforms to do more to curb online piracy.

This is a controversial issue, as the EU Copyright Directive negotiations highlighted last year. The public at large fears that proactive measures such as automated upload filters will result in overblocking and restrictions on free speech.

The leading Internet companies have been critical of upload filters as well, but they are not against further action. Earlier this year, industry group EDiMA, which represents Twitter, Facebook, Google, TikTok, Mozilla, and others, proposed a landmark Online Responsibility framework.

Tackling Piracy With Proactive Algorithms

With this framework, the tech giants propose to use algorithms to tackle illegal content, including piracy, beyond what’s currently required by law. The word ‘filter’ isn’t mentioned specifically, but that’s pretty much what you get when using algorithms proactively.
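For illustration, proactive filtering of this kind boils down to checking newly uploaded files against a reference list of known infringing works before anyone files a complaint. The sketch below is a minimal, hypothetical example using plain file hashes; the function names and blocklist are ours, not EDiMA’s, and real systems such as YouTube’s Content ID rely on far more robust audio and video fingerprinting rather than exact hashes.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 hashes of known infringing files,
# supplied by rightsholders. Real filters use perceptual fingerprints,
# since a single re-encode would defeat an exact-hash match like this.
KNOWN_INFRINGING_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block_upload(path: Path) -> bool:
    """Flag an upload proactively if it matches the reference blocklist."""
    return sha256_of_file(path) in KNOWN_INFRINGING_HASHES
```

The point of the sketch is the ordering, not the matching technique: the check runs at upload time, before any takedown notice exists, which is exactly the kind of proactive step current EU liability rules discourage.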

The proposed framework refers to ‘illegal’ content and avoids the term copyright, but we have confirmed that anti-piracy measures are certainly covered.

“The Online Responsibility Framework would facilitate proactive action by service providers against any and all illegal content, including copyrighted content,” Siada El Ramly, Director General of EDiMA tells TorrentFreak.

This week, EDiMA released a new paper as part of the plan. The group highlights that its members want to do more to tackle illegal content but stresses that this is tricky under current EU law.

“Online service providers want to do more to voluntarily and proactively remove illegal content from their services, and society wants the same. However, there are important barriers under the current regime which prevent them from doing so.”

Existing EU law requires online service providers to remove illegal content if they have actual knowledge of its presence. They are, however, not obliged to find and police all illegal content uploaded by users, which helps to prevent overblocking that can harm free speech.

While the tech companies generally value free speech, this ‘protection’ of user rights now finds itself in the way. It makes it harder for online services to proactively remove pirated content, which they are eager to do.

Safeguard Paves Way For Proactive Measures

EDiMA, therefore, calls for a new legal safeguard that allows tech companies to use proactive measures, such as upload filters, without the risk of being held liable for having ‘actual knowledge’ of illegal content.

“The association is calling for the introduction of a legal safeguard which would allow companies to take proactive actions to remove illegal content and activity from their services, without the risk of additional liability for those attempts to tackle illegal content,” the group says.

“Current EU rules lack this crucial provision, which has a chilling effect on service providers who want to do more to tackle illegal activity online.”

Actual Knowledge

The term ‘actual knowledge’ is key here. The tech companies want to use algorithms to detect and remove illegal material, but they don’t want this to constitute ‘actual knowledge,’ which would mean they could be held liable afterward.

In the US this is not an issue because of the “Good Samaritan” principle, and EDiMA now calls for a similar liability safeguard in the EU.

“The Framework and the legal safeguards would complement the existing copyright directive by facilitating service providers making ‘best efforts’ to ensure that copyrighted material, for which no license was agreed, would not be available on their service,” El Ramly tells us.

“It would remove the disincentive that exists for service providers to find and remove this material, and instead encourage it.”

Automated Filters Are (not) a Problem

EDiMA positions its framework as a great solution for all involved, including users, but the tone of its message is completely different from what we’ve seen in the past.

Just a few months ago, many of the same companies that are part of EDiMA warned against the EU Copyright Directive, arguing that algorithms and upload filters could harm free speech.

EDiMA’s proposal, however, makes clear that companies such as Google, Facebook, Twitter, and TikTok see proactive algorithmic actions – which can be translated to automated filters – as a good solution.

Keeping User Rights in Mind

EDiMA’s proposal does keep the rights of users in mind as well. It stresses that the proposed framework still rules out any general monitoring obligation. In addition, people should have the right to appeal removals of their content.

This appeals process should be transparent. Users have the right to know why something was removed and additional human reviews may be required. Also, while an appeal is pending, it should be possible to reinstate flagged material.

“These specific safeguards will ensure that users have a meaningful way to get an explanation as to why their content was removed and to contest removals should they wish to do so,” the proposal reads.

“They will also ensure that service providers have clear and proactive policies in place when it comes to which content is allowed on their services, while fostering transparent dialogue with their users.”
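Read as a workflow, the safeguards EDiMA describes amount to a small state machine around each removal: the user is told why their content came down, can contest the decision, the material can go back up while the appeal is pending, and a human review settles the matter. The sketch below is purely illustrative of that flow as described in the proposal; the class and state names are ours, not EDiMA’s.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class RemovalState(Enum):
    REMOVED = auto()         # taken down by the automated filter
    APPEAL_PENDING = auto()  # user contested; content reinstated meanwhile
    UPHELD = auto()          # human reviewer confirmed the removal
    REINSTATED = auto()      # human reviewer overturned the removal


@dataclass
class RemovalCase:
    content_id: str
    reason: str  # shown to the user: why the content was removed
    state: RemovalState = RemovalState.REMOVED
    history: list = field(default_factory=list)

    def appeal(self) -> None:
        """User contests the removal; material goes back up while pending."""
        if self.state is RemovalState.REMOVED:
            self.state = RemovalState.APPEAL_PENDING
            self.history.append("appealed: content reinstated pending review")

    def human_review(self, removal_was_correct: bool) -> None:
        """A human reviewer settles the pending appeal either way."""
        if self.state is RemovalState.APPEAL_PENDING:
            self.state = (RemovalState.UPHELD if removal_was_correct
                          else RemovalState.REINSTATED)
            self.history.append(f"reviewed: {self.state.name.lower()}")
```

Whether platforms would actually reinstate material during an appeal, as the proposal suggests, is a policy choice rather than a technical constraint; the sketch simply shows that the flow EDiMA outlines is straightforward to model.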

Easing into The Copyright Directive?

All in all, it is safe to say that the major tech companies do see a future for automated filters. Perhaps this shouldn’t come as a surprise, as companies already use these widely today, including YouTube’s Content ID system.

It appears that, with EDiMA’s Online Responsibility Framework and the extra “safeguards”, the tech companies are trying to pave the way for a smooth implementation of the EU Copyright Directive, on their terms.

From: TF, for the latest news on copyright battles, piracy and more.

[from https://ift.tt/148uEe4]
