
Sir Paul McCartney Slams Opt-out AI Scraping Policies

Big Tech’s “opt-out” policies could legalize years of AI copyright infringement, forcing creators to fight to protect their work. Paul McCartney and others warn that opt-out AI scraping threatens artists everywhere.


by CHRIS CASTLE from Music Tech Policy

Artists are starting to get the message about the disastrous “opt out” policy being promoted by Really Big Tech.  “Opt out” is a rhetorical tactic designed to shield the massive copyright infringement that has already occurred through content scraping to train AI.  The AI platforms want lawmakers to believe that their massive infringement is not so bad and that requiring artists to “opt out” of that madness encourages “innovation” and actually protects artists and other creators.

Here’s the fallacy at the heart of what Really Big Tech wants you to believe:  As reported in the New York Times, among other outlets, AI platforms have already engaged in massive copyright infringement in order to train AI.  They’ve been infringing for years without asking permission (just like always).  The Times tells us:

Google transcribed YouTube videos to harvest text for its A.I. models, five people with knowledge of the company’s practices said. That potentially violated the copyrights to the videos, which belong to their creators. … Google said that its A.I. models “are trained on some YouTube content,” which was allowed under agreements with YouTube creators, and that the company did not use data from office apps outside of an experimental program.

If you believe that Google isn’t using Gmail and Google Docs to train AI, there’s this bridge in Brooklyn that’s just perfect for you.  After what we discovered about Google’s use of Gmail for consumer profiling, you have to assume that they have an excuse for everything, including using their customers’ law firm documents to train AI in secret.

But here’s the Catch-22:  If governments adopt an “opt-out” approach, it would immediately excuse all past infringement and also cut off future litigation by forcing artists to notify infringers that they do not want their works used to train AI (if the artists can ever find out to a legal certainty, much less prove, that their works were even used for training).


See what they did there?  The intentional infringement already occurred, and when artists sue (as many have), Big Tech wants to stop the litigation by limiting an artist’s remedy to sending a notice asking to “opt out” of the madness.  All the while, governments fail to require infringers to release transparent documentation of the works used to train AI, which would establish which works were infringed in the first place.

So we’re back to the old “notice and takedown” regime in the style of the DMCA in the US and the eCommerce Directive in Europe.  In other words, a disaster that burdens exactly the wrong people while excusing exactly the wrong behavior.  If we have learned anything from the DMCA, it’s that the burden should run the other way around, as it does for all other copyright infringement, not become yet another exception that protects Really Big Tech and helps Really Big Tech become Even Bigger Still.

Opt-Out AI Scraping Policies May Be Coming Soon to a Government Near You

In other words, “opt out” is yet another fake safe harbor for AI platforms.  And it’s actively under consideration in the UK, according to the Financial Times.  But understand that if “opt-out” gets a foothold in the UK, it will be coming soon to a government near you.  With the secrecy that is the hallmark of Silicon Valley’s lobbying style, Big Tech is trying to present the “opt out” safe harbor as an irreversible done deal before artists even get a chance to be heard by lawmakers.  Word is leaking out, though, and artists are not happy, starting with Sir Paul McCartney.  According to Robert Booth, writing in The Guardian:

Paul McCartney has backed calls for laws to stop mass copyright theft by companies building generative artificial intelligence, warning AI “could just take over”.  The former Beatle said it would be “a very sad thing indeed” if young composers and writers could not protect their intellectual property from the rise of algorithmic models, which so far have learned by digesting mountains of copyrighted material.

On Tuesday, Lisa Nandy, the culture secretary, told the [UK Parliament’s House of] Commons culture, media and sport select committee that the government had not decided which model it would propose in the forthcoming consultation but she highlighted reservations about a system that would require creatives to opt out.

Nandy said: “We have looked at the limitations of similar legislation in the USA and the EU, so we have reservations about this idea that you can simply just say I want to opt out and then find that you have been completely erased from the internet.” 

That may put her in opposition to the technology secretary, Peter Kyle, whose department has “fully drunk the Kool-Aid on AI”, according to the committee chair, Caroline Dinenage. He is thought likely to want copyrighted material to be available to the tech companies unless creators opt out.

Taking the Bait:  Really Big Tech’s Global Lobbying Campaign to Commoditize the World

I can’t emphasize enough that what happens in the UK will not stay in the UK.  Or in the US, or in any other country where Really Big Tech gets a foothold.  Currently, every government in the developed world is adopting a policy on artificial intelligence.  Australia, Brazil, Canada, China, the European Union, Germany, India, Nepal, New Zealand, Japan, Russia, Singapore, South Africa, South Korea, the United Arab Emirates and, of course, the United States are all at some stage of drafting legislation dealing with AI.

According to AI Policy Tracker (a useful site), of the 14 countries it tracks for officially announced AI policy initiatives, fully eight announced AI policy projects in 2024.  Six launched in June, July or August of 2024.  In fact, India and Canada both launched on August 28, although 7,000 miles apart.  Singapore and the United Arab Emirates both launched on September 24, although 4,000 miles apart.

Do you think this is happening naturally?  Or do you think that all or most of these countries are each being heavily and simultaneously lobbied by Really Big Tech/Mag 7 shotcallers who want to control the way the rules protect them?

Uncle Sugar Sets the Bait

This brings us to our old friend Eric Schmidt, or “Uncle Sugar.”  Uncle Sugar is styled by Slate Magazine as “The [Biden] White House’s Favorite Tech Billionaire.”  Mr. Schmidt bragged at an Axios conference that he and his cronies wrote the Biden Executive Order on AI, one of the longest executive orders in history.  Almost like it was the legislation Schmidt couldn’t pass?

That’s right, our own Uncle Sugar drafted the rules that he and his pals want us to live by and teed up the next round of copyright exceptions, scarificators, fleams and leeches to a compliant White House.  (Remind me again why he suddenly left Google?  Well, whatever it was, he’s back with a recovery surpassing Eliot Spitzer’s.)

But it wasn’t enough to write the rules for the US, no.  Uncle Sugar got then-UK Prime Minister and tech fanboy Rishi Sunak to “take the bait,” as Schmidt put it to Axios: “So far we are on a win, the taste of winning is there.  If you look at the [Bletchley Park] UK event which I was part of, the UK government took the bait, took the ideas [from the Biden AI EO that Schmidt wrote], decided to lead, and they came out with very sensible guidelines.”  That’s right: the Biden EO was signed October 31, and Schmidt showed up with it on November 1 at the UK “event,” the so-called “AI Safety Summit.”  (Or, as some might call it, the Prime Minister’s Job Fair.)  Nothing to see here, it’s all about safety, don’t you know.

What actually happened is very instructive in trying to understand how all of these AI rules are happening so quickly.  The Really Big Tech “competitors” (ha!) all colluded…I mean, broke bread, perhaps over breakfast at Buck’s…and came up with the rules they wanted implemented in law.  Uncle Sugar tells us he then smarmed his way into the White House (where he’s the favorite tech billionaire, don’t you know), got Biden to sign the EO they drafted (the “bait”), and then got the soon-to-be-unemployed Rishi Sunak to “take the bait.”  How far that went toward a chit for the soon-to-be-former Prime Minister, green card holder, Stanford MBA and Santa Monica dweller, we may never know.

This is how international lobbying is done by Really Big Tech.  If they literally write the rules for a few big economies, the rest of the world pretty much has to tag along.  Don’t get me wrong, they will spread some gold around in other countries, too, so the local lobbyists don’t feel left out of the gravy train.  But make no mistake, it all started with your favorite uncle and whoever he lets into the meeting.

And this brings us back to the “opt-out” model being promoted in the UK, which is really a safe harbor for AI platforms masquerading as a compromise in the name of fairness, albeit a one-handed compromise.  Given the cross-pollination of international AI lobbying, if the opt-out concept gets traction in the UK, it will eventually show up in each of the developed economies on our list.

Because, as Uncle Sugar bragged to Axios, there is a master plan.
