
Publishers Striking AI Deals Are Making a Fatal Error

The Atlantic

www.theatlantic.com/technology/archive/2024/05/fatal-flaw-publishers-making-openai-deals/678477

In 2011, I sat at the Guggenheim Museum in New York and watched Rupert Murdoch announce the beginning of a “new digital renaissance” for news. The newspaper mogul was unveiling an iPad-exclusive publication called The Daily. “The iPad demands that we completely reimagine our craft,” he said. The Daily shut down the following year, after burning through a reported $40 million.

For as long as I have reported on internet companies, I have watched news leaders try to bend their businesses to the will of Apple, Google, Meta, and more. Chasing tech’s distribution and cash, news firms strike deals to try to ride out the next digital wave. They make concessions to platforms that attempt to take all of the audience (and trust) that great journalism attracts, without ever having to do the complicated and expensive work of the journalism itself. And it never, ever works as planned.

Publishers like News Corp did it with Apple and the iPad, investing huge sums in flashy content that didn’t make them any money but helped Apple sell more hardware. They took payouts from Google to offer their journalism for free through search, only to find that it eroded their subscription businesses. They lined up to produce original video shows for Facebook and to reformat their articles to work well in its new app. Then the social-media company canceled the shows and the app. Many news organizations went out of business.

The Wall Street Journal recently laid off staffers from a Google-funded program that paid journalists to post to YouTube channels, after the program’s funding dried up. And still, just as the news business is entering a death spiral, these publishers are making all the same mistakes, and more, with AI.

[Adrienne LaFrance: The Coming Humanist Renaissance]

Publishers are deep in negotiations with tech firms such as OpenAI to sell their journalism as training data for the companies’ models. It turns out that accurate, well-written news is one of the most valuable sources for these models, which have been hoovering up humans’ intellectual output without permission. These AI platforms need timely news and facts if they are to win consumers’ trust. And now, facing the threat of lawsuits, the companies are pursuing business deals to absolve themselves of the theft. These deals amount to settling without litigation. The publishers willing to roll over this way aren’t just failing to defend their own intellectual property—they are also trading their hard-earned credibility for a little cash from companies that are simultaneously undervaluing them and building products quite clearly intended to replace them.

Late last year, Axel Springer, the European publisher that owns Politico and Business Insider, sealed a deal with OpenAI reportedly worth tens of millions of dollars over several years. OpenAI has been offering other publishers $1 million to $5 million a year to license their content. News Corp’s new five-year deal with OpenAI is reportedly valued at as much as $250 million in cash and OpenAI credits. Conversations are heating up. After its negotiations with OpenAI failed, The New York Times sued the firm—as did Alden Global Capital, which owns the New York Daily News and the Chicago Tribune. Those were brave moves, although I worry that they, too, are likely to end in deals.

That media companies would rush into these agreements after being so burned by their tech deals of the past is extraordinarily distressing. And these AI partnerships are far worse for publishers. Ten years ago, it was at least plausible to believe that tech companies would become serious about distributing news to consumers; they were building actual products, such as Google News. Today’s AI chatbots are still early and error-prone. Just this week, Google’s AI suggested adding glue to pizza sauce to keep the cheese from sliding off the crust.

OpenAI and others say they are interested in building new models for distributing and crediting news, and many news executives I respect believe them. But it’s hard to see how any AI product built by a tech company would create meaningful new distribution and revenue for news. These companies are using AI to disrupt internet search—to help users find a single answer faster than browsing a few links. So why would anyone want to read a bunch of news articles when an AI could give them the answer, maybe with a tiny footnote crediting the publisher that no user will ever click on?

Companies act in their own interest. But OpenAI isn’t even an ordinary business. It’s a nonprofit (with a for-profit arm) that wants to develop artificial general intelligence that benefits humanity—though it can’t quite agree on what that means. Even if its executives were ardent believers in the importance of news, helping journalism wouldn’t be on their long-term priority list.

[Ross Andersen: Does Sam Altman Know What He’s Creating?]

That’s all before we talk about how to price the news. Ask six publishers how they should be paid by these tech companies, and they will spout off six different ideas. One common proposal is a slice of the tech companies’ revenue, proportional to the share of the total training data that a publisher’s work represents. That’s impossible to track, and there’s no way tech companies would agree to it. Even if they did, there would be no way to check their calculations: The data sets used for training are vast and inscrutable. And let’s remember that these AI companies are themselves still struggling to find a consumer business model. How do you negotiate for a slice of something that doesn’t yet exist?
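
To see why, put the proposal in rough math (the symbols here are mine, purely illustrative): a publisher’s cut would be payment = R × (d / D), where R is the AI company’s revenue, d is the volume of that publisher’s text in the training set, and D is the size of the entire training corpus. Every term on the right side is known only to the AI company. R barely exists yet, and d and D sit inside those vast, inscrutable data sets, so no publisher could ever audit its own check.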

The news industry finds itself in this dangerous spot, yet again, in part because it lacks long-term focus and strategic patience. Once-family-owned outlets, such as The Washington Post and the Los Angeles Times, have been sold to interested billionaires. Others, like The Wall Street Journal, are beholden to the public markets and face coming generational change among their owners. Television journalism is at the whims of the largest media conglomerates, which are now looking to slice, dice, and sell off their empires at peak market value. Many large media companies are run by executives who want to live to see another quarter, not set up their companies for the next 50 years. At the same time, the industry’s lobbying power is eroding: A recent congressional hearing on AI and the news was overshadowed by OpenAI CEO Sam Altman’s meeting with House Speaker Mike Johnson. Tech companies clearly have far more clout than media companies do.

Things are about to get worse. Legacy and upstart media alike are bleeding money and talent by the week. More outlets are likely to shut down while others will end up in the hands of powerful individuals using them for their own agendas (see the former GOP presidential candidate Vivek Ramaswamy’s activist play for BuzzFeed).

The long-term solutions are far from clear. But the answer to this moment is painfully obvious. Publishers should be patient and refrain from licensing away their content for relative pennies. They should protect the value of their work, and their archives. They should have the integrity to say no. It’s simply too early to get into bed with the companies that trained their models on professional content without permission and have no compelling case for how they will help build the news business.

Instead of keeping their business-development departments busy, newsrooms should focus on what they do best: making great journalism and serving it up to their readers. Technology companies aren’t in the business of news. And they shouldn’t be. Publishers have to stop looking to them to rescue the news business. We must start saving ourselves.

OpenAI Just Gave Away the Entire Game

The Atlantic

www.theatlantic.com/technology/archive/2024/05/openai-scarlett-johansson-sky/678446

If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle. The story, according to Johansson’s lawyers, goes like this: Nine months ago, OpenAI CEO Sam Altman approached the actor with a request to license her voice for a new digital assistant; Johansson declined. She alleges that just two days before the company’s keynote event last week, at which that assistant was revealed as part of a new system called GPT-4o, Altman reached out to Johansson’s team, urging the actor to reconsider. Johansson and Altman allegedly never spoke, and Johansson allegedly never granted OpenAI permission to use her voice. Nevertheless, two days later, the company debuted the assistant with a voice, called Sky, that many believed was alarmingly similar to Johansson’s.

Johansson told NPR that she was “shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine.” In response, Altman issued a statement denying that the company had cloned her voice and saying that it had already cast a different voice actor before reaching out to Johansson. (I’d encourage you to listen for yourself.) Curiously, Altman said that OpenAI would take down Sky’s voice from its platform “out of respect” for Johansson. This is a messy situation for OpenAI, complicated by Altman’s own social-media posts. On the day that OpenAI released ChatGPT’s assistant, Altman posted a cheeky, one-word statement on X: “Her”—a reference to the 2013 film of the same name, in which Johansson is the voice of an AI assistant that a man falls in love with. The post is reasonably damning, implying that Altman was aware, even proud, of the similarities between Sky’s voice and Johansson’s.

On its own, this seems to be yet another example of a tech company blowing past ethical concerns and operating with impunity. But the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data. At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.

Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary. Last summer, my colleague Ross Andersen described Altman’s ambitions this way:

As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.

Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he said. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state. Altman, of course, has testified before Congress, urging lawmakers to regulate the technology while also stressing that “the benefits of the tools we have deployed so far vastly outweigh the risks.” Still, the message is clear: The future is coming, and you ought to let us be the ones to build it.

Other OpenAI employees have offered a less gracious vision. In a video posted last fall on YouTube by a group of effective altruists in the Netherlands, three OpenAI employees answered questions about the future of the technology. In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build AI and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.” Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.” (There is no evidence to suggest that the wealth will be evenly distributed.)

This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition. Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.

You can see this dynamic playing out in OpenAI’s content-licensing agreements, which it has struck with platforms such as Reddit and news organizations such as Axel Springer and Dotdash Meredith. Recently, a tech executive I spoke with compared these types of agreements to a hostage situation, suggesting that publishers believe AI companies will find ways to scrape their websites anyhow if they don’t comply. Best to get a paltry fee out of them while you can, the person argued.

The Johansson accusations only compound (and, if true, validate) these suspicions. Altman’s alleged reasoning for commissioning Johansson’s voice was that her familiar timbre might be “comforting to people” who find AI assistants off-putting. Her likeness would have been less about a particular voice-bot aesthetic and more about an adoption hack, a recruitment tool for a technology that many people didn’t ask for and seem uneasy about. Here, again, is the logic of OpenAI at work. It follows that the company would plow ahead, consent be damned, simply because it might believe the stakes are too high to pivot or wait. When your technology aims to rewrite the rules of society, it stands to reason that society’s current rules need not apply.

Hubris and entitlement are inherent in the development of any transformative technology. A small group of people needs to feel confident enough in its vision to bring it into the world and ask the rest of us to adapt. But generative AI stretches this dynamic to the point of absurdity. It is a technology that requires a mindset of manifest destiny, of dominion and conquest. It’s not stealing to build the future if you believe it has belonged to you all along.