How ChatGPT Technology Makes Us More Human

The Atlantic

www.theatlantic.com/ideas/archive/2023/01/chatgpt-ai-technology-techo-humanism-reid-hoffman/672872

ChatGPT, a new AI system that sounds so human in conversations that it could host its own podcast, is a test of temperament. Reading between its instantly generated, flawlessly grammatical lines, people see wildly different visions of the future.

For some, ChatGPT promises to revolutionize the way we search for information, draft articles, write software code, and create business plans. When they use ChatGPT, they see Star Trek: a future in which opportunities for personal fulfillment are as large as the universe itself.

Others see only massive job displacement and a profound loss of agency, as we hand off creative processes that were once the domain of humans to machines. When they use ChatGPT, they see Black Mirror: a future in which technological innovation primarily exists to annoy, humiliate, terrify, and, most of all, dehumanize humanity.

Annie Lowrey: How ChatGPT will destabilize white-collar work

I’m firmly in the Star Trek camp, because although I fully acknowledge that the tech industry is imperfect, and always in need of thoughtful, responsive leadership, I still believe that improvement through technology is how humanity most effectively makes progress.

That’s why I switched from a planned career in academia to one in Silicon Valley in the first place. In the early 1990s, I saw how software, globally distributed on the internet, was creating new opportunities to empower people at scale, and that’s ultimately what led me to co-found LinkedIn. I wanted to use technology to help individuals improve their economic opportunities over the course of their entire career, and thus have more chances to pursue meaning in their lives.

Techno-humanism is typically conflated with transhumanism, referring to the idea that we are on a path to incorporating so much technology into our lives that eventually we will evolve into an entirely new species of post-humans or superhumans.

I interpret techno-humanism in a slightly different way. What defines humanity is not just our unusual level of intelligence, but also how we capitalize on that intelligence by developing technologies that amplify and complement our mental, physical, and social capacities. If we merely lived up to our scientific classification—Homo sapiens—and just sat around thinking all day, we’d be much different creatures than we actually are. A more accurate name for us is Homo techne: humans as toolmakers and tool users. The story of humanity is the story of technology.

Technology is the thing that makes us us. Through the tools we create, we become neither less human nor superhuman, nor post-human. We become more human.

This doesn’t mean that all technological innovations automatically produce good outcomes—far from it. New technologies can create new problems or exacerbate old ones, such as when AI systems end up reproducing biases (against racial minorities, for instance) that exist in their training data. We in the tech industry should be vigilant in our efforts to mitigate and correct such problems.

Read: How the racism baked into technology hurts teens

Nor would I ever suggest that technologies are neutral, equally capable of being used for good or bad. The values, assumptions, and aspirations we build into the technologies we create shape how they can be used, and thus what kinds of outcomes they can produce. That’s why techno-humanism should strive for outcomes that broadly benefit humanity.

At the same time, a techno-humanist perspective also orients toward the future, dynamism, and change. This means it inevitably clashes with desires for security, predictability, and the familiar. In moments of accelerating innovation—like the one we’re living through right now, as robotics, virtual reality, synthetic biology, and especially AI all evolve quickly—the urge to entrench the status quo against the uncertain terrain of new realities accelerates too.

Just so, New York City’s public-school system has already blocked students and teachers from accessing ChatGPT in its classrooms. Multiple online art communities have banned users from uploading images they created using AI image-generators such as DALL-E, Midjourney, and Stable Diffusion.

I get it. Learning to write an essay from scratch is a time-honored way to develop critical thinking, organizational skills, and a facility for personal expression. Creating vivid and beautiful imagery one painstaking brushstroke at a time is perhaps the epitome of human creativity.

But what if teachers used ChatGPT to instantly personalize lesson plans for each student in their class—wouldn’t that be humanizing in a way that the industrialized approaches of traditional classroom teaching are not? Aren’t tools that allow millions of people to visually express their ideas and communicate with one another in new ways a step forward for humanity?

If it’s detrimental to society to simply claim that “technology is neutral” and avoid any responsibility for negative outcomes—and I believe it is—so is rejecting a technology just because it has a capacity to produce negative outcomes along with positive ones.

Is there a future where the massive proliferation of robots ushers in a new era of human flourishing, not human marginalization? Where AI-driven research helps us safely harness the power of nuclear fusion in time to help avert the worst consequences of climate change? It’s only natural to peer into the dark unknown and ask what could possibly go wrong. It’s equally necessary—and more essentially human—to do so and envision what could possibly go right.

If Robots Eat Journalism, Does It Have to Be With Personality Quizzes?

The Atlantic

www.theatlantic.com/technology/archive/2023/01/buzzfeed-using-chatgpt-openai-creating-personality-quizzes/672880

One might assume that when your boss finally comes to tell you that the robots are here to do your job, he won’t also point out with enthusiasm that they’re going to do it 10 times better than you did. Alas, this was not the case at BuzzFeed.

Yesterday, at a virtual all-hands meeting, BuzzFeed CEO Jonah Peretti had some news to discuss about the automated future of media. The brand, known for massively viral stories aggregated from social media and for being the most notable progenitor of what some might call clickbait, would begin publishing content generated by artificial-intelligence programs. In other words: Robots would help make BuzzFeed posts.

“When you see this work in action it is pretty amazing,” Peretti had promised employees in a memo earlier in the day. During the meeting, which I viewed a recording of, he was careful to say that AI would not be harnessed to generate “low-quality content for the purposes of cost-saving.” (BuzzFeed cut its workforce by about 12 percent weeks before Christmas.) Instead, Peretti said, AI could be used to create “endless possibilities” for personality quizzes, a popular format that he called “a driving force on the internet.” You’ve surely come across one or two before: “Sorry, Millennials, but There’s No Way You Will Be Able to Pass This Super-Easy Quiz,” for instance, or “If You Were a Cat, What Color Would Your Fur Be?”

These quizzes and their results have historically been dreamed up by human brains and typed with human fingers. Now BuzzFeed staffers would write a prompt and a handful of questions for a user to fill out, like a form in a proctologist’s waiting room, and then the machine, reportedly constructed by OpenAI, the creator of the widely discussed chatbot ChatGPT, would spit out uniquely tailored text. Peretti wrote a bold promise about these quizzes on a presentation slide: “Integrating AI will make them 10x better & be the biggest change to the format in a decade.” The personality-quiz revolution is upon us.

[Read: ChatGPT is dumber than you think]

Peretti offered the staff some examples of these bigger, better personality quizzes: Answer 7 Simple Questions and AI Will Write a Song About Your Ideal Soulmate. Have an AI Create a Secret Society for Your BFFs in 5 Easy Questions. Create a Mythical Creature to Ride. This Quiz Will Write a RomCom About You in Less Than 30 Seconds. The rom-com, Peretti noted, would be “a great thing for an entertainment sponsor … maybe before Valentine’s Day.” He demonstrated how the quiz could play out: The user—in this example, a hypothetical person named Jess—would fill out responses to questions like “Tell us an endearing flaw you have” (Jess’s answer: “I am never on time, ever”), and the AI would spit out a story that incorporated those details. Here’s part of the 250-word result. Like a lot of AI-generated text, it may remind you of reading someone else’s completed Mad Libs:

Cher gets out of bed and calls everyone they know to gather outside while she serenades Jess with her melodic voice singing “Let Me Love You.” When the song ends everyone claps, showering them with adoration, making this moment one for the books—or one to erase.

Things take an unexpected turn when Ron Tortellini shows up—a wealthy man who previously was betrothed to Cher. As it turns out, Ron is a broke, flailing actor trying to using [sic] Cher to further his career. With this twist, our two heroines must battle these obstacles to be together against all odds—and have a fighting chance.

There are many fair questions one might ask reading this. “Why?” is one of them. “Ron Tortellini?” is another. But the most important is this: Who is the content for? The answer is no one in particular. The quiz’s result is machine-generated writing designed to run through other machines—content that will be parsed and distributed by tech platforms. AI may yet prove to be a wonderful assistive tool for humans doing interesting creative work, but right now it’s looking like robo-media’s future will be flooding our information ecosystem with even more junk.

Peretti did not respond to a request for comment, but there’s no mistaking his interest here. Quizzes are a major traffic-driver for BuzzFeed, bringing in 1.1 billion views in 2022 alone, according to his presentation. They can be sold as sponsored content, meaning an advertiser can pay for an AI-generated quiz about its brand. And they spread on social media, where algorithmic feeds put them in front of other people, who click onto the website to take the quiz themselves, and perhaps find other quizzes to take and share. Personality quizzes are a perfect fit for AI, because while they seem to say something about the individual posting them, they actually say nothing at all: “Make an Ice Cream Cone and We’ll Reveal Which Emoji You Are” was written by a person, but might as well have been written by a program.

Much the same could be said about content from CNET, which has recently started to publish articles written at least in part by an AI program, no doubt to earn easy placement in search engines. (Why else write the headline “What Are NSF Fees and Why Do Banks Charge Them?” but to anticipate something a human being might punch into Google? Indeed, CNET’s AI-“assisted” article is one of the top results for such a query.) The goal, according to the site’s editor in chief, Connie Guglielmo, is “to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective.” Reporting from Futurism has revealed that these articles have contained factual errors and apparent plagiarism. Guglielmo has responded to the ensuing controversy by saying, in part, that “AI engines, like humans, make mistakes.”

Such is the immediate path for robot journalism, if we can call it that: Bots will write content that is optimized to circulate through tech platforms, a new spin on an old race-to-the-bottom dynamic that has always been present in digital media. BuzzFeed and CNET aren’t innovating, really: They’re using AI to reinforce an unfortunate status quo, where stories are produced to hit quotas and serve ads against—that is, they are produced because they might be clicked. Many times, machines will even be the ones doing that clicking! The bleak future of media is human-owned websites profiting from automated banner ads placed on bot-written content, crawled by search-engine bots, and occasionally served to bot visitors.

[Read: How ChatGPT will destabilize white-collar work]

This is not the apocalypse, but it’s not wonderful, either. To state what was once obvious, journalism and entertainment alike are supposed to be for people. Viral stories—be they 6,000-word investigative features or a quiz about what state you actually belong in—work because they have mass appeal, not because they are hypertargeted to serve an individual reader. BuzzFeed was once brilliant enough to livestream video of people wrapping rubber bands around a watermelon until it exploded. At the risk of over-nostalgizing a moment that was in fact engineered for a machine itself—Facebook had just started to pay publishers to use its live-video tool—this was at least content for everyone, rather than no one in particular. Bots can be valuable tools in the work of journalism. For years, the Los Angeles Times has experimented with a computer program that helps quickly disseminate information about earthquakes, for example. (Though not without error, I might add.) But new technology is not in and of itself valuable; it’s all in how you use it.

Much has been made of the potential for generative AI to upend education as we’ve known it, and to destabilize white-collar work. These are real, valid concerns. But the rise of robo-journalism has introduced another: What will the internet look like when it is populated to a greater extent by soulless material devoid of any real purpose or appeal? The AI-generated rom-com is a pile of nonsense; CNET’s finance content can’t be trusted. And this is just the start.

In 2021, my colleague Kaitlyn Tiffany wrote about the dead-internet theory, a conspiracy rooted in 4chan’s paranormal message board that posits that the internet is now mostly synthetic. The premise is that most of the content seen on the internet “was actually created using AI” and fueled by a shadowy group that hopes to “control our thoughts and get us to purchase stuff.” It seemed absurd then. It seems a little more real today.