Itemoids

ChatGPT

ChatGPT Resembles a Slice of the Human Brain

The Atlantic

www.theatlantic.com/technology/archive/2023/01/chatgpt-ai-language-human-computer-grammar-logic/672902

Language is commonly understood to be the “stuff” of thought. People “talk it out” and “speak their mind,” follow “trains of thought” or “streams of consciousness.” Some of the pinnacles of human creation—music, geometry, computer programming—are framed as metaphorical languages. The underlying assumption is that the brain processes the world and our experience of it through a progression of words. And this supposed link between language and thinking is a large part of what makes ChatGPT and similar programs so uncanny: The ability of AI to answer any prompt with human-sounding language can suggest that the machine has some sort of intent, even sentience.

But then the program says something completely absurd—that there are 12 letters in nineteen or that sailfish are mammals—and the veil drops. Although ChatGPT can generate fluent and sometimes elegant prose, easily passing the Turing-test benchmark that has haunted the field of AI for more than 70 years, it can also seem incredibly dumb, even dangerous. It gets math wrong, fails to give the most basic cooking instructions, and displays shocking biases. In a new paper, cognitive scientists and linguists address this dissonance by separating communication via language from the act of thinking: Capacity for one does not imply the other. At a moment when pundits are fixated on the potential for generative AI to disrupt every aspect of how we live and work, their argument should force a reevaluation of the limits and complexities of artificial and human intelligence alike.

The researchers explain that words may not work very well as a synecdoche for thought. People, after all, identify themselves on a continuum of visual to verbal thinking; the experience of not being able to put an idea into words is perhaps as human as language itself. Contemporary research on the human brain, too, suggests that “there is a separation between language and thought,” says Anna Ivanova, a cognitive neuroscientist at MIT and one of the study’s two lead authors. Brain scans of people using dozens of languages have revealed a particular network of neurons that fires independent of the language being used (including invented tongues such as Na’vi and Dothraki).

That network of neurons is not generally involved in thinking activities including math, music, and coding. In addition, many patients with aphasia—a loss of the ability to comprehend or produce language, as a result of brain damage—remain skilled at arithmetic and other nonlinguistic mental tasks. Combined, these two bodies of evidence suggest that language alone is not the medium of thought; it is more like a messenger. The use of grammar and a lexicon to communicate functions that involve other parts of the brain, such as socializing and logic, is what makes human language special.

[Read: Hollywood’s love affair with fictional languages]

ChatGPT and software like it demonstrate an incredible ability to string words together, but they struggle with other tasks. Ask for a letter explaining to a child that Santa Claus is fake, and it produces a moving message signed by Saint Nick himself. These large language models, also called LLMs, work by predicting the next word in a sentence based on everything before it (belief follows contrary to popular, for example). But ask ChatGPT to do basic arithmetic and spelling or give advice for frying an egg, and you may receive grammatically superb nonsense: “If you use too much force when flipping the egg, the eggshell can crack and break.”
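
To make the prediction idea concrete, here is a minimal sketch that assumes nothing about ChatGPT's internals: a toy counter that learns which word tends to follow which. Real LLMs use transformer networks trained on enormous corpora, but the underlying task is the same guessing game.

```python
# A toy next-word predictor, for illustration only; not how ChatGPT is built.
from collections import Counter, defaultdict

corpus = (
    "contrary to popular belief language follows patterns "
    "contrary to popular belief models predict words "
    "contrary to popular opinion models predict text"
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation most often seen after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("popular"))  # -> 'belief' (seen twice, vs. 'opinion' once)
```

The toy also fails the way the article describes: outside the handful of sequences it has seen, it has nothing sensible to say.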

These shortcomings point to a distinction, not dissimilar to one that exists in the human brain, between piecing together words and piecing together ideas—what the authors term formal and functional linguistic competence, respectively. “Language models are really good at producing fluent, grammatical language,” says the University of Texas at Austin linguist Kyle Mahowald, the paper’s other lead author. “But that doesn’t necessarily mean something which can produce grammatical language is able to do math or logical reasoning, or think, or navigate social contexts.”

If the human brain’s language network is not responsible for math, music, or programming—that is, for thinking—then there’s no reason an artificial “neural network” trained on terabytes of text would be good at those things either. “In line with evidence from cognitive neuroscience,” the authors write, “LLMs’ behavior highlights the difference between being good at language and being good at thought.” ChatGPT’s ability to get mediocre scores on some business- and law-school exams, then, is more a mirage than a sign of understanding.

Still, hype swirls around the next iteration of language models, which will train on far more words and with far more computing power. OpenAI, the creator of ChatGPT, claims that its programs are approaching a so-called general intelligence that would put the machines on par with humankind. But if the comparison to the human brain holds, then simply making models better at word prediction won’t bring them much closer to this goal. In other words, you can dismiss the notion that AI programs such as ChatGPT have a soul or resemble an alien invasion.

Ivanova and Mahowald believe that different training methods are required to spur further advances in AI—for instance, approaches specific to logical or social reasoning rather than word prediction. ChatGPT may have already taken a step in that direction, not just reading massive amounts of text but also incorporating human feedback: Supervisors were able to comment on what constituted good or bad responses. But with few details about ChatGPT’s training available, it’s unclear just what that human input targeted; the program apparently thinks 1,000 is both greater and less than 1,062. (OpenAI released an update to ChatGPT yesterday that supposedly improves its “mathematical capabilities,” but it’s still reportedly struggling with basic word problems.)

[Read: What happens when AI has read everything?]

There are, it should be noted, people who believe that large language models are not as good at language as Ivanova and Mahowald write—that they are basically glorified auto-completes whose flaws scale with their power. “Language is more than just syntax,” says Gary Marcus, a cognitive scientist and prominent AI researcher. “In particular, it’s also about semantics.” It’s not just that AI chatbots don’t understand math or how to fry eggs—they also, he says, struggle to comprehend how a sentence derives meaning from the structure of its parts.

For instance, imagine three plastic balls in a row: green, blue, blue. Someone asks you to grab “the second blue ball”: You understand that they’re referring to the last ball in the sequence, but a chatbot might understand the instruction as referring to the second ball, which also happens to be blue. “That a large language model is good at language is overstated,” Marcus says. But to Ivanova, something like the blue-ball example requires not just compiling words but also conjuring a scene, and as such “is not really about language proper; it’s about language use.”
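
A tiny sketch makes the two readings concrete; the list and index arithmetic here are ours, not the researchers'. Getting "the second blue ball" right means filtering to the blue balls before counting, rather than counting the balls and then checking color.

```python
# Two readings of "grab the second blue ball" for the row: green, blue, blue.
balls = ["green", "blue", "blue"]

# Compositional reading: restrict to the blue balls first, then take the second.
blue_positions = [i for i, color in enumerate(balls) if color == "blue"]
intended = blue_positions[1]   # index 2 -- the last ball in the row

# Shallower reading: take the second ball overall, which merely happens to be blue.
literal_second = 1             # index 1 -- the middle ball

print(intended, literal_second)  # -> 2 1
```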

And no matter how compelling their language use is, there’s still a healthy debate over just how much programs such as ChatGPT actually “understand” about the world by simply being fed data from books and Wikipedia entries. “Meaning is not given,” says Roxana Girju, a computational linguist at the University of Illinois at Urbana-Champaign. “Meaning is negotiated in our interactions, discussions, not only with other people but also with the world. It’s something that we reach at in the process of engaging through language.” If that’s right, building a truly intelligent machine would require a different way of combining language and thought—not just layering different algorithms but designing a program that might, for instance, learn language and how to navigate social relationships at the same time.

Ivanova and Mahowald are not outright rejecting the view that language epitomizes human intelligence; they’re complicating it. Humans are “good” at language precisely because we combine thought with its expression. A computer that both masters the rules of language and can put them to use will necessarily be intelligent—the flip side being that narrowly mimicking human utterances is precisely what is holding machines back. But before we can use our organic brains to better understand silicon ones, we will need both new ideas and new words to understand the significance of language itself.

Adani vs Hindenburg: India's top businessman faces biggest test

CNN

www.cnn.com/2023/01/31/investing/india-adani-hindenburg-report-explainer-intl-hnk/index.html

India’s richest man, Gautam Adani, ended his trip to Davos earlier this month on an optimistic note. The infrastructure billionaire expressed confidence about India’s growth and ambition. He even talked about his mild addiction to ChatGPT.

Hear why this teacher says schools should embrace ChatGPT, not ban it

CNN

www.cnn.com/videos/business/2023/01/26/nightcap-chatgpt-students-clip-orig-nb.cnn

High school teacher Cherie Shields tells "Nightcap's" Jon Sarlin that ChatGPT should not be banned in schools because it's a powerful teaching tool. For more, watch the full Nightcap episode here.

Technology Makes Us More Human

The Atlantic

www.theatlantic.com/ideas/archive/2023/01/chatgpt-ai-technology-techo-humanism-reid-hoffman/672872

ChatGPT, a new AI system that sounds so human in conversations that it could host its own podcast, is a test of temperament. Reading between its instantly generated, flawlessly grammatical lines, people see wildly different visions of the future.

For some, ChatGPT promises to revolutionize the way we search for information, draft articles, write software code, and create business plans. When they use ChatGPT, they see Star Trek: a future in which opportunities for personal fulfillment are as large as the universe itself.

Others see only massive job displacement and a profound loss of agency, as we hand off creative processes that were once the domain of humans to machines. When they use ChatGPT, they see Black Mirror: a future in which technological innovation primarily exists to annoy, humiliate, terrify, and, most of all, dehumanize humanity.

Annie Lowrey: How ChatGPT will destabilize white-collar work

I’m firmly in the Star Trek camp, because although I fully acknowledge that the tech industry is imperfect, and always in need of thoughtful, responsive leadership, I still believe that improvement through technology is how humanity most effectively makes progress.

That’s why I switched from a planned career in academia to one in Silicon Valley in the first place. In the early 1990s, I saw how software, globally distributed on the internet, was creating new opportunities to empower people at scale, and that’s ultimately what led me to co-found LinkedIn. I wanted to use technology to help individuals improve their economic opportunities over the course of their entire career, and thus have more chances to pursue meaning in their lives.

Techno-humanism is typically conflated with transhumanism, referring to the idea that we are on a path to incorporating so much technology into our lives that eventually we will evolve into an entirely new species of post-humans or superhumans.

I interpret techno-humanism in a slightly different way. What defines humanity is not just our unusual level of intelligence, but also how we capitalize on that intelligence by developing technologies that amplify and complement our mental, physical, and social capacities. If we merely lived up to our scientific classification—Homo sapiens—and just sat around thinking all day, we’d be much different creatures than we actually are. A more accurate name for us is Homo techne: humans as toolmakers and tool users. The story of humanity is the story of technology.

Technology is the thing that makes us us. Through the tools we create, we become neither less human nor superhuman, nor post-human. We become more human.

This doesn’t mean that all technological innovations automatically produce good outcomes—far from it. New technologies can create new problems or exacerbate old ones, such as when AI systems end up reproducing biases (against racial minorities, for instance) that exist in their training data. We in the tech industry should be vigilant in our efforts to mitigate and correct such problems.

Read: How the racism baked into technology hurts teens

Nor would I ever suggest that technologies are neutral, equally capable of being used for good or bad. The values, assumptions, and aspirations we build into the technologies we create shape how they can be used, and thus what kinds of outcomes they can produce. That’s why techno-humanism should strive for outcomes that broadly benefit humanity.

At the same time, a techno-humanist perspective also orients to the future, dynamism, and change. This means it inevitably clashes with desires for security, predictability, and the familiar. In moments of accelerating innovation—like the one we’re living through right now, as robotics, virtual reality, synthetic biology, and especially AI all evolve quickly—the urge to entrench the status quo against the uncertain terrain of new realities accelerates too.

Just so, New York City’s public-school system has already blocked students and teachers from accessing ChatGPT in its classrooms. Multiple online art communities have banned users from uploading images they created using AI image-generators such as DALL-E, Midjourney, and Stable Diffusion.

I get it. Learning to write an essay from scratch is a time-honored way to develop critical thinking, organizational skills, and a facility for personal expression. Creating vivid and beautiful imagery one painstaking brushstroke at a time is perhaps the epitome of human creativity.

But what if teachers used ChatGPT to instantly personalize lesson plans for each student in their class—wouldn’t that be humanizing in a way that the industrialized approaches of traditional classroom teaching are not? Aren’t tools that allow millions of people to visually express their ideas and communicate with one another in new ways a step forward for humanity?

If it’s detrimental to society to simply claim that “technology is neutral” and avoid any responsibility for negative outcomes—and I believe it is—so is rejecting a technology just because it has a capacity to produce negative outcomes along with positive ones.

Is there a future where the massive proliferation of robots ushers in a new era of human flourishing, not human marginalization? Where AI-driven research helps us safely harness the power of nuclear fusion in time to help avert the worst consequences of climate change? It’s only natural to peer into the dark unknown and ask what could possibly go wrong. It’s equally necessary—and more essentially human—to do so and envision what could possibly go right.

If Robots Eat Journalism, Does It Have to Be With Personality Quizzes?

The Atlantic

www.theatlantic.com/technology/archive/2023/01/buzzfeed-using-chatgpt-openai-creating-personality-quizzes/672880

One might assume that when your boss finally comes to tell you that the robots are here to do your job, he won’t also point out with enthusiasm that they’re going to do it 10 times better than you did. Alas, this was not the case at BuzzFeed.

Yesterday, at a virtual all-hands meeting, BuzzFeed CEO Jonah Peretti had some news to discuss about the automated future of media. The brand, known for massively viral stories aggregated from social media and for being the most notable progenitor of what some might call clickbait, would begin publishing content generated by artificial-intelligence programs. In other words: Robots would help make BuzzFeed posts.

“When you see this work in action it is pretty amazing,” Peretti had promised employees in a memo earlier in the day. During the meeting, which I viewed a recording of, he was careful to say that AI would not be harnessed to generate “low-quality content for the purposes of cost-saving.” (BuzzFeed cut its workforce by about 12 percent weeks before Christmas.) Instead, Peretti said, AI could be used to create “endless possibilities” for personality quizzes, a popular format that he called “a driving force on the internet.” You’ve surely come across one or two before: “Sorry, Millennials, but There’s No Way You Will Be Able to Pass This Super-Easy Quiz,” for instance, or “If You Were a Cat, What Color Would Your Fur Be?”

These quizzes and their results have historically been dreamed up by human brains and typed with human fingers. Now BuzzFeed staffers would write a prompt and a handful of questions for a user to fill out, like a form in a proctologist’s waiting room, and then the machine, reportedly constructed by OpenAI, the creator of the widely discussed chatbot ChatGPT, would spit out uniquely tailored text. Peretti wrote a bold promise about these quizzes on a presentation slide: “Integrating AI will make them 10x better & be the biggest change to the format in a decade.” The personality-quiz revolution is upon us.
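
BuzzFeed has not published how the integration works, so what follows is only a hypothetical sketch of the pattern described above: a writer-authored template, a reader's quiz answers slotted in, and the completed prompt handed off to a hosted text-generation model. Every name and field below is invented for illustration; none come from BuzzFeed or OpenAI.

```python
# Hypothetical sketch of the quiz flow the article describes.
QUIZ_PROMPT = (
    "Write a short, playful rom-com synopsis starring {name}. "
    "Their celebrity crush is {crush}, and their endearing flaw is: {flaw}."
)

def build_prompt(answers: dict[str, str]) -> str:
    """Fill the writer-authored template with one reader's quiz answers."""
    return QUIZ_PROMPT.format(**answers)

prompt = build_prompt({
    "name": "Jess",
    "crush": "Cher",
    "flaw": "I am never on time, ever",
})
print(prompt)
# The completed prompt would then be sent to a text-generation API, and the
# returned story is what the reader sees as their "personalized" result.
```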

[Read: ChatGPT is dumber than you think]

Peretti offered the staff some examples of these bigger, better personality quizzes: Answer 7 Simple Questions and AI Will Write a Song About Your Ideal Soulmate. Have an AI Create a Secret Society for Your BFFs in 5 Easy Questions. Create a Mythical Creature to Ride. This Quiz Will Write a RomCom About You in Less Than 30 Seconds. The rom-com, Peretti noted, would be “a great thing for an entertainment sponsor … maybe before Valentine’s Day.” He demonstrated how the quiz could play out: The user—in this example, a hypothetical person named Jess—would fill out responses to questions like “Tell us an endearing flaw you have” (Jess’s answer: “I am never on time, ever”), and the AI would spit out a story that incorporated those details. Here’s part of the 250-word result. Like a lot of AI-generated text, it may remind you of reading someone else’s completed Mad Libs:

Cher gets out of bed and calls everyone they know to gather outside while she serenades Jess with her melodic voice singing “Let Me Love You.” When the song ends everyone claps, showering them with adoration, making this moment one for the books—or one to erase.

Things take an unexpected turn when Ron Tortellini shows up—a wealthy man who previously was betrothed to Cher. As it turns out, Ron is a broke, flailing actor trying to using [sic] Cher to further his career. With this twist, our two heroines must battle these obstacles to be together against all odds—and have a fighting chance.

There are many fair questions one might ask reading this. “Why?” is one of them. “Ron Tortellini?” is another. But the most important is this: Who is the content for? The answer is no one in particular. The quiz’s result is machine-generated writing designed to run through other machines—content that will be parsed and distributed by tech platforms. AI may yet prove to be a wonderful assistive tool for humans doing interesting creative work, but right now it’s looking like robo-media’s future will be flooding our information ecosystem with even more junk.

Peretti did not respond to a request for comment, but there’s no mistaking his interest here. Quizzes are a major traffic-driver for BuzzFeed, bringing in 1.1 billion views in 2022 alone, according to his presentation. They can be sold as sponsored content, meaning an advertiser can pay for an AI-generated quiz about its brand. And they spread on social media, where algorithmic feeds put them in front of other people, who click onto the website to take the quiz themselves, and perhaps find other quizzes to take and share. Personality quizzes are a perfect fit for AI, because while they seem to say something about the individual posting them, they actually say nothing at all: “Make an Ice Cream Cone and We’ll Reveal Which Emoji You Are” was written by a person, but might as well have been written by a program.

Much the same could be said about content from CNET, which has recently started to publish articles written at least in part by an AI program, no doubt to earn easy placement in search engines. (Why else write the headline “What Are NSF Fees and Why Do Banks Charge Them?” but to anticipate something a human being might punch into Google? Indeed, CNET’s AI-“assisted” article is one of the top results for such a query.) The goal, according to the site’s editor in chief, Connie Guglielmo, is “to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective.” Reporting from Futurism has revealed that these articles have contained factual errors and apparent plagiarism. Guglielmo has responded to the ensuing controversy by saying, in part, that “AI engines, like humans, make mistakes.”

Such is the immediate path for robot journalism, if we can call it that: Bots will write content that is optimized to circulate through tech platforms, a new spin on an old race-to-the-bottom dynamic that has always been present in digital media. BuzzFeed and CNET aren’t innovating, really: They’re using AI to reinforce an unfortunate status quo, where stories are produced to hit quotas and serve ads against—that is, they are produced because they might be clicked. Many times, machines will even be the ones doing that clicking! The bleak future of media is human-owned websites profiting from automated banner ads placed on bot-written content, crawled by search-engine bots, and occasionally served to bot visitors.

[Read: How ChatGPT will destabilize white-collar work]

This is not the apocalypse, but it’s not wonderful, either. To state what was once obvious, journalism and entertainment alike are supposed to be for people. Viral stories—be they 6,000-word investigative features or a quiz about what state you actually belong in—work because they have mass appeal, not because they are hypertargeted to serve an individual reader. BuzzFeed was once brilliant enough to livestream video of people wrapping rubber bands around a watermelon until it exploded. At the risk of over-nostalgizing a moment that was in fact engineered for a machine itself—Facebook had just started to pay publishers to use its live-video tool—this was at least content for everyone, rather than no one in particular. Bots can be valuable tools in the work of journalism. For years, the Los Angeles Times has experimented with a computer program that helps quickly disseminate information about earthquakes, for example. (Though not without error, I might add.) But new technology is not in and of itself valuable; it’s all in how you use it.
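
The article doesn't describe how the Times' earthquake program works, but the general pattern is straightforward enough to sketch: pull structured alert data from a public feed and pour it into a story template for a human editor to review. The feed URL and field names below reflect the USGS's public GeoJSON format and are an assumption; this is not a description of the Times' actual system.

```python
# Minimal sketch of an earthquake-brief bot (not the LA Times' real software).
# Assumes the USGS public GeoJSON feed and its 'mag'/'place' fields.
import json
from urllib.request import urlopen

FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

with urlopen(FEED) as response:
    quakes = json.load(response)["features"]

for quake in quakes:
    props = quake["properties"]
    # Draft the kind of one-line brief a human editor would review before publishing.
    print(f"A magnitude {props['mag']} earthquake was reported near {props['place']}.")
```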

Much has been made of the potential for generative AI to upend education as we’ve known it, and destabilize white-collar work. These are real, valid concerns. But the rise of robo-journalism has introduced another: What will the internet look like when it is populated to a greater extent by soulless material devoid of any real purpose or appeal? The AI-generated romcom is a pile of nonsense; CNET’s finance content can’t be trusted. And this is just the start.

In 2021, my colleague Kaitlyn Tiffany wrote about the dead-internet theory, a conspiracy theory rooted in 4chan’s paranormal message board that posits that the internet is now mostly synthetic. The premise is that most of the content seen on the internet “was actually created using AI” and fueled by a shadowy group that hopes to “control our thoughts and get us to purchase stuff.” It seemed absurd then. It seems a little more real today.

The Elon Musk mystique is fading and this teacher says don't ban ChatGPT

CNN

www.cnn.com/videos/business/2023/01/26/nightcap-elon-musk-tesla-chatgpt-full-orig-jg.cnn-business

CNN's Allison Morrow tells "Nightcap's" Jon Sarlin that Elon Musk's Twitter antics are damaging Tesla's brand. Plus, high school teacher Cherie Shields argues that ChatGPT is an excellent teaching tool and schools are making a mistake if they ban the AI technology. To get the day's business headlines sent directly to your inbox, sign up for the Nightcap newsletter.

BuzzFeed says it will use AI to help create content, stock jumps 150%

CNN

www.cnn.com/2023/01/26/media/buzzfeed-ai-content-creation/index.html

BuzzFeed said Thursday that it will work with ChatGPT creator OpenAI to use artificial intelligence to help create content for its audience, marking a milestone in how media companies implement the new technology into their businesses.