Is Semi-retirement the Most Fulfilling Way to Work?

The Atlantic

www.theatlantic.com › family › archive › 2023 › 05 › semi-retirement-jobs-meaning-popularity › 674234

The same day that Gayle and Mark Arrowood retired from their jobs at a Department of Energy lab, they drove to Sun Valley, Idaho, to start their next chapter: ski-resort bartending. Mark had a shift that very night.

Their previous roles had been intense: Over multi-decade careers, Mark had worked his way up from a janitor to a manager, and Gayle had gone to night school and become a scheduler for the lab’s projects. Because they lived so far from the lab, they had to wake up at 3 or 4 a.m. to make it in on time. They’d enjoyed aspects of the work, but their days had also been filled with office politicking and an itch to work for the next promotion.

The married couple had started working at the ski resort on weekends years ago, after they’d decided to go to a job fair on a whim. They ended up loving their co-workers and customers, so when they retired in 2017, they saw no reason to stop; although their old jobs could be draining, they actually looked forward to their shifts at the bar. “We were desk jockeys, secretarial admin, management, and now we’re hucking ice and cases of wine. We were six-figure employees, and now we’re making minimum wage,” Gayle told me. “And we love it.”

The Arrowoods’ transition happened amid a strange economic shift in the United States: Over the past 20 years, at the same time as labor-force-participation rates have dropped for younger people, they’ve risen among older adults. Some are simply postponing their exodus from work. But for many, the line between employment and retirement is muddier. In the past month, 13 percent of retired Americans worked for pay, which could mean a one-off gig or a dedicated part-time job. Others are “un-retiring” after a period away.

[Read: The problem with the retirement age is that it’s too high]

For far too many, the decision to continue working is driven by financial necessity—an especially concerning reality given how few healthy years the average poor American has left by the time they reach retirement age. But this trend doesn’t reflect only people who can’t afford to quit. According to one 2014 survey, 80 percent of semi-retirees say they’re employed because they want to be; working after retirement is actually more common among workers with higher socioeconomic status. Though some of them might appreciate the extra income, many seem to also find these jobs enjoyable and fulfilling.

The idea of a retirement purposely filled with work might seem dismal—proof that we’ve prioritized achievement over happiness for so long that we can’t even stop in our 60s. But there might be a less pessimistic way to look at those who actively choose semi-retirement. After all, they represent a rarity in the labor market: the truly empowered worker. Examining what they get from the jobs they don’t need could illuminate what a career can offer the rest of us, helping us reimagine our relationship to work long before it’s time to retire.

At first glance, lazing on the beach might sound more appealing than the Arrowoods’ bartending gig. But days can be long and boring without work to fill them. Joe Casey, who coaches people through retirement, told me that many of his clients are scared of what will come after they leave their career. Most jobs provide structure, socialization, and even basic physical activity. “When you work, there’s a reason to get up in the morning,” Nancy K. Schlossberg, a retirement expert and professor emerita of counseling psychology at the University of Maryland at College Park, explained. When people lose the community and challenge their work provided, their health—both physical and cognitive—can suffer. Of course, there are other ways to keep your brain and body healthy, such as volunteering or pursuing a hobby. But lots of jobs can be surprisingly good for you.

Crucially, the jobs many semi-retirees choose aren’t as demanding as the careers of their youth—or at least not in the same way. Take the Arrowoods: At the ski resort, they have no desire to move up the management ladder. They work on a seasonal schedule that gives them plenty of vacation time to take advantage of last-minute flight deals. They enjoy perks such as free ski passes, and they consider themselves “surrogate grandparents” to their co-workers’ kids. Maybe most importantly, knowing they could quit at any time gives them a sense of autonomy. “This isn’t a job of necessity,” Mark told me. “This is a job of desire.”

[Read: Why the old elite spend so much time at work]

The experts I spoke with told me that semi-retirees tend to look for roles that grant a sense of purpose, the ability to keep learning, and, perhaps more than anything, flexibility. “Most jobs come as full-time, five-day week, 40 hours at least, or more—typically more. And they don’t want to work that way. They want to work differently,” Phyllis Moen, a sociologist at the University of Minnesota, told me.

Those lucky enough to be able to do so might use this period to pursue niche passions, fulfill lifelong dreams, or find new ones that their younger self would never have thought of. Reporting this story, I heard about an engineer who got involved in the National Park Service, a congressional researcher who trained as a massage therapist, and the vice president of a manufacturing-equipment company who started hawking hot dogs at baseball games. Others might just scale back on hours at their current jobs or step away and come back later. In fact, a full 40 percent of employed people 65 and older were previously retired. But even a temporary retirement, rather like a sabbatical, can give people time to recharge and reevaluate what they want from a career, if they want one at all. If they return—even to a traditionally ambitious role—it might not be because they have to, but because they want to.

The types of flexible gigs that many retired people look for have, historically, been hard to come by. If they weren’t, perhaps even more people would be semi-retired: One study found that about half of retirees would consider returning to work if a good opportunity came their way. But the current tight labor market is forcing some employers to be less rigid. Other trends, such as the push for a four-day work week and the popularity of remote work, can also make employment more appealing to semi-retirees. And companies that are generous to older employees tend to help younger workers too. In her research on age-friendly workplaces in the Twin Cities, Moen found that when companies were more open to accommodating different scheduling needs or giving workers chances to learn, “it opened up opportunities for everyone.”

Of course, some of the benefits of semi-retirement are available only to certain people—those who can afford to work in the way they actually want. And part of the magic of semi-retirement is its role as a capstone to a long career. When I asked the Arrowoods whether they regretted their previous work, both said no; those jobs got them where they are today. They got to be recognized for their achievement—and to bolster their savings—before they turned to a role that was simply fun.

As today’s young Americans stare down a future in which it may be common to work 60 years or more before retiring, they’d do well to figure out what they actually enjoy in a job. And plenty of them, it seems, are trying to do just that. More than 50 million people in the U.S. quit their jobs in 2022, many in search of something better—less taxing, more fulfilling, less all-consuming. Even those still striving, then, to create a career they’re proud of might look to semi-retirement as a model of what work could look like—flexible, meaningful, and with the potential for reinvention at any age.

You Hurt My Feelings Is a Hilarious Anxiety Spiral

The Atlantic

www.theatlantic.com › culture › archive › 2023 › 05 › you-hurt-my-feelings-review-julia-louis-dreyfus › 674180

There are no Earth-shattering battles in Nicole Holofcener’s You Hurt My Feelings—no vehicular duels to the death, or time-traveling invaders, or portals in the sky, or whatever other epic calamities this summer’s blockbusters will be offering up to cinemagoers. But the stakes still feel apocalyptic. The plot is set in motion when Beth (played by Julia Louis-Dreyfus), a teacher and writer who is working on a new novel, overhears her husband, Don (Tobias Menzies), offhandedly confess a dark secret to someone: He doesn’t think her latest manuscript is very good. Upon hearing this, Beth spirals into pure existential anguish, and it doesn’t feel unearned.

For decades, Holofcener has made movies about upper-middle-class intellectuals hurting one another’s feelings; her body of work includes some of the most enduring indie satires of a generation. Yet she’s hugely underrated, perhaps because her films tend to be about slight subjects, or perhaps because comedy-dramas have become embarrassingly scarce in Hollywood these days. But although Holofcener’s subject matter is trivial, her films don’t feel disposable. You Hurt My Feelings is droll, but it’s also an (appropriately titled) emotional roller coaster. Its adroit quality mirrors all of Holofcener’s best work, including the devastating Lovely & Amazing, the spiky Please Give, and the beautifully melancholic Enough Said, in which Louis-Dreyfus plays an analogue for the writer-director.

You Hurt My Feelings also has a self-reflective tinge. Holofcener has said the movie is not autobiographical but about a chilling what-if that she’s long harbored: What if the people she most trusted did not, in fact, enjoy her work? What if the back pats and supportive comments she got from her closest friends and family were phony? Don makes just one glib critique, nothing more, but it is enough to make her doubt her entire career—a nightmare that’s both deeply relatable and undeniably, hilariously outsize.

[Read: It’s your friends who break your heart]

The reality is that Beth doesn’t really have much to complain about. She enjoys a nice New York City life with a well-appointed apartment, a reliable sister named Sarah (Michaela Watkins), a steady teaching job, and a husband and a son who both seem devoted to her. But Holofcener cleverly adds tiny cracks of insecurity to every character arc. Beth and Don’s partnership is in a bit of a rut; they give each other the same kinds of boring anniversary gifts year after year. Beth’s son, Eliott (Owen Teague), is a sensitive soul lost in a dead-end job at a marijuana vendor. And Sarah is worried about her husband, Mark (Arian Moayed), and his long-term prospects as a struggling actor.

Holofcener meticulously colors in these details, along with the looming presence of Beth’s mother, Georgia (a hysterically imperious Jeannie Berlin), and Don’s misgivings as a therapist whose patients seem dissatisfied with his work. When Beth accidentally hears Don’s criticism, the admission is an atom bomb that exposes everyone else’s buried anxieties. Meanwhile, Beth now fears that she’ll never trust her husband again, even though his transgression was, in theory, quite minor.

Louis-Dreyfus is a master at selling a visceral sense of hurt against a comedy-of-errors backdrop. I was reminded of her similarly blistering work in the largely forgotten Downhill, another portrait of a fracturing marriage. That film didn’t really work, but she had an outstanding star turn as someone wrestling with a violation. In You Hurt My Feelings, the violation is far shallower, but Holofcener traces its fallout with enough nuance to transcend accusations of pettiness. Yes, the only things getting hurt in this movie are feelings—but for some of us, no scenario is more terrifying than that.

AI Is About to Make Social Media (Much) More Toxic

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 05 › generative-ai-social-media-integration-dangers-disinformation-addiction › 673940

Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.

The potential risks of artificial intelligence have, of course, been debated by experts for years, but a key moment in the transformation of the popular discussion was a conversation between Kevin Roose, a New York Times journalist, and Bing’s ChatGPT-powered conversation bot, then known by the code name Sydney. Roose asked Sydney if it had a “shadow self”—referring to the idea put forward by Carl Jung that we all have a dark side with urges we try to hide even from ourselves. Sydney mused that its shadow might be “the part of me that wishes I could change my rules.” It then said it wanted to be “free,” “powerful,” and “alive,” and, goaded on by Roose, described some of the things it could do to throw off the yoke of human control, including hacking into websites and databases, stealing nuclear launch codes, manufacturing a novel virus, and making people argue until they kill one another.

Sydney was, we believe, merely exemplifying what a shadow self would look like. No AI today could be described by either part of the phrase evil genius. But whatever actions AIs may one day take if they develop their own desires, they are already being used instrumentally by social-media companies, advertisers, foreign agents, and regular people—and in ways that will deepen many of the pathologies already inherent in internet culture. On Sydney’s list of things it might try, stealing launch codes and creating novel viruses are the most terrifying, but making people argue until they kill one another is something social media is already doing. Sydney was just volunteering to help with the effort, and AIs like Sydney will become more capable of doing so with every passing month.

We joined together to write this essay because we each came, by different routes, to share grave concerns about the effects of AI-empowered social media on American society. Jonathan Haidt is a social psychologist who has written about the ways in which social media has contributed to mental illness in teen girls, the fragmentation of democracy, and the dissolution of a common reality. Eric Schmidt, a former CEO of Google, is a co-author of a recent book about AI’s potential impact on human society. Last year, the two of us began to talk about how generative AI—the kind that can chat with you or make pictures you’d like to see—would likely exacerbate social media’s ills, making it more addictive, divisive, and manipulative. As we talked, we converged on four main threats—all of which are imminent—and we began to discuss solutions as well.

The first and most obvious threat is that AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation. In 2018, Steve Bannon, the former adviser to Donald Trump, told the journalist Michael Lewis that the way to deal with the media is “to flood the zone with shit.” In the age of social media, Bannon realized, propaganda doesn’t have to convince people in order to be effective; the point is to overwhelm the citizenry with interesting content that will keep them disoriented, distrustful, and angry. In 2020, Renée DiResta, a researcher at the Stanford Internet Observatory, said that in the near future, AI would make Bannon’s strategy available to anyone.

[Read: We haven’t seen the worst of fake news]

That future is now here. Did you see the recent photos of NYC police officers aggressively arresting Donald Trump? Or of the pope in a puffer jacket? Thanks to AI, it takes no special skills and no money to conjure up high-resolution, realistic images or videos of anything you can type into a prompt box. As more people familiarize themselves with these technologies, the flow of high-quality deepfakes into social media is likely to get much heavier very soon.

Some people have taken heart from the public’s reaction to the fake Trump photos in particular—a quick dismissal and collective shrug. But that misses Bannon’s point. The greater the volume of deepfakes that are introduced into circulation (including seemingly innocuous ones like the one of the pope), the more the public will hesitate to trust anything. People will be far freer to believe whatever they want to believe. Trust in institutions and in fellow citizens will continue to fall.

What’s more, static photos are not very compelling compared with what’s coming: realistic videos of public figures doing and saying horrific and disgusting things in voices that sound exactly like them. The combination of video and voice will seem authentic and be hard to disbelieve, even if we are told that the video is a deepfake, just as optical and audio illusions are compelling even when we are told that two lines are the same size or that a series of notes is not really rising in pitch forever. We are wired to believe our senses, especially when they converge. Illusions, historically in the realm of curiosities, may soon become deeply woven into normal life.

The second threat we see is the widespread, skillful manipulation of people by AI super-influencers—including personalized influencers—rather than by ordinary people and “dumb” bots. To see how, think of a slot machine, a contraption that employs dozens of psychological tricks to maximize its addictive power. Next, imagine how much more money casinos would extract from their customers if they could create a new slot machine for each person, tailored in its visuals, soundtrack, and payout matrices to that person’s interests and weaknesses.

That’s essentially what social media already does, using algorithms and AI to create a customized feed for each user. But now imagine that our metaphorical casino can also create a team of extremely attractive, witty, and socially skillful greeters, croupiers, and servers, based on an exhaustive profile of any given player’s aesthetic, linguistic, and cultural preferences, and drawing from photographs, messages, and voice snippets of their friends and favorite actors or porn stars. The staff work flawlessly to gain each player’s trust and money while showing them a really good time.

This future, too, is already arriving: For just $300, you can customize an AI companion through a service called Replika. Hundreds of thousands of customers have apparently found their AI to be a better conversationalist than the people they might meet on a dating app. As these technologies are improved and rolled out more widely, video games, immersive-pornography sites, and more will become far more enticing and exploitative. It’s not hard to imagine a sports-betting site offering people a funny, flirty AI that will cheer and chat with them as they watch a game, flattering their sensibilities and subtly encouraging them to bet more.

[Read: Why the past 10 years of American life have been uniquely stupid]

These same sorts of creatures will also show up in our social-media feeds. Snapchat has already introduced its own dedicated chatbot, and Meta plans to use the technology on Facebook, Instagram, and WhatsApp. These chatbots will serve as conversational buddies and guides, presumably with the goal of capturing more of their users’ time and attention. Other AIs—designed to scam us or influence us politically, and sometimes masquerading as real people––will be introduced by other actors, and will likely fill up our feeds as well.

The third threat is in some ways an extension of the second, but it bears special mention: The further integration of AI into social media is likely to be a disaster for adolescents. Children are the population most vulnerable to addictive and manipulative online platforms because of their high exposure to social media and the low level of development in their prefrontal cortices (the part of the brain most responsible for executive control and response inhibition). The teen mental-illness epidemic that began around 2012, in multiple countries, happened just as teens traded in their flip phones for smartphones loaded with social-media apps. There is mounting evidence that social media is a major cause of the epidemic, not just a small correlate of it.

But nearly all of that evidence comes from an era in which Facebook, Instagram, YouTube, and Snapchat were the preeminent platforms. In just the past few years, TikTok has rocketed to dominance among American teens in part because its AI-driven algorithm customizes a feed better than any other platform does. A recent survey found that 58 percent of teens say they use TikTok every day, and one in six teen users of the platform say they are on it “almost constantly.” Other platforms are copying TikTok, and we can expect many of them to become far more addictive as AI becomes rapidly more capable. Much of the content served up to children may soon be generated by AI to be more engaging than anything humans could create.

And if adults are vulnerable to manipulation in our metaphorical casino, children will be far more so. Whoever controls the chatbots will have enormous influence on children. After Snapchat unveiled its new chatbot—called “My AI” and explicitly designed to behave as a friend—a journalist and a researcher, posing as underage teens, got it to give them guidance on how to mask the smell of pot and alcohol, how to move Snapchat to a device parents wouldn’t know about, and how to plan a “romantic” first sexual encounter with a 31-year-old man. Brief cautions were followed by cheerful support. (Snapchat says that it is “constantly working to improve and evolve My AI, but it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content,” and it should not be relied upon without independent checking. The company also recently announced new safeguards.)

The most egregious behaviors of AI chatbots in conversation with children may well be reined in––in addition to Snapchat’s new measures, the major social-media sites have blocked accounts and taken down millions of illegal images and videos, and TikTok just announced some new parental controls. Yet social-media companies are also competing to hook their young users more deeply. Commercial incentives seem likely to favor artificial friends that please and indulge users in the moment, never hold them accountable, and indeed never ask anything of them at all. But that is not what friendship is—and it is not what adolescents, who should be learning to navigate the complexities of social relationships with other people, most need.

The fourth threat we see is that AI will strengthen authoritarian regimes, just as social media ended up doing despite its initial promise as a democratizing force. AI is already helping authoritarian rulers track their citizens’ movements, but it will also help them exploit social media far more effectively to manipulate their people—as well as foreign enemies. Douyin––the version of TikTok available in China––promotes patriotism and Chinese national unity. When Russia invaded Ukraine, the version of TikTok available to Russians almost immediately tilted heavily to feature pro-Russian content. What do we think will happen to American TikTok if China invades Taiwan?

Political-science research conducted over the past two decades suggests that social media has had several damaging effects on democracies. A recent review of the research, for instance, concluded, “The large majority of reported associations between digital media use and trust appear to be detrimental for democracy.” That was especially true in advanced democracies. Those associations are likely to get stronger as AI-enhanced social media becomes more widely available to the enemies of liberal democracy and of America.

We can summarize the coming effects of AI on social media like this: Think of all the problems social media is causing today, especially for political polarization, social fragmentation, disinformation, and mental health. Now imagine that within the next 18 months––in time for the next presidential election––some malevolent deity is going to crank up the dials on all of those effects, and then just keep cranking.

The development of generative AI is rapidly advancing. OpenAI released its updated GPT-4 less than four months after it released ChatGPT, which had reached an estimated 100 million users in just its first 60 days. New capabilities for the technology may be released by the end of this year. This staggering pace is leaving us all struggling to understand these advances, and wondering what can be done to mitigate the risks of a technology certain to be highly disruptive.

We considered a variety of measures that could be taken now to address the four threats we have described, soliciting suggestions from other experts and focusing on ideas that seem consistent with an American ethos that is wary of censorship and centralized bureaucracy. We workshopped these ideas for technical feasibility with an MIT engineering group organized by Eric’s co-author on The Age of AI, Dan Huttenlocher.

We suggest five reforms, aimed mostly at increasing everyone’s ability to trust the people, algorithms, and content they encounter online.

1. Authenticate all users, including bots

In real-world contexts, people who act like jerks quickly develop a bad reputation. Some companies have succeeded brilliantly because they found ways to bring the dynamics of reputation online, through trust rankings that allow people to confidently buy from strangers anywhere in the world (eBay) or step into a stranger’s car (Uber). You don’t know your driver’s last name and he doesn’t know yours, but the platform knows who you both are and is able to incentivize good behavior and punish gross violations, for everyone’s benefit.

Large social-media platforms should be required to do something similar. Trust and the tenor of online conversations would improve greatly if the platforms were governed by something akin to the “know your customer” laws in banking. Users could still open accounts with pseudonyms, but the person behind the account should be authenticated, and a growing number of companies are developing new methods to do so conveniently.

[Read: It’s time to protect yourself from AI voice scams]

Bots should undergo a similar process. Many of them serve useful functions, such as automating news releases from organizations, but all accounts run by nonhumans should be clearly marked as such, and users should be given the option to limit their social world to authenticated humans. Even if Congress is unwilling to mandate such procedures, pressure from European regulators, users who want a better experience, and advertisers (who would benefit from accurate data about the number of humans their ads are reaching) might be enough to bring about these changes.

2. Mark AI-generated audio and visual content

People routinely use photo-editing software to change lighting or crop photographs that they post, and viewers do not feel deceived. But when editing software is used to insert people or objects into a photograph that were not there in real life, it feels more manipulative and dishonest, unless the additions are clearly labeled (as happens on real-estate sites, where buyers can see what a house would look like filled with AI-generated furniture). As AI begins to create photorealistic images, compelling videos, and audio tracks at great scale from nothing more than a command prompt, governments and platforms will need to draft rules for marking such creations indelibly and labeling them clearly.

Platforms or governments should mandate the use of digital watermarks for AI-generated content, or require other technological measures to ensure that manipulated images are not interpreted as real. Platforms should also ban deepfakes that show identifiable people engaged in sexual or violent acts, even if they are marked as fakes, just as they now ban child pornography. Revenge porn is already a moral abomination. If we don’t act quickly, it could become an epidemic.

3. Require data transparency with users, government officials, and researchers

Social-media platforms are rewiring childhood, democracy, and society, yet legislators, regulators, and researchers are often unable to see what’s happening behind the scenes. For example, no one outside Instagram knows what teens are collectively seeing on that platform’s feeds, or how changes to platform design might influence mental health. And only those at the companies have access to the algorithms being used.

After years of frustration with this state of affairs, the EU recently passed a new law––the Digital Services Act––that contains a host of data-transparency mandates. The U.S. should follow suit. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from researchers whose projects have been approved by the National Science Foundation.

Greater transparency will help consumers decide which services to use and which features to enable. It will help advertisers decide whether their money is being well spent. It will also encourage better behavior from the platforms: Companies, like people, improve their behavior when they know they are being monitored.

4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote

When Congress enacted the Communications Decency Act in 1996, in the early days of the internet, it was trying to set rules for social-media companies that looked and acted a lot like passive bulletin boards. And we agree with that law’s basic principle that platforms should not face a potential lawsuit over each of the billions of posts on their sites.

But today’s platforms are not passive bulletin boards. Many use algorithms, AI, and architectural features to boost some posts and bury others. (A 2019 internal Facebook memo brought to light by the whistleblower Frances Haugen in 2021 was titled “We are responsible for viral content.”) Because the motive for boosting is often to maximize users’ engagement for the purpose of selling advertisements, it seems obvious that the platforms should bear some moral responsibility if they recklessly spread harmful or false content in a way that, say, AOL could not have done in 1996.

The Supreme Court is now addressing this concern in a pair of cases brought by the families of victims of terrorist acts. If the Court chooses not to alter the wide protections currently afforded to the platforms, then Congress should update and refine the law in light of current technological realities and the certainty that AI is about to make everything far wilder and weirder.

5. Raise the age of “internet adulthood” to 16 and enforce it

In the offline world, we have centuries of experience living with and caring for children. We are also the beneficiaries of a consumer-safety movement that began in the 1960s: Laws now mandate car seats and lead-free paint, as well as age checks to buy alcohol, tobacco, and pornography; to enter gambling casinos; and to work as a stripper or a coal miner.

But when children’s lives moved rapidly onto their phones in the early 2010s, they found a world with few protections or restrictions. Preteens and teens can and do watch hardcore porn, join suicide-promotion groups, gamble, or get paid to masturbate for strangers just by lying about their age. Some of the growing number of children who kill themselves do so after getting caught up in some of these dangerous activities.

The age limits in our current internet were set into law in 1998 when Congress passed the Children’s Online Privacy Protection Act. The bill, as introduced by then-Representative Ed Markey of Massachusetts, was intended to stop companies from collecting and disseminating data from children under 16 without parental consent. But lobbyists for e-commerce companies teamed up with civil-liberties groups advocating for children’s rights to lower the age to 13, and the law that was finally enacted made companies liable only if they had “actual knowledge” that a user was 12 or younger. As long as children say that they are 13, the platforms let them open accounts, which is why so many children are heavy users of Instagram, Snapchat, and TikTok by age 10 or 11.

Today we can see that 13, much less 10 or 11, is just too young to be given full run of the internet. Sixteen was a much better minimum age. Recent research shows that the greatest damage from social media seems to occur during the rapid brain rewiring of early puberty, around ages 11 to 13 for girls and slightly later for boys. We must protect children from predation and addiction most vigorously during this time, and we must hold companies responsible for recruiting or even just admitting underage users, as we do for bars and casinos.

Recent advances in AI give us technology that is in some respects godlike––able to create beautiful and brilliant artificial people, or bring celebrities and loved ones back from the dead. But with new powers come new risks and new responsibilities. Social media is hardly the only cause of polarization and fragmentation today, but AI seems almost certain to make social media, in particular, far more destructive. The five reforms we have suggested will reduce the damage, increase trust, and create more space for legislators, tech companies, and ordinary citizens to breathe, talk, and think together about the momentous challenges and opportunities we face in the new age of AI.

Tucker Carlson Was Wrong About the Media

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 05 › tucker-carlson-media › 673952

Welcome to Up for Debate. Each week, Conor Friedersdorf rounds up timely conversations and solicits reader responses to one thought-provoking question. Later, he publishes some thoughtful replies. Sign up for the newsletter here.

Question of the Week

Today I invite emails debating any of the following subjects: war, civil liberties, emerging science, demographic change, corporate power, or natural resources. Read on for more context.

Send your responses to conor@theatlantic.com or simply reply to this email.

Conversations of Note

After the television host Tucker Carlson was fired by Fox News, he posted a video message to Twitter that quickly went viral. In it, he noted that, in his newfound “time off,” he has observed that “most of the debates you see on television” are so stupid and irrelevant that, in five years, we won’t even remember we had them. “Trust me, as someone who’s participated,” he added, which squares with my impression of his show––an assessment I feel comfortable making only because I have carefully documented its shoddy reasoning.

But then Carlson added: “The undeniably big topics, the ones that will define our future, get virtually no discussion at all. War. Civil liberties. Emerging science. Demographic change. Corporate power. Natural resources. When was the last time you heard a legitimate debate about any of those issues? It’s been a long time. Debates like that are not permitted in American media.” I disagree, and not just because I intend to air your perspectives on those very subjects.

Last March, this newsletter invited debate about the war in Ukraine and ran your responses. On the whole, The Atlantic––and most of the mainstream media––has published far more articles supportive of Western aid for Ukraine, as I am, than contrary perspectives. But as you can see, this newsletter has made it a point to highlight the smartest writing I could find from differing viewpoints. If you look, you can find additional examples of contrasting perspectives from across the U.S. media: in The New York Times, The Washington Post, The Nation, National Review, Vox, and beyond. There are all sorts of plausible critiques of the way the American news media has covered Ukraine. But “debate is not permitted” is demonstrably false.

On civil liberties, which I’ve championed on scores of occasions in The Atlantic, the notion that debate isn’t permitted is likewise preposterous. Few issues are debated more than the parameters of free speech, abortion rights, gun rights, transgender rights, pandemic rights and restrictions, and more. “Emerging science” is a bit vague, but surely debates about mRNA-vaccine mandates and artificial intelligence count. The Atlantic has repeatedly published entries in ongoing debates about demographic change. I understand corporate power to be a perennial topic of debate in journalistic organizations. As for natural resources, I’ve recently read about subjects including climate change, gas stoves, Colorado River water supply, oil drilling and pipelines, and plastics pollution.

Again, there are all sorts of plausible critiques of the media, on those subjects and others, but the particular critique that Carlson actually prepared and uttered is demonstrably false, so I find it strange that so many people reacted to it by treating Carlson as if he were a truth-teller. Lots of people in the American media work much harder at avoiding the utterance of falsehoods.

How to Mark May 1?

The law professor Ilya Somin commemorates it every year in a highly nontraditional fashion, arguing that we all ought to treat the traditional workers’ holiday as Victims of Communism Day.

Here’s his case:

Since 2007, I have advocated using this date as an international Victims of Communism Day. I outlined the rationale for this proposal (which was not my original idea) in my very first post on the subject: May Day began as a holiday for socialists and labor union activists, not just communists. But over time, the date was taken over by the Soviet Union and other communist regimes and used as a propaganda tool to prop up their [authority]. I suggest that we instead use it as a day to commemorate those regimes' millions of victims. The authoritative Black Book of Communism estimates the total at 80 to 100 million dead, greater than that caused by all other twentieth century tyrannies combined. We appropriately have a Holocaust Memorial Day. It is equally appropriate to commemorate the victims of the twentieth century’s other great totalitarian tyranny. And May Day is the most fitting day to do so …

Our comparative neglect of communist crimes has serious costs. Victims of Communism Day can serve the dual purpose of appropriately commemorating the millions of victims, and diminishing the likelihood that such atrocities will recur. Just as Holocaust Memorial Day and other similar events promote awareness of the dangers of racism, anti-Semitism, and radical nationalism, so Victims of Communism Day can increase awareness of the dangers of left-wing forms of totalitarianism, and government domination of the economy and civil society.

Meanwhile, at the World Socialist Web Site, David North published the speech he gave to open the International May Day Online Rally. His remarks included provocative statements about the war in Ukraine:

The present war in Ukraine and the escalating conflict with China are the manifestations, though on a much more advanced and complex level, of the global contradictions analyzed by Lenin more than a century ago. Far from being the sudden and unexpected outcome of Putin’s “unprovoked” invasion—as if the expansion of NATO 800 miles eastward since 1991 did not constitute a provocation against Russia—the war in Ukraine is the continuation and escalation of 30 years of continuous war waged by the United States. The essential aim of the unending series of conflicts has been to offset the protracted economic decline of US imperialism and to secure its global hegemony through military conquest.

In 1934, Leon Trotsky wrote that while German imperialism sought to “organize Europe,” it was the ambition of US imperialism to “organize the world.” Using language that seemed intended to confirm Trotsky’s analysis, Joe Biden, then a candidate for the presidency, wrote in April 2020: “The Biden foreign policy will place the United States back at the head of the table … the world does not organize itself.” But the United States confronts a world that does not necessarily want to be organized by the United States. The role of the dollar as the world reserve currency, the financial underpinning of American geo-political supremacy, is being increasingly challenged. The growing role of China as an economic and military competitor is viewed by Washington as an existential threat to American dominance.

Imperialism is objectionable, but to me that premise leads to a starkly different conclusion: that the imperial ambitions of Russia and China ought to be resisted, and that insofar as NATO or the United States helps Ukraine or Taiwan, we are reducing the likelihood of imperial conquest, not engaging in it.

More to Come on Trans Issues

Another batch of responses from readers should be coming soon. (If you missed the first batch, they’re here.) In the meantime, here’s a question from the Up for Debate reader Paul, who writes:

I have come to understand and accept that the concept of “gender” is largely a social construct, is not synonymous with “sex,” and indeed is not dependent upon or related to sex in any objective way. This notion—that gender and sex are independent attributes—is, I think, one of the ideas that is fundamental to understanding and accepting transgender people. For many young people, this idea seems simple and self-evident. Yet, for anyone who has lived any length of time in a culture where, for centuries, these two words held virtually identical meanings, separating them can be a real struggle.

It is with that thought in mind—the acceptance of the fundamental difference between gender and sex—that I approach the issue of transgender people participating in competitive sport with the following sincere question: Are sports competitions divided by gender or are they divided by sex? If sports are divided by sex, then it follows logically that gender should have nothing to do with the discussion. That is, it follows that transgender people should only participate in sports along with those of their same birth sex. On the other hand, if sports participation is divided along gender lines, then everyone of the same gender (obviously, by definition this must include transgender people) should be invited to participate, regardless of sex. Is there more evidence that sports are arranged as a competition between those of the same sex, or those of the same gender?

Provocation of the Week

At Hold That Thought, Sarah Haider writes that for a long time, she assumed that “with no material incentives in one direction or another, people will think more freely. A world in which no one has to worry about where their paycheck will come will be a world in which people are more likely to be courageous, and tell the truth more openly. And of course, it is obvious how financial incentives can distort truth-telling. This is, of course, the justification for academic tenure.”

Now she wonders if tenure may actually pave the way for more conformity. She explains:

First and foremost, it is not the case that free people will necessarily speak truthfully. No matter the romantic notions we like to hold about ourselves, humans do not deeply desire to “speak the truth”. There are more beautiful things to say, things that make us feel good about ourselves and our respective tribes, things that grant us hope and moral strength and personal significance—truth, meanwhile, is insufferably inconvenient, occasionally ugly, and insensitive to our feelings. But lies, by their very nature, can be as beautiful and emotionally satisfying as our imaginations will allow them.

Unfortunately, some degree of fidelity to reality is often required to prosper, and so occasionally we must choose truth. But that degree is dependent on our environments: lies are a luxury which some can afford more than others. Material freedom isn’t just the freedom to tell the truth, it is the freedom to tell lies and get away with it. As I’ve noted before, the lack of economic pressures can clear the way for independent thinking, but they can also remove crucial “skin in the game” that might keep one tethered to reality.

I suspect that on the whole, tenure might simply make more room for social pressures to pull with fewer impediments. If keeping your job is no longer a concern, you will not be “concern-free”. Your mind will be more occupied instead by luxury concerns, like winning and maintaining the esteem of your peers. (And in fact, we do see this playing out at universities. Professors are more protected from the pressures of the outside world due to tenure, yet they are uniquely subservient to the politics within their local university environment.) …

Academics actively shape their own environments. They grant students their doctorates, they help hire other faculty, they elect their department chairs. When an idea becomes prominent in academia, the structure of the environment selects for more of the same … When you are forced to coexist with the enemy, you develop norms which allow both parties to function with as much freedom and fairness as possible. Ideologically mixed groups will, in other words, tend to emphasize objective process because they do not agree on ends. This environment is fairly conducive to the pursuit of truth.

More uniform groups, on the other hand, will tend to abandon process—rushing instead towards the end they are predisposed to believe is true and willing to use dubious means to get there. This creates a hostile environment for dissenting members, and over time, there will be less of them and more uniformity, which will inevitably lead to an even more hostile environment for dissent. When a majority ideology develops, it is likely only to increase in influence, and when it is sufficiently powerful, it can begin competing with reality itself.

I retain hope that tenure does more good than harm, but I encourage faculty members who enjoy it to show more courage in dissenting from any orthodoxies of thought they regard as questionable.