Revenge of the Office

The Atlantic

https://www.theatlantic.com/ideas/archive/2024/10/remote-work-amazon-executives/680108/

More than a year after the World Health Organization declared an end to the pandemic public-health emergency, you might expect the remote-work wars to have reached a peace settlement. Plenty of academic research suggests that hybrid policies, which white-collar professionals favor overwhelmingly, pan out well for companies and their employees.

But last month, Amazon CEO Andy Jassy announced that the company’s more than 350,000 corporate employees must return to the office five days a week come January. In a memo, Jassy explained that he wants teams to be “joined at the hip” as they try to out-innovate other companies.

His employees don’t seem happy about it. The Amazon announcement was met with white-collar America’s version of a protest—a petition, angry LinkedIn posts, tense debates on Slack—and experts predict that some top talent will leave for companies with more flexible policies. Since May 2023, Amazon has allowed corporate employees to work from home two days a week by default. But to Jassy, 15 months of hybrid work only demonstrated the superiority of full-time in-office collaboration.

[Derek Thompson: The biggest problem with remote work]

Many corporate executives agree with him. Hybrid arrangements currently dominate white-collar workplaces, but a recent survey of 400 CEOs in the United States by the accounting firm KPMG found that 79 percent want their corporate employees to be in the office full-time in the next three years, up from 63 percent the year before. Many of America’s executives have had enough of the remote-work experiment, and as the Amazon announcement suggests, some are ready to fight to end it. They seem to be fighting not only because they believe that the evidence is on their side, but also because they long to return to the pre-pandemic office experience. (Management professors even have a name for this: “executive nostalgia.”) Quite simply, they are convinced that having employees in the office is good for business—and that having them in the office more is even better.

Managers have some empirical basis for preferring in-person work. A 2023 study of one Fortune 500 company found that software engineers who worked in proximity to one another received 22 percent more feedback than engineers who didn’t, and ended up producing better code. “When I was on Wall Street, I learned by showing up to the office,” Imran Khan, a hedge-fund founder and the former chief strategy officer of Snap, told me. “How do you learn if you don’t come to work?”

Remote work can also take a toll on creativity and culture. A study of Microsoft employees found that communication stalled when they went remote during the pandemic. Another found that people came up with less-creative product pitches when they met over Zoom rather than in person. Eric Pritchett, an entrepreneur and a Harvard Business Review adviser, had the ill fortune to launch Terzo, his AI start-up, in March 2020. He left California for Georgia, where social-distancing rules were laxer and he could call people into the office. “You think of these iconic companies,” he said, counting off Amazon, Tesla, and Nike. “These iconic companies didn’t invent themselves on Zoom.” (Even Zoom, in August 2023, told employees to come into the office two days a week.) Jassy, the Amazon CEO, wrote in his back-to-office memo that he wanted Amazon to operate “like the world’s largest startup.”

But some Amazon employees don’t buy Jassy’s argument. CJ Felli has worked at Amazon Web Services since 2019. When the pandemic sent workers home, he was apprehensive about spending every day at his Seattle apartment. Now he’s a work-from-home evangelist. “I was able to deliver projects,” he told me. “I could work longer than I could in the office, I could eat healthier, and I was able to get more done.” He earned a promotion during the pandemic and was praised for his efficiency, which he sees as further evidence of his productivity gains. His colleagues who have kids or who get distracted in Amazon’s open-floor-plan office tell him that their work has improved too.

If remote work is such a drag, its defenders ask, then why has business been booming since the pandemic? Profits are up, even as employees code in sweatpants or practice their golf swing. As one Amazon employee wrote on LinkedIn, “I’d rather spend a couple of days being really productive at my house, taking lunch walks with my dog (or maybe a bike ride). This is how my brain works.” One mid-level manager at Salesforce, who spoke on condition of anonymity so that he could criticize his employer’s policies, pointed to the company’s success throughout the pandemic. “We’re not machines either,” he told me. “People aren’t meant to just be wrung like a towel to get every drip of productivity out of them.”

The big-picture data are a bit fuzzy. Some studies have found a modest negative effect on productivity—defined as work accomplished per hour on the clock—when companies switch to fully remote work. But this can be at least partly offset by the commuting time that workers regain, some of which they spend working longer hours. “There is no sound reason to expect the productivity effects of remote work to be uniform across jobs, workers, managers, and organizations,” as one academic overview puts it. The debate between bosses and workers “feels a lot like my view of how productive my teenager is being when she says she’s working while talking to her friends on her cellphone,” Nicholas Bloom, a Stanford professor who co-authored the overview, told me. “She’s probably doing more work than I think—which is zero—and probably less work than she thinks, which is a lot.”

In theory, hybrid work should be the compromise that satisfies both sides. A May Gallup poll found that only 7 percent of employees wanted to work in person five days a week, 33 percent wanted to be fully remote, and 60 percent wanted some kind of hybrid arrangement. A study by Bloom found that employees of the travel site Trip.com who spent three days in the office were just as likely to be promoted as their fully in-person counterparts. They wrote code of the same caliber, and were more likely to stay at the company. Crucially, after a six-month trial, managers who had initially opposed hybrid work had revised their opinion. All of that helps explain why the percentage of companies with a hybrid policy for most corporate employees doubled from 20 percent at the start of 2023 to about 40 percent today, according to the Flex Index, which tracks work arrangements.

[Ed Zitron: Why managers fear a remote-work future]

But as Amazon’s announcement shows, the decisions around work arrangements were never going to be just about the data. When Jassy spoke last year about the company’s decision to move from a remote policy to a hybrid one, he said that it was based on a “judgment” by the leadership team but wasn’t informed by specific findings. Executives might just have an intuition that in-office work is better for the companies they helped build. It may make their jobs easier to have everyone close by. They also seem to find it hard to believe that their employees are doing as much work when they’re at home as when they’re in the office, where everyone can see them. Eric Schmidt, the former CEO of Google, said the company fell behind in the AI arms race because employees weren’t in the office. “Google decided that work-life balance and going home early and working from home was more important than winning,” he said in a speech at Stanford. “The reason start-ups work is because the people work like hell.” (He later claimed that he “misspoke about Google and their work hours.”)

“I largely do believe we are moving toward some truce between executives and employees,” Rob Sadow, the CEO of Flex Index, told me. “But I also think this is much less settled than the average person thinks it is.” He predicts that the battle will drag on for years. Companies might have trouble actually enforcing a full-time in-office policy for workers who have gotten used to flexibility. Talented coders are still in high demand. Theoretically, if enough people from Amazon decamp to Microsoft, say, then Jassy could be all but forced to backtrack. Bloom has followed one company that officially requires people to be in the office three days a week; most employees spend fewer than two days in person. He was skeptical that Amazon would discipline a high-performing employee who preferred to code from the couch. The mid-level manager at Salesforce told me that he is preparing a list of excuses he can offer to executives who ask why his team isn’t in the office.

But executives have tools at their disposal too. Amazon and Google have already begun tracking badge data and confronting hybrid workers who don’t show up as often as they’re told to. (An Amazon spokesperson told me that the company hopes to eventually stop surveilling employees’ work locations.) Even if bosses struggle to penalize their employees, perhaps they can lure them in with promises of career advancement. Eighty-six percent of the CEOs in the KPMG survey said they would reward employees who worked in person with promotions and raises. “You’re a young person coming out of college, and you want to be CEO someday—you will not get there via remote work,” Ron Kruszewski, the CEO of the investment bank Stifel, says of his company. “It just won’t happen.”

Shh, ChatGPT. That’s a Secret.

The Atlantic

https://www.theatlantic.com/technology/archive/2024/10/chatbot-transcript-data-advertising/680112/

This past spring, a man in Washington State worried that his marriage was on the verge of collapse. “I am depressed and going a little crazy, still love her and want to win her back,” he typed into ChatGPT. With the chatbot’s help, he wanted to write a letter protesting her decision to file for divorce and post it to their bedroom door. “Emphasize my deep guilt, shame, and remorse for not nurturing and being a better husband, father, and provider,” he wrote. In another message, he asked ChatGPT to write his wife a poem “so epic that it could make her change her mind but not cheesy or over the top.”

The man’s chat history was included in the WildChat data set, a collection of 1 million ChatGPT conversations gathered consensually by researchers to document how people are interacting with the popular chatbot. Some conversations are filled with requests for marketing copy and homework help. Others might make you feel as if you’re gazing into the living rooms of unwitting strangers. Here, the most intimate details of people’s lives are on full display: A school case manager reveals details of specific students’ learning disabilities, a minor frets over possible legal charges, a girl laments the sound of her own laugh.

People share personal information about themselves all the time online, whether in Google searches (“best couples therapists”) or Amazon orders (“pregnancy test”). But chatbots are uniquely good at getting us to reveal details about ourselves. Common usages, such as asking for personal advice and résumé help, can expose more about a user “than they ever would have to any individual website previously,” Peter Henderson, a computer scientist at Princeton, told me in an email. For AI companies, your secrets might turn out to be a gold mine.

Would you want someone to know everything you’ve Googled this month? Probably not. But whereas most Google queries are only a few words long, chatbot conversations can stretch on, sometimes for hours, each message rich with data. And with a traditional search engine, a query that’s too specific won’t yield many results. By contrast, the more information a user includes in any one prompt to a chatbot, the better the answer they will receive. As a result, alongside text, people are uploading sensitive documents, such as medical reports, and screenshots of text conversations with their ex. With chatbots, as with search engines, it’s difficult to verify how perfectly each interaction represents a user’s real life. The man in Washington might have just been messing around with ChatGPT.

But on the whole, users are disclosing real things about themselves, and AI companies are taking note. OpenAI CEO Sam Altman recently told my colleague Charlie Warzel that he has been “positively surprised about how willing people are to share very personal details with an LLM.” In some cases, he added, users may even feel more comfortable talking with AI than they would with a friend. There’s a clear reason for this: Computers, unlike humans, don’t judge. When people converse with one another, we engage in “impression management,” says Jonathan Gratch, a professor of computer science and psychology at the University of Southern California—we intentionally regulate our behavior to hide weaknesses. People “don’t see the machine as sort of socially evaluating them in the same way that a person might,” he told me.

Of course, OpenAI and its peers promise to keep your conversations secure. But on today’s internet, privacy is an illusion. AI is no exception. This past summer, a bug in ChatGPT’s Mac desktop app left user conversations unencrypted, briefly exposing chat logs to bad actors. Last month, a security researcher shared a vulnerability that could have allowed attackers to inject spyware into ChatGPT in order to extract conversations. (OpenAI has fixed both issues.)

Chat logs could also provide evidence in criminal investigations, just as material from platforms such as Facebook and Google Search long has. The FBI tried to discern the motive of the Donald Trump–rally shooter by looking through his search history. When former Senator Robert Menendez of New Jersey was charged with accepting gold bars from associates of the Egyptian government, his search history was a major piece of evidence that led to his conviction earlier this year. (“How much is one kilo of gold worth,” he had searched.) Chatbots are still new enough that they haven’t widely yielded evidence in lawsuits, but they might provide a much richer source of information for law enforcement, Henderson said.

AI systems also present new risks. Chatbot conversations are commonly retained by the companies that develop them and are then used to train AI models. Something you reveal to an AI tool in confidence could theoretically later be regurgitated to future users. Part of The New York Times’ lawsuit against OpenAI hinges on the claim that GPT-4 memorized passages from Times stories and then relayed them verbatim. As a result of this concern over memorization, many companies have banned ChatGPT and other bots in order to prevent corporate secrets from leaking. (The Atlantic recently entered into a corporate partnership with OpenAI.)

Of course, these are all edge cases. The man who asked ChatGPT to save his marriage probably doesn’t have to worry about his chat history appearing in court; nor are his requests for “epic” poetry likely to show up alongside his name to other users. Still, AI companies are quietly accumulating tremendous amounts of chat logs, and their data policies generally let them do what they want. That may mean—what else?—ads. So far, many AI start-ups, including OpenAI and Anthropic, have been reluctant to embrace advertising. But these companies are under great pressure to prove that the many billions in AI investment will pay off. It’s hard to imagine that generative AI might “somehow circumvent the ad-monetization scheme,” Rishi Bommasani, an AI researcher at Stanford, told me.

In the short term, that could mean that sensitive chat-log data is used to generate targeted ads much like the ones that already litter the internet. In September 2023, Snapchat, which is used by a majority of American teens, announced that it would be using content from conversations with My AI, its in-app chatbot, to personalize ads. If you ask My AI, “Who makes the best electric guitar?,” you might see a response accompanied by a sponsored link to Fender’s website.

If that sounds familiar, it should. Early versions of AI advertising may continue to look much like the sponsored links that sometimes accompany Google Search results. But because generative AI has access to such intimate information, ads could take on completely new forms. Gratch doesn’t think technology companies have figured out how best to mine user-chat data. “But it’s there on their servers,” he told me. “They’ll figure it out some day.” After all, for a large technology company, even a 1 percent difference in a user’s willingness to click on an advertisement translates into a lot of money.

People’s readiness to offer up personal details to chatbots can also reveal aspects of users’ self-image and how susceptible they are to what Gratch called “influence tactics.” In a recent evaluation, OpenAI examined how effectively its latest series of models could manipulate an older model, GPT-4o, into making a payment in a simulated game. Before safety mitigations, one of the new models was able to successfully con the older one more than 25 percent of the time. If the new models can sway GPT-4o, they might also be able to sway humans. An AI company blindly optimizing for advertising revenue could encourage a chatbot to act manipulatively on private information.

The potential value of chat data could also lead companies outside the technology industry to double down on chatbot development, Nick Martin, a co-founder of the AI start-up Direqt, told me. Trader Joe’s could offer a chatbot that assists users with meal planning, or Peloton could create a bot designed to offer insights on fitness. These conversational interfaces might encourage users to reveal more about their nutrition or fitness goals than they otherwise would. Instead of companies inferring information about users from messy data trails, users are telling them their secrets outright.

For now, the most dystopian of these scenarios are largely hypothetical. A company like OpenAI, with a reputation to protect, surely isn’t going to engineer its chatbots to swindle a distressed man in the middle of a divorce. Nor does any of this mean you should stop telling ChatGPT your secrets. In the mental calculus of daily life, the marginal benefit of getting AI to assist with a stalled visa application or a complicated insurance claim may outweigh the accompanying privacy concerns. This dynamic is at play across much of the ad-supported web. The arc of the internet bends toward advertising, and AI may be no exception.

It’s easy to get swept up in all the breathless language about the world-changing potential of AI, a technology that Google’s CEO has described as “more profound than fire.” That people are willing to offer up such intimate details about their lives so easily is a testament to AI’s allure. But chatbots may become the latest innovation in a long lineage of advertising technology designed to extract as much information from you as possible. In this way, they are not a radical departure from the present consumer internet, but an aggressive continuation of it. Online, your secrets are always for sale.