Itemoids

ChatGPT

FTC chair Lina Khan warns AI could 'turbocharge' fraud and scams

CNN

www.cnn.com/2023/04/18/tech/lina-khan-ai-warning/index.html

Artificial intelligence tools such as ChatGPT could lead to a "turbocharging" of consumer harms including fraud and scams, and the US government has substantial authority to crack down on AI-driven consumer harms under existing law, members of the Federal Trade Commission said Tuesday.

You Should Ask a Chatbot to Make You a Drink

The Atlantic

www.theatlantic.com/technology/archive/2023/04/chatgpt-generative-ai-reliability-creativity-grocery-list/673759

Two weeks in a row, ChatGPT botched my grocery list. I thought that I had found a really solid, practical use for AI—automating one of my least favorite Sunday chores—but the bot turned out to be pretty darn bad at it. I fed it a link to a recipe for cauliflower shawarma with a spicy sauce and asked it to compile the ingredients in a list. It forgot the pita, so I forgot the pita, and then I had to use tortillas instead. The following week, I gave it a link to a taco recipe. It forgot the tortillas.

How is AI going to revolutionize the world if it can’t even revolutionize my groceries? I vented to my colleague Derek Thompson, who’s written about the technology and its potential. He told me that he’d been using ChatGPT in almost the reverse way, by offering it cocktail ingredients he already had in his pantry and asking for drink recipes. I decided to give it a go, and soon enough I was sipping a pleasant mocktail made with jalapeño and seltzer.  

The AI—at least in its free iteration—was pretty bad at gathering information from a random website link in an orderly fashion, but it did a good job playing with the ingredients that I provided. It is adept at a kind of creative synthesis—picking up on associations between words and pairing them in both familiar and novel ways to delight the user. Understanding why could give us a richer sense of how to deploy generative AI moving forward—and help us avoid putting it to wrongheaded, even harmful uses.

[Read: ChatGPT will change housework]

In addition to being a dismal grocery shopper, ChatGPT has struggled in the past to do basic math. We think of computers as logical and exacting, but ChatGPT is something different: a large language model that has been trained on big chunks of the internet to create associations between words, which it then “speaks” back to you. It may have read the encyclopedia, but it is not itself an encyclopedia. The program is less concerned with things being true or false; instead, it analyzes large amounts of information and provides answers that are highly probable based on our language patterns.

Some stochasticity or randomness—what the computer scientist Stephen Wolfram calls “voodoo”—is built into the model. Rather than always generating results based on what is most likely to be accurate, which would be pretty boring and predictable by definition, ChatGPT will sometimes choose a less obvious bent, something that is associated with the prompt but statistically less likely to come up. It will tell you that the word pours finishes the idiom beginning with “When it rains, it …” But if you push it to come up with other options, it may suggest “When it rains, it drizzles” or “When it rains, it storms.” As Kathleen Creel, a professor of philosophy and computer science at Northeastern University, put it: “When you give it a prompt, it says, Okay, based on this prompt … this word is 60 percent most likely to be a good word to go next, and this word is 20 percent, and this word is 5 percent.” Sometimes that less likely option is inaccurate or problematic in some way, leading to the popular criticism that large language models are “stochastic parrots”: able to piece together words but ignorant of meaning. Any given chatbot’s randomness can be dialed up or dialed down by its creator.
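Creel's description of next-token choice can be sketched in a few lines of Python. The words and the underlying scores below are invented for illustration (they are not real model output), and the `temperature` parameter plays the role of the randomness "dial" the article says a chatbot's creator can turn up or down:

```python
import math
import random

# Toy next-token scores for the prompt "When it rains, it ..."
# (illustrative values only, echoing Creel's 60/20/5 example)
logits = {"pours": 3.0, "drizzles": 1.9, "storms": 0.5, "snows": -0.5}

def sample_next(logits, temperature=1.0, rng=random):
    """Pick the next word by sampling from a softmax over the scores.

    A low temperature sharpens the distribution toward the likeliest
    word ("pours"); a high temperature flattens it, making the less
    obvious continuations ("drizzles", "storms") more likely to appear.
    """
    scaled = {word: score / temperature for word, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {word: math.exp(v) / total for word, v in scaled.items()}

    # Draw one word in proportion to its probability.
    r = rng.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # numerical fallback: return the last word
```

With `temperature=0.01` the function behaves almost greedily and nearly always returns "pours"; at `temperature=1.0` or above, the rarer completions start to surface, which is the trade-off between predictability and surprise the article describes.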

“It’s actually not in the business of doing something exactly,” Daniel Rockmore, a professor of math and computer science at Dartmouth College, told me. “It’s really in the business of doing something that’s super likely.” That distinction is meaningful, even in the realm of routine chores: Giving me a grocery list that is likely to be right isn’t the same as giving me a grocery list that includes everything I need. But when it comes to putting together a mixed drink based on a set of given ingredients, there isn’t necessarily one right way to do things. “You can get a shitty cocktail, but you kind of can’t get a wrong cocktail,” Rockmore pointed out.

As if to test Rockmore’s theory, Axelrad, a beer garden in Houston, recently ran a special called “Humans vs. Machines,” which pitted ChatGPT recipes against those constructed by human mixologists. The bar prompted ChatGPT to design a cocktail—for example, one inspired by the Legend of Zelda video-game series—and then tested it against one made by a bartender. Patrons could try each concoction and vote for their favorite. The bar ran the competition four times, and the robots and humans ended up tied. ChatGPT’s remake of Axelrad’s signature Blackberry Bramble Jam even triumphed over the original.

Lui Fernandes, a restaurant and bar owner who runs a YouTube channel about cocktail making, has likewise been toying with the technology. He told me that ChatGPT’s recipes are “actually very, very good,” though far from flawless. When he started pushing the limits of conventional ingredients, it “spit out some crazy recipes” that he would then have to adjust. Similarly, when my editor offered ChatGPT an objectively awful list of potential ingredients—Aperol, gin, half a beer, and a sack of frozen peas—it suggested he make a “Beer-Gin Spritz” with a garnish of frozen peas for a “fun and unexpected touch.” (You can always count on editors to attempt to break your story.) ChatGPT may understand based on its training data that vegetables can sometimes work as a drink garnish, like celery in a Bloody Mary, but it couldn’t understand why peas would be an odd choice—even if the drink itself was odd, too.

[Read: Nine AI chatbots you can play with right now]

“Every now and again, it’s gonna throw up something which is totally disgusting that it somehow thinks is an extension of the things we like,” Marcus du Sautoy, a mathematician and professor at the University of Oxford, told me. Other times its choices might inspire us, like in the case of the Blackberry Bramble Jam. It is also, I should say, excellent at writing original recipes for classic and familiar drinks, having read and synthesized countless cocktail recipes itself.

What we’re basically talking about here is creativity. When humans make art, they remix what they know and toy with boundaries. Cocktails are more art than anything else: There are recipes for specific drinks, but they always boil down to taste. In this simple, low-stakes context, ChatGPT’s creative synthesis can help us find an unexpected solution to a quotidian problem.   

But this creativity has limits. Giorgio Franceschelli, a Ph.D. student in computer science and engineering at the University of Bologna who conducted a study on these models’ imaginative potential, argued over email that the technology is inherently restricted, because it leans on existing material. It cannot achieve transformational creativity, “where ideas currently inconceivable … are actually made true.”

Although ChatGPT may help us explore our own creativity, it also risks flattening what we produce. Creel warned about the “cultural homogeneity” of cocktail recipes produced by the bot. Similar to how recommendation algorithms have arguably homogenized popular music, chatbots could condense the cocktail scene to one that just plays the hits over and over. And because of how they were trained, AI tools may disproportionately offer the preferences of the English-speaking internet. Fernandes—a Brazilian immigrant who, in tribute to his heritage, chooses to focus on South and Latin American spirits that other bars may overlook—found that the bot struggled to balance cachaça or pisco cocktails. “It wasn’t actually able to give me as good of a recipe as when I asked it about bourbon, rye, or gin,” he said. If we’re not thoughtful about how we use AI, it could lead us toward a monoculture beyond just our bars.

[Read: What have humans just unleashed?]

Technology experts and bartenders alike told me that we should think of AI-generated cocktail recipes as a first draft, not a final product. They encouraged a feedback loop between human and bot: work with it to home in on what you want.

And this advice expands beyond cocktails. Rockmore proposed treating its responses as “a suggestion from someone that you don’t really know but you think is smart” rather than considering the tool to be “the all-knowing master oracle that has to be followed.”

Too often, it seems, we’re turning to AI chatbots for answers, when perhaps we should be thinking of them as unreliable—but fun and well-read—collaborators. Sure, they’ve yet to save me any time when it comes to things that I need done precisely. But they do make a nice spicy margarita.  

Why Chatbot AI Is a Problem for China

The Atlantic

www.theatlantic.com/international/archive/2023/04/chatbot-ai-problem-china/673754

ChatGPT, the chatbot designed by the San Francisco–based company OpenAI, has elicited excitement, some unease, and much wonderment around the world. In China, though, the U.S. bot and the artificial intelligence that makes it work represent a threat to the country’s political system and global ambitions. This is because chatbots such as ChatGPT revel in information—something the Chinese state insists on controlling.

The Chinese Communist Party keeps itself in power through censorship, and under its domineering leader, Xi Jinping, that effort has intensified in a quest for greater ideological conformity. Chatbots are especially tricky to censor. What they blurt out can be unpredictable, and when it comes to micromanaging what the Chinese public knows, reads, and shares, the authorities don’t like surprises.

Yet this political imperative collides with the country’s urgent and essential need for innovation, especially in areas such as AI and chatbots. Without continuing technological advances, China’s economic miracle could stall and undercut Xi’s aim of overtaking the United States as the world’s premier superpower. Xi is as intent on his campaign for technological progress as he is on his drive for stricter social control. The development of AI is a crucial pillar of that program, and ChatGPT has exposed how China’s tech sector still lags behind that of its chief geopolitical rival, the U.S.

“The Chinese government is very torn” on chatbots, Matt Sheehan, a fellow who focuses on global technology at the Carnegie Endowment for International Peace, told me. “Ideological control, information control, is one of, if not the, top priority of the Chinese government. But they’ve also made leadership in AI and other emerging technologies a top priority.” Chatbots, he said, are “where these two things start to come into conflict.”

Which path Xi chooses could have huge consequences for China’s competitiveness in technology. Will he permit the progress that can propel China to dominance in the global economy? Or will he sacrifice the cause of innovation to his desire to maintain his grip on Chinese society?

[Annie Lowrey: How ChatGPT will destabilize white-collar work]

Those who live in open societies tend to believe that free thinking and the free flow of information are indispensable prerequisites for innovation. A corollary of this view is that a political system such as China’s, which stifles intellectual curiosity and enforces social conformity, discourages the creativity and risk-taking necessary for achieving breakthroughs. In some respects, that argument has merit. There is no Chinese Disney, for instance, and there may never be as long as the state restricts the freedom of filmmakers to tell stories and create characters. Pop culture across Asia is dominated by what the democratic societies of Japan and South Korea produce.

China’s authoritarianism already inhibits its tech sector in other ways. The Chinese video-sharing app TikTok is facing a possible ban or forced sale in the U.S. because of fears that its Beijing-headquartered parent company, ByteDance, could be pressured to give up private data on American citizens to China’s security state.

Chinese leaders do not believe innovation requires individual liberties and see no contradiction between political control and high-tech aspiration. Communist autocracy has not prevented Chinese companies from emerging as leaders in sectors such as 5G telecommunications networks or electric vehicles. Nor has censorship impeded the development of technologies in the politically riskier realm of data and content. China has vibrant and inventive industries in gaming and social media.

In addition, far from suppressing potentially disruptive and subversive AI technology, the state has actively supported it. In 2017, the State Council, the country’s top governing body, released a national strategy for the sector called the “New Generation Artificial Intelligence Development Plan,” with the goal of “making China the world’s primary AI innovation center” by 2030. In his report to October’s important Communist Party congress, Xi specifically mentioned AI as one of the “new growth engines” that the country must cultivate.

Despite this high-level attention, China’s AI sector lags behind America’s—at least in the area of chatbots, as ChatGPT made all too obvious. In China, “the government, tech entrepreneurs, and investors understand how incredible ChatGPT is and they don’t want to be left behind,” Jordan Schneider, a senior analyst with the research firm Rhodium Group, told me. “To sort of be upstaged so dramatically by OpenAI and ChatGPT was a little embarrassing and is something that is certainly going to focus minds and companies and talent around closing that gap.”

The deficit appears significant. In March, Robin Li, the founder of the Chinese internet-search firm Baidu, tried to show off his own ERNIE Bot, but the demonstration—which used prerecorded results—was so disappointing that the company’s share price plunged on the Hong Kong stock exchange.

[Read: ChatGPT is about to dump more work on everyone]

Left to themselves, the talented engineers and coders at Baidu and other Chinese AI labs will likely catch up. But the state is certain to interfere. Whatever chatbots the tech firms create will have to abide by the same restrictions on speech that China’s human residents are compelled to follow. That was made clear this month when the country’s cybersecurity watchdog issued new draft regulations for the AI sector that require chatbots to produce content in line with socialist values and not liable to subvert state power—broad categories indeed.

The government imposes such censorship on the digital world with the same blunt force it applies to the real world. An army of scrupulous censors scrubs politically sensitive material from social-media platforms. Many foreign media and internet services are blocked by the Great Firewall, the digital fortification erected by the state to keep out unwanted information and ideas. Internet searches are restricted. Authorities have taken steps to prevent Chinese citizens from using ChatGPT. Regulators reportedly ordered Chinese tech firms to deny their users access.

Otherwise, ChatGPT will produce politically unacceptable—if, in all likelihood, truthful—information on such topics as Beijing’s mistreatment of the minority Uyghur community, which the state doesn’t want the Chinese public to see. The China Daily, a news outlet owned by the Chinese government, warned that ChatGPT can “boost propaganda campaigns launched by the U.S.”

Baidu’s ERNIE, available to the public on a limited basis only, simply refuses to respond to some politically suspect queries and tries instead to change the subject. (I requested access to ERNIE for this article, but have not been granted it.)

How Baidu and other chatbot providers adjust their models to adhere to the state’s censorship rules could have further negative effects. For instance, a chatbot model trained only on vetted information encircled by China’s Great Firewall is unlikely to be as effective as a foreign competitor that draws on a wider and more diverse corpus of sources. (In a recent press release, Baidu noted that ERNIE had been trained on “a knowledge graph of 550 billion facts” and other material, but when I asked for further details of the sources, the company would not comment.)

Chatbots are also potentially more difficult to censor than earlier forms of digital media. Chatbot models will analyze, collate, and connect data in unexpected and surprising ways. “The best analogy would be to how a human learns,” Jeffrey Ding, a political scientist at George Washington University who studies Chinese technology, explained to me. “Even if you are learning things from only a censored set of books, the interactions between all those different books you are reading might produce either flawed information or politically sensitive information.”

That presents special challenges to Chinese AI specialists and state censors. Even if a Chinese chatbot is trained on a limited set of politically acceptable information, it can’t be guaranteed to generate politically acceptable outcomes. Furthermore, chatbots can be “tricked” by determined users into revealing dangerous information or stating things they have been trained not to say, a phenomenon that has already occurred with ChatGPT.

This unpredictability places China’s tech sector in an unenviable position. On the one hand, researchers are under pressure to achieve breakthroughs in AI and meet the government’s targets. On the other, designing chatbots could be dangerous in a political environment that tolerates no dissent. The authorities are unlikely to look kindly on a chatbot that breaks the rules—or on the entrepreneurs and engineers designing and training it. To drive that point home, the draft regulations from the cybersecurity agency hold chatbot providers responsible for the content they produce. That alone could discourage China’s tech elite from pursuing chatbots, or at least advanced models of them that would be available to the public.

[Read: The electric-car lesson China is serving up for America]

Fettering chatbots with too many constraints, however, could imperil China’s progress, as well as inhibit developments in the crucial science behind them. “Chatbots are not just a funny toy,” Sheehan, from the Carnegie Endowment, told me. “A lot of people in the deep tech of AI think this is the most promising path forward for creating more general artificial intelligence, which is kind of the holy grail of the field.” Therefore, “Chinese officials are at cross purposes on this one.”

Much will depend on what China’s leaders are willing to let slide in the name of experimentation. There are good reasons to believe they will allow some latitude. The explosion of social media in China has also posed risks to the state, as it offers Chinese citizens the power to widely share unauthorized information—videos of protests, for instance—faster than censors can suppress it. Yet the authorities have accepted this downside in order to allow new technologies to flourish.

“I do think the Chinese government is concerned about the negative, harmful effects of AI,” Ding told me. Despite “the censorship,” he added, “we’ve seen from the past track record of Chinese companies and the Chinese government that there is a way forward with respect to creating breakthrough innovations in this space.”

The Chinese government could even find ways to use chatbots to its advantage. Just as the authorities have been able to co-opt social media and employ the platforms to manipulate popular opinion, monitor the public, and track dissenters, so could a chatbot easily become a tool of social control, promoting official narratives and principles. In their recent book, Surveillance State, the journalists Josh Chin and Liza Lin write that China’s rulers believe that becoming a leader in such technologies as AI “would help the Party build a new system of control that would ensure its own well-being.”

Such an obedient, party-line chatbot—shielded from more formidable, uncensored foreign competitors behind the Great Firewall—could succeed perfectly well within China yet have little appeal outside. In that case, what China’s authoritarianism will inhibit is not technological advancement per se, but its technological competitiveness in the wider world.

Appalachia’s Quiet Time Bombs

The Atlantic

www.theatlantic.com/newsletters/archive/2023/04/appalachias-quiet-time-bombs/673752

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The people who live and work in Appalachian coal country tend to be viewed as climate-change villains rather than victims. But the deadly floods that swept a pocket of eastern Kentucky last summer challenge common preconceptions about which Americans are vulnerable to environmental disasters, and what—or who—is to blame.

First, here are four new stories from The Atlantic:

The myth of the broke Millennial
ChatGPT will change housework.
We’re in denial about our dogs.
The violent fantasy behind the Texas governor’s pardon demand

The Weight of the Rain

To understand how a freak summer rainstorm could kill 44 Appalachian residents and leave thousands more displaced across eastern Kentucky, you could consider the moment in the early morning hours of July 28, 2022, when the floodwaters that swelled from local creeks darkened from muddy brown to charcoal gray, rising high enough to loosen mobile homes, trucks, and trees from their perches and hurl them through the valleys like missiles. You could recall how the weight of the rain forced families to seek shelter in the hills and watch as their communities washed away down the hollows.

Or you could read an Atlantic article from April 1962. Written by a Kentucky lawyer named Harry Caudill, “The Rape of the Appalachians” was a broadside against a relatively new method of coal extraction—strip mining—and it managed to predict precisely the environmental catastrophe that befell eastern Kentucky this past summer.

“By a process which produces huge and immediate profits for a few industrialists, the southern Appalachians are literally being ripped to shreds,” Caudill wrote. “Eventually every taxpayer from Maine to Hawaii will have to pay the cost of flood control and soil reclamation.”

Traditional mines had been dug downward in the search for coal deposits, then outward along their seams, allowing a team of miners to descend into mountains, chip away at the fuel, and cart it up to the surface. Strip-mining operations, by contrast, deploy bulldozers to clear timber from a ridge’s surface in horizontal streaks, then blast into the mountain’s side with explosives, exposing a seam to the open air. This allows for more efficient extraction of coal but eliminates the forests that help drain and slow runoff from rainstorms. So when the thunderstorms began in late July 2022, water rushed down the mountains unabated, destroying a Breathitt County community called Lost Creek, a small collection of homes gathered down the mountain from a strip mine.

Ned Pillersdorf, a lawyer in Prestonsburg, Kentucky, put it in simpler terms. “If you pour a gallon of milk on a table, it will run off all at once,” he told me. “If you put some towels down, it drips off.” By blasting away soil and timber, strip mining has the effect of ripping towels from the table. As a result, strip mines, he explained, are “time bombs.” When the storms came, water flooded the screened porch where Pillersdorf watches baseball, but he and his family were otherwise unaffected. In Lost Creek, though, nearly every single home was destroyed, Pillersdorf said. Two residents died. “On July 28,” he continued, “one of the time bombs went off.”

Today, Pillersdorf is leading a class-action lawsuit on behalf of many of the residents of Lost Creek against Blackhawk Mining, the company that operates the strip mine, and a subsidiary of Blackhawk, Pine Branch Mining. In an argument not unlike Caudill’s, he alleges that the company’s failure to “reclaim” the mine, by reforesting the area and maintaining silt ponds to prevent excessive runoff, aggravated the flooding. (In a response to his legal complaint, lawyers for Blackhawk and Pine Branch denied all of Pillersdorf’s allegations; the flood, they claimed, was an act of God.)

“I’m not a person that hates the coal industry or anything like that,” Gregory Chase Hays, one of Pillersdorf’s plaintiffs, told me. Like many people in the area, Hays has benefited from coal extraction at various points throughout his life; his grandfather and stepfather were both employed in the coal industry. But he’s come to question how the industry treats the communities around mines: Not long after midnight on July 28, Hays watched as his neighbor’s home floated through his yard. That night, he and one of his sons carried his mother-in-law to higher ground through waist-deep floodwaters. When they were at last able to return to their home, Hays found a notice from one of the local coal companies announcing that it intended to continue blasting away in the mountains nearby. It was posted on the bottom of their door; their stoop had been swept away.

The July floods displaced thousands of people. Some lived in tents for months. Hays, whose HVAC system was destroyed, had his air-conditioning fixed only this past Wednesday.

A February report from the Ohio River Valley Institute and Appalachian Citizens’ Law Center estimates that it will cost $450 million to $950 million to rebuild the approximately 9,000 homes damaged by flooding. As of early March, FEMA has provided just more than $100 million. In keeping with Caudill’s grim prediction that mining would enrich only a few industrialists, the counties most exposed to the potential hazards of strip mining are also among the most impoverished in the United States: Without significant assistance, many families won’t be able to rebuild.

And as global temperatures continue to rise, storms like those that flooded eastern Kentucky and devastated the community of Lost Creek are likely to become more and more frequent. Across Appalachia, each has the potential to unleash a similar catastrophe.

Related:

The photographer undoing the myth of Appalachia
Harry Caudill on the destruction of the Appalachians (from 1962)

Today’s News

Violence in Sudan has continued for a third day as rival generals fight for control of the northeast African country. Millions of residents are hiding in their homes, and the toll of civilian deaths and injuries continues to rise.

A grand jury in Summit County, Ohio, decided not to charge police officers in the death of 25-year-old Jayland Walker, a Black man shot by police in 2022 after an attempted traffic stop.

Two Kenyan runners were champions in today’s Boston Marathon—Evans Chebet for a second consecutive year in the men’s race and Hellen Obiri in the women’s race.

Dispatches

Up for Debate: Readers weigh in on what they believe is the best cuisine on earth.

Explore all of our newsletters here.

Evening Read

Illustration by The Atlantic

Why Does Contact Say So Much About God?

By Jaime Green

“As I imagine it,” Carl Sagan once said, “there will be a multilayered message. First there is a beacon, an announcement signal, something that says, Pay attention. This is not some natural astronomical phenomenon. This is a signal from intelligent beings … Then, the next layer is one that says, This message is directed specifically to you guys on Earth. It isn’t directed to anybody else. And the third part of the message is the real content, which is a very complex set of data in a new language, which is also explained.”

He was describing his novel, Contact, a 370-or-so-page answer, literally or in spirit, to every question we can ask about how finding alien intelligence might go. Yes, there’s conflict and strife—acts of terrorism, government obstruction, frustration and loss and death—but at its core the story promises an inviting cosmos. A door opening to a galactic community. We’re not only not alone but also welcomed. This hope is central to the idealistic origins of the search for extraterrestrial intelligence (SETI), to Sagan’s motivations as a scientist and communicator. It also makes it especially weird that the novel ends with its heroine finding proof that God is real, but we’ll get to that.

Read the full article.

More From The Atlantic

Vermeer’s revelations
SNL has struck gold with “Lisa From Temecula.”
Animals are migrating to the Great Pacific Garbage Patch.

Culture Break

A24

Read. “Argument With a Child,” a poem by Katie Peterson.

“Plant your eyes on that place mat of the world / you love and don’t / move them until it stops hurting.”

Watch. Aftersun, available to rent on multiple platforms, is a film to watch—and to weep over—alone.

Play our daily crossword.

P.S.

I’ve been fascinated by Harry Caudill since I first reported on his life and legacy for a photo essay featuring the work of the documentary photographer Stacy Kranitz. The success of “The Rape of the Appalachians” gave the lawyer a national platform, and in a series of follow-up articles and books, Caudill became a spokesperson of sorts for Appalachia and its plight. Today, his book Night Comes to the Cumberlands is credited in part with spurring the War on Poverty. But a dark undercurrent ran through much of his work: Caudill blamed Appalachians themselves—his neighbors—for their misfortune, and had little faith that they could change their circumstances. His writing brought billions of dollars of aid to the region but also ingrained an enduring stereotype of Appalachia as a poverty-stricken backwater. Later in life, he embraced the theories of the Nobel Prize–winning physicist turned eugenics advocate William Shockley and attempted to establish a program to offer cash bonuses to Appalachians who volunteered to be sterilized. (It never took off.)

If you’re interested in learning more about Harry Caudill’s meteoric rise and rapid fall from grace, I highly recommend the Lexington Herald-Leader’s excellent five-part series by John Cheves and Bill Estep, published for the 50th anniversary of Night Comes to the Cumberlands. I also encourage you to spend some time with Stacy’s striking photography; in addition to her work subverting Caudill’s stereotypes of Appalachia, her images have appeared alongside reporting on Tennessee’s abortion ban and the state’s efforts to expel Justin Pearson and Justin Jones from its legislature.

— Andrew

Did someone forward you this email? Sign up here.

Isabel Fattal contributed to this newsletter.