MIT Top Stories
In the middle of the night on May 24, TikTok changed its voice. The ubiquitous woman’s voice that could read your video’s text out loud in a slightly stilted, robotic cadence was suddenly replaced by one with an almost smirky, upbeat tone. Many users started calling the new one the “Uncanny Valley Girl” to express their displeasure. Lil Nas X even made a TikTok about it.
But what happened to the old voice? And who was the woman behind it?
When we think of women in computing, we often think about how, both literally and figuratively, they have been silenced more often than they’ve been listened to. Women’s voices and bodies can be found all throughout the history of computing—from being heard in launch countdowns to being visible in photographs—but only relatively recently have historians written these women back into the narrative by explaining what they did. For a long time, women were mistakenly thought to be peripheral to computing history, even though they were often the ones who programmed the computers.
And it is still the case that when we hear a woman’s voice as part of a tech product, we might not know who she is, whether she is even real, and if so, whether she consented to have her voice used in that way. Many TikTok users assumed that the text-to-speech voice they heard on the app wasn’t a real person. But it was: it belonged to a Canadian voice actor named Bev Standing, and Standing had never given ByteDance, the company that owns TikTok, permission to use it.
Standing sued the company in May, alleging that the ways her voice was being used—particularly the way users could make it say anything, including profanity—were injuring her brand and her ability to make a living. Her voice becoming known as “that voice on TikTok” that you could make say whatever you liked brought recognition without remuneration and, she alleged, hurt her ability to get voice work.
Then, when TikTok abruptly removed her voice, Standing found out the same way the rest of us did—by hearing the change and seeing the reporting on it. (TikTok has not commented to the press about the voice change.)
Those familiar with the story of Apple’s Siri may be feeling a bit of déjà vu: Susan Bennett, the woman who voiced the original Siri, also didn’t know that her voice was being used for that product until it came out. Bennett was eventually replaced as the “US English female voice,” and Apple never publicly acknowledged her. Since then, Apple has written secrecy clauses into voice actors’ contracts and most recently has claimed that its new voice is “entirely software generated,” removing the need to give anyone credit.
These incidents reflect a troubling and common pattern in the tech industry. The way that people’s accomplishments are valued, recognized, and paid for often mirrors their position in the wider society, not their actual contributions. One reason Bev Standing’s and Susan Bennett’s names are now widely known online is that they’re extreme examples of how women’s work gets erased even when it’s right there for everyone to see—or hear.
When women in tech do speak up, they’re often told to quiet down—particularly if they are women of color. Timnit Gebru, who holds a PhD in computer science from Stanford, was recently ousted from Google, where she co-led an AI ethics team, after she spoke up about her concerns regarding the company’s large language models. Her co-lead, Margaret Mitchell (who holds a PhD from the University of Aberdeen with a focus on natural-language generation), was also removed from her position after speaking up about Gebru’s firing. Elsewhere in the industry, whistleblowers like Sophie Zhang at Facebook, Susan Fowler at Uber, and many other women found themselves silenced and often fired as a direct or indirect result of trying to do their jobs and mitigate the harms they saw in the technology companies where they worked.
Even women who found startups can find themselves erased in real time, and the problem again is worse for women of color. Rumman Chowdhury, who holds a PhD from the University of California, San Diego, and is the founder and former CEO of Parity, a company focused on ethical AI, saw her role in her own company’s history minimized by the New York Times.
“Friends, I am tired. I work hard to build good things and bring in the right people. Parity is no different. For the second time in two weeks, I have to fight a major media outlet for basic recognition of my work. The gaslighting is real. The erasure is real,” Chowdhury wrote on Twitter.
In a feature story about Parity, the paper failed to identify Chowdhury as the founding CEO and instead described her merely as “a researcher who made a tool” that Parity’s business is based on. After significant public blowback, the Times quietly updated the story without issuing a formal correction. But it still fails to identify Chowdhury as Parity’s founding CEO, instead focusing on the young white woman who is her successor.
And recently, thousands of Black creators on TikTok, many of them women, went on strike, refusing to choreograph new dances for Megan Thee Stallion’s recent single. Black women in particular have seen their choreography repeatedly copied and stolen by TikTok creators who are white women and who monetize those dances, and even go on to perform them on national television, without giving credit to the original creators.
When we look at the impact of women’s voices in tech today, we can see both that they have led calls for accountability and also that they have been literally and figuratively undervalued. From doing voiceover work that becomes the basis for voice tools millions use, without being paid or acknowledged accordingly, to doing foundational work on the concepts underlying AI, women are often present in tech without being listened to.
While women, and particularly women of color, are often the first people tech companies go to when they need to showcase their diversity or defend themselves against criticism that their products worsen sexism and racism, these women struggle to have their expertise taken seriously at the highest levels of management and are too rarely in a position to set the agenda for technological development.
The good news is that historians and journalists, as well as the women themselves, have been working hard to reverse this erasure and are having significant success. In the past decade, new books, articles, and films have set the record straight and changed our understanding about the importance of women’s contributions to high tech. The bad news is that those contributions are still being erased in real time, including the work of women who are trying to solve some of the most important problems in tech today. As long as that’s true, no matter how fast we try to correct the record, we will still end up in the same place.
Neuroscientists have released the most detailed 3D map of the mammalian brain ever made, created from an animal whose brain architecture is very similar to our own—the mouse.
The map and underlying data set, which are now freely available to the public, depict more than 200,000 neurons and half a billion neural connections contained inside a cube of mouse brain no bigger than a grain of sand.
The new research is part of the Machine Intelligence from Cortical Networks (MICrONS) program, which hopes to improve the next generation of machine-learning algorithms by reverse-engineering the cerebral cortex—the part of the brain that in mammals is responsible for higher functions like planning and reasoning. A consortium of researchers led by groups from the Allen Institute, Baylor College of Medicine, and Princeton University collected the data.
“Some people think that maybe the fundamental secrets of human intelligence are to be found in studying the cortex,” says H. Sebastian Seung, a professor of computer science and neuroscience at Princeton and a lead scientist for MICrONS. “That’s why it’s been such a mysterious, glamorous subject in neuroscience.” As scientists learn more about the brain, their discoveries could lead to more humanlike AI.
Creating the map was a five-year project with three stages. The first involved taking measurements of what the mouse’s brain did when the animal was alive. This produced more than 70,000 images of active brain cells as the mouse processed visual information. Then MICrONS researchers cut out a small piece of the brain and sliced it into more than 25,000 ultra-thin pieces. Next, they used electron microscopy to take more than 150 million high-resolution images of those pieces.
Previous wiring diagrams, as the images are known, have mapped “connectomes” for the fruit fly and human brains. One reason MICrONS has been so well received is that the data set has the potential to improve scientists’ understanding of the brain and possibly help them treat brain disorders.
Venkatesh Murthy, a professor of molecular and cellular biology at Harvard University who studies neural activity in mice but was not involved in the study, says the project gives him and other scientists “a bird’s-eye view” into how single neurons interact, offering an exquisitely high-resolution “freeze frame” image that they can zoom into.
R. Clay Reid, a senior investigator at the Allen Institute and another lead scientist for the MICrONS project, says that before the program’s research was complete, he would’ve thought this level of reconstruction was impossible.
Reid says that with machine learning, the process of turning two-dimensional wiring diagrams of the brain into three-dimensional models has gotten exponentially better. “It’s a funny combination of a very old field and a new approach to it,” he says.
Reid compared the new images to the first maps of the human genome, in that they provide foundational knowledge for others to use. He envisions them helping others to see structures and relationships inside the brain that were previously invisible.
“I consider this, in many ways, the beginning,” says Reid. “These data and these AI-powered reconstructions can be used by anyone with an internet connection and a computer, to ask an extraordinary range of questions about the brain.”
December 20, 2019, was supposed to be a landmark moment for the US space program and the US space industry, Boeing in particular.
Boeing has been a partner of NASA since the agency’s inception in 1958—the company or those it acquired built the capsules that took Apollo astronauts to the moon and later built the space shuttle, and it helps operate the International Space Station. On that day, Boeing was launching its brand-new CST-100 Starliner spacecraft to the ISS on an uncrewed demonstration mission. Along with SpaceX’s Crew Dragon, Starliner was set to become NASA’s go-to option for ferrying astronauts to and from Earth’s orbit.
That didn’t happen. Starliner made it to space, but a computer glitch sank the spacecraft’s chances of actually getting to the ISS. Though it came back to Earth in one piece a couple of days later, it was clearly not ready for human missions.
Now, Boeing is going for a high-stakes redo of that mission. On August 3, Orbital Flight Test 2, or OFT-2, will send Starliner to the ISS again. The company cannot afford another failure.
“There is a lot of credibility at stake here,” says Greg Autry, a space policy expert at Arizona State University. “Nothing is more visible than space systems that fly humans.”
The afternoon of July 30 was a stark reminder of that visibility. After Russia’s new 23-ton multipurpose Nauka module docked with the ISS, it began firing its thrusters unexpectedly and without command, pushing the ISS out of its normal orientation in orbit. NASA and Russia fixed the problem and had things stabilized in under an hour, but we still don’t know what happened, and it’s unnerving to think what could have happened if conditions had been worse. The whole incident is still under investigation and has forced NASA to postpone the Starliner launch from July 31 to August 3.
It’s precisely this kind of near-disaster Boeing wants to avoid, for OFT-2 and any future mission with people on board.

How Starliner got here
The shutdown of the space shuttle program in 2011 gave NASA a chance to rethink its approach. Instead of building a new spacecraft designed for travel to low Earth orbit, the agency elected to open up opportunities to the private sector as part of a new Commercial Crew Program. It awarded contracts to Boeing and SpaceX to build their own crewed vehicles: Starliner and Crew Dragon, respectively. NASA would buy flights on these vehicles and focus its own efforts on building new technologies for missions to the moon, Mars, and elsewhere.
Both companies hit development delays, and for nine years NASA’s only way of getting to space was by handing over millions of dollars to Russia for seats on Soyuz missions. SpaceX finally sent astronauts to space in May 2020 (followed by two more crewed missions since), but Boeing is still lagging behind. Its December 2019 flight was supposed to prove that all its systems worked, and that it was capable of docking with the ISS and returning to Earth safely. But a glitch with its internal clock caused it to execute a critical burn prematurely, making it impossible to dock with the ISS.
A subsequent investigation revealed that a second glitch would have caused Starliner to fire its thrusters at the wrong time when making its descent back to Earth, which could have destroyed the spacecraft. That glitch was fixed mere hours before Starliner was set to come back home. Software issues aren’t unexpected in spacecraft development, but they’re things Boeing could have resolved ahead of time with better quality control or better oversight from NASA.
Boeing has had 21 months to fix these problems. NASA never demanded another Starliner flight test; Boeing elected to redo it and foot the $410 million bill on its own.
“I fully expect the test to go perfectly,” says Autry. “These problems involved software systems, and those should be easily resolvable.”

What’s at stake
If things go wrong, the repercussions will depend on what those things are. Should the spacecraft experience another set of software problems, there’ll likely be hell to pay, and it’s very hard to see how Boeing’s relationship with NASA could recover. A catastrophic failure for other reasons would also be bad, but space is volatile, and even tiny problems that are hard to anticipate and control for can lead to explosive outcomes. That may be more forgivable.
If the new test doesn’t succeed, NASA will still work with Boeing, but a re-flight “might be a couple years off,” says Roger Handberg, a space policy expert at the University of Central Florida. “NASA would likely go back to SpaceX for more flights, further disadvantaging Boeing.”
Boeing needs OFT-2 to go well for reasons beyond just fulfilling its contract with NASA. Neither SpaceX nor Boeing built its new vehicles to carry out ISS missions—they each had larger ambitions. “There is real demand [for access to space] from high-net-worth individuals, demonstrated since the early 2000s, when several flew on the Russian Soyuz,” says Autry. “There is also a very strong business in flying the sovereign astronaut corps of many countries that are not ready to build their own vehicles.”
SpaceX will prove to be very stiff competition. It has private missions—its own and through Axiom Space—already slated for the next few years. More are sure to come, especially since Axiom, Sierra Nevada, and other companies plan to build private space stations for paying visitors.
Boeing’s biggest problem is cost. NASA is paying the company $90 million per seat to fly astronauts to the ISS, versus $55 million per seat to SpaceX. “NASA can afford them because after the shuttle problems the agency did not want to become dependent upon a single flight system—if that breaks, everything stops,” says Handberg. But private citizens and other countries are likely to plump for the cheaper—and more experienced—option.
Boeing could definitely use some good PR these days. It is building the main booster for the $20-billion-and-counting Space Launch System, set to be the most powerful rocket in the world. But high costs and massive delays have turned it into a lightning rod for criticism. Meanwhile, alternatives like SpaceX’s Falcon Heavy and Super Heavy, Blue Origin’s New Glenn, and ULA’s Vulcan Centaur have emerged or are set to debut in the next few years. In 2019, NASA’s inspector general looked at potential fraud in Boeing contracts worth up to $661 million. And the company is one of the main characters at the center of a criminal probe involving a previous bid for a lunar lander contract.
If there was ever a time Boeing wanted to remind people what it’s capable of and what it can do for the US space program, it’s next week.
“Another failure would put Boeing so far behind SpaceX that they might have to consider major changes in their approach,” says Handberg. “For Boeing, this is the show.”
They were gold miners in French Guiana, revelers in Cape Cod, and Indian health-care workers. Even though they inhabit worlds apart, they ended up having two things in common. All were vaccinated against covid-19. And they all became part of infection clusters.
In recent weeks, cases like these have shown that covid-19 transmission chains and superspreading events can occur even in groups where nearly everyone is vaccinated, setting off alarms among health officials and torpedoing hopes of a quick return to business as usual in the US.
In May 2021, the CDC had told vaccinated Americans they could safely go unmasked, but on Tuesday the agency reversed course, saying vaccinated people should wear masks in indoor public settings.
The reason was what investigators learned from an outbreak in Provincetown, Massachusetts, a seaside town on Cape Cod, which in early July hosted a rowdy parade and crowded weeks of pool parties. Since then, health investigators say, there have been more than 800 cases of covid-19 linked to those events, 74% of which are in people who were vaccinated.
The Provincetown outbreak was caused by the so-called delta variant, which now accounts for most cases in the US. In a statement released today, Rochelle Walensky, head of the CDC, said the “pivotal discovery” was that vaccinated people infected with delta in Provincetown appear to have just as much virus in their systems as those who are unvaccinated.
“High viral loads suggest an increased risk of transmission and raised concern that, unlike with other variants, vaccinated people infected with delta can transmit the virus,” she said.
The recommendation suggests a rapid return to a layered approach of countermeasures, including masks and social distancing, which could also complicate school reopenings starting next month in the US.

Infection at a gold mine
Investigations around the world have been building evidence of outbreaks among the vaccinated for weeks. For instance, a scientific team in Paris and French Guiana recently described how covid-19 tore through a South American gold mine in May, even though nearly all the miners had received Pfizer’s vaccine.
Despite being inoculated, 60% became infected by a variant called gamma. That surprised the scientists so much that they checked to see if the vaccines had been damaged in shipping, but they hadn’t been.
The initial studies of Pfizer’s vaccine, the most widely used in the US, showed it was more than 90% effective in preventing symptomatic disease. But that’s not what was seen in the gold miners; half ended up with symptoms like a fever. The vaccines may still have helped, though. None of the miners became seriously ill, even though most were older than 50 and some had risk factors like high blood pressure and diabetes.
More evidence comes from India, where health-care workers were eligible for the AstraZeneca vaccine starting in early 2021. But when a team from the UK and India looked at covid-19 cases in these workers, they found “significant numbers of vaccine breakthrough infections” at three Delhi hospitals, including a superspreading event that infected 30 people.
The breakthrough infections were much more likely to be caused by the delta variant, they say, than any of the older strains. The older variants were never able to cause a cluster of more than two linked cases among the health-care workers. But the researchers found 10 delta outbreaks that did so.
The delta variant is different because it transmits more easily; one reason, researchers say, is that the strain may be “evading” prior immunity. That could help explain outbreaks among vaccinated people, and it also means that if you’ve already had covid-19, you could more easily get it again. The UK-India team estimated that natural protection against infection dropped by as much as half when people were exposed to delta.

Covid on Cape Cod
In the US, the Provincetown outbreak may have taken hold during the July 4 “Independence Week,” when the town hosts thousands of visitors. As July wore on, investigators learned of hundreds of covid-19 cases, and sequencing labs in Boston determined they were caused by delta.
The Provincetown outbreak set off alarm bells at the CDC because the vaccines didn’t seem to prevent the virus from spreading person to person, even though most of those infected were vaccinated, according to the Washington Post, which obtained an internal CDC presentation that described delta as being as contagious as chicken pox.
Another key clue came from PCR tests run on about 200 people in the Provincetown cluster. Researchers found that the amount of virus in someone’s airway—and hence what the person might launch into the world with every cough and sneeze—was roughly the same, no matter whether people were vaccinated or not.
That doesn’t prove that vaccinated people transmit just as much, says Monica Gandhi, an infectious disease researcher at the University of California, San Francisco. She says that PCR tests detect virus fragments as well as live germs, so vaccinated people might be shedding less live virus or be infectious for less time. Gandhi adds that even with variants circulating, vaccines are still effective so far at preventing most major illness.
Nevertheless, “we are seeing more mild, symptomatic cases,” she says, as well as transmission among the vaccinated.
For the CDC, the new information posed a difficult communication problem: how to tell everyone the vaccine party might be over. In May, it had said that fully vaccinated Americans could dispense with masks and social distancing in most circumstances.
But by July 25, local officials in Provincetown had reintroduced an indoor mask mandate for the town, covering indoor restaurants, offices, bars, and dance floors, and said they would begin testing wastewater. Two days later, the CDC followed suit, recommending that in high-transmission areas everyone wear a mask in indoor public settings.
Because of the delta variant, much of the US may soon qualify as being a high-risk area. Since a low in June, covid-19 cases have risen more than sixfold.
DeepMind has developed a vast candy-colored virtual playground that teaches AIs general skills by endlessly changing the tasks it sets them. Instead of developing just the skills needed to solve a particular task, the AIs learn to experiment and explore, picking up skills they then use to succeed in tasks they’ve never seen before. It is a small step toward general intelligence.
What is it? XLand is a video-game-like 3D world that the AI players sense in color. The playground is managed by a central AI that sets the players billions of different tasks by changing the environment, the game rules, and the number of players. Both the players and the playground manager use reinforcement learning to improve by trial and error.
During training, the players first face simple one-player games, such as finding a purple cube or placing a yellow ball on a red floor. They advance to more complex multiplayer games like hide and seek or capture the flag, where teams compete to be the first to find and grab their opponent’s flag. The playground manager has no specific goal but aims to improve the general capability of the players over time.
Why is this cool? AIs like DeepMind’s AlphaZero have beaten the world’s best human players at chess and Go. But they can only learn one game at a time. As DeepMind cofounder Shane Legg put it when I spoke to him last year, it’s like having to swap out your chess brain for your Go brain each time you want to switch games.
Researchers are now trying to build AIs that can learn multiple tasks at once, which means teaching them general skills that make it easier to adapt.

Having learned to experiment, these bots improvised a ramp. DEEPMIND
One exciting trend in this direction is open-ended learning, where AIs are trained on many different tasks without a specific goal. In many ways, this is how humans and other animals seem to learn, via aimless play. But this requires a vast amount of data. XLand generates that data automatically, in the form of an endless stream of challenges. It is similar to POET, an AI training dojo where two-legged bots learn to navigate obstacles in a 2D landscape. XLand’s world is much more complex and detailed, however.
XLand is also an example of AI learning to make itself, or what Jeff Clune, who helped develop POET and leads a team working on this topic at OpenAI, calls AI-generating algorithms (AI-GAs). “This work pushes the frontiers of AI-GAs,” says Clune. “It is very exciting to see.”
What did they learn? Some of DeepMind’s XLand AIs played 700,000 different games in 4,000 different worlds, encountering 3.4 million unique tasks in total. Instead of learning the best thing to do in each situation, which is what most existing reinforcement-learning AIs do, the players learned to experiment—moving objects around to see what happened, or using one object as a tool to reach another object or hide behind—until they beat the particular task.
In the videos you can see the AIs chucking objects around until they stumble on something useful: a large tile, for example, becomes a ramp up to a platform. It is hard to know for sure if all such outcomes are intentional or happy accidents, say the researchers. But they happen consistently.
AIs that learned to experiment had an advantage in most tasks, even ones that they had not seen before. The researchers found that after just 30 minutes of training on a complex new task, the XLand AIs adapted to it quickly. But AIs that had not spent time in XLand could not learn these tasks at all.
When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.
But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”
It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.
In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.
That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

Not fit for clinical use
This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.
“It’s shocking,” says Wynants. “I went into it with some worries, but this exceeded my fears.”
Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use.
“This pandemic was a big test for AI and medicine,” says Driggs, who is himself working on a machine-learning tool to help doctors during the pandemic. “It would have gone a long way to getting the public on our side,” he says. “But I don’t think we passed that test.”
Both teams found that researchers repeated the same basic errors in the way they trained or tested their tools. Incorrect assumptions about the data often meant that the trained models did not work as claimed.
Wynants and Driggs still believe AI has the potential to help. But they are concerned that tools built in the wrong way could be harmful, because they could miss diagnoses or underestimate risk for vulnerable patients. “There is a lot of hype about machine-learning models and what they can do today,” says Driggs.
Unrealistic expectations encourage the use of these tools before they are ready. Wynants and Driggs both say that a few of the algorithms they looked at have already been used in hospitals, and some are being marketed by private developers. “I fear that they may have harmed patients,” says Wynants.
So what went wrong? And how do we bridge the gap between promise and practice? If there’s an upside, it is that the pandemic has made it clear to many researchers that the way AI tools are built needs to change. “The pandemic has put problems in the spotlight that we’ve been dragging along for some time,” says Wynants.

What went wrong
Many of the problems that were uncovered are linked to the poor quality of the data that researchers used to develop their tools. Information about covid patients, including medical scans, was collected and shared in the middle of a global pandemic, often by the doctors struggling to treat those patients. Researchers wanted to help quickly, and these were the only public data sets available. But this meant that many tools were built using mislabeled data or data from unknown sources.
Driggs highlights the problem of what he calls Frankenstein data sets, which are spliced together from multiple sources and can contain duplicates. This means that some tools end up being tested on the same data they were trained on, making them appear more accurate than they are.
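The leakage that Frankenstein data sets cause is easy to reproduce. The sketch below is a hypothetical illustration (the repository names and scan IDs are invented, not from any real study): two source collections that share some scans are naively concatenated, a duplicate lands on both sides of the train/test split, and the model ends up evaluated on data it was trained on. Deduplicating before splitting removes the overlap.

```python
def merge_sources(*sources):
    """Naively concatenate data sets, keeping any duplicates."""
    merged = []
    for source in sources:
        merged.extend(source)
    return merged

def split(records, train_fraction=0.5):
    """Simple deterministic train/test split."""
    cut = int(len(records) * train_fraction)
    return records[:cut], records[cut:]

def leaked(train, test):
    """Scan IDs that appear in both train and test: train/test leakage."""
    return set(train) & set(test)

# Two hypothetical scan repositories that happen to share scans s2 and s3.
repo_a = ["s1", "s2", "s3"]
repo_b = ["s2", "s3", "s4", "s5", "s6"]

# The "Frankenstein" approach: merge first, split later.
naive = merge_sources(repo_a, repo_b)
train, test = split(naive)
print(sorted(leaked(train, test)))   # -> ['s3']: tested on a training scan

# Deduplicating *before* splitting removes the overlap.
deduped = sorted(set(naive))
train, test = split(deduped)
print(sorted(leaked(train, test)))   # -> []
```

In real pipelines the duplicates are rarely exact string matches (the same scan may be re-exported at a different resolution), which is one reason the problem slipped past so many teams.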
Splicing also obscures the origin of the data, which can mean that researchers miss important features that skew the training of their models. Many unwittingly used a data set that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. As a result, the AIs learned to identify kids, not covid.
Driggs’s group trained its own model using a data set that contained a mix of scans taken when patients were lying down and standing up. Because patients scanned while lying down were more likely to be seriously ill, the AI wrongly learned to predict serious covid risk from a person’s position.
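The confound is easy to reproduce in a toy example. The numbers below are hypothetical, not the group’s actual data: because position and severity are entangled in the training set, a “model” keyed on position alone looks accurate until the correlation disappears at deployment:

```python
# Invented training data: (position, outcome) pairs where lying down
# happens to correlate with serious illness.
train = ([("lying", "severe")] * 45 + [("lying", "mild")] * 5 +
         [("standing", "severe")] * 5 + [("standing", "mild")] * 45)

def predict(position):
    # A "model" that learned only the spurious cue, not the disease.
    return "severe" if position == "lying" else "mild"

train_acc = sum(predict(p) == y for p, y in train) / len(train)
print(f"training accuracy from position alone: {train_acc:.0%}")

# In a clinic that scans everyone standing, the cue carries no signal.
deploy = [("standing", "severe")] * 25 + [("standing", "mild")] * 25
deploy_acc = sum(predict(p) == y for p, y in deploy) / len(deploy)
print(f"accuracy when everyone is scanned standing: {deploy_acc:.0%}")
```

Nothing in held-out data from the same mixed collection would flag the problem, which is why such errors seem obvious only in hindsight.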
In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans. As a result, fonts from hospitals with more serious caseloads became predictors of covid risk.
Errors like these seem obvious in hindsight. They can also be fixed by adjusting the models, if researchers are aware of them. It is possible to acknowledge the shortcomings and release a less accurate, but less misleading model. But many tools were developed either by AI researchers who lacked the medical expertise to spot flaws in the data or by medical researchers who lacked the mathematical skills to compensate for those flaws.
A more subtle problem Driggs highlights is incorporation bias, or bias introduced at the point a data set is labeled. For example, many medical scans were labeled according to whether the radiologists who created them said they showed covid. But that embeds, or incorporates, any biases of that particular doctor into the ground truth of a data set. It would be much better to label a medical scan with the result of a PCR test rather than one doctor’s opinion, says Driggs. But there isn’t always time for statistical niceties in busy hospitals.
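The point about label sources can be made concrete with a hypothetical sketch (the field names and records below are invented for illustration): the “ground truth” of the data set changes depending on whether you label from the radiologist’s read or from the PCR result.

```python
# Invented records: each scan carries both a radiologist's opinion and a PCR result.
scans = [
    {"id": 1, "radiologist_says_covid": True,  "pcr_positive": True},
    {"id": 2, "radiologist_says_covid": True,  "pcr_positive": False},  # over-call
    {"id": 3, "radiologist_says_covid": False, "pcr_positive": True},   # missed case
    {"id": 4, "radiologist_says_covid": False, "pcr_positive": False},
]

# Labeling from the radiologist's read bakes that reader's biases into the
# data set; labeling from the test result does not.
labels_opinion = [s["radiologist_says_covid"] for s in scans]
labels_pcr = [s["pcr_positive"] for s in scans]

disagreements = sum(a != b for a, b in zip(labels_opinion, labels_pcr))
print(f"{disagreements} of {len(scans)} labels change with the label source")
```

A model trained on the opinion labels learns to reproduce the radiologist, errors included, rather than the disease.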
That hasn’t stopped some of these tools from being rushed into clinical practice. Wynants says it isn’t clear which ones are being used or how. Hospitals will sometimes say that they are using a tool only for research purposes, which makes it hard to assess how much doctors are relying on them. “There’s a lot of secrecy,” she says.
Wynants asked one company that was marketing deep-learning algorithms to share information about its approach but did not hear back. She later found several published models from researchers tied to this company, all of them with a high risk of bias. “We don’t actually know what the company implemented,” she says.
According to Wynants, some hospitals are even signing nondisclosure agreements with medical AI vendors. When she asked doctors what algorithms or software they were using, they sometimes told her they weren’t allowed to say.

How to fix it
What’s the fix? Better data would help, but in times of crisis that’s a big ask. It’s more important to make the most of the data sets we have. The simplest move would be for AI teams to collaborate more with clinicians, says Driggs. Researchers also need to share their models and disclose how they were trained so that others can test them and build on them. “Those are two things we could do today,” he says. “And they would solve maybe 50% of the issues that we identified.”
Getting hold of data would also be easier if formats were standardized, says Bilal Mateen, a doctor who leads research into clinical technology at the Wellcome Trust, a global health research charity based in London.
Another problem Wynants, Driggs, and Mateen all identify is that most researchers rushed to develop their own models, rather than working together or improving existing ones. The result was that the collective effort of researchers around the world produced hundreds of mediocre tools, rather than a handful of properly trained and tested ones.
“The models are so similar—they almost all use the same techniques with minor tweaks, the same inputs—and they all make the same mistakes,” says Wynants. “If all these people making new models instead tested models that were already available, maybe we’d have something that could really help in the clinic by now.”
In a sense, this is an old problem with research. Academic researchers have few career incentives to share work or validate existing results. There’s no reward for pushing through the last mile that takes tech from “lab bench to bedside,” says Mateen.
To address this issue, the World Health Organization is considering an emergency data-sharing contract that would kick in during international health crises. It would let researchers move data across borders more easily, says Mateen. Before the G7 summit in the UK in June, leading scientific groups from participating nations also called for “data readiness” in preparation for future health emergencies.
Such initiatives sound a little vague, and calls for change always have a whiff of wishful thinking about them. But Mateen has what he calls a “naïvely optimistic” view. Before the pandemic, momentum for such initiatives had stalled. “It felt like it was too high of a mountain to hike and the view wasn’t worth it,” he says. “Covid has put a lot of this back on the agenda.”
“Until we buy into the idea that we need to sort out the unsexy problems before the sexy ones, we’re doomed to repeat the same mistakes,” says Mateen. “It’s unacceptable if it doesn’t happen. To forget the lessons of this pandemic is disrespectful to those who passed away.”
Free doughnuts. Tickets to see the Los Angeles Lakers. Video visits with loved ones for people in prison. The chance to win a million-dollar lottery.
States, cities, and private companies are dangling anything they can think of to convince Americans to get a covid-19 vaccine. The idea is to nudge people who are open to a vaccine but just need an extra push—but so far, there’s little evidence these programs have had the impact some had hoped.
As infections with the delta variant rise across the country, giving everyone paid time off work could boost vaccination rates and protect frontline workers and their communities. It may seem like a small perk for the kind of salaried, remote worker who can easily disappear from Zoom for a few hours to get a shot. But for millions of hourly shift workers, it could be the one thing that finally gets them vaccinated.

The pandemic workplace
While much of normal life came grinding to a halt during the pandemic, many Americans had to continue in-person labor, often without hazard pay.
Surveys conducted by the Kaiser Family Foundation in June found that 65% of workers were encouraged by their employer to get a covid-19 vaccine, but only 50% actually received paid time off to get the shot or recover from side effects. Workers in that group were more likely to be vaccinated—even when controlling for age, race, income, and political party.
That leaves half of workers without financial support or compensation. If more employers encouraged workers to get vaccinated—especially with paid time off for the appointments—vaccination rates could increase, according to the study.
That could be even more true for Black and Hispanic workers, who were already more likely to catch covid, and are more likely to work low-wage jobs like those in the retail or service sectors. In the KFF survey, nearly 20% of all workers said they haven’t gotten vaccinated yet because they’re afraid of missing work or because they’re too busy. That proportion jumps to 26% for Black workers and 40% for Hispanic workers.
Some companies have already offered bonuses or other incentives. Target provides free rides to vaccine sites, Dollar General will give employees four hours of wages to go to their appointment, and Instacart is giving $25 stipends to workers who get vaccinated.
Instacart declined to tell me how the company settled on the stipend amount but said that nearly 100,000 workers have requested and received it.

No clear guidance for workers
The federal government has tried some ways to encourage employers to offer paid time off for vaccinations. Under the American Rescue Plan, companies that offer PTO for getting a vaccine or recovering from side effects can claim payroll tax credits. But it’s voluntary, and we don’t yet know how many companies have offered PTO to workers this way.
Meanwhile, New York and a handful of other states have their own laws to guarantee paid time off for covid vaccinations.
But state laws are a piecemeal approach, and workers’ protections or benefits largely depend on what employers will give. Ifeoma Ajunwa, an associate professor of law at the University of North Carolina at Chapel Hill, says employers operate as their own private governments, with free rein over how they run their business. Covid exposed “the limited power that the government can exert over employers,” says Ajunwa. “The pandemic really laid that bare, especially when it came to covid-19 precautions or covid-19 procedures for operation.”
That means it’s largely up to workers to research and understand their rights.
“If you’re part of the 94% of private sector workers who are not in a union, you may not know that a benefit exists,” says Justin Feldman, an epidemiologist at Harvard who has written about covid-19 and the workplace. “And even if you do know that exists, it doesn’t mean you’re going to be able to exercise it without retaliation.”
In a statement, the New York Department of Labor told me it has received “various complaints” about violation of the covid-19 vaccination leave law and says that it “attempts to collect unpaid wages, or restitution for those who were not paid for the time off as required.”
But even laws that appear, on paper, to support workers could neglect those in the most precarious jobs. The New York Department of Labor has said any worker denied vaccination leave should file a complaint but declined to say specifically if so-called gig workers are covered. (Ajunwa at Chapel Hill says that because the law uses the word “employee,” it would not cover gig workers, who also don’t get health insurance through work.)

“A national emergency”
Public health experts stress that there isn’t just one foolproof tactic for getting people vaccinated. The government could create a series of paid days off for workers in different sectors to get shots, but we’d still need to combine that with other public health strategies like going door to door, Feldman says.
Misconceptions about covid-19 need tackling, too: younger workers may believe they’re not susceptible to severe effects of the disease, Feldman notes, especially if they’ve already worked in person with minimum precautions throughout the pandemic and haven’t gotten sick. It may be particularly hard to change their minds after hearing peers, media, or commentators downplaying the risk.
“We need to treat getting people vaccinated as a national emergency, and that means not treating it like an individual failing,” he says. “We need to do a lot of different things at the same time and see what works.”
Rhea Boyd, a pediatrician in the San Francisco Bay Area, says that people need more information before they can be persuaded by incentives. She founded The Conversation, in which Black and Latino health-care workers deliver credible information about covid-19 vaccines to their communities.
“A major incentive is personal self-interest,” Boyd said in an email. “Once folks have the information they need, based on the science, it makes other ‘carrots’ more like the icing on the cake.”
What would that look like?
“We will only know what is enough once everyone is vaccinated,” she says.
In the meantime, frontline workers’ level of protection on the job continues to rely on shifting public health recommendations, their employers’ own policies, and the whims of customers who can choose to abide by safety measures—or not.
And although public health officials have taken vaccine clinics to public parks, churches, and Juneteenth celebrations in an attempt to change minds, workers are watching what their bosses say and do.
“Workers of every stripe take cues for what they should be doing from their employers,” Ajunwa says. “I think this points to an oversize influence that employers have on employees’ lives in America.”
This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.
The world first learned of Sophie Zhang in September 2020, when BuzzFeed News obtained and published highlights from an abridged version of her nearly 8,000-word exit memo from Facebook.
Before she was fired for poor performance, Zhang was officially employed as a low-level data scientist at the company. But she had become consumed by a task she deemed more important: finding and taking down fake accounts and likes on the platform that were being used to sway elections globally.
Her memo revealed how she’d identified dozens of countries, including India, Mexico, Afghanistan, and South Korea, where this type of abuse was enabling politicians to mislead the public and gain power. It also revealed how little the company had done to mitigate it, despite Zhang’s repeated efforts to bring it to the attention of leadership.
“I know that I have blood on my hands by now,” she wrote.
On the eve of her departure, Zhang was still debating whether to write the memo at all. It was perhaps her last chance to create enough internal pressure on leadership to start taking the problems seriously. In anticipation of writing it, she had turned down a nearly $64,000 severance package to avoid signing a nondisparagement agreement and retain the freedom to speak critically about the company.
But she was disturbed by the idea that, just two months out from the 2020 US election, the memo could erode the public’s trust in the electoral process if prematurely released to the press. “I was terrified of somehow becoming the James Comey of 2020,” she says, referring to the former FBI director who told Congress the agency had reopened an investigation into Hillary Clinton’s use of a private email server days before the election. Clinton went on to blame Comey for losing her the presidency.
To Zhang’s great relief, that didn’t happen. And after the election passed, she proceeded with her original plan. In April, she came forward in two Guardian articles with her face, name, and even more detailed documentation on the political manipulation she’d uncovered as well as Facebook’s negligence.
Her account supplied concrete evidence to support what critics had long been saying on the outside: Facebook makes election interference easy, and unless such activity hurts the company’s business interests, it can’t be bothered to fix the problem.
By going public and eschewing anonymity, Zhang also risked legal action from the company, her future career prospects, and perhaps even action from the politicians she exposed in the process. “What she did is very brave,” says Julia Carrie Wong, the Guardian reporter who published her revelations.
In a statement Joe Osborn, a Facebook spokesperson, vehemently denied Zhang’s characterization. “For the countless press interviews she’s done since leaving Facebook, we have fundamentally disagreed with Ms. Zhang’s characterization of our priorities and efforts to root out abuse on our platform,” he said. “We aggressively go after abuse around the world and have specialized teams focused on this work. As a result, we’ve already taken down more than 150 networks of coordinated inauthentic behavior…Combatting coordinated inauthentic behavior is our priority.”
After nearly a year of avoiding personal questions, Zhang is now ready to tell her story. She wants the world to understand how she became so entwined in trying to protect democracy worldwide and why she cared so deeply. She’s also tired of being in the closet: Zhang is a transgender woman, a core aspect of her identity that informed her actions at and after Facebook.
Her story reveals that it is really pure luck that we now know so much about how Facebook enables election interference globally. Not only was Zhang the only person fighting an entire swath of political manipulation; it wasn’t even her job. She had discovered the problem because of a unique confluence of skills and passion, then taken it upon herself, driven by an extraordinary sense of moral responsibility.
To regulators around the world considering how to rein in the company, this should be a wake-up call.
Zhang never planned to be in this position. She’s deeply introverted and hates being in the limelight. She’d joined Facebook in 2018 after the financial strain of living in the Bay Area on part-time contract work had worn her down. When she received Facebook’s offer, she was upfront with her recruiter: she didn’t think the company was making the world better, but she would join to help fix it.
“They told me, ‘You’d be surprised how many people at Facebook say that,’” she remembers.
But the task was easier said than done. Like many new hires, she joined without being assigned to a specific team. She wanted to work on election integrity, the team that works to mitigate election-related platform abuse, but her skills didn’t match its openings. She settled for a new team tackling fake engagement instead.
Fake engagement refers to things such as likes, shares, and comments that have been bought or otherwise inauthentically generated on the platform. The focus of the new team’s work was narrower, on so-called “scripted inauthentic activity”—fake likes and shares produced by automated bots, used to drive up someone’s popularity.
The vast majority of such cases were people obtaining likes for vanity. But half a year in, Zhang intuited that politicians could do the same to increase their influence and reach on the platform. It didn’t take long for her to find examples in Brazil and India, in the lead-up to general elections.
But in the process of searching for scripted activity, she found something far more worrying. The Facebook page administrator of the Honduran president, Juan Orlando Hernández, had created hundreds of pages with fake names and profile pictures to look just like users, and was using them to flood the president’s posts with likes, comments, and shares. (Facebook bars users from making multiple profiles but doesn’t apply the same restriction to pages, which are usually meant for businesses and public figures.)
The activity didn’t count as scripted, but the effect was the same. Not only could it mislead the casual observer into believing Orlando Hernández was more well-liked and popular than he was; it was also boosting his posts higher up in people’s newsfeeds. For a politician whose 2017 re-election campaign was widely believed to be fraudulent, the brazenness—and implications—were alarming.
But when Zhang raised the issue, she says she received a lukewarm reception. The pages integrity team, which handles abuse of and on Facebook pages, wouldn’t block the mass-manufacture of pages to look like users. The newsfeed integrity team, which tries to improve the quality of what appears in users’ newsfeeds, wouldn’t remove the fake likes and comments from the ranking algorithm’s consideration. “Everyone agreed that it was terrible,” Zhang says. “No one could agree who should be responsible, or even what should be done.”
After a year of Zhang applying pressure, the network of fake pages was finally removed. A few months later, Facebook created a new “inauthentic behavior policy” to ban fake pages masquerading as users. But this policy change didn’t address a more fundamental problem: no one was being asked to enforce it.
So Zhang took it upon herself. When she wasn’t working to scrub away vanity likes, she diligently combed through streams of data, searching for the use of fake pages, fake accounts, and other forms of coordinated fake activity on politicians’ pages. She found cases across dozens of countries, most egregiously in Azerbaijan where the pages technique was being used to harass the opposition.
But finding and flagging new cases wasn’t enough. In order to get any networks of fake pages or accounts removed, Zhang found she had to persistently lobby the relevant teams. In countries where such activity posed little PR risk to the company, enforcement could be put off repeatedly. (Facebook disputes this characterization.) The responsibility weighed on her heavily. Was it more important to push for a case in Bolivia with a population of 11.6 million, or in Rajasthan, India, with a population close to 70 million?
Then in the fall of 2019, weeks of deadly civil protest broke out in Bolivia after the public contested the results of its presidential election. Only a few weeks earlier, Zhang had indeed deprioritized the country to take care of more urgent cases. The news consumed her with guilt. Intellectually, she knew there was no way to draw a direct connection between her decision and the events. The fake engagement had been so small the effect was likely negligible. But psychologically and emotionally, it didn’t matter. “That’s when I started losing sleep,” she says.
Whereas someone else might have chosen to leave such a taxing job, or to absolve themselves of the responsibility as a way of coping, Zhang leaned in, at great personal cost, in an attempt to single-handedly right a wrong.
Over the year between the events in Bolivia and her firing, the exertion sent her health into sharp decline. She already suffered from anxiety and depression, but both grew significantly—and dangerously—worse. Always a voracious reader of world news, she could no longer distance herself from the political turmoil of other countries. The pressure pushed her away from friends and loved ones; she grew increasingly isolated and broke up with her girlfriend. She upped her anxiety and antidepressant medication until her dose had increased sixfold.
For Zhang, the explanation of why she cared so much is tied up in her identity. She grew up in Ann Arbor, Michigan, the daughter of parents who’d immigrated from mainland China. From an early age, she was held to high academic standards and proved a precocious scholar. At six or seven, she read an introductory physics book and grew fascinated by the building blocks of the universe. Her passion would lead her to study cosmology at the University of Michigan, where she published two research papers, including one as a single author.
“She was blazing smart. She may be the smartest undergrad student I’ve ever worked with,” recalls Dragan Huterer, her undergraduate advisor. “I would say she was more advanced than a graduate student.”
But her childhood was also marked by severe trauma. As early as five years old, she began to realize she was different. She read a children’s book about a boy whose friends told him that if he kissed his elbow he would turn into a girl. “I spent a long time after that trying to kiss my elbow,” she says.
She did her best to hide it, understanding that her parents would find her transgender identity intolerable. But she vividly remembers the moment her father found out. It was the spring of eighth grade. It had just rained. And she cowered in the bathroom, contemplating whether to jump out the window, as he beat down the door.
In the end, she chose not to jump and let him hit her until she was bloody, she says. “Ultimately, I decided that I was the person who stayed in imperfect situations to try and fix them.” The next day, she wore a long-sleeved shirt to cover up the bruises and prepared an excuse in case a teacher noticed. None did, she says.
(When reached by email, her father denied the allegations. “I am sad that she alleges that I beat her as a child after I discovered her transgender identity, which is completely false,” he wrote. But multiple people who knew Zhang from high school to the present day corroborated her account of her father’s abusive behavior.)
In college, she decided to transition, after which her father disowned her. But she soon discovered that finally being perceived correctly as a woman came with its own consequences. “I knew precisely how people treated me when they thought that I was a dude. It was very different,” she says.
After being accepted to all the top PhD programs for physics, she chose to attend Princeton University. During orientation, the person giving a tour of the machine shop repeatedly singled her out in front of the group with false assumptions about her incompetence. “It was my official introduction to Princeton and a very appropriate one,” she says.
From there the sexism only got worse. Almost immediately a male grad student began to stalk and sexually harass her. To cope, she picked a thesis advisor in the biophysics department, which allowed her to escape her harasser by conducting research in another building. The trouble was she wasn’t actually interested in biophysics. And whether for this or other reasons, her interest in physics slowly dissolved.
Three years in, deeply unhappy, she decided to leave the program, though not without finally reporting the harassment to the university. “They were like, ‘It’s your word against his,’” she remembers. “You can probably guess now why I extensively documented everything I gave to Julia,” referring to Julia Carrie Wong at the Guardian. “I didn’t want to be in another ‘he said she said’ situation.”
(A Princeton spokesperson said he was unable to comment on individual situations but stated the university’s commitment to “providing an inclusive and welcoming educational and working environment.” “Princeton seeks to support any member of the campus community who has experienced sexual misconduct, including sexual harassment,” he said.)
“What these experiences have in common is the fact that I’ve experienced repeatedly falling through the cracks of responsibility,” Zhang wrote in her memo, after summarizing these experiences. “I never received the support from the authority figures I needed…In each case, they completed the letter of their duty but failed the spirit, and I paid the price of their decisions.”
“Perhaps then you can understand why this was so personal for myself from the very start, why I fought so hard to keep the people of Honduras and Azerbaijan from slipping through those cracks,” she wrote. “To give up on them and abandon them would be a betrayal of the very core of my identity.”
It was during the start of her physical and mental decline in the fall of 2019 that Zhang began thinking about whether to come forward. She wanted to give Facebook’s official systems a chance to work. But she worried about being a single point of failure. “What if I got hit by a bus the next day?” she says. She needed someone else to have access to the same information.
By coincidence, an email from a journalist landed in her inbox. Wong, then a senior tech reporter at the Guardian, had been messaging Facebook employees looking to cultivate new sources. Zhang took the chance and agreed to meet for an off-the-record conversation. That day, she dropped her company-issued phone and computer off at a former housemate’s place as a precaution, knowing that Facebook had the ability to track her location. When she returned, she looked a little more relieved, her former housemate Ness Io Kain remembers. “You could tell that she felt like she’d accomplished something. It’s pretty silent, but it’s definitely palpable.”
For a moment, things at Facebook seemed to be making progress. She saw the policy change and the takedown of the Honduran president’s fake network as forward momentum. She was called upon repeatedly to help handle emergencies and praised for her work, which she was told was valued and important.
But despite her repeated attempts to push for more resources, leadership cited different priorities. They also dismissed Zhang’s suggestions for a more sustainable solution, such as suspending or otherwise penalizing politicians who were repeat offenders. It left her to face a never-ending firehose: The manipulation networks she took down quickly came back, often only hours or days later. “It increasingly felt like I was trying to empty the ocean with a colander,” she says. “Two steps back, two steps forward.”
Then in January of 2020, the tide turned. Both her manager and manager’s manager told her to stop her political work and stick to her assigned job. If she didn’t, her services at the company would no longer be needed, she remembers the latter saying. But without a team assigned to continue her work, Zhang kept doing some in secret.
As the pressure of her work mounted and her health worsened, Zhang realized she would ultimately need to leave. She made a plan to depart after the US election, considering it the last and most important duty she needed to perform. But leadership had other plans. In August, she was informed that she would be fired due to poor performance.
On her last day, within hours of her posting her memo internally, Facebook deleted it (though it later restored an edited version after widespread employee anger). A few hours later, an HR person called her, asking her to also remove a password-protected copy she had posted on her personal website. She tried to bargain: she would do so if they restored the internal version. The next day, instead, she received a notice from her hosting server that it’d taken down her entire website after a complaint from Facebook. A few days later, it took down her domain as well.
Even after all that Facebook put her through, Zhang defaults to blaming herself. In her memo, she apologized to colleagues for any trouble she may have caused them and for leaving them without achieving more. In a Reddit AMA months later, she apologized to the citizens of different countries for not acting fast enough and for failing to reach a long-term solution.
To me, Zhang, who is autistic, wonders aloud what she could have accomplished if she were not. “I have no talent for persuasion and convincing,” she says. “If I were someone born with a silver tongue, perhaps I could have made changes.”
“I have never hated my autism more than when I joined Facebook.”
In preparation for going public, Zhang made one final sacrifice: to conceal her trans identity, not for fear of harassment, but for fear that it would distract from her message. In the US, where transgender rights are highly politicized, she didn’t want protecting democracy to become a partisan issue. Abroad, where some countries treat being transgender as a crime punishable by prison time or even death, she didn’t want people to stop listening.
It was a continuation of a sacrifice she’d repeatedly made when policing election interference globally. She treated all politicians equally, even when removing the fake activity of one in Azerbaijan inevitably boosted an opponent who espoused homophobia. “I did my best to protect democracy and rule of law globally for people, regardless of whether they believed me to be human,” she says with a deep sigh. “But I don’t think anyone should have to make that choice.”
The night the Guardian articles published, she anxiously awaited the public reaction, worried about whether she’d be able to handle the media attention. “I think she actually surprised herself at how good she was in interviews,” says her girlfriend Lisa Danz, whom Zhang got back together with after leaving Facebook. “She found that when there’s material that she knows very well and she’s just getting asked questions about it, she can answer.”
The attention ultimately fell short of what Zhang had hoped for. Several media outlets in the US did follow-up pieces, as did foreign outlets from countries affected by the manipulation activity. But as far as she’s aware, it didn’t create enough of a PR scandal for Facebook to finally prioritize the work she left behind.
Facebook once again disputes this characterization, saying the fake engagement team has continued Zhang’s work. But Zhang points to other evidence: the network of fake pages in Azerbaijan is still there. “It’s clear they haven’t been successful,” she says.
Nonetheless Zhang doesn’t regret her decision to come forward. To her, it was a foregone conclusion. “I was the only one in this position of responsibility from the start,” she says, “and someone had to take the responsibility and do the utmost to protect people.”
Without skipping a beat, she then rattles off the consequences that others have faced for going up against the powerful in more hostile countries: journalists being murdered for investigating government corruption, protestors being gunned down for showing their dissent.
“Compared to them, I’m small potatoes,” she says.
Israeli government officials visited the offices of the hacking company NSO Group on Wednesday to investigate allegations that the firm’s spyware has been used to target activists, politicians, business executives, and journalists, the country’s defense ministry said in a statement today.
An investigation published last week by 17 global media organizations claims that phone numbers belonging to notable figures have been targeted by Pegasus, the notorious spyware that is NSO’s best-selling product.
The Ministry of Defense did not specify which government agencies were involved in the investigation, but Israeli media previously reported that the foreign ministry, justice ministry, Mossad, and military intelligence were also looking into the company following the report.
NSO Group CEO Shalev Hulio confirmed to MIT Technology Review that the visit had taken place but continued the company’s denials that the list published by reporters was linked to Pegasus.
The reports focused largely on the successful hacking of 37 smartphones of business leaders, journalists, and human rights activists. But they also pointed to a leaked list of over 50,000 more phone numbers of interest in countries that are reportedly clients of NSO Group. The company has repeatedly denied the reporting. At this point, both the source and meaning of the list remain unclear, but numerous phones on it were hacked according to technical analysis by Amnesty International’s Security Lab.
When asked if the government’s investigation process will continue, Hulio said he hopes it will be ongoing.
“We want them to check everything and make sure that the allegations are wrong,” he added.
International scandal
Despite the emphatic denials, the “Pegasus Project” has drawn international attention.
In the United States, Democratic members of Congress called for action against NSO.
“Private companies should not be selling sophisticated cyber-intrusion tools on the open market, and the United States should work with its allies to regulate this trade,” the lawmakers said. “Companies that sell such incredibly sensitive tools to dictatorships are the AQ Khans of the cyber world. They should be sanctioned, and if necessary, shut down.”
The French government has said it will question Israeli defense minister Benny Gantz after the French president Emmanuel Macron’s phone showed up on the leaked list. NSO denied any attempt to hack French officials.
NSO is not the only Israeli hacking company in the news lately. Microsoft and the University of Toronto’s Citizen Lab also recently reported on hacking tools developed by Candiru that were subsequently used to target civil society groups.
NSO Group is under the direct regulation of Israel’s Ministry of Defense, which approves each sale. Critics say the export licensing process is broken because it results in sales to authoritarian regimes that have used the hacking tools to commit abuses. NSO recently said it has cut off five customers for abuse.
The ministry said last week that it will “take appropriate action” if it finds that NSO Group violated its export license.
When gas falls into a black hole, it releases an enormous amount of energy and spews electromagnetic radiation in all directions, making these objects some of the brightest in the known universe. But scientists have only ever been able to see light and other radiation from a supermassive black hole when it’s shining directly toward our telescopes—anything from behind it has always been obscured.
Until now. A new study published in Nature demonstrates the first detection of radiation coming from behind a black hole—bent as a result of the warping of spacetime around the object. It’s another piece of evidence for Einstein’s theory of general relativity.
“This is a really exciting result,” says Edward Cackett, an astronomer at Wayne State University who was not involved with the study. “Although we have seen the signature of x-ray echoes before, until now it has not been possible to separate out the echo that comes from behind the black hole and gets bent around into our line of sight. It will allow for better mapping of how things fall into black holes and how black holes bend the space time around them.”
The release of energy by black holes, sometimes in the form of x-rays, is an absurdly extreme process. And because supermassive black holes release so much energy, they are essentially powerhouses that allow galaxies to grow around them. “If you want to understand how galaxies form, you really need to understand these processes outside the black hole that are able to release these enormous amounts of energy and power, these amazingly bright light sources that we’re studying,” says Dan Wilkins, an astrophysicist at Stanford University and the lead author of the study.
The study focuses on a supermassive black hole at the center of a galaxy called I Zwicky 1 (I Zw 1 for short), around 100 million light-years from Earth. In supermassive black holes like I Zw 1’s, large amounts of gas fall toward the center (the event horizon, which is basically the point of no return) and tend to flatten out into a disk. Above the black hole, a confluence of supercharged particles and magnetic field activity results in the production of high-energy x-rays.
Some of these x-rays are shining straight at us, and we can observe them normally, using telescopes. But some of them also shine down toward the flat disk of gas and will reflect off it. The I Zw 1 black hole’s rotation is slowing at a higher rate than that seen in most supermassive black holes, which causes surrounding gas and dust to fall in more easily and feed the black hole from multiple directions. This, in turn, leads to greater x-ray emissions, which is why Wilkins and his team were especially interested.
While Wilkins and his team were observing this black hole, they noticed that the corona appeared to be “flashing.” These flashes, caused by x-ray pulses reflecting off the massive disk of gas, were coming from behind the black hole’s shadow—a place that is normally hidden from view. But because the black hole bends the space around it, the x-ray reflections are also bent around it, which means we can spot them.
The signals were found using two space-based telescopes optimized to detect x-rays: NuSTAR, which is run by NASA, and XMM-Newton, which is run by the European Space Agency.
The biggest implication of the new findings is that they confirm what Albert Einstein predicted in 1915 as part of his theory of general relativity—the way light ought to bend around gargantuan objects like supermassive black holes.
“It’s the first time we really see the direct signature of the way light bends all the way behind the black hole into our line of sight, because of the way the black hole warps space around itself,” says Wilkins.
“While this observation doesn’t change our general picture of black hole accretion, it is a nice confirmation that general relativity is at play in these systems,” says Erin Kara, an astrophysicist at MIT who was not involved with the study.
Despite the name, supermassive black holes are so far away that they really just look like single points of light, even with state-of-the-art instruments. It’s not going to be possible to take images of all of them the way scientists used the Event Horizon Telescope to capture the shadow of a supermassive black hole in galaxy M87.
So although it’s early, Wilkins and his team are hopeful that detecting and studying more of these x-ray echoes from behind the bend could help us create partial or even full pictures of distant supermassive black holes. In turn, that could help them unlock some big mysteries around how supermassive black holes grow, sustain entire galaxies, and create environments where the laws of physics are pushed to the limit.
On Tuesday, July 27, the US Centers for Disease Control and Prevention recommended that vaccinated individuals wear masks in public indoor spaces in communities where covid cases are spiking.
Along with the new policy, the CDC recommends that children in grades K–12 attend school in person while continuing to wear masks inside.
Why is the CDC making this switch?
The announcement comes on the heels of rising infections with the delta variant, the highly infectious strain of covid that was first detected in India earlier this year. The new policy may seem like backtracking, but Rochelle Walensky, director of the CDC, explained that the agency’s decisions aren’t made lightly.
“Our guidance and recommendations will follow the science,” said Walensky during a press briefing. “The delta variant is showing every day its willingness to outsmart us and to be an opportunist in areas where we have not shown a fortified response against it.”
In May, delta was responsible for just 2% of cases sequenced in the US, but today 82% of samples contain the more contagious variant, according to Johns Hopkins.
Does this change affect me?
Probably (if you live in the US). More than 63% of the US is experiencing what the CDC calls “substantial transmission rates,” which means the new policy would apply there. To find out if you’re living in an area where covid is surging, visit the CDC’s Covid Data Tracker, which tracks infections by county.
(If you’re not fully vaccinated, this may not be much of a change, depending on where you live. Eight states, including California, New York, and Nevada, have already been requiring unvaccinated people to mask up.)
How is the delta variant spreading?
The CDC believes that unvaccinated individuals are driving this spread. But in rare cases, vaccinated people are also getting sick and may be passing on the infection, although their cases are likely much less severe. Earlier in the pandemic, a person with covid could infect 2.5 others, on average. But with the delta variant, one infection spawns an average of six more.
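The gap between 2.5 and six secondary infections compounds quickly across transmission generations. A back-of-the-envelope sketch (this assumes constant growth per generation, which real epidemiological models refine considerably):

```python
# Illustrative comparison only, not an epidemiological model:
# if each case infects r0 others, generation g produces r0**g new cases.
def new_cases(r0: float, generations: int) -> float:
    """Naive count of new infections in a given transmission generation."""
    return r0 ** generations

# After four generations, delta's head start is dramatic.
for g in range(1, 5):
    print(f"generation {g}: original {new_cases(2.5, g):7.1f}  delta {new_cases(6, g):7.1f}")
```

By the fourth generation, roughly 39 new cases under the earlier strain become nearly 1,300 under delta.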
“That means it doesn’t take a lot of close contact time—seconds versus minutes—for the virus to spread from one person to another,” says Ajay Sethi, a professor at the University of Wisconsin who studies infectious diseases.
Who’s protected by this new mask guidance?
Walensky said the new mask policy is about protecting some of the most susceptible people in our society, like those who live in high-transmission areas or who have vulnerable family members like children or people with preexisting health issues.
She also said it was important for the US to get control of the spread quickly, because a future variant could erode the vaccines’ ability to prevent severe disease and death.
That doesn’t necessarily make the changes easier for the public to accept.
“Unfortunately, many people will see this as a flip-flop, particularly those already critical of the CDC,” says Sethi.
What next?
Sethi says that although the public desperately wants to believe the pandemic is over, it won’t be as long as health policies are being ignored.
Walensky stressed that the US vaccination rate must improve, and quickly. She said: “This moment, and most importantly the associated illness, suffering, and death, could have been avoided with higher vaccination coverage in this country.”
She also made no promises that the guidance won’t change once more: “We continue to follow the science closely, and update the guidance should the science shift again.”
This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.
Wildfires raging across the US West Coast have filled the air with enough carbon dioxide to wipe out more than half of the region’s pandemic-driven emissions reductions last year. And that was just in July.
The numbers illustrate a troubling feedback loop. Climate change creates hotter, drier conditions that fuel increasingly frequent and devastating fires—which, in turn, release greenhouse gases that will drive further warming.
The problem will likely grow worse in the coming decades across large parts of the globe. That means not only will deadly fires exact a rising toll on communities, emergency responders, air quality, human health, and forests, but they will also undermine our limited progress in addressing climate change.
Together, California, Idaho, Oregon, and Washington saw fossil-fuel emissions decline by around 69 million tons of carbon dioxide last year as the pandemic slashed pollution from ground transportation, aviation, and industry, according to data from Carbon Monitor. But from July 1 to July 25, fires in those states produced about 41 million tons of carbon dioxide, based on data provided to MIT Technology Review from the European Commission’s Copernicus Atmosphere Monitoring Service.
That’s far above normal levels for this part of the year and comes on top of the surge of emissions from the massive fires across the American West in 2020. California fires alone produced more than 100 million tons of carbon dioxide last year, which was already enough to more than cancel out the broader region’s annual emissions declines.
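The “more than half” comparison can be checked with quick arithmetic using the figures quoted above:

```python
# Figures from this story, in millions of tons of CO2 (approximate).
pandemic_reduction = 69   # 2020 fossil-fuel emissions decline, four West Coast states
july_fire_emissions = 41  # fire emissions in those states, July 1-25, 2021

share = july_fire_emissions / pandemic_reduction
print(f"July fires offset {share:.0%} of the pandemic reductions")  # roughly 59%
```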
“The steady but slow reductions in [greenhouse gases] pale in comparison to those from wildfire,” says Oriana Chegwidden, a climate scientist at CarbonPlan.
Massive wildfires burning across millions of acres in Siberia are also clogging the skies across eastern Russia and releasing tens of millions of tons of emissions, Copernicus reported earlier this month.
Fires and forest emissions are only expected to increase across many regions of the world as climate change accelerates in the coming decades, creating the hot and often dry conditions that turn trees and plants into tinder.
Fire risk—defined as the chance that an area will experience a moderate- to high-severity fire in any given year—could quadruple across the US by 2090, even under scenarios where emissions decline significantly in the coming decades, according to a recent study by researchers at the University of Utah and CarbonPlan. With unchecked emissions, US fire risk could be 14 times higher near the end of the century.
Emissions from fires are “already bad and only going to get worse,” says Chegwidden, one of the study’s lead authors.
“Very ominous”
Over longer periods, the emissions and climate impacts of increasing wildfires will depend on how rapidly forests grow back and draw carbon back down—or whether they do at all. That, in turn, depends on the dominant trees, the severity of the fires, and how much local climate conditions have changed since that forest took root.
While working toward her doctorate in the early 2010s, Camille Stevens-Rumann spent summer and spring months trekking through alpine forests in Idaho’s Frank Church–River of No Return Wilderness, studying the aftermath of fires.
She noted where and when conifer forests began to return, where they didn’t, and where opportunistic invasive species like cheatgrass took over the landscape.
In a 2018 study in Ecology Letters, she and her coauthors concluded that trees that burned down across the Rocky Mountains have had far more trouble growing back this century, as the region has grown hotter and drier, than during the end of the last one. Dry conifer forests that had already teetered on the edge of survivable conditions were far more likely to simply convert to grass and shrublands, which generally absorb and store much less carbon.
This can be healthy up to a point, creating fire breaks that reduce the damage of future fires, says Stevens-Rumann, an assistant professor of forest and rangeland stewardship at Colorado State University. It can also help to make up a bit for the US’s history of aggressively putting out fires, which has allowed fuel to build up in many forests, also increasing the odds of major blazes when they do ignite.
But their findings are “very ominous” given the massive fires we’re already seeing and the projections for increasingly hot, dry conditions across the American West, she says.
Other studies have noted that these pressures could begin to fundamentally transform western US forests in the coming decades, damaging or destroying sources of biodiversity, water, wildlife habitat, and carbon storage.
Fires, droughts, insect infestations, and shifting climate conditions will convert major parts of California’s forests into shrublands, according to a modeling study published in AGU Advances last week. Tree losses could be particularly steep in the dense Douglas fir and coastal redwood forests along the Northern California coast and in the foothills of the Sierra Nevada range.
Kings Canyon National Park, in California’s Sierra Nevada range, following a recent forest fire.
All told, the state will lose around 9% of the carbon stored in trees and plants aboveground by the end of the century under a scenario in which emissions stabilize, and more than 16% in a future world where they continue to rise.
Among other impacts, that will clearly complicate the state’s reliance on its lands to capture and store carbon through its forestry offsets program and other climate efforts, the study notes. California is striving to become carbon neutral by 2045.
Meanwhile, medium- to high-emissions scenarios create “a real likelihood of Yellowstone’s forests being converted to non-forest vegetation during the mid-21st century,” because increasingly common and large fires would make it more and more difficult for trees to grow back, a 2011 study in Proceedings of the National Academy of Sciences concluded.
The global picture
The net effect of climate change on fires, and fires on climate change, is much more complicated globally.
Fires contribute directly to climate change by releasing emissions from trees as well as the rich carbon stored in soils and peatlands. They can also produce black carbon that may eventually settle on glaciers and ice sheets, where it absorbs heat. That accelerates the loss of ice and the rise of ocean levels.
But fires can drive negative climate feedback as well. The smoke from Western wildfires that reached the East Coast in recent days, while terrible for human health, carries aerosols that reflect some level of heat back into space. Similarly, fires in boreal forests in Canada, Alaska, and Russia can open up space for snow that’s far more reflective than the forests they replaced, offsetting the heating effect of the emissions released.
Different parts of the globe are also pushing and pulling in different ways.
Climate change is making wildfires worse in most forested areas of the globe, says James Randerson, a professor of earth system science at the University of California, Irvine, and a coauthor of the AGU paper.
But the total area burned by fires worldwide is actually going down, primarily thanks to decreases across the savannas and grasslands of the tropics. Among other factors, sprawling farms and roads are fragmenting the landscape in developing parts of Africa, Asia, and South America, acting as breaks for these fires. Meanwhile, growing herds of livestock are gobbling up fuels.
Overall, global emissions from fires stand at about a fifth the levels from fossil fuels, though they’re not yet rising sharply. But total emissions from forests have clearly been climbing when you include fires, deforestation, and logging. They’ve grown from less than 5 billion tons in 2001 to more than 10 billion in 2019, according to a Nature Climate Change paper published in January.
Less fuel to burn
As warming continues in the decades ahead, climate change itself will affect different areas in different ways. While many regions will become hotter, drier, and more susceptible to wildfires, some cooler parts of the globe will become more hospitable to forest growth, like the high reaches of tall mountains and parts of the Arctic tundra, Randerson says.
Global warming could also reach a point where it actually starts to reduce certain risks as well. If Yellowstone, California’s Sierra Nevada, and other areas lose big portions of their forests, as studies have suggested, fires could begin to tick back down toward the end of the century. That’s because there’ll simply be less, or less flammable, fuel to burn.
It’s difficult to make reliable predictions about global forest and fire emissions in the decades ahead because there are so many competing variables and unknowns, notably including what actions humans will decide to take, says Doug Morton, chief of the biospheric sciences laboratory at NASA’s Goddard Space Flight Center.
The good news is we do have some control over these forces.
Nations can step up efforts to cut greenhouse-gas emissions as quickly as possible. They can get more serious about halting clearcutting, slash-and-burn agriculture, and other forms of deforestation while promoting tree-planting campaigns. And governments can directly address fire dangers through better forest management practices, including using chainsaws, bulldozers, and prescribed burns to add fire breaks and remove fuel.
Matthew Hurteau, a professor of biology at the University of New Mexico, was the lead author of a 2019 Nature paper that found climate change and fires could dramatically transform the Sierra Nevada under high-emissions scenarios.
Asked what that might mean for treasured areas of the range like the Yosemite, Sequoia, and Kings Canyon national parks, Hurteau said it will depend largely on how rapidly we cut emissions and how aggressively we manage our fire risks.
“It’s still, in large part, up to us,” he says.
In March, when covid cases began spiking around India, Bani Jolly went hunting for answers in the virus’s genetic code.
Researchers in the UK had just set the scientific world ablaze with news that a covid variant called B.1.1.7—soon to be referred to as alpha—was to blame for skyrocketing case counts there. Jolly, a third-year PhD student at the CSIR Institute of Genomics and Integrative Biology in New Delhi, expected to find that it was driving infections in her country too.
Because her institution is at the forefront of covid research in India, she had access to sequences from thousands of covid samples taken around the country. She began running them through software that grouped them according to branches of covid’s family tree.
Instead of dense clumps of B.1.1.7 cases, Jolly found a cluster of sequences that didn’t look quite like any known variant, some of them with two mutations of the spike protein that were already suspected to make the virus more dangerous.
Jolly talked to her advisor, who suggested that she reach out to other sequencing labs around India. Their data, too, showed signs that a local outbreak had given rise to a new family of the virus.
Before long, journalists got wind of the new development, and Jolly began to see articles about “double mutants” and the “Indian variant.”
She knew researchers could do more with a useful label than a “scariant” nickname. So she went to the place where a small group of scientists give new variants their names: a GitHub page staffed by a handful of volunteers around the world, led primarily by a PhD student in Scotland.
Those volunteers oversee a system called Pango, which has quietly become essential to global covid research. Its software tools and naming system have now helped scientists worldwide understand and classify nearly 2.5 million samples of the virus.
In April, Jolly posted her sequences to the GitHub page, along with an explanation of why they represented a significant change to the virus. (She was the second user to flag the new variant; the first flag had been waved a few days before, by a researcher in the UK.) The Pango team quickly came up with a new name, B.1.617. The family includes the infamously transmissible variant now known, in the media, as delta.
“Pango makes it really easy to see if other people are seeing what we’re seeing,” Jolly says. “If they’re not, it is really easy to report what’s being seen in India, so people can track it in other regions.”
Researchers, public health officers, and journalists around the world use Pango to understand covid’s evolution. But few realize that the entire endeavor—like much in the new field of covid genomics—is powered by a tiny team of young researchers who have often put their own work on hold to build it.
Too much data
You might assume that there’s long been an official, time-tested process for naming new branches of a virus’s family tree as it evolves, infecting one person after another. After all, researchers have been using genomic sequencing to study viruses for two decades.
But that work has historically had to cope with orders of magnitude less data, and little of it was shared collaboratively between scientists on different continents, as covid sequences have been. There had never been a pressing need to develop standardized names.
In March 2020, when the WHO declared a pandemic, the public sequence database GISAID held 524 covid sequences. Over the next month scientists uploaded 6,000 more. By the end of May, the total was over 35,000. (In contrast, global scientists added 40,000 flu sequences to GISAID in all of 2019.)
“Without a name, forget about it—we cannot understand what other people are saying,” says Anderson Brito, a postdoc in genomic epidemiology at the Yale School of Public Health, who contributes to the Pango effort.
As the number of covid sequences spiraled, researchers trying to study them were forced to create entirely new infrastructure and standards on the fly. A universal naming system has been one of the most important elements of this effort: without it, scientists would struggle to talk to each other about how the virus’s descendants are traveling and changing—either to flag up a question or, even more critically, to sound the alarm.
Where Pango came from
In April 2020, a handful of prominent virologists in the UK and Australia proposed a system of letters and numbers for naming lineages, or new branches, of the covid family. It had a logic, and a hierarchy, even though the names it generated—like B.1.1.7—were a bit of a mouthful.
One of the authors on the paper was Áine O’Toole, a PhD candidate at the University of Edinburgh. Soon she’d become the primary person actually doing that sorting and classifying, eventually combing through hundreds of thousands of sequences by hand.
She says: “Very early on, it was just who was available to curate the sequences. That ended up being my job for a good bit. I guess I never understood quite the scale we were going to get to.”
She quickly set about building software to assign new genomes to the right lineages. Not long after that, another researcher, postdoc Emily Scher, built a machine-learning algorithm to speed things up even more.
They named the software Pangolin, a tongue-in-cheek reference to a debate about the animal origin of covid. (The whole system is now simply known as Pango.)
The naming system, along with the software to implement it, quickly became a global essential. Although the WHO has recently started using Greek letters for variants that seem especially concerning, like delta, those nicknames are for the public and the media. Delta actually refers to a growing family of variants, which scientists call by their more precise Pango names: B.1.617.2, AY.1, AY.2, and AY.3.
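The hierarchy is encoded in the names themselves: each dotted label extends its parent lineage, and short prefixes like AY act as aliases for deep branches. A minimal sketch of that idea (the one-entry alias table here is a simplified assumption, not the official Pango alias list):

```python
# Simplified, hypothetical alias table: AY.* names abbreviate
# sublineages of B.1.617.2 (the delta family).
ALIASES = {"AY": "B.1.617.2"}

def expand(lineage: str) -> str:
    """Replace an alias prefix with its full dotted form."""
    head, _, tail = lineage.partition(".")
    if head in ALIASES:
        return ALIASES[head] + ("." + tail if tail else "")
    return lineage

def is_descendant(lineage: str, ancestor: str) -> bool:
    """A lineage descends from an ancestor if its expanded name equals
    the ancestor or extends it by further dotted labels."""
    full = expand(lineage)
    return full == ancestor or full.startswith(ancestor + ".")

for name in ["B.1.617.2", "AY.1", "AY.2", "B.1.1.7"]:
    print(name, "in delta family:", is_descendant(name, "B.1.617.2"))
```

With this scheme, AY.1 expands to B.1.617.2.1 and is recognized as delta, while B.1.1.7 (alpha) is not.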
“When alpha emerged in the UK, Pango made it very easy for us to look for those mutations in our genomes to see if we had that lineage in our country too,” says Jolly. “Ever since then, Pango has been used as the baseline for reporting and surveillance of variants in India.”
Because Pango offers a rational, orderly approach to what would otherwise be chaos, it may forever change the way scientists name viral strains—allowing experts from all over the world to work together with a shared vocabulary. Brito says: “Most likely, this will be a format we’ll use for tracking any other new virus.”
Many of the foundational tools for tracking covid genomes have been developed and maintained by early-career scientists like O’Toole and Scher over the last year and a half. As the need for worldwide covid collaboration exploded, scientists rushed to support it with ad hoc infrastructure like Pango. Much of that work fell to tech-savvy young researchers in their 20s and 30s. They used informal networks and tools that were open source—meaning they were free to use, and anyone could volunteer to add tweaks and improvements.
“The people on the cutting edge of new technologies tend to be grad students and postdocs,” says Angie Hinrichs, a bioinformatician at UC Santa Cruz who joined the Pangolin project earlier this year. For example, O’Toole and Scher work in the lab of Andrew Rambaut, a genomic epidemiologist who posted the first public covid sequences online after receiving them from Chinese scientists. “They just happened to be perfectly placed to provide these tools that became absolutely critical,” Hinrichs says.
Building fast
It hasn’t been easy. For most of 2020, O’Toole took on the bulk of the responsibility for identifying and naming new lineages by herself. The university was shuttered, but she and another of Rambaut’s PhD students, Verity Hill, got permission to come into the office. Her commute, walking 40 minutes to school from the apartment where she lived alone, gave her some sense of normalcy.
Every few weeks, O’Toole would download the entire covid repository from the GISAID database, which had grown exponentially each time. Then she would hunt around for groups of genomes with mutations that looked similar, or things that looked odd and might have been mislabeled.
When she got particularly stuck, Hill, Rambaut, and other members of the lab would pitch in to discuss the designations. But the grunt work fell on her.
Deciding when descendants of the virus deserve a new family name can be as much art as science. It was a painstaking process, sifting through an unheard-of number of genomes and asking time and again: Is this a new variant of covid or not?
“It was pretty tedious,” she says. “But it was always really humbling. Imagine going through 20,000 sequences from 100 different places in the world. I saw sequences from places I’d never even heard of.”
As time went on, O’Toole struggled to keep up with the volume of new genomes to sort and name.
In June 2020, there were over 57,000 sequences stored in the GISAID database, and O’Toole had sorted them into 39 variants. By November 2020, a month after she was supposed to turn in her thesis, O’Toole took her last solo run through the data. It took her 10 days to go through all the sequences, which by then numbered 200,000. (Although covid has overshadowed her research on other viruses, she’s putting a chapter on Pango in her thesis.)
Fortunately, the Pango software is built to be collaborative, and others have stepped up. An online community—the one that Jolly turned to when she noticed the variant sweeping across India—sprouted and grew. This year, O’Toole’s work has been much more hands-off. New lineages are now designated mostly when epidemiologists around the world contact O’Toole and the rest of the team through Twitter, email, or GitHub—her preferred method.
“Now it’s more reactionary,” says O’Toole. “If a group of researchers somewhere in the world is working on some data and they believe they’ve identified a new lineage, they can put in a request.”
The deluge of data has continued. This past spring, the team held a “pangothon,” a sort of hackathon in which they sorted 800,000 sequences into around 1,200 lineages.
“We gave ourselves three solid days,” says O’Toole. “It took two weeks.”
Since then, the Pango team has recruited a few more volunteers, like UCSC researcher Hindriks and Yale researcher Brito, who both got involved initially by adding their two cents on Twitter and the GitHub page. A postdoc at the University of Cambridge, Chris Ruis, has turned his attention to helping O’Toole clear out the backlog of GitHub requests.
O’Toole recently asked them to formally join the organization as part of the newly created Pango Network Lineage Designation Committee, which discusses and makes decisions about variant names. Another committee, which includes lab leader Rambaut, makes higher-level decisions.
“We’ve got a website, and an email that’s not just my email,” O’Toole says. “It’s become a lot more formalized, and I think that will really help it scale.”

The future
A few cracks around the edges have started to show as the data has grown. As of today, there are nearly 2.5 million covid sequences in GISAID, which the Pango team has split into 1,300 branches. Each branch corresponds to a variant. Of those, eight are ones to watch, according to the WHO.
With so much to process, the software is starting to buckle. Things are getting mislabeled. Many strains look similar, because the virus evolves the most advantageous mutations over and over again.
As a stopgap measure, the team has built new software that uses a different sorting method and can catch things that Pango may miss.
It’s important to remember, though, that no system has ever dealt with such a deluge of data on how viruses morph. Covid has become the most-watched virus of all time. It’s also the first time scientists have been able to see exactly how the virus changes as it moves between countries.
“All this was possible because people were sharing their data, people were sharing their tools,” says Jolly.
As scientists have found ways to communicate with one another, they’ve also had to learn about public communication. It’s been “a bit surreal,” says O’Toole, watching the media use these highly technical names.
“We’d been using this nomenclature all year long, and it’s really useful for the scientific community, but a name like B.1.1.7 definitely wasn’t designed to be on BBC News,” she says. “It’s been a big learning experience to have this level of public scrutiny.”
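Names like B.1.1.7 encode a lineage’s ancestry in their dotted suffixes: B.1.1.7 descends from B.1.1, which descends from B.1, which descends from B. As a rough illustration only (this ignores Pango’s real aliasing rules, which cap how many dotted levels a name can carry), walking up that family tree can be sketched as:

```python
def parent(lineage):
    """Return the immediate parent of a dotted lineage name, or None for a root.

    A simplified sketch of Pango-style names: "B.1.1.7" -> "B.1.1".
    Real Pango nomenclature also uses aliases, which this ignores.
    """
    head, sep, _ = lineage.rpartition(".")
    return head if sep else None

def ancestors(lineage):
    """All ancestors, from immediate parent up to the root lineage."""
    chain = []
    while (lineage := parent(lineage)) is not None:
        chain.append(lineage)
    return chain
```

Under these simplified rules, `ancestors("B.1.1.7")` yields `["B.1.1", "B.1", "B"]`, which is why a single new designation slots cleanly into the existing family tree.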
Behind the scenes, the Pango team continues to track the evolution of covid so that scientists around the globe can work together on stopping the pandemic.
Says Brito: “The media is talking all the time about the delta variant, the alpha variant. CNN Brazil is talking about the genomes being sequenced and saying, ‘The lineage will be assigned and we’ll get a report in a few days’ … It would have been unimaginable two years ago.”
This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.
In May, the longtime coronavirus researcher Ralph Baric found himself at the center of the swirling debate over gain-of-function research, in which scientists engineer new properties into existing viruses. During a congressional hearing, Senator Rand Paul of Kentucky implied that the National Institutes of Health had been funding such research at both the Wuhan Institute of Virology and Baric’s University of North Carolina lab, and that the two labs had been collaborating to make “superviruses.”
Baric released a statement clarifying that according to the NIH, the research in question did not qualify as gain-of-function, none of the SARS-like coronaviruses he’d used in the experiments were closely related to SARS-CoV-2 (the original virus behind the covid pandemic), and his collaboration with the Wuhan Institute of Virology had been minimal.
Yet that did little to quell questions about the role Baric’s research may have played in furthering scientists’ ability to modify coronaviruses in potentially dangerous ways. Such questions have dogged Baric since 2014, when he became the reluctant spokesperson for gain-of-function research after the NIH declared a moratorium on such experiments until their safety could be assessed, temporarily halting his work.
Baric believes such research is essential to the development of vaccines and other countermeasures against emerging viruses, a project he has been engaged in for more than 20 years. That work has made him the country’s foremost expert on coronaviruses, and his high-security UNC lab has been a center of the US response to the pandemic, testing numerous drug candidates for other labs that lack the biosafety clearance or the expertise.
His research laid the groundwork for the first approved anti-covid drug and helped speed the development of the mRNA vaccines that have proved so pivotal. Recently, his lab announced the creation of the world’s first pan-coronavirus mRNA vaccine.
Yet Baric also pioneered the reverse-genetics techniques that have allowed other researchers, including those at the Wuhan Institute of Virology, to engineer viruses with altered functions. Some scientists fear that the technique, which allows coronaviruses to be recreated from their genetic code, could engender a future pandemic, and other critics, like Senator Paul, imply it might have led to the creation or release of SARS-CoV-2.
MIT Technology Review recently asked Baric to explain what constitutes a gain-of-function experiment, why such research exists, and whether it could have played any role in the pandemic. The interview has been edited and shortened for clarity.

Q: Now that Rand Paul has announced on the floor of the Senate that you’re creating superviruses and performing gain-of-function experiments, this seems like a good time to talk about your work.
Ralph Baric: Well, let me start off by saying that we’ve never created a supervirus. That’s a figment of his imagination and obviously being used for political advancement. Unfortunately, the way social media works today, this fabrication will be repeated many times.

Q: How do you define gain-of-function research?
Human beings have practiced gain-of-function for the last 2,000 years, mostly in plants, where farmers would always save the largest seeds from the healthiest plants to replant the following year. The reason we can manage to have 7 billion people here on the planet is basically through direct or indirect genetic engineering through gain-of-function research. The simple definition of gain-of-function research is the introduction of a mutation that enhances a gene’s function or property—a process used commonly in genetic, biologic, and microbiologic research.
In virology, historically, attenuated vaccines were generated by gain-of-function studies, which took human virus pathogens and adapted them for improved growth in cell culture, which reduced virus virulence in the natural human host.
So gain-of-function has been used in virology and microbiology for decades as a part of the scientific method. But that classic definition and purpose changed in 2011 and 2012, when researchers in Wisconsin and the Netherlands were funded to do gain-of-function research on avian flu transmissibility.

Q: Those were the experiments that took H5N1, which had a high mortality rate in humans but low transmissibility, and made it highly transmissible through respiratory avenues.
The NIH, the FDA, the CDC, and the WHO all held meetings to identify the critical topics in influenza research that were least understood. What information and insight would better prepare us for flu pandemics that emerge from animal reservoirs in the future? The number-one conclusion was that we needed to understand the genetics and biology of flu emergence and transmission.
In response, the NIH called for proposals. Two researchers responded and were funded, and they discovered genetic changes that regulated H5N1 transmissibility in ferrets.
After that, they were labeled as rogue scientists, and gain-of-function was defined in negative terms. But in fact, they were working within the confines of the global health community’s interests.
Then again, the other side argues that regardless of how safe your BSL-3 or BSL-4 research infrastructure is, human beings are not infallible. [Pathogen labs are assigned a biosafety level rating of 1 to 4, with 4 being the highest.] They make mistakes, even in high-containment facilities. Consequently, the risks may outweigh the benefits of the experiment. Both sides of the argument have justified concerns and points of view.

Q: In addition to concerns over a lab escape, there were also concerns about whether the knowledge of how to do such experiments might fall into the wrong hands.
That’s certainly part of the issue. And there was a fair amount of debate about whether that information [about genetic changes associated with flu transmission] should be made public. There are two or three instances in the virology literature of papers that are a potential concern.
Some consider my 2015 paper in this light, although after consultation with the NIH and the journal, we purposely did not provide the genetic sequence of the chimera in the original publication. Thus, our exact method remained obscure.
[Baric is referring to a 2015 collaboration with Zhengli Shi of the Wuhan Institute of Virology, or WIV, in China, which created a so-called chimera by combining the “spike” gene from a new bat virus with the backbone of a second virus. The spike gene determines how well a virus attaches to human cells. A detailed discussion of the research to test novel spike genes appears here.]
However, the sequence was repeatedly requested after the covid-19 pandemic emerged, and so after discussion with the NIH and the journal, it was provided to the community. Those who analyzed these sequences stated that it was very different from SARS-CoV-2.

Q: How did that chimeric work on coronaviruses begin?
Around 2012 or 2013, I heard Dr. Shi present at a meeting. [Shi’s team had recently discovered two new coronaviruses in a bat cave, which they named SHC014 and WIV1.] We talked after the meeting. I asked her whether she’d be willing to make the sequences to either the SHC014 or the WIV1 spike available after she published.
And she was gracious enough to send us those sequences almost immediately—in fact, before she’d published. That was her major contribution to the paper. And when a colleague gives you sequences beforehand, coauthorship on the paper is appropriate.
That was the basis of that collaboration. We never provided the chimeric virus sequence, clones, or viruses to researchers at the WIV; and Dr. Shi, or members of her research team, never worked in our laboratory at UNC. No one from my group has worked in WIV laboratories.

Q: And you had developed a reverse-genetics technique that allowed you to synthesize those viruses from the genetic sequence alone?
Yes, but at the time, DNA synthesis costs were expensive—around a dollar per base [one letter of DNA]. So synthesizing a coronavirus genome could cost $30,000. And we only had the spike sequence. Synthesizing just the 4,000-nucleotide spike gene cost $4,000. So we introduced the authentic SHC014 spike into a replication-competent backbone: a mouse-adapted strain of SARS. The virus was viable, and we discovered that it could replicate in human cells.
So is that gain-of-function research? Well, the SARS coronavirus parental strain could replicate quite efficiently in primary human cells. The chimera could also program infection of human cells, but not better than the parental virus. So we didn’t gain any function—rather, we retained function. Moreover, the chimera was attenuated in mice as compared to the parental mouse-adapted virus, so this would be considered a loss of function.

Q: One of the knocks against gain-of-function research—including this research—is that the work has little practical value. Would you agree?
Well, by 2016, using chimeras and reverse genetics, we had identified enough high-risk SARS-like coronaviruses to be able to test and identify drugs that have broad-based activity against coronaviruses. We identified remdesivir as the first broad-based antiviral drug that worked against all known coronaviruses, and published on it in 2017. It immediately was entered into human trials and became the first FDA-approved drug for treating covid-19 infections globally. A second drug, called EIDD-2801, or molnupiravir, was also shown to be effective against all known coronaviruses prior to the 2020 pandemic, and then shown to work against SARS-CoV-2 by March 2020.
Consequently, I disagree. I would ask critics if they had identified any broad-spectrum coronavirus drugs prior to the pandemic. Can they point to papers from their laboratories documenting a strategic approach to develop effective pan-coronavirus drugs that turned out to be effective against an unknown emerging pandemic virus?
Unfortunately, remdesivir could only be delivered by intravenous injection. We were moving toward an oral-based delivery formulation, but the covid-19 pandemic emerged. I really wish we’d had an oral-based drug early on. That’s the game-changer that would help people infected in the developing world, as well as citizens in the US.
Molnupiravir is an oral medication, and phase 3 trials demonstrate rapid control of viral infection. It’s been considered for emergency-use authorization in India.
Finally, the work also supported federal policy decisions that prioritized basic and applied research on coronaviruses.

Q: What about vaccines?
Around 2018 to 2019, the Vaccine Research Center at NIH contacted us to begin testing a messenger-RNA-based vaccine against MERS-CoV [a coronavirus that sometimes spreads from camels to humans]. MERS-CoV has been an ongoing problem since 2012, with a 35% mortality rate, so it has real global-health-threat potential.
By early 2020, we had a tremendous amount of data showing that in the mouse model that we had developed, these mRNA spike vaccines were really efficacious in protecting against lethal MERS-CoV infection. If designed against the original 2003 SARS strain, it was also very effective. So I think it was a no-brainer for NIH to consider mRNA-based vaccines as a safe and robust platform against SARS-CoV-2 and to give them a high priority moving forward.
Most recently, we published a paper showing that multiplexed, chimeric spike mRNA vaccines protect against all known SARS-like virus infections in mice. Global efforts to develop pan-sarbecovirus vaccines [sarbecovirus is the subgenus to which SARS and SARS-CoV-2 belong] will require us to make viruses like those described in the 2015 paper.
So I would argue that anyone saying there was no justification to do the work in 2015 is simply not acknowledging the infrastructure that contributed to therapeutics and vaccines for covid-19 and future coronaviruses.

Q: The work only has value if the benefits outweigh the risks. Are there safety standards that should be applied to minimize those risks?
Certainly. We do everything at BSL-3 plus. The minimum requirements at BSL-3 would be an N95 mask, eye protection, gloves, and a lab coat, but we actually wear impervious Tyvek suits, aprons, and booties and are double-gloved. Our personnel wear hoods with PAPRs [powered air-purifying respirators] that supply HEPA-filtered air to the worker. So not only are we doing all research in a biological safety cabinet, but we also perform the research in a negative-pressure containment facility, which has lots of redundant features and backups, and each worker is encased in their own private personal containment suit.
Another thing we do is to run emergency drills with local first responders. We also work with the local hospital. With many laboratory infections, there’s actually no known event that caused that infection to occur. And people get sick, right? You have to have medical surveillance plans in place to rapidly quarantine people at home, to make sure they have masks and communicate regularly with a doctor on campus.

Q: Is all that standard for other facilities in the US and internationally?
No, I don’t think so. Different places have different levels of BSL-3 containment operations, standard operating procedures, and protective gear. Some of it is dependent on how deep your pockets are and the pathogens studied in the facility. An N95 is a lot cheaper than a PAPR.
Internationally, the US has no say over what biological safety conditions are used in China or any other sovereign nation to conduct research on viruses, be they coronaviruses or Nipah, Hendra, or Ebola.

Q: The Wuhan Institute of Virology was making chimeric coronaviruses, using techniques similar to yours, right?
Let me make it clear that we never sent any of our molecular clones or any chimeric viruses to China. They developed their own molecular clone, based on WIV1, which is a bat coronavirus. And into that backbone they shuffled in the spike genes of other bat coronaviruses, to learn how well the spike genes of these strains can promote infection in human cells.

Q: Would you call that gain-of-function?
A committee at NIH makes determinations of gain-of-function research. The gain-of-function rules are focused on viruses of pandemic potential and experiments that intend to enhance the transmissibility or pathogenesis of SARS, MERS, and avian flu strains in humans. WIV1 is approximately 10% different from SARS. Some argue that “SARS coronavirus” by definition covers anything in the sarbecovirus subgenus. By this definition, the Chinese might be doing gain-of-function experiments, depending on how the chimera behaves. Others argue that SARS and WIV1 are different, and as such the experiments would be exempt. Certainly, the CDC considers SARS and WIV1 to be different viruses. Only the SARS coronavirus from 2003 is a select agent. Ultimately, a committee at the NIH is the final arbiter and makes the decision about what is or is not a gain-of-function experiment.

Q: Definitions aside, we know they were doing the work in BSL-2 conditions, which is a much lower safety level than your BSL-3 plus.
Historically, the Chinese have done a lot of their bat coronavirus research under BSL-2 conditions. Obviously, the safety standards of BSL-2 are different than BSL-3, and lab-acquired infections occur much more frequently at BSL-2. There is also much less oversight at BSL-2.

Q: This year, a joint commission of the World Health Organization and China said it was extremely unlikely that a lab accident had caused SARS-CoV-2. But you later signed a letter with other scientists calling for a thorough investigation of all possible causes. Why was that?
One of the reasons I signed the letter in Science was that the WHO report didn’t really discuss how work was done in the WIV laboratory, or what data the expert panel reviewed to come to the conclusion that it was “very unlikely” that a laboratory escape or infection was the cause of the pandemic.
There must be some recognition that a laboratory infection could have occurred under BSL-2 operating conditions. Some unknown viruses pooled from guano or oral swabs might replicate or recombine with others, so you could get new strains with unique and unpredictable biological features.
And if all this research is being performed at BSL-2, then there are questions that need to be addressed. What are the standard operating procedures in the BSL-2? What are the training records of the staff? What is the history of potential exposure events in the lab, and how were they reviewed and resolved? What are the biosafety procedures designed to prevent potential exposure events?
Living in a community, workers will be infected with pathogens from the community. Respiratory infections occur frequently. No one is exempt. What are the biosafety procedures used to deal with these complications? Do they quarantine workers who develop fevers? Do they continue to work in the lab or are they quarantined at home with N95 masks? What procedures are in place to protect the community or local hospitals if an exposed person becomes ill? Do they use mass transit?
This is just a handful of the questions that should have been reviewed in the WHO document, providing actionable evidence regarding the likelihood of a laboratory-acquired-infection origin.

Q: Should they have been doing such experiments in a BSL-2 lab?
I would not. However, I don’t set the standard for the US or any other country. There’s definitely some risk associated with these and other SARS-like bat viruses that can enter human cells.
We also know that people who live near bat hibernacula [bat caves] have tested positive for antibodies against SARS-like bat viruses, so some of these viruses clearly can infect humans. While we have no idea whether they could actually cause severe disease or transmit from person to person, you want to err on the side of increased caution when working with these pathogens.
As a sovereign nation, China decides their own biological safety conditions and procedures for research, but they should also be held accountable for those decisions, just like any other nation that conducts high-containment biological research. As other nations develop BSL-3 facilities and begin to conduct high-containment research, each will have to make fundamental decisions about what kind of containment they use for different viruses and bacteria, along with the underlying biosafety procedures.
This is serious stuff. Global standards need to exist, especially for understudied emerging viruses. If you study hundreds of different bat viruses at BSL-2, your luck may eventually run out.

Q: Do you think their luck ran out?
The possibility of accidental escape still remains and cannot be excluded, so further investigation and transparency is critical, but I personally feel that SARS-CoV-2 is a natural pathogen that emerged from wildlife. Its closest relatives are bat strains. Historical precedent argues that all other human coronaviruses emerged from animals. No matter how many bat viruses are at the WIV, nature has many, many more.
At this time, there’s really no strong and actionable data that argues that the virus was engineered and escaped containment. As the pathogenesis of SARS-CoV-2 is so complex, the thought that anybody could engineer it is almost ludicrous.
When you think about the diversity of SARS-related strains that exist in nature, it’s not hard to imagine a strain that would have the complex and unpredictable biological features of SARS-CoV-2. As scientists, we tend to do experiments, read the literature, and then think we understand how nature works. We make definitive statements regarding how coronaviruses are supposed to emerge from animal reservoirs, based on one or two examples. But nature has many secrets, and our understanding is limited. Or as they said in Game of Thrones, “You know nothing, Jon Snow.”

Q: In addition to the WIV and you, are other groups doing coronavirus engineering?
Before covid-19, there were probably three to four main groups globally. That’s changed dramatically. Now the number of labs doing coronavirus genetics is likely three or four times higher and continuing to increase. That proliferation is unsettling, because it allows many inexperienced groups, globally, to make decisions about building and isolating chimeras or natural zoonotic [viruses].
By “inexperienced,” I mean that they are applying previous discoveries and approaches in the coronavirus field, but perhaps with less respect for the inherent risk posed by this group of pathogens.
People are making chimeras right now for the variants of concern, and each of those variants is providing new insights into human transmissibility and pathogenesis.

Q: So the virus itself is contributing to gain-of-function knowledge?
The virus is a master at finding better ways to outcompete its ancestors in humans. And each of these successful SARS-CoV-2 variants outcompetes the old variants and reveals the underlying genetics that regulate increased transmissibility and/or pathogenesis. And that information is being learned in a real-time setting and in humans, as compared to the avian-flu-transmission scenario, which was conducted under controlled artificial conditions in ferrets. I would argue that the real-time knowledge is more relevant and perhaps more unsettling than the research conducted in animal models under high containment.
Given our scientific capabilities today, every new emerging virus that causes an outbreak in the future can be studied at this level of granularity. That is unprecedented. Each could provide a classic recipe for potential dual-use applications in other strains. [Dual-use biological research is that which can be used to develop both therapeutics and bioweapons.]

Q: Anything else about this that keeps you up at night?
The number of zoonotic coronaviruses that are poised to jump species is a major concern. That’s not going away.
Also, the biology of this virus is such that its virulence will most likely continue to increase rather than decrease, at least in the short term.

Q: Why is that?
The transmission events occur early, while the most severe disease occurs late, after the virus is being cleared from the body. That means transmission and severe disease and death are partially uncoupled, biologically. Consequently, it doesn’t hurt the virus to increase its virulence.
If you are one of the people waiting to get the vaccine, your risk is going up with each new variant. These variants are dangerous. They want to reproduce and spread and show increased pathogenesis, even in younger adults. They have little concern for you or your family’s health and welfare, so get vaccinated.
That is the saddest thing about the pandemic. For an effective public health response, you need to respond as a national and global community with one voice. You must believe in the power of public health and public health procedures. Politics has no place in a pandemic, but that is what we ended up with—politically inspired mixed messaging.
How did that work out for America? Did we get diagnostics online quickly? No! Did we use the two-to-three-month lead time to stock hospitals with PPE or respirators? No. Rather, Americans received the message that the virus wasn’t dangerous, that it would go away or that the summer heat would destroy it. We heard rumors that mask wearing was detrimental, or that unproven drugs were miracle cures.
Some say that the true tragedy is the hundreds of thousands of Americans who didn’t need to die [but did] because the greatest nation in the world did not respond to a pandemic in a unified, science-based manner. Taiwan responded with a unified public health response and had only handfuls of cases and few deaths. The US led the world in deaths and numbers of cases. Why are the failures leading to the deaths of hundreds of thousands of Americans not the subject of rigorous investigation?
NASA’s InSight robotic lander has just given us our first look deep inside a planet other than Earth.
More than two years after its launch, seismic data that InSight collected has given researchers hints into how Mars was formed, how it has evolved over 4.6 billion years, and how it differs from Earth. A set of three new studies, published in Science this week, suggests that Mars has a thicker crust than expected, as well as a molten liquid core that is bigger than we thought.
In the early days of the solar system, Mars and Earth were pretty much alike, each with a blanket of ocean covering the surface. But over the following 4 billion years, Earth became temperate and perfect for life, while Mars lost its atmosphere and water and became the barren wasteland we know today. Finding out more about what Mars is like inside might help us work out why the two planets had such very different fates.
“By going from [a] cartoon understanding of what the inside of Mars looks like to putting real numbers on it,” said Mark Panning, project scientist for the InSight mission, during a NASA press conference, “we are able to really expand the family tree of understanding how these rocky planets form and how they’re similar and how they’re different.”
Since InSight landed on Mars in 2018, its seismometer, which sits on the surface of the planet, has picked up more than a thousand distinct quakes. Most are so small they would be unnoticeable to someone standing on Mars’s surface. But a few were big enough to help the team get the first true glimpse of what’s happening underneath.
Marsquakes create seismic waves that the seismometer detects. Researchers created a 3D map of Mars using data from two different kinds of seismic waves: shear and pressure waves. Shear waves, which can only pass through solids, are reflected off the planet’s surface.
Pressure waves are faster and can pass through solids, liquids, and gases. Measuring the differences between the times that these waves arrived allowed the researchers to locate quakes and gave clues to the interior’s composition.
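The arrival-time idea can be shown with a back-of-the-envelope calculation. In a simplified single-station, straight-path model, the lag between the faster pressure (P) wave and the slower shear (S) wave grows with distance, so the lag alone yields a distance estimate. The velocity values below are illustrative placeholders, not InSight’s actual velocity models:

```python
def quake_distance_km(sp_lag_s, vp_km_s=8.0, vs_km_s=4.5):
    """Distance implied by the S-minus-P arrival lag at one station.

    With t_p = d / vp and t_s = d / vs, the lag is
    t_s - t_p = d * (1/vs - 1/vp), which we solve for d.
    Wave speeds here are illustrative, not measured Martian values.
    """
    return sp_lag_s / (1.0 / vs_km_s - 1.0 / vp_km_s)
```

Under these assumed speeds, a 60-second lag would place a quake roughly 600 km away; the actual studies refine such estimates using waves reflected off the surface and the core.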
One team, led by Simon Stähler, a seismologist at ETH Zurich, used data generated by 11 bigger quakes to study the planet’s core. From the way the seismic waves reflected off the core, they concluded it’s made from liquid nickel-iron, and that it’s far larger than had been previously estimated (between 2,230 and 2,320 miles wide) and probably less dense.
Another team, led by Amir Khan, a scientist at the Institute of Geophysics at ETH Zurich and at the Physics Institute at the University of Zurich, looked at the Martian mantle, the layer that sits between the crust and the core. They used the data to determine that Mars’s lithosphere—while similar in chemical composition to Earth’s—lacks tectonic plates. It is also thicker than Earth’s by about 56 miles.
This extra thickness was most likely “the result of early magma ocean crystallization and solidification,” meaning that Mars may have been quickly frozen at a key point in its formative years, the team suggests.
A third team, led by Brigitte Knapmeyer-Endrun, a planetary seismologist at the University of Cologne, analyzed the Martian crust, the layer of rocks at its surface. They found that while the crust is likely very deep, it is also thinner than the team expected.
“That’s intriguing because it points to differences in the interior of the Earth and Mars, and maybe they are not made of exactly the same stuff, so they were not built from exactly the same building blocks,” says Knapmeyer-Endrun.
The InSight mission will come to an end next year after its solar cells are unable to produce any more power, but in the meantime, it’s possible even more of Mars’s inner secrets will be unveiled.
“Regarding seismology and InSight, there are also still many open questions for the extended mission,” says Knapmeyer-Endrun.
Oscar Maung-Haley, 24, was working a part-time job in a bar in Manchester, England, when his phone pinged. It was the UK’s NHS Test and Trace app letting him know he’d potentially been exposed to covid-19 and needed to self-isolate. The news immediately caused problems. “It was a mad dash around the venue to show my manager and say I had to go,” he says.
The alert he got was one of hundreds of thousands being sent out every week as the UK battles its latest wave of covid, which means more and more people face the same logistical, emotional, and financial challenges. An estimated one in five users have deleted the app altogether—after all, you can’t get a notification if you don’t have it on your phone. The phenomenon is being dubbed a “pingdemic” on social media, blamed for everything from gas shortages to bare store shelves.
The ping deluge reflects the collision of several developments. The delta variant, which appears much easier to spread than others, has swept across the UK. At the same time, record numbers of Britons have downloaded the NHS app. Meanwhile, the UK has dropped many of its lockdown restrictions, so more people are coming into more frequent contact than before. More infections, more users, more contact: more pings.
But that’s exactly how it’s supposed to work, says Imogen Parker, policy director for the Ada Lovelace Institute, which studies AI and data policies. In fact, even with so many notifications being sent, there are still many infections that the system is not catching.
“More than 600,000 people have been told to isolate by the NHS covid-19 app across the week of July 8 in England and Wales,” she says, “but that’s only a little more than double the number of new positive cases in the same period. While we had concerns about the justification for the contact tracing app, criticizing it for the ‘pingdemic’ is misplaced: the app is essentially working as it always has been.”
Christophe Fraser, an epidemiologist at the University of Oxford’s Big Data Institute who has done the most prominent studies on the effectiveness of the app, says that while it is functioning as designed, there’s another problem: a significant breakdown in the social contract. “People can see, on TV, there are raves and nightclubs going on. Why am I being told to stay home? Which is a fair point, to be honest,” he says.
It’s this lack of clear, fair rules, he says, that is leading to widespread frustration as people are told to self-isolate. As we’ve seen throughout the pandemic, public health technology is deeply intertwined with everything around it—the way it’s marketed, the way it’s talked about in the media, the way it’s discussed by your physician, the way it’s supported (or not) by lawmakers.
“People do want to do the right thing,” Fraser says. “They need to be met halfway.”

How we got here
Exposure notification apps are a digital public health tactic pioneered during the pandemic—and they’ve already weathered a lot of criticism from those who say that they didn’t get enough use. Dozens of countries built apps to alert users to covid exposure, sharing code and using a framework developed jointly by Google and Apple. But amid criticism over privacy worries and tech glitches, detractors charged that the apps had launched too late in the pandemic—at a time when case numbers were too high for tech to turn back the tide.
So shouldn’t this moment in the UK—when technical glitches have been ironed out, when adoption is high, and with a new wave spiking—be the right time for its app to make a real difference?
Not if people don’t voluntarily follow the instructions to isolate, says Jenny Wanger, who leads covid-related tech initiatives for Linux Foundation Public Health.
Eighteen months into the pandemic, “the tech is not usually a challenge,” she says. “The science is not as much of a challenge … we know, at this point, how covid transmission works. The challenge comes around the behavior. The hardest parts of the system are the parts where you need to convince people to do something—of course, based on best practices.”
Oxford’s Fraser says that he thinks about it in terms of incentives. For the average person, he says, the incentives for adhering to the rules of contact tracing—digital or otherwise—don’t always add up.
If the result of using the app is that “you end up being quarantined but your neighbor who hasn’t installed the app doesn’t get quarantined,” he says, “that doesn’t necessarily feel fair, right?”
To make matters even more complicated, the UK has announced that it’s about to change its rules. In mid-August, people who have received two doses of a vaccine will no longer need to self-isolate because of covid exposure; they’ll only need to do so if they test positive. About half of the country’s adult population is fully vaccinated.
That could be a moment to bring incentives more in line with what people would be willing to do, he says. “Maybe people should be offered tests so that they can keep going to work and get on with life, rather than be isolated for a number of days.”
In the meantime, though, a handful of corporate leaders—the head of a budget airline, for example—have encouraged employees to delete the app to avoid the pings. Even the two most powerful politicians in the country, Prime Minister Boris Johnson and Chancellor Rishi Sunak, tried to skirt the requirement to isolate after being pinged (saying they were taking part in a trial of alternative measures) before public outcry forced them into quarantine.

When protection creates confusion
The mixed messages are compounded by the app’s privacy-protecting functions. Users aren’t told who among their contacts may have infected them—and they’re not told where any interactions happened. But that isn’t an accident: the apps were designed that way to safeguard people’s information.
“In epidemiology, surveillance is a noble thing,” says Fraser. “In digital tech, it’s a darker thing. I think the privacy-preserving protocol got the balance right. It’s incumbent on science and epidemiology to get information to people while preserving that privacy.”
Be that as it may, those privacy protections are now creating even more confusion.
Alistair Scott, 38, lives with his fiancée in North London. The couple did everything together during lockdown—yet Scott recently got a notification telling him he needed to isolate, while his partner did not. “It immediately became this game of ‘Why did I get pinged and you didn’t?’” he says.

What’s next
Experts say that there are a few ways forward. One could be to tweak the algorithm: the app could incorporate new science about the length of covid exposure that might merit a ping even if you’re vaccinated.
“Emerging evidence looks like full vaccination should decrease the risk that someone transmits the virus by around half,” says Parker of the Ada Lovelace Institute. “That could have a sizeable impact on alerts if it was built into the model.”
That means alerts could become less frequent for vaccinated people.
On the other hand, Wanger says that NHS leaders could adjust settings to be more sensitive, to reflect the increased transmission risk of variants like delta. There’s no indication that such changes have been made yet.
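The tweaks described above amount to adjusting the weights and thresholds in the app’s risk model. The sketch below shows the general idea only; the weights, the attenuation cutoffs, the ping threshold, and the notion of discounting a vaccinated contact are all illustrative assumptions, not the NHS app’s actual parameters.

```python
# Hypothetical sketch of an exposure-notification risk model.
# All numbers below are illustrative assumptions, not the NHS app's
# real configuration.

def exposure_risk(minutes_near: float,
                  attenuation_db: float,
                  contact_vaccinated: bool = False) -> float:
    """Return a risk score for a single contact event."""
    # Lower Bluetooth attenuation means closer contact, so it weighs more.
    if attenuation_db < 55:
        proximity_weight = 1.0   # close contact
    elif attenuation_db < 70:
        proximity_weight = 0.5   # medium distance
    else:
        proximity_weight = 0.0   # too far to matter
    risk = minutes_near * proximity_weight
    # Building vaccination into the model, as Parker suggests,
    # might roughly halve the estimated transmission risk.
    if contact_vaccinated:
        risk *= 0.5
    return risk

PING_THRESHOLD = 15.0  # e.g. the equivalent of 15 minutes of close contact

def should_ping(events) -> bool:
    """Ping if total risk across all contact events crosses the threshold."""
    return sum(exposure_risk(*event) for event in events) >= PING_THRESHOLD
```

Under these made-up numbers, 20 minutes next to an unvaccinated contact triggers a ping, while the same contact with a vaccinated person does not—which is exactly why changing such settings changes how many pings go out.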
Either way, she says, what’s important is that the app keep doing its job.
“As a public health authority, when you’re looking at cases rising dramatically within your country, and you’re trying to pursue economic goals by lifting lockdown restrictions—it’s a really hard position to be in,” Wanger says. “You want to nudge people to do behavior changes, but you’ve got this whole psychology aspect to it. If people get notification fatigue, they are not going to change their behavior.”
Meanwhile, people are still being pinged, still feeling confused—and still hearing mixed messages.
Charlotte Wilson, 39, and her husband both downloaded the app onto their phones almost as soon as it was available. But there’s been a split in the household, especially since lawmakers were seen apparently trying to avoid the rules. Faced with the prospect of being told to self-isolate, Wilson said she would follow the advice, while her partner felt differently and deleted the app completely.
“My husband thought [over the weekend], ‘You know what? This is ridiculous,’” she says. The impending change in self-isolation protocol made it seem especially fruitless.
Still, she understands his view, even if she’s personally keeping the app on her phone.
“I don’t really know what the answer is as far as society’s concerned,” she says. “We’re just riddled with covid.”
This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.
Back in December 2020, DeepMind took the world of biology by surprise when it solved a 50-year grand challenge with AlphaFold, an AI tool that predicts the structure of proteins. Last week the London-based company published full details of that tool and released its source code.
Now the firm has announced that it has used its AI to predict the shapes of nearly every protein in the human body, as well as the shapes of hundreds of thousands of other proteins found in 20 of the most widely studied organisms, including yeast, fruit flies, and mice. The breakthrough could allow biologists from around the world to understand diseases better and develop new drugs.
So far the trove consists of 350,000 newly predicted protein structures. DeepMind says it will predict and release the structures for more than 100 million more in the next few months—more or less all proteins known to science.
“Protein folding is a problem I’ve had my eye on for more than 20 years,” says DeepMind cofounder and CEO Demis Hassabis. “It’s been a huge project for us. I would say this is the biggest thing we’ve done so far. And it’s the most exciting in a way, because it should have the biggest impact in the world outside of AI.”
Proteins are made of long ribbons of amino acids, which twist themselves up into complicated knots. Knowing the shape of a protein’s knot can reveal what that protein does, which is crucial for understanding how diseases work and developing new drugs—or identifying organisms that can help tackle pollution and climate change. Figuring out a protein’s shape takes weeks or months in the lab. AlphaFold can predict shapes to the nearest atom in a day or two.
The new database should make life even easier for biologists. AlphaFold might be available for researchers to use, but not everyone will want to run the software themselves. “It’s much easier to go and grab a structure from the database than it is running it on your own computer,” says David Baker of the Institute for Protein Design at the University of Washington, whose lab has built its own tool for predicting protein structure, called RoseTTAFold, based on AlphaFold’s approach.
In the last few months Baker’s team has been working with biologists who were previously stuck trying to figure out the shape of proteins they were studying. “There’s a lot of pretty cool biological research that’s been really sped up,” he says. A public database containing hundreds of thousands of ready-made protein shapes should be an even bigger accelerator.
“It looks astonishingly impressive,” says Tom Ellis, a synthetic biologist at Imperial College London studying the yeast genome, who is excited to try the database. But he cautions that most of the predicted shapes have not yet been verified in the lab.

Atomic precision
In the new version of AlphaFold, predictions come with a confidence score that the tool uses to flag how close it thinks each predicted shape is to the real thing. Using this measure, DeepMind found that AlphaFold predicted shapes for 36% of human proteins with accuracy down to the level of individual atoms. This is good enough for drug development, says Hassabis.
Previously, after decades of work, structures had been determined in the lab for only 17% of the proteins in the human body. If AlphaFold’s predictions are as accurate as DeepMind says, the tool has more than doubled that number in just a few weeks.
Even predictions that are not fully accurate at the atomic level are still useful. For more than half of the proteins in the human body, AlphaFold has predicted a shape that should be good enough for researchers to figure out the protein’s function. The rest of AlphaFold’s current predictions are either incorrect, or are for the third of proteins in the human body that don’t have a structure at all until they bind with others. “They’re floppy,” says Hassabis.
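In practice, researchers triage predictions using that per-residue confidence score (AlphaFold calls it pLDDT, on a 0–100 scale). The sketch below follows the confidence bands DeepMind describes, but treat the exact cutoffs and the summary-by-mean approach as illustrative rather than an official recipe.

```python
# Sketch of triaging an AlphaFold prediction by its per-residue
# confidence scores (pLDDT, 0-100). The cutoffs follow the bands
# DeepMind describes; the exact numbers here are illustrative.

def triage(plddt_scores: list) -> str:
    """Classify a predicted structure by its mean confidence."""
    mean = sum(plddt_scores) / len(plddt_scores)
    if mean > 90:
        return "atomic detail: potentially useful for drug design"
    if mean > 70:
        return "backbone likely correct: useful for inferring function"
    if mean > 50:
        return "low confidence: treat with caution"
    return "very low confidence: possibly disordered until bound"

# A uniformly high-scoring prediction lands in the top band.
print(triage([95, 92, 97, 91]))
```

This mirrors the split in the article: the 36% of human proteins in the top band, the roughly half that are good enough to suggest function, and the “floppy” remainder.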
“The fact that it can be applied at this level of quality is an impressive thing,” says Mohammed AlQuraishi, a systems biologist at Columbia University who has developed his own software for predicting protein structure. He also points out that having structures for most of the proteins in an organism will make it possible to study how these proteins work as a system, not just in isolation. “That’s what I think is most exciting,” he says.
DeepMind is releasing its tools and predictions for free and will not say whether it plans to make money from them in the future. It is not ruling out the possibility, however. To set up and run the database, DeepMind is partnering with the European Molecular Biology Laboratory, an international research institution that already hosts a large database of protein information.
For now, AlQuraishi can’t wait to see what researchers do with the new data. “It’s pretty spectacular,” he says. “I don’t think any of us thought we would be here this quickly. It’s mind-boggling.”
Mice: check. Lizards: check. Squid: check. Marsupials … check.
CRISPR has been used to modify the genes of tomatoes, humans, and just about everything in between. Because of their unique reproductive biology and their relative rarity in laboratory settings, though, marsupials had eluded the CRISPR rush—until now.
A team of researchers at Japan’s Riken Institute, a national research facility, has used the technology to edit the genes of a South American species of opossum. The results were described in a new study out today in Current Biology. The ability to tweak marsupial genomes could help biologists learn more about the animals and use them to study immune responses, developmental biology, and even diseases like melanoma.
“I’m very excited to see this paper. It’s an accomplishment that I didn’t think would perhaps happen in my lifetime,” says John VandeBerg, a geneticist at the University of Texas Rio Grande Valley, who was not involved in the study.
The difficulties of genetically modifying marsupials had less to do with CRISPR than with the intricacies of marsupial reproductive biology, says Hiroshi Kiyonari, the lead author of the new study.
While kangaroos and koalas are better known, researchers who study marsupials often use opossums in lab experiments, since they’re smaller and easier to care for. Gray short-tailed opossums, the species used in the study, are related to the white-faced North American opossum, but they’re smaller and don’t have a pouch.
The researchers at Riken used CRISPR to delete, or knock out, a gene that codes for pigment production. Targeting this gene meant that if the experiments worked, the results would be obvious at a glance: the opossums would be albino if both copies of the gene were knocked out, and mottled, or mosaic, if a single copy was deleted.
The resulting litter included one albino opossum and one mosaic opossum (pictured above). The researchers also bred the two, which resulted in a litter of fully albino opossums, showing that the coloring was an inherited genetic trait.
The researchers had to navigate a few hurdles to edit the opossum genome. First, they had to work out the timing of hormone injections to get the animals ready for pregnancy. The other challenge was that marsupial eggs develop a thick layer around them, called a mucoid shell, soon after fertilization. This makes it harder to inject the CRISPR treatment into the cells. In their first attempts, needles either would not penetrate the cells or would damage them so the embryos couldn’t survive, Kiyonari says.
The researchers realized that it would be a lot easier to do the injection at an earlier stage, before the coating around the egg got too tough. By changing when the lights turned off in the labs, researchers got the opossums to mate later in the evening so that the eggs would be ready to work with in the morning, about a day and a half later.
The researchers then used a tool called a piezoelectric drill, which uses rapid, electrically driven vibrations to penetrate the membrane more easily. This helped them inject the cells without damaging them.
“I think it’s an incredible result,” says Richard Behringer, a geneticist at the University of Texas. “They’ve shown it can be done. Now it’s time to do the biology,” he adds.
Opossums have been used as laboratory animals since the 1970s, and researchers have attempted to edit their genes for at least 25 years, says VandeBerg, who started trying to create the first laboratory opossum colony in 1978. They were also the first marsupial to have their genome fully sequenced, in 2007.
Comparative biologists hope the ability to genetically modify opossums will help them learn more about some of the unique aspects of marsupial biology that have yet to be decoded. “We find genes in marsupial genomes that we don’t have, so that creates a bit of a mystery as to what they’re doing,” says Rob Miller, an immunologist at the University of New Mexico, who uses opossums in his research.
Most vertebrates have two types of T cells, one of the components of the immune system (and lizards only have one type). But marsupials, including opossums, have a third type, and researchers aren’t sure what they do or how they work. Being able to remove the cells and see what happens, or knock out other parts of the immune system, might help them figure out what this mystery cell is doing, Miller says.
Opossums are also used as models for some human diseases. They’re among the few mammals that get melanoma (a skin cancer) like humans.
Another interesting characteristic of opossums is that they are born after only 14 days, as barely more than balls of cells with forearms to help them crawl onto their mother’s chest. These little jelly beans then develop their eyes, back limbs, and a decent chunk of their immune system after they’re already out in the world.
Since so much of their development happens after birth, studying and manipulating their growth could be much easier than doing similar work in other laboratory animals like mice. Kiyonari says his team is looking for other ways to tweak opossum genes to study the animals’ organ development.
Miller and other researchers are hopeful that gene-edited opossums will help them make new discoveries about biology and about ourselves. “Sometimes comparative biology reveals what’s really important,” he says. “Things that we have in common must be fundamental, and things that are different are interesting.”
Some companies that create AI-powered hiring games, like Pymetrics and Arctic Shores, claim that they limit bias in hiring. But such games can be especially difficult to navigate for job seekers with disabilities.
In the latest episode of MIT Technology Review’s podcast “In Machines We Trust,” we explore how AI-powered hiring games and other tools may exclude people with disabilities. And while many people in the US are looking to the federal commission responsible for employment discrimination to regulate these technologies, the agency has yet to act.
To get a closer look, we asked Henry Claypool, a disability policy analyst, to play one of Pymetrics’s games. Pymetrics measures nine skills, including attention, generosity, and risk tolerance, that CEO and cofounder Frida Polli says relate to job success.
When it works with a company looking to hire new people, Pymetrics first asks the company to identify people who are already succeeding at the job it’s trying to fill and has them play its games. Then, to identify the skills most specific to the successful employees, it compares their game data with data from a random sample of players.
When he signed on, the game prompted Claypool to choose between a modified version—designed for those with color blindness, ADHD, or dyslexia—and an unmodified version. This question poses a dilemma for applicants with disabilities, he says.
“The fear is that if I click one of these, I’ll disclose something that will disqualify me for the job, and if I don’t click on—say—dyslexia or whatever it is that makes it difficult for me to read letters and process that information quickly, then I’ll be at a disadvantage,” Claypool says. “I’m going to fail either way.”
Polli says Pymetrics does not tell employers which applicants requested in-game accommodations during the hiring process, which should help prevent employers from discriminating against people with certain disabilities. She added that in response to our reporting, the company will make this information more clear so applicants know that their need for an in-game accommodation is private and confidential.
The Americans with Disabilities Act requires employers to provide reasonable accommodations to people with disabilities. And if a company’s hiring assessments exclude people with disabilities, then it must prove that those assessments are necessary to the job.
For employers, using games such as those produced by Arctic Shores may seem more objective. Unlike traditional psychometric testing, Arctic Shores’s algorithm evaluates candidates on the basis of their choices throughout the game. However, candidates often don’t know what the game is measuring or what to expect as they play. For applicants with disabilities, this makes it hard to know whether they should ask for an accommodation.
Safe Hammad, CTO and cofounder of Arctic Shores, says his team is focused on making its assessments accessible to as many people as possible. People with color blindness and hearing disabilities can use the company’s software without special accommodations, he says, but employers should not use such requests to screen out candidates.
The use of these tools can sometimes exclude people in ways that may not be obvious to a potential employer, though. Patti Sanchez is an employment specialist at the MacDonald Training Center in Florida who works with job seekers who are deaf or hard of hearing. About two years ago, one of her clients applied for a job at Amazon that required a video interview through HireVue.
Sanchez, who is also deaf, attempted to call and request assistance from the company, but couldn’t get through. Instead, she brought her client and a sign language interpreter to the hiring site and persuaded representatives there to interview him in person. Amazon hired her client, but Sanchez says issues like these are common when navigating automated systems. (Amazon did not respond to a request for comment.)
Making hiring technology accessible means ensuring both that a candidate can use the technology and that the skills it measures don’t unfairly exclude candidates with disabilities, says Alexandra Givens, the CEO of the Center for Democracy and Technology, an organization focused on civil rights in the digital age.
AI-powered hiring tools often fail to include people with disabilities when generating their training data, she says. Such people have long been excluded from the workforce, so algorithms modeled after a company’s previous hires won’t reflect their potential.
Even if the models could account for outliers, the way a disability presents itself varies widely from person to person. Two people with autism, for example, could have very different strengths and challenges.
“As we automate these systems, and employers push to what’s fastest and most efficient, they’re losing the chance for people to actually show their qualifications and their ability to do the job,” Givens says. “And that is a huge loss.”

A hands-off approach
Government regulators are finding it difficult to monitor AI hiring tools. In December 2020, 11 senators wrote a letter to the US Equal Employment Opportunity Commission expressing concerns about the use of hiring technologies after the covid-19 pandemic. The letter inquired about the agency’s authority to investigate whether these tools discriminate, particularly against those with disabilities.
The EEOC responded with a letter in January that was leaked to MIT Technology Review. In the letter, the commission indicated that it cannot investigate AI hiring tools without a specific claim of discrimination. The letter also outlined concerns about the industry’s hesitance to share data and said that variation between different companies’ software would prevent the EEOC from instituting any broad policies.
“I was surprised and disappointed when I saw the response,” says Roland Behm, a lawyer and advocate for people with behavioral health issues. “The whole tenor of that letter seemed to make the EEOC seem like more of a passive bystander rather than an enforcement agency.”
The agency typically starts an investigation once an individual files a claim of discrimination. With AI hiring technology, though, most candidates don’t know why they were rejected for the job. “I believe a reason that we haven’t seen more enforcement action or private litigation in this area is due to the fact that candidates don’t know that they’re being graded or assessed by a computer,” says Keith Sonderling, an EEOC commissioner.
Sonderling says he believes that artificial intelligence will improve the hiring process, and he hopes the agency will issue guidance for employers on how best to implement it. He says he welcomes oversight from Congress.
However, Aaron Rieke, managing director of Upturn, a nonprofit dedicated to civil rights and technology, expressed disappointment in the EEOC’s response: “I actually would hope that in the years ahead, the EEOC could be a little bit more aggressive and creative in thinking about how to use that authority.”
Pauline Kim, a law professor at Washington University in St. Louis, whose research focuses on algorithmic hiring tools, says the EEOC could be more proactive in gathering research and updating guidelines to help employers and AI companies comply with the law.
Behm adds that the EEOC could pursue other avenues of enforcement, including a commissioner’s charge, which allows commissioners to initiate an investigation into suspected discrimination instead of requiring an individual claim (Sonderling says he is considering making such a charge). He also suggests that the EEOC consult with advocacy groups to develop guidelines for AI companies hoping to better represent people with disabilities in their algorithmic models.
It’s unlikely that AI companies and employers are screening out people with disabilities on purpose, Behm says. But they “haven’t spent the time and effort necessary to understand the systems that are making what for many people are life-changing decisions: Am I going to be hired or not? Can I support my family or not?”
The Facebook engineer was itching to know why his date hadn’t responded to his messages. Perhaps there was a simple explanation—maybe she was sick or on vacation.
So at 10 p.m. one night in the company’s Menlo Park headquarters, he brought up her Facebook profile on the company’s internal systems and began looking at her personal data. Her politics, her lifestyle, her interests—even her real-time location.
The engineer would be fired for his behavior, along with 51 other employees who had abused their access to company data, a privilege that was then available to everyone who worked at Facebook, regardless of their job function or seniority. The vast majority were just like him: men looking up information about the women they were interested in.
In September 2015, after Alex Stamos, the new chief security officer, brought the issue to Mark Zuckerberg’s attention, the CEO ordered a system overhaul to restrict employee access to user data. It was a rare victory for Stamos, one in which he convinced Zuckerberg that Facebook’s design was to blame, rather than individual behavior.
So begins An Ugly Truth, a new book about Facebook written by veteran New York Times reporters Sheera Frenkel and Cecilia Kang. With Frenkel’s expertise in cybersecurity, Kang’s expertise in technology and regulatory policy, and their deep well of sources, the duo provide a compelling account of Facebook’s years spanning the 2016 and 2020 elections.
Stamos would no longer be so lucky. The issues stemming from Facebook’s business model would only escalate in the years that followed, but as Stamos unearthed more egregious problems, including Russian interference in US elections, he was pushed out for making Zuckerberg and Sheryl Sandberg face inconvenient truths. Once he left, the leadership continued to refuse to address a whole host of profoundly disturbing problems, including the Cambridge Analytica scandal, the genocide in Myanmar, and rampant covid misinformation.
Frenkel and Kang argue that Facebook’s problems today are not the product of a company that lost its way. Instead they are part of its very design, built atop Zuckerberg’s narrow worldview, the careless privacy culture he cultivated, and the staggering ambitions he chased with Sandberg.
When the company was still small, perhaps such a lack of foresight and imagination could be excused. But since then, Zuckerberg’s and Sandberg’s decisions have shown that growth and revenue trump everything else.
In a chapter titled “Company Over Country,” for example, the authors chronicle how the leadership tried to bury the extent of Russian election interference on the platform from the US intelligence community, Congress, and the American public. They censored the Facebook security team’s multiple attempts to publish details of what they had found, and cherry-picked the data to downplay the severity and partisan nature of the problem. When Stamos proposed a redesign of the company’s organization to prevent a repeat of the issue, other leaders dismissed the idea as “alarmist” and focused their resources on getting control of the public narrative and keeping regulators at bay.
In 2014, a similar pattern began to play out in Facebook’s response to the escalating violence in Myanmar, detailed in the chapter “Think Before You Share.” A year prior, Myanmar-based activists had already begun to warn the company about the concerning levels of hate speech and misinformation on the platform being directed at the country’s Rohingya Muslim minority. But driven by Zuckerberg’s desire to expand globally, Facebook didn’t take the warnings seriously.
When riots erupted in the country, the company further underscored its priorities. It remained silent in the face of two deaths and fourteen injuries, but jumped in the moment the Burmese government cut off Facebook access for the country. Leadership then continued to delay investments and platform changes that could have prevented the violence from getting worse, because those changes risked reducing user engagement. By 2017, ethnic tensions had devolved into a full-blown genocide, which the UN later found had been “substantively contributed to” by Facebook, resulting in the killing of more than 24,000 Rohingya Muslims.
This is what Frenkel and Kang call Facebook’s “ugly truth”: its “irreconcilable dichotomy” of wanting to connect people to advance society while also enriching its bottom line. Chapter after chapter makes abundantly clear that it isn’t possible to satisfy both—and Facebook has time and again chosen the latter at the expense of the former.
The book is as much a feat of storytelling as it is reporting. Whether you have followed Facebook’s scandals closely as I have, or only heard bits and pieces at a distance, Frenkel and Kang weave it together in a way that leaves something for everyone. The detailed anecdotes take readers behind the scenes into Zuckerberg’s conference room known as “Aquarium,” where key decisions shaped the course of the company. The pacing of each chapter guarantees fresh revelations with every turn of the page.
While I recognized each of the events the authors referenced, the degree to which the company sought to protect itself at the cost of others was still worse than I had previously known. Meanwhile, my partner, who read it side by side with me and falls squarely into the second category of reader, repeatedly looked up, stunned by what he had learned.
The authors keep their own analysis light, preferring to let the facts speak for themselves. In this spirit, they refrain, at the end of their account, from drawing any firm conclusions about what to do with Facebook, or where this leaves us. “Even if the company undergoes a radical transformation in the coming year,” they write, “that change is unlikely to come from within.” But between the lines, the message is loud and clear: Facebook will never fix itself.