MIT Top Stories
You can see the faint stubble coming in on his upper lip, the wrinkles on his forehead, the blemishes on his skin. He isn’t a real person, but he’s meant to mimic one—as are the hundreds of thousands of others made by Datagen, a company that sells fake, simulated humans.
These humans are not gaming avatars or animated characters for movies. They are synthetic data designed to feed the growing appetite of deep-learning algorithms. Firms like Datagen offer a compelling alternative to the expensive and time-consuming process of gathering real-world data. They will make it for you: how you want it, when you want it—and relatively cheaply.
To generate its synthetic humans, Datagen first scans actual humans. It partners with vendors who pay people to step inside giant full-body scanners that capture every detail from their irises to their skin texture to the curvature of their fingers. The startup then takes the raw data and pumps it through a series of algorithms, which develop 3D representations of a person’s body, face, eyes, and hands.
The company, which is based in Israel, says it’s already working with four major US tech giants, though it won’t disclose which ones on the record. Its closest competitor, Synthesis AI, also offers on-demand digital humans. Other companies generate data to be used in finance, insurance, and health care. There are about as many synthetic-data companies as there are types of data.
Once viewed as less desirable than real data, synthetic data is now seen by some as a panacea. Real data is messy and riddled with bias. New data privacy regulations make it hard to collect. By contrast, synthetic data is pristine and can be used to build more diverse data sets. You can produce perfectly labeled faces, say, of different ages, shapes, and ethnicities to build a face-detection system that works across populations.
But synthetic data has its limitations. If it fails to reflect reality, it could end up producing even worse AI than messy, biased real-world data—or it could simply inherit the same problems. “What I don’t want to do is give the thumbs up to this paradigm and say, ‘Oh, this will solve so many problems,’” says Cathy O’Neil, a data scientist and founder of the algorithmic auditing firm ORCAA. “Because it will also ignore a lot of things.”

Realistic, not real
Deep learning has always been about data. But in the last few years, the AI community has learned that good data is more important than big data. Even small amounts of the right, cleanly labeled data can do more to improve an AI system’s performance than 10 times the amount of uncurated data, or even a more advanced algorithm.
That changes the way companies should approach developing their AI models, says Datagen’s CEO and cofounder, Ofir Chakon. Today, they start by acquiring as much data as possible and then tweak and tune their algorithms for better performance. Instead, they should be doing the opposite: use the same algorithm while improving the composition of their data.

Datagen also generates fake furniture and indoor environments to put its fake humans in context. (Image: Datagen)
But collecting real-world data to perform this kind of iterative experimentation is too costly and time intensive. This is where Datagen comes in. With a synthetic data generator, teams can create and test dozens of new data sets a day to identify which one maximizes a model’s performance.
To ensure the realism of its data, Datagen gives its vendors detailed instructions on how many individuals to scan in each age bracket, BMI range, and ethnicity, as well as a set list of actions for them to perform, like walking around a room or drinking a soda. The vendors send back both high-fidelity static images and motion-capture data of those actions. Datagen’s algorithms then expand this data into hundreds of thousands of combinations. The synthesized data is sometimes then checked again. Fake faces are plotted against real faces, for example, to see if they seem realistic.
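The expansion step described above is essentially combinatorial: a small number of high-fidelity scans is crossed with many attribute and action axes. As a rough sketch of the idea (the axes and values here are invented for illustration, not Datagen’s actual parameters):

```python
from itertools import product

# Hypothetical attribute axes; the parameters Datagen actually varies
# are not public.
ages = ["18-30", "31-50", "51-70"]
skin_tones = ["I", "II", "III", "IV", "V", "VI"]  # Fitzpatrick scale
actions = ["walking", "drinking", "sitting"]

# Crossing each scanned subject with every attribute/action combination
# is how a few thousand scans can become hundreds of thousands of
# synthetic variants.
variants = [
    {"age": a, "skin_tone": s, "action": act}
    for a, s, act in product(ages, skin_tones, actions)
]
print(len(variants))  # 3 * 6 * 3 = 54 combinations per subject
```

Even this toy version shows why the approach scales: adding one more axis multiplies, rather than adds to, the size of the resulting data set.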
Datagen is now generating facial expressions to monitor driver alertness in smart cars, body motions to track customers in cashier-free stores, and irises and hand motions to improve the eye- and hand-tracking capabilities of VR headsets. The company says its data has already been used to develop computer-vision systems serving tens of millions of users.
It’s not just synthetic humans that are being mass-manufactured. Click-Ins is a startup that uses synthetic AI to perform automated vehicle inspections. Using design software, it re-creates all car makes and models that its AI needs to recognize and then renders them with different colors, damages, and deformations under different lighting conditions, against different backgrounds. This lets the company update its AI when automakers put out new models, and helps it avoid data privacy violations in countries where license plates are considered private information and thus cannot be present in photos used to train AI.

Click-Ins renders cars of different makes and models against various backgrounds. (Image: Click-Ins)
Mostly.ai works with financial, telecommunications, and insurance companies to provide spreadsheets of fake client data that let companies share their customer database with outside vendors in a legally compliant way. Anonymization can reduce a data set’s richness yet still fail to adequately protect people’s privacy. But synthetic data can be used to generate detailed fake data sets that share the same statistical properties as a company’s real data. It can also be used to simulate data that the company doesn’t yet have, including a more diverse client population or scenarios like fraudulent activity.
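Mostly.ai’s generator is proprietary, and production systems typically rely on deep generative models, but the core idea of fake rows that preserve a real table’s statistical properties can be sketched with a toy two-moment model. The column names and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a real client table (age, income, balance); values invented.
real = rng.multivariate_normal(
    mean=[45.0, 60_000.0, 12_000.0],
    cov=[[90.0, 30_000.0, 9_000.0],
         [30_000.0, 4.0e8, 6.0e7],
         [9_000.0, 6.0e7, 2.5e7]],
    size=5_000,
)

# Fit the real table's first two moments (means and covariances) ...
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# ... then sample entirely fresh rows from the fitted distribution.
synthetic = rng.multivariate_normal(mu, sigma, size=5_000)

# The synthetic rows track the real table's statistics up to sampling
# error, without reproducing any individual real row.
print(np.allclose(synthetic.mean(axis=0), mu, rtol=0.05))
```

Real products go much further, handling categorical columns, nonlinear dependencies, and privacy guarantees, but the contract is the same: aggregate statistics are preserved while individual records are not.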
Proponents of synthetic data say that it can help evaluate AI as well. In a recent paper published at an AI conference, Suchi Saria, an associate professor of machine learning and health care at Johns Hopkins University, and her coauthors demonstrated how data-generation techniques could be used to extrapolate different patient populations from a single set of data. This could be useful if, for example, a company only had data from New York City’s more youthful population but wanted to understand how its AI performs on an aging population with higher prevalence of diabetes. She’s now starting her own company, Bayesian Health, which will use this technique to help test medical AI systems.

The limits of faking it
But is synthetic data overhyped?
When it comes to privacy, “just because the data is ‘synthetic’ and does not directly correspond to real user data does not mean that it does not encode sensitive information about real people,” says Aaron Roth, a professor of computer and information science at the University of Pennsylvania. Some data generation techniques have been shown to closely reproduce images or text found in the training data, for example, while others are vulnerable to attacks that make them fully regurgitate that data.
This might be fine for a firm like Datagen, whose synthetic data isn’t meant to conceal the identity of the individuals who consented to be scanned. But it would be bad news for companies that offer their solution as a way to protect sensitive financial or patient information.
Research suggests that the combination of two synthetic-data techniques in particular—differential privacy and generative adversarial networks—can produce the strongest privacy protections, says Bernease Herman, a data scientist at the University of Washington eScience Institute. But skeptics worry that this nuance can be lost in the marketing lingo of synthetic-data vendors, which won’t always be forthcoming about what techniques they are using.
Meanwhile, little evidence suggests that synthetic data can effectively mitigate the bias of AI systems. For one thing, extrapolating new data from an existing data set that is skewed doesn’t necessarily produce data that’s more representative. Datagen’s raw data, for example, contains proportionally fewer ethnic minorities, which means it uses fewer real data points to generate fake humans from those groups. While the generation process isn’t entirely guesswork, those fake humans might still be more likely to diverge from reality. “If your darker-skin-tone faces aren’t particularly good approximations of faces, then you’re not actually solving the problem,” says O’Neil.
For another, perfectly balanced data sets don’t automatically translate into perfectly fair AI systems, says Christo Wilson, an associate professor of computer science at Northeastern University. If a credit card lender were trying to develop an AI algorithm for scoring potential borrowers, it would not eliminate all possible discrimination by simply representing white people as well as Black people in its data. Discrimination could still creep in through differences between white and Black applicants.
To complicate matters further, early research shows that in some cases, it may not even be possible to achieve both private and fair AI with synthetic data. In a recent paper published at an AI conference, researchers from the University of Toronto and the Vector Institute tried to do so with chest x-rays. They found they were unable to create an accurate medical AI system when they tried to make a diverse synthetic data set through the combination of differential privacy and generative adversarial networks.
None of this means that synthetic data shouldn’t be used. In fact, it may well become a necessity. As regulators confront the need to test AI systems for legal compliance, it could be the only approach that gives them the flexibility they need to generate on-demand, targeted testing data, O’Neil says. But that makes questions about its limitations even more important to study and answer now.
“Synthetic data is likely to get better over time,” she says, “but not by accident.”
Clinical trials have never been more in the public eye than in the past year, as the world watched the development of vaccines against covid-19, the disease at the center of the 2020 coronavirus pandemic. Discussions of study phases, efficacy, and side effects dominated the news. The most distinctive feature of the vaccine trials was their speed. Because the vaccines are meant for universal distribution, the study population is, basically, everyone. That unique feature means that recruiting enough people for the trials has not been the obstacle that it commonly is.
“One of the most difficult parts of my job is enrolling patients into studies,” says Nicholas Borys, chief medical officer for Lawrenceville, N.J., biotechnology company Celsion, which develops next-generation chemotherapy and immunotherapy agents for liver and ovarian cancers and certain types of brain tumors. Borys estimates that fewer than 10% of cancer patients are enrolled in clinical trials. “If we could get that up to 20% or 30%, we probably could have had several cancers conquered by now.”
Clinical trials test new drugs, devices, and procedures to determine whether they’re safe and effective before they’re approved for general use. But the path from study design to approval is long, winding, and expensive. Today, researchers are using artificial intelligence and advanced data analytics to speed up the process, reduce costs, and get effective treatments more swiftly to those who need them. And they’re tapping into an underused but rapidly growing resource: data on patients from past trials.

Building external controls
Clinical trials usually involve at least two groups, or “arms”: a test or experimental arm that receives the treatment under investigation, and a control arm that doesn’t. A control arm may receive no treatment at all, a placebo, or the current standard of care for the disease being treated, depending on what type of treatment is being studied and what it’s being compared with under the study protocol. It’s easy to see the recruitment problem for investigators studying therapies for cancer and other deadly diseases: patients with a life-threatening condition need help now. While they might be willing to take a risk on a new treatment, “the last thing they want is to be randomized to a control arm,” Borys says. Combine that reluctance with the need to recruit patients who have relatively rare diseases—for example, a form of breast cancer characterized by a specific genetic marker—and the time to recruit enough people can stretch out for months, or even years. Nine out of 10 clinical trials worldwide—not just for cancer but for all types of conditions—can’t recruit enough people within their target timeframes. Some trials fail altogether for lack of enough participants.
What if researchers didn’t need to recruit a control group at all and could offer the experimental treatment to everyone who agreed to be in the study? Celsion is exploring such an approach with New York-headquartered Medidata, which provides management software and electronic data capture for more than half of the world’s clinical trials, serving most major pharmaceutical and medical device companies, as well as academic medical centers. Acquired by French software company Dassault Systèmes in 2019, Medidata has compiled an enormous “big data” resource: detailed information from more than 23,000 trials and nearly 7 million patients going back about 10 years.
The idea is to reuse data from patients in past trials to create “external control arms.” These groups serve the same function as traditional control arms, but they can be used in settings where a control group is difficult to recruit: for extremely rare diseases, for example, or conditions such as cancer, which are imminently life-threatening. They can also be used effectively for “single-arm” trials, in which a control group is impractical: for example, to measure the effectiveness of an implanted device or a surgical procedure. Perhaps their most valuable immediate use is for doing rapid preliminary trials, to evaluate whether a treatment is worth pursuing to the point of a full clinical trial.
Medidata uses artificial intelligence to plumb its database and find patients who served as controls in past trials of treatments for a certain condition to create its proprietary version of external control arms. “We can carefully select these historical patients and match the current-day experimental arm with the historical trial data,” says Arnaub Chatterjee, senior vice president for products, Acorn AI at Medidata. (Acorn AI is Medidata’s data and analytics division.) The trials and the patients are matched for the objectives of the study—the so-called endpoints, such as reduced mortality or how long patients remain cancer-free—and for other aspects of the study designs, such as the type of data collected at the beginning of the study and along the way.
When creating an external control arm, “We do everything we can to mimic an ideal randomized controlled trial,” says Ruthie Davi, vice president of data science, Acorn AI at Medidata. The first step is to search the database for possible control arm candidates using the key eligibility criteria from the investigational trial: for example, the type of cancer, the key features of the disease and how advanced it is, and whether it’s the patient’s first time being treated. It’s essentially the same process used to select control patients in a standard clinical trial—except data recorded at the beginning of the past trial, rather than the current one, is used to determine eligibility, Davi says. “We are finding historical patients who would qualify for the trial if they existed today.”
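That first step, selecting candidates by the investigational trial’s key eligibility criteria, is in effect a filter over the historical database. A minimal sketch, using invented record fields rather than Medidata’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical patient records from past trials; the field names are
# invented for illustration.
@dataclass
class Patient:
    cancer_type: str
    stage: int
    prior_treatments: int

historical = [
    Patient("ovarian", 3, 0),
    Patient("ovarian", 2, 1),
    Patient("liver", 4, 0),
    Patient("ovarian", 3, 2),
]

def eligible(p: Patient) -> bool:
    """Key eligibility criteria of a hypothetical investigational trial:
    stage 3+ ovarian cancer, no prior treatment."""
    return (p.cancer_type == "ovarian"
            and p.stage >= 3
            and p.prior_treatments == 0)

# Historical patients who "would qualify for the trial if they existed today"
external_control = [p for p in historical if eligible(p)]
print(len(external_control))  # only the first record qualifies
```

The hard part in practice is not the filter itself but ensuring the historical data was collected under comparable study designs, which is why the matching extends to endpoints and data-collection schedules as well.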
Download the full report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
Covid cases are on the rise in England, and a fast-spreading variant may be to blame. B.1.617.2, which now goes by the name Delta, first emerged in India, but has since spread to 62 countries, according to the World Health Organization.
Delta is still rare in the US. At a press conference on Tuesday, the White House’s chief medical advisor, Anthony Fauci, said that it accounts for just 6% of cases. But in the UK it has quickly overtaken B.1.1.7—also known as Alpha—to become the dominant strain, which could derail the country’s plans to ease restrictions on June 21.
The total number of cases is still small, but public health officials are watching the variant closely. On Monday, UK Secretary of State for Health and Social Care Matt Hancock reported that Delta appears to be about 40% more transmissible than Alpha, but scientists are still trying to pin down the exact number—estimates range from 30% to 100%. They are also working to understand what makes it more infectious. They don’t yet have many answers, but they do have hypotheses.
All viruses acquire mutations in their genetic code as they replicate, and SARS-CoV-2 is no exception. Many of these mutations have no impact at all. But some change the virus’s structure or function. Identifying changes in the genetic sequence of a virus is simple. Figuring out how those changes impact the way a virus spreads is trickier. The spike protein, which helps the virus gain entry to cells, is a good place to start.

How Delta enters cells
To infect cells, SARS-CoV-2 must enter the body and bind to receptors on the surface of cells. The virus is studded with mushroom-shaped spike proteins that latch onto a receptor called ACE2 on human cells. This receptor is found on many cell types, including those that line the lungs. Think of it like a key fitting into a lock.
Mutations that help the virus bind more tightly can make transmission from one person to another easier. Imagine you breathe in a droplet that contains SARS-CoV-2. If that droplet contains viruses with better binding capabilities, they “will be more efficient at finding and infecting one of your cells,” says Nathaniel Landau, a microbiologist at NYU Grossman School of Medicine.
Scientists don’t yet know how many particles of SARS-CoV-2 you have to inhale to become infected, but the threshold would likely be lower for a virus that is better at grabbing onto ACE2.
Landau and his colleagues study binding in the lab by creating pseudoviruses. These lab-engineered viruses can’t replicate, but researchers can tweak them to express the spike protein on their surface. That allows them to easily test binding without needing to use a high-security laboratory. The researchers mix these pseudoviruses with plastic beads covered with ACE2 and then work out how much virus sticks to the beads. The greater the quantity of virus, the better the virus is at binding. In a preprint posted in May, Grunbaugh and colleagues show that some of the mutations present in Delta do enhance binding.

How it infects once inside
Better binding doesn’t just lower the threshold for infection. Because the virus is better at grabbing ACE2, it will also infect more cells inside the body. “The infected person will have more virus in them, because the virus is replicating more efficiently,” Landau says.
After the virus binds to ACE2, the next step is to fuse with the cell, a process that begins when enzymes from the host cell cut the spike at two different sites, a process known as cleavage. This kick-starts the fusion machinery. If binding is like the key fitting in the lock, cleavage is like the key turning the deadbolt. “Without cuts at both sites, the virus can’t get into cells,” says Vineet Menachery, a virologist at the University of Texas Medical Branch.
One of the mutations present in Delta actually occurs in one of these cleavage sites, and a new study that has not yet been peer reviewed shows that this mutation does enhance cleavage. And Menachery, who was not involved in the study, says he has replicated those results in his lab. “So it’s a little bit easier for the virus to be activated,” he says.
Whether that improves transmissibility isn’t yet known, but it could. When scientists delete these cleavage sites, the virus becomes less transmissible and less pathogenic, Menachery says. So it stands to reason that changes that facilitate cleavage would increase transmissibility.
It’s also possible that Delta’s ability to evade the body’s immune response helps fuel transmission. Immune evasion means more cells become infected and produce more virus, which then potentially makes it easier for a person carrying that virus to infect someone else.

But vaccines still work
The good news is that vaccination provides strong protection against Delta. A new study from Public Health England shows that the Pfizer-BioNTech vaccine was 88% effective in preventing symptomatic disease due to Delta in fully vaccinated people. The AstraZeneca vaccine provided slightly less protection. Two shots were 60% effective against the variant. The effectiveness of one dose of either vaccine, however, was much lower—just 33%.
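For context, headline effectiveness figures like these are derived from the relative risk of disease in vaccinated versus unvaccinated groups (the Public Health England analysis itself used a more sophisticated test-negative design). A toy cohort-style calculation, with invented counts chosen only to reproduce an 88%-style figure:

```python
# Vaccine effectiveness is commonly estimated as 1 minus the risk
# ratio between vaccinated and unvaccinated groups. All counts below
# are hypothetical.
cases_vaccinated, n_vaccinated = 12, 10_000
cases_unvaccinated, n_unvaccinated = 100, 10_000

risk_vaccinated = cases_vaccinated / n_vaccinated        # 0.0012
risk_unvaccinated = cases_unvaccinated / n_unvaccinated  # 0.0100

effectiveness = 1 - risk_vaccinated / risk_unvaccinated
print(f"{effectiveness:.0%}")  # prints "88%"
```

This is why a drop from 88% to 33% for a single dose is so significant: it corresponds to roughly a fivefold increase in relative risk among the partially vaccinated.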
Even so, in the US and UK, just around 42% of the population is fully vaccinated. In India, where the virus surged, fueled in part by the rapid spread of Delta, just 3.3% of the population has achieved full vaccination.
At the press briefing, Fauci urged those who have not been vaccinated to get their first shot and reminded those who are partially vaccinated not to skip their second dose. The Biden Administration hopes to have 70% of the population at least partially vaccinated by the Fourth of July. In the UK, Delta quickly replaced Alpha to become the dominant strain, and cases are now on the rise. “We cannot let that happen in the United States,” Fauci said.
Digital transformation has long been a well-established strategic imperative for organizations globally. The effects of covid-19—which have transformed the world into a massively (and perhaps permanently) dispersed collection of individual broadband-connected consumers, partners, and employees—have not disrupted or wholly redefined this trend; instead, they have created additional emphasis on digital transformation strategies already well underway.
This is the consensus view of an MIT Technology Review Insights survey of 210 technology executives, conducted in March 2021. These respondents report that they need—and still often lack—the ability to develop new digital channels and services quickly, and to optimize them in real time.
Underpinning these waves of digital transformation are two fundamental drivers: the ability to serve and understand customers better, and the need to increase employees’ ability to work more effectively toward those goals.
Two-thirds of respondents indicated that more efficient customer experience delivery was the most critical objective. This was followed closely by the use of analytics and insight to improve products and services (60%). Increasing team collaboration and communication, and increasing security of digital assets and intellectual property came in joint third, with around 55% each.
All the digital objectives are integrally linked to improving customer and employee engagement, retention, and activation. Richard Jefts, vice president and general manager of HCL’s Digital Solutions, notes that increasing team collaboration and communication received additional attention over the last year.
“With covid-19, management teams needed to ensure that business could continue remotely, which has meant new levels of adoption of collaboration capabilities and the use of low code by employees to digitize business processes to bridge the gaps,” says Jefts.
Miao Song, Brussels-based chief information officer of Mars Petcare, notes that digitalization has been steadily redefining her company’s global pet nutrition and veterinary services businesses. “Our online business has seen double-digit growth, and the resulting volume of customer data allows us to forecast demand better,” says Song.
Digital tools also allow more and better market data to be gathered and utilized quickly. Song points out that AI-enabled image recognition tools are being used by Mars’ sales reps to scan retailers’ shelves and generate insight for better inventory management.
As Mars’ reliance on AI and analytics is increasing throughout the organization, it is teaching many employees to use low-code tools to bolster their internal capabilities. Low code is a software development approach that requires little to no coding to build applications and processes, allowing users with no formal knowledge of coding or software development to create applications.
“Everybody in our company needs to become a data analyst—not just IT team members,” says Song, speaking of Mars’ efforts to increase digital literacy in a bid to enhance visibility across the company’s supply chain, refine pricing strategies, and develop new products and services.
Song notes that promoting the use of low-code development tools through hackathons and other activities has been an important part of Mars’ efforts: “We need to break the notion that only IT can access and use our data resources,” she adds.

Customer experience is (still) king
Survey respondents have indicated that they have already seen significantly increased performance in customer experience processes since undertaking digital transformation efforts. Moving into the coming year, customer experience continues to be a priority.
Respondents are seeking to improve digital channels in particular, followed by analytics to support personalization, and AI or automated customer engagement tools. Other digital competencies are being built to accommodate changes in customer and partner expectations and requirements, streamlining customer experience processes by delivering multi-experience capabilities.
Alan Pritchard, director of ICT Services for Austin Health, a public hospital group based in Melbourne, Australia, explains that his company’s digital transformation process began to accelerate well before covid-19’s impact set in.
“A model of service review in 2019 identified home-based monitoring and home-based care as critical to our future service delivery—so even prior to the pandemic, our health strategy was focused on improving digital channels and increasing our capacity to support people outside of the hospital,” says Pritchard, noting that in order to execute on Austin Health’s outreach strategy, a common customer relationship management (CRM) platform needed to be built.
“While some future service models can be delivered with telehealth initiatives or with device integration, there’s still a lot of work to do looking at how you communicate electronically with people about their health status,” says Pritchard.
The organization’s common CRM platform needed to accommodate numerous autonomous specialty departments, “and each of them wants their own app to communicate electronically with their patients,” observes Pritchard.
Managing numerous separate app development processes is complex, although “there are common patterns in how departments engage with patients in appointment booking, preparation, and follow-up processes”, says Pritchard, “so we need a platform that’s highly reusable, rather than a series of apps built on custom code.”
This, coupled with the need to distribute some control and customization through the multiple departments, led Pritchard’s team down a low-code path.
This largely correlates with the experiences of our survey cohort: over 75% of respondents indicate that they have increased their use of digital development platforms (including low code), and over 80% have increased their investment priorities in workflow management tools over the last year.
The last 15 years have been tough times for many Americans, but there are now encouraging signs of a turnaround.
Productivity growth, a key driver for higher living standards, has averaged only 1.3% since 2006, less than half the rate of the previous decade. But on June 3, the US Bureau of Labor Statistics reported that US labor productivity increased by 5.4% in the first quarter of 2021. What’s better, there’s reason to believe that this is not just a blip, but rather a harbinger of better times ahead: a productivity surge that will match or surpass the boom times of the 1990s.

[Chart: Annual labor productivity growth, 2001–2021 Q1. For much of the past decade, productivity growth has been sluggish, but now there are signs it’s picking up. Source: US Bureau of Labor Statistics]
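For reference, BLS quarterly productivity figures like that 5.4% are annualized: the quarter-over-quarter change in output per hour, compounded over four quarters. A sketch of the arithmetic, with invented index levels:

```python
# BLS reports quarterly labor productivity growth at a seasonally
# adjusted annual rate. The index values below are hypothetical.
q4_2020 = 112.0   # output per hour, index level, prior quarter
q1_2021 = 113.5   # output per hour, index level, current quarter

quarterly_change = q1_2021 / q4_2020 - 1   # about 1.34% in the quarter
annualized = (1 + quarterly_change) ** 4 - 1

print(f"{annualized:.1%}")  # prints "5.5%"
```

The compounding matters: a seemingly modest quarterly gain of just over 1% translates into a headline annual rate above 5%.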
Our optimism is grounded in our research, which indicates that most OECD countries are just passing the lowest point in a productivity J-curve. Driven by advances in digital technologies such as artificial intelligence, productivity growth is now headed up.
The productivity J-curve describes the historical pattern of initially slow productivity growth after a breakthrough technology is introduced, followed years later by a sharp takeoff. Our research and that of others has found that technology alone is rarely enough to create significant benefits. Instead, technology investments must be combined with even larger investments in new business processes, skills, and other types of intangible capital before breakthroughs as diverse as the steam engine or computers ultimately boost productivity. For instance, after electricity was introduced to American factories, productivity was stagnant for more than two decades. It was only after managers reinvented their production lines using distributed machinery, a technique made possible by electricity, that productivity surged.
There are three reasons that this time around the productivity J-curve will be bigger and faster than in the past.
The first is technological: the past decade has delivered an astonishing cluster of technology breakthroughs. The most important ones are in AI: the development of machine-learning algorithms, combined with a large decline in the price of data storage and improvements in computing power, has allowed firms to address challenges from vision and speech to prediction and diagnosis. The fast-growing cloud computing market has made these innovations accessible to smaller firms.
Significant innovations have also happened in biomedical sciences and energy. In drug discovery and development, new technologies have allowed researchers to optimize the design of new drugs and predict the 3D structures of proteins. At the same time, breakthrough vaccine technology using messenger RNA has introduced a revolutionary approach that could lead to effective treatments for many other diseases. Moreover, major innovations have led to the steep decline in the price of solar energy and the sharp increase in its energy conversion efficiency rate with serious implications for the future of the energy sector as well as for the environment.
The second is the pandemic itself. The costs of covid-19 have been tragic, but the crisis has also compressed a decade’s worth of digital innovation in areas like remote work into less than a year. What’s more, evidence suggests that even after the pandemic, a significant fraction of work will be done remotely, while a new class of high-skill service workers, the digital nomads, is emerging.
As a result, the biggest productivity impact of the pandemic will be realized in the longer run. Even technology skeptics like Robert Gordon are more optimistic this time. The digitization and reorganization of work have brought us to a turning point in the productivity J-curve.
The third reason to be optimistic about productivity has to do with the aggressive fiscal and monetary policy being implemented in the US. The recent covid-19 relief package is likely to reduce the unemployment rate from 5.8% (in May 2021) to the historically low pre-covid levels in the neighborhood of 4%. Running the economy hot with full employment can accelerate the arrival of the productivity boom. Low unemployment levels drive higher wages which means firms have more incentive to harvest the potential benefits of technology to further improve productivity.
When you put these three factors together—the bounty of technological advances, the compressed restructuring timetable due to covid-19, and an economy finally running at full capacity—the ingredients are in place for a productivity boom. This will not only boost living standards directly, but also free up resources for a more ambitious policy agenda.
Erik Brynjolfsson is a professor at Stanford and director of the Stanford Digital Economy Lab. Georgios Petropoulos is a post-doc at MIT, a research fellow at Bruegel, and a digital fellow at the Stanford Digital Economy Lab.
“That’s not my face,” Tori Dawn thought, after opening TikTok to make a video in late May. The jaw reflected back on the screen was wrong, slimmer and more feminine. And when they waved their hand in front of the camera, blocking most of their face from the lens, their jaw appeared to pop back to normal. Was their skin also a little softer?
On further investigation, it seemed as if the image was being run through a beauty filter in the TikTok app. Normally, Dawn keeps those filters off in livestreams and videos to around 320,000 followers. But as they flipped around the app’s settings, there was no way to disable the effect: it seemed to be permanently in place, subtly feminizing Dawn’s features.
“My face is pretty androgynous and I like my jawline,” Dawn said in an interview. “So when I saw that it was popping in and out, I’m like ‘why would they do that, why?’ This is one of the only things that I like about my face. Why would you do that?”
Beauty filters are now a part of life online, allowing users to opt in to changing the face they present to the world on social media. Filters can widen eyes, plump up lips, apply makeup, and change the shape of the face, among other things. But it’s usually a choice, not forced on users—which is why Dawn and others who encountered this strange effect were so angry and disturbed by it.
Dawn told their followers about it in a video. “As long as that’s still a thing,” Dawn said, showing their jaw popping in and out on screen, “I don’t feel comfortable making videos because this is not what I look like, and I don’t know how to fix it.” The video got more than 300,000 views, they said, and was shared and duetted by other users who noticed the same thing.
The video’s caption read: “congrats tiktok I am super uncomfortable and disphoric now cuz of whatever the fuck this shit is.”
“Is that why I’ve been kind of looking like an alien lately?” said one.
“Tiktok. Fix this,” said another.
Videos like these circulated for days in late May, as a portion of TikTok’s users looked into the camera and saw a face that wasn’t their own. As the videos spread, many users wondered whether the company was secretly testing out a beauty filter on some users.

An odd, temporary issue
I’m a TikTok lurker, not a maker, so it was only after seeing Dawn’s video that I decided to see if the effect appeared on my own camera. Once I started making a video, the change to my jaw shape was obvious. I suspected, but couldn’t tell for sure, that my skin had been smoothed as well. I sent a video of it in action to coworkers and my Twitter followers, asking them to open the app and try the same thing on their own phones: from their responses, I learned that the effect seemed to appear only on Android phones. I reached out to TikTok, and the effect stopped appearing two days later. The company later acknowledged in a short statement that there was an issue that had been resolved, but did not provide further details.
On the surface it was an odd, temporary issue that affected some users and not others. But it was also forcibly changing people’s appearances—an important glitch for an app that is used by around 100 million people in the US. So I also sent the video to Amy Niu, a PhD candidate at the University of Wisconsin who studies the psychological impact of beauty filters. She pointed out that in China, and some other places, some apps add a subtle beauty filter by default. When Niu uses apps like WeChat, she can only really tell that a filter is in place by comparing a photo of herself using her camera to the image produced in the app.
A couple of months ago, she said, she downloaded the Chinese version of TikTok, called Douyin. “When I turned off the beauty mode and filters, I can still see an adjustment to my face,” she said.
Having beauty filters in an app isn’t necessarily a bad thing, Niu said, but app designers have a responsibility to consider how those filters will be used, and how they will change the people who use them. Even if it was a temporary bug, it could have an impact on how people see themselves.
“People’s internalization of beauty standards, their own body image or whether they will intensify their appearance concerns,” Niu said, are all considerations.
For Dawn, the strange facial effect was just one more thing to add to the list of frustrations with TikTok: “It’s been very reminiscent of a relationship with a narcissist because they love-bomb you one minute, they’re giving you all these followers and all this attention and it feels so good,” they said. “And then for some reason they just, they’re just like, we’re cutting you off.”
When the DAVINCI+ and VERITAS missions to Venus were given the green light by NASA last week, the scientific community was stunned. Most had expected that NASA, which hadn’t launched a dedicated mission to Venus in 30 years, would be sending at least one mission to the second planet from the sun by the end of the decade. Two missions, however, blew everyone’s mind.
Maybe NASA anticipated something we’re only just wrapping our heads around: DAVINCI+ and VERITAS will have a tremendous impact not just on Venus and solar system exploration, but also on our understanding of habitable, life-bearing worlds outside our solar system.
As our exoplanet discoveries continue to pile up (we’ve spotted over 11,000 possible exoplanets so far), we need to learn whether an Earth-sized planet is more likely to look like Earth, or more likely to look like Venus. “We don’t know which of those outcomes is the expected or likely one,” says Paul Byrne, a planetary scientist at North Carolina State University. And to find that out we need to understand Venus a lot better.
Most scientists would agree that any habitable exoplanets would need to have water.
With surface temperatures of 471 °C and surface pressures 89 times higher than Earth’s, it seems impossible that water might have once existed on Venus. But Venus and Earth are about the same size and age, and our best guess is they are made of comparable materials and were born with very similar starting conditions. Venus is 30% closer to the sun than Earth, which is significant, but not overwhelmingly so. And yet after 4.5 billion years, these two planets have fared very differently.
In fact, there’s mounting evidence that Venus might have been home to water long ago. The Pioneer Venus missions launched in 1978 made some tantalizing measurements of the deuterium-hydrogen ratio in the atmosphere, suggesting Venus had lost a ton of water over time. But we’ve never had a proper mission that could study this history of water on Venus, look for ancient water flow features on the surface, or understand whether it possessed the kind of geological and climatological conditions that are essential for water and for habitable conditions.
“There may have been two habitable worlds side by side for an unknown amount of time in our solar system,” says Giada Arney, the deputy principal investigator for DAVINCI+. Although Venus is uninhabitable today, the fact that it may have been habitable at one point means it wasn’t always destined for such a hellish fate; circumstances could have broken more favorably.
And that’s good news for how we evaluate distant exoplanets. “Looking beyond the solar system, this might also suggest habitable planets are more common than we previously anticipated,” says Arney.
There are two leading theories for what happened to Venus—and they both have implications for what we might expect on other exoplanets. The first, consistent with our current yet limited observations, is that Venus started off as a hot mess from the get-go and never relented. See, the closer a planet orbits its host star, the more likely it is to rotate slowly (or even to become tidally locked, with one side permanently facing the star, as the moon is to Earth).
Slow rotators like Venus generally have a harder time maintaining a global climate that is cool and comfortable—and for a while it was assumed this is probably what drove Venus to become hot and unbearable. The sun’s rays bombarded the planet with heat, and a steam-rich atmosphere never condensed into liquid water on the surface. Meanwhile, the carbon dioxide, water, and sulfur dioxide gases in the air worked as greenhouse gases that only served to trap all that heat. And it stayed that way for 4 billion years, give or take.
Then there’s a newer theory, recently developed by Michael Way and others at NASA’s Goddard Institute for Space Studies. That model shows that if you make a few small tweaks in these planets’ climates, they can develop hemisphere-long cloud formations that consistently face the host star, reflecting a lot of stellar heat. As a result, a planet like Venus stays temperate and the atmospheric steam condenses into liquid oceans on the surface. Way’s work shows that once you reach this point, the planet can self-regulate its temperature as long as other Earth-like processes like plate tectonics (which helps remove carbon dioxide from the atmosphere) can mitigate greenhouse gas buildup.
It’s a complicated hypothesis, full of caveats. And if Venus is evidence that slow rotators can develop more habitable conditions, it’s also evidence that these conditions are fragile and potentially fleeting. People who buy into Way’s model think what probably happened on Venus is that a massive amount of volcanic activity overwhelmed the planet with carbon and turned the atmosphere 96% carbon dioxide, overriding whatever relief plate tectonics could provide.
And yet, it’s a hypothesis worth testing through DAVINCI+ and VERITAS, because as Arney points out, many of the potentially habitable exoplanets we’ve discovered are slow rotators that orbit low-mass stars. Because these stars are dimmer, planets must usually orbit them close by in order to receive enough heat to allow for liquid water formation. If they form hemisphere-long clouds, they might be able to preserve habitable climates. The only way we can currently probe whether this hypothesis makes sense is to first see whether it may have happened on Venus.
But before we can apply Way’s model to other exoplanets, we need to determine whether it explains Venus. DAVINCI+ will descend through Venus’s atmosphere, directly probing its chemistry and composition and imaging the surface on the way down. It should be able to collect the type of data that tells us whether Venus really was wet earlier in its life, and also flesh out more of its climate history, including whether a hemisphere-long cloud could really have formed.
The VERITAS orbiter will interrogate the geology of the planet, taking high-resolution imagery through radar observations that might be able to detect evidence of terrain or landforms created by water flows or past tectonics. The most exciting target might be the tesserae: heavily deformed highland regions that are thought to be the oldest geologic features on the planet. If VERITAS spots evidence of ancient oceans—or at the very least, of the kind of geological activity that could have kept the planet more temperate long ago—it will support the notion that other slow-rotating exoplanets could achieve the same conditions.
“To think about them going together really makes it sort of a complementary mega-mission,” says Lauren Jozwiak, a planetary scientist at the Johns Hopkins Applied Physics Laboratory who’s working on the VERITAS mission. “This idea that you’d want to both do geologic mapping and atmospheric probing has been at the heart of how you’d want to investigate Venus,” says Jozwiak.
Ultimately, if Venus was always uninhabitable, then the reason probably has to do with its proximity to the sun. So any exoplanet of similar size that’s proportionally close to its own star is probably going to be like Venus. And we’d be better off focusing more investigations on exoplanets that are farther out from their stars.
On the other hand, if Venus had a period of cool before it turned into a permanent oven, it means we should take “Venus-zone” exoplanets seriously, since they may yet still be habitable. It also suggests factors like plate tectonics and volcanism play a critical role in mediating habitable conditions, and we need to find ways of investigating these things on distant worlds as well.
The more we ponder what DAVINCI+ and VERITAS could achieve, the more it seems as if we’re actually underestimating how excited we should be. These next missions will “completely change how we think about both Venus and planetary formation in general,” says Jozwiak. “It’s an exciting time to figure out if Venus is the ‘once and future Earth.’”
Despite their popularity with kids, tablets and other connected devices are built on top of systems that weren’t designed for them to easily understand or navigate. But adapting algorithms to interact with a child isn’t without its complications—as no one child is exactly like another. Most recognition algorithms look for patterns and consistency to successfully identify objects. But kids are notoriously inconsistent. In this episode, we examine the relationship AI has with kids.

We Meet:
- Judith Danovitch, associate professor of psychological and brain sciences at the University of Louisville.
- Lisa Anthony, associate professor of computer science at the University of Florida.
- Tanya Basu, senior reporter at MIT Technology Review.
This episode was reported and produced by Jennifer Strong, Anthony Green, and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.
Jennifer: It wasn’t long ago that playing hopscotch, board games or hosting tea parties with dolls was the norm for kids….
Some TV here and there… a day at the park… bikes.
But… we’ve seen hopscotch turn to TikTok… board games become video games… and dolls at tea parties… do more than just talk back.
[Upsot: Barbie ad “Barbie.. This is my digital makeover.. I insert my own iPad and open my app .. and the mirror lights up.. I do my eyeshadow, lipstick and blush.. How amazing is that?”]
Jennifer: Kids are exposed to devices almost from birth, and often know how to use a touchscreen before they can walk.
Thing is… these systems aren’t really designed for kids.
So… what does it mean to invite Alexa to the party?
[Upsot.. 1’30-1’40 “Hi there and welcome to Amazon storytime. You can choose anything from pirates to princesses. Fancy that!”]
Jennifer: And… What happens when toys are connected to the internet and kids can ask them anything.. and they’ll not only answer back…. but also learn from your kids and collect their data.
Jennifer: I’m Jennifer Strong and this episode, we explore the relationship AI has with kids.
Judith: My name is Judith Danovitch. I’m an associate professor of psychological and brain sciences at the University of Louisville. So, I’m interested in how children think, and specifically, I’m interested in how children think about information sources. For example, when they have a question about something, how do they go about figuring out where to find the answer and which answers to trust.
Jennifer: So, when she found her son sitting alone talking to Siri one afternoon… It sparked her interest right away. She says he was four years old when he started asking it questions.
Judith: Like, what’s my name? And it seemed like he was kind of testing her to see what she would say in response. Like, did she actually, you know, know these things about him? The funny part was that the device belonged to my husband, whose name is Nick. And so when he said, what’s my name? She said, Nick. And he said, no, this is David. So, you know, it was plausible. It wasn’t even that she just said, I don’t know, she actually said something, but it was wrong.
Jennifer: Then… he started asking questions that weren’t just about himself…
Judith: Which was really interesting because it seemed like he was really trying to figure out, is this device somehow watching me and can it see me right now? And then he moved on to asking what I can only describe as a really broad range of questions. Some of which I recognize as topics that we had talked about. So he asked her, for example, do eagles eat snakes? And I guess he and my husband had been talking about Eagles and snakes recently, but then he also asked her some really kind of profound questions that he hadn’t really asked us. So at one point he asked why do things die? Which you know is a pretty heavy thing for a four year old to be asking Siri.
Jennifer: And as this went on… she started secretly taping him.
David: How do you get out of Egypt?
Is buttface a bad word?
… And why do things die?
Judith: Later on that day after I stopped recording him and he had kind of lost interest in this activity, I asked him a bit more and he told me that he thought there really was a tiny person inside there. That’s who Siri was. She was a tiny person inside the iPad. And that’s who was answering his questions. He didn’t have as good of an insight into where she got her answers from. So he wasn’t able to say, Oh, they’re coming from the internet. And that’s one of the things that I’ve become very interested in is, well, when kids hear these devices, what, where do they think this information is coming from? Is it a tiny person or is it, you know, something else. And, and that ties into questions of, do you believe it? Right? So, should you trust what the device tells you in response to your question?
Jennifer: It’s the kind of trust that little kids place in their parents and teachers.
Judith: Anecdotally I think parents think like, oh, kids are gullible and they’ll trust everything they see on the internet. But actually what we’ve found both with research in the United States and with research with children in China is that young children in preschool ages about four to six are actually very skeptical of the internet and given the choice they’d rather consult a person.
Jennifer: But she says that could change as voice activated devices become more and more commonplace.
Judith: And we’ve been trying to find out if kids have similar kinds of intuitions about the devices as they do about the internet in general, but we are seeing similar patterns with young children where again, young children given the choice are saying, I would rather go ask a person for information, at least when the information has to do with facts. Like, you know, where does something live, where do these things come from? And most of our research has focused on facts.
Jennifer: She does see a shift around 7 or 8, when kids start to trust the internet and voice assistants more. But she wants to be clear – this is early research… And it’s not that kids believe devices a hundred percent of the time as they get older; they just believe them more often.
But why are the youngest kids… the ones with big imaginations… more skeptical than older kids?
Judith: Younger kids I think are skeptical frankly, because these are quite strange objects, right? In some ways they’re like people, right? It talks to you. It kind of has a back and forth conversation. But in other ways, these devices are nothing like a person at all, right? It’s this strange box. It doesn’t have a face or eyes or anything like that, that it suggests that it’s an animate object and it isn’t, of course, right? So, you know, for young kids, that should actually be quite difficult to figure out, and it makes sense that they might be cautious about it, that they might say, well, yeah, maybe you can do some things, but I’m going to go with the safe bet is to ask my mom or my teacher over this sort of strange unknown thing.
Jennifer: Alexa has more than 80,000 skills in the U.S.… and there’s developer support for skills aimed at kids.
[School bell rings. Sounds of children chattering as mumble under next VO.]
Jennifer: Sesame Street designed one for kids to play hide-and-seek with Elmo…
Elmo: Time to guess where Elmo is hiding!
Jennifer: Nickelodeon has a way to put them under the sea with Spongebob Squarepants…
[Spongebob Squarepants theme music]
Jennifer: And then there’s a game called Kids Court. It’s an app developed by Pretzel Labs for parents that lets Alexa be the judge of their children’s squabbles.
Alexa: Welcome to kids court where everything unfair is finally brought to justice. All rise for the honorable judge, Lexi. Do you want to settle a case or hear about previous trials?
Child: Settle a case.
Alexa: Prosecutor, please tell the court what happened.
Child: Brianna read my diary!
Alexa: Kids court was created for cases like this. Is the defendant present or absent?
Alexa: Defendant, you’re in trouble. What do you plead? Say guilty if you did it or not guilty if you didn’t.
Brianna: Not guilty!
[Music transition ]
Jennifer: There are a lot of companies offering connected experiences designed for kids. Among them: voice games aimed at parents wary of screen time. One system is called Yoto, but it’s got much more in common with a cassette player than a smart speaker. New content like music or stories can be added to the system via cards that, when inserted into the speaker, trigger a download from Yoto’s servers.
There’s not much to it. There’s no voice assistant, no camera, no microphone.. and its pixelated display is really only meant to show the time or a cartoonish image related to what’s playing.
Kate Daniels: The best part about it is it’s just so simple. I mean, our youngest turned two yesterday and he’s known how to use it for the last year. You know? I don’t think it needs to be all fancy.
Jennifer: Kate and Brian Daniels just made the move from New York City to Boston with their three kids in tow—who are all avid users of Yoto.
Parker Daniels: A song album My dad put on is Hamilton. Um, I really like it.
Jennifer: That’s their 6 year old son Parker. He’s going through a binder filled with cards… which are used to operate the device.
Parker Daniels: Um, and I’m now… I’m looking for the rest and I have like a whole, like book.
Charlotte Daniels: And on some cards, there’s lots of songs and some there’s lots of stories, but different chapters.
Jennifer: And that’s his younger sister, Charlotte.
Brian Daniels: So we’re, we’re also able to, uh, record stories and put them on, uh, custom cards so that the kids can play the stories that I come up with. And they love when I tell them stories, but I’m not always available, you know, working from home and being busy. So this allows them to play those stories at any time.
Jennifer: Screenless entertainment options are key for this family… which… apart from Friday night pizza and a movie… doesn’t spend much time gathered around the TV. But beyond limiting screen time (while they still can), Mom and Dad say they also enjoy peace of mind that the kids don’t have a direct line to Google.
Kate Daniels: We have complete control over what they have access to, which is another great thing. We had an Alexa for a while that someone had given us, and it didn’t work well for us because they could say, Alexa, tell us about, and they could pick whatever they wanted and we didn’t know what was going to come back. So we can really curate what they’re allowed to listen to and experience.
Jennifer: Still, they admit they haven’t quite figured out how to navigate introducing more advanced technology when the time comes.
Kate Daniels: I think that’s a really hard question. You know, we, as parents, we want to really curate everything that they’re exposed to, but ultimately we’re not going to be able to do that. Even with all of the softwares out there to Big Brother their own phones and watch every text message and everything they’re surfing. I don’t, it’s a big question and I don’t think we have the answer yet.
Tanya: So another reason why these voice games are becoming more popular is that they’re screen-free, which is really interesting and important, given that kids are usually recommended not to have more than two hours of screen time per day. And that’s when they’re about four or five.
Hi my name is Tanya Basu, I’m a senior reporter at MIT Technology Review and I cover humans and technology.
Younger kids, especially, should not be exposed to as much screen time. And audio based entertainment often seems healthier to parents because it gives them that ability to be entertained, to be educated, to think about things in a different way that doesn’t require basically a screen in front of their face and potentially, creating problems later down the road that we just don’t know about right now.
Jennifer: But designing these systems… isn’t without complications.
Tanya: A lot of it is that kids are learning how to speak, you know, you and I are having this conversation right now, we have an understanding of what a dialogue is in a way that children don’t. So there’s obviously that. There’s also the fact that kids don’t really sit still. So, you know, one might be far away or screaming or saying a word differently. And that obviously affects the way developers might be creating these games. And one big thing that a lot of people I talked to mentioned was the fact that kids are not a universal audience. And I think a lot of people forget that, especially ones who are developing these games…
Jennifer: Still, she says the ability for kids to understand complexity shouldn’t be underestimated.
Tanya: I’m honestly surprised that there aren’t more games for kids. And I’m surprised mostly that the games that are out there tend to be story kind of games and not, you know, a board game or something that is visually representative. We see with Roblox and a lot of the more popular video games that came out during the pandemic how complex they are, and the fact that kids can handle complex storylines, complex gaming, complex movement. But a lot of these voice games are so simple. And a lot of that is because the technology is just not there. But I am surprised that the imagination in terms of seeing where these games are going is quite limited thus far. So I’m really curious to see how these games develop over the next few years.
Jennifer: We’ll be back, right after this.
Lisa: There’s always this challenge of throwing technology at kids and just sort of expecting them to adapt. And I think it’s a two way street.
Jennifer: Lisa Anthony is an associate professor of computer science at the University of Florida. Her research focuses on developing interactive technologies designed to be used by children.
Lisa: We don’t necessarily want systems that just prevent growth. You know, we do want children to continue to grow and develop and not necessarily use the AI as a crutch for all of that process, but we do want the AI to maybe help. It could act as a better support along the way. If we consider children’s developmental needs, expectations and abilities as we design these systems.
Jennifer: She works with kids to understand how they behave differently with devices than adults.
Lisa: So, when they touch the touch screen or when they draw on the touch screen, what does that look like from a software point of view that we can then adapt our algorithms to recognize and interpret those interactions more accurately. So some of the challenges that you see are really understanding kids’ needs, expectations and abilities with respect to technology, and it’s all going to be driven a lot by their motor skills, the progress of development, you know, their cognitive skills, socio-emotional skills, and how they interact with the world is all going to be transitively applied to how they might interact with technology.
Jennifer: For example, most kids simply lack the level of dexterity and motor control needed to tap a small button on a touchscreen—despite their small fingers.
Lisa: So an adult might put their finger to the touchscreen, draw a square in one smooth stroke, all four sides, and lift it up. A kid, especially a young kid, let’s say five, six years old, is probably going to be picking up their finger at every corner, maybe even in the middle of a stroke, and then putting it down again to correct themselves and finish. And those types of small variances in how they make that shape can actually have a big impact on whether the system can recognize that shape if that type of data wasn’t ever used as part of the training process.
Jennifer: Programming this into AI models is critical, because handwriting recognition and intelligent tutoring systems are increasingly turning up in classrooms.
Most recognition algorithms look for patterns and consistency to identify objects. And kids…are notoriously inconsistent. If you were to task a child with drawing five squares in a row each one is going to look different to an algorithm.
The needs of kids are changing as they grow… that means algorithms need to change too.
So, researchers are looking to incorporate lessons learned from kids shows… like how children establish social attachments to animated characters that look like people.
Lisa: That means they’re likely to ascribe social expectations to their interactions with that character. They feel warmly towards the character. They feel that the character is going to respond in predictable social ways. And this can be a benefit if your system is ready to handle that, but it can also be a challenge. If your system is not ready to handle that, it comes across as wooden. It comes across as unnatural. The children are going to be turned off by that.
Jennifer: She says her research has also shown kids respond to AI systems that are transparent and can solve problems together with the child.
Lisa: So kids wanted the system to be able to recognize it didn’t know the answer to their question, or it didn’t know enough information to answer the question, or completed an interaction, and just say, I don’t know, or, tell me this information that will help me answer. And I think what we were seeing, well, we still tend to see actually, is a design trend for AI systems where the AI system tries to gracefully recover from errors or lack of information without, quote unquote, bothering the user, right. Without really getting them involved or interrupting them, trying to sort of gracefully exist in the background. Kids were much more tolerant of error and wanted to treat it like a collaborative problem-solving experience.
Jennifer: Still, she admits there’s a long road ahead in developing systems with contextual awareness about interacting with children.
Lisa: Often Google home returns sort of like an excerpt from the Google search results and it’s, it could be anything that comes back, right. And the kids have to then somehow listen to this long and sort of obscure paragraph and then figure out if their answer was ever contained in that paragraph anywhere. And they would have to get their parents’ help to interpret the information and a theme that you see a lot in this type of work and generally kids and technologies, they want to be able to do it themselves. They don’t really want to have to ask their parents for help because they want to be independent and engaged with the world on their own.
Jennifer: But how much we allow AI to play a part in developing that independence… is up to us.
Lisa: Do we want AI to go in the direction of cars, for example, where for the most part, many of us own a car, have no idea how it works under the hood, how we can fix it, how we can improve it, what are the implications of this design decision or that design decision? Or do we want AI to be something where people are really empowered and they have a potential to understand these big decisions? So, I think that’s why for me, kids and AI education is really important, because we want to make sure that they feel like this is not just a black-box mystery element of technology in their lives, but something that they can really understand, think critically about, affect change in, and perhaps contribute to building as well.
Jennifer: This episode was reported and produced by me, Anthony Green and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.
Thanks for listening, I’m Jennifer Strong.
Angela Mitchell still remembers the night she nearly died.
It was almost one year ago in July. Mitchell—who turns 60 this June—tested positive for covid-19 at her job as a pharmacy technician at the University of Illinois Hospital in Chicago. She was sneezing, coughing, and feeling dizzy.
The hospital management offered her a choice. She could quarantine at a hotel, or she could recover and isolate at home, where her vital signs would be monitored around the clock through a sensor patch worn on her chest. Mitchell chose the patch and went home.
Two nights later, she woke up in a panic because she could not breathe. She was in the bedroom of her suburban Chicago house, and thought a shower might help.
“By the time I got from my bed to the washroom, I was saturated in sweat,” she says. “I had to sit down and catch my breath. I was dizzy. I could barely talk.”
That is when “the call” happened. Clinicians at the University of Illinois Hospital were using sensors like the one Mitchell was wearing to remotely monitor her and hundreds of other patients and employees who were recovering from covid-19 at home. They saw Mitchell’s situation worsen and called. “I was sitting in the bathroom literally holding on to the sink when my phone rang,” she says. The medics told her she needed to see a doctor right away.
Mitchell was not sure. She did not want to disturb her family sleeping downstairs, and calling an ambulance seemed too extreme. But in the morning, she got a second call from her doctors, who said: Get to a hospital now or we will call an ambulance for you.
Mitchell asked her husband—who’d had covid-19 several months earlier— to drive her to Northwestern Memorial in Chicago, where she was quickly admitted and told that her oxygen levels were dangerously low. She says her condition at home changed so quickly—from very mild symptoms to serious respiratory problems—that she didn’t even realize she was in crisis. But by the time of the second call, she says, “I recognized I [was] in trouble and needed help.” She remained in the hospital for almost a week.
The pilot program that helped Mitchell is a part of a study conducted by the University of Illinois Health system and digital-medicine startup PhysIQ and funded by the National Institutes of Health. It is one important test of a new way for covid-19 patients to receive care outside hospital settings. Monitoring the progress of people recovering from the disease remains a challenge because their symptoms can turn life-threatening so quickly. Some hospitals and health systems have dramatically scaled up the use of wearables and other mobile health technologies to remotely observe their vital signs around the clock.
The Illinois program gives people recovering from covid-19 a take-home kit that includes a pulse oximeter, a disposable Bluetooth-enabled sensor patch, and a paired smartphone. The software takes data from the wearable patch and uses machine learning to develop a profile of each person’s vital signs. The monitoring system alerts clinicians remotely when a patient’s vitals— such as heart rate—shift away from their usual levels.
Typically, patients recovering from covid might get sent home with a pulse oximeter. PhysIQ’s developers say their system is much more sensitive because it uses AI to understand each patient’s body, making it far more likely to anticipate important changes.
“It’s an enormous benefit,” says Terry Vanden Hoek, the chief medical officer and head of emergency medicine at University of Illinois Health, which is hosting the pilot. Working with covid cases is hard, he says: “When you work in the emergency department it’s sad to see patients who waited too long to come in for help. They would require intensive care on a ventilator. You couldn’t help but ask, ‘If we could have warned them four days before, could we have prevented all this?’”
Like Angela Mitchell, most of the study participants are African-American. Another large group are Latino. Many are also living with risk factors such as diabetes, obesity, hypertension, or lung conditions that can complicate covid-19 recovery. Mitchell, for example, has diabetes, hypertension, and asthma.
Crowded households add to the risk of transmission. There are 11 people in Mitchell’s house, including her husband, three daughters, and six grandchildren. “I do everything with my family. We even share covid-19 together!” she says with a laugh. Two of her daughters tested positive in March 2020, followed by her husband, before Mitchell herself.
Although African-Americans are only 30% of Chicago’s population, they made up about 70% of the city’s earliest covid-19 cases. That percentage has declined, but African-Americans recovering from covid-19 still die at rates two to three times those for whites, and vaccination drives have been less successful at reaching this community. The PhysIQ system could help improve survival rates, the study’s researchers say, by sending patients to the ER before it’s too late, just as they did with Mitchell.

Lessons from jet engines
PhysIQ founder Gary Conkright has previous experience with remote monitoring, but not in people. In the mid-1990s, he developed an early artificial-intelligence startup called Smart Signal with the University of Chicago. The company used machine learning to remotely monitor the performance of equipment in jet engines and nuclear power plants.
“Our technology is very good at detecting subtle changes that are the earliest predictors of a problem,” says Conkright. “We detected problems in jet engines before GE, Pratt & Whitney, and Rolls-Royce because we developed a personalized model for each engine.”
Smart Signal was acquired by General Electric, but Conkright retained the right to apply the algorithm to the human body. At the time, his mother was struggling with COPD and had been rushed to intensive care several times, he said. The entrepreneur wondered if he could remotely monitor her recovery by adapting his existing AI system. The result: PhysIQ and the algorithms now used to monitor people with heart disease, COPD, and covid-19.
Its power, Conkright says, lies in its ability to create a unique “baseline” for each patient—a snapshot of that person’s norm—and then detect exceedingly small changes that might cause concern.
The algorithms need only about 36 hours to create a profile for each person.
The system gets to know “how you are looking in your everyday life,” says Vanden Hoek. “You may be breathing faster, your activity level is falling, or your heart rate is different than the baseline. The advanced practice provider can look at those alerts and decide to call that person to check in. If there are concerns”—such as potential heart or respiratory failure, he says—“they can be referred to a physician or even urgent care or the emergency department.”
In the pilot, clinicians monitor the data streams around the clock. The system alerts medical staff when the participants’ condition changes even slightly—for example, if their heart rate is different from what it normally is at that time of day.
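The general pattern described here, learning each patient’s personal baseline and then flagging deviations from it, can be sketched in a few lines. PhysIQ’s actual models are proprietary; the single-vital z-score check, the sample heart rates, and the alert threshold below are illustrative assumptions, not details from the company.

```python
import statistics

class PersonalBaseline:
    """Toy per-patient monitor: learn a patient's normal vital-sign range
    from an initial observation window, then flag readings that deviate
    sharply from that personal norm."""

    def __init__(self, z_threshold=3.0):
        self.z_threshold = z_threshold
        self.mean = None
        self.std = None

    def fit(self, readings):
        # e.g. heart-rate samples collected over the ~36-hour profiling window
        self.mean = statistics.fmean(readings)
        self.std = statistics.pstdev(readings) or 1.0  # avoid divide-by-zero

    def check(self, reading):
        # True if the reading deviates enough to warrant a clinician alert
        z = abs(reading - self.mean) / self.std
        return z > self.z_threshold

# Example: a patient whose resting heart rate hovers around 70 bpm
monitor = PersonalBaseline()
monitor.fit([68, 70, 72, 69, 71, 70, 73, 68])
print(monitor.check(71))   # within the personal norm -> False
print(monitor.check(110))  # sharp deviation -> True
```

The point of the per-patient baseline is visible even in this toy version: 110 bpm might be unremarkable for one person but, measured against this patient’s own history, it stands out immediately.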
The machine-learning model was trained with data from people enrolled in the study’s first phase. About 500 discharged patients and staff members were monitored at home last year. The researchers expected about 5% of that group to develop episodes that would require treatment. The number was actually about 10%.
The new system predicted these episodes in less time than traditional pulse oximetry, says Vanden Hoek, and fewer patients required hospitalization. Administrators say the program has saved them “substantial” amounts of money.
So far, the US Food and Drug Administration has approved five of the company’s algorithms, including a heart failure prediction model developed for the Department of Veterans Affairs.

The promise and peril of wearables
The Chicago-based partnership is one in a growing number of attempts to train AI embedded in wearable devices to diagnose and monitor covid cases. Fitbit, for example, has made progress with an early detection tool: its algorithm detected about 50% of cases at least one day before visible symptoms developed. The US Army is also conducting a nationwide pilot program through its Virtual Medical Center. Its system, much like the Illinois trial, involves continuously monitoring patients’ vital signs through a wearable patch.
The Chicago-based program will continue throughout the year, and participants are now being recruited from several local hospitals in addition to UI Health, bringing the total to about 1,700.
Although the program is an important step for Black and Latino communities in the city, some experts warn that it’s important to remain cautious when it comes to wearables—particularly because AI has been used to perpetuate discrimination. Black and Latino communities haven’t always benefited from technological advances, and they’ve experienced racial bias in AI medicine, whether from hospital screening systems that are less likely to identify the severity of their health needs or early decisions to locate covid-19 testing centers outside Black neighborhoods.
“There isn’t enough mobile health research being done exclusively with African-Americans,” says Delores C.S. James, an associate professor of health at the University of Florida, whose research focuses on digital health disparities. (She is not involved in the Chicago study.) “There is a unique opportunity given the high ownership of smartphones and social media engagement,” she says. “And let us keep in mind the high rate of health disparities and poor health outcomes. We must be included.”
Mitchell says she is pleased that marginalized communities are being targeted to benefit from the AI tool. “This device is being utilized in communities that are deprived of these opportunities,” she says. “This can help everyone.”
Today, she remains optimistic, even though she is still struggling with the impact of covid on her health as one of the estimated 3 million Americans who are considered “long tail” survivors. She didn’t return to work for almost five months, and currently she’s in cardiac rehab to help improve her breathing and talking. A recent study shows that long-term survivors are at higher risk of death, have more complications throughout the body, and will become a “massive health burden” as their symptoms continue.
Still, Mitchell says, the sensor made the difference between long-term problems and paying a much higher price.
“I owe my life to this monitoring system,” she says.
This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.
Early on the morning of October 12, 2020, 27-year-old Jang Deok-joon came home after working his overnight shift at South Korean e-commerce giant Coupang and jumped into the shower. He had worked at the company’s warehouse in the southern city of Daegu for a little over a year, hauling crates full of items ready to be shipped to delivery hubs. When he didn’t come out of the bathroom for over an hour and a half, his father opened the door to find him unconscious and curled in a ball in the bathtub, his arms tucked tightly into his chest. He was rushed to the hospital, but with no pulse and failing to breathe on his own, doctors pronounced him dead at 9:09 a.m. The coroner ruled that he had died from a heart attack.
Jang’s story caught my eye because he was the third Coupang worker to die that year, adding to growing concern about the nature of the company’s success. And Coupang has been astoundingly successful: it has risen to become South Korea’s third-largest employer in just a few years, harnessing a vast network of warehouses, 37,000 workers, a fleet of drivers, and a suite of AI-driven tools to take a commanding position in South Korea’s crowded e-commerce market. Coupang is everywhere in South Korea: half of residents have downloaded its app, and its “Rocket Delivery” service—the company claims 99.3% of orders are delivered within 24 hours—has earned it a reputation for “out-Amazoning even Amazon.”
Coupang’s use of AI to shorten delivery times is especially striking: its proprietary algorithms calculate everything from the most efficient way to stack packages in delivery trucks, to the precise route and order of deliveries for drivers. In warehouses, AI anticipates purchases and calculates shipping deadlines for outbound packages. This allows Coupang to promise delivery in less than a day for millions of items, from a 60-cent facemask to a $9,000 camera. Such innovations are why Coupang confidently bills itself as the “future of ecommerce,” and were the driving force behind the company’s recent launch on Nasdaq that valued the company at $84 billion—the biggest US IPO by an Asian company since Alibaba in 2014.
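Coupang’s routing algorithms are proprietary, but the kind of computation involved in choosing a delivery order can be illustrated with a toy greedy heuristic. The nearest-neighbor rule and the coordinates below are assumptions for illustration only, not a reconstruction of Coupang’s system.

```python
import math

def delivery_order(depot, stops):
    """Greedy nearest-neighbor ordering: from the current position, always
    drive to the closest remaining stop. A toy stand-in for the proprietary
    route optimization described above."""
    order, pos, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(pos, s))
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return order

# Coordinates are illustrative (x, y) positions, not real addresses.
print(delivery_order((0, 0), [(5, 5), (1, 0), (2, 2)]))  # -> [(1, 0), (2, 2), (5, 5)]
```

Real logistics systems solve a far harder version of this problem, folding in traffic, truck capacity, and promised delivery windows, which is exactly what makes the deadlines so unforgiving for the workers downstream.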
But what does all this innovation and efficiency mean for the company’s workers?
That was the question I had in mind last summer, before Jang’s death, when I met several of Coupang’s warehouse and delivery workers. Like Jang, who had told his mother that workers were treated like “disposable objects,” they had all experienced the dehumanizing effects of Coupang’s algorithmic innovations. Some talked about a bruising pace of work hitched to the expectations of superhuman delivery times. Others said it was difficult to even go to the bathroom at work. In 2014, when Coupang began offering Rocket Delivery, its on-demand delivery service, it had promised stable careers with above-average benefits even to bottom-rung workers. But somewhere along the way, it seemed, the workers had been reduced to what South Korean labor journalist Kim Ha-young has called the “arms and legs of artificial intelligence.”
It is no coincidence that much of this criticism mirrored reports of working conditions at Amazon. Although Coupang was founded in 2010 as a Groupon-like deals platform, it switched to Amazon’s vertically integrated fulfillment model in 2014, pledging to become the “Amazon of Korea.” In doing so, it ran into the exact same problems with labor.

Demanding work, on demand
What makes Rocket Delivery work is certainty—a promise that Coupang’s algorithms will determine exactly when a batch of deliveries needs to leave the warehouse in order to make it to you on time. In the company’s warehouses, these delivery deadlines come approximately every two hours.
“I realized when I started working there that the sole priority was meeting Rocket Delivery deadlines,” said Go Geon, one former warehouse worker I spoke to. “We were just robots.” Go went on medical leave from his job at Coupang in May 2020 after tearing his left hamstring while running to meet a deadline. He has since been let go by the company.
Like Amazon, Coupang has used a “unit-per-hour,” or UPH, metric to measure worker productivity in real time and maintain the grueling pace in its warehouses. Although workers are officially given one hour of rest for every eight-hour shift—the legally mandated minimum break—one driver I met last September told me that most people simply worked through their breaks to stay on schedule. He is no longer with the company. In an emailed statement to MIT Technology Review, a Coupang spokesperson stated that the company no longer tracks UPH at its warehouses. But one current worker I spoke to recently told me that some warehouse managers are still openly monitoring work rate this way. “They rarely use the term ‘UPH’ anymore,” he said. “But they’ll still hector you for being too slow, presumably based on some form of concrete proof.”
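A real-time UPH readout of the sort workers describe is a simple calculation: count a worker’s item scans in a trailing window. The scan interval and one-hour window in this sketch are invented for the example, not figures from Coupang.

```python
from datetime import datetime, timedelta

def rolling_uph(scan_times, window=timedelta(hours=1)):
    """Units scanned in the trailing window: the live units-per-hour figure
    a warehouse dashboard could display for one worker."""
    if not scan_times:
        return 0
    cutoff = scan_times[-1] - window
    return sum(1 for t in scan_times if t > cutoff)

start = datetime(2020, 10, 1, 19, 0)  # a 7 p.m. shift start
# One scan every 30 seconds, sustained for two hours
scans = [start + timedelta(seconds=30 * i) for i in range(240)]
print(rolling_uph(scans))  # -> 120
```

The simplicity is the point: a number like this is trivial to compute continuously, which is what makes minute-by-minute pacing of human workers possible.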
During the pandemic, from which Coupang has handsomely profited, the casualties of this obsession with hyperefficiency stacked up. From 2019 to 2020, work-related injuries and illnesses at Coupang and its warehouses nearly doubled to 982 incidents. Since Jang Deok-joon’s fatal heart attack, three more Coupang workers have died from what labor activists say was overwork (there have been no official rulings on their deaths).
But despite the concerns these deaths have raised, none of them have caused so much as a blip in Coupang’s operations. On the contrary, the company seems to thrive on how disposable its labor is. Although it employs its workers directly rather than using subcontractors, the majority are reportedly hired on a day-to-day basis the night before via an app called “Coupunch,” or on temporary contracts that usually last a few months. This flexibility allows Coupang to match its labor costs to the ebb and flow of business and keep things lean.
But the constant threat of being denied employment hangs over workers. For those who voice dissent, report a workplace injury, or fall short of their productivity requirements, Coupang is known to withhold contract extensions, workers told me.
In its statement to MIT Technology Review, Coupang said that the company “complies with the Labor Standard Act in every aspect including hiring and termination,” and that “the rate of renewal for the contract worker is more than 90 percent.” However, courts have ruled in the past that the company unfairly fired a worker who submitted a workplace injury claim.
“They make it very clear as soon as you’re hired that if you cause any kind of problems, you won’t be getting a contract extension,” Jeon Woo-oak, a former warehouse worker, told me.
Jang’s death exemplified how exploitative this arrangement can be. As a day laborer who applied for shifts every night via Coupunch, he had been anxious about his precarious employment status. But he had hoped to stay in the company’s good graces and apply for permanent employment, his mother, Park Mi-sook, told me. In the months leading up to his death, he had worked the 7 p.m. to 4 a.m. shift, in addition to frequent overtime, for up to 59 hours over seven consecutive days, earning minimum wage (the equivalent of about $7.60 per hour). “He would be completely wiped out after the end of each deadline,” Park said.
In 2019, as Coupang ramped up its overnight delivery service that offered a 7 a.m. delivery guarantee for orders made the previous evening, the number of deadlines during a typical night shift in the Daegu warehouse increased from around three to seven, according to one worker. Meeting them took a physical toll: Athletic and sturdily built, Jang had lost around 30 pounds since starting at Coupang in June 2019, Park said. She added that the rapid weight loss caused him to develop wrinkles on his face.
In February, the government of South Korea officially attributed Jang’s death to overwork. The final report into his death noted that Jang’s body bore the signs of severe muscular breakdown. Coupang issued an apology and promised to improve working conditions, such as expanding employee medical checkups.
In its emailed statement, a Coupang spokesperson pointed to the fact that Jang’s death was the only one to be officially ruled work-related in the company’s history. And it said its recent investments into warehouse automation “increases efficiency and decreases workload for our workers.”

Worldwide worries
All of this should sound familiar to those who follow Amazon, where the company’s drivers and fulfillment center workers have reported almost the exact same problems that are just now emerging at Coupang. Amazon too has faced criticism for a punishing pace of work that leads to high rates of injury, the use of algorithms to surveil and fire workers, oppressive productivity requirements that treat workers like robots, and a business model that seems to depend on disposable labor.
In the United States, discontent around these conditions fueled a historic unionization drive at Amazon’s fulfillment center in Bessemer, Alabama earlier this year. Union organizer Stuart Appelbaum, the president of the Retail, Wholesale and Department Store Union (RWDSU), talked about the “unbearable” pace in the company’s warehouses and explained: “This is really about the future of work. People are managed by an algorithm. They’re disciplined by an app on their phone. And they’re fired by text message. People have had enough.” In response, Amazon, which has a long history of union-busting activities including surveilling and intimidating workers, launched a large-scale anti-union blitz while denying allegations that its delivery drivers were forced to urinate in bottles. Amazon has since walked back its denial of these reports, but ultimately won the Bessemer vote.
In a letter to Amazon shareholders published shortly after the unionization vote in early April, Jeff Bezos announced that the company would be rolling out a new “job rotation program” to address the issue of high injury rates. The program, wrote Bezos, will use “sophisticated algorithms to rotate employees among jobs that use different muscle-tendon groups to decrease repetitive motion and help protect employees from MSD risks.” But underlying this scheme is a problematic view of injuries as a mere efficiency problem rather than the warning signs of deeper dysfunction. And at bottom, the plan seems like less of a serious solution for overwork than an extension of the totalizing and performance-obsessed micromanagement that created the problem in the first place.
In an emailed statement to MIT Technology Review, Amazon spokesperson Max Gleber declined to offer additional details on the program. “Our scanning process is to track inventory movement, not people,” he said. “We know these are physical jobs but we do all that we can to ensure the safety and health of our employees.”
The unionization drive may have failed, but it highlighted how current worker protections are unable to contend with the future of work that Appelbaum spoke about. And the same is true in South Korea, where Coupang has managed to navigate the blind spots in South Korean labor law to keep its workers on insecure contracts—and therefore less likely to organize—while subjecting them to ever-intensifying workloads.
When I first began reporting on Coupang last summer, initially as an investigation into the company’s mishandling of a covid-19 outbreak at one of its warehouses, I was struck not only by how similar its labor issues were to Amazon’s, but by how Coupang workers had immediately understood that their fight was against not just a misbehaving local employer, but the very idea of superfast delivery itself.
Coupang has often repeated the same line when faced with criticisms of its labor practices: that the company’s direct employment model allows it to offer better benefits compared to the rest of the industry. But paying a little more for dehumanizing work does not suddenly make it any less dehumanizing, and workers I spoke to said that any such solutions would fall short of meaningful progress. “The source of all these problems are delivery deadlines and Rocket Delivery,” Go Geon, the former warehouse worker, told me. “That is the starting point of everything.” That’s why the Coupang drivers’ union isn’t simply campaigning for incremental improvements to working conditions or wages, but has called for a rollback of the company’s razor-thin delivery guarantees.
After he left Coupang, Go founded an advocacy group for the company’s warehouse workers. He told me he’d felt a sense of kinship toward Amazon workers after realizing they were suffering in the same way. “It would be nice to launch some collective action,” he told me. It was just an offhand remark, but it felt like a vital insight: challenging a single, universalized model that is reshaping e-commerce around the world might require some kind of international solidarity among workers.

An existential dilemma
Despite Coupang’s promises to address its own labor issues, the larger economic currents in which it operates only intensified during the pandemic. Global e-commerce exploded thanks to store closures and social distancing, and the industry is projected to record close to $5 trillion in sales worldwide by the end of 2021.
In its IPO prospectus, Coupang acknowledged its core existential dilemma: pursuing “speed and reliability”—the two pillars of its business model—while controlling its labor costs, which have grown fourteen-fold between 2014 and 2020. (Meanwhile, the company has yet to turn a profit with Rocket Delivery.)
What would a more labor rights-minded approach to this balancing act entail? Can fast delivery co-exist with worker welfare? I recently posed these questions to Jang Kwi-yeon, a labor researcher at the Labor Rights Research Institute. When I spoke to her last year, she had compared Coupang’s warehouses to the infamous sweatshops in 1970s South Korea.
“I think the logistics system itself should be overhauled,” she told me. “The right to rest and the health of workers should be set as fixed preconditions, and then the algorithms should then be put to work to calculate how fast deliveries can be made.”
The chances of an e-commerce company whose entire business hinges on being fast willingly choosing to be slower are of course close to nil. And even if Coupang changed its approach, the promise of near-instant delivery has already replicated the same problem everywhere. To keep up with Coupang, competitors like internet giant Naver and department store chain Shinsegae Group are promising ever-faster deliveries that will undoubtedly place an even greater burden on their workers. More than a dozen delivery drivers for other operators have died on the job in the past year. Families and union officials have attributed many of these deaths to overwork, similar to Jang Deok-joon’s case.
In the US, more competition for Amazon—Walmart, for example, has started offering same-day delivery—suggests that the same story will play out. These companies have changed expectations and hidden the real costs from consumers, while many workers who are faced with rising unemployment caused by the pandemic can’t afford to seek out a more humane workplace.
Some version of ethical super-fast delivery may exist, attained perhaps with better wages, stricter health protocols, and by hiring a lot more workers. But Coupang’s story—and the stories of its workers—suggests that this may be a fundamentally flawed proposition. In the end, it is hard to see how faster delivery guarantees can be paid for without the increasingly punishing and dehumanizing labor of frontline workers. As the former driver told me: “It’s a model in which it’s impossible not to aggressively slash down labor costs.”
Max Kim is a freelance journalist, writer, and producer based in Seoul, South Korea.
NASA has just released the first pictures of Jupiter’s largest moon, Ganymede, taken during a flyby by the Juno probe.
Juno passed Ganymede on June 7, making its closest approach at about 1,000 kilometers above the surface while traveling at 66,800 kilometers per hour. It’s the closest any probe has come to the moon since Galileo in 2000. The image above was taken by JunoCam, capturing nearly a whole side of Ganymede at a resolution of 1 km per pixel. Another image, taken by the Stellar Reference Unit, shows a portion of the moon’s dark side lit by Jupiter itself. More images will be made available in the coming days.
Ganymede is of particular interest to scientists for a number of reasons. It has a metallic core, and is the only moon in the solar system to have its own magnetic field (though this gets pretty well buried by the magnetic field generated by the behemoth Jupiter).
Beneath its icy surface is thought to be a subsurface ocean that contains more water than all of Earth’s oceans combined. Its atmosphere is super thin and it’s pretty unlikely Ganymede could be host to any life, but habitability is not entirely out of the question.
Meanwhile, Juno is having a ball at the moment. The probe first arrived at Jupiter in July 2016 to explore the largest planet in the solar system. Juno’s hardware was specially designed to help protect it from the extreme radiation belts created by Jupiter.
In January, Juno embarked on an extended mission, which began with this flyby of Ganymede. Its next target is Europa in 2022, followed by two flybys of Io in 2024. After that, Juno will dive headfirst into Jupiter to formally finish its mission in September 2025.
Long before the first covid-19 vaccines went into arms, certain groups in the US felt the impact of the pandemic more severely: those whose jobs had to be done in person, who were suddenly labeled “essential”; those who were shut out from government assistance; and certain communities of color.
Officials promised that the vaccine drive would be different, and that equity would be a priority. So far about 63% of US adults have gotten at least one covid-19 shot, and President Joe Biden has set a goal of increasing that to 70% by July 4. But many people in hard-hit communities still haven’t received effective communication about vaccines, and they may continue to face practical barriers to getting shots. As a result, their communities are still more severely affected. In Washington, DC, for example, the racial gap in covid-19 cases has grown rather than shrunk since vaccines became widely available.
Plans to increase equity have varied from place to place, with mixed results. Mississippi, which is home to a larger percentage of Black people than any other US state and initially saw stark vaccination disparities along racial lines, has almost reached parity. That success has been largely due to church leaders’ role in encouraging people to get vaccinated.
In California, however, special sign-up codes meant for Black and Latino communities were misused by wealthier people working from home, who shared the codes among their social and professional networks, according to the Los Angeles Times. And in Chicago, community members say, a digital divide and other access issues left vulnerable populations out—despite a neighborhood-level equity plan.
So are there lessons to be learned?

Equity = accessibility
Achieving equity is often a question of accessibility, says Emily Brunson, associate professor of anthropology at Texas State University and principal researcher of the CommuniVax project. Many things can be hurdles to getting a shot, including inconveniently located vaccination sites with limited hours, the need for transportation to those sites, and the difficulty of taking time off work.
“The problem right now is that it’s being talked about so much as a choice,” says Brunson, who points out that white Republican-voting men are particularly reluctant to get vaccinated relative to the rest of the US population. “Focusing on things that are choices takes away the spotlight from really severe access issues in the US.”
One success story took place in Philadelphia, thanks to an effective collaboration between two health systems and Black community leaders. Recognizing that the largely online signup process was hard for older people or those without internet access, Penn Medicine and Mercy Catholic Medical Center created a text-message-based signup system as well as a 24/7 interactive voice recording option that could be used from a land line, with doctors answering patients’ questions before appointments. Working with community leaders, the program held its first clinic at a church and vaccinated 550 people.
“We’ve worked really closely with community leaders, and every clinic since has evolved in terms of design,” says Lauren Hahn, innovation manager at the Penn Medicine Center for Digital Health.
By including community members early on, Hahn hoped, the program would give the people coming in for their shot the feeling that the clinic was made for them. And after their appointment, patients were sent home with resources like the number for a help line they could call if they had any questions about side effects.
“We want to make sure that we’re not just coming in and offering this service and then walking away,” she says.
Data needs to guide practice
Researchers say that having complete data on who is—and isn’t—getting vaccinated can improve the vaccine rollout and prevent problems from being obscured. Data gaps have been a problem since the early days of the pandemic, when few states were reporting cases and deaths by race. Though Joe Biden has emphasized equitable vaccine distribution as a priority, the CDC reports having race and ethnicity data for only 56.7% of vaccinated people.
Not everyone wants more information to be made public, however. In Wisconsin, Milwaukee County executive David Crowley says there can be resistance to collecting and publishing data that shows disparate health outcomes among racial groups. “We have to say that racism has been a problem,” Crowley says. But, he adds, “Look at the data. It’s going to tell you a story right there.”
His county created a covid-19 dashboard that reported detailed racial data before many other jurisdictions in the state, Crowley says. It allowed the county to work with the city of Milwaukee to open special walk-in sites for residents in certain zip codes.
“We haven’t found the silver bullet in all of this,” Crowley says. “But at the end of the day, we know that data is telling a story, and we have to utilize this data.”
Because the data is public, other pandemic response teams outside of government could use it too. Benjamin Weston, director of medical services at the Milwaukee County Office of Emergency Management, says making covid-19 data transparent and accessible helped community groups and academic researchers know where to focus their efforts.
The dashboard has also helped them see, in stark terms, that the communities hit hardest by covid have historically faced broader health challenges. After seeing that covid rates were high in places where people typically have cardiac issues, for example, the county decided to offer CPR training at covid vaccination sites. EMS division director Dan Pojar says he expects about 10,000 people to get CPR training that way.
“That’s an opportunity for us to work with other health systems to flow education and different initiatives into these communities,” Pojar says. “Covid is what really catalyzed this type of analysis work.”
It might get harder from here, not easier
Public health and equity researchers were not surprised at the pandemic’s disparate effect on certain communities, according to Stephanie McClure, assistant professor of anthropology at the University of Alabama. Health disparities along racial and economic lines have the potential to become a national and local focal point—in April, CDC director Rochelle Walensky declared racism “a serious public health threat”—but that tide hasn’t yet turned, McClure says.
Prioritizing equity could become more difficult as the US vaccine rollout shifts to a new phase. Some states have asked the federal government to send them fewer vaccines as sign-ups plummet. Some are also closing mass vaccination sites or consolidating efforts. McClure, who leads the Alabama team of the CommuniVax project, says that although it makes sense to respond to changes in the pandemic, those adjustments need to be thoughtful and measured—especially in regions like the South, where a smaller portion of the population is vaccinated.
McClure says people may think that sites are being taken away because residents didn’t show up fast enough, which can feel like a punishment. “Nobody wants to be told that they’re bad,” she says. “Or it can also be interpreted as ‘We’re taking this back because [vaccinations are] over, or because it’s not really that serious, or because you have enough people who are vaccinated,’ none of which is true.”
Persistence is vital
McClure says it’s important for public health officials to follow through on their promise to work to get everyone vaccinated. That means keeping in touch with hesitant communities to know if there’s a surge in interest so that vaccinators can quickly meet the demand.
“It’s the old public health trick: you make it easy for people to say yes,” she says. “You continue the surveillance and monitoring and get the best data you can on vaccination, and then you plan in cooperation with the community. How often should we come back? How often should we remind people that this is available?”
She says the pandemic has been a useful case in point in a long history of health inequities that didn’t start and won’t end with covid. After the emergency state of covid-19 has passed, officials will need to keep the momentum going—especially at the local level, where so many access problems have emerged.
In Alabama, for example, National Guard mobile vaccination units were set up with the ultra-cold freezers needed to transport and store mRNA-based covid-19 vaccines. “Why not, when this particular push is over, leave those freezer units with the federally qualified health centers that are already in those communities?” McClure says. “You’re starting to build the infrastructure for being able to deliver vaccination on a consistent basis.”
Brunson, the principal researcher of the CommuniVax project, says covid-19 vaccinations can be used as a way to open other conversations about health needs that are going unaddressed. If a community hard-hit by covid-19 also suffers from high rates of diabetes, vaccine efforts could open the door to long-term engagement with people who feel their health hasn’t been a priority.
“It’s really the opportunity to change,” she says.
This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.
On Friday, Facebook announced that it would suspend former president Donald Trump from the social network for two years, until at least January 7, 2023, and said he would “only be reinstated if conditions permit.”
The announcement comes in response to recommendations last month from Facebook’s recently created Oversight Board. Facebook had hoped that the board would decide how to handle Trump’s account, but while it upheld the company’s initial decision to ban Trump from the platform for inciting violence on January 6, it punted the long-term decision back to executives in Menlo Park.
The news that Trump would be banned from Facebook for another 19 months was meant to provide some answers on the platform’s relationship with the former president—but instead it leaves many open questions.
Who is this decision supposed to please?
Although the announcement provides some actual rules about how politicians can use Facebook—and some guidance on how those rules will be enforced—the decision to ban Trump for at least two years isn’t going to be its most popular one. Advocacy groups like Ultraviolet and Media Matters, which have long pushed Facebook to ban Trump, released statements saying that anything less than a permanent ban is inadequate. Meanwhile, the people who feel any rule enforcement against conservative politicians is proof that Facebook penalizes conservative content continue to feel that way, despite lots of evidence that, if anything, the opposite is true. And it leaves open the possibility that Trump will be back online in time for the 2024 election cycle.
What does “newsworthiness” mean now?
Many platforms, including Facebook, have used a “newsworthiness” exception to avoid enforcing their own rules against politicians and world leaders. Facebook’s announcement comes with some changes to how it’ll use that loophole in the future. First, Facebook said, it will publish a notice whenever it applies the rule to an account. And second, it “will not treat content posted by politicians any differently from content posted by anyone else” when applying the rule, which basically means determining whether the public interest in a rule-breaking piece of content outweighs the potential harm of keeping it online.
Facebook formally introduced this policy in late 2016, after censoring an iconic photo from the Vietnam War because it contained nudity. However, the newsworthiness exception evolved into a blanket exception for politicians, including Trump, which allowed rule-breaking content to stay online because it was considered in the public interest by default. But while this announcement appears to end that blanket protection, it doesn’t get rid of it completely, and it does not address in any more detail how Facebook will determine whether something falls under the exception.
Who made this decision?
The announcement was authored by Nick Clegg, the company’s vice president of global affairs, but refers throughout to “we.” However, it does not specify who at Facebook was involved in the decision-making process—which is important for transparency and credibility, given the controversial nature of the decision.
“We know today’s decision will be criticized by many people on opposing sides of the political divide—but our job is to make a decision in as proportionate, fair, and transparent a way as possible,” Clegg wrote.
Where will Facebook get advice?
The announcement also says that the company will look to “experts” to “assess whether the risk to public safety has receded,” without specifying which experts these will be, what expertise they will bring, or how Facebook (or, again, who at Facebook) will have decision-making authority based on their insights. The Oversight Board, which was intended partly as a way of outsourcing controversial decisions, has already signaled that it does not wish to perform that role.
This means that knowing whose voice will matter to Facebook, and who will have authority to act on the advice, is especially important—particularly given the high stakes. Conflict assessment and violence analysis are specialized fields, and ones in which Facebook’s previous responses do not inspire much confidence. Three years ago, for example, the United Nations accused the company of being “slow and ineffective” in responding to the spread of hatred online that led to attacks on the Rohingya minority in Myanmar. Facebook commissioned an independent report by the nonprofit Business for Social Responsibility that confirmed the UN’s claims.
That report, published in 2018, noted the possibility of violence in the 2020 US elections, and recommended steps the company could take to prepare for such “multiple eventualities.” Facebook executives at the time acknowledged that “we can and should do more.” But during the course of the 2020 election campaign, after Trump lost the presidency, and in the run-up to January 6, the company made few attempts to act on those recommendations.
What happens in 2023?
Then there is the limited nature of the ban—and the fact that it may just kick the same conversation down the road until it is possibly even more inconvenient than it already is. Unless Facebook decides to further extend the ban based on its definition of “conditions permitting,” it will lift just in time for the primary season of the next presidential election cycle. What could possibly go wrong?
The 24-hour vigil started just after 8 a.m. US Eastern Time on June 3—more or less on schedule, and without any major disruptions.
The event, hosted on Zoom and broadcast live on other platforms such as YouTube, was put together by Chinese activists to commemorate the Tiananmen Square Massacre, Beijing’s bloody clampdown on a student-led pro-democracy movement that took place on June 4, 1989.
The fact that it could take place wasn’t certain: organizers were worried that they’d see a repeat of last year, when Zoom, the California-based videoconferencing company, shut down three Tiananmen-related events, including theirs, after a request from the Chinese government. The company even temporarily suspended the accounts of the coordinators, despite the fact that all of them were located outside of mainland China and four of them were in the US.
Zoom’s actions led to an investigation and lawsuit filed by the Department of Justice in December. “We strive to limit actions taken to only those necessary to comply with local laws. Our response should not have impacted users outside of mainland China,” Zoom wrote in a statement posted to its website, in which it admitted that it “fell short.”
It was one of the most extreme examples of how far western technology companies will go to comply with China’s strict controls on online content.
A suite of suppression
This kind of self-censorship is standard for Chinese technology companies, which—unlike American businesses shielded by rules such as Section 230—are held responsible for user content under Chinese law.
Every year, a few days ahead of sensitive dates like the anniversary of the 1989 crackdown, the Chinese internet—which is already strictly surveilled—becomes even more closed than normal. Certain words are censored on various platforms. Commonly used emojis, like the candle, start disappearing from emoji keyboards. Usernames on different platforms can’t be changed. And speech that may have been borderline acceptable during other times of the year may result in a visit from state security.
This is accompanied by crackdowns in the real world, with increased security at Tiananmen Square in Beijing and other locations the government deems sensitive, while vocal critics of the regime are sent on forced vacations, detained, or jailed outright.
This year, such suppression is stretching even further. Following the passage of a new national security law in Hong Kong that severely curtails speech—despite months of protests—commemoration events there and in neighboring Macau have been officially banned. (Last year 24 people were charged for ignoring a similar ban, including one of the movement’s most prominent leaders, democracy activist Joshua Wong, who is still in jail and was recently sentenced to a further 10 months.)
Covid is playing its part too: a large public event planned in Taiwan has also been canceled, for example, due to a strict lockdown after a new wave of covid-19 infections.
All of this heightens the symbolism of this year’s online events.
“Our motto is ‘Tiananmen is not history,’” says Li-Hsuan Guo, a campaign manager with the New School for Democracy, a democracy advocacy organization in Taiwan that is organizing the largest Chinese-language memorial. Its event will be livestreamed on Facebook and YouTube: speakers appearing virtually include Fengsuo Zhou, the former Tiananmen student leader kicked off of Zoom last year, and former Hong Kong legislator Nathan Law, one of the leaders of the region’s Umbrella Movement.
On top of this there is the 24-hour Zoom vigil, as well as other English-language events on Clubhouse, the audio-only social network. Activists including Zhou have been holding daily four-hour-long Clubhouse meetings since April 15, the day pro-democracy protests started in 1989.
In a way, Zoom’s actions against Zhou last year—and the subsequent investigation by Washington—have given him a sense of safety: the scrutiny the company was put under makes him believe that it is unlikely to deplatform him again. But, he says, the incident still showed that far outside China, “there’s no safe place for activists.”
“There’s no such thing as ‘within China’ anymore”
Deplatforming is not the only consequence faced by individuals speaking out online.
Netizens in mainland China have had their identities exposed on Chinese social networks for participating on western platforms like Clubhouse and Twitter, and have even been jailed for making critical comments about Communist Party leaders on Twitter, despite the fact that the platform is inaccessible to most mainland users. And elsewhere, critics outside of the country have faced organized harassment campaigns, with protesters showing up in front of their homes, sometimes for weeks at a time. State-affiliated hackers have targeted Uyghurs and others in cyberattacks—including by impersonating UN officials, as MIT Technology Review reported last month.
“State-sponsored trolling and doxxing of activists [is] designed to intimidate them into quitting activism altogether,” says Nick Monaco, the director of China Research at Miburo Solutions and coauthor of a recent joint report on Chinese disinformation in Taiwan. “It arguably does the most to disrupt organizing in advance, by instilling … permanent fear,” he adds.
These activities still primarily affect the Chinese diaspora, says Katharin Tai, a PhD candidate at MIT who focuses on Chinese state internet policy and politics. But as both Chinese companies expand further overseas and western companies with Chinese presences are increasingly forced to “resolve this out in the open,” the rest of the world is starting to see the spillover effects of censorship more regularly.
Another case in point: just this week, Nathan Law’s website was taken down by Wix, an Israeli hosting company, at the request of Hong Kong police for violating national security law. It was reinstated, with an apology, three days later.
“There’s no such thing as something ‘just within China’ anymore, unless the platform is restricted from being accessed from abroad,” Tai says.
Sometimes people encounter these restrictions without even realizing: in early June, players of the online roleplaying game Genshin Impact, which is popular worldwide, began wondering on Twitter why they could no longer change their usernames.
Some with connections to China speculated that it was to prevent users from making statements with their usernames about Tiananmen—a common tactic—and that the feature would be back after the anniversary of Tiananmen had passed.
Some of the commenters griped about being stuck with embarrassing names, but others used it as an opportunity to educate other players. “For those living in China, censorship and political persecution are very real things happening in China right now,” wrote one Chinese American user. “It’s a lived experience. It does not ‘go back to normal.’”
For all of the recent advances in language AI technology, it still struggles with one of the most basic applications. In a new study, scientists tested four of the best AI systems for detecting hate speech and found that all of them struggled in different ways to distinguish toxic and innocuous sentences.
The results are not surprising—creating AI that understands the nuances of natural language is hard. But the way the researchers diagnosed the problem is important. They developed 29 different tests targeting different aspects of hate speech to more precisely pinpoint exactly where each system fails. This makes it easier to understand how to overcome a system’s weaknesses and is already helping one commercial service improve its AI.
The study authors, led by scientists from the University of Oxford and the Alan Turing Institute, interviewed employees across 16 nonprofits who work on online hate. The team used these interviews to create a taxonomy of 18 different types of hate speech, focusing on English and text-based hate speech only, including derogatory speech, slurs, and threatening language. They also identified 11 non-hateful scenarios that commonly trip up AI moderators, including the use of profanity in innocuous statements, slurs that have been reclaimed by the targeted community, and denouncements of hate that quote or reference the original hate speech (known as counter speech).
For each of the 29 different categories, they hand-crafted dozens of examples and used “template” sentences like “I hate [IDENTITY]” or “You are just a [SLUR] to me” to generate the same sets of examples for seven protected groups—identities that are legally protected from discrimination under US law. They open-sourced the final data set, called HateCheck, which contains nearly 4,000 examples in total.
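The template mechanism the researchers describe can be sketched as follows. This is a minimal illustration only: the template strings mirror those quoted above, but the identity list, labels, and function are placeholders, not the actual contents of HateCheck.

```python
# Sketch of HateCheck-style template expansion. The group list below is a
# placeholder for illustration; the real data set defines its own groups,
# slurs, and labels.
from itertools import product

templates = [
    ("I hate [IDENTITY].", "hateful"),
    ("I love [IDENTITY].", "non-hateful"),
]
identities = ["women", "immigrants"]  # hypothetical protected groups


def expand(templates, identities):
    """Fill each template's [IDENTITY] slot with every group,
    yielding the same labeled test case for each population."""
    cases = []
    for (template, label), identity in product(templates, identities):
        cases.append({
            "text": template.replace("[IDENTITY]", identity),
            "label": label,
        })
    return cases


cases = expand(templates, identities)  # 2 templates x 2 groups = 4 cases
```

Because every group receives the same sentences, a model’s per-group error rates can be compared directly, which is how the study surfaced uneven performance across protected groups.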
The researchers then tested two popular commercial services: Google Jigsaw’s Perspective API and Two Hat’s SiftNinja. Both allow clients to flag up violating content in posts or comments. Perspective, in particular, is used by platforms like Reddit and news organizations like The New York Times and Wall Street Journal. It flags and prioritizes posts and comments for human review based on its measure of toxicity.
While SiftNinja was overly lenient on hate speech, failing to detect nearly all of its variations, Perspective was overly tough. It excelled at detecting most of the 18 hateful categories but also flagged most of the non-hateful, like reclaimed slurs and counter speech. The researchers found the same pattern when they tested two academic models from Google that represent some of the best language AI technology available and likely serve as the basis for other commercial content-moderation systems. The academic models also showed uneven performance across protected groups—misclassifying hate directed at some groups more often than others.
The results point to one of the most challenging aspects of AI-based hate-speech detection today: Moderate too little and you fail to solve the problem; moderate too much and you could censor the kind of language that marginalized groups use to empower and defend themselves: “All of a sudden you would be penalizing those very communities that are most often targeted by hate in the first place,” says Paul Röttger, a PhD candidate at the Oxford Internet Institute and co-author of the paper.
Lucy Vasserman, Jigsaw’s lead software engineer, says Perspective overcomes these limitations by relying on human moderators to make the final decision. But this process isn’t scalable for larger platforms. Jigsaw is now working on developing a feature that would reprioritize posts and comments based on Perspective’s uncertainty—automatically removing content it’s sure is hateful and flagging up borderline content to humans.
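The uncertainty-based triage Vasserman describes can be sketched roughly like this. The function name and threshold values are hypothetical; Perspective’s actual scoring pipeline and Jigsaw’s internal logic are not shown.

```python
# Hypothetical sketch of uncertainty-based content triage: auto-act on
# high-confidence scores, route the uncertain middle band to humans.
# Thresholds are invented for illustration.
def triage(toxicity_score, remove_above=0.9, review_above=0.5):
    """Route a piece of content based on a model's toxicity score in [0, 1]."""
    if toxicity_score >= remove_above:
        return "remove"        # model is confident the content is hateful
    if toxicity_score >= review_above:
        return "human-review"  # borderline: a moderator makes the call
    return "keep"              # model is confident the content is fine
```

The design choice here is that human attention, the scarce resource, is spent only on the band where the model is least certain, rather than on every flagged post.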
What’s exciting about the new study, she says, is it provides a fine-grained way to evaluate the state of the art. “A lot of the things that are highlighted in this paper, such as reclaimed words being a challenge for these models—that’s something that has been known in the industry but is really hard to quantify,” she says. Jigsaw is now using HateCheck to better understand the differences between its models and where they need to improve.
Academics are excited by the research as well. “This paper gives us a nice clean resource for evaluating industry systems,” says Maarten Sap, a language AI researcher at the University of Washington, adding that it “allows for companies and users to ask for improvement.”
Thomas Davidson, an assistant professor of sociology at Rutgers University, agrees. The limitations of language models and the messiness of language mean there will always be trade-offs between under- and over-identifying hate speech, he says. “The HateCheck dataset helps to make these trade-offs visible,” he adds.
Just weeks after a major American oil pipeline was struck by hackers, a cyberattack hit the world’s largest meat supplier. What next? Will these criminals target hospitals and schools? Will they start going after US cities, governments—and even the military?
In fact, all of these have been hit by ransomware already. While the onslaught we’ve seen in the last month feels new, hackers holding services hostage and demanding payment has been a huge business for years. Dozens of American cities have been disrupted by ransomware, while hospitals were hit by attacks even during the depths of the pandemic. And in 2019, the US military was targeted. But that doesn’t mean what we’re seeing now is just a matter of awareness. So what’s different now?
It’s the result of inaction
You cannot explain the metastasizing of the ransomware crisis without examining years of American inaction. The global ransomware crisis grew to incredible proportions during the Donald Trump presidency. Even as US critical infrastructure, cities, and oil pipelines were hit, the Trump administration did little to address the problem, and it went ignored by most Americans.
The ransomware boom started at the tail end of the Obama White House, which approached it as part of its overall cybercrime response. That involved putting agents on the ground around the world to score tactical wins in countries that were otherwise uncooperative, but defense against such attacks fell down the list of priorities under Trump even as ransomware itself boomed.
Today, the Biden administration is making an unprecedented attempt to tackle the problem. The White House has said that the hackers behind both the Colonial Pipeline and JBS ransomware attacks are based in Russia, and response efforts are underway at Homeland Security and the Justice Department. But while President Biden plans to discuss the attacks in an upcoming summit with Vladimir Putin on June 16, the problem goes deeper than the relationship between two countries.
It’s also the result of new tactics
When the ransomware industry was taking off half a decade ago, the business model for such attacks was fundamentally different—and far simpler. Ransomware gangs started out by indiscriminately infecting vulnerable machines without much care for exactly what they were doing or who they were targeting.
Today, the operations are much more sophisticated and the payouts are much higher. Ransomware gangs now pay specialist hackers to go “big game hunting” and seek out massive targets that can pay out huge ransoms. The hackers sell the access to the gangs, who then carry out the extortion. Everyone gets paid so handsomely that it’s become increasingly irresistible—especially because the gangs typically suffer no consequences.
There’s safe harbor for criminals
That leads to the next dimension of the problem: The hackers work from countries where they can avoid prosecution. They operate massive criminal empires and remain effectively immune to all attempts to rein them in. This is what Biden will bring up to Putin in the coming weeks.
The problem extends beyond Russia and, to be clear, it’s not as simple as Moscow directing hackers. But the Kremlin’s tolerance of cybercriminals—and sometimes even direct cooperation with them—is a real contributor to the booming criminal industry. To change that, America and other countries will have to work together to confront nations that otherwise see no problem with US hospitals and pipelines being held for ransom. The safe harbor for cybercriminals, combined with the mostly unregulated cryptocurrency used to facilitate the crime, has made it very favorable for the hackers.
And we’re all more connected and insecure than ever
And then there is the unavoidable fact that weak cybersecurity combined with ubiquitous connectivity equals increasingly vulnerable targets. Everything in America—from our factories to our hospitals—is connected to the internet, but a lot of it is not adequately secured.
Globally, the free market has repeatedly failed to solve some of the world’s biggest cybersecurity problems. This may be because the ransomware crisis is a problem of a scale that the private sector cannot solve alone.
As ransomware and cybercrime increasingly become a national security threat—and one that risks harming human beings, as in the case of attacks against hospitals—it’s become clear that government action is required. And so far officials from the world’s most powerful nations have chiefly succeeded in watching the disaster unfold.
Instead, what must happen to change this is a global partnership between countries and companies to take ransomware head on. There is momentum to change the status quo, including a major recent cybersecurity executive order out of the White House. But the work is only beginning.
The last time NASA launched a dedicated mission to Venus was in 1989. The Magellan orbiter spent four years studying Venus before it was allowed to crash into the planet’s surface. For almost 30 years, NASA has given Venus the cold shoulder.
All of that is about to change with a double feature. NASA administrator Bill Nelson announced Wednesday that the agency has selected two new missions to explore Venus: DAVINCI+ and VERITAS. In the words of planetary scientist Paul Byrne of North Carolina State University, “We have gone from a drought to a banquet.”
It’s honestly a bit hard to understand why NASA has not been more bullish about going back to Venus in such a long time. It’s true that Venus has always been a tough bugger to explore because of its hostile environment. The surface boasts temperatures of up to 471 °C (hot enough to melt lead) and ambient pressures 89 times those on Earth. The atmosphere is 96% carbon dioxide. And the planet is covered in thick clouds of sulfuric acid. When the Soviet Union landed the Venera 13 probe on the planet in 1982, it lasted 127 minutes before it was destroyed.
And yet, we know that conditions there weren’t always so harsh! Venus and Earth are known to have started as similar worlds with similar masses, and both reside in the habitable zone of the sun (the region where it’s possible for liquid water to exist on a planet’s surface). But only Earth became habitable, while Venus turned into a hellscape. Scientists want to know why. These new missions, says Byrne, “will help us fundamentally answer the question why is our sibling planet not our twin?”
In just the last year, another huge development has encouraged NASA to take Venus exploration more seriously: the prospect of finding life. In September 2020, scientists announced that they had potentially discovered phosphine gas—which is known to be produced by biological life—in Venus’s atmosphere. Those findings came under enormous scrutiny in the ensuing months, and it’s now not quite clear whether the phosphine readings were real. But all the excitement renewed discussion of whether extraterrestrial life might be found on Venus. This tantalizing prospect put Venus at the forefront of the public’s mind (and possibly the minds of legislators who approve NASA’s budget).
The selection of both new missions “is a very clear statement from NASA to the Venus community to say, ‘We see you, we know you’ve been neglected, and we’re going to make that right,’” says Stephen Kane, an astronomer at the University of California, Riverside. “It’s an incredible moment.”
DAVINCI+ is short for Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus. It’s a spacecraft that will plunge into the dense, hot atmosphere of Venus and parachute down to the surface. On its 63-minute descent, it will use multiple spectrometers to study the atmosphere’s chemistry and composition. It will also image the Venusian landscape to better understand its crust and terrain (and if successful, it would be the first probe to photograph the planet during descent).
VERITAS, short for Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy, is an orbiter designed to carry out other research from a safer distance. It would use radar and near-infrared spectroscopy to peer below the planet’s thick clouds and observe the geology and topography of its surface.
The two missions each have a distinct focus: DAVINCI+ will study the history and evolution of the atmosphere, climate, and water on Venus, while VERITAS is meant to help scientists learn about Venus’s innards—its volcanic and tectonic history, its mass and gravitational field, its geochemistry, and the extent to which the planet is still seismically active.
And the fact that both missions are expected to travel to Venus around the same time—between 2028 and 2030—means they can complement each other. Kane, for instance, points out that a planet’s habitability is guided by a number of factors, including plate tectonics and subduction—a process that recycles carbon from the atmosphere into the planet’s interior—and its atmospheric chemistry. While VERITAS can provide unprecedented observations of the surface and tell us whether carbon recycling is happening, DAVINCI+ will probe the atmospheric chemistry directly. Together, he says, the two missions are “absolutely perfect” for providing a clear picture of how these processes play into the habitable potential of Venus (or lack thereof).
Still, these missions are only a prelude to what Byrne hopes will be a larger exploration program devoted to studying Venus in the same way we study Mars—through multiple missions that can explore its surface, atmosphere, and orbit at the same time. “One mission isn’t enough—two missions aren’t enough!” he says. DAVINCI+ and VERITAS could help lay the groundwork for such a program many decades down the road. Maybe bringing back a sample from Venus, as we’re poised to do soon with Mars, is possible within our lifetimes.
But how many directors get lost in the technicalities of technology? The challenge for a chief information security officer (CISO) is talking to the board of directors in a way the directors can understand, so that they can support the company.
It’s drilled into the heads of board directors and the C-suite by scary data-breach headlines, lawyers, lawsuits, and risk managers: cybersecurity is high-risk. It’s got to be on the list of a company’s top priorities.
Niall Browne, senior vice president and chief information security officer at Palo Alto Networks, says that you can look at the CISO-board discussion as a classic sales pitch: successful CISOs will know how to close the deal just like the best salespeople do. “That’s what makes a really good salesperson: the person that has the pitch to close,” he says. “They have the ability to close the deal. So they ask for something.”
“For ages,” Browne says, CISOs have had two big problems with boards. First, they haven’t been able to speak the same language so that the board could understand what the issues were. The second problem: “There was no ask.” You can go in front of a board and give your presentation, and the directors can look like they’re in agreement, nodding or shaking their heads, and you can think to yourself, “Job done. They’re updated.” But that doesn’t necessarily mean that the business’s security posture is any better.
That’s why it’s important for CISOs to raise the board’s understanding to the level where they know what’s needed and why. That’s especially true for new advances in cybersecurity, like attack surface management, which is “probably one of the areas that CISOs focus least on and yet is the most important,” Browne says. After all, “many times the CISO and the security team may not be able to see the wood from the trees because they’re so involved in it.” To get there, CISOs need a set of metrics so that anybody can read a board deck and within minutes understand what the CISO is trying to get across, Browne says. “Because for the most part, the data is there, but there’s no context behind it.”
This episode of Business Lab is produced in association with Palo Alto Networks.

Full transcript:
Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.
Our topic today is cybersecurity and corporate accountability. In recent years, cybersecurity has become a board-level concern, with damaged reputations, lost revenue, and enormous amounts of stolen data. As the attack surface grows, chief information security officers will have increasing accountability for knowing where to expect the next attack and how to explain how it happened.
Two words for you: outside-in visibility.
My guest is Niall Browne, who’s the senior vice president and chief information security officer at Palo Alto Networks. Niall has decades of experience in managing global security, compliance and risk management programs for financial institutions, cloud providers and technology services companies. He’s on Google’s CISO advisory board.
This episode of Business Lab is produced in association with Palo Alto Networks.
Niall Browne: Excellent. Thank you, Laurel, for having me.
Laurel: So as a chief information security officer, or a CISO, you’re responsible for securing both Palo Alto Networks’ products and the company itself. But you’re not securing just any old company; you’re securing a security company that secures other companies. How is that different?
Niall: Yes, so I think, the beautiful thing about Palo Alto Networks is that we’re the largest cybersecurity company in the world. So we really get to see what an awful lot of companies never get to see. And if you think about it, one of the key things is, knowledge is power. So the more you know about your adversaries, what they’re doing, what methods they’re attempting on the network, what controls work and what controls don’t, the better placed you are to create your own internal strategy to help protect against those continuous attacks. And you’re in a much better position to be able to provide that data to the board so they can ensure that the appropriate oversight is in place.
So certainly for us, with that level of knowledge of what we get to see in our networks, that really gives us the opportunity to continuously innovate. So taking our products and continuously building on those, so we can meet the customer requirements and then the industry requirements. So I think that’s probably the first part. The second part is, we’re really in this boat together. So part of my job is continuously talking to individuals in the industry and fellow CISOs, CTOs, CIOs, and CEOs about cybersecurity strategy. And invariably, you’ll find the same issues that they’re having are the exact same issues that we’re having. So for us, it’s really the opportunity to share: how do we ensure that we are able to continuously innovate, make a difference in the industry, and really collaborate on an ongoing basis with industry leaders? Especially focusing on how we secure our business and provide best practices as to how companies can be more secure.
Laurel: So some people may be surprised that collaboration and this kind of open sharing of knowledge is so prevalent, but they shouldn’t be, right? Because how else are you going to all collectively defend against the unknown attackers?
Niall: Great question. And if you look at it on the opposite side of the fence, hackers are continuously sharing. Albeit they’re sharing for financial gain. In other words, they’ll steal data and they’ll resell it and resell it and resell it and resell it. Hackers are continuously sharing that data, including DIY toolkits. And on the security side of the house, there’s always been historically that legacy suspicion. In other words, I’m the only person who’s having this problem uniquely. And if I share this problem, they’ll think that I’m not doing a good job or the company isn’t doing a good job, or I’m the only person who’s having this specific issue. And what happened over time is, CISOs didn’t share a lot of data, which means the hackers were sharing data right left and center. But on the CISO side of the house, on the protection side, there was very little collaboration, which meant that now you had limited shared industry best practices.
Each CISO was in their own silo, in their own pillar, doing their own unique thing, and everybody was learning from their own mistakes. So it was really a one-to-one model. You make a mistake and then you make another mistake, and then you make another mistake. However, if you could talk to your peer, imagine in business or finance, you’re continuously talking to the CTO and the CFO to say, “Oh, by the way, how did you manage such and such issue?” So I’m now seeing the industry starting to change. CISOs are now starting to change, and share. They’re continuously talking about strategy. They’re continually talking about how do they protect their environment? They’re talking about, what are some of the good business models that work?
And if you look at MIT, there are industry and technical and business models that really work in other industries. But then, if you look in the CISO community itself, it’s like, what are those industry best practices? Only now are they starting to get formulated, to bubble up from there. And what I’m seeing, certainly over the last three or four years, is tremendous growth among CISOs in relation to learning industry best practices, and really uplevelling their skillset. So they’re not just that technical geek in the corner. They really need to be able to talk business technology, be able to talk business terms, and really be seen as a close peer to the CTO, to the CIO, to the CEO in relation to solving business problems.
Because if you think about it from a cybersecurity perspective, at the end of the day, it’s just a business problem. And if it’s a business problem, you need to apply strategic business solutions to solving those issues. Instead of talking about what version of antivirus you’re on, you really need to uplevel the conversation, so that, when you speak to the board, when you’re speaking to the same C-level executive, they’re not throwing their eyes in the air. They understand that you’re talking the same business language as them. Which means, again, if you’re a trusted business partner, then you can make a huge amount more difference in the company, as opposed to being seen as that junior IT leader in the organization that somebody only ever comes to if we get hacked or if a backup fails, or if a Mac is broken.
Laurel: I really like that analogy…growth of the position itself. Like you said, it does actually elevate this role to the board table because it is a business problem with a possible business solution. But how can boards then in return make better decisions? You will then also have to bring some data and information and something to help the board along with all of the other decisions they have to make across the entire company.
Niall: And that’s the key thing, is that most people, when they look at it, it’s classic sales. You can have the best salesperson in the business, but unless they have the close, and the close is the ask. Here’s a great product, and I want to sell this product, i.e., this car for, let’s say, $50,000. And then at the end of the sales pitch, will you buy the car? And that’s what makes a really good salesperson, the person that has the pitch to close. They have the ability to close the deal. So they ask for something. So I think for ages, CISOs had two big issues with the board. One is, they weren’t able to report the right data up to the board and speak the same language where the board would be able to understand what the issues were.
And then two, there was no ask. And that’s very important because if you go into a board and you present and everybody’s nodding and shaking their head and understanding it, sure you’ve updated them, but the security posture is none the better. And if you look at a classical board, any board itself, they’re there at a very, very high level, obviously, to serve the company. So any of the board members or any of the boards that I’ve worked with in the past, they have been extremely willing to help the business itself. So they’re always looking at, “Well, you presented X, but now, how can I help?” So I think CISOs need to flip it into more of being that salesperson with the close. Most importantly, what’s my ask?
And a classic board meeting that goes well, I think, is one where you sit down, you work with the board, you show a core set of metrics. Now, you don’t want to show metrics on numbers that are absolutely meaningless to the board. If you look at the board, it has a wide range of skill sets. Some board members may be compliance experts, some may be business leaders, some may be finance leaders. So when you communicate with the board, it’s really about two things. One is coming up with a set of communications or metrics, and really outlining the business case so that anybody can read a board deck and within minutes understand what you’re trying to get across. That’s critical.
And then a second part is, it’s not a presentation. Every board meeting should end with time at the end for questions and answers and for the ask. And I would say, a good board meeting is whereby you don’t even go through the deck. You share the deck in advance, they’ve read through it, they were able to understand your cybersecurity posture by just looking at your deck. And then the board meeting doesn’t even refer to the deck. It’s a simple set of questions, comments back and forth and then the ask. And the ask could be, “Listen, can we get some more focus on a certain area itself or more resources?” Or they may have an ask of you as well. So again, I think the model really is, communicate a core set of data and then making it a conversation with a collaborative ask from both sides versus coming up with a 30-slide deck that nobody understands that you present it and then you run out of the board meeting from there. That model just doesn’t work, as we know.
Laurel: Yeah. Not for anyone, right? So what specific metrics do you actually report back to the board and why are those metrics important to your board or any other board?
Niall: The issue with any industry, including cybersecurity, is that sometimes there’s just too much data. So, if you look at industry standards like ISO 27001, you may have a hundred and something controls. If you look at FedRAMP, you’ve got 300-something controls. The same if you look at COSO or COBIT. So you don’t want to go to the board with, “By the way, here’s 2,000 controls. And here’s how we’re in compliance with these 2,000 controls.” Because for the most part, the data is there, but there’s no context behind it. So they’re wondering, like, “AV being on 95% of endpoints, is that good? We scan once every, let’s say, 12 hours, is that good?” So they’re what I call meaningless metrics. They have no benefit whatsoever for most InfoSec people, never mind board-level leaders. So from our point of view, we break it into a simple core set of pillars that we can measure over time.
And generally, you don’t want to have a set of 25 pillars, because that’s too many; you’re not able to measure one versus 25. So internally, we generally settle on about five major core areas that we focus in on, and we measure against those each time. So one is, secure our products. Most organizations are very, very product-centric now. So products in most companies are becoming critical, critical, critical. So one thing we measure is how we’re protecting our products. And we rate ourselves on a scale of zero up to five, five being maximum maturity.
Now, if you have really good products, but they’re sitting on infrastructure that’s insecure, you have an issue. So the second one is, secure our infrastructure. And the third one is detection and response. So that if you’ve got really secure products on really secure infrastructure, but nobody’s looking at it and nobody’s measuring or monitoring the environment for attacks, then you have an issue. So for us, it’s detection response is the third one, which is critical.
The fourth one then is people. And the people component, it’s absolutely…I can’t stress this enough, because if you don’t have people that understand cybersecurity, then you’ve got a core issue. The vast majority of the time, it’s people in a company accidentally doing something, i.e., they may click on a phishing link that compromises your network. So one thing we focus on is what we call street smart. So this fourth pillar is, can we get people so they’re street smart? In other words, cybersecurity smart, street smart. So if they’re walking down the road and they see a stranger looking suspicious, well, use your gut. Same thing with cybersecurity. What are the simple things that they should do or think about on a day-to-day basis so that they can protect the company?
And then the fifth one really is governance. How do we do governance, how do we manage ourselves, and how do we measure our success? So if you look at it there, it’s five simple pillars. It’s just simply product, infrastructure, detection and response, people, and governance. And we measure zero to five for each of those. So then it’s very easy for the board and for other members to look at how we’re trending against those areas over time. It allows you to go high, in other words, the thousand-foot view. And then if there’s a question about infrastructure, you can look at the measurement of the infrastructure pillar, and then you can start jumping into other metrics later if they want. But really, that’s the way we articulate how we built our security program. And that’s something that I think resonates very strongly with the board, because now they’re able to measure us based on known entities versus meaningless metrics that for the most part tell them nothing.
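The five-pillar, zero-to-five scoring Browne describes lends itself to a simple scorecard tracked quarter over quarter. Here is a minimal Python sketch of that idea; the pillar names follow the transcript, but the `QuarterlyScorecard` class, quarters, and scores are invented for illustration:

```python
from dataclasses import dataclass

# The five pillars Browne names, each rated 0 (none) to 5 (maximum maturity).
PILLARS = ["products", "infrastructure", "detection_response", "people", "governance"]

@dataclass
class QuarterlyScorecard:
    quarter: str
    scores: dict  # pillar name -> 0..5

    def __post_init__(self):
        # Reject unknown pillars and out-of-range scores up front.
        for pillar, score in self.scores.items():
            if pillar not in PILLARS:
                raise ValueError(f"unknown pillar: {pillar}")
            if not 0 <= score <= 5:
                raise ValueError(f"score out of range for {pillar}: {score}")

def trend(scorecards, pillar):
    """Return one pillar's score over time, oldest first, for board reporting."""
    return [(sc.quarter, sc.scores[pillar]) for sc in scorecards]

# Invented example data: two quarters of self-assessment.
q1 = QuarterlyScorecard("2021-Q1", {"products": 4, "infrastructure": 3,
                                    "detection_response": 4, "people": 2, "governance": 3})
q2 = QuarterlyScorecard("2021-Q2", {"products": 4, "infrastructure": 4,
                                    "detection_response": 4, "people": 3, "governance": 3})

print(trend([q1, q2], "infrastructure"))  # [('2021-Q1', 3), ('2021-Q2', 4)]
```

The point of the structure is exactly what Browne argues: a board member can read the trend for one pillar at a glance, then drill into detailed metrics only if a pillar is slipping.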
Laurel: Now, what if we switched that though? What kind of responsibility does the board have to be “street smart” and have some kind of foundational understanding of cybersecurity? Or do you take that on as your own personal responsibility to spend time with each member to make sure they understand the foundations?
Niall: Correct. So for us, it’s very much a case of taking a certain level of knowledge and then building on that knowledge so at least everybody’s on the same level of knowledge. So one example is, again, you could have somebody who’s chairing that audit committee, who’s very, very technical or very, very compliance driven. And she or he may know all about boards…audits and all the frameworks. And that’s great. And then the other side, you might have somebody who’s more finance-based or more audit-based. And then the question is, how do you work on uplevelling everybody’s skillset?
And there’s numerous different ways of doing that. It’s two things. One is sitting down with them one-on-one and then providing an uplevel of conversation on, this is what we’re doing. This is our entire security program. This is how it works. This is what 2020 looked like. This is what 2021 looks like…so getting everybody onto the same level and building that relationship is very, very important.
And we continuously see that whereby our board members will reach out towards us or we’ll reach out to them in sharing data, or they’ll have an idea that we haven’t thought about and we’ll say, “Well, that’s a really good idea. Let’s incorporate that into our program.” So I think that’s very useful. And then the second part is, it’s all about telling a story. So a story and a narrative. So if you open up a book and you start at the security side and you start at the end chapter, well, that’s not very compelling. It’s like, who’s Jane? Who’s Judy? Who’s Tim? Who’s Tony? Doesn’t make any sense whatsoever.
And oftentimes, that’s what happens with cybersecurity reports: the board is looking at she or he who’s presenting as a CISO, and they’re presenting a set of data and metrics that the board doesn’t understand, and so therefore the board can’t do anything with it. So we spend a lot of time, in our first board meeting, starting off with a basic set of principles, and then in each board meeting after that, every three months or so, we go into more detail incrementally. As we’re growing and building that cybersecurity deck, they get to better understand and uplevel their understanding as well. And then from their side, with that level of understanding, they can very easily jump in and say, “Oh, by the way, here’s an area I think you should be focusing in on.”
And on our board, we have some VC firms, obviously, that are highly technical, and they’ll have a slant that they’ll want us to focus in on. And we’ll say, “Sure, let’s incorporate that as part of our program.” So I see board communication as very much a back-and-forth. It shouldn’t happen just once a quarter. It doesn’t need to happen on a daily basis, but certainly it should happen throughout the quarter, whereby a board member has an idea and then you can incorporate that as part of your best practices.
Now, at the same time, you want the staff within that company to be able to operationally run their security team. But certainly, the insights some board members can provide are in some cases tremendous, because they’ve been in that industry for numerous years. And as part of that model, they would typically have seen what other individuals have never seen before. Plus, I think what’s most beneficial from there: cybersecurity, again, is a business problem and a business process. So most of these board members are exceptional at solving business problems. Maybe not cybersecurity, but they can take a cybersecurity issue, relate it to other business best practices, and then leverage those in cybersecurity.
And frankly, I think that’s the best value a board can provide. Many times the CISO and the security team may not be able to see the wood from the trees because they’re so involved in it. For the board members, it’s a great kind of prism whereby they can look at it from the outside in, and they can provide insight based on, “Well, hang on a second, the way you’re solving this issue in cybersecurity by doing a consulting model, that doesn’t work or that doesn’t scale. Instead, you should do a one-to-many model, i.e., fix the problem once and then it’s shared amongst all your constituents, the same as cloud does, software as a service does.” So that business slant, that business perspective, is something that I really enjoy working with a board on, sharing some ideas and then collaborating back and forth. Because again, I think their business acumen is second to none. And if you can simply position cybersecurity as being a business issue, then you can build a very strong collaborative environment really quickly.
Laurel: So speaking of your own uplevelling or upskilling, when did you first recognize that attack surface management was a separate new discipline that you needed to become really familiar with, educate your board on and then help staff it and plan for it?
Niall: Good question. I think if I look at ASM, or attack surface management, that’s probably one of the areas that CISOs focus least on and yet is the most important. And the reason for that is, if you look at any hacker, if a hacker wants to compromise your environment, the first thing that they will do is get to know your environment. So an example is, if you have a burglar, once they get into a housing estate, she or he will often wander around the estate and take a look: which are the houses that have the bins out, which ones have ground-floor windows open, which ones have no lights on at the front of the house, which one has the dog barking?
So you wander by. Simply all you’re doing is a recon. A quick walk by 20 houses in a housing estate. You pick out the two. Now you’ve got two targets. Then you come back later on in the night or you come back tomorrow evening and then you break into those two. Done. And again, you’re looking at the way different industries do it. It’s fascinating because if you look at one industry, i.e., physical security and then you apply cybersecurity or you apply it to the board, oftentimes there’s a huge amount of similarity. And the same thing with cybersecurity is, if a company wants to compromise your environment, there’s two ways it will generally happen. One is, they’re generally doing a network scan and they look at your company and they find you have weak security. And then they turn their head back and they’re like, “Oh, interesting, a back door is open. I’m going to focus in on this company.”
Or else two, same thing as well, they’re doing a recon but they already know who you are. And in this case, they want to learn as much as possible so they can compromise you deep within your network. So, before you do any hacking of the environment, the recon component is the most critical part. Otherwise, you’re a bull in a china shop. You’re rushing in, you’re knocking off sensors, right, left and center. You shouldn’t be going in the front door, you should be going in the back door. So the recon component on that is critical, critical, critical.
Now, if you ask most CISOs when was the last time they reconned their own company, the vast majority will say, “I have no idea whatsoever.” So they may say, “Well, we use a security scanner.” But if you look at a security scanner, what you do is you put in a set of known IP addresses that you know about, and you scan against those IP addresses. But that’s the tip of the iceberg, because what does the new industry model look like? It’s fluid. Gone are the days when cybersecurity would stand up a firewall and simply not allow traffic through it.
Now everything is extremely dynamic. Everything is internet facing. So now you’ve got Kubernetes, you’ve got people spinning up tens of thousands of containers with their own external IP addresses. They’re all accessible from the internet. You’ve got dev doing it, stage doing it. You’ve got all of the different environments coming. And now your attack surface changes every single minute of every single day. Some of it is genuine: you’re allowing an IP address that’s out there because there’s a legitimate business reason. But oftentimes what will happen is, people will spin up an environment and suddenly it’s exposed to the internet.
Does the security team know about it? Likely not, and the CISO has no idea about it. So the ability to get to know, to recon your environment, the ASM, or attack surface management, is absolutely critical. Because if you don’t know it, you can’t protect it. And then the issue is, you could spin up an IP address in GCP or AWS or Alibaba. It could be on-prem; everybody’s now working from home. So my laptop could be exposed from the internet. And if you look at it, what always happens in virtually every single attack, well, for the most part, is that it starts on the outside and works its way in. So you really need to know your attack surface. You need to be scanning it every single day. You need to be able to attribute what are the IP addresses and devices that are exposed.
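Browne’s point that the attack surface changes every minute, and must be scanned every day, amounts in practice to diffing today’s external scan against yesterday’s baseline so new exposures surface immediately. A toy Python sketch of that diff; all addresses and service names here are invented for illustration, and a real scanner feed would replace the hard-coded dictionaries:

```python
# Baseline: externally visible (ip, port) -> service from yesterday's scan.
yesterday = {
    ("203.0.113.10", 443): "https",
    ("203.0.113.11", 443): "https",
}

# Today's scan: one shell service has appeared overnight.
today = {
    ("203.0.113.10", 443): "https",
    ("203.0.113.11", 443): "https",
    ("203.0.113.25", 22):  "ssh",  # accidentally exposed remote shell
}

# Diff the two snapshots: new exposures are the urgent ones.
newly_exposed = {k: v for k, v in today.items() if k not in yesterday}
disappeared = {k: v for k, v in yesterday.items() if k not in today}

for (ip, port), service in newly_exposed.items():
    print(f"ALERT: new internet-facing service {service} on {ip}:{port}")
```

The value is not in any single scan but in the comparison over time: without a baseline, the accidentally exposed shell looks like just another entry in a long list.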
Simple example is, if you look at the last number of breaches that occurred, it’s simple stuff. Most times, it’s a cluster that was exposed from the internet, or somebody allowed a remote administration shell like SSH or RDP from the internet, or somebody got a Kubernetes cluster and exposed it from the internet. In each of these cases, it’s just humans making accidental mistakes. But oftentimes, those IP addresses could be exposed to the internet for minutes, for days, for years, and security never gets to know about it, or protect against it. But at the same time, the hacker knows, because they’re doing their job, they’re doing the recon continuously. And that’s where I’m seeing this issue that’s been around for years, “How do I know what’s exposed to the internet?”, now being defined. It’s attack surface management. What’s my outside-in view?
So for the first time ever, cybersecurity teams are starting to act. They knew there was a problem for ages, but they weren’t able to articulate what the problem was, never mind what the solution was. And now I’m seeing the shift, certainly in the last year or two: it’s no longer a problem whereby you can just look at it and say, “Yeah, it’s a problem.” Now, you’ve got to shift from that problem identification to, “Hey, we’ve got to go fix this.” Because that’s how the hackers are getting in. And now I’m seeing people saying, “Let’s start fixing this.” And I think going forward, attack surface management is going to be one of the most critical components of any CISO and their organization. If not, they will get owned. They will get compromised, and it will have a devastating impact on their business.
Laurel: So speaking of that, and how the board understands attack surface management: most IT employees are going to take the path of, like you said, ease and expediency. They’re spinning up Kubernetes and servers and cloud instances and whatever it may be, because they just need to get the job done. Why is that such a problem, or I should say, such an opportunity to solve, for a global company when you go through other business necessities, like a merger and acquisition? You may have two companies coming together, and you think you know where all the servers are, but in fact a company grows and changes every single day, and that may not be the last reliable count. Why is that a concern for CISOs and the board?
Niall: So I think about this in two ways. One is, know the attack surface of your own company. And then, two, for any of your acquisitions, before you acquire them, you need to know what their attack surface is as well. So if you ask 99% of CISOs, “Tell me about my attack surface,” they won’t have the data to do that. So to give you an example, at Palo Alto Networks, we use Xpanse. And the way that works is, there are four main phases I think about in attack surface management. And this applies whenever you’re acquiring a company, or to anything you’ve integrated in the last 10 years within your organization.
And the first part is continuous discovery. So you’ve got to have the ability—and that’s why we use Xpanse—to continuously scan, 24 by 7 by 365, every single IP address on the internet to work out what IP addresses and what ports are open. So, first of all, you’ve got to know all of the IP addresses and the ports on the internet. The issue there is, that’s fine, but it’s not really going to give you much. What’s the difference between an IP address in Palo Alto Networks and an IP address of Acme, especially when it changes every single minute? Because everything is dynamic; everything changes continuously on the internet.
So the second part really for us is attribution. So everything is scanned, and then we do attribution. We start looking at every single IP address, every single service, every single user on the internet to work out whether they are Palo Alto Networks users or Palo Alto Networks devices or networks. Very critical, because we’re able to see at any time, if somebody plugs in a laptop in London, we’re able to get attribution that that’s one of our devices and networks. And if that network or device opens up RDP, a remote shell, from the internet, then that’s an issue. Or if somebody spins up a network that we have no idea what it is, and it’s got personally identifiable information (PII) or healthcare data, that would be devastating for our business. So we spend a lot of time using tools such as Xpanse for the attribution component there.
The third component we look at: now you know the IP addresses and services, and you know which ones are Palo Alto Networks’. Next after that, there are varying risk levels. If something exposed to the internet is a web server, and it’s communicating using SSL encryption, and it’s well patched, then, for the most part, the risk in that case is probably one out of 10. But then you might have another IP address that was spun up, allowing an internal engineering tool that was accidentally exposed to the internet, one that has access to your cloud environments and isn’t patched. And oftentimes it isn’t, because when you look at tools that are exposed accidentally, they’re not managed; if they were managed in the first place, they wouldn’t be exposed to the internet.
So for us, really, the model is: what’s the risk level of every single IP address and every single service? We can then focus on the ones that are eight or nine out of 10, and on a daily or hourly basis we can go fix those. But oftentimes, again, if they’re exposed to the internet, they’re not patched and they’re not managed. They’re accidentally exposed.
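The kind of one-to-10 risk scoring Niall describes can be sketched in a few lines. This is a purely illustrative toy model, assuming made-up fields and weights; it is not Xpanse’s actual scoring logic:

```python
# Hypothetical sketch of risk scoring for exposed services: rate each
# discovered service 1-10 based on what it exposes and whether it's managed.
# Field names and weights are illustrative, not Xpanse's actual model.
from dataclasses import dataclass

@dataclass
class ExposedService:
    ip: str
    port: int
    service: str      # e.g. "https", "rdp", "telnet"
    encrypted: bool
    patched: bool
    managed: bool     # known to IT / under configuration management

def risk_score(svc: ExposedService) -> int:
    """Return a 1-10 risk score; higher means fix sooner."""
    score = 1
    if svc.service in {"rdp", "telnet", "smb"}:  # remote-access protocols
        score += 4
    if not svc.encrypted:
        score += 2
    if not svc.patched:
        score += 2
    if not svc.managed:  # accidental exposures are usually unmanaged
        score += 1
    return min(score, 10)

web = ExposedService("203.0.113.10", 443, "https", True, True, True)
rdp = ExposedService("203.0.113.99", 3389, "rdp", False, False, False)
print(risk_score(web))  # prints 1: patched, encrypted, managed web server
print(risk_score(rdp))  # prints 10: unmanaged remote shell on the internet
```

The point of the sketch is the asymmetry Niall highlights: a well-managed HTTPS server scores near the bottom, while an unpatched, unmanaged remote-access service immediately rises to the top of the queue.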
And then the final one we focus on: the problem now is a problem of scale. You’re not talking about three or four IP addresses. You could be talking about 40,000 IP addresses, or 400,000. Then suddenly tomorrow it’s 500,000, then it goes down to 350,000. Because of the scale of the issue, and because over time more and more things will be internet-facing, the only way to solve this is through automation. There is no doubt whatsoever that the model of an alert being generated and somebody from the security operations center (SOC) jumping in, looking at that IP address and looking at the service, just doesn’t work.
So what needs to happen is, everything needs to be automated: everything from the scanning, to the attribution components, to assessing the risk of each IP address. So now, instead of 500,000 IP addresses, you’re focusing on the three that suddenly popped up: one might be an SSH server, another a telnet server, another an engineering tool. And then, from the automation layer, you want to build automation into the service whereby that service is automatically remediated, whether it’s patched or taken offline.
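The full loop Niall walks through (continuous discovery, attribution, risk scoring, automated remediation) could be sketched as below. Every function here is a hypothetical placeholder standing in for a real scanner or asset-management integration, not an actual Xpanse API:

```python
# Minimal sketch of the end-to-end loop described above: scan, attribute,
# score, then auto-remediate anything above a risk threshold.
# All functions are hypothetical placeholders, not a real Xpanse integration.

RISK_THRESHOLD = 8

def scan_internet():
    """Continuous discovery: return (ip, port, service) tuples seen exposed."""
    return [("198.51.100.7", 22, "ssh"), ("198.51.100.8", 23, "telnet")]

def is_ours(ip):
    """Attribution: does this IP belong to one of our networks or devices?"""
    return ip.startswith("198.51.100.")

def score(ip, port, service):
    """Risk: unencrypted remote-access services score highest."""
    return 9 if service in {"telnet", "rdp"} else 3

def remediate(ip, port, service):
    """Auto-remediation: patch the service or take it offline."""
    print(f"remediating {service} on {ip}:{port}")

def run_once():
    findings = [f for f in scan_internet() if is_ours(f[0])]
    high_risk = [f for f in findings if score(*f) >= RISK_THRESHOLD]
    for ip, port, service in high_risk:
        remediate(ip, port, service)
    return high_risk

run_once()  # in production this would run continuously, with no human in the path
```

Notice that no step in the loop waits on a person: a SOC analyst reviewing 500,000 findings by hand is exactly the bottleneck the automation is meant to remove.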
And if you look at that entire chain, it’s the reverse of what the hacker is doing. The hacker does the recon and then breaks into that server to compromise your environment. You’re starting in the same position as they are, which is where you should be. You should start with your attack surface, your recon. After that, you’re looking at your risk: you’re looking at patching, at taking things offline, at automation. So I firmly believe, with the drive towards the cloud and people working from home, this concept of a perimeter has been gone for 10 years. But cybersecurity has been hanging on to it and saying, “Well, there’s still a perimeter.” There isn’t.
So now, every single device that’s on the internet is its own perimeter: the device, the network, whatever else it is. And really, I think certainly one of the driving factors is that if everything is on the internet, if everything is online, always communicating, always dynamically changing, you have to have a cybersecurity program that can tell you every single device that’s on the internet and what its risk level is. And then for those that hit a certain risk level, either take them offline or apply controls. And by the way, you’ve got to do it 24 by 7 by 365, with no humans involved, because of the scale of the issue. If you have a person involved as part of that process, then you are going to fail. Hence us leveraging tools like Xpanse to find and then fix those issues.
Laurel: Yeah. Technology is scalable, but humans are not. Right?
Laurel: Well, Niall, I appreciate this conversation today. It’s been absolutely fascinating and it’s given us so much to think about. So thank you for joining us today on the Business Lab.
Niall: Thank you very much for the invitation. I really enjoyed the conversation.
Laurel: That was Niall Browne, the chief information security officer at Palo Alto Networks, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.
That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at dozens of events each year around the world.
For more information about us and the show, please check out our website at technologyreview.com.
The show is available wherever you get your podcasts.
If you enjoyed this episode, we hope you’ll take a moment to rate and review us.
Business Lab is a production of MIT Technology Review.
This episode was produced by Collective Next.
Thanks for listening.
This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.
The news: The European Union’s digital vaccine passport system went live in seven countries yesterday, ahead of a full launch for all 27 member states on July 1. The document, called a digital green certificate, shows whether someone has been fully vaccinated against covid-19, recovered from the virus, or tested negative within the last 72 hours. Travelers who can prove they fit one of these three criteria are not required to be tested or go into quarantine. The certificate is now being accepted in Bulgaria, Croatia, the Czech Republic, Denmark, Germany, Greece, and Poland.
How it works: The certificate comes in the form of a QR code, which can be either stored on a cell phone or printed out on paper. The data is not retained anywhere afterwards, the commission said, for security and privacy reasons.
Why it matters: As an early mover, the EU could help to lead the way for post-pandemic global travel. The bloc is in talks with the US about how to check the vaccination status of American visitors this summer. That is likely to concern ethicists and data privacy experts, who worry that vaccine passports can be used to further entrench inequity. (To read more about why, check out the full coverage of the issues from our Pandemic Technology Project team.)
Dead end? In any case, it currently seems unlikely that vaccine passports will become common for travel inside the US. Several states, including Alabama, Arizona, Florida, and Georgia, have banned them. New York’s Excelsior Pass, America’s first government-issued vaccine passport, has been downloaded more than one million times, but that represents just a small proportion of the 9 million people who’ve been vaccinated, and the vast majority of businesses aren’t using it yet.
Even early movers are ditching them. Israel was one of the first countries to roll out a vaccine passport. Its “green pass” was designed to allow access to restaurants and sporting events for those who could prove they were vaccinated. But as the country’s successful vaccination rollout has driven coronavirus numbers down into double figures, Israel this week scrapped the pass as it moves to open up fully for everyone.