MIT Top Stories
CxOs and boards recognize that their organization’s ability to generate actionable insights from data, often in real time, is of the highest strategic importance. If there were any doubts on this score, consumers’ accelerated flight to digital in this past crisis year has dispelled them. To help them become data driven, companies are deploying increasingly advanced cloud-based technologies, including analytics tools with machine learning (ML) capabilities. What these tools deliver, however, will be of limited value without abundant, high-quality, and easily accessible data.
In this context, effective data management is one of the foundations of a data-driven organization. But managing data in an enterprise is highly complex. As new data technologies come on stream, the burden of legacy systems and data silos grows, unless they can be integrated or ring-fenced. Fragmentation of architecture is a headache for many a chief data officer (CDO), due not just to silos but also to the variety of on-premise and cloud-based tools many organizations use. Along with poor data quality, these issues combine to deprive organizations’ data platforms—and the machine learning and analytics models they support—of the speed and scale needed to deliver the desired business results.
To understand how data management and the technologies it relies on are evolving amid such challenges, MIT Technology Review Insights surveyed 351 CDOs, chief analytics officers, chief information officers (CIOs), chief technology officers (CTOs), and other senior technology leaders. We also conducted in-depth interviews with several other senior technology leaders. Here are the key findings:
- Just 13% of organizations excel at delivering on their data strategy. This select group of “high-achievers” delivers measurable business results across the enterprise. They are succeeding thanks to their attention to the foundations of sound data management and architecture, which enable them to “democratize” data and derive value from machine learning.
- Technology-enabled collaboration is creating a working data culture. The CDOs interviewed for the study ascribe great importance to democratizing analytics and ML capabilities. Pushing these to the edge with advanced data technologies will help end-users to make more informed business decisions — the hallmarks of a strong data culture.
- ML’s business impact is limited by difficulties managing its end-to-end lifecycle. Scaling ML use cases is exceedingly complex for many organizations. The most significant challenge, according to 55% of respondents, is the lack of a central place to store and discover ML models.
- Enterprises seek cloud-native platforms that support data management, analytics, and machine learning. Organizations’ top data priorities over the next two years fall into three areas, all supported by wider adoption of cloud platforms: improving data management, enhancing data analytics and ML, and expanding the use of all types of enterprise data, including streaming and unstructured data.
- Open standards are the top requirement of future data architecture strategies. If respondents could build a new data architecture for their business, the most critical advantage over the existing architecture would be a greater embrace of open-source standards and open data formats.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
The future of the single-shot Johnson & Johnson covid vaccine remains in limbo after an advisory panel recommended taking a deeper look into reports of rare—and sometimes fatal—side effects.
The US Centers for Disease Control and Prevention and the Food and Drug Administration advised suspending use of the Johnson & Johnson vaccine on Tuesday, after reports that six people who had received a dose developed rare blood clots in the brain, combined with another disorder that actually inhibits clotting. One patient died.
After discussing the situation, the CDC’s advisors said that the pause would continue for at least a week while information was gathered and reviewed.
“We’ll never have perfect data, and there will always be uncertainty,” said Grace Lee, a professor at Stanford University and chair of the advisory panel’s COVID-19 Vaccine Safety Technical Subgroup, when the group met on Wednesday. “It’s really, for me, about getting better risk estimates.”
Committee members agreed to reconvene once they’ve had more time to gather and assess data about who might be most at risk of complications, and how that compares to the risk of catching and spreading covid.
All six of the cases reported after the vaccine became widely available occurred in women; one additional case—a man—was reported during clinical trials. All patients were between 18 and 48, and several were treated with the blood thinner heparin, which is typically used for clots but worsened the condition of these patients. The symptoms appear very similar to ones associated with AstraZeneca’s covid vaccine, which many European countries have limited or even stopped using. The active components of both are delivered to cells by adenoviruses that have been modified so that they can’t replicate.
But because there are other vaccines available that use totally different methods, experts say it is sensible to hold off and see whether more information becomes available. The Johnson & Johnson vaccine accounts for only 7.5 million of America’s 195 million shots delivered; Pfizer-BioNTech and Moderna, which use mRNA rather than adenoviruses, are responsible for the rest.
“The risks and benefits of continuing to administer the J&J vaccine can’t be looked at in isolation,” says Seema Shah, a bioethicist at Lurie Children’s Hospital in Chicago. “If people have alternatives, at least while the FDA is figuring things out, it makes sense to steer people in the direction of those alternatives.”
Resumption of Johnson & Johnson shots may not mean that it becomes available to everybody, however. Safety of vaccines is important because they’re given to healthy people, rather than treating people who are already sick, and successfully figuring out which groups might see the most benefit—or most harm—could mean US regulators give tiered recommendations. Several EU countries, for instance, have said the AstraZeneca vaccine should be given to older people at higher risk of complications from covid, rather than younger people who might be at higher risk of vaccine complications.
“At the end of the day, the critical issue is if I’m a 30 year old woman and I get this vaccine, how much will that increase my risk of this bad thing?” says Arthur Reingold, chair of California’s covid-19 Scientific Safety Review Workgroup and a former member of the CDC’s vaccine advisory panel.
A more complicated question is what data the committee will review to make a final decision.

No comprehensive data
Information may be limited because the issue was caught quickly, and because the Johnson & Johnson vaccine is so far only being deployed in the US (the company said it was delaying delivery to European Union countries). But making a determination may also prove difficult because America’s medical data is highly fragmented.
Without a national healthcare system, there’s no comprehensive way to assess risks and benefits for different groups who have received the vaccine. There is no routine federal capability to connect patient data with vaccine records. Instead, regulators hope clinicians will hear about the pause and proactively report cases they hadn’t previously connected to vaccinations.
“It might stimulate some clinician to say, ‘Oh my God, Mrs. Jones had that three weeks ago,’” says Reingold. In addition, he says, “there’s still quite a few people who have gotten a dose within the last two weeks, and some of them could develop this rare side effect.”
The voluntary system may seem archaic, but that is how the six cases under review came to the attention of the authorities. They were reported to the CDC through an online database called the Vaccine Adverse Events Reporting System, or VAERS. It is an open website for medics, patients, and caregivers to notify the government about potential vaccine side effects.
Because the system is so open, and requires opt-in participation, it’s impossible to calculate exact risks using VAERS data. Epidemiologists generally think of it as a place to look for hypotheses that tie vaccines to side effects, rather than a source that can be used to confirm their suspicions.
“It’s a messy system. Anyone can report anything, whether it’s biologically plausible that it’s related to the vaccine or not,” says Mark Sawyer, a member of the FDA’s Vaccines and Related Biological Products Advisory Committee, which reviewed the covid-19 vaccines for public use. “Then the job is to sort through and figure out, is there really a signal here?”
The next best thing to a national healthcare system is the CDC’s Vaccine Safety Datalink, a consortium of American health insurers who provide medical care to patients in-house. The system includes records of about 10 million patients. Unfortunately, only 113,000 Johnson & Johnson vaccine doses have been captured by that system so far.
“There just haven’t been enough Johnson & Johnson vaccine doses given in the context of these other monitoring systems to detect a problem. This is just too rare an event,” says Reingold. “If all you’ve got left is VAERS or another passive reporting system, then you do the best you can.”
There are many ethical and practical concerns around this pause, as discussed at Wednesday’s committee meeting. Because the Johnson & Johnson vaccine requires just one shot and doesn’t need to be frozen, it may be easier to use for vaccinating people who have less access to healthcare clinics. Some members of the panel, meanwhile, expressed concern that a continued pause would stoke vaccine hesitancy.
Many speakers wrestled with the limitations of available data, particularly regarding the breakdown of risks for different groups of people.
“It’s possible we may not have any more information, at which point we’re still going to have to make a decision,” said Stanford’s Lee during the discussion. “But my hope is that, in the next week or two, we’ll be able to capture [risk] in a more robust way.”
This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.
On January 9, 2020, Detroit Police drove to the suburb of Farmington Hills and arrested Robert Williams in his driveway while his wife and young daughters looked on. Williams, a Black man, was accused of stealing watches from a luxury store, and held overnight in jail.
Under questioning, an officer showed Williams a picture of a suspect. His response, he told MIT Technology Review last year, was to reject the claim. “This is not me,” he told the officer. “I hope y’all don’t think all black people look alike.” He says the officer replied: “The computer says it’s you.”
Williams’s wrongful arrest, which was first reported by the New York Times in August 2020, was based on a bad match from the Detroit Police Department’s facial recognition system. Two more instances of false arrest have since been made public. Both men are also Black, and both have taken legal action to try to rectify the situation.
Now Williams is following in their path and going further—not only by suing the Detroit Police for his wrongful arrest, but by trying to get the technology banned.
On Tuesday, the ACLU and the University of Michigan Law School’s Civil Rights Litigation Initiative filed a lawsuit on behalf of Williams, alleging that his arrest violated his Fourth Amendment rights and defied Michigan’s civil rights law.
The suit requests compensation, greater transparency about the use of facial recognition, and that the Detroit Police Department stop using all facial recognition technology, either directly or indirectly.
What the lawsuit says
The documents filed on Tuesday lay out the case. In March 2019, the DPD had run a grainy photo of a Black man with a red cap, taken from Shinola’s surveillance video, through its facial recognition system, made by a company called DataWorks Plus. The system returned a match with an old driver’s license photo of Williams. Investigating officers then included Williams’s license photo in a photo lineup, and the Shinola security guard identified Williams as the thief. The officers obtained a warrant, which requires multiple sign-offs from department leadership, and Williams was arrested.
The complaint argues that the false arrest of Williams was a direct result of the facial recognition system, and that “this wrongful arrest and imprisonment case exemplifies the grave harm caused by the misuse of, and reliance upon, facial recognition technology.”
The case contains four counts, three of which focus on the lack of probable cause for the arrest, while one focuses on the racial disparities in facial recognition. “By employing technology that is empirically proven to misidentify Black people at rates far higher than other groups of people,” it states, “the DPD denied Mr. Williams the full and equal enjoyment of the Detroit Police Department’s services, privileges, and advantages because of his race or color.”
Facial recognition’s difficulty in identifying darker-skinned people is well documented. After the killing of George Floyd in Minneapolis in 2020, some cities and states announced bans and moratoriums on police use of facial recognition. But many others, including Detroit, continued to use it despite growing concerns.

“Relying on subpar images”
When MIT Technology Review spoke with Williams’s ACLU lawyer, Phil Mayor, last year, he stressed that problems of racism within American law enforcement made the use of facial recognition even more concerning.
“This isn’t a one bad actor situation,” Mayor said. “This is a situation in which we have a criminal legal system that is extremely quick to charge, and extremely slow to protect people’s rights, especially when we’re talking about people of color.”
Eric Williams, a senior staff attorney at the Economic Equity Practice in Detroit, says cameras have many technological limitations, not least that they are hardcoded with color ranges for recognizing skin tone and often simply cannot process darker skin.
“I think every Black person in the country has had the experience of being in a photo and the picture turns up either way lighter or way darker,” says Williams, who is a member of the ACLU of Michigan’s lawyers committee, but is not working on the Robert Williams case. “Lighting is one of the primary factors when it comes to the quality of an image. So the fact that law enforcement is relying, to some degree… on really subpar images is problematic.”
There have been cases that challenged biased algorithms and artificial intelligence technologies on the basis of race. Facebook, for example, underwent a massive civil rights audit after its targeted advertising algorithms were found to serve ads on the basis of race, gender, and religion. YouTube was sued in a class action by Black creators who alleged that its AI systems profile users and censor or discriminate against content based on race. YouTube was also sued by LGBTQ+ creators who said that its content moderation systems flagged the words “gay” and “lesbian.”
Some experts say it was only a matter of time until the use of biased technology in a major institution like the police was met with legal challenges.
“Government use of face recognition plainly has a disparate impact against people of color,” says Adam Schwartz, senior staff lawyer at the Electronic Frontier Foundation. “Study after study shows that this dangerous technology has far higher rates of false positives for people of color compared to white people. Thus, government use of this technology violates laws that prohibit government from adopting practices that cause disparate impact.”
But Mayor, Williams’s lawyer, has been expecting a tough fight. He told MIT Technology Review last year that he expected the Detroit Police Department to continue to argue that facial recognition is a great “investigative tool”.
“The Williams case proves it is not. It is not at all,” he said. “And in fact, it can harm people when you use it as an investigative tool.”

Under the microscope
The Williams suit comes at a critical time for race and policing in the US. It was filed as defense lawyers began arguments in the trial of Derek Chauvin, the officer charged with murdering George Floyd in Minneapolis last May—and on the third day of protests over the police shooting of Daunte Wright in nearby Brooklyn Center, Minnesota. Wright, a 20-year-old Black man, was pulled over for a traffic stop and arrested under a warrant before officer Kim Potter shot and killed him, allegedly mistaking her handgun for a Taser.
Eric Williams says it’s essential to understand facial recognition in this wider context of policing failures. “When DPD decided to purchase the technology… it was known that facial recognition technology was prone to misidentifying darker-skinned people before Mr. Williams was taken into custody, right? Despite that fact, in a city that is over 80% Black, they chose to use this technology.”
“You’re clearly placing less value on the lives and livelihoods and on the civil liberties of Black people than you are on white people. That’s just too common in the current United States.”
Jennifer Strong contributed reporting to this story.
Our entire financial system is built on trust. We can exchange otherwise worthless paper bills for fresh groceries, or swipe a piece of plastic for new clothes. But this trust—typically in a central government-backed bank—is changing. As our financial lives are rapidly digitized, the resulting data turns into fodder for AI. Companies like Apple, Facebook, and Google see it as an opportunity to disrupt the entire experience of how people think about and engage with their money. But will we as consumers really get more control over our finances? In this first of a series on automation and our wallets, we explore a digital revolution in how we pay for things.

We meet:
- Umar Farooq, CEO of Onyx by J.P. Morgan Chase
- Josh Woodward, Director of product management for Google Pay
- Ed McLaughlin, President of operations and technology for MasterCard
- Craig Vosburg, Chief product officer for MasterCard
This episode was produced by Anthony Green, with help from Jennifer Strong, Karen Hao, Will Douglas Heaven, and Emma Cillekens. We’re edited by Michael Reilly. Special thanks to our events team for recording part of this episode at our AI conference, EmTech Digital.

Transcript
Strong: For as long as people have needed things, we’ve… also needed a way to pay for them. From bartering and trading… to the invention of money… and eventually, credit cards… which these days we often use through apps on our phones.
Farooq: No one, 10 years ago, no one thought that, you know, you’d be just getting up from a dinner table and using Zelle or Venmo to send five bucks to your friend. And now you do.
Strong: The act of paying for something might seem simple. But trading paper for groceries…or swiping a piece of plastic for new clothes is built on a few powerful ideas that allow us to represent and exchange things of value.
Our entire financial system is built on this agreement… (and trust).
But this model is changing… and banks are no longer the only players in town.
[Sounds from an advertisement for Apple Card]
[Ad music fades in]
Announcer: This is Apple Card. A credit card created by Apple—not a bank. So it’s simple, transparent, and private. It works with Apple Pay. So buying something as easy as: *iPhone ding*.
Strong: It’s not just Apple. Many other tech giants are moving into our wallets… including Google… and Facebook…
[Sounds from Facebook’s developer conference]
Mark Zuckerberg: I believe it should be as easy to send money to someone as it is to send a photo.
Strong: Facebook Pay works through its social apps—including Instagram and WhatsApp—and executives hope those payments will one day be made with Facebook’s very own currency.
And beyond what we use to pay for things, how we pay for things is changing too.
[Sounds from an advertisement for Amazon One]
Announcer: Introducing Amazon One. A free service that lets you use your palm to quickly pay for things, gain access, earn rewards and more.
Strong: This product works by scanning the palm of your hand… and it’s not just for payments. It’s also being marketed as an ID. Something like this could one day be used to unlock the door at the office or to board a plane.
But letting companies use data from our bodies in this way raises all sorts of questions—especially if it mixes with other personal data.
Vosburg: We can see in great detail how people, for example, are interacting with their device. We can see the position in which they’re holding it. We can understand the way in which they’re typing. We can understand the pressure that’s being applied on the screen as people are hitting the keystrokes. All of these things can be useful with the combination of artificial intelligence to process the data to create sort of an interaction fingerprint.
Strong: I’m Jennifer Strong, and in this first of a series on automation and our wallets, we explore a digital revolution in how we pay for things.
Farooq: So, if you think about how we operate today, we primarily operate through central authorities.
Strong: Umar Farooq is the CEO of Onyx… from J.P. Morgan. It focuses on futuristic payment products.
Farooq: Frankly, the biggest central authority in some ways is, in the US, for the money purpose, the US Federal Reserve and the US Treasury. You pull out a dollar bill. It says US Treasury. It’s issued by, you know, in some ways, quote-unquote, the top of the house. The top of the house guarantees it. And you carry it around with you. But when you give it to someone, you’re ultimately trusting that central authority in how you are transacting.
Strong: This can be a good thing. The value of that otherwise worthless paper bill is guaranteed because it’s issued and backed by the US government. But it can also slow things down. And though we now take for granted being able to transfer money in real time, the ability to do so hasn’t been around that long.
Farooq: Payments actually do, as a technology, evolve somewhat slowly. Just to give you an example, the US recently, a couple of years back, launched the real-time payments scheme, which literally was the first new payments, you know, sort of, rails in the US for decades. As crazy as that sounds.
Strong: A payment rail is the infrastructure that lets money move from one place to another. And those “real time payments” are a big deal because until recently when money left your account it took time, often days, before it reached its destination.
It’s why we can send money through apps like Venmo and hear the ding that it’s been received on the other person’s phone just a few seconds later. Also, Venmo’s chief competitor, called Zelle, only exists because of unprecedented cooperation between otherwise competing banks.
Farooq: I think where the world is going is towards more open platforms where it’s not just one party’s capabilities, but multiple parties’ capabilities that come together. And the value that is generated is by the ability for anyone to connect to anyone else. So I think what we are seeing is a rapid evolution in the digital sphere where more and more payment types, whether they are wholesale or retail are going into new modes, new rails, 24/7, 365, the ability to pay anyone anywhere in any currency. All those things are basically getting accelerated.
Strong: This is where cryptocurrencies could come in. And it isn’t just about digital money.
Farooq: We believe that there’s a path forward where money can be smarter itself. So you can actually program the coin and it can control who it goes to.
Strong: In other words, the trust we usually place in banks or governments would be transferred to an algorithm and a shared ledger.
Farooq: So you’re almost relying on that decentralized nature of the algorithm and saying, “I think I can trust your token coming to me” because there’s, you know, X… X thousand or X hundred-thousand copies of a ledger that show you as the owner of that token. And then when you give it to me, all those copies get updated. And now the ledger shows me as the owner of that token.
Strong: And not only could this make payments faster and more seamless. It could also help people who’ve been largely excluded from the banking system.
Farooq: No matter what we do, we cannot really get around this Know Your Customer issue. And I think, you know, our view is that the tech is almost there, but the regulation and the infrastructure around it is not there yet. But, what we do want to do is we want to create these decentralized systems where these people can, over time, be included.
Strong: But sorting out the tech… is just one side of the coin. There’s also a need for better regulation.
Farooq: But I think it’s unfortunately a little bit more than what a bank could do. I think some of these things rise to the level of, like, you know, how does a government, or how does a state, really enable identity at a global level? And I think that’s why when you look at China or you look at the Nordics or some of those countries, I mean, you have national IDs and you have a very standardized method of knowing who someone is.
Strong: And the shift it allows in banking can be transformative…
Farooq: So if you look at a country like India, India has made dramatic progress in how many people have gone from being unbanked to banked in terms of having a wallet on their mobile phone. So I think these technologies are going to turbocharge people’s ability to come into this ecosystem. What I would hope as someone who grew up in the developing world before migrating here is that you would make those connections so, you know, everyone in those countries has access to markets—to bigger markets. So I mean, whether you’re sitting in Sub-Saharan Africa or you’re sitting in like, you know, a village in India or Pakistan or Bangladesh, wherever, you can actually sell something through Amazon and get paid for it. I mean, you know, those sorts of things. I think there’s tremendous potential human potential that could be unlocked if we could take payments in a digital manner to some of those parts of the world.
Strong: And this vision?… extends not only to connecting anyone, anywhere to a bank… but also anything with an internet connection.
Farooq: We’re doing some initial R&D work in the IoT space, which is, you know, I mean, if one day your fridge had to order milk by itself, does it have to go through your bank, or could it just send the money to someone who’ll deliver your milk?
McLaughlin: Every device you use has potential to be a commerce device and our network brings that together.
Strong: Ed McLaughlin is president of operations and technology for MasterCard. He’s speaking at our AI conference, EmTech Digital.
McLaughlin: So, what does all of that connectivity result in? It brings together pretty much every financial institution in the world, tens of millions of merchants, governments, tech companies, and all of that, which results in the billions of transactions a year we see. MasterCard, across all of those devices and cards, is serving about two and a half billion accounts. So we get the data and transactions from a Facebook-sized population, if you think about that. And as far as the scope goes, we’ve probably been seeing 20 to 25% of all internet transactions outside of China—since there was an internet.
Strong: But this connectivity creates its own set of new problems. Maybe you’ve had the experience of going out of town and suddenly your card stops working because the change of location triggered a fraud alert.
McLaughlin: One of the keys in applying AI is how you frame the question, and our teams said very early on that it wasn’t to stop transactions. It was to make sure as many good transactions as possible made it through.
Strong: Another key is to have an abundance of data.
McLaughlin: It’s a massive in-memory grid in our network that holds over 2 billion card profiles with about 200 analytical vectors on it. And we make decisions in every transaction that flows through. We have less than 50 milliseconds to make that decision. So in order to do that, we have 13 different AI technologies that we’ve modeled and experimented over the years that we apply to it.
Strong: Banks are also turning to A-I to look for money laundering. In the physical world, organized crime is often hidden behind the storefronts of real businesses. And in the digital world? Hiding is even easier.
Illegal money can quickly change hands dozens of times and cross borders until there’s no clear trail back to its source. It’s a massive problem. And most of it goes undetected. It’s possible only one percent of the profits earned by criminals gets caught. And the turmoil of the global economy over the last year has only made things worse.
McLaughlin: Our adversaries are using AI too. And if you look online, it’s just bots fighting bots. So you have to pick up things you weren’t looking for before, like low-and-slow attacks, where they stay inside what looks like acceptable tolerances but are constantly probing, or doing a tumbler attack on your systems. Hard to pick up. When COVID hit, you know, the world moved online. Spending patterns shifted dramatically. And what we were able to do, because the AIs are rich enough and look at so many different variables, we were able to really tell you’re still you and you’re just behaving a little bit differently.
Strong: And the types of attacks change too…
McLaughlin: So we saw one attack vector, which was pretty amazing: they thought, okay, people won’t block transactions for personal protective gear. It’s a specific merchant class we have. And we saw the fraudsters pile on, trying to get transactions through, because they figured nobody would be blocking. The good news is we look at enough other elements that we could immediately pick that up and block those transactions.
Strong: They’re building machine learning tools to identify patterns of normal activity. And to flag outliers when they’re detected. Humans can then double check those alerts and approve or reject them.
McLaughlin: We constantly have AIs running also, not just blocking the fraud or looking at it, but I’m just calling it weirdness detection—where we’re constantly predicting what we would expect to see. In fact it’s a great way to step into AI because you have KPIs you’re already tracking. Try to start predicting them. When you see something which is an immediate deviation from it, the first thing we actually do is say, what’s going on here? So we may see something the model hasn’t caught up to, we just throw a rule to block it. And we can do that instantly.
Strong: The payments industry used to be slow moving… but it’s adapting to a world where any device might one day be connected to a payments network… including self-driving cars.
McLaughlin: So whether you’re using your browser to order online, or it’s your iPhone and you’re using Apple Pay to tap—or Mercedes just announced that they’re going to be connecting their cars to gas pumps, so you can simply drive up and authorize your transaction right from your car. And in fact, as things move away from the card and to devices, we’re seeing even more data coming in through the network.
Strong: We’ll be back… right after this.
Strong: With more and more of our financial lives being documented, tracked and mediated online, that data turns into fodder for AI—which is being enlisted into a whole host of other roles with payments.
Woodward: People have a really complex relationship with their money. It can be stressful. It’s boring a lot of the time.
Strong: Josh Woodward leads the Google Pay team for the US. He sees it as an opportunity to change not just payments…but the entire experience of how people think about…and engage with…their money.
Woodward: And so what we’re trying to do as a team is think about how can we simplify that relationship with money where people feel in control and they feel confidence when they’re using our app and seeing how their spending is going in and out.
Strong: Google Pay began as a peer to peer payment solution—where the main goal was digitizing the plastic cards in your wallet. But over the years, it’s evolved into a tool meant to help you more holistically manage your finances, and relationships with businesses.
Strong: And it’s taken some cues from social media. Instead of card numbers or accounts, transactions are organized around pictures of people and businesses you’ve recently paid.
Woodward: We realized that transactions—in some ways the money, the digits, the dollars and cents—are secondary. It’s a lot more about the person or the memory around that transaction. So we’ve tried to bring that out. Similarly, we’ve taken that same relationship-based design and applied it to businesses. And this is something that’s very different. So when you look today at our home screen, what you see is actually the icon of the business. And when you tap on that, you are taken to that business page, where you can really see your relationship with the business. If you have a loyalty card you can see it there, and you can see how your points are progressing, so the next time you go buy, you can get 20% off, for example. And so we’ve tried to create almost a threaded relationship of all your activity with that business inside the Google Pay app—a little bit like Gmail’s threaded email messages.
Strong: It also lets users sort transactions in a way that mirrors a web search.
Woodward: So you can do things like search for food. And you’ll get all of the transactions at places where you bought food and Google Pay can understand that this restaurant, for example, is a restaurant. You don’t have to go in and manually categorize that. Or you can get more specific and do things like a search for Mexican restaurants. And it’ll just take that subset of Mexican restaurants. There’s no part of that transaction that has the phrase, Mexican restaurant in it. Google Pay’s able to make that connection for you.
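Google hasn’t published how this categorization works—it presumably relies on learned merchant classification. A naive sketch conveys the idea, with the merchant names and the hand-written category table below purely hypothetical stand-ins for what an ML model would infer:

```python
# Hypothetical merchant-to-category mapping; in Google Pay this
# knowledge would come from a trained model, not a lookup table.
MERCHANT_CATEGORIES = {
    "Taqueria El Sol": {"food", "restaurant", "mexican restaurant"},
    "Pho Palace": {"food", "restaurant"},
    "CineMax 12": {"entertainment"},
}

def search_transactions(transactions, query):
    """Return transactions whose merchant name matches the query
    text or falls under a matching category -- so a search for
    'mexican restaurant' finds a taqueria even though that phrase
    appears nowhere in the transaction record itself."""
    q = query.lower()
    return [
        t for t in transactions
        if q in t["merchant"].lower()
        or q in MERCHANT_CATEGORIES.get(t["merchant"], set())
    ]

txns = [{"merchant": "Taqueria El Sol", "amount": 23.50},
        {"merchant": "CineMax 12", "amount": 14.00}]
print(search_transactions(txns, "mexican restaurant"))
```

The point of the sketch is the last condition: the match happens on an inferred category, not on any string stored with the transaction.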
Strong: And using computer vision…it can sort through photos of receipts.
Woodward: What we’ve been able to do in Google Pay, again with someone’s permission—this feature is off by default—is that you can say, I want all the photos I’ve taken of receipts to be searchable in Google Pay. And what that allows you to do is actually search very specifically for individual items that are printed on the receipt. So for example, a couple of months ago, before Christmas, I bought a shirt from Lulu as a Christmas present. I can go into Google Pay now and search for “shirt.” And that Lulu receipt comes up.
Strong: It’s designed to give users a greater sense of control over their spending.
Woodward: It creates a place where you get that full picture. And that’s what we’ve seen. Time and time again, in the research and in talking to people is that different apps have provided different slices of that picture, but being able to bring it all together is really what we aspire to.
Strong: It’s one more way our lives might become a little easier and more efficient with the help of technology… But also where the gathering… filtering… and processing… of vast amounts of personal data raises big questions… even before we get to things like paying with our faces or gestures… or how all of that data… might mix with the rest of our massive data trails.
And longer-term, what would it mean for companies like Facebook to establish their own currencies and take over the global payments system?
It’s worth asking whether we as consumers really get more control over our finances… or companies get more control over us…
Bennett: We couldn’t have imagined something like Siri or Alexa. You know we just thought we were doing just generic phone voice messaging… and so in 2011 when suddenly Siri appeared, it’s like, “I’m WHO??” [laughing]… “WHAT??”…
Strong: We look at what it takes to make a voice… and how that’s rapidly changing.
Strong: This episode was produced by Anthony Green, with help from Jennifer Strong, Karen Hao, Will Douglas Heaven and Emma Cillekens. We’re edited by Michael Reilly. Special thanks to our events team for recording part of this episode at our AI conference: Emtech Digital.
The US took the dramatic step of recommending that health-care providers stop giving people the Johnson & Johnson vaccine against covid-19 after six women who received it developed serious blood clots and one died.
The US Food and Drug Administration described its action as a temporary halt to give regulators time to understand the apparent side effect. “We are recommending a pause in the use of this vaccine out of an abundance of caution,” the agency said in a statement.
As of Monday, April 12, about 6.8 million doses of the J&J vaccine had been given in the US. That means the rate of the serious clotting events could be about 1 in 1 million, making them “extremely rare,” according to the FDA.
By comparison, about 1 in 600 Americans has already died of covid-19 or with the infection as a contributing factor, meaning getting infected by the coronavirus is the much greater risk overall.
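The back-of-the-envelope arithmetic behind those two figures can be checked directly. The clot counts come from the article; the US population and covid death totals below are rough April 2021 assumptions used only to reproduce the “1 in 600” comparison:

```python
# Reported: 6 serious clot cases among ~6.8 million J&J doses.
clot_cases, doses = 6, 6_800_000

# Assumptions (approximate April 2021 values): ~330 million
# Americans, ~560,000 covid-linked deaths.
deaths, population = 560_000, 330_000_000

clot_rate = doses / clot_cases    # doses per clot case
death_rate = population / deaths  # Americans per covid-linked death

print(f"clotting: about 1 in {clot_rate:,.0f} doses")
print(f"covid deaths: about 1 in {death_rate:,.0f} Americans")
```

One case per roughly 1.1 million doses against one death per roughly 600 people is a difference of three orders of magnitude, which is the FDA’s point—though, as the article notes, the comparison shifts for younger women, whose personal covid risk is lower.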
The problem is that the blood clots have struck younger women, whose personal risk from covid-19 is lower.
The women, all between the ages of 18 and 48, developed serious blood clots six to 13 days after getting the vaccine, the FDA said in a statement. According to the New York Times, one woman died and another is in critical condition.
That means people, especially women, who have gotten the J&J vaccine recently should be on the lookout for a severe headache, pain in the abdomen or legs, and shortness of breath. After about two weeks the risk seems to pass.
The government warned that the type of clot being seen—called a cerebral venous sinus thrombosis—is very unusual and that heparin, a common blood thinner often used to treat clots, could be dangerous in these cases. A group of advisors to the US Centers for Disease Control will meet Wednesday, April 14, to review the cases and “assess their potential significance.”
In a statement, Johnson & Johnson said it was “aware of an extremely rare disorder” involving vaccine recipients and said it had decided to postpone the rollout of its vaccine in Europe.
Blood clots have also been linked to another vaccine, from AstraZeneca, which has been widely used in Europe but is not yet authorized in the US. Regulators in Europe have in some cases recommended that younger people avoid that vaccine.
Both the J&J and AstraZeneca vaccines employ an adenovirus, ordinarily harmless, as a delivery vehicle. Now scientists will investigate whether the adenovirus, or another aspect of the vaccines, causes an immune reaction that leads to the clots.
The two most widely used vaccines in the US, sold by Moderna and Pfizer, are mRNA vaccines that employ a different technology. Those vaccines often cause muscle aches and fever but haven’t been linked to blood clots.
The pause announced by the FDA is likely to curtail the use of the J&J shot at federal vaccine sites. States and hospitals can choose to keep giving it depending on the age and risk profile of the patient, although several governors, as well as the CVS and Walgreens pharmacy chains, said they would also pause the shots, the New York Times reported.
The J&J vaccine requires a single shot and is more convenient than the Moderna and Pfizer vaccines, which both require two doses. However, the company’s manufacturing was plagued by problems. Now, with concern over side effects, the role it plays in the US response may become even more limited.
When Gary Landsman prays, he imagines he is in Israel and his sons Benny and Josh are running toward him. They are wearing yarmulkes, and the cotton fringes called tzizit fly out from their waistbands. He opens his arms ready for a tackle.
The reality is Benny and Josh both have Canavan disease, a fatal inherited brain disorder. They are buckled into wheelchairs, don’t speak, and can’t control their limbs.
On Thursday, April 8, in Dayton, Ohio, Landsman and his family rolled the older boy, Benny, into a hospital where over several hours, neurosurgeons drilled bore holes into his skull and injected trillions of viral particles carrying the correct version of a gene his body is missing.
The procedure marked the climax of a four-year quest by the Landsman family, who live in Brooklyn, New York, to obtain a gene therapy they believe is the only hope to save their kids.
MIT Technology Review first profiled the Landsmans’ odyssey in the cover story of our 2018 special issue on precision medicine. Advances in gene therapy technology are making it possible to treat genetic diseases like hemophilia. But because Canavan is an ultra-rare disease, few companies are working on a cure. So the family financed the daring gene treatment on their own, using funds they raised online.
Impressive advances in genome sequencing, gene replacement, and gene editing mean, in theory, thousands of rare genetic diseases could be treated. But because companies aren’t leading the way, parents say, they are being forced to embark on multimillion-dollar quests to finance the needed experiments. Adding to the ethical dilemma: in some cases, parents are designating their own children as the first recipients.
The trial in Dayton, for instance, is prioritizing children whose families have been able to raise funds to underwrite the experiment, whose costs so far are close to $6 million. “It raises the eternal equity question of who gets access to trials and who doesn’t,” says Alison Bateman-House, a bioethicist at New York University who is studying ethical issues in pediatric gene-therapy trials.
The Landsman family has raised more than $2 million, and families from Russia, Poland, Slovakia, and Italy have also used cash donations to secure spots in the trial. A Russian family even posted a copy of an invoice for “gene-therapy treatment” in the amount of $1,140,000, which included $800,000 to offset costs of manufacturing the genetic treatment being used in the trial.
According to the Russian family’s urgent fundraising appeal, if they failed to pay that amount, their toddler Olga “will not receive the only chance for recovery—an expensive treatment in the United States.” They ended up contributing at least $700,000.
While such “pay-to-play” trials are legal, they do raise red flags, including questions about whether parents—and financial donors—understand that most experimental treatments fail. “They are not necessarily unethical. But you should scrutinize why the patient is being asked to pay,” says Bateman-House. “If it’s a valid trial, why isn’t the NIH [National Institutes of Health] interested, or a biotech company? Why isn’t there other funding?”

Will it work?
Canavan disease is caused when a child inherits two broken copies of a gene called ASPA. Without the enzyme that ASPA produces, the brain can’t correctly form the nerve bundles that transmit signals in the brain. The result, for Benny and Josh, is that the boys can’t speak or control their limbs, and their cognition is limited.
“They are like infants in ALS bodies,” says Paola Leone, the researcher at Rowan University in New Jersey who conceived the gene therapy and led the effort to get a clinical trial started.

[Photo captions: Benny Landsman arriving at Dayton Children’s Hospital on April 8, where he became the first child to receive a new gene therapy for Canavan disease. Jennie and Gary Landsman with Benny. The Landsman family raised more than $2 million in donations to underwrite the development of a gene replacement treatment.]
The trial in Dayton seeks to use viruses to deliver working copies of the ASPA gene to kids’ brain tissue. That’s what occurred Thursday at the Dayton Children’s Hospital. After Benny was greeted by a golden retriever who cheers patients up, brain surgeons drilled into his skull and then used a needle to introduce 40 trillion virus particles.
Leone’s scientific bet is that adding correct copies of ASPA to specific brain cells called oligodendrocytes could stop the disease from progressing, and maybe allow for some recovery. The treatment has been effective in mice, she says, but “is that going to work in patients? The only way is to test it.”

Parents as scientists
There are by now a half-dozen examples of gene-therapy treatments funded by families aiming to treat their own kids, and more such experiments are planned. Scientists have even begun developing hyper-personalized medicine tailored to individual children who suffer from unique genetic problems.
These desperate efforts ask parents to overcome nearly impossible obstacles. They must become experts in drug development, raise millions, and tirelessly cajole scientists. Few people can pull it off.
“There are a lot of people who know how to do gene therapy, but the knowledge is all fragmented, and so much can go wrong,” says Sanath Kumar Ramesh, a software developer whose son is afflicted by a different rare disease. Ramesh founded an organization, Open Treatments, that is building software families can use to organize gene-therapy research, including steps such as hiring scientists to create animal models of an illness.
“I think in the future, the distinction between scientists and parents is going to be blurred,” he says.
For parents whose kids have already been accepted into the Dayton trial, gene therapy may be their last chance. One of them is Meagan Rockwell, a nail technician in Cedar Rapids, Iowa, whose daughter, Tobin Grace, now three and half, was diagnosed with Canavan in 2018.
“They told us sorry, there is nothing we can do—no treatment, no cure—you will be lucky if she sees her fifth birthday. It was a hard blow, to know your only child has a life-limiting brain disease,” Rockwell says.
Rockwell says she found out about Leone’s gene-therapy effort online and eventually raised more than $250,000. “At the time, Tobin was the youngest person in the US with Canavan, and I think that played a huge factor in her acceptance,” she says, adding that Leone tells parents money puts them at the front of the line but doesn’t guarantee treatment.
Bateman-House, the bioethicist, says another risk is whether parents can really judge the benefits of an experimental procedure in a “dispassionate” way, especially if they have sunk a fortune into the effort. “It’s not only that their child is facing a dangerous condition; it’s that their blood, sweat, and tears is what is funding this intervention,” she says. “It could be incredibly difficult for a parent to change their mind and say ‘We are not going to do this.’”

Hope versus risk
The Dayton study currently has enough supplies of the genetic drug to treat only nine or 10 children. It was manufactured in Spain, but only after the researchers and families overcame what they call an ordeal of red tape, delays, and obstacles, some thrown up by government regulators who decide which genetic treatments can be tried and whether trials are properly planned.
At one point, in 2019, the Landsmans took their sons to the US Food and Drug Administration for a meeting they landed after dozens of calls to lawmakers. “Beforehand we were a case number in their big pile of paper,” says Jennie Landsman, the boys’ mother. “They had very technical objections. In the meeting we held up Benny and Josh, and we said ‘We hope this issue that is so technical isn’t going to stop the treatment.’”

[Photo caption: Benny Landsman and his younger brother Josh both suffer from Canavan disease, a fatal inherited disorder. In April, Benny underwent a gene therapy procedure in a bid to add a corrected gene to his brain cells. Courtesy of Jennie Landsman]
The Dayton trial won a green light in December and began barely in time for Benny, who will hit the age cutoff of five years in June. “Benny is the pilot. Benny is the ‘God, we hope this works’ kid,” says Rockwell, who doesn’t yet have a date for her daughter’s procedure.
What’s the chance the therapy works? Gene-replacement techniques have been having notable successes, curing kids who don’t have immune systems, and preventing brain diseases. Since 2017, a small number of gene therapies have also been approved for sale in the US, at prices as high as $2.1 million per child.
Record prices have stoked interest among specialist biotech companies, which now see a business even in super-rare diseases. One, called Aspa Therapeutics, says it has plans to initiate a different Canavan gene-therapy trial. Its CEO, Eric David, estimates there are 1,000 children alive with the disease in the US and Europe. “That, for us, is enough,” he says.
There’s no certainty gene therapy will succeed in Canavan. Even if the corrected gene stops the disease from progressing, the kids’ brains may have already been irreversibly damaged.
“I hope she will sit up on her own, maybe say Mommy and Daddy,” says Rockwell of her daughter. “I am hopeful, but it is purely experimental. We are handing our babies over to science and hoping and praying it works.” It will be a month before doctors know if the new gene is functioning in Benny’s brain, but likely much longer to know of any effect on his symptoms.
In a message to donors, Gary Landsman addressed what he called the “loaded” question of what he expects the procedure to achieve.
“I’ve pondered this question over and over and over again,” he wrote. “Is it OK to want more? Is it OK to want to hold their hands as they walk beside me? Is it OK to want to hear them speak to me? Perhaps I am playing a dangerous game with my psyche. But I think the hope it provides is worth the risk.”
AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.
The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.
Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.
accountability (n) – The act of holding someone else responsible for the consequences when your AI system fails.
accuracy (n) – Technical correctness. The most important measure of success in evaluating an AI model’s performance. See validation.
adversary (n) – A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.
alignment (n) – The challenge of designing AI systems that do what we tell them to and value what we value. Purposely abstract. Avoid using real examples of harmful unintended consequences. See safety.
artificial general intelligence (phrase) – A hypothetical AI god that’s probably far off in the future but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you’re building the good one. Which is expensive. Therefore, you need more money. See long-term risks.
audit (n) – A review that you pay someone else to do of your company or AI system so that you appear more transparent without needing to change anything. See impact assessment.
augment (v) – To increase the productivity of white-collar workers. Side effect: automating away blue-collar jobs. Sad but inevitable.
beneficial (adj) – A blanket descriptor for what you are trying to build. Conveniently ill-defined. See value.
by design (ph) – As in “fairness by design” or “accountability by design.” A phrase to signal that you are thinking hard about important things from the beginning.
compliance (n) – The act of following the law. Anything that isn’t illegal goes.
data labelers (ph) – The people who allegedly exist behind Amazon’s Mechanical Turk interface to do data cleaning work for cheap. Unsure who they are. Never met them.
democratize (v) – To scale a technology at all costs. A justification for concentrating resources. See scale.
diversity, equity, and inclusion (ph) – The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them.
efficiency (n) – The use of less data, memory, staff, or energy to build an AI system.
ethics board (ph) – A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).
ethics principles (ph) – A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI.
explainable (adj) – For describing an AI system that you, the developer, and the user can understand. Much harder to achieve for the people it’s used on. Probably not worth the effort. See interpretable.
fairness (n) – A complicated notion of impartiality used to describe unbiased algorithms. Can be defined in dozens of ways based on your preference.
for good (ph) – As in “AI for good” or “data for good.” An initiative completely tangential to your core business that helps you generate good publicity.
foresight (n) – The ability to peer into the future. Basically impossible: thus, a perfectly reasonable explanation for why you can’t rid your AI system of unintended consequences.
framework (n) – A set of guidelines for making decisions. A good way to appear thoughtful and measured while delaying actual decision-making.
generalizable (adj) – The sign of a good AI model. One that continues to work under changing conditions. See real world.
governance (n) – Bureaucracy.
human-centered design (ph) – A process that involves using “personas” to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there’s time. See stakeholders.
human in the loop (ph) – Any person who is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.
impact assessment (ph) – A review that you do yourself of your company or AI system to show your willingness to consider its downsides without changing anything. See audit.
interpretable (adj) – Description of an AI system whose computation you, the developer, can follow step by step to understand how it arrived at its answer. Actually probably just linear regression. AI sounds better.
integrity (n) – Issues that undermine the technical performance of your model or your company’s ability to scale. Not to be confused with issues that are bad for society. Not to be confused with honesty.
interdisciplinary (adj) – Term used of any team or project involving people who do not code: user researchers, product managers, moral philosophers. Especially moral philosophers.
long-term risks (n) – Bad things that could have catastrophic effects in the far-off future. Probably will never happen, but more important to study and avoid than the immediate harms of existing AI systems.
partners (n) – Other elite groups who share your worldview and can work with you to maintain the status quo. See stakeholders.
privacy trade-off (ph) – The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.
progress (n) – Scientific and technological advancement. An inherent good.
real world (ph) – The opposite of the simulated world. A dynamic physical environment filled with unexpected surprises that AI models are trained to survive. Not to be confused with humans and society.
regulation (n) – What you call for to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would hinder your growth.
responsible AI (n) – A moniker for any work at your company that could be construed by the public as a sincere effort to mitigate the harms of your AI systems.
robustness (n) – The ability of an AI model to function consistently and accurately under nefarious attempts to feed it corrupted data.
safety (n) – The challenge of building AI systems that don’t go rogue from the designer’s intentions. Not to be confused with building AI systems that don’t fail. See alignment.
scale (n) – The de facto end state that any good AI system should strive to achieve.
security (n) – The act of protecting valuable or sensitive data and AI models from being breached by bad actors. See adversary.
stakeholders (n) – Shareholders, regulators, users. The people in power you want to keep happy.
transparency (n) – Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.
trustworthy (adj) – An assessment of an AI system that can be manufactured with enough coordinated publicity.
universal basic income (ph) – The idea that paying everyone a fixed salary will solve the massive economic upheaval caused when automation leads to widespread job loss. Popularized by 2020 presidential candidate Andrew Yang. See wealth redistribution.
validation (n) – The process of testing an AI model on data other than the data it was trained on, to check that it is still accurate.
value (n) – An intangible benefit rendered to your users that makes you a lot of money.
values (n) – You have them. Remind people.
wealth redistribution (ph) – A useful idea to dangle around when people scrutinize you for using way too many resources and making way too much money. How would wealth redistribution work? Universal basic income, of course. Also not something you could figure out yourself. Would require regulation. See regulation.
withhold publication (ph) – The benevolent act of choosing not to open-source your code because it could fall into the hands of a bad actor. Better to limit access to partners who can afford it.
Now is a tough time to be a retailer. Even before the 2020 coronavirus pandemic brought rapid changes to the market, many traditional brick-and-mortar businesses were struggling. For example, from 2011 to 2020, the number of US department stores shrank from 8,600 to just over 6,000.
The global crisis only amplified retail challenges. Since March 2020, at least 347 US companies cited the pandemic as a factor in their decisions to file for bankruptcy. Among them was Guitar Center, whose executives said its e-commerce sales couldn’t replace the experience of musicians trying out instruments in person. Some businesses are finding new ways to cope—or perhaps come out of the crisis in better shape than when it began. In 2021, it appears many retailers are ready to shift the way they do business.
MIT Technology Review Insights, in association with Oracle, surveyed 297 executives, primarily financial officers, C-suite, and information technology leaders, about their organizations’ plans for big business moves. These include new business models, mergers and acquisitions, and major technology changes, such as automating financial and risk management processes.
According to the research, 83% of executives across industries feel upbeat about their company’s ultimate objective for 2021, expecting to thrive or transform—that is, sell more products and services, or take up new business practices or sales methodologies. Overall, 80% of organizations made a big move in 2020 or are planning at least one in 2021.

The road ahead for retail
The shopping process will be different in 2021, says Mike Robinson, head of retail operations at The Eighth Notch, a tech platform that connects shippers and retailers, and former digital business leader at Macy’s. Among the hard-to-answer questions retailers are asking: “How can stores reassure people that it’s safe to return to congregating in places again? How can consumers trust that the store is doing the right thing from a cleanliness perspective?” Nobody has definitive answers, Robinson points out, but at least they’re asking.
Other special areas of concern for retail organizations in 2021: consumer and e-commerce cybersecurity risks. As cyberattacks get bolder and more frequent, retailers have to contemplate how to protect their data, starting with preventing credit card fraud. While that matters to any consumer business, Robinson says, the data protection challenge has extra resonance for retailers. To offer customers better, more personalized experiences, retailers need to collect more data to analyze, opening them up to more risk of a data breach.
The supply chain—manufacturing, shipping, and logistics—is also a key issue this year. The strain started showing in 2020, when pandemic lockdowns spread across the globe, exposing weaknesses in production processes and supply chains. And the US-China trade war caused many companies to look beyond China to Southeast Asian countries such as Vietnam or Thailand for production partners.
The supply chain isn’t only a financial concern. Robinson says ethical sourcing and manufacturing are becoming more important as consumers raise expectations about sustainability and worker safety. “That’s just going to continue to be more and more important as we move forward,” he adds.

Fortune favors the bold
It’s hard to plan for the long term during times of volatility—but that’s exactly what most businesses across industries are doing: more than half of surveyed organizations will ramp up technology investments in 2021, and 40% plan to move IT and business functions to the cloud (see Figure 1).
In some cases, the 2021 strategic plan is simply to ramp up for more business. Thriving companies that sell treadmill desks or sweatpants don’t need to change their business models. Because of increased demand at a time of heightened remote working, those retailers need only to fine-tune the manufacturing processes and work out shipping logistics.
But adapting to a new world means being open to new ideas. Business leaders ready to transform a company have to rethink everything: business models, product development, marketing processes, fulfillment, and success metrics. As a result, 87% of the organizations that expect business transformations in 2021 have some sort of big move planned.
Robinson believes now is the time to be bold, and retailers are realizing that. “People are going to be rewarded for taking chances and will probably be forgiven if it’s imperfect,” he says. When you are out of the usual options, try the unusual ones.
“Business didn’t stop just because of covid,” says Ashwat Panchal, vice president of internal audit at footwear retailer Skechers. “We’re expanding our distribution centers. We’re increasing our e-commerce footprint. We’re implementing new point-of-sale systems. We’re expanding into new territories.”
Download the full report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
The 2020 coronavirus pandemic upended the way companies do business. Some are coping better than others—but largely, businesses are optimistic about 2021.
That’s especially so for tech-forward organizations in two different industries—technology and manufacturing—that are planning major business initiatives to move beyond crisis response and thrive in a transformed corporate landscape. The pandemic accelerated trends that already were underway—and while 2020 might have been spent coping with the crisis, many business leaders are thinking about the next steps.
“We are in the middle of probably one of the biggest strategic moves the company has made in its history,” says Ritu Raj, director for enterprise engineering at John Deere. “That’s a big statement for a company that’s over 180 years old.”
According to a worldwide survey of 297 executives, conducted by MIT Technology Review Insights, in association with Oracle, 80% feel upbeat about their organizations’ ultimate goals for 2021, expecting to thrive—for example, sell more products and services—or transform—change business models, sales methodology, or otherwise do things differently.
The iconic manufacturer of agricultural and construction equipment is building a new operating model for the company with technology as the centerpiece, Raj says. For example, the tractors it’s selling today collect data about their operations and help farmers complete jobs like planting with precision. It’s one of the big moves—new business models, mergers and acquisitions, and big technology changes such as widespread automation—that organizations are making or planning in a landscape transformed by the pandemic.

A tale of two industries
Every industry has unique characteristics. Certainly that’s true of technology companies, which by their nature undergo rapid transformation. The industry tends to be an early adopter of new technology, says Mike Saslavsky, senior director of high-tech industry strategy at Oracle. Most tech products have rapid, short lifecycles: “You have to stay up with the next generation of technology,” he adds. “If you’re not transforming and evolving your business, then you’re probably going to be out of the market.” That premise applies across the range of businesses categorized as “tech,” from chip manufacturers to consumer devices to office equipment such as copiers.
Manufacturing has traditionally maintained a more complicated relationship with technology. On the one hand, the industry is trying to be resilient and flexible in a volatile present, says John Barcus, group vice president of Oracle’s industry strategy group. Geopolitical issues like protectionism make it harder to get the right materials delivered for products, and the lockdowns imposed during the pandemic have caused further supply chain issues. That has led manufacturers to greater adoption of cloud technologies to connect partners, track goods, and streamline processes.
On the other hand, the industry has a reputation for short-term thinking—“If it works OK today, I can wait until tomorrow to fix it,” says Barcus. That shortsightedness is caused, often understandably, by cash-flow problems and risk associated with tech investment. “And then, all of a sudden something new hits that they weren’t prepared for and they have to react.”
There are shining examples of what manufacturers could be doing. For instance, global auto parts maker Aptiv spun off its powertrain business in 2017 to focus on high-growth areas such as advanced safety technology, connected services, and autonomous driving, says David Liu, who was until January 2020 director of corporate strategy. (He’s now director of corporate development at General Motors.) In 2019, Aptiv formed Motional, a $4 billion autonomous driving joint venture with Hyundai to accelerate the development and commercialization of autonomous vehicles. The pandemic forced the company to have both the financial discipline to withstand an unpredictable “black swan” event and the imagination and drive to do big things, Liu says. In June 2020, for example, the company made a $4 billion equity issuance to shore up its future growth through investments and possible acquisitions. “The key for us is to balance operational focus and long-term strategic thinking.”

The drive behind the plans
Among all survey respondents, the most common planned big moves are substantially increased technology investments (60%) and cloud migrations (46%), with more than a third acting on business-merger plans.
In the technology and manufacturing industries, there’s more commitment to digitize business, and the organizations that did so before the pandemic were better prepared to cope. For instance, they had the technology in place to allow their workforces to work from home, Barcus points out. In fact, the crisis accelerated those efforts. Whatever their progress, he says, “Many of them, if not most of them, are now looking at, ‘How do I prepare and thrive in this new environment?’”
As the US government pumps billions of dollars into projects aimed at curbing the pandemic, from vaccine development to genomic sequencing, officials claim they are being transparent about how money is being spent. But government contractors have a lot of leeway to hide things, as shown by a recent records request filed by MIT Technology Review.
After reporting on the struggles of the US’s $44 million vaccine management system, we requested documents related to the CDC’s no-bid contracts for the underlying software, awarded to consulting giant Deloitte. The records we got back had significant redactions—including the company’s costs, the identities of those who worked on the project, and even Deloitte’s explanation for why it was qualified to do the job.
The CDC paid Deloitte to build a system that would help doctors manage vaccine inventory and report shots, let eligible people schedule appointments, and send out second-shot reminders and proofs of vaccination.
Months after the contracted deadline, Deloitte delivered a customized version of a preexisting Salesforce product called Vaccine Cloud. It was so difficult to use that only a handful of states signed up, as we reported in January.
But the documents released under the Freedom of Information Act deliberately blocked certain pieces of information from the public record, including what prior experience Deloitte had with building similar tools and how charges like travel expenses and labor were justified or broken down. They also redacted the names of everyone involved—even the communications person assigned to the project, who would likely be responsible for speaking to the media.

https://www.documentcloud.org/documents/20612402-vams-contract-5-11-2020
As part of our reporting, we requested several Deloitte contracts unrelated to the vaccine system from the US Food and Drug Administration. That agency also redacted similar information.

“It’s basically a rubber stamp”
All the redactions cite a rule in the Freedom of Information Act commonly referred to as Exemption 4, which allows companies to hide “commercial information” such as trade secrets from the public.
The contractor, rather than the government, decides what is considered sensitive information. When a government agency receives a request for records, it sends that request to the contractors, who mark what they want to keep secret.
Companies have essentially free rein to call contract details “confidential business information,” thanks to a 2019 decision by the Supreme Court. Before that, companies had to explain why releasing the information would cause “substantial harm” to their business.
“Now all the agency has to do is get an affidavit from someone at the company that says, ‘We treat this as confidential business information.’ Period. Full stop,” says Victoria Baranetsky, the general counsel at the Center for Investigative Reporting. “It’s basically a rubber stamp.”
The court’s decision in Food Marketing Institute v. Argus Leader, written by Justice Neil Gorsuch, argued that companies like Amazon should be allowed to hide how much money they receive in federal food stamps, without having to explain why.
The decision has led to increasing secrecy about the business of government, according to Baranetsky.

https://www.documentcloud.org/documents/20612403-vams-contract-12-11-2020
“The number of contractors in our country is ballooning,” she says. “The substance of material they are responsible for is more core to our basic civil rights and civil liberties than ever before.”
In fact, when requesters protest Exemption 4 redactions in court, government lawyers will even defend the contractors, using the company’s arguments at taxpayers’ expense.
“We have contractors holding children at the border. They work for the military. They’re building the border wall, setting up prisons and schools,” says Baranetsky. “It’s just this shell game of information about how our system is operating.”
This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.
In 2023, NASA will launch VIPER (Volatiles Investigating Polar Exploration Rover), a rover that will trek across the surface of the moon and hunt for water ice that could one day be used to make rocket fuel. The rover will be armed with the best instruments and tools NASA can come up with: wheels that can spin properly on lunar soil, a drill able to dig into extraterrestrial geology, and hardware that can survive the 14-day lunar night, when temperatures sink to −173 °C.
But while much of VIPER is one of a kind, custom-made for the mission, much of the software that it’s running is open-source, meaning it’s available for use, modification, and distribution by anyone for any purpose. If it’s successful, the mission may be about more than just laying the groundwork for a future lunar colony—it may also be an inflection point that causes the space industry to think differently about how it develops and operates robots.
Open-source tech rarely comes to mind when we talk about space missions. It takes a tremendous amount of money to build something that can be launched into space, make its way to its proper destination, and then fulfill a specific set of tasks hundreds or thousands (or hundreds of thousands) of miles away. Keeping the know-how to pull those things off close to one’s chest is a natural inclination. Open-source software, meanwhile, is more usually associated with scrappy programming for smaller projects, like hackathons or student demos. The code that fills online repositories like GitHub is often an inexpensive solution for groups running low on cash and resources needed to build code from scratch.
But the space industry is surging, in no small part because there’s a demand for increased access to space. And that means the use of technologies that are less expensive and more accessible, including software.
Even for bigger groups like NASA, where money’s not an issue, the open-source approach may end up leading to stronger software. “Flight software right now, I would say, is pretty mediocre in space,” says Dylan Taylor, the chairman and CEO of Voyager Space Holdings. (Case in point: Boeing’s Starliner test flight failure in 2019, which was due to software glitches.) If the software is open-source, even the smartest scientists can draw on a larger community’s expertise and feedback when problems arise, just as amateur developers do.
Basically, if it’s good enough for NASA, it should presumably be good enough for anyone else trying to operate a robot off this planet. With an ever-increasing number of new companies and new national agencies around the world seeking to launch their own satellites and probes into space while keeping costs down, cheaper robotics software that can confidently handle something as risky as a space mission is a huge boon.
Open-source software can also help make getting to space cheaper because it leads to standards everyone can adopt and work with. You can eliminate the high costs associated with specialized coding. Open-source frameworks are usually something new engineers have already worked with, too. “If we can just leverage that and increase this pipeline from what they’ve learned in school to what they use in flight missions, that shortens the learning curve,” says Terry Fong, director of the Intelligent Robotics Group at NASA Ames Research Center in Mountain View, California, and deputy lead for the VIPER mission. “It makes things faster for us to take advances from the research world and put it into flight.”
NASA has been using open-source software in many R&D projects for about 10 to 15 years now—the agency keeps an extensive catalogue of the open-source code it has used. But this technology’s role in actual robots sent to space is still nascent. One system the agency has trialed is the Robot Operating System (ROS), a collection of open-source software frameworks maintained and updated by the nonprofit Open Robotics, also headquartered in Mountain View. ROS is already used in Robonaut 2, the humanoid robot that has helped with research on the International Space Station, as well as the autonomous Astrobee robots buzzing around the ISS to help astronauts run day-to-day tasks.

The Astrobee robot on the International Space Station runs on ROS. (Photo: NASA)
ROS will be running and facilitating tasks critical to something called “ground flight control.” VIPER is going to be driven around by NASA personnel who will be operating things from Earth. Ground flight control will take data collected by VIPER to build real-time maps and renderings of the environment on the moon that the rover’s drivers can use to navigate safely. Other parts of the rover’s software have open-source roots as well: basic functions like telemetry and memory management are handled onboard by a program called core Flight System (cFS), developed by NASA itself and available for free on GitHub. VIPER’s mission operations outside of the rover itself are handled by Open MCT, also created by NASA.
Compared with Mars, the lunar environment is very difficult to physically emulate on Earth, which means testing out a rover’s hardware and software components isn’t easy. For this mission, says Fong, it made more sense to lean on digital simulations that could test many of the rover’s components—and that included the open-source software.
Another reason the mission lends itself to use of open-source software is that the moon is close enough for near-real-time control of the rover, which means some of the software doesn’t need to be on the rover itself and can run on Earth instead.
“We decided to have the robot’s brains split between the moon and Earth,” says Fong. “And as soon as we did that, it opened up the possibility that we can use software that’s not limited by radiation, hard flight, computing—but instead, we can just use off-the-shelf commodity commercial desktops. So we can make use of things like ROS on the ground, something used by so many people so regularly. We don’t have to just rely on custom software.”
VIPER isn’t running on 100% open-source software—its onboard flight system, for instance, uses extremely reliable proprietary software. But it’s easy to see future missions adopting and expanding on what VIPER will run. “I suspect that maybe the next rover from NASA will run Linux,” says Fong.
It will never be possible to use open-source software in all cases. Security concerns could be an issue, and might cause some parties to stick to proprietary tech entirely (although one plus to open-source platforms is that developers are often very public about finding flaws and proposing patches). And Fong also emphasizes that some missions will always be too specialized or advanced to rely heavily on open-source technology.
Still, it’s not just NASA that is turning to the open-source community. Blue Origin recently announced a partnership with several NASA groups to “code robotic intelligence and autonomy” built from open-source frameworks (the company declined to provide details). Smaller initiatives like the Libre Space Foundation based in Greece, which provides open-source hardware and software for small satellite activities, are bound to gain more attention as spaceflight continues to get cheaper. “There’s a domino effect there,” says Brian Gerkey, the CEO of Open Robotics. “Once you have a large organization like NASA saying publicly, ‘We’re depending on this software,’ then other organizations are willing to take a chance and dig in and do the work that’s necessary to make it work for them.”
Facebook is withholding certain job ads from women because of their gender, according to the latest audit of its ad service.
The audit, conducted by independent researchers at the University of Southern California (USC), reveals that Facebook’s ad-delivery system shows different job ads to women and men even though the jobs require the same qualifications. This is considered sex-based discrimination under US equal employment opportunity law, which bans ad targeting based on protected characteristics. The findings come despite years of advocacy and lawsuits, and after promises from Facebook to overhaul how it delivers ads.
The researchers registered as an advertiser on Facebook and bought pairs of ads for jobs with identical qualifications but different real-world demographics. They advertised for two delivery driver jobs, for example: one for Domino’s (pizza delivery) and one for Instacart (grocery delivery). There are currently more men than women who drive for Domino’s, and vice versa for Instacart.
Though no audience was specified on the basis of demographic information (a feature Facebook disabled for housing, credit, and job ads in March 2019 after settling several lawsuits), the algorithms still showed the ads to statistically distinct demographic groups. The Domino’s ad was shown to more men than women, and the Instacart ad was shown to more women than men.
The researchers found the same pattern with ads for two other pairs of jobs: software engineers for Nvidia (skewed male) and Netflix (skewed female), and sales associates for cars (skewed male) and jewelry (skewed female).
The findings suggest that Facebook’s algorithms are somehow picking up on the current demographic distribution of these jobs, which often differ for historical reasons. (The researchers weren’t able to discern why that is, because Facebook won’t say how its ad-delivery system works.) “Facebook reproduces those skews when it delivers ads even though there’s no qualification justification,” says Aleksandra Korolova, an assistant professor at USC, who coauthored the study with her colleague John Heidemann and their PhD advisee Basileal Imana.
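The audit’s core comparison boils down to a simple statistical question: given two ads with identical qualification requirements, did their deliveries differ by gender more than chance would allow? The sketch below illustrates that logic with a two-proportion z-test and purely hypothetical impression counts; the study’s actual figures and methods are not reproduced here.

```python
import math

def two_proportion_z_test(women_a, total_a, women_b, total_b):
    """Test whether two ads were delivered to women at different rates.

    Returns (z, p_value) for a two-sided two-proportion z-test.
    """
    p_a = women_a / total_a
    p_b = women_b / total_b
    # Pooled proportion under the null hypothesis of equal delivery rates
    p_pool = (women_a + women_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical impression counts for two paired job ads with identical
# qualification requirements (e.g., two delivery-driver ads)
z, p = two_proportion_z_test(women_a=350, total_a=1000,   # ad A: 35% women
                             women_b=650, total_b=1000)   # ad B: 65% women
print(f"z = {z:.2f}, p = {p:.3g}")  # a skew this large is far beyond chance
```

If qualifications are truly identical, a delivery gap this large cannot be explained by the advertiser’s targeting choices, which is what lets the researchers attribute the skew to the platform’s own delivery algorithms.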
The study supplies the latest evidence that Facebook has not resolved its ad discrimination problems since ProPublica first brought the issue to light in October 2016. At the time, ProPublica revealed that the platform allowed advertisers of job and housing opportunities to exclude certain audiences characterized by traits like gender and race. Such groups receive special protection under US law, making this practice illegal. It took two and a half years and several legal skirmishes for Facebook to finally remove that feature.
But a few months later, the US Department of Housing and Urban Development (HUD) filed a new lawsuit, alleging that Facebook’s ad-delivery algorithms were still excluding audiences for housing ads without the advertiser specifying the exclusion. A team of independent researchers including Korolova, led by Northeastern University’s Muhammad Ali and Piotr Sapieżyński, corroborated those allegations a week later. They found, for example, that houses for sale were being shown more often to white users and houses for rent were being shown more often to minority users.
Korolova wanted to revisit the issue with her latest audit because the burden of proof for job discrimination is higher than for housing discrimination. While any skew in the display of ads based on protected characteristics is illegal in the case of housing, US employment law deems it justifiable if the skew is due to legitimate qualification differences. The new methodology controls for this factor.
“The design of the experiment is very clean,” says Sapieżyński, who was not involved in the latest study. While some could argue that car and jewelry sales associates do indeed have different qualifications, he says, the differences between delivering pizza and delivering groceries are negligible. “These gender differences cannot be explained away by gender differences in qualifications or a lack of qualifications,” he adds. “Facebook can no longer say [this is] defensible by law.”
The release of this audit comes amid heightened scrutiny of Facebook’s AI bias work. In March, MIT Technology Review published the results of a nine-month investigation into the company’s Responsible AI team, which found that the team, first formed in 2018, had neglected to work on issues like algorithmic amplification of misinformation and polarization because of its blinkered focus on AI bias. The company published a blog post shortly after, emphasizing the importance of that work and saying in particular that Facebook seeks “to better understand potential errors that may affect our ads system, as part of our ongoing and broader work to study algorithmic fairness in ads.”
“We’ve taken meaningful steps to address issues of discrimination in ads and have teams working on ads fairness today,” said Facebook spokesperson Joe Osborn in a statement. “Our system takes into account many signals to try and serve people ads they will be most interested in, but we understand the concerns raised in the report… We’re continuing to work closely with the civil rights community, regulators, and academics on these important matters.”
Despite these claims, however, Korolova says she found no noticeable change between the 2019 audit and this one in the way Facebook’s ad-delivery algorithms work. “From that perspective, it’s actually really disappointing, because we brought this to their attention two years ago,” she says. She’s also offered to work with Facebook on addressing these issues, she says. “We haven’t heard back. At least to me, they haven’t reached out.”
In previous interviews, the company said it was unable to discuss the details of how it was working to mitigate algorithmic discrimination in its ad service because of ongoing litigation. The ads team said its progress has been limited by technical challenges.
Sapieżyński, who has now conducted three audits of the platform, says this has nothing to do with the issue. “Facebook still has yet to acknowledge that there is a problem,” he says. While the team works out the technical kinks, he adds, there’s also an easy interim solution: it could turn off algorithmic ad targeting specifically for housing, employment, and lending ads without affecting the rest of its service. It’s really just an issue of political will, he says.
Christo Wilson, another researcher at Northeastern who studies algorithmic bias but didn’t participate in Korolova’s or Sapieżyński’s research, agrees: “How many times do researchers and journalists need to find these problems before we just accept that the whole ad-targeting system is bankrupt?”
It’s been a busy week for Clearview AI, the controversial facial recognition company that uses 3 billion photos scraped from the web to power a search engine for faces. On April 6, Buzzfeed News published a database of over 1,800 entities—including state and local police and other taxpayer-funded agencies such as healthcare systems and public schools—that it says have used the company’s controversial products. Many of those agencies replied to the accusations by saying they had only trialed the technology, and had no formal contract with the company.
But the day before, the definition of a “trial” with Clearview was detailed when nonprofit news site Muckrock released emails between the New York Police Department and the company. The documents, obtained through freedom of information requests by the Legal Aid Society and journalist Rachel Richards, track a friendly two-year relationship between the department and the tech company during which time NYPD tested the technology many times, and used facial recognition on live investigations.
The NYPD has previously downplayed its relationship with Clearview AI and its use of the company’s technology. But the emails show that the relationship between them was well-developed, with a large number of police officers conducting a high volume of searches with the app, and using them in real investigations. The NYPD has run over 5,100 searches with Clearview AI.
This is particularly problematic because the NYPD has stated policies that prohibit it from creating an unsupervised repository of photos for facial recognition systems to reference, and that restrict the use of facial recognition technology to a dedicated team. The emails reveal that both policies were circumvented: many officers outside the facial recognition team were given access to the system, which relies on a huge library of public photos from social media, and officers downloaded the app onto their personal devices, in contravention of stated policy, using the powerful and biased technology in a casual fashion.
Clearview AI runs a powerful neural network which processes photographs of faces and compares their precise measurement and symmetry to a massive database of pictures to suggest possible matches. It’s unclear just how accurate the technology is, but it’s widely used by police departments and other government agencies. Clearview AI has been heavily criticized for its use of personally identifiable information, its decision to violate people’s privacy by scraping photographs from the internet without their permission, and its choice of clientele.
The emails span from October 2018 through February 2020, beginning with Clearview AI CEO Hoan Ton-That being introduced to NYPD deputy inspector Chris Flanagan. After initial meetings, Clearview AI entered into a vendor contract with NYPD in December 2018 on a trial basis that lasted until the following March.
The documents show that many individuals at NYPD had access to Clearview during and after this time, from department leadership to junior officers. Throughout the exchanges, Clearview AI encouraged high usage of its services. (“See if you can reach 100 searches,” its onboarding instructions urged officers.) The emails show that trial accounts for the NYPD were created as late as February 2020, almost a year after the trial period was said to have ended.
We reviewed the emails, and talked to top surveillance and legal experts about their contents. Here’s what you need to know.

NYPD lied about the extent of its relationship with Clearview AI and the use of its facial recognition technology
The NYPD told Buzzfeed News and the New York Post previously that it had “no institutional relationship” with Clearview AI, “formally or informally.” NYPD did disclose that it had trialed Clearview AI, but the emails show it was used over a sustained time period by a large number of people who completed a high volume of searches in real investigations.
In one exchange, a detective working in the department’s facial recognition unit said, “the app is working great.” In another, an officer on the NYPD’s identity theft squad said, “we continue to receive positive results” and have “gone on to make arrests.” (We have removed full names and email addresses from these images; other personal details were redacted in the original documents.)
Albert Fox Cahn, executive director at the Surveillance Technology Oversight Project, a nonprofit that advocates for the abolition of police use of facial recognition technology in New York City, says the records clearly contradict NYPD’s previous public statements on its use of Clearview AI.
“Here we have a pattern of officers getting Clearview accounts—not for weeks or months—but over the course of years,” he says. “We have evidence of meetings with officials at the highest level of the NYPD, including the facial identification section. This isn’t a few officers who decide to go off and get a trial account. This was a systematic adoption of Clearview’s facial recognition technology to target New Yorkers.”
Further, NYPD’s description of its facial recognition use, which is required under a recently passed law, says that “investigators compare probe images obtained during investigations with a controlled and limited group of photographs already within possession of the NYPD.” Clearview AI is known for its database of over 3 billion photos scraped from the web.

NYPD is working closely with immigration enforcement, and officers referred Clearview AI to ICE
The emails show that the NYPD passed along the email addresses of multiple ICE agents in what appear to be referrals to help Clearview sell its technology to the Department of Homeland Security. Two police officers had both NYPD and Homeland Security affiliations in their email signatures, while another officer identified as a member of a Homeland Security task force.
New York is designated as a sanctuary city, meaning that local law enforcement limits its cooperation with federal immigration agencies. In fact, NYPD’s facial recognition policy statement says that “information is not shared in furtherance of immigration enforcement” and “access will not be given to other agencies for purposes of furthering immigration enforcement.”
“I think one of the big takeaways is just how lawless and unregulated the interactions and surveillance and data sharing landscape is between local police, federal law enforcement, immigration enforcement,” says Matthew Guariglia from the Electronic Frontier Foundation. “There just seems to be so much communication, maybe data sharing, and so much unregulated use of technology.”
Cahn says the emails immediately ring alarm bells, particularly since a great deal of law enforcement information funnels through central systems known as fusion centers.
“You can claim you’re a sanctuary city all you want, but as long as you continue to have these DHS task forces, as long as you continue to have information fusion centers that allow real-time data exchange with DHS, you’re making that promise into a lie.”

Many officers asked to use Clearview AI on their personal devices or through their personal email accounts
At least four officers asked for access to Clearview’s app on their personal devices or through personal emails. Department devices are closely regulated, and it can be difficult to download applications to official NYPD mobile phones. Some officers clearly opted to use their personal devices when department phones were too restrictive.
Clearview replied to this email, “Hi William, you should have a setup email in your inbox shortly.”
Jonathan McCoy is a digital forensics attorney at Legal Aid Society and took part in filing the freedom of information request. He found the use of personal devices particularly troublesome. “My takeaway is that they were actively trying to circumvent NYPD policies and procedures that state that if you’re going to be using facial recognition technology, you have to go through FIS (facial identification section) and they have to use the technology that’s already been approved by the NYPD wholesale.” NYPD does already have a facial recognition system, provided by a company called Dataworks.
Guariglia says it points to an attitude of carelessness by both the NYPD and Clearview AI. “I would be horrified to learn that police officers were using Clearview on their personal devices to identify people that then contributed to arrests or official NYPD investigations.”
The concerns these emails raise are not just theoretical: they could allow the police to be challenged in court, and even have cases overturned because of failure to adhere to procedure. McCoy says the Legal Aid Society plans to use the evidence from the emails to defend their clients who have been arrested as the result of an investigation that used facial recognition.
“We would hopefully have a basis to go into court and say that whatever conviction was obtained through the use of the software was done in a way that was not commensurate with NYPD policies and procedures,” he says. “Since Clearview is an untested and unreliable technology, we could argue that the use of such a technology prejudiced our client’s rights.”
As covid vaccines roll out in a handful of countries, the next question has become: How do people prove they’ve been inoculated? For months, this conversation—and the ethical questions any “vaccine passport” system would raise—has been theoretical, but over the last few weeks, efforts have become more concrete. Australian airline Qantas started running a trial in March, while New York launched the first state-level system in the US last week. And on April 5, the UK said it would conduct a pilot as part of its gradual easing of lockdown restrictions. The moves have prompted various reactions: some states in the US have endorsed the concept; others have banned it.
What is a vaccine passport?
When experts talk about turning proof of vaccination into a credential or passport, there are usually two very different reasons they’re put forward.
- Proof at international borders. You’d pull this out for immigration authorities when entering another country, mirroring how international vaccine records [pdf] have typically worked for decades—many nations already recommend vaccinations for entry, or require proof of immunizations for diseases such as yellow fever.
- Proof for around town. This kind of credential would get more day-to-day use, and it is the one most people are discussing when they talk about vaccine passports. Experts envision that you might show this to enter the building you work in, go to a cafe, or attend a private event such as a concert or wedding.
In either case, the pass might come in one of two forms. It might be stored on your smartphone, or you might carry a piece of paper that could be scanned or displayed. Systems would typically work with either proof of vaccination or a recent negative test. The UK’s early-stage pilot will reportedly also allow proof of recent infection, which would lend a person immunity.
Who’s developing products?
In most places, despite all the recent conversation, vaccine passports haven’t materialized, but many countries and private companies continue to forge ahead. Airlines are talking about an industry-wide solution, for example. As far as countries go, Israel’s version of a vaccine credential is one of the furthest along. Its “green pass” launched in February.
With so many players, software companies have been jockeying for months to become the go-to solution for vaccine credentials. Some are beginning to join up with each other to agree on some common standards. For instance, New York’s system, the Excelsior Pass, uses IBM’s Digital Health Pass. IBM is also a member of Linux Foundation Public Health, an organization that helps hundreds of developers share code and ideas.
But even with increased cooperation, there’s still a lot to sort out. A few big questions about vaccine passports are still on the table.
How will developers keep private health information secure?
New York’s app promises privacy but doesn’t explain how that’s accomplished, says security researcher Albert Fox Cahn, who directs the Surveillance Technology Oversight Project based in New York. He says, “We don’t even have the most rudimentary information about what data it captures, how that data is stored, or what security measures are being used.” Cahn says that he tried an “ethical hacking” exercise: he got permission to try activating a user’s pass simply by inputting details (like birth date) found on social media accounts. He says, “It took me 11 minutes before I had their blue Excelsior Pass.”
For Israel’s green pass, some security experts have already outlined concerns about the outdated encryption being used.
Paper, smartphone, or both?
Requiring people to use a smartphone would exclude significant portions of the population, including many older people and some who cannot afford or choose not to use high-end phones. New York’s pass system—currently in a pilot phase for selected big venues—says that a paper card would be acceptable proof, and that other states’ records or negative test results should also work. That sort of flexibility is part of other proposed systems, too. The PathCheck initiative, run by MIT associate professor Ramesh Raskar, is working on a system that uses paper cards with QR code stickers attached. Codes can be scanned by venues or anyone who wants to vet people entering a space. Other solutions, he says, are too heavy-handed. “People are trying to build business models on top of it,” he says. Instead, he says, “we need a mass-use solution right away, in the middle of a pandemic.”
How does immunization data get stored and shared?
In some countries with nationalized health systems, like the UK and Israel, immunization records can be made centrally accessible. In the US, however, a universal solution faces another major hurdle: the country’s fractured health-care system. Vaccine records are stored in a patchwork of databases that don’t normally work together.
“It’s a jumble,” says Jenny Wanger, who oversees covid-related initiatives for Linux Foundation Public Health. “This is all just a sign of how massively underfunded our public health infrastructure has been for so many years.”
The US’s disconnected system stands in stark contrast to countries like India, where data is much more centralized, says Anit Mukherjee, of the US think tank Center for Global Development. There, he says, “there is no way that we can manage a rollout of a vaccine for one billion people without having some form of centralized system.”
What about the ethics of requiring vaccine proof?
While the benefits to those who are able to use vaccine passports are clear—they will be able to return to something resembling normal life—there are legitimate concerns about the ways in which digitized data will be used, today and in the future. Points to keep an eye on:
- Access could be unfairly limited for some people. The vast majority of shots received so far—84%, according to the New York Times—have been given in wealthier countries. And even in those countries, certain groups of workers haven’t been prioritized—US nail salon technicians, for example, have been low priority despite facing high rates of infection. In Israel, distribution to Palestinians in the occupied territories remains slow. For those without a vaccination record, vaccine passports will require proof of a recent negative test, which could cost time or money to obtain.
- Laws and policies will need to spell out protections. Imogen Parker is part of a team at the Ada Lovelace Institute in London, which has been studying vaccine passports and surrounding ethical issues since May 2020. She says that when it comes to day-to-day use, “there has to be real clarity about how this interacts with equalities legislation, employment law … Could this be used at protests? Could this be used at voting booths?” In the US, she says, that information could also pipe to insurance companies, unless such uses are specifically prohibited.
- Countries could use credentials as a way to keep people out. For border crossing, Parker says, the complication is that not all countries have vaccines yet: “Is this going to encourage [countries] to spread vaccines? Is travel and trade predicated on vaccine status?” Mukherjee, meanwhile, points out that not all vaccines are equal. For example, some studies suggest China’s CoronaVac has an efficacy of around 50%, lower than the rates of 90% and higher shown by the Pfizer-BioNTech and Moderna vaccines. Does this mean even those with the “wrong” vaccinations could end up being rejected?
With so many questions still to be answered, the stakes for getting it right remain high. In a slide deck obtained by the Washington Post, federal officials worried that a botched rollout “could hamper our pandemic response by undercutting health safety measures, slowing economic recovery, and undermining public trust and confidence.” Since then, the Biden administration has said that it will not issue a nationwide mandate.
But despite the recent media coverage, political takes, and new app launches, it’s not clear what the long-term outlook for vaccine credentials might be. In the short run, they might become a sort of nudge for the hesitant, encouraging them to get their shots in order to open doors that would otherwise remain (literally) closed.
“Our intention is to open as many places as possible with the green pass,” said Israel’s health ministry’s director for health, Sharon Alroy-Preis, in an interview with the Israeli news website Ynet. “The goal is to create places that are safer, and to encourage vaccination.”
But after that? Experts don’t know yet—and even Israel is still figuring it out. The clearest answer is that, for at least a brief window of time, in certain places, people may need to prove that they’re inoculated or free of covid. Whether or not these systems stick around, and how people will feel about that, is as hard to predict as the course of the pandemic.
Even if the future is murky, though, Parker says that having a sense of the long view is important: “You’re building a tool for health surveillance and normalizing a number of third parties requesting or requiring individuals to share data. There’s a really big question of how that could evolve.” On the other hand, she says, if this is temporary, “do we have the ability to dismantle it?”
Bioethicist Arthur Caplan, founding head of the Division of Medical Ethics at NYU School of Medicine, says that he’s seen how norms around vaccinations can change and evolve. He recalls his push to require health-care professionals to get flu shots and says that after initial debate, the controversy died down: “Some people said, I’m not doing it, I hate it. After about two years of that? Nobody cares. They just do it.”
And in any case, ending the pandemic relies on multiple factors, not just one kind of technology, says Julie Samuels, who helped launch New York’s exposure notification app last year. As with all tech related to the pandemic, she says, “it’s important to think of these things as just a layer of protection … Obviously the most important thing is to get as many people vaccinated as possible.”
This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.
A pair of robot legs called Cassie has been taught to walk using reinforcement learning, the training technique that teaches AIs complex behavior via trial and error. It’s the first time reinforcement learning has been used to teach a two-legged robot how to walk from scratch, including the ability to walk in a crouch and while carrying an unexpected load.
But can it boogie? Expectations for what robots can do run high thanks to viral videos put out by Boston Dynamics, which show its humanoid Atlas robot standing on one leg, jumping over boxes and dancing. These videos have racked up millions of views and have even been parodied. The control Atlas has over its movements is impressive, but the choreographed sequences probably involve a lot of hand-tuning. (Boston Dynamics has not published details, so it’s hard to say how much.)
“These videos may lead some people to believe that this is a solved and easy problem,” says Zhongyu Li at the University of California, Berkeley, who worked on Cassie with his colleagues. “But we still have a long way to go to have humanoid robots reliably operate and live in human environments.” Cassie can’t yet dance, but teaching the human-sized robot to walk by itself puts it several steps closer to being able to handle a wide range of terrain, and recover when it stumbles or damages itself.
Virtual limitations: Reinforcement learning has been used to train bots to walk inside simulations before, but transferring that ability to the real world is hard. “Many of the videos that you see of virtual agents are not at all realistic,” says Chelsea Finn, an AI and robotics researcher at Stanford University, who was not involved in the work. Small differences between the simulated physical laws inside a virtual environment and the real physical laws outside it—such as how friction works between a robot’s feet and the ground—can lead to big failures when a robot tries to apply what it has learned. A heavy two-legged robot can lose balance and fall if its movements are even a tiny bit off.
Double simulation: But training a large robot through trial and error in the real world would be dangerous. To get around these problems, the Berkeley team used two levels of virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. This simulation was then transferred to a second virtual environment called SimMechanics that mirrors real-world physics with a high degree of accuracy—but at the cost of running slower than real life. Only once Cassie seemed to walk well in that second environment was the learned walking model loaded into the actual robot.
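That two-stage pipeline can be illustrated in miniature. The toy below is not the Berkeley team’s code: the “simulators” are one-dimensional stand-ins, and simple hill climbing stands in for a real reinforcement-learning algorithm. It tunes a single policy parameter in a fast, approximate simulator, then fine-tunes it in a slower, more faithful one whose optimum is slightly shifted, mimicking the sim-to-real gap.

```python
import random

def coarse_sim(policy):
    # Fast, approximate dynamics: reward peaks when the parameter is 1.0
    return -abs(policy - 1.0)

def accurate_sim(policy):
    # Slower, high-fidelity dynamics: the optimum sits elsewhere (1.15),
    # standing in for sim-to-real mismatches such as friction
    return -abs(policy - 1.15)

def train(sim, start, steps, step_size, rng):
    # Random hill climbing as a stand-in for a reinforcement-learning loop
    best, best_reward = start, sim(start)
    for _ in range(steps):
        candidate = best + rng.uniform(-step_size, step_size)
        reward = sim(candidate)
        if reward > best_reward:
            best, best_reward = candidate, reward
    return best

rng = random.Random(0)
# Stage 1: many cheap iterations in the coarse simulator
stage1 = train(coarse_sim, start=0.0, steps=200, step_size=0.2, rng=rng)
# Stage 2: a short, careful refinement in the accurate simulator
stage2 = train(accurate_sim, start=stage1, steps=50, step_size=0.05, rng=rng)
```

The point of the second stage is that the cheap simulator gets the policy close, so only a short run in the expensive, faithful simulator is needed before deployment.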
The real Cassie was able to walk using the model learned in simulation without any extra fine-tuning. It could walk across rough and slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors in its right leg but was able to adjust its movements to compensate. Finn thinks that this is exciting work. Edward Johns, who leads the Robot Learning Lab at Imperial College London, agrees. “This is one of the most successful examples I have seen,” he says.
The Berkeley team hopes to use their approach to add to Cassie’s repertoire of movements. But don’t expect a dance-off anytime soon.
Cyberattacks continue to grow in prevalence and sophistication. With the ability to disrupt business operations, wipe out critical data, and cause reputational damage, they pose an existential threat to businesses, critical services, and infrastructure. Today’s new wave of attacks is outsmarting and outpacing humans, and even starting to incorporate artificial intelligence (AI). What’s known as “offensive AI” will enable cybercriminals to direct targeted attacks at unprecedented speed and scale while flying under the radar of traditional, rule-based detection tools.
Some of the world’s largest and most trusted organizations have already fallen victim to damaging cyberattacks, undermining their ability to safeguard critical data. With offensive AI on the horizon, organizations need to adopt new defenses to fight back: the battle of algorithms has begun.
MIT Technology Review Insights, in association with AI cybersecurity company Darktrace, surveyed more than 300 C-level executives, directors, and managers worldwide to understand how they’re addressing the cyberthreats they’re up against—and how to use AI to help fight against them.
As it is, 60% of respondents report that human-driven responses to cyberattacks are failing to keep up with automated attacks, and as organizations gear up for a greater challenge, more sophisticated technologies are critical. In fact, an overwhelming majority of respondents—96%—report they’ve already begun to guard against AI-powered attacks, with some enabling AI defenses.
Offensive AI cyberattacks are daunting, and the technology is fast and smart. Consider deepfakes, one type of weaponized AI tool: fabricated images or videos depicting scenes that never happened or people who never existed.
In January 2020, the FBI warned that deepfake technology had already reached the point where artificial personas could be created that could pass biometric tests. At the rate that AI neural networks are evolving, an FBI official said at the time, national security could be undermined by high-definition, fake videos created to mimic public figures so that they appear to be saying whatever words the video creators put in their manipulated mouths.
This is just one example of the technology being used for nefarious purposes. AI could, at some point, conduct cyberattacks autonomously, disguising their operations and blending in with regular activity. The technology is out there for anyone to use, including threat actors.
Offensive AI risks and developments in the cyberthreat landscape are redefining enterprise security, as humans already struggle to keep pace with advanced attacks. In particular, survey respondents reported that email and phishing attacks cause them the most angst, with nearly three quarters reporting that email threats are the most worrisome. That breaks down to 40% of respondents who report finding email and phishing attacks “very concerning,” while 34% call them “somewhat concerning.” It’s not surprising, as 94% of detected malware is still delivered by email. The traditional methods of stopping email-delivered threats rely on historical indicators—namely, previously seen attacks—as well as the ability of the recipient to spot the signs, both of which can be bypassed by sophisticated phishing incursions.
When offensive AI is thrown into the mix, “fake email” will be almost indistinguishable from genuine communications from trusted contacts.
How attackers exploit the headlines
The coronavirus pandemic presented a lucrative opportunity for cybercriminals. Email attackers in particular followed a long-established pattern: take advantage of the headlines of the day—along with the fear, uncertainty, greed, and curiosity they incite—to lure victims in what has become known as “fearware” attacks. With employees working remotely, without the security protocols of the office in place, organizations saw successful phishing attempts skyrocket. Max Heinemeyer, director of threat hunting for Darktrace, notes that when the pandemic hit, his team saw an immediate evolution of phishing emails. “We saw a lot of emails saying things like, ‘Click here to see which people in your area are infected,’” he says. When offices and universities started reopening last year, new scams emerged in lockstep, with emails offering “cheap or free covid-19 cleaning programs and tests,” says Heinemeyer.
There has also been an increase in ransomware, which has coincided with the surge in remote and hybrid work environments. “The bad guys know that now that everybody relies on remote work. If you get hit now, and you can’t provide remote access to your employee anymore, it’s game over,” he says. “Whereas maybe a year ago, people could still come into work, could work offline more, but it hurts much more now. And we see that the criminals have started to exploit that.”
What’s the common theme? Change, rapid change, and—in the case of the global shift to working from home—complexity. And that illustrates the problem with traditional cybersecurity, which relies on traditional, signature-based approaches: static defenses aren’t very good at adapting to change. Those approaches extrapolate from yesterday’s attacks to determine what tomorrow’s will look like. “How could you anticipate tomorrow’s phishing wave? It just doesn’t work,” Heinemeyer says.
Download the full report.
On March 20, Kyle Niemer and Mallory Raven-Ellen Backstrom had the wedding of their dreams: intimate (around 40 guests), in a spacious venue with a dance floor, great food — and PCR tests on demand to check unvaccinated guests, administered by a doctor and nurse in the bridal party.
For two weeks, the couple was on edge. Niemer said he had “CNN dreams, where we were that wedding party with a covid outbreak.” “I was afraid,” agrees Backstrom, who announced she was pregnant at the wedding. “We had literally gone to every length to protect our guests. It was nerve-racking.”
While 2020 was marked by canceled or postponed weddings, 2021 is seeing a resurgence — albeit with ones that are smaller than pre-pandemic bashes. Couples like Niemer and Backstrom are navigating a tricky quagmire of ethics and etiquette to ensure the safety of their big day. While some are hosting on-site rapid testing, others — who can afford it — are requiring proof of vaccines, along with bouncers and “covid safety officers.”
The relaxation of state restrictions has helped weddings return, along with the widespread use and accessibility of PCR tests, considered the gold standard in detecting covid-19. Socially distant weddings were the first to emerge in the wake of lockdowns last spring and summer, along with “microweddings” and “minimonies” (pandemic-ese for small weddings of about 10 guests). Now vaccinations are offering the possibility of making weddings bigger, but they are also complicating the planning. The question remains: how do you keep guests safe? And how do you navigate the tricky etiquette around the topic of vaccination and testing with your guests?
The ethical questions
Those questions turn up almost daily on one of the internet’s biggest wedding channels, the subreddit r/WeddingPlanning, which has nearly 150,000 members. The usual queries of where to find dresses and how to handle a meddling future mother-in-law have been interrupted by questions on how to navigate mixed vaccinated/unvaccinated weddings. “Does anyone have good wording for how to communicate to guests that we’re transitioning to having a child-free wedding because kids won’t be eligible for vaccines yet?” one asks. “Bonus points if you show examples on how you worded it on the invite!” another says.
Redditors are posting sample covid inserts for paper invites for edits and thoughts.
Elisabeth Kramer, an Oregon-based wedding planner, says couples should be figuring out not only how to talk to their families but how to talk to their vendors as well. She’s created Google Doc templates to help clients speak to caterers, florists, even the officiant about their vaccination or testing plans for the day.
Radhika Graham, a wedding planner in Canada, says government-mandated gathering limits mean that couples are using wedding sites like Minted or questionnaires on SurveyMonkey to ask both guests and vendors how they’re feeling and to urge them to get (and record) vaccinations. But there’s no sugarcoating it: asking invasive health questions can rub guests the wrong way and dampen the celebratory mood of your wedding.
Julie-Ann Hutchinson and Kyle Burton, Baltimore-based health care professionals, went to extraordinary lengths to ensure their 40-person St. Louis wedding last September ran smoothly. They hired a “covid safety officer,” a nurse who, for $60 an hour for five hours, checked temperatures, asked guests how they felt, and handed out sanitizer and masks.
“My father came up with this idea, simply because he didn’t want family members to have to monitor the group and tell them to stand six feet apart,” Hutchinson said. “He wanted there to be an impartial neutral party.” That made sense to the couple, but Hutchinson admits she thought, “He’s being ridiculous. Like what do I Google, ‘bouncer’? You can’t hire on TaskRabbit for this role. How do you even Google this?”
In the end, Burton’s aunt worked in the local military veterans hospital and knew someone who could help out, and the couple found themselves relieved of having to police their relatives. “I thought we were pandemic extra,” Hutchinson said (their wedding was profiled in the New York Times). “But it was a relief. She [the covid safety officer] would stare them down if they [guests] positioned themselves too closely.”
Neither Hutchinson nor Burton would change anything. “The conflict we faced was that we wanted to make the most of our time with our loved ones,” Burton says. “We had the option to delay the wedding entirely but we wanted to celebrate our love for each other and we wanted our family with us.”
Meet the covid concierge
The two couples—Niemer and Backstrom, Hutchinson and Burton—were lucky: they were able to use a connection to find a person on short notice, at relatively low cost, to monitor their wedding. But for couples who don’t have health-care connections or don’t consider such a monitor sufficient, “private covid concierge testing” is now a service you can buy for your big day.
Asma Rashid’s boutique medical office in the Hamptons offered 35-minute turnaround testing for clients wanting to party last summer in the area’s beach houses. She’s already received requests for weddings this summer, including one she is helping a couple plan where vaccinations are explicitly required. “You’re not allowed to enter the party without proof of vaccination,” she says. “It’s not an honor system.”
Rashid did not provide her rate, but similar services are popping up quickly online and aren’t cheap, ringing in at around $100 per test. One company, EventDoc, offers a deal for $1,500 testing for 20 guests in New York and Florida. Veritas, a Los Angeles-based startup, is gearing up for a busy wedding season outside its usual core clientele of film production crews who are required by law to be tested regularly. The company offers rapid tests for $75-$110 depending on the size of the group.
“We’ve been approved to do vaccinations by California,” says cofounder Kristopher Sims. The firm aims to eventually offer vaccinations at pre-wedding gatherings like bridal showers so guests are vaccinated in time for the wedding day—for a fee.
The demand for covid concierge services is not limited to weddings; summer graduations, bar/bat mitzvahs, quinceaneras, and any other gathering is fair game. But weddings are the most lucrative and dependable, spawning an emerging industry of rapid testing and verification services for those who can afford it. For a wedding list of even 10, those costs can quickly add up.
Simple solutions
“That’s where the challenge is: Big tech is creating a solution for the rich but in reality, it’s the masses that need it,” Ramesh Raskar says. Raskar is a professor at MIT’s Media Lab and is in the process of launching PathCheck, a paper card with a QR code that proves you are vaccinated. “It’s like a certificate,” Raskar says. When a person arrives at a venue, their QR code is checked along with a form of photo ID; if both check out, the person is permitted to enter.
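A paper pass like this is essentially a signed payload rendered as a QR code. The sketch below shows one way such a check could work; it is illustrative only, not PathCheck’s design. It uses a shared-secret HMAC for brevity (a real system would more likely use public-key signatures so venues never hold the signing key), and the record fields are invented.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # hypothetical; held only by the issuing authority

def issue_pass(record):
    # Encode the record plus an HMAC tag into one token, which a printer
    # would render as a QR code on the paper card
    payload = base64.urlsafe_b64encode(json.dumps(record, sort_keys=True).encode())
    tag = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + tag).decode()

def verify_pass(token):
    # A venue scans the QR code and recomputes the tag; any edit to the
    # record invalidates it. Returns the record, or None if tampered.
    try:
        payload, tag = token.encode().split(b".")
    except ValueError:
        return None
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    if hmac.compare_digest(tag, expected):
        return json.loads(base64.urlsafe_b64decode(payload))
    return None
```

Because verification only recomputes a signature, it needs no database lookup, which is what lets a paper card work offline at the door.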
On the surface, PathCheck ticks a lot of boxes: It’s pretty secure and, because Media Lab is a nonprofit, it is free—so far. And PathCheck is a paper product rather than a digital one, making it especially attractive for undocumented immigrants, the elderly, and those without internet access.
Tools like PathCheck are one possible route toward opening up safe, large gatherings to people without much economic means in the United States. But it has drawbacks: PathCheck has to gain traction for people to trust and use it. And, as Veritas’s Sims and Capello note, there is currently no straightforward, national way for one state to verify a vaccination administered in another. Even if there were, vaccine passports are far from an uncontroversial option.
Weddings have been another example of how the pandemic has exacerbated inequity. The decision to have a safe wedding—any gathering, really—this year has been dictated by wealth and access. Some couples can afford to have a medical professional moonlight as a covid bouncer or send at-home PCR tests. Others can’t and have to make the difficult decision to either cut their guest list down and hope for the best—or just wait until the summer and hope enough people have been vaccinated.
That won’t change soon. Sure, President Joe Biden has said every American adult will be eligible for a vaccine by April 19, but children will remain unvaccinated for some time, and the April 19 date does not account for the bottleneck of people wanting vaccines but unable to access them because of demand. While it might be safe to assume most people will be fully vaccinated by June, it will be hard to actually know—unless, of course, you have the money to find out.
On the other hand, wedding season might be a boon for pushing those who are vaccine hesitant toward getting a vaccine simply because of FOMO. In Israel, life is mostly back to pre-pandemic normality after its massive vaccination campaign, helped along by incentivizing vaccine skeptics to get the vaccine so they can be part of social activities, according to a recent JAMA article.
Similarly, Niemer and Backstrom said that the expected presence of two vulnerable people—Backstrom’s father, who has stage 4 lung cancer, and her 90-year-old grandmother—may have guilted people into getting the vaccine. “They [guests] knew the stakes,” Backstrom says. “Everyone was pretty much on their best behavior. We didn’t have guests who were stubborn and resistant.”
The news: The personal data of 533 million Facebook users in more than 106 countries was found to be freely available online last weekend. The data trove, uncovered by security researcher Alon Gal, includes phone numbers, email addresses, hometowns, full names, and birth dates. Initially, Facebook claimed that the data leak was previously reported on in 2019 and that it had patched the vulnerability that caused it that August. But in fact, it appears that Facebook did not properly disclose the breach at the time. The company finally acknowledged it on Tuesday, April 6, in a blog post by product management director Mike Clark.
How it happened: In the blog post, Clark said that Facebook believes the data was scraped from people’s profiles by “malicious actors” using its contact importer tool, which uses people’s contact lists to help them find friends on Facebook. It isn’t clear exactly when the data was scraped, but Facebook says it was “prior to September 2019.” One complicating factor is that it is very common for cyber criminals to combine different data sets and sell them off in different chunks, and Facebook has had many different data breaches over the years (most famously the Cambridge Analytica scandal).
Why the timing matters: The General Data Protection Regulation came into force in European Union countries in May 2018. If this breach happened after that, Facebook could be liable for fines and enforcement action because it failed to disclose the breach to the relevant regulators within 72 hours, as the GDPR stipulates. Ireland’s Data Protection Commission is investigating the breach. In the US, Facebook signed a deal two years ago that gave it immunity from Federal Trade Commission fines for breaches before June 2019, so if the data was stolen after that, it could face action there too.
How to check if you’ve been affected: Although passwords were not leaked, scammers could still use the information for spam emails or robocalls. If you want to see if you’re at risk, go to haveibeenpwned.com and check if your email address or phone number have been breached.
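Have I Been Pwned also exposes an API (v3) for scripted checks; the breached-account endpoint requires a paid API key and a user-agent header. A rough sketch, with the key and user-agent string as placeholders:

```python
import json
import urllib.error
import urllib.request
from urllib.parse import quote

API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def breach_url(account):
    # URL-encode the account so characters like '@' and '+' survive the path
    return API + quote(account, safe="")

def check_breaches(account, api_key):
    # Returns breach names for the account, or [] if the service
    # responds 404 (account not found in any known breach)
    req = urllib.request.Request(
        breach_url(account),
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-sketch"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return [b["Name"] for b in json.load(resp)]
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return []
        raise
```

For a one-off check, the website itself is simpler; the API is useful when, say, an IT team wants to screen every address in a company directory.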
The disruptive shifts of 2020, including covid-19 shutdowns that pushed millions of workers into remote work, forced organizations to radically rethink everything from worker well-being, business models, and operations to investments in cloud-based collaboration and communication tools.
Across every industry, last year’s best-laid plans were turned upside down. So it’s not surprising that technology and work have become, more than ever, inextricably intertwined. As business moves toward an uncertain future, companies have accelerated their efforts to use automation and other emerging technologies to boost efficiency, support worker well-being, accelerate work outputs, and achieve new outcomes.
Yet, technology investments are not enough to brace for future disruptions. In fact, an organization’s readiness depends crucially on how it prepares its work and its workforce. This is a uniquely human moment that requires a human touch.
To thrive in a world of constant change, companies must re-architect work and support their workers in ways that enable them to rise to future challenges. According to Deloitte’s 2021 Global Human Capital Trends survey of 6,000 global respondents, including 3,630 senior executives, 45% said that building an organizational culture that celebrates growth, adaptability, and resilience is critical to transforming work. To reach that goal, embracing a trio of essential human attributes—purpose, potential, and perspective—can humanize work and create lasting value for the workforce, the organization, and society at large.
Purpose: Grounding organizations in values
Purpose establishes a foundational set of organizational values that do not depend on circumstance and serve as a benchmark against which actions and decisions can be weighed. It relies on the uniquely human ability to identify where economic value and social values intersect. Organizations that are steadfast in their purpose are able to infuse meaning into work in order to mobilize workers around common, meaningful goals.
For example, Ed Bastian, CEO of Delta Air Lines, credits the airline’s sense of purpose with helping the organization through the covid-19 crisis. “When I took over as CEO, we studied what our mission was and what our purpose was, which has helped us post-pandemic because we were clear pre-pandemic,” he says. “Our people can do their very best when they have leadership support and feel connected to the organization’s purpose.”
Potential: A dynamic look at people’s capabilities
To thrive amid constant disruption, organizations need to capitalize on the potential of their workers and their teams by looking more dynamically at their people’s capabilities. Most leaders agree: 72% of the executives in the Deloitte survey said that “the ability of their people to adapt, reskill, and assume new roles” was either the most important or second most important factor in their organization’s ability to navigate future disruptions and boost speed and agility.
AstraZeneca, for example, is an organization that quickly mobilized its resources and took advantage of worker potential to meet a pressing need—developing a covid-19 vaccine. Tonya Villafana, AstraZeneca’s vice president and global franchise head of infection, credits the company’s accelerated response for its ability to tap into a varied pool of experts, both across the company and through its collaboration with the University of Oxford. In addition, AstraZeneca not only brought in top experts but also added “high performers who were really passionate and wanted to get involved” with the vaccine development team.
Perspective: Operating boldly in the face of uncertainty
In the face of uncertainty, it’s easy to be paralyzed by multiple options and choices. Perspective—quite literally, the way organizations see things—is the willingness to operate boldly in the face of the unknown, using disruption as a launching pad to imagine new opportunities and possibilities. For instance, taking the perspective that uncertainty is a valuable opportunity frees organizations to take new, fearless steps forward, even if it means veering from the usual, comfortable path. For most executives in the survey, that includes a deliberate effort to completely reimagine how, by whom, and where work gets done, and what outcomes can be achieved: 61% of respondents said their work transformation objectives would focus on reimagining work, compared with only 29% pre-pandemic.
ServiceNow is one organization that shifted direction in this way during covid-19. In March 2020, the company held a “blue sky” strategy session as a forum for leaders to discuss the future of work, digital transformation, and the company. But as they considered these issues under the cloud of the emerging pandemic, CEO Bill McDermott realized the organization needed to take a different tack. “If we can’t help the world manage the pandemic, there won’t be a blue sky,” he said. As a result, he pivoted the meeting to focus on how ServiceNow could quickly innovate and bring new products to market that would help organizations maintain business operations during the pandemic. ServiceNow quickly built and deployed four emergency response management applications as well as a suite of safe workplace applications to make returning to the workplace work for everyone.
Putting people at the heart of work decisions pays off
Re-architecting work is not about simply automating tasks and activities. At its core, it is about configuring work to capitalize on what humans can accomplish when work is based on their strengths.
In the survey, executives identified two factors related to human potential as the most transformative for the workplace: building an organizational culture that celebrates growth, adaptability, and resilience (45%), and building workforce capability through upskilling, reskilling, and mobility (41%).
Leaders should find ways to create a shared sense of purpose that mobilizes people to pull strongly in the same direction as they face the organization’s current and future challenges, whether the mission is, like Delta’s, to keep people connected, or centered on goals such as inclusivity, diversity, or transparency. They should trust people to work in ways that allow them to fulfill their potential, offering workers a degree of choice over the work they do to align their passions with organizational needs. And they should embrace the perspective that reimagining work is key to the ability to achieve new and better outcomes—in a world that is itself being constantly reimagined.
If the past year has shown us anything, it’s that putting people at the heart of a company’s decisions about work and the workforce pays off by helping companies better stay ahead of disruption. The result is an organization that doesn’t just survive but thrives in an unpredictable environment with an unknown future.
Shortly after President Biden was inaugurated, the man who was being given command of his coronavirus response had a message about what America needed to do. “We’re 43rd in the world in genomic sequencing,” said Jeff Zients at a press conference in January. “Totally unacceptable.”
The answer, he suggested, was to “do the appropriate amount of genomic sequencing, which will allow us to spot variants early, which is the best way to deal with any potential variants.”
Scientists have been sequencing the genomes of covid samples since the first identified case in Wuhan; the first mRNA vaccines were built using genetic code released publicly by Chinese scientists in January. And it’s been done at an unprecedented scale. In mid-December, 51,000 covid genomes from the US had already been decoded and posted in public repositories. That’s seven times the number of flu samples sequenced annually by the Centers for Disease Control and Prevention.
The vast majority of covid sequencing in America has been conducted at academic centers. That’s mostly because until recently it was considered an academic pursuit, tracking changes in a virus widely believed to evolve slowly and steadily.
Even in November and December, as both the UK and South Africa announced more transmissible strains and Denmark said it would kill 15 million mink to contain a mutation, many scientists and public health organizations argued that the virus was unlikely to escape vaccine-induced immunity.
“Given the small fraction of US infections that have been sequenced, the variant could already be in the United States without having been detected,” the CDC responded, in a statement published online.
“America is flying blind” quickly became a refrain, not only for scientists seeking support for their work, but for critics of the US response looking for a solvable problem. Some frustration was certainly driven by inaccurate messaging: on December 22, for instance, the New York Times reported that fewer than 40 covid genomes had been sequenced in the US since December 1. In reality, US labs submitted nearly 10,000 new sequences to public repositories in that period.
Financial and political support came quickly under the new administration, with the CDC’s $200 million “down payment” for sequencing work. Then the relief bill passed in March dedicated an eye-popping $1.75 billion to support nationwide public health programs sequencing “diseases or infections, including covid–19.”
The CDC and the WHO set a goal of sequencing 5% of positive cases to track variant spread—a number based on a pre-print study from the dominant manufacturer of covid sequencers, Illumina.
The US quickly met that goal, mostly by paying private testing labs to sequence a small number of positive samples. In the last week of March, when there were 450,000 reported new cases, US labs—including academic labs funded through other programs—submitted 16,143 anonymized sequences to GISAID, a global repository of biological data, and 6,811 to the National Center for Biotechnology Information, or NCBI.
(That period saw one of the lowest case rates in six months, however; to sequence 5% of cases during the January peak, US labs would have needed to spend well over a million dollars a day sequencing five times as many samples.)
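The arithmetic behind the 5% goal and the cost claim can be checked directly. The case and sequence counts below come from the story; the 250,000-cases-a-day January peak and the $100 per-sample sequencing cost are rough illustrative assumptions, not figures reported here.

```python
# Checking the article's arithmetic on the 5% sequencing goal.
# Case and sequence counts are from the story; the January peak
# (~250,000 cases/day) and ~$100 per-sample cost are rough
# illustrative assumptions.

weekly_cases = 450_000                    # last week of March
sequences = 16_143 + 6_811                # GISAID + NCBI submissions
share = sequences / weekly_cases
print(f"share sequenced: {share:.1%}")    # 5.1%, meeting the 5% goal

jan_peak_daily_cases = 250_000            # assumed January peak
cost_per_sample = 100                     # assumed cost, USD
daily_cost = 0.05 * jan_peak_daily_cases * cost_per_sample
print(f"daily cost at peak: ${daily_cost:,.0f}")  # $1,250,000 a day
```

Under these assumptions the January figure does indeed come out well over a million dollars a day.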
America should be an excellent place to study the genetic evolution of covid. It has widespread infections, a genetically diverse population, and the largest number of vaccinated individuals in the world. But despite the increase in genomic sequencing, some public health experts and scientists are now wondering what’s being done with all this information—and how achievable the field’s goals are.
On its site describing genomic surveillance, the CDC says that sequencing can track whether variants have learned to evade vaccines or treatments. But the agency’s surveillance sequencing program doesn’t connect any of its sequences back to the people they came from, whether they were vaccinated, or how sick they got.
The biggest argument for this kind of anonymous “surveillance” sequencing, meanwhile, is that it gives officials early warning about potential changes in case rates. But in response to news that more transmissible variants are well established in America, states have been relaxing mask mandates and reopening indoor dining.
We spoke to a number of sequencing experts with firsthand experience during the pandemic and heard the same from many of them: turning surveillance data into useful knowledge faces enormous legal, political, and infrastructural barriers in the US, some of them insurmountable.
Unless scientists and policymakers ask why they want covid sequences, and how best to put that data to use, genomic surveillance will yield diminishing returns—and much of its potential will likely be wasted.
“It’s insanely difficult to do this well in the United States,” says Lane Warmbrod, senior analyst at the Johns Hopkins Center for Health Security. “I would be very disappointed if all this money just went to getting a whole bunch of covid sequences, and no thought went toward building something that lasts.”
What surveillance sequencing can’t do
There’s no question sequencing has been revolutionary for public health, not least because the mRNA vaccines were developed using sequences made public just a month after a man turned up at a Wuhan hospital with a strange illness.
Surveillance sequencing, which identifies the genetic code of a portion of positive tests and looks for changes over time, can help researchers track the virus’s evolution. If one strain increases faster than others, researchers can home in on it for further investigation.
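As a minimal sketch of what that trend-spotting looks like in practice, the snippet below takes hypothetical weekly lineage counts (the lineage names and numbers are invented for illustration), converts them to frequencies, and flags any lineage whose share rises week over week. As the researchers quoted below caution, rising frequency alone doesn't prove increased transmissibility.

```python
# Minimal sketch of variant-frequency tracking from surveillance data.
# The lineage names and weekly counts are hypothetical illustration only.
from collections import defaultdict

# week -> {lineage: number of sequenced samples}
weekly_counts = {
    1: {"B.1.2": 90, "B.1.1.7": 10},
    2: {"B.1.2": 80, "B.1.1.7": 20},
    3: {"B.1.2": 65, "B.1.1.7": 35},
}

def lineage_shares(counts):
    """Convert raw counts to each lineage's per-week frequency."""
    shares = {}
    for week, tally in counts.items():
        total = sum(tally.values())
        shares[week] = {lin: n / total for lin, n in tally.items()}
    return shares

def growing_lineages(shares):
    """Flag lineages whose share rose every week: candidates for follow-up."""
    by_lineage = defaultdict(list)
    for week in sorted(shares):
        for lin, share in shares[week].items():
            by_lineage[lin].append(share)
    return [lin for lin, seq in by_lineage.items()
            if all(a < b for a, b in zip(seq, seq[1:]))]

print(growing_lineages(lineage_shares(weekly_counts)))  # ['B.1.1.7']
```

A flagged lineage is only a candidate for lab follow-up; confirming a real change in behavior requires the kind of clinical linkage the rest of the story describes.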
“Our surveillance is imperfect, but we are able to see when and where we’re getting transmission across a region, and identify broad-scale patterns of change,” says Duncan MacCannell, chief science officer of the Office of Advanced Molecular Detection, or OAMD, the CDC office responsible for expanding national sequencing efforts.
When asked why surveillance sequencing is so important, it’s common for authorities to respond that it can help track how a strain behaves in the real world. Often, though, such arguments conflate two things: sampling positive tests that have been anonymized, and using targeted analysis to understand specific, identifiable cases.
The CDC’s page on surveillance sequencing of covid variants, for example, claims that “routine analysis of genetic sequence data” can help detect variants with the “ability to evade natural or vaccine-induced immunity” and “cause either milder or more severe disease in people.”
Knowing when variants learn to evade immune systems can tell scientists whether they need to change vaccine formulas. But sequences can’t tell you those things unless connected with information about the people they came from. That’s often impossible under US regulations.
“Just because you’re seeing a variant more often, does that mean it’s actually more transmissible? Maybe,” says Brian Krueger, technical director of research and development at LabCorp, which has a covid sequencing contract with the CDC. “We need to do more science to understand if it’s doing something we’re actually worried about.”
That LabCorp contract is part of OAMD’s primary sequencing program, which pays large testing labs across the country to sequence thousands of positive covid tests. The project is primarily looking for “variants of concern,” strains already suspected to cause worse outcomes or spread faster. It’s also tracking when and where different branches of covid-19’s family tree are spreading, and which genetic changes crop up repeatedly. If one branch of the virus grows quicker than others, or one mutation keeps showing up in different families, it can be flagged for attention.
At the same time, OAMD is collecting raw samples from public health labs around the country to study in the lab, growing the viral samples in dishes and pitting them against therapeutics and patients’ antibodies. Those test tube studies are the source of most recent headlines about variants getting around protection conferred by vaccines. They also tend to dramatically undersell immune protection against illness, which has many overlapping mechanisms.
But because of patient privacy and other requirements put in place for regulatory oversight, all of these samples, as well as all the sequences they collect from labs, are deliberately de-identified: they have no connection to the patient in question.
Taking the 10,000 foot view
According to MacCannell, the OAMD has no intention of contextualizing its de-identified data with clinical information.
“Those contracts are set up to give us the 10,000 foot view,” says MacCannell.
Even if it wanted to combine surveillance sequences with patient information in its analyses, the agency would be fighting a massive uphill battle. In the US, most patient records—test results, immunization information, hospital records—are scattered across many unconnected databases. Whether or not the owners of that data are interested in turning the information over to the government, it would typically require each individual to give consent, a very laborious undertaking.
Instead of trying to work through these issues at the national level, the sequencing contracts allow individual public health agencies to request the names and contact information of people who have tested positive for variants of concern. But that just pushes the same problems of data ownership down the chain.
“Some states are very good and want to know a lot about variants that are circulating in their state,” says LabCorp’s Brian Krueger. “The other states are not.”
Public health epidemiologists often have little experience with bioinformatics—the use of software to analyze large datasets like genomic sequences. Only a few agencies have pre-existing sequencing programs; even if more did, having each jurisdiction analyze just a small slice of the dataset undercuts how much knowledge can be gleaned about real-world behavior.
Getting around those issues—making it easier to connect sequences and clinical metadata on a large scale—would require more than just root-and-branch reform of privacy regulations, however. It would need a reorganization of the entire healthcare and public health systems in the US, where each of the 64 public health agencies operates as a fiefdom, and there is no centralization of information or power.
“Metadata is the single biggest uncracked nut,” says Jonathan Quick, managing director of pandemic response, preparedness, and prevention at the Rockefeller Foundation. (The Rockefeller Foundation helps fund coverage at MIT Technology Review, although it has no editorial oversight.) Because it’s so hard for public health to put together big enough datasets to really understand real-world variant behavior, our understanding has to come from vaccine manufacturers and hospitals adding sequencing to their own clinical trials, he says.
It’s frustrating to him that so many huge datasets of useful information already exist in electronic medical records, immunization registries, and other sources, but can’t easily be used.
“There’s a whole lot more that could be learned, and learned faster, without the shackles we put on the use of that data,” says Quick. “We can’t just rely on the vaccine companies to do surveillance.”
Boosting state-level bioinformatics
If public health labs are expected to focus more on tracking and understanding variants on their own, they’ll need all the help they can get. Doing something about variants case-by-case, after all, is a public health job, while doing something about variants on a policy level is a political one.
Public health labs generally use genomics to expose otherwise-hidden information about outbreaks, or as part of track and trace efforts. In the past, sequencing has been used to connect E. coli outbreaks to specific farms, identify and interrupt chains of HIV transmission, isolate US Ebola cases, and follow annual flu patterns.
Even those with well-established programs tend to use genomics sparingly. The cost of sequencing has dropped precipitously over the last decade, but the process is still not cheap, particularly for cash-strapped state and local health departments. The machines themselves cost hundreds of thousands of dollars to buy, and more to run: Illumina, one of the biggest makers of sequencing equipment, says labs spend an average of $1.2 million annually on supplies for each of its machines.
Health agencies don’t just need money; they also need expertise. Surveillance requires highly trained bioinformaticians to turn a sequence’s long strings of letters into useful information, as well as people to explain the results to officials, and convince them to turn any lessons learned into policy.
Fortunately, the OAMD has been working to support state and local health departments as they try to understand their sequencing data, employing regional bioinformaticians to consult with public health officers and facilitating agencies’ efforts to share their experiences.
It is also pouring hundreds of millions into building and supporting those agencies’ own sequencing programs—not just for covid, but for all pathogens.
But many of those agencies are facing pressure to sequence as many covid genomes as possible. Without a cohesive strategy for collecting and analyzing data, it’s unclear how much utility those programs will have.
“We’ll miss a ton of opportunities if we just give health departments money to set up programs without having a federal strategy so that everyone knows what they’re doing,” says Warmbrod.
Initial visions, usurped
Mark Pandori is director of the Nevada state public health laboratory, one of the programs OAMD supports. He has been a strong proponent of genomic surveillance for years. Before moving to Reno, he ran the public health lab in Alameda County, California, where he helped pioneer a program using sequencing to track how infections were being passed around hospitals.
Turning sequences into usable data is the biggest challenge for public health genomics programs, he says.
“The CDC can say, ‘go buy a bunch of sequencing equipment, do a whole bunch of sequencing.’ But it doesn’t do anything unless the consumers of that data know how to use it, and know how to apply it,” he says. “I’m talking to you about the robotics we need to get things sequenced every day, but health departments just need a simple way to know if cases are related.”
When it comes to variants, public health labs are under many of the same pressures the CDC faces: everyone wants to know what variants are circulating, whether or not they can do anything with the information.
Pandori launched his covid sequencing program hoping to cut down on the labor needed to investigate potential covid outbreaks, quickly identifying whether cases caught near each other were related or coincidental.
His lab was the first in North America to identify a patient reinfected with covid-19, and later found the B.1.351 variant in a hospitalized man who had just come back from South Africa. With rapid contact tracing, the health department was able to prevent it from spreading.
But county health departments have shifted their priorities away from those boots-on-the-ground investigations in response to public focus on watching for known variants of concern, he says. It’s a move he’s quite skeptical of.
“My initial vision of using it as an epidemiological and disease investigation tool has been usurped by using this as a variant scan,” says Pandori. “It’s kind of the new phase in lab testing. We’ve gone from not having enough testing, period, to not having enough genetic sequencing, I guess. That’s what people are saying now.”
(Pandori is not the only one whose research interests have been waylaid by a focus on surveillance. Krueger, of LabCorp, built the company’s covid sequencing program hoping to study how variants evolve within individual patients. “The currency these days seems to be, how many full genomes can you submit to the different databases?” he says.)
Each month, Pandori’s lab sends 40 samples to the CDC, as requested. The team also sequences 64 of their own samples a day. When they don’t have enough recent samples, they dip into the archives; so far they’ve gotten all the way back to samples from November.
As for sequencing 5% of Nevada’s cases, the majority of tests in the state are conducted by private labs, which generally discard the samples before they can be sequenced. “Specimens that get tested by private labs, or antigen testing, those are lost to surveillance,” he says.
Pandori says he hasn’t heard from the CDC or the public health department about variant data from the CDC’s labs program.
Do it because you have a question
The US may face unique difficulties in connecting variant sequences to their real-world behavior, but every system faces its own challenges. Even countries with well-developed national healthcare systems are struggling to wrangle the enormous amounts of data it will take to really understand what these genetic changes are doing.
In fact, there are few governments doing the work, and perhaps only one doing it successfully at scale.
COG-UK, a consortium of academic and government labs in Britain, organized the first major covid sequencing effort in the world, and is widely considered the shining star of the field. Its scientists have not only sequenced almost twice as many samples as the US, but were also the first to identify and characterize a variant with increased transmissibility.
They’ve done it all for under £50 million ($69 million), according to Leigh Jackson, the consortium’s scientific project manager. “It’s quite eye-watering to compare our costs with what the private sector is charging for these types of services,” he says, noting that most of the labor has come from academic labs, which are primarily charging them for materials.
“Overwhelmingly, objective number one is going to be awareness of vaccine escape mutations in the real world. It’s going to happen. Because we have such widespread coverage and capacity now, we should be able to see them pretty quickly,” says Jackson.
That work is possible because public health and medicine in the UK are both nationalized, so tests and vaccine records are all tagged with patients’ unique NHS number. COG-UK only needs a few data-sharing agreements to link all 400,000 samples they’ve sequenced back to vaccine lists and top-level hospital data. That’s not to say combining those datasets is easy; the group is currently building out a streamlined system to connect all of the other disconnected systems together, automating the upload of new data and making it easier for partners to access.
Jackson is happy to hear about the expansion of well-designed sequencing programs, but he takes issue with mass sequencing done without clearly defined goals.
“Don’t do it because it’s a vote winner, or it looks good, or it makes people happy. Do it because you have a question,” he says. “If you don’t, then please stop using up all of our Illumina reagents. Our supply chains have gone down the drain since the US announced they were going to up sequencing capacity.”
From sample to the “so what?”
In public health—unlike in basic research—knowledge is only power if it comes with action. It’s what Quick from the Rockefeller Foundation calls “going from the sample to the ‘so what?’”
Sequences need to be connected to immunization data on a massive scale to say anything about vaccine efficacy. Decision-makers need to respond to variants if sequencing them is going to matter. (Right now, many US states are reopening movie theaters and indoor dining, despite clear evidence more transmissible strains are driving cases up around the country.)
Warmbrod from Johns Hopkins hopes this money will be used proactively, with an eye toward the future, instead of reactively.
“When I go back and look at papers that are six, seven years old, it’s like, ‘Oh god, we’ve known about this exact problem for years, and we did nothing,’” she says. “Whatever tools and infrastructure we build now, they can be used for a lot more than just covid.”
MacCannell feels the same way. “Our role is really to figure out how to expand genomic surveillance across the US public health system in ways that aren’t just covid-specific. We want to take lessons learned, and apply them broadly.”
It’s to everyone’s benefit if this vast injection of money is used not just in response to one crisis, but in preparation for the next one. It offers a real opportunity to fix cracks in our public health system, and build stable institutions that bridge health disparities, respond strongly to potential threats, and keep functioning in a crisis.
At the same time, if the CDC is to make the most of its role as a national public health agency, it should be using all the resources at its disposal—including this massive repository of real-world variant sequences—in tracking real-world behavior, like evading vaccine-induced immunity.
Failing to do so could have deadly consequences.
“We’re setting out to immunize the planet, and I’m quite concerned,” says Quick. “We need to share this data so we don’t invest a huge amount of time and effort in immunizing as many people as possible, only to find it was much less effective than we thought. That’s more lives lost, and more credibility lost for vaccines.”
This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.