MIT Top Stories

How AI is reinventing what computers are

Fri, 10/22/2021 - 06:00

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there’s something remarkable going on. 

Google’s latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it’s changing how we think about computing.

What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes. 

More haste, less speed

The first change concerns how computers—and the chips that control them—are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore’s Law. 

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second. 
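
To get a feel for the shape of that workload, here is a rough sketch using PyTorch (assuming it is installed; the layer sizes are arbitrary and chosen only for illustration). A single neural-network layer applied to a batch of inputs is one enormous matrix multiplication made of billions of independent, low-precision multiply-adds, exactly the kind of work GPUs and other parallel chips are built for.

```python
import torch

# Use a GPU and half precision if available; otherwise fall back to CPU float32.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# One layer applied to a batch of 4,096 inputs: a single matrix multiply
# containing roughly 17 billion multiply-adds, all independent of one another.
inputs = torch.randn(4096, 2048, device=device, dtype=dtype)
weights = torch.randn(2048, 2048, device=device, dtype=dtype)
outputs = inputs @ weights  # parallel hardware executes these all at once
print(outputs.shape)  # torch.Size([4096, 2048])
```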

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware. 

For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs. 

In the last couple of years, Google has made TPUs available to other companies, and these chips—as well as similar ones being developed by others—are becoming the default inside the world’s data centers. 

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips. 

Show, don’t tell

The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will be training them, says Chris Bishop, head of Microsoft Research in the UK. 

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking. 
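
To make the contrast concrete, here is a minimal sketch (using scikit-learn; the spam-filter task, rule, and training examples are invented for illustration). In the first version a human writes the decision logic by hand; in the second, the programmer supplies labeled examples and the model learns its own rules.

```python
# Rule-based: a human writes the decision logic explicitly.
def is_spam_rules(subject: str) -> bool:
    return "free money" in subject.lower() or subject.isupper()

# Learning-based: the programmer supplies examples; the model infers the rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subjects = ["FREE MONEY NOW", "Lunch on Friday?", "You won a prize", "Quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(subjects, labels)                       # "training": the rules are learned from data
print(model.predict(["Claim your free prize"]))   # e.g. [1]
```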

Examples of this are already commonplace: speech recognition and image identification are now standard features on smartphones. Other examples made headlines, as when AlphaZero taught itself to play Go better than humans. Similarly, AlphaFold cracked open a biology problem—working out how proteins fold—that people had struggled with for decades. 

For Bishop, the next big breakthroughs are going to come in molecular simulation: training computers to manipulate the properties of matter, potentially making world-changing leaps in energy usage, food production, manufacturing, and medicine. 

Breathless promises like this are made often. It is also true that deep learning has a track record of surprising us. Two of the biggest leaps of this kind so far—getting computers to behave as if they understand language and to recognize what is in an image—are already changing how we use them.

Computer knows best

For decades, getting a computer to do something meant typing in a command, or at least clicking a button. 

Machines no longer need a keyboard or screen for humans to interact with. Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version. But as they proliferate, we are going to want to spend less time telling them what to do. They should be able to work out what we need without being told.

This is the shift from number-crunching to decision-making that Dubey sees as defining the new era of computing.  

Rus wants us to embrace the cognitive and physical support on offer. She imagines computers that tell us things we need to know when we need to know them and intervene when we need a hand. “When I was a kid, one of my favorite movie [scenes] in the whole world was ‘The Sorcerer’s Apprentice,’” says Rus. “You know how Mickey summons the broom to help him tidy up? We won’t need magic to make that happen.”

We know how that scene ends. Mickey loses control of the broom and makes a big mess. Now that machines are interacting with people and integrating into the chaos of the wider world, everything becomes more uncertain. The computers are out of their boxes.

Decarbonizing industries with connectivity and 5G

Wed, 10/20/2021 - 17:00

Around the world, citizens, governments, and corporations are mobilizing to reduce carbon emissions. The unprecedented and ongoing climate disasters have put the necessity to decarbonize into sharp relief. In 2021 alone these climate emergencies included a blistering “heat dome” of nearly 50 °C in the normally temperate Pacific Northwest of the United States and Canada, deadly and destructive flooding in China and across Europe, and wildfires globally from Turkey to California, the latter of which damaged close to 1 million acres.

The United Nations Intergovernmental Panel on Climate Change’s sixth climate change report—an aggregated assessment of scientific research prepared by some 300 scientists across 66 countries—has served as the loudest and clearest wake-up call to date on the global warming crisis. The panel unequivocally attributes the increase in the earth’s temperature—it has risen by 1.1 °C since the Industrial Revolution—to human activity. Without substantial and immediate reductions in carbon dioxide and other greenhouse gas emissions, temperatures will rise between 1.5 °C and 2 °C before the end of the century. That, the panel posits, will lead all of humanity to a “greater risk of passing through ‘tipping points,’ thresholds beyond which certain impacts can no longer be avoided even if temperatures are brought back down later on.”

Corporations and industries must therefore redouble their greenhouse gas emissions reduction and removal efforts with speed and precision—but to do this, they must also commit to deep operational and organizational transformation. Cellular infrastructure, particularly 5G, is one of the many digital tools and technology-enabled processes organizations have at their disposal to accelerate decarbonization efforts.  

5G and other cellular technology can enable increasingly interconnected supply chains and networks, improve data sharing, optimize systems, and increase operational efficiency. These capabilities could soon contribute to an exponential acceleration of global efforts to reduce carbon emissions.

Industries such as energy, manufacturing, and transportation could have the biggest impact on decarbonization efforts through the use of 5G, as they are some of the biggest greenhouse-gas-emitting industries, and all rely on connectivity to link to one another through communications network infrastructure.

The higher performance and improved efficiency of 5G—which delivers multi-gigabit peak data speeds, ultra-low latency, increased reliability, and increased network capacity—could help businesses and public infrastructure providers focus on business transformation and the reduction of harmful emissions. This requires effective digital management and monitoring of distributed operations with resilience and analytic insight. 5G will help factories, logistics networks, power companies, and others operate more efficiently, more consciously, and more purposefully in line with their explicit sustainability objectives, through better insight and more powerful network configurations.

This report, “Decarbonizing industries with connectivity & 5G,” argues that the capabilities enabled by broadband cellular connectivity (primarily, though not exclusively, through 5G network infrastructure) are a unique, powerful, and immediate enabler of carbon reduction efforts. They have the potential to transform and accelerate decarbonization, as increasingly interconnected supply chains, transportation systems, and energy networks share data to increase efficiency and productivity, optimizing systems for lower carbon emissions.

Download the full report.

Rediscover trust in cybersecurity

Wed, 10/20/2021 - 14:47

The world has changed dramatically in a short amount of time—changing the world of work along with it. The new hybrid remote and in-office work world has ramifications for tech—specifically cybersecurity—and signals that it’s time to acknowledge just how intertwined humans and technology truly are.

Enabling a fast-paced, cloud-powered collaboration culture is critical to rapidly growing companies, positioning them to out-innovate, outperform, and outsmart their competitors. Achieving this level of digital velocity, however, comes with a rapidly growing cybersecurity challenge that is often overlooked or deprioritized: insider risk, when a team member accidentally—or not—shares data or files outside of trusted parties. Ignoring the intrinsic link between employee productivity and insider risk can hurt both an organization’s competitive position and its bottom line. 

You can’t treat employees the same way you treat nation-state hackers

Insider risk includes any user-driven data exposure event—security, compliance or competitive in nature—that jeopardizes the financial, reputational or operational well-being of a company and its employees, customers, and partners. Thousands of user-driven data exposure and exfiltration events occur daily, stemming from accidental user error, employee negligence, or malicious users intending to do harm to the organization. Many users create insider risk accidentally, simply by making decisions based on time and reward, sharing and collaborating with the goal of increasing their productivity. Other users create risk due to negligence, and some have malicious intentions, like an employee stealing company data to bring to a competitor. 

From a cybersecurity perspective, organizations need to treat insider risk differently than external threats. With threats like hackers, malware, and nation-state threat actors, the intent is clear—it’s malicious. But the intent of employees creating insider risk is not always clear—even if the impact is the same. Employees can leak data by accident or due to negligence. Fully accepting this truth requires a mindset shift for security teams that have historically operated with a bunker mentality—under siege from the outside, holding their cards close to the vest so the enemy doesn’t gain insight into their defenses to use against them. Employees are not the adversaries of a security team or a company—in fact, they should be seen as allies in combating insider risk.

Transparency feeds trust: Building a foundation for training

All companies want to keep their crown jewels—source code, product designs, customer lists—from ending up in the wrong hands. Imagine the financial, reputational, and operational risk that could come from material data being leaked before an IPO, acquisition, or earnings call. Employees play a pivotal role in preventing data leaks, and there are two crucial elements to turning employees into insider risk allies: transparency and training. 

Transparency may feel at odds with cybersecurity. For cybersecurity teams that operate with an adversarial mindset appropriate for external threats, it can be challenging to approach internal threats differently. Transparency is all about building trust on both sides. Employees want to feel that their organization trusts them to use data wisely. Security teams should always start from a place of trust, assuming the majority of employees’ actions have positive intent. But, as the saying goes in cybersecurity, it’s important to “trust, but verify.” 

Monitoring is a critical part of managing insider risk, and organizations should be transparent about this. CCTV cameras are not hidden in public spaces. In fact, they are often accompanied by signs announcing surveillance in the area. Leadership should make it clear to employees that their data movements are being monitored—but that their privacy is still respected. There is a big difference between monitoring data movement and reading all employee emails.

Transparency builds trust—and with that foundation, an organization can focus on mitigating risk by changing user behavior through training. At the moment, security education and awareness programs are niche. Phishing training is likely the first thing that comes to mind due to the success it’s had moving the needle and getting employees to think before they click. Outside of phishing, there is not much training for users to understand what, exactly, they should and shouldn’t be doing.

For a start, many employees don’t even know where their organizations stand. What applications are they allowed to use? What are the rules of engagement for those apps if they want to use them to share files? What data can they use? Are they entitled to that data? Does the organization even care? Cybersecurity teams deal with a lot of noise made by employees doing things they shouldn’t. What if you could cut down that noise just by answering these questions?

Training employees should be both proactive and responsive. Proactively, in order to change employee behavior, organizations should provide both long- and short-form training modules to instruct and remind users of best behaviors. Additionally, organizations should respond with a micro-learning approach using bite-sized videos designed to address highly specific situations. The security team needs to take a page from marketing, focusing on repetitive messages delivered to the right people at the right time. 

Once business leaders understand that insider risk is not just a cybersecurity issue, but one that is intimately intertwined with an organization’s culture and has a significant impact on the business, they will be in a better position to out-innovate, outperform, and outsmart their competitors. In today’s hybrid remote and in-office work world, the human element that exists within technology has never been more significant. That’s why transparency and training are essential to keep data from leaking outside the organization. 

This content was produced by Code42. It was not written by MIT Technology Review’s editorial staff.

Surgeons have successfully tested a pig’s kidney in a human patient

Wed, 10/20/2021 - 06:32

The news: Surgeons have successfully attached a pig’s kidney to a human patient and watched it start to work, the AP reported today. The pig had been genetically engineered so that its organ was less likely to be rejected. The feat is a potentially huge milestone in the quest to one day use animal organs for human transplants and shorten waiting lists.

How it worked: The surgical team, from NYU Langone Health, attached the pig kidney to blood vessels outside the body of a brain-dead recipient, then observed it for two days. The family agreed to the experiment before the woman was to be taken off life support, the AP reported. The kidney functioned normally—filtering waste and producing urine—and didn’t show signs of rejection during the short observation period. 

The reception: The research was conducted last month and is yet to be published in a journal or peer-reviewed, but external experts say it represents a major advance. “There is no doubt that this is a highly significant breakthrough,” says Darren K Griffin, professor of genetics at the University of Kent, UK. “The research team were cautious, using a patient who had suffered brain death, attaching the kidney to the outside of the body and closely monitoring for only a limited amount of time. There is thus a long way to go and much to discover,” he added. 

“This is a huge breakthrough. It’s a big, big deal,” Dr Dorry Segev, professor of transplant surgery at Johns Hopkins School of Medicine, who was not involved in the research, told the New York Times. However, he added: “We need to know more about the longevity of the organ.”

The background: In recent years, research has increasingly zeroed in on pigs as the most promising avenue to help address the organ shortage, but it has faced a number of obstacles, most prominently the fact that a sugar in pig cells triggers an aggressive rejection response in humans.

The researchers got around this by genetically altering the donor pig to knock out the gene that encodes the sugar molecule that causes the rejection response. The pig was genetically engineered by Revivicor, one of several biotech companies working to develop pig organs to transplant into humans. 

The big prize: There is a dire need for more kidneys for transplants. More than 100,000 people in the US are currently waiting for a kidney transplant, and 13 of them die every day, according to the National Kidney Foundation. Genetically engineered pigs could offer a crucial lifeline for these people, if the approach tested at NYU Langone can work for much longer periods.

Getting value from your data shouldn’t be this hard

Tue, 10/19/2021 - 12:00

The potential impact of the ongoing worldwide data explosion continues to excite the imagination. A 2018 report estimated that every second of every day, every person produces 1.7 MB of data on average—and annual data creation has more than doubled since then and is projected to more than double again by 2025. A report from McKinsey Global Institute estimates that skillful uses of big data could generate an additional $3 trillion in economic activity, enabling applications as diverse as self-driving cars, personalized health care, and traceable food supply chains.
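
A quick back-of-the-envelope calculation, using only the 1.7 MB-per-second figure quoted above, shows the scale per person:

```python
# Back-of-the-envelope arithmetic on the 2018 estimate cited above.
mb_per_second = 1.7
seconds_per_day = 60 * 60 * 24            # 86,400

mb_per_day = mb_per_second * seconds_per_day
print(f"{mb_per_day:,.0f} MB per person per day (~{mb_per_day / 1000:,.0f} GB)")
# -> 146,880 MB per person per day (~147 GB)
```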

But adding all this data to the system is also creating confusion about how to find it, use it, manage it, and legally, securely, and efficiently share it. Where did a certain dataset come from? Who owns what? Who’s allowed to see certain things? Where does it reside? Can it be shared? Can it be sold? Can people see how it was used?

As data’s applications grow and become more ubiquitous, producers, consumers, and owners and stewards of data are finding that they don’t have a playbook to follow. Consumers want to connect to data they trust so they can make the best possible decisions. Producers need tools to share their data safely with those who need it. But technology platforms fall short, and there are no real common sources of truth to connect both sides.

How do we find data? When should we move it?

In a perfect world, data would flow freely like a utility accessible to all. It could be packaged up and sold like raw materials. It could be viewed easily, without complications, by anyone authorized to see it. Its origins and movements could be tracked, removing any concerns about nefarious uses somewhere along the line.

Today’s world, of course, does not operate this way. The massive data explosion has created a long list of issues and opportunities that make it tricky to share chunks of information.

With data being created nearly everywhere within and outside of an organization, the first challenge is identifying what is being gathered and how to organize it so it can be found.

A lack of transparency and sovereignty over stored and processed data and infrastructure opens up trust issues. Today, moving data to centralized locations from multiple technology stacks is expensive and inefficient. The absence of open metadata standards and widely accessible application programming interfaces can make it hard to access and consume data. The presence of sector-specific data ontologies can make it hard for people outside the sector to benefit from new sources of data. Multiple stakeholders and difficulty accessing existing data services can make it hard to share without a governance model.

Europe is taking the lead

Despite the issues, data-sharing projects are being undertaken on a grand scale. One that’s backed by the European Union and a nonprofit group is creating an interoperable data exchange called Gaia-X, where businesses can share data under the protection of strict European data privacy laws. The exchange is envisioned as a vessel to share data across industries and a repository for information about data services around artificial intelligence (AI), analytics, and the internet of things.

Hewlett Packard Enterprise recently announced a solution framework to support companies, service providers, and public organizations’ participation in Gaia-X. The dataspaces platform, which is currently in development and built on open standards and cloud-native principles, democratizes access to data, data analytics, and AI by making them more accessible to domain experts and everyday users. It provides a place where domain experts can more easily identify trustworthy datasets and securely perform analytics on operational data—without always requiring the costly movement of data to centralized locations.

By using this framework to integrate complex data sources across IT landscapes, enterprises will be able to provide data transparency at scale, so everyone—whether a data scientist or not—knows what data they have, how to access it, and how to use it in real time.

Data-sharing initiatives are also on the top of enterprises’ agendas. One important priority enterprises face is the vetting of data that’s being used to train internal AI and machine learning models. AI and machine learning are already being used widely in enterprises and industry to drive ongoing improvements in everything from product development to recruiting to manufacturing. And we’re just getting started. IDC projects the global AI market will grow from $328 billion in 2021 to $554 billion in 2025.

To unlock AI’s true potential, governments and enterprises need to better understand the collective legacy of all the data that is driving these models. How do AI models make their decisions? Do they have bias? Are they trustworthy? Have untrustworthy individuals been able to access or change the data that an enterprise has trained its model against? Connecting data producers to data consumers more transparently and with greater efficiency can help answer some of these questions.

Building data maturity

Enterprises aren’t going to solve how to unlock all of their data overnight. But they can prepare themselves to take advantage of technologies and management concepts that help to create a data-sharing mentality. They can ensure that they’re developing the maturity to consume or share data strategically and effectively rather than doing it on an ad hoc basis.

Data producers can prepare for wider distribution of data by taking a series of steps. They need to understand where their data is and understand how they’re collecting it. Then, they need to make sure the people who consume the data have the ability to access the right sets of data at the right times. That’s the starting point.

Then comes the harder part. If a data producer has consumers—which can be inside or outside the organization—they have to connect to the data. That’s both an organizational and a technology challenge. Many organizations want governance over data sharing with other organizations. The democratization of data—at least being able to find it across organizations—is an organizational maturity issue. How do they handle that?

Companies that contribute to the auto industry actively share data with vendors, partners, and subcontractors. It takes a lot of parts—and a lot of coordination—to assemble a car. Partners readily share information on everything from engines to tires to web-enabled repair channels. Automotive dataspaces can serve upwards of 10,000 vendors. But in other industries, it might be more insular. Some large companies might not want to share sensitive information even within their own network of business units.

Creating a data mentality

Companies on either side of the consumer-producer continuum can advance their data-sharing mentality by asking themselves these strategic questions:

  • If enterprises are building AI and machine learning solutions, where are the teams getting their data? How are they connecting to that data? And how do they track that history to ensure trustworthiness and provenance of data?
  • If data has value to others, what is the monetization path the team is taking today to expand on that value, and how will it be governed?
  • If a company is already exchanging or monetizing data, can it authorize a broader set of services on multiple platforms—on premises and in the cloud?
  • For organizations that need to share data with vendors, how is the coordination of those vendors to the same datasets and updates getting done today?
  • Do producers want to replicate their data or force people to bring models to them? Datasets might be so large that they can’t be replicated. Should a company host software developers on its platform where its data is and move the models in and out?
  • How can workers in a department that consumes data influence the practices of the upstream data producers within their organization?

Taking action

The data revolution is creating business opportunities—along with plenty of confusion about how to search for, collect, manage, and gain insights from that data in a strategic way. Data producers and data consumers are becoming more disconnected from each other. HPE is building a platform that supports both on-premises and public cloud environments, using open source as the foundation and solutions like the HPE Ezmeral Software Platform to provide the common ground both sides need to make the data revolution work for them.

Read the original article on Enterprise.nxt.

This content was produced by Hewlett Packard Enterprise. It was not written by MIT Technology Review’s editorial staff.

These weird virtual creatures evolve their bodies to solve problems

Tue, 10/19/2021 - 06:11

An endless variety of virtual creatures scamper and scuttle across the screen, struggling over obstacles or dragging balls toward a target. They look like half-formed crabs made of sausages—or perhaps Thing, the disembodied hand from The Addams Family. But these “unimals” (short for “universal animals”) could in fact help researchers develop more general-purpose intelligence in machines. 

Agrim Gupta of Stanford University and his colleagues (including Fei-Fei Li, who co-directs the Stanford Artificial Intelligence Lab and led the creation of ImageNet) used these unimals to explore two questions that often get overlooked in AI research: how intelligence is tied to the way bodies are laid out, and how abilities can be developed through evolution as well as learned.

“This work is an important step in a decades-long attempt to better understand the body-brain relationship in robots,” says Josh Bongard, who studies evolutionary robotics at the University of Vermont and was not involved in the work.

If researchers want to re-create intelligence in machines, they might be missing something, says Gupta. In biology, intelligence arises from minds and bodies working together. Aspects of body plans, such as the number and shape of limbs, determine what animals can do and what they can learn. Think of the aye-aye, a lemur that evolved an elongated middle finger to probe deep into holes for grubs.

AI typically focuses only on the mind part, building machines to do tasks that can be mastered without a body, such as using language, recognizing images, and playing video games. But this limited repertoire could soon get old. Wrapping AIs in bodies that are adapted to specific tasks could make it easier for them to learn a wide range of new skills. “One thing every single intelligent animal on the planet has in common is a body,” says Bongard. “Embodiment is our only hope of making machines that are both smart and safe.”

Unimals have a head and multiple limbs. To see what they could do, the team developed a technique called deep evolutionary reinforcement learning (DERL). The unimals are first trained using reinforcement learning to complete a task in a virtual environment, such as walking across different types of terrain or moving an object.

The unimals that perform the best are then selected and mutations are introduced, and the resulting offspring are placed back in the environment, where they learn the same tasks from scratch. The process repeats hundreds of times: evolve and learn, evolve and learn.

The mutations unimals are subjected to involve adding or removing limbs, or changing the length or flexibility of limbs. The number of possible body configurations is vast: there are 10^18 unique variations with 10 limbs or fewer. Over time, the unimals’ bodies adapt to different tasks. Some unimals have evolved to move across flat terrain by falling forwards; some evolved a lizard-like waddle; others evolved pincers to grip a box.
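
In outline, the evolve-and-learn loop looks something like the toy sketch below. This is not the authors’ DERL code: here a body plan is reduced to a list of limb lengths, and learn_task is a stand-in for the per-lifetime reinforcement learning that, in the real system, happens in a physics simulator.

```python
import random

def learn_task(body):
    """Stand-in for reinforcement learning: score how well this body plan
    'learns' the task (toy objective: limb lengths summing close to 4.0)."""
    return -abs(sum(body) - 4.0) + random.gauss(0, 0.1)

def mutate(body):
    """Mutate a body plan: add a limb, remove a limb, or resize one."""
    body = list(body)
    op = random.choice(["add", "remove", "resize"])
    if op == "add" and len(body) < 10:
        body.append(random.uniform(0.1, 1.0))
    elif op == "remove" and len(body) > 1:
        body.pop(random.randrange(len(body)))
    else:
        i = random.randrange(len(body))
        body[i] = max(0.1, body[i] + random.gauss(0, 0.2))
    return body

# Evolve and learn, evolve and learn.
population = [[random.uniform(0.1, 1.0) for _ in range(4)] for _ in range(32)]
for generation in range(100):
    scores = [learn_task(body) for body in population]                      # each body learns from scratch
    ranked = [b for _, b in sorted(zip(scores, population), reverse=True)]  # best performers first
    survivors = ranked[:16]
    offspring = [mutate(random.choice(survivors)) for _ in range(16)]       # mutated offspring re-enter
    population = survivors + offspring
```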

The researchers also tested how well the evolved unimals could adapt to a task they hadn’t seen before, an essential feature of general intelligence. Those that had evolved in more complex environments, containing obstacles or uneven terrain, were faster at learning new skills, such as rolling a ball instead of pushing a box. They also found that DERL selected body plans that learned faster, even though there was no selective pressure to do so. “I find this exciting because it shows how deeply body shape and intelligence are connected,” says Gupta.

“It’s already known that certain bodies accelerate learning,” says Bongard. “This work shows that AI can search for such bodies.” Bongard’s lab has developed robot bodies that are adapted to particular tasks, such as giving callus-like coatings to feet to reduce wear and tear. Gupta and his colleagues extend this idea, says Bongard. “They show that the right body can also speed up changes in the robot’s brain.”

Ultimately, this technique could reverse the way we think of building physical robots, says Gupta. Instead of starting with a fixed body configuration and then training the robot to do a particular task, you could use DERL to let the optimal body plan for that task evolve and then build that.

Gupta’s unimals are part of a broad shift in how researchers are thinking about AI. Instead of training AIs on specific tasks, such as playing Go or analyzing a medical scan, researchers are starting to drop bots into virtual sandboxes—such as POET, OpenAI’s virtual hide-and-seek arena, and DeepMind’s virtual playground XLand—and getting them to learn how to solve multiple tasks in ever-changing, open-ended training dojos. Instead of mastering a single challenge, AIs trained in this way learn general skills.

For Gupta, free-form exploration will be key for the next generation of AIs. “We need truly open-ended environments to create intelligent agents,” he says.

The challenges of hybrid cloud adoption find answers in HCI

Mon, 10/18/2021 - 10:00

Christine McMonigal is director of hyperconverged marketing at Intel Corporation.

Never before has the need for businesses to make progress along their digital journeys been more pressing, with more options to evaluate, more urgencies to respond to, and more complexity to navigate. Shifting demands, fueled in part by the covid-19 pandemic, have driven the need for businesses to make the leap to digitization at a pace never seen before. IDC estimates that as early as 2022, 46% of enterprise products and services will be digitally delivered, creating pressure on companies to pursue new ways of expediting digital transformation. Forward-thinking leaders have started this journey, ushering in a massive migration to the cloud, which serves as the heartbeat of digital transformation and establishes the foundation for future innovation.

But if digital transformation were easy, then every organization would be doing it. Instead, three common challenges occur and can often stand in the way of an organization’s progress:

Multiple cloud architectures. Apps and data continue to multiply and reside in diverse clouds. Managing them to provide low latency, availability, and data sovereignty remains a complex undertaking.

Balancing old with new. In some cases, the urgent and rapid migration to the cloud has been costly. Applications or workloads that were moved to the cloud may have been better suited to a local environment. Businesses need more flexibility to update their legacy apps to become cloud-native over time. Simultaneously, on-premises infrastructure needs to be modernized to make it more performant, scalable, and efficient—in effect, to make it more cloud-like.

Security. The modern workforce is more decentralized, increasing the attack surface for organizations. This requires a new and dynamic security strategy that is holistic.

So, what’s the answer for enterprises to tackle these challenges? A pragmatic foundation for a modern digital infrastructure is hybrid cloud. It optimizes application deployments across locations, providing the ultimate level of agility based on changing business requirements. The on-premises side of hybrid cloud is best deployed via hyperconverged infrastructure, or HCI, which enables modernization that eases the transition by blending old and new.

By fusing virtualized compute and storage resources together with intelligent software on standard server hardware, this approach creates flexible building blocks intended to replace or optimize legacy infrastructure while providing greater agility. With this approach, many parts are brought together to offer a version of cloud infrastructure that features dynamic scalability and simplified operations.

Achieving agility through hybrid cloud

Delivering high levels of performance is a requirement for IT environments that rely on mission-critical databases and latency-sensitive applications. This is especially important in dynamic environments where data growth is constant and continuous access is a requirement, often compounded by demand for new analyses and insights. The ability to easily meet these performance and scalability requirements is essential for any business deploying hyperconverged infrastructures.  

Microsoft and Intel are working together to take the best of software and combine it with the best of hardware technologies to provide organizations with a flexible infrastructure that can handle today’s demands with agility and set the pace for digital transformation.

Flexibility coupled with seamless management

Solving for the challenge of navigating and streamlining multiple cloud architectures requires a control plane that offers simplified management of both on-premises and public cloud-based resources. The hybrid offering available via Azure Stack HCI (delivered as a service) provides a comprehensive answer for this challenge. With Azure Stack HCI and integrated services such as Azure Arc, you can easily manage and govern on-premises resources, together with Azure public cloud resources, from a single control plane.

Any viable hybrid cloud offering needs to decrease complexity through simplified management while increasing agility, scalability, and performance. With optimized on-premises hardware, support for legacy functionality, and improved workload virtualization, you can maintain existing operations and scale at a pace that best suits your requirements. Azure Stack HCI effectively balances old with new, supporting the evolution of on-premises operations into part of your cloud operating model, from the core data center to the edge and the cloud.

Seamless management also includes maintaining a holistic and comprehensive security posture so that associated risks can be managed without sacrificing effectiveness. As computing complexity increases across the data center, edge, and cloud, it can increase those risks if not addressed. Security must go hand in hand with digital transformation. Intel and Microsoft are leading the way with a trusted foundation, from the software down to the silicon layer. We’ll soon be announcing multiple new technologies to secure data at rest and in use, and we’ll dive deeper on data protection and compliance in the next article in this series.

A hardware foundation to handle the digitization of everything

As we continue to see increased reliance on analytics tools and AI for data insights to manage operations and customer touchpoints, the importance of semiconductors continues to grow. This digital surge is increasing the demand for compute power—suddenly, an organization’s infrastructure has evolved from a tactical concern into the epicenter of new strategic business opportunities.

Creating a hardware infrastructure that flexes with business demands is one of the keys to unlocking the potential of an agile hybrid cloud that can move workloads across different environments with speed and ease. Intel’s mission is to provide the best technology foundation with built-in capabilities across performance, AI, and security that unleashes new business opportunities today and in the future. At the center of this foundation are 3rd Gen Intel® Xeon® Scalable processors.

Intel and Microsoft are working together to reduce the time required to evaluate, select, and purchase, streamlining the time to deploy new infrastructure by using technologies that are fully integrated, tested, and ready to perform. As evidence of this, Microsoft and Intel recently battle-tested Azure Stack HCI on the latest Intel technologies, showcasing 2.62 million SQL Server new orders per minute, one of the most popular workloads among enterprises. These optimized configurations are available as Intel® Select Solutions for Azure Stack HCI from multiple server OEM and scale partners. 

Serving the needs of dynamic IT environments

It has never been a more dynamic time for businesses; the time to embrace hybrid cloud is now. Azure Stack HCI is charting a new and easy path to hybrid, with Intel as the technology foundation to modernize and turn infrastructure into strategic advantage.

If you’re ready to optimize manageability, performance, and costs while integrating on-premises data center and edge infrastructures into your hybrid and multi-cloud environment, learn more about Azure Stack HCI today.

Check out the latest Intel-based Azure Stack HCI systems and continuous innovation on Azure.com/HCI. While there, download the software, which Microsoft has made available for a 60-day free trial.

This content was produced by Microsoft Azure and Intel. It was not written by MIT Technology Review’s editorial staff.

In unpredictable times, a data strategy is key

Sat, 10/16/2021 - 12:00

More than 18 months after the 2020 coronavirus pandemic struck, it’s clear that the ability to make quick decisions based on high-quality data has become essential for business success. In an increasingly competitive and constantly shifting landscape, companies must be agile enough to tackle persistent challenges, ranging from cost-cutting and supply chain issues to product development and market shifts. Critical to thriving post-pandemic, say technology leaders and experts, is developing a long-term data strategy: one that provides a strong foundation and a clear vision, supporting the organization’s ability to manage, access, analyze, and act on its data at scale to guide strategic business decisions.

“It’s an ongoing journey to get trustworthy data into the right people’s hands in a low-friction way,” says Jonathan Lutz, director of technology at Aquiline Capital Partners, a New York private equity company. The right data strategy is essential, he explains, particularly as an organization begins to scale its efforts. “There is an inflection point where manual processes are no longer tenable or sustainable,” he says.

A worldwide survey of 357 business executives, conducted by MIT Technology Review Insights and Amazon Web Services, shows that organizations of all sizes and across industries understand how crucial it is to become data-driven. Most important, they’ve learned that a supportive and successful data strategy cannot be left to chance.

Data value is front and center

The past year and a half were disruptive to businesses across industries, due in large part to the pandemic. The initial shutdowns in March 2020 meant that many companies had to turn on a dime to arrange for an all-remote workforce while also keeping up with wild shifts in consumer behavior and market demand.

The good news is, even during an unprecedented crisis, a large number of organizations continued to grow. In fact, nearly half of the survey respondents (45%) characterize their companies as “thrivers,” saying they boosted business growth over the past 18 months.

But, not surprisingly after such a challenging period, many other organizations could do little more than hold steady or try to hang on: the remaining 55% of those surveyed managed to maintain their efforts, conducting their usual level of business, or simply didn’t shut down.

Yet, whether organizations are thriving, maintaining, or just surviving, there’s no doubt that the power of data is top-of-mind for all businesses looking to succeed. In today’s digital world, companies gather or have access to vast amounts of data. Thanks to technologies such as cloud computing, analytics, and artificial intelligence (AI), they can also store, process, analyze, and put this treasure trove of data to use, in a meaningful way, to boost business outcomes.

As a result, there are many possibilities to gain business value from large data sets. According to the survey, the most common value companies are hoping to take advantage of is smarter decision-making (79%). They also want to more deeply understand their customers and industry trends (61%), provide better services and products (42%), and implement more efficient internal operations (33%).

Companies also learned valuable lessons about the importance of data as they struggled to stay competitive during the pandemic. Roughly four out of 10 survey respondents, for example, report that they need to look at more sources of data, including demographic, geospatial, and competitor information. More than a third (37%) are evaluating machine learning and analytics—technologies essential to extract critical insights from their data. And 34% need help acting on the vast sums of data they gather and process.

For Thermo Fisher Scientific, a US biotechnology company with more than 80,000 employees in 50 countries, thriving in today’s competitive life-sciences landscape is all about helping customers accelerate research, solve complex analytical challenges, improve patient diagnostics, and increase laboratory productivity. Through a scalable and secure platform on which researchers and scientists can collaborate, conduct research, and improve medical treatments, “we help our customers make the world healthier, cleaner, and safer,” says Mikael Graindorge, senior manager of commercial analytics and insight at Thermo Fisher. The company wants to provide the best service and products as well as the best ways for customers to efficiently complete their scientific research, he explains. “But to do that, we need more and more data, which means more complexity, so we need to expand our data science investment to continuously innovate for our customers.”

Data strategy is fundamental 

These days, becoming data-driven is within the reach of every organization, says Ishit Vachhrajani, enterprise strategist at cloud provider Amazon Web Services. But it doesn’t happen overnight: having a sound data strategy, he says, is fundamental to support better decision-making and drive growth. 

“Data strategy is table stakes in today’s world,” Vachhrajani says. “You can see the distance between companies that are moving fast and driving change on the journey towards a successful data strategy versus the companies that are lagging behind.” 

Download the full report.

This NASA spacecraft is on its way to Jupiter’s mysterious asteroid swarms

Sat, 10/16/2021 - 07:50

NASA’s Lucy spacecraft, named for an early human ancestor whose skeleton provided insights into our species’ muddled origins, has begun the first leg of its 12-year journey.

Lifting off from Cape Canaveral early Saturday morning on an Atlas V rocket, Lucy is headed to study asteroids in an area around Jupiter that’s remained relatively unchanged since the solar system formed. It will venture farther from the sun than any other solar-powered spacecraft.

“Lucy will profoundly change our understanding of planetary evolution in our solar system,” Adriana Ocampo, a Lucy program executive at NASA, said during a science media briefing held on October 14.

The spacecraft is propelled primarily by liquid fuel, but its instruments will run on power generated by two huge solar arrays. Lucy’s technology builds on previous missions like the Mars Odyssey orbiter and InSight lander and the OSIRIS-REx spacecraft.

Lucy’s mission is to fly by one asteroid in the jam-packed area that circles the sun between Mars and Jupiter—and then continue on to the Trojans, two swarms of rocky bodies far past the asteroid belt. These asteroid swarms, which travel just ahead of and behind Jupiter as it orbits, are celestial remnants from the solar system’s earliest days.

Lucy will take black-and-white and color images, and use a diamond beam splitter to shine far-infrared light at the asteroids to take their temperature and make maps of their surface. It will also collect other measurements as it flies by. This data could help scientists understand how the planets may have formed.

Sarah Dodson-Robinson, an assistant professor of physics and astronomy at the University of Delaware, says Lucy could offer a definitive time line for not only when the planets originally formed, but where.

“If you can nail down when the Trojan asteroids formed, then you have some information about when did Jupiter form, and can start asking questions like ‘Where did Jupiter go in the solar system?’” she says. “Because it wasn’t always where it is now. It’s moved around.”

And to determine the asteroids’ ages, the spacecraft will search for surface craters that may be no bigger than a football field. 

“[The Trojans] haven’t had nearly as much colliding and breaking as some of the other asteroids that are nearer to us,” says Dodson-Robinson. “We’re potentially getting a look at some of these asteroids like they were shortly after they formed.”

On its 4-billion-mile journey, Lucy will receive three gravity assists from Earth, which will involve using the planet’s gravitational force to change the spacecraft’s trajectory without depleting its resources. Coralie Adam, deputy navigation team chief for the Lucy mission, says each push will increase the spacecraft’s velocity from 200 miles per hour to over 11,000 mph.

“If not for this Earth gravity assist, it would take five times the amount of fuel—or three metric tons—to reach Lucy’s target, which would make the mission unfeasible,” said Adam during an engineering media briefing also held on October 14.

Lucy’s mission is slated to end in 2033, but some NASA officials already feel confident that the spacecraft will last far longer. “There will be a good amount of fuel left onboard,” said Adam. “After the final encounter with the binary asteroids, as long as the spacecraft is healthy, we plan to propose to NASA to do an extended mission and explore more Trojans.”

Machine learning in the cloud is helping businesses innovate

Fri, 10/15/2021 - 13:17

In the past decade, machine learning has become a familiar technology for improving the efficiency and accuracy of processes like recommendations, supply chain forecasting, developing chatbots, image and text search, and automated customer service functions, to name a few. Machine learning today is becoming even more pervasive, impacting every market segment and industry, including manufacturing, SaaS platforms, health care, reservations and customer support routing, natural language processing (NLP) tasks such as intelligent document processing, and even food services.

Take the case of Domino’s Pizza, which has been using machine learning tools created to improve efficiencies in pizza production. “Domino’s had a project called Project 3/10, which aimed to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order,” says Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. “If you want to hit those goals, you have to be able to predict when a pizza order will come in. They use predictive machine learning models to achieve that.”
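
A toy version of that kind of forecast might look like the sketch below (not Domino’s actual system; the order history and model are invented for illustration): recent order counts are used to predict demand in the next time slot.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: orders per 10-minute slot for one store during an evening.
orders = np.array([3, 4, 6, 9, 14, 18, 21, 19, 15, 10, 7, 5])

# Use the previous three slots to predict the next one.
X = np.array([orders[i - 3:i] for i in range(3, len(orders))])
y = orders[3:]

model = LinearRegression().fit(X, y)
next_slot = model.predict([orders[-3:]])   # expected orders in the upcoming slot
print(round(float(next_slot[0])))          # staffing and prep can be planned around this
```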

The recent rise of machine learning across diverse industries has been driven by improvements in other technological areas, says Saha—not the least of which is the increasing compute power in cloud data centers.

“Over the last few years,” explains Saha, “the amount of total compute that can be thrown at machine learning problems has been doubling almost every four months. That’s 5 to 6 times more than Moore’s Law. As a result, a lot of functions that once could only be done by humans—things like detecting an object or understanding speech—are being performed by computers and machine learning models.”
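
The “5 to 6 times” refers to how much more often compute doubles. A short calculation makes the comparison concrete (assuming the commonly cited 18-to-24-month doubling period for Moore’s Law):

```python
ml_doubling_months = 4                       # compute for ML doubles roughly every 4 months
for moore_doubling_months in (18, 24):       # commonly cited Moore's Law doubling periods
    print(moore_doubling_months / ml_doubling_months)   # 4.5 and 6.0 times as often

# Compounded over two years: 2 ** (24 / 4) = 64x growth for ML compute,
# versus 2 ** (24 / 24) = 2x under a 24-month Moore's Law doubling.
```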

“At AWS, everything we do works back from the customer and figuring out how we reduce their pain points and how we make it easier for them to do machine learning. At the bottom of the stack of machine learning services, we are innovating on the machine learning infrastructure so that we can make it cheaper for customers to do machine learning and faster for customers to do machine learning. There we have two AWS innovations. One is Inferentia and the other is Trainium.”

The current machine learning use cases that help companies optimize the value of their data to perform tasks and improve products are just the beginning, Saha says.

“Machine learning is just going to get more pervasive. Companies will see that they’re able to fundamentally transform the way they do business. They’ll see they are fundamentally transforming the customer experience, and they will embrace machine learning.”

Show notes and references

AWS Machine Learning Infrastructure

Full Transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. This is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is machine learning in the cloud. Across all industries, the exponential increase of data collection demands faster and novel ways to analyze data, but also learn from it to make better business decisions. This is how machine learning in the cloud helps fuel innovation for enterprises, from startups to legacy players.

Two words for you: data innovation. My guest is Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. He has held executive roles at NVIDIA and Intel. This episode of Business Lab is produced in association with AWS. Welcome, Bratin.

Dr. Bratin Saha: Thank you for having me, Laurel. It’s great to be here.

Laurel: Off the top, could you give some examples of how AWS customers are using machine learning to solve their business problems?

Bratin: Let’s start with the definition of what we mean by machine learning. Machine learning is a process where a computer and an algorithm can use data, usually historical data, to understand patterns, and then use that information to make predictions about the future. Businesses have been using machine learning to do a variety of things, like personalizing recommendations, improving supply chain forecasting, making chatbots, using it in health care, and so on.

For example, Autodesk was able to use the machine learning infrastructure we have for their chatbots to improve their ability to handle requests by almost five times. They were able to use the improved chatbots to address more than 100,000 customer questions per month.

Then there’s NerdWallet. NerdWallet is a personal finance startup that previously did not personalize the recommendations it was giving to customers based on each customer’s preferences. They’re now using AWS machine learning services to tailor the recommendations to what a person actually wants to see, which has significantly improved their business.

Then we have customers like Thomson Reuters. Thomson Reuters is one of the world’s most trusted providers of answers, with teams of experts. They use machine learning to mine data to connect and organize information to make it easier for them to provide answers to questions.

In the financial sector, we have seen a lot of uptake in machine learning applications. One company, for example, a payment service provider, was able to build a fraud detection model in just 30 minutes.

The reason I’m giving you so many examples is to show how machine learning is becoming pervasive. It’s going across geos, going across market segments, and being used by companies of all kinds. I have a few other examples I want to share to show how machine learning is also touching industries like manufacturing, food delivery, and so on.

Domino’s Pizza, for example, had a project called Project 3/10, where they wanted to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order. If you want to hit those goals, you have to be able to predict when a pizza order will come in. They use machine learning models to look at the history of orders. Then they use the machine learning model that was trained on that order history. They were then able to use that to predict when an order would come in, and they were able to deploy this to many stores, and they were able to hit the targets.

Machine learning has become pervasive in how our customers are doing business. It’s starting to be adopted in virtually every industry. We have more than several hundred thousand customers using our machine learning services. One of our machine learning services, Amazon SageMaker, has been one of the fastest growing services in AWS history.

Laurel: Just to recap, customers can use machine learning services to solve a number of problems. Some of the high-level problems would be a recommendation engine, image search, text search, and customer service, but then, also, to improve the quality of the product itself.

I like the Domino’s Pizza example. Everyone understands how a pizza business may work. But if the goal is to turn pizzas around as quickly as possible, to increase that customer satisfaction, Domino’s had to be in a place to collect data, be able to analyze that historic data on when orders came in, how quickly they turned around those orders, how often people ordered what they ordered, et cetera. That was what the prediction model was based on, correct?

Bratin: Yes. You asked a question about how we think about machine learning services. If you look at the AWS machine learning stack, we think about it as a three-layered service. The bottom layer is the machine learning infrastructure.

What I mean by this is that when you have a model, you train the model to predict something. Then you use the trained model to make predictions, which is called inference. At the bottom layer, we provide the most optimized infrastructure, so customers can build their own machine learning systems.

Then there’s a layer on top of that, where customers come and tell us, “You know what? I just want to be focused on the machine learning. I don’t want to build a machine learning infrastructure.” This is where Amazon SageMaker comes in.

Then there’s a layer on top of that, which is what we call AI services, where we have pre-trained models that can be used for many use cases.

So, we look at machine learning as three layers. Different customers use services at different layers, based on what they want, based on the kind of data science expertise they have, and based on the kind of investments they want to make.

The other part of our view goes back to what you mentioned at the beginning, which is data and innovation. Machine learning is fundamentally about gaining insights from data, and using those insights to make predictions about the future. Then you use those predictions to derive business value.

In the case of Domino’s Pizza, there is data around historical order patterns that can be used to predict future order patterns. The business value there is improving customer service by getting orders ready in time. Another example is Freddy’s Frozen Custard, which used machine learning to customize menus. As a result of that, they were able to get a double-digit increase in sales. So, it’s really about having data, and then using machine learning to gain insights from that data. Once you’ve gained insights from that data, you use those insights to drive better business outcomes. This goes back to what you mentioned at the beginning: you start with data and then you use machine learning to innovate on top of it.

Laurel: What are some of the challenges organizations have as they start their machine learning journeys?

Bratin: The first thing is to collect data and make sure it is structured well: clean data that doesn’t have a lot of anomalies. Then, because machine learning models typically get better if you can train them with more and more data, you need to continue collecting vast amounts of data. We often see customers create data lakes in the cloud, on Amazon S3, for example. So, the first step is getting your data in order and then potentially creating data lakes in the cloud that you can use to feed your data-based innovation.
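
As a concrete (and simplified) illustration of that first step, raw files can be landed in an S3-based data lake with the AWS SDK for Python; the bucket name, file name, and prefix below are hypothetical, and the bucket and AWS credentials are assumed to already exist:

    # Hypothetical sketch: land a raw data file in an S3-based data lake.
    import boto3

    s3 = boto3.client("s3")

    s3.upload_file(
        Filename="orders_2021-10-01.csv",        # local export from a source system
        Bucket="example-company-data-lake",      # hypothetical, pre-existing bucket
        Key="raw/orders/2021/10/01/orders.csv",  # prefix by source and date
    )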

The next step is to get the right infrastructure in place. That is where some customers say, “Look, I want to just build the whole infrastructure myself,” but the vast majority of customers say, “Look, I just want to be able to use a managed service because I don’t want to have to invest in building the infrastructure and maintaining the infrastructure,” and so on.

The next is to choose a business case. If you haven’t done machine learning before, then you want to get started with a business case that leads to a good business outcome. Often what happens with machine learning is that teams see it’s cool and build some really cool demos, but those demos don’t translate into business outcomes, so the experiments never get the support they need.

Finally, you need commitment because machine learning is a very iterative process. You’re training a model. The first model you train may not get you the results you desire. There’s a process of experimentation and iteration that you have to go through, and it can take you a few months to get results. So, putting together a team and giving them the support they need is the final part.

If I had to put this in terms of a sequence of steps, it’s important to have data and a data culture. It’s important in most cases for customers to choose a managed service to build and train their models in the cloud, simply because you get storage and compute a lot more easily. The third is to choose a use case that is going to have business value, so that your company knows this is something that you want to deploy at scale. And then, finally, be patient and be willing to experiment and iterate, because it often takes a little bit of time to get the data you need to train the models well and actually get the business value.

Laurel: Right, because it’s not something that happens overnight.

Bratin: It does not happen overnight.

Laurel: How do companies prepare to take advantage of data? Because, like you said, this is a four-step process, but you still have to have patience at the end to be iterative and experimental. For example, do you have ideas on how companies can think about their data in ways that make them better prepared to see success, perhaps with their first experiment, and then perhaps be a little bit more adventurous as they try other data sets or other ways of approaching the data?

Bratin: Yes. Companies usually start with a use case where they have a history of having good data. What I mean by a history of having good data is that they have a record of transactions that have been made, and most of the records are accurate. For example, you don’t have a lot of empty or incomplete records.

Typically, we have seen that the level of data maturity varies between different parts of a company. You start with the part of the company where the data culture is more prevalent, so that you have a record of historical transactions already stored. You really want fairly dense data to train your models.

Laurel: Why is now the right time for companies to start thinking about deploying machine learning in the cloud?

Bratin: I think there is a confluence of factors happening now. One is that machine learning over the last five years has really taken off. That is because the amount of compute available has been increasing at a very fast rate. If you go back to the IT revolution, the IT revolution was driven by Moore’s Law. Under Moore’s Law, compute doubled every 18 months.

Over the last few years, the amount of total compute has been doubling almost every four months. That’s roughly five times the pace of Moore’s Law. The amount of progress we have seen in the last four to five years has been really amazing. As a result, a lot of functions that once could only be done by humans—like detecting an object or understanding speech—are being performed by computers and machine learning models. As a result of that, a lot of capabilities are getting unleashed. That is what has led to this enormous increase in the applicability of machine learning—you can use it for personalization, you can use it in health care and finance, you can use it for tasks like churn prediction, fraud detection, and so on.

One reason that now is a good time to get started on machine learning in the cloud is just the enormous amount of progress in the last few years that is unleashing these new capabilities that were previously not possible.

The second reason is that a lot of the machine learning services being built in the cloud are making machine learning accessible to a lot more people. Even four or five years ago, machine learning was something only very expert practitioners could do, and only a handful of companies had those practitioners. Today, we have more than a hundred thousand customers using our machine learning services. That tells you that machine learning has been democratized to a large extent, so that many more companies can start using machine learning and transforming their business.

Then comes the third reason, which is that you have amazing capabilities that are now possible, and you have cloud-based tools that are democratizing these capabilities. The easiest way to get access to these tools and these capabilities is through the cloud because, first, it provides the foundation of compute and data. Machine learning is, at its core, about throwing a lot of compute at data. In the cloud, you get access to the latest compute. You pay as you go, and you don’t have to make huge upfront investments to set up compute farms. You also get all the storage and the security and privacy and encryption, and so on—all of that core infrastructure that is needed to get machine learning going.

Laurel: So Bratin, how does AWS innovate to help organizations with machine learning, model training, and inference?

Bratin: At AWS, everything we do works back from the customer: figuring out how we reduce their pain points and how we make it easier for them to do machine learning. At the bottom of the stack of machine learning services, we are innovating on the machine learning infrastructure so that we can make machine learning cheaper and faster for customers. There we have two AWS innovations. One is Inferentia and the other is Trainium. These are custom chips that we designed at AWS that are purpose-built for inference, which is the process of making machine learning predictions, and for training. Inferentia today provides the lowest-cost inference instances in the cloud. And Trainium, when it becomes available later this year, will provide the most powerful and most cost-effective training instances in the cloud.

We have a number of customers using Inferentia today. Autodesk uses Inferentia to host their chatbot models, and they were able to improve cost and latency by almost five times. Airbnb has over four million hosts who have welcomed more than 900 million guests in almost every country. Airbnb saw a two-times improvement in throughput by using the Inferentia instances, which means they were able to serve almost twice as many customer support requests as they would otherwise have been able to do. Another company, Sprinklr, develops an AI-driven unified customer experience management platform delivered as SaaS. They were able to deploy their natural language processing models on Inferentia, and they saw significant performance improvements as well.

Even internally, our Alexa team was able to move their inference workloads from GPUs to Inferentia-based systems, and they saw more than a 50% improvement in cost. So, we have that at the lowest layer of the infrastructure. On top of that, we have the managed services, where we are innovating so that customers become a lot more productive. That is where we have SageMaker Studio, the first fully integrated development environment (IDE) for machine learning, which offers tools like debuggers, profilers, and explainability, along with a host of other tools—like a visual data preparation tool—that make customers a lot more productive. At the top, we have AI services, where we provide pre-trained models for use cases like search and document processing—Kendra for search, Textract for document processing, image and video recognition—where we are innovating to make it easier for customers to address these use cases right out of the box.

Laurel: So, there are some benefits, for sure, for machine learning services in the cloud—like improved customer service, improved quality, and, hopefully, increased profit, but what key performance indicators are important for the success of machine learning projects, and why are these particular indicators so important?

Bratin: We are working back from the customer, working back from the pain points based on what customers tell us, and inventing on behalf of the customers to see how we can innovate to make it easier for them to do machine learning. One part of machine learning, as I mentioned, is predictions. Often, the big infrastructure cost in machine learning is in the inference. That is why we came out with Inferentia, which today provides the most cost-effective machine learning inference instances in the cloud. So, we are innovating at the hardware level.

We also announced Trainium. That will provide the most powerful and most cost-effective training instances in the cloud. So, we are first innovating at the infrastructure layer so that we can provide customers with the most cost-effective compute.

Next, we have been looking at the pain points of what it takes to build an ML service. You need data collection services, you need a way to set up a distributed infrastructure, you need a way to set up an inference system and be able to auto-scale it, and so on. We have been thinking a lot about how to build this infrastructure and how to innovate around those customer pain points.

Then we have been looking at some of the use cases. So, for a lot of these use cases, whether it be search, or object recognition and detection, or intelligent document processing, we have services that customers can directly use. And we continue to innovate on behalf of them. I’m sure we’ll come up with a lot more features this year and next to see how we can make it easier for our customers to use machine learning.

Laurel: What key performance indicators are important for the success of machine learning projects? We talked a little bit about how you like to improve customer service and quality, and of course increase profit, but to assign a KPI to a machine learning model, that’s something a bit different. And why are they so important?

Bratin: To assign the KPIs, you need to work back from your use case. So, let’s say you want to use machine learning to reduce fraud. Your overall KPI is: what was the reduction in fraud? Or let’s say you want to use it for churn reduction. You are running a business, your customers are coming, but a certain number of them are churning off. You want to then start with: how do I reduce my customer churn by some percent? So, you start with the top-level KPI, which is a business outcome that you want to achieve, and how to get an improvement in that business outcome.

Let’s take the churn prediction example. At the end of the day, what is happening is you have a machine learning model that is using data and the training it received to make certain predictions about which customers are going to churn. That boils down, then, to the accuracy of the model. If the model is saying 100 people are going to churn, how many of them actually churn? So, that becomes a question of accuracy. And then you also want to look at how well the machine learning model detected all the cases.

So, there are two aspects of quality that you’re looking for. One is, of the things that the model predicted, how many of them actually happened? Let’s say this model predicted these 100 customers are going to churn. How many of them actually churn? And let’s just say 95 of them actually churn. So, you have a 95% precision there. The other aspect is, suppose you’re running this business and you have 1,000 customers. And let’s say in a particular year, 200 of them churned. How many of those 200 did the model predict would actually churn? That is called recall, which is, given the total set, how much is the machine learning model able to predict? So, fundamentally, you start from this business metric, which is what is the outcome I want to get, and then you can convert this down into model accuracy metrics in terms of precision, which is how accurate was the model in predicting certain things, and then recall, which is how exhaustive or how comprehensive was the model in detecting all situations.
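
A minimal worked example of those two metrics, using the hypothetical churn numbers from the example above:

    # Precision and recall for the hypothetical churn example:
    # the model flags 100 customers, 95 of them actually churn,
    # and 200 customers churn in total during the year.
    true_positives = 95
    predicted_positives = 100
    actual_positives = 200

    precision = true_positives / predicted_positives  # of those flagged, how many churned
    recall = true_positives / actual_positives        # of those who churned, how many were flagged

    print(f"precision={precision:.2f}, recall={recall:.3f}")  # precision=0.95, recall=0.475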

So, at a high level, these are the things you’re looking for. And then you’ll go down to lower-level metrics. The models are running on certain instances on certain pieces of compute: what was the infrastructure cost and how do I reduce those costs? These services, for example, are being used to handle surges during Prime Day or Black Friday, and so on. So, then you get to those lower-level metrics, which is, am I able to handle surges in traffic? It’s really a hierarchical set of KPIs. Start with the business metric, get down to the model metrics, and then get down to the infrastructure metrics.

Laurel: When you think about machine learning in the cloud in the next three to five years, what are you seeing? What are you thinking about? What can companies do now to prepare for what will come?

Bratin: I think what will happen is that machine learning will get more pervasive, because customers will see that they’re able to fundamentally transform the way they do business. Companies will see that they are fundamentally transforming the customer experience, and they will embrace machine learning. We have seen that at Amazon as well—we have a long history of investing in machine learning. We have been doing this for more than 20 years, and we have changed how we serve customers with Amazon.com, Alexa, Amazon Go, and Prime. And now with AWS, we have taken the knowledge we gained over the past two decades of deploying machine learning at scale and are making it available to our customers. So, I do think we will see a much more rapid uptake of machine learning.

Then we’ll see a lot of broad use cases, like intelligent document processing, become automated, because a machine learning model is now able to scan those documents and infer information from them—infer semantic information, not just the syntax. If you think of paper-based processes, whether it’s loan processing or mortgage processing, a lot of that will get automated. Then, we are also seeing businesses get a lot more efficient in areas like personalization and forecasting—supply chain forecasting, demand forecasting, and so on.

We are seeing a lot of uptake of machine learning in health. We have customers like GE, for example, using machine learning for radiology. They use machine learning to scan radiology images to determine which ones are more serious, so those patients can be seen earlier. We are also seeing potential and opportunity for using machine learning in genomics for precision medicine. So, I do think a lot of innovation is going to happen with machine learning in health care.

We’ll see a lot of machine learning in manufacturing. A lot of manufacturing processes will become more efficient, get automated, and become safer because of machine learning.

So, over the next five to 10 years, pick any domain. In sports, the NFL, NASCAR, and the Bundesliga are all using our machine learning services. The NFL uses Amazon SageMaker to give their fans a more immersive experience through Next Gen Stats. Bundesliga uses our machine learning services to make a range of predictions and provide a much more immersive experience. Same with NASCAR. NASCAR has a long history of data from their races, and they’re using that to train models that provide a much more immersive experience to their viewers, because they can predict much more easily what’s going to happen. So, sports, entertainment, financial services, health care, manufacturing—I think we’ll see a lot more uptake of machine learning, making the world a smarter, healthier, and safer place.

Laurel: What a great conversation. Thank you very much, Bratin, for joining us on Business Lab.

Bratin: Thank you. Thank you for having me. It was really nice talking to you.

Laurel: That was Dr. Bratin Saha, Vice President and General Manager of Machine Learning Services for Amazon AI, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

Reimagining our pandemic problems with the mindset of an engineer

Fri, 10/15/2021 - 06:00

The last 20 months turned seemingly everyone into an amateur epidemiologist and statistician. Meanwhile, a group of bona fide epidemiologists and statisticians came to believe that pandemic problems might be more effectively solved by adopting the mindset of an engineer: that is, focusing on pragmatic problem-solving with an iterative, adaptive strategy to make things work.

In a recent essay, “Accounting for uncertainty during a pandemic,” the researchers reflect on their roles during a public health emergency and on how they could be better prepared for the next crisis. The answer, they write, may lie in reimagining epidemiology with more of an engineering perspective and less of a “pure science” perspective.

Epidemiological research informs public health policy and its inherently applied mandate for prevention and protection. But the right balance between pure research results and pragmatic solutions proved alarmingly elusive during the pandemic.

We have to make practical decisions, so how much does the uncertainty really matter?

Seth Guikema

“I always imagined that in this kind of emergency, epidemiologists would be useful people,” Jon Zelner, a coauthor of the essay, says. “But our role has been more complex and more poorly defined than I had expected at the outset of the pandemic.” An infectious disease modeler and social epidemiologist at the University of Michigan, Zelner witnessed an “insane proliferation” of research papers, “many with very little thought about what any of it really meant in terms of having a positive impact.”

“There were a number of missed opportunities,” Zelner says—caused by missing links between the ideas and tools epidemiologists proposed and the world they were meant to help.

Giving up on certainty

Coauthor Andrew Gelman, a statistician and political scientist at Columbia University, set out “the bigger picture” in the essay’s introduction. He likened the pandemic’s outbreak of amateur epidemiologists to the way war makes every citizen into an amateur geographer and tactician: “Instead of maps with colored pins, we have charts of exposure and death counts; people on the street argue about infection fatality rates and herd immunity the way they might have debated wartime strategies and alliances in the past.”

And along with all the data and public discourse—Are masks still necessary? How long will vaccine protection last?—came the barrage of uncertainty.

In trying to understand what just happened and what went wrong, the researchers (who also included Ruth Etzioni at the University of Washington and Julien Riou at the University of Bern) conducted something of a reenactment. They examined the tools used to tackle challenges such as estimating the rate of transmission from person to person and the number of cases circulating in a population at any given time. They assessed everything from data collection (the quality of data and its interpretation were arguably the biggest challenges of the pandemic) to model design to statistical analysis, as well as communication, decision-making, and trust. “Uncertainty is present at each step,” they wrote.

And yet, Gelman says, the analysis still “doesn’t quite express enough of the confusion I went through during those early months.”

One tactic against all the uncertainty is statistics. Gelman thinks of statistics as “mathematical engineering”—methods and tools that are as much about measurement as discovery. The statistical sciences attempt to illuminate what’s going on in the world, with a spotlight on variation and uncertainty. When new evidence arrives, it should generate an iterative process that gradually refines previous knowledge and hones certainty.
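
One simple way to make that iterative refinement concrete is a Bayesian update, in which each new batch of test results narrows the uncertainty around an estimated rate; the sketch below uses made-up numbers and is only an illustration of the idea:

    # Hypothetical sketch: refine an estimated positivity rate as data accumulates.
    from scipy import stats

    alpha, beta = 1.0, 1.0  # flat prior over the rate

    for positives, tests in [(12, 100), (30, 250), (55, 500)]:
        alpha += positives           # evidence for
        beta += tests - positives    # evidence against
        posterior = stats.beta(alpha, beta)
        low, high = posterior.interval(0.95)
        print(f"estimate {posterior.mean():.3f}, 95% interval ({low:.3f}, {high:.3f})")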

Good science is humble and capable of refining itself in the face of uncertainty.

Marc Lipsitch

Susan Holmes, a statistician at Stanford who was not involved in this research, also sees parallels with the engineering mindset. “An engineer is always updating their picture,” she says—revising as new data and tools become available. In tackling a problem, an engineer offers a first-order approximation (blurry), then a second-order approximation (more focused), and so on.

Gelman, however, has previously warned that statistical science can be deployed as a machine for “laundering uncertainty”—deliberately or not, crappy (uncertain) data are rolled together and made to seem convincing (certain). Statistics wielded against uncertainties “are all too often sold as a sort of alchemy that will transform these uncertainties into certainty.”

We witnessed this during the pandemic. Drowning in upheaval and unknowns, epidemiologists and statisticians—amateur and expert alike—grasped for something solid in trying to stay afloat. But as Gelman points out, wanting certainty during a pandemic is inappropriate and unrealistic. “Premature certainty has been part of the challenge of decisions in the pandemic,” he says. “This jumping around between uncertainty and certainty has caused a lot of problems.”

Letting go of the desire for certainty can be liberating, he says. And this, in part, is where the engineering perspective comes in.

A tinkering mindset

For Seth Guikema, co-director of the Center for Risk Analysis and Informed Decision Engineering at the University of Michigan (and a collaborator of Zelner’s on other projects), a key aspect of the engineering approach is diving into the uncertainty, analyzing the mess, and then taking a step back, with the perspective, “We have to make practical decisions, so how much does the uncertainty really matter?” Because if there’s a lot of uncertainty—and if the uncertainty changes what the optimal decisions are, or even what the good decisions are—then that’s important to know, says Guikema. “But if it doesn’t really affect what my best decisions are, then it’s less critical.”

For instance, consider increasing SARS-CoV-2 vaccination coverage across the population: even if there is some uncertainty about exactly how many cases or deaths vaccination will prevent, the fact that it is highly likely to decrease both, with few adverse effects, is motivation enough to decide that a large-scale vaccination program is a good idea.

An engineer is always updating their picture.

Susan Holmes

Engineers, Holmes points out, are also very good at breaking problems down into critical pieces, applying carefully selected tools, and optimizing for solutions under constraints. With a team of engineers building a bridge, there is a specialist in cement and a specialist in steel, a wind engineer and a structural engineer. “All the different specialties work together,” she says.

For Zelner, the notion of epidemiology as an engineering discipline is something he picked up from his father, a mechanical engineer who started his own company designing health-care facilities. Drawing on a childhood full of building and fixing things, his engineering mindset involves tinkering—refining a transmission model, for instance, in response to a moving target.

“Often these problems require iterative solutions, where you’re making changes in response to what does or doesn’t work,” he says. “You continue to update what you’re doing as more data comes in and you see the successes and failures of your approach. To me, that’s very different—and better suited to the complex, non-stationary problems that define public health—than the kind of static one-and-done image a lot of people have of academic science, where you have a big idea, test it, and your result is preserved in amber for all time.” 

Zelner and collaborators at the university spent many months building a covid mapping website for Michigan, and he was involved in creating data dashboards—useful tools for public consumption. But in the process, he saw a growing mismatch between the formal tools and what was needed to inform practical decision-making in a rapidly evolving crisis. “We knew a pandemic would happen one day, but I certainly had not given any thought to what my role would be, or could be,” he says. “We spent several agonizing months just inventing the thing—trying to do this thing we’d never done before and realizing that we had no expertise in doing it.”

He envisions research results that come not only with exhortations that “People should do this!” but also with accessible software allowing others to tinker with the tools. But for the most part, he says, epidemiologists do research, not development: “We write software, and it’s usually pretty bad, but it gets the job done. And then we write the paper, and then it’s up to somebody else—some imagined other person—to make it useful in the broader context. And then that never happens. We’ve seen these failures in the context of the pandemic.”

He imagines the equivalent of a national weather forecasting center for infectious disease. “There’s a world in which all the covid numbers go to one central place,” he says. “Where there is a model that is able to coherently combine that information, generate predictions accompanied by pretty accurate depictions of the uncertainty, and say something intelligible and relatively actionable in a fairly tight timeline.”

At the beginning of the pandemic, that infrastructure didn’t exist. But recently, there have been signs of progress.

Fast-moving public health science

Marc Lipsitch, an infectious disease epidemiologist at Harvard, is the director of science at the US Centers for Disease Control’s new Center for Forecasting and Outbreak Analytics, which aims to improve decision-making and enable a coordinated, coherent response to a pandemic as it unfolds.

“We’re not very good at forecasting for infectious diseases right now. In fact, we are quite bad at it,” Lipsitch says. But we were quite bad at weather forecasting when it started in the ’50s, he notes. “And then technology improved, methodology improved, measurement improved, computation improved. With investment of time and scientific effort, we can get better at things.”

Getting better at forecasting is part of the center’s vision for innovation. Another goal is the capability to do specific studies to answer specific questions that arise during a pandemic, and then to produce custom-designed analytics software to inform timely responses on the national and local levels.

These efforts are in sync with the notion of an engineering approach—although Lipsitch would call it simply “fast-moving public health science.”

“Good science is humble and capable of refining itself in the face of uncertainty,” he says. “Scientists, usually over a longer time scale—years or decades—are quite used to the idea of updating our picture of truth.” But during a crisis, the updating needs to happen fast. “Outside of pandemics, scientists are not used to vastly changing our picture of the world each week or month,” he says. “But in this pandemic especially, with the speed of new developments and new information, we are having to do so.”

The philosophy of the new center, Lipsitch says, “is to improve decision-making under uncertainty, by reducing that uncertainty with better analyses and better data, but also by acknowledging what is not known, and communicating that and its consequences clearly.”

And he notes, “We’re gonna need a lot of engineers to make this function—and the engineering approach, for sure.”

Getting the most from your data-driven transformation: 10 key principles

Thu, 10/14/2021 - 12:08

The importance of data to today’s businesses can’t be overstated. Studies show data-driven companies are 58% more likely to beat revenue goals than non-data-driven companies and 162% more likely to significantly outperform laggards. Data analytics are helping nearly half of all companies make better decisions about everything, from the products they deliver to the markets they target. Data is becoming critical in every industry, whether it’s helping farms increase the value of the crops they produce or fundamentally changing the game of basketball.

Used optimally, data is nothing less than a critically important asset. Problem is, it’s not always easy to put data to work. The Seagate Rethink Data report, with research and analysis by IDC, found that only 32% of the data available to enterprises is ever used and the remaining 68% goes unleveraged. Executives aren’t fully confident in their current ability—nor in their long-range plans—to wring optimal levels of value out of the data they produce, acquire, manage, and use.

What’s the disconnect? If data is so important to a business’s health, why is it so hard to master?

In the best-run companies, the systems that connect data producers and data consumers are secure and easy to deploy. But they’re usually not. Companies are challenged with finding data and leveraging it for strategic purposes. Sources of data are hard to identify and even harder to evaluate. Datasets used to train AI models for the automation of tasks can be hard to validate. Hackers are always looking to steal or compromise data. And finding quality data is a challenge for even the savviest data scientists. 

The lack of an end-to-end system for ensuring high-quality data and sharing it efficiently has indirectly delayed the adoption of AI.

Communication gaps can also derail the process of delivering impactful insights. Executives who fund data projects and the data engineers and scientists who carry them out don’t always understand one another. Data practitioners can create a detailed plan, but if they don’t frame the results properly, the business executive who requested the work may say they were looking for something different. The project will be labeled a failure, and the chance to generate value from the effort will fall by the wayside.

Companies encounter data issues, no matter where they are in terms of data maturity. They’re trying to figure out ways to make data an important part of their future, but they’re struggling to put plans into practice.

If you’re in this position, what do you do?

Companies found themselves at a similar inflection point back in the 2010s, trying to sort out their places in the cloud. They took years developing their cloud strategies, planning their cloud migrations, choosing platforms, creating Cloud Business Offices, and structuring their organizations to best take advantage of cloud-based opportunities. Today, they’re reaping the benefits: Their moves to the cloud have enabled them to modernize their apps and IT systems.

Enterprises now have to make similar decisions about data. They need to consider many factors to make sure data is providing a foundation for their business going forward. They should ask questions such as:

  • Is the data the business needs readily available?
  • What types of sources of data are needed? Are there distributed and diverse sets of data you don’t know about?
  • Is the data clean, current, reliable, and able to integrate with existing systems?
  • Is the rest of the C-level onboard with the chief data officer’s approach?
  • Are data scientists and end users communicating effectively about what’s needed and what’s being delivered?
  • How is data being shared?
  • How can I trust my data?
  • Does every person and organization that needs access to the data have the right to use it?

This is about more than just business intelligence. It’s about taking advantage of an opportunity that’s taking shape. Data use is exploding, tools to leverage it are becoming more efficient, and data scientists’ expertise is growing. But data is hard to master. Many companies aren’t set up to make the best use of the data they have at hand. Enterprises need to make investments in the people, processes, and technologies that will drive their data strategies.

With all of this in mind, here are 10 principles companies should follow when developing their data strategies:

1. Understand how valuable your data really is

How much is your data worth to you? This can be measured in a number of ways. There are traditional metrics to consider, such as the cost of acquiring the data, the cost to store and transmit it, the uniqueness of the data being acquired, and the opportunity to use it to generate additional revenue. Marketplace metrics, such as data quality, the age of the data, and the popularity of a data product, also affect its value.

Your data could also be valuable to others. For example, suppose a hospital collects patient datasets. That data could be of interest to disease researchers, drug manufacturers, insurance companies, and other potential buyers. Is there a mechanism in place to anonymize, aggregate, control, and identify potential users of your data?

Opportunity, balanced by the cost it takes to deliver on it, is one way to determine the potential value of your data.

2. Determine what makes data valuable

While it may be hard to put an actual dollar value on your data, it’s easier to define the elements that contribute to data having a high degree of value. It can be reduced to a simple thought equation:

Completeness + Validity = Quality

Quality + Format = Usability

Usable Data + A Data Practitioner Who Uses it Well = VALUE

Your data project can’t proceed without good data. Is the quality of your data high enough to be worthwhile? That will depend, in part, on how complete the sample is that you’ve collected. Are data fields missing? Quality also depends on how valid the information is. Was it collected from a reliable source? Is the data current, or has time degraded its validity? Do you collect and store your data in accordance with industry and sector ontologies and standards?
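
A minimal sketch of these completeness and validity checks, assuming a hypothetical transactions.csv with amount and transaction_date columns:

    # Hypothetical sketch: basic completeness and validity checks with pandas.
    import pandas as pd

    df = pd.read_csv("transactions.csv", parse_dates=["transaction_date"])

    # Completeness: what share of each field is missing?
    print(df.isna().mean().sort_values(ascending=False))

    # Validity and currency: implausible values and stale data.
    invalid = (df["amount"] <= 0).sum()
    days_old = (pd.Timestamp.now() - df["transaction_date"].max()).days
    print(f"{invalid} non-positive amounts; newest record is {days_old} days old")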

Your data has to be usable for it to be worthy of investment. Setting up systems for data practitioners to use and analyze the data well and connect it with business leaders who can leverage the insights closes the loop.

3. Establish where you are on your data journey

Positioning a business to take full advantage of cloud computing is a journey. The same thinking should apply to data.

The decisions companies make about their data strategies depend largely on where they happen to be on their data journeys. How far along are you on your data journey? Assessment tools and blueprints can help companies pinpoint their positions. Assessments should go beyond identifying which tools are in a company’s technology stack. They should look at how data is treated across an organization in many ways, taking into account governance, lifecycle management, security, ingestion and processing, data architectures, consumption and distribution, data knowledge, and data monetization.

Consumption and distribution alone can be measured in terms of an organization’s ability to apply services ranging from business intelligence to streaming data to self-service applications of data analytics. Has the company implemented support for data usage by individual personas? Is it supporting individual APIs? Looking at data knowledge as a category, how advanced are the company’s data dictionaries, business glossaries, catalogs, and master data management plans?

Scoring each set of capabilities reveals a company’s strengths and weaknesses in terms of data preparedness. Until the company takes a closer look, it may not realize how near or far it is from where it needs or wants to be.

4. Learn to deal with data from various sources

Data is coming into organizations from all directions: from inside the company; from IoT devices and video surveillance systems at the edge; and from partners, customers, social media, and the web. The hundreds of zettabytes of worldwide data will have to be selectively managed, protected, and optimized for convenient, productive use.

This is a challenge for enterprises that haven’t developed systems for data collection and data governance. Wherever the data comes from, there needs to be a mechanism for standardizing it so that the data will be usable for a greater benefit.

Different companies and different countries impose different rules on what and how information can be shared. Even individual departments within the same company can run afoul of corporate governance rules designating the paths certain datasets have to follow. That means enforcing data access and distribution policies. To seize these data opportunities, companies need to engineer pathways to discover new datasets and impose governance rules to manage them.

In manufacturing, companies on a supply chain line measure the quality of their parts and suppliers. Often, the machinery and the robotics they’re using are owned by the suppliers. Suppliers may want to set up contracts to see who has the right to use data to protect their own business interests, and manufacturers should define their data sharing requirements with their partners and suppliers up front.

5. Get a strategic commitment from the C-suite

Data benefits many levels of an organization, and personas at each of the affected levels will lobby for a particular aspect of the data value process. Data scientists want more high-powered, easy-to-use technology. Line-of-business leaders push for better, faster insights. At the top of the pyramid is the C-suite, which prioritizes the channeling of data into business value.

It’s critical to get C-level executives on board with a holistic data strategy. Doing it right, after all, can be disruptive. Extracting maximum value from data requires an organization to hire staff with new skill sets, realign its culture, reengineer old processes, and rearchitect the old data platform. It’s a transformation project that can’t be done without getting buy-in from the top levels of a company.

The C-suite is increasingly open to expanding organizations’ use of data. Next to customer engagement, the second highest strategic area of interest at the board level is leveraging data and improving decision-making to remain competitive and exploit changing market conditions, according to the IDC report “Market Analysis Perspective: Worldwide Data Integration and Intelligence Software, 2021.” In the same report, 83% of executives articulated the need to be more data driven than before the pandemic.

How should organizations ensure that the C-suite gets on board? If you’re a stakeholder without a C-level title, your job is to work with your peers to find an executive sponsor to carry the message to leaders who control the decision-making process. Data is a strategic asset that will determine a company’s success in the long run, but it won’t happen without endorsements at the highest levels.

6. In data we trust: Ensure your data is beyond reproach

As AI expands into almost every aspect of modern life, the risks of corrupt or faulty AI practices increase exponentially. This comes down to the quality of the data being used to train the AI models. How was the data produced? Was it based on a faulty sensor? Was bias built into the dataset when it was generated? Did the data come from a single location rather than a statistically valid sample?

Trustworthy AI depends on having trustworthy data that can be used to build transparent, trustworthy, unbiased, and robust models. If you know how a model is trained and you suspect you’re getting faulty results, you can stop the process and retrain the model. Or, if someone questions the model, you can go back and explain why a particular decision was made, but you need to have clean, validated data to reference.

Governments are often asked by policy watchdogs to justify how they’re using AI and to prove that their analyses are not built on biased data. The validity of the algorithms used has sparked debate over efforts to rely on machine learning to guide sentencing decisions, decide welfare benefit claims, and perform other government activities.

The training of the model takes place in steps. You build a model based on data. Then you test the model and gather additional data to retest it. If it passes, you turn it into a more robust production model. The journey continues by adding more data, massaging it, and establishing over time if your model stands up to scrutiny.
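
A minimal sketch of that build-test-retest loop, using stand-in data generated on the fly (the numbers are illustrative only, not a real workload):

    # Hypothetical sketch: build a model, test it, add data, and retest.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def new_batch(n=500):
        # Stand-in for a newly collected batch of labeled data.
        X = rng.normal(size=(n, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
        return X, y

    X_all, y_all = new_batch()
    for round_number in range(3):
        X_train, X_test, y_train, y_test = train_test_split(
            X_all, y_all, test_size=0.2, random_state=round_number)
        model = LogisticRegression().fit(X_train, y_train)
        score = accuracy_score(y_test, model.predict(X_test))
        print(f"round {round_number}: accuracy {score:.3f} on {len(X_all)} examples")

        # Gather more data and retrain if the model doesn't yet stand up to scrutiny.
        X_new, y_new = new_batch()
        X_all = np.vstack([X_all, X_new])
        y_all = np.concatenate([y_all, y_new])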

The lack of an end-to-end system for ensuring high-quality data and sharing it efficiently has indirectly delayed the adoption of AI. According to IDC, 52% of survey respondents believe that data quality, quantity, and access challenges are holding up AI deployments.

7. Seize upon the metadata opportunity

Metadata is defined elliptically as “data that provides information about other data.” It’s what gives data the context users need to understand a piece of information’s characteristics, so they can determine what to do with it in the future.

Metadata standards are commonly used for niche purposes, specific industry applications like astronomical catalogs, or data types like XML files. But there’s also a case to be made for a stronger metadata framework where we can not only define data in common ways but also tag useful data artifacts along its journey. Where did this piece of data originate? Who has viewed it? Who has used it? What has it been used for? Who has added what piece of the dataset? Has the data been verified? Is it prohibited from use in certain situations?
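
One way to picture such a framework is a simple metadata record that travels with a dataset; the fields and values below are hypothetical, not an existing standard:

    # Hypothetical sketch of a metadata record that travels with a dataset.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DatasetMetadata:
        origin: str                  # where the data came from
        owner: str                   # who is accountable for it
        verified: bool               # has the data been validated?
        restricted_uses: List[str]   # situations in which use is prohibited
        access_log: List[str] = field(default_factory=list)  # who has viewed or used it

    record = DatasetMetadata(
        origin="factory-7 vibration sensors",
        owner="reliability-engineering",
        verified=True,
        restricted_uses=["sharing outside supply-chain partners"],
    )
    record.access_log.append("2021-10-14: model-training job #42")
    print(record)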

Developing this kind of metadata mechanism requires a technology layer that is open to contributions from those viewing and touching a particular piece of data. It also requires a commitment from broad sets of stakeholders who see the value of being able to share data strategically and transparently.

Creating an additional open metadata layer would be an important step toward allowing the democratization of access to the data by enabling the transparent sharing of key data attributes necessary for access, governance, trust, and lineage. Hewlett Packard Enterprise’s approach to dataspaces is to open up a universal metadata standard that would remove the current complexities associated with sharing diverse datasets.

8. Embrace the importance of culture

Organizations want to make sure they’re getting the most out of the resources they’re nourishing—and to do that, they need to create cultures that promote best practices for information sharing.

Do you have silos? Are there cultural barriers inside your organization that get in the way of the proper dissemination of information to the right sources at the right times? Do different departments feel they own their data and don’t have to share it with others in the organization? Are individuals hoarding valuable data? Have you set up channels and procedures that promote frictionless data sharing? Have you democratized access to data, giving business stakeholders the ability to not only request data but participate in querying and sharing practices?

If any of these factors are blocking the free flow of data exchange, your organization needs to undergo a change management assessment focusing on its needs across people, processes, and technology.

9. Open things up, but trust no one

In all aspects of business, organizations balance the often conflicting concepts of promoting free and open sharing of resources and tightly controlled security. Achieving this balance is particularly important when dealing with data.

Data needs to be shared, but many data producers are uncomfortable doing so because they fear losing control: their data could be used against them, changed, or used inappropriately.

Security needs to be a top priority. Data is coming from so many sources—some you control, some you don’t—and being passed through so many hands. That means that security policies surrounding data need to be designed with a zero-trust model through every step of the process. Trust has to be established through the entire stack, from your infrastructure and operating systems to the workloads that sit on top of those systems, all the way down to the silicon level.

10. Create a fully functioning data services pipeline

Moving data among systems requires many steps, including moving data to the cloud, reformatting it, and joining it with other data sources. Each of these steps usually requires separate software.

Automating data pipelines is a critical best practice in the data journey. A fully automated data pipeline allows organizations to extract data at the source, transform it into a usable form, and integrate it with other sources.

The data pipeline is the sum of all these steps, and its job is to ensure that these steps happen reliably to all data. These processes should be automated, but most organizations need at least one or two engineers to maintain the systems, repair failures, and update them as the needs of the business change.
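
A minimal sketch of such a pipeline (extract, transform, integrate), in which the file names, columns, and join key are all hypothetical:

    # Hypothetical sketch: a small extract-transform-integrate pipeline.
    import pandas as pd

    def extract(path):
        # Pull data from a source system (here, a CSV export).
        return pd.read_csv(path)

    def transform(df):
        # Reformat into a usable form: normalize column names, drop bad rows.
        return df.rename(columns=str.lower).dropna(subset=["customer_id"])

    def integrate(orders, customers):
        # Join with another source so downstream users see one consistent view.
        return orders.merge(customers, on="customer_id", how="left")

    def run_pipeline():
        orders = transform(extract("orders.csv"))
        customers = transform(extract("customers.csv"))
        return integrate(orders, customers)

    if __name__ == "__main__":
        run_pipeline().to_csv("orders_enriched.csv", index=False)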

Begin the data journey today

How well companies leverage their data—wherever it lives—will determine their success in the years to come. Constellation Research projects that 90% of the current Fortune 500 will be merged, acquired, or bankrupt by 2050. Companies that don’t start their data journeys now will be left behind. The clock is ticking.

Read the original article on Enterprise.nxt.

This content was produced by Hewlett Packard Enterprise. It was not written by MIT Technology Review’s editorial staff.

Facebook wants machines to see the world through our eyes

Thu, 10/14/2021 - 08:01

We take it for granted that machines can recognize what they see in photos and video. That ability rests on large datasets like ImageNet, a hand-curated collection of millions of photos used to train most of the best image-recognition models of the last decade.

But the images in these datasets portray a world of curated objects—a picture gallery that doesn’t capture the mess of everyday life as humans experience it. To get machines to see things as we do will take a wholly new approach. And Facebook’s AI lab wants to take the lead.

It is kickstarting a project, called Ego4D, to build AIs that can understand scenes and activities viewed from a first-person perspective—how things look to the people involved, rather than to an onlooker. Think motion-blurred GoPro footage taken in the thick of the action, instead of well-framed scenes taken by someone on the sidelines. Facebook wants Ego4D to do for first-person video what ImageNet did for photos.

For the last two years, Facebook AI Research (FAIR) has worked with 13 universities around the world to assemble the largest ever dataset of first-person video—specifically to train deep-learning image-recognition models. AIs trained on the dataset will be better at controlling robots that interact with people, or interpreting images from smart glasses. “Machines will be able to help us in our daily lives only if they really understand the world through our eyes,” says Kristen Grauman at FAIR, who leads the project.

Such tech could support people who need assistance around the home, or guide people in tasks they are learning to complete. “The video in this dataset is much closer to how humans observe the world,” says Michael Ryoo, a computer vision researcher at Google Brain and Stony Brook University in New York, who is not involved in Ego4D.

But the potential misuses are clear and worrying. The research is funded by Facebook, a social media giant that has recently been accused in the Senate of putting profits over people’s wellbeing, a sentiment corroborated by MIT Technology Review’s own investigations.

The business model of Facebook, and other Big Tech companies, is to wring as much data as possible from people’s online behavior and sell it to advertisers. The AI outlined in the project could extend that reach to people’s everyday offline behavior, revealing the objects around a person’s home, what activities she enjoyed, who she spent time with, and even where her gaze lingered—an unprecedented degree of personal information.

“There’s work on privacy that needs to be done as you take this out of the world of exploratory research and into something that’s a product,” says Grauman. “That work could even be inspired by this project.”


Ego4D is a step-change. The biggest previous dataset of first-person video consists of 100 hours of footage of people in the kitchen. The Ego4D dataset consists of 3025 hours of video recorded by 855 people in 73 different locations across nine countries (US, UK, India, Japan, Italy, Singapore, Saudi Arabia, Colombia and Rwanda).

The participants had different ages and backgrounds; some were recruited for their visually interesting occupations, such as bakers, mechanics, carpenters, and landscapers.

Previous datasets typically consist of semi-scripted video clips only a few seconds long. For Ego4D, participants wore head-mounted cameras for up to 10 hours at a time and captured first-person video of unscripted daily activities, including walking along a street, reading, doing laundry, shopping, playing with pets, playing board games, and interacting with other people. Some of the footage also includes audio, data about where the participants’ gaze was focused, and multiple perspectives on the same scene. It’s the first dataset of its kind, says Ryoo.

FAIR has also launched a set of challenges that it hopes will focus other researchers’ efforts on developing this kind of AI. The team anticipates algorithms built into smart glasses, like Facebook’s recently announced Ray-Bans, that record and log the wearers’ day-to-day lives. It means augmented- or virtual-reality “metaverse” apps could, in theory, answer questions like “Where are my car keys?” or “What did I eat and who did I sit next to on my first flight to France?” Augmented reality assistants could understand what you’re trying to do and offer instructions or useful social cues.

It’s sci-fi stuff, but closer than you think, says Grauman. Large datasets accelerate research. “ImageNet drove some big advances in a short time,” she says. “We can expect the same for Ego4D, but for first-person views of the world instead of internet images.”

Once the footage had been collected, crowdsourced workers in Rwanda spent a total of 250,000 hours watching the thousands of video clips and writing millions of sentences that describe the scenes and activities filmed. These annotations will be used to train AIs to understand what they are watching.

Where this tech ends up and how quickly it develops remains to be seen. FAIR is planning a competition based on its challenges in June 2022. It is also important to note that FAIR, the research lab, is not the same as Facebook, the media megalodon. In fact, insiders say that Facebook has ignored technical fixes for its toxic algorithms that FAIR has come up with. But Facebook is paying for the research and it is disingenuous to pretend they are not very interested in its application.

Sam Gregory at Witness, a human-rights organization that specializes in video technology, says this technology could be useful for bystanders documenting protests or police abuse. But he thinks those benefits are outweighed by concerns around other commercial applications. He notes that it is possible to identify individuals from how they hold a video camera. Gaze data would be even more revealing: “It’s a very strong indicator of interest,” he says. “How will gaze data be stored? Who will it be accessible to? How might it be processed and used?”

“Facebook’s reputation and core business model ring a lot of alarm bells,” says Rory Mir at the Electronic Frontier Foundation. “At this point many are aware of Facebook’s poor track record on privacy, and their use of surveillance to influence users—both to keep users hooked and to sell that influence to their paying customers, the advertisers.” When it comes to augmented and virtual reality, Facebook is seeking a competitive advantage, he says: “Expanding the amount and types of data it collects is essential.”

When asked about its plans, Facebook was unsurprisingly tight-lipped: “Ego4D is purely research to promote advances in the broader scientific community,” says a spokesperson. “We don’t have anything to share today about product applications or commercial use.”

Covid conspiracy theories are driving people to anti-Semitism online

Wed, 10/13/2021 - 07:41

A warning: Conspiracy theories about covid are helping disseminate anti-Semitic beliefs to a wider audience, warns a new report by the antiracist advocacy group Hope not Hate. The report says that not only has the pandemic revived interest in the “New World Order” conspiracy theory of a secret Jewish-run elite that aims to run the world, but far-right activists have also worked to convert people’s anti-lockdown and anti-vaccine beliefs into active anti-Semitism. 

Worst offenders: The authors easily managed to find anti-Semitism on all nine platforms they investigated, including TikTok, Instagram, Twitter, and YouTube. Some of it uses coded language to avoid detection and moderation by algorithms, but much of it is overt and easily discoverable. Unsurprisingly, the authors found a close link between the amount of anti-Semitism on a platform and how loosely it is moderated: the laxer the moderation, the bigger the problem.

Some specifics: The report warns that the messaging app Telegram has rapidly become one of the worst offenders, playing host to many channels that disseminate anti-Semitic content, some of them boasting tens of thousands of members. One channel that promotes the New World Order conspiracy theory has gained 90,000 followers since its inception in February 2021. However, it’s a problem on every platform. Jewish creators on TikTok have complained that they face a deluge of anti-Semitism on the platform, and they are often targeted by groups who mass-report their accounts in order to get them temporarily banned.

A case study: The authors point to one man who was radicalized during the pandemic as a typical example of how people can end up pushed into adopting more and more extreme views. At the start of 2020 Attila Hildmann was a successful vegan chef in Germany, but in the space of just a year he went from being ostensibly apolitical to “just asking some questions” as a social media influencer to spewing hate and inciting violence on his own Telegram channel. 

What can be done: Many of the platforms investigated have had well over a decade to get a handle on regulating and moderating hate speech, and some progress has been made. However, while major platforms have become better at removing anti-Semitic organizations, they’re still struggling to remove anti-Semitic content produced by individuals, the report warns.

Podcast: The story of AI, as told by the people who invented it

Wed, 10/13/2021 - 05:13

Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable face recognition system.

Credits:

This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens with help from Lindsay Muscato. It’s edited by Michael Reilly and Mat Honan. It’s mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:

[TR ID]

Jennifer: I’m Jennifer Strong, host of In Machines We Trust

I want to tell you about something we’ve been working on for a little while behind the scenes here. 

It’s called I Was There When.

It’s an oral history project featuring the stories of how breakthroughs in artificial intelligence and computing happened… as told by the people who witnessed them.

Joseph Atick: And as I entered the room, it spotted my face, extracted it from the background and it pronounced: “I see Joseph” and that was the moment where the hair on the back… I felt like something had happened. We were a witness. 

Jennifer: We’re kicking things off with a man who helped create the first facial recognition system that was commercially viable… back in the ‘90s…

[IMWT ID]

I am Joseph Atick. Today, I’m the executive chairman of ID for Africa, a humanitarian organization that focuses on giving people in Africa a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After I received my PhD in mathematics, together with my collaborators made some fundamental breakthroughs, which led to the first commercially viable face recognition. That’s why people refer to me as a founding father of face recognition and the biometric industry. The algorithm for how a human brain would recognize familiar faces became clear while we were doing research, mathematical research, while I was at the Institute for Advanced Study in Princeton. But it was far from having an idea of how you would implement such a thing. 

It was a long period of months of programming and failure and programming and failure. And one night, early morning, actually, we had just finalized a version of the algorithm. We submitted the source code for compilation in order to get a run code. And we stepped out, I stepped out to go to the washroom. And then when I stepped back into the room and the source code had been compiled by the machine and had returned. And usually after you compile it runs it automatically, and as I entered the room, it spotted a human moving into the room and it spotted my face, extracted it from the background and it pronounced: “I see Joseph.” and that was the moment where the hair on the back—I felt like something had happened. We were a witness. And I started to call on the other people who were still in the lab and each one of them they would come into the room.

And it would say, “I see Norman. I would see Paul, I would see Joseph.” And we would sort of take turns running around the room just to see how many it can spot in the room. It was, it was a moment of truth where I would say several years of work finally led to a breakthrough, even though theoretically, there wasn’t any additional breakthrough required. Just the fact that we figured out how to implement it and finally saw that capability in action was very, very rewarding and satisfying. We had developed a team which is more of a development team, not a research team, which was focused on putting all of those capabilities into a PC platform. And that was the birth, really the birth of commercial face recognition, I would put it, on 1994. 

My concern started very quickly. I saw a future where there was no place to hide with the proliferation of cameras everywhere and the commoditization of computers and the processing abilities of computers becoming better and better. And so in 1998, I lobbied the industry and I said, we need to put together principles for responsible use. And I felt good for a while, because I felt we have gotten it right. I felt we’ve put in place a responsible use code to be followed by whatever is the implementation. However, that code did not live the test of time. And the reason behind it is we did not anticipate the emergence of social media. Basically, at the time when we established the code in 1998, we said the most important element in a face recognition system was the tagged database of known people. We said, if I’m not in the database, the system will be blind.

And it was difficult to build the database. At most we could build thousand 10,000, 15,000, 20,000 because each image had to be scanned and had to be entered by hand—the world that we live in today, we are now in a regime where we have allowed the beast out of the bag by feeding it billions of faces and helping it by tagging ourselves. Um, we are now in a world where any hope of controlling and requiring everybody to be responsible in their use of face recognition is difficult. And at the same time, there is no shortage of known faces on the internet because you can just scrape, as has happened recently by some companies. And so I began to panic in 2011, and I wrote an op-ed article saying it is time to press the panic button because the world is heading in a direction where face recognition is going to be omnipresent and faces are going to be everywhere available in databases.

And at the time people said I was an alarmist, but today they’re realizing that it’s exactly what’s happening today. And so where do we go from here? I’ve been lobbying for legislation. I’ve been lobbying for legal frameworks that make it a liability for you to use somebody’s face without their consent. And so it’s no longer a technological issue. We cannot contain this powerful technology through technological means. There has to be some sort of legal frameworks. We cannot allow the technology to go too much ahead of us. Ahead of our values, ahead of what we think is acceptable. 

The issue of consent continues to be one of the most difficult and challenging matters when it deals with technology, just giving somebody notice does not mean that it’s enough. To me consent has to be informed. They have to understand the consequences of what it means. And not just to say, well, we put a sign up and this was enough. We told people, and if they did not want to, they could have gone anywhere.

And I also find that there is, it is so easy to get seduced by flashy technological features that might give us a short-term advantage in our lives. And then down the line, we recognize that we’ve given up something that was too precious. And by that point in time, we have desensitized the population and we get to a point where we cannot pull back. That’s what I’m worried about. I’m worried about the fact that face recognition through the work of Facebook and Apple and others. I’m not saying all of it is illegitimate. A lot of it is legitimate.

We’ve arrived at a point where the general public may have become blasé and may become desensitized because they see it everywhere. And maybe in 20 years, you step out of your house. You will no longer have the expectation that you wouldn’t be not. It will not be recognized by dozens of people you cross along the way. I think at that point in time that the public will be very alarmed because the media will start reporting on cases where people were stalked. People were targeted, people were even selected based on their net worth in the street and kidnapped. I think that’s a lot of responsibility on our hands. 

And so I think the question of consent will continue to haunt the industry. And until that question is going to be a result, maybe it won’t be resolved. I think we need to establish limitations on what can be done with this technology.  

My career also has taught me that being ahead too much is not a good thing because face recognition, as we know it today, was actually invented in 1994. But most people think that it was invented by Facebook and the machine learning algorithms, which are now proliferating all over the world. I basically, at some point in time, I had to step down as being a public CEO because I was curtailing the use of technology that my company was going to be promoting because the fear of negative consequences to humanity. So I feel scientists need to have the courage to project into the future and see the consequences of their work. I’m not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves and basically alert the world and the policymakers that this breakthrough has pluses and has minuses. And therefore, in using this technology, we need some sort of guidance and frameworks to make sure it’s channeled for a positive application and not negative.

Jennifer: I Was There When… is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing. 

Do you have a story to tell? Know someone who does? Drop us an email at podcasts@technologyreview.com.

[MIDROLL]

[CREDITS]

Jennifer: This episode was taped in New York City in December of 2020 and produced by me with help from Anthony Green and Emma Cillekens. We’re edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang… with sound design and music by Jacob Gorski. 

Thanks for listening, I’m Jennifer Strong. 

[TR ID]

AI fake-face generators can be rewound to reveal the real faces they trained on

Tue, 10/12/2021 - 05:15

Load up the website This Person Does Not Exist and it’ll show you a human face, near-perfect in its realism yet totally fake. Refresh and the neural network behind the site will generate another, and another, and another. The endless sequence of AI-crafted faces is produced by a generative adversarial network (GAN)—a type of AI that learns to produce realistic but fake examples of the data it is trained on. 
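To make the generator-versus-discriminator idea concrete, here is a minimal, hypothetical sketch of a GAN trained on a toy one-dimensional dataset rather than on faces. The architecture, sizes, and training settings are illustrative assumptions, not the setup behind This Person Does Not Exist.

```python
# Minimal GAN sketch on toy 1-D data. Real face generators are far larger
# convolutional networks trained on millions of photos.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data: samples from a Gaussian the generator must learn to imitate.
    return torch.randn(n, 1) * 0.5 + 3.0

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: separate real samples (label 1) from fakes (label 0).
    real = real_batch(64)
    fake = gen(torch.randn(64, 8)).detach()
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: produce fakes the discriminator labels as real.
    fake = gen(torch.randn(64, 8))
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("generated samples:", gen(torch.randn(5, 8)).detach().flatten())
```

After training, the generator maps random noise to samples that mimic the "real" distribution, which is the same principle a face GAN applies at vastly larger scale.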

But such generated faces—which are starting to be used in CGI movies and ads—might not be as unique as they seem. In a paper titled This Person (Probably) Exists, researchers show that many faces produced by GANs bear a striking resemblance to actual people who appear in the training data. The fake faces can effectively unmask the real faces the GAN was trained on, making it possible to expose the identity of those individuals. The work is the latest in a string of studies that call into question the popular idea that neural networks are “black boxes” that reveal nothing about what goes on inside.

To expose the hidden training data, Ryan Webster and his colleagues at the University of Caen Normandy in France used a type of attack called a membership attack, which can be used to find out whether certain data was used to train a neural network model. These attacks typically take advantage of subtle differences between the way a model treats data it was trained on—and has thus seen thousands of times before—and unseen data.

For example, a model might identify a previously unseen image accurately but with slightly less confidence than one it was trained on. A second, attacking model can learn to spot such tells in the first model’s behavior and use them to predict whether certain data, such as a photo, was in the training set or not.
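As a rough illustration of how such an attack works, the sketch below trains a simple classifier on synthetic data and then uses prediction confidence to guess which records were in the training set. The model, data, and threshold are all illustrative assumptions, not the method used in the research described here.

```python
# Confidence-based membership inference on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic "private" dataset; half of it is used to train the target model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_members, y_members = X[:1000], y[:1000]      # seen during training
X_outsiders, y_outsiders = X[1000:], y[1000:]  # never seen

# The target model an attacker can only query.
target = LogisticRegression(max_iter=1000).fit(X_members, y_members)

# The attack signal: the model's confidence in its top prediction.
conf_members = target.predict_proba(X_members).max(axis=1)
conf_outsiders = target.predict_proba(X_outsiders).max(axis=1)

# A crude attack: guess "member" whenever confidence clears a threshold.
# Real attacks typically train a second model on such signals instead.
threshold = np.median(np.concatenate([conf_members, conf_outsiders]))
true_positive = (conf_members > threshold).mean()
false_positive = (conf_outsiders > threshold).mean()
print(f"attack flags {true_positive:.0%} of true members "
      f"and {false_positive:.0%} of non-members")
```

The gap between those two rates is what the attacker exploits: the wider it is, the more reliably membership can be inferred.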

Such attacks can lead to serious security leaks. For example, finding out that someone’s medical data was used to train a model associated with a disease might reveal that this person has that disease.

Webster’s team extended this idea so that instead of identifying the exact photos used to train a GAN, they identified photos in the GAN’s training set that were not identical but appeared to portray the same individual—in other words, faces with the same identity. To do this, the researchers first generated faces with the GAN and then used a separate facial-recognition AI to detect whether the identity of these generated faces matched the identity of any of the faces seen in the training data.
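The matching step can be summarized in a few lines. In the sketch below, the GAN and the face-recognition model are replaced with random stand-in embeddings so the pipeline runs end to end; the embedding sizes and similarity threshold are placeholders, not values from the paper.

```python
# Schematic identity matching between generated faces and training faces.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in the real attack these would be identity embeddings produced
# by a face-recognition model for (a) the GAN's training images and
# (b) freshly generated fake faces.
train_embeddings = rng.normal(size=(500, 128))
fake_embeddings = rng.normal(size=(5, 128))

def cosine_similarity(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# For each fake face, find its nearest training face.
sims = cosine_similarity(fake_embeddings, train_embeddings)
best_match = sims.argmax(axis=1)
best_score = sims.max(axis=1)

# Flag likely identity matches. With random stand-ins nothing should clear
# the (placeholder) threshold; with real embeddings of the same person,
# similarities would be far higher.
THRESHOLD = 0.6
for i, (j, score) in enumerate(zip(best_match, best_score)):
    label = "possible identity match" if score > THRESHOLD else "no match"
    print(f"fake face {i}: closest training face {j}, "
          f"similarity {score:.2f} ({label})")
```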

The results are striking. In many cases, the team found multiple photos of real people in the training data that appeared to match the fake faces generated by the GAN, revealing the identity of individuals the AI had been trained on.

The left-hand column in each block shows faces generated by a GAN. These fake faces are followed by three photos of real people identified in the training data. (Image: University of Caen Normandy)

The work raises some serious privacy concerns. “The AI community has a misleading sense of security when sharing trained deep neural network models,” says Jan Kautz, vice president of learning and perception research at Nvidia. 

In theory this kind of attack could apply to other data tied to an individual, such as biometric or medical data. On the other hand, Webster points out that the technique could also be used by people to check if their data has been used to train an AI without their consent.

An artist could check if their work had been used to train a GAN in a commercial tool, he says: “You could use a method such as ours for evidence of copyright infringement.”

The process could also be used to make sure GANs don’t expose private data in the first place. The GAN could check if its creations resembled real examples in its training data, using the same technique developed by the researchers, before releasing them.

Yet this assumes that you can get hold of that training data, says Kautz. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data and more, that does not require access to training data at all.

Instead, they developed an algorithm that can recreate the data that a trained model has been exposed to by reversing the steps that the model goes through when processing that data. Take a trained image-recognition network: to identify what’s in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges, to shapes, to more recognizable features.

Kautz’s team found that they could interrupt a model in the middle of these steps and reverse its direction, recreating the input image from the internal data of the model. They tested the technique on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately recreate images from ImageNet, one of the best known image recognition datasets.
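One generic way to “run a network in reverse” is to optimize a candidate input until it reproduces the activations observed inside the model. The sketch below shows that idea on a toy network; it is not the specific algorithm developed by Kautz’s team, and the layer sizes, learning rate, and iteration count are assumptions.

```python
# Reconstructing an input from a model's intermediate activations.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy image-recognition model; we pretend an attacker can read the output
# of its first few layers.
early_layers = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
late_layers = nn.Sequential(nn.Linear(256, 10))

x_true = torch.rand(1, 1, 28, 28)      # the private input image
with torch.no_grad():
    observed = early_layers(x_true)    # the internal data the attacker sees

# Start from noise and optimize until the guess yields the same activations.
x_guess = torch.rand(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([x_guess], lr=0.05)
for _ in range(500):
    optimizer.zero_grad()
    loss = ((early_layers(x_guess) - observed) ** 2).mean()
    loss.backward()
    optimizer.step()

print("pixel-wise reconstruction error:",
      ((x_guess - x_true) ** 2).mean().item())
```

Because early layers discard relatively little information, a guess that matches their activations tends to look much like the original input, which is the intuition behind this family of attacks.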

Images from ImageNet (top) alongside recreations of those images made by rewinding a model trained on ImageNet (bottom). (Image: Nvidia)

Like Webster’s work, the recreated images closely resemble the real ones. “We were surprised by the final quality,” says Kautz.

The researchers argue that this kind of attack is not simply hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, models are sometimes only half-processed on the device itself and sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers assume that split computing won’t reveal any private data from a person’s phone because only the model is shared, says Kautz. But his attack shows that this isn’t the case.
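A hedged sketch of the split-computing setup makes clear what actually leaves the device: not the photo itself, but the intermediate features computed by the first layers, which is exactly the internal data the attack above starts from. The layer split and tensor shapes here are illustrative assumptions.

```python
# Split computing: the device runs the first layers and ships only the
# intermediate features to the cloud, which finishes the computation.
import torch
import torch.nn as nn

torch.manual_seed(0)

device_half = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten())
cloud_half = nn.Sequential(nn.Linear(8 * 4 * 4, 10))

image = torch.rand(1, 3, 64, 64)   # stays on the phone
features = device_half(image)      # this tensor is what gets uploaded
logits = cloud_half(features)      # computed in the data center
print("uploaded tensor shape:", tuple(features.shape),
      "predicted class:", logits.argmax(dim=1).item())
```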

Kautz and his colleagues are now working on ways to prevent models from leaking private data. They wanted to understand the risks so they could minimize vulnerabilities, he says.

Even though they use very different techniques, he thinks that his work and Webster’s complement each other well. Webster’s team showed that private data could be found in the output of a model; Kautz’s team showed that private data could be revealed by going in reverse, recreating the input. “Exploring both directions is important to come up with a better understanding of how to prevent attacks,” says Kautz.

The covid tech that is intimately tied to China’s surveillance state

Mon, 10/11/2021 - 07:00

Sometime in mid-2019, a police contractor in the Chinese city of Kuitun tapped a young college student from the University of Washington on the shoulder as she walked through a crowded market intersection. The student, Vera Zhou, didn’t notice the tapping at first because she was listening to music through her earbuds as she weaved through the crowd. When she turned around and saw the black uniform, the blood drained from her face. Speaking in Chinese, Vera’s native language, the police officer motioned her into a nearby People’s Convenience Police Station—one of more than 7,700 such surveillance hubs that now dot the region.       

On a monitor in the boxy gray building, she saw her face surrounded by a yellow square. On other screens she saw pedestrians walking through the market, their faces surrounded by green squares. Beside the high-definition video still of her face, her personal data appeared in a black text box. It said that she was Hui, a member of a Chinese Muslim group that makes up around 1 million of the 15 million Muslims in Northwest China. The alarm had gone off because she had walked beyond the parameters of the policing grid of her neighborhood confinement. As a former detainee in a re-education camp, she was not officially permitted to travel to other areas of town without explicit permission from both her neighborhood watch unit and the Public Security Bureau. The yellow square around her face on the screen indicated that she had once again been deemed a “pre-criminal” by the digital enclosure system that held Muslims in place. Vera said at that moment she felt as though she could hardly breathe.

This story is an edited excerpt from In the Camps: China’s High-Tech Penal Colony by Darren Byler (Columbia Global Reports, 2021.)

Kuitun is a small city of around 285,000 in Xinjiang’s Tacheng Prefecture, along the Chinese border with Kazakhstan. Vera had been trapped there since 2017 when, in the middle of her junior year as a geography student at the University of Washington (where I was an instructor), she had taken a spur-of-the-moment trip back home to see her boyfriend. After a night at a movie theater in the regional capital Ürümchi, her boyfriend received a call asking him to come to a local police station. There, officers told him they needed to question his girlfriend: they had discovered some suspicious activity in Vera’s internet usage, they said. She had used a virtual private network, or VPN, in order to access “illegal websites,” such as her university Gmail account. This, they told her later, was a “sign of religious extremism.”   

It took some time for what was happening to dawn on Vera. Perhaps since her boyfriend was a non-Muslim from the majority Han group and they did not want him to make a scene, at first the police were quite indirect about what would happen next. They just told her she had to wait in the station. 

When she asked if she was under arrest, they refused to respond. 

“Just have a seat,” they told her. By this time she was quite frightened, so she called her father back in her hometown and told him what was happening. Eventually, a police van pulled up to the station: She was placed in the back, and once her boyfriend was out of sight, the police shackled her hands behind her back tightly and shoved her roughly into the back seat.     

Pre-criminals

Vera Zhou didn’t think the war on terror had anything to do with her. She considered herself a non-religious fashionista who favored chunky earrings and dressing in black. She had gone to high school near Portland, Oregon, and was on her way to becoming an urban planner at a top-ranked American university. She had planned to reunite with her boyfriend after graduation and have a career in China, where she thought of the economy as booming. She had no idea that a new internet security law had been implemented in her hometown and across Xinjiang at the beginning of 2017, and that this was how extremist “pre-criminals,” as state authorities referred to them, were being identified for detention. She did not know that a newly appointed party secretary of the region had given a command to “round up everyone who should be rounded up” as part of the “People’s War.”                               

Now, in the back of the van, she felt herself losing control in a wave of fear. She screamed, tears streaming down her face, “Why are you doing this? Doesn’t our country protect the innocent?” It seemed to her like it was a cruel joke, like she had been given a role in a horror movie, and that if she just said the right things they might snap out of it and realize it was all a mistake.       

For the next few months, Vera was held with 11 other Muslim minority women in a second-floor cell in a former police station on the outskirts of Kuitun. Like Vera, others in the room were also guilty of cyber “pre-crimes.” A Kazakh woman had installed WhatsApp on her phone in order to contact business partners in Kazakhstan. A Uyghur woman who sold smartphones at a bazaar had allowed multiple customers to register their SIM cards using her ID card.

Around April 2018, without warning, Vera and several other detainees were released on the provision that they report to local social stability workers on a regular basis and not try to leave their home neighborhoods.    

Whenever her social stability worker shared something on social media, Vera was always the first person to support her by liking it and posting it to her own account.

Every Monday, her probation officer required that Vera go to a neighborhood flag-raising ceremony and participate by loudly singing the Chinese national anthem and making statements pledging her loyalty to the Chinese government. By this time, due to widely circulated reports of detention for cyber-crimes in the small town, it was known that online behavior could be detected by the newly installed automated internet surveillance systems. Like everyone else, Vera recalibrated her online behavior. Whenever the social stability worker assigned to her shared something on social media, Vera was always the first person to support her by liking it and posting it on her own account. Like everyone else she knew, she started to “spread positive energy” by actively promoting state ideology.

After she was back in her neighborhood, Vera felt that she had changed. She thought often about the hundreds of detainees she had seen in the camp. She feared that many of them would never be allowed out since they didn’t know Chinese and had been practicing Muslims their whole lives. She said her time in the camp also made her question her own sanity. “Sometimes I thought maybe I don’t love my country enough,” she told me. “Maybe I only thought about myself.”

But she also knew that what had happened to her was not her fault. It was the result of Islamophobia being institutionalized and focused on her. And she knew with absolute certainty that an immeasurable cruelty was being done to Uyghurs and Kazakhs because of their ethno-racial, linguistic, and religious differences.

“I just started to stay home all the time”

Like all detainees, Vera had been subjected to a rigorous biometric data collection that fell under the population-wide assessment process called “physicals for all,” before she was taken to the camps. The police had scanned Vera’s face and irises, recorded her voice signature, and collected her blood, fingerprints, and DNA—adding this precise high-fidelity data to an immense dataset that was being used to map the behavior of the population of the region. They had also taken her phone away to have it and her social media accounts scanned for Islamic imagery, connections to foreigners, and other signs of “extremism.” Eventually they gave it back, but without any of the US-made apps like Instagram.       

For several weeks, she began to find ways around the many surveillance hubs that had been built every several hundred meters. Outside of high-traffic areas many of them used regular high-definition surveillance cameras that could not detect faces in real time. Since she could pass as Han and spoke standard Mandarin, she would simply tell the security workers at checkpoints that she forgot her ID and would write down a fake number. Or sometimes she would go through the exit of the checkpoint, “the green lane,” just like a Han person, and ignore the police. 

One time, though, when going to see a movie with a friend, she forgot to pretend that she was Han. At a checkpoint at the theater she put her ID on the scanner and looked into the camera. Immediately an alarm sounded and the mall police contractors pulled her to the side. As her friend disappeared into the crowd, Vera worked her phone frantically to delete her social media account and erase the contacts of people who might be detained because of their association with her. “I realized then that it really wasn’t safe to have friends. I just started to stay at home all the time.”       

Eventually, like many former detainees, Vera was forced to work as an unpaid laborer. The local state police commander in her neighborhood learned that she had spent time in the United States as a college student, so he asked Vera’s probation officer to assign her to tutor his children in English. 

“I thought about asking him to pay me,” Vera remembers. “But my dad said I need to do it for free. He also sent food with me for them, to show how eager he was to please them.” 

The commander never brought up any form of payment.   

In October 2019, Vera’s probation officer told her that she was happy with Vera’s progress and she would be allowed to continue her education back in Seattle. She was made to sign vows not to talk about what she had experienced. The officer said, “Your father has a good job and will soon reach retirement age. Remember this.”   

In the fall of 2019, Vera returned to Seattle. Just a few months later, across town, Amazon—the world’s wealthiest technology company—received a shipment of 1,500 heat-mapping camera systems from the Chinese surveillance company Dahua. Many of these systems, which were collectively worth around $10 million, were to be installed in Amazon warehouses to monitor the heat signatures of employees and alert managers if workers exhibited covid symptoms. Other cameras included in the shipment were distributed to IBM and Chrysler, among other buyers.               

Dahua was just one of the Chinese companies that was able to capitalize on the pandemic. As covid began to move beyond the borders of China in early 2020, a group of medical research companies owned by the Beijing Genomics Institute, or BGI, radically expanded, establishing 58 labs in 18 countries and selling 35 million covid-19 tests to more than 180 countries. In March 2020, companies such as Russell Stover Chocolates and US Engineering, a Kansas City, Missouri–based mechanical contracting company, bought $1.2 million worth of tests and set up BGI lab equipment in University of Kansas Medical System facilities.

And while Dahua sold its equipment to companies like Amazon, Megvii, one of its main rivals, deployed heat-mapping systems to hospitals, supermarkets, campuses in China, and to airports in South Korea and the United Arab Emirates.           

Yet while the speed and intent of this effort to protect workers in the absence of an effective national-level US response were admirable, these Chinese companies are also tied up in egregious human rights abuses.

Dahua is one of the major providers of “smart camp” systems that Vera Zhou experienced in Xinjiang (the company says its facilities are supported by technologies such as “computer vision systems, big data analytics and cloud computing”). In October 2019, both Dahua and Megvii were among eight Chinese technology firms placed on a list that blocks US citizens from selling goods and services to them (the list, which is intended to prevent US firms from supplying non-US firms deemed a threat to national interests, prevents Amazon from selling to Dahua, but not buying from them). BGI’s subsidiaries in Xinjiang were placed on the US no-trade list in July 2020.           

Amazon’s purchase of Dahua heat-mapping cameras recalls an older moment in the spread of global capitalism that was captured by historian Jason Moore’s memorable turn of phrase: “Behind Manchester stands Mississippi.” 

What did Moore mean by this? In his rereading of Friedrich Engels’s analysis of the textile industry that made Manchester, England, so profitable, he saw that many aspects of the British Industrial Revolution would not have been possible without the cheap cotton produced by slave labor in the United States. In a similar way, the ability of Seattle, Kansas City, and Seoul to respond as rapidly as they did to the pandemic relies in part on the way systems of oppression in Northwest China have opened up a space to train biometric surveillance algorithms. 

Protecting workers during the pandemic depends on forgetting about college students like Vera Zhou. It means ignoring the dehumanization of thousands upon thousands of detainees and unfree workers.

At the same time, Seattle also stands before Xinjiang. 

Amazon has its own role in involuntary surveillance that disproportionately harms ethno-racial minorities given its partnership with US Immigration and Customs Enforcement to target undocumented immigrants and its active lobbying efforts in support of weak biometric surveillance regulation. More directly, Microsoft Research Asia, the so-called “cradle of Chinese AI,” has played an instrumental role in the growth and development of both Dahua and Megvii.     

Chinese state funding, global terrorism discourse, and US industry training are three of the primary reasons why a fleet of Chinese companies now leads the world in face and voice recognition. This process was accelerated by a war on terror that centered on placing Uyghurs, Kazakhs, and Hui within a complex digital and material enclosure, but it now extends throughout the Chinese technology industry, where data-intensive infrastructure systems produce flexible digital enclosures throughout the nation, though not at the same scale as in Xinjiang.       

China’s vast and rapid response to the pandemic has further accelerated this process by rapidly implementing these systems and making clear that they work. Because they extend state power in such sweeping and intimate ways, they can effectively alter human behavior. 

Alternative approaches

The Chinese approach to the pandemic is not the only way to stop it, however. Democratic states like New Zealand and Canada, which have provided testing, masks, and economic assistance to those forced to stay home, have also been effective. These nations make clear that involuntary surveillance is not the only way to protect the well-being of the majority, even at the level of the nation.

In fact, numerous studies have shown that surveillance systems support systemic racism and dehumanization by making targeted populations detainable. The past and current US administrations’ use of the Entity List to halt sales to companies like Dahua and Megvii, while important, is also producing a double standard, punishing Chinese firms for automating racialization while funding American companies to do similar things. 

Increasing numbers of US-based companies are attempting to develop their own algorithms to detect racial phenotypes, though through a consumerist approach that is premised on consent. By making automated racialization a form of convenience in marketing things like lipstick, companies like Revlon are hardening the technical scripts that are available to individuals. 

As a result, in many ways race continues to be an unthought part of how people interact with the world. Police in the United States and in China think about automated assessment technologies as tools they have to detect potential criminals or terrorists. The algorithms make it appear normal that Black men or Uyghurs are disproportionately detected by these systems. They stop the police, and those they protect, from recognizing that surveillance is always about controlling and disciplining people who do not fit into the vision of those in power. The world, not China alone, has a problem with surveillance.

To counteract the increasing banality, the everydayness, of automated racialization, the harms of biometric surveillance around the world must first be made apparent. The lives of the detainable must be made visible at the edge of power over life. Then the role of world-class engineers, investors, and public relations firms in the unthinking of human experience, in designing for human reeducation, must be made clear. The webs of interconnection—the way Xinjiang stands behind and before Seattle—must be made thinkable.


—This story is an edited excerpt from In the Camps: China’s High-Tech Penal Colony, by Darren Byler (Columbia Global Reports, 2021). Darren Byler is an assistant professor of international studies at Simon Fraser University, focused on the technology and politics of urban life in China.

Video: How cheap renewables and rising activism are shifting climate politics

Fri, 10/08/2021 - 05:00

The plummeting costs of renewables, the growing strength of the clean energy sector, and the rising influence of activists have begun to shift the politics of climate action in the US, panelists argued during MIT Technology Review’s annual EmTech conference last week.

Those forces allowed President Joe Biden to put climate change at the center of his campaign and helped build momentum behind the portfolio of clean energy policies and funding measures in the infrastructure and reconciliation packages under debate in the US Congress, said Bill McKibben, the climate author and founder of the environmental activist group 350.org, during the September 30 session.

You can view the full video of the session below:

The measures will mark the first major climate laws in the nation if they pass in something close to their current form. Most notably, they include the Clean Electricity Performance Program, which uses payments and penalties to encourage utilities to boost their share of electricity from carbon-free sources (read our earlier explainer here).

Other speakers on the panel, titled Cleaning Up the Power Sector, advised on the creation of that program. They included Leah Stokes, an associate professor focused on energy and climate policy at the University of California, Santa Barbara; and Jesse Jenkins, an assistant professor and energy systems researcher at Princeton University.

“A writer, a political scientist, and an energy modeler walk into an MIT panel …”

Julian Brave Noisecat

They argued during the session that the legislation, designed to ensure that 80% of the nation’s electricity comes from clean sources by 2030, is more effective and politically feasible than competing approaches, including the carbon taxes favored by many economists.

“When … we say to people, ‘We’re going to make it more expensive for you to use an essential good, which is energy,’ that isn’t very popular,” Stokes said. “That theory of political change has run up against the reality of income inequality in this country.”

“The different paradigm is to say, ‘Rather than making it more expensive to use fossil fuels, let’s help make it cheaper to use the clean stuff,’” she added.

But it remains to be seen whether the clean electricity measure and the other climate provisions will pass, and in what form. Even some Democratic senators in the narrowly divided Congress have pushed back on what they portray as excessive spending in the bills.

For all the progress on climate issues, well-funded and politically influential utility and fossil-fuel interests continue to impede efforts to overhaul energy systems at the speed and scale required, stressed Julian Brave Noisecat, vice president of policy and strategy at Data for Progress, who moderated the session.

“These interests are remarkably entrenched and remain so despite significant grassroots opposition,” he said.

If legislators defang the key climate provisions, it will slow the shift to clean energy in the US and undermine the negotiating power of Biden’s climate czar, John Kerry, in the UN climate conference early next month. Should the US fail to enact aggressive new climate measures, “rest assured that will limit everybody else’s ambition, too,” McKibben said.

The moon didn’t die as early as we thought

Thu, 10/07/2021 - 14:00

The moon may have been more volcanically active than we realized.

Lunar samples that China’s Chang’e 5 spacecraft brought to Earth are revealing new clues about volcanoes and lava plains on the moon’s surface. In a study published today in Science, researchers describe the youngest lava samples ever collected on the moon.  

The samples were taken from Oceanus Procellarum, a region known for having had huge lakes of lava that have since solidified into basalt rock. The sample they analyzed most closely indicates that the moon experienced an era of volcanic activity that lasted longer than scientists previously thought.

Researchers compared fragments from within that same sample to determine when molten magma had crystallized. The results surprised them. In their early lives, small, rocky bodies like the moon typically cool faster than larger ones. But their observations showed that wasn’t necessarily the case for our closest heavenly neighbor.   

“The expectation is that the moon is so small that it will probably be dead very quickly after formation,” says Alexander Nemchin, a professor of geology at Curtin University in Perth, Australia, and a co-author of the study. “This young sample contradicts this concept, and in some way, we need to rethink our view of the moon a little bit, or maybe quite a lot.”

Using isotope dating and a technique based on lunar crater chronology, which involves estimating the age of an object in space in part by counting the craters on its surface, the team determined that lava flowed in Oceanus Procellarum as recently as 2 billion years ago.  
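As a toy illustration of the crater-counting half of that approach, the sketch below converts a crater count over a mapped area into a crude age estimate by inverting a calibration curve. The calibration function, crater count, and area are all made-up placeholders, not the real lunar chronology function or the study’s numbers.

```python
# Toy crater-count dating: crater density -> surface age via a calibration curve.
import numpy as np

def toy_chronology(age_gyr):
    """Hypothetical cumulative crater density (craters per km^2 above some
    reference size) as a function of surface age in billions of years."""
    return 1e-4 * age_gyr + 3e-6 * (np.exp(2.0 * age_gyr) - 1.0)

def estimate_age(observed_density, ages=np.linspace(0.1, 4.5, 1000)):
    """Invert the calibration curve numerically: pick the age whose
    predicted crater density is closest to the observed one."""
    predicted = toy_chronology(ages)
    return ages[np.argmin(np.abs(predicted - observed_density))]

# Example: counting craters over the mapped area gives a density estimate.
n_craters, area_km2 = 240, 1_000_000
density = n_craters / area_km2
print(f"estimated surface age: {estimate_age(density):.2f} billion years")
```

Older surfaces accumulate more impacts, so a lower crater density points to a younger surface; isotope dating of returned samples is what anchors such curves to absolute ages.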

Chang’e 5 was China’s first lunar sample-return mission and the first probe to bring back lunar material since 1976. The mission, which launched in late November 2020 and returned that December, is one of at least eight phases in China’s lunar program to explore the entirety of the moon.

Nemchin says there’s no evidence that radioactive elements that generate heat (such as potassium, thorium, and uranium) exist in high concentrations below the moon’s mantle. That means those elements probably didn’t cause these lava flows, as scientists had thought. Now, they will have to look for other explanations for how the flows formed.

The moon’s volcanic history could teach us more about the Earth’s. According to the giant impact theory, the moon may just be a chunk of Earth that got knocked loose when our planet collided with another one. 

“Anytime we get new or improved information about the age of the stuff on the moon, that has a knock-on effect for not just understanding the universe, but volcanism and even just general geology on other planets,” says Paul Byrne, an associate professor of earth and planetary sciences at Washington University in St. Louis, who was not involved in the study.  

Volcanic activity not only shaped how the moon looks—those old lava beds are visible to the naked eye today as huge dark patches on the moon’s surface—but may even help answer the question of whether we’re alone in the universe, Byrne says.  

“The search for extraterrestrial life in part requires understanding habitability,” Byrne says. Volcanic activity plays a role in the cultivation of atmospheres and oceans, key components for life. But what exactly these new findings tell us about potential life elsewhere remains to be seen.  

It’s 20 years since the first drone strike. It’s time to admit they’ve failed.

Thu, 10/07/2021 - 05:00

After the Taliban took over Kabul in mid-August, a black-bearded man with a Kalashnikov appeared on the streets. He visited former politicians and gave a sermon during Friday prayers at the capital’s historic Pul-e-Khishti mosque. But the man, passionate and seemingly victorious, was no mere Taliban fighter among tens of thousands of others: he was Khalil ur-Rahman Haqqani, a Taliban leader prominent in the Haqqani Network, the group’s notorious military wing. 

Ten years ago, the US placed a $5 million bounty on his head, so his appearance generated plenty of commentary about how he was openly traveling around Kabul—indeed, in September the Taliban even made him Afghanistan’s minister of refugees. 

But what the gossip and the op-eds didn’t mention was that the real surprise wasn’t Haqqani’s public appearances—it was that he was appearing at all: Multiple times over the last two decades, the US military thought they’d killed him in drone strikes.

Clearly Haqqani is alive and well. But that raises a glaring question: if Khalil ur-Rahman Haqqani wasn’t killed in those US drone strikes, who was?

The usual bland response is “terrorists,” an answer now institutionalized by the highest levels of the US security state. But the final days of the US withdrawal from Afghanistan showed that is not necessarily true. A day after an attack on troops at Kabul’s teeming airport, for example, the US responded with a “targeted” drone strike in the capital. Afterward it emerged that the attack had killed 10 members of one family, all of whom were civilians. One of the victims had served as an interpreter for the US in Afghanistan and had a Special Immigrant Visa ready. Seven victims were children. This did not match the generic success story the Biden administration initially told.

Something different happened with this strike, however. For years, most of the aerial operations the US has conducted took place in remote, rural locations where few facts could be verified and not many people could go to the scene. 

But this strike took place in the middle of the country’s capital. 

Journalists and investigators could visit the site, which meant they could easily fact-check everything the United States was claiming—and what had actually happened soon became clear. First, local Afghan television channels, like Tolo News, showed the family members of the victims. With so much attention being paid to the withdrawal from Afghanistan, international media outlets started to arrive, too. A detailed report by the New York Times forced Washington to retract its earlier claims. “It was a tragic mistake,” the Pentagon said during a press conference, as it was forced to admit that the strike had killed innocent civilians with no links to ISIS.

In fact, America’s last drone strike in Afghanistan was eerily similar to its first one.

In fact, America’s last drone strike in Afghanistan—its last high-profile act of violence—was eerily similar to its very first one. 

On October 7, 2001, the United States and its allies invaded Afghanistan in order to topple the Taliban regime. That day the first drone operation in history took place. An armed Predator drone flew over the southern province of Kandahar, known as the Taliban’s capital, which was the home of Mullah Mohammad Omar, the group’s supreme leader. Operators pushed the button to kill Omar, firing two Hellfire missiles at a group of bearded Afghans in loose robes and turbans. But afterward, he was not found among them. In fact, he evaded the allegedly precise drones for more than a decade, eventually dying of natural causes in a hideout mere miles from a sprawling US base. Instead, America left a long trail of Afghan blood in its attempts to kill him and his associates.

“The truth is that we could not differentiate between armed fighters and farmers, women, or children, ” Lisa Ling, a former drone technician with the US military who has become a whistleblower, told me. “This kind of warfare is wrong on so many levels.”

More than 1,100 people in Pakistan and Yemen were killed between 2004 and 2014 during the hunt for 41 targets, according to the British human rights organization Reprieve. Most of those targets are men who are still alive, like the Haqqanis, or Al-Qaeda leader Ayman al-Zawahiri, who just published another book while thousands of people have been murdered by drones meant for him. As far back as 2014, the London-based Bureau of Investigative Journalism revealed that only 4% of drone victims in Pakistan were identified as militants linked to Al-Qaeda. It also underlined that the CIA itself, which was responsible for the strikes in the country, did not know the affiliation of everyone it killed. “They identified hundreds of those killed as simply Afghan or Pakistani fighters,” or as “unknown,” the report stated.

And yet many US military officials and politicians continue to spin the drone narrative. Even the targeted militant groups have joined in: for a couple of years, the Taliban have been using armed commercial drones to attack their enemies, portraying drones as technologically superior—just as American officials had done before them. “The drone’s targeting system is very exact,” one member of the Taliban’s drone unit recently told Afghan journalist Fazelminallah Qazizai.

The Taliban don’t have the same drone resources as the US. They aren’t backed by a global assassination network of operators and weather experts. Nor do they have a satellite relay station like the one at Ramstein Air Base in Germany, which was described as the heart of the US drone war in documents supplied by Daniel Hale, a former intelligence analyst who became a whistleblower. 

(Hale, too, has revealed evidence showing that most drone victims in Afghanistan were civilians. His reward was 45 months in prison.) 

But even though they don’t have the same means as the US, the Taliban too have been convinced that drones are the perfect weapons. “We work for our ideology,” a Taliban drone operator told Qazizai. 

Even though they know strikes regularly miss their targets, it seems that they—just like the US—have a blind faith in technology. 

—Emran Feroz is an independent journalist, an author, and the founder of Drone Memorial, a virtual memorial for civilian drone strike victims. 
