Feed aggregator

Many businesses are still failing to secure remote workers

IT Portal from UK - 9 hours 1 min ago

Many businesses are still failing to properly secure their remote workforce, risking data breaches, downtime, loss of revenue and large fines.

According to a new report from IT security firm Distology, almost half (48 percent) of IT leaders consider their cybersecurity posture subpar, at least when it comes to remote working. At the same time, attacks are becoming more common and techniques more sophisticated.

At the heart of these insufficiencies lie education and legacy tech, the report suggests. More than a third (37 percent) of respondents confirmed employees in their organization haven’t been educated on how to avoid a security breach, while 57 percent worry about employees using the same password across multiple platforms.

Meanwhile, employees in almost half (46 percent) of the organizations surveyed said they were using decade-old technologies. 

Businesses should review their cybersecurity strategy, Distology argues, which would help keep businesses, stakeholders and sensitive data safe, while ensuring employees are future-proofed.

“Technology from five years ago, let alone ten years ago, wasn’t built with today’s threats in mind. And, as threat actors show no signs of reducing in intelligence, outdated security solutions make it so much easier for attackers to exploit a business’ weaknesses. In addition, every employee in every organization should have at the very least, a basic level of training on how to spot and avoid a potential cyber-attack,” said Lance Williams, Chief Product Officer at Distology.

For almost half of the respondents (46 percent), the biggest threat lies in the continuing advancement of attack technologies and techniques, which is why many are looking to update their cybersecurity measures. Most are looking to obtain next-gen firewalls, deploy multi-factor authentication and ensure secure remote access.

De-hyping technology: A tool to achieve business outcomes

IT Portal from UK - 11 hours 58 sec ago

According to a recent forecast by Statista, the global big data market is on track to reach $103 billion by 2027, more than double its expected market size in 2018. Companies are becoming wise to the central role data analytics will play in their success in the age of digitization, but the conversation remains weighted towards the ‘what’ of tech, rather than the ‘how’ and ‘why’. It’s not enough to have the latest software or the smartest app; progressive businesses must really grasp and understand their data - and therefore their customers - if they are to seize the golden opportunity that digitization presents to their future success.

Covid-19 has accelerated the adoption of and awareness surrounding digitization, and there's no question that companies must transform in alignment with customer expectations and changed behavior. Before the pandemic, 53 percent of global organizations had adopted big data technology, and as we move to a post-Covid-19 world, data will be even more crucial.

Research suggests that many companies are still failing to question the technological infrastructure in which data is being collected, jeopardizing the potential for driving business growth and understanding their customers better. The starting point is a simple question: what do you want to learn about your business and your customers? Once a company turns its attention to targeted data collection, it can build a far greater understanding of its customers and their needs.

Data analytics is a key to success in an age of digitalization. Presently, many businesses do not question their infrastructure, as they fail to ask the simple question of what they want to learn, before accelerating technological innovation and aggregating data. 

Essentially, businesses are not harnessing data to its full potential. So, how can a redefined data strategy - that focuses on insights - correct this?

De-hyping technology 

When it comes to de-hyping technology, it’s important to grasp the technology we are dealing with in the first place. Businesses need to redefine their data strategies and move away from collecting data for the sake of it. Let’s break down the three most commonly used ‘hyped’ technologies:

Edge Computing

Edge computing is a form of computing that is done on-site or near a particular data source, minimizing the need for data to be processed in a remote data center. This is expected to improve response times and save bandwidth.
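To make the distinction concrete, here is a minimal, purely illustrative sketch of the idea: readings are aggregated locally and only a compact summary is forwarded, rather than streaming every raw reading to a remote data center. The read_sensor and send_to_datacenter functions are hypothetical stand-ins, not any real device API.

```python
# Minimal illustration of edge computing: aggregate raw sensor readings
# locally and forward only a compact summary to the data center.
# All names (read_sensor, send_to_datacenter) are hypothetical stand-ins.

import random
import statistics
import time

def read_sensor() -> float:
    """Stand-in for a local sensor read (e.g., temperature)."""
    return 20.0 + random.random() * 5.0

def send_to_datacenter(summary: dict) -> None:
    """Stand-in for the single, small upstream transfer."""
    print(f"uploading summary: {summary}")

def edge_loop(window_size: int = 60) -> None:
    readings = []
    for _ in range(window_size):
        readings.append(read_sensor())   # handled on-site, near the source
        time.sleep(0.01)                 # shortened sampling interval for the demo
    # Only the aggregate leaves the edge device, saving bandwidth and
    # keeping latency-sensitive logic local.
    send_to_datacenter({
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": round(max(readings), 2),
    })

if __name__ == "__main__":
    edge_loop()
```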

Self-Service Automation

Self-service automation is the practice of connecting self-service to other business processes and platforms through a workload automation solution, or empowering end-users with a self-service portal to run preconfigured jobs and processes through an enterprise job scheduling solution.
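As a rough illustration of that second pattern, the sketch below shows a self-service entry point that lets an end user run only preconfigured jobs while the job definitions stay under central control. The job catalog and job bodies are hypothetical examples, not any particular workload automation product.

```python
# Minimal sketch of self-service automation: end users trigger preconfigured
# jobs by name, while the definitions stay under IT control.
# The job catalog and job bodies here are hypothetical examples.

from datetime import datetime

def export_monthly_report():
    print("exporting monthly report...")

def reset_test_environment():
    print("resetting test environment...")

# Preconfigured jobs an end user is allowed to run from a portal.
JOB_CATALOG = {
    "export_monthly_report": export_monthly_report,
    "reset_test_environment": reset_test_environment,
}

def run_self_service_job(job_name: str, requested_by: str) -> None:
    job = JOB_CATALOG.get(job_name)
    if job is None:
        raise ValueError(f"'{job_name}' is not an approved self-service job")
    print(f"{datetime.now().isoformat()} | {requested_by} started {job_name}")
    job()

if __name__ == "__main__":
    run_self_service_job("export_monthly_report", requested_by="analyst@example.com")
```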

Virtualization

Virtualization is the name given to the process of transforming physical IT infrastructure, such as network equipment and servers, into software alternatives. It’s a concept commonly used by consumers and businesses alike: rather than adding more server racks, for example, companies can keep their data on a virtual server which they can then access via the cloud.

Finding your why 

At the moment, companies spend large sums updating their systems to be the newest, fastest and most advanced, with little thought given to the ‘why’. Is it any surprise, then, that they rarely utilize this wealth of technology effectively?

When we buy a new, sophisticated piece of technology or a new car, we always read the manual, but companies are rushing headlong into the new world of digital tech without first fully understanding what they are dealing with. The hype around a new technology should be about its practicality and utility; the ‘image’ or kudos should not be the priority, yet in many cases it is. People are too frequently starting with the ‘what’ instead of the ‘why’.

Some businesses are failing to question the technological infrastructure in which data is being collected, limiting the potential for driving business growth and understanding their customers better.

As leaders in technology, we should prioritize data-driven innovation that creates added value in the future. In a highly connected world, technology should be designed and adopted for a purpose.

In essence, data strategy is simple. It is all about learning how you can improve your business value and customer experience using data insights.

Looking at the current landscape and how companies are responding to this rapid transformation, it’s clear that focus needs to be shifted from what technology to adopt, to how to use and implement technologies that drive business impact.

Technology as a tool to achieve business outcomes 

Ultimately, the big question companies should be asking themselves is how they can use technology as a tool to achieve growth and strong business outcomes. After all, that is the end goal of customer intelligence and data insights; using insights-based data collection to know your customer better. By analyzing data for patterns and trends, companies can thus future-proof their businesses for the digital age. In an ultra-connected world with new technologies disrupting all sectors, technology should be designed and adopted for a purpose.

Rather than seeking technology for technology’s sake, companies need to evaluate what tech is best suited to their needs. Each company’s needs will be different, and indeed each set of customers will have different expectations. By selecting technologies that best apply to a company’s objectives, businesses can take advantage of the latest and greatest innovations, from 5G and the Internet of Things (IoT) to Intelligent Process Automation (IPA), virtual reality and big data analytics.

Human augmentation is another increasingly popular technological tool for improving our overall productivity.

Meanwhile, artificial intelligence, the simulation of human intelligence processes by machines, can be used to increase efficiency and automate tasks.

Companies need to improve their ‘why’. Why should they be using a certain tech? How will the technology achieve their business objectives? My view is that companies must avoid implementing technology that does not meet their needs, and a targeted data strategy is essential to achieve this. It is all about learning how you can improve your business value and customer experience using data insights. 

In a post-Covid-19 world, businesses looking to stand out from competitors and drive business outcomes need to rethink traditional data strategies, and focus on data-driven innovation that is aligned with customer expectations.

Jieke Pan, CTO and VP of Engineering, Mobiquity

How SpaceX’s massive Starship rocket might unlock the solar system—and beyond

MIT Top Stories - 11 hours 1 min ago

If all goes to plan, next month SpaceX will launch the largest rocket in human history. Towering nearly 400 feet tall, the rocket – Starship – is designed to take NASA astronauts to the moon. And SpaceX’s CEO, Elon Musk, has bigger ambitions: he wants to use it to settle humans on Mars.

Much has already been made of Starship’s human spaceflight capabilities. But the rocket could also revolutionize what we know about our neighboring planets and moons. “Starship would totally change the way that we can do solar system exploration,” says Ali Bramson, a planetary scientist from Purdue University. “Planetary science will just explode.”

If it lives up to its billing, scientists are already talking about sending missions to Neptune and its largest moon in the outer solar system, bringing back huge quantities of space rock from Earth’s moon and Mars, and even developing innovative ways to protect Earth from incoming asteroids. 

Starship—which is being built at a Texas site dubbed “Starbase”—consists of a giant spaceship on top of a large booster, known as Super Heavy. Both can land back on Earth so they can be reused, reducing costs. The entire vehicle will be capable of lifting 100 metric tons (220,000 pounds) of cargo and people into space on regular low-cost missions. The volume of usable space within Starship is a whopping 1,000 cubic meters—big enough to fit the entire Eiffel Tower, disassembled. And that’s got scientists excited.

“Starship is, like, wow,” says James Head, a planetary scientist from Brown University.

In mid-November, speaking in a publicly accessible virtual meeting about Starship hosted by the US National Academies of Sciences, Engineering, and Medicine, Musk discussed the project’s scientific potential. “It’s extremely important that we try to become a multiplanet species as quickly as possible,” he said. “Along the way, we will learn a great deal about the nature of the universe.” Starship could carry “a lot of scientific instrumentation” on flights, said Musk—far more than is currently possible. “We’d learn a tremendous amount, compared to having to send fairly small vehicles with limited scientific instrumentation, which is what we currently do,” he said.

 “You could get a 100-ton object to the surface of Europa,” said Musk. 

Cheap and reusable

Central to many of these ideas is that Starship is designed to be not just large but cheap to launch. Whereas agencies like NASA and ESA must carefully choose a smattering of missions to fund, with launch costs in the tens or hundreds of millions of dollars, Starship’s affordability could open the door to many more. “The low cost of access has the potential to really change the game for science research,” says Andrew Westphal, a lecturer in physics at the University of California, Berkeley. With flights potentially costing as little as $2 million per launch, he says, “You can imagine privately financed missions and consortia of citizens who get together to fly things.”

What’s more, Starship has a key advantage over other super-heavy-lift rockets in development, such as NASA’s much-delayed Space Launch System and Blue Origin’s New Glenn rocket. The upper half of the rocket is designed to be refueled in Earth orbit by other Starships, so more of its lifting capability can be handed over to scientific equipment rather than fuel. Taking humans to the moon, for example, might require eight separate launches, with each consecutive “tanker Starship” bringing up fuel to the “lunar Starship” that then makes its way to the moon with scientific equipment and crew. 

Scientists are now starting to dream of what Starship might let them do. Earlier this year, a paper published by Jennifer Heldmann of NASA Ames Research Center explored some of the scientific opportunities that might be opened by Starship missions to the moon and Mars. One great benefit is that Starship could carry full-sized equipment from Earth—no need to miniaturize it to fit in a smaller vehicle, as was required for the Apollo missions to the moon. For example, “you could bring a drilling rig,” says Heldmann. “You could drill down a kilometer, like we do on Earth.” That would afford unprecedented access to the interior of the moon and Mars, where ice and other useful resources are thought to be present. Before, such an idea would have been “a little bit insane,” says Heldmann. But with Starship, “you could do it, and still have room to spare,” she adds. “What else do you want to bring?”

Because Starship can land back on Earth, it will also—theoretically—be able to bring back vast amounts of samples. The sheer volume that could be returned, from a variety of different locations, would give scientists on Earth unprecedented access to extraterrestrial material. That could shed light on a myriad of mysteries, such as the volcanic history of the moon or “the question of life and astrobiology” on Mars, says Heldmann. 

Starship could also enable more extravagant missions to other locations, either via a direct launch from Earth or perhaps by using the moon and Mars as refueling stations, an ambitious future envisioned by Musk. 

Let’s go to Neptune

One idea, from an international group of scientists called Conex (Conceptual Exploration Research), is a spacecraft called Arcanum, which would make use of Starship’s heavy-lifting capabilities to explore Neptune and its largest moon, Triton. Neptune has been visited only once, a flying visit by NASA’s Voyager 2 spacecraft in 1989, and there is so much we still don’t know about it. “Nobody’s really thinking on this next level about what Starship could enable,” says James McKevitt, a researcher at the University of Vienna and the co-lead of Conex. “That’s what Arcanum is designed to showcase.”

Weighing in at about 21 metric tons, the spacecraft would be four times heavier than the largest deep space probe to date: NASA and ESA’s Cassini-Huygens mission, which explored Saturn from 2004 to 2017. No existing rocket could currently launch such a craft, but Starship would make it possible. Arcanum would have numerous components, including an orbiter to study Neptune, a lander to study Triton, and a penetrator to strike Triton’s surface and “perform a seismic experiment” to understand its geology and its structure, says McKevitt. The mission could also be equipped with a telescope, allowing for studies of the outer solar system and aiding the hunt for planets around other stars. 

Other ideas are even more speculative. Philip Lubin, a physicist from the University of California, Santa Barbara, calculated that a large enough rocket, such as Starship, could be used to prevent an asteroid from hitting Earth. Such a mission could carry enough explosives to rip apart an asteroid as large as the 10-kilometer-wide rock that wiped out the dinosaurs. Its fragments would harmlessly burn up in the atmosphere before they had a chance to reach our planet.

Starship could also be a better way to launch giant space telescopes that can observe the universe. Currently, equipment such as NASA and ESA’s upcoming James Webb Space Telescope must be launched folded up, an expensive, complex, and delicate procedure that could be prone to error. NASA has suggested that a proposed super-telescope called LUVOIR designed to image Earth-like planets around other stars could launch on Starship, while Musk has said SpaceX is already working on “an interesting project, which is to have a really big telescope, taking a lens that was intended for a ground-based telescope, and creating a space-based telescope with it.” No further details have yet been revealed.

Say hi to the neighbors

Elsewhere, some scientists have dreams of using Starship to prepare to visit other stars. René Heller from the Max Planck Institute for Solar System Research in Germany and colleagues say that Starship could offer a low-cost way to test technologies for a spacecraft that can travel multiple light-years to neighboring star systems. Starship could release a sail-powered spacecraft on a trip to Mars, which would use an onboard laser to push against a thin sail and reach incredible speeds, enabling a demonstration to be conducted beyond Earth’s orbit. “If SpaceX were kind enough to take one of our sails on board and just release it halfway on its journey to Mars, we should be able to follow its acceleration and path through the solar system for a few days and almost to the orbit of Jupiter,” says Heller.

Other ideas include using Starship to send a probe to orbit Jupiter’s volcanic moon Io, a difficult task without a substantial lifting capability. “It’s extremely challenging because of both getting into orbit and protecting yourself from Jupiter’s harsh radiation,” says Alfred McEwen, a planetary geologist from the University of Arizona. “But mass helps those things. You can have plenty of fuel and radiation shielding.”

Musk has suggested that SpaceX could launch as many as a dozen Starship test flights in 2022, with missions to the moon and Mars both on the horizon—and plenty of scientific potential to boot. “Once Starship starts flying, the development will be very fast,” says Margarita Marinova, a former senior Mars development engineer at SpaceX. “There will be so many more people who will be able to fly things.” Those could be anything from standalone missions using Starship to ride-along missions on the existing flight manifest. “When you have a 100-ton capability, adding on science hardware is pretty easy,” says Marinova. “If somebody wants to buy payload space, they can have payload space. It will be a really drastic change in how we do science.”

There are, of course, very good reasons to be cautious. While Starship has flown test flights without the Super Heavy booster, we have yet to see the full rocket launch. It’s an extremely massive and complex machine that could still experience problems in its development. SpaceX and Musk, too, have previously been notoriously cavalier (to put it politely) with timelines and goals (a proposed mission to Mars, Red Dragon, was once supposed to have launched as early as 2018). And Starship’s proposed method to reach the moon and Mars, relying on multiple refueling missions in Earth orbit, remains complex and untested.

Yet there are also plenty of grounds for excitement regarding what Starship could do if it is successful. From the inner to the outer solar system, and possibly beyond, it may well open up a whole new era of space science. “I’m sure that some very smart people are starting to think about sending scientific missions on Starship,” says Abhishek Tripathi, a space scientist from the University of California, Berkeley. 

Or as Musk put it: “It’s really whatever you can imagine.” 

Losing the ROI on your RPA – it's time to make your investment count

IT Portal from UK - 11 hours 30 min ago

It’s no secret businesses have been ramping up their digital transformation initiatives in the last year. Investing in the right technology has helped companies fare far better during the pandemic and will ensure they thrive beyond it. 

Achieving optimum efficiency and productivity is critical right now. Businesses are working hard to keep their people productive and processes running smoothly while keeping up with the competition. This is why many businesses have turned towards robotic process automation (RPA) to save both time and money. But while RPA promises to transform the cost, efficiency, and quality of many back-office and customer-facing processes, it’s not without its challenges. 

There are numerous common mistakes that cause as many as 30 to 50 percent of companies’ initial RPA implementations to fail. To prevent your next RPA or intelligent automation project from failing, here are the top automation pitfalls to avoid so that your investments count every time.

A rush to automate bad processes 

According to an SSON study, the process selected for automation during the initial pilot is the leading cause of failure. Our own research offers more insight into why selecting the right process matters: not fully understanding the process being automated is often to blame. Instead of doing that groundwork, many organizations select low-hanging-fruit initiatives without taking the time to understand their workflows and how they impact other processes.

The main goal of RPA should be to use bots to reduce human involvement in manual, time-consuming tasks that don’t require cognitive effort. Extracting data from a document, classifying it, and inputting it into a business system, such as transferring data from an invoice into an ERP system, is one example. This is where the sophistication of an intelligent document processing solution makes the difference in speed and accuracy. If pertinent information is missing or mislabeled, the process is broken, and the bot will keep making the same mistake or stop working because these exceptions weren’t included in the rules. Rushing to target the wrong process can result in delays, additional costs, or the project being abandoned. At a time when businesses need to make every investment count, understanding how the technology will make the most impact is crucial.
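As a loose illustration of that invoice-to-ERP flow, the sketch below shows the basic pattern: extract fields, validate them, and route documents with missing data to a human rather than letting the bot break. The extraction logic and ERP call are hypothetical placeholders, not any vendor’s actual API.

```python
# Hedged sketch of an invoice-to-ERP flow: extract fields, validate them,
# and either post to the business system or route the document to a person
# as an exception. The extraction and ERP calls are hypothetical placeholders.

REQUIRED_FIELDS = ("invoice_number", "vendor", "total", "currency")

def extract_invoice_fields(document_text: str) -> dict:
    """Stand-in for an intelligent document processing step."""
    fields = {}
    for line in document_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower().replace(" ", "_")] = value.strip()
    return fields

def post_to_erp(record: dict) -> None:
    """Stand-in for writing the record into an ERP system."""
    print(f"posted to ERP: {record}")

def process_invoice(document_text: str) -> None:
    record = extract_invoice_fields(document_text)
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        # Exceptions go to a person instead of silently breaking the bot.
        print(f"routed to manual review, missing fields: {missing}")
        return
    post_to_erp({k: record[k] for k in REQUIRED_FIELDS})

if __name__ == "__main__":
    process_invoice("Invoice Number: 1042\nVendor: Acme Ltd\nTotal: 1200.00\nCurrency: GBP")
```

The point of the exception branch is exactly the failure mode described above: missing or mislabeled information should surface for review instead of propagating bad data downstream.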

Most businesses are stumped by what seems to be a deceptively simple question: which are the right processes to automate? Determining where to start with your RPA program is critical to its success. Using tools such as process mining to do a thorough analysis of your business processes will give you a “digital twin” of how they work, and show you which are best suited for digital transformation. Businesses can then safely select processes that range from rules-based and repetitive in nature to data-intensive and prone to high error rates.

Prioritize focus on high value-tasks  

Leaders and pundits talk about empowering employees by reducing repetitive work, but some enterprises have been using RPA to cut headcount and select projects with that in mind. Arguably, the greatest benefit of adopting RPA is that it allows your talent to devote their skills to higher-value tasks, removing the burden of manual operations that contribute little to the organization’s growth or to improving the customer experience.

Take the role of compliance officers, for example. Many banks use RPA as a first step to automate the collection of data from documents, but the compliance officer still must sift through documents and find data needed to make decisions. Instead, robots with content intelligence can quickly read the contracts and pick out relevant data to execute decisions faster. 

To prepare highly skilled knowledge workers to work alongside their digital counterparts, they will need to acquire more digital skills themselves. A concerning 75 percent of global enterprises in IDC’s Future of Work report said it was difficult to recruit people with digital skills, and 20 percent cited inadequate worker skills and/or training as a top challenge. Every organization must step up to address the looming skills and developer shortage to remain competitive. The introduction of no-code platforms with drag-and-drop ease for training bots will help a wide variety of professionals augment and improve their productivity, from the legal team and HR to accounts payable, claims adjusters and customer service.

Depriving RPA of content intelligence

Because organizations run on both structured and unstructured data that fuel all their business processes, bots need to be smart enough to “read,” “understand,” and “make decisions” about the content they are processing. As with humans, you wouldn’t hire an employee who couldn’t read or understand your content, and you wouldn’t hire someone who could only do one task.

Furthermore, RPA on its own cannot understand unstructured documents and requires AI to enable bots to have content intelligence. This is where content intelligence comes in: it allows bots to carry out tasks such as reading and categorizing a document, routing a document, extracting and validating data from documents, and other tasks related to understanding and processing unstructured content. Using content intelligence with RPA will speed your processes and ready your organization to add more experiential opportunities to engage with customers such as interactive mobile apps, cognitive virtual assistants that combine voice and conversational AI, and chatbots.
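A minimal sketch of that read-categorize-route pattern might look like the following, with a trivial keyword matcher standing in for a trained content intelligence model; the categories, keywords, and queue names are illustrative assumptions only.

```python
# Minimal sketch of "read, categorize, route" for unstructured documents,
# using a trivial keyword classifier in place of a trained model.
# Categories, keywords, and queue names are illustrative assumptions.

DOCUMENT_CATEGORIES = {
    "invoice": ("invoice", "amount due", "purchase order"),
    "contract": ("agreement", "party", "terms and conditions"),
    "claim": ("claim number", "policyholder", "incident"),
}

ROUTING = {
    "invoice": "accounts_payable_queue",
    "contract": "legal_review_queue",
    "claim": "claims_adjuster_queue",
}

def categorize(document_text: str) -> str:
    text = document_text.lower()
    scores = {
        category: sum(keyword in text for keyword in keywords)
        for category, keywords in DOCUMENT_CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def route(document_text: str) -> str:
    category = categorize(document_text)
    # Unrecognized content goes to a human rather than a bot.
    return ROUTING.get(category, "manual_review_queue")

if __name__ == "__main__":
    print(route("CLAIM NUMBER 7781 - policyholder reports a vehicle incident"))
```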

Don’t forget to check up on your RPA 

It’s important that businesses don’t let their intelligent automation projects fail for lack of continuous monitoring once they are in action. Many organizations are using process mining to keep track of what their bots are doing best and to reveal where they could perform better. Combined with the bots’ event logs and their analysis, this helps identify bottlenecks, inefficiencies, control and data quality issues, and more, giving leaders comprehensive process intelligence.

What’s more, process intelligence also identifies room for continuous improvements of the processes running in your company. With monitoring, you can evaluate the performance compared to the original process. Additionally, keeping track of already deployed and active bots will help you monitor your KPIs, while also taking immediate actions if the goals are not being met or you run into issues. 
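As a rough sketch of what that monitoring could look like in practice, the snippet below scans a hypothetical bot event log for failed or unusually slow steps; the log format and thresholds are assumptions for illustration, not any particular process mining product.

```python
# Hedged sketch of bot monitoring: scan event logs for failed or unusually
# slow steps so bottlenecks surface before KPIs slip.
# The log format and thresholds are assumptions for illustration.

from collections import defaultdict

# (bot, step, duration_seconds, status) -- hypothetical event-log rows.
EVENT_LOG = [
    ("invoice_bot", "extract", 4.0, "ok"),
    ("invoice_bot", "post_to_erp", 35.0, "ok"),
    ("invoice_bot", "post_to_erp", 2.0, "failed"),
    ("claims_bot", "classify", 1.5, "ok"),
]

def summarize(events, slow_threshold_seconds=30.0):
    summary = defaultdict(lambda: {"runs": 0, "failures": 0, "slow": 0})
    for bot, step, duration, status in events:
        stats = summary[(bot, step)]
        stats["runs"] += 1
        stats["failures"] += status == "failed"
        stats["slow"] += duration > slow_threshold_seconds
    return summary

if __name__ == "__main__":
    for (bot, step), stats in summarize(EVENT_LOG).items():
        if stats["failures"] or stats["slow"]:
            print(f"review {bot}/{step}: {stats}")
```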

It is clear that RPA can be a helping hand for businesses who are working to streamline ongoing transformation across the entire organization. But first, businesses must take an integrated approach, one that includes AI and process mining technologies to provide a 360-degree view of your business workflow from the ground up. With a broader, more holistic digital intelligence strategy, businesses will start to see the fruits of their investments.

Neil Murphy, Global VP, ABBYY

What businesses can learn from Roblox

IT Portal from UK - 12 hours 49 sec ago

It sounds like a fanciful notion. Business leaders operating across every vertical can ingest all the management strategy self-empowerment books, tutorials and workshops they can get their hands on, but their best bet for a really progressive approach to 21st Century business could come from an online gaming platform.

The collaborative connectivity and almost ubiquitous accessibility offered by Roblox represents a state of consciousness that most commercial organizations can only dream of. But even if they aren’t going to be the next Twitter, TikTok, Facebook or indeed Roblox, there are still many lessons that every business can take away from the way this kind of innovation operates.

Staying with Roblox for a moment and leaving the purely social platforms aside, there is a special level of interactivity and engagement enabled on the platform that firms could, and perhaps should, now aspire to emulate at least to some degree.

So what makes Roblox Roblox?

For the uninitiated, Roblox is a cloud-native online gaming platform where ‘players’ can move around inside a digital virtual world and gain access to sometimes quite basic games that have been created by other users. 

Similar in form and function to Linden Lab’s Second Life, Roblox doesn’t require users to ‘do’ anything in particular; if they want to just ‘be’ and exist, that is fine. But for those who wish to engage, lead, share and create, Roblox offers an exciting world where creativity is rewarded. The games are free, but users can buy Robux (the Roblox currency) to spend in games or on accessories for their personal avatar.

What Roblox represents is disruption. It offers players access to over 40 million online games. This is certainly something of a disruption to the already-established gaming industry in and of itself, but moreover it is a disruption to the status quo of accessibility.

Indeed, applied as a parallel to modern enterprise business, Roblox should be seen as a democratizing force for user empowerment. Anyone can join, anyone can explore, anyone can add, play or leave whenever they want to. This is a cloud-based digital platform that offers an inherent level of adaptability which makes it usable – and, crucially, enjoyable – by anyone.

As in Roblox, it is in business 

As enterprises, we now need to evolve our use of software and data architectures to reflect this same kind of democratized accessibility for all. We can see how much functionality is available in something like Roblox, and we can carry that same approach to openness and access forward.

This is all about putting a lot of compute and data integration power at the backend and connecting it to a highly usable front-end interface. This enables us to open the door to different kinds of skill sets and so in terms of enterprise IT, this can help us to significantly reduce the latency factor traditionally associated with the software builds.

Traditionally, when users require new application functions and data services, they make a request to the IT department and the cycle of build, test, integrate, test again, debug and eventual deployment begins. What we are talking about here is a route to equipping citizen developers and citizen integration specialists with the tools they need to create functions that can be capitalized upon and monetized… or given away to the community for wider development.

Taking the Roblox approach to this kind of enterprise software application development, different users will want to use different parts of the IT stack in different ways. The ability of a platform to be customized with personalized insights for any role across the company is simply a must-have for today’s users, whoever they are.

In Roblox, it is game wins and points, but in business we are talking Key Performance Indicators (KPIs) and of course profits.

Disrupting the disruptors

Why should enterprise organizations think about democratized user accessibility, ubiquitous openness and gamified incentivization? Because if they don’t, then it is likely that someone close enough to their market share will.

If we look at how a company is typically disrupted, it is not normally disrupted by a competitor, but by the wider moves of the market or by individuals who solve problems that have existed for a while, but with a new approach to the problem. 

It wasn’t another player in the photographic industry that disrupted Kodak, but new technologies like mobile phones and the online sharing of pictures. Blockbuster’s demise came not at the hands of another player in the film industry, but of an innovative online video-on-demand company.

When someone finds a better way to do things, they will. Inside the world of Roblox, people can experiment, explore, create and share, all of which means that people (by which we mean each and every single person) can look for new ways to do things. 

It is algorithmically and statistically not possible to beat this kind of platform for invention with any single investment in Research & Development (R&D), no matter how large.

Mature multiplayer mainstream muscle

In an age where everyone has a computer attached to their hand in the form of a smartphone, digital business is the battlefield that will decide the victors from the vanquished. When a technology becomes mature enough and goes mainstream, any organization that has sat back and rested upon its laurels will ultimately fail.

Roblox logic as a business development principle, ethos, template or methodology might still raise a few eyebrows around the boardroom table, but the wider drive towards this type of platform power cannot be ignored. Just remember, video games used to be low-resolution single-player experiences; now we live in a world where massively multiplayer is the new normal.

If not quite a firm lecture slot on next year’s Harvard Business School syllabus, Roblox theory has a lot to teach us when it comes to gamification, incentivization and democratization. Now is the time to play and get ready for the next wave. Ready player one?

Alessandro Chimera, Director of Digitalization Strategy, TIBCO

Evolution of intelligent data pipelines

MIT Top Stories - Mon, 12/06/2021 - 10:44

The potential of artificial intelligence (AI) and machine learning (ML) seems almost unbounded in its ability to derive and drive new sources of customer, product, service, operational, environmental, and societal value. If your organization is to compete in the economy of the future, then AI must be at the core of your business operations. 

A study by Kearney titled “The Impact of Analytics in 2020” highlights the untapped profitability and business impact for organizations looking for justification to accelerate their data science (AI / ML) and data management investments: 

  • Explorers could improve profitability by 20% if they were as effective as Leaders 
  • Followers could improve profitability by 55% if they were as effective as Leaders 
  • Laggards could improve profitability by 81% if they were as effective as Leaders 

The business, operational, and societal impacts could be staggering except for one significant organizational challenge—data. No one less than the godfather of AI, Andrew Ng, has noted the impediment of data and data management in empowering organizations and society in realizing the potential of AI and ML: 

“The model and the code for many applications are basically a solved problem. Now that the models have advanced to a certain point, we’ve got to make the data work as well.” — Andrew Ng

Data is the heart of training AI and ML models. And high-quality, trusted data orchestrated through highly efficient and scalable pipelines means that AI can enable these compelling business and operational outcomes. Just as a healthy heart needs oxygen and reliable blood flow, the AI / ML engines need a steady stream of cleansed, accurate, enriched, and trusted data.

For example, one CIO has a team of 500 data engineers managing over 15,000 extract, transform, and load (ETL) jobs that are responsible for acquiring, moving, aggregating, standardizing, and aligning data across hundreds of special-purpose data repositories (data marts, data warehouses, data lakes, and data lakehouses). They’re performing these tasks in the organization’s operational and customer-facing systems under ridiculously tight service level agreements (SLAs) to support their growing number of diverse data consumers. It seems Rube Goldberg certainly must have become a data architect (Figure 1).

Figure 1: Rube Goldberg data architecture

The debilitating spaghetti architecture of one-off, special-purpose, static ETL programs that move, cleanse, align, and transform data greatly inhibits the “time to insights” necessary for organizations to fully exploit the unique economic characteristics of data, the “world’s most valuable resource” according to The Economist.

Emergence of intelligent data pipelines  

The purpose of a data pipeline is to automate and scale common and repetitive data acquisition, transformation, movement, and integration tasks. A properly constructed data pipeline strategy can accelerate and automate the processing associated with gathering, cleansing, transforming, enriching, and moving data to downstream systems and applications. As the volume, variety, and velocity of data continue to grow, the need for data pipelines that can linearly scale within cloud and hybrid cloud environments is becoming increasingly critical to the operations of a business. 

A data pipeline refers to a set of data processing activities that integrates both operational and business logic to perform advanced sourcing, transformation, and loading of data. A data pipeline can run on either a scheduled basis, in real time (streaming), or be triggered by a predetermined rule or set of conditions. 
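A minimal sketch of that shape, assuming nothing about any particular tooling, might look like the following: ordered extract, transform, and load stages wrapped in a single entry point that a scheduler, stream consumer, or rule engine could invoke. The stage bodies are hypothetical.

```python
# Minimal sketch of a data pipeline: ordered stages that acquire, transform,
# and load data, invoked here on demand but equally runnable from a scheduler
# or an event trigger. Stage bodies are hypothetical.

def extract():
    # Acquire raw records from a source system.
    return [{"customer_id": 1, "amount": "42.50"}, {"customer_id": 2, "amount": "n/a"}]

def transform(records):
    # Cleanse and standardize: drop rows with unparseable amounts.
    cleaned = []
    for row in records:
        try:
            cleaned.append({"customer_id": row["customer_id"], "amount": float(row["amount"])})
        except ValueError:
            continue
    return cleaned

def load(records):
    # Deliver the cleansed records to the downstream system or application.
    print(f"loaded {len(records)} records")

def run_pipeline():
    load(transform(extract()))

if __name__ == "__main__":
    run_pipeline()   # a scheduler, stream consumer, or rule could call this instead
```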

Additionally, logic and algorithms can be built into a data pipeline to create an “intelligent” data pipeline. Intelligent pipelines are reusable and extensible economic assets that can be specialized for source systems and perform the data transformations necessary to support the unique data and analytic requirements for the target system or application. 

As machine learning and AutoML become more prevalent, data pipelines will increasingly become more intelligent. Data pipelines can move data between advanced data enrichment and transformation modules, where neural network and machine learning algorithms can create more advanced data transformations and enrichments. This includes segmentation, regression analysis, clustering, and the creation of advanced indices and propensity scores. 

Finally, one could integrate AI into the data pipelines such that they could continuously learn and adapt based upon the source systems, required data transformations and enrichments, and the evolving business and operational requirements of the target systems and applications. 

For example: an intelligent data pipeline in health care could analyze the grouping of health care diagnosis-related groups (DRG) codes to ensure consistency and completeness of DRG submissions and detect fraud as the DRG data is being moved by the data pipeline from the source system to the analytic systems. 
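Continuing that health care example as a hedged sketch, an “intelligent” stage might validate DRG records and flag suspect ones while the data is in flight. The codes, fields, and thresholds below are illustrative assumptions, not real clinical or billing rules.

```python
# Hedged sketch of in-flight DRG checks: validate submissions for completeness
# and flag suspicious ones as the data moves to the analytic systems.
# The codes, fields, and thresholds are illustrative assumptions only.

VALID_DRG_CODES = {"470", "291", "871"}   # illustrative subset only

def validate_drg(record: dict) -> list:
    issues = []
    if record.get("drg_code") not in VALID_DRG_CODES:
        issues.append("unknown DRG code")
    if not record.get("discharge_date"):
        issues.append("missing discharge date")
    if record.get("billed_amount", 0) > 250_000:
        issues.append("billed amount outlier - possible fraud signal")
    return issues

def intelligent_stage(records):
    clean, flagged = [], []
    for record in records:
        issues = validate_drg(record)
        (flagged if issues else clean).append((record, issues))
    return clean, flagged

if __name__ == "__main__":
    clean, flagged = intelligent_stage([
        {"drg_code": "470", "discharge_date": "2021-11-30", "billed_amount": 18_000},
        {"drg_code": "999", "discharge_date": "", "billed_amount": 400_000},
    ])
    print(f"forwarded {len(clean)} records, flagged {len(flagged)} for review")
```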

Realizing business value 

Chief data officers and chief data analytic officers are being challenged to unleash the business value of their data—to apply data to the business to drive quantifiable financial impact. 

The ability to get high-quality, trusted data to the right data consumer at the right time in order to facilitate more timely and accurate decisions will be a key differentiator for today’s data-rich companies. A Rube Goldberg system of ETL scripts and disparate, special-purpose analytics repositories hinders an organization’s ability to achieve that goal.

Learn more about intelligent data pipelines in Modern Enterprise Data Pipelines (eBook) by Dell Technologies here.

This content was produced by Dell Technologies. It was not written by MIT Technology Review’s editorial staff.

Most security leaders worry traditional approach doesn't shield against supply chain attacks

IT Portal from UK - Mon, 12/06/2021 - 07:00

Most security leaders believe traditional threat detection solutions are not equipped to combat supply chain threats, a new report from Vectra AI suggests.

The security firm recently polled 200 UK IT security decision makers from companies with at least 1,000 employees and found that 89 percent don’t trust traditional approaches to cybersecurity.

In fact, three-quarters (76 percent) bought cybersecurity tools that failed to live up to their promises, as they struggled to integrate with existing systems, could not detect modern attacks and failed to provide proper visibility. 

As a result, more than two-thirds (69 percent) think they may have been breached without knowing it (a third think this is “likely”).

The respondents essentially believe cybercriminals are bigger visionaries than those on the other side of the security equation. More than two-thirds (69 percent) believe security innovation is “years” behind the attackers.

There are numerous reasons this is the case, the respondents further explained, citing legacy thinking around security, as well as poor communication between security teams and the board. In fact, 58 percent think the board is a full decade behind when it comes to security discussions. 

But as the number of high-profile attacks grows, so does awareness in the boardroom, the report further claims. More than half (54 percent) are shifting away from the prevention-first mentality, and are increasing their investment in security solutions. 

“Companies are not the only ones innovating. Cybercriminals are too. As the threat landscape evolves, traditional defenses are increasingly ineffectual,” said Garry Veale, Regional Director, UK & Ireland at Vectra. 

“Organizations need modern tools that shine a light into blind spots to deliver visibility from cloud to on premise. They need security leaders who can speak the language of business risk. Boards that are prepared to listen. And a technology strategy based around an understanding that it’s ‘not if but when’ they are breached.”

The therapists using AI to make therapy better

MIT Top Stories - Mon, 12/06/2021 - 05:12

Kevin Cowley remembers many things about April 15, 1989. He had taken the bus to the Hillsborough soccer stadium in Sheffield, England, to watch the semifinal championship game between Nottingham Forest and Liverpool. He was 17. It was a beautiful, sunny afternoon. The fans filled the stands.

He remembers being pressed between people so tightly that he couldn’t get his hands out of his pockets. He remembers the crash of the safety barrier collapsing behind him when his team nearly scored and the crowd surged.

Hundreds of people fell, toppled like dominoes by those pinned in next to them. Cowley was pulled under. He remembers waking up among the dead and dying, crushed beneath the weight of bodies. He remembers the smell of urine and sweat, the sound of men crying. He remembers locking eyes with the man struggling next to him, then standing on him to save himself. He still wonders if that man was one of the 94 people who died that day.

These memories have tormented Cowley his whole adult life. For 30 years he suffered from flashbacks and insomnia. He had trouble working but was too ashamed to talk to his wife. He blocked out the worst of it by drinking. In 2004 one doctor referred him to a trainee therapist, but it didn’t help, and he dropped out after a couple of sessions.

But two years ago he spotted a poster advertising therapy over the internet, and he decided to give it another go. After dozens of regular sessions in which he and his therapist talked via text message, Cowley, now 49, is at last recovering from severe post-traumatic stress disorder. “It’s amazing how a few words can change a life,” says Andrew Blackwell, chief scientific officer at Ieso, the UK-based mental health clinic treating Cowley.

What’s crucial is delivering the right words at the right time. Blackwell and his colleagues at Ieso are pioneering a new approach to mental-health care in which the language used in therapy sessions is analyzed by an AI. The idea is to use natural-language processing (NLP) to identify which parts of a conversation between therapist and client—which types of utterance and exchange—seem to be most effective at treating different disorders.

The aim is to give therapists better insight into what they do, helping experienced therapists maintain a high standard of care and helping trainees improve. Amid a global shortfall in care, an automated form of quality control could be essential in helping clinics meet demand. 

Ultimately, the approach may reveal exactly how psychotherapy works in the first place, something that clinicians and researchers are still largely in the dark about. A new understanding of therapy’s active ingredients could open the door to personalized mental-health care, allowing doctors to tailor psychiatric treatments to particular clients much as they do when prescribing drugs.

A way with words

The success of therapy and counseling ultimately hinges on the words spoken between two people. Despite the fact that therapy has existed in its modern form for decades, there’s a surprising amount we still don’t know about how it works. It’s generally deemed crucial for therapist and client to have a good rapport, but it can be tough to predict whether a particular technique, applied to a particular illness, will yield results or not. Compared with treatment for physical conditions, the quality of care for mental health is poor. Recovery rates have stagnated and in some cases worsened since treatments were developed. 

Researchers have tried to study talking therapy for years to unlock the secrets of why some therapists get better results than others. It can be as much art as science, based on the experience and gut instinct of qualified therapists. It’s been virtually impossible to fully quantify what works and why—until now. Zac Imel, who is a psychotherapy researcher at the University of Utah, remembers trying to analyze transcripts from therapy sessions by hand. “It takes forever, and the sample sizes are embarrassing,” he says. “And so we didn’t learn very much even over the decades we’ve been doing it.”

AI is changing that equation. The type of machine learning that carries out automatic translation can quickly analyze vast amounts of language. That gives researchers access to an endless, untapped source of data: the language therapists use. 

Researchers believe they can use insights from that data to give therapy a long-overdue boost. The result could be that more people get better, and stay better. 

Blackwell and his colleagues are not the only ones chasing this vision. A company in the US, called Lyssn, is developing similar tech. Lyssn was cofounded by Imel and CEO David Atkins, who studies psychology and machine learning at the University of Washington. 

Both groups train their AIs on transcripts of therapy sessions. To train the NLP models, a few hundred transcripts are annotated by hand to highlight the role therapists’ and clients’ words are playing at that point in the session. For example, a session might start with a therapist greeting a client and then move to discussing the client’s mood. In a later exchange, the therapist might empathize with problems the client brings up and ask if the client practiced the skills introduced in the previous session. And so on.

The technology works in a similar way to a sentiment-analysis algorithm that can tell whether movie reviews are positive or negative, or a translation tool that learns to map between English and Chinese. But in this case, the AI translates from natural language into a kind of bar code or fingerprint of a therapy session that reveals the role played by different utterances.

A fingerprint for a session can show how much time was spent in constructive therapy versus general chitchat. Seeing this readout can help therapists focus more on the former in future sessions, says Stephen Freer, Ieso’s chief clinical officer, who oversees the clinic’s roughly 650 therapists.
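As a rough illustration of what such a fingerprint could be, the sketch below computes the share of a session devoted to each utterance category from an already-labeled transcript; the labels and categories are toy examples, not Ieso’s actual taxonomy or model.

```python
# Hedged sketch of a session "fingerprint": given utterances already labeled
# (by hand or by an NLP model), compute how much of the session fell into each
# category. Labels and categories here are illustrative only.

from collections import Counter

# (speaker, category) -- a toy, pre-labeled session transcript.
LABELED_SESSION = [
    ("therapist", "greeting"),
    ("therapist", "mood_check"),
    ("client", "change_talk_active"),
    ("therapist", "review_homework"),
    ("therapist", "chitchat"),
    ("therapist", "planning"),
]

def session_fingerprint(labeled_utterances):
    counts = Counter(category for _, category in labeled_utterances)
    total = sum(counts.values())
    return {category: round(count / total, 2) for category, count in counts.items()}

if __name__ == "__main__":
    fingerprint = session_fingerprint(LABELED_SESSION)
    print(fingerprint)
    # e.g. flag sessions where general chitchat dominates constructive therapy talk
    if fingerprint.get("chitchat", 0) > 0.3:
        print("high chitchat ratio - worth reviewing with a supervisor")
```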

Looming crisis

The problems that both Ieso and Lyssn are addressing are urgent. Cowley’s story highlights two major shortcomings in the provision of mental-health care: access and quality. Cowley suffered for 15 years before being offered treatment, and the first time he tried it, in 2004, it didn’t help. It was another 15 years before he got treatment that worked.

Cowley’s experience is extreme, but not uncommon. Warnings of a looming mental-health crisis ignore a basic truth: we’re already in one. Despite slowly receding stigma, most of the people who need help for a mental-health issue still don’t get it. About one in five of us has a mental illness at any given time, yet 75% of mentally ill people aren’t receiving any form of care.

And of those who do, only around half can expect to recover. That’s in the best mental-health systems in the world, says Blackwell. “If we went to a hospital with a broken leg and we were told there was a 50-50 chance of it being fixed, somehow that wouldn’t seem acceptable,” he said in a TED talk last year. “I think we can challenge ourselves to have higher expectations.”

The pandemic has exacerbated the problem but didn’t create it. The issue is fundamentally about supply and demand. The demand comes from us, our numbers swelled by one of the most taxing collective experiences in living memory. The problem on the supply side is a lack of good therapists.

This is what Ieso and Lyssn are addressing. According to Freer, people typically come at the supply problem with the assumption that you can have more therapists or better therapists, but not both. “I think that’s a mistake,” he says. “I think what we’re seeing is you can have your cake and eat it.” In other words, Ieso thinks it can increase access to care and use AI to help manage its quality.

Ieso is one of the largest providers backed by the UK’s National Health Service (NHS) that offer therapy over the internet by text or video. Its therapists have so far delivered more than 460,000 hours of cognitive behavioral therapy (CBT)—a commonly used and effective technique that helps people manage their problems by changing the way they think and behave—to around 86,000 clients, treating a range of conditions including mood and anxiety disorders, depression, and PTSD.

Ieso says its recovery rate across all disorders is 53%, compared with a national average of 51%. That difference sounds small—but with 1.6 million referrals for talking therapy in the UK every year, it represents tens of thousands of people who might otherwise still be ill. And the company believes it can do more.

Since 2013, Ieso has focused on depression and generalized anxiety disorder, and used data-driven techniques—of which NLP is a core part—to boost recovery rates for those conditions dramatically. According to Ieso, its recovery rate in 2021 for depression is 62%—compared to a national average of 50%—and 73% for generalized anxiety disorder—compared to a national average of 58%. 

Ieso says it has focused on anxiety and depression partly because they are two of the most common conditions. But they also respond better to CBT than others, such as obsessive compulsive disorder. It’s not yet clear how far the clinic can extend its success, but it plans to start focusing on more conditions. 

In theory, using AI to monitor quality frees up clinicians to see more clients because better therapy means fewer unproductive sessions, although Ieso has not yet studied the direct impact of NLP on the efficiency of care.

“Right now, with 1,000 hours of therapy time, we can treat somewhere between 80 and 90 clients,” says Freer. “We’re trying to move that needle and ask: Can you treat 200, 300, even 400 clients with the same amount of therapy hours?”

Unlike Ieso, Lyssn does not offer therapy itself. Instead, it provides its software to other clinics and universities, in the UK and the US, for quality control and training.

In the US, Lyssn’s clients include a telehealth opioid treatment program in California that wants to monitor the quality of care being given by its providers. The company is also working with the University of Pennsylvania to set up CBT therapists across Philadelphia with its technology.

In the UK, Lyssn is working with three organizations, including Trent Psychological Therapies Service, an independent clinic, which—like Ieso—is commissioned by the NHS to provide mental-health care. Trent PTS is still trialing the software. Because the NLP model was built in the US, the clinic had to work with Lyssn to make it recognize British regional accents. 

Dean Repper, Trent PTS’s clinical services director, believes that the software could help therapists standardize best practices. “You’d think therapists who have been doing it for years would get the best outcomes,” he says. “But they don’t, necessarily.” Repper compares it to driving: “When you learn to drive a car, you get taught to do a number of safe things,” he says. “But after a while you stop doing some of those safe things and maybe pick up speeding fines.”

Improving, not replacing

The point of the AI is to improve human care, not replace it. The lack of quality mental-health care is not going to be resolved by short-term quick fixes. Addressing that problem will also require reducing stigma, increasing funding, and improving education. Blackwell, in particular, dismisses many of the claims being made for AI. “There is a dangerous amount of hype,” he says.

For example, there’s been a lot of buzz about things like chatbot therapists and round-the-clock monitoring by apps—often billed as Fitbits for the mind. But most of this tech falls somewhere between “years away” and “never going to happen.”

“It’s not about well-being apps and stuff like that,” says Blackwell. “Putting an app in someone’s hand that says it’s going to treat their depression probably serves only to inoculate them against seeking help.”

One problem with making psychotherapy more evidence-based, though, is that it means asking therapists and clients to open up their private conversations. Will therapists object to having their professional performance monitored in this way? 

Repper anticipates some reluctance. “This technology represents a challenge for therapists,” he says. “It’s as if they’ve got someone else in the room for the first time, transcribing everything they say.” To start with, Trent PTS is using Lyssn’s software only with trainees, who expect to be monitored. When those therapists qualify, Repper thinks, they may accept the monitoring because they are used to it. More experienced therapists may need to be convinced of its benefits.

The point is not to use the technology as a stick but as support, says Imel, who used to be a therapist himself. He thinks many will welcome the extra information. “It’s hard to be on your own with your clients,” he says. “When all you do is sit in a private room with another person for 20 or 30 hours a week, without getting feedback from colleagues, it can be really tough to improve.”

Freer agrees. At Ieso, therapists discuss the AI-generated feedback with their supervisors. The idea is to let therapists take control of their professional development, showing them what they’re good at—things that other therapists can learn from—and not so good at—things that they might want to work on. 

Ieso and Lyssn are just starting down this path, but there’s clear potential for learning things about therapy that are revealed only by mining sufficiently large data sets. Atkins mentions a meta-analysis published in 2018 that pulled together around 1,000 hours’ worth of therapy without the help of AI. “Lyssn processes that in a day,” he says. New studies published by both Ieso and Lyssn analyze tens of thousands of sessions.

For example, in a paper published in JAMA Psychiatry in 2019, Ieso researchers described a deep-learning NLP model that was trained to categorize utterances from therapists in more than 90,000 hours of CBT sessions with around 14,000 clients. The algorithm learned to discern whether different phrases and short sections of conversation were instances of specific types of CBT-based conversation—such as checking the client’s mood, setting and reviewing homework (where clients practice skills learned in a session), discussing methods of change, planning for the future, and so on—or talk not related to CBT, such as general chat. 

The researchers showed that higher ratios of CBT talk correlate with better recovery rates, as measured by standard self-reported metrics used across the UK. They claim that their results provide validation for CBT as a treatment. CBT is widely considered effective already, but this study is one of the first large-scale experiments to back up that common assumption.

In a paper published this year, the Ieso team looked at clients’ utterances instead of therapists’. They found that more of what they call “change-talk active” responses (those that suggest a desire to change, such as “I don’t want to live like this anymore”) and “change-talk exploration” (evidence that the client is reflecting on ways to change) were associated with greater odds of reliable improvement and engagement. Not seeing these types of statements could be a warning sign that the current course of therapy is not working. In practice, it could also be possible to study session transcripts for clues to what therapists say to elicit such behavior, and train other therapists to do the same.

This is valuable, says Jennifer Wild, a clinical psychologist at the University of Oxford. She thinks these studies help the field, making psychotherapy more evidence-based and justifying the way therapists are trained. 

“One of the benefits of the findings is that when we’re training clinicians, we can now point to research that shows that the more you stick to protocol, the more you’re going to get symptom change,” says Wild. “You may feel like doing chitchat, but you need to stick to the treatment, because we know it works and we know how it works. I think that’s the important thing—and I think that’s new.”

These AI techniques could also be used to help match prospective clients with therapists and work out which types of therapy will work best for an individual client, says Wild: “I think we’ll finally get more answers about which treatment techniques work best for which combinations of symptoms.”

This is just the start. A large health-care provider like Kaiser Permanente in California might offer 3 million therapy sessions a year, says Imel—“but they have no idea what happened in those sessions, and that seems like an awful waste.” Consider, for example, that if a health-care provider treats 3 million people for heart disease, it knows how many got statins and whether or not they took them. “We can do population-level science on that,” he says. “I think we can start to do similar things in psychotherapy.”

Blackwell agrees. “We might actually be able to enter an era of precision medicine in psychology and psychiatry within the next five years,” he says.

Ultimately, we may be able to mix and match treatments. There are around 450 different types of psychotherapy that you can get your insurer to pay for in the US, says Blackwell. From the outside, you might think each was as good as another. “But if we did a kind of chemical analysis of therapy, I think we’d find that there are certain active ingredients, which probably come from a range of theoretical frameworks,” he says. He imagines being able to pull together a selection of ingredients from different therapies for a specific client. “Those ingredients might form a whole new type of treatment that doesn’t yet have a name,” he says.

One intriguing possibility is to use the tools to look at what therapists with especially good results are doing, and teach others to do the same. Freer says that 10 to 15% of the therapists he works with “do something magical.”

“There’s something that they’re doing consistently, with large volumes of clients, where they get them well and the clients stay well,” he says. “Can you bottle it?”

Freer believes the person who treated Kevin Cowley is just that type of therapist. “That’s why I think Kevin’s story was such a powerful one,” he says. “Think of how many years he’s been suffering. Now imagine if Kevin had had access to care when he was 17 or 18.”

To reach cloud nine, will you take the red pill or the blue pill?

IT Portal from UK - Mon, 12/06/2021 - 05:00

With the fourth installment of the movie “The Matrix” scheduled for this year, viewers will once again enter a new world, where they will need to choose their own reality. In the original 1999 movie, Neo is offered the choice between a red pill and a blue pill. The red pill represents an uncertain future and would allow him to escape into the real world, while the blue pill would lead him back to ignorance, living in confined comfort.

The shift to remote work at the start of the pandemic became a catalyst for businesses, which are now facing a similar conundrum when it comes to their cloud automation journey – causing “blue pill or red pill” decisions to take place overnight. Businesses that choose the blue pill migrate their IT infrastructure to the cloud without making significant changes to their current IT architecture — while this solves some key operational problems, it comes at a cost and leaves many opportunities untapped. Businesses that choose the red pill recognize that the benefits of cloud can only be achieved by radically changing their way of working.

So, which pill should businesses take to escape the Matrix and successfully adopt cloud technology?

Choose to rehost with the blue pill

For over a decade, cloud platforms have shown how organizations can completely transform their business, simplify complexity, and stay prepared for the future. By choosing the blue pill, businesses acknowledge that the option to fully migrate to the cloud — the red pill — exists, but still choose the alternative. Making this conscious decision means businesses have chosen not to make significant changes to their IT infrastructure and have instead kept the shift to the cloud to the bare minimum. But if migrating to the cloud is so great, why choose the blue pill?

It’s true that during the pandemic, more enterprises realized how constrained they are with an on-premises IT infrastructure that can't accommodate a remote workforce. However, the pandemic also caused leaders to think about where to cut costs and limit investments; this meant many businesses could only rehost (lift and shift) their infrastructure. One thing is clear: For those relying on in-house server platforms, embracing the cloud is a lot easier said than done.

That’s not to say that choosing the blue pill is the wrong option. “Blue pill” businesses will often choose to save money and time now to “get to the cloud” quicker, planning to deal with application modernization once they have migrated. The benefits of this rehosting strategy include limited project cost, effort and complexity compared to re-platforming and refactoring. However, businesses shouldn’t forget that this may mean migrating brittle processes to the cloud. Furthermore, applications designed outside the cloud can be inefficient and expensive to run in it; and if an application has an existing problem, known or unknown, it will likely bring that problem to the cloud. 

Choosing to rehost infrastructure seems like the safer option, but just like in the Matrix, businesses will fail to realize the true benefits unless they choose complete cloud migration.

Rebuild with the red pill

The pandemic highlighted the need for cloud migration. But cloud migration isn’t just about moving to the cloud; it involves a state of continuous reinvention if the cloud strategy is to reduce costs and create new opportunities — it’s no wonder this option appears to be a hard pill to swallow. The red pill approach is about disrupting your market without disrupting your business during cloud migration. This means eliminating silos between infrastructure and applications, and architecting cloud-native solutions that address key business problems.

Different organizations have different reasons for choosing the red pill option. Some businesses are moving to the cloud to keep up with latest technology trends like IoT, video, chat solutions, and exponential growth in data associated with these technologies. Other organizations that aren’t in technology-focused industries are seeing an increase in technology needs. But rather than hiring more technology-specific staff, they need to rely on cloud vendors to maintain their systems so that the organization can focus on the work that matters — whether that’s customer service or manufacturing the highest quality product.

The operational benefits of the cloud have been clear during the Covid-19 pandemic — helping staff transition to working remotely and contributing to business resilience during an ongoing, major disruption. As with all great things, though, there are challenges. According to Deloitte’s Cloud Survey 2021, the complexity of migration and clarity of ambition were the most common barriers to cloud adoption, indicating the need for careful consideration and planning. 

However, this hasn’t deterred organizations: nearly 90 percent are using cloud infrastructure or are planning to do so in the next three years. By choosing this cloud-only, or red pill, approach, businesses will need to rethink their way of working and use it as an opportunity to modernize legacy processes to achieve the best outcome.

Head up in the clouds?

Migrating to the cloud can provide several benefits including reduced costs, increased flexibility and enabling collaboration with the current distributed workforce. But whether the cloud can live up to its benefits depends on the approach a business takes.

As with any new technology investment, the old ways of conducting business will likely change, which is why leaders need to ensure it is the right decision. But unlike in The Matrix, choosing the blue pill in a business’s cloud migration story doesn’t mean it can’t choose the red pill later.

Attar Naderi, UK Business Manager, Laserfiche

Does your organization have a data platform leader? It could soon

IT Portal from UK - Mon, 12/06/2021 - 04:30

There’s no one-size-fits-all solution for a modern data platform, and there likely never will be with the proliferation of multiple public and private cloud environments, entrenched on-premises data centers, and the exponential rise in edge computing – data sources are multiplying almost at the rate of data itself.

Today’s data platforms increasingly take a broad multi-platform approach that incorporates a wide range of data services (e.g. data warehouse, data lake, transactional database, IoT database and third-party data services), plus integration services that support all major clouds and on-premises platforms and the applications that run on and across these environments. Modern data platforms need a data fabric – technology that enables data distributed across different locations to be accessed in real time through a unifying data layer – to drive data flow orchestration, data enrichment, and automation. To meet the varied requirements of users across an organization, including data engineers, data scientists, business analysts and business users, the platform should also incorporate shared management and security services, as well as support a wide range of application development and analytical tools.

However, these needs create a singular challenge: who’s going to manage the creation and maintenance of such a platform? That’s where the role of the data platform leader comes in. Just as we’ve seen the creation of roles like Chief Data Officer and Chief Diversity Officer in response to critical needs, organizations require a highly skilled individual to manage the creation and maintenance of their platform(s). Enter the data platform leader – someone with a broad understanding of databases and streaming technologies, as well as a practical understanding of how to facilitate frictionless access to these data sources, how to formulate a new purpose, vision and mission for the platform and how to form close partnerships with analytics translators.  We’ll get to those folks in a minute.

Developing a new purpose, vision and mission

Why must a data platform leader develop a new purpose, vision and mission? Consider this: data warehouse users have traditionally been data engineers, data scientists and business analysts who are interested in complex analytics. These users typically represent a relatively small percentage of an organization’s employees. The power and accessibility of a data platform capable of running not just in the data center, but also in the cloud or at the edge, will invariably bring in a broader base of business users who will use the platform to run simpler queries and analytics to make operational decisions. 

However, accompanying these users will be new sets of business and operational requirements. To satisfy this ever-expanding user base and their different requirements, the data platform leader will need to formulate a new purpose for the platform (why it exists), a new vision for the platform (what it hopes to deliver) and a new mission (how it will achieve that vision).

Facilitating data service convergence

Knowledge of relational databases with analytics-optimized schemas and/or analytic databases has long been part of a data warehouse manager’s wheelhouse. However, the modern data platform extends access much further, enabling access to data lakes and transactional and IoT databases, and even streaming data. Increasing demand for real-time insights and non-relational data that can enable decision intelligence are bringing these formerly distinct worlds closer together. This requires the data platform leader to have a broad understanding of databases and streaming technologies as well as a practical understanding of how to facilitate frictionless access to these data sources. 

Enabling frictionless data access

A data warehouse typically includes a semantic layer that represents data so end users can access that data using common business terms. A modern data platform, though, demands more. While a semantic layer is valuable, data platform leaders will need to enable more dynamic data integration than is typically sufficient to support a centralized data warehouse design.  Enter the data fabric to provide a service layer that enables real-time access to data sourced from the full range of the data platform’s various services. The data fabric offers frictionless access to data from any source located on-premises and in the cloud to support the wide range of analytic and operational use cases that such a platform is intended to serve. 
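As a very rough, hypothetical sketch of the idea (not any vendor's actual API), a data fabric can be thought of as a thin routing layer that gives callers a single way to query data wherever it physically lives:

```python
# Rough sketch of the data fabric idea: a thin service layer that gives
# callers one way to query data wherever it lives. The class and
# connector names are hypothetical illustrations, not a real product's API.
class DataFabric:
    def __init__(self):
        self._connectors = {}

    def register(self, source_name, query_fn):
        """Register a callable that knows how to query one backing store
        (a warehouse, a data lake, an IoT database, a SaaS API, ...)."""
        self._connectors[source_name] = query_fn

    def query(self, source_name, request):
        """Route a request to the right connector; callers never need to
        know where or how the data is physically stored."""
        return self._connectors[source_name](request)

fabric = DataFabric()
fabric.register("warehouse", lambda req: f"rows from the warehouse for {req!r}")
fabric.register("iot", lambda req: f"latest readings for {req!r}")
print(fabric.query("warehouse", "monthly_sales"))
print(fabric.query("iot", "sensor_42"))
```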

Working with analytics translators 

I mentioned earlier that data platform leaders would need the ability to form close partnerships with analytics translators. Let’s start with what an analytics translator does and then we’ll get to why a close relationship is important. 

According to McKinsey & Company, the analytics translator serves the following purpose: 

“At the outset of an analytics initiative, translators draw on their domain knowledge to help business leaders identify and prioritize their business problems, based on which will create the highest value when solved. These may be opportunities within a single line of business (e.g., improving product quality in manufacturing) or cross-organizational initiatives (e.g., reducing product delivery time).”

I expect the analytics translator and the data platform leader will become important partners. The analytics translator will be invaluable in establishing data platform priorities, and the data platform leader will provide the analytics translator with key performance indicators (KPIs) on mutually-agreed-upon usage goals.

In conclusion, the data platform leader has many soft and hard skillset requirements in common with a data warehouse manager, but there are a few fundamental and significant differences. The key differences include developing a new purpose, vision and mission; having expertise in new data services and data fabrics; knowing how best to access those services; and being able to form close partnerships with analytics translators.

Teresa Wingfield, Director of Product Marketing, Actian

Quality in, quality out: how to get a machine learning platform humming

IT Portal from UK - Mon, 12/06/2021 - 04:00

Machine Learning is often presented as the cutting edge of what we can currently achieve with technology. A lot of innovation and progress is coming from the fields of data science and machine learning, but much of the language around the topic is filled with jargon and aimed at an expert audience. As such, the whole concept can feel quite opaque. Having built the machine learning algorithm behind Infogrid’s smart building platform, I can answer one of the foundational questions: what do machine learning algorithms learn from, and what is their first lesson?

A Machine Learning platform is a bit like a car’s engine, where the pistons are replaced by algorithms. No matter how good that engine is, it can’t run without a form of fuel. For a Machine Learning platform, that fuel is data. When you are starting from scratch you need to create what is called ‘ground truth’: the core dataset on which everything else is based, or against which it is checked. Without a robust ground truth you won’t be able to trust the outputs of your engine; the foundation of the ‘fuel’ is crucial. This is why the first lesson for an ML algorithm is always based around developing an understanding of the world through observation, measurement and collection of real-world data. This grounds the whole algorithm in reality and allows for extrapolation.

How do you create a ‘ground truth’?

There are a few ways to get the ground truth for your system. In some scenarios you may be able to find an existing dataset already available for free in the public domain, or one that can be purchased. Some companies have already collected a lot of real-world data on their customers as part of the normal operation of their business. For example, a supermarket will have in-depth information on the shopping habits of members of its loyalty scheme. It could use that data to run a machine learning platform that, in theory, provides better deal recommendations, or delivers insights on changing trends in customer behavior. But what do you do when you are developing an ML platform and the data isn’t readily available? In that case, you have to run experiments to create the data yourself. This really puts the ‘science’ into data science and may come as a surprise to people who think you must be stuck behind a computer all day to create anything called an algorithm.

Infogrid provides a smart building platform which can automate a range of extremely time-intensive tasks, from checking air quality and virus risk in office spaces to monitoring for legionella risk in water pipes. The breadth of what Infogrid can do means that we have had to create more than one ground truth dataset.

The first dataset we needed to collect was based on understanding how people use offices. This was a critical first step as we needed to collect this data before we could provide any analysis of our customers’ offices. So, we installed a wide range of sensors in our own offices to create the ‘ground truth’ dataset based on how our team was using the workplace. For example, to collect data around desk occupancy we put a pressure sensor in each seat and a temperature sensor under the desk. This dual-sensor data collection let us understand how often people were sitting at their desks while weeding out any scenarios where a bag or a box had been left on a chair. With two sets of sensors in action, we were also able to find more stories in the data than we had initially expected. The temperature sensors let us understand which areas within the office were hotter when there was bright sun coming in through the windows and which remained cooler. With this data, we could figure out when we should roll down blinds and reduce our own heating and air con costs. All the data is anonymized and generalized and is used to give the algorithm a core truth of how people use an office, not to keep tabs on our own staff!
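As a rough illustration of the kind of rule this dual-sensor setup makes possible, here is a hypothetical Python sketch; the thresholds and function are invented for demonstration and are not Infogrid's actual model.

```python
# Sketch of how two sensor streams might be combined to label desk
# occupancy, in the spirit described above. Not Infogrid's actual
# algorithm; thresholds and names are illustrative only.
def desk_occupied(pressure_kg, under_desk_temp_delta_c):
    """A seated person produces both sustained pressure on the chair and
    a small temperature rise under the desk. Pressure alone (e.g. a bag
    left on the seat) is not counted as occupancy."""
    pressure_suggests_person = pressure_kg > 20             # heavier than a typical bag
    warmth_suggests_person = under_desk_temp_delta_c > 0.5  # degrees above ambient
    return pressure_suggests_person and warmth_suggests_person

print(desk_occupied(pressure_kg=65, under_desk_temp_delta_c=1.2))  # True: someone is sitting
print(desk_occupied(pressure_kg=8, under_desk_temp_delta_c=0.0))   # False: probably a bag
```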

As Infogrid grows so do our capabilities, best seen in our development of legionella compliance. Legionella, for those of you who aren’t familiar, is a deadly pathogen that multiplies very quickly in warm, stagnant water. This kind of environment is often found in poorly maintained or under-used warm water plumbing. That's why facilities managers and building supervisors need to ensure that all hot water taps are regularly flushed and that the temperature of the hot water system remains above 47 degrees Celsius. Traditionally, Legionella checks must be performed manually, i.e. someone has to manually run each hot water tap in a building for around 5 minutes and measure the water temperature. This process takes a lot of time and wastes a lot of water. It’s a process that was ripe for automation. 

The way we use ML for this activity is slightly different from the desk occupancy example above. The aim has been to reduce cost and complexity by using a single heat sensor to measure when the tap was last used as well as to record the water temperature. 

To achieve this we had to create a ground truth of how the water temperature in a pipe changes when the tap is used. We again turned to real-world experiments and installed automatic tap controls in our office. This way we could tell when a tap was opened and for how long, and then track the heat changes that took place when a tap was turned on. From this ground truth, our ML algorithm can now tell, from just a heat sensor, when a tap was last used. Pretty neat! 
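A crude way to picture the underlying idea, detecting the most recent sharp temperature rise in the pipe, is sketched below; the readings, threshold and function are hypothetical and far simpler than anything running in production.

```python
# Sketch: inferring the last tap use from a pipe-temperature trace by
# finding the most recent sharp rise. Purely illustrative; the sample
# data and threshold are invented.
def last_tap_use(timestamps, temps_c, rise_threshold_c=5.0):
    """Return the timestamp of the most recent reading where the pipe
    temperature jumped by more than rise_threshold_c over the previous
    reading, or None if no such jump is found."""
    for i in range(len(temps_c) - 1, 0, -1):
        if temps_c[i] - temps_c[i - 1] > rise_threshold_c:
            return timestamps[i]
    return None

# Readings every 10 minutes: the jump from 21°C to 48°C marks a hot tap being run.
timestamps = ["09:00", "09:10", "09:20", "09:30", "09:40"]
temps_c = [20.5, 21.0, 48.0, 41.0, 33.0]
print(last_tap_use(timestamps, temps_c))  # prints "09:20"
```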

When you are doing these kinds of experiments, you need to be flexible and work out solutions to problems that you didn’t foresee in initial planning. For example, when installing a heat sensor you have to be mindful of how close to the boiler the sensor will be. If you are too close then the heat will likely be conducted along the metal from the boiler, rather than from the water within the pipe. There are a few ways you can mitigate an issue like this. The easiest thing to do is to move the sensors further away. You could also set up your system so that if a sensor has to be placed near the boiler you can tell the platform to account for the resulting heat disparity. Ultimately you want a system that can figure out whether a sensor is near the boiler and adjust accordingly without the need for human input. We are not quite there yet, but we’re close!
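One very simple way to express the second mitigation, telling the platform to account for a sensor that sits near the boiler, is a calibration offset; the numbers below are purely illustrative assumptions.

```python
# Sketch: correcting readings from a sensor flagged as close to the
# boiler, where conducted heat inflates the measurement. The offset is
# illustrative; in practice it would be calibrated per installation.
def corrected_temp(raw_temp_c, near_boiler, boiler_offset_c=4.0):
    return raw_temp_c - boiler_offset_c if near_boiler else raw_temp_c

print(corrected_temp(52.0, near_boiler=True))   # 48.0
print(corrected_temp(52.0, near_boiler=False))  # 52.0
```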

In the end, it is important to be scientific and rigorous when collecting ground truth datasets. We collect data across multiple sources which gives us real confidence in the data. It is also an ever-evolving process. As our platform expands, our existing ground truths increase in accuracy and complexity. And as we move into offering new services, we add new ground truths to our library. There is a lot of creativity and real-world testing which goes into developing machine learning platforms, and you have to be conscious of the limitations of the data you collect, constantly working to ensure any weaknesses are bolstered over time. 

The proof is always ultimately in the pudding. If your ML system is doing what you intended it to do, then your ground truth is probably accurate enough. If you get odd outputs, it could be a warning sign that you need to go back to the drawing board and collect your ground data from scratch. Ultimately an engine is only as good as the fuel you put in it, and that is still true of an ML platform. Data scientists need to be rigorous in the data they use to build their systems, otherwise all outputs are compromised. Put the time into getting the ground truth right and you will be rewarded with a nice and shiny machine learning platform!

Roger Nolan, CTO, Infogrid

Why a password manager could be your most vital security tool

IT Portal from UK - Mon, 12/06/2021 - 03:30

As every business knows, staying secure is vital to success - no matter what industry you're in, keeping your data and your workers safe is paramount. But with cyberattacks an increasingly worrying threat, the consequences of being hit are more serious than ever.

In order to stay protected, your business needs a comprehensive suite of security offerings - it's no longer enough to just rely on a standard firewall or antivirus. You need to make sure everything important stays safe - and that includes passwords.

Having a strong and effective password manager allows your business and your employees to create and share unique, strong passwords easily.

1Password has become one of the biggest players in the market in recent years, and whether you're a small business looking to grow, or a larger enterprise aiming to keep a widespread workforce safe, it has something for you.

The company recently passed 100,000 business customers worldwide, so what makes it such an effective security partner?


To start with, 1Password is secure by design, and any information you store is end-to-end encrypted using 256-bit AES encryption, offering not just incredibly tough security, but also meaning that only you can decrypt your data.

Passwords are only as strong as the people that create them, and using a password manager is a great way to ensure weak logins don't give criminals a backdoor into your business. 1Password is the only password manager to combine a Master Password with a unique, locally generated, 128-bit Secret Key to authenticate, and its zero-knowledge architecture means the data you save can’t be accessed by anyone else, even the company itself.
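To see why pairing a memorised password with a locally stored, high-entropy secret strengthens the derived key, consider the conceptual sketch below. This is not 1Password's actual key-derivation protocol; the function, parameters and iteration count are illustrative assumptions only.

```python
# Conceptual sketch only: NOT 1Password's real key-derivation scheme.
# It shows why mixing a memorised password with a locally generated
# secret key means a stolen password alone is not enough to decrypt.
import hashlib
import secrets

def derive_vault_key(master_password: str, secret_key: bytes, salt: bytes) -> bytes:
    # Stretch the low-entropy password with PBKDF2, then mix in the
    # high-entropy secret key that never leaves the user's devices.
    stretched = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)
    return hashlib.sha256(stretched + secret_key).digest()

secret_key = secrets.token_bytes(16)  # 128 bits, generated locally
salt = secrets.token_bytes(16)
key = derive_vault_key("correct horse battery staple", secret_key, salt)
print(key.hex())
```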

1Password also comes with compliance built-in, giving you one less thing to worry about as your business grows. As well as conforming to all the top industry security standards, the platform also offers you the chance to create customised security policies, firewall rules, and access priorities. You can get all the visibility you need through analytics tailored to your business, allowing you to really dig into the data, with activity logs to help you track vital data for any audit trails.

To keep you protected from even the latest dangers, 1Password also offers advanced threat monitoring protection, with its Watchtower tool automatically alerting not just when a major breach occurs on sites you use, but also flagging potential weak spots in terms of login information, unsecured websites, and expiring items.

And if the worst does occur and you think you’ve been exposed in a data breach, 1Password can issue customizable email notifications to anyone that may have been affected, as well as examining and filtering breach results in terms of seriousness.

There's even two-factor authentication to give that extra layer of protection, allowing you to use authenticator apps, security keys or other systems to keep your data safe.

So if this sounds like the way to go, get in touch with 1Password today - you can even sign up for a 14-day trial of its Teams or Business account to make sure it's exactly what you're looking for.

The Popular Stock Metric That Can Lead Investors Astray

Harvard Business School Working Knowledge - Mon, 12/06/2021 - 00:00
Investors may rely too heavily on a financial measure that no longer reflects the economic fundamentals of modern business. What should investors do? Research by Charles C.Y. Wang and colleagues. By Rachel Layne, Research & Ideas

What is VPS hosting?

IT Portal from UK - Fri, 12/03/2021 - 09:14

Virtual private server (VPS) hosting is high-powered hosting that dedicates a specified amount of RAM, storage, bandwidth, and other server resources to you and your project. In our guide to the best web hosting services, you will find various platforms offering excellent VPS hosting solutions.

What does VPS hosting do?

VPS hosting is a great option for running powerful online stores (Image credit: Pexels)
  • Uses virtualization technology to split one physical server into numerous smaller VPS servers with their own dedicated RAM, storage, bandwidth, and CPU processing power
  • Enables excellent configurability and flexible management options: most VPS hosting comes with root access, which means users can install and manage software however they want
  • Provides a much higher level of security than shared hosting: through the virtualization process, VPS web hosting effectively segments different people using the same physical server, which makes it much less likely that problems with one person’s hosting account will affect other users’ VPS
  • Offers an excellent level of uptime and performance: because users have a specified amount of server resources allocated to them and their subscription, they can rest assured that they will experience a high level of performance at all times
  • Enables users to scale rapidly and when required: most VPS hosts provide one-click scaling options, and some even use hourly billing to ensure that users are only ever paying for what they use
How businesses can use VPS hosting

VPS hosting is versatile, and can be used for numerous different things (Image credit: @wocintechhat, Unsplash)

In simple terms, VPS hosting is an attractive option for businesses that require something a little more powerful than shared hosting, but can’t justify the cost of their own dedicated server. VPS hosting offers excellent versatility, the ability to scale as required, and an allocated amount of server resources that are accessible to you and you alone. 

Many businesses will use VPS hosting to run their website or online store. High-end shared hosting can be a viable option for this, but it comes with a set of security and performance risks that many businesses just don’t want to take. With VPS, you can rest assured that your site and any associated information will be fully secure at all times. 

Some businesses also use VPS for secure file storage. Cheap VPS servers with a large amount of storage present great options for securely storing server backups, important documents, and other files that you can’t afford to have lost or compromised.

Another popular use case is to test new web applications and other programs before making them available to the public. The combination of power and affordability makes VPS a popular option for this, particularly among smaller businesses that can’t justify the higher price of a dedicated server. 

Last, but not least, a small number of businesses use VPS hosting to run game servers. Games such as Minecraft and Rust let you create your own multiplayer servers, hosted on specialized infrastructure and available to hundreds of people at a time.

Features and benefits of VPS hosting

VPS hosting usually comes with full root access, enabling you to configure your server as required (Image credit: @wocintechchat, Unsplash)

Full root access

A large percentage of VPS hosting plans come with root access, which is excellent for those who want to configure their servers in a specific manner. With root access, you will have full control over your server. You can add and remove software as required, install custom scripts, select your own control panel, and much, much more.

Dedicated server resources

One of the biggest problems with shared hosting is that your website can be affected by other sites hosted on the same physical server. If one site experiences a spike in traffic, it can use more than its fair share of resources, affecting the performance of other sites. With VPS hosting, you don’t have to worry about this. You will have a dedicated amount of server resources that are only available to you and your website, enabling you to maintain a high level of performance across the board.

Excellent security

In a similar manner, the separation and distance VPS hosting gives you from others sharing your physical server is great for security. Since you will have your own virtualized server that’s only available to those with the correct access permissions, there’s very little chance of your security being compromised due to issues with another website.

Scalability and configurability

VPS hosting is known for its excellent scalability, which basically means that most VPS packages enable you to add and remove server resources as required. Cloud VPS is particularly good for this, as it’s often billed hourly. This means that you can boost the power of your server temporarily, say during a busy weekend when you anticipate higher traffic than normal, then go back to regular resource use as required.
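As a back-of-the-envelope illustration, with made-up prices rather than any particular host's rates, hourly billing means a temporary boost only costs you for the hours it actually runs:

```python
# Illustrative arithmetic with hypothetical prices, not real plans.
base_hourly = 0.015     # roughly an $11/month plan
boosted_hourly = 0.060  # a temporarily upgraded plan
boost_hours = 48        # a busy weekend

extra_cost = (boosted_hourly - base_hourly) * boost_hours
print(f"Extra cost of the weekend boost: ${extra_cost:.2f}")  # $2.16
```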

Reliable performance

One of the main benefits of using VPS hosting over shared hosting is its usually excellent performance. Once again, this comes back to the virtualization process, which effectively distances your website from any other users sharing the same physical server.

How much does VPS hosting cost?

Fully managed VPS hosting solutions can be more expensive (Image credit: @wocintechchat, Unsplash)

The price of a new VPS hosting subscription can vary from just a few dollars a month to hundreds or even thousands of dollars a month. The amount you pay will depend on a number of factors, including the server resources you require, whether or not you need technical server management, and how many extra tools are included with your subscription. 

For example, Hostinger, one of our favorite all-around web hosting providers, offers eight different VPS plans. At the lower end of the spectrum sits the VPS 1 plan, which includes just 1GB RAM, 20GB SSD storage, and 1TB bandwidth, for $3.95 a month (renews at $8.16 a month). On the expensive side, a VPS 8 subscription starts at $77.99 a month (renews at $219 a month) for 16GB RAM, 250GB SSD, and 12TB bandwidth.

Another popular option is Hostwinds, which offers a selection of managed and unmanaged VPS in both Windows and Linux flavors. Its unmanaged Linux options range from $4.99 to $328.99 a month, and its managed Linux hosting costs between $8.24 and $395.24 a month. Windows VPS is a little more expensive, with unmanaged options costing between $10.99 and $376.99 a month and managed solutions ranging from $12.74 to $431.24 a month.

VPS hosting FAQ

What is a VPS server used for?

A VPS represents a step up from basic shared hosting. It’s generally used by those who need an advanced web hosting solution that offers more power and security than budget shared hosting. 

However, VPS hosting can be used for so much more than just web hosting. If you’re a developer, you might use your VPS to test new web apps and programs. Many people use VPS hosting to create gaming servers for games like Minecraft, while others use VPS for things like file storage or to back up their main server.

How much RAM does my VPS need?

The amount of RAM your VPS needs will depend on the things that you’re planning to use it for. If you require a fast, high-performing server to complete difficult tasks, or deal with a large number of website visitors, you will need more RAM. Similarly, if you’re just using your VPS for something like file storage, you should be able to get away with a lower amount. 

As an example, 1GB or 2GB should be enough if you’re running a single website or basic server. Larger ecommerce stores will benefit from a higher amount, such as 4GB. Small gaming servers can work with 1GB or 2GB, but we recommend going for an option with at least 4GB or 8GB.

Can you host a website on VPS?

Yes, you can host a website on VPS. In fact, website hosting is one of the most common uses for VPS servers. They offer more power and reliability than cheaper shared hosting options, making them a preferred choice among those creating large, high-traffic sites. In addition, you can rest assured that, when you use VPS hosting, your website and its performance won’t be affected by any other sites occupying the same physical server.

What is the difference between VPS and dedicated hosting?

Both VPS and dedicated server hosting will give you access to a specified amount of RAM, storage, bandwidth, and processing power. A dedicated server is, as the name suggests, a physical server that’s dedicated to you and your requirements. A VPS is a virtual server, and there can be multiple VPSs on the same physical server. 

Can I use VPS for gaming?

Yes, VPSs are an excellent option for gamers wanting to create their own servers. Platforms such as Hostinger offer specialized game hosting, including Minecraft hosting. There are also numerous companies that specialize in game hosting, including Shockbyte, ScalaCube, and MCProHosting.

Main takeaways
  • VPS hosting is more powerful than shared hosting, but much more affordable than high-end dedicated servers, making it a popular option among SMBs
  • One of the standout features of VPS hosting is the server virtualization process, which effectively eliminates any possible interactions between your sites and those of other users sharing the same physical server
  • With VPS hosting, you can do everything from creating a new website to testing web apps, and securely storing important files and data
  • The price of VPS hosting can vary considerably, with factors like the amount of server resources, management level, and operating system you require all impacting cost
  • The scalability and configurability offered by VPS servers is excellent, ensuring you always have access to the server resources you require
Further reading on VPS hosting

If you would like to find out more, consider checking out our guide to the best VPS hosting available today. You might like to find out how VPS hosting is related to your business, or what the difference is between dedicated servers and VPS.

Many businesses still refuse to shake legacy systems and processes

IT Portal from UK - Fri, 12/03/2021 - 07:30

Businesses want to be powered by the cloud and are making strides towards that goal, but are still being held back by legacy systems, a new report from cloud consultancy Mobilise suggests.

Surveying 603 small, medium and large companies across the UK for the report, Mobilise found that the vast majority of businesses (83 percent) have begun their cloud journey. Almost a third (29 percent) are already cloud-native, with the bulk of these firms operating out of London (34 percent). 

However, clinging onto legacy systems is creating problems for these firms. For almost a quarter (24 percent) it’s the biggest barrier to adoption, ahead of financial outlay, lack of staff capacity, lack of understanding, and decision-makers being oblivious to the benefits of cloud.

But as businesses migrate to the cloud, they need help. External assistance is in high demand, with almost nine in ten (87 percent) saying they’d considered (or are using) an outsourced supplier. On the other hand, two in five (39 percent) believe they can manage the migration without assistance. 

James Carnie, Co-Founder and CTO of Mobilise, said: “It’s no surprise that reliance on legacy systems is the biggest barrier to adopting the cloud. The ‘if it ain’t broke, don’t fix it’ mentality persists in so many businesses.”

“What is surprising is how few businesses are taking their teams with them on the cloud journey. Firms won’t get far if their teams aren’t properly trained, even with the best of intentions.”

“Companies who fail to fully embrace the cloud will soon find themselves on the back foot and losing out to firms who are more agile, secure and cost effective due to cloud integration. In fact, firms who want an edge over competitors really need to be aiming for not just cloud nativity, but cloud excellence.”

Why blanket travel bans won’t work to stop omicron

MIT Top Stories - Fri, 12/03/2021 - 07:19

Countries are slamming their borders shut again. Since the omicron variant was discovered in southern Africa and reported to the World Health Organization last week, more than 50 countries have imposed border controls. They target mostly South Africa and Botswana, which reported the first cases, but also neighboring countries in the region. 

The aim was to stop omicron from spreading, but these bans are too little, too late. Omicron has now been detected in 24 countries, including the US, Israel, Australia, Saudi Arabia, Hong Kong, and many in Europe, including the UK. Crucially, some of these cases predate South Africa’s sounding the alarm—omicron was already in the Netherlands a week before, for example. Oliver Pybus, co-director of the Oxford Martin School Program for Pandemic Genomics, told The Guardian that the evidence suggests omicron has been circulating since late October.

The moral of the story? Blanket travel bans don’t work, says the WHO. 

“Blanket travel bans will not prevent the international spread, and they place a heavy burden on lives and livelihoods. In addition, they can adversely impact global health efforts during a pandemic by disincentivizing countries to report and share epidemiological and sequencing data,” the organization said in a statement on December 1.

Short-term bans can help to buy time if they are imposed very early, giving under-resourced countries a chance to put public health measures in place. But by the time the virus is circulating freely in multiple countries, they are invariably too late to make a difference. Last year the CDC admitted that the travel bans put in place by the Trump administration during the pandemic’s early stages came in far too late to be effective—the virus was already widespread in the US by that point.

A modeling study published in Nature in January 2021 looked at the effect of international travel bans on the pandemic and found that while they helped reduce incidence of covid spread in the early stages, they soon had little impact, with international travelers forming a very small proportion of a country’s new cases.

In fact, travel bans don’t solve the problem—they just postpone it, says Raghib Ali, an epidemiologist at the University of Cambridge, UK. Better testing is a far more effective measure.

“We need a balanced and proportional response. That means no travel bans, but testing and quarantine for people coming from countries where omicron is circulating,” says Ali.

The travel bans could have another negative knock-on effect: cutting South Africa off from the scientific supplies it needs to do the genomic surveillance that could elucidate the impact of omicron in real-world settings. Tulio de Oliveira, a bioinformatician at the University of KwaZulu-Natal in Durban, South Africa, told Nature: “By next week, if nothing changes, we will run out of sequencing reagents.”

The bigger fear is that the treatment of southern African countries will lead other countries to conclude that if you detect a new variant, it’s best to keep it to yourself. 

“They see others getting penalized for spotting a new variant, and that might put them off sharing the data we need. That’s not a theoretical possibility; it’s very real,” says Ali.

Omicron won’t be the last variant of concern. When the next one hits, we need countries to share what they know as soon as possible. Blanket travel bans put that openness in danger.

“Putting in place travel bans that target Africa attacks global solidarity,” said Matshidiso Moeti, WHO regional director for Africa, in a statement last week. 

2022 tech predictions: What lies ahead for the industry

IT Portal from UK - Fri, 12/03/2021 - 05:00

2021 was a year of evolution for many sectors of the tech industry, as the impact of the pandemic continued to unfold. Many tech leaders had to pivot their strategies by shifting to a fully remote or hybrid work environment, increasing reliance on technology, and investing in digital transformation strategies.  

The tech industry has seen major changes this year, and because of that, many experts are predicting new or evolved trends in 2022. Below, multiple tech experts highlight their top predictions within the technology industry for the new year. 

Rafael Sweary, President and Co-Founder, WalkMe: 

“In 2022, we will see organizations analyzing and measuring those investments they made to their technology stack. What’s working? What’s not? It’s time to reap the benefits. It’s no longer going to be ok to have shelfware. For years, organizations have been running analytics on their websites but not doing the same for their other tech platforms, software, and apps. The shift to the need, and mandate, for analytics on all tech investments is coming, if not already here. No one can justify spending millions without proving its worth through strong analytics.”

James Winebrenner, CEO, Elisity:  

“The impact of the Covid-19 pandemic has accelerated the need to secure a permanently distributed and hybrid workforce. As we look to the future, organizations are realizing that while trendy approaches like SASE are useful for protecting remote workplaces, they cannot account for east-west traffic across managed or unmanaged devices. The goal now is to integrate multiple sources of identity across systems, legacy and new, and to assimilate them into a unified policy whether on-prem or in the cloud. In 2022 and beyond, we will see more and more enterprises move beyond cloud-only SASE models and point products to holistic solutions that protect data, apps and assets wherever they are being used.” 

Sven Müller, IP Product Director, Colt Technology Services:

“Future SD-WAN solutions will offer incredible scale, all of which will be done in near real-time and deliver on-demand connectivity. Think about adding a branch or multiple access circuits to one branch site and managing how this is load-balanced in split seconds. And, when it comes to enterprise functionality, SD-WAN solutions will need to sharpen themselves up in the ‘looks’ department and become visually easier to navigate, with dashboards being physically easier to use.  Added intelligence will include layering in tools like AI and machine learning to help get a better handle on the expanding intelligent edge. It won't be long before these networks become smarter, applications start fulfilling their own SLAs, and the networks have better control on understanding where packet loss emanates and how it can be curbed dynamically.  

Sure, the future of SD-WAN is not black and white, but it is inevitable. Agility is critical, so is on-demand and fluid bandwidth, cloud-native is key, and security is the next steppingstone in its evolution while SD-WAN solutions build more agile networks.”

Simon Taylor, CEO and Co-founder, HYCU: 

“The entire Covid and pandemic experience has become in some ways a giant social experiment. We were able to brute force test many things. For example, how flexible work really works, how efficient is it for businesses to be in a flexible working model. Because of this, I don’t think any of this is going away anytime soon. I see the future of work as supporting an increasingly flexible workforce inside of an increasingly flexible workspace. What I mean by that is that I think people will now feel almost every employer will have a heightened sense of comfort. Comfort with hiring people in relatively remote areas. At the same time, I think there is going to be a working expectation that regular office visits will need to take place, again for that important face-to-face time and collaboration time.”

Patrick Harr, CEO, SlashNext: 

“Multi-channel spear phishing attacks will continue to be the number one cybersecurity challenge that organizations face in 2022. Such attacks increased by 51 percent this year over 2020 – an already record-breaking year – and we’ll see a similar jump next year. Since 95 percent of all cyber breaches start with spear phishing, we’ll experience more data theft, ransomware attacks, and financial fraud in 2022. Any one of those attacks has the potential to be as chaotic and disruptive as attacks like the Colonial Pipeline one was this year. There are several reasons these attacks are so dangerous. Attacks are arriving on all digital channels – including SMS/text, Slack, LinkedIn, Zoom, and much more -- and from legitimate infrastructure – like AWS, Azure, outlook.com, Google workspaces, and more.  

Moving completely to the cloud and using apps and browsers to increase productivity, combined with the new reality of a hybrid remote/office working environment, means cybercriminals are targeting the most vulnerable and least protected parts of organizations – humans using apps and browsers. The same bad actors have become very sophisticated with access to easy-to-obtain and affordable automation technology. That enables them to deliver targeted spear-phishing attacks on a massive scale, through unprotected channels, and to move faster than many traditional phishing detection services. Protecting users from multi-channel phishing and human hacking will be an important trend in 2022, as phishing continues to move beyond email to include collaboration tools such as SMS/text, Slack, LinkedIn, Zoom and Microsoft Teams.” 

Yinglian Xie, CEO and Co-founder, Datavisor:

“Like other areas of cybercrime, we’re seeing increasingly sophisticated fraud operations. As fraud prevention technologies evolve, so do the fraudsters’, and the cycle will inevitably continue. The financial gain is immense and the dollar loss from the industry as a whole is rising rapidly, with no signs of slowing. Fraud operations are incredibly organized and efficient, and the digitization of everything is only making things easier. Massive banks of stolen information are readily available and allow precise targeting of systems around the world.  

With a deep understanding of the latest defense mechanisms, fraudsters are leveraging real user behavior to hijack sessions and exploit weak points like first-time logins. Using GPS simulation and device emulators, fraudsters can appear to be from any device anywhere in the world or hundreds simultaneously. Moving forward, fraudsters will continue innovating in response to advanced detection systems, and vice versa.”

Rob Deal, Senior Vice President, Healthcare, Everise:

“Healthcare providers will wrestle with how to maintain patient loyalty and will need to meet the expectations for their patient experiences to be successful. People have become accustomed to the seamless and easy experience that organizations like Doordash and Amazon provide – and expect it to be that easy to interact with any organization. Today, it can be easier to book a dog’s grooming than their own doctor’s appointments – and consumers will ultimately turn to easier interactions. Providers who make even incremental patient experience improvements in 2022 – say, improving timely responses to most asked questions, or launching friendly and seamless follow-ups with patients for wellness checks after procedures and checking medication adherence – will quickly stand out as the ‘easy to work with’ choice in healthcare providers.” 

Steve Schmidt, General Partner, Telstra Ventures: 

“Strategic vectors of faster software development, more collaboration, and lower costs will be increasingly intertwined with a level of marketing and customer personalization/customer experiences like never seen before – largely through the use of Customer Data Infrastructure platforms.  2022 will be a great opportunity for companies to make game-changing technology advances.  Tremendous value is being created by a thriving ecosystem of fast-growing start-ups - perhaps this is why there’s now over 800 unicorns – many of which will be decacorns before too long.” 

Rahul Pradhan, Head of Product and Engineering, Cloud Databases, Couchbase: 

“Enterprises will leverage edge computing and multi-cloud, in conjunction with emerging networking technology that brings the cloud closer to the end user, in order to make services faster and easier to access. As microservices adoption grows, more enterprises will need to consider adopting observability platforms that can help development teams identify and resolve root causes of application performance issues.”

Alexis Richardson, Co-Founder and CEO, Weaveworks: 

“Enterprises have missed the 'app store moment' largely because each organization has been running their own infrastructure for the past decades. While the adoption of Containers is providing the next level of abstraction to encapsulate applications and brings us closer to the app store ideal, it's really Kubernetes that adds the security and management that will finally turn enterprise software into an asset. Companies want to use the same core platform everywhere so they can focus on the same skill set, the same tools, the same way of thinking, but not the same data centers. In 2022, we’ll see GitOps increasingly deemed essential for product management-oriented DevOps teams to deliver standardization for enterprises’ core platforms so that people, their most important asset, can truly focus on delivering applications.”

Brad LaPorte, former Gartner analyst and Ordr advisor: 

“Ransomware attacks will continue to increase. The impacts of double extortion and crimeware-as-a-service will continue to plague businesses worldwide. The number of victims will triple - up from 20 percent to 50 percent. The number of companies that pay a ransom to recover their data will increase from 10 percent to 30 percent. Cybercriminals will achieve this through more aggressive tactics including destroying data, leaking sensitive information, targeting high value targets, and disrupting business operations to force enterprises to pay.

Third party and supply chain attacks will continue to increase. 2022 will be the Year of the Supply Chain Attack. Already up 430 percent since 2019, the growth of these types of attacks will increase exponentially to be the #1 global attack vector. As more enterprises are adopting more mature cybersecurity practices, criminals are going upstream to weaker targets that can maximize their blast radius that allows them to have an impactful one-to-many attack ratio. Historically, attacks have been spray and pray and will now become more surgical as supply chain attacks become weapons of mass destruction.”

Paul Stringfellow, analyst, GigaOm: 

“The next steps in enterprise security management – the move to cloud has opened up many opportunities for the enterprise to improve their security and deliver protections that have been traditionally difficult. This will see an increase of adoption of ever more robust and crucial capabilities such as just in time admin access, rights management (so the security becomes embedded in the information), full adoption of data loss prevention technologies across the enterprise. Data security remains critical to the enterprise, the tools are there and more will begin to adopt them.”

IT Experts

The culture of contracting: do tech contracts kill creativity?

IT Portal from UK - Fri, 12/03/2021 - 04:30

If Covid has taught us anything, it’s how fast businesses can adapt when they need to. Organizations have had to accelerate their digital transformation programs as they pivot their businesses to survive in this new world. 

Our priorities have shifted. We know now that teams can work from home without reducing productivity. Great customer service can actually be delivered from a back bedroom or kitchen table. Children don’t have to sit exams in halls to achieve their GCSEs.

All of this, of course, relies on technology that organizations can deploy as fast as the world is changing. Gone are the days of siloed technology delivery, starting with long consulting periods, moving through development to production, and ending up with testing - all delivered by different teams, or even different contractors, duplicating efforts in some cases, and always slowing down the process.

We know that agile development is the way to go to remain truly competitive, but to become an agile organization, businesses (and the providers that support them) need to create contracts that support, rather than hinder, this new agile way of working.

Agile development has replaced ‘waterfall’ in almost all instances

Agile development, or building software iteratively, has been replacing traditional ‘waterfall’ methods of development for many years, but the pedal hit the floor during the pandemic. No one has the time or desire to spend months or even years rolling out huge, lumbering one-off projects; changes need to happen now. 

In an agile framework, change happens both incrementally and quickly, in continuous cycles, so the benefits of new technology (to revenues, growth, customer experience and interaction – and ultimately to the business) are felt immediately.

Change begets change: insights from each development release are gathered in real-time, fed back into the design process and inform the direction of the next release. Development happens in ‘sprints’ - small components of delivery that build on each other to constantly improve results. This is a way of developing and learning from technology that moves as fast as the changing behaviours of consumers, and the changing needs of the business. It is a continuous evolution. 

Team structures are adapting to service the new agile model

Developing at this kind of pace requires new team structures, too. In an agile framework, one ‘scrum’ team takes accountability for the delivery of a project from beginning to end, combining different skill sets and roles, and collaborating across traditional divisions and departments (or even across organizations). Siloes are broken down to foster the kind of innovation that only comes from a team of people with different and diverse experiences and viewpoints. This is a way of working that is a true collaborative partnership, where everyone takes responsibility and plays their part in the process, all with a common purpose, aiming for the same ultimate business goal.

And yet, the way organizations structure their contracts to deliver transformational technology kills any real opportunity for creativity or innovation. Traditionally, contracts focus on the process rather than the end goal. They are often based on penalties rather than incentives. They focus on how a product should be developed, rather than why. They fail to look at the outcome the business wants from the technology, documenting instead the process of delivery. 

That leaves little room for true agility, to adapt to changes along the way, and continuously improve and evolve. A traditional contract will drill down into the component parts of the technology to be delivered, rather than allowing the flexibility to think creatively about how to deliver the business goal. 

A bad contract can kill creativity

Contracts generally don’t leave room for creative thinking and agility. They deal in units, prices, delivery dates, and penalty clauses - all tangible things that can be measured at set times. Instead, they should focus on the ultimate goal - the impact of the solution on the organization. Value to the business in an agile framework will be delivered incrementally, at pace, and can be measured in hard terms: the difference the product is making to the organization; improvements to customer experience; increased interaction levels; improved revenues; take-up of new features. 

Redefining ‘done’ 

Achieving these kinds of results is, to me, the difference between a project that is ‘done’, and one that is ‘done, done’. Traditional contracts deal in ‘done’. ‘Done’ says: “I’ve delivered this piece of software that you asked me for. It looks exactly like the contract said it would a year ago. It followed the precise process laid out on paper. It might not do exactly what you need it to do today, but here it is.”

‘Done, done’ is more dynamic. It says: “The latest incremental changes to your technology are already showing value to your business. The product is meeting all your goals of interaction, experience and revenue. And what’s more, I’ve got some great ideas for the next release, based on what we’re seeing in user behaviour, so let’s work together to agree KPIs for that, too.”

Because this approach is agile rather than prescribed, it depends on trust between the business and the delivery team, and that trust requires a different mindset and way of working: a true partnership between the provider and the organization it serves. Sometimes this jars with a company’s culture, if that company is used to more traditional ways of working. Trust is not something that procurement and legal teams can measure easily, and it’s hard to establish in law.

Trust and contracts aren’t natural bedfellows, but they can and should be. Establishing trust is critical to successful agile development projects, which are built on collaboration, open communication, cultural fit, and a strong working partnership.

So, it’s time to rethink how we structure contracts for agile delivery, and create contracts that are dynamic, focused on the big picture goals, and flexible enough to incentivise creativity and innovation.

It is only then that organizations will really reap the full benefits of the agile development methods that can deliver so much value to their business.

Edward Batrouni, Director and Cofounder, Zenitech

From risk to reward: why data is the best means of defense

IT Portal from UK - Fri, 12/03/2021 - 04:00

Today, data informs the work of almost every department in the enterprise. Marketers use metrics to track the success of their campaigns. Sales teams monitor performance so it can be compared against targets. Even the catering department will monitor the food that is going uneaten in order to make better decisions about what to cook for lunch. 

Similarly, in the cybersecurity world, CISOs and security teams have access to many metrics which show everything from the number of intrusion attempts to the time it takes to patch vulnerabilities after they have been discovered. But is this enough? To find out, we spoke to Cherif Sleiman, CRO at Safe Security. 

How can organizations use data better?

Organizations today deploy an average of 45 cybersecurity products, and each of these products provides various signals and data points that security teams have to manage. However, this data exists in silos and lacks correlation, leaving security teams to analyse, prioritize and make subjective decisions when it comes to risk management. For example, a firewall tells you only about network security, antivirus products tell you only about endpoint security, and a SOC alerts you to a cyber incident only after it has occurred.

Organizations need a single, unified metric that is objective and easy to understand, and that dynamically correlates signals across people, process, and technology for both first and third parties to provide one score that matters. This score will represent the present enterprise-wide cyber risk posture and the related financial impact in case a breach occurs.

Such a metric is actionable, and it gives the board and other senior business leaders the confidence to make data-backed, informed decisions on cybersecurity.
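
To make the idea of “one score that matters” concrete, here is a minimal sketch in Python of how weighted signals across people, process and technology might be rolled up into a single breach likelihood and an expected financial impact. The signal names, weights and breach-cost figure are illustrative assumptions, not Safe Security’s model or any vendor’s API.

from dataclasses import dataclass


@dataclass
class RiskSignal:
    """A normalized risk signal (0.0 = no risk observed, 1.0 = maximum risk)."""
    source: str    # e.g. "network", "endpoint", "people", "third_party"
    value: float
    weight: float  # relative importance assumed for this sketch


def breach_likelihood(signals: list[RiskSignal]) -> float:
    """Correlate weighted signals into a single likelihood score between 0 and 1."""
    total_weight = sum(s.weight for s in signals)
    return sum(s.value * s.weight for s in signals) / total_weight


def expected_financial_impact(likelihood: float, estimated_breach_cost: float) -> float:
    """Expected loss: breach likelihood multiplied by the estimated cost of a breach."""
    return likelihood * estimated_breach_cost


signals = [
    RiskSignal("network", 0.35, weight=0.25),      # e.g. firewall and NAC telemetry
    RiskSignal("endpoint", 0.60, weight=0.25),     # e.g. antivirus/EDR findings
    RiskSignal("people", 0.70, weight=0.30),       # e.g. password reuse, phishing test results
    RiskSignal("third_party", 0.45, weight=0.20),  # e.g. supplier assessments
]

score = breach_likelihood(signals)
print(f"Enterprise breach likelihood: {score:.2f}")
print(f"Expected financial impact:    ${expected_financial_impact(score, 4_000_000):,.0f}")

The value of the normalization step is that leadership sees one number and one currency figure, rather than 45 products’ worth of unrelated dashboards.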

Why should organizations use cyber risk scoring? 

Traditional risk management practices are point-in-time and often only produce a sense of security. Cyber attacks are continually on the rise in frequency, sophistication, and expense; it’s not a matter of if, but when, a cyber-attack will impact your company. In such a scenario, depending on quarterly audits, or cybersecurity products alone is no longer enough. Cyber risk scoring provides the much needed real-time visibility into an organization’s risk posture both at a macro and at a micro asset level. Furthermore, cyber risk scoring simplifies understanding of cyber risk, helping security & risk management leaders to communicate better with the board and senior leaders within the organization. 

It enables an organization to accept, mitigate or transfer risk (for example, through cyber insurance) more effectively, and to build a proactive strategy to measure, manage and mitigate cyber risks.

What data should be used to build a cybersecurity risk score?

Today, security teams are already sifting through huge amounts of data to make subjective decisions about the organization’s risk posture. Organizations need to adopt Cyber Risk Quantification platforms that correlate that data into a single, objective breach likelihood score which is actionable and enables security teams to take decisions backed by data. 

Let’s take an example of the data that needs to be considered to understand the breach likelihood score of every asset within the organization (a simplified sketch follows the list):

  • Asset information such as geography, industry, industry size, and asset vertical
  • Organization-level policy controls such as password management policy, media handling policy, and logging and monitoring policy, amongst others
  • Organization-level cybersecurity product controls such as network access controls, antivirus, DLP, etc.
  • Asset-level configuration controls such as system security administration, vulnerability management, data security, etc.
  • Asset-level vulnerabilities and malware
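
As a rough illustration of how those asset-level inputs could be structured, the sketch below models a single asset using the categories listed above and derives a naive likelihood score from missing controls, open critical vulnerabilities and malware findings. The field names and scoring rule are assumptions made for illustration, not a description of any particular platform.

from dataclasses import dataclass, field


@dataclass
class Asset:
    name: str
    geography: str
    industry: str
    policy_controls: dict[str, bool] = field(default_factory=dict)   # e.g. "password_management": True
    product_controls: dict[str, bool] = field(default_factory=dict)  # e.g. "antivirus": True, "dlp": False
    config_controls: dict[str, bool] = field(default_factory=dict)   # e.g. "vulnerability_management": True
    critical_vulnerabilities: int = 0
    malware_detected: bool = False


def asset_breach_likelihood(asset: Asset) -> float:
    """Naive scoring rule: start low, add risk for missing controls and known issues."""
    score = 0.1
    for controls in (asset.policy_controls, asset.product_controls, asset.config_controls):
        missing = sum(1 for enabled in controls.values() if not enabled)
        score += 0.05 * missing
    score += 0.1 * asset.critical_vulnerabilities
    if asset.malware_detected:
        score += 0.2
    return min(score, 1.0)


server = Asset(
    name="payments-db",
    geography="UK",
    industry="retail",
    policy_controls={"password_management": True, "logging_and_monitoring": False},
    product_controls={"antivirus": True, "dlp": False},
    config_controls={"vulnerability_management": True, "data_security": True},
    critical_vulnerabilities=2,
)
print(f"{server.name}: breach likelihood is roughly {asset_breach_likelihood(server):.2f}")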

With a cybersecurity risk score backed by such data, security & risk management leaders can make confident decisions about their organization’s risk management strategy. 

How can data help organizations to develop a proactive defense?

It is not a matter of if you will be breached, but when. By adopting Cyber Risk Quantification platforms, organizations can accurately quantify their present risk posture and take corrective measures to mitigate their biggest risks, bringing the likelihood of a breach down. For example, Cyber Risk Quantification platforms such as SAFE continuously track which techniques are being leveraged by at least one APT group to proactively detect the threats of the future, and give customers the opportunity to fix issues before they become more severe.

Having access to the right information at the right time allows companies to develop the correct strategies – which sounds easy, but has become difficult for larger organizations. Today, the average Fortune 200 CISO uses 12 dashboards to monitor their environment. That is a lot of information to monitor, particularly when point products do not communicate with each other efficiently.  

Too many businesses lack a dynamic, real-time view of their organization’s security that would allow them to identify the problems needing immediate attention and build a proactive defense.

How should organizations score their cybersecurity posture?

There are two important ways of assessing an organization’s security stance. Firstly, companies should carry out internal, intrusive testing. This should start by scanning devices on a network to provide a connection overview. When vulnerabilities are found, they should be prioritized according to their severity and given a security score so that the most serious problems are dealt with first. Then comes the job of fixing these issues, followed by tests to assess if the patches were successful. This process should be repeated as often as possible to provide a continuous assessment of their cybersecurity posture.
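
That internal loop can be sketched as a simple scan, prioritize, remediate and re-test cycle. In the illustrative Python below, the scanner and remediation steps are placeholders (real tooling would sit behind them), and the severity scale and function names are assumptions rather than a specific product’s interface.

from dataclasses import dataclass


@dataclass
class Finding:
    asset: str
    vulnerability: str
    severity: float  # a CVSS-style score from 0.0 (informational) to 10.0 (critical)
    patched: bool = False


def scan_network(assets: list[str]) -> list[Finding]:
    """Placeholder for a real device scan; a real pipeline would call a scanner here."""
    return [
        Finding("web-01", "outdated TLS configuration", severity=5.3),
        Finding("db-02", "unpatched remote code execution flaw", severity=9.8),
        Finding("hr-laptop-17", "missing endpoint agent", severity=6.5),
    ]


def remediate(finding: Finding) -> None:
    """Placeholder for applying a patch or configuration fix."""
    finding.patched = True


def assessment_cycle(assets: list[str]) -> list[Finding]:
    findings = scan_network(assets)
    # Deal with the most severe problems first.
    for finding in sorted(findings, key=lambda f: f.severity, reverse=True):
        remediate(finding)
    # Re-test: anything still unpatched rolls into the next cycle.
    return [f for f in findings if not f.patched]


outstanding = assessment_cycle(["web-01", "db-02", "hr-laptop-17"])
print(f"Unresolved findings after this cycle: {len(outstanding)}")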

The second type of test is external, non-intrusive testing. This method uses risk vectors that can be measured externally and then correlated against actual security incidents. This could involve monitoring the dark web for leaks of credentials or other sensitive information, as well as other signs that suggest malicious activity is happening inside an organization’s network. Assessing third-party suppliers is also a crucial part of external testing. If a company somewhere along an organization’s supply chain has been compromised, the organization should brace for attack and make preparations immediately.

Cherif Sleiman, CRO, Safe Security

The US crackdown on Chinese economic espionage is a mess. We have the data to show it.

MIT Top Stories - Thu, 12/02/2021 - 07:35

A visiting researcher at UCLA accused of hiding his connection to China’s People’s Liberation Army. A hacker indicted for breaking into video game company servers in his spare time. A Harvard professor accused of lying to investigators about funding from China. And a man sentenced for organizing a turtle-smuggling ring between New York and Hong Kong. 

For years, the US Department of Justice has used these cases to highlight the success of its China Initiative, an effort to counter rising concerns about Chinese economic espionage and threats to US national security. Started in 2018, the initiative was a centerpiece of the Trump administration’s hardening stance against China.

Now, an investigation by MIT Technology Review shows that the China Initiative has strayed far from its initial mission. Instead of focusing on economic espionage and national security, the initiative now appears to be an umbrella term for cases with almost any connection to China, whether they involve state-sponsored hackers, smugglers, or, increasingly, academics accused of failing to disclose all ties to China on grant-related forms. To date, only about a quarter of defendants charged under the initiative have been convicted, and about half of those defendants with open charges have yet to see the inside of an American courtroom. 

Although the program has become a top priority of US law enforcement and domestic counterintelligence efforts—and an unusual one, as the first country-specific initiative—many details have remained murky. The DOJ has not publicly defined the initiative or answered many basic questions about it, making it difficult to understand, let alone assess or exercise oversight of it, according to many civil rights advocates, lawmakers, and scholars. While the threat of Chinese intellectual property theft is real, critics wonder if the China Initiative is the right way to counteract it.

Today, after months of research and investigation, MIT Technology Review is publishing a searchable database of 77 cases and more than 150 defendants. While likely incomplete, the database represents the most comprehensive accounting of the China Initiative prosecutions to date.

Our reporting and analysis showed that the climate of fear created by the prosecutions has already pushed some talented scientists to leave the United States and made it more difficult for others to enter or stay, endangering America’s ability to attract new talent in science and technology from China and around the world.

Here’s what we found:

  • The DOJ has neither officially defined the China Initiative nor explained what leads it to label a case as part of the initiative.
  • The initiative’s focus increasingly has moved away from economic espionage and hacking cases to “research integrity” issues, such as failures to fully disclose foreign affiliations on forms.
  • A significant number of research integrity cases have been dropped or dismissed. 
  • Only about a quarter of people and institutions charged under the China Initiative have been convicted.
  • Many cases have little or no obvious connection to national security or the theft of trade secrets.
  • Nearly 90% of the defendants charged under the initiative are of Chinese heritage. 
  • Although new activity appears to have slowed since Donald Trump lost the 2020 US presidential election, prosecutions and new cases continue under the Biden administration.
  • The Department of Justice does not list all cases believed to be part of the China Initiative on its webpage and has deleted others linked to the project.

Lawmakers say our findings are “startling.”

The Justice Department is “intentionally obtuse with us and will not address specific cases,” said Representative Judy Chu, a Democrat from California. “Whenever we ask for data, they usually don’t give it back to us. What you have are numbers, and it is startling to see what [they] are.” 

Two days after MIT Technology Review requested comment from the DOJ regarding the initiative, the department made significant changes to its own list of cases, adding some and deleting 39 defendants previously connected to the China Initiative from its website. This included several instances where the government had announced prosecutions with great fanfare, only for the cases to fail—including one that was dismissed by a judge after a mistrial.

Our findings highlight “the disconnect between the theory behind the China Initiative and the prosecutions that are brought in practice,” said Ashley Gorski, a staff attorney with the American Civil Liberties Union’s National Security Project.


They also demonstrate the “disproportionate impact on Asian Americans and the immigrant community,” said Gisela Kusakawa, a staff attorney at Asian Americans Advancing Justice | AJC, an advocacy group. “Essentially, national security issues are being used as a pretext to target our community.

“This is resulting in a brain drain from and distrust towards the United States, which is counter-productive to national security.”

What our data show

Our database of China Initiative cases draws primarily on the press releases that have been added to the DOJ’s China Initiative webpage over the last three years, including those recently removed from its public pages. We supplemented this information with court records and interviews with defense attorneys, defendants’ family members, collaborating researchers, former US prosecutors, civil rights advocates, lawmakers, and outside scholars who have studied the initiative.

It is also worth noting our disclosures, including cases involving MIT, which owns this publication, and the personal experiences of our reporters with government investigations. A full report on our methodology, which includes a detailed transparency statement, is available here.

Here’s what we’ve learned from our analysis:

The China Initiative has no official definition

Though the China Initiative is considered one of the DOJ’s flagship efforts, the department has never actually defined what constitutes a China Initiative case. Wyn Hornbuckle, the deputy director of the DOJ public affairs office, said it had “no definition of a ‘China Initiative’ case other than the goals and priorities we set out for the initiative in 2018.”

A former senior DOJ official, who we are not naming so as to share their full perspective, said the China Initiative was an attempt to tell law enforcement that “these are the types of crimes we’re seeing run rampant” and that “these are important crimes to investigate, these are worthy of your time and resources.” 

Former US Attorney for the District of Massachusetts Andrew Lelling, a founding member of the initiative’s steering committee, said his interpretation was that “all cases involving researchers got in,” and that, “if the tech was going to China, I’m certain they would categorize that as in the China Initiative.”

There’s a decreasing focus on economic espionage

The China Initiative claims to be centered on countering economic espionage, yet our database finds that only 19 of the 77 cases (25%) include charges of violating the Economic Espionage Act (EEA). The EEA covers both theft of trade secrets, which can benefit any outside entity that does not own the intellectual property, and economic espionage, which carries the additional burden of proving that the theft is ultimately for the benefit of a foreign government.

Eight of the 19 China Initiative cases specifically charged economic espionage, while the remaining 11 alleged only theft of trade secrets.

The number of charges filed under the EEA has remained steady each year, but the increasing focus on other areas means that the proportion of economic espionage charges has decreased over time: in 2018, 33% of newly announced cases (four out of 12) included violations of the EEA. By 2020, only 16% of new cases (five out of 31) included EEA violations.

[Chart: Research integrity cases grew to dominate China Initiative]

In addition, some of the project’s stated goals have never been met. When announcing the initiative in 2018, then-Attorney General Jeff Sessions said it would also focus on countering covert efforts to influence US leaders. But there has been just one case of an attempt to influence American lawmakers on behalf of the People’s Republic of China—that of Elliott Broidy, a former finance chairman of the Republican National Committee. He pleaded guilty to acting as an unregistered agent of a foreign government in October 2020. President Donald Trump pardoned Broidy three months later, on his last day in office—the only China Initiative defendant who has been pardoned to date. 

There's an increasing focus on "research integrity"

While the proportion of EEA cases has decreased, 23 of the 77 cases (30%) have involved questions of “research integrity.” Most of these involve prosecutors accusing academics of failing to fully disclose all Chinese affiliations and sources of income in various forms—although whether these were deliberate attempts to hide Chinese ties or the result of unclear rules has been heavily contested by defense attorneys and outside critics. 

Our analysis shows a significant shift in focus toward academics beginning in 2019 and continuing through 2020. In 2018, none of the cases were about research integrity. By 2020, 16 of the 31 (52%) of newly announced cases were. (One research integrity case in 2020 also included a charge of violating the EEA.)

At least 14 of these research integrity cases began due to suspicions arising from links to “talent programs,” in which Chinese universities provide financial incentives for academics to conduct research, teach, or bring other activities back to the sponsoring institution, on a part- or full-time basis. (At least four cases of trade secret theft also involve alleged talent program participation.) 

Federal officials have repeatedly said that participation in talent programs is not illegal—though they have also called them “brain gain programs,” in the words of Bill Priestap, former FBI assistant director of counterintelligence, that “encourage theft of intellectual property from US institutions.”

[Chart: Cases charged under the China Initiative by year]

National security links are sometimes weak

The initiative’s increasing focus on research integrity has included several cases of academics working on topics such as artificial intelligence or robotics, which may have national security applications. But most of the work in these areas is basic research, and many disciplines in which cases have been brought have no clear links to national security. 

Nine of 23 research integrity cases involve health and medical researchers, including people studying heart disease, rheumatoid arthritis, and cancer; six of those centered on researchers funded by NIH—a reflection of the institute’s aggressive stance on countering “inappropriate influence by foreign governments over federally funded research,” said a representative of the NIH Office of Extramural Research. NIH’s efforts predate the China Initiative, and the representative referred questions on the initiative to the Justice Department.

[Chart: Funding agencies allegedly defrauded in research integrity cases]

Instead, the national security implications seem to center around concerns that any individuals with links to China could serve as “non-traditional collectors,” which the China Initiative fact sheet describes as “researchers in labs, universities, and the defense industrial base that are being coopted into transferring technology contrary to US interests.” But as our database shows, only two of 22 researchers were ever accused of trying to improperly access information or smuggle goods into China. The charges were later dropped. 

China Initiative cases aren’t as successful as the DOJ claims

Three years after the program’s start, less than a third of China Initiative defendants have been convicted. Of the 148 individuals charged, only 40 have pleaded or been found guilty, with guilty pleas often involving lesser charges than originally brought. Almost two-thirds of cases—64%—are still pending. And of the 95 individuals still facing charges, 71 are not being actively prosecuted because the defendant is in an unknown location or cannot be extradited.

In particular, many of the cases concerned with research integrity have fallen apart. While eight are still pending, seven cases against academics have ended in dismissal or acquittal while six have ended in a guilty plea or conviction. That’s a sharp contrast to the usual outcomes of federal criminal cases, where the vast majority end in a guilty plea, according to a Pew Research Center analysis of federal statistics.

[Chart: Outcomes for defendants charged under the China Initiative]

Nearly 90% of all cases are against people of Chinese origin

One of the earliest and most persistent criticisms of the China Initiative was that it might lead to an increase in racial profiling against individuals of Chinese descent, Asian Americans, and Asian immigrants. DOJ officials have repeatedly denied that the China Initiative engages in racial profiling, but individuals of Chinese heritage, including American citizens, have been disproportionately affected by the initiative. 

Our analysis shows that of the 148 individuals charged under the China Initiative, 130—or 88%—are of Chinese heritage. This includes American citizens who are ethnically Chinese and citizens of the People’s Republic of China as well as citizens and others with connections to Taiwan, Hong Kong, and long-standing Chinese diaspora communities in Southeast Asia.

[Chart: Defendants of Chinese heritage]

These numbers are “really high,” said Margaret Lewis, a law professor at Seton Hall University who has written extensively about the China Initiative. “We knew that it’d be a majority,” she added, but this “just underscores that the ‘but we’re prosecuting other people too’ argument…is not convincing.”

New cases are still being brought under the Biden administration

The initiative was launched under the Trump administration, and while the number of cases explicitly linked to the China Initiative has fallen since President Joe Biden took office, they have not stopped.

For example, Mingqing Xiao, a mathematics professor in Illinois, was charged in April 2021 with failing to disclose ties to a Chinese university on his application for a National Science Foundation grant. And an indictment against four Chinese nationals for hacking dozens of companies and research institutions was unveiled in July.  

Meanwhile, federal attorneys have continued to push prosecutions forward. The trial of Charles Lieber, a Harvard chemistry professor accused of hiding his ties to Chinese universities, is scheduled to begin in mid-December. Prosecutors are planning to go to trial in cases against high-profile academics in Kansas, Arkansas, and elsewhere in the first few months of 2022.  

[Chart: New China Initiative cases brought in 2021]

How it began

Concerns about Chinese economic espionage targeted at the US have been growing for years, with estimates of the cost to the American economy ranging from $20 billion to $30 billion to as high as $600 billion. Enforcement began rising dramatically under the Obama administration: in 2013, when the administration announced a new strategy to mitigate the theft of US trade secrets, China was mentioned more than 100 times. 

In 2014, the Justice Department filed cyberespionage charges against five hackers affiliated with the Chinese People’s Liberation Army—the first time state actors had been prosecuted by the US for hacking. Then in 2015, the United States and China signed a historic agreement committing not to conduct commercial cybertheft against each other’s businesses. 

But it was not until 2018, as part of the Trump administration’s far more confrontational approach to China, that the department formally launched its first country-specific program.

The effort was “data-driven,” according to the former Justice Department official, and “born out of the intelligence briefings to the attorney general and senior DOJ leaders from the FBI that, day after day, showed that the PRC and affiliated actors across the board [were] deeply involved in hacking, economic espionage, trade secret theft, subverting our export controls, and engaging in nontraditional collection methods.” He said this included Chinese consulates helping to “mask the actual backgrounds of Chinese visa applicants to avoid visa rejection based on their affiliations with the PRC military.”

Trump, however, had campaigned partly on anti-Chinese and anti-Communist rhetoric—infamously saying at one rally in 2016, “We can’t continue to allow China to rape our country, and that’s what they’re doing.”

In the months before the initiative launched, Trump reportedly told a group of corporate executives at a closed-door dinner at his Mar-a-Lago estate that “almost every [Chinese] student that comes over to this country is a spy.” 

This was the backdrop when Sessions announced the launch of the China Initiative on November 1, 2018. 

“We are here today to say: enough is enough,” the attorney general told reporters, before announcing the unsealing of an indictment in a dramatic, years-long saga of high-tech trade theft: three Taiwanese individuals charged with stealing trade secrets from an Idaho-based semiconductor company, Micron, for the ultimate benefit of a Chinese state-owned enterprise.

The three worked for the Taiwanese chipmaker UMC, which had made a deal with a Chinese counterpart to jointly develop memory chips using a type of semiconductor technology known as dynamic random-access memory. UMC, which said it wasn’t aware of its employees’ actions, pleaded guilty to theft of trade secrets in October 2020 and agreed to pay a $60 million fine. The case against the three individuals has not yet been resolved.

The Micron case was meant to signal the types of trade theft the new initiative would focus on, but our data show that it was far from the norm. 

Chilling effects

Only one research integrity case linked to the China Initiative has gone to trial, and it ended in a high-profile acquittal. Anming Hu, a professor of nanotechnology at the University of Tennessee-Knoxville, originally was accused of defrauding NASA by failing to disclose all of his overseas affiliations and was ultimately charged with six counts of wire fraud and false statements. After a mistrial, a judge threw out the government’s attempt to retry Hu and acquitted him of all charges. 

“Without intent to harm, there is no ‘scheme to defraud,’” the judge wrote in his decision, noting that NASA also received the research that it paid for. (NASA declined to comment for this story.) Hu’s case was one of those removed from the China Initiative webpage after MIT Technology Review reached out with questions. 

Other cases have been dismissed more quietly. In the space of one week in July 2021, shortly after the collapse of Hu’s trial, the government dismissed five cases against Chinese researchers accused of lying about their military affiliations on visa applications. The government did not explain in court filings why it dropped the cases, but the dismissals came after doubts arose about whether the forms’ questions about military service clearly covered the defendants, who were civilians working at military universities. 

On November 19, those cases were also removed from the China Initiative webpage, after MIT Technology Review submitted a list of questions to the Justice Department. Last year, the government had spotlighted those same cases in a statement marking the initiative’s two-year anniversary.

The effect of all these cases on Chinese, Chinese American, and scientific communities has been profound. 

A member survey of more than 3,200 physicists carried out in September by the American Physical Society found that more than 43% of foreign early-career researchers now consider the United States to be unwelcoming for international students and scholars. Less than 25% believe that the US federal government does a good job of balancing national security concerns with the research requirements for open science. 

Another survey of nearly 2,000 scientists at 83 research institutions carried out by Arizona State University found that 51% of scientists of Chinese descent, including US citizens and noncitizens, feel considerable fear, anxiety, or both, about being surveilled by the US government. This compares to just 12% of non-Chinese scientists.

Some respondents in the Arizona State University study indicated that this climate of fear has affected how—or what—they choose to research. One said they were limiting their work to publicly available data rather than collecting their own original data; one indicated that they would no longer host visitors from China; another said they would focus on what they called “safer” topics rather than “cutting edge” research.

The effects of the initiative stretch even further. No one knows the exact number of scientists who have returned to China as a result of investigations or charges, but in late 2020, John Demers, then the assistant attorney general for national security, said that “more than 1,000 PLA-affiliated Chinese researchers left the country.” An additional group of 1,000 Chinese students and researchers had their visas revoked that September due to security concerns. How their security risks or affiliations with the People’s Liberation Army of China were determined, however, has not been explained. 

Randy Katz, a computer science professor at UC Berkeley who served as the university’s vice chancellor for research until earlier this year, says the initiative will have a grave impact on US innovation.

“I am most concerned about how the initiative will deny the USA access to the world’s best science and technology talent,” he said in an email. “Recently, as [many] as 40% of our international graduate students were from China. These students are heavily represented in the STEM fields, are highly competitively selected…and represent a critical component of our research workforce. We want them to come and we want them to stay and innovate in the USA.”

Changing course?

After three years of prosecutions and fear, the tide may be turning. 

Criticism of the initiative has ramped up in recent months, particularly after Anming Hu’s acquittal and the decision to drop several cases against academics. In July, Representative Ted Lieu, a Democrat from California, and 90 members of Congress sent an open letter to Attorney General Merrick Garland urging him to investigate the “repeated, wrongful targeting of individuals of Asian descent for alleged espionage.” 

A growing chorus of civil society groups and scientific associations has also pleaded for the program to be terminated, including a coalition of civil rights groups that wrote an open letter to Biden in January and more than 2,000 university professors who signed a request to Garland in September to end the initiative.

Even former DOJ officials are advocating for a change in direction. 

Demers reportedly considered a proposal for amnesty programs that would allow researchers to disclose previously undisclosed ties with no fear of prosecution—though this plan was quickly shot down by Republican lawmakers. 

Meanwhile Lelling, the former Massachusetts prosecutor, said he also believes that “general deterrence has been achieved.” “If the message was, ‘Make sure you are utterly transparent about your foreign collaboration,’ all right, everyone gets it,” he said. “There’s no need to prosecute another 23 academics.”

This fall, a group of lawmakers sat down with Garland to discuss the China Initiative as well as the rise in anti-Chinese hate during the pandemic. Garland did not commit to ending the project, but he did promise that he would restart the implicit bias trainings at the DOJ that had stopped under Trump. 

He also indicated that Matt Olsen, the newly confirmed assistant attorney general of the DOJ’s national security division, is planning a review of all programs under his portfolio. Hornbuckle, the DOJ spokesperson, did not respond to a follow-up question regarding whether the review was intended to address specific criticisms of the China Initiative. 

Today, the DOJ continues to announce new indictments and move forward with existing prosecutions, while the White House Office of Science and Technology Policy is considering a Trump-era presidential directive on strengthening the security of federally funded research. 

In the meantime, the people caught up in the China Initiative have been left to deal with the damage done to their lives and careers—even if their cases were ultimately thrown out.

Hu, the professor who was acquitted after a mistrial, has been offered his old job at the University of Tennessee-Knoxville; he is a Canadian citizen, however, and it is still unclear whether he will be allowed to remain and work in the United States. MIT Technology Review found that some American and Chinese citizens who intended to stay in the US have moved overseas, primarily to China, and some who were fired by their US employers are now conducting their research elsewhere—in some cases leading the very laboratories whose ties they were once accused of hiding.

Yasheng Huang, a professor at MIT Sloan School of Management who has spoken about many China Initiative cases, says that the long-term costs of these investigations are only starting to be felt.

“We've heard stories of young PhD students who are not thinking at all of applying to jobs in the United States: they want to go to Europe, they’re going to Asia,” he said. “They don’t want to stay in the United States. Some of these people are the best and brightest in their fields.” 

“The US is losing some of its most talented people to other countries because of the China Initiative,” he said. “That’s bad for science, and that’s bad for America.”

Do you have more information, or questions you'd like answered, about the China Initiative? Please reach out to us at tips@technologyreview.com.

Additional reporting by Tate Ryan-Mosley, Bobbie Johnson, Patrick Howell O'Neill, Alyssa Wickham, and John Emerson.
