Digital transformation. It’s a phrase uttered frequently and with much fervour among senior government figures, heads of public sector bodies and tech innovators in the UK. But it isn’t exclusively a British phenomenon. Governments across the world are committed to bringing digital transformation to their public services, overhauling outdated analogue models of delivering services to citizens. In essence, public sector organisations like the NHS and local councils want to digitise their services to meet the expectations of a public whose online behaviour has been reshaped by organisations such as Amazon.
And while this is an ambitious but necessary goal for the public sector, digitising services is merely the first step in a very long journey towards true transformation, stretching from updating back-office systems to shifting the cultural mindset of staff.

Digitisation is only the beginning
At a recent Digital Leaders Public Sector Innovation Conference, Kevin Cunnington, Director General at the Government Digital Service (GDS), talked about the democratisation of digital across government as a high priority. During his speech, he encouraged government departments to aim for transformation rather than just digitisation.
Cunnington touched on a very significant point. When councils talk about transformation, it should not merely be limited to overhauling legacy contracts, moving systems to the cloud or building digital citizen portals. This is the digitisation part: it is the start of the process but far from the end of it. Transformation is much more widespread and deep-rooted, and it must begin with acceptance from staff, so that they embrace digital rather than being led, and potentially overwhelmed, by it.
Important initiatives like the GDS Academy build digital skills and capability across the civil service, giving staff the specialist knowledge they need, as well as a sense of ownership over the transformation of their local authority. Where digitisation ends is where cultural change, training schemes and true transformation begin.
The problem with digital transformation projects, as Cunnington alludes to, is that they begin with the right intentions but the wrong direction. It is not enough to replace your legacy systems with more up-to-date software and call it a day. Digital transformation requires a complete overhaul of how a government department operates, ensuring its civil servants are comfortable with their back-office digital technology and understand how the available data can help them better serve the public. Through collaborative effort, clear communication from all levels of the organisation, and a comprehensive training scheme for staff, the transformation of a local authority can reach deeper levels than the procurement of new technology.

Overcoming the resistance to change
Across the country, many councils are moving their services to one single platform in the cloud. In the same way that citizens can purchase anything through one online portal on Amazon, UK councils want their citizens to be able to interact in this way too, and indeed in any way they so wish. By accomplishing this, and having a solid CRM in place, councils can achieve the coveted ‘single customer view’ of their citizens and, armed with this connected data, can personalise their services accordingly whilst achieving the real efficiencies that are needed.
However, with digital transformation can come a familiar stumbling block: people can resist change. Operating existing legacy systems brings a level of comfort and security, and a sense of confidence in one’s ability to do one’s job, regardless of those systems’ inefficiency in the current digital landscape. Emerging digital technologies and talk of AI are perceived as the unknown and the untried, and possibly seen as a risk to jobs.
Simply put, the appetite for digital transformation in the public sector, whilst very real, does not always manifest itself in action when it comes to making practical change. Instead it becomes a straight swap of legacy systems for much the same working practices, with few real long-term benefits realised. Even if you change the technology being used, it is still the same people using it. So, the question the public sector needs to ask itself is: how do we change this situation?
By building workplace tools and implementing processes which make it easier for staff to carry out their work more efficiently. Moreover, departments should not be operating in silos, cut off from each other like leaves from a tree, but should be connected by a common root which binds them. Collaborative services will ensure that public sector organisations such as the NHS can access data from, for example, social housing departments, enabling them to better understand patients’ needs.
Moreover, there needs to be a consistent digital standard across the Government. Organisations across the country should be operating from one agreed level of quality and be sharing best digital practice, to ensure progress is universal across the sector. Transformation should not be a race with winners and losers; it should be a consistent effort across all sectors.

Collaboration between the public and private sector
Achieving a high digital standard throughout the public sector is a challenge, but by learning from the private sector, which has already shifted its working practices and customer interactions to a true end-to-end digital environment, it is not beyond the reach of most departments as things currently stand. While collaboration between public sector organisations will be crucial to achieving the highly ambitious objectives of digital transformation, the public sector must also collaborate with the private sector to help it achieve digital transformation success. Trust and confidence between sectors will allow the sharing of best practice and best technology, and will pave the road for long-term, stable change. Without this, the two sectors will continue to progress at different rates and true transformation may be more difficult to achieve.
It is encouraging that, after years of inertia, the public sector is now taking digital transformation seriously. However, when launching projects, the sector must remember that starting with the digital does not guarantee the transformation. Transformation will only be realised when a culture shift occurs, with staff understanding the benefits and what digital can deliver to the customer. Only then can complete digital transformation happen.
Colin Wales, Business Development Director, Arcus Global
Image Credit: Ditty_about_summer
Modern data clusters are becoming increasingly commonplace and essential to businesses, and when you run one, you inevitably discover headaches.
Typically a wide variety of workloads run on a single cluster, which can make it a nightmare to manage and operate, much like managing traffic in a busy city. There is real pain for the operations folks out there who have to manage Spark, Hive, Impala and Kafka applications running on the same cluster: they have to worry about each app’s resource requirements, the time distribution of the cluster’s workloads and the priority levels of each app or user, and then make sure everything runs like a predictable, well-oiled machine.
Anyone working in data ops will have a strong point of view here, having no doubt spent countless hours, day in and day out, studying the behaviour of giant production clusters in search of insights into how to improve performance, predictability and stability. It might be a thousand-node Hadoop cluster running batch jobs, or a five-hundred-node Spark cluster running AI, ML or some type of advanced, real-time analytics. Or, more likely, 1,000 nodes of Hadoop connected via a 50-node Kafka cluster to a 500-node Spark cluster for processing.
Just listing the kinds of environment I see regularly, one quickly becomes aware of what can go wrong in these multi-tenant big data clusters. For example:
Oversubscribed clusters – too many apps or jobs to run, just not enough resources
Bad container sizing – too big or too small
Poor queue management – sizes of queues are inappropriate
Resource hogging users or apps – bad apples in the cluster
So how do you go about solving each of these issues?

Measure and analyse
To understand which of the above issues plagues your cluster, you must first understand what’s happening under the hood. Modern data clusters have a number of precious resources that operations teams must keep a constant eye on. These include memory, CPU and NameNode capacity.
When monitoring these resources, make sure to measure both the total available and the amount consumed at any given time.
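As a sketch of that measurement step (the tenant names and usage figures below are invented for illustration, not drawn from any real monitoring tool):

```python
# Hypothetical snapshot of cluster memory: total capacity versus the
# amount each tenant is currently consuming. All figures are made up.
cluster_memory_gb = 512

usage_by_tenant = {"marketing_etl": 96, "fraud_model": 220, "reporting": 64}

consumed = sum(usage_by_tenant.values())
print(f"consumed {consumed}/{cluster_memory_gb} GB")

# Break the total down by tenant, largest consumer first, to see who
# is contributing how much to overall usage.
for tenant, gb in sorted(usage_by_tenant.items(), key=lambda kv: -kv[1]):
    print(f"  {tenant}: {gb} GB ({100 * gb / consumed:.0f}% of usage)")
```

The same breakdown can be repeated per user, department or project to spot any single tenant dominating the cluster.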
Next, break down these resource charts by user, app, department, and project to truly understand who is contributing how much to the total usage. This kind of analytical exploration can help quickly reveal:
If there is any one tenant (user, app, dept, or project) causing the majority of usage of the cluster, which may then require further investigation to determine if that tenant is using or abusing resources
Which resources are under constant threat of being oversubscribed
If you need to expand your big data cluster or tune apps and system to get more juice

Make apps better multi-tenant citizens
Configuration settings at the cluster and app level dictate how much system resource each app gets. For example, if we have a setting of 8GB containers at the master level, then each app will get 8GB containers whether it needs them or not. Now imagine if most of your apps only needed 4GB containers: your system would show it is at maximum capacity when it could actually be running twice as many apps.
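The arithmetic behind that example can be sketched in a few lines (the node size is a made-up figure, not a recommendation):

```python
# Illustrative arithmetic only: how many containers of a given size
# fit into a node's memory. Numbers are assumptions for the example.
def max_concurrent_containers(node_memory_gb, container_size_gb):
    """Whole containers of the given size that fit on one node."""
    return node_memory_gb // container_size_gb

node_memory_gb = 128  # hypothetical worker node

print(max_concurrent_containers(node_memory_gb, 8))  # 16 containers at 8GB
print(max_concurrent_containers(node_memory_gb, 4))  # 32 containers at 4GB
```

Halving the container size doubles the number of apps the same node can host, which is exactly the capacity the oversized default wastes.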
In addition to inefficient memory sizing, big data apps can be bad multi-tenant citizens due to other bad configuration settings (CPU, number of containers, heap size, etc.), inefficient code and bad data layout.
Therefore it’s important to measure and understand each of these resource hogging factors for every app on the system and make sure that they are actually using and not abusing resources.

Define queues and priority levels
Your big data cluster must have a resource management tool built-in, for example YARN or Kubernetes. These tools allow you to divide your cluster into queues. This feature can work really well if you want to separate production workloads from experiments or Spark from HBase or high priority users from low priority ones, etc. The trick is to get the levels of these queues right.
This is where the measure-and-analyse techniques above help. You should analyse the usage of system resources by users, departments or any other tenant you see fit, to determine the minimum, maximum and average that they usually demand. This will at least get you some common-sense levels for your queues.
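A minimal sketch of deriving those levels from observed usage, with invented sample data (real queue sizing would draw on weeks of history):

```python
# Summarise each tenant's observed peak memory per run to get
# common-sense starting levels for queue minimums and maximums.
# Tenant names and samples are made up for illustration.
import statistics

usage_gb = {
    "etl_team":     [40, 55, 48, 60, 52],
    "data_science": [90, 120, 110, 95, 130],
}

for tenant, samples in usage_gb.items():
    lo, hi, avg = min(samples), max(samples), statistics.mean(samples)
    print(f"{tenant}: min={lo} max={hi} avg={avg:.0f} GB")
```

The minimum suggests a guaranteed floor for the queue, the maximum a cap, and the average a sanity check on day-to-day allocation.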
However, queue levels may need to be adjusted dynamically for best results. For example, a mission-critical app may need more resources if it processes 5x more data one day compared to another. Therefore having a sense of seasonality is also important when allocating these levels. A heatmap of cluster usage will enable you to get more precise about these allocations.

Proactively find and fix rogue users or apps
Even after you follow the steps above, your cluster will experience rogue usage from time to time. Rogue usage is bad behaviour on the cluster by an application or user, such as hogging resources needed by a mission-critical app, taking more CPU or memory than needed for timely execution, or leaving a shell idle for a very long time.
In a multi-tenant environment this type of behaviour affects all users and ultimately reduces the reliability of the overall platform.
Therefore setting boundaries for acceptable behaviour is very important to keep your big data cluster humming. For example:
Time limit for application execution
CPU, memory, containers limit for each application or user
Setting the thresholds for these boundaries should be done after analysing your cluster’s patterns over a month, to help determine the average or accepted values. These values may also differ between days of the week. Also, think about what happens when these boundaries are breached. Should the user and admin get an alert? Should rogue applications be killed, or moved to a lower-priority queue?
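A toy boundary check along these lines might look as follows; the limits, app names and figures are all assumptions for illustration, not drawn from any real scheduler API:

```python
# Hypothetical rogue-usage check: compare each app's observed runtime
# and memory against agreed limits and raise alerts on breaches.
LIMITS = {"max_runtime_min": 120, "max_memory_gb": 64}

apps = [
    {"name": "nightly_etl", "runtime_min": 90,  "memory_gb": 32},
    {"name": "adhoc_query", "runtime_min": 300, "memory_gb": 80},
]

def violations(app, limits):
    """Return a list of boundary breaches for one app."""
    issues = []
    if app["runtime_min"] > limits["max_runtime_min"]:
        issues.append("runtime limit exceeded")
    if app["memory_gb"] > limits["max_memory_gb"]:
        issues.append("memory limit exceeded")
    return issues

for app in apps:
    for issue in violations(app, LIMITS):
        print(f"ALERT {app['name']}: {issue}")
```

In practice the response to an alert (notify, throttle, demote to a lower-priority queue, or kill) is a policy decision, which is exactly the question posed above.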
Only by thinking through these options can the tenants of a multi-tenant big data cluster play well together.
Kunal Agarwal, CEO, Unravel
Image source: Shutterstock/wk1003mike
Roughly a month ago, news broke that Facebook had stored millions of passwords on its own servers in plaintext. Company employees had access to the servers, and thus the passwords, for much of that time, and although Facebook said this unnecessary privilege was not abused, we do know that some 2,000 engineers and developers made around nine million internal queries for data elements that contained passwords in plain text.
However, Facebook has since updated the original post (rather than issuing a new one) with fresh information about the case. As it turns out, the original estimate of “hundreds of millions of Facebook Lite users” and “only tens of thousands of Instagram users” has now changed to “millions of Instagram users”.
“Since this post was published, we discovered additional logs of Instagram passwords being stored in a readable format,” the update reads.
“We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others. Our investigation has determined that these stored passwords were not internally abused or improperly accessed.”
Initially, the company said between 200 and 600 million passwords were exposed.
The news came on the same day that Facebook was found to have been uploading the email contacts of almost two million users without explicit consent.
Image Credit: Katherine Welles / Shutterstock
A lot has been said in the last few years about the transformative power of technology and the change it is bringing to how businesses operate. However, one technology in particular seems to be spoken about more than most – Robotic Process Automation (RPA).
RPA has been around for some time, but recent developments in artificial intelligence and other emerging technologies have brought it newfound fame. Solutions that have been suggested for some time now seem plausible, and business cases for investment can be made more easily.
As such, investment in the technology is growing rapidly. For example, Grand View Research predicts that the RPA market will be worth more than $3 billion by 2025 due to challenges such as increasing market competition and changing customer preferences. Acumen Research takes things a step further, predicting that it will hit the $4.1 billion mark in 2026.
So, why exactly are more and more businesses embracing RPA technology and how can they ensure a successful transformation?

Employee empowerment
What has quickly become clear is that RPA has the power to modernise how businesses operate. Deploying a virtual workforce can enable organisations to drive a whole host of workforce advancements, with robots taking over many of the more mundane, rules-based processes. For example, RPA robots can complete tasks such as processing transactions or filling out forms faster, meaning employees will no longer have to make repetitive, transactional decisions.
The real power of RPA comes from supporting employees in their daily jobs. It’s easy to focus on the threat to human jobs when it comes to automation, but that’s often because employee minutes saved is the easiest thing to measure. However, we need to take a broader look at the new economy that automation will create, as well as how it will transform the way we live and work. There are also much bigger benefits, especially in the way it can transform the employee experience, freeing people from monotonous tasks to undertake more exciting and rewarding work.
With the average employee spending 80 per cent of their day on mundane, routine work that doesn’t necessarily need human input, a significant amount of potential goes unrealised. Automated processes mean a better employee experience and greater loyalty, with employees feeling able to focus on the tasks they truly love and free to take on work that truly adds value.
Not only does it make the lives of employees easier, RPA also has the potential to create new opportunities. According to Forrester, the use of RPA can lead to the creation of higher-skilled positions as employees can focus on building relationships and spend their time on activities that have a wider impact on business growth. A new breed of intelligent Virtual Assistant robots can even help them with these complex tasks but that’s a story for another day…
Although RPA may appear to be a silver bullet to the multiple challenges facing businesses today, the truth is that around 40 per cent of RPA projects fail. That’s why it’s vital that companies put a clear strategy in place before implementing an RPA-driven transformation.

Planning for success
With so many potential benefits on offer, many businesses fall into the trap of jumping headfirst into RPA. They try to automate something quickly just ‘to get going’, forgetting that RPA requires proper design, planning and governance if it's to bolster the business.
This starts by identifying the business processes that are most in need of automation. These are often identified as the processes that are the most procedural, or the most easily automated. These often sit in back-office areas, such as Finance or HR. Whilst there is certainly value in automation for those areas, these types of processes are not always the right initial candidates to help grow momentum in a business. There can be other areas, such as customer services where customer experience could be directly improved as a result of robotic automation. Fortunately, there are now tools available that can help with the identification of candidates for automation based on applying AI and machine learning to data captured in the operations.
However, once candidate processes have been identified and agreed upon, businesses can then gradually roll out RPA and become familiar with the technology before scaling up their investment and moving on to more complex processes. It’s vitally important that people from across the organisation are bought into the transformation, understand the changes that will occur and can see the benefits both to themselves and the organisation as a whole.
It is also vital that businesses establish an RPA Centre of Excellence (CoE) in order to help them maximise the potential of automation, as well as scale and maintain it on an ongoing basis. A CoE offers many benefits, such as enabling businesses to leverage best practices, tools and implementation methods, providing a common knowledge repository and aligning with change management processes.
This all helps to ensure an effective and stable RPA deployment that can grow over time by providing a framework for long-term success and actively engaging multiple stakeholders across the organisation.
Ultimately, there can be no arguing the fact that RPA is fast becoming a necessity rather than a nice-to-have for businesses, with the potential to reduce workload and improve the employee experience.
But, in order to reap the rewards, businesses must understand how to effectively manage an RPA transition. Simply picking an easy-to-use tool and building a quick automation isn’t enough. It starts by putting a clear plan in place and taking things step by step. This might seem like a lot of effort but with the transformative results at stake, putting in that effort will be more than worth it.
Gareth Hole, Solution Sales Manager, Advanced Process Automation, NICE
Image source: Shutterstock/everything possible
When the NHS suffered the largest cyber-attack in its history back in 2017, the huge risk posed to businesses by ‘archaic’ computer systems became clear. In this case, it was revealed that one in twenty NHS devices ran on Microsoft Windows XP, an operating system that was then 16 years old. Fast forward two years and we’re still seeing outdated systems in use: last month, ex-British Airways employees revealed that the firm’s German call centre was operating in a vulnerable way due to outdated systems.
On top of this, the data security issues raised by the ex-British Airways employees were a stark reminder of the pitfalls presented by flexible remote working. More and more employers are offering flexible work environments, including the ability to work from home.
There are some obvious benefits – it reduces corporate real estate costs and attracts new talent looking for a better work-life balance.
However, ensuring adequate cyber security solutions for home-workers can be particularly challenging. As data and devices flow between home and business networks, it becomes harder to control the security of information. For businesses that do provide security solutions, these generally only benefit the employer and not the individual, which can make usage and take-up of these security solutions low.

Cyber-attacks are on the up: what are the risks?
News of hacks and data breaches seems to arrive ever more frequently, particularly as technology appears to be advancing at a faster rate than large businesses can adapt.
Securing all aspects of the business network is extremely important and IT leaders need to be agile to keep up with continually emerging threats. Attacks targeting cloud infrastructure, for example, can have an immediate and potentially catastrophic effect on a company’s brand, as well as the data of its staff and customers.
Hackers are usually quick to find an area of weakness in a system and exploit it, typically with malicious intent. Data breaches are a mixed bag when it comes to the types of companies targeted and the information stolen: credit card numbers, personal information and medical records are just some of the types of data that can be taken and leaked.
With just a username or email address, hackers can attempt to gain access to accounts using password cracking techniques. Password cracking is commonly used by hackers to gain unauthorised access using common passwords or algorithms that guess passwords. If a hacker successfully cracks a password, they may then use credential stuffing tools which enter the email, username, password combination into hundreds of popular websites to try and gain access.
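A minimal sketch of why common passwords fall first in such attacks; the sample wordlist here is tiny and invented, whereas real attacker wordlists run to millions of entries:

```python
# Dictionary check against a small sample of common passwords.
# A cracking tool simply tries each entry in a (far larger) list.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein"}

def is_easily_cracked(password):
    """True if the password appears in the common-password list."""
    return password.lower() in COMMON_PASSWORDS

print(is_easily_cracked("letmein"))   # True
print(is_easily_cracked("v9#Kp!2x"))  # False
```

The same check, run defensively at sign-up against a large breached-password list, is one way to stop users choosing passwords that credential-stuffing tools will guess first.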
Similarly, a hacker with access to a Wi-Fi password can gain access to all connected computers and devices within a network. This can be used to do anything – from copying data to monitoring usage and websites visited.
It takes just one weakness within a system for a massive data breach to occur and cause significant damage to both the business and customer.

What can be done?
On top of IT leaders ensuring computer systems are modern, fit for purpose and regularly updated, businesses need to pay more attention to the way every member of staff operates online. Do employees know how to spot basic phishing scams? Do they recognise the need for a unique password for every site, account and device? Employees should be provided with, at the very least, the basic tools and knowledge needed to keep themselves and the company safe from potential attack.
Likewise, businesses should be able to operate with remote-working benefits and employees should be able to enjoy this. Having a flexible working structure doesn’t have to negatively influence the cybersecurity credentials of a business. Reduced real estate costs mean additional budgets to re-invest in other parts of the business – like adequately securing the remote working environment for staff. The best way to get staff on side with protecting company assets is to empower them to protect themselves and their families in a personal capacity first.
As such, more education on cybersecurity is needed across the board – there is a lot of misinformation out there, including the myth that antivirus software is enough to protect people. Regular training on evolving threats can ensure employees are more mindful of potential dangers online – both at work and in their personal lives. The concept of a cyber security dashboard is a useful way of raising awareness and helping to inform employers of potential risks. If businesses knew the cyber security risk per department, even down to individual employees, training and resources could be made available to improve the security credentials of vulnerable individuals in the workforce.
Whilst the tech sector overall is working towards improving regulations, businesses and consumers need to be confident that they are each playing their part in protecting their data and combating cybercrime – education is a key factor in avoiding falling victim to these types of threats.
Andrew Martin, CEO and Founder, DynaRisk
Image source: Shutterstock/jijomathaidesigners
While automated software testing has come on in leaps and bounds in many industries, within the retail sector the complexity of legacy technology and systems has slowed its advance. Until recently, this meant manual software testing was the only method used within the sector, which not only relies on massive amounts of people power and takes a significant amount of time, but can also result in human error.
Fortunately, retailers are now able to overcome many of these drawbacks through the use of automated software testing. Software testing is vital to continuous healthy systems, as checking for blockages and weaknesses allows retailers to take corrective action before a minor problem becomes critical. Automated software testing allows retailers to continue to do this but much faster, more effectively and with a greater degree of accuracy than manual testing afforded.
However, increasing automation of processes and roles often sparks concerns about what it means for the humans currently carrying out those roles. While these concerns are only natural, employees currently in manual software testing positions stand to benefit as much as retailers, since automated software testing relieves testers of mind-numbing testing tasks and allows them to move into value-adding roles. So, as retailers begin to adopt automation testing, what is it, and how can it be implemented without compromising human values?

How can retailers use automation testing?
The demand for rapid execution of increasingly complicated orders means warehouse management systems need to perform progressively more complex functions, and sheer transaction volumes have increased. This often means upgrading software or finding ways to automate manual processes. Testing is critical to changes like these, to ensure the new approach will work once implemented. It can be carried out manually, using regression testing scripts that humans must follow, repeating the same processes to identify any errors, gaps or problems in the software. However, there are numerous drawbacks to this approach: the scripts take a lot of resource to create, and testing often consumes a large number of person-hours. For the people carrying out these functions the work is repetitive, meaning they become fatigued and prone to making mistakes, potentially overlooking a critical issue. Automation also allows for scalability testing, something which is integral to warehouse management systems but until now has been almost impossible to achieve with human testers.
Additionally, if there is limited human resource, testing usually cannot run alongside fixes, resulting in an even lengthier process. Manual testing also doesn’t provide sufficient scale or enable load testing due to the small numbers of transactions that can be carried out. With such an important piece of work using up significant labour, without the guarantee of a flawless end result, retailers are in need of a new solution.

What advantages does automation bring?
Retailers stand to see a number of significant benefits from adopting automated software testing. Not only does it make software testing much quicker, it also allows improved software testing tactics to be used, as human testers can take a more varied, interesting and value-adding role in the process. Instead of following traditional test scripts, testers could follow Gherkin scripts written in BDD format, for instance. This approach enables requirements to be written as assets, so the user acceptance testing requirements also become test cases. Not only does this save time and effort, it makes test cases traceable and places business requirements at the heart of the process. Automated software testing also allows testers to work on new developments in parallel, and by completing more software releases and hot fixes than were previously possible it frees up a substantial amount of human effort, so systems can be up and running much faster.
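As an illustration, a requirement captured as a Gherkin scenario might look like the following; the feature, SKU and steps are invented for the example, not taken from any real warehouse system:

```gherkin
Feature: Stock allocation
  Scenario: Order is allocated from the nearest warehouse with stock
    Given a customer order for 2 units of SKU "ABC-123"
    And warehouse "North" has 5 units of SKU "ABC-123" in stock
    When the order is allocated
    Then the order should be fulfilled from warehouse "North"
```

Written this way, the business requirement and the automated test case are one and the same asset, which is what makes the approach traceable.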
The collaborative environment automation testing facilitates, due to the ability for software testing to be stored in a cloud-based server, is also widely advantageous for retailers and their employees. With all the scripts, images and development code available to testers, they can learn how to write in a more automated fashion which means they can develop automated scripts themselves. Once a framework has been built, a test engineer can add as many new automated tests as is required which further enhances the testing process for developers. Instead of repeatedly re-running manual regressions, the team can write new tests, run them, tick them off and then allow them to run on their own. This makes for a faster, more efficient testing project.

How can retailers continue to support human testers?
In order to embrace automation effectively, retailers must invest in training and skills development for their test resource, growing new competencies. With training initiatives in place, automation testing creates new opportunities for testers to develop new skills and move into new roles. The introduction of automation allows testers to pivot towards higher-value activities, such as writing BDD scenarios which can then be converted into technical automation scripts.
Automation testing drives the requirement for a different skill set within a QA function or test team and enables testers to grow and build a more comprehensive set of skills. For example, testers will be able to acquire more advanced and nuanced skills in terms of test preparation and more technical skills with regards to automation scripting. This allows individuals to move into higher-skilled, better-paid roles in which individual productivity is raised due to the support of automation. As well as allowing for testers to move into new roles, automation also creates higher-skill jobs to support the manufacture and maintenance of that automation.
Although implementing automation testing requires some initial investment, over time it will drive cost savings which can then be ploughed back into investing in upskilling employees and automating more of the software testing process. This way of reinvesting will help retailers develop a more skilled workforce and move towards generating greater savings in time and effort, all while controlling costs.
As long as retailers continue to support and value their employees by upskilling them and moving them into value-adding roles, everyone stands to benefit from increased automation, positioning it as a catalyst for positive change.
Mike Callender, Executive Chairman, REPL Group
Image source: Shutterstock/Vasin Lee
“Time is what we have so that everything doesn’t happen at once” – Albert Einstein
Einstein has certainly been the focus of renewed attention of late, as astronomers worldwide excitedly pore over the first pictures of a black hole – where time ends, and space-time, matter and light disappear – and consider their implications for the theory of General Relativity.
Whatever you think of the images – excitement about the advance in our understanding of the fundamentals of physics, or disappointment that they are rather grainy compared to the many visual models scientists have used to depict the black hole phenomenon – they are still the first fixed images of one of space’s most alluring mysteries.
The images secured by the Event Horizon Telescope were achieved using telescopes at eight observatories around the world, co-ordinated by highly accurate atomic clocks to capture the images.
Curiously, a similar technique is being used to bring the concept of time and traceability back to earth, in respect of our own “black hole” of computer space. Traceable Time as a Service (TTaaS) combines a series of atomic clocks to create a constantly accurate source of UTC, delivered globally by low-latency fibre, with a software solution which enables local timestamping. It should attract the attention of data centre businesses, fibre suppliers and anyone reliant upon accurate and traceable time.
There is a problem in the virtual world of computers that many people know about but few talk about publicly. The understanding and measurement of time have become broken and fragmented because of poor synchronisation. As computer clocks fail to keep pace with the speed of execution, different machines, in different locations, all running pieces of the same process, do not share the same time. So, as Einstein might have predicted, lots of things look as if they happen all at the same time, or even in quite the wrong order, potentially costing enormous amounts.
Time traceability is critical for all computer-based transactions – if the clocks on servers in data centres don't all agree, then transactions can fall apart or even be lost as a result. So, without time traceability in cyberspace, our concept of time accuracy fragments and warps just as it does in outer space.
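A toy example makes the ordering problem tangible. Assuming two servers whose clocks are only a few milliseconds off UTC (the offsets below are invented), reconstructing a transaction from their local timestamps reverses the true order of events:

```python
# Illustrative sketch with made-up numbers: two servers whose clocks disagree
# by a few milliseconds each timestamp one step of the same transaction.
# Sorting the merged log by local timestamp inverts the true order of events.

TRUE_TIME = 1_000_000.000  # an arbitrary reference instant, in seconds

servers = {"app-server": +0.004, "db-server": -0.003}  # clock error vs UTC

def stamp(server, true_time):
    """Local timestamp = true time + that machine's clock error."""
    return true_time + servers[server]

# The order really arrived 2 ms BEFORE the database wrote it...
log = [
    ("order_received", "app-server", stamp("app-server", TRUE_TIME)),
    ("order_written",  "db-server",  stamp("db-server",  TRUE_TIME + 0.002)),
]

# ...but reconstructing the sequence from local timestamps says otherwise.
reconstructed = sorted(log, key=lambda event: event[2])
print([name for name, _, _ in reconstructed])
```

With a 4 ms fast clock on one machine and a 3 ms slow clock on the other, the write appears to precede the request that caused it, which is exactly the kind of record a regulator cannot reconstruct.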
This is particularly relevant in the Financial Services industry where recent regulations in Europe (MiFID II) and in the USA (CAT) mandate accurate and traceable time requirements in an attempt to address the problem. These regulations require market participants to synchronise the clocks on their trading servers with universal time to a level of accuracy that will give every computer decision a unique timestamp so that processes can be accurately reconstructed after an event from the machine records.

Accurate timing is of the essence
As recently as this week the UK Financial Conduct Authority (FCA) bemoaned the fact that they “continue to see errors in transaction reports … driven by inaccurate clock synchronisation” when transactions should be recorded in UTC.
Traditionally the difficulties and expense of implementing time synchronisation solutions in situ have proven to be a barrier to widespread adoption of UTC synchronisation.
And yet, the TTaaS software solution is already available – easy to install and use, cost-effective, resilient, accurate and regulation-compliant. It can ensure accurate transaction records through computers constantly corrected to UTC. The downloadable synchronisation software continually adjusts the clocks on servers to the global standard of Universal Time (UTC), is microsecond-accurate and is based on a time feed from atomic clocks in London, Tokyo and New York, not unlike the set-up of the Event Horizon Telescope.
The software also verifies a computer’s performance by creating a cloud-based timing log of its transactions, so that its performance is recorded and subsequently traceable: an accurate image of time’s imprint in the black hole! To maintain accuracy, the time feed is deliverable anywhere in the world through low-latency fibre connections.
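The underlying correction idea can be sketched very simply. Assuming a trusted reference feed (real synchronisation protocols such as NTP or PTP also model network delay, which this toy version ignores), a machine estimates its offset from the reference and applies it when stamping events:

```python
# A toy sketch of clock correction: estimate how far the local clock is from a
# trusted reference (UTC) feed, then apply that offset when timestamping.
# Real protocols (NTP/PTP) also account for network delay; this ignores it.

class CorrectedClock:
    def __init__(self):
        self.offset = 0.0  # estimated local-clock error, in seconds

    def sync(self, local_now, reference_now):
        """Record the difference between the reference feed and the local clock."""
        self.offset = reference_now - local_now

    def now(self, local_now):
        """Return a corrected, reference-aligned timestamp."""
        return local_now + self.offset

clock = CorrectedClock()
clock.sync(local_now=100.004, reference_now=100.000)  # local clock runs 4 ms fast
print(clock.now(100.004))  # stamped back onto reference time
```

Logging both the raw and corrected timestamps, as the article describes, is what makes each record traceable after the fact.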
Misalignment of computer clocks is a major and growing issue in a range of sectors. The broadcasting industry is migrating to internet protocol distribution, for which accurate and traceable timing will be critical to ensure synchronised broadcast feeds around the world. Digital ledgers (blockchain) are wholly dependent on accurate and traceable timing. The betting industry too could benefit from accurate traceability of its transactions.
In fact, any business where many routine processes are automated in distributed systems, and huge numbers of transactions take place over very short intervals needs accurate timing and traceability. If the computers running these processes aren't aligned to the same time, it can be extremely difficult to work out the order in which events took place.
The attractions of TTaaS to the Financial Services industries and beyond are obvious but they are equally appealing to Data Centre businesses and fibre suppliers globally. As entire industries move operations to the cloud, Data Centre operators and carriers face increasing demands from a growing volume of customers and an increasingly competitive landscape from a variety of providers. Offering the best service and greatest value is paramount.
TTaaS, which provides significant value and even essential utility to customers at a low cost to providers, may well supply operators of Data Centres and fibre suppliers alike with the defining edge they seek.
Simon Kenny, CEO, Hoptroff London Limited
Image Credit: Still Life Photography / Shutterstock
The bank of the future will look very different from what people have become accustomed to today. It will be defined not by the banks, but by the demands and expectations of their customers. Consumers don’t want to be tied to any one provider or channel. They want to bank where they want, when they want, and how they want.
This shift is being accelerated by open banking and PSD2 in Europe, with both putting the power of data into the hands of the customer. However, while this shift might seem like a threat to banks, it actually presents a major opportunity for those ready to grasp it.

A marketplace of opportunities
Given banks are so data-heavy, there is significant opportunity to unlock new revenue channels by using that data to enable highly personalised customer experiences in collaboration with other banks and service providers. PSD2 and open banking provide the impetus for banks to develop APIs that enable them to make initial steps towards greater collaboration. And the more open that banks become, the more opportunities they can create to inject themselves into joint value chains with other service providers. It’s critical therefore to not see open banking as a compliance exercise with the goal of ticking a box, but as the basis for successful digital transformation.
At its heart, open banking is an opportunity for banks to become curators of financial services, creating a marketplace where customers and providers can come to select the best products at the right price. In the same way that more wealth was able to flow between Europe and Asia as the Silk Roads extended further east and west, banks can enable more revenue to flow through their marketplaces by building reusable APIs that connect to trusted third parties. For example, HSBC was one of the first UK banks to make a serious step towards this vision with the release of its Connected Money app, bringing in data from more than 20 rival banks to create a hub from which customers can manage all their bank accounts. More recently, NatWest began trialling Mimo, a virtual personal assistant that uses open banking APIs and artificial intelligence (AI) to help customers switch to better insurance and utilities deals.

Building a trading route
If banks want to thrive in this new era and position themselves as a digital marketplace where consumers can come to satisfy any financial requirement, they need to reimagine their business as a platform. This can best be achieved by unbundling and repackaging their digital assets as a discrete set of capabilities exposed via APIs. With this approach, every service, process and digital capability within the bank is ‘productised’ and discoverable to others.
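As a hypothetical illustration of what ‘productising’ capabilities behind APIs might look like, the sketch below registers internal services as discoverable products with metadata. Every name, field and value here is invented for the example, not any real bank's catalogue:

```python
# Minimal sketch of a capability catalogue: each internal service is registered
# as a discoverable API 'product' with metadata. All names are hypothetical.

catalogue = {}

def register(name, version, description, handler):
    """Productise an internal capability so others can discover and call it."""
    catalogue[name] = {"version": version, "description": description,
                       "handler": handler}

def discover():
    """What a partner would see when browsing the marketplace (no internals)."""
    return {n: {k: v for k, v in meta.items() if k != "handler"}
            for n, meta in catalogue.items()}

def invoke(name, **params):
    """Call a productised capability by name."""
    return catalogue[name]["handler"](**params)

register("accounts.balance", "v1", "Read a customer's current balance",
         lambda account_id: {"account_id": account_id, "balance": 120.50})
register("payments.initiate", "v1", "Initiate a payment (PSD2 PISP-style)",
         lambda src, dst, amount: {"status": "accepted", "amount": amount})

print(sorted(discover()))
print(invoke("accounts.balance", account_id="acc-42"))
```

In a real application network each registered capability would sit behind a managed, versioned HTTP API rather than an in-process registry, but the discoverability idea is the same.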
This model will naturally begin to form what is known as an application network, creating a digital Silk Road paved with applications, data and devices that are connected via APIs. This makes these assets pluggable and reusable for any team that requires them, both internally and externally. As a result, it lays the perfect foundation for rapid innovation and closer collaboration between banks, fintechs and other service providers, thereby future-proofing banks for success in the years ahead.

The road to the future
By securely opening up their APIs through an application network, traditional banks can behave more like Silicon Valley start-ups, creating new revenue channels by sharing their core banking capabilities and customer base with authorised innovation partners. For example, MasterCard has turned many of its core services into a platform of APIs and is growing an ecosystem around its capabilities. The Mastercard Travel Recommender allows travel agents and transport providers to access customer spending patterns through its APIs and to offer customers targeted recommendations for restaurants, attractions and activities based on their previous behaviour.
These new revenue generating opportunities can have a significant impact on banks’ bottom-lines. MuleSoft research found 36 per cent of organisations with APIs are generating more than 25 per cent of their revenue through those APIs. This indicates APIs will play a central role in enabling the bank of the future, providing a positive catalyst to drive the advent of a new business model built on openness and choice for the consumer.
As banks embark into this brave new world, it’s critical that they understand going it alone will not deliver value for customers and may see them leaving altogether for a nimbler competitor. However, there are huge gains to be made for those bold enough to reimagine their business as a platform and embrace the change that lies ahead. These gains will only be achieved if traditional banks adopt an API-centric mindset that accelerates integration and innovation and provides a seamless customer experience. Unlocking data through APIs and an application network is the best way to stay ahead of the pack as the pace quickens in the race to become the bank of the future.
Danny Healy, financial technology evangelist, MuleSoft
Image source: Shutterstock/MaximP
Imagine typing in a government internet address, and ending up on a website that looks like a government website, acts like a government website, but steals your data.
That's essentially what happened recently, not only to Arab governments but also to intelligence agencies, telecommunications companies and internet giants in 13 countries, over more than two years.
The ominous news was confirmed by two cybersecurity firms – Cisco's Talos and FireEye. They claim that two separate entities, one of which might be state-sponsored, are doing the dirty work.
They dubbed them DNSpionage and Sea Turtle (who comes up with these names, really?).
The attack revolves around DNS hijacking. Hackers first use spear phishing to compromise a target and get into a network. They then scan the network for vulnerabilities, targeting servers and routers so they can move laterally across it, gathering passwords along the way.
Then, using the obtained credentials, they target the organisation's DNS registrar. They update the registrar's records so that the domain name points to a server that's under hackers' control.
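A defensive monitoring sketch shows how such a hijack can be spotted from the outside: pin the IP addresses a domain is expected to resolve to and flag any answer outside that set. The domain and addresses below are illustrative placeholders (documentation TEST-NET ranges), not real records:

```python
# Defensive sketch: detect a possible DNS hijack by pinning the IPs a domain
# should resolve to and flagging any answer outside that set. The domain and
# addresses are placeholders from the reserved documentation ranges.

PINNED = {
    "portal.example.gov": {"203.0.113.10", "203.0.113.11"},
}

def check_answers(domain, answers):
    """Return the set of unexpected IPs; an empty set means all answers are pinned."""
    expected = PINNED.get(domain, set())
    return set(answers) - expected

# Normal resolution: nothing to report.
assert check_answers("portal.example.gov", ["203.0.113.10"]) == set()

# A tampered registrar record now points at an attacker-controlled server.
rogue = check_answers("portal.example.gov", ["198.51.100.99"])
print(rogue)
```

Running a comparison like this from several independent vantage points, on a schedule, is one way organisations caught on to the Sea Turtle-style record changes.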
And boom – there you have it. One moment you're on a government website, the next – a group of hackers is sniffing through your data.
Talos says Netnod was compromised this way by Sea Turtle, and Netnod has confirmed it. Netnod is a Sweden-based DNS provider and operates one of the 13 root name servers that power the global DNS infrastructure.
We don't know exactly who was under assault, but we do know that hackers targeted Armenia, Egypt, Turkey, Sweden, Jordan and the United Arab Emirates.
Image source: Shutterstock/alexskopje
Authentication apps are in vogue – but there’s a big reason SMS 2FA will be relied on by businesses for years to come.
Upon hearing rumours he had died, author Mark Twain is said to have quipped to a newspaper: “Reports of my death have been greatly exaggerated.”
Keep this quote in mind if you come across articles claiming that authentication apps are going to consign SMS two-factor authentication (2FA) to history.
Far from dying off, SMS will be confirming online identities across the world for many years to come.
In fact, you can bet on its usage growing – fast.

Ease and simplicity
With cyber breaches and data exploitation making headlines on a frequent basis, it’s clear we live in an era where online security should be a number one priority for businesses and their customers.
Two-step authentication techniques are indeed a great way to ensure safety is not compromised.
Yet the reality is that people across the globe – including a reported 90 per cent of Gmail users – are still leaving themselves wide open to fraud by securing important online accounts with a single password only.
They’re failing to take up the option of 2FA for reasons including: the hassle involved, the unfamiliarity of the technology, or because they underestimate the threat to their accounts and applications.
And whilst strong passwords are an important component for security, it’s clear that a simple and accessible way for people to add that extra layer of authentication is needed.
That’s where SMS comes in.

The rise of authentication apps
SMS 2FA is a beautifully simple system because almost everyone has a mobile phone and almost everyone uses their text inbox. The service is quick, easy-to-understand, and no Wi-Fi is required. To receive passcodes via SMS, you only need to tick a permission box.
So, how does it work? First, you enter your username and password into a website, as usual. Then you receive an SMS with a unique one-use PIN delivered straight to your pre-determined phone number. You enter that too, and you’re in. This means that even if someone has your username and password, they won’t be able to sign into your account without access to your text messages.
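The server side of that flow can be sketched in a few lines. The six-digit length and five-minute validity window are common choices rather than a standard, and a real deployment would hand the PIN to an SMS gateway and add rate limiting:

```python
import secrets
import time

# Minimal sketch of the server side of SMS one-time PINs: generate, "send",
# then verify within a validity window. The 6-digit length and 5-minute TTL
# are common conventions, not a standard; this omits rate limiting entirely.

PIN_TTL = 300  # seconds a PIN stays valid
pending = {}   # phone number -> (pin, issued_at)

def issue_pin(phone):
    pin = f"{secrets.randbelow(10**6):06d}"  # cryptographically random, zero-padded
    pending[phone] = (pin, time.time())
    return pin  # in reality this goes to the SMS gateway, never back to the browser

def verify_pin(phone, attempt):
    record = pending.pop(phone, None)        # single-use: consumed on any attempt
    if record is None:
        return False
    pin, issued = record
    return attempt == pin and (time.time() - issued) <= PIN_TTL

pin = issue_pin("+447700900123")             # UK drama-range number, illustrative
assert verify_pin("+447700900123", pin) is True
assert verify_pin("+447700900123", pin) is False  # replay of the same PIN fails
```

Because the PIN is consumed on first use and expires quickly, a stolen username and password alone are not enough to get in.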
Authentication apps are another excellent option for businesses and consumers that are serious about security. They generate unique passcodes, which must be entered as part of a log-in process.
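Under the hood, most authenticator apps derive those passcodes with TOTP (RFC 6238): an HMAC over a shared secret and the current 30-second time window. A minimal standard-library sketch, using the RFC's published test secret rather than a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

# Sketch of how an authenticator app derives its codes: TOTP (RFC 6238),
# i.e. HOTP (RFC 4226) keyed on the current 30-second time window.

def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**digits:0{digits}d}"

# Base32 of the RFC test secret b"12345678901234567890"; not a real credential.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, for_time=59))  # deterministic for a fixed instant
```

Because the app and the server derive the same code independently from the shared secret and the clock, no message needs to be delivered at login time at all.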
However, authentication apps have a downside. If a business wants people to use an authentication app, it must first persuade them to download it. This is a small but significant barrier in itself. Additionally, the user must undergo a security process to enter their details and confirm their identity (which often includes being sent an SMS 2FA code through their phone). And there’s real inconvenience if you ever change phones, as you have to update authentication details on all your apps.
In a time where consumers expect a quality and speedy service, the reality is that many organisations will struggle to persuade large numbers of users to do this. Many Gmail users have not adopted 2FA, despite having access to a ready solution in the form of Google’s own Authenticator app.
Network security is all about managing risk and finding solutions that encourage consumers to take action, and continue to act a certain way. SSL web-browsing has risks, but it has boosted online security because it’s conveniently built into web browsers. And therein lies the beauty of SMS 2FA: it’s easy and accessible, and it’s far safer than relying on a one-factor password process. You don’t ignore car seat belts because they can’t protect you from every sort of crash. You use them and look for other ways to keep yourself safe as well – airbags for example.

SMS 2FA is dead...long live SMS 2FA
So what are the security issues with SMS?
A few years ago, just as SMS 2FA was taking off, a flaw in the system came to light. Attackers worked out they could call a mobile network provider claiming to be a customer, then persuade the operator to port that customer’s number onto a new SIM card. This meant an attacker could receive a customer's SMS messages on a new SIM – including any 2FA alerts.
Fortunately, this process lapse has been fixed. Now, all major network providers insist that customers prove their identity before accessing their account.
But, there’s also the rare incidence of attacks on the SS7 system to consider. SS7 is a set of protocols that allows phone networks to exchange information with each other. Sophisticated attackers can potentially access the SS7 system. If they also have a target's username and password, they can then reroute text messages for that person's number.
Fortunately, these types of attacks are incredibly rare and difficult to pull off. Unless attackers are going after extremely high-value individual targets, they’re highly unlikely to go to all the trouble of both entering the SMS network and getting hold of usernames and passwords.
What’s more, operators across the world have woken up to the SS7 threat and have been installing firewalls to protect the network over the past few years.

Official backing for SMS 2FA
In recent years, the US’s National Institute of Standards and Technology (NIST) – one of the most influential authorities on online security in the world – published a draft of its Digital Identity Guidelines which questioned the effectiveness of SMS 2FA based on SS7 vulnerabilities.
This led to many headlines announcing the demise of SMS 2FA. But, following further investigations, NIST experts revised their decision. The final version of the guidelines specifically recommended SMS as an effective 2FA measure, while discounting email or VoIP channels because they don’t “prove possession of a specific device”.
In short, NIST found that SMS improves security significantly without creating barriers for employees and customers to overcome. It can be rolled out to thousands of users at lightning speed, and it’s incredibly cost-effective. For these reasons, it’s likely to remain the most widely-used and effective 2FA tool for organisations and their stakeholders.
Reports of the death of SMS 2FA are, indeed, greatly exaggerated. It has given users the peace of mind that their details can be protected, wherever they are in the world.
Michael Mosher, Director, Global Information Security & Privacy, OpenMarket
Image Credit: Gilles Lambert / Unsplash
The Chinese will soon no longer be able to use Amazon to buy from local sellers, the company confirmed earlier.
The retail giant is slowly pulling out of the country, with its local sellers business being the first one to pay the price, so to speak. As of July, the Chinese will be able to use Amazon only to buy goods from international sellers. The company’s cloud business will continue to operate as usual in China, it was added.
The rumour that this might happen first started circulating when Reuters reported that Amazon was eyeing more lucrative businesses, such as imported goods and cloud services.
A spokesperson for the company told the BBC that it was "working closely with our sellers to ensure a smooth transition and to continue to deliver the best customer experience possible".
The same source believes Amazon was actually pushed out of the local Chinese market by domestic players, such as Alibaba and JD.com.
Back in 2004, Amazon bought Joyo.com for $75 million, which at the time sold books, music and videos. It was later rebranded as Amazon China but has allegedly ‘struggled’ to hold ground against local competitors.
It also seems that Amazon will be looking to make up lost ground in India, where it has committed to spending $5.5 billion on e-commerce. There, too, it will have to compete with the locals. This time it’s going to be Flipkart.
Image Credit: Ken Wolter / Shutterstock
We all know a data centre can be a beast to manage, from ever-growing capacity demands to budgets tightening as requirements grow more complex. The best way to solve big issues is to identify them clearly and prepare a prompt plan of attack. We have come up with the ‘four horsemen of the data centre’ – the top issues data centres currently face – and how they can be rectified with the minimum of proverbial blood spilt.
The first issue identified is reporting. It’s not the most compelling part of everyone’s working life, but it’s important, and it is a must for good technology and software asset management. Having a solution which provides real-time reporting is vital to keep data centre managers sane and on top of their game. The solution to combat all this stress is Data Centre Infrastructure Management (DCIM). DCIM turns massive amounts of raw data from across tech assets into information that can be easily comprehended and quickly acted upon. The first option is for the solution to generate a detailed raw report, giving the user full access to the data. The second is a predetermined set of data centre business management reports that are known to be of value to data centre professionals and to the smooth operation of the business. It allows one to see a dashboard with data spanning the whole data centre, from monitoring to control systems and all points in between. A DCIM solution also lets anyone, from the board to the control room, access operational views of the data centre as it is, not as it was (that real-time element again).
Money, money, money. At the end of the day, that is what the top dogs are looking at when it comes to data centre operations. High performance costs money – and is it being spent wisely? Many questions are fired at the manager on this topic. What can help reduce costs? Software Asset Management (SAM) and Technology Asset Management (TAM) are your troops to make this a reality. They hunt out underutilised software and hardware and stop unnecessary new software or hardware from being bought. Renewing software licences is a thankless task, and so many organisations fall into the trap of renewing everything and anything to stay over-compliant. SAM identifies which licences need updating and how much the software is being used. This can significantly reduce the cost of a data centre by streamlining what it really needs in order to operate effectively and efficiently.

Proactivity is a must
Data breaches have made headlines so often in recent years, and we suspect this is not going to slow down. Data centres need to be proactive in ensuring their facilities are secure. TAM identifies software, hardware and IoT devices which could be a potential security risk and compiles them into a digestible format, giving a clear overview of the data centre, its assets and their health. The explosive growth of IoT devices being connected to the network means there is a higher risk for hyperscale data centres, as it is tough to know exactly what is connected and what is going on with it. Full visibility of all the devices connected to the network gives data centre managers control and a chance to identify potential risks before anything catastrophic happens. Such solutions also track what has been added or deleted and how usage changes – all of which aids in identifying whether there have been any unauthorised changes that could indicate a cyber intrusion.
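At its core, that change-tracking idea reduces to diffing inventory snapshots. A hypothetical sketch, with made-up device names, that surfaces additions, removals and attribute changes nobody has authorised:

```python
# Illustrative sketch of asset change tracking: diff two snapshots of the
# device inventory to surface unreviewed additions, removals and changes.
# Device names and attributes are invented for the example.

def diff_inventory(before, after):
    """Compare two {device: attributes} snapshots of the network inventory."""
    added = set(after) - set(before)
    removed = set(before) - set(after)
    changed = {d for d in set(before) & set(after) if before[d] != after[d]}
    return {"added": added, "removed": removed, "changed": changed}

yesterday = {"rack1-switch": "firmware 2.1", "cctv-cam-07": "firmware 1.0"}
today = {"rack1-switch": "firmware 2.1",
         "cctv-cam-07": "firmware 1.3",        # unreviewed firmware change
         "unknown-iot-9": "first seen today"}  # new, unauthorised device

report = diff_inventory(yesterday, today)
print(report)
```

Anything in the `added` or `changed` buckets that does not match an approved change ticket is a candidate intrusion indicator worth investigating.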
Scale: the data centre is ever-evolving and needs to be able to cope with new technologies frequently. Running a hybrid estate spanning the traditional data centre, cloud computing and SaaS models is no mean feat, so having a tool like Workload Asset Management (WAM) is a life saver. It allows the data centre to optimise its environment by organising workloads accordingly and scaling as needed. WAM closely aligns itself with the IT and business needs of applications, allowing organisations to optimise the placement of application workloads through comprehensive management. By monitoring, managing, automating and sharing information, WAM gives a traditional data centre the opportunity to expand and develop its offering to become hyperscale.
The issues above are just a few of the things that can turn the running of a data centre into a challenge but, by keeping on top of all these ‘horsemen’, your data centre won’t go into apocalyptic mode. The simple way to ensure the sanity of data centre managers and the business they support is to make sure they have the correct toolbox of solutions. With the right solutions in place to face business challenges, data centres can develop alongside new technologies instead of just catching up with them.
Mark Gaydos, CMO, Nlyte
Image Credit: Welcomia / Shutterstock
Over the past few years, the Government has been working harder to do more business with SMEs to level the playing field. In spite of this, the complexity of the procurement process has proven to halt many a potential purchaser in their tracks. As a result, in late 2018, a range of experts spoke to MPs on the Science and Technology Committee on how attempts to open up procurement to SMEs have progressed over the last three years.
The gist of the discussions was that early success in shifting contracts to a more balanced portfolio of suppliers had seen some modest success, but that the trend seemed to be reversing in the last year. One of the underlying causes of this was cited as the procurement processes that are used to make awards, and this is a theme that has received much attention in the last six months in other forums.
The Local Government Procurement Strategy guidelines published in 2018 described mature behaviour in procurement as ‘taking a pro-active approach to integrating SME organisations into procurement and commissioning,’ but has this vision really been achieved? This move towards maturity has seen lukewarm interest, with many authorities choosing instead to remain working as they have always worked, retaining legacy thinking in procurement activities.
A pro-active approach can be described as ‘dialogue-based’, with local authorities taking the time to assess procurement suppliers based on their merits and their outcomes, with less reliance on the price tag. However, this may have been a premature idea at the time it was introduced – many authorities used a scoring system which was intended to mark suppliers in terms of their value. And, as is common, the conclusion was that ‘cheap’ is synonymous with ‘best value’. As we all know, SMEs are not always the cheapest option. Therein lies the first obstacle.
This stumbling block has caused some authorities to retain old practices or buy the way that they have always bought. The ‘old way’ invariably favours larger businesses which can traverse tender submissions with relative ease, automatically possessing the reference sites and high revenue that SMEs might not yet have built up for themselves.
Then the G-Cloud framework was introduced, designed to remedy these woes with a simplified system that allowed buyers to choose based on their specific needs and wants. Most of the suppliers on that framework were SMEs. The framework reduced the bid cost for both buyer and supplier and opened up the assessment system to more than a few key suppliers. Dialogue was welcome, encouraged and easily accessible. Or, it should have been.

Moving beyond simplification
This vision of easy and open dialogue saw modest progress, but buyer behaviour quickly shifted backwards. Why was this? For some, it was a matter of protection. The risk of opening themselves up did not equal the potential gains for some buyers, who are accustomed to purchasing online with little to no contact, whether for smaller projects or large, complex operations. For others, the average contract length presented something of a sustainability problem: at just two years, it allowed for an easy ‘get-out’ based on the short contract length. The plethora of obstacles that both buyer and supplier can face acts as a swift deterrent.
Creating the G-Cloud framework was originally intended to simplify the procurement process and encourage local government to take steps towards fairer buying methods. The aim was to take away the red tape that favoured the powerful and the cheap, but instead of taking it away, the G-Cloud framework only replaced it with other, equally vexing hurdles. Any local authority looking to make changes in the way they went about procurement was swiftly discouraged from choosing SMEs, who lacked the large recurring revenue stream to meet financial standards and present themselves as ‘best value’. Adding to this, the aim of simplification may have contributed to the role of the procurement process being diminished.
This new system was simple but obstinate. By favouring safety over risk (even in the face of great rewards), those involved in procurement may have felt they were safeguarding jobs and keeping the sanctity of the process in place. The motivation to change was tepid.
The Government Digital Service (GDS) is responsible for transforming the way we work digitally. The three themes of innovation, transformation and collaboration discussed on their most recent year review podcast are clearly still a focus, but we need to address our understanding of why local authorities are still hesitant to change legacy thinking. Only a small number of workers in the civil service feel like their local authority is open to involving SMEs in the tech procurement process, and this has to change. The riddle runs deeper than a few overzealous regulations and unenthusiastic purchasers.
The procurement process needs to move beyond simplification, because simplification does nothing to fix regulations that stop SMEs from escaping the ‘cheapest is best’ trap – a trap that discourages dialogue rather than encouraging it as the GDS originally set out to do. We must make the outcomes far more appealing, worth the risk – while at the same time reducing the risk altogether. Legacy attitudes towards risk management cite jobs and job value as the reasons, so we also need to reassure, and do all we can to ease concerns.
So, we raise a glass to the innovators, the risk takers and the visionaries who cut through the culture of 'we can't' and start to think about 'how we can'. And that nearly always starts with making sure that procurement is an enabler to change, not an obstacle to it.
Colin Wales, Business Development Director, Arcus Global
Image source: Shutterstock/violetkaipa
In news that probably won't surprise anyone, Facebook has uploaded email contacts of 1.5 million people online without their specific consent.
So here's what happened. If people were to sign up for a new account any time from May 2016, Facebook would ask not only for their email, but for that email's password as well. Those that gave Facebook their email password would then be notified that the social media site was “importing contacts”.
Once you got into that mess, there was no going back.
At a later (unknown) date, Facebook deleted the message saying it was importing contacts, but kept the practice, so people would give their password and would have no idea what was going on in the background.
As the story evolved, we then learned that Facebook not only used new users’ email access to import contacts, but to “improve ads”, as well.
As the news broke, Facebook reacted, issuing a statement saying it had stopped the email verification practice a month ago and that it is deleting the data.
“Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time. When we looked into the steps people were going through to verify their accounts, we found that in some cases people’s email contacts were also unintentionally uploaded to Facebook when they created their account,” the announcement reads.
“We estimate that up to 1.5 million people’s email contacts may have been uploaded. These contacts were not shared with anyone and we’re deleting them. We’ve fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings.”
Image Credit: Anthony Spadafora
The industry’s use of analytics is ubiquitous and highly varied: correlating all the components in a technology ecosystem, learning from and adapting to new events, and automating and optimising processes. In many different ways, these use cases are all about assisting the human in the loop, making them more productive and reducing error rates.
As a society, we are finding that analytics is increasingly seen as the glue, or brain, driving emerging business and social ecosystems that can transform – and already are transforming – our economy and the way we live, work and play.
From people data to ‘thing’ data
The old touchstone of the technology industry – ‘people, processes and technology’ – is firmly entrenched, but we might start replacing ‘technology’ with ‘things’, increasingly so as embedded, unseen technology becomes truly ubiquitous, with sensors and connected devices in everything around us.
As we become more connected, it’s been called an Internet of Things or an internet of everything, but for a truly connected and efficient system we are beginning to layer on top a much needed ‘analytics of things’. Forrester talks of ‘systems of insight’ and believes these are the engines powering future-proofed digital businesses. This is required because it’s only through analytics that businesses and institutions can synchronise the varied components of the complex ecosystem that is driving business and social transformation. Put another way, if we can’t understand and make use of all this data, why are we bothering to generate it at all?
While having a digital fabric means that so much can connect together, from varied enterprise solutions to manufacturing, or even consumer digital solutions like home control applications, it is analytics that coordinates and adapts demand using cognitive capabilities in the face of new forces and events. It’s needed to automate and optimise processes, making humans more productive and able to respond to pressures like the money markets, global social media feeds, and other complex systems in a timely and adaptive manner.
However, the fly in the analytics ointment has tended to be the well-known plethora of problems with data warehouses – even well-designed ones. Overall, data warehouses have been good for answering known questions, but business has tended to ask the data warehouse to do too much. It’s generally ideal for reporting and dashboarding, with some ad hoc analysis around those views, but it’s just one aspect of many data pipelines and has tended to be slow to deploy, hard to change, expensive to maintain, and not ideal for many ad hoc queries or for big data requirements.
Spaghetti data pipelines
The modern data environment relies on a variety of sources beyond the data warehouse, like production databases, applications, data marts, ESBs, big data stores, social media, and other external data sources – and unstructured data too. The trouble is, it often relies on a spaghetti architecture joining these up with the ecosystem and the targets, like production applications, analytics, reporting, dashboards, websites and apps.
To get from these sources to the right endpoints, data pipelines consist of a number of steps that convert data as a raw material into a usable output. Some pipelines are relatively simple, such as ‘export this data into a CSV file and place into this file folder’. But many are more complex, such as ‘move select tables from ten sources into the target database, merge common fields, array into a dimensional schema, aggregate by year, flag null values, convert into an extract for a BI tool, and generate personalised dashboards based on the data’.
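Those two kinds of pipeline step – the simple export and the multi-stage transformation – can be sketched roughly in Python. The field names and sample data here are invented for the example; a real pipeline would read from the source systems described above:

```python
import csv
import io

# Simple step: export rows to CSV text (a real pipeline would then place
# the output into a file folder or object store).
def export_to_csv(rows, fieldnames):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# More complex step: aggregate amounts by year, flagging null values
# for later inspection rather than silently dropping them.
def aggregate_by_year(rows):
    totals = {}
    for row in rows:
        if row.get("amount") is None:
            row["flagged"] = True  # flag null values
            continue
        totals[row["year"]] = totals.get(row["year"], 0) + row["amount"]
    return totals

sales = [
    {"year": 2018, "amount": 120},
    {"year": 2018, "amount": 80},
    {"year": 2019, "amount": None},
]
print(aggregate_by_year(sales))  # → {2018: 200}; the 2019 row is flagged
```

Each step takes the previous step's output as input, which is what lets complementary pipelines hand work to each other at well-defined points.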
Complementary pipelines can also run together, such as operations and development, where development feeds innovative new processes into the operations workflow at the right moment - usually before data transformation is passed into data analysis.
As long as the process works efficiently, effectively and repeatably – pulling data from sources through the various data processes to the business users that need it, be they data explorers, users, analysts, scientists or consumers – then it’s a successful pipeline.
Dimensions of DataOps
DataOps brings a series of values into the mix. From the agile perspective, Scrum, kanban, sprints and self-organising teams keep development on the right path. DevOps relies on continuous integration, deployment and testing, with code and config repositories and containers. Total quality management is derived from performance metrics, continuous monitoring, benchmarking and a commitment to continuous improvement. Lean techniques feed into automation, orchestration, efficiency, and simplicity.
The benefits this miscellany of dimensions brings include speed, with faster cycle times and faster changes; economy, with more reuse and coordination; quality, with fewer defects and more automation; and higher satisfaction, based on a greater trust in data and in the process.
AI can add considerable value to the DataOps mix, as together data plus AI is becoming the default stack upon which many modern enterprise applications are built. There’s no part of the DataOps framework that AI cannot optimise, from the data processes (development, deployment, orchestration) to the data technologies (capture, integration, preparation, analytics), to the pipeline itself from ingestion to engineering and analytics.
This AI value will come from machine learning and advanced analytics that go beyond troubleshooting (though that alone will bring massive cost, resource and time savings), automating and rightsizing the process and its parts to work in optimal harmony.
Recap: Where DataOps adds value
The goal of good architecture is to coordinate and simplify data pipelines, and the goal of DataOps is to fit in and automate, monitor and optimise data pipelines. Enterprises do need to inventory their data pipelines and ensure they carefully explore DataOps processes and tools so that they solve their challenges with the right-sized tools. AI will layer on top by bringing the ultimate value from DataOps.
Kunal Agarwal, CEO & co-founder, Unravel Data
Image Credit: Enzozo / Shutterstock
Online privacy is critically important in this era of privacy breaches. There has been a lot of discussion about how to stay safe online, and many experts have suggested using a virtual private network. A number of people are subscribing to a VPN service because it keeps overzealous intelligence agencies from monitoring their online activity and also keeps their browsing history hidden from data-hungry advertisers.
However, your privacy can be at risk even when you are using a VPN on your PC. Three common vulnerabilities can leak your data when you are online: app leaks, WebRTC leaks and DNS leaks. Keep reading to learn how these leaks work and what you can do to counter them.
App leaks
It is estimated that by 2020 there will be 2.87 billion smartphone users. It is evident from such numbers that people prefer to spend more and more time with their smartphones. The apps we use know a lot about us, and if any one of them leaks our information, it can be quite difficult to pinpoint which one did it.
App leaks happen when an app fails to secure its data. Faulty apps often connect to online services and leak user information. For instance, when signing in to a social media platform you send your credentials to a server for verification; the app can save this data, which may later be leaked.
Apps that use the HTTPS protocol are generally more secure than those that do not. Apps that rely on plain HTTP are susceptible to data leakage; these can include ad-laden gaming apps, business apps, sports apps and news apps.
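The difference is checkable before anything sensitive is sent: the URL scheme tells you whether the connection will be encrypted. A minimal sketch, with purely illustrative URLs:

```python
from urllib.parse import urlparse

# A minimal sketch: refuse to send credentials to any endpoint that is not
# served over HTTPS. The URLs below are illustrative only.
def is_safe_to_send_credentials(url: str) -> bool:
    return urlparse(url).scheme == "https"

print(is_safe_to_send_credentials("https://api.example.com/login"))  # True
print(is_safe_to_send_credentials("http://news.example.com/feed"))   # False
```

A scheme check alone does not validate certificates, of course – it only catches the obvious case of credentials travelling in plain text.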
Information that an app can potentially leak includes your email address, postal address, username, passwords and your credit card information. People should think twice about what information they allow apps to access.
If you are not already using a VPN to guard against app leaks, you should opt for one. A VPN cannot fill in the encryption holes left by an app’s developer, but it can make leaked data harder to connect to you by masking your IP address.
DNS Leaks
DNS stands for “Domain Name System”, and DNS requests can potentially reveal your online identity even if you are using a VPN service.
When you type a website’s URL into the address bar, your browser asks a DNS server for the IP address of the web server connected to that URL. Your browser then asks the host server for the web page and displays it.
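That lookup step can be sketched in a couple of lines of Python, using whatever resolver the system is configured with – which is exactly the component a DNS leak exposes:

```python
import socket

# The browser's DNS step, reduced to one call: ask the system's configured
# resolver to translate a hostname into an IP address. Whichever resolver
# answers here is the one whose logs a DNS leak would expose.
def resolve(hostname: str) -> str:
    return socket.gethostbyname(hostname)

print(resolve("localhost"))  # → 127.0.0.1, answered locally
```

A public hostname resolved this way goes to the default resolver (typically your ISP’s), regardless of whether the subsequent web traffic travels through a VPN tunnel.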
The web browser automatically uses whatever DNS service is available unless you specify otherwise, and DNS requests can bypass the protective tunnel set up by your VPN provider. This means that whenever you visit a website, your ISP’s DNS server may be contacted, and its logs can potentially be exploited to track your online activity.
The DNS leak issue can be fixed with some minor tweaks. Here’s how:
You can manually change your settings to use a different DNS service, such as Google Public DNS or OpenDNS, instead of the default one from your internet service provider. To change it, type the IP address of the DNS server you want to use into your network adapter settings.
Alternatively, you can automate the process by using a VPN provider that offers DNS leak protection, automatically switching the browser to a secure DNS.
WebRTC Leaks
WebRTC (Web Real-Time Communication) allows peer-to-peer exchange of video and audio through a browser. It is a protocol used by popular apps like Facebook Messenger, Google Hangouts and Discord. Though WebRTC is useful for communicating and sending video, the protocol has a downside: the connection it creates can share data that bypasses the protection of your VPN, unless your VPN is designed to catch it. If you are not protected, it will leak your IP address.
An IP address is a key that can reveal a user’s identity. A simple Google search of an IP address is capable of revealing its approximate location. If anyone gets access to your IP address (a hacker, in many cases), they can access the data linked with it, potentially including your postal address.
WebRTC can leak your information even if you are using a VPN service; to be sure you are protected, use a VPN service that claims strong protection from WebRTC leaks.
You can also take these steps to stop the privacy leak.
If you are using Google Chrome, you can find extensions on the Chrome Web Store that disable WebRTC.
If you are using Mozilla Firefox, disabling WebRTC is simple: head to the configuration page (about:config) and turn off the peer connection setting (media.peerconnection.enabled).
Note: There is no way to disable WebRTC in Safari or Microsoft Edge. If you are concerned about your privacy, it is advised to either switch to another web browser or use a VPN that has a built-in WebRTC blocker.
Terry Higgins, Marketing Director, AllBestVPN
Image Credit: Balefire / Shutterstock
Products, software and services are more interconnected than ever, thanks to the costs of sharing and transferring data being lower than ever before. While traditionally this data has been used for marketing purposes, it’s now powering these connections and is the cornerstone of many enterprises’ operations. What’s more, the recent wider adoption of machine learning, AI and the expansion of the IoT market means it’s no surprise that 90 per cent of the world's data has been produced in just the last two years.
Today, organisations of all shapes and sizes, from traditional software developers to hardware manufacturers, are also leveraging the customer and partner data they store to do everything from informing plans and strategy to directly selling it. Consequently, most (85 per cent) now believe it’s worth the same as currency for solving business challenges, and 48 per cent are already commercialising their data to external parties, up from just 10 per cent in 2014. However, before any organisation can begin to think of ways to monetise data, they need to address the elephant in the room: GDPR.
The challenges and opportunities
Data regulations are causing a rethink of how businesses protect and use personal identifiable information (PII), creating both challenges and opportunities. Notably, organisations must now carefully understand and manage their use of data, whereas before they had carte blanche. Consequently, the value of data to an organisation is balanced against the potential cost – financially and legally – to it in the event of a breach or data incident. For most organisations this takes the form of the risk-reward ratio, which is used to measure the expected gains of a given investment against the risk of loss.
Despite this, GDPR has presented a significant opportunity for businesses by creating a legal framework for how data can be used and protected. By clarifying the legal risks and measures for data management, companies have a consistent framework to share and monetise data with their data buyers in a transparent manner. By identifying where data is stored and employing security solutions such as encryption on all vulnerable data, businesses can ensure they are adhering to GDPR and use data responsibly. The regulations also limit the ‘grey areas’ that some companies were operating in when commercialising data, preventing them from sweeping poor data practices under the carpet. While only reputations were harmed in the past, concrete rules now exist which make it clear how data should be exchanged externally.
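As one illustration of reducing exposure before data is shared – a hedged sketch, not a description of any specific product or of encryption itself – direct identifiers can be replaced with a salted one-way hash, so shared datasets can still be joined on a stable reference without carrying the raw PII. The salt and field names here are hypothetical:

```python
import hashlib

# Illustrative pseudonymisation: replace a direct identifier with a salted
# one-way hash before the record leaves the business. The salt value and
# field names are invented for this sketch.
SALT = b"rotate-this-secret-out-of-band"

def pseudonymise(value: str) -> str:
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"email": "jane@example.com", "purchases": 7}
shared = {"user_ref": pseudonymise(record["email"]), "purchases": record["purchases"]}
print(shared)  # the raw email address never leaves the business
```

Salted hashing of low-entropy identifiers can still be guessed by brute force, so in practice this would sit alongside encryption in transit and at rest, with proper key and salt management.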
So, once an organisation has got to grips with GDPR, how should it make the most of its data while minimising risks, and ensure that its chosen monetisation strategy is right for it?
The two sides of data monetisation
Data monetisation can be defined as the value generated from data, whether that is monetary or derived from other attributes that have value to a business. Quite often, the term monetisation is used for both internal and external value; however, there are clear distinctions between the two:
- Internal Monetisation – Traditionally the value of data has been derived from analysing it to develop new business strategies or insights, guiding everything from branding to new product features or process improvements. This approach only impacts the organisation internally and ensures that the data they store remains within the network. With a robust data management and protection strategy in place, the business is less likely to run afoul of any regulations. This is usually the first step before being able to derive a monetary value out of data and can be split into three main components: understanding the customer problems already being solved; the problems that haven’t been solved yet, and the operational problems that cost organisations money.
- Direct External Monetisation – The other approach involves directly monetising data. With permission from customers or partners if PII is involved, a business can either sell its data to other organisations that may find the insights and information useful for their own products and services, or use it to sell additional complementary services, like analytics, to its existing customers. Directly monetising data requires the business to develop application programming interfaces (APIs) to monitor which data is transmitted in real time, and to whom. Coupled with an API product manager, this gives the business oversight of each customer it shares data with. While sharing data externally can potentially expose a business to more risk, with the right solutions in place the supply chain of data should remain secure, and ultimately generate revenue.
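The field-level oversight described above can be sketched roughly as follows. The consumer names and field lists are invented for the example, and a real implementation would sit behind the API layer with audit logging:

```python
# Per-consumer field filtering at the API boundary: each external consumer
# is granted an explicit set of fields, and everything else is stripped
# before the record leaves the business. Names here are hypothetical.
ALLOWED_FIELDS = {
    "analytics-partner": {"region", "crop_yield", "year"},
    "trading-platform": {"region", "year"},
}

def share_record(consumer, record):
    allowed = ALLOWED_FIELDS.get(consumer, set())
    # Strip every field the consumer has not been explicitly granted; a
    # real system would also log the transfer here for audit and oversight.
    return {k: v for k, v in record.items() if k in allowed}

record = {"region": "East", "crop_yield": 4.2, "year": 2018, "farmer_name": "J. Doe"}
print(share_record("analytics-partner", record))  # PII field stripped
print(share_record("unknown-consumer", record))   # → {}
```

An allow-list (rather than a block-list) means a newly added field is private by default until someone decides a given consumer should see it.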
Before any monetisation project can reap rewards, the data needs to be thoroughly analysed to establish: firstly, whether there is anything unique about the data that competitors don’t have; secondly, how reliable the data is for third parties; and finally, how much the data is worth to third parties in specific industries. It could be that a third party using a business’s data also operates in another industry that competes with it. For instance, tractor manufacturers might collect data on crops and sell this information to raw material traders – companies they had no contact with before.
But, data sharing and direct monetisation is not without risks.
This is particularly true when it comes to customer and product usage information. One criterion is to understand market acceptance and how the business will be perceived by its existing customers and partners. Another immediate concern is whether this data is valuable to competitors, and whether it could give them an edge with their own products. A further risk is that the data could potentially help create new competitors who weren’t previously operating in that space – and who now have access to the organisation’s data.
As the Cambridge Analytica scandal at Facebook proved, the company selling data must care not only about how it’s used, but also about how third parties are using it. Businesses must ensure they have tools in place to respond if a third party is found to be misusing their data. In that instance, Facebook’s only solution was to deprecate the problematic APIs, meaning that all developers (including those who were not misusing them) lost access and had to integrate new ones. To get around this, a business must conduct a thorough analysis not only of how the data it wants to share can be used, but also of how it could be used against it.
If properly managed, direct external monetisation can be a lucrative strategy for an organisation. Not only can PII be shared, but it can even be aggregated externally, which limits risks. Better yet, for businesses with many partners, sharing data can increase integration between products and services, leading to better customer experiences. Similarly, adopting an open data exchange can attract potential future partners, creating more business opportunities, and enhancing stickiness.
Many businesses will have a preference for their approach to data monetisation – whether that involves erring on the side of caution and analysing data internally or embracing the benefits of directly monetising their data. Importantly, it’s about assessing the risks: which data does a business want to share, and why? That is not to say the two approaches cannot be complementary. Even once a business has decided to share its data, it must ensure it has the flexibility in its APIs to have granular control over what is shared.
However, a business’ data capabilities are only as good as how easily they can share that data with customers. In order to do this, organisations need to ensure they use their APIs correctly. APIs must be considered as products themselves, and created with flexible packaging, pricing and business models for customers; understanding the value they provide for existing and new customers, in order to offer the APIs that fit their requirements best.
Further, APIs need to evolve based on the customer experience and changes in customer needs, to ensure a fast time to market. In the subscription economy, 70 per cent of revenues come from renewals, upsells and cross-sells. It is therefore essential to have the right tools to handle data transfers – providing the best customer experience with low friction – and not leave easily monetisable data on the table.
As enterprises become increasingly digital and connected, they will have more confidence in sharing their data, thanks to better understanding of its value and the practices needed to protect it. With data sharing enabling customers to use software and products seamlessly across different platforms and solutions in the future, businesses must determine their data monetisation strategies now in order to make the most of this opportunity.
Jamshed Khan, strategy and marketing vice president for cloud protection and licensing activity, Thales
Image Credit: StartupStockPhotos / Pixabay
Human communication may not be the first thing you’d associate with artificial intelligence, especially if you’re used to less-than-fluid automated calls or chatbots. But, when harnessed effectively, the technology offers a great deal of potential for facilitating better, and even more creative, collaboration.
As AI becomes more ingrained in workplace collaboration technology, we’ll be able to channel some of the energy we used to devote to mundane note-taking, scheduling and other tasks into juicing up our creative output. The use of AI is set to create possibilities in the office environment that were not deemed possible a few years ago.
Here are five ways AI can give rise to creative meetings and transform workplaces as we know them.
1. Automated note taking allows brainstorms to go full-speed ahead
Have you ever been the person responsible for taking notes during a meeting, constantly chasing whatever the last participant just said? At the end of a meeting like this, you could probably barely remember the content of the meeting, let alone the details.
Adopting automated note-taking and accurate meeting transcripts can be one of the simplest ways AI can help free up meeting attendees to actually focus on the discussion taking place.
Furthermore, transcripts can be searched for important keywords and ideas, allowing participants to fully absorb each detail and idea after the meeting has concluded. Giving everyone at the meeting the ability to participate without the burden of constant note-taking fosters a lively and uninhibited discussion, encouraging a seamless flow of ideas.
2. AI-powered action items and agenda updates keep you from getting bogged down with remembering specific details after the meeting has ended
AI technology is founded on rules-based responses to decisions, meaning it can be taught to recognise particular keywords. Organisers can plug in important words, and the AI can recognise those words and react – meaning AI is equipped to capture action items, a more complex task than just providing a transcript of what occurred during a meeting.
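A toy version of that rules-based keyword spotting might look like the following. The trigger phrases are invented for the example, and a production system would use far richer language models than substring matching:

```python
import re

# Rules-based keyword spotting over a meeting transcript: any sentence
# containing one of the (hypothetical) trigger phrases is treated as an
# action item.
TRIGGERS = ("action item", "follow up", "deadline")

def extract_action_items(transcript):
    items = []
    # Split the transcript into sentences at ., ! or ? boundaries.
    for sentence in re.split(r"(?<=[.!?])\s+", transcript):
        if any(trigger in sentence.lower() for trigger in TRIGGERS):
            items.append(sentence.strip())
    return items

notes = ("We reviewed the budget. Action item: send revised figures to finance. "
         "The deadline for the first draft is Friday.")
print(extract_action_items(notes))  # two sentences are flagged
```

Even this crude rule set shows the shape of the task: once sentences can be tagged, deadlines can be extracted from them and fed into reminders.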
On top of action items, AI can help record deadlines and, if programmed to do so, send out reminders as deadlines approach. AI can record the most important parts of a meeting and share them with attendees after the fact, ensuring that none of the actions, intentions or necessary follow-ups are forgotten.
3. Automated capture of nonverbal cues can let you know when you’re onto a good idea
We’ve all been guilty at some point of watching the slow progress of the clock as we sit through yet another meeting that would have been better handled through a group message or email. While we often fixate on “meetings gone wrong,” there are also those amazing meetings that truly made an impact – where a team had a critical inspiration that unlocked success, closed a crucial deal or had a breakthrough in forging a relationship with a boss, employee or client.
One of the things we talk about in the communications and collaboration division of LogMeIn is those “ah-ha” moments in meetings. These are the moments during a meeting where ideas are born and all participants react strongly to an idea, and engagement and information-sharing are at their highest. If you can capitalise on these moments, you can unlock your team’s full potential and drive dramatic acceleration in delivering on your most critical objectives. The challenge lies in being able to identify and capitalise on these instances to maximise engagement and productivity between and during future meetings.
AI will be able to more easily recognise and record those moments, because they are generally identified by nonverbal cues such as facial expressions, nods, laughter, peaks in the audio when everyone has that “ah-ha!” moment and other reactions that human note takers likely would not be able to accurately capture. This helps these moments stay intact and be easily identified later, preventing great ideas from being misinterpreted or lost.
4. Improved overall efficiency prevents meetings from dragging on and draining people of their creative energy
Everyone has experienced a meeting that seems to drag on endlessly, or watched coworkers talk in circles. This can happen when people are not paying attention because they’re scribbling on notepads and typing on laptops, bringing up topics that were already discussed. This is what turns meetings into chores instead of the energising moments of team collaboration they are meant to be.
When AI removes the more mundane aspects of a meeting like scheduling or taking attendance, however, attendees can move through administrative tasks and housekeeping items rapidly, knowing the AI will have it all recorded for later reference, and move into free-flowing exchanges of ideas. And for those routine meetings that occur frequently and don’t always entail a major brainstorm, AI facilitates effective and concise meetings so everyone can get into the meeting quickly, have a productive meeting and then get back to the more inspiring work.
5. More personal interactions become possible when AI takes care of the mundane meeting tasks and you can put all of your focus on collaboration
The more that meeting attendees are able to focus on the meeting content itself, the more they will be able to come up with better ideas and more creative solutions to problems, thus building team rapport. By reducing the responsibility that seems to come with a meeting, people can relax, build candour and create a team that functions better in and out of the meeting room.
Ultimately, AI will improve the way we work with each other. Eliminating the repetitive and easy tasks that come along with the administrative aspects of meetings allows humans to work without constraint. It is much better, in the long run, to allow AI-powered assistants to take care of necessary but low-value tasks such as note taking, action items, agendas and reminders. Without the burden of worrying about these tasks, employees can bring their A-game to every meeting, increasing the usefulness of meetings tenfold.
Steve Duignan, VP of International Marketing, LogMeIn
Image Credit: John Williams RUS / Shutterstock
Posted by Slackware Security Team on Apr 17
[slackware-security] libpng (SSA:2019-107-01)
New libpng packages are available for Slackware 14.2 and -current to
fix security issues.
Here are the details from the Slackware 14.2 ChangeLog:
This update fixes security issues:
Fixed a use-after-free vulnerability (CVE-2019-7317) in png_image_free.
Fixed a memory leak in the ARM NEON...