Data is often referred to as the new oil because, just as gasoline powers cars, data powers today's businesses. However, data friction can inhibit the free flow of data, which slows business and decreases productivity. That friction between data operators and data consumers is intensifying as the demand for data reaches new highs.
To better understand the negative side effects of data friction and how DataOps can help bridge the divide between data operators and data consumers, TechRadar Pro spoke with Delphix’s CEO Chris Cook.
Can you describe what Delphix does?
Delphix eliminates the data friction that inhibits the free flow of data and slows business. It does this via a dynamic data platform that provides personalised DataPods to accelerate cloud, governance and application projects.
Just as DevOps was a key driver of the first 10 years of the cloud era, transforming the way that the Global 2000 thought of IT, DataOps will be a key driver of the next 10 years. Today, data operators struggle to manage, secure, and deliver the data environments demanded of them. Data is expensive to maintain, full of sensitive information, difficult to copy, hard to track over time, and slow to deliver to the teams that need it. The Delphix Dynamic Data Platform reduces data friction by providing a collaborative platform for data operators (such as DBAs, InfoSec & IT Operations teams) and data consumers (such as developers, QA, Analysts and data scientists), ensuring that sensitive data is secured and that the right data is made available to the right people, when and where they need it.
What is data friction and why should it matter to today’s businesses?
Despite billions being poured into the innovation pot, projects are so often slowed down by data friction, the result of growing volumes of siloed data and multiple access requests. This means that whilst the future holds exciting new developments in technology such as automation and AI, data friction is the principal obstacle which stands in the way of truly taking advantage of these innovations.
Data is costly to store, slow to copy, difficult to move and hard to govern. With the demand for fast access ever increasing, friction develops between data operators and data consumers as the tug of war between their competing demands intensifies. Consumers use data to drive new projects and innovation, which means more users looking for access, higher demand for faster access and more access points across the company, whilst operators need data to run the everyday functions of their organisations, such as infrastructure, security and maintenance.
As this struggle develops, the relationship between the two groups suffers.
What is DataOps and why is it significant to modern businesses?
Research has shown that fewer than 37% of organisations perceive IT's digital initiatives as aligned with the business, and only 25% believe that IT is using their data correctly. Until this divide is closed, businesses will not be able to achieve the level of innovation they seek in the digital economy.
DataOps seeks to bridge this divide. DataOps creates one collaborative team of data operators and data consumers. It first acknowledges who the consumers are and who the operators are, and once this has been achieved, the process flow of data can be mapped out so that access requests can flow smoothly for both parties. Both groups must be involved in the process as a team so that all constraints and obstacles can be identified and quantified. Once identified, constraints can be eliminated through means such as automation, self-service, or simply through policy change.
However, DataOps would be nothing without the technological infrastructure to support it. To ensure the success of this process, organisations need comprehensive technology in place to support the collaborative effort. The areas of technology that should be frictionless are:
• Governance, the security and integrity of the data
• Operation and scalability, monitoring, recovery and reliability of data systems
• Delivery and distribution of data environments
• Transformation, such as the modification of data, including masking and platform migration
• Versioning, the ability to capture data as it changes over time
Data is often referred to as the new oil. How do you expect this to change in 2019?
Data is needed everywhere across an organisation. AI and machine learning applications, for example, require ready access to clean, secure datasets for training and execution. But much of the most important data remains stuck in enterprise systems, stymied by legacy tools and processes. Even organisations that have automated their development and machine learning infrastructure have often failed to provide similar capabilities for their data.
Data has grown exponentially in size, complexity, and cost, all with escalating security and privacy concerns. IT teams are forced to limit data access and availability because moving, copying, and securing large amounts of data is simply too cumbersome and costly. Yet those that are driving speed in DevOps, cloud, and other initiatives need data to be everywhere and available on-demand. There’s a very real tension that exists when constraints on data prevent people from meeting the ever-growing demands of the business. And this problem is growing by the second.
We need to move away from organising our teams and technology around the tools by which we manage data, such as application development, information security, analytics and data science. Instead, we need to recognise that data is the critical asset, and bring together everyone who uses or manages data to take a data-centric view of the enterprise. When teams move beyond the mechanics of data delivery and focus instead on the policies and constraints that govern data, they can better align their infrastructure to let data flow through the enterprise to those who need it.
To get there over the next year, DataOps requires that teams embrace the complexity of today's technology landscape and think creatively about common solutions to data challenges in their enterprise. For example, you may have information about specific users and their roles, attributes of the data and what is required to secure it for different audiences, and knowledge of the resources required to deliver that data where it is needed. Bringing those together in one place with novel solutions allows the organisation to move faster. Instead of waiting hours, days or even months for data, environments need to be provisioned in minutes and at the speed required to enable rapid development and delivery of applications and solutions. At the same time, organisations don't have to choose between access and security; they can operate with confidence that their data is appropriately secured for all environments and users without cumbersome manual reviews and authorisations.
What are your predictions for the tech landscape in 2019?
The next few years will reshape the face of computing through IoT devices, machine learning, augmented reality, voice computing, and more. With that change will come more data, and with more data comes greater vulnerability, because data is scattered across multiple environments, both active and non-production.
GDPR, which is now in place, will help companies in their fight against hackers. The measures organisations must employ to ensure compliance will force them to gain a better understanding of where their data resides and who has access to it. But it will also put incredible pressure on organisations, and whoever cracks the problem first will win.
Companies can defend against security issues by employing a platform that allows them to identify sensitive data stored across all environments, including non-production, on-premise data centres and cloud-based environments. Organisations need to keep track of what data is stored where, especially when they are using sensitive data within their testing environments; this reduces the likelihood of hackers being able to access important data.
With DataOps, IT can overcome the cost, complexity, and risk of managing data to become an enabler for the business. DataOps can create a comprehensive library of data sources that enables users to pinpoint the exact location of sensitive data across an organisation’s entire IT estate, whether on-premises or in the cloud. What’s more, with the right tools organisations can identify which data values are subject to compliance, and adapt these to the business’ unique definitions of what is considered personal, confidential information.
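To make that idea of pinpointing sensitive data concrete, here is a minimal, hypothetical sketch of pattern-based PII discovery. It is not the Delphix platform's implementation; the categories, regular expressions and function names are all assumptions chosen for illustration.

```python
import re

# Hypothetical patterns for common categories of personal data (illustrative only,
# not a product's real detection rules)
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b(?:\+44|0)\d{10}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_record(record):
    """Return the PII categories detected in a dict of field -> value."""
    findings = {}
    for field, value in record.items():
        for category, pattern in PII_PATTERNS.items():
            if pattern.search(str(value)):
                findings.setdefault(field, []).append(category)
    return findings

record = {"name": "Jane Doe", "contact": "jane@example.com", "notes": "call 07700900123"}
print(scan_record(record))  # → {'contact': ['email'], 'notes': ['uk_phone']}
```

A real inventory would run scans like this across every data source, production and non-production alike, and record the results in a catalogue so the "exact location" of sensitive values can be queried later.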
Identifying personal data is only half the challenge; protecting it comes next, and a big challenge for companies is masking this data across all live and non-production systems. Modern dynamic data platforms can apply masking policies to multiple systems at once in a matter of minutes, meaning you can be compliant without inhibiting speed or agility.
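As an illustration of what a masking policy might look like, the sketch below applies per-field strategies to a record. This is assumed, simplified code, not the Delphix API: the policy format, strategy names and functions are invented for the example. The deterministic hash strategy shows one common design choice, the same input always masks to the same token, so referential integrity is preserved when the policy is applied across multiple systems.

```python
import hashlib

# Hypothetical masking policy: field name -> strategy (an assumption, not a real product format)
MASKING_POLICY = {
    "email": "hash",
    "name": "redact",
    "balance": "keep",
}

def mask_value(value, strategy):
    if strategy == "hash":
        # Deterministic pseudonym: identical inputs mask to identical tokens,
        # so joins between masked systems still line up
        return hashlib.sha256(value.encode()).hexdigest()[:12]
    if strategy == "redact":
        return "***"
    return value  # "keep" leaves non-sensitive values untouched

def mask_record(record, policy=MASKING_POLICY):
    # Fields missing from the policy default to redaction, erring on the safe side
    return {k: mask_value(v, policy.get(k, "redact")) for k, v in record.items()}

masked = mask_record({"email": "jane@example.com", "name": "Jane", "balance": "120.50"})
```

In practice the same policy would be pushed to every live and non-production copy at once, which is what lets masking keep pace with rapid environment provisioning.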
Chris Cook, CEO of Delphix