Data has always been there. Right back at the start of the last century, organisations of the age still ran their operations using log books, paper folios and hand-drawn accounts sheets. It was still data, but it was predominantly static, often duplicated or inaccurate, tough to search and only accessible to a few people.
After the second and third industrial revolutions and the arrival of the PC, things finally changed. Software spreadsheets became ubiquitous and dedicated databases grew out of the mainframe era, found their way onto our machines and then progressed to the web and the cloud.
So around the turn of the millennium, things did change… but this was still a time of relatively slow-moving data intelligence. Jump forward two decades to today and a new era of data management is upon us.
The era of next-generation data
This is, if you will, the time of next-generation data. It is not just about spreadsheets and databases (they still exist, obviously); it is about the move to a higher level of predictive intelligence, the ability to work with a more agile process structure and the chance to access more advanced data management tools.
In this next-generation data era, we no longer keep data locked away in departmental silos, operational repositories or other organisational archives. We move forward to a time when users, under approved and appropriate access policies, can get self-service access to data in real time.
Next-generation data organisations are able to meet business requests for new datasets on the fly because they have used next-generation data management tools to organise, orchestrate and coordinate these actions. There are still challenges, and we know that data governance, security and compliance regulations will continue to have an effect. Next-generation data organisations know this, so they have put the right tools in place to manage these tasks automatically, autonomically and intelligently.
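To make the idea of policy-gated self-service access a little more concrete, the sketch below shows a minimal role-based check in Python. The role names, dataset labels, policy table and `can_access` helper are all hypothetical assumptions for illustration; they do not describe any specific product or standard.

```python
# Minimal sketch of policy-gated self-service data access.
# The roles, datasets and policy table below are hypothetical examples.

POLICY = {
    # role         -> datasets that role may read on a self-service basis
    "analyst":      {"sales", "marketing"},
    "finance":      {"sales", "ledger"},
    "data_steward": {"sales", "marketing", "ledger", "hr"},
}

def can_access(role: str, dataset: str) -> bool:
    """Return True if the given role is approved for the dataset."""
    return dataset in POLICY.get(role, set())

print(can_access("analyst", "sales"))   # -> True  (approved request)
print(can_access("analyst", "ledger"))  # -> False (outside policy)
```

In a real platform the policy table would be managed centrally and enforced by the data management layer itself, rather than hard-coded, but the gating logic follows the same shape.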
The realities of modern data
Inside contemporary business systems, the actual ‘value’ of data itself is continuing to rise. More applications are being developed that draw their intelligence from a wider variety of data sources, and the core volume of data ingestion itself is also (in some cases exponentially) growing. This presents a challenge for the next-generation data organisation as it strives to meet persistent demand for self-service data from every user. A precision-engineered data platform is needed to deliver on these demands.
These ‘data realities’ should help us understand where we are heading. Data is becoming more widely distributed, more stringently regulated and more complex in its form, and there is a growing need for real-time access to it. All of this is happening in the face of elusive data quality controls and a shortage of fully proficient data management skills. Again, a precision-engineered data platform is needed to answer these challenges.
The next-generation data organisation will need to look at these realities and embrace today’s distributed data topology rather than trying to apply traditional data centralisation paradigms. This is not a question of rip and replace (something that should rarely apply in technology, if ever). Instead, organisations should look to build upon the data estate that they currently operate and scale up to more intelligent, software-defined control methods that open the door to more agile ways of working. Once again, a precision-engineered data platform is needed.
Once firms understand that the journey to becoming a fully fledged next-generation data organisation is not a rip and replace exercise, they can become more strategic in how they move forward. To use an old term, don’t go boiling the ocean. Target one application, one workflow or one dataset (which could all be one single entity, or be separate) and start to apply modern data management techniques. At the same time, keep the core of the business operating smoothly in what you might call mode #1 as you enhance that with a more exploration-oriented mode #2.
An eye for AI
Helping to accelerate the next-generation data organisation is a new channel of data management intelligence. Built-in artificial intelligence (AI) and machine learning (ML) techniques augment existing data management professionals to reduce workloads by automating manual processes such as data discovery and matching, model design and query optimisation.
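As one small illustration of the kind of manual work such techniques can automate, the sketch below pairs near-duplicate records from two sources by name similarity using Python's standard-library difflib. The record names and the 0.6 similarity threshold are illustrative assumptions; production matching engines use far richer, multi-field models.

```python
from difflib import SequenceMatcher

# Illustrative records from two hypothetical sources; real systems
# would match on many fields, not just a single name string.
source_a = ["Acme Corp", "Globex Ltd", "Initech"]
source_b = ["ACME Corporation", "Globex Limited", "Umbrella plc"]

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in the range 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Pair each record in source_a with its best candidate in source_b,
# keeping only pairs above an (assumed) 0.6 similarity threshold.
matches = []
for name_a in source_a:
    best = max(source_b, key=lambda name_b: similarity(name_a, name_b))
    score = similarity(name_a, best)
    if score >= 0.6:
        matches.append((name_a, best, round(score, 2)))

for pair in matches:
    print(pair)
```

Even this toy version shows the shape of the task being automated: a human no longer has to eyeball every candidate pair, only review the borderline cases.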
The next-generation data organisation can also use role-based tools to expand the pool of people able to perform data management tasks. It can then involve business domain experts to improve data quality and relevance throughout the business. Using a unified data management suite to manage these processes is a more intelligent way of applying more intelligence to the business.
Although separate point tools have a place, the next-generation choice is to centralise on a unified data management suite rather than adopt individual applications for metadata management, master data management (MDM), data governance, data cataloguing, data modelling, data security and data integration. Those individual tools might be best-in-class, but they fail to offer the holistic, end-to-end intelligence that the thoroughbred next-generation data organisation needs for the days ahead.
Just as the data consumers of the early 20th century found with their logbooks and accounts folios, there is great comfort in knowing where your business stands financially and operationally. The difference now would be unfathomably progressive to those early bookkeepers working on paper by candlelight. But even next-generation data organisations know their history, so sharpen a pencil and be grateful for today.