
Why Big Data is Such a Big Deal in Decision Making


Author: Jay Jesse, CEO of Intelligent Software Solutions

I read an intriguing article last week by Don Peppers, noted author and founding partner at Peppers & Rogers Group. The article, titled Moore’s Law Doesn’t Apply to Business Decisions, talks about the proliferation of new data and notes that we are now producing new information 50 times faster than we did in 2005. As a quick refresher: in its simplified form, Moore’s Law states that computing power roughly doubles every two years.
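For intuition, here is a quick back-of-the-envelope sketch of that doubling in Python (the function name is mine, and the two-year doubling period is the simplified rule of thumb, not an exact constant):

    # Simplified Moore's Law: computing power doubles roughly every two years,
    # so the projected growth factor after t years is about 2 ** (t / 2).
    def moores_law_factor(years, doubling_period=2.0):
        """Approximate relative computing power after `years`."""
        return 2 ** (years / doubling_period)

    print(moores_law_factor(2))   # 2.0  -- one doubling
    print(moores_law_factor(8))   # 16.0 -- four doublings in eight years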

The article discusses the convergence of this growing computing power with the proliferation of data. According to Peppers, “businesses not only have thousands of times more data to work with, but they also have thousands of times more computational power with which to do the work, from massive ‘in memory’ databases to advanced statistical programs and algorithms.”

Like me, you may be asking whether, with this avalanche of data supported by extremely powerful hardware and software, the effectiveness of corporate and governmental decision making can keep pace. If it cannot, we need to ask whether the problem is one of process, technology or people (or perhaps all three).

People.  To quote from Peppers’ article, “The problem facing us now, however, is that the human skills and talents business managers require, in order to make better decisions with all this data and computational power are not improving. Moore’s Law doesn’t make us a thousand times better at reasoning every couple of decades. It doesn’t even make us 10 times better.”

Individuals tasked with making decisions face challenges that haven’t really changed over the years, even decades. They bring their past habits and biases to bear, as well as a reluctance to try new things. The problem may also be one of training or technical sophistication; it is why many of us with smartphones containing dozens of features and applications take advantage of only a handful. And perhaps the vast gap between the pace of improvement in computing and in us humans is why some people predict the “rise of the machines.”

Processes.  Even the hottest new technology (hardware and software), combined with talented and committed personnel, will fall short of its goals if not paired with the business processes necessary to capitalize on the data. We talk about this in terms of the four Vs model: capitalizing on Big Data in a way that allows organizations to benefit from it, rather than being swamped, or having useful data sit disconnected and slumbering away in various databases or storage media. The first three Vs in the value chain are Volume, Variety and Velocity.

Contending with and overcoming the challenges of the first three Vs yields the fourth: Value. By accounting for volume, variety and velocity, we equip our business analysts and leaders to “shrink the haystack,” establishing a data-processing ecosystem that can process content, enable search and let users interact with the data in fruitful ways, rather than leaving them overwhelmed and in the dark. The end result is better decision making through superior insight: threats and opportunities that had previously been invisible in a mass of data are revealed.

Technology.  The role of technology in the decision process is to support the people and processes identified above, in the four key areas shown in the graphic below.

[Figure: Technology Support for Big Data Decision Making]

  • Content Acquisition – The first stage of technology support is to pull all information into a common environment so that it can be pushed through an analysis pipeline.
  • Search/Discovery – Enterprise search is the first half of a “one-two punch” that eventually turns an unmanageable mountain of data into actionable insight. After acquisition, content is indexed and pushed into an optimized search engine, and the index can be tuned for the kinds of queries your data analysts need to make (see the first sketch after this list).
  • Semantic Enrichment – Natural language processing (NLP) is the second half of the one-two punch, fusing what the analyst knows with what he or she doesn’t know and allowing users to continually tune and refine smaller subsets of the data for key factors. The system “learns” as users refine their searches to better target their data domain, constantly improving search effectiveness.
  • Applying Data Perspectives – As refined searches isolate the critical content, data perspectives let you take the data gleaned from targeted queries and roll it up into graphs, time-series databases, geospatial representations and more, revealing connections and trends that were invisible at the beginning of the process (see the second sketch after this list).
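
To make the first two stages concrete, here is a minimal Python sketch of content acquisition and keyword search (every name here is a hypothetical stand-in; a production system would rely on a dedicated enterprise search engine, not a hand-rolled inverted index):

    from collections import defaultdict

    # Content acquisition: normalize records from multiple sources
    # into one common {id, text} shape for the analysis pipeline.
    def acquire(sources):
        docs = []
        for source in sources:
            for i, text in enumerate(source["records"]):
                docs.append({"id": "%s-%d" % (source["name"], i), "text": text})
        return docs

    # Search/discovery: build a simple inverted index over the content.
    def build_index(docs):
        index = defaultdict(set)
        for doc in docs:
            for token in doc["text"].lower().split():
                index[token].add(doc["id"])
        return index

    # AND-style query: documents that contain every search term.
    def search(index, *terms):
        hits = [index.get(t.lower(), set()) for t in terms]
        return set.intersection(*hits) if hits else set()

    sources = [
        {"name": "crm", "records": ["shipment delayed in Denver", "invoice paid"]},
        {"name": "news", "records": ["storm closes Denver airport"]},
    ]
    index = build_index(acquire(sources))
    print(search(index, "Denver"))  # records from both sources surface together

The point is the common shape: once everything flows through one pipeline, a single query reaches data that used to sit in disconnected silos.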
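
And a similarly hedged sketch of the back half of the pipeline, pairing a crude enrichment pass (a word list standing in for real NLP entity extraction) with a “data perspective” that rolls the matches up into a monthly time series (all data and names are invented for illustration):

    from collections import Counter

    # Semantic enrichment: tag each record with the places it mentions.
    # A fixed word list stands in for a trained NLP entity extractor here.
    KNOWN_PLACES = {"denver", "chicago"}

    def enrich(record):
        tokens = set(record["text"].lower().split())
        record["places"] = sorted(KNOWN_PLACES & tokens)
        return record

    # Data perspective: roll matching records up into a monthly time series.
    def monthly_counts(records, place):
        counts = Counter()
        for r in records:
            if place in r["places"]:
                counts[r["date"][:7]] += 1   # bucket by 'YYYY-MM'
        return dict(sorted(counts.items()))

    records = [enrich(r) for r in [
        {"date": "2013-01-05", "text": "shipment delayed in Denver"},
        {"date": "2013-01-20", "text": "storm closes Denver airport"},
        {"date": "2013-02-02", "text": "Chicago hub back to normal"},
    ]]
    print(monthly_counts(records, "denver"))  # {'2013-01': 2} -- a trend, not a haystack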

By aligning and optimizing your people, processes and technology, you will be able to take full advantage of Moore’s Law, reap the fruits of Big Data and make decisions that have a positive impact on shareholders, customers, employees or citizens.

