
The Big Data Revolution

By Richard E. Crandall, PhD, CFPIM, CIRM, CSCP | APICS magazine 23 (2), March/April 2013

Investigating recent developments in collection and analytics

In the time it takes you to read this article, about 10 thousand gigabytes of data will have been collected throughout the world. This article represents about 350 kilobytes of data in its final form; so, if I’ve calculated correctly, about 30 million similar articles will have been created before you finish—unless you’re a particularly fast reader. 
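The back-of-the-envelope arithmetic here is easy to verify. A quick check in Python, using the two figures stated above:

```python
# Check the opening claim: ~10 thousand gigabytes collected worldwide
# while you read a ~350-kilobyte article.
data_collected_gb = 10_000   # gigabytes collected during the reading time
article_size_kb = 350        # this article's size in kilobytes

# Convert gigabytes to kilobytes (decimal units: 1 GB = 1,000,000 KB).
data_collected_kb = data_collected_gb * 1_000_000
equivalent_articles = data_collected_kb / article_size_kb

print(f"{equivalent_articles / 1e6:.1f} million articles")  # 28.6 million articles
```

That works out to roughly 28.6 million articles, which rounds to the "about 30 million" quoted in the text.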

In the APICS Operations Management Now e-newsletter “Joining the Big Data Revolution” (October 26, 2012), APICS CEO Abe Eshkenazi, CSCP, CPA, CAE, highlights the increasing interest of business leaders to participate in what some are calling an “information revolution.” Eshkenazi also calls attention to the “APICS 2012 Big Data Insights and Innovations” report, which identifies the following trends:
  • Many organizations are challenged by data overload and the abundance of trivial information. 
  • Important data are not reaching practitioners in efficient time frames.
  • Despite the sophistication of current systems, data are not always easily accessible. 
  • Current technology is not yet at the level of providing measurable, reportable, and quantifiable data in areas including scheduling, inventory levels, and customer demand across the supply chain.
  • Supply chain data flow includes direct suppliers and customers, but there are gaps in many end-to-end supply chain flow models.

That said, let us tiptoe further into the topic of big data—I use “tiptoe” because one could easily drown in the amount of material available. Edmund Schuster (2012), in a post on career networking site LinkedIn, reports that the study of big data was initiated in 2003 with the advent of the Massachusetts Institute of Technology Data Center Program. Before this, most of the early research beginning in the late 1990s used “data analytics” as the key descriptor. In the past two to three years, however, references to big data have become more frequent.

According to the Leadership Council for Information Advantage (2012), big data is not a precise term. It describes “data sets that are growing exponentially and that are too large, too raw, or too unstructured for analysis using relational database techniques.” Other definitions reinforce the idea that businesses are now collecting immense quantities of data in a myriad of ways. The result is that the traditional methods of data organization and analysis cannot keep pace. Don’t throw away your spreadsheets; instead, look for new ways to manage the oncoming onslaught of data. It’s estimated that organizations effectively use less than 5 percent of their available data (Leadership Council 2012).

How much data are being collected? One author estimates that more than 15 petabytes—about 15 million gigabytes—of new information is collected each day. That’s more than eight times the information in all the US libraries (McKendrick 2010). 

Where do these data come from? The New York Times reports that data are doubling every two years and are generated by not only existing sources, but also entirely new streams, including industrial equipment, automobiles, electrical meters, and shipping crates. The information gathered includes location, movement, vibration, temperature, humidity, and even chemical changes in the air (Lohr 2012).

Stevenson (2012) estimates that about 80 percent of new data are unstructured. They consist of tweets, product reviews, or Facebook “likes,” as opposed to more traditional data. There is a need for the right infrastructure and effective techniques for manipulating the data (Provost 2012).

IBM is a leading provider of big data systems. Zikopoulos et al. (2013) outline four Vs that characterize IBM’s approach to big data:
  • Volume: Aggregate data can be measured in zettabytes, each of which contains about one million petabytes.
  • Variety: This includes efforts to capture the new, unstructured data to complement the existing structured data.
  • Velocity: This describes the rate at which data arrive at the enterprise and are processed or understood.
  • Veracity: This refers to the quality and reliability of the data. 
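The unit relationships behind the volume figures cited in this article are simple powers of ten, and they are easy to confirm:

```python
# Decimal data-unit sizes in bytes, as used in the article's figures.
KB, GB, PB, ZB = 10**3, 10**9, 10**15, 10**21

# A zettabyte contains about one million petabytes, as the Volume point notes.
print(ZB // PB)        # 1000000

# And McKendrick's 15 petabytes per day is indeed about 15 million gigabytes.
print(15 * PB // GB)   # 15000000
```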

One novel way to look at big data comes from Fisher et al. (2012): “Fundamentally, big data analytics is a workflow that distills terabytes of low-value data (e.g., every tweet) down to, in some cases, a single bit of high-value data.”

A joint survey by Industry Week and SAS (2012) asks respondents about their levels of satisfaction with their supply chain management and enterprise resource planning systems. Only about 12 percent are “very satisfied” with these tools. The general conclusion is they do well at reporting the past, but do not help predict the future. Among companies that are very confident in their data, 43 percent have seen increases in their revenues and gross margins, but only 28 percent of those less confident in their data are achieving similar gains. Other reported benefits of data analytics include improving key metrics, deriving more value from assets, and increasing inventory turns.

In analyzing ways to improve supplier performance, Seth (2012) cites the less-tangible benefits of data analytics, such as

  • gaining visibility into the lower tiers of the supply chain to understand performance trends
  • benchmarking supplier performance continuously against peer groups
  • delivering qualified predictions based on trends in aggregate key performance indicator scores
  • assessing future risk within the supply base and proactively taking corrective actions.

One study indicates that companies that incorporate big data and analytics into their operations show productivity rates and profitability 5 to 6 percent higher than their peers (Barton and Court 2012).

In early articles about a new trend or technology, authors tend to report on success stories and explain how the concept or technique can be of value. It is only later that writers begin to point out what actions are necessary to realize the benefits. Fisher et al. (2012) from Microsoft Research outline the following five steps as necessary to big data success:
1. Acquisition. Data are available from a variety of sources, and users must decide which will be useful.
2. Choosing architecture. This involves balancing cost and performance to obtain a platform based around programming abstractions different from those of the normal desktop environment. Most companies don’t have the in-house capability, and cloud sources are expensive.
3. Shaping the data to the architecture. Analysts ensure the data are uploaded in a way that is compatible with the architecture and distribute and partition them appropriately. Transferring between the cloud and a local machine is common, though often time-consuming.
4. Coding. Select a programming language, design the system, decide on an interface, and be prepared for the rapidly changing environment. 
5. Debugging and iteration. This includes the process of looking for errors in code, making changes, and visualizing at multiple scales.
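As a toy illustration only—all data and variable names below are hypothetical, not from Fisher et al.—the distillation idea running through these steps, reducing many low-value records to one high-value result, might be sketched like this:

```python
# Toy sketch of a big data distillation workflow (all names hypothetical).

# 1. Acquisition: decide which raw records are useful.
raw_tweets = [
    {"text": "love the new product", "positive": True},
    {"text": "shipping was slow", "positive": False},
    {"text": "great value", "positive": True},
]

# 2-3. Architecture and shaping: partition the data for parallel processing.
partitions = [raw_tweets[i::2] for i in range(2)]

# 4. Coding: distill each partition, then combine (a map-reduce pattern).
partial_counts = [sum(t["positive"] for t in p) for p in partitions]
positive_share = sum(partial_counts) / len(raw_tweets)

# 5. Debugging and iteration: sanity-check the combined result.
assert 0.0 <= positive_share <= 1.0
print(positive_share)  # 2 of 3 records are positive
```

At production scale the partitions would live on separate cluster nodes, which is exactly where the architecture and data-shaping steps become the dominant cost.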

Other obstacles include too much data, poor data quality, wrong metrics, lack of skills or training, and growing complexity in the marketplace (Robbins 2010).

Big data has some interesting applications. It’s used by airlines to determine flight prices and by retailers to select locations for inventories and merchandise. In addition, big data can help companies identify fraud in real time, evaluate patients for health risks, track changes in consumer sentiment, and explore network relationships on social media sites (Davenport, Barth, and Bean 2012). During the 2012 US presidential election, it was obvious that the television networks dug into big data in an unprecedented way, not only in terms of granularity—breaking down information by state, county, and precinct—but also for classifying information by age, gender, and ethnic background.

Myriad big data analysis techniques are in use today, with more under consideration for future study. A structured approach might include the following levels: standard reports, ad hoc reports, query drilldown or online analytical processing, alerts, statistical analysis, forecasting, predictive modeling, and optimization (Industry Week and SAS 2012). One emerging set of techniques lies in artificial intelligence, with strategies such as natural-language processing, pattern recognition, and machine learning (Lohr 2012).
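To make the forecasting rung of that hierarchy concrete, here is a minimal moving-average forecast, one of the simplest predictive techniques; the demand figures are made up for illustration:

```python
# Minimal moving-average forecast sketch (demand figures are hypothetical).
demand = [120, 135, 128, 140, 150, 145]  # monthly demand history

window = 3
forecast = sum(demand[-window:]) / window  # average of the last 3 months

print(f"Next-month forecast: {forecast:.1f}")  # Next-month forecast: 145.0
```

The higher rungs—predictive modeling and optimization—replace this simple average with fitted statistical models and constrained mathematical programs, respectively.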

Business consequences
Although big data and data analytics are in the spotlight, many companies are unsure of how to proceed (Barton and Court 2012). A 2011 Economist Intelligence Unit survey of almost 600 senior executives revealed the following:
  • There is a strong link between effective data management strategy and financial performance.
  • Extracting value from big data remains elusive for many organizations.
  • Numerous companies struggle with the most basic aspects of data management, such as cleaning, verifying, and reconciling data across the organization.
Companies that are furthest along the data-management competency continuum provide a useful model for how organizations will need to evolve if they are to extract value from data and learn from data-driven insights (Briody 2011).

Leaders in data analytics differ from traditional users in that they pay attention to flow instead of stocks, employ data scientists and product and process developers rather than data analysts, and shift analytics away from information technology and into core business functions (Davenport, Barth, and Bean 2012).

One potential issue is the decision to perform analytics in-house versus in the cloud. Companies without in-house capabilities will have to rely on third-party providers; however, they must select applications wisely due to the high costs. Larger companies that opt for an in-house solution not only have to understand the technology, but also must enact infrastructure and cultural changes, as well as invest substantial resources.

Finally, it’s important to address the potential environmental concerns of big data. Vast, modern-day data warehouses cover acres of land and consume lots of energy. Emerson Network Power (2011) estimates there are more than 500,000 data centers in the world occupying nearly 300 million square feet of space and releasing tons of carbon emissions. Focus must be placed on making data centers smarter and less wasteful. 

Big data is here, but it is in the early stages of its life cycle. Currently, data collection techniques are ahead of conversion techniques, data conversion techniques are ahead of user comprehension, and user comprehension is ahead of general population acceptance. However, as companies grow in experience, develop sharper analysis techniques, and embed big data in their cultures, this all is likely to change.

  1. APICS The Association for Operations Management. 2012. “2012 Big Data Insights and Innovations.”
  2. Barton, Dominic, and David Court. 2012. “Making Advanced Analytics Work for You.” Harvard Business Review. http://hbr.org/2012/10/making-advanced-analytics-work-for-you/ar/1.
  3. Briody, Dan. 2011. “Big Data: Harnessing a game-changing asset.” Economist Intelligence Unit. http://www.sas.com/resources/asset/SAS_BigData_final.pdf
  4. Court, David. 2012. “Putting Big Data and Advanced Analytics to Work.” McKinsey & Company. http://www.mckinsey.com/features/advanced_analytics.
  5. Davenport, Thomas H., Paul Barth, and Randy Bean. 2012. “How ‘Big Data’ Is Different.” MIT Sloan Management Review 54 (1), 43–46.
  6. Emerson Network Power. 2011. “State of the Data Center 2011.” http://www.emersonnetworkpower.com/en-US/About/NewsRoom/Pages/2011DataCenterState.aspx. 
  7. Fisher, D., R. DeLine, M. Czerwinski, and S. Drucker. 2012. “Interactions with Big Data Analytics.” Interactions 19 (3), 50. 
  8. IBM. 2010. “Keeping Students on Track.” http://www-01.ibm.com/software/success/cssdb.nsf/CS/GREE-8F8M76
  9. Industry Week and SAS. 2012. “Supply-Chain Analytics: Beyond ERP & SCM.” http://www.highbeam.com/doc/1G1-268601567.html.
  10. Leadership Council for Information Advantage. 2012. “Big Data: Big Opportunities to Create Business Value.” http://www.emc.com/microsites/cio/articles/big-data-big-opportunities/LCIA-BigData-Opportunities-Value.pdf.
  11. Lohr, Steve. 2012. “The Age of Big Data.” New York Times.  http://www.nytimes.com/2012/02/12/sunday-review/big-datas-impact-in-the-world.html. Accessed November 5, 2012.
  12. McKendrick, Joe. 2010. “Big Data, Big Issues: The Year Ahead in Information Management.” http://www.dbta.com/Articles/Editorial/Trends-and-Applications/Big-Data2c-Big-Issues---The-Year-Ahead-in-Information-Management-71972.aspx.
  13. Provost, Taylor. 2012. “No Fad, Big Data is Real Deal.” CFO. http://www3.cfo.com/article/2012/11/technology_sloan-summit-big-data-predictive-analytics-netsuite-kpmg-infrastructure?_mid=99002&_rid=99002.51300.10225.
  14. Robbins, James. 2010. “Using More Analytics Can Help Industrial Manufacturers.” Industry Week. http://www.industryweek.com/articles/using_more_analytics_can_help_industrial_manufacturers_23248.aspx.
  15. Schuster, Edmund. 2012. “Big Data is a Big Reality.” http://ingehygd.blogspot.com/2012/02/big-data-is-big-reality.html.
  16. Seth, Vineet. 2012. “The Next Frontier of Competitive Wars in Supplier Management.” Industrial Distribution, November/December 2012, 49–51.
  17. Stevenson, Tom. 2012. “Why Big Data Offers Big Opportunity.” Investment Week. http://www.investmentweek.co.uk/investment-week/feature/2214986/why-big-data-offers-big-opportunity.
  18. World Economic Forum. 2012. “Big Data, Big Impact: New Possibilities for International Development.” http://www3.weforum.org/docs/WEF_TC_MFS_BigDataBigImpact_Briefing_2012.pdf.
  19. Zikopoulos, Paul C., Dirk de Roos, Krishnan Parasuraman, Thomas Deutsch, David Corrigan, and James Giles. 2013. Harness the Power of Big Data: The IBM Big Data Platform. New York: McGraw-Hill.

For a free annotated bibliography of over 60 articles on this topic, contact the author at crandllre@appstate.edu.

Richard E. Crandall, PhD, CFPIM, CIRM, CSCP, is a professor at Appalachian State University in Boone, North Carolina. He may be contacted at crandllre@appstate.edu.

Further your knowledge about big data with the new APICS Big Data Folio: Exploring the Big Data Revolution. APICS members also can download the APICS 2012 Big Data Insights and Innovations report for free. To learn more, visit apics.org/research.
