While the quantity of information is increasing vastly every day, the amount of useful information almost certainly isn't. Most of it is just noise, and the noise is increasing faster than the signal. There are so many hypotheses to test and so many data sets to mine, but a relatively constant amount of objective truth.
Nate Silver’s insightful book, The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t, highlights for us just how intrinsically fallible our human judgment is, and how our relatively meager range of experience colors our interpretation of the evidence around us—even without our knowing it is doing so.
Our instinctual shortcut may deceive us
Silver reminds us that “[t]he instinctual shortcut that we take when we have ‘too much information’ is to engage with it selectively, picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices and enemies of the rest.”
As a result of our predilection for the familiar (and, therefore, the seemingly safe), we “face danger whenever information growth outpaces our understanding of how to process it.” So, “unless we work actively to become aware of the biases we introduce, the returns from additional information may be minimal—or diminishing,” according to Silver.
Rescued by computers
So, that’s why we use computers! There’s clearly too much data for us to process manually. We use computers to sift, filter, organize, relate and present our big data to us so that we don’t have to rely on our human intellect alone, I hear you say.
But “the numbers have no way of speaking for themselves,” Silver warns. “We speak for them. We imbue them with meaning. Like Shakespeare's Caesar, we may construe them in self-serving ways [to our detriment] that are detached from their objective reality.”  Remember! Even our computer programs are designed and developed by humans with inherent biases about which data they will process, as well as how they will process it, filter it, organize it, and present it. In doing so, the data are imbued with meaning by the software developers before you even have an opportunity to take it all in.
Unfortunately, we humans tend to “regard computers as astonishing inventions, among the foremost expressions of human ingenuity…. And we expect computers to behave flawlessly and effortlessly, somehow overcoming the imperfections of their creators.” In fact, Silver opines: “[W]e view the calculations of computer programs as unimpeachably precise and perhaps even prophetic.” 
Evidence of our unwarranted faith in computers
I frequently remind my clients of a simple fact when they are considering typical manufacturing and supply chain management computer systems. I tell them:
Okay. You are about to embark on the implementation of a very complex system. To get and keep this system operating, you will need to supply it with—ultimately—thousands of factors (when all of the SKUs, routings, bills of material, work centers, and more are considered) upon which the computer system will base its calculations. You will be asked for waste factors, work center efficiency factors, set-up times, run times, lead times, queue times, wait times, consumption rates, and much more. In many cases, the numbers you provide will be averages. In just as many cases (I believe), the numbers you provide will be best guesses.
All of these thousands of factors that you have fed into the machine will be “crunched,” and the system will produce reports like shop floor schedules (to the minute, perhaps) and material requirements plans (for exact quantities that you should produce, where, and precisely when). The system will also calculate your “costs” to buy or make hundreds or thousands of components and finished goods.
Here is the problem: Even though you fed the system with “averages” and “best guesses” at the outset, you are now going to believe that the data produced in dozens of plans and reports are “precisely correct.” If the system says it cost you $127.19 to produce a unit of SKU 1001, you are going to believe it—and probably act on it as a “fact.” If the system tells you that you need to buy 1,507 units of SKU 4712 into location ‘A’ by 07/19/20XX, you will probably believe it to be precisely correct and attempt to act accordingly.
But if you supplied the system with averages and best guesses, the results cannot possibly be as precise as you are going to believe them to be!
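The gap between uncertain inputs and precise-looking outputs is easy to demonstrate with a short simulation. Here is a minimal, hypothetical sketch (the input figures and their variability ranges are invented for illustration, not drawn from any real system): compute a simple cost roll-up from “average” inputs, then let each input vary within a plausible range and watch how wide the answer really is.

```python
import random

random.seed(42)

# Hypothetical "average" inputs for one unit of a product (invented figures).
avg_run_time_hrs = 0.75    # average run time per unit
avg_labor_rate = 42.00     # average labor rate, $/hour
avg_material_cost = 88.50  # average material cost, $/unit
avg_waste_factor = 0.04    # 4% average scrap

def unit_cost(run_time, labor_rate, material, waste):
    # A deliberately simple cost roll-up: (labor + material), grossed up for scrap.
    return (run_time * labor_rate + material) * (1 + waste)

# The deterministic answer the system would report, to the penny:
point_estimate = unit_cost(avg_run_time_hrs, avg_labor_rate,
                           avg_material_cost, avg_waste_factor)

# Now acknowledge that each "average" is uncertain: let times, rates, and
# material cost vary +/-15%, and the scrap factor +/-50%, and re-run 10,000 times.
samples = []
for _ in range(10_000):
    samples.append(unit_cost(
        avg_run_time_hrs * random.uniform(0.85, 1.15),
        avg_labor_rate * random.uniform(0.85, 1.15),
        avg_material_cost * random.uniform(0.85, 1.15),
        avg_waste_factor * random.uniform(0.5, 1.5),
    ))

samples.sort()
low = samples[len(samples) // 20]    # ~5th percentile
high = samples[-(len(samples) // 20)]  # ~95th percentile
print(f"Point estimate: ${point_estimate:.2f}")
print(f"90% of simulated outcomes fall between ${low:.2f} and ${high:.2f}")
```

The single to-the-penny figure is real arithmetic, but the simulated spread around it is tens of dollars wide. The precision of the printout says nothing about the accuracy of the inputs.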
Silver expresses this wisdom in a simple sentence:
Technology is beneficial as a labor-saving device, but we should not expect machines to do our thinking for us. 
Traditional ERP and MRP systems answer the wrong question
Beginning with our inputs of averages and best guesses, traditional ERP and MRP systems attempt to answer, very precisely, the following question:
What should we make, buy or transfer and when should we make it, buy it or transfer it?
But, because the answers to those questions—while precise—can never be assured of being accurate, the real question that systems need to answer is this one:
Given what we know at this moment, how likely is it that our stock buffers, time buffers, and capacity buffers will adequately protect the FLOW of relevant materials in our supply chain? 
If we know the answer to that question for each strategically placed buffer in our supply chain, we automatically are able to prioritize actions that will maximize our opportunities to profit from FLOW across our supply chain.
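One simple way to make this prioritization concrete is to rank buffers by how deeply current demand has penetrated them, rather than by due dates or order sizes. The sketch below is hypothetical: the SKUs, zone boundaries, and net flow positions are invented, and real buffer sizing involves usage rates, lead times, and variability factors not modeled here.

```python
# Hypothetical buffer-status sketch: rank stock buffers by penetration,
# i.e. how much of each buffer has been consumed by net demand.
# All SKUs and figures below are invented for illustration.

buffers = {
    # sku: (top_of_red, top_of_yellow, top_of_green, net_flow_position)
    # net_flow_position = on hand + open supply - qualified demand
    "SKU 1001": (100, 300, 400, 350),
    "SKU 4712": (80, 240, 320, 70),
    "SKU 2205": (50, 150, 200, 140),
}

def buffer_status(top_of_red, top_of_yellow, top_of_green, net_flow):
    """Return (zone, penetration) where penetration 1.0 = buffer fully consumed."""
    penetration = 1 - net_flow / top_of_green
    if net_flow <= top_of_red:
        zone = "RED"      # flow is in danger; act now
    elif net_flow <= top_of_yellow:
        zone = "YELLOW"   # watch closely; replenishment should be underway
    else:
        zone = "GREEN"    # flow adequately protected
    return zone, penetration

# Prioritize by deepest buffer penetration.
ranked = sorted(buffers.items(),
                key=lambda kv: buffer_status(*kv[1])[1],
                reverse=True)

for sku, params in ranked:
    zone, pen = buffer_status(*params)
    print(f"{sku}: {zone:6s} {pen:5.0%} of buffer consumed")
```

Answering “how likely is flow protected?” for every buffer, every day, turns thousands of precise-looking order suggestions into a short, ranked list of the few places that actually need attention.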
Simply. Effectively. Accurately.
We can help you get there. Please leave your comments below, or feel free to contact us directly, if you prefer.
Silver, Nate. The Signal and the Noise: Why So Many Predictions Fail—But Some Don't. New York: Penguin Books, 2015.
The production and shipping of irrelevant materials (stuff for which there is no known or foreseeable actual demand at the present time) is a waste of time, energy, money, and other resources, regardless of how efficiently it is done.