It is purely a coincidence that the eyes of these decision-makers immediately roll up in their heads and they lose consciousness as the pressure of all that data pushes all the blood from their brains.
Our research methods aren't advanced enough, though, to explain why those super-quantified analytical processes result in decisions that graph in patterns consistent with marks made by a monkey poking a stick at a wall.
It is clear, however, that our entire economy and lifestyle depends on "thinking" machines that don't think at all in the way a human would perceive it.
They don't daydream about something else for seven hours 58 minutes per day and then make complex decisions based on what will get other annoying people out of their offices the quickest.
IBM's intentions are good – to produce computers able not only to learn, but to adapt what they learn to new situations or synthesize new answers from existing data when the reality they sense doesn't match the data they have been given.
The next step would be to make the processors dense enough to mimic the number and interconnectedness of neurons in the human brain, then advance self-adaptive programming to take good enough advantage of the architecture to expand beyond artificial intelligence into the real kind.
I'm not sure we humans would recognize that when it happens, though.
We're designing machines to think in detail about every shred of sensory data going on around them, using defined and calculated laws of physics to predict what each object or sensory characteristic indicates about the rest of the environment.
Machines like that would be working for humans, whose analytical priorities evolved through the need to identify a banana as something to eat and a lion as something to be avoided. That's not a very complex seek/avoid algorithm.
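Just how uncomplicated that algorithm is can be sketched in a few lines. This is a playful, hypothetical illustration, not anything from SyNAPSE; the function name and stimuli are made up for the joke:

```python
def primate_policy(stimulus: str) -> str:
    """A (hypothetical) sketch of evolution's seek/avoid algorithm:
    approach food, flee predators, ignore everything else."""
    if stimulus == "banana":
        return "seek"   # food: approach it
    if stimulus == "lion":
        return "avoid"  # predator: get away, fast
    return "ignore"     # anything else: resume daydreaming

print(primate_policy("banana"))  # → seek
print(primate_policy("lion"))    # → avoid
```

A few million years of optimization, and that's roughly the decision engine the machines will be reporting to.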
We might need two SyNAPSE machines: one to advance to the point of real intelligence and the other to recognize what it's done and translate the news into monkey-speak so we can understand what's happened.