IBM finds a way to completely ruin computers

Cognition and sensory analysis are fine, but don't burden artificial intelligence with the human variety

You've probably already seen a lot of gushing coverage about a huge breakthrough in artificial intelligence:

The new processors, which go by the name SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), are a huge achievement for specialists in processor design and for the development of more complex artificial intelligence.

It's a huge potential disaster for the rest of us.

The SyNAPSE chips have only 256 neuron-like nodes and 262,144 programmable synapses, compared with billions of neurons in the human brain, and are only a small step toward computing that genuinely resembles the processes of a human brain, according to Dharmendra Modha, director of the IBM Cognitive Computing division that produced the chip.
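For a sense of what "neuron-like nodes" and "programmable synapses" mean in practice, here is a minimal, purely illustrative sketch in Python (assuming NumPy): a crossbar of 1,024 input lines feeding 256 integrate-and-fire units works out to exactly 1,024 x 256 = 262,144 weights, the synapse count cited above. Everything in it, from the ToyCore name to the threshold and the Hebbian-style adapt step, is invented for illustration and says nothing about how IBM's actual silicon is organized.

```python
import numpy as np

# Toy integrate-and-fire "core": 1,024 input axons fully connected to 256
# neuron-like nodes gives 1,024 * 256 = 262,144 programmable synapses.
# Illustration only; not IBM's actual SyNAPSE design.
class ToyCore:
    def __init__(self, n_axons=1024, n_neurons=256, threshold=10.0, leak=0.5):
        rng = np.random.default_rng(0)
        self.weights = rng.normal(0.0, 0.5, size=(n_axons, n_neurons))  # synapses
        self.potential = np.zeros(n_neurons)  # membrane potential per neuron
        self.threshold = threshold
        self.leak = leak

    def step(self, input_spikes):
        """Advance one tick given a 0/1 vector of input spikes; return output spikes."""
        self.potential += input_spikes @ self.weights  # integrate weighted input
        self.potential -= self.leak                    # constant leak each tick
        fired = self.potential >= self.threshold       # neurons crossing threshold spike
        self.potential[fired] = 0.0                    # reset neurons that fired
        return fired.astype(int)

    def adapt(self, input_spikes, output_spikes, rate=0.01):
        """Crude Hebbian-style 'rewiring': strengthen synapses between co-active pairs."""
        self.weights += rate * np.outer(input_spikes, output_spikes)

core = ToyCore()
spikes_in = (np.random.default_rng(1).random(1024) < 0.1).astype(float)
spikes_out = core.step(spikes_in)
core.adapt(spikes_in, spikes_out)
```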

According to a description in MSNBC's story, the brain-like chip "not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment — all while rivaling the human brain's compact size and low power usage."

IBM and MSNBC are right in assuming human brains do this to a certain extent every day. They don't mention that human brains that develop intense, psychotic needs to kill all humans do the same thing, but far more frequently, to keep up with dynamic changes in reality demanded by the voices in their heads.

Even in completely sane humans, the ability to filter and misinterpret sensory data protects the brain from sensory overload and frees it to choose the reality it sees and ignore evidence that its choice could be not only a mistake, but a disaster.

If IBM built a computer that could mimic that process, it would not be a computer like a human brain; it would be a schizophrenic computer.

What it's actually built is a processor able to look at several streams of unrelated data at once and identify them as part of an equation it's trying to solve.

That's a great addition to the capabilities of computers.

Making them work more like human brains almost certainly wouldn't be.

I'm not quite old enough to remember the days when business executives managed their communications and organizational needs with a type of liveried servant class called a "secretary," and all the back-office processes – accounting, payroll, capacity and supply chain planning and management – had to be handled by superclusters of wetware platforms known as "clerks," "operations and management staff" or "office workers."

Judging from the reruns I saw on TV as a kid, none of it worked very well. Someone was always gumming up the chocolate-candy manufacturing line, or prompting some errant relative to sneak into the advertising agency for some covert witchcraft, or carrying empty briefcases around just to provide activity in the background when some mid-level executive's kids came to visit.

Not efficient.

And the entertainment value? Before the computer age really began, a top-quality massively multiplayer role-playing game required the mobilization of entire economies, the manufacture of an astonishing volume of both armed and unarmed transport, and the commitment of years – not six or 12 or 36 hours here and there until you've mastered Call of Duty: World at War.

That's just wasteful.

In business, decisions were made based on "data" gathered with clipboards by supervisors or managers whose job performance was judged by the optimism of the numbers they presented to the boss. Numbers the boss would have a really hard time verifying until after that particular manager was fired for reasons that were usually a lot more interesting and personal than fudging a few production numbers to look as if one division had made quota.

Now big decisions are made using detailed reports generated by sophisticated business-process software running as secure, isolated workloads on virtual servers within a cloud-based infrastructure. They use scrubbed and current operational data from facilities around the world, correlated and compared to performance during previous quarters and metrics reflecting industry standards and the result of competitive analyses.

They're presented to executives as complete, data-centric analyses just in time to enable informed, documentable, quantifiably justified decision-making.

It is purely a coincidence that the eyes of these decision-makers immediately roll up in their heads and they lose consciousness as the pressure of all that data pushes all the blood from their brains.

Our research methods aren't advanced enough, though, to explain why those super-quantified analytical processes result in decisions that graph in patterns consistent with marks made by a monkey poking a stick at a wall.

It is clear, however, that our entire economy and lifestyle depends on "thinking" machines that don't think at all in the way a human would perceive it.

They don't daydream about something else for seven hours and 58 minutes a day and then make complex decisions based on what will get other annoying people out of their offices the quickest.

IBM's intentions are good – to produce computers able not only to learn, but to adapt what they learn to match new situations or synthesize new answers from existing data when the reality they sense doesn't match the data they have been given.

The next step would be to make the processors dense enough to mimic the number and interconnectedness of neurons in the human brain, then advance self-adaptive programming far enough to take advantage of that architecture and expand beyond artificial intelligence into the real kind.

I'm not sure we humans would recognize that when it happens, though.

We're designing machines to think in detail about every shred of sensory data going on around them, using defined and calculated laws of physics to predict what each object or sensory characteristic indicates about the rest of the environment.

Machines like that would be working for humans, whose analytical priorities evolved through the need to identify a banana as something to eat and a lion as something to be avoided. That's not a very complex seek/avoid algorithm.
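To make that point concrete, here is a tongue-in-cheek sketch of roughly how simple that seek/avoid heuristic is. The categories and rules are invented for illustration; it is not a model of anything.

```python
# Tongue-in-cheek sketch of the seek/avoid "algorithm" evolution handed us.
# Categories and rules are invented for illustration only.
SEEK = {"banana", "fruit", "water"}
AVOID = {"lion", "snake", "fire"}

def primate_policy(thing_spotted: str) -> str:
    if thing_spotted in SEEK:
        return "approach and eat"
    if thing_spotted in AVOID:
        return "run away"
    return "ignore it and go back to daydreaming"

print(primate_policy("banana"))  # approach and eat
print(primate_policy("lion"))    # run away
```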

We might need two SyNAPSE machines: one to advance to the point of real intelligence and the other to recognize what it's done and translate the news into monkey-speak so we can understand what's happened.
