Whether the world likes it or not, programmers are people too. They eat, drink, sleep, dream and work just like everyone else. A programmer looking at the St. Louis Arch will almost certainly see something that is taller than it is wide, just like everyone else. Even when it is known that the arch is as wide as it is high, the eyes tell a different story. We cannot help the illusions we fall for.
I have written before about the inevitability of certain types of illusion, in particular purely logical illusions: illusions that exist solely within our minds, requiring no aid from senses such as sight or hearing.
Reading the always-interesting Steve Vinoski on the perennial subject of RPC and its inherent flaws, it occurred to me that perhaps RPC is an example of a programmer's illusion: a fallacy our minds trip us up on. Something which, like the St. Louis Arch illusion, is not amenable to correction by the mere acquisition of knowledge to the contrary.
There has to be some reason why a technique long known to have severe problems in engineering terms steadfastly refuses to go away. Perhaps this is why. Other explanations have been put forward. Steve argues cogently that the sheer convenience of RPC overrides our higher faculties, the ones that worry about correctness, reliability, and so on. Programmers are people too. We like our creature comforts, and RPC sure is comfortable.
It must be said, too, that, being human, we can be suckered in by snake oil. We so want RPC to be okay that we will spend money on things that claim to make it reliable. Wizards that utter "Shazam!" over our systems and make the head-hurting issues just go away. Gosh, wouldn't that be cool?
My favorite explanation for the resilience of the RPC illusion is something I have also written about before: linguistic determinism. If you train a programmer by exposing him or her, day in day out, to a set of languages that are all based on a complexity management technique called (variously) a procedure, a function, or a method, it can hardly be surprising that their world-view revolves around that notion. You invoke one of these things. You don't worry about the details of what goes on inside. When it finishes, your code keeps on trucking. Very convenient.
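The illusion is easy to see in code. Here is a minimal sketch (the names `get_balance`, `rpc_get_balance`, and `RpcTimeout` are invented for illustration, not from any particular RPC framework): a local call and an RPC stub share the same calling convention, but only one of them can leave you not knowing whether the call ran at all.

```python
import random

def get_balance(account_id):
    # Local call: returns or raises, synchronously and reliably.
    return 100

class RpcTimeout(Exception):
    pass

def rpc_get_balance(account_id, fail_rate=0.3):
    # Remote call hiding behind the same signature. The network can drop
    # the request, drop the reply, or stall indefinitely; on a timeout the
    # caller cannot tell whether the operation ran zero times or once.
    # Here the failure is simulated with a random draw.
    if random.random() < fail_rate:
        raise RpcTimeout("no reply: did the call run zero times or once?")
    return 100

# The two call sites are indistinguishable, which is the convenience
# Steve describes, and also the trap.
balance = get_balance("acct-42")
remote_balance = rpc_get_balance("acct-42", fail_rate=0.0)
```

The point is not that the stub is badly written; it is that the function-call syntax itself erases the difference between the two cases.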
Perhaps therein lies the biggest question for twenty-first-century computing. Is the very concept of a function/procedure/method the root of our ills? Do we need to back up the truck and admit that the simplicity of the function idea, borrowed from mathematics, can only be pushed so far in a distributed, asynchronous, unreliable (not to mention pernicious) world?
It seems to me we are at a crossroads that curiously mirrors the situation in physics. The synchronous, reliable function call paradigm works brilliantly in the small but doesn't work in the large. In the large, time is complex and relative. Einstein rules. In the small, time is simple and absolute. Newton rules. We know that absolute time is an illusion but it works most of the time.
It is convenient but not correct, just like RPC. Is "most of the time" good enough?