We software types are not the only ones with codebases. In fact, our own lives and behaviors are governed by another, older one all the time, in some sense like the objects we manipulate in Python or Java (though some of us are buggier than others in our operation).
To me, the U.S. Code is a fascinating analogue to software (as are legal codes in general) on a few different levels:
Its rules must be formatted according to a precise syntax in order to achieve validity and applicability – which is why the notion of “legalese,” a sort of programming language, exists.
It must be maintained with respect to legacy material – though the stakes in the “evolution vs. revolution” choices faced by nation-states as they update their products according to changing times are generally higher than those faced by software companies.
It used revision control long before software did – open up any statute and you’ll see things like “1982-Pub. L. 97–295 substituted ‘Administration’ for ‘Admministration’ after ‘Atmospheric’.”
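That amendment reads exactly like a patch applied to a text file. As a playful sketch (the statute fragment and the helper function below are my own invention, not from any real amendment-processing system), a "substituted X for Y" amendment can be modeled as a find-and-replace commit:

```python
# Playful sketch: a statutory amendment as a revision-control patch.
# The statute text and helper are hypothetical illustrations.

def apply_amendment(text: str, new: str, old: str) -> str:
    """Apply a "substituted NEW for OLD" style amendment."""
    if old not in text:
        raise ValueError(f"amendment target {old!r} not found in statute")
    return text.replace(old, new)

statute = "the National Oceanic and Atmospheric Admministration"
statute = apply_amendment(statute, "Administration", "Admministration")
print(statute)  # -> the National Oceanic and Atmospheric Administration
```

Note that, like a well-behaved patch tool, the sketch refuses to apply an amendment whose target text no longer exists.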
Most relevant to my topic today, the code is developed and maintained via a process that is every bit as algorithmic as its product.
Anyone who has sat through a meeting of any assembly run according to parliamentary procedure can attest to this last point. It is not, shall we say, a mode of casual conversation.
Robert’s Rules – motioning, tabling, making points of order or information, etc. – would be farcically complex and arcane if used to plan an outing or conduct an art class. For the production of governance, however, they have proven reliable for well over a century.
And reliability is the name of the game. Along with speed, reliability is the very reason we use computers in the first place. Their mechanisms are simple and intelligible (compared to those of human psyches, at any rate, and certainly compared to human groups), and we can depend on them almost like physical laws: Garbage in, garbage out. Set and forget. No fuss, no muss.
No for-loop will ever terminate early out of spite or incomprehension. No file access will ever fail because of ideological rigidity (Rev. Benek notwithstanding). Software is the very epitome of predictability.
The really interesting thing, though, is that when humans need to be reliable (and have enough resources to determine how), we ourselves become algorithmic. Armies march in lockstep according to standard drills. Legislators fit their content contributions into a programmatic framework and acquiesce to its limits. Scientists follow standard protocols. And software companies practice software development methodologies.
In the world of Web usability, we constantly face the need to reconcile two distinct ways of being in the world: the mechanical and the human.
Do we somehow make humans remember byzantine password-validation rules, or do we somehow make computers recognize users by their physical attributes?
Do we require humans to confirm each potentially destructive action they take, or do we require computers to be willing and able to revert such actions?
Do we make users deal with search strings that must match content exactly, or do we make computers deal with search strings that match content inexactly?
None of these options is “natural” to its respective constituency (human or machine) prior to the influence of usability engineering.
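The third trade-off – teaching the computer to tolerate inexact search strings – is now routine engineering. A minimal sketch using Python's standard-library difflib (the document list here is invented for illustration):

```python
import difflib

# Hypothetical corpus of searchable titles.
documents = [
    "parliamentary procedure",
    "software development",
    "usability engineering",
    "revision control",
]

def fuzzy_search(query: str, docs: list[str], cutoff: float = 0.6) -> list[str]:
    """Return documents that loosely match the query, best match first."""
    return difflib.get_close_matches(query, docs, n=3, cutoff=cutoff)

# A misspelled query still finds its target.
print(fuzzy_search("usabillity enginering", documents))
```

The computer, not the human, absorbs the burden of inexactness – which is precisely the kind of humanization of the machine that usability argues for.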
And yet, in so many ways, humans have been volunteering to mechanize themselves for millennia. Bureaucracies, football teams, sometimes orchestras, construction crews and more all organize themselves so that inputs A, B and C by sources D, E and F are processed by units G, H and I into outputs J, K and L. And the humans like me who produce software are no different.
In the example of a software company using user-centered design and the Scrum methodology, everything from our corporate mission statement to the personas we create for our designs to the individual conditions of satisfaction for a user story must be formatted so that it validates as input into one or more processes. Redundancy or ambiguity in these inputs results in error and possible breakage within our human system just as surely as the analogous conditions in code do in their cybernetic one. Just so, efficiency and wisdom in either result in program success.
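As a toy illustration of inputs that must validate before the human process will accept them (the story format and field names below are hypothetical, not prescribed by Scrum itself):

```python
# Toy sketch: a user story must "parse" before it enters the sprint,
# much as code must parse before it can run. Fields are hypothetical.

REQUIRED_FIELDS = {"as_a", "i_want", "so_that", "conditions_of_satisfaction"}

def validate_story(story: dict) -> list[str]:
    """Return validation errors; an empty list means the story compiles."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - story.keys())]
    if not story.get("conditions_of_satisfaction"):
        errors.append("at least one condition of satisfaction required")
    return errors

story = {
    "as_a": "site visitor",
    "i_want": "to reset my password",
    "so_that": "I can regain access to my account",
    "conditions_of_satisfaction": ["reset link is emailed within one minute"],
}
print(validate_story(story))  # -> []
```

An ambiguous or incomplete story fails validation and bounces back to its author – the human system's equivalent of a compile error.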
As a usability engineer, I find reason for optimism in all this. To me, it means that a capacity for mechanization exists within humanity (which makes sense, since we are the ones who began mechanizing our tools in the first place). In turn, that seems to mean the gap that Web usability seeks to bridge between human and machine is not as formidable as it might be otherwise.
Too often UI has been the mechanization of human beings against or irrespective of their wills. Usability has generally striven for the humanization of machines instead, reasoning that machines should serve humans rather than the reverse. It may be, however, that compassionately supporting humanity’s demonstrated tendency to mechanize itself for its own purposes is a legitimate and fruitful role for UI to play.
This article is published as part of the IDG Contributor Network.