Racing micromice set record for maze solving, raise questions about robot intelligence

Task-focused robots are efficient; what do we do when they're smart, too?

Building a better mousetrap is supposed to be one of the no-fail ways to get the world to beat a path to your door. But what if, while you're building a better mousetrap, someone else is building a better mouse?

Competitors in the annual All Japan Micromouse contest, one of a micro-industry of contests designed to give new or amateur robotics and electronics designers a forum to show off their latest ideas, have been working on that for 32 years.

The goal, as with tests on real mice, is to negotiate a maze as quickly as possible.

Micromouse learning its way

The micro-mice – all of them autonomous, not remote-controlled as are the highest-end robots in military use – get practice time on the 16-by-16-cell maze to learn the way through and scout back to find out whether the first path they found is really the fastest one.

They compete to see which can make the fastest run from start to finish.
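The standard planning technique in micromouse circles is a flood-fill: propagate distances outward from the goal cell, then walk downhill along the distance map. A minimal sketch of the idea in Python, assuming the maze is represented as an adjacency map (`walls[cell]` lists the neighbouring cells not blocked by a wall) – this is an illustration of the general algorithm, not any particular competitor's code:

```python
from collections import deque

def flood_fill(walls, goal):
    """Breadth-first distance map from every reachable cell to the goal.
    walls[cell] is an iterable of neighbouring cells with no wall between."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        cell = queue.popleft()
        for nxt in walls.get(cell, ()):
            if nxt not in dist:
                dist[nxt] = dist[cell] + 1
                queue.append(nxt)
    return dist

def shortest_path(walls, start, goal):
    """Walk downhill on the distance map: each step moves to a
    neighbour strictly closer to the goal, so the walk terminates."""
    dist = flood_fill(walls, goal)
    if start not in dist:
        return None  # goal unreachable from start
    path = [start]
    while path[-1] != goal:
        path.append(min(walls[path[-1]],
                        key=lambda c: dist.get(c, float("inf"))))
    return path
```

In a real mouse the adjacency map is incomplete at first and is refilled after every exploration run as new walls are discovered, which is why the practice runs matter: the "fastest path" is only as good as the map behind it.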

This year's winner, Ng Beng Kiat's Min7, finished with a time of 3.982 seconds – the first winner to break four seconds.

Micromouse competition aficionados describe Ng Beng Kiat, whose list of prizes won is as long as his bullet-list history of the development of his own mice, as the king of the sport, though not a terribly remote one. He shares his engineering secrets pretty easily.

Racing micromouse

The micromouse competition highlights a split in robotics design: is it better to design devices focused on a particular set of functions, whose shape and coding are optimized for those functions?

Or is it better to design general-purpose robots that are often humanoid but suffer from our inability to build smart enough central processors or adaptive-enough artificial intelligence software to let them operate as human-servant analogues?

My vote is for the non-human, narrow-focus robots, at least for now.

It's far better to build robots that are very good at welding or vacuuming or mouse-maze-running than it is to build robots that aren't much good for anything but have a "face" we can relate to and humanoid form we can recognize.

We already have plenty of humans who aren't able to function at the level required of them; we don't need to build less capable, less human versions to fail more expensively and spectacularly.

At some point, though, even the "stupid," task-focused robotic systems are going to begin developing capabilities greater than those we thought we built in.

At that point we're going to have to figure out how to deal with real artificial intelligence and the complaints it will undoubtedly raise about how it was developed, how it is treated and what rights, if any, we give it.

It's not an issue we have to deal with right now. We're still a couple of decades away from robotic systems that are really good at handling even repetitive patterns of tasks within relatively unstructured areas, let alone systems able to set their own goals and existential agendas.

As a post in FutureConscience points out, however, humans are interested in robotics for more than an economic calculation of what robots can do for us. We're also fascinated with the development of intelligence analogous to ours and with finding ways to avoid or short-circuit our own shortcomings.

Those impulses, as well as the potential economic benefit of automating complex tasks that currently can be done only by humans, will continue to push the ability of robots to both do and think far beyond the level we demand to accomplish specific tasks.

Before it actually happens, we're going to have to figure out in more than a theoretical way when we'll consider robots to have achieved enough intelligence to be considered beings in their own right, and whether we're going to be comfortable building them, breaking them or running them through mazes for no benefit of their own.

We will also have to decide, once we can determine where sentience actually begins and are able to design it, whether we want to create artificial beings that present so many questions about intelligence and individuality and the simple right to exist.

It's possible, even likely, that once we've been able to build real intelligence into an artificial being, we'll decide we prefer the Micromouse approach after all.

Human societies that assigned one class to be the servants and another the masters have always run into major conflicts; humans aren't naturally compliant or satisfied to be told they have a place and have to stay in it.

What are the odds we would build a servant class that would raise the same questions, when all we want is something that will do the dishes without complaining?

So is it possible to create a new aesthetic not just for programmed robots but even for new forms of sentience? One that can be filled with identity and creative expression (I hesitate to use the word 'personality' here for obvious reasons) that we could relate to in a way that does not constantly assess the level of similarity to ourselves. As we strive to make robots more like us, is an opportunity being missed to widen our emotion of empathy away from its genetic imperative? Of course, we find it difficult to imagine possibilities because our experience of sentience is relatively limited. But that doesn't mean the potential isn't there to create new links and expand our concepts and cognitive domains…

…As we begin to more and more closely assimilate artificial intelligence with highly advanced engineering we should not see ourselves as limited by the biological necessities that have previously created a boundary for physical existence and the expression of identity…

…We can no longer simply presume that our understanding of being, based on biological cognitive architecture, is equipped to deal with these newly changing boundaries of sentience never before encountered. In the relatively near future, we will need to widen our capacity to recognize individuality, rationality and sentient existence whilst finding new ways to enter into relationships with entities that we may have been the progenitors of; but that will eventually be our peers.

FutureConscience, Dec. 11, 2011.

Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
