Andy Hunt: Secrets of a Rock Star Programmer (Part 1)
SOURCE: This article is excerpted from Secrets of the Rockstar Programmers: Riding the IT Crest by Ed Burns (McGraw-Hill, 2008), with permission from the McGraw-Hill Companies. All rights reserved. McGraw-Hill makes no representations or warranties as to the accuracy of any information contained in the McGraw-Hill material, including any warranties of merchantability or fitness for a particular purpose.
The speakers who frequent the IT lecture circuit are a gold mine of rock star programmer talent. The premier series on that circuit is Jay Zimmerman’s “No Fluff, Just Stuff” (www.nofluffjuststuff.com), at which I had the pleasure of speaking in Reston, Virginia, in the fall of 2006. The keynote speaker at this particular conference was Andy Hunt, who presented his engaging “Refactoring Your Wetware” talk. Just ten minutes into Andy’s presentation, I knew I wanted to share his insights in an interview for this book. He puts out the vibe of a true Renaissance man, and he is -- right down to toting his well-worn Moleskine notebook and fronting his own rock-jazz-swing band, the Independent Memes (www.independentmemes.com).
Andy’s career arc has taken him through most of the roles one can have in the IT industry, from rank-and-file programmer at a Fortune 100 company, to senior architect, to independent consultant, to his current role as co-founder of the Pragmatic Programmers LLC. The Pragmatic Programmers are widely recognized in the IT business for high-quality content (no fluff, you might say). They are also seen as a programmer-friendly publisher for IT authors, probably because the company is run by two programming pioneers who “get it” when it comes to managing the process of authoring a technology book: they have read, used, and written such books themselves. Andy and fellow Pragmatic Programmers founder Dave Thomas are seen as true thought leaders in today’s global programming community.
Name: Andy Hunt
Home Page: http://toolshed.com
Rock Star Programmer Credentials: Co-founder of the Pragmatic Programmers, founder of the Agile Alliance, sought-after consultant and lecturer
Time of Birth: Mid-1960s
Region of Birth: Eastern United States
Marital Status: Married
Degree: Bachelor of Science in Information and Computer Sciences, Georgia Institute of Technology
Years as an IT Professional: 30
Role: Entrepreneur and co-founder of the Pragmatic Programmers LLC
Ed: Andy, I like to start out with some soft-skills questions to warm things up. From your vantage point at the helm of Pragmatic Programmers, I bet you get to see a lot of new technology just as it’s taking shape. How do you tell when something new is going to also be something big?
Andy: Well, that’s an interesting question because -- and this isn’t just true of me or Dave or anyone else in particular in the industry, but just in general -- when you’ve been in the industry a long time, people start to watch you and watch what you’re interested in, so past some particular tipping point, it becomes a bit of a self-fulfilling process.
Ed: You mean, you create the trends?
Andy: You start getting interested in, say, Erlang for concurrent programming, and now people start thinking, “Ooh, these guys are interested in Erlang. That must mean something.”
Ed: That’s a nice place to be.
Andy: You can cause your own industry swings, and I think it’s far too early to say that about Erlang yet, but certainly I think that was the case with Ruby.
Dave and I wrote the first English-language Ruby book (Programming Ruby, Addison-Wesley, 2001) and really brought that to the attention of the Western world. Dave, in particular, has been instrumental in beating the drum and getting people interested in it, using the language, bringing it to people’s attention.
Ruby was something that we were particularly looking for...that kind of technology.
I had just done a large data mining project in something like 50,000 lines of object-oriented Perl, and for various reasons, that was the environment I needed at that point. It was frustrating, but it was also really quite promising because it was so close to being a really nice environment, except for the fact that in Perl you had to do all the object orientation by hand. But the promise of having a scripted object-oriented language that had regular expressions, as well as good access to the operating system internals, and ran as an interpreted scripting language was a powerful thing.
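The combination Andy describes -- real classes, built-in regular expressions, and direct operating-system access in one interpreted language -- is easy to see in a few lines of Ruby. This is an illustrative sketch only; the class, the pattern, and the log format are invented for the example, not taken from Andy’s data mining project:

```ruby
# A toy log scanner. In Perl of that era, the class below would require
# hand-rolled "bless"-style object orientation; in Ruby it is built in.
class LogEntry
  attr_reader :level, :message

  def initialize(level, message)
    @level = level
    @message = message
  end

  def error?
    level == "ERROR"
  end
end

# Regular expressions are literals in the language, not a bolted-on library.
LINE_PATTERN = /\A(?<level>[A-Z]+):\s+(?<message>.*)\z/

def parse_line(line)
  m = LINE_PATTERN.match(line.chomp)
  m && LogEntry.new(m[:level], m[:message])
end

# OS access (files, environment, processes) is equally direct, e.g.:
#   File.foreach("/var/log/app.log") { |line| p parse_line(line) }
```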
At the conclusion of that project, both Dave and I started looking explicitly for that technology. We said to ourselves, “Okay, here’s a need. This is something I want to see. What’s out there?” And we looked and we scoured around, and Dave actually stumbled on Ruby in Japan. He found this website and said, “Hey, well, what about this? Take a look at this. This looks pretty cool.”
We started looking at it, and that started the ball rolling. In that case, it wasn’t really a question of looking at emerging technologies and picking the winner. It was a question of “I have this need. What is out there that could possibly fulfill this now or in development?” It was very much need-driven.
In general, I think that’s a better way to look at the question. New technologies pop up all the time. We get book proposals for new frameworks, new libraries, new languages, some stuff I’ve never even heard of, and it comes flying across [my desk] and [I] say, “Well, what’s this? Is this interesting? Does this have promise?”
You look at it, and the question I always try to ask is, “Okay, what’s the problem they’re trying to solve?” Concurrency is a big, hairy problem and trying to do it properly in a more traditional language like Java or C++ or C is problematic at best. You can do it, but it’s like the object-oriented Perl. You can do it, but there’s a certain amount of pain involved.
Then you’d look at something like Erlang that says, “Okay, well, here’s this problem and we’re gonna solve this in a completely different way and rather elegantly, so that a lot of the problems that you were facing just simply disappear by virtue of how the technology is created.”
Ed: Let me characterize what I’ve heard you say regarding the process here. After you were done with that project with the 50,000 lines of Perl, you had an idea of the attributes of a solution, and then you went out looking for a technology that had those attributes, right? In essence, it’s almost like you had a requirements document and you were looking for something that filled those requirements.
Andy: I wouldn’t use the phrase “requirements document.” It’s a genuine requirement. This is something that I can see a need for, or clients or friends have had a need for, so there is a definite requirement.
Ed: There’s a tension between always moving on to the next new thing and sticking with it long enough to achieve true mastery. I want to find out if you think there’s any relationship between personal intelligence and one’s susceptibility to losing interest in something before achieving mastery.
Andy: I think there’s definitely a correlation there because, again, one of the facets of the programmer’s personality that I think is very important is curiosity. If you have mined the territory pretty thoroughly and there’s nothing left to be interested in, then, yeah, you’re gonna be in a world of hurt on a two-year-long project. One of the challenges for a project manager or sponsor is to somehow keep some level of interest going for your smartest developers because the less skilled, less curious developers really won’t run into that problem.
They’ll be perfectly happy still discovering the rudiments of it two years later. They’ll be fine. The best and brightest expert is gonna be bored in a week -- it’s no longer a novelty. Once the novelties run out, [boredom] becomes an issue; there are a couple of ways around this.
Some of the folks who advocate pair programming say rotating team members around all the aspects of a project helps ameliorate [boredom] because you don’t get so completely inured to one aspect of the project. You may work on that for a while, but then you’re in a totally different area, so that it becomes a little bit more novel. I think that’s probably a pretty good way of going about it. Maybe not necessarily pair programming, but certainly having different roles to play in a project.
If it’s a very large project, I’ve seen a lot of teams that will rotate out programmers to work in quality assurance (QA) for a while and see the other end of the spectrum or be more closely allied with the end user’s requirements, elicitation, or whatever, but something just to literally break up the monotony of sitting there and writing the same bit of the same report module for two years in a row or whatever it may be. Variety and novelty is really the key there.
If you’ve got real experts that get bored very quickly, it might be profitable to actually swap them out between entire projects on some periodic basis. If you’ve got some hotshot you just can’t keep ahead of and he’s churning out code so well and so fast and you can’t keep him engaged, give him a day [or] a week to go work on some open-source project and donate the results to the Web. It’ll keep him engaged and interested, and [he won’t] leave your company, and you’re not actually losing any real productivity ’cause he’s such a hotshot to begin with. In extreme cases, you can look to doing something like that.
Ed: The attributes you’re describing seem like desirable character attributes. But there is a flip side to it. How do you tell whether someone is actually like that or just gets distracted really easily? The latter don’t necessarily achieve true mastery; they just get tired of working on it.
Andy: Well, that’s a good question. That’s more towards burnout than boredom, really, and there’s a distinction, because if you do have someone who’s fairly well skilled, they can be subject to burnout just as well. They’ve been looking at the same thing over and over again and they’re just tired of it, and that’s where just rotating people among different roles, even within the team, is probably useful. Nobody wants to work on the report module for a year straight. If anyone did, would you want them to? That sorta would be a danger sign. If somebody says, “No, I’ll just stay over here in the corner and work on this little piece all the rest of my life,” that’s probably an issue.
Ed: Say you’re hiring someone. Is there ever a case where it’s a desirable attribute to have someone just stick with one thing for a long time, or is it always a danger sign if someone is like that?
Andy: I think it would almost always be a danger sign. I’m sure there’s probably a counterexample somewhere, but, in general, I want developers working for me that have an insatiable curiosity.
Ed: I think it’s important that this insatiable curiosity be focused inward as well, increasing self-awareness. How important is it as a software developer to be aware of one’s own ignorance?
Andy: It is absolutely vital. That’s a real, real important thing. You know, understanding our own limitations, understanding the constraints that we’re operating against, whether it’s our ignorance or the situation, that’s a real -- that’s a real, real big thing. It is absolutely vital to know what you don’t know and to be okay with it.
It’s just that people have a lot of difficulty answering a question with “I don’t know.” That’s a perfectly valid answer, yet it shouldn’t be your final answer on the subject by any means. All too often, I’ll see folks in a corporate environment where the culture is such that they can’t say, “I don’t know.” So they make something up, or they use some waffle words around it, or the famous politician thing of [saying], “Oh, we’re having a blue ribbon committee look into that.” That’s just nonsense. That doesn’t really help anyone. I’m a big fan of saying, “I have no idea. I don’t know how that works, but I’ll try and find out.”
Ed: How much time do you spend in a meta-cognitive mode, being aware of your own thought processes and how you are interacting with the world around you?
Andy: Not as much as I should. For me personally, that’s one of those things that just varies as life moves on. When I can [go meta-cognitive], things work out better, and when I get crushed under time pressures and scheduling pressures and I don’t [go meta-cognitive], things tend to go a little bit worse.
One of the things that we’ve always advocated in our books and seminars is to set something akin to an alarm clock to interrupt you periodically. Every couple of hours, stop what you’re doing, get out of the mud for a minute and take a deep breath and re-evaluate: “Does what I’m doing even still make sense? Am I still on the right track with this or do I need to adapt and adjust a little bit?”
Ed: Forced meta-cognition.
Andy: Yeah, exactly. And it really needs to be interrupt-driven, because if it’s a sure thing, where if you said, “Okay, tomorrow at eight, I’m gonna take stock and take a deep breath and evaluate stuff,” that doesn’t seem to ever work. It’s one of the few things that’s better interrupt-driven.
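The interrupt-driven “alarm clock” Andy recommends can be sketched in a few lines of Ruby. This is a toy illustration: the interval is expressed in seconds for the demo, where Andy suggests every couple of hours, and the function name is invented:

```ruby
# Starts a background thread that fires the given block every `interval`
# seconds -- the interrupt that forces you to stop and re-evaluate.
def start_checkpoint_timer(interval, &prompt)
  Thread.new do
    loop do
      sleep interval
      prompt.call
    end
  end
end

# Example wiring (in real use, interval would be on the order of hours):
#   start_checkpoint_timer(2 * 60 * 60) do
#     puts "Does what I'm doing still make sense? Am I on the right track?"
#   end
```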
Ed: Yes, that’s true. But I’ve heard you say in your “Refactoring Your Wetware” talk that “multitasking kills.” So this is not multitasking, it’s putting everything else aside and doing the meta-cognition stuff.
Andy: Yeah, and it’s being deliberate. That’s another consistent focus through a lot of the stuff I talk about -- the idea of being deliberate about something, not letting it just happen accidentally or as a by-product. But taking deliberate steps to say, “Yes, this is what I’m doing right now and here’s why.”
Ed: Being deliberate is certainly an attribute of a successful developer. What are some other attributes of successful developers?
Andy: My stock answer may be different from what most people think. The biggest thing I look for is language arts skills. I would rather have an English major learn to program than a math major or an engineer learn to program.
I think that’s a somewhat contrarian stance, but when you look at it, the two things that we as developers do the most are communication and learning. And by communication, it doesn’t necessarily mean sending e-mails or writing white papers or that sort of stuff. Programming is an act of communicating to the computer, getting requirements -- we’re communicating with the end user, with other people. Working in a team to develop the software, we’re communicating with the other members of the team, with the team as a whole, all these sorts of things.
But we [software developers] are really the communication hub between various other humans and various business technologies, and that’s a big thing of what we do. I would much rather have someone who was more trained in the communication arts. Writing a program, to me, is much more like writing an essay or a novel than, say, a mathematical theorem. [A program’s] logical flow, progression of ideas, all these sorts of things that come over from the language side are much more important than being a hard math geek.
Ed: Well, this fits very well into the next question. How would you break down the technical skills one needs to possess as a software developer? The first one on that list would be communication skills?
Andy: Communication would be big. I think curiosity is [also] a huge one. Curiosity is probably the biggest thing. That’s really what drives us. You know, as you said before, you’re getting to know the project, and all of a sudden you realize you don’t know about something. You know, for most developers, it’s not the schedule, it’s not the attributes of the project itself that drives them; it’s the curiosity.
Persistence is a key trait that goes along with that. You can’t give up on the first “page not found” or if the first article you find, you don’t understand.
Most of the best developers I know are both very curious and very persistent. So they track it down. Now, all of that has to be over a bed of a deep understanding of the fundamentals. You know, I don’t particularly care if somebody knows Java or Lisp or C# or Ruby. These things, you can pick up once you learn more than a few languages. The big deal is to learn how the operating system works internally.
Ed: Looking into the future a bit, new layers of abstraction are constantly being added on top of what has come before. How will future programmers deal with the necessity to understand the whole stack, all the way down to the metal?
Andy: I suspect what will happen is [that] the lower levels will begin to fall off. Even now, thinking in assembly language or working at that level is becoming more and more rare. I remember there was a time when even just endian-ness and the differences between processors were a huge issue. This was a big deal. People spent a lot of ink, a lot of network protocols, and this and that to try and get around it, and now it’s like, “Well, hell, everyone uses an Intel.” It hasn’t gone away, but it’s certainly lessened.
There’ll always be a need to go down to the MOSFET [hardware] level and the gates and the K-maps and whatnot and chip design and lower levels, but fewer and fewer folks will do that and, unfortunately, I suspect fewer and fewer folks will even be aware of it.
Ed: Regarding [the] skills one needs to have, did you have any courses in college that you continue to use today?
Andy: Yeah, it’s kind of tough. Thinking back, I remember a couple of courses on finite automata, where it was bits of theory that would make you think, “Okay, this really is foundational.” You know, “This is stuff you can leverage on.”
That was much more valuable than learning the OSI protocol stack, which ended up being [something] nobody used. You know, TCP won the day. Technical education tends to overemphasize transient technologies, whether that’s the current programming language or even the current programming style. You know, procedural versus object-oriented, versus functional, versus declarative, versus whatever. You should learn one of each of those kinds of languages in school so that you’ve got a good basis going forward.
A lot of people coming up don’t know what functional languages are, like Erlang or Haskell; they never tried declarative programming in PROLOG or something like that. Their idea of object-oriented is Java, which is unfortunate because that’s really not an object-oriented language. I really want to see the real solid base-level fundamentals. When you get out of school, four or five years after your freshman year, the technology is all gonna change anyway. We’re not doing what we did five years ago. We’re not gonna be doing this five years from now. In that time span, it really doesn’t make sense to get too hot and heavy into the flavor of the day.
Ed: Well, there’s an unfortunate tension between what you and I know is a sound approach to longevity and what hiring managers who are faced with thousands of resumes are dealing with.
Andy: Oh, absolutely. My recommendation would be spend the first three, three and a half years of your college career mastering the fundamentals and basics and putting [yourself] in the position so that you can learn the technology du jour very quickly. Then you spend the last semester saying, “Okay, this is what you guys are gonna be doing when you get out there.”
Ed: Can you say anything else about fundamental skills?
Andy: First, learn the fundamentals of assembly language, real basic stuff. Meeting most developers -- I can tell whether they grew up hacking inside of an operating system and writing assembly code or if they went to college and were taught Java and don’t really know anything lower than that. It really shows. The folks who have a grasp of the fundamentals are without doubt different. When something weird happens or something crashes or something new comes along, it’s less of a big deal. They can roll with it more. And for the folks who came in somewhat late to the game and have been, I think, robbed of a proper education -- to them, how it all works is kind of magic.
Ed: But to you, it’s not magic. Is that because you are old enough to have grown up along with the computing and software industry itself?
Andy: My first computer was an Ohio Scientific [OSI Model C4P], and I was 12 years old [when I got it]. It did make a difference, because in those early days…my first several computers really didn’t have an operating system. They had a firmware monitor and everything else was do-it-yourself.
I remember when the first TRS-80s came out with their TRS DOS. I was leery of the fact that when you saved a file, it would pick what sectors on disk to put it on. You didn’t specify that by hand. I was, like, “What the hell is this nonsense? I don’t want this thing guessing for me and fragmenting my file and jamming it up all over!” I was perfectly happy with do-it-yourself file system maintenance and allocating sectors and dealing with the fragmentation by hand.
And, similarly, in the days of CP/M, where you had to do your own memory management, you had to use overlays to swap out portions of your program because there wasn’t enough main memory to fit. That’s a really interesting exercise in architecture and design. I hate to sound like an old fart, but kids today really have no awareness of that.
Ed: That’s the thing. It’s not their fault; it’s just a product of the time. We had this benefit that this was the only way you could do it. There was no other choice. We had to go up through the bootstraps.
Andy: I agree wholeheartedly. I think that the folks who came out of that era where you started really close to bare metal are better suited to understanding the larger order of things.
Ed: So then maybe the takeaway would be [that] if you were a curious person, it wouldn’t sit well with you to not know how it worked all the way down to the bottom. Even if you have no interest in learning an assembly language, you have to know there is a benefit to doing so. So maybe the moral is study computer history?
Andy: No. There’s no future in history, somebody once told me. What you want to do is play. I do see the curious young developers now: they’re running [GNU/]Linux in several flavors at home, and they’re building their own media centers and screwing around with very low-level protocols and cables and soldering stuff together and writing device drivers. That’s really what you want to do. That gives you the most modern equivalent to the sorts of experiences that the rest of us [older folk] had growing up, just getting in there and hacking, because that teaches you a fair bit, and you start getting an appreciation for it -- by the time you end up in a high-level language on top of an operating system, you’ve got a good understanding of all the bits and pieces and how this all hangs together.
Ed: From the past to the future. What are some attributes of the language and environment that people will be using to develop software ten years from now?
Andy: Oh, I love this question. Okay, so here’s my thought on that, and I have no idea if this is actually going to work this way or not, but it strikes me that there is a real aspect of the cobbler’s children having no shoes here.
If you look at any of our popular computer languages today, with very few exceptions, you could render any popular program today onto paper tape or punch cards. It’s a limited character set. It’s a limited line length for the most part. It’s black and white. It’s two-dimensional. It’s text. Virtually any computer program in use today could be printed on a Gutenberg press. It’s that basic, that simple, and it occurs to me, given the richness of what we’re trying to express, that that’s a pretty poor model. You look at where, say, the gaming community is at -- even simple things like the use of color, but also going into three dimensions, the use of spatial cueing, any kind of a richer environment -- and it strikes me that there’s an awful lot of opportunity there for a far richer expression of programming constructs.
And I’m not talking necessarily just about graphical programming or boxes and lines and that kinda stuff, but something more along the lines of interacting in, say, Second Life or some very rich virtual environment like that for a couple of reasons. Writing programs in black and white text seems pretty limiting, bandwidth-wise. That’s a waste of bandwidth.
Peter Coad had a book out a few years back on UML modeling in color (Java Modeling in Color with UML, Prentice Hall PTR, 1999), which I thought had quite a lot of merit to it. He basically had a color-code scheme for different archetypes of classes. I thought that was an interesting approach.
Looking at a model real quickly, you could easily discern yet another facet of things that wasn’t shown by the class diagram box style or the font. For the most part, that really didn’t seem to take off very well. It didn’t really capture the imagination of the population. I think that’s a shame. We’re back to syntax highlighting for convenience in IDEs (Integrated Development Environments), but it’s not part of the language proper. You can’t make a variable static by making it red or what have you, and it occurs to me that we’re really missing an opportunity for much richer expression there.
So you couple that with the idea that the folks just entering the workplace now are much more imbued with this idea of gaming and first-person shooters and virtual reality and all these sorts of things that us fuddy-duddies are far less comfortable with. Even if we do it, it’s not bred into our DNA. And the groups that are growing up with that now, I think they will end up making a large impact on the very notion of what we consider a computer program to be.
So I would say -- I don’t know about 10 years from now, but maybe 20 years from now, they’ll look back on these syntax-colored IDEs with curly braces in them and snicker the same way we look back at paper tape rolls and say, “My goodness. How primitive! Those poor people. How on earth did they survive?”

I don’t know the exact form it would take, but I can well see it looking more like Second Life and less like the Gutenberg print.
Ed: One pragmatic approach that has emerged recently, in part due to new ideas entering the workforce, but also due to advances in processor hardware, is the use of desktop virtualization products such as VMware and Parallels. Do you see virtualization playing a larger role in future software development practice?
Andy: Yeah, I can definitely see that. I’ve seen that in an embryonic sense already, and as systems get more and more powerful and memory gets cheaper and more dense, I could see having -- on a small, modest computer -- 20, 50, or 100 different setups: a couple of different QA ones with different configurations, testing the networking between them inside the box while pretending it’s a large grid cluster, whatever. Yeah, I could see that really becoming much more prevalent. I especially like the idea of extending version control to the operating system image level. This is a very powerful idea at Internet service providers (ISPs): You roll out some new bit of the stack or upgrade the operating system, and if it doesn’t work out, you just roll back to the previous image, and in a matter of minutes, you’re back to where you were, and then you can go straighten it out again.
Look at some of these massive outages -- airline reservation systems or air traffic control or customs (most recently, I think, there was a big problem out in L.A.) -- any of these kinds of headline-making outages where some software update failed somewhere and took the system down…it could certainly help ameliorate that sort of thing. And even on the developer’s desk, just having the freedom to say, “Well, let’s try it with all 14 different versions of this.” Yes, I think it would help a great deal.
Ed: Can you give me an example of one of the really hard problems in computing today?
Andy: I think this is a two-fold thing. One problem is a cross between a cultural issue and a computing issue. The hard problem in computing, I don’t think, is stuff like facial recognition, voice recognition, trying to emulate these aspects of human senses. Yeah, they were a real pain, and a lot of researchers have spent a lot of time and a lot of energy trying to work it out, and they’ll figure it out someday, somehow. They’ve made great strides. These aren’t areas that I’m expert in, [and] they’re very hard, but they’re not the real hard ones.
The real hard one, to me, is getting any kind of a computer system to exhibit situational awareness and actual judgment. Getting some sort of a system that has any kind of situational awareness, I think, is the really hard part, because the danger you run into now as the computer becomes ubiquitous is you end up with an entire class of computer workers, not programmers, but the folks who work in fast food or banking or a call center, where they are genuinely slaves to the machine.
How many times have you called up your credit card company or a utility or ISP or phone service and there’s some problem with your account and the person on the other end of the phone says, “I’m sorry. The computer won’t let me” or “I can’t do this ’cause the computer won’t let me” or “It’s not showing on the computer.” “It’s all the computer’s fault.” Whatever the problem is, something has gone wrong, and the person has no capability of correcting it, and the computer has no capability of correcting it. This is the stuff of science fiction fear-mongering: you get to the point where nobody knows how to fix it. Your civilization is some thousand years in the future, and you’re all slaves to some computer that can’t fix itself until Kirk and Spock come along and make it blow itself up.
Ed: Right, kinda like the Nomad machine in “The Changeling,” that episode of Star Trek: The Original Series.
Andy: It’s a legitimate gripe: When something out of the ordinary happens, software in general is not designed to deal well with that. It’s designed to deal with the average case and the normal situation, and as soon as something happens outside the norm, our software is not sophisticated enough to learn from that, to realize that there are other venues, other possibilities.
Ed: Okay. Now to me that is essentially a complexity problem. As an analogy, consider the goal of software quality. One proven way to achieve that goal is to use a fine-grained level of unit testing. So you keep doing this unit testing, and you break it down small enough that you can achieve quality by having enough unit testing throughout the breadth of the system.
Now is there some analogous thing here to address the notion of exceptional cases and the permutations and combinations of these exceptional cases that lead to the computer operator being unable to take any effective action?
Andy: I don’t think so, because I think the combinatorial explosion of everything that’s possible would simply be overwhelming even for the fastest quantum-based computer. I think that’s kinda the wrong way to go.
The hard problem basically comes down to having a system in the largest sense of the word that can actually be aware and learn, because you cannot necessarily teach it what to do when something happens a priori. We don’t do that with human education and human training. You can’t prepare your children for every single eventuality that’ll hit them in life. You hit the high points to give them the tools to make their judgment calls when the time comes. I think that’s the big chasm that we have to cross from the very literal, almost naïve, approach of “teach the computer these 12 steps and it will do them forever” to having a system that can actually learn and apply the basics, the principles you’ve given it, to novel situations.
Ed: Jumping from a really hard problem… You might find my next question easy, but I’m quite certain our readers would still like to hear your answer. When you’re working on an existing system as a maintenance programmer and you make performance optimizations to the code to make it go faster, for example, what are some effective ways to avoid introducing bugs while preserving maintainability?
Andy: That’s a simple answer compared to the Star Trek computer thing. It’s the combination of what we’ve always called a safety net. [It consists of] having the basic technical practice of version control in place, so that you’ve got the ability to roll back changes; to compare, contrast, and test the system at any point in time: before your changes, after your changes, months and months before any of that even started, when it was at peak load, whatever. [You want] to be able to just dial the old time machine to any point in time and work with the system as it existed then.
And that’s a little bit beyond what most people can do with version control. You roll the system back a certain amount. Suddenly, you don’t have the right libraries to work with that version. You don’t have that same compiler anymore, so there are some potential issues there, but, ideally, what you want is to be able to re-create the system as it existed at any point in time. That’s on the one hand.
On the other hand, you need fairly comprehensive unit tests so that you can prove this is exactly how it functioned then. For example, you can say, “I’ve made these changes, and guess what, it still functions exactly the same,” or, “These things have changed, but we can migrate that and have a plan for it.” So you have to have version control, you have to have unit testing, and you have to have automatic artifact creation.
If you’re in a compiled language -- building object files, linking, installing, slamming a WAR [Java web archive] file somewhere or a JAR [Java archive] file somewhere else, or whatever your particular platform demands -- however you actually construct and deploy the software needs to be completely automated, and those instructions for that automation likewise need to be under version control so that there’s really nothing left to chance.
The production process is ironclad and subject to version control. The unit tests are fairly complete so you can test anything that you’ve introduced for good or for bad, and it may be acceptable. You may decide to change things as a result of it, but at least you know if you’ve introduced anything that makes a material change to the functioning of the software.
With these three low-level technical practices in place, it’s pretty safe to do almost anything to the code base. You can optimize it for speed. You can add functionality, remove outdated functionality, change things depending on changing business needs, refactor the design, because now you’ve got to hand it over to maintenance programmers who aren’t as up-to-date on the techniques you used originally.
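The unit tests Andy describes, which prove "this is exactly how it functioned then" before you change anything, are sometimes called characterization tests. A minimal sketch in Python (the `normalize_price` function and its behavior are hypothetical, invented purely for illustration):

```python
import unittest

def normalize_price(cents):
    # Hypothetical existing production code we may want to optimize later.
    # Pinning its current behavior down first makes any change safe.
    dollars = cents // 100
    remainder = cents % 100
    return f"${dollars}.{remainder:02d}"

class NormalizePriceCharacterization(unittest.TestCase):
    """Characterization tests: record what the code does today,
    so any later 'optimization' that changes behavior fails loudly."""

    def test_exact_dollars(self):
        self.assertEqual(normalize_price(500), "$5.00")

    def test_cents_are_zero_padded(self):
        self.assertEqual(normalize_price(7), "$0.07")

    def test_mixed_amount(self):
        self.assertEqual(normalize_price(1234), "$12.34")
```

Run it with `python -m unittest` before and after a change; keeping the code, the tests, and the build instructions all under version control is what lets you rebuild and re-verify any revision.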
Ed: Speaking of maintenance programmers, how do you hone your sense of smell for where a bug might be?
Andy: The best place to look for where a bug might be is quite near the last one you found. That’s my number one tip. You can look this up in the ACM papers—there are studies that show bugs tend to clump. [They’re] not uniformly distributed at all. They come in clumps. So when you’re doing code review of other people’s code or you find something you did horribly wrong, the odds are pretty high it’s not sitting there by itself and it’s gonna have something else real nearby. So that’s always a good starting point.
Ed: Assuming they’ve already tried that and it didn’t work, do you have any advice to help people track down hard-to-find bugs?
Andy: The number one thing to do when you’re stuck on a problem [such as finding a bug] is to step away from the keyboard. Take a break. Go walk around the parking lot. Go get a soda. Go get a beer if you’re so inclined. Whatever your environment is, it’s to remove yourself from that immediate left-brain track. And this is one of the interesting things I discovered from the research [I did for the “Refactoring Your Wetware” talk]. You know, when you’re stuck on a hard problem, sitting at the computer is literally the worst place to be. I’ve given talks -- many dozens of times now -- and I’ve had so many people come up to me to corroborate the anecdote that you’re sitting there and you’re debugging, or it’s a design problem and you just can’t get the ends to meet, and how are you gonna work this out…. And they’ll sit there and sweat bullets for some arbitrary amount of time and then, in disgust, go walk off to the bathroom, the parking lot, go home, whatnot, and halfway through the parking lot, bang, the answer hits them. You know? Or, worst case, it will be in the shower the next morning or on the commute or whatever it is, and it just asynchronously pops into their head.
That brilliant idea popping into your head will not happen most of the time when you’re sitting there pounding on the keyboard in frustration. You know, it just blocks you from doing that. So the number one piece of advice I have when stuck is stop. You know, take a deep breath, literally take a deep breath, ’cause that actually does help re-oxygenate you and get things kicking along a little bit better. Mom was right when she said, “Stop, take a deep breath, and count to 10.” There are actual real physiological reasons you should do that.
Ed: Okay. Well, what about reproducing the bug or writing a test case, an automated test case, adding a test case to the test suite?
Andy: Oh, yeah, yeah. Yeah, you do all that stuff. That’s the currently approved way to fix a bug: have a test case that will demonstrate it conclusively first. Before you touch the code, make sure you can reproduce the bug via a test case. Then go in, fix the code, and then re-run the test case to make sure that it’s fixed. But those are the easy ones. That’s Agile canon and certainly the best way to go about it. But what do you do when the bug is more elusive than that? You know, it’s non-deterministic. You can’t reproduce it well. There is a race condition somewhere, some area deep down somewhere, and you really don’t know all the constraints that apply to it. That’s where it gets a lot more interesting, and that’s where you need to try what you can and then, when you’re just not figuring it out, walk away from it for a little while.
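Andy's reproduce-first workflow can be sketched with a classic Python slicing gotcha; the function, test names, and the bug itself are hypothetical, chosen for illustration:

```python
import unittest

def last_n(items, n):
    # Fixed implementation. A buggy version might return items[-n:]
    # unconditionally, which yields the WHOLE list when n == 0,
    # because items[-0:] is the same slice as items[0:].
    if n <= 0:
        return []
    return items[-n:]

class LastNRegression(unittest.TestCase):
    def test_zero_items_requested(self):
        # Written first, against the buggy version, this test fails,
        # reproducing the bug conclusively before the code is touched.
        self.assertEqual(last_n([1, 2, 3], 0), [])

    def test_normal_case_unchanged(self):
        # Re-run after the fix to confirm nothing else broke.
        self.assertEqual(last_n([1, 2, 3], 2), [2, 3])
```

The failing test stays in the suite afterward, so the same bug can never silently return.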
Because then, when you step away from it, you’ll find, “Well now, I never considered that this could be something in the cookie on the user’s machine,” or this or that or some other factor that comes into play that you might not have thought of.
Ed: Since you are one of the pioneers and leading proponents of Agile software development practices, let’s talk a bit about test-driven development. Specifically, how do you deal with bugs in the tests and the time it takes to debug the bugs in the tests? Some of the Agile detractors will point that out and say the test code is just as likely to be buggy as the production code.
Andy: Yeah, there are folks [who] say that. I don’t really buy that as an argument. Certainly from a philosophical point of view, that’s true.
You know, code is code, and you can write bugs in test code just as easily as you can write bugs in normal code. However, test code by nature tends to be pretty simple stuff. You’re setting up some parameters and you’re calling something, and it’s not rocket science. I can make a typo in “hello world.” I’m not throwing stones here. I can certainly introduce bugs in the simplest of circumstances, as can everyone. But on the whole, properly written test code is not some big monstrous morass that’s hard to figure out. It’s very simple, small methods, four or five lines of code each. Pretty easy to take a look at and say, “Yes, this is reasonable,” or, “Yeah, it’s a little bit suspicious.” On the one hand, I don’t buy [their argument], but on the other hand, you’ve got a nice validation mechanism. If you are suspicious of your test code, it’s pretty easy to go in and deliberately introduce bugs into the real code and make sure that the test code catches them.
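Andy's validation mechanism, deliberately breaking the real code to check that the tests notice, is the idea behind what's now called mutation testing. A hand-rolled sketch (all function names and numbers here are invented for illustration):

```python
def apply_discount(price, pct):
    # Production code under test: apply a percentage discount.
    return round(price * (1 - pct / 100), 2)

def apply_discount_sabotaged(price, pct):
    # Deliberately broken copy: "forgets" to divide pct by 100.
    return round(price * (1 - pct), 2)

def suite_catches(fn):
    """Run the same checks against either implementation and report
    whether the test 'suite' notices a problem."""
    try:
        assert fn(100.0, 10) == 90.0
        assert fn(50.0, 0) == 50.0
        return False  # no failure detected: the tests stayed green
    except AssertionError:
        return True   # the tests caught the injected bug

# The real code passes; the sabotaged code is caught.
print(suite_catches(apply_discount))            # False: tests pass
print(suite_catches(apply_discount_sabotaged))  # True: bug detected
```

If the suite stays green against the sabotaged copy, it's the tests, not the code, that need work.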
I don’t really buy their argument. The parallel argument, “It takes a lot of time to write the test code,” [is equally specious]. No, it doesn’t. It’s like an investment strategy: you’re spending an incremental amount of time to write test code, but if you get nailed with a hard bug, it may take exponential time to solve it.
It’s really kind of a false argument. They only say that because it’s easier to measure the amount of time that you’re spending on test code.
Ed: Is it ever appropriate to not write tests?
Andy: It depends. There have been lots of times where I’ve done test-driven design, test-first design, and it’s saved my bacon. I’ve had some times where that’s been less appropriate, where it’s something more exploratory and I’ll go more bottom-up. You know, develop something first, kind of play with it, and then codify the unit tests and work from there. As with most things, the real danger is dogmatism, where “this is my hammer and everything looks like a nail.”
That’s what you want to avoid. You always want to use the right tool for the job in the right context. So I’m a fan of test-first development. I don’t always do it. It’s not always appropriate.
It’s the same with anything else. You know, pair programming or even big up-front design. There are some cases where that’s actually the best way to go. There are more cases where it’s not, but every technique has its place somewhere in the world in some context.
Ed: Several of the people I’ve interviewed had some things to say about the intrinsic value of the tests in a software system. Do you have anything to say about that?
Andy: Some venture capitalist type once asked me where the real value was in a software system, and he was thinking it was in the source code and we were trying to convince him that, no, that’s actually not the case.
If you think about it, the most valuable part of a well-constructed software system is not the code itself. It’s the unit tests. That actually defines the behavior of the system. It is a functional working specification. In terms of intellectual property, that’s actually far more valuable, ’cause given that you could re-create the source code in any number of ways, the source code becomes far more disposable when you think of it that way.
Ed: Right. Isn’t it the case that the unit tests represent a very, very specific and strongly constrained set of requirements?
Andy: Yes, absolutely, and whatever you choose to do that happens to fulfill them, that’s a much larger set. There’s the system that happened to be written, but that doesn’t preclude writing a completely different one that would fulfill the same requirements and provably so.
Ed: Yes, it’s proven that automated testing is a bulwark against human nature. But here’s a situation often faced by human programmers. Let’s say it’s late at night, you’re up against a deadline, and you’re desperate to get a piece of code working and checked in. You come to a situation where you’re faced with the choice between doing the right thing and doing the quick thing. How do you motivate yourself to do the right thing?
Andy: I think Douglas Adams’ advice is real key there: “Don’t panic.” This is the number one place where we get into trouble as developers. We feel the crushing pressure of the deadline and we do something stupid because it’s expedient. It’s like gambling or Powerball. Every so often you win the $5 prize and go, “Woo-hoo. There’s some success to this.” Every so often you will take that cheap shortcut and it works out. You don’t get caught, you get away with it. You think, “Hey, this is great.” And then, of course, you take the next shortcut and the whole thing goes tits up and you’re just totally blown.
So the first thing I suggest is, don’t panic. If you feel like you’re compelled to do the wrong thing because it’s quicker, step away from the keyboard.
Ed: Switching to an easy subject, when you get a new machine, what do you have to do to it so it’s useful for development?
Andy: Typically what I do is I set up the connection to my CVS archive and I [do a] checkout, and then I’ve got 99 percent of what I need.
I actually had to do this the other day. One of my machines blew up, and I took it into the Mac store and they replaced it. And here comes this working machine, and I have things set so that I was back in business literally in under a half an hour.
Ed: Wow. That’s great.
Andy: So, again, people: Use version control, but use it for everything that’s important to you—you know, not just the source code of your project, but also the important stuff, config files and that sort of thing. So that’s real big with me.
Ed: That’s an excellent use of version control. I hadn’t seen that before. Version control systems are one example of the tools used by software professionals to achieve great results. How important is mastery of tools?
Andy: The key there isn’t so much [mastery of tools] -- it’s always good to have real facility with your tools. I can fly through vi and do stuff in a very short amount of time, and it doesn’t matter whether I’m local or SSH’d in (connected from another machine via a secure terminal protocol) somewhere. I’ve seen people who do things in Emacs that I don’t know how the hell they manage, but they’re expert at that tool.
And that’s all very helpful, but I think the real underlying thing isn’t so much proficiency with a given tool. The real expert isn’t necessarily the one who flies through Emacs, but the one who knows that “when this situation comes up, oh, I can go use this and knock this problem out in no time flat.”
So it’s more being aware of the whole catalog of what’s possible, what’s available -- be it algorithms, tools, languages, products, whatever -- just knowing that something’s out there and being able to apply it. I think that’s probably more key to productivity than anything else.
Ed: Who is the most productive programmer you’ve ever worked with, and why were they so productive?
Andy: I think the key to the most productive programmer doesn’t necessarily have to do with cranking out lots of code or doing it faster or that sort of stuff. I think the key is—and this is gonna sound contradictory to what I said before and I’ll just warn you up front—the key is not to be distracted. Take [this] scenario: A user says they want the program to do X and Y and Z, and you start looking at that and it’s very easy to get distracted by, “Ooh, there’s this neat algorithm I could put in here” or “Hey, I haven’t used the last five of the Gang of Four patterns lately. I can put that in.” And there are all these sorts of -- for lack of a better word -- “distractions” coming up. You harp on one thing that the customer said and miss their main point or miss what’s most important to them, or you seize on the first solution that comes to you and doggedly go down that road even if it really doesn’t make much sense.
So there are the usual consultant tricks you can do: Any time you come up with a solution, make sure you come up with at least three so you have two that you can throw out.
A lot of times, in the interest of speed and trying to impress each other, we blurt out the first architecture or design or idea that comes to mind and roll with it, and that’s usually not a very good idea. It’s a distraction, if you will.
So when I say “not distracted,” I’m saying the best programmers actually look at the genuine problem and work on a real solution to it without being distracted by dead ends, obvious false starts, that kinda thing. Everybody runs into trouble eventually, but the best programmers I’ve seen run into that kinda thing a lot less. They may get [a slower start] outta the gate. They may [get a faster start] outta the gate, but they take fewer wrong turns.
Ed: How important is it to be a continual optimizer of your workstation and work environment?
Andy: I’ll admit I’m not particularly diligent about that. I’m the sort who will let small annoyances build up to the point where it’s like, “What the hell is going on here?” And then I’ll take a day and clean up six things, write a couple of shell scripts, do whatever needs to be done and get back on track again. It’s like that story of the boiled frog from The Pragmatic Programmer that we tell all the time: if you put a frog in boiling water, [it will] jump right out.
Ed: Right. Yeah, yeah.
Andy: If you put [it] in cold water and turn the heat up very slowly, [it doesn’t] notice and you cook the frog, and, as I like telling people, that is actually how we all end up in hot water. It gets turned up slowly and you don’t notice, and then you do notice and you say to yourself, “Whoa! This is a problem.” This is very true in terms of personal productivity. Small delays add up fast: waiting for your e-mail client to load, clicking a message and waiting for the next message to show up. I cut my e-mail handling time in half when I read about a technique for optimizing the client’s internal database.
And the world is full of stuff like that -- your browser cache, things on your disk, things related to the operating system, whichever one you happen to use. There’s a myriad of small things all over the place that can make that application run faster or smoother, or there’s a shortcut key that you could use that you haven’t been using -- all that kinda stuff.
The irony of it is, because this is what we do all day and for so long, learning one keystroke combination instead of a mouse sequence could end up saving you days over the course of a year, ’cause it’s such a repeated activity. So I’m a big fan of keystrokes instead of mouse moves, key macros, anything like that.
Ed: You mentioned earlier that being aware of what’s out there in terms of algorithms, tools, patterns, etc. was hugely important for a successful developer. One aspect of that is reuse. Let’s say you find yourself writing a small piece of code that you know has been written before. How do you decide when to reuse or not?
Andy: There’s a real tension there. We like to invent and reinvent wheels over and over again, and we’ve got piles of industry-reinvented wheels. So you could argue pretty successfully that we actually haven’t invented
anything new in computer science since about 1968.
Everything is in some flavor a retread. Java is a retread of C++, which was an enhancement to C.
The other side of it is you go to look for something else to repurpose or reuse off the Net, programming by Google [for example], and you’ll find 15 implementations that are close to what you want. Then you run the danger of [using] Google [as an] IDE, and you end up with Frankencode. You get this patched-together monster [that] is all stitched together and never quite works right, because each piece you drag in has a couple of pieces you didn’t really need, but they came along for the ride. And that’s where you run into real danger. What will happen is I’ll look at five implementations on the Net of something, get the best idea of how I want to proceed, and use one of those as a starting point, or just do it from scratch.
Are you really reinventing the wheel there? And in that case, I’d say no, because every project is different, every situation is different. You know, you have to make local adaptations. But on the grander scale of things, why is it something you have to go out and look for anyway? Why isn’t this a feature of the service or framework you’re using or the language you’re using? There’s a successful argument that says the design patterns as expressed in the Gang of Four book (Design Patterns, Addison-Wesley, 1994) really shouldn’t exist. There are, what, 23 patterns in the GoF book? And many of them are there due to limitations in C++.
You know, it’s very much the limitations of one particular language at one particular point in time, a language that doesn’t have a certain expressiveness, that force you into these things. And I think it was Paul Graham who made this argument in that particular case. But I think there’s a general case to be made: we do that [kind of reinvention] a lot of the time. It’s not strictly reinventing the wheel; what we’re doing is inventing patches to a crappy wheel when, instead of a wheel, we ought to be making a jet ski or whatever it may be. And I think you can really see a lot of this in the historical things that came out of the C++ community, CORBA (Common Object Request Broker Architecture) being a big example. CORBA, despite its language
neutrality, was extremely influenced by C++, similarly with Unified Modeling Language (UML). There’s a lot in UML that you can look back and go, “Oh, my God, that is C++.” If you had started simply off looking at Smalltalk or Lisp or Self or PROLOG or something weirder, Erlang, you’d get entirely different results. And that tells me, well, that’s not such a general-purpose wonderful thing. That’s really got a big old giant Band-Aid for something that wasn’t what it should have been in the first place.
Ed: On the flip side of reuse is the idea of starting over from scratch. When is it appropriate to do that?
Andy: The philosophical note I’ll throw in here is [that] a number of philosophies suggest looking at life from the point of view of abundance, not scarcity. So rather than bitching and moaning that something is scarce, be grateful and [appreciate] the abundance you actually do have. I think that is particularly true in the coder’s world, and I think a lot of the danger we get into from deploying things that are too buggy, too early, writing it too quickly, any of these sorts of issues all come from the basic idea that “That code was so hard to write the first time, I could never possibly write it again” or “The code that we have took so long to develop, we have to keep it and Band-Aid it because we couldn’t do it again,” and that’s taking a viewpoint of scarcity, which is the wrong one. And, in fact, even going back to [Fred] Brooks, there’s this idea that, hey, guess what? The second time you write it, it’s gonna go 100 times faster and end up better because you had the experience of writing it the first time. Now you actually know what to do.
So you’re better off doing it the second time and not hanging on to this fundamental idea of scarcity and hanging on to the first version, which really wasn’t that good to begin with.
Ed: And the act of making software is the act of learning. So the second time you try to learn something, you’re gonna learn it better and faster.
Andy: Absolutely. You’ve already prepared the bed, as it were. And jokingly, a lot of people say, “What’s the best thing I can do to enhance my code?” and I say, “Take a big magnet to the hard drive.” That’s the number one cure for code cleanliness, ’cause you will tend to do it in a much more straightforward way the second time.
Ed: Yeah, but like anything else, though, there’s the flip side of that, because if you’re developing something really complex over the years, you get the bug fixes in there and a lot of other people are working on it, but sometimes it’s very attractive and seductive to just say, “Let’s just throw everything out and start again from scratch.”
Andy: That’s another tension where developers love to do that. The reason they love to do that is because they know they can do it better this time. Certainly on large systems, where you’ve had a lot of different people’s input, you need to weigh, okay, “Are they all in that same position to contribute again?” or “Do we not have that opportunity anymore?” It’s different if you’re talking one or two people versus a 10- or a 50-person team.
You know, that introduces a new dynamic there. If you got the same 50 people together and they all have learned and know better than what they did the first time, then, yeah, you might be better off scrapping that first two-year effort and replacing it with a new six-month effort. But if 45 of the people have moved on, then that doesn’t work out so well.
Ed: How important is it to let your understanding of the business of software and the perceived demand for specific skills inform where you invest in stewarding your skillset?
Andy: Well, that’s all a question of managing what we call your “knowledge portfolio,” and this is a concept that was put forth in the original Pragmatic Programmer book -- that everything that you know is part of your knowledge portfolio and, like a regular portfolio, this is something [where] you want to play with the balance of risk and reward, return on investment ratios.
Something that’s new and hot, if everyone’s doing it, then it’s actually fairly low-risk. You could be assured that we will get some kind of a gig out of it ’cause everyone’s doing it, but it’s also fairly low-reward. You’re not gonna get handsomely rewarded, because everyone else is doing it. Something like Java in the very early days or Ruby perhaps or Erlang now -- there’s a lot of risk associated with it.
You could invest heavily in it and it might turn out that no one ends up using it. It could be a flash in the pan or it could be the next hot thing, so you’ve got a high risk but a commensurate high return on investment if that’s the thing that pans out.
If you got involved in Java back when it was Oak and in the very early days and positioned yourself there, you’d have done very well for yourself. You could get involved in whatever the latest hot thing is this week and who knows?
That might turn out to be the next hot thing; it might not. So like anything else, it’s a mixture of trying to balance the risk-to-reward ratio.
What Dave [Thomas] and I have long suggested in our seminars and our writings is you want to balance yourself out, learn a couple of each. Learn some high-risk, high-reward things in case they pan out, but pad the stable out with some fairly sure bets of, “well, there will always be a need for XYZ,” and balance it out that way.
Ed: How does that relate to outsourcing?
Andy: As long as you’re problem-solving, making decisions, helping the business grow as more of a business consultant, you’re probably going to be much more tied into the business and have greater longevity than if you’re just a replaceable code monkey, one of a hundred that’s not thinking, not doing anything particularly valuable.
Those are the jobs at the very bottom. They’re the first to get outsourced to whatever the next country is on the outsourcing list. An interesting side note there -- one of the most interesting books that we came out with on career development is entitled My Job Went to India and All I Got Was This Lousy Book by Chad Fowler (Pragmatic Bookshelf, 2005). And the title was meant to be somewhat tongue-in-cheek because, obviously, there’s a lot of concern about jobs being outsourced to India, to other low-priced countries, and the whole book really takes the tack of “Here’s what you need to do in terms of career development so that yours is not the job that gets outsourced. Here’s how to market yourself internally. Here’s how to make sure that your boss and the powers-that-be understand what it is you do that adds value to the organization, where do you add value that others don’t” -- all this sort of approach, and I think that’s really quite valuable, because at the end of the day, you have to add some proven value to the organization or it’s their responsibility to replace you.
Everyone harped on India because they were the first out of the gate to attract outsourcing, but the irony is I’ve got a few friends who run outsourced operations in India and they’re quite concerned because they’re losing business to the next set of countries down the rung -- Eastern Europe, Vietnam, Southeast Asia. Other places are filling in. They’re fighting the cheapness war and winning, and so it goes. Those will get comfortable for a while. Then they’ll lose out to the next rung down the ladder, and on it goes.
Ed: It’s clear, then, that the organizational structure of a company has an impact on job security. How about developer productivity? Can you give me an example of where it has a negative impact?
Andy: This is a popular dysfunction in some companies, where the maintenance folks are on a different budget than the developer folks. There’s no incentive for the developers to write bug-free code. There’s incentive for them to write code fast and get it the hell off their plate. So that is not an optimal way to structure your organization.
There are actually a lot of oddball things like that that have nothing to do with software development. Instead they have to do with some accounting rule somewhere, and how it affects the dynamics of what the team can do and what they’re paid for ultimately affects the code. There are whole classes of organizational dysfunctions that stem from accounting rules, oddly enough.
Ed: Speaking of accounting, how organized are you in your personal life?
Andy: My desk is neither spotless nor [a] pigsty. What I tend to do is organize by piles, very much pile management, so I’ll have a hot and important pile and maybe several stages of archived piles or different topics. But I find trying to sort it to any real finer grain ends up being a waste of time, and not sorting at all becomes a problem where you can’t find stuff. So I take a predictably pragmatic approach and sort it in fairly large-grained piles -- enough so that if I need to find something, I may not be able to put my finger right on it, but it’s kinda like “bucket sort.” I can go to the right bucket and within a quick linear search be able to locate it fairly rapidly.
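Andy's pile management maps directly onto the "bucket sort" idea he names: hash into a coarse bucket, then do a quick linear search inside it. A toy sketch (the document titles and the first-letter bucketing rule are invented for illustration):

```python
# Coarse "pile management" lookup: file each item into a big bucket,
# then linear-search within the bucket, trading sorting effort for
# a slightly slower, but still fast, search.

def bucket_for(title):
    # A deliberately coarse key: the first letter only.
    return title[0].upper()

piles = {}
for doc in ["invoices march", "ideas for talk", "insurance form",
            "travel receipts", "tax notes"]:
    piles.setdefault(bucket_for(doc), []).append(doc)

def find(title):
    # Go to the right pile, then scan it linearly.
    return title in piles.get(bucket_for(title), [])

print(find("tax notes"))      # True
print(find("grocery list"))   # False
```

The finer you make the buckets, the more up-front sorting you pay for; Andy's point is that a handful of coarse piles is usually the pragmatic middle ground.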
Ed: Do you keep a journal?
Andy: Not in the diary sense, but I do keep a Moleskine notebook with me at all times. So you always want to have something with you to jot down on. I’ve got mind maps in it, bullet lists, thoughts, designs, archive stuff. It’s more like an engineering journal, but really more like a notebook. Not like a diary. It’s useful for me to reload context. You know, “I figured this out once… I don’t remember what conclusions I came to. What were the constraints, what were the issues? Oh, yes, that’s right.” So it’s like a quick memory reload for important thoughts that you had.
Ed: How about a day planner? Do you use one of those?
Andy: I do not.
I use the calendar app on the Mac, which synchronizes with my cell phone, and it beeps at me when I have to do something.
Ed: What are some things you can say about how to successfully do schedule negotiation with a client, and how do you ensure that you and your client agree on a schedule that’s realistic?
Andy: I think -- in general terms, I think the best way to manage that is to take small bites. I had a very demanding client a few years back where it was a very difficult sell. They wanted to know what I’d be doing on a Tuesday two years from now… and clearly that was going to become a problem. We had a couple of heart-to-heart talks and I said, “Listen. Let’s just try it this way.” And I gave them the Agile song and dance and ended up just trying to say, “Okay, let’s just do this -- let’s take this one step at a time,” and really ended up doing what looked more like scrum than anything else. I’d start off with a backlog, just a private list on my Wiki of “here’s what I’m gonna do next, and you tell me what’s most [important].” It’s really a matter of relationship building more than anything else. That’s how people get to trust you. That’s how they know that you’re not trying to pull the wool over their eyes. So we ended up developing a relationship where they came to know that if it was something they deemed important and of high relevance, something they needed fast, they would tend to get it pretty quickly, within a week, within two weeks.
It’s as much a matter of training the customer to know what your speed is as anything else. Once they get a flavor for it, then you say, “Oh, that’s gonna take us a long time. Can we do something else in the meantime or can we break that apart? What are our choices here?”
What you need to do by whatever means is train the customer. When the Agile folks say “work closely with the customer,” that’s really what it means. It means “gain their trust.” Let them see how you work, how fast you work, how fast your team works. Get them to have a reasonable set of expectations of your capabilities.
With that kind of relationship, negotiation is not adversarial. It’s “Hey, we both want your company to make a lot of money and succeed. Otherwise, we’re all out of a job, so what can we do to make this happen?”
Ed: Right. It’s kind of just realizing that we’re all on the same team.
Andy: Yeah, exactly. Exactly, and that’s a hard road for some folks to go down. Especially in a larger, more bureaucratic organization, there is very much the idea that it’s not all the same team.
Ed: Small teams, big teams—in every team, collaboration is vital to success. How does one increase one’s proficiency at collaboration?
Andy: I think it’s the same answer as how to increase your proficiency in anything, and that’s do it.
Ed: Practice. Okay.
Andy: Practice. Practice. Practice. You know, “How do you get to Carnegie Hall?” “Practice, man, practice.” That’s the joke, but that’s really it. This is something I harp on in my talks: “go with experience.” You know, experience really is the best teacher, so you want to set yourself up to be able to play with stuff, be able to work with it, be able to do it, ’cause, otherwise, even just reading about it, it’s not the same.
Ed: Right. Okay, well, what do you do when you’re on a team that is having trouble with collaboration? How would you approach improving things when collaboration is determined to be the problem?
Andy: I get them talking. Anytime I’ve gone and consulted [with] a team that’s had those sorts of communication issues among themselves, the number one thing that seems to help is something like a Scrum standup meeting. It’s a daily meeting. It’s very focused on the agenda. You answer your three questions and you get the hell out of there. It’s not some lengthy meeting or discussion or diatribe. You don’t problem-solve; you don’t discuss. You answer the Scrum three questions: here’s what I did yesterday, here’s what I’m doing today, here’s what’s in my way.
The idea is [that] whatever is in your way, the manager takes as his to-do list, and you just go bop, bop, bop, bop around the room. Now everyone knows what everyone else is working on, and whether there’s an issue with it -- maybe they don’t need to be working on it ’cause you did something similar last week, or you decided with somebody else that that’s not the way this project is gonna go, or whatever. Now you’re aware of it. You know what everyone else is doing. You know what they were working on yesterday; you know what they’re blocked on. This person needs a QA machine. This person needs you to finish what you’re working on. Whatever it is, you get a sense of everyone’s pace, everyone’s velocity, because you hear day after day, “I’m doing this, now I’m doing that, now I’m doing this other thing.” You can tell if somebody’s falling behind.
It’s a really, really effective way just to get everybody playing on the same page. And then from there, you can do the little spur-off meetings, say, “Okay, well, let’s -- let’s work this out. Let’s solve this problem,” you know, whatever it takes. So that is the number one way to kick-start getting a team to communicate with each other.
Ed: Let’s close up with some personal questions. What sort of student were you in college?
Andy: Curious. I was a curious student.
Ed: All right. How much stock do you place in GPA and other traditional academic measures of success as a predictor of real-world success?
Andy: I actually don’t place a lot of stock in GPAs. I had a very high GPA within my major and so-so overall, but I didn’t place much stock in it then or since. I knew quite a few folks who dropped out and were very successful. They made a lot of money. I knew quite a few folks who had 4.0 averages who I wouldn’t trust to clean my garage because they didn’t have any real-world experience or abilities. I don’t find it to be a particularly well-correlated indicator of practical expertise.
Ed: What was the most useful class you took in college?
Andy: This is actually quite an interesting question, and I’m gonna expand it to cover grade school and high school as well as college, but I would say for the entire educational process, the most valuable class that I took anywhere was Latin in high school.
Andy: And that, I think, is just supremely ironic, because if you had asked me at the time, that would surely not have been my answer. At the time, I thought it was the most useless waste of space to study a dead language that no one actually speaks anymore. You know, what was the point? It just absolutely didn’t make it for me, but in retrospect, it gives you a basis for language skills that’s really unparalleled. It really expands your vocabulary. It expands your ability to understand vocabulary, or even foreign languages -- at least the Latin-based ones. You may not know how to say something in French or Spanish or what have you, but you’ll see something that’s close and you’ll be able to figure it out enough to get by. And, again, I said before that I favored language skills far and above mathematical skills for the most part, so I think that was something that was quite, quite useful, far above and beyond any of the other classes of that sort.
Ed: How good are you at keeping your professional and personal lives balanced?
Andy: It’s very difficult for me to think of myself as separate people. You know, a “work me” versus a “home me” versus the “research me” versus the whatever. It’s really all one thing. And my situation is a little bit unusual. I’ve very much gone the career arc from in-the-trench programmer at a Fortune 100 company, through to working at really boutique, interesting, high-tech software companies, to being a consultant for all of the above and then some, to being an author, to being a publisher. I’m firmly in the entrepreneur seat at the moment. It’s the same challenges with a different twist. So with the entrepreneurship hat on, work, home, play is really all one thing, or it’s a continuum. It’s not discrete, different elements.
Ed: Are you married? Are you a parent?
Andy: Yes, we have children and they are very much a part of the business as well.
Ed: Talk more about that, because that’s really interesting to me, how that works.
Andy: It’s a fascinating thing. My kids are young. They help me pack or prepare when I go off and give lectures and speaking gigs. They help with some of the aspects of the business. It’s not like the old days, where dad would come home at 5:00 after commuting on the train and knock back a few martinis and then make that transition from work life to personal life. You know, we don’t have that sharp distinction. You know, I could be sitting at the pool with them, with my laptop, working on an article for a magazine, or working on sales figures for our publishing business, or writing code, or doing something for them. It’s all one continuum.
Ed: Interesting. Now, obviously, you have to be on the same page with your wife if you’re gonna mix the two so closely together. How does she feel about it?
Andy: She’s a great contributor to our business. She also does consulting for companies in her field and, of course, managing the house and the kids. But really, the family knows we’re all in this together. We’re really all in the same boat.
Ed: Let’s say you’re working on a really engrossing problem, perhaps a creative coding problem, perhaps an article that you have to really think about deeply. How do you context switch and put that aside when it’s time to spend quality time with your kids and do the dad thing in a real concrete way?
Andy: It depends. It’s really a matter of prioritization. If the kids have something that is high priority -- they’ve got a performance or a show or some event at a particular scheduled time that we know is coming up -- then you make allowances for that. Okay. Now, I can’t work on this then, whenever that may be, because this is what we have to go do. But they’re also very aware that if it’s nighttime or on the weekend and I’m hunched over a laptop and deep into something, they pretty much will respect that, too, and they know, oh, “Dad’s in the middle of something.” It’s a give and take both ways, and it’s certainly something I try not to overuse.
Ed: Do you have a non-IT Plan B to earn a living?
Andy: [immediately] Yes, I do.
Ed: Okay, what’s that?
Andy: I’m not sharing it with you.
Ed: That’s fine.
Andy: [laughs] Yes, I do and it’s classified.
Andy: Quote me on that.
Ed: What’s your personal life goal?
Andy: I can make that short and sweet. I’ll just say, “To understand.”