Saturday, March 08, 2003

The Jamie Test - Step 6



The Jamie Test, my eighteen-step process, continues. And step 6 is: believe in a higher power. Just kidding. Step 6 is:

After feature freeze, use your bug find / bug fix rates to estimate your ship date

Once in alpha, you may want to have some idea if you're going to be at zero bugs on time. You might think you could take the entire bug list, ask everyone how long it's going to take to fix their bugs, and be done. You shouldn't do this, because:
  • The time they spend estimating is time they spend not fixing bugs.
  • DeMarco and Lister have some evidence that programmers work most productively when there are no estimates.
  • According to Steve Maguire in *Debugging the Development Process*, bug fixing is notoriously hard to estimate. (I can vouch for this one: when Die By The Sword was in alpha, I was convinced that I didn't have enough time to finish all the bugs on my plate, and had a lot of them assigned to others. Then I finished my list in record time, and asked to have those bugs reassigned back to me. I might as well have kept my mouth shut.)
  • At any given moment, your open bug list is a small fraction of the long list of bugs that remain to be found and fixed; your estimate is only going to represent how long it takes to fix the current set, not the total.

So what are you supposed to use? Strong language?

Greg John has used this process on the last few projects we've worked on together. The way it works is each day you count how many bugs you have in your database, and make a chart. It should, ideally, be a curve that shoots up rapidly after the product goes into testing, hits a peak, and then trails off towards an asymptote of zero. While the curve is still shooting up, the way to estimate your ship date is to pull a guess out of your ass as to how many bugs you are going to have, total. You can do this by looking at your previous projects and extrapolating. (If you don't have previous projects, now's a good time to start gathering this kind of data!) Greg's rule of thumb, based on the projects he's worked on, is to take the number of people-months that have gone into the project and multiply by ten. (In other words, every one of us introduces a bug that doesn't get caught every three days or so.) At your shop, this number will quite likely be different, depending on your process and your testing team. It could vary from under a thousand (LucasArts), to three thousand (Lionhead Studios), to eighteen thousand (yours truly).
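Greg's rule of thumb is simple enough to sketch in a couple of lines. A minimal sketch in Python (the multiplier of ten and the sample numbers are assumptions; calibrate the multiplier from your own shop's past projects):

```python
def estimate_total_bugs(people, months, bugs_per_person_month=10):
    """Greg's rule of thumb: total bugs ~ people-months * 10.

    The multiplier varies wildly by shop (process, testing team),
    so treat 10 as a placeholder until you have your own data.
    """
    return people * months * bugs_per_person_month

# A hypothetical 15-person team, 24 months into the project:
print(estimate_total_bugs(15, 24))  # 3600
```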

When you're ascending the curve, and testing is finding bugs faster than you're fixing them, your resources are the bottleneck. Overtime should probably be mandatory during this period, as it's one of the only ways to bring the project in sooner. You may even scrounge up resources from other teams at your company. And you can mark as many bugs "as designed" or "will not fix" as possible.

Once you've "broken the back" of the bug list--you're over the hump, and fixing bugs faster than you find them--this means two things. One: you can get an idea of when you're going to hit zero bugs by looking at the trajectory of the graph. You can see how closely this number relates to your previous estimate of how many bugs there were going to be. Two: you need more testing, as now they are the bottleneck. This is the time (okay, one of the many times) you yell and scream at your publisher, because you're doing all you can to bring the project in on time, and they are the ones holding you back. (Evil publishers--*cough* Crave *cough*--may even have a completion-on-time bonus they don't want to give you if they don't have to, and will therefore give you just the right amount of testing to ensure that you complete just a week or two late.) It's also a good idea to devote your own people in-house to testing, although it may take some work to train your idle artists and coders how to be good testers.
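Reading the trajectory off the graph is just a straight-line fit through your recent daily open-bug counts. A minimal sketch in pure Python (the daily counts are made up, and a straight line is the crudest possible model of that curve):

```python
def days_until_zero(open_bug_counts):
    """Fit a line through daily open-bug counts and project when it hits zero.

    Returns days from the last sample; None if the count isn't falling
    (i.e., you're still ascending the curve). Needs at least two samples.
    """
    n = len(open_bug_counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(open_bug_counts) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, open_bug_counts))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    if slope >= 0:
        return None  # bug count isn't falling yet; don't bother projecting
    intercept = mean_y - slope * mean_x
    # x where the fitted line crosses zero, relative to the last sample:
    return -intercept / slope - (n - 1)

# Hypothetical last week of counts, falling about 20 a day from 300:
print(days_until_zero([300, 280, 260, 240, 220, 200, 180]))  # 9.0
```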

If you're lucky enough to have the kind of guys on your team who care so much about the project that they implement their own features when nobody's looking, it's time to cancel that shit right now. (And maybe you should cancel that shit before the project even begins, but I'm still undecided about that. It sure is nice having people that care.) On my first project, Magic Candle II, we fixed a cosmetic bug (you could walk on a certain kind of foothill that you weren't supposed to be able to) and accidentally introduced a stop-shipment bug: now you could walk on the ocean, but you couldn't sail on it! And guess what! We didn't catch the bug! (Our testing was, like, two people.) It shipped like that! Talk about needing a patch! So this is the time to remind everyone not to fix anything they don't absolutely have to.

I'm making it sound like once you're armed with these tools you never have to stress those last few months again. Unfortunately, you can always be surprised. On our last project, we blew through our rule-of-thumb estimates for how many bugs there were going to be--it was our first multi-platform release and it took us a while to realize that the number of bugs getting reported was multiplied by three. And after we thought we had gotten over the hump, we asked Activision for more QA, and they gave it to us. And the bug find rates climbed right back up again. In fact, our open bug graph ended up looking like the stock market. (Normally Gamasutra puts my post-mortems on their site whether I want them to or not, but for some reason they've taken their time with my Spider-Man post-mortem, which has a picture of that terrifying graph.) Times like these make you feel like you're a first year fucking game developer. (Which reminds me of how Peter Molyneux seemed surprised to enter alpha with Black & White to find that they had 3000 bugs and that fixing one created three more. How long have you been doing this, Peter?)

As counterpoint, Chris Busse has done enough large projects at this point that he just "gets a feeling" for when the project's going to hit zero bugs and he's usually pretty close. He can probably go into more detail, but it seems like for him, the endgame breaks down into three stages:

The "bugs are like fruit on the ground" stage. In this stage, you can't play the game for more than a couple of minutes without hitting a stop-shipment bug. When the game is in this state, the testers aren't going to try to do tricky things to break the game, like force their avatars into tight crevices where they might drop out of the world, or find some way to take the thug who has the key to get through the waterfall and throw him through the waterfall. (This exact bug was revealed in Spider-Man, after we shipped, by Capcom when they were localizing it for the Japanese. They have some good testers. They sent us a videotape. Thanks guys. Why don't you just give us paper cuts and rub lemon juice in them?) When you're in this stage, you are still at least a month from being done. Probably two. A lot of developers will start blaming the testers for doing a poor job at this point. I have been guilty of this sin. "Why aren't you guys finding the tough bugs? Why didn't you find this bug sooner?" The answer is because they were so busy writing down things like, "Game crashes when you try to punch thug," they didn't exactly have time. (You might point out that a game should never be in this kind of sorry state. I totally agree, but don't know how to prevent it from happening on teams of more than a dozen guys.)

The "finding the hard bugs" stage. You are officially within striking distance of being done. You can start sending presubmissions to the console manufacturers. (And fix the slew of bugs that they report.) You are a few weeks from being done.

The "firemen" stage. Here, you've hit zero bugs, and each morning you come to work and the testing team has found half a dozen new bugs overnight. Most of these you WNF, the rest you get fixed by mid-afternoon, and then you sit around and browse websites and pray. Maybe you're getting two sets of bug reports a day. You may give the artists the week off, with the understanding that they are On Call, in case a bug crops up that only they can deal with. At this point, you are almost done, and as soon as you've gone a couple of days without a bug report, you fire off submissions to console manufacturers. (Or, as was the case with my last project, you spend all day fixing bugs, every day, and then on the absolute last night you can send submissions and still meet your agreements with Best Buy and Wal*Mart and all them, you get down to zero internally and spend all night making the submissions and fire them off in the morning, untested. Woo hoo. Life on the edge.)




If you don't mind me patting myself on the back for a moment, one thing about my eighteen-step program is that, unlike Alcoholics Anonymous, you don't have to do all eighteen steps for an individual step to work. You could consider these Game Management Gems or Best Practices. Steve McConnell, in *Rapid Development*, points out that you should be wary of "Methodologies" that claim they will only work if you adopt every facet of the methodology. XP and FDD both suffer from this flaw. With most of these eighteen steps (none of which I invented myself, btw), you can introduce them and almost immediately feel the improvement.

Friday, March 07, 2003

Can Do Vs. Can't Do



In *Slack*, Tom DeMarco says that people can either be "Can Do" or "Can't Do" people--that is, they either have a "Can Do" or "Can't Do" attitude--and a team should probably be led by both at the same time; one to seek and exploit those risky opportunities, the other to worry about the risks and make sure that nobody bites off more than they can chew.

I'm not sure this is true. With proper planning and risk management, I think a "Can't Do" type--such as myself--will be willing to take on risks, and a "Can Do" type will resist the urge to bite off more than they can chew. This is yet another thing I like about Joel Spolsky's scheduling system; when used properly it lets us ask, "Can We Do It?" and gives us a pretty good answer. For the past several months, I have seemed like a "Can Do" person; as long as there was slack in the schedule, I was willing to entertain feature creep.

Today we ran out of slack; the time we have remaining equals the time we have estimated. From here on out, I become a "Can't Do" person. When asked for a new feature, I ask right back, "Which feature should we cut to make this other feature happen?" or "Will you give me the additional resources we need?"
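The "Can We Do It?" test is mechanical once you track a current estimate of remaining time per task, which is the one idea borrowed from Joel's system here. A minimal sketch (the task estimates and workday counts are made up):

```python
def can_we_do_it(remaining_estimates_days, workdays_until_ship):
    """Slack = workdays left minus estimated work left.

    Positive slack: you can entertain the new feature.
    Zero or negative: something has to be cut, or resources added.
    """
    slack = workdays_until_ship - sum(remaining_estimates_days)
    return slack > 0, slack

# Hypothetical: three tasks totaling 42 days of work, 42 workdays left.
ok, slack = can_we_do_it([10, 14, 18], 42)
print(ok, slack)  # False 0
```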

Here's an article by Scott Crabtree on what to do when your schedule is out of slack.

Tuesday, March 04, 2003

This Hurts Me



Just saw a Yahoo DSL banner ad with a dorky-looking guy who says, "I just rearranged my entire homepage! I bet there's, like, twenty programmers freaking out!"

Ouch. I wonder if we even have twenty.

More On Comments


Certainly I'm too lazy to comment, and it's nice that Steve McConnell has given me this "I write self-documenting code" excuse for not commenting.

Still, one man's laziness is another man's cost-effectiveness. I have no studies to back this up, but is it possible that the time spent commenting is not made up for by the time saved by people who come across your code later and try to read it?

A side note on commenting: I think it's Weinberg who masks off the comments when he reads code, to try to avoid the phenomenon he called "perceptual set": we see what we want to see, and comments can trick us into thinking the code does something it does not. Yet another excuse not to comment.

Honestly, I do comment, probably around one line of comment per fifty lines of code or something like that.
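If you want to check your own ratio rather than guess, it's a few lines of script. A naive sketch (it only counts full-line `#` comments in Python source, so it misses trailing and block comments):

```python
def comment_ratio(source):
    """Rough comment density: full-line comments vs. non-blank lines."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    comments = sum(1 for ln in lines if ln.startswith("#"))
    return comments, len(lines)

# A hypothetical five-line snippet with one comment:
code = """\
# compute totals
total = 0
for n in range(10):
    total += n
print(total)
"""
print(comment_ratio(code))  # (1, 5)
```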

It's fairly rare that someone on the team gets on somebody else's case for not commenting. For having the game state spread across dozens of modules and linked like spaghetti, yes, people get upset. But for lack of comments? No.

Monday, March 03, 2003

The Comment Holy War


Occasionally sweng-gamedev gets bogged down with holy wars about coding style or whether to comment or not. I can't resist contributing to the comment war; here is an e-mail I sent to Christer Ericson earlier today, after he put up an obfuscated function (in response to Noel Llopis's claim that writing self-documenting code is better than commenting) that he claimed could not be rewritten for clarity without the addition of comments:

Obviously this function would benefit from a comment listing the reference where the formula was taken from, but more important than commenting it is rewriting it to use meaningful identifiers. The a, b, c, d, t, p, a1, a2, a3, a4 obviously have to be replaced with clearer identifiers. I grant you that once you have seg1begin, seg1end, seg2begin, seg2end, etcetera, you still won't know *how* the function works, but I'm not sure you need anything more than a one-line comment that refers the reader to a paper or a chapter of a book for that.

I assume this function is a bottleneck function and that this obfuscated algorithm is a clever optimization. This is one of the few cases where comments are necessary; if the function weren't a bottleneck, you could use a different algorithm that a programmer could read without referring to a mathematical text.
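To illustrate the renaming point, here's a hypothetical geometry routine in the same spirit (this is not Christer's actual function): a closest-point calculation where the identifiers carry most of the documentation load, and the math itself gets one reference-style comment.

```python
def closest_point_on_segment(seg_begin, seg_end, point):
    """Closest point to `point` on the 2D segment seg_begin->seg_end.

    Compare with an obfuscated f(a, b, p): same code, but you'd have to
    reverse-engineer what a, b, and p even are before reading the math.
    """
    ax, ay = seg_begin
    bx, by = seg_end
    px, py = point
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    if denom == 0.0:
        return seg_begin  # degenerate segment: begin == end
    # Parametric projection of `point` onto the line, clamped to the
    # segment; see any standard computational-geometry reference.
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return (ax + t * abx, ay + t * aby)

print(closest_point_on_segment((0.0, 0.0), (10.0, 0.0), (4.0, 3.0)))  # (4.0, 0.0)
```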

I could continue to argue Noel's point for him, but the crux of Noel's argument is put most eloquently by Steve McConnell in Chapter 19 of *Code Complete*. You could check it out there.


Ironically, the comment functionality on this website doesn't seem to work anymore.