Beta testing should involve a methodical prove-in of a carefully designed system, such as a software product, Web site, or automated tool. It's not meant to be a hit-or-miss, cross-your-fingers-and-hope-everything's-OK Band-Aid that you can apply at the last minute. You need to do more than randomly bang on the system in an attempt to find a way to break it. Here are 10 strategies for successfully carrying out the process.
We've all seen examples of software programs -- even from well-known, respectable software companies -- that arrive on our desktops barely breathing. They seem to be full of bugs, causing us more grief than they save us in work. Or we try to use a Web site that looks great, but we can't get from the shopping cart to the order page. Or we buy a new widget, yet even with the instruction booklet in hand, we can't jump from the main menu to the critical functions the way we're supposed to.
Are you anxious to catapult your business into the ranks of companies that frustrate their customers this way? Of course not! Therefore, I'm confident that you will do things differently.
That's why testing involves such a systematic, tedious, yet indispensable sequence of activities. Without a method to the madness, you're not doing anything more than randomly banging on the system to see if by chance you can find a way to break it. So, what do you need to know to properly estimate the effort, carry out the process, and keep the testers happy? Here are 10 strategies for achieving testing success.
1. Design test scenarios.
What's a "test scenario"? Each test scenario should be mirror image of a "use scenario" that's been guiding a team to design and develop the system. A use scenario describes one typical interaction a customer has with the system. For instance, for an automated teller machine, one scenario involves a customer inserting a card in order to withdraw some cash. In another scenario, a customer makes a deposit. In another, he or she checks the balance.
Scenarios must represent any plausible ways in which users could interact with the system, including unusual and unintended actions. So both use scenarios and test scenarios should account for possible error conditions such as jammed cards, cancelled transactions, or overdrawn accounts.
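To make the idea concrete, here is a minimal sketch of how two of the ATM scenarios might look as automated test cases written in Python with pytest. The atm module, the AtmSession class, and the InsufficientFundsError exception are hypothetical stand-ins for your own system's interface, not a real library.

    import pytest

    from atm import AtmSession, InsufficientFundsError  # hypothetical module for your system


    def test_withdraw_cash():
        """Use scenario: a customer inserts a card and withdraws cash."""
        session = AtmSession(card="4111-0000-0000-1234", pin="2468")
        opening = session.balance()
        session.withdraw(20)
        assert session.balance() == opening - 20


    def test_overdrawn_withdrawal_is_refused():
        """Error condition: a withdrawal larger than the balance must be refused."""
        session = AtmSession(card="4111-0000-0000-1234", pin="2468")
        with pytest.raises(InsufficientFundsError):
            session.withdraw(1_000_000)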
2. Write a test procedure.
A test procedure specifies how testers will exercise the test scenarios, including the order to follow. In the ATM example, it might say, "Test withdrawing cash denominations in this order: $20, $30, $50, $100. Run another test in reverse order: $100, $50, $30, $20. Then run several tests in random order." It should also explain what results to expect in each case.
The procedure should cover all new or changed system features, and it should also exercise features in various combinations. For example, you might specify 1) withdrawing cash, then 2) checking balance information, and then 3) making a deposit. Be sure to vary the order, and test error conditions as well.
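Here is one way the ordering portion of that procedure could be expressed as a parametrized test -- a sketch that again assumes the hypothetical AtmSession interface from the earlier example.

    import random

    import pytest

    from atm import AtmSession  # hypothetical module, as in the earlier sketch

    ASCENDING = [20, 30, 50, 100]
    DESCENDING = list(reversed(ASCENDING))
    RANDOMIZED = random.sample(ASCENDING, k=len(ASCENDING))


    @pytest.mark.parametrize("amounts", [ASCENDING, DESCENDING, RANDOMIZED])
    def test_withdrawals_in_various_orders(amounts):
        """Expected result: every withdrawal succeeds and the balance drops by that amount."""
        session = AtmSession(card="4111-0000-0000-1234", pin="2468")
        for amount in amounts:
            opening = session.balance()
            session.withdraw(amount)
            assert session.balance() == opening - amount

The same parametrization approach can cover feature combinations -- withdraw, then check the balance, then deposit -- by listing sequences of operations instead of amounts.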
3. Determine what data you need.
If your system stores values in a database, you'll need to load some typical data to test the scenarios. In the ATM example, values would include account balances -- for testing withdrawal limits and giving balance information. Create the sample data sets and pre-load the systems to be tested. Don't forget to include extremely high and low values!
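For illustration, here is a quick sketch of what a pre-loaded data set might look like; the load_accounts helper is hypothetical, standing in for whatever mechanism seeds your test database.

    from atm.fixtures import load_accounts  # hypothetical helper that seeds the test database

    SAMPLE_ACCOUNTS = [
        {"account_id": "1001", "balance": 0.00},          # empty account (low extreme)
        {"account_id": "1002", "balance": 0.01},          # smallest non-zero balance
        {"account_id": "1003", "balance": 2_500.00},      # typical balance
        {"account_id": "1004", "balance": 9_999_999.99},  # extremely high value
    ]

    # Pre-load the system under test before the round of testing begins.
    load_accounts(SAMPLE_ACCOUNTS)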
4. Plan specific roles for testers.
Schedule each tester to focus on specific test scenarios and related data sets. If there are enough testers, assign more than one to cover each test scenario. Each person will approach it differently.
5. Create a bug reporting system.
The reporting system could be a form, a database, an e-mail message, or a combination of these. Have testers submit bug reports as they find errors in each round of testing.
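For illustration, here is one lightweight way to structure a bug report as a record in Python; the field names are suggestions, and the same structure maps just as well onto a paper form, a database table, or an e-mail template.

    from dataclasses import dataclass, field
    from datetime import date


    @dataclass
    class BugReport:
        reporter: str
        scenario: str            # which test scenario was being exercised
        steps_to_reproduce: str
        expected_result: str
        actual_result: str
        severity: str            # e.g. "critical", "major", "minor"
        found_on: date = field(default_factory=date.today)


    report = BugReport(
        reporter="Tester A",
        scenario="Withdraw cash",
        steps_to_reproduce="Insert card, request $100 from an account holding $40",
        expected_result="Transaction refused with an 'insufficient funds' message",
        actual_result="Machine dispensed $100 and showed a negative balance",
        severity="critical",
    )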
6. Establish a test schedule.
The schedule should allow for several iterations of beta testing. Be sure to clear the schedules of testers for each round in which they will be participating.
7. Get all materials ready for testing.
The following items should be ready for the kickoff meeting: the new or updated system, lists or descriptions of any bugs fixed, new or updated documentation, test scenarios and procedures, and so on.
8. Set a start date.
On the start date, hold a kickoff meeting! Also schedule progress checks. If testers find numerous bugs -- or especially critical ones -- before reaching a given checkpoint, stop testing, fix the bugs and/or documentation, and return to Step 1. Ask before restarting: Are new test scenarios or data sets needed?
9. Perform a new round of testing for each new test baseline.
This means starting the complete test from scratch after each round of fixes. You can't sidestep this requirement, because each time something is fixed, it can "break" something else. Stop the cycles of testing only when no new bugs are evident.
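If your test procedure is automated, kicking off a new round can be as simple as re-running the complete suite and labeling the results by baseline. This sketch assumes pytest as the runner and a tests/ directory holding the scenarios; both are illustrative choices, not requirements.

    import subprocess
    import sys


    def run_full_suite(baseline: str) -> bool:
        """Run the complete test procedure from scratch and save results for this baseline."""
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "tests/",
             "--junitxml", f"results-{baseline}.xml"],
            check=False,
        )
        return result.returncode == 0


    if __name__ == "__main__":
        # Repeat this for every new baseline, i.e. after each round of fixes.
        passed = run_full_suite(baseline="beta-3")
        print("No new bugs evident" if passed else "Bugs found: fix, rebaseline, and retest")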
10. Plan a reward for a job well done.
Testing is very tedious -- so testers need a special incentive to keep them focused on the goal. The satisfaction of helping to produce a high-quality system is its own reward, but a post-testing party never hurts morale!
Thorough beta testing is essential for producing quality systems. If you discover errors you can't fix in time, you could decide to release a system with known defects (documented in your "Read-me" notes). The stakes can be high, so weigh this option carefully before proceeding.
Should I Train or "Tune up" My Organization?
Is there a standard cure for every performance gap? When your organization detects areas it wants to improve, it's critical to prescribe the right remedy for each situation. This article explores two of many ways to close achievement gaps, using 1) training and 2) organizational tune-ups to remove "burning hassles" and obstacles that hinder productivity.
A Best-Practice Blueprint for Banishing "Burning Hassles"
Your organization may be experiencing "burning hassles" -- the sometimes hidden and sometimes obvious obstacles and sinkholes that keep people from performing ideally and that dissolve morale, productivity, and customer satisfaction. This article provides the step-by-step, "how-to" formula for detecting and resolving hassles once and for all.
Are "Burning Hassles" Melting Your Morale and Pulverizing Your Productivity?
Your organization may be experiencing "burning hassles" -- the sometimes hidden and sometimes obvious obstacles and sinkholes that keep people from performing ideally. This article explains how to recognize situations in which hassles may be dissolving productivity, morale, and profitability like corrosive acid.