Testing Without a Formal Test Plan

A formal test plan is a document that provides and records important information about a test project, for example:

  • project and quality assumptions
  • project background information
  • resources
  • schedule & timeline
  • entry and exit criteria
  • test milestones
  • tests to be performed
  • use cases and/or test cases

For a range of reasons -- both good and bad -- many software and web development projects don't budget enough time for complete and comprehensive testing. A quality test team must be able to test a product or system quickly and constructively in order to provide some value to the project. This essay describes how to test a web site or application in the absence of a detailed test plan and in the face of short or unreasonable deadlines.

Identify High-Level Functions First

High-level functions are those functions that are most important to the central purpose(s) of the site or application. A test plan would typically provide a breakdown of an application's functional groups as defined by the developers; for example, the functional groups of a commerce web site might be defined as shopping cart application, address book, registration/user information, order submission, search, and online customer service chat. If this site's purpose is to sell goods online, then you have a quick-and-dirty prioritization of:

  1. shopping cart
  2. registration/user information
  3. order submission
  4. address book
  5. search
  6. online customer service chat

I've prioritized these functions according to their significance to a user's ability to complete a transaction. I've ignored some of the lower-level functions for now, such as modifying shopping cart quantities and editing saved addresses, because they are less important than the higher-level functions from a test point-of-view at the beginning of testing.
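
If it helps to keep the team honest about the ordering, the priority ranking can be captured in a few lines of code so smoke tests always run most-important-first. This is a minimal sketch in Python; the function names and priority numbers are just the assumptions from the example above, not anything a real project dictates:

    # Hypothetical priority map for the commerce-site example.
    # Lower number = more important to completing a transaction.
    FUNCTION_PRIORITIES = {
        "shopping cart": 1,
        "registration/user information": 2,
        "order submission": 3,
        "address book": 4,
        "search": 5,
        "online customer service chat": 6,
    }

    # Always exercise the most important functions first.
    for name in sorted(FUNCTION_PRIORITIES, key=FUNCTION_PRIORITIES.get):
        print(f"test next: {name}")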

Your opinion of the prioritization may differ from mine, but the point here is that time is critical, and in the absence of defined priorities in a test plan, you must test something now. You will make mistakes, and you will find yourself making changes once testing has started, but you need to determine your test direction as soon as possible.

Test Functions Before Display

Any web site should be tested for cross-browser and cross-platform compatibility -- this is a primary rule of web site quality assurance. However, wait on the compatibility testing until after the site can be verified to just plain work. Test the site's functionality using a browser/OS/platform that is expected to work correctly -- use what the designers and coders use to review their work.

Running through the site or application first with known-good client configurations lets testers focus on how the site functions, and lets them catch the more important class of functional defects and problems early in the test project. Spend time up front identifying and reporting those functional-level defects, and the developers will have more time to fix them effectively and iteratively deliver new code levels to QA.

If your test team will not be able to exhaustively test a site or application -- and the premise of this essay is that your time is extremely short and you are testing without a formal plan -- you must first identify whether the damned thing can work, and then move on from there.
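
As a sketch of that first "can it work at all" pass: a script as simple as the following catches dead pages before anyone burns time on deeper testing. The URLs are hypothetical placeholders, and this assumes Python with the requests library available:

    import requests

    # Hypothetical entry points for the highest-priority functions.
    SMOKE_URLS = [
        "https://www.example.com/cart",
        "https://www.example.com/register",
        "https://www.example.com/checkout",
    ]

    for url in SMOKE_URLS:
        resp = requests.get(url, timeout=10)
        # A server error or missing page means functional testing
        # can't meaningfully start yet.
        status = "OK" if resp.status_code == 200 else f"FAIL ({resp.status_code})"
        print(f"{url}: {status}")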

Concentrate on Ideal Path Actions First

Ideal paths are those actions and steps most likely to be performed by users. For example, on a typical commerce site, a user is likely to

  • identify an item of interest
  • add that item to the shopping cart
  • buy it online with a credit card
  • ship it to himself/herself

Now, this describes what the user would want to do, but many sites require a few more functions, so the user must go through some more steps, for example:

  • log in to an existing registration account (if one exists)
  • register as a user if no account exists
  • provide billing & bill-to address information
  • provide ship-to address information
  • provide shipping method information
  • provide payment information
  • agree or decline to receive site emails and newsletters

Most sites offer (or force) an even wider range of actions on the user:

  • change product quantity in the shopping cart
  • remove product from shopping cart
  • edit user information (or ship-to information or bill-to information)
  • save default information (like default shipping preferences or credit card information)

All of these actions and steps may be important to some users some of the time (and some developers and marketers all of the time), but the majority of users will not use every function every time. Focus on the ideal path and identify those factors most likely to be used in a majority of user interactions.

Assume a user who knows what s/he wants to do, and so is not going to choose the wrong action for the task at hand. Assume the user won't make common data entry and interface control errors. Assume the user will accept any default form selections -- this means that if a checkbox is checked, the user will leave it checked; if a radio button is set to a meaningful selection, the user will let that ride. This doesn't mean that defaulted non-values -- such as a drop-down menu showing a "select one" instruction -- will be left as-is to force errors. The point here is to keep it simple and lowest-common-denominator, and not to force errors. Test as though everything is right in the world, life is beautiful, and your project manager is Candide.
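
One way to pin down the ideal path is to script it with a browser-automation tool such as Selenium, driven through the known-good browser. The sketch below invents its URLs and element IDs (add-to-cart, checkout, and so on) purely for illustration -- a real site's selectors will differ:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # the known-good browser the developers use
    try:
        # Ideal path: find an item, add it to the cart, buy it.
        driver.get("https://www.example.com/product/1234")   # hypothetical URL
        driver.find_element(By.ID, "add-to-cart").click()    # hypothetical ID
        driver.find_element(By.ID, "checkout").click()       # hypothetical ID

        # Accept every default; fill only the required fields.
        driver.find_element(By.NAME, "cc_number").send_keys("4111111111111111")
        driver.find_element(By.ID, "submit-order").click()   # hypothetical ID

        # Hypothetical success marker on the confirmation page.
        assert "order confirmation" in driver.page_source.lower()
    finally:
        driver.quit()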

Once the ideal paths have been tested, focus on secondary paths involving the lower-level functions or actions and steps that are less frequent but still reasonable variations.

Forcing errors comes later, if you have time.

Concentrate on Intrinsic Factors First

Intrinsic factors are those factors or characteristics that are part of the system or product being tested. An intrinsic factor is an internal factor. So, for a typical commerce site, the HTML page code that the browser uses to display the shopping cart pages is intrinsic to the site: change the page code and the site itself is changed. The code logic called by a submit button is intrinsic to the site.

Extrinsic factors are external to the site or application. Your crappy computer with only 8 megs of RAM is extrinsic to the site: your home computer can crash without affecting the commerce site, and adding more memory to your computer doesn't matter a whit to the commerce site or its functioning.

Given a severe shortage of test time, focus first on factors intrinsic to the site:

  • does the site work?
  • do the functions work? (again with the functionality, because it is so basic)
  • do the links work?
  • are the files present and accounted for?
  • are the graphics' MIME types correct? (I used to think this couldn't be screwed up; a quick script for the link, file, and MIME checks is sketched below)
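
A sketch of those last three checks, assuming Python with the requests and beautifulsoup4 libraries, and a hypothetical site root:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    BASE = "https://www.example.com/"  # hypothetical site root

    page = requests.get(BASE, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")

    # Do the links work? Are the files present and accounted for?
    for a in soup.find_all("a", href=True):
        url = urljoin(BASE, a["href"])
        if requests.head(url, timeout=10, allow_redirects=True).status_code >= 400:
            print(f"broken link: {url}")

    # Are the graphics served with image MIME types?
    for img in soup.find_all("img", src=True):
        url = urljoin(BASE, img["src"])
        ctype = requests.head(url, timeout=10).headers.get("Content-Type", "")
        if not ctype.startswith("image/"):
            print(f"bad MIME type for {url}: {ctype!r}")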

Once the intrinsic factors are squared away, then start on the extrinsic points:

  • cross-browser and cross-platform compatibility
  • clients with cookies disabled
  • clients with javascript disabled
  • monitor resolution
  • browser sizing
  • connection speed differences

The point here is that, with myriad possible client configurations and user-defined environmental factors to think about, you should think first about those that relate to the product or application itself. When you run out of time, it's better to know that the system works than that all monitor resolutions safely render the main pages.
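
When you do reach extrinsic factors like browser sizing, even those can be checked quickly. A minimal Selenium sketch, where the window sizes and the URL are assumptions standing in for whatever your users actually run:

    from selenium import webdriver

    # Hypothetical window sizes standing in for common monitor resolutions.
    SIZES = [(1024, 768), (1280, 1024), (800, 600)]

    driver = webdriver.Chrome()
    try:
        for width, height in SIZES:
            driver.set_window_size(width, height)
            driver.get("https://www.example.com/")  # hypothetical URL
            # Screenshot each size; clipped layouts and horizontal
            # scrollbars are the usual failures to look for.
            driver.save_screenshot(f"home_{width}x{height}.png")
    finally:
        driver.quit()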

Boundary Test From Reasonable to Extreme

You can't just verify that an application works correctly when all inputs and all actions have been correct. People do make mistakes, so you must test error handling and error states. The systematic testing of error handling is called boundary testing (actually, boundary testing describes much more, but this is enough for this discussion).

During your pedal-to-the-floor, no-test-plan testing project, boundary testing refers to the testing of forms and data inputs, starting from known good values, and progressing through reasonable but invalid inputs all the way to known extreme and invalid values.

The logic for boundary testing forms is straightforward: start with known good and valid values, because if the system chokes on those, it's not ready for testing. Move on to expected bad values, because if the system doesn't trap those, it isn't ready for testing either. Try reasonable and predictable mistakes, because users are likely to make such mistakes -- we all screw up on forms eventually. Then start hammering on the form logic with extreme errors and crazy inputs to catch problems that might affect the site's functioning.

Good Values

Enter data formatted as the interface requires. Include all required fields. Use valid and current information (what "valid and current" means depends on the test system; some systems have a set of data points that are valid only in the context of that test system). Do not try to cause errors.
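
As a sketch, a known-good record for a checkout form might look like the following; every value is a placeholder assumption, not real account data:

    # Known good values: correctly formatted, all required fields present.
    GOOD_VALUES = {
        "first_name": "Mary",
        "last_name": "Smith",
        "email": "mary.smith@example.com",
        "cc_number": "4111111111111111",  # a standard Visa test number
        "cc_expiry": "12/29",             # a date safely in the future
        "state": "CA",                    # a real selection, not "select one"
    }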

Expected Bad Values

Some invalid data entries are intrinsic to the interface and concept domain. For example, any credit card form should expect expired credit card dates -- and should trap for them. Every form that marks some fields as required should trap for those fields being left blank. Every form with a drop-down menu that defaults to an instruction ("select one", etc.) should trap for that instruction being submitted. What about punctuation in name fields?
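
Continuing the sketch, each expected bad value below varies one field from the known-good record and should be trapped by the form's validation; the field names are the same assumptions as above:

    # Expected bad values: invalid entries intrinsic to the interface.
    EXPECTED_BAD_VALUES = [
        {"cc_expiry": "01/20"},     # expired credit card date
        {"first_name": ""},         # required field left blank
        {"state": "select one"},    # drop-down left on its instruction
        {"last_name": "O'Brien"},   # punctuation in a name field
    ]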

Reasonable and Predictable Mistakes

People will make some mistakes based on the design of the form, the implementation of the interface, or the interface's interpretation of the relevant concept domain(s). For example, people will inadvertently enter trailing or leading spaces into form fields. People might enter a first and middle name into a first name field ("Mary Jane").

Not mistakes, per se, but: how does the form field handle case? Is the information case-sensitive? Does the address form handle a PO Box address? Does it handle a business name?
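
In the same sketch, the reasonable mistakes and open questions above translate into entries like these (all values illustrative):

    # Reasonable, predictable mistakes -- and the case/address questions.
    REASONABLE_MISTAKES = [
        {"email": " mary.smith@example.com "},  # leading/trailing spaces
        {"first_name": "Mary Jane"},            # first + middle name in one field
        {"email": "MARY.SMITH@EXAMPLE.COM"},    # how is case handled?
        {"address1": "PO Box 123"},             # a PO Box address
        {"address1": "Acme Corp."},             # a business name
    ]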

Extreme Errors and Crazy Inputs

And finally, given time, try to kill the form by entering extreme crap. Test the maximum size of inputs, feed it long strings of garbage, put numbers in text fields and text in numeric fields.

Everyone's favorite: enter HTML code. Put your name in BLINK tags; enter an IMG tag pointing at a graphic on a competitor's site.

Enter characters that have special meaning in a particular OS (I once crashed a server by using characters this way in a form field).
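
Rounding out the sketch, the extreme tier might look like this; the payloads are illustrative, and the competitor URL is obviously hypothetical:

    # Extreme errors and crazy inputs: try to kill the form.
    EXTREME_INPUTS = [
        "x" * 10000,                       # far beyond any sane maximum length
        "12345",                           # numbers in a text field
        "not-a-number",                    # text in a numeric field
        "<blink>Mary</blink>",             # HTML code in a text field
        '<img src="https://competitor.example.com/logo.gif">',
        "; & | $ > < `",                   # characters with OS-level meaning
    ]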

But remember, even if you kill the site with an extreme data input, the priority is handling errors that are more likely to occur. Use your time wisely and proceed from most likely to less likely.

Compatibility Test From Good to Bad

Once you get to cross-browser and cross-platform compatibility testing, follow the same philosophy: start with the most important configurations (as defined by prevalence among the expected user base) or the most common (based on prior experience), and work toward the less common and less important.

Do not make the assumption that because a site was designed for a previous version of a browser, OS, or platform it will also work on newer releases. Instead, make a list of the browsers and operating systems in order of popularity on the Internet in general, and then move those that are of special importance to your site (or your marketers and/or executives) to the top of the list.

Use the most important few configurations for functional testing, then start looking for deviations in performance or behavior as you work down the list. When you run out of time, you want to have completed the more important configurations. You can always test those configurations that attract .01 percent of your user base after you launch.
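
A sketch of working that list from good to bad with Selenium; the ordering below is an assumption standing in for whatever your user-base numbers actually say:

    from selenium import webdriver

    # Hypothetical ordering: most prevalent among the expected user base
    # first. When time runs out, the important configurations are done.
    BROWSERS = [
        ("Chrome", webdriver.Chrome),
        ("Firefox", webdriver.Firefox),
        ("Edge", webdriver.Edge),
        ("Safari", webdriver.Safari),
    ]

    for name, make_driver in BROWSERS:
        driver = make_driver()
        try:
            driver.get("https://www.example.com/cart")  # hypothetical URL
            # Compare against the known-good configuration's behavior.
            print(name, "title:", driver.title)
        finally:
            driver.quit()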

The Drawbacks of This Testing Approach

Many projects are not mature and are not rational (at least from the point-of-view of the quality assurance team), and so the test team must scramble to test as effectively as possible within a very short time frame. I've spelled out how to test quickly without a structured test plan; this method is much better than chaos and somewhat better than letting the developers tell you what and how to test.

This approach has definite quality implications:

  • Incomplete functional coverage -- this is no way to exercise all of the software's functions comprehensively.
  • No risk management -- this is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time, starting from a known base of evaluation.
  • Too little emphasis on user tasks -- because testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).
  • Difficulty reproducing -- because testers are making up the tests as they go along, reproducing the specific errors found can be difficult, and reproducing the tests performed will be tough as well. This will cause problems when trying to measure quality over successive code cycles.
  • Project management may believe that this approach to testing is good enough -- because you can do some good testing by following this process, management may assume that full and structured testing, along with careful test preparation and test results analysis, isn't necessary. That misapprehension is a very bad sign for the continued quality of any product or web site.
  • Inefficient over the long term -- quality assurance involves a range of tasks and foci. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests. Great testing requires good test setup and preparation, but success with the kind of test-plan-less approach described in this essay may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.