Beta Desktop and Web Applications


This article was published by ComputorEdge, issue #2647, 2008-11-21, as the cover article, in both their PDF edition (on pages 6-8) and their website.

Software development firms — like manufacturers of any other kind — have long faced the challenge of ironing the bugs out of their wares before releasing them to the intended users. This is true whether the users are people out in the marketplace, or internal clients within the company itself. In addition, it is true regardless of the complexity of the software, the price tag, or the type — from x-ray controller programs to X-Men video games.

Decades ago, when software vendors first began developing and marketing spreadsheet programs, word processors, character-based games, and the other pioneering products of that era, the companies tended to be small operations. As a result, the programmers themselves — and any friends and relatives who could be roped into helping out — performed most if not all of the system testing, in order to identify any cases in which the program failed to do what it was supposed to do, or did something completely unplanned. (Simply put, "system testing" means testing the program as a whole, which is how the end user will see it. This is distinct from "unit testing", in which a programmer tests functions and other individual components of the program he is writing.)
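To make the distinction concrete, here is a minimal unit test sketched in Python (the function and test names are hypothetical, invented purely for illustration):

```python
import unittest

# A hypothetical function that a programmer might test in isolation.
def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    # Unit tests exercise a single component of the program;
    # system testing, by contrast, exercises the finished program
    # as a whole, the way an end user would.
    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)
```

Saved to a file, this can be run with `python -m unittest`, which reports a pass or a failure for each individual test.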

As business and personal computer ownership became widespread, demand for commercial programs grew tremendously, as did the revenue streams and market capitalizations of the more successful software companies. The most striking example of this is Microsoft, whose market cap at one point during the dot-com mania exceeded $500 billion — more than the annual GDP of many countries. Industry heavyweights such as the Redmond, Washington giant have the financial wherewithal — as well as the marketplace pressure — to employ a small army of testers and quality assurance (QA) specialists to put their products through their paces, before anyone outside the firm sees the final results.

Small shops, with even smaller testing budgets, were sorely pressed to compete on the QA front with the likes of Microsoft, Lotus, and others. Yet somehow they had to find a way to thoroughly test their code and find those defects lurking in even the best-designed software.

Alpha Beta Alphabet Soup

Given the enthusiasm that countless computer hobbyists developed for their hardware and software, it was not long before the QA-handicapped vendors discovered a legion of potential testers: their users! Computer geeks — regardless of any stereotypes of reticence — tend to be observant, detail-oriented, critical, opinionated, and vocal — all the qualities needed in valuable software testers. Consequently, software companies increasingly tapped this previously neglected resource.

Initially, they started with the communication methods utilized by non-tech firms for centuries before the invention of the microprocessor: listening to the feedback sent in by telephone, letters, postcards, and word-of-mouth. However, the wiser managers soon realized the fundamental limitation of relying entirely on these passive methods: unhappy consumers are more likely to discard, return, or sell a product with which they are dissatisfied — and figuratively walk away from the company — than to invest time and energy in submitting feedback, whether positive, so the product managers know what to focus on, or negative, so they know what to change.

Software shops began to offer their more knowledgeable users free or discount copies of the products, prior to finalizing and releasing them (the products, not the users!). The quality of these pre-release versions could be whatever the managers decided — from solid and stable, to buggy and crash-prone. It did not take the industry long to figure out that there was a fine line between releasing too early and releasing too late. In the former case, premature software could annoy the company's user base badly enough to cause irreparable damage to the perception of the company and its products. On the other hand, releasing software too late could mean missing valuable feedback in time to make a positive and sizable difference in the direction of the product and its eventual market success.

In time, companies and users alike began to recognize several stages in this testing process. When the code of any software product is still being heavily modified, and it does not possess all of the planned features, then it is said to be in the "pre-alpha" stage. The code is changing daily, and typically these changes are checked in by the end of the day, in preparation for each night's "build". These nightly builds may or may not be made available to testers within the company, but they are certainly scrutinized by programmers, who are on the hook for fixing any bugs found.

When most if not all of the features have been added, and the product is considered reasonably stable, then it enters the "alpha" stage, at which point internal testers go at it full-bore. Eventually, the product becomes feature-complete, and all of the critical bugs have apparently been worked out, at which point it is considered "beta" quality. Finally, the version released to the public is oftentimes termed the "gold" or "general availability" (GA) version.

Testing Web Applications

Software development companies initially released beta versions to their select groups of users. This was especially true for software whose target users were programmers themselves, because they generally can spot problems faster than non-technical users. These are not just problems within the product's user interface, but also secondary consequences of the product. For instance, the average computer user will probably not detect that a program unintentionally clobbers dynamic link libraries (DLLs) needed by other programs. Also, such a user might not have an outbound firewall in place, and thus not learn that the installation process assumes access to a remote Web server.

As more firms entered the software industry, and as consumers' expectations steadily increased, significant pressure developed on the surviving firms to grab more market share by quickly getting features out to potential consumers — either first-time buyers or people choosing to upgrade. Consequently, vendors began to release earlier, which naturally provided the benefit of feedback earlier in the development cycle. But, as one might expect, more of this feedback was negative, as users became plagued by more frequent crashes and other glitches.

These dynamics are now playing out for developers of Web-based applications, just as they did for their desktop industry counterparts years ago. Yet there are some differences, and perhaps the most striking one of all is that beta versions of Web applications can be released and put into the hands of testers even more quickly than desktop programs. Although desktop programs can be downloaded from websites anywhere in the world, testers must take the time and effort to download and install each new version, and sometimes uninstall the previous one. Not so with Web applications, which are, in effect, always installed — provided that the user has an adequately fast connection to the Internet.

But quicker distribution of alpha and beta versions of applications is not entirely a good thing. For one thing, vendors of Web-based applications are finding it even easier to release buggy and incomplete sites. For another, malicious hackers are always waiting to pounce on newly discovered security holes, and quickly communicate them to their colleagues.


As the undisputed leader in making desktop program functionality available on the Internet (much to the dismay of Microsoft), Google is currently gaining a reputation for overuse — if not abuse — of the concept of beta software. A recent Pingdom blog posting details how, of the 49 Google products that could be identified, 22 are still in beta — a whopping 45 percent! This figure excludes all of the products in Google Labs; otherwise the portion would have been even higher, at 57 percent.

One might assume that the beta applications are only those that have recently been moved out of Google Labs. But they include, as of this writing, such major applications as Gmail, Google Docs, Google Finance, and Orkut. Yet perhaps most remarkable of all, is how long some of these applications have been in the beta stage. For example, Gmail, which is arguably an extremely stable and polished application, has been in beta for more than four years (since April 2004)! Orkut is even older, dating back to January of that year.

Some Web users are not pleased with the idea of Google charging for a product, such as Google Docs, that is still in beta — a stage normally reserved for products that lack functionality and stability. These people argue that if they are being used as guinea pigs to test an unfinished product, they certainly should not have to pay money for the privilege. Detractors further argue that keeping products in perpetual beta provides the vendor with an easy excuse for any bugs, security holes, or other problems — "don't expect perfection, because it's still in beta."

A Network World article notes that someone in Google public relations defended their practice, arguing that "…beta has a different meaning when applied to applications on the Web, where people expect continual improvements in a product. On the Web, you don't have to wait for the next version to be on the shelf or an update to become available. Improvements are rolled out as they're developed. Rather than the packaged, stagnant software of decades past, we're moving to a world of regular updates and constant feature refinement where applications live in the cloud." Most of the feedback suggests that people are not buying this argument, and are displeased with this significant redefinition of the term "beta", which can only cause confusion in the software marketplace.

Google is clearly taking the approach of dominating its markets, making the most of feedback from an enormous audience of "users" (read: testers), and perhaps even starting the practice of destabilizing competitors through "embracing and extending" established standards and terminology — all reminiscent of the Redmond juggernaut.

Copyright © 2008 Michael J. Ross. All rights reserved.
