Return to Bad Software: What To Do When Software Fails.


 

SOFTWARE LIABILITY

Cem Kaner, J.D., Ph.D.

Copyright © 1997. All rights reserved.

In press, Software QA

Note: This paper is based on my talks at recent meetings of the American Society for Quality's Software Division and the Pacific Northwest Software Quality Conference. The talks surveyed software liability in general and focused on a few specific issues. I've edited the talks significantly because they restate some material that you've seen in this magazine already. If you don't have those articles handy, check my website, www.badsoftware.com.

 

W. Edwards Deming is one of my heroes. I have enjoyed, and agreed with, almost everything of his that I've read. But in one respect, I flatly disagree. In Out of the Crisis, Deming named seven "deadly diseases." Number 7 was "Excessive costs of liability, swelled by lawyers that work on contingency fees" (Deming, 1986, p. 98).

Software quality is often abysmally low, and we are facing serious customer dissatisfaction in the mass market (see Kaner, 1997e; Kaner & Pels, 1997). Software publishers routinely ship products with known defects, sometimes very serious defects. The law puts pressure on companies that don't care about their customers. It empowers quality advocates. I became a lawyer because I think that liability for bad quality is part of the cure, not one of the diseases.

Life is more complex than either viewpoint. It's useful to think of the civil liability system as a societal risk management system. It reflects a complex set of tradeoffs and it evolves constantly.

Risk Management and Liability

Let's think about risk. Suppose you buy a product or service and something bad happens. Somebody gets hurt or loses money. Who should pay? How much? Why?

The Fault-Based Approach

If the product was defective, or the service was performed incompetently, there's natural justice in saying that the seller should pay. This is a fault-based approach to liability.

First problem with the fault-based approach: How do we define "defective"? The word is surprisingly slippery.

I ventured a definition for serious defects in Kaner (1997a). I think the approach works, but it runs several pages. It explores several relationships between buyers and sellers, and it still leaves a lot of room for judgment and argument. More recently, I was asked to come up with a relatively short definition of "defect" (serious or not). After several rounds of discussion, I'm stalled.

I won't explore the nuances of the definitional discussions here. Instead, here's a simplification that makes the legal problem clear. Suppose we define a defect as failure to meet the specification. What happens when the program does something obviously bad (crashes your hard disk) that was never covered in the spec? Surely, the law shouldn't classify this as non-defective. On the other hand, suppose we define a defect as any aspect of the program that makes it unfit for use. Unfit for whom? What use? When? And what is it about the program that makes it unfit? If a customer specified an impossibly complex user interface, and the seller built a program that matches that spec, is it the seller's fault if the program is too hard to use? Under one definition, the law will sometimes fail to compensate buyers of products that are genuinely, seriously defective. Under the other definition, the law will sometimes force sellers to pay buyers even when the product is not defective at all.

This is a classic problem in classification systems. A decision rule that is less complex than the situation being classified will make mistakes. Sometimes buyers will lose when they should win. Sometimes sellers will lose. Both sides will have great stories of unfairness to print in the newspapers.

Second problem with the fault-based approach: We don't know how to define "competence" when we're talking about software development or software testing services. I'll come back to this later, in the discussion of professional liability.

Third problem: I don't know how to make a software product that has zero defects. Despite results that show we can dramatically reduce the number of coding errors (Ferguson, Humphrey, Khajenoori, Macke, & Matuya, 1997; Humphrey, 1997), I don't think anyone else knows how to make zero-defect software either. If we create too much pressure on software developers to make perfect products, they'll all go bankrupt and the industry will go away.

In sum, finding fault has appeal, but it has its limits as a basis for liability.

Technological Risk Management

It makes sense to put legal pressure on companies to improve their products because they can do it relatively (relative to customers) cheaply. In a mass market product, a defect that occasionally results in lost data might not cost individual customers very much, but if you total up all the costs, it would probably cost the company a great deal less to fix the bug than the total cost to customers. (Among lawyers, this is called the principle of the "least cost avoider." You put the burden of managing a risk on the person who can manage it most cheaply.)
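The least-cost-avoider comparison is simple arithmetic. Here is a minimal sketch in Python, with all figures invented purely for illustration: the aggregate loss to customers from a data-losing bug, compared against the company's one-time cost to fix it.

```python
# Hypothetical least-cost-avoider arithmetic (all figures invented).
# A mass-market product ships with a bug that occasionally loses data.

units_sold = 200_000          # installed base
failure_rate = 0.01           # fraction of customers who hit the bug
loss_per_customer = 150.00    # average cost to an affected customer ($)
cost_to_fix = 40_000.00       # engineering cost to fix and ship a patch ($)

# Total social cost if the bug ships, borne in small pieces by many customers.
total_customer_loss = units_sold * failure_rate * loss_per_customer

print(f"Aggregate customer loss: ${total_customer_loss:,.0f}")
print(f"Company's cost to fix:   ${cost_to_fix:,.0f}")

# The least-cost avoider is whoever can manage the risk more cheaply.
if cost_to_fix < total_customer_loss:
    print("The company is the least-cost avoider; liability pressure is efficient here.")
else:
    print("Customers, in aggregate, can absorb this risk more cheaply.")
```

With these invented numbers, each customer's $150 loss looks trivial, but the $300,000 aggregate dwarfs the $40,000 fix, which is exactly the pattern the least-cost-avoider principle is meant to catch.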

I call this technological risk management--because we are managing the risk of losses by driving technology. Losses and lawsuits are less likely when companies make better products, advertise them more honestly, and warn customers of potential hazards and potential failures more effectively.

At our current stage of development in the software industry, I think that an emphasis on technological risk management is entirely appropriate. We save too many nickels in ways that we know will cost our customers dollars.

However, we should understand that the technological approach is paternalistic. The legal system decides for you what risks companies and customers can take. This drives schedules and costs and the range of products that are available on the market.

The technological approach makes obvious sense when we're dealing with products like the Pinto, which had a deadly defect that could have been fixed for $11 per car. It's entirely appropriate whenever manufacturers will spend significantly less to fix a problem than the social cost of that problem. But over time, this approach gets pushed at less and less severe problems. In the extreme, we risk ending up with a system that imposes huge direct and indirect taxes on us all in order to develop products that will protect fools from their own recklessness.

As we move in that direction, many companies and individuals find the system intolerable. Starting in the 1970s, we heard calls for "tort reform" and a release from "oppressive regulations." The alternative is commercial risk management: let buyers and sellers make their own deals and keep the government out of it.

Commercial Risk Management

This is supposed to be a free country. It should be possible for a buyer to say to a seller, "Please, make the product sooner, cheaper, and less reliable. I promise not to sue you."

The commercial risk management strategy involves allocation of risk (agreeing on who pays) rather than reduction of risk. Sellers rely on contracts and laws that make it harder for customers to sue sellers. Customers and sellers rely on insurance contracts to provide compensation when the seller or customer negligently makes or uses the product in a way that causes harm or loss.

This approach respects the freedom of people to make their own deals, without much government interference. The government role in the commercial model is to determine what agreement the parties made, and then to enforce it. (Among lawyers, this is called the principle of "freedom of contract.")

The commercial approach makes perfect sense in deals between people or businesses who actually have the power to negotiate. But over time, the principle stretches into contracts that are entirely non-negotiated. A consumer buying a Microsoft product doesn't have bargaining power.

Think about the effect of laws that ratify the shrink-wrapped "license agreements" that come with mass-market products. In mass-market agreements, we already see clauses that disclaim all warranties and that eliminate liability even for significant losses caused by a defect that the publisher knew about when it shipped the product. Some of these "agreements" even ban customers from publishing magazine reviews without the publisher's permission, such as this one, which came with McAfee's VirusScan: "The customer will not publish reviews of the product without prior written consent from McAfee."

Unless there is intense quality-related competition, the extreme effect of a commercial risk management strategy is a system that ensures that the more powerful person or corporation in the contract is protected if the quality is bad but that is otherwise indifferent to quality.

Without intense quality-driven competition, some companies will slide into lower quality products over time. Eventually this strategy is corporate suicide, but for a few years it can be very profitable.

Ultimately, the response to this type of system is customer anger and a push for laws and regulations that are based on notions of fault or of technological risk management.

Legal Risk Management Strategies are in Flux

Technological and commercial risk management strategies are both valid and important in modern technology-related commerce. But both present characteristic problems. The legal policy pendulum swings between them (and other approaches).

Theories of Software Liability

Software quality advocates sometimes argue that we should require companies to follow reasonable product development processes. This is a technological risk management approach, which is obvious to us because that's what we do for a living: use technology to improve products and reduce risks.

A "sound process" requirement fits within some legal theories, but not others. There are several different theories under which we can be sued. Different ones are more or less important, depending on the legal climate (i.e., depending on which legal approach to risk management is dominant at the moment).

A legal "theory" is not like a scientific theory. I don't know why we use the word "theory." A legal theory is a definition of the key grounds of a lawsuit. For example, if you sue someone under a negligence theory, you must prove that the defendant owed you a duty of care, that it breached that duty, and that the breach caused you a loss.

Every lawsuit is brought under a specifically stated theory, such as negligence, breach of contract, breach of warranty, etc. I provided detailed definitions of most of these theories, with examples, in Kaner, Falk, & Nguyen (1993). You can also find some of the court cases at my web site, along with more recent discussion of the law--check the course notes for my tutorial at Quality Week, 1997, at www.badsoftware.com.

Quality Cost Analysis

Any legal theory that involves "reasonable efforts" or "reasonable measures" should have you thinking about two things:

We are, or should be, familiar with cost/benefit thinking, under the name of "Quality Cost Analysis" (Gryna, 1988; Campanella, 1990).

Quality cost analysis looks at four ways that a company spends money on quality: prevention, appraisal (looking for problems), internal failure costs (the company's own losses from defects, such as wasted time, lost work, and the cost of fixing bugs), and external failure costs (the cost of coping with the customer's responses to defects, such as the costs of tech support calls, refunds, lost sales, and the cost of shipping replacement products). Note that the external failure costs that we consider as costs of quality reflect the company's costs, not the customer's.
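As a minimal illustration (all dollar figures are invented), the four categories can be tallied like this. Note that the external-failure line is the company's cost of coping with defects, not the customers' own losses:

```python
# Hypothetical cost-of-quality tally for one release (all figures invented).
# The four categories follow quality cost analysis (Gryna, 1988;
# Campanella, 1990). External failure here is the COMPANY's cost of
# customer-reported defects, not the customers' own losses.

costs = {
    "prevention":       25_000,  # training, design reviews, process work
    "appraisal":        60_000,  # testing and inspection (looking for problems)
    "internal_failure": 45_000,  # rework, wasted builds, the cost of fixing bugs
    "external_failure": 90_000,  # support calls, refunds, replacement products
}

total = sum(costs.values())
for category, dollars in costs.items():
    print(f"{category:18s} ${dollars:>8,}  ({dollars / total:.0%} of total)")
print(f"{'total':18s} ${total:>8,}")
```

Even this crude tally makes the point of the next paragraph: nothing in the four categories counts what the defect costs the customer.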

Previously (Kaner, 1996a), I pointed out that this approach sets us up to ignore the losses that our products cause our customers. That's not good, because if our customers' losses are significantly worse than our external failure costs, we risk being blindsided by unexpected litigation.

The law cares more about the customer's losses. A manufacturer's conduct is unreasonable if it would have cost less to prevent or detect and fix a defect than it costs customers to cope with it (Kaner, 1996b).

Cost of quality analysis was developed by Juran as a persuasive technique. "Because the main language of [corporate management] was money, there emerged the concept of studying quality-related costs as a means of communication between the quality staff departments and the company managers" (Gryna, 1988, p. 42). You can use this approach without ever developing complex cost-tracking systems. Whenever a product has a significant problem, of any kind, it will cost the company money. Figure out which department is most likely to lose the most money as a result of this problem and ask the head of that department how serious the problem is. How much will it cost? If she thinks it's important, bring her to the next product development meeting and have her explain how expensive this problem really is. There is no expensive cost-tracking system in place, but there's a lot of persuasive benefit here.

When the company's cost of external failures is less than the cost a customer will face, don't use these numbers to try to persuade management to fix the problem. The numbers aren't persuasive and they almost certainly underestimate the long term risks (litigation and lost sales). Instead, come up with some scenarios, examples that illustrate just how serious the problem will be for some customers. Make management envision the problem itself and the extent to which it will make customers unhappy or angry.

Survey of the Theories

Here's a quick look at theories under which a software developer can be sued:

Software service providers can also be sued. A "software service provider" is a "person" that writes custom software, maintains or supports software, trains other people to use software, does software testing or certification, or enters into other contracts involving software in which a significant component of the benefit to be provided by the seller involves human labor. Service providers can be sued in many of the ways that I listed for products, above, but also for:

Some software quality advocates are calling for professionalization. They say that software quality "engineers" should be licensed by the government and held to professional standards. If you are thinking along these lines, please consider the problem that we need a solid basis for distinguishing unacceptable from acceptable practices. Otherwise, professional liability will be a lottery: you will be sued for practices that you consider good. Here are some examples of disagreements:

I'm not alone in these views, but many of you will disagree with me. That's the point of these examples. We do not have a consensus on professional practices.

One last word of caution. If a person who is not a licensed professional identifies herself to potential clients as a professional, those clients can sue her for malpractice as if she were a member of that profession. If you call yourself a "quality engineer" (in California) or an "engineer" (in some other states), you might find yourself at the wrong end of an engineering malpractice suit. The suit will challenge the judge and jury to figure out what professional knowledge and standards a software quality engineer would have if there were such a profession and if it had generally accepted practices. Your lawyer would make a lot of money on this case.

Uniform Commercial Code Revisions

This paper is based on my talks at ASQ (Kaner, 1997h) and PNSQC. I spent a fair bit of time at those meetings on Article 2B, the proposed revisions to the Uniform Commercial Code. I've written about that proposal in this magazine (Kaner, 1996c) and in other publications that you may have read (such as Kaner, 1997f; Kaner & Lawrence, 1997). I'll write a survey of Article 2B for this magazine again as it gets closer to its final form.

At the moment, I think Article 2B is a disaster in the making. If you're interested in details, Kaner (1997g) is my best current commentary. Article 2B is a moving target, significantly revised every two months. Earlier detailed analyses appear in Kaner 1997b, 1997c, and 1997d.

The only reason that I mention 2B here is to bring us back to the issue of risk management that opened this paper. Article 2B illustrates this decade's trend, which is an almost exclusive focus on commercial risk management. 2B is so heavily biased partly because so few of us on the technology side are making ourselves available to explain technological risk management to the commercial lawyers who are drafting legislation. The same problems are showing up in the Uniform Electronic Transactions Act, which will govern electronic commerce. Lawyers need concrete examples and clear explanations of how law can be used to encourage technological improvement, or they will stick with purely commercial approaches. If the only tools that lawyers have are hammers, they will pass laws declaring that all non-hammers are nails.

REFERENCES



 
The articles at this web site are not legal advice. They do not establish a lawyer/client relationship between me and you. I took care to ensure that they were well researched at the time that I wrote them, but the law changes quickly. By the time you read this material, it may be out of date. Also, the laws of the different States are not the same. These discussions might not apply to your circumstances. Please do not take legal action on the basis of what you read here, without consulting your own attorney.
Questions or problems regarding this web site should be directed to Cem Kaner, kaner@kaner.com.
Last modified: Tuesday November 11, 1997. Copyright © 1997, Cem Kaner. All rights reserved.