A sender of a message purports to be a person named S.
A receiver of the message purports to be a person named R.
A Certification Authority is named CA.
Sender sends a digitally signed message to R, who checks with CA whether the signature was made by S's key. It was. S has not yet repudiated or suspended the key. In reliance on this, R ships merchandise to the address specified in the message and bills S.
S claims that S never sent this message. R's merchandise is not in S's possession and there is nothing in S's records that indicates that S received the merchandise. Sender is a crook, who impersonated S.
Who should lose the money?
Article 2B recommends that S lose the money. The underlying assumption is that a fraudulent sender would have gained access to S's key through S's negligence. Therefore, the burden of proof should be on S to prove non-negligence, which S can probably not do, even if S was non-negligent. The draft Uniform Electronic Transactions Act follows the same risk allocation.
I have a PGP key pair, which I use to communicate electronically with my clients. I have not registered the key pair with a CA because I think that only an insane person, an ignorant person, or a fool would choose to accept such a risk allocation under modern technology. I advise my clients not to register their key pair with CA's either.
Criminals Will Read And Copy Keys Of Non-Negligent People Who Will Be Unable To Prove Their Non-Negligence
My problem is that reasonable, prudent people may have their key read and copied by a third party under circumstances that look like "normal course of business" situations, without any fault on the part of the key-holder.
Example 1: Electronic Registration
How often have you bought software and, while installing it, been encouraged to register the software electronically? In this case, you fill in a form, and the registration program will then dial the software publisher and upload the registration information to the publisher.
A couple of years ago, a company that makes a widely used electronic registration tool received an award at a software operations conference. The rationale for the award was that the tool facilitated software technical support, because it transmitted information about the customer's computer configuration as well as the information filled out by the customer. This additional information helps a support person troubleshoot your system if you call for help.
Understand this transaction. You fill out a form that appears harmless. You allow the publisher to send this information to itself. Unknown to you, the tool lets the publisher send additional information, perhaps including a copy of your directory structure, your registry of software and hardware, your configuration files, and other stuff. This is happening today, and has been happening for several years.
I am not aware of any electronic registration program that was designed with a criminal purpose, but if programs can read your private directory structure, registry files, etc., and transmit THAT information, they can just as well also transfer all of your PGP-related information.
If your digital signature could be used like cash to order merchandise, someone will use an electronic registration technique to get this information. It's just a matter of time.
Very few customers realize that, when they register software electronically (which is the normal and requested mode by many software publishers), they might also be transmitting plenty of other private information about themselves. A reasonable, prudent non-security-expert would probably not recognize electronic registration of retail software as a security risk. But it is.
If a third party gains access to the customer's key in this way, how will the customer prove non-negligence? How will the customer ever come to realize that this was the means of access?
Example 2: Electronic Bug Reporting
There are several emerging standards for customers to report bugs (defects) electronically to software publishers. I am most familiar with E-Support, which is a reporting system developed by the Software Support Professionals Association (SSPA) and Touchstone Software. My firm is a member company of SSPA. I support its work and personally trust its executives. This use of E-Support as an example is in no way a criticism of SSPA or Touchstone.
Here you are, using your favorite word processor (I'll call it BugWare 97) from your favorite vendor (Let's use a hypothetical vendor name, ShipIt Software). The program fails. Under a system like E-Support, you can now bring up an electronic bug report form and write your complaint/query/plea for help. (The software running on your computer system is an E-Support "client".) You have probably not been trained in software quality control and therefore your bug report will probably miss or obscure some important information. E-Support copes with this by taking a snapshot of parts of your system. It looks at your memory, system files of various kinds, etc. You are made aware of this by the E-Support folks--there is no element here of unfair surprise. You can configure E-Support so that it only transmits certain classes of information, and does not transmit other classes of information.
When the E-Support client takes a snapshot of your system, it encrypts the snapshot. You never get to see what E-Support actually sends in its bug report. The snapshot, along with a plaintext copy of the bug report that you typed, goes back to the E-support server (probably via your e-mail system). The E-support server passes the message to ShipIt Software. It might also forward the message to your printer manufacturer, or to some third party whose product is on your system and might interact with BugWare in a way that makes a problem with one of those products appear to be a BugWare bug. If the receiver of this message is an E-support licensee, then it has the means to decrypt the E-support message and see your configuration. If it is not an E-support licensee, then it can read the plaintext complaint that you wrote, which it receives at no charge, but it cannot decrypt the information about your system.
I believe that the E-Support people are honest and have designed this system in good faith.
But what about a hypothetical product, C-Support, an E-Support look-alike manufactured by your favorite cluster of organized criminals? There is no C-support today, but if you create a financial incentive for stealing encryption keys, they can use a C-support client to do it.
Would a reasonable, prudent person recognize this as a security risk? Maybe you lawyers would say "of course." It sure looks like an obvious risk to me. But when I raised it at an SSPA forum, some attendees (executives, with years of computer support, diagnostics, or service management experience) expressed surprise and dismay that this could be a security risk. In my experience discussing this with customers and technical support specialists, unless I flag the issue to them (directly or indirectly), the security concern is rarely spontaneously raised as a potential problem with the system.
Therefore, I conclude that reasonable, prudent customers might reasonably believe that it is reasonable practice to file electronic bug reports.
So, if C-Support (the hypothetical criminal variation of E-support) took your encryption key from your system when you filed an electronic bug report, how would you know? How would you prove your non-negligence at trial?
Example 3 -- Repairs
If you have a technician service your computer, guess what: the technician has access to your hard disk. If you have an encryption key on the disk, the technician could steal it.
Example 4 -- Remote Control
It is common to allow a remote technician to use a program called "Remote Control" in order to diagnose problems with your computer or program. This is strongly encouraged by several software companies. Some offer discounts to customers who use remote control. Remote Control allows a technician who has called in over a telephone line to control your computer as if they were sitting right there at your keyboard.
A diagnostic session can take quite a while, and a reasonable person might walk away from this unintelligible series of commands being issued by the support technician, get a cup of coffee, and come back when the problem is closer to resolution.
The technician can download documents from your computer, probably in ways that would not be obvious (as to the content being taken) to a normal observer.
Example 5 -- Browser Security, Java Security, Etc.
We constantly hear that Browser X, or integrated office product Y, has some security flaw that allows a web site owner to put up a program that scans your hard disk when you visit their web site. Then we hear that this bug is fixed, just download version 3.04.02.21a and all will be well (until we find a new bug, which will be fixed in 3.04.02.21b).
Anyone who logs onto the internet might hit the web site of an unknown criminal who exploits an unpublicized new security flaw and gains access to the user's files. How will a reasonable, prudent person prove that they were non-negligent if this is how their key was discovered (and they don't know this)?
Example 6 -- Good Old Fashioned Hacking
Buy a fax modem. Connect it to the phone jack. Set the computer up to answer the phone when you're away, either to receive voice calls or faxes (let's not even think about modem calls). Someone calls. They thereby connect to a peripheral device on your computer, and now have the opportunity to hack your machine. They copy your key and you never realize that your machine was hacked. How do you prove your non-negligence?
Should we say that it is negligent to set your computer fax to auto-answer? Maybe I'd personally agree (I don't do this), but this is common practice among computer owners. How can we call the ordinary behavior of reasonable people "negligent"?
Example 7 -- Computer Literate Housekeepers
It is common practice to let your housekeeper clean your house while you are not there. What stops the housekeeper from turning on your computer when you're out and copying the contents of your hard disk to her portable hard drive? Nothing. And there'll be no trace of this on the typical home computer.
It would be unreasonable to declare a societally normal practice "negligent." But if your housekeeper steals your key, how do you prove non-negligence (unless you learn that your housekeeper is the thief)?
Conclusion: Your Key is Not Sufficiently Safe For Strong Presumptions
There are more examples, but this is enough to make the point. Normal, prudent people who behave in ways that I would call not-unreasonable can still end up in a position in which their encryption key is discovered.
If your key is compromised, without your knowledge, how much are you at risk? You stand to lose everything. The house, the dog, all of your money, your credit rating, unlimited liability. Sender's computer(s) can crank out thousands of relatively small orders for merchandise in a relatively short period of time. You don't learn about them until you start receiving bills.
We Should Manage The Risk Rather Than Allocating It
So, let's come back to the problem:
Sender (who calls herself S) sends a message to Recipient (who calls himself R) who checks with Certification Authority (CA) whether the encryption key attributed to S is properly registered with the CA and not repudiated or suspended. CA says the message is a valid S-message and so R ships merchandise to the address specified by Sender. Unfortunately, Sender is a crook and is impersonating S. No one knows who Sender is, Sender is long gone, and the merchandise has disappeared.
Who should pay for the stolen merchandise?
There is no fair allocation of risk here. S, R, and CA are all potential victims of the crook. There is no argument in principle that makes S or R or CA the fairer target to hit.
Rather than arguing over who to stick with the risk of potentially huge liabilities, I think that we should provide incentives in the law -- to the greatest degree that is reasonably practicable -- to reduce the potential liability.
Encryption Is Just One Security Mechanism. We Can Give Customers Control Over Additional Security Capabilities And Then More Fairly Allocate Remaining Risks To Them
My concern with Digital Signature technology is that it relies primarily on one security-protection mechanism, encryption. If the user's key is compromised, she is at risk of unlimited liability.
Contrast this with a credit card number, such as a MasterCard number. The number is transmitted in plaintext. Copies of valid numbers are available in garbage cans, on the street, in every cash register, etc.
There is nothing like encryption in this system, but there is a great deal of risk-of-loss limitation in the MasterCard system. Shortly, I'll list some techniques that member banks use to limit their losses from fraud. Each of these techniques could (in theory) be used with a digital signature, and I'll note that application below.
I don't recommend that all of these techniques be applied to every digital signature. What I do recommend is that a person who creates her own key pair (or who lawfully gets a pair from a third party) should be able to specify whether or not these techniques will be used with her key.
This gives a key owner the opportunity to manage her own level of security and to limit her losses to an amount that she can tolerate. Given this opportunity to manage risk, especially if there is no money cost for adding security, a reasonable customer is more likely to feel fairly treated if she loses money from fraud, because she was able to control the amount of money that she was putting at risk. This provides a much stronger argument for the fairness of allocating risk onto the customer (and thereby reducing risk to the seller and the CA).
Technique 1 -- Delivery Location
Try using your credit card to buy an airplane ticket by phone. The airline will not send the ticket to any address other than your credit card billing address.
DIG-SIG: A great deal of fraud could be eliminated if Sender could only have merchandise delivered to S's billing address.
Technique 2 -- Credit Limit
The member bank refuses to authorize transactions that take you over your credit limit. If you have a $5000 credit limit on your card, then the bank is not at risk of being defrauded of more than $5000.
DIG-SIG: the CA tracks the amount of money signed for under a given digital signature. If the amount signed for within the last 30 days exceeds the subscriber's chosen limit, the CA suspends the key for 10 days.
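As a sketch of how a CA might implement such a rule, here is a minimal rolling-window check. Everything in it (the class name, the bookkeeping, the parameter names) is invented for illustration; I am not describing any real CA's system.

```python
from datetime import datetime, timedelta

class KeySpendingMonitor:
    """Hypothetical CA-side rule: track the total signed for under a key
    over a rolling window, and suspend the key once the total would
    exceed the subscriber's chosen limit."""

    def __init__(self, limit, window_days=30, suspension_days=10):
        self.limit = limit
        self.window = timedelta(days=window_days)
        self.suspension = timedelta(days=suspension_days)
        self.purchases = []          # list of (timestamp, amount)
        self.suspended_until = None  # datetime, or None if not suspended

    def authorize(self, when, amount):
        """Return True if the purchase is allowed under the rolling limit."""
        if self.suspended_until and when < self.suspended_until:
            return False
        # Keep only purchases inside the rolling window.
        self.purchases = [(t, a) for (t, a) in self.purchases
                          if when - t <= self.window]
        if sum(a for _, a in self.purchases) + amount > self.limit:
            self.suspended_until = when + self.suspension
            return False
        self.purchases.append((when, amount))
        return True
```

The point is only that the rule is mechanically simple: a list of recent purchases, a sum, and a suspension timestamp.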
Technique 3 -- Transaction Limit
I tried to buy a computer with a credit card. The purchase was well within my credit limit. This was the largest purchase I'd ever made with that card, by more than an order of magnitude. The bank rejected the transaction as one that was too large under the circumstances.
DIG-SIG: This signature is not valid for purchases over $150.
Technique 4 -- Floor Limit For Authorization
For purchases over $50 ($100, whatever a given store's floor limit is), the retailer must call MasterCard for authorization before completing the transaction. (In many areas now, every purchase is authorized by modem, but the principle is the same--we just have a really low floor limit.)
DIG-SIG: This signature is not valid for purchases of over $100 unless you call me (subscriber) at the following phone number for authorization.
Technique 5 -- Requirement For Additional Identification
I bought a computer for my daughter, using a credit card. The bank required the retailer to check my photo ID before authorizing the purchase. Some merchants require photo ID as a matter of course for credit card transactions, probably as part of an agreement with their bank that reduces what they pay the bank for the credit card transactions.
DIG-SIG: This signature is not valid for purchases of over $100 unless the relying party obtains additional verification of the purchaser's identity (for example, by calling me, the subscriber, back at a pre-registered phone number).
Technique 6 -- Pattern Analysis For Location
If you use a credit card in an odd geographical pattern (Florida, San Francisco, Mexico, and Toronto, in that order, in a two-day period), the credit card issuer might suspend the card until the issuer confirms with the cardholder that he has been travelling through those locations and is the person who used the card.
DIG-SIG -- I don't know of a good analog for this. The assumption of the encryption key is that it will be used with web sites from everywhere.
Technique 7 -- Pattern Analysis For Frequency Of Use
If there is a huge burst of small purchases, the card issuer might suspend the card until checking with the cardholder.
DIG-SIG -- CA should suspend the key if there are X purchases within Y time units. This instruction to the CA is kept reasonably private between the CA and the subscriber.
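The "X purchases within Y time units" rule above is a simple sliding-window count. Here is one way it might look, purely as an illustration (the names and structure are mine, not drawn from any real CA):

```python
from collections import deque

class FrequencyRule:
    """Hypothetical sketch: suspend the key if more than max_purchases
    are signed for within window_seconds. The subscriber chooses the
    values; the CA keeps them private."""

    def __init__(self, max_purchases, window_seconds):
        self.max_purchases = max_purchases
        self.window = window_seconds
        self.times = deque()   # timestamps (in seconds) of recent purchases
        self.suspended = False

    def record(self, when):
        """Record a purchase; return False once the key is suspended."""
        if self.suspended:
            return False
        self.times.append(when)
        # Drop purchases that have aged out of the window.
        while self.times and when - self.times[0] > self.window:
            self.times.popleft()
        if len(self.times) > self.max_purchases:
            self.suspended = True
            return False
        return True
```

Once suspended, the key stays suspended until the CA reaches the subscriber, just as the phone company did with my calling card.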
Technique 8 -- Pattern Analysis For Size Of Purchases
I used to manage clothing stores. Our bank gave managers periodic security training, which we were to pass on to our staff. This particular bank was very likely to suspend a card if a customer was carrying several packages and was making two or more purchases in our store in a way that kept each purchase under our floor limit.
DIG-SIG: In an electronic situation, I'd look for multiple small transactions, small enough that none of them should trigger any alarms, especially if there were multiple separate purchases made from a single seller. There might be good algorithms for this; I don't know enough to know how to specify this choice to a subscriber. Again, the choice made by the subscriber should be kept private between the subscriber and the CA.
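For what it's worth, here is one guess at such an algorithm: flag the key when a single seller accumulates several purchases that each fall under the per-transaction alarm threshold within a short window. This is my own sketch, not an algorithm drawn from any bank or CA, and the thresholds are placeholders.

```python
def flag_structuring(purchases, small_threshold, max_small, window):
    """purchases: list of (time, seller, amount), sorted by time.
    Return the set of sellers whose under-threshold purchases cluster
    suspiciously (max_small or more within the window)."""
    flagged = set()
    for (t_i, seller, amount) in purchases:
        if amount >= small_threshold:
            continue  # only purchases small enough to dodge per-transaction alarms
        count = sum(1 for (t_j, s_j, a_j) in purchases
                    if s_j == seller and a_j < small_threshold
                    and 0 <= t_i - t_j <= window)
        if count >= max_small:
            flagged.add(seller)
    return flagged
```

As in the clothing-store example, the pattern being detected is many purchases deliberately kept under the floor limit.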
Technique 9 -- Notification Of Rate Of Purchases
No credit card issuer has done this for me, but some telephone card issuers have -- after I used the card much more frequently than normal, the phone company called me to check if these calls were mine. Until the company reached me, it suspended my card.
DIG-SIG: e-mail notification or (for a fee) telephonic notification if there have been more than N purchases in M minutes. The rule (the fact that it's turned on for this customer, and the values of N and M) should be private, between the CA and the subscriber.
Technique 10 -- Limited Scope
Some cards (such as a gasoline company credit card) can only be used for some types of purchases.
DIG-SIG: This key can only be used to sign court documents. This other key can only be used for retail merchandise purchases (as opposed to hotels, plane tickets, etc.). This limitation might be kept private between subscriber and CA.
Customers Should Be Free To Use Or Not Use These Methods
Each of these techniques is imperfect. Each of them has proved to be a pain in the neck sometimes. I would not want to IMPOSE these techniques on anyone using a digital ID, but I would want to allow a digital ID subscriber to choose any combination of them (and there are probably several others).
How should we associate a choice of restriction with a key? It seems natural to embed the restriction into the certificate itself, or to store the restriction with the CA, and have the CA enforce some restrictions and notify potentially relying parties of others.
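At the data-model level, the subscriber's choices amount to a small record attached to the key. Here is a hypothetical sketch of what that record might contain, using a few of the techniques above; the field names are invented for illustration and no real certificate format is implied.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class KeyRestrictions:
    """Hypothetical subscriber-chosen restrictions, embedded in the
    certificate or held privately by the CA."""
    transaction_limit: Optional[float] = None  # Technique 3: per-purchase cap
    floor_limit: Optional[float] = None        # Technique 4: call subscriber above this
    delivery_address_only: bool = False        # Technique 1: ship to billing address only
    scope: Optional[str] = None                # Technique 10: e.g. "retail" only

    def permits(self, amount, purpose, ships_to_billing_address) -> Tuple[bool, str]:
        """Check a proposed transaction against the subscriber's choices."""
        if self.transaction_limit is not None and amount > self.transaction_limit:
            return (False, "over transaction limit")
        if self.floor_limit is not None and amount > self.floor_limit:
            return (False, "call subscriber for authorization")
        if self.delivery_address_only and not ships_to_billing_address:
            return (False, "delivery restricted to billing address")
        if self.scope is not None and purpose != self.scope:
            return (False, "outside key's scope")
        return (True, "ok")
```

Whether such a record lives inside the certificate or in the CA's database is an implementation choice; the subscriber's ability to set it is the point.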
I've been told that this isn't technologically feasible today. I don't know if that's true. Assume that it is. Nothing stops us from writing rules for the short term, today, and better rules that will automatically replace the first set, 5 years from now. For example, we can allocate risk today in ways that put some liability burdens on CA's, that will disappear as soon as they adopt the new risk management features.
Will The Market Protect Us?
I'm hesitant to rely on "the market" to guarantee availability of these or any other loss control features. We have absolutely no assurance that CAs and other vendors will go out of their way to improve customer security when the customer bears all the risk of a breach of security. Competition might result in this, but we can't rely on that:
In Britain, the courts have not yet been so demanding; despite a parliamentary commission that found the personal identification number (PIN) system was insecure, bankers simply deny that their systems can ever be at fault. Customers who complain about phantom withdrawals are told that they must be lying, or mistaken, or that they must have been defrauded by their friends or relatives. This has led to a string of court cases in the U.K. . . .
The three main causes of phantom withdrawals did not involve cryptography at all: they were program bugs, postal interception of cards, and thefts by bank staff. . . .
It is well known that it is difficult to get an error rate below 1 in 10,000 on large, heterogeneous transaction processing systems such as ATM networks; yet, before the British litigation started, the government minister responsible for Britain's banking industry was claiming an error rate of 1 in 1.5 million! Under pressure from lawyers, this claim was trimmed to 1 in 250,000, then 1 in 100,000, and most recently to 1 in 34,000. . . .
British banks dismiss about 1% of their staff every year for disciplinary reasons, and many of these firings are for petty thefts in which ATMs can easily be involved. There is a moral hazard here: staff know that many ATM-related thefts go undetected because of the policy of denying that they are even possible.
Security methods are speed bumps on the fraud superhighway. They slow fraud down but over time, even the strongest security methods get beaten.
In some ways, public key encryption is remarkably secure. In other ways, we can identify clear risks. If the primary method of authenticating a document is a digital signature, then I recommend that the liability rules change so that either keyholders are given the means to manage and limit their own risk, or liability is allocated to the third party that controls the risk management technology.
I have identified additional means for customers to protect themselves. These look at the capabilities that a key-holder can (or should be able to) associate with their key.
If you create a system that lets a person manage and limit her own risk, it is fairer to allocate that risk to her, and she will probably perceive the system as fair. If you create a system that puts technological risk management in the hands of a third party, it is unfair and unsound (it doesn't further the improvement of security) to allocate unlimited liability to the keyholder instead of the third party.
Drafts of this paper were circulated to the September, 1997, meetings of the drafting committees for the Uniform Electronic Transactions Act and the Uniform Commercial Code, Article 2B. The final draft of this paper will be published in the February, 1998 issue of the Uniform Commercial Code Bulletin.
About Cem Kaner
Cem Kaner attends Article 2B meetings and Uniform Electronic Transactions Act meetings as an observer. He practices law, usually representing individual developers, small development services companies, and customers. He also consults on technical and management issues and teaches within the software development community.
His book, Testing Computer Software, received the Award of Excellence in the Society for Technical Communication's 1993 Northern California Technical Publications Competition. It is currently the best selling book in its area.
Kaner has managed every aspect of software development, including software development projects, software testing groups and user documentation groups. He has also worked as a programmer, a human factors analyst / UI designer, a salesperson, a technical writer, and an associate in an organization development consulting firm. He teaches courses on software testing and on the law of software quality at UC Berkeley Extension, at UC Santa Cruz Extension, and by private arrangement.
He has also served pro bono as a Deputy District Attorney, as an investigator/mediator for Santa Clara County's Consumer Affairs Department, and as an Examiner for the California Quality Awards.
Kaner holds a B.A. (Math, Philosophy, 1974), a J.D. (1993), and a Ph.D. (Experimental Psychology, 1984) and is Certified in Quality Engineering by the American Society for Quality Control.