a small dispatch from the coalface

category                           count        %
total                          5,838,383  100.000
ok                             2,212,565   37.897
ok (redirected)                1,999,341   34.245
network or protocol error        798,231   13.672
timeout                          412,759    7.070
hostname not found               166,623    2.854
page not found (404/410)         110,241    1.888
forbidden (403)                   75,054    1.286
service unavailable (503)         18,648    0.319
server error (500)                15,150    0.259
bad request (400)                 14,397    0.247
authentication required (401)      9,199    0.158
redirection loop                   2,972    0.051
proxy error (502/504/52x)          1,845    0.032
other HTTP response                1,010    0.017
crawler failure                      329    0.006
syntactically invalid URL             19    0.000

Redesigning Income Tax

Here is an opinionated proposal, having no chance whatsoever of adoption, for how taxes ought to be levied on income. This post was originally scheduled for Income Tax Day here in the good old U.S.A., but I was having trouble with the algebra and then I was on vacation. So instead you get it for Canadian tax day. Everything is calculated in US dollars and compared to existing US systems, but I don’t see any great difficulty translating it to other countries.

Premises for this redesign:

  • The existing tax code is way too damn complicated.
  • It is the rules that need to be simplified, not the mathematics. In particular, the marginal tax rate does not need to be a step function just to simplify the arithmetic involved. You look that part up in a table anyway.
  • All individuals’ take-home pay should increase monotonically with their gross income, regardless of any other factors.
  • High levels of income inequality constitute a negative externality; income tax should therefore be Pigovian.
  • Tax rates should not vary based on any categorization of income (e.g. interest and capital gains should be taxed at the same rate as wages). This principle by itself removes a great deal of the complexity, and a great deal of the perverse incentives in the current system as well.
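
The second premise above can be illustrated with a minimal sketch. The rate curve here is entirely hypothetical, not the one this proposal uses: it just shows that a smooth marginal rate is no harder to compute with than a step function, and that take-home pay stays monotonic as long as the marginal rate never reaches 100%.

```python
# Sketch: a smooth marginal rate, rather than a step function.
# The curve below (rising smoothly from 0% toward 50%) is purely
# hypothetical and chosen for simplicity, not taken from this post.

def marginal_rate(income):
    """Smooth marginal rate: 0 at zero income, approaching 50%."""
    return 0.50 * income / (income + 100_000.0)

def tax_owed(income, step=100.0):
    """Total tax is the integral of the marginal rate (midpoint rule)."""
    total, x = 0.0, 0.0
    while x < income:
        h = min(step, income - x)
        total += marginal_rate(x + h / 2) * h
        x += h
    return total

# Because the marginal rate never reaches 100%, take-home pay
# (income - tax_owed(income)) rises monotonically with gross income.
```

You would still look your number up in a table in practice; the point is that the table can be generated from a smooth curve instead of a staircase.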

Read on for concrete details of the proposal.

Continued…

Secure channels are like immunization

For a while now, when people ask me how they can improve their websites’ security, I tell them: Start by turning on HTTPS for everything. Run a separate server on port 80 that issues nothing but permanent redirects to the https:// version of the same URL. There’s lots more you can do, but that’s the easy first step. There are a number of common objections to this plan; today I want to talk about the “it should be the user’s choice” objection, expressed for instance in “Google to Gmail customers: You WILL use HTTPS” by Robert L. Mitchell. It goes something like this:

Why should I (the operator of the website) assume I know better than each of my users what their security posture should be? Maybe this is a throwaway account, of no great importance to them. Maybe they are on a slow link that is intrinsically hard to eavesdrop upon, so the extra network round-trips involved in setting up a secure channel make the site annoyingly slow for no benefit.

This objection ignores the public health benefits of secure channels. I’d like to make an analogy to immunization, here. If you get vaccinated against the measles (for instance), that’s good for you because you are much less likely to get the disease yourself. But it is also good for everyone who lives near you, because now you can’t infect them either. If enough people in a region are immune, then nobody will get the disease, even if they aren’t immune; this is called herd immunity. Secure channels have similar benefits to the general public—unconditionally securing a website improves security for everyone on the ’net, whether or not they use that website! Here’s why.

Most of the criminals who crack websites don’t care which accounts they gain access to. This surprises people; if you ask users, they often say things like well, nobody would bother breaking into my email / bank account / personal computer, because I’m not a celebrity and I don’t have any money! But the attackers don’t care about that. They break into email accounts so they can send spam; any @gmail.com address is as good as any other. They break into bank accounts so they can commit credit card fraud; any given person’s card is probably only good for US$1000 or so, but multiply that by thousands of cards and you’re talking about real money. They break into PCs so they can run botnets; they don’t care about data stored on the computer, they want the CPU and the network connection. For more on this point, see the paper Folk Models of Home Computer Security by Rick Wash. This is the most important reason why security needs to be unconditional. Accounts may be throwaway to their users, but they are all the same to the attackers.

Often, criminals who crack websites don’t care which websites they gain access to, either. The logic is similar: the legitimate contents of the website are irrelevant. All the attacker wants is to reuse a legitimate site as part of a spamming scheme or to copy the user list, guess the weaker passwords, and try those username+password combinations on more important websites. This is why everyone who has a website, even if it’s tiny and attracts hardly any traffic, needs to worry about its security. This is also why making websites secure improves security for everyone, even if they never intentionally visit that website.

Now, how does HTTPS help with all this? Several of the easiest ways to break into websites involve snooping on unsecured network traffic to steal user credentials. This is possible even with the common-but-insufficient tactic of sending only the login form over HTTPS, because every insecure HTTP request after login includes a piece of data called a session cookie that can be stolen and used to impersonate the user for most purposes without having to know the user’s password. (It’s often not possible to change the user’s password without also knowing the old password, but that’s about it. If an attacker just wants to send spam, and doesn’t care about maintaining control of the account, a session cookie is good enough.) It’s also possible even if all logged-in users are served only HTTPS, but you get an unsecured page until you log in, because then an attacker can modify the unsecured page and make it steal credentials. Only applying channel security to the entire site for everyone, whoever they are, logged in or not, makes this class of attacks go away.

Unconditional use of HTTPS also enables further security improvements. For instance, a site that is exclusively HTTPS can use the Strict-Transport-Security mechanism to put browsers on notice that they should never communicate with it over an insecure channel: this is important because there are turnkey SSL stripping tools that lurk in between a legitimate site and a targeted user and make it look like the site wasn’t HTTPS in the first place. There are subtle differences in the browser’s presentation that a clever human might notice—or you could direct the computer to pay attention, and then it will notice. But this only works, again, if the site is always HTTPS for everyone. Similarly, an always-secured site can mark all of its cookies secure and httponly which cuts off more ways for attackers to steal user credentials. And if a site runs complicated code on the server, exposing that code to the public Internet two different ways (HTTP and HTTPS) enlarges the server’s attack surface. If the only thing on port 80 is a boilerplate try again with HTTPS permanent redirect, this is not an issue. (Bonus points for invalidating session cookies and passwords that just went over the wire in cleartext.)
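
As a sketch of what those two mechanisms look like on the wire, here are the response headers involved. The max-age value and the cookie name are illustrative only:

```python
# Sketch of the response headers an all-HTTPS site might send.
# The one-year max-age and the cookie name "session" are assumptions
# for illustration, not recommendations from this post.

def security_headers(session_id):
    return {
        # Strict-Transport-Security: tell the browser never to talk to
        # this host over plain HTTP again (for the next year), even if
        # a link -- or an SSL-stripping attacker -- says otherwise.
        "Strict-Transport-Security": "max-age=31536000",
        # Secure: never send this cookie over plain HTTP.
        # HttpOnly: never expose this cookie to page JavaScript.
        "Set-Cookie": f"session={session_id}; Secure; HttpOnly",
    }
```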

Finally, I’ll mention that if a site’s users can turn security off, then there’s a per-user toggle switch in the site’s memory banks somewhere, and the site operators can flip that switch off if they want. Or if they have been, shall we say, leaned on. It’s a lot easier for the site operators to stand up to being leaned on if they can say that’s not a thing our code can do.

Should new web features be HTTPS only?

I doubt anyone who reads this will disagree with the proposition that the Web needs to move toward all traffic being encrypted always. Yet there is constant back pressure in the standards groups, people trying to propose network-level innovations that provide only some of the fundamental three guarantees of a secure channel—maybe you can have integrity but not confidentiality or authenticity, for instance. I can personally see a case for an authentic channel that provides integrity and authenticity but not confidentiality, but I don’t think it’s useful enough to back off on the principle that everything should be encrypted always.

So here’s a way browser vendors could signal that we will not stand for erosion of secure channels: starting with a particular, documented and well-announced, version, all new content features are only usable for fully HTTPS pages. Everything that worked prior to that point continues to work, of course. I am informed that there is at least some support for this within the Chrome team. It might be hard to sell Microsoft on it. What does the fox think?

HTTP application layer integrity/authenticity guarantees

Note: These are half-baked ideas I’ve been turning over in my head, and should not be taken all that seriously.

Best available practice for mutually authenticated Web services (that is, both the client and the server know who the other party is) goes like this: TLS provides channel confidentiality and integrity to both parties; an X.509 certificate (countersigned by some sort of CA) offers evidence that the server is who the client expects it to be; all resources are served from https:// URLs, thus the channel’s integrity guarantee can be taken to apply to the content; the client identifies itself to the server with either a username and password, or a third-party identity voucher (OAuth, OpenID, etc), which is exchanged for a session cookie. Nobody can impersonate the server without either subverting a CA or stealing the server’s private key, but all of the client’s proffered credentials are bearer tokens: anyone who can read them can impersonate the client to the server, probably for an extended period. TLS’s channel confidentiality ensures that no one in the middle can read the tokens, but there are an awful lot of ways they can leak at the endpoints. Security-conscious sites nowadays have been adding one-time passwords and/or computer-identifying secondary cookies, but the combination of session cookie and secondary cookie is still a bearer token (possibly you also have to spoof the client’s IP address).

Here are some design requirements for a better scheme:

  • Identify clients to servers using something that is not a bearer token: that is, even if client and server are communicating on an open (not confidential) channel, no eavesdropper gains sufficient information to impersonate client to server.
  • Provide application-layer message authentication in both directions: that is, both receivers can verify that each HTTP query and response is what the sender sent, without relying on TLS’s channel integrity assurance.
  • The application layer MACs should be cryptographically bound to the TLS server certificate (server→client) and the long-term client identity (when available) (client→server).
  • Neither party should be able to forge MACs in the name of their peer (i.e. server does not gain ability to impersonate client to a third party, and vice versa).
  • The client should not implicitly identify itself to the server when the user thinks they’re logged out.
  • Must afford at least as much design flexibility to site authors as the status quo.
  • Must gracefully degrade to the status quo when only one party supports the new system.
  • Must minimize number of additional expensive cryptographic operations on the server.
  • Must minimize server-held state.
  • Must not make server administrators deal with X.509 more than they already do.
  • Compromise of any key material that has to be held in online storage must not be a catastrophe.
  • If we can build a foundation for getting away from the CA quagmire in here somewhere, that would be nice.
  • If we can free sites from having to maintain databases of hashed passwords, that would be really nice.

The cryptographic primitives we need for this look something like:

  • A dirt-cheap asymmetric (verifier cannot forge signatures) message authentication code.
  • A mechanism for mutual agreement to session keys for the above MAC.
  • A reasonably efficient zero-knowledge proof of identity which can be bootstrapped from existing credentials (e.g. username+password pairs).
  • A way to bind one party’s contribution to the session keys to other credentials, such as the TLS shared secret, long-term client identity, and server certificate.

And here are some preliminary notes on how the protocol might work:

  • New HTTP query and response headers, sent only over TLS, declare client and server willingness to participate in the new scheme, and carry the first steps of the session key agreement protocol.
  • More new HTTP query and response headers sign each query and response once keys are negotiated.
  • The server always binds its half of the key agreement to its TLS identity (possibly via some intermediate key).
  • Upon explicit login action, the session key is renegotiated with the client identity tied in as well, and the server is provided with a zero-knowledge proof of the client’s long-term identity. This probably works via some combination of HTTP headers and new HTML form elements (<input type="password" method="zkp"> perhaps?)
  • Login provides the client with a ticket which can be used for an extended period as backup for new session key negotiations (thus providing a mechanism for automatic login for new sessions). The ticket must be useless without actual knowledge of the client’s long-term identity. The server-side state associated with this ticket must not be confidential (i.e. learning it is useless to an attacker) and ideally should be no more than a list of serial numbers for currently-valid tickets for that user.
  • Logout destroys the ticket by removing its serial number from the list.
  • If the client side of the zero-knowledge proof can be carried out in JavaScript as a fallback, the server need not store passwords at all, only ZKP verifier information; in that circumstance it would issue bearer session cookies instead of a ticket + renegotiated session authentication keys. (This is strictly an improvement over the status quo, so the usual objections to crypto in JS do not apply.) Servers that want to maintain compatibility with old clients that don’t support JavaScript can go on storing hashed passwords server-side.
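
To make the per-message signing step concrete, here is a rough sketch with a hypothetical header name. Note that HMAC is only a stand-in here: it is symmetric, whereas the scheme above calls for an asymmetric MAC that the verifier cannot forge.

```python
# Sketch of per-message signing once session keys are negotiated.
# "X-Session-MAC" is a hypothetical header name, and HMAC-SHA256 is a
# symmetric stand-in for the asymmetric MAC the design actually wants.
import hashlib
import hmac

def sign_request(session_key: bytes, method: str, path: str, body: bytes):
    """Return the header a participating client would attach."""
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    mac = hmac.new(session_key, msg, hashlib.sha256).hexdigest()
    return {"X-Session-MAC": mac}

def verify_request(session_key, method, path, body, headers):
    """Server-side check that the query is what the client sent."""
    expected = sign_request(session_key, method, path, body)["X-Session-MAC"]
    return hmac.compare_digest(expected, headers.get("X-Session-MAC", ""))
```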

I know all of this is possible except maybe the dirt-cheap asymmetric MAC, but I don’t know what cryptographers would pick for the primitives. I’m also not sure what to do to make it interoperable with OpenID etc.

Crashing should be fixed now

This site should no longer be causing certain versions of Firefox (particularly on Mac) to crash. If it still crashes for you, please flush your browser cache and try again. If it still crashes, please let me know about it.

As an unfortunate side effect of the changes required, there is no longer an owl at the bottom of each page. I’d appreciate advice on how to put it back. The trouble is persuading it to be at the bottom of the rightmost sidebar, but only if there is enough space below the actual content—formerly this was dealt with by replicating the background color on the <body> into the content elements for the sidebar, but now it’s all background images and there are visible seams if I do it that way. Note that body::after is already in use for something else, html::after can’t AFAIK be given the desired horizontal alignment, and (again AFAIK) media queries cannot measure the height of the page, only the window; so that excludes any number of more obvious techniques.

(If you mention Flexbox I will make the sad face at you.)

flex: input scanner rules are too complicated

If you get this error message, the Internets may lead you to believe that you have no option but to change magic numbers in the source code and recompile flex. Reader, it is not so. Try the -Ca option, before doing anything else.

No, I don’t know why an option that’s documented to be all about size/speed tradeoffs in the generated, DFA, scanner also has the effect of raising the hard limit on the number of NFA states (from 32000 to about 2³¹), but I already feel dirty just having looked at the code enough to discover this, so I’m going to stop digging while I’m ahead.

some trivia about the Alexa 1M

Alexa publishes a list of the top 1,000,000 sites on the web. Here is some trivia about this list (as it was on September 27, 2013):

  • No entries contain an URL scheme.
  • Only 247 entries contain the string www.
  • Only 13,906 entries contain a path component.
  • There are 987,661 unique hostnames and 967,933 unique domains (public suffix + 1).
  • If you tack http:// on the beginning of each entry and / on the end (if there wasn’t a path component already), then issue a GET request for that URL and chase HTTP redirects as far as you can (without leaving the site root, unless there was a path component already), you get 916,228 unique URLs.
  • Of those 916,228 unique URLs, only 352,951 begin their hostname component with www. and only 14,628 are HTTPS.
  • 84,769 of the 967,933 domains do not appear anywhere in the list of canonicalized URLs; these either redirected to a different domain or responded with a network or HTTP error.
  • 52,139 of those 84,769 domains do respond to a GET request if you tack www. on the beginning of the domain name and then proceed as above.
  • But only 41,354 new unique URLs are produced; the other 10,785 domains duplicate entries in the earlier set.
  • 39,966 of the 41,354 new URLs begin their hostname component with www.
  • 806 of the new URLs are HTTPS.
  • Merging the two sets produces 957,582 unique URLs (of which 392,917 begin the hostname with www. and 15,434 are HTTPS), 947,474 unique hostnames and 928,816 unique domains.
  • 42,734 registration names (that is, the +1 component in a public suffix + 1 name) appear in more than one public suffix. 11,748 appear in more than two; 5,516 in more than three; 526 in more than ten.
  • 44,299 of the domains in the original list do not appear in the canonicalized set.
  • 5,183 of the domains in the canonicalized set do not appear in the original list.
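
For concreteness, the first step of the canonicalization described above (before any redirect-chasing) looks something like this. The function name is mine, not taken from the actual script:

```python
# Sketch: turn a bare Alexa entry into the starting URL for the GET
# request. Redirect-chasing and error handling are left out; add_www
# models the second-pass retry described above.

def starting_url(entry: str, add_www: bool = False) -> str:
    """Alexa entries carry no scheme and usually no path component."""
    host, slash, path = entry.partition("/")
    if add_www:
        host = "www." + host
    # If the entry had no path component, request the site root.
    return "http://" + host + slash + path if slash else "http://" + host + "/"

# e.g. starting_url("example.com") -> "http://example.com/"
```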

Today’s exercise in data cleanup was brought to you by the I Can’t Believe This Took Me An Entire Week Foundation. If you ever need to do something similar, this script may be useful.

Postcard from L.A.

I’ve never liked Los Angeles. It’s too hot, to begin with, and it’s the Platonic ideal of everything that’s been wrong with American urban planning since Eisenhower (if not longer): strangling on its own traffic yet still car-mad, built where the water isn’t, and smeared over a ludicrous expanse of landscape. In the nearly twenty years since I moved away, all these things have only gotten worse. I only come back out of family obligation (my parents still live here), which doesn’t help.

This time, though, I find that I am enjoying myself regardless. I’m here with my sister, who does like it here, knows fun things to do and people to hang out with. We’re not clear out at the west end of the San Fernando Valley near our parents’ house; we’re in North Hollywood, a surprisingly short subway ride from downtown. (There is a subway now. A heavily used, grungy, practical subway. I can hardly believe it.) People even seem to be building somewhat denser. I was able to walk to the nearest dry cleaners’, which is also hardly believable.

We went to a show at Theatre of NOTE last night, called Eat the Runt: billed as black comedy, but really more of a farce, packed full of in-jokes about museums, grantwriting, and the entertainment biz, and with the cast randomly assigned to roles by pulling names out of a hat before each show. It was hilarious, although I wonder how much it depends on those in-jokes.

At the corner of Hollywood and Vine, the Walk of Fame has a special plaque for the Apollo 11 astronauts, shaped like the moon instead of a star, but still with the little brass old-timey TV. (I suppose it was a television broadcast of great significance, although memorializing it as such seems to miss the point.) I am not sure how I have managed never to notice this before.

Tonight, there will be more theater. Tomorrow, there will be the Huntington Library. Monday, back on an airplane.

Institutional secrecy culture is antidemocratic

For the past several weeks a chunk of the news has been all about how the NSA, in conjunction with various other US government agencies, defense contractors, telcos, etc., has, for at least seven years and probably longer, been collecting mass quantities of data about the activities of private citizens, both of the USA and of other nations. The data collected was largely what we call traffic analysis data: who talked to whom, where, when, using what mechanism. It was mostly not the actual contents of the conversations, but so much can be deduced from who talked to whom, and when, that this should not reassure you in the slightest. If you haven’t seen the demonstration that just by compiling and correlating membership lists, the British government could have known that Paul Revere would’ve been a good person to ask pointed questions about revolutionary plots in the colonies in 1772, go read that now.

I don’t think it’s safe to assume we know anything about the details of this data collection: especially not the degree of cooperation the government obtained from telcos and other private organizations. There are too many layers of secrecy involved, there’s probably no one who has the complete picture of what the various three-letter agencies were supposed to be doing (let alone what they actually were doing), and there’s too many people trying to bend the narrative in their own preferred direction. However, I also don’t think the details matter all that much at this stage. That the program existed, and was successful enough that the NSA was bragging about it in an internal PowerPoint deck, is enough for the immediate conversation to go forward. (The details may become very important later, though: especially details about who got to make use of the data collected.)

Lots of other people have been writing about why this program is a Bad Thing: most critically, large traffic-analytic databases are easy to abuse for politically motivated witch hunts, which have occurred in the US in the past and arguably are occurring now as a reaction to the leaks. One might also be concerned that this makes it harder to pursue other security goals; that it gives other countries an incentive to partition the Internet along national boundaries, harming its resilience; that it further harms the US’s image abroad, which was already not doing that well; or that the next surveillance program will be even worse if this one isn’t stopped. (Nothing new under the sun: Samuel Warren and Louis Brandeis’s argument in The Right to Privacy in 1890 is still as good an explanation as any you’ll find of why the government should not spy on the general public.)

I want to talk about something a little different; I want to talk about why the secrecy of these ubiquitous surveillance programs is at least as harmful to good governance as the programs themselves.

Continued…