Ideas that don’t make money

The sad Internet news of this week is that the multiplayer online game/community Glitch will have to shut down next month. The announcement makes it sound like mostly a financial problem (not enough revenue to keep going), with a side order of getting caught between technology curves. They built the desktop client on Flash, which is on its way out now, but the technologies that will replace it are not completely ready yet; meanwhile, Flash is mostly unavailable on mobile devices, and they didn’t have the engineering manpower to build a whole new client for each such platform.

This is a personal disappointment for me, since I liked the game, but it’s also not the first time I’ve seen an Internet community built around a compelling idea fall apart because the money wasn’t there. Something very similar happened to Metaplace and Faunasphere. It’s not just games: the WELL, paragon of elder days, had to be bought out by its users, and that was only possible because it goes back to elder days and some of its users are now very, very rich. TV Tropes, timesink par excellence and valuable resource for high school English students, is ad-supported, so it keeps getting jerked around by Google.

You get the idea: the ecology around the Web is only capable of supporting ideas that bring in the money. It doesn’t really matter how good the idea is on its own terms, or how much its audience wants it, if that audience isn’t big enough to provide enough money. Kickstarter and the like help with that last bit, but they don’t work for things that need lots of money or a continuous stream of money. Glitch staff quoted a figure of six million U.S. dollars a year to keep the game running, which is small as businesses go: thirty-ish people at $100,000/yr comes to three million, plus however much the servers and the connectivity cost, plus overhead. But even one million dollars is extraordinary for a Kickstarter project, never mind six million a year, every year.

The requirement for a continuous stream of money to keep the servers running also hurts things on the Net that were successful but are now declining. I can still play Super Mario World any time I want; even after the original hardware stops working altogether, there will be emulators. But I can’t go back to Star Wars Galaxies, and I’m not sure whether to believe the website that’s telling me I can still play Uru Live. Again, this isn’t just about games; we all remember what happened to GeoCities.

Free software helps, but not enough, because possessing all the code and data for a client-server MMO is not the hard part. Some specific person or group has to actually run the server, and now we’re back to that continuous-stream-of-money requirement—most of which will be going to people, not to computrons or tubes. You might not need developers, but you definitely need sysadmins. I was a sysadmin in college, for a tiny little computer lab that almost never had crises at four in the morning, and it was still a shitload of work. For an MMO you also need in-game and out-of-game moderators, an even more difficult and thankless gig than sysadminning; while people do sometimes volunteer to do it for free, often those are exactly the people who should not be doing that job (yeah, I’m looking at you, Reddit).

Is there a solution? I don’t have one. I think it’s more a problem of capitalism than a problem of software architecture.

CCS 2012 Conference Report

The ACM held its annual Conference on Computer and Communications Security two weeks ago today in Raleigh, North Carolina. CCS is larger than Oakland and has two presentation tracks; I attended fewer than half of the talks, and my brain was still completely full afterward. Instead of doing one exhaustive post per day like I did with Oakland, I’m just going to highlight a handful of interesting papers from the entire conference, plus the pre-conference Workshop on Privacy in the Electronic Society.

Continued…

CCS’12: StegoTorus

I just presented the major focus of my time and effort for the past year-and-a-bit, StegoTorus, at this year’s ACM Conference on Computer and Communications Security. You can see my slides and the code (also at GitHub). I was going to explain in more detail, but all of my brain went into actually giving the talk. My apologies.

This is an ongoing project and we are looking for help; please do get in touch if you’re interested.

git backend, hg cli

LWN has an article with a nice chunky comment thread talking about the history of DVCSes and how Git has basically taken over the category. Mozilla, of course, still mostly uses Mercurial, but there are a lot of people who prefer Git now, and there are bridges and stuff.

I have a weird perspective on all of this. I hacked on Monotone back in the day, so I have the basic DVCS concept cold, and Mercurial is only a little different; it never surprises me. Git, however… I read the documentation, and I think I understand what’s going on, and then I do something that according to (my understanding of) the documentation should do what I want, and instead it mangles my local repo and I get to spend an hour or two repairing it. Or, in one memorable case, it mangled the remote, shared repo—thankfully that was easily fixed once I figured out what it had done, but I still don’t know why it did that instead of what I expected. (It came down to which branch’s HEAD pointer got updated with the result of a merge.) I’ve been actively hacking on projects whose primary VCS is Git for over a year now, and this consistently happens to me about once every 20 to 40 hours of coding time.

So I don’t trust Git and I don’t like using it. I do, however, appreciate its speed, which as far as I can tell is down to back-end stuff—storage format, network protocol, and so on. So here’s what I want: I want someone to write an exact clone of the Mercurial CLI that uses Git’s back end. I have no time, but I would totally contribute money to the development of this. It has to be an exact clone in terms of command-line behavior, though. If that means throwing away front-end features of Git, I am 100% fine with that. I would happily lose the index/working-copy distinction, for instance. I could also live with losing support for arbitrary Mercurial extensions; I would miss MQ in principle, but I suspect there’s an alternate development model for Mozilla that doesn’t need it. Everyone else seems to manage.

Anyone else interested in something like that?

Making OS X Emacs less broken

If you find that Emacs on OS X fails to pick up the same $PATH setting that you get in command-line shells, instead falling back to an impoverished default that doesn’t include (for instance) anything installed via MacPorts, drop this into your init file:

(add-hook 'after-init-hook
          (lambda ()
            (setenv "PATH"
                    (with-temp-buffer
                      ;; Run a login shell so the usual profile files
                      ;; are read; printf avoids the trailing newline
                      ;; that echo would add.
                      (call-process "/bin/bash" nil
                                    ;; stdout to this buffer, stderr discarded
                                    (list (current-buffer) nil)
                                    nil
                                    "-l" "-c" "printf %s \"$PATH\"")
                      (buffer-string)))))
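
Emacs consults exec-path, not $PATH, when it launches programs itself, so you may also want to keep the two in sync. A minimal sketch, untested; the final t appends the hook so it runs after the $PATH import above:

(add-hook 'after-init-hook
          (lambda ()
            ;; Rebuild exec-path from the freshly imported $PATH.
            (setq exec-path
                  (append (split-string (getenv "PATH") path-separator)
                          (list exec-directory))))
          t)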

I am only embarrassed that I put up with the brokenness for as long as I did.

HTML Fragment Parser with Substitution and Syntactic Sugar

This is a little off my usual beaten path, but what the heck.

What follows is a pair of related proposals: one for a new DOM feature, document.parseDocumentFragment, and one for JS syntactic sugar on top of it. It is a response to Ian Hickson’s E4H Strawman, and is partially inspired by the general quasi-literal proposal for ES-Harmony.

Compared to Hixie’s proposal, this avoids embedding a subset of the HTML grammar in the JS grammar, while at the same time being more likely to conform to author expectations, since the HTML actually gets parsed by the HTML parser. It should have at least equivalent expressivity and power.

Motivating Example

function addUserBox(userlist, username, icon, attrs) {
  // The h`...` quasi-literal hands its text to the real HTML parser
  // and returns a document fragment; {attrs} and {username} are
  // substitution holes filled in from the enclosing scope.
  var section = h`<section class="user" {attrs}>
                    <h1>{username}</h1>
                  </section>`;
  if (icon)
    section.append(h`<img src="{icon}" alt="">`);
  userlist.append(section);
}
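
For a sense of how the sugar relates to the underlying DOM feature, here is roughly what the first h`...` literal above might desugar into. The signature shown for document.parseDocumentFragment is a hypothetical sketch, not a settled part of the proposal:

// Assumed signature: parseDocumentFragment(strings, values), taking
// the literal's fixed text and the substitution values as parallel
// arrays, much as an ES-Harmony quasi-literal tag function would.
var section = document.parseDocumentFragment(
  ['<section class="user" ', '>\n  <h1>', '</h1>\n</section>'],
  [attrs, username]);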

Continued…

teaser: some very alpha software

Readers of this blog may find https://github.com/TheTorProject/stegotorus and https://github.com/zackw/moeller-ref of interest.

The Conference Formerly Known as Oakland, day 3

This day had a lot of interesting papers, but some of the presentations were disappointing: they spent their time on uninteresting aspects of their work, or handwaved over critical details.

That said, most of the work on passwords was compelling, and if you read to the end there’s a cranky rant about the panel discussion.

Continued…

The Conference Formerly Known as Oakland, day 2

I skipped the 8:30 AM session today; it was mostly not of interest to me, and I badly needed the extra hour of sleep. I’m sorry to have missed On the Feasibility of Internet-Scale Author Identification, but I will read the paper. I also skipped the business meeting, so the summaries start with the 10:30 session and end with the short talks.

Continued…

The Conference Formerly Known as Oakland, day 1

I’m attending the 2012 IEEE Symposium on Security and Privacy, and I’m going to try taking notes and posting them here again. The last time I tried this (at CCS 2010), most of the notes never got posted, but I paid a whole lot more attention to the talks than I do when I’m not taking notes. This time, I’m going to try to clean up each day’s notes and post them the next morning at the latest.

S&P was at the Claremont Hotel in Oakland, California for thirty-odd years, and they didn’t really want to leave, but there wasn’t room for all the people who wanted to attend. Last year they turned nearly 200 people away. This year, it’s in San Francisco at a hotel on Union Square—amusingly, the exact same hotel that USENIX Security was at, last August—with much higher capacity, and while I still have to get up at dawn to get there on time, at least I don’t have to drive.

I have not had time to read any of the papers, so this is all based only on the talks. However, where possible I have linked each section heading to the paper or to a related website.

Mozilla folks: I would like to draw your attention particularly to the talks entitled Dissecting Android Malware, The Psychology of Security for the Home Computer User, and User-Driven Access Control.

Continued…