
Taking private commerce seriously

With many authoritarian governments cracking down on peaceful exchange and commerce, including via electronic means, there is clearly a need for discussion and e-commerce software with good privacy and security — hence a market opportunity to provide the technical means to achieve such peaceful human interaction. Of course I am only discussing the means of avoiding oppression by illegitimate governments, and not at all the means of escaping the just surveillance of whatever legitimate governments there may be. You're each big enough to distinguish oppression from justice by yourself, and to know when to surrender even to oppressive governments.

To seize this opportunity, a group of technically advanced software developers could write general-purpose secure discussion and e-commerce software. To demonstrate their technology, these developers would maintain a discussion and commerce site beyond any legal reproach: that site would defensively follow every applicable regulation in every single country in which it is allowed to operate, which includes restricting speech and commerce to what is uncontroversially legal, paying all relevant taxes, and tracking users no less (though no more) than legally required. Of course, the software being open source, anyone can trivially customize and deploy it with minimal configuration, making their own responsible choices as to which oppressive laws to flout in which oppressed countries. But official maintainers of the software are barred from ever operating or having operated such a site. A clean separation is essential to avoid any legal trouble, and to keep the project running, accepting funds and attracting academic contributions. Thus, the open source project may accept anonymous donations (within legal limits) and people can post bounties on its bug tracker. No open source developer will ever touch anything illegal — they just help build basic secure internet infrastructure.

There are many technical challenges, with two main themes, distinct but related: raising the state-of-the-art in secure software, and rejecting those mainstream technologies that are insufficiently secure.

The first theme, raising the state of the art in secure software, matters because every government crackdown on private networks demonstrates that current security is insufficient. And it's not just a matter of "currently": security is a race. You don't have to be perfect, you only have to be better than the other guy; but it's also not enough to be perfect, you still have to be better than the other guy. And as the other guy improves, you must improve ahead of him if you want to stay ahead; and you want to stay ahead if you don't want to spend the rest of your life in prison. Offense has a permanent team working to improve its position in the race; defense also needs a permanent team if it wants to compete. That is why a dedicated team is needed.

Also, "technology" must be understood broadly: it is not just a matter of the software, or its development process, but also of the people running it. Operational security requires operators to follow a strict discipline, jumping through sufficient hoops, every single the time, to cover their tracks. Software can help with it, by automating what can be automated, by providing checklists for recurrent human actions, by offering tutorials and training and documentation, etc. Some of the actions shall remain secret and/or context-specific, and each operator of a private site will have to extend the software in ways that make sense.

Obviously technology is also a matter of software, and of software development process. On the pure software technology front, "raising the security level" means that privacy developers should use languages, libraries and protocols that minimize the attack surface of their systems, while maintaining sufficient (and as high as affordable) productivity, within their development budget. Otherwise, they are likely to lose the race to their competitors, and hence to their enemies: as the joke goes, you don't always have to run faster than the tiger, only faster than the other prey that the tiger is just as interested in. In particular, privacy developers should invest in programming languages and operating system infrastructure with formal semantics, limits on side-effects, verified compilers, verified protocols, full abstraction for domain-specific languages, etc.

As regards the software development process, rigorous design, extensive documentation, precise specification, comprehensive testing (including fuzz testing) and stringent review are all necessary, but not sufficient, to promote quality. Reviews can be facilitated by automated linting and code formatting, which remove syntactic concerns from the reviewers so they can focus on the essentials; but even with this burden removed, review is still error-prone, the reviewers of a change may be missing context, and unobvious bugs or underhanded backdoors can pass through. Therefore regular code walkthroughs of the entire codebase, especially before release, are a good idea. And that includes any software dependencies. To make these walkthroughs possible, an emphasis on simplicity is essential — in the style of Alan Kay, whose Viewpoints Research Institute (VPRI) built a complete software system in twenty thousand lines of code, including compiler, network stack and graphical interface, though excluding device drivers, backward compatibility modules, and various extensions.
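To make the "comprehensive testing" above a little more concrete, here is a minimal sketch in Python, using the Hypothesis property-based testing library, of the kind of round-trip test that complements fuzzing. The functions render_order_form and parse_order_form are hypothetical stand-ins for whatever strictly specified format a real project would define.

```python
# A minimal sketch of property-based round-trip testing; parse_order_form and
# render_order_form are hypothetical placeholders for a strictly specified format.
from hypothesis import given, strategies as st

def render_order_form(fields: dict) -> str:
    # Hypothetical strict serializer: one "key=value" line per field, sorted keys.
    return "\n".join(f"{k}={v}" for k, v in sorted(fields.items()))

def parse_order_form(text: str) -> dict:
    # Hypothetical strict parser: reject anything that is not exactly "key=value".
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if not sep or not key:
            raise ValueError(f"malformed line: {line!r}")
        fields[key] = value
    return fields

# Restrict keys and values to a simple alphabet, as a strict format would.
keys = st.text(alphabet="abcdefghijklmnopqrstuvwxyz", min_size=1)
values = st.text(alphabet="abcdefghijklmnopqrstuvwxyz0123456789")

@given(st.dictionaries(keys=keys, values=values))
def test_order_form_round_trip(fields):
    # Whatever the (restricted) input, parsing what we rendered must return it.
    assert parse_order_form(render_order_form(fields)) == fields
```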

And this brings us to the second main theme: rejecting those mainstream technologies that are insufficiently secure. Most mainstream technology is not optimized for security at all; security comes as an afterthought. Typical software environments involve huge codebases with gaping security holes and gigantic attack surfaces, in which it's OK for security to be breached, because there isn't too much money at stake, operators will be there to clean up the mess and restore from backup, and if needed government agents will use supreme force to go after the attackers, not the defenders. But these common standards of security, good enough for mainstream use, are just not good enough when developing software meant to resist attack by oppressive government agencies themselves. And this means much of the existing body of software must necessarily be excluded from the trusted code base used to conduct private commerce. In particular, if the choice of protocols necessitates the use of the giant gas factories that modern web browsers have become, it's game over. These beasts are not likely to be made (much less kept) secure any time soon.

"Insecure by construction" covers a lot of current technologies: HTML, CSS, Javascript, SSL, HTTPS, HTTP, maybe even long-running end-to-end TCP connections; they all have to go. Programming languages such as C++, C, Perl, Python, PHP, Ruby and Java will also have to be wholly avoided. Safe replacements for some of these technologies may have to be developed where they possess unrivaled features, and those features are not completely wrongheaded with respect to security. In developing these replacements, the lessons of the existing technologies can be preserved, and even large parts of their code bases can be ported. Yet when a safer technology replaces an unsafe one, it is important not to try to "look like" the previously popular but unsafe technology, because willful confusion between similar technologies is itself an attack surface for malicious actors. The safe choice is to be explicitly incompatible with any unsafe standard (or worse, insufficiently specified semi-standard), and to fail fast and loud when the attempted use of unsafe technologies is detected. Trying to provide "best effort" compatibility is doomed, as it will open confused users to security issues.
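As a hedged illustration of "fail fast and loud", here is a minimal Python sketch of a client that only accepts an exact, explicitly supported protocol version and refuses everything else rather than attempting "best effort" compatibility; the protocol name and version strings are made up for the example.

```python
# A minimal sketch of "fail fast and loud": accept only exact, explicitly
# supported protocol versions; the names and version strings are hypothetical.
SUPPORTED_VERSIONS = frozenset({"pcp/0.1", "pcp/0.2"})

class UnsupportedProtocol(Exception):
    """Raised when the peer announces anything outside the supported set."""

def negotiate(peer_announcement: str) -> str:
    # No prefix matching, no "close enough" fallback: exact match or refusal.
    if peer_announcement not in SUPPORTED_VERSIONS:
        raise UnsupportedProtocol(
            f"refusing peer announcing {peer_announcement!r}; "
            f"supported versions: {sorted(SUPPORTED_VERSIONS)}"
        )
    return peer_announcement

# negotiate("http/1.1") raises immediately instead of degrading to an unsafe mode.
```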

Authentication by centralized authorities, whether through SSL certificates or secure DNS, is antithetical to the purpose of private commerce. Such authorities may not be used beyond bootstrapping the installation of basic secure software. This disqualifies HTTPS as a valid protocol, even if some "extension" to it were one day to support decentralized certificates as well as centralized ones: its very support for centralized authorities makes it a liability by which users may be confused into connecting via the wrong authorities. Instead, a private commerce protocol must exclusively rely on decentralized identities; the simplest naming scheme might be to read (digests of) cryptographic public keys as sequences of words using (some variant of) diceware (a bit in the style of Urbit's pronounceable names) — and always insisting that users check the full sentence before they complete their first connection. When users run stateful clients, these should remember connection keys and accept aliases (a bit in the style of SSH), and they should warn loudly against partial matches or near-matches.
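Here is a minimal sketch, in Python, of the naming scheme hinted at above: reading a digest of a public key as a sequence of words, and keeping an SSH-style alias store that warns loudly on mismatches. The tiny word list, the SHA-256 choice and the twelve-word length are illustrative assumptions, not a proposed standard.

```python
# A minimal sketch of decentralized naming: read a digest of a public key as a
# word sentence, and remember which sentence belongs to which alias.
# The 16-word list, SHA-256 choice and 12-word length are illustrative only.
import hashlib

WORDS = ["able", "acid", "aged", "also", "area", "army", "away", "baby",
         "back", "ball", "band", "bank", "base", "bath", "bear", "beat"]

def key_to_words(public_key: bytes, n_words: int = 12) -> list:
    digest = hashlib.sha256(public_key).digest()
    nibbles = []
    for byte in digest:
        nibbles.extend((byte >> 4, byte & 0x0F))   # each word encodes 4 bits
    return [WORDS[n] for n in nibbles[:n_words]]

class AliasStore:
    """Remember which word sentence belongs to which human-chosen alias."""
    def __init__(self):
        self.known = {}  # alias -> tuple of words

    def check(self, alias: str, words: list) -> None:
        expected = self.known.get(alias)
        if expected is None:
            # First contact: insist the user verifies the *full* sentence.
            print(f"new peer {alias!r}: verify the full phrase: {' '.join(words)}")
            self.known[alias] = tuple(words)
        elif tuple(words) != expected:
            matching = sum(a == b for a, b in zip(words, expected))
            raise RuntimeError(
                f"key for {alias!r} does not match the remembered one "
                f"({matching}/{len(expected)} words agree): possible attack"
            )
```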

Network connections may have to be avoided for private commerce, because long-running TCP connections can help identify participants. Keeping communications private against a spy router at your ISP, at other ISPs nationwide and maybe even worldwide, and at anonymizing network nodes, might involve partaking in a mix relay network into which you inject traffic at a constant rate through many intermediate nodes. Doing so might both add a lot of latency and considerably restrict bandwidth. Whatever the means to achieve privacy, they have a price that is likely incompatible with using the latest and greatest web browsing technologies en vogue on the non-private Internet. Private commerce operators must accept that their sites will intrinsically function with much higher latency and lower throughput than non-private sites. Goodbye, Web 2.0, AJAX, HTML5, rich interfaces and bloated pages; no cookies, no personalization, no language or format negotiation. Private commerce technologies should revert to Web 1.0 and earlier: asynchronous store-and-forward mailboxes, where marketplaces transmit catalogs of data, or search engines return hundreds of results, that users browse locally before mailing in order forms.
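As a rough sketch of what "injecting traffic at a constant rate" could look like at the sender, here is a Python loop that emits one fixed-size cell per tick whether or not real data is queued; the cell size, tick interval and send() callback are assumptions for illustration only.

```python
# A minimal sketch of constant-rate traffic injection: one fixed-size cell per
# tick, padded with cover traffic when idle, so an observer at the ISP sees the
# same pattern either way. Cell size, tick interval and send() are assumptions.
import os
import queue
import time

CELL_SIZE = 1024      # bytes per cell (illustrative)
TICK_SECONDS = 0.5    # one cell every half second (illustrative)

def constant_rate_sender(outbox: "queue.Queue", send) -> None:
    """Drain `outbox` at a fixed rate, emitting dummy cells when there is no data."""
    while True:
        try:
            payload = outbox.get_nowait()
        except queue.Empty:
            payload = None  # nothing to say: send cover traffic instead
        if payload is None:
            cell = os.urandom(CELL_SIZE)                     # dummy cell
        else:
            cell = payload[:CELL_SIZE].ljust(CELL_SIZE, b"\x00")
        # In a real mix design every cell would be encrypted, making dummy and
        # real cells indistinguishable on the wire; this sketch omits that layer.
        send(cell)
        time.sleep(TICK_SECONDS)
```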

Bandwidth and security limitations mean that private commerce may consist mainly of text, which may be "rich", but not too rich: there will never be any code in it (no Javascript), and no style engine capable of expressing "weird machines" (like CSS); only a well-specified, precisely-versioned, and strictly-validated variant of markdown (all protocols should be well-specified, precisely-versioned, and strictly-validated). The only style parameters will be screen size and font size, and they will be under the exclusive control of the client, not at all of the server. There will be few pictures and sounds, at a premium; video will be a rare luxury; interactivity will be extremely limited. Private commerce will be austere compared to the public web. But it will be private, and secure.
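As an illustrative sketch of "strictly-validated" text, here is a Python validator that pins an exact format version and rejects any line outside a small whitelist of constructs. The version tag and the exact whitelist are hypothetical; the point is exact validation with loud failure rather than lenient parsing.

```python
# A minimal sketch of a strictly-validated, precisely-versioned text format:
# pin an exact version tag and reject any line outside a small whitelist.
# The version string and the allowed constructs are hypothetical.
import re

FORMAT_VERSION = "pc-text/1"   # hypothetical version tag, checked exactly

ALLOWED_LINE = re.compile(
    r"#{1,3} [^<>`\\]*"        # headings, at most three levels
    r"|[-*] [^<>`\\]*"         # simple list items
    r"|[^<>`\\]*"              # plain text: no angle brackets, backticks, escapes
)

def validate(document: str) -> list:
    lines = document.splitlines()
    if not lines or lines[0] != FORMAT_VERSION:
        raise ValueError("missing or unsupported format version")
    for number, line in enumerate(lines[1:], start=2):
        if not ALLOWED_LINE.fullmatch(line):
            raise ValueError(f"line {number} uses a construct outside the whitelist")
    return lines[1:]

# validate("pc-text/1\n# Catalog\n- 3 kg of coffee beans") returns the body;
# validate("pc-text/1\n<script>alert(1)</script>") fails fast and loud.
```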

How much will the development of such private commerce infrastructure cost? Probably millions of dollars over several years. But many existing pieces of software can be leveraged: peer-to-peer protocols, mix networks, cryptographic libraries, distributed hash tables, cryptocurrencies, decentralized naming protocols, etc. Yet probably few of those pieces can be used as is, and even those that can will require integration into a coherent system that securely ties all these technologies together. The development team must publish a set of protocols and a reference implementation of clients and servers. This trusted code base should not include anything like a modern web browser, or any other untrustworthy blob of software bloat. (This disqualifies OpenBazaar, anything to do with the Tor Browser, etc.) Others may feel free to fork and extend, or reimplement, the code base; they may even add or remove a backdoor or two, improve the interface or its implementation, contribute features — the code is open source. The core development team will focus on providing the core functionality, making sure that it's tightly secure where it should be, and extensible where it can be.

What's the point of a super-secure cryptocurrency if the weakest link in using it is a terribly insecure client?

PS: Of course a secure client also requires a secure operating system and a secure computer below it, but that's another issue. See my speech Who Controls Your Computer? For the moment, I'd recommend running secure client software on a dedicated computer that doesn't run any other applications (say, a cheap ARM board with its own display and keyboard), and assuming that some general Linux distribution wasn't specifically hacked to target you (and doesn't have such gaping holes that everything is lost for everyone).

Comments

Canaries

Another point: each and every developer should maintain a canary regarding whether anyone has forced them to introduce changes (or to fail to introduce changes) in the code.

As part of their operational checklist, the operators should, every day, every week, or at every code release or maintenance cycle, update a page and make an oath (or a detailed series of oaths) that they haven't been compelled to introduce changes (or to fail to introduce changes) in the software itself (or in the server infrastructure used to disseminate it, or any other process surrounding the software), today or on any previous day.

In the unfortunate event that something happens, the operator stops checking that item in the checklist as they update the page. And/or they can update a counter: "X days since last compelled to tamper with the software project".
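As a sketch of how this checklist item could be automated, here is a small Python routine that either re-affirms the oath or publishes the "days since last compelled" counter; the file name and wording are hypothetical, and a real canary would of course also be cryptographically signed by the operator.

```python
# A minimal sketch of the canary checklist item: either re-affirm the oath or
# publish how long it has been since the operator was last compelled. The file
# name and wording are hypothetical; a real canary would also be signed.
import datetime

CANARY_FILE = "canary.txt"   # hypothetical location, published with the code

OATH = ("As of {date}, I have not been compelled to introduce or withhold any "
        "change to this software, its distribution infrastructure, or any "
        "process surrounding it, today or on any previous day.")

def update_canary(compelled_today: bool, last_compelled=None):
    """Run at each maintenance cycle; returns the (possibly updated) last_compelled date."""
    today = datetime.date.today()
    if compelled_today:
        last_compelled = today
    lines = []
    if last_compelled is None:
        # The oath is only re-affirmed while it remains true.
        lines.append(OATH.format(date=today.isoformat()))
    else:
        days = (today - last_compelled).days
        lines.append(f"{days} days since last compelled to tamper with this project.")
    with open(CANARY_FILE, "w") as f:
        f.write("\n".join(lines) + "\n")
    return last_compelled
```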

Of course, "authorities" could evolve their "gag orders" to compel people to lie and keep publishing false canaries. The presence of canary can never prove that code hasn't been tampered with --- it could be tampered without the canary-maintainer knowing. But the absence of a canary is a powerful signal.