eyes black and white

Confusing constants and variables in Computer Programming

I am always amazed when people fail to distinguish between constants and variables. I am all the more amazed when the victims of such confusion are the otherwise brilliant implementers of programming languages. You'd think that if anyone knows the difference between a variable and a constant, it would be a programming language implementer.

For instance, a CLISP maintainer explicitly based his argument for making some backdoor compulsory on the belief that the behavior of his hypothetical source-closing adversary would remain the very same after the backdoor was created. But what is constant here is the hypothetical adversarial will of said antagonist, not his behavior; the known backdoor will be trivially circumvented by this adversary, and will only remain a constant hassle and security hazard for all the friendly users.

In another instance, the respected creator of Python argued against proper tail calls because they allegedly lose debugging information as compared to recursion without tail call elimination. But as said hacker implicitly acknowledges without making the explicit mental connection, in programs for which proper tail calls matter, the choice is conspicuously not between proper tail calls and improper tail calls; it is between proper tail calls and explicit, centralized, stateful loops. And the absence of debugging information is constant when you transform tail calls into stateful loops: stateful loops are precisely what makes it harder to get debugging information, whereas proper tail calls can easily be disabled, wrapped or traced (trivially so if you have macros). In addition, state introduces a lot of problems because of the exponential explosion of potential interactions to take into account.

But more importantly, proper tail calls allow for the dynamic, decentralized specification of a program in loosely coupled separate modules by independent people, whereas loops force the static, centralized specification of the same program by one team of programmers in one huge conceptual gob requiring tight coupling. Finally, loops are trivially macro-expressible in terms of tail calls (i.e. through local transformations), whereas it takes a global transformation to turn arbitrary programs that rely on tail calls into programs using loops - and if we allow such global transformations, then who needs Python? INTERCAL is the greatestest language ever designed.
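
To make that last point concrete, here is a minimal sketch in OCaml (a language that does guarantee proper tail calls); the function names are purely illustrative. The same computation is written as a tail call and as an explicit stateful loop, and a generic while-loop is then recovered from tail calls by a purely local rewrite:

    (* Tail-call form: runs in constant stack space, since OCaml
       guarantees tail-call elimination. *)
    let rec countdown n =
      if n <= 0 then ()
      else countdown (n - 1)        (* the recursive call is in tail position *)

    (* The same computation as an explicit stateful loop: the control
       state now lives in a mutable reference instead of an argument. *)
    let countdown_loop n =
      let i = ref n in
      while !i > 0 do
        i := !i - 1
      done

    (* Conversely, a generic while-loop is a three-line local rewrite
       in terms of a tail call. *)
    let rec while_ cond body =
      if cond () then (body (); while_ cond body)

The two forms compute the same thing and keep the same constant amount of control state; what changes is whether that state lives in function arguments, which other modules can supply and inspect, or in a mutable reference local to one loop.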

Brilliant operating system designers have argued that microkernels can simplify software development, because factoring an operating system into chunks that are isolated at runtime allows each component to be made simpler. But the interesting constant, when you choose between ways of factoring your system and compare the resulting complexity, is not the number of components; it is the overall functionality that the system does or doesn't provide. Given the desired functionality, run-time isolation vastly increases the programmer-time and run-time complexity of the overall system, by introducing context switches and marshalling between chunks of equivalent functionality across the two factorings. Compile-time modularity solves the problem better: given an expressive enough static type system, it can provide much finer-grained robustness than run-time isolation, without any of the run-time or programmer-time cost. And even without such a type system, the simplicity of the design allows for far fewer bugs, while the absence of communication barriers allows for higher-performance strategies. Hence HURD being an undebuggable joke whereas Linux is a fast, robust system.
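
As a minimal sketch of what compile-time modularity buys, here is a hypothetical OCaml module; the names and the toy in-memory "device" are purely illustrative, not taken from SPIN or ML/OS. The signature is the protection boundary: client code cannot forge a handle or reach into the device state, the compiler rejects any attempt at compile time, and a call across the boundary is an ordinary function call with no context switch and no marshalling:

    module Blockdev : sig
      type handle                               (* abstract: invisible outside *)
      val open_dev    : size_sectors:int -> handle
      val read_sector : handle -> int -> bytes  (* copy of one 512-byte sector *)
    end = struct
      type handle = { disk : bytes }            (* toy in-memory "device" *)
      let sector_size = 512
      let open_dev ~size_sectors =
        { disk = Bytes.make (size_sectors * sector_size) '\000' }
      let read_sector h n = Bytes.sub h.disk (n * sector_size) sector_size
    end

A buggy client that pokes at the representation simply fails to compile, whereas a microkernel would only catch the equivalent mistake at run time, after charging for IPC on every call.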

In all these cases, the software designer imposes some kind of hurdle that doesn't help honest people; the only beneficiaries are the specialists who gain job security from handling the vast increase in gratuitous complexity.

These people, though very intelligent, fall for an accounting fallacy. They take a myopic look at the local effect of one alternative on some small, detached parts of the system, where they can indulge in one-sided accounting whereby the alternative they like has benefits at no cost, whereas the other one has costs for no benefit. And they neglect to consider the costs and benefits in the parts of the system outside their direct focus, even though those parts are necessarily changed by the switch between alternatives, so as to preserve the actual overall constants that make the choice meaningful.

It is possibly in the individual interest of these experts to promote labor-intensive solutions where their expertise is the development bottleneck. Conscious dishonesty isn't even necessary when the rational incentive is for the experts to ignore the real costs of their choices, because they don't bear these costs. And so, ultimately, the laugh is on the users who follow the advice of these experts.

In a choice between proposed alternatives, what needs to be evaluated is the economic cost of each alternative, i.e. its cost relative to the other alternatives with respect to the overall system. And before you can even evaluate this cost, you must determine what is constant and what varies when you make the choice.

Woe be on software designers who confuse constants and variables!

Comments

Accounting fallacy, as I once heard it: "your lack of a feature is not a feature".

"It is possibly the individual interest of these experts to promote labor-intensive solution where their expertise is the development bottleneck."

If you've ever had to use open source software that ThoughtWorks has touched, you'll develop this kind of suspicion immediately. Selenium is a fantastic HTML testing tool; however, the infrastructure supporting the tooling seems deliberately half-complete and painful to use. Setting up and running Selenium Grid on anything but a clean-room environment is an exercise in grief, heartache and madness. Good ideas are met with curt responses: 1) you don't want to do that, 2) my way is better, 3) you are wasting my time suggesting that, 4) I have better things to do. It seems like an awfully defensive reply to someone asking for test tools that a junior automator (and not a dedicated programmer) could use.

(Anonymous)

You're pretty off about microkernels

You're totally off on the microkernels. Go take a look at Minix 3. It is tiny, yet achieves near-Linux performance with less code and in a safer manner. Go take a look at OSX: you have Unix and your GUI running on top of a microkernel. Look at FUSE: you have a microkernel architecture running on top of Linux and OSX.

Re: You're pretty off about microkernels

Minix 3 is tiny and has a small subset of Linux functionality. I demand substantiation of the claims that its performance and robustness are remotely comparable to those of Linux.

OSX is a monolithic kernel on top of a microkernel. The microkernel contributes zero, nada, zilch to the functionality -- and a small but noticeable performance decrease as compared to a normal BSD system. The reason the performance hit doesn't get out of hand is precisely because the system works AROUND the microkernel, doing everything in the monolithic BSD kernel, instead of actively USING the microkernel by splitting things into plenty of tiny isolated servers.

As for FUSE, it is precisely based on a big kernel; performance is bad, and it remains usable only because a single subsystem was moved out of the kernel. As in the OSX case, things do not get out of hand precisely because only one level of privilege crossing was added.

Of course, with modules in a strongly typed programming language, you can get both the actual performance benefits of macrokernels and the falsely claimed benefits of microkernels. See SPIN or ML/OS.

(Anonymous)

Re: You're pretty off about microkernels

The point was that you slander microkernels and use a FAILURE case to do so, totally ignoring the successful cases. Then you bother to respond with something totally off-topic from your actual post.

I know this is the internet and you feel you must defend yourself, but you're responsible for what you post. What you posted is factually inaccurate and your reply here shows that. Take responsibility.

Re: You're pretty off about microkernels

The point is that microkernels are a total failure. The "best" microkernels take a ~3% performance hit for the benefit of doing nothing at all -- because you avoid their functionality like the plague, except that you have to pay the initial tax of having them sit between you and the hardware.

If you actually start using the microkernel architecture for anything at all, each additional level of splitting your software into microkernel-level isolated servers takes its toll both at programming time and at run time, for zero gain in functionality.

(Anonymous)

Re: You're pretty off about microkernels

Wtf are you talking about? He replied exactly to your post. OS X isn't a microkernel and never was: http://events.ccc.de/congress/2007/Fahrplan/attachments/986_inside_the_mac_osx_kernel.pdf

Do you have any examples of things that *actually are* microkernels and *actually do* have comparable performance with Linux while having a comparable feature set? Yea, me neither.

(Anonymous)

Microkernels

"Hence HURD being an undebuggable joke whereas Linux is a fast, robust system."

I think you're making the same mistake you criticize here: do you think the HURD developers would be able to ship a fast, reliable system right away if they switched to a monolithic kernel? It's been fairly well documented to be a social problem, not a technical one.

(I also note you call HURD "undebuggable", but make no claims about Linux's debuggability, and rightly so: I've not found it to be at all easy to debug.)

Another problem we're dealing with is that the sample size of operating systems (especially after you try to split it into piles of microkernels and monolithic kernels) isn't terribly big. I'm suspicious of generalizations made with sample size N=2 (Linux and HURD). Compared to other monolithic systems, for example, Linux is not even remotely typical. Most monolithic kernels don't have that kind of international support, that many eyeballs looking at them, that growth curve, and so on. It's not even the first monolithic open-source x86 Unix. Why do you ignore the failed monolithic kernels, and the successful microkernels?

As for features and reliability, Linux has gotten *much* better about this over the years, but it's still not up to what a microkernel provides. Has anybody using Linux on a desktop box for more than a year *not* had a driver bug take down their whole system?

Modern Unix reliability is largely the result of the GNU and Linux improvements, and that suggests social changes, not technical ones. A typical Unix system in 1980 also had a monolithic kernel, but was not nearly as reliable as Linux today (or even Linux 10 years ago). In your terms, time and reliability changed while the kernels stayed constantly monolithic, which suggests the latter didn't cause the former.

N.B., I agree with your post overall, but it seemed like you took an opportunity to take a swipe at HURD and generalized it to all microkernels, despite the evidence, for some strange reason. You can find a single failed project in any field, but if you want to generalize that failure to a single design decision, you need more evidence. :-)

Isn't QNX a microkernel? And from what I understand, it's extremely efficient and was very successful in its chosen niche. Back in the mid-90s, there was a demo system shipped containing QNX, a TCP/IP stack, and a web browser, all on one 1.44 MB floppy.

Granted I've not played with it, but it seems to me to be one of the more inspiring pieces of OS design, one that suggests there's a much better way of doing things than the current status quo, be it Microsoft or Linux.

http://en.wikipedia.org/wiki/QNX

QNX is indeed a microkernel. Because it's proprietary software with a small niche audience, it's hard to call it either a success or a failure, technically or commercially.

From all that I remember reading about QNX, its main useful technical idea, which made it easy to write distributed applications, is hardly tied to the "microkernel" paradigm. Exposing an API that lets you easily abstract away locality? Either Plan 9 or Erlang will provide that without the "microkernel" nonsense. A simple design? The Amiga had something simple without being a "microkernel".

Once again, the "microkernel"ity is the stupid thing that's easy to intoxicate oneself with. But it's not the interesting thing that positively contributes to performance, robustness or simplicity.
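
To make "abstracting away locality" concrete, here is a minimal OCaml signature sketching the kind of interface meant; the names are purely illustrative and are not taken from QNX, Plan 9 or Erlang. Nothing about it requires a microkernel underneath:

    (* The caller addresses a peer without knowing or caring whether it is
       local or remote; locality is hidden behind [resolve]. *)
    module type MESSAGING = sig
      type address                            (* opaque: local or remote peer *)
      val resolve : string -> address option  (* name -> address, if any *)
      val send    : address -> bytes -> unit
      val receive : unit -> address * bytes   (* blocks until a message arrives *)
    end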

It's a pity that Plan 9/Inferno never took off in a big way. Though I guess Inferno has only been open-sourced for five years, so it could still have a breakthrough. The Rio window manager model in particular looks very interesting.

(Anonymous)

... and woe be on anyone who confuses woe and woo.

Still, I guess the other way round may be worse. "I'm just off to woe a young woman".

Tail Calls - couldn't agree more. Every serious modern language implementation should support them.

Microkernels - disagree here. My favourite case in point is Microsoft Research's Singularity - a .NET(ish) OS that uses a microkernel architecture but (largely because it's .NET-based, so it can do so safely) eliminates the costly context switches leaping in and out of protected mode, and so drastically speeds up system calls. It trounces everything I've seen (even Linux) in those departments. So I guess the issue isn't micro vs. monolithic kernel as much as how such things are actually implemented.

Lenrekorcim

Thanks for the spelling lesson.

As for Singularity: how does it qualify as a microkernel? Your description makes it sound more like what I recommend, and what SPIN, ML/OS and other systems have already implemented in the past: a system with statically safe extensions that communicate without any unnecessary runtime barrier. The exact opposite of a microkernel.
