A pair of fascinating links from LtU on choosing the right defaults. First, an entertaining TED talk by Dan Ariely, which I highly recommend. Second, a very promising-looking IDE concept, Code Bubbles, which certainly seems to provide some good defaults.
I have some sympathy for Alan Kay’s “biological analogy” that software could be constructed to resemble biological cells, with each component having a clearly defined barrier (cell membrane) and ability to act autonomously as well as collaboratively. This is, after all, how agent-oriented languages should work too. However, the analogy can clearly only be taken so far. A human cell contains around 6 billion base pairs of DNA, each of which can store a maximum equivalent of 2 bits of information. This gives a total theoretical storage capacity of 12e9 bits or 1.5e9 bytes — roughly 1.4GB, or slightly less than an iPod shuffle. In computational terms, that’s quite a lot. Furthermore, this information is redundantly duplicated on a massive scale within every single cell of your body, and largely duplicated between individuals even of separate species. If this were a computer system, it would be a hugely inefficient and anti-modular one. By modern software engineering standards, a cellular organism is a jumble of duplicate components, each of which is a monolithic mess of spaghetti code!
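For the record, the back-of-envelope arithmetic above works out as follows (all figures are the ones quoted in the text; the script just makes the unit conversions explicit):

```python
# Figures as stated in the text: ~6 billion base pairs per human cell,
# each base one of four values (A, C, G, T), hence 2 bits of information.
base_pairs = 6_000_000_000
bits_per_base = 2

total_bits = base_pairs * bits_per_base   # 12e9 bits
total_bytes = total_bits // 8             # 1.5e9 bytes

print(total_bits)              # 12000000000
print(total_bytes)             # 1500000000
print(total_bytes / 1e9)       # 1.5 (decimal gigabytes)
print(total_bytes / 2**30)     # ~1.397 (binary GiB, the "roughly 1.4GB")
```

The apparent discrepancy between 1.5e9 bytes and "roughly 1.4GB" is just the decimal-versus-binary gigabyte: 1.5e9 bytes is 1.5 GB in decimal units but about 1.4 GiB in binary ones.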
Nevertheless, I do believe there is still much value in this analogy. After all, as Kay would point out, biological systems are much more sophisticated* than anything we can yet build, and generally much more resilient too. But we must be careful in pursuing such analogies, as the design goals of evolution are different to our own. While evolution can dispense with the intelligent designer, we cannot! And those intelligent designers have their limits. They need to be able to revisit design decisions and update running systems to correct flaws, optimise performance, adjust to new challenges, etc. Modularity is crucial to these tasks. If there is a God, He is not a software engineer.
PS – I recognise that Kay himself is well aware of these limitations of the analogy, as is abundantly evident in the wonderful design of Smalltalk.
* NB: I much prefer the term “sophisticated” to “complex”: the former is perhaps a goal worth pursuing, the latter should be minimised or eliminated.
The terms first-class and higher-order can be confusing to people first encountering them in relation to programming languages (usually through exposure to some functional programming language). At first glance they can seem to mean much the same thing, but this isn’t really the case. Both relate to the ability to pass functions around like other values, but they concern different aspects. The term higher-order is a statement about the interface of a function: it means that the function is able to accept other functions (or even itself) as arguments and/or return functions as its value. It originates in logic, where a first-order logic is able to make statements only about objects in the domain of discourse (e.g., “all men are tall”), whereas a second-order logic is also able to make statements about first-order relations over those objects (e.g., “all properties of men are also properties of women”). A higher-order logic is then able to make statements about any lower-order relation.

The term first-class, on the other hand, is a statement about how a language treats certain kinds of values. In particular, it usually means that a certain kind of value (such as functions) can be treated on an equal footing with all other kinds of value (such as strings or integers): passed to and returned from functions, stored in data structures, and so on. These are distinct concepts: a language might permit higher-order functions without making functions first-class, and vice versa.
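A quick sketch in Python makes the distinction concrete (the function names here are invented purely for the example):

```python
# HIGHER-ORDER is about the function's interface: apply_twice accepts
# another function as an argument.
def apply_twice(f, x):
    return f(f(x))

def increment(n):
    return n + 1

# FIRST-CLASS is about how the language treats function values: they
# can be bound to names, stored in data structures, and passed around
# just like strings or integers.
op = increment                   # bound to a variable
ops = [increment, apply_twice]   # stored in a list

print(apply_twice(increment, 3))  # 5
print(op(10))                     # 11
```

In Python the two happen to go together, but that is a property of the language, not of the concepts: one can imagine a language with built-in higher-order operators whose functions are nonetheless not first-class values.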
I’m becoming increasingly confused about what direction the Web is heading in. More precisely, I’m slightly concerned that the direction the Web is actually heading in is completely different from the direction in which various researchers believe it is heading.
I came across this ridiculous Internet argument today, via xkcd. I find it both amusing and worrying how many people insist that the plane will not take off. This is a classic example of what philosophers call a thought experiment, and it demonstrates only what most thought experiments demonstrate: that our intuitions about even quite simple phenomena are often very wide of the mark. In this case, the intuition is that a vehicle on a treadmill will not move if the treadmill matches the speed of its wheels. While this is true for vehicles (and people) that get their forward thrust by pushing against the ground, an aeroplane gets its thrust elsewhere: from the propellers or jet turbines that push against the air. These generate an enormous force backwards, which, as per Newton’s third law, generates an equal and opposite force pushing the plane forward. If you draw a diagram and label the force arrows (as in school physics lessons), you will see an enormous forward arrow with the only opposing forces being air resistance and a tiny amount of friction from the wheels. It is a basic law of mechanics that when there is a net force acting on an object in a certain direction, that object will accelerate in that direction. No amount of fast-spinning treadmill can reverse the laws of physics. As others have mentioned, the wheels will simply spin faster, as they are acted on by forces from both the aeroplane and the treadmill. The wheels decouple the rest of the plane from the treadmill’s force, so the force transferred to the airframe is negligible. This, after all, is the entire point of having wheels on a plane rather than, say, skis.
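To make the force-diagram point concrete, here is a back-of-envelope sketch. Every number below is invented purely for illustration; the point is only that a large thrust minus small opposing forces leaves a net forward force, and by Newton’s second law a net force means acceleration, however fast the treadmill spins:

```python
# All figures are made-up, ballpark values for a mid-sized airliner.
thrust = 100_000.0         # N, engines pushing against the air
air_resistance = 20_000.0  # N, opposing motion
wheel_friction = 1_000.0   # N, tiny: the wheels decouple plane from belt
mass = 70_000.0            # kg

net_force = thrust - air_resistance - wheel_friction
acceleration = net_force / mass  # F = m * a

print(net_force)      # 79000.0 N, forwards
print(acceleration)   # ~1.13 m/s^2: the plane accelerates down the runway
```

The treadmill only enters the picture through the wheel-friction term, and no plausible value for it comes close to cancelling the thrust.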
A pet peeve of mine is the ongoing public misperception that computers are really all about numbers. This is constantly reinforced by the mainstream (and even IT) media. Take, for instance, this quote from a BBC News story today:
The DNS acts as the internet’s address books and helps computers translate the website names people prefer (such as bbc.co.uk) into the numbers computers use (188.8.131.52).
What this quote means to say is “DNS transforms textual domain names (such as bbc.co.uk) into the numeric dotted form (188.8.131.52) used by the Internet Protocol (IP)”. To the actual computer, both of these are just patterns of bits that represent symbols. A numeric interpretation is no more or less convenient for the computer than any other symbolic representation: it’s all just patterns of electrical charge as far as the machine is concerned. The inventors of IP could just as easily have chosen a DNS-style address representation. It happens that the sort of symbol shifting that computers do is very good for numerical tasks, and it also happens that we know a lot about how to do things with numbers, so numbers are useful in many applications of computers. In this case, the numeric form can be compactly represented and efficiently manipulated. That was an engineering decision, not a fundamental limitation of computers.
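As a small illustration that both forms are just bits to the machine, Python’s standard socket module can pack a dotted address into its raw bytes (the address here is simply the example from the quote above):

```python
import socket

# The "textual" name and the "numeric" address are both just byte
# patterns to the machine; the numeric form is merely fixed-width
# and compact.
name_bytes = "bbc.co.uk".encode("ascii")       # symbolic form: 9 bytes
addr_bytes = socket.inet_aton("188.8.131.52")  # packed IPv4: 4 bytes

print(len(name_bytes))  # 9
print(len(addr_bytes))  # 4: compact, fixed-width, cheap to compare
```

Neither representation is more “natural” to the hardware; the packed form simply trades readability for a fixed, compact width, which is exactly the engineering decision described above.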
My love of computers stems from a fascination with language, logic, and fundamental notions of representation and interpretation. Diving into computing has brought me into contact with all sorts of ideas from philosophy, physics, mathematics, linguistics, psychology, cognitive science, and AI. It saddens me that people miss out on these extraordinarily beautiful ideas because of a persistent belief that it’s all really about numbers, a misconception that puts a lot of people off.
Steven Pinker has an interesting talk on the TED conference website, debunking the idea that we are living in an increasingly violent society. As Pinker shows, over pretty much any timescale you care to look at, violent death rates have dropped considerably, and we now live in the safest time ever. Fascinating and cheering.