Single Interface Java 8 Applications

Now that the dust has settled on the launch of Java 8, we can begin to see the benefits that all these new features will bring to those of us willing to throw off the yoke of corporate oppression and start committing lambda expressions to a “Java 5” code base. The possibilities that lambdas bring, along with default methods and the startling addition of static methods in interfaces, are real game changers. For instance, it is now possible to write an entire Java application in a single interface! Those of us who have long railed against the tyranny of a single-class-per-file can now rejoice at being able to place all of our logic in a single file, and it doesn’t even need to be a class. As you will see from this post, the future is here and it is beautiful.
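To give a taste of the idea before the full post, here is a minimal sketch (the interface and method names are mine): a complete, runnable Java 8 “application” that is just an interface, made possible by static interface methods and lambdas.

```java
import java.util.function.UnaryOperator;

// An entire (if tiny) Java 8 application in a single interface:
// no class required anywhere.
public interface App {
    // Static methods in interfaces are new in Java 8, and the JVM is
    // perfectly happy to use one as an entry point.
    static void main(String[] args) {
        UnaryOperator<String> shout = s -> s.toUpperCase() + "!";
        System.out.println(shout.apply("hello from an interface"));
        // prints "HELLO FROM AN INTERFACE!"
    }
}
```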

Continue reading “Single Interface Java 8 Applications”

Java 8 Sieve of Eratosthenes

We had a competition at work to calculate the count and sum of all prime numbers between 1 and 1,000,000 in the shortest time. Naturally we all went for the Sieve of Eratosthenes as the best way to implement this. The fastest solution was a straightforward imperative approach, but I came up with the shortest implementation using Java 8 streams and a BitSet that comes within about 20% of the winner. It’s quite cute so I thought I’d share it. Note: it uses a stateful consumer so cannot be parallelised. I’d love to see a good stateless version if anyone can think of one.
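My exact entry is in the full post, but a sketch along the same lines looks like this. Note how the `forEach` consumer mutates the shared `BitSet` — that is the statefulness that rules out parallelisation.

```java
import java.util.BitSet;
import java.util.stream.IntStream;
import java.util.stream.LongStream;

public class Sieve {
    public static void main(String[] args) {
        int limit = 1_000_000;
        BitSet composite = new BitSet(limit + 1);
        // Stateful consumer: marking multiples mutates the BitSet,
        // so this stream must not be run in parallel.
        IntStream.rangeClosed(2, (int) Math.sqrt(limit))
                 .filter(i -> !composite.get(i))
                 .forEach(i -> {
                     for (int j = i * i; j <= limit; j += i) composite.set(j);
                 });
        long count = IntStream.rangeClosed(2, limit)
                              .filter(i -> !composite.get(i))
                              .count();
        long sum = LongStream.rangeClosed(2, limit)
                             .filter(i -> !composite.get((int) i))
                             .sum();
        System.out.println(count + " primes, sum = " + sum);
    }
}
```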

Continue reading “Java 8 Sieve of Eratosthenes”

Pimp my Java!

There seem to be lots of new programming languages popping up at the moment: Clojure, Scala, Opa, Dart, Go, Rust, and so on. I’m always a fan of new programming languages, as I feel there is still so much untapped innovation to explore. However, it can take years, even decades, for a new language to reach commercial acceptance (a sad reality). In the meantime, mainstream languages can incorporate proven features from more experimental cousins. Java, for instance, has adopted generics and annotations, both features I have some fondness for. In the spirit of encouraging further progress I offer the following as a list of largely conservative enhancements to Java that are in keeping with its fundamental design as an imperative object-oriented language. I don’t wish to turn Java into Haskell, because Haskell is already good enough.

So, here’s my top five “easy” extensions to Java:

Continue reading “Pimp my Java!”

Relations ain’t “flat”

A new month, a new job. I’m now working as a Java Application Developer for a large software consultancy. I’ll be developing Java EE applications for this job, which will be an interesting experience. I’m keen to really get my hands dirty with what is currently the de facto standard for building large business apps.

My first challenge is to get to grips with the Java Persistence API (JPA), which provides a standardised object-relational mapping (ORM) for Java applications. Basically, it takes a bunch of annotated Java classes and determines how best to map these to a relational database, taking care of foreign key constraints, inheritance, joins, and so forth. On the surface, this is all very good. Current best practice in developing these kinds of apps is to separate out the data model of the domain into a layer of simple Java beans (i.e., objects with properties and little custom behaviour), so something like JPA makes this much easier to knock together than using JDBC directly (although it’s not that hard to use JDBC).

Continue reading “Relations ain’t “flat””
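For the uninitiated, a JPA entity looks something like the following sketch. The `Customer` class and its fields are hypothetical, not taken from any real application; the point is just how little is needed before the provider can derive tables, columns, and keys from the annotations.

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// A simple Java bean: properties, getters/setters, little behaviour.
@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;        // mapped to an auto-generated primary key

    private String name;    // mapped to a text column by default

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```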

The right defaults

A pair of fascinating links from LtU on choosing the right defaults. First, an entertaining TED talk by Dan Ariely, which I highly recommend. Second, a very promising looking IDE concept, CodeBubbles, which certainly seems to be providing some good defaults.

Are cells a good analogy for software?

I have some sympathy for Alan Kay’s “biological analogy” that software could be constructed to resemble biological cells, with each component having a clearly defined barrier (cell membrane) and ability to act autonomously as well as collaboratively. This is, after all, how agent-oriented languages should work too. However, the analogy can clearly only be taken so far. A human cell contains around 6 billion base pairs of DNA, each of which can store a maximum equivalent of 2 bits of information. This gives a total theoretical storage capacity of 12e9 bits or 1.5e9 bytes — roughly 1.4GB, or slightly less than an iPod shuffle. In computational terms, that’s quite a lot. Furthermore, this information is redundantly duplicated on a massive scale within every single cell of your body, and largely duplicated between individuals even of separate species. If this were a computer system, it would be a hugely inefficient and anti-modular one. By modern software engineering standards, a cellular organism is a jumble of duplicate components, each of which is a monolithic mess of spaghetti code!

Nevertheless, I do believe there is still much value in this analogy. After all, as Kay would point out, biological systems are much more sophisticated* than anything we can yet build, and generally much more resilient too. But we must be careful in pursuing such analogies, as the design goals of evolution are different to our own. While evolution can dispense with the intelligent designer, we cannot! And those intelligent designers have their limits. They need to be able to revisit design decisions and update running systems to correct flaws, optimise performance, adjust to new challenges, etc. Modularity is crucial to these tasks. If there is a God, He is not a software engineer.

PS – I recognise that Kay himself is well aware of these limitations of the analogy, as is abundantly evident in the wonderful design of Smalltalk.

* NB: I much prefer the term “sophisticated” to “complex”: the former is perhaps a goal worth pursuing, the latter should be minimised or eliminated.

First class and higher-order

The terms first-class and higher-order can be confusing to people first encountering them in relation to programming languages (usually through exposure to some functional programming language). At first glance it can seem that they mean sort-of the same thing, but this isn’t really the case. Both refer to the ability to pass functions around like other values, but they relate to different aspects.

The term higher-order is a statement about the interface of a function. It means that the function is able to accept other functions as arguments (or even the same function) and/or return functions as its value. It originates in logic, where a first-order logic is able to make statements only about objects in the domain of discourse (e.g., “all men are tall”), whereas a second-order logic is also able to make statements about first-order relations over those objects (e.g., “all properties of men are also properties of women”). A higher-order logic is then able to make statements about any lower-order relation.

On the other hand, the term first-class is a statement about how a language handles certain kinds of values. In particular, it usually means that a certain kind of value (such as functions) can be treated on an equal level with all other kinds of value (such as strings or integers), and can be passed to/returned from functions, stored in data structures, and so on. These are distinct concepts: a language might permit higher-order functions that are not first-class, and vice-versa.
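The distinction can be seen even in Java 8, which (approximately) gives functions both properties via functional interfaces. A minimal sketch, with names of my own invention:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FirstClassDemo {
    // A *higher-order* method: it accepts a function as an argument
    // and returns a new function as its result.
    static Function<Integer, Integer> twice(Function<Integer, Integer> f) {
        return f.andThen(f);
    }

    public static void main(String[] args) {
        // *First-class* treatment: function values bound to a variable
        // and stored in a data structure, like any string or integer.
        Function<Integer, Integer> addOne = x -> x + 1;
        List<Function<Integer, Integer>> fs =
                Arrays.asList(addOne, twice(addOne));
        System.out.println(fs.get(1).apply(3)); // prints 5
    }
}
```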

Where is the Web going?

I’m becoming increasingly confused about what direction the Web is heading in. More precisely, I’m slightly concerned that the direction the Web is heading in is completely different to the direction in which various researchers believe it is heading.

Continue reading “Where is the Web going?”

Aeroplanes and Treadmills

I came across this ridiculous Internet argument today, via xkcd. I find it both amusing and worrying how many people insist that the plane will not take off. This is a classic example of what philosophers call a thought experiment, and demonstrates only what most thought experiments demonstrate: that our intuitions about even quite simple phenomena are often very wide of the mark. In this case, the intuition is that a vehicle on a treadmill will not move if the treadmill matches the speed of its wheels. While this is true for vehicles (and people) that get their forward thrust from pushing against the ground, an aeroplane certainly doesn’t. The thrust here comes from the propellers or jet turbines that push against the air. These generate an enormous force backwards, which, as per Newton, generates an equal and opposite force pushing the plane forward. If you draw a diagram and label the force arrows (as in school physics lessons), you will see an enormous forward arrow with the only opposing forces being air resistance and a tiny amount of friction from the wheels. It is a basic law of mechanics that when there is a net force acting on an object in a certain direction, that object will accelerate in that direction. No amount of fast-spinning treadmills can reverse the laws of physics. As others have mentioned, the wheels will simply spin faster, as they are acted on by forces both from the aeroplane and the treadmill. The wheels decouple the rest of the plane from this treadmill force, and thus the transferred force is negligible. This is, after all, the entire point of having wheels on a plane, and not for instance just having skis.
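The force-arrow diagram can be reduced to a few lines of arithmetic. Every figure below is illustrative only — rough airliner-ish numbers I’ve picked for the sketch, not real aircraft data — but the conclusion doesn’t depend on them: any positive net force means acceleration.

```java
public class Takeoff {
    public static void main(String[] args) {
        double thrust = 240_000;       // N: engines pushing against the air
        double drag = 20_000;          // N: air resistance while rolling
        double wheelFriction = 5_000;  // N: tiny, because the free-spinning
                                       // wheels decouple the airframe from
                                       // whatever the treadmill belt does
        double mass = 70_000;          // kg

        double netForce = thrust - drag - wheelFriction;
        double acceleration = netForce / mass;  // Newton: F = ma, so a = F/m

        System.out.printf("net force %.0f N -> acceleration %.2f m/s^2%n",
                          netForce, acceleration);
    }
}
```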

Computers and numbers

A pet peeve of mine is the ongoing public misperception that computers are really all about numbers. This is constantly reinforced by the mainstream (and even IT) media. Take, for instance, this quote from a BBC News story today:

The DNS acts as the internet’s address books and helps computers translate the website names people prefer (such as into the numbers computers use (

What this quote means to say is “DNS transforms textual domain names (such as into the numeric dotted-form ( that is used by the Internet Protocol (IP)”. To the actual computer, both of these are just patterns of bits that represent symbols. A numeric interpretation is no more or less convenient for the computer than any other symbolic representation: it’s all just patterns of electrical charge as far as the machine is concerned. The inventors of IP could just as easily have chosen to use a DNS-style address representation. It happens that the sort of symbol shifting that computers do is very good for numerical tasks, and it also happens that we know a lot about how to do things with numbers, and so numbers are useful in a lot of applications of computers. In this case, the numeric form can be compactly represented, and efficiently manipulated. That was an engineering decision, not a fundamental limitation of computers.
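The point is easy to demonstrate: the dotted-quad form and the 32-bit integer form of an IPv4 address are just two renderings of the same bit pattern (the example address below is an arbitrary one of my choosing). Neither is more “natural” to the machine; the integer form is simply compact and cheap to manipulate.

```java
public class DottedQuad {
    // Pack a dotted-quad string into the 32-bit pattern IP actually carries.
    static int toBits(String dotted) {
        int bits = 0;
        for (String part : dotted.split("\\.")) {
            bits = (bits << 8) | Integer.parseInt(part);
        }
        return bits;
    }

    // Render the same 32 bits back as human-friendly symbols.
    static String toDotted(int bits) {
        return ((bits >>> 24) & 0xFF) + "." + ((bits >>> 16) & 0xFF) + "."
             + ((bits >>> 8) & 0xFF) + "." + (bits & 0xFF);
    }

    public static void main(String[] args) {
        int bits = toBits("");
        System.out.println(bits + " <-> " + toDotted(bits));
        // prints "167772161 <->"
    }
}
```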

My love of computers stems from a fascination with language, logic, and fundamental notions of representation and interpretation. Diving into computing has brought me into contact with all sorts of ideas from philosophy, physics, mathematics, linguistics, psychology, cognitive science, and AI. It saddens me that a persistent belief that it’s all really about numbers puts so many people off, and causes them to miss out on these extraordinarily beautiful ideas.
