Copestake’s Law

The Turing test is a test … of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

So that the machine is judged only on its intelligence and not on its ability to imitate human communication, an added stipulation is that the machine need only communicate via text, i.e. it need not produce convincing human speech.

I believe that – using only today’s technology – it’s possible to create a very simple AI that would pass the Turing test 100% of the time, if only we were allowed to add one extra condition: The topic of conversation must be football.

Some background

Over the course of my 27 years of life, I’ve had the opportunity to overhear many conversations about football, whether at social gatherings or around the workplace. It was only as recently as 2014 that I began to notice a pattern.

I observed that conversations would begin with openers such as, “Did you see [team x]’s game at the weekend?” To which the other party would reply, “I know, mate. Shocking.” At this point, one of the two will posit, “They need to start playing [player y].” The other party will then conclude the discussion by expressing some form of agreement.

And it’s here that we arrive at Copestake’s law:

Any one conversation about football is identical to any other conversation about football, with the exception of two variables.

Passing the Turing test

Copestake’s law allows for a beautifully simple algorithm capable of passing our hypothetical Turing test. Depending on whether the AI instigates the conversation or is the respondent, one out of two courses of action can be taken. Below is some illustrative pseudocode.

If (human initiates)
    read input;
    print "I know, mate. Shocking.";
    read input;
    print "I know, mate.";
Else
    set x to (team name);
    print "Did you see " + x + "'s game at the weekend?";
    read input;
    set y to (player name);
    print "They need to start playing " + y + ".";
    read input;
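For completeness, the pseudocode translates into a few lines of real code. Below is an illustrative Python sketch; the function name, parameters and default team/player values are invented for the example, and it returns the bot’s lines rather than printing them:

```python
def football_bot(opener=None, team="Team X", player="Player Y"):
    """Hold a conversation per Copestake's law.

    If `opener` is given, the human initiated and the bot needs only
    its two canned replies; otherwise the bot initiates, filling in
    the law's two variables (team and player). Returns the bot's
    lines in order.
    """
    if opener is not None:
        # Respondent mode: agree, then agree again.
        return ["I know, mate. Shocking.", "I know, mate."]
    # Initiator mode: open with the team, then posit the player.
    return [
        "Did you see " + team + "'s game at the weekend?",
        "They need to start playing " + player + ".",
    ]
```

Whichever branch runs, the human walks away satisfied.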

Closing comments

I had to get some amusement out of this Euro 2016 bollocks – so this is it. Forgive me, world.


A quirky solution to HackerRank’s “Team Formation” problem

As a preface, I’d like to say that this article isn’t intended to help people cheat at the problem, but is posted for the intellectual exercise contained within. Hopefully the folks in and around the HackerRank community won’t mind.

Lately I’ve been having some fun with the challenges on HackerRank. By all accounts, the generally accepted solutions to the Team Formation problem involve some combination of data structures, sorting algorithms and the like.

That’s all well and good, but having hit my head many times as a child, something in the back of my mind was nagging away at me, insisting that there existed a more specialized solution. As is a recurring theme of this blog – and indeed my life – I could not resist its call. Fortunately, this one actually paid off.

The algorithm I subsequently developed does not (in theory) depend on any particular data structure nor require sorting; it satisfies the condition of being “greedy”; (potentially) has a time complexity of O(n), requiring only 1-2 loops; and for added lolz, can be made stable and even online if necessary.*

* While this is all possible, in some cases it’s not practical; in one particular implementation I did have to use std::sort. I’ll elaborate more on this at the end of the post.

The problem (briefly)

The objective of the Team Formation challenge is to accept a list of integers representing skill values for members of a team, and to organise those members such that a) for any one member with skill x, either that member is the lowest-skilled member of their team or their team also contains a member with skill x-1; and b) the members are organised into as few teams as possible, with members spread as evenly as possible across every team. To pass the test, one must output the size (number of members) of the smallest team.

For example, given the set { 4, 5, 2, 3, -4, -3, -5 }, the two teams that can be formed are { -5, -4, -3 } and { 2, 3, 4, 5 }. The smallest team is { -5, -4, -3 } and so the correct output for this test case is 3. (Note that it would be erroneous to organise this input into the teams { -5 }, { -4, -3 } and { 2, 3, 4, 5 }, for example.)


The algorithm

Imagine a grid. If you’re struggling, below is a visual to help you along:


A graph, similar in style to those pioneered by William Playfair in the late 1700s.

For an example input of { 51, 47, 45, 46, 46, 47, 48, 47, 50, 47, 47, 49, 51 }, the correct team formation can be visualised as:


If another value 48 were to be added, it would travel down the x axis towards 0 to the next free slot, resulting in:


This works because there is a “free” 47 at coordinates (3, 47). If this were not the case, the new value 48 would be placed at the end of the row – at (5, 48). Below is another example, using an additional value of 50.

You get the idea by now.


Following a set of rules for how to handle various local conditions when a new member is added, it’s possible to organise and – if implemented properly – effortlessly reorganise the members into their most appropriate structure. You can even track the size of each team on the fly.

The implementation

For the same reason that I’ve not gone into the algorithm in much depth (i.e. not wanting to help out the dirty cheaters too much), I won’t post any code here. However, I did write an implementation which passed the HackerRank test cases.

Following that, I decided to grab a few of the other top scoring solutions along with some of the larger test cases and benchmark them locally against my algorithm using Very Sleepy. The numbers are listed below (lower is better):

User        Average time
surwdkgo    0.494
visanr      0.275
Kostroma    0.182
Meeeeee!    0.166

As you can see, I outperformed the competition – which was a reassuring conclusion to this little experiment, having spent far more time and energy than was warranted.

Some notes about implementation

This algorithm doesn’t require use of any particular data structure or sorting algorithm; you can just use the integer value of the skill level x and perform local operations on x-1 and x+1 and such.

Under certain conditions (e.g. access to large memory or some assurance that x will be within a certain range) this means you can perform all necessary operations using a regular unsorted integer array. However, as the conditions of the challenge state that x may be as large as 10^9 – which is far too big for the test scenario – that leaves two options: use another lookup method (e.g. hash tables) or sort the array.
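To make the bookkeeping concrete without giving away my grid-based rule set, here’s a minimal Python sketch of the well-known sorted greedy approach, using a dictionary keyed on skill value; the names and structure are illustrative, not taken from my actual (withheld) implementation:

```python
from collections import defaultdict
import heapq

def smallest_team_size(skills):
    """Return the size of the smallest team under the challenge's rules."""
    if not skills:
        return 0
    # ends[v] is a min-heap of the sizes of teams whose highest skill is v.
    ends = defaultdict(list)
    for x in sorted(skills):
        if ends[x - 1]:
            # A team ends at x-1: extend the smallest such team,
            # which keeps members spread as evenly as possible.
            size = heapq.heappop(ends[x - 1])
            heapq.heappush(ends[x], size + 1)
        else:
            # Nothing to extend: x starts a new team of its own.
            heapq.heappush(ends[x], 1)
    return min(s for heap in ends.values() for s in heap)
```

Running it on the earlier example, `smallest_team_size([4, 5, 2, 3, -4, -3, -5])` gives 3.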

In my first attempt I did use an unordered_map, but lookups created a bottleneck so I tried out the sorting approach, which significantly improved performance.

Even after all of this, there’s still a voice in my head telling me that an even better algorithm exists, but… I must move on. I’ve satisfied my curiosity (mostly). And I have things to draw.

Fuck it

For a while now (and by “while” I mean 5+ years) I’ve been wanting to create a web comic. By some miracle I have taken the first step towards that goal and set up a Tumblr blog to host said strips.

For those who are interested, the strips are hosted on the aforementioned Tumblr blog. I’ll probably also be pestering my Twitter followers with updates – so there’s the option of following me there.

My intention is that the cartoons will feature a number of topics, from gaming to world events. This is all assuming I actually post anything further in the future (I will try, I promise).


Programming as art?

Not too long ago, admiring the craftsmanship of the Kritonios Crown got me thinking about the various ways in which people have taken a particular skill – such as painting, sculpting, smithing, writing, etc – and turned it into art.

While plenty has been said about computer programming having artistic undertones, I began to wonder whether it would be possible to create a work of art in the truest sense of the word, through the medium of code.

First, I had to define some parameters for what counts as art. The essence of “what is art” has forever been elusive and is hotly debated. However, I settled on a condensed set of criteria that I believe the majority of people would be happy to accept. They are:

1. Expression; purpose and meaning.
2. Evocation; provoking an emotional response.
3. Imagination; creativity and innovation.

With this, I began thinking about a way that I could turn code into art.

Expression and evocation

I knew immediately that I wanted to pay homage to the miracle of existence, life and the universe. I hoped originally to inspire in the viewer’s mind the same emotions of reverence and confusion that I feel when I look out to space or consider the intricacies of nature – and I feared that some (or indeed many) may feel nothing at all when they look at my piece. If that were the case, for those people, the art would be a failure.

But then I considered that some people do feel nothing but apathy and indifference when they look at nature (presumably in between watching reality TV shows and listening to Beyonce, but that’s just stereotyping). In its own way then, even indifference towards my art could be taken as another accurate portrayal of my chosen subject.

Ultimately, I decided to drop the expectation of provoking any one particular thought or emotion, as long as one or the other occurred at all. I embraced the subjectivity of my art, just as the meaning of existence is itself subjective.


Imagination

Having settled on what I wanted to say and how I wanted people to feel, I needed to plan the execution. I decided that I wanted to write some code which would mystify even my fellow programmers, but at the same time, I felt like obfuscation would be a cheap copout.

Even the choice of language was assigned meaning. Spoiler alert: I chose PHP because the history of PHP itself is a tale of haphazard guesswork and random mutations that may or may not be beneficial, much like biological evolution. (Not language bashing; just some innocent humour.)

The result

The end result can be seen in this Gist.

I title this piece, “Microcosm.” Recall that this code is not merely a script that has been fed through a code obfuscator; the code stands as-is – and, like the universe it hopes to represent, has mechanics and implications.

Thank you and good night. Much love.

Web development in Julia: A progress report (Warning: Contains benchmarks)

Continuing my quest to explore the idea of using Julia for web development, I wanted to address some of my own questions around performance and implementation. My two biggest concerns were:

  1. Should Julia web pages be served by a Julia HTTP server (such as HttpServer.jl) – or would it be better to have Julia work with existing software such as Apache and nginx?
  2. How would Julia perform on the web compared to the competition?

Addressing the HTTP server question

After some consideration, my personal conclusion is that a server implemented in Julia would be another codebase that would need to be maintained; would mean missing out on tools available to existing server software, such as .htaccess, modules and SDKs; and would ultimately feel like reinventing the wheel. I feel it would be more sensible to leverage existing software that already has active development and has been tried and tested in the wild.

Following from this, I knew that my primary performance concern should be the interface between the server and Julia. In my previous posts, I was using Apache and running Julia via CGI. CGI is slow enough on its own, but it’s also a known fact that Julia’s binary is somewhat slow to start due to internal initialisation/compilation. I figured that FastCGI would be the next best option – and as there are no existing solutions (except for an incomplete FastCGI library), I set about creating a FastCGI process manager for Julia.

FYI: I’ve decided to release all of my web-Julia-related code under the GitHub organisation Jaylle. Currently only the FPM and CGI module are available, but in future that’s where I’ll add the web framework and whatever else gets developed.

I plan to elaborate on the process manager more in a future post, but in short there are two parts:

  • The FastCGI server / process manager (coded in C). This accepts requests and manages and delegates to the workers.
  • The worker (coded in Julia). This listens for TCP connections from the FPM, accepts a bunch of commands and then runs the requested Julia page/code.

This way, there’s always a pre-loaded version of Julia in memory, circumventing any startup concerns (unless a worker crashes, of course).

Some early benchmarks

Now that the FPM is in a usable prerelease state, I wanted to see how it could perform compared to the alternatives. In this case, I chose PHP (obvious) and Python. I chose Python because the name often crops up in Julia discussions and there’s a FastCGI module available for it.

To run these tests, I used the Apache ab tool from my Windows machine. The server is a cheap 1-core VPS running CentOS 6 64-bit.

In all tests, the server software used was nginx. For the languages, I used PHP-FPM for PHP, the aforementioned FastCGI module for Python, and the Jaylle FPM for Julia.

The individual tests are superficial and the results anecdotal, but I just wanted something to give me an idea of how my FPM performed by comparison. To elaborate:

  • Basic output: Printed “Hello, [name]” – with [name] taken from the query string (?name=…)
  • Looped arithmetic: Adding and outputting numbers in a loop with 7000 iterations.
  • Looped method calls: Calling arithmetic-performing methods from within a loop with 7000 iterations.
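For reference, the looped workloads amount to roughly the following – a Python reconstruction for illustration, since the actual test scripts used each language’s own syntax (and printed their results rather than returning them):

```python
ITERATIONS = 7000

def add(a, b):
    # Trivial arithmetic-performing method for the method-call test.
    return a + b

def looped_arithmetic():
    total = 0
    for i in range(ITERATIONS):
        total += i              # arithmetic done inline
    return total

def looped_method_calls():
    total = 0
    for i in range(ITERATIONS):
        total = add(total, i)   # same work, but via a method call
    return total
```

The point of the third test is purely the function-call overhead: both loops compute the same sum.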

Below is a table of the results. The numbers shown are requests per second; higher is better.

                 Basic output   Looped arithmetic   Looped method calls
PHP              28.17          11.29               10.92
(Python)         24.61          7.92                7.25
Jaylle (Julia)   24.85          5.27                5.12

The only thing I can say from these results is that I’m comforted to see that my FPM’s performance isn’t obviously terrible compared to the others – though there’s clearly some work to be done to bring it up to the level of Python, if not PHP.

In other news, I’ve realised (4 years late) that all the cool people use Twitter now. I therefore have started actively using my account. I can’t promise that following me will improve your quality of life, but feel free to give it a chance: @phollocks

Coming soon: FPM documentation + writeup (as soon as I’m comfortable enough to tag a release).

Revisiting emulated OOP behaviour and multiple dispatch in Julia

In an earlier post, I explored one approach to emulating OOP behaviour by bundling functionality with the data on which it operates, akin to object methods in OOP languages such as C# and PHP. A comment posted by Matthew Browne questioned whether this approach was compatible with Julia’s multiple dispatch.

This is something I thought about at the time of writing the original article, but I had assumed it wouldn’t be possible due to the way in which the anonymous functions are assigned to variables i.e. assigning one definition would overwrite the previous. However, Matthew’s question prompted me to reconsider – and after some brief experimentation and some small alterations, I found that there is indeed a way to maintain compatibility with multiple dispatch.

Below is an updated example type definition:

type MDTest
    method::Function

    function MDTest()
        this = new()

        function TestFunction(input::String)
            println(input)
        end

        function TestFunction(input::Int64)
            println(input * 10)
        end

        this.method = TestFunction

        return this
    end
end

The theory is basically the same, with the constructor assigning the methods to their respective fields within the type. The difference is in how the functions are defined and assigned.

On lines 7 and 11, methods are defined with different argument types. These methods could be defined outside of the type definition without error, but defining them within the constructor has the advantage of not polluting the global scope.

On line 15, the function is assigned to its field using some slightly different syntax, which allows both methods to be called.

With this, the example code below:

test = MDTest()
test.method("Hello")
test.method(5)

Produces the output:

Hello
50
Another advantage of this approach is the absence of anonymous functions – which, according to benchmarks and GitHub issues, have significantly worse performance than named functions.