Over Chinese New Year celebrations, a friend asked (again) about the curious lack of a particular feature in Gorgonia, the deep-learning package for Go: tuples. This led to this tweet (that no one else found funny :( )

"I follow the teachings of Lambda Calculus turing times of hardship"

— Chewxy (@chewxy) February 17, 2018

and

"I'm a follower of Church of Turing Machines"

are puns that should be considered the epitome of puns.

I may be a bit intoxicated. Just a bit

The feature that was missing is one that I’d vehemently objected to in the past. So vehemently did I object to it that by the first public release of Gorgonia, there was only one reference that it had ever existed (by the time I released Gorgonia to the public, I had worked on three versions of the same idea).

You can’t find a reference to tuple types in Gorgonia now, but I added it back to hm, the Hindley-Milner type system that Gorgonia uses - mainly because my other projects needed it.

Yes, once upon a time, in a far earlier version of Gorgonia, tuples were supported. I then had a sort of epiphany that tuples are super powerful. Given that at that time I was trying to remove as much power as possible from Gorgonia, it seemed rather prudent to remove tuples. My current views on the power of programming languages have changed somewhat - I still believe that a programming language shouldn’t be too powerful. A decent programming language in my view, should make easy things easy, and hard things hard.

When I say powerful, I mean expressive power - i.e. how easy it is to express certain concepts in a language. My main objection to higher-powered languages is one that may be judged relatively Luddite, and possibly very unfounded, but it is entirely practical.

# Constructive Algebras

To get a sense of why tuples are powerful, we’ll look at constructing various number types. I’ve written that numbers are weird but I feel it might be worth revisiting the idea.

## Natural Numbers To Integers

Let’s imagine a world where the only numbers known are the non-negative whole numbers - 0, 1, 2, 3, 4… The people of this world know how to perform addition, and by means of repeated addition, they can do multiplication. They also know whether one number is bigger than another, and whether two numbers are the same.

The question is this: How do you invent negative numbers?

The answer: with tuples. And as a bonus, you automatically invent subtraction. And algebra. Here’s how:

Restricting ourselves to 2-tuples $latex (a, b)$, there are three possible types of tuples: those where $latex a > b$, those where $latex a = b$, and those where $latex a < b$.

Now let’s look at the possible operations between the tuples. We’ll start with the notion of equality. It’s a fairly simple idea:

$$ (a, b) = (x, y) \iff a + y = x + b $$

In other words, swap the second element of each tuple - i.e. swap $latex b$ and $latex y$ - and should the pairs add up to the same number, we can say that $latex (a, b) = (x, y)$. Here are a couple of worked examples that you can induct from:

$$ (5, 3) = (4, 2) \text{, because } 5 + 2 = 4 + 3 $$

$$ (7, 7) = (0, 0) \text{, because } 7 + 0 = 0 + 7 $$

Defining addition is fairly straightforward:

$$ (a, b) + (x, y) = (a + x, b + y) $$

Multiplication is a little trickier (working out why it must be so is a good exercise), but let’s define multiplication as such:

$$ (a, b) \times (x, y) = (ax + by, ay + bx) $$

There we have it - all the pieces needed to invent negative numbers. But first, let’s define a way to represent the natural numbers AS tuples. Doing so is fairly straightforward. Because the *least* natural number is $latex 0$, we can simply define the representation of natural numbers to be thus:

$$ a \equiv (a, 0) $$

Here’s a brief proof that you can induct upon that $latex (a, 0)$ does in fact represent $latex a $:

$$ \begin{eqnarray}
(a, 0) + (b, 0) &=& (a + b, 0)\\
(a, 0) \times (b, 0) &=& (ab + 0, 0 + 0) = (ab, 0)\\
(a, 0) = (b, 0) &\iff& a + 0 = b + 0 \iff a = b
\end{eqnarray} $$

Given that $latex (a, 0)$ represents $latex a$, it might be worth asking the question, what does $latex (0, a)$ represent?

There cannot be a number that is smaller than $latex 0$, because it’s the *least* element (which we arbitrarily defined above). So let’s see what happens when $latex (0, a)$ interacts with another tuple:

$$ (0, 1) + (2, 0) = (2, 1) = (1, 0) $$

Let’s try rewriting it in natural numbers only. Anything we cannot represent in natural numbers (recall that the only numbers we know are whole numbers), we’ll write as $latex \Box $. And since we know the answer to the equation above, we’ll fill in both the right and left sides:

$$ \Box + 2 = 1 $$

We’ve just defined an addition with an unrepresentable number. We should give it a name - perhaps *negative* number? And since we know that $latex \Box $ is $latex (0, 1)$, why not invent a new notation for it? We’ll write $latex (0, 1)$ as $latex -1$. Congratulations. We’ve just invented integers.

Along the way, if you squint, we invented algebra. Well, an algebra of integers anyway. But in order to invent algebra for integers, we need to invent an additional operation: subtraction. It should be fairly trivial to show that $latex (a, b) - (x, y) = (a+y, b+x)$, using only the things we’ve listed above.
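The rules above are concrete enough to run. Here’s a minimal Go sketch - the `Nat` and `Int` names are my own illustration, nothing in Gorgonia - that encodes an integer as a tuple of two naturals and implements the equality, addition, multiplication, and invented subtraction rules:

```go
package main

import "fmt"

// Nat is a natural number - the only kind of number this world knows.
type Nat uint

// Int is the tuple (A, B), read as "A - B", even though the
// inhabitants of this world cannot subtract yet.
type Int struct {
	A, B Nat
}

// Eq implements (a, b) = (x, y) iff a + y = x + b.
func Eq(p, q Int) bool {
	return p.A+q.B == q.A+p.B
}

// Add implements (a, b) + (x, y) = (a + x, b + y).
func Add(p, q Int) Int {
	return Int{p.A + q.A, p.B + q.B}
}

// Mul implements (a, b) * (x, y) = (ax + by, ay + bx).
func Mul(p, q Int) Int {
	return Int{p.A*q.A + p.B*q.B, p.A*q.B + p.B*q.A}
}

// Sub is invented for free: (a, b) - (x, y) = (a + y, b + x).
// Only addition of naturals is ever used.
func Sub(p, q Int) Int {
	return Int{p.A + q.B, p.B + q.A}
}

func main() {
	two := Int{2, 0}   // the natural 2, represented as (2, 0)
	three := Int{3, 0} // the natural 3

	// 2 - 3 has no answer among the naturals...
	diff := Sub(two, three) // (2, 3)

	// ...but it is equal to the tuple (0, 1): we have invented -1.
	fmt.Println(Eq(diff, Int{0, 1})) // true

	// And (-1) * (-1) = 1, for free.
	fmt.Println(Eq(Mul(Int{0, 1}, Int{0, 1}), Int{1, 0})) // true
}
```

Note that the inhabitants of this world never subtract: `Sub` only ever adds naturals, yet it behaves exactly like subtraction on the tuples.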

## Integers to Rationals

If you followed through the entire process, you may note that $latex (a, b)$ is shorthand for $latex a - b$. That’s quite the point I’m getting at: tuples allow us to encode functions as well as new kinds of things. I’ll get to that in a moment. Negative numbers can be created by writing two natural numbers in a tuple, and defining operations on them.

Let’s try that with rational numbers. A rational number is written as a ratio of two integers, like so:

$$ \frac{a}{b} $$

Or we can write them like so:

$$ (a, b) $$

It isn’t too hard to follow the processes above and invent division (and really, along the way, the ring structure). This blog post has grown away from what I had intended to write, so I shall skip that part and leave it as an exercise for the reader, as any maths textbook will tell you.
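Following the same recipe, here is a hedged Go sketch of rationals as tuples of two integers. The `Rat` type and its rules are my own illustration, and Go’s built-in `int` stands in for the constructed integers:

```go
package main

import "fmt"

// Rat is the tuple (A, B), read as "A / B", with B != 0.
type Rat struct {
	A, B int
}

// Eq implements (a, b) = (x, y) iff ay = xb - cross-multiplication,
// the analogue of the "swap and add" rule for integers.
func Eq(p, q Rat) bool {
	return p.A*q.B == q.A*p.B
}

// Add implements (a, b) + (x, y) = (ay + xb, by).
func Add(p, q Rat) Rat {
	return Rat{p.A*q.B + q.A*p.B, p.B * q.B}
}

// Mul implements (a, b) * (x, y) = (ax, by).
func Mul(p, q Rat) Rat {
	return Rat{p.A * q.A, p.B * q.B}
}

// Div falls out for free: multiply by the flipped tuple.
func Div(p, q Rat) Rat {
	return Rat{p.A * q.B, p.B * q.A}
}

func main() {
	half := Rat{1, 2}
	third := Rat{1, 3}

	fmt.Println(Eq(Add(half, third), Rat{5, 6})) // true: 1/2 + 1/3 = 5/6
	fmt.Println(Eq(Div(half, third), Rat{3, 2})) // true: (1/2) / (1/3) = 3/2
}
```

Just as the integer tuples implicitly encoded subtraction, these tuples implicitly encode division - `Div` is nothing but multiplication by a flipped tuple.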

## Rationals to Reals

We can follow through the motions and arrive at rational numbers. But we run into a problem when we try to invent real numbers. If you follow the Dedekind Cut method of constructing real numbers, however, you realize that you can represent real numbers as a tuple of things too. Specifically, a Dedekind Cut is a partition of the rationals into two sets, typically written $latex (L, U)$ - for **L**ower and **U**pper.

$latex L$ is a *set* of rational numbers with no greatest element. $latex U$ is a *set* of rational numbers that may or may not have a least element. Adding constraints to the elements allows us to “search” for the number in question. When the conditions become such that $latex U$ has no least element, the Dedekind Cut represents an irrational number.

What I wrote above can sound quite confusing, but the principle is the same as in the natural-numbers-to-integers example: a tuple can represent something that does not exist yet. For example, we want to know if $latex \sqrt{2}$ can really exist.

First we take the set of all rationals, $latex \mathbb Q $, and split it into two ranges (we’ll use 0 as a standard example):

$$ L = \{ x \in \mathbb{Q} : x < 0 \} \qquad U = \{ x \in \mathbb{Q} : x \geq 0 \} $$

To find the Dedekind Cut that makes $latex \sqrt{2} $, we simply realise that $latex \sqrt{2}^2 = 2$. So we add that to the condition:

$$ L = \{ x \in \mathbb{Q} : x < 0 \text{ or } x^2 < 2 \} \qquad U = \{ x \in \mathbb{Q} : x > 0 \text{ and } x^2 \geq 2 \} $$

Now of course the Dedekind Cut can also represent rational numbers (note that now the Upper set contains the rational number in question):

$$ L = \{ x \in \mathbb{Q} : x < 0 \} \qquad U = \{ x \in \mathbb{Q} : x \geq 0 \} $$

Here $latex U$ has a least element, $latex 0$, so this cut represents the rational number $latex 0$.

The fact that reals have to be constructed from sets of rationals, representing the set of possible numbers between the lower and upper sets, instead of via a smooth, straightforward construction like that of integers from naturals or rationals from integers, has vexed many people. In fact, what I just wrote up to this point can typically be found in the introductory sections of any real analysis textbook - probably more formalized than my slipshod explanation too.
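The Dedekind Cut tuple can be sketched in code too, with one caveat: $latex L$ and $latex U$ are infinite sets, so instead of listing elements, we represent a cut by a membership test. The `Cut` and `Rat` types below are purely my own illustration:

```go
package main

import "fmt"

// Rat is a rational P/Q, with Q > 0.
type Rat struct {
	P, Q int
}

// Cut stands in for the tuple (L, U). Because L and U are infinite
// sets of rationals, the cut is represented by a membership test:
// InLower reports whether a rational belongs to L. U is the rest.
type Cut struct {
	InLower func(r Rat) bool
}

// sqrt2 is the cut with L = {x : x < 0 or x^2 < 2}. No rational
// squares to exactly 2, so U has no least element: the cut names
// a number the rationals do not contain.
var sqrt2 = Cut{InLower: func(r Rat) bool {
	// p/q < 0, or (p/q)^2 < 2, i.e. p*p < 2*q*q
	return r.P < 0 || r.P*r.P < 2*r.Q*r.Q
}}

func main() {
	fmt.Println(sqrt2.InLower(Rat{1, 1})) // true:  1 < sqrt(2)
	fmt.Println(sqrt2.InLower(Rat{7, 5})) // true:  1.4 < sqrt(2)
	fmt.Println(sqrt2.InLower(Rat{3, 2})) // false: 1.5 > sqrt(2)
}
```

Tightening the test (trying rationals closer and closer to the boundary) is the “search” described above: the cut pins down $latex \sqrt{2}$ without ever producing a rational equal to it.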

Fortunately, my interest in the arithmetization of numbers ends with real analysis. I will however mention a fact that shouldn’t be missed: a complex number is represented as a tuple of two real numbers. I’ll leave the rest of the pondering to better, more qualified authors than me.

# Tuples Represent Functions

From the relatively verbose examples above, it should be easy to notice that the tuples implicitly encode a function - subtraction for the construction of integers, and division for the construction of rationals. Tuples are important in representing pairing functions for any pure analysis related work too. I consider tuples to be probably the most powerful abstraction tool we have as humans.

Tuples of two things, on the other hand, yielded rational numbers. They can also yield computer programs. Consider a simple tuple definition, here written in Go:

```go
type Tuple struct {
	A interface{} // A and B can hold anything -
	B interface{} // a string, or another nested Tuple
}
```

If you squint at it, it’s the same as `(A, B)`. What can we do with this? Obviously we can use it as a data structure to store two things. But we can also use it as a computer program. Like so:

```go
var square = Tuple{
	A: Tuple{
		A: "*",
		B: "a",
	},
	B: "a",
}
```

If you pass this `square` data structure to an interpreter that has rules of reduction (and an understanding of bound and unbound variables), you have just defined a function. Specifically, you just wrote a valid expression under the rules of lambda calculus.

In fact, a typical lambda calculus expression, often written as such: $latex \lambda x . x$, can be seen as this tuple: $latex (Var, Body)$. The rules of lambda calculus must of course be defined, otherwise it’d just be a dumb data structure. But it doesn’t take much to define lambda calculus.
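To make the “dumb data structure plus rules of reduction” point concrete, here is a toy interpreter sketch - entirely my own illustration, nothing like it exists in Gorgonia. Terms are tuples: `Lam` is the $latex (Var, Body)$ tuple, `App` is a $latex (Fn, Arg)$ tuple, and a single substitution rule gives us beta reduction, with `*` as the lone primitive:

```go
package main

import "fmt"

// Term is anything that can appear in an expression.
type Term interface{}

type Var string // a variable name
type Num int    // a number literal

// Lam is the tuple (Var, Body): the abstraction λVar.Body.
type Lam struct {
	Var  Var
	Body Term
}

// App is the tuple (Fn, Arg): apply Fn to Arg.
type App struct {
	Fn, Arg Term
}

// subst replaces free occurrences of v in t with val.
// (A toy: it ignores variable capture.)
func subst(t Term, v Var, val Term) Term {
	switch t := t.(type) {
	case Var:
		if t == v {
			return val
		}
	case Lam:
		if t.Var != v { // v is still free inside; substitute in the body
			return Lam{t.Var, subst(t.Body, v, val)}
		}
	case App:
		return App{subst(t.Fn, v, val), subst(t.Arg, v, val)}
	}
	return t
}

// eval reduces a term to a value. Rule 1 is beta reduction;
// rule 2 gives meaning to the lone primitive "*".
func eval(t Term) Term {
	app, ok := t.(App)
	if !ok {
		return t // variables, numbers and lambdas are already values
	}
	fn := eval(app.Fn)
	arg := eval(app.Arg)
	if lam, ok := fn.(Lam); ok { // rule 1: ((λv.body) arg) → body[v := arg]
		return eval(subst(lam.Body, lam.Var, arg))
	}
	if inner, ok := fn.(App); ok { // rule 2: ((* a) b) → a*b
		if v, ok := inner.Fn.(Var); ok && v == "*" {
			if a, ok := inner.Arg.(Num); ok {
				if b, ok := arg.(Num); ok {
					return a * b
				}
			}
		}
	}
	return App{fn, arg}
}

func main() {
	// square = λa. ((* a) a) - the same shape as the Tuple example above
	square := Lam{Var: "a", Body: App{App{Var("*"), Var("a")}, Var("a")}}
	fmt.Println(eval(App{square, Num(3)})) // 9
}
```

The data structures know nothing about computation; all the power lives in `subst` and `eval`. That is exactly the danger: a handful of reduction rules over dumb tuples is already most of the way to a programming language.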

A tuple could just be a black box if you have no way of accessing its elements - it’d just be a structure holding some data. A data structure that doesn’t allow access to its data is pointless (well, it’s useful in the case of Unit/Void, but you would only ever need one instance of that, and it might as well not exist). After all, one would need to be able to access a data structure to touch the data.

But the moment you allow intensionality, you’ve just set yourself up for a world of pain. It’s currently fairly easy to prove that Gorgonia is not Turing complete. Programs written in Gorgonia WILL terminate. There are no `fold`s, no loops, and no branches either.

As shown in the earlier paragraphs, it doesn’t take much to define a new function from first principles. I created Gorgonia to make it easier and clearer to create deep neural networks and symbolic/automatic differentiation processes. Allowing abstractive intensional capabilities seems like a bad idea from this point of view. Any control structure should be defined in the external system, making it easier to debug and understand what goes wrong.

Of course, I could just have written Gorgonia to have introspection without recursiveness, but that would entail all sorts of new algorithms regarding ownership of data, à la Rust. Unfortunately my plate is pretty full as it is.

# I’m Clearly Wrong… To Some Extent

Now clearly I’m taking an extreme view here. My views have changed significantly over the past few years, clearly. After reading a lot more on $latex RCA_0, WKL_0$ and similar systems, and deeply understanding slightly more of Gödel’s $latex \beta$ function, I’m no longer convinced that recursiveness is the problem. Self-reference, and most importantly, self-application (i.e. recursivity), is still the key to complexity.

I still believe it’s possible to build a complex system from a system that is not recursive/self-referential. The trick is to move recursivity to the enclosing system. In fact, that was the major reason for making Gorgonia’s graphs as dumb as possible. For one, non-Turing-complete systems are easier to debug. Recent experience with Gorgonia shows otherwise, though - a lot of work still needs to be put into its debugging capabilities.

But if you look carefully, you will notice the biggest flaw in my own arguments: being able to express Turing completeness from within the system is one thing; the system itself may still be Turing complete.

What I mean is this: to write a recurrent neural network using Gorgonia, you wouldn’t be able to use `foldleft` functions, BUT you can use a loop written in Go to append an RNN cell to the previous one.

You wouldn’t be able to move back and forth in the “tape” of the program from within the expressions generated, BUT you could still cause the machine to do so, in the control loop that is written in Go.

I’ll probably have more to say about the insides and outsides of systems (blahblah Gödel’s Incompleteness blahblah) when I have learned more, but I suspect we may see tuples and things like `foldleft` in Gorgonia yet.

Another weird thing you may have noticed is the fussing over Turing completeness. There’s clearly more to writing correct programs than wondering whether they would halt. Again, I will have another post musing about that, when I feel like I know more.

In the meantime - tell me what you think?