On Binary Classification of Human Beings

Over the years I have come up with some fun ideas for binary classifications of people. They say “those who can’t do, teach”. That’s a binary classification – teachers and doers. I once did something like that, with a longer elaboration: Hackers and Engineers.

Abstract Thinking Capabilities

Some people have better abstract thinking capabilities than others. I’ll use an example that makes this a particularly dangerous thought. Consider two young girls, A and B, both black and about 7 years old, playing with white-looking Barbie dolls. Girl A thinks of the doll she’s playing with as a white girl. Girl B thinks of the doll she’s playing with as an abstract representation of what a human female would be like.

I would suggest that the two girls will grow up very different, based on the way they think alone. Girl A will grow up feeling that her race isn’t represented well in the toys she plays with. Girl B, on the other hand, will not experience this as structural racism, mainly because – I would think – better abstract thinking capabilities mean a weaker attachment to one’s own identity (I call this the abstraction of identity).

To test this idea: we should be able to devise a test of a person’s abstract thinking capability, and another test to examine people’s experiences of structural racism, then correlate the answers. My hypothesis is that higher abstract thinking scores would correlate with lower reported experience of structural discrimination.

Functions and Instructions

A while ago, a friend gave a group of us an IQ quiz (and I hate those). The question went like this: given two cups, one with a 175 ml capacity and the other with 250 ml, measure out 100 ml, 200 ml, and 220 ml of water from a limitless jug. While some of the group were puzzling over how to get those quantities, others immediately called out 220 ml as impossible (the long story: the question-giver meant to say 225 ml, not 220 ml, but messed up the question).

You see, the problem is a gcd problem in disguise. It’s the very same problem Euclid faced. Only multiples of the gcd of the two capacities (25, in case you were wondering) can be derived from machinations with the two cups. For us (well, me, at least), it was a gut feeling: something seemed wrong about the number.
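The arithmetic is easy to check in a couple of lines. A minimal sketch in Python (the language we were all using anyway):

```python
from math import gcd

# 25 ml is the finest granularity the two cups can measure
g = gcd(175, 250)
print(g)

# a target is achievable only if it is a multiple of the gcd
for target in (100, 200, 220, 225):
    print(target, "possible" if target % g == 0 else "impossible")
```

Only 220 fails the divisibility check, which is exactly the gut feeling formalised.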

But that’s not the interesting part. The interesting part was that most of those who pointed out that 220 was impossible took a very long time puzzling over the steps for actually getting the other quantities, which are multiples of 25. I especially took a long time to figure out the steps (though I’d blame the alcohol I had imbibed by then). The others, who didn’t point out that 220 was impossible, took far less time to figure out the succession of pouring steps to reach their desired amount.
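For what it’s worth, the pouring steps we all puzzled over can also be found mechanically: a breadth-first search over cup states yields the shortest sequence of fills, empties, and pours. A sketch (the state encoding and move names are my own, not part of the original quiz):

```python
from collections import deque

def pouring_steps(cap_a, cap_b, target):
    """BFS over (a, b) cup states; returns the shortest list of moves
    that leaves `target` ml in either cup, or None if unreachable."""
    start = (0, 0)
    paths = {start: []}          # state -> move list that reaches it
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            return paths[(a, b)]
        pour_ab = min(a, cap_b - b)   # how much fits pouring A into B
        pour_ba = min(b, cap_a - a)
        moves = [
            ((cap_a, b), "fill A"),
            ((a, cap_b), "fill B"),
            ((0, b), "empty A"),
            ((a, 0), "empty B"),
            ((a - pour_ab, b + pour_ab), "pour A into B"),
            ((a + pour_ba, b - pour_ba), "pour B into A"),
        ]
        for state, name in moves:
            if state not in paths:
                paths[state] = paths[(a, b)] + [name]
                queue.append(state)
    return None  # exhausted all states: target is impossible

print(pouring_steps(175, 250, 100))
print(pouring_steps(175, 250, 220))
```

The search confirms the gcd argument from the other direction: 220 exhausts the state space with no solution, while the multiples of 25 all come back with short move lists.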

A quick survey noted that the group who pointed out 220 was impossible without even trying had math-based degrees (well, all of us had math-based degrees), while the other group had spent more of their careers programming. This suggests to me that there are two modes of thinking, a distinction I suspect the programming language theory community has long known: algorithmic thinking and machine-based thinking.

One mode is based on the more abstract notion of the function (Church and the lambda calculus); the other is based on the more concrete Turing machine (instructions executed in linear fashion). One is not better than the other, in my opinion.
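The contrast shows up even in how you might write Euclid’s algorithm itself. A quick illustration (the function names are mine):

```python
def gcd_functional(a, b):
    """Euclid's algorithm as a definition: gcd(a, b) *is*
    gcd(b, a mod b), with gcd(a, 0) = a as the base case."""
    return a if b == 0 else gcd_functional(b, a % b)

def gcd_machine(a, b):
    """The same algorithm as a sequence of state-mutating steps."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_functional(175, 250), gcd_machine(175, 250))
```

Same algorithm, same answer; one reads as an equation, the other as instructions to execute.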

Ironically though, most of us in the group don’t use functional programming languages in our daily lives (all of us used Python, and most of us eschew the functional bits of it too). Though I did recommend using Haskell as a thinking tool for solving problems (thinking of using Haskell in production, and of cabal hell, makes me want to go cry into my shitty Python virtualenv).
