# Haskell is cool

I have been learning a programming language called Haskell for about a year now, and it is probably the single most fascinating and frustrating thing I have learned since I started programming. This is probably the 10th time I've rewritten this first paragraph, because I can't find a convincing elevator pitch, and nobody else seems to be able to explain it well either. The general consensus usually sounds something like "it's clean and maintainable". Yes, the syntax and features are quite interesting, but I promise you that's nowhere near the full story. This isn't a tutorial on the language, but a look at the foundations it's built on, the math and the logic behind it, because that's what makes it useful and rather addictive.

Here's a piece of code for the Fibonacci function, and it's pretty beautiful. You just define the first two cases, zero is zero and one is one, and recursion handles the rest. It should be pretty easy to understand what it's doing.

```
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
```
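As a quick sketch of how you'd actually use it: a type signature is optional but usually added (`Integer` here is one possible choice), and mapping `fib` over a range is an easy sanity check. The name `fibs` is my own.

```haskell
fib :: Integer -> Integer   -- one possible type signature
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- The first eight Fibonacci numbers:
fibs :: [Integer]
fibs = map fib [0..7]   -- [0,1,1,2,3,5,8,13]
```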

Here's another piece of code for sorting a list using quicksort, although we have some unusual syntax here: `p:xs`. It's a pattern-matching thing that assigns the pivot `p` to the first element and `xs` to the rest of the list. We then put the `p` element in the middle and join it with the lesser and greater elements of the list, both recursively sorted. `lesser` and `greater` are defined at the bottom using the `filter` function. This is a little bit more complicated, but I'm hoping it's not too hard to follow if you know what quicksort does in other languages.

```
quicksort [] = []
quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
  where
    lesser  = filter (< p) xs
    greater = filter (>= p) xs
```
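Since `filter` is doing the real work, here's a minimal sketch of how a pivot splits a list. The names `lesser` and `greater` mirror the ones in the quicksort code; the pivot `3` and the list are just made-up examples.

```haskell
-- filter keeps only the elements that satisfy the predicate.
-- With pivot 3, the rest of the list splits like this:
lesser, greater :: [Int]
lesser  = filter (< 3)  [5, 1, 4, 2]   -- [1, 2]
greater = filter (>= 3) [5, 1, 4, 2]   -- [5, 4]
```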

The next logical thing would be to figure out how to print `hello world`, but that's actually impossible, at least without cheating, though the cheating happens in a pretty interesting way. If you look at a beginner Haskell book, you'll go through lists, strings, recursion, functions, sorting, algorithms, modules, and types before you get to printing things. That's because printing is not a straightforward, normal process. It's one of those things that Haskell's rules forbid, so it instead relies on an infamously hard-to-grasp concept in Haskell's design called *Monads*. This rule is a result of the idea of purely functional programming, and it's not just a different way of writing code compared to other languages, but a fundamental difference in how we think about computing.
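As a sketch of what the "cheat" looks like: in Haskell, printing lives in the `IO` type. `putStrLn` doesn't print anything by itself; it returns an *IO action*, a description of an effect, and the runtime performs that description when it runs `main`. The `greeting` name is my own.

```haskell
-- 'greeting' is an ordinary pure value.
greeting :: String
greeting = "hello world"

-- 'main' is not a command that prints; it's a value of type IO (),
-- a description of the printing effect that the runtime carries out.
main :: IO ()
main = putStrLn greeting
```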

To explain this difference and why it exists, we will take a detour through some history and a bit of deep computer theory. Our typical style of programming goes step by step from the top of a file to the bottom. In 1936, Alan Turing described a theoretical computer called the Turing Machine. It has a step-by-step process, but it's pretty simple. Imagine a cursor that can move around and change numbers, following a flow chart or a table that says what to do depending on the number it sees. The cursor writes a number underneath it, moves left or right, and changes state. Each state has its own rules on what to do with the next number, and this is enough to compute anything that can be computed.

Here's where Haskell differs. There's another, simpler way of expressing calculations, and that's through **Lambda Calculus**, invented by one of Alan Turing's teachers, Alonzo Church (a roughly equally important guy). It sounds like complicated math, but the word calculus just means relating to calculation, and lambda is just a Greek letter.

Here, all calculations are represented by math functions like **f(x) = x^2 + 5**: you just take an input and get an output, that's it. Your entire program is a one-line expression like **f(g(x))**, and you run the program by evaluating the answer. There are no step-by-step commands in lambda calculus; order is only defined by what is needed to get the answer. So, for example, if we write **a = b + 1** and **b = 4**, and we want to find **a**, we have to find **b** first, no matter where it's written.
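This works exactly the same way in Haskell itself; a sketch, where top-level definitions can appear in any order:

```haskell
-- 'a' depends on 'b', so evaluating 'a' forces 'b' first,
-- even though 'b' is written later in the file.
a :: Int
a = b + 1

b :: Int
b = 4
-- a evaluates to 5
```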

**So, a quick tangent:** This is basically the difference between **imperative** and **declarative** programming. It's commonly explained as telling the computer how to do it versus what to do. For example, if you want to print out a list of numbers, the imperative way is with a `for` loop, and the declarative way is `.forEach()`. I always found this rather vague, because why couldn't you just say, "*I declare for the computer to run this for loop*"? The most important part that actually distinguishes the two is that you don't manually define any order in the declarative case.

#### Imperative

*(how to do it)*

```
for (item in list) {
  print(item)
}
```

#### Declarative

*(what to do)*

`list.forEach(print)`
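In Haskell itself, the same declarative idea would look like `mapM_`, which runs an IO action for each element of a list; a minimal sketch:

```haskell
-- No loop counter, no manual order bookkeeping:
-- just "print each element of the list".
main :: IO ()
main = mapM_ print [1, 2, 3]
```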

Not all declarative things are functional programming, though. CSS and HTML are declarative because they describe what the layout and styles should be rather than how to produce them, even though they aren't programming languages. Anyway, however different the two models are, Alan Turing proved in his 1936 paper that a Turing machine and lambda calculus can compute exactly the same things. That means that while Haskell is completely different from regular programming languages, it can do everything they can do, and some people argue it can do it better. All of this is why Haskell seems so weird.

If you don't consider functional programming's mathematical origin, the rules seem pretty arbitrary. Here are the rules:

**Functions are pure.** This means they have no side effects like printing to the console, changing a file, etc. This weird restriction makes perfect sense if functions work like math functions: you take an input and calculate an output. There's no ability to change anything else, so if you want to do things like modify a file, the basic explanation is that you just return a string or a number, and some other part of the computer handles what to do with it.

**Variables are immutable.** The common explanation is that making your variables constant means you won't accidentally change one and cause problems for other parts of the code that also use it, but it isn't just an organizational trick. It's one of the fundamental ideas of how functional programming and lambda calculus work. When we're doing calculations that way, variables are just shortcuts for labeling an expression, and obviously they wouldn't change. And if you think about it, pure functions can only take inputs and return outputs; there's no operational command to tell the computer to change a variable, so you can't do it at all. As a benefit, the equals sign actually means what it means in math, and we don't have the confusing **x = x + 1** type of thing anymore.
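A minimal sketch of both rules in code (the names here are my own examples):

```haskell
-- Pure: the output depends only on the input; calling this
-- can't print, write files, or change anything else.
double :: Int -> Int
double x = x * 2

-- Immutable: "changing" a list means building a new one.
xs :: [Int]
xs = [1, 2, 3]

ys :: [Int]
ys = 0 : xs   -- a new list [0,1,2,3]; xs is still [1,2,3]
```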

Just like how modern computers are more complicated than Turing machines, functional programming is likewise much more complicated than just lambda calculus. But the functional rules are stricter, so we can't tack on new ideas quite as easily.

*~ artwork by neatcoolfun*