Object-oriented vs. functional(-oriented) programming is the new hot debate since Swift arrived, but what does it even mean to “orient” programming? In this talk Graham Lee shows that maybe this dichotomy isn’t so true after all, and that OOP and FP are deeply intertwined.

How Can We “Orient” Programming? (00:24)

Object oriented programming, library oriented programming, protocol oriented programming, functional programming (function oriented programming): these terms change meaning over time. In Byte magazine (1981), “object oriented” meant that the memory management system was oriented to the size of the objects in your program: it would automatically allocate and free them as you used them. The programming paradigm we now call object oriented was called message oriented programming.

Now we have functional programming, which is not “oriented” at all. We have protocol oriented programming, which works with the same data types as object oriented programming. We have library oriented programming from Justin. What is going on here? If I use Objective-C, am I automatically object oriented? If I use Objective-C but only use C functions, primitive data types, structs and unions, am I still object oriented (I am, after all, using an object oriented programming language)? If I use Swift, am I automatically functional? Where is the commonality of orientation there? What is it that you think is being oriented, and in what direction?

A paradigm is a thought process, a way of thinking. It is not our program or our source code that is being oriented; it is our thinking. Programming is the art of thinking about a problem at a high level and then translating that thinking into low-level mechanics, expressing the solution in a way that a computer can run.

Functional points (06:49)

Given a Cartesian representation of a point (x, y), find its distance from the origin and its angle from the x-axis.

Functional points — attempt 1 (07:23)

```
Point_radius :: float, float → float
Point_radius(x, y) = √(x² + y²)
Point_angle :: float, float → float
Point_angle(x, y) = arctan(y/x)
```

This is vaguely Z, a mathematical specification notation. I have a function called `Point_radius`, which takes a float and a float and returns a float. The specific implementation of `Point_radius` is that it returns the result of that maths (and the same for the angle).
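As a runnable sketch of this first attempt (Python here rather than the slide's Z-like notation; `math.atan2` stands in for arctan(y/x) so the angle comes out right in every quadrant, and the function names are mine, mirroring the slide):

```python
import math

def point_radius(x: float, y: float) -> float:
    """Distance of the point (x, y) from the origin."""
    return math.sqrt(x**2 + y**2)

def point_angle(x: float, y: float) -> float:
    """Angle of the point (x, y) from the x-axis, in radians."""
    return math.atan2(y, x)

print(point_radius(3.0, 4.0))  # 5.0
print(point_angle(1.0, 1.0))   # 0.7853981633974483 (π/4)
```

Note that nothing in the code ties these two functions together; that is exactly the cohesion problem discussed next.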

We can already see problems. The first is coupling, and how libraries can help to reduce it. The opposite of coupling is cohesion, and this is not a very cohesive solution: I have one point, but I have created two unrelated functions, and nothing guarantees they will be used together on the same data. I am going to use a tool common in functional decomposition to address this problem: pattern matching.

Functional points — attempt 2 (09:07)

```
Point :: float, float, operation → float
Point(x, y, Radius) = √(x² + y²)
Point(x, y, Angle) = arctan(y/x)
```

The idea is that a function can have different implementations, with the algorithm chosen by pattern matching on its inputs. I have two operations, radius and angle, so I can enumerate that collection as an operation type and pattern match on it. Now I have a single function, and people are going to use the arguments in one way because there is only one entry point. It takes a float, a float and an operation, and it returns a float. We pattern match on the operation: if you give it x, y and Radius it will do the radius calculation, and if you give it x, y and Angle it will give you the angle calculation. That is one tool of functional programming, common in languages such as Swift and Haskell.


But it is not particularly extensible. Everything has to return a float. What if I wanted to transform the coordinate system and return another point, or return a string description? What if I had an operation that takes another argument? I cannot do that, because all I have is a float, a float and an operation in, and a float out. I can use another functional programming trick to make this more extensible and generic: higher-order functions.

Functional points — attempt 3 (10:46)

```
Point :: float, float, operation → Function
Point(x, y, Radius) = Point_radius
Point(x, y, Angle) = Point_angle
```

I can use a function that returns a function: I take my float, my float and my operation, and I return a function. When I pattern match on Radius, I return the radius function; evaluating that gives me the radius of the point. The same goes for the angle. I can then add other cases for other operations.

My third trick from functional programming comes from the lambda calculus: partial application. This function, which takes three arguments and returns a function, can become a function that takes one argument and returns a function that takes the next argument, and so on. Partial application consumes and effectively encapsulates the earlier arguments, returning a function that has closed over them and will use them in subsequent calculations.
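Curried, the point function might be sketched like this (Python, my names; each call consumes one layer of arguments and closes over the earlier ones):

```python
import math

def point(x: float, y: float):
    # Consume x and y now; return a function that has closed over them.
    def on(operation: str):
        # Consume the operation; return a zero-argument function (the "method").
        if operation == "radius":
            return lambda: math.sqrt(x**2 + y**2)
        if operation == "angle":
            return lambda: math.atan2(y, x)
        raise ValueError(f"unknown operation {operation!r}")
    return on

p = point(3.0, 4.0)       # earlier arguments are captured here
print(p("radius")())      # 5.0
```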

Functional points — tidying up (12:03)

```
Point :: float, float → operation → Function

Point(x, y) is a Constructor for an object, p.
p(operation) is a Message, which takes a Selector and returns a Method.
Point'(r, θ) could also be a Constructor...
```

I now have a function that takes two floats and returns a function that takes a selector and returns a method. I have used only the tools of functional programming, and yet I can construct an instance of a class of things which, when it receives a message containing a selector, chooses an appropriate method to execute in response. Classes, instances, selectors and methods: all I have done is take a function, pattern match on its input and partially apply its arguments. I have instance variables too: this function that takes a selector and returns a method is just a function that encapsulates its instance variables and responds to messages. Encapsulation, instance variables, classes, instances, selectors, messages. This is functional programming.

I could create another constructor that takes a magnitude and an angle (r and θ) but returns the same kind of thing: something that consumes selectors and returns methods. I would then have two constructors of different classes that both respond to the same selectors and give me interchangeable, polymorphic objects.

Object–Oriented Compiler (14:55)

Object oriented programming is used for building large scale systems (functional programming is used for building monad tutorials). I am going to look at a compiler. Imagine I have some definition of a source language. I want to build an executable if a program is valid in the syntax of that source language, and to report an error and not generate an executable if the syntax cannot be parsed. We are going to use object oriented techniques.

Object–oriented compiler — attempt 1 (15:40)

Compiler

```
+compile(source:String): Optional<Executable>
+getErrors(): Array<CompilerError>
```

It captures the important nouns and verbs from the problem specification, using UML-style, use case driven object oriented techniques from the Jacobson books of the 1990s. There is a public method called compile: it takes a string and maybe gives me an executable back. I can also look at the errors through the getErrors accessor.

But look at the coupling between these two methods. If I do not call compile, or if I call compile and it succeeds, or even if I call compile and it fails and then call it again and it succeeds, what does getErrors return? It is not clear. The design also violates one of the object oriented design principles: tell, don’t ask. You do not poke into an object to find out what its state is; you get it to tell you when interesting things happen.

Object–oriented compiler — attempt 2 (17:09)

Compiler

```
+compile(source:String, errorReporter:ErrorReporter): Optional<Executable>
```

ErrorReporter

```
+reportError(error:CompilerError): void
```

Let’s use the “tell, don’t ask” principle to create an error reporter object. This is essentially a consumer: it receives an error and returns nothing; it just does whatever it needs to do. Now the compiler has a single method which consumes the source string and the thing it needs to report errors to (an error sink), and maybe returns an executable. But there is still a problem. To compile an executable, we have to look at the syntax of the source language and try to match the text in the source string to the tokens of that language. Then we have to build some representation of it in an abstract machine, translate that into the executable language of the target machine, and build an executable in a format that an operating system can read.
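This second attempt might be sketched like so in Python (the class and method names mirror the diagram; the "compilation" body is a toy stand-in of my own, since the talk never shows an implementation):

```python
from typing import Callable, Optional

class CompilerError:
    def __init__(self, message: str):
        self.message = message

class Compiler:
    def compile(self, source: str,
                report_error: Callable[[CompilerError], None]) -> Optional[bytes]:
        # Tell, don't ask: errors are pushed to the error sink as they
        # happen, instead of being stored for callers to query later.
        if not source.strip():
            report_error(CompilerError("empty source"))
            return None
        return b"\x00executable"  # stand-in for a real executable

errors = []
result = Compiler().compile("", errors.append)
print(result, len(errors))  # None 1
```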

These are different things that could potentially be happening, which means many different reasons for this object to change. If I am on ARM and I want to do the time warp and go back to PowerPC, I have to change this compiler. If I want to add dots and tacks to property look-up in my source language, I have to change the compiler. If I want to stop using the Mach-O executable format and use the ELF executable format instead, I have to change my compiler. The object oriented programming principles include the single responsibility principle: an object should have only one reason to change.

Object–oriented compiler — attempt 3 (19:27)

Tokeniser

```
+tokenise(source:String, errorReporter:ErrorReporter): Optional<TokenisedSource>
```

Compiler

```
+compile(source:TokenisedSource, errorReporter:ErrorReporter): Optional<AssemblyProgram>
```

Assembler

```
+assemble(program:AssemblyProgram, errorReporter:ErrorReporter): Optional<BinaryObject>
```

Linker

```
+link(objects:Array<BinaryObject>, errorReporter:ErrorReporter): Optional<Executable>
```

ErrorReporter

```
+reportError(error:CompilerError): void
```

I have a tokeniser with a tokenise method that optionally returns a tokenised source. The tokenised source can be used by the compiler, whose compile method puts out an assembly program; that feeds the assemble method of the assembler, whose output feeds the link method of the linker. Now we have classes that are all called Verber, and each has a method called verb: each one consumes some input and returns some output. Each has one job, because we followed the single responsibility principle. They maintain no internal state, because we used the dependency inversion principle to put all of the state into the arguments. They are maps from their input values onto output values: each one of these is a single, pure function.

I am going to need a binder. For any function that takes a `t` and a `u` and returns an optional `v`, and any function that takes a `v` and a `u` and returns an optional `w`, I need to bind the two together so that I can feed the output of one into the input of the other, and then compose, in a sequence, the various functions that are on these classes.
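The binder described above can be sketched generically; here is a Python version, with two toy stages standing in for the real tokenise/compile/assemble/link (their bodies are mine, purely for illustration):

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")
W = TypeVar("W")

def bind(first: Callable[[T, U], Optional[V]],
         second: Callable[[V, U], Optional[W]]) -> Callable[[T, U], Optional[W]]:
    # Compose two Optional-returning stages, short-circuiting on None
    # and threading the shared error reporter through both.
    def composed(value: T, reporter: U) -> Optional[W]:
        intermediate = first(value, reporter)
        return None if intermediate is None else second(intermediate, reporter)
    return composed

# Toy stand-ins for two pipeline stages:
def tokenise(source: str, reporter) -> Optional[list]:
    return source.split() or None       # None on empty input

def compile_(tokens: list, reporter) -> Optional[str]:
    return " ".join(t.upper() for t in tokens)

pipeline = bind(tokenise, compile_)
print(pipeline("let x = 1", None))  # LET X = 1
print(pipeline("", None))           # None (tokenise failed, compile_ skipped)
```

Further stages chain the same way: `bind(bind(tokenise, compile_), assemble)` and so on, which is exactly the composed sequence of pure functions the talk arrives at.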

So What Just Happened? (22:28)

We have pure maps from input to output. We have compositions of chains of these pure transformations. We have object oriented programming. I used the SOLID principles and got an object oriented solution made of a composed sequence of functions, in the same way that I used the functional programming rules and got classes of objects that enclose their instance variables and respond to messages. So much, then, for how completely different functional programming and object oriented programming are: following the rules of each, for these examples, landed us squarely in the other's territory.

Conclusions + Q&A (22:47)

Object oriented programming is what happens when you follow the rules of functional programming. Functional programming is what happens when you follow the rules of object oriented programming.

Q: I wholeheartedly agree that functional programming and what you showed as object oriented programming are trying to achieve basically the same thing. I think the problem with object oriented programming is that if you consider, say, the single responsibility principle in SOLID, as you did, it is so vague that sometimes, actually many times, people think single responsibility means something completely different. You are not going to end up with objects like those; you are going to end up with big collections of methods that also mutate the object. What do you think about the state of object oriented programming today, and do you think it is similar to what you showed?

Graham: I was deliberately driving the rules hard in order to demonstrate that both are solving exactly the same problem; they both use exactly the same expression of the problem as their source of inspiration. In 1968 Doug McIlroy wrote about the industrialisation of the software industry. But it was not an industry back then: he was saying it was a collection of indies in their cottages, all doing hand-crafted, artisanal, bespoke work, not actually working together, not sharing their source and their findings, not working as a collective academic and intellectual community. Luckily, things have moved on since then. But his point was that in order to have a mature industry of software engineering you need components. I should be able to look at a catalog of components and say: what I need is a quicksort; I need to be able to use it from the Swift language; it needs to have these runtime constraints and these memory constraints. I should be able to pick and choose those, plug them together, and build my solution out of them. Or rather, I should be able to sell components: programmers would sell components and end-users would assemble things from them, or there would be people who would build larger assemblies out of those components, and end-users could plug those together to build the applications they need. Which is basically what AppleScript and Automator do; they are at that level of end-user composition. The functional programming people came up with the idea of being able to interchange and compose functions. In object oriented programming there are two ways to look at it. One is that object oriented programming is message passing: late, runtime resolution of the function that you want to run. If you look back at the point example, the operation consumes a selector and returns a method, and that is all Alan Kay cares about in object oriented programming. That, to him, is object oriented programming.
But then Java took inspiration from a couple of places. One is Objective-C: Sun hired programmers from NeXT in the early 90s. The other is Barbara Liskov and her work on abstract data types and the CLU language. The classes, the objects and the rules for inheritance, covariance and contravariance in Java come not from the object oriented world, where everything is an object and you just look stuff up at runtime, YOLO, but from the more mathematically inclined abstract data type world, which brings all of those constraints. When you have those constraints alongside single inheritance, you get all of the problems that lead to interface segregation, to not being able to compose things when one of them requires you to subclass, and to preferring composition over inheritance. The rules of modern object oriented programming come from the huge spike in its popularity: the launch of Java in the 1990s, which correlated with the rise of the Web. That is weird, because we do not use Java on the Web anymore. But that launch produced a thing that was not the same as the object oriented programming that existed before, yet co-opted the name. Object oriented programming as practiced (to use Gary Bernhardt’s phrase) does not match object oriented programming as presented here.

Q: In your last slide, you say that you have to bring your own thinking, and you showed how you can shape a language into any paradigm you like. On the other side, do you think that a language can shape the way you reason? For instance, NSArray does not have a map method, so mapping is not an automatic thing to reach for.

Q: Why are we always late in adopting this stuff? For example, you mentioned object oriented programming, which Alan Kay created in the 70s as Smalltalk, and then we have functional programming, which has been there since Lisp and ML, and Haskell in the late 90s. Why are we always 20 years late?