*-Oriented Programming

Object-oriented vs. functional(-oriented) programming is the new hot debate since Swift arrived, but what does it even mean to “orient” programming? In this talk Graham Lee shows that maybe this dichotomy isn’t so true after all, and that OOP and FP are deeply intertwined.


How Can We “Orient” Programming? (00:24)

Object oriented programming, library oriented programming, protocol oriented programming, functional programming (function oriented programming): these terms change over time. In Byte magazine (1981), "object oriented" meant that the memory management system was oriented to the size of the objects in your program: it would automatically allocate and free them as you used them. The programming paradigm itself was called message oriented programming.

Now we have functional programming, which is not "oriented" at all. We have protocol oriented programming, which works with the same data types as object oriented programming. We have library oriented programming from Justin. What is going on here? If I use Objective-C, am I automatically object oriented? If I use Objective-C but only use C functions, primitive data types, structs and unions, am I still object oriented (I am using an object oriented programming language)? If I use Swift, am I automatically functional? Where is the commonality of orientation there? What is it that you think is being oriented, and in what direction?

Paradigmatic Programming and Paradigmatic Thinking (04:27)

Paradigm means a thought process, a way of thinking. We are not orienting our programs or our source code; we are orienting our thinking. Programming is the art of thinking about a problem at a high level and then translating that into the low-level mechanics that express the solution in a way a computer can run.

Functional points (06:49)

Given a Cartesian representation of a point x,y, find its distance from the origin and angle from the x–axis.

Functional points — attempt 1 (07:23)

Point_radius :: float, float → float
Point_radius(x,y) = √(x² + y²)
Point_angle :: float, float → float
Point_angle(x,y) = arctan(y/x)

This is vaguely Z notation (pronounced "zed"), a mathematical specification language. I have a function called Point_radius, which takes a float and a float and returns a float. The specific implementation of Point_radius is that it returns the result of that maths (and the same for the angle).
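
For readers following along in Swift rather than the Z-style notation, here is a minimal sketch of attempt 1 (the function names and the use of atan2 are mine, not from the slides):

import Foundation

// Attempt 1: two unrelated free functions that both happen to take an
// (x, y) pair. Nothing ties them to a single "point" concept.
func pointRadius(_ x: Double, _ y: Double) -> Double {
    return sqrt(x * x + y * y)
}

func pointAngle(_ x: Double, _ y: Double) -> Double {
    return atan2(y, x)   // angle from the x-axis
}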

We can already see problems. The first is coupling, and how libraries can help to reduce coupling; the opposite of coupling is cohesion. This is not a very cohesive solution: I wanted a point, and I have created two unrelated functions, so nothing guarantees they are used together. I am going to use a tool common in functional decomposition to address this problem: pattern matching.

Functional points — attempt 2 (09:07)

Point :: float, float, operation → float
Point(x, y, Radius) = √(x² + y²)
Point(x, y, Angle) = arctan(y/x)

The idea is that a function can have different implementations, chosen by pattern matching on its inputs. I have two operations, Radius and Angle; I can enumerate that collection as an operation type and pattern match on it. Now I have a single function, and people are going to use the arguments in one way because there is only one entry point. It takes a float, a float and an operation, and it returns a float. We pattern match on the operation: if you give it x, y and Radius it does the radius calculation; if you give it x, y and Angle it gives you the angle calculation. That is a simple use of pattern matching, a functional programming technique common in languages like Swift and Haskell.
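
As a rough Swift rendering of attempt 2 (my own sketch; the slide uses mathematical notation), the operation becomes an enum and the pattern match becomes a switch:

import Foundation

// One entry point; the operation is an enumerated type we match on.
enum Operation {
    case radius
    case angle
}

func point(_ x: Double, _ y: Double, _ operation: Operation) -> Double {
    switch operation {
    case .radius: return sqrt(x * x + y * y)
    case .angle:  return atan2(y, x)
    }
}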

But it is not particularly extensible: everything has to return a float. What if I wanted to transform the coordinate system and return another point, or return a string description? What if I wanted an operation that takes another argument? I cannot do that, because all I have is the floats and the operation. I can use another functional programming trick to make this even more extensible and generic: higher-order functions.

Functional points — attempt 3 (10:46)

Point :: float, float, operation → Function
Point(x, y, Radius) = Point_radius
Point(x, y, Angle) = Point_angle

I can use a function that returns a function: I take my float, my float and my operation, and I return a function. When I pattern match on Radius, I return the radius function; evaluating that gives me the radius of the point. The same goes for the angle, and then I can add other cases for other operations.
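
Continuing the Swift sketch (and reusing the Operation enum from above), attempt 3 might look like this. The revised point now returns a function, and evaluating that function produces the result:

import Foundation

// Returning a function instead of a float: each case hands back a
// closure that has captured x and y.
func point(_ x: Double, _ y: Double, _ operation: Operation) -> () -> Double {
    switch operation {
    case .radius: return { sqrt(x * x + y * y) }
    case .angle:  return { atan2(y, x) }
    }
}

let radius = point(3, 4, .radius)()   // evaluate the returned function: 5.0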

My third trick from functional programming comes from the lambda calculus: partial application. This function, which takes three arguments and returns a function, can equally be written as a function that returns a function that returns a function that returns a function. Each partial application consumes, and effectively encapsulates, the earlier arguments, returning a function that has closed over them and will use them in subsequent calculations.

Functional points — tidying up (12:03)

Point :: float, float → operation → Function
Point(x,y) is a Constructor for an object, p
p(operation) is a Message, which takes a Selector and returns a Method.
Point'(r, θ) could also be a Constructor…

I have this function that takes two floats. It returns a function that takes a selector and returns a method. I have used all of the tools of functional programming, and I can now construct an instance of a class of things which, when it receives a message containing a selector, chooses an appropriate method to execute as a result of that message. This is functional programming: classes, instances, selectors and methods. All I have done is take a function, pattern match on its input and then partially apply its arguments. I have instance variables too; this function that takes a selector and returns a method is just a function that encapsulates its instance variables and responds to messages. Encapsulation, instance variables, classes, instances, selectors, messages. This is functional programming.

I could create another constructor; it takes the magnitude and the angle (r and θ), but it returns the same kind of thing: something that consumes selectors and returns methods. I would then have two constructors for different classes that both respond to the same selectors and give me interchangeable, polymorphic objects.
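
Here is a hedged Swift sketch of the tidied-up version (all naming here is mine): the "constructor" closes over its instance variables and returns a function from selector to method, and a second, polar constructor produces interchangeable objects:

import Foundation

enum PointSelector {
    case radius
    case angle
}

typealias Method = () -> Double
typealias PointObject = (PointSelector) -> Method

// Constructor: closes over x and y (the instance variables) and returns
// something that consumes selectors and returns methods.
func makeCartesianPoint(_ x: Double, _ y: Double) -> PointObject {
    return { selector in
        switch selector {
        case .radius: return { sqrt(x * x + y * y) }
        case .angle:  return { atan2(y, x) }
        }
    }
}

// A second constructor taking polar coordinates; it responds to the
// same selectors, so its instances are interchangeable with the above.
func makePolarPoint(_ r: Double, _ theta: Double) -> PointObject {
    return { selector in
        switch selector {
        case .radius: return { r }
        case .angle:  return { theta }
        }
    }
}

let points: [PointObject] = [makeCartesianPoint(3, 4), makePolarPoint(5, .pi / 3)]
let radii = points.map { $0(.radius)() }   // [5.0, 5.0]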

Object–Oriented Compiler (14:55)

Object oriented programming is used for building large scale systems (functional programming is used for building monad tutorials). I am going to look at a compiler. Imagine I have some definition of a source language, and I want to take source text and build an executable if it is valid in the syntax of my source language, or report an error and not generate an executable if the syntax cannot be parsed. We are going to use object oriented techniques.

Object–oriented compiler — attempt 1 (15:40)

Compiler

+compile(source:String): Optional<Executable>
+getErrors(): Array<CompilerError>

It captures the important nouns and verbs from the problem specification (using my UML-style, use-case-driven object oriented programming techniques from the Jacobson books of the 1990s). There is a public method called compile: it takes a string and maybe gives me an executable back. I can look at the errors as well with the getErrors accessor.

But let's look at the coupling between these two methods. If I do not call the compile method, or if I call the compile method and it succeeds (or even call the compile method and it fails and then call it again and it succeeds), it is not clear what getErrors does. It also violates one of the object oriented design principles: tell, don't ask. You do not poke into an object to find out what its state is; you get it to tell you when interesting things happen.

Object–oriented compiler — attempt 2 (17:09)

Compiler

+compile(source:String,errorReporter:ErrorReporter): Optional<Executable>

ErrorReporter

+reportError(error:CompilerError): void

Let's use the "tell, don't ask" principle to create an error reporter object. This is essentially a consumer: it receives an error and does not return you anything; it just does whatever it needs to do. Now the compiler has a single method which consumes the source string and the thing it needs to report errors to (an error sink), and maybe returns an executable. But there is still a problem. When we compile an executable, we have to look at the syntax of the source language and try to match the text in the source string to the tokens of that language. Then we have to build some representation of that in an abstract machine. Then we need to turn that into the executable language of the target machine and build an executable in a format that an operating system can read.
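
A minimal Swift rendering of attempt 2 (the concrete types here are placeholders I have invented for illustration):

struct CompilerError { let message: String }
struct Executable {}

// The error sink: it is told about errors; nothing asks it for state.
protocol ErrorReporter {
    func reportError(_ error: CompilerError)
}

protocol Compiler {
    func compile(source: String, errorReporter: ErrorReporter) -> Executable?
}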

Those are very different things that could be happening: many different reasons for this object to change. If I am on ARM and I want to do the time warp and go back to PowerPC, I have to change this compiler. If I want to add new syntax for property lookup in my source language, I have to change the compiler. If I want to stop using the Mach-O executable format and use the ELF executable format instead, I have to change my compiler. The object oriented programming principles include the single responsibility principle: an object should have only one reason to change.

Object–oriented compiler — attempt 3 (19:27)

Tokeniser

+tokenise(source:String,errorReporter:ErrorReporter): Optional<TokenisedSource>

Compiler

+compile(source:TokenisedSource,errorReporter:ErrorReporter): Optional<AssemblyProgram>

Assembler

+assemble(program:AssemblyProgram,errorReporter:ErrorReporter): Optional<BinaryObject>

Linker

+link(objects:Array<BinaryObject>,errorReporter:ErrorReporter): Optional<Executable>

ErrorReporter

+reportError(error:CompilerError): void

I have a tokeniser with a tokenise method that optionally returns a tokenised source. The tokenised source can be used by the compiler (which has a compile method). That puts assembly out, which can be used by the assemble method of the assembler and then by the link method of the linker. Now we have classes that are all called Verber, and they all have a method called verb: each one consumes some input and returns some output. They each do one thing, because we followed the single responsibility principle. They do not maintain any internal state, because we used the dependency inversion principle to put all of the state into the arguments. They are maps from their input values onto output values: each one of these is a single, pure function. I am going to need a binder: for any function that takes a T and a U and returns an optional V, and anything that takes a V and a U and returns an optional W, I need to bind those together so that I can feed the output of one into the input of the next, and then compose, in a sequence, the various functions on these classes.
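
In Swift, Optional's flatMap is exactly that binder: it applies the next stage only when the previous one produced a value. A sketch of the composed pipeline, assuming instances tokeniser, compiler, assembler and linker of the classes above, threading the same error reporter through each stage:

func build(source: String, reporter: ErrorReporter) -> Executable? {
    return tokeniser.tokenise(source: source, errorReporter: reporter)
        .flatMap { compiler.compile(source: $0, errorReporter: reporter) }
        .flatMap { assembler.assemble(program: $0, errorReporter: reporter) }
        .flatMap { linker.link(objects: [$0], errorReporter: reporter) }
}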

So What Just Happened? (22:28)

We have pure maps from input to output. We have compositions of chains of these pure transformations. We have object oriented programming: I used the SOLID principles and I got an object oriented solution made of a composed sequence of functions. In the same way, I used the functional programming rules and got classes of objects that enclose their instance variables and respond to messages. So it is obvious how completely different functional programming and object oriented programming are when you follow their different rules, for these examples, and end up in two completely different places.

Conclusions + Q&A (22:47)

Object oriented programming is what happens when you follow the rules of functional programming. Functional programming is what happens when you follow the rules of object oriented programming.

Q: I wholeheartedly agree that functional programming and what you showed as object oriented programming are trying to achieve the same thing. I think the problem with object oriented programming is that if you consider, say, the single responsibility principle from SOLID, as you did, it is so vague that sometimes, actually many times, people think single responsibility means something completely different. You are not going to end up with objects like those; you are going to end up with big collections of methods that also mutate the object. What do you think about the state of object oriented programming today, and do you think it is similar to what you showed?

Graham: I was deliberately driving the rules hard in order to demonstrate that both are solving exactly the same problem; they both use exactly the same expression of the problem as their source of inspiration. In 1968, Doug McIlroy wrote about the industrialisation of the software industry. It was not an industry back then: he was saying it was this collection of indies in their cottages all doing hand-crafted, artisanal, bespoke work, not actually working together, not sharing their source and their findings, not working as a collective academic and intellectual community. Luckily, things have moved on since then. But his point was that in order to have a mature industry of software engineering you need components: I should be able to look at a catalogue of components and go, right, what I need is a quicksort; I need to be able to use it from the Swift language, and it needs to have these runtime constraints and these memory constraints. I should be able to pick and choose those, plug them together and build my solution out of them. Or rather, I should be able to sell components: programmers would sell components and end-users would assemble things out of them, or there would be people who would build larger assemblies out of these components and then end-users could plug those together to build the applications they need. Which is basically what AppleScript and Automator do; they are at that level of end-user composition. The functional programming people came up with the idea of being able to interchange and compose functions. In object oriented programming there are two ways to look at it. One is that object oriented programming is message passing: late, runtime resolution of the function that you want to run. If you look back at the operation, it consumes a selector and returns a method; that is all that Alan Kay cares about in object oriented programming, and that, to him, is object oriented programming. But then Java took inspiration from a couple of places. One is Objective-C (Sun hired programmers from NeXT in the early 90s). The other is Barbara Liskov and her work on abstract data types and the CLU language. The classes, the objects, and the rules for inheritance, covariance and contravariance in Java come not from the object oriented world, where everything is an object and you just look stuff up at runtime and YOLO; they come from the more mathematically inclined abstract data type world, which builds in all of those constraints. When you have those constraints alongside single inheritance, you get all of the problems of having to do interface segregation, of not being able to couple things if one of them requires you to subclass, and therefore of preferring composition. The rules of modern object oriented programming come from the huge spike in its popularity: the launch of Java in the 1990s correlated with the rise of the Web. Which is weird, because we do not use Java on the Web anymore. But that launched a thing that was not the same as the object oriented programming that existed before, and co-opted the name. Object oriented programming as practised (to use Gary Bernhardt's phrase) does not match object oriented programming as presented here.

Q: In your last slide you say that you have to bring your own thinking, and you showed how you can shape a language to program in any paradigm you like. On the other side, do you think that a language can shape the way you reason? For instance, NSArray does not have a map method, so it is not automatic to think that way.

Graham: It does; it has block-based enumeration methods. In fact, no: valueForKey:. NSArray's valueForKeyPath: is your map function, although the way it handles nil makes it a bit weird and not truly semantically the same as map. But it does happen; Apple just likes using more words, or rather, NeXT liked using more words. A language can constrain things. If you have been doing Smalltalk programming for 30 years, you are unlikely to go, "I need a strict compile-time type system and type definitions, and I need functions that are going to map from type to type", because you have been thinking about objects for three decades. Imagine you have a matrix: you can represent your computer program, the software you are going to write, as a matrix. We will put all of the type constructors along the columns and all of the operations along the rows. You might have int, float, point, employee; and you might have add, subtract, give pay raise, describe. However you write your software, what you are going to do is fill in that matrix: I need to be able to do this operation on this type, therefore I need to fill in this cell here. In object oriented programming, you look at the columns first: for the int type I add and subtract, I do not give a pay raise, and I do describe. For the float type I add and subtract, I do not give a pay raise, and I do describe. For employee I do not add, I do not subtract, I do give a pay raise, and I do describe. Functional programming goes through it the other way: let me pattern match on the constructors that were passed to me to work out which version of this operation I am going to use. I am in add: if I have been given ints, then I will use this version; if I have been given floats, I will use that version. Both decompose the same problem, but in orthogonal ways, and whichever one you have been mostly exposed to, or thinking about recently, you are going to choose. In all of those cases there are things, like add and subtract, that I want to be able to do to more than one type, so I will pull those rules into something I am going to call a protocol, or a superclass, or a type class, depending on which one my language gives me. But inheritance is orthogonal: the inheritance part of object oriented programming is equally valid in functional programming.
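
A rough Swift illustration of that matrix (my own example, not from the talk): the object oriented decomposition fills it column by column, each type carrying its own operations, while the functional decomposition fills it row by row, one operation pattern matching on the constructors it is given.

// Column-first (object oriented): each type implements describe itself.
protocol Describable {
    func describe() -> String
}

struct Employee: Describable {
    let name: String
    func describe() -> String { return "Employee: \(name)" }
}

extension Int: Describable {
    func describe() -> String { return "Int: \(self)" }
}

// Row-first (functional): one describe operation that pattern matches
// on the type constructors it receives.
enum Value {
    case int(Int)
    case employee(name: String)
}

func describe(_ value: Value) -> String {
    switch value {
    case .int(let n):         return "Int: \(n)"
    case .employee(let name): return "Employee: \(name)"
    }
}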

Q: What books should I read to get good information about this?

Graham: The main flash of inspiration for doing this was two papers I read. One, by Uday Reddy of Birmingham University (in the 80s), is called "Objects as closures". Think of my function that takes the instance variables and returns a function from selector to method: imagine I built those out of closures in Smalltalk or in Objective-C. I have objects masquerading as functions, and I use those functions to build objects. You can think of an object as essentially a pure function; if your objects are functions from selector to method, they all have the same type, even in the strong type system of the Z notation or Haskell, so they are all absolutely interchangeable. The other paper is "Theorems for Free" by Philip Wadler. When you have the type system, you can derive certain truths. When people say functional programming helps you reason about your code, what a mathematician or a computer scientist means by that is that there are fundamental truths you can derive from a function's signature. Say it takes an array of a and returns an array of a. If that is all you know about a, then you cannot apply any operations to a (you do not know what operations a has). That is why he was not able to implement the quicksort with that signature; he had to implement quicksort for a extends Comparable, from an array of those to an array of those. Given that signature, array of a extends Comparable to array of a extends Comparable, I can compare stuff; I can take elements from the input array and reorder them in the output; I can drop them from the output. I cannot create new instances of a: I cannot possibly return any new a's, because I do not have a constructor or any operations that let me get a new a. I only have the comparable ones, which let me get boolean information from two of them. That means that if I know I have an array of a, I cannot get the number nine in the output, because I do not have numbers.
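
To make the "theorems for free" point concrete, here is a small Swift sketch (my example, not from either paper): the first signature tells you nothing about A, so the body can only rearrange or drop the elements it was given; the second requires Comparable, which is exactly enough to sort.

// With only this signature, there is no way to conjure a new A — the
// number nine cannot appear in the output unless it was in the input.
func rearranged<A>(_ input: [A]) -> [A] {
    return Array(input.reversed())
}

// Requiring Comparable lets us compare elements, but still only reorder
// the ones we were given.
func quicksort<A: Comparable>(_ input: [A]) -> [A] {
    guard let pivot = input.first else { return [] }
    let rest = input.dropFirst()
    return quicksort(rest.filter { $0 < pivot }) +
           [pivot] +
           quicksort(rest.filter { $0 >= pivot })
}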

Q: Why are we always late in adopting stuff? For example, you mention object oriented programming, which was created by Alan Kay in the 70s as Smalltalk, and then we have functional programming, which has been there since Lisp and ML, and Haskell in the 90s. Why are we always 20 years late?

Graham: Not 20 years, because Alan Kay had object oriented programming from Simula, which was 1963. There was a tape loader for an operating system on an old mainframe computer where the first block on the tape told you the locations on the tape of the other operations, by name. If you were trying to load an employee, you read that table, looked up the load-employee name, looked up its location on the tape, went to that location, spooled in the code, and ran it. That tape system, at the end of the 50s, had an object oriented dispatch system as the table of contents on its tape drive. You need the marketing and the promotion to be there; the idea that if someone has a good idea then the industry will just pick it up is nonsense. Kent Beck did not invent TDD; Kent Beck rediscovered TDD and then wrote a book about it. When Smalltalk was popular, there were many Smalltalk programmers. When Smalltalk stopped being popular, they decided that they still liked having jobs, so they went to other environments (Java and Ruby): Hirstikova, Martin Fowler, Kent Beck, Bob Martin. KOKO as well. Marcus. I have been using Objective-C for 10 years; Wil Shipley said, in 1989, "I saw this Objective-C and decided everybody should be using it." These are the people that have defined the fashion of the iOS programming experience from the beginning. Same with TDD; same with object oriented programming. The functional programming people went, "I can make more money out in the real world; I want a job at Microsoft bringing monads to Visual Basic", and they go to Microsoft and create LINQ. Or they go, "I want to prove that this can work at scale", and Simon Marlow goes and works at Facebook and does spam filtering in Haskell. That is why the functional programming concepts are becoming popular now: because the intersection of the people who know about them and the wider community has finally become a reality.

About the content

This talk was delivered live in October 2015 at #Pragma Conference. The video was transcribed by Realm and is published here with the permission of the conference organizers.

Graham Lee

Graham Lee is the author of Professional Cocoa Application Security, the first book on application security for Mac and iOS programmers; Test-driven iOS Development (Developer’s Library), the first book on TDD and unit testing for Mac and iOS programmers; and APPropriate Behaviour, a book on all the things programmers must do that aren’t programming. But he’s not just an author and blogger. He writes software primarily in the object-oriented mode, but has been seen wearing out the parenthesis keys with LISP too. He also teaches other people how to do the same.
