Have you ever wondered why some interfaces are more “intuitive” than others? What makes one UI resonate with people while another doesn’t? This talk is meant to shed a bit more light on this mystery. In some ways the human mind is incredibly adaptable while in other ways it seems to be stuck in the Stone Age. This dichotomy presents interesting obstacles and opportunities for those of us designing and building digital experiences for humans.
My name is Bojana. This is BeardyMan, the mascot of Typeform, where I am the lead UX designer.
I want to talk about designing for the human brain. We design for humans, and therefore we design for the brain. I'll talk about some of the brain's core limitations, as well as the traits we can benefit from or exploit. You can think of it as design hacking. When we understand the why - why something works the way it does - we're in a better position to create applications (or whatever it is we're building or designing) that are understandable in that context.
What is Design? (01:07)
There is much disagreement.
Design is a plan or drawing created in order to build something: for example, architectural plans, business processes, circuit diagrams, sewing patterns, as well as the comps that you work with on a daily basis. Designing includes considering the aesthetic, the functional, the economic, and the socio-political, as well as the design context.
These considerations matter when we're designing or building products, and they may involve considerable research. They may involve interaction and adjustment, and then redesign or iteration. User experience design is another level on top of this, focused on making things more usable, friendlier for the customer, and making people feel good when they use something. You're working on the experience.
We live in a world of interfaces (02:59)
We’ve been making and designing things for other humans for a long time (e.g., architecture, clothing). This is simply the newest evolution of design - but now we live in a world of interfaces.
Think of the hand: dense tissue, tendons, bones, and structures, all obscured by the skin. Using it seems so easy, and yet we’re able to do complex things with it. The skin is the interface to everything that sits behind it.
If you look at buildings that we live in, walls are full of wiring and systems that make the building run in a certain way. The switches that we see on the walls are interfaces. UIs are the next iteration of this abstraction of complexity, in order to make communication and use easier (and make our lives easier). It saves us time.
#1 Purpose of UI: Abstraction of Complexity (04:00)
I like to think of the number one purpose of UI as the abstraction of complexity. It’s about more than making things pretty on the surface; it’s how the system itself is put together.
As we’re interested in designing for humans today, and we’re talking about how to design apps that people use on their phones, I’d like to talk about the human brain.
The Human Brain (04:19)
I am going to generalize. If there are any neuroscientists in the audience, please excuse the oversimplification and understand that it is done in the interest of better design.
For today’s purposes, I like to think about the brain as having three parts, each evolved on top of each other (like layers of an onion):
- The brain stem is the part of the brain that regulates our core functions (e.g., breathing, digestion, safety), as well as procreation, and it has been around for 300 million years. These are the types of animals that have this type of system (see slides).
- The next level up, we have the middle brain: our emotions - fear, anxiety, joy, happiness.
- The top part of the brain is the newest part; it evolved around 200 million years ago. It includes the prefrontal cortex, with the neocortex as its top layer. Imagine it structured on top of the others. In the last two to four million years, this part of the brain has tripled in size, and in the last 30,000 years it has started to shrink again (and we are not quite sure why).
The point is that changes in the brain are counted in thousands and millions of years. This is where we control logic, conversation, written language, and communication with other people. These are the types of animals (see slides) that have a prefrontal cortex; most mammals have one, including elephants, dolphins, and monkeys, but in humans it is the most developed.
We benefit enormously as builders and designers of software from understanding how the brain works, and from remembering that the older parts are still there. The prefrontal cortex is the part that understands language and written communication; the older parts of the brain “do not understand” it.
If you’ve ever gone skydiving, you know how this feels: you’re standing on the edge, telling yourself you’re going to jump, but there’s a part of you that just does not want to go. That tension exists because the older parts of the brain do not understand that you’re safe. You still have the fear, you still have the anxiety, and it’s never going to go away.
When we design, it’s important to understand these parts and how we’re communicating with each part of the brain.
How is the human mind static? (07:53)
There are characteristics of the brain that are going to be static. They’re never going to change, or they’re not going to change anytime soon enough for it to matter.
How we see now is how we will always see. This is important when we are building applications: designing with the way vision works in mind will help us communicate better.
Vision is optimized for contrast, not for color. You can see here (see slides): these orange dots are exactly the same color, but because the backgrounds are different, they look like different shades. What we see well is contrast, not color.
Fun fact: by the time everybody in this room is 60, your eyes will require three times more light entering through the cornea to see the same thing you saw at 20 years old.
As we design, we must remember that plenty of our users are older. We tend to design as if they can all see as well as 20-year-olds.
Peripheral Vision (09:16)
Our peripheral vision is bad. We see well straight ahead, but poorly at the sides. If you’ve ever wondered why flashing banners and icons at the edge of your vision are so annoying: peripheral vision, although bad at detail, is good at detecting motion, because motion was once the most important thing to detect.
When you have distractions at the side of a screen, people habituate to them and ignore them; even if you display something important in the periphery, they won’t notice it. Or they’ll abandon the page: a certain percentage of the population can’t habituate to it at all, and for them it will continue to be annoying.
Designing for How We See (10:10)
It is important to optimize text and background for contrast, because that affects readability. Any time you move away from maximum contrast, readability suffers. I don’t care if it is super dark gray or very dark gray (like in The LEGO Movie for Batman).
Colors are also difficult to see, especially in the periphery. Anything thin - thin colored text, small objects, colored icons without the maximum possible contrast - will be affected by this.
If you need to display something in the periphery, use peripheral motion sparingly, and only when you need it - like the color red, which means “no, don’t go, stop.”
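The contrast advice above can be made concrete. As a sketch (not part of the talk), here is the WCAG 2.1 contrast-ratio formula in TypeScript; the hex colors are illustrative, and the 4.5:1 threshold is WCAG’s AA minimum for normal-size text:

```typescript
// Relative luminance of an sRGB color given as "#rrggbb" (WCAG 2.1 formula).
function luminance(hex: string): number {
  const [r, g, b] = [0, 2, 4].map((i) => {
    const c = parseInt(hex.slice(i + 1, i + 3), 16) / 255;
    // Linearize the gamma-encoded channel.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible contrast, 21:1.
console.log(contrastRatio("#000000", "#ffffff").toFixed(1)); // "21.0"
// "Super dark gray" on "very dark gray" falls far below the 4.5:1 AA minimum.
console.log(contrastRatio("#222222", "#111111") < 4.5); // true
```

A check like this makes “get away from the ultimate contrast” measurable: any text/background pair scoring below 4.5:1 is likely to hurt readability, especially for older eyes.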
Memory is unreliable.
When we remember something, the brain pulls the pieces out and puts them together like a puzzle - because memory lives in many different parts of the brain - and then breaks the memory back down and saves it again. Every time you recall something, it is rewritten, and it is affected by the context in which you’re recalling it. If you feel really good, or really bad, that will affect the memory.
Recall is hard (think about how long it took to memorize multiplication tables as a kid). Say you’ve met a person before: you remember their face, who they are, the context, but you can’t remember their name. Then they say it, and you think, “I knew that. How did I not know that?” It’s because recall is hard.
We are made to recognize things; recognition is easier. That’s why we like to take photos, make scrapbooks, and collect mementos: recognition makes remembering easier.
Designing for How We Remember (12:25)
If we’re creating an app that lets users read something, we should always bring them back to the place where they left off. These are small things, but they make a big difference. Visually distinguish “done” and “not done” states. Search engines do this all the time, but we tend to forget about it. These are old, tried-and-true techniques that work.
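The “bring them back where they left off” technique is cheap to implement. A minimal sketch (not from the talk; the function names, key format, and `docId` are illustrative), using a small storage interface so it works with the browser’s `localStorage` or any equivalent:

```typescript
// Minimal key-value store interface; the browser's Storage (localStorage)
// already satisfies it, and tests can pass in an in-memory fake.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Persist the reader's position (e.g., scroll offset) for one document.
function savePosition(store: KVStore, docId: string, offset: number): void {
  store.setItem(`read-pos:${docId}`, String(offset));
}

// Restore it on the next visit; fall back to the top when nothing was saved.
function restorePosition(store: KVStore, docId: string): number {
  return Number(store.getItem(`read-pos:${docId}`) ?? 0);
}

// In a real page you would wire these to scroll events, e.g.:
//   window.addEventListener("scroll", () =>
//     savePosition(localStorage, docId, window.scrollY));
//   window.scrollTo(0, restorePosition(localStorage, docId));
```

The same key-per-item idea covers the “done / not done” advice: persist a flag per item and render visited items differently, so users recognize where they are instead of having to recall it.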
The brain is also goal focused. When we have a certain goal in mind, we tend to ignore everything in the periphery, or anything that isn’t relevant to the goal. One phenomenon that really illustrates this is change blindness. Looking at these two pictures (see slides), can you tell me what the difference between them is? There is one, and it’s not easy to find. It takes a long time, because we’re bad at this. A person focused on a goal will think only about that specific end goal.
Designing for How We Think (13:31)
Make it easy for the user to understand the task; anything irrelevant to the goal will simply be ignored. If you have ever used an app that asks you for something before it was necessary - for example, apps that require users to create an account before checking out or finishing the payment process - this is that concept at work.
How The Mind Is Plastic (14:00)
How is the mind changeable? The brain is a pattern recognition machine. For example, in these pictures (see slides) you can’t help but notice faces (we are particularly wired to recognize faces).
One thing to consider when designing with patterns is which patterns are recognizable and which aren’t. Toggle switches, for example, exist in the physical world, and we use them all the time (e.g., light switches).
However, something like the hamburger menu (and there is a huge discussion about this) is not as recognizable. Over time it becomes recognizable; people habituate to understanding what the hamburger menu is.
It’s important to understand this difference and use patterns accordingly. You can use recognizable patterns as metaphors, for easy recognition.
Designing With Patterns (14:21)
Take a pattern that exists in the physical world, and apply it. You can also evolve a button or an icon over time: if you have a large user base, change things slowly, and people will habituate to the change.
You can also create some depth in the visual language. I know flat design is the new thing, but we are tactile beings. We have evolved to experience the world in a three-dimensional way, and it matters to us. We can see and read a screen better when it has some depth. A button needs to look like it should be pressed (if you want people to understand it better).
Mirror Neurons (16:06)
There has been interesting debate about these.
When we see somebody do something, our brain fires in the same patterns as the brain of the person doing it. That’s how we learn: through mimicry. Seeing somebody do something, or seeing an example, is probably the best way to learn. How can we design for learning? Show rather than tell, and show with examples.
Start with the context. When a customer or user enters an app you’ve designed, what do they need to know about the overall structure first? Once they see that, the rest will make more sense. If you start with the particulars, there’s no context in place for people to remember things, and it doesn’t mean anything to them. Give the relevant info at the relevant time.
Designing for How We Learn (16:35)
Learning through experience and repetition is the best way we learn. Humans interpret the world in a certain way, and certain things about that interpretation are not going to change.
It’s going to be thousands of years before our vision improves, thousands of years before certain parts of the brain evolve. Maybe we’re less responsive to peripheral movement now, but we are going to evolve our ability to recognize patterns.
Imagine a world of UIs designed for how we think and see best (17:08)
My hope for this talk is to spark your curiosity: to get you thinking about humans as a system, and about the hardware we have as something we must design for. It’s not just about the top layer; it’s not entirely about the visual aesthetic. There’s more to design. If you’re curious about these things and learn more about the human brain and the way it functions, you can do a lot.
Anyone who touches a product is designing it. The decisions that you are making will impact how people perceive what you have built. We can design for those strengths and weaknesses if we know what they are.
About the content
This talk was delivered live in September 2016 at try! Swift NYC. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.