Serverless in an Offline-First World

Development for mobile devices can become extraordinarily complex: you have to combat the high latency, unstable connectivity, and spontaneous application restarts that come with the platform. In this talk from Serverlessconf, Alexander Stigsen shares how Realm, and treating objects as the API, simplifies the infrastructure and replaces previously brittle API calls with something resilient to mobile’s failure modes.


Introduction (0:00)

I’m Alexander Stigsen, CEO of Realm. Before this, I worked for many years as a mobile engineer at Nokia on making data work effectively on mobile devices, which gave me far too much experience with mobile. I’ve been working on mobile since before the iPhone, on small feature phones with small displays. It’s crazy to think about how much mobile has changed.

I’m going to speak on the real serverless – as in “there is no server” – which isn’t entirely true, but it’s an interesting context.

What is Realm (1:30)

So, what is Realm? Realm started as a mobile database: a database that ran only on the mobile device, something you used within your app as a replacement for SQLite, Core Data, and so on. We wanted to build a real mobile object database that worked. That has become hugely popular, and a lot of people are using it now.

Just recently we launched a backend to mirror that. It allows you to synchronize your data transparently, sharing it between devices. Most of the platform and database is open source and available for download on GitHub. It’s very popular, and almost every big company out there is using Realm today to some extent within their apps. This gives us perspective on how people are building mobile apps today, what kinds of problems they run into, and how they want their apps and servers to interact. With that in mind, we’ve discovered how great it is to be in a serverless world.

How does serverless help? (3:14)

Serverless empowers the individual developer. We see this especially in the mobile space. When you build a mobile app and you want a new feature, you have to go to your backend team and tell them, ‘I need this new endpoint for my REST API’, then wait for them to schedule and implement it before you can continue, which slows down the whole process.

If, as an individual developer, you can build things yourself without having to interact with the backend team, just using the same models and the same languages, and get the backend to work the way you want, you can build apps much faster.

The oxymoron of serverless is that there is still a server. From a developer’s perspective, though, it’s not something you have to think of as a server, but more as a service: something you can just update and make react to your changes in any way you want. The cool thing is that it gives you all these different services you can integrate with; all the cool things that other people have built, you can just plug into.

Why Mobile (4:59)

If you look, everything is moving to mobile. These are recent statistics of how much time people spend online and how it’s developing. The time people spend in browsers stays pretty stable, with not much growth, but the large majority of people’s time is spent in mobile apps – not in the browser on mobile, but actually in apps. Which means that if you want to capture people’s attention, you really have to be in a mobile app. But building mobile apps is very different from the web.

Limitations of Mobile (5:59)

First of all, mobile networks have much higher latency than local connections. If you actually measured it, you’d be really surprised at how high the latency is: every single roundtrip can take up to half a second, even on a 4G network. That really affects how you build your APIs, because if you have long sagas of multiple REST requests before something happens, everything can slow to a crawl on a mobile network.

This is true even on good mobile networks, because most mobile networks have great bandwidth. Streaming data works great, but the moment you do a lot of back-and-forth, like a long series of REST calls, things slow to a crawl.

The defining feature of mobile is that you risk losing connectivity at any time. Anybody who’s been on a train, on a plane, or just driving outside the city knows connectivity can simply disappear, or even worse, go up and down. Sometimes it’s fast, sometimes it’s slow, sometimes it goes on and off intermittently, sometimes it’s perfect, and sometimes it’s just totally gone. You can’t count on connectivity when you have a mobile device; that’s just the nature of the beast. Sometimes you’re online, sometimes you’re not. In many ways, you can think of that as just another aspect of latency: sometimes it’s going to take a really long time to get a reply, and sometimes it’s going to be quick. You can’t predict which it’s going to be.

The last thing that really surprises many people when they start developing apps is that your app can be shut down and restarted at any time. Especially on Android, if there’s high memory pressure – and especially in Asia, where many devices have very little memory – the system will just shut down the app and restart it. The user swipes away from the app and it gets shut down; they swipe back and it’s restarted, transparently. But what happens if you’re in the middle of a REST request when that happens? That gets troublesome.

What you see is that protocols like REST get really brittle and cumbersome in the mobile world. In many companies it works fine to begin with: you build an app and everything works perfectly, because when you’re testing in your own lab you have perfect connectivity. The app is running, everything just works. But then you get out into the wild and start hitting the edge cases, and you get weird bug reports that you can never reproduce, because they’re all weird timing issues.

Example (9:09)

To give a simple example, imagine that you want to build a retail app and add a buy button. What happens if you don’t have any connectivity when someone taps buy? You still have to complete the purchase at some point, so maybe you queue up the request and say, ‘I’ll do it later’, so you build a queue. What if the app gets shut down? Now you have to persist that queue as well, because otherwise you lose it the moment the app is killed. More code. Then you wait and watch for connectivity to come back. Connectivity returns and you send the request. But what if you lose connectivity before you get a reply? Did I just buy something? So you have to go back to your backend team and add a token to the buy request, so you can later check whether the purchase actually went through. That means another request, another REST endpoint, where you can check whether the token was processed. And you need to store that token persistently on the device, so you know what you were in the middle of. It just escalates from there.
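To make that concrete, here is a rough sketch of what that client-side bookkeeping tends to look like. This is TypeScript with hypothetical storage and API helpers – stand-ins, not any real SDK – and the point is the shape of it: a persisted queue, an idempotency token, and a reconciliation check.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical helpers: a persisted key-value store and the two REST endpoints
// you end up needing (one to buy, one to ask whether a buy went through).
interface Storage {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

interface Api {
  buy(cartId: string, token: string): Promise<void>;   // POST /buy
  wasProcessed(token: string): Promise<boolean>;        // GET  /buy-status?token=...
}

interface PendingBuy {
  token: string; // idempotency token, generated on the device
  cartId: string;
}

const QUEUE_KEY = "pendingBuys";

// Queue the purchase durably first, so an app restart doesn't lose it.
async function requestBuy(storage: Storage, cartId: string): Promise<void> {
  const queue: PendingBuy[] = JSON.parse((await storage.get(QUEUE_KEY)) ?? "[]");
  queue.push({ token: randomUUID(), cartId });
  await storage.set(QUEUE_KEY, JSON.stringify(queue));
}

// Called whenever connectivity comes back, and on every app start, because the
// app may have been killed mid-request.
async function flushPendingBuys(storage: Storage, api: Api): Promise<void> {
  const queue: PendingBuy[] = JSON.parse((await storage.get(QUEUE_KEY)) ?? "[]");
  const remaining: PendingBuy[] = [];

  for (const pending of queue) {
    try {
      // We may have sent this before and lost the reply, so ask the server first.
      if (!(await api.wasProcessed(pending.token))) {
        await api.buy(pending.cartId, pending.token);
      }
    } catch {
      remaining.push(pending); // still offline, or a server error: keep it queued
    }
  }
  await storage.set(QUEUE_KEY, JSON.stringify(remaining));
}
```

And this still ignores ordering, partial failures during the flush, and keeping the UI in sync with all of it.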

How do I deal with all these situations? It’s just so brittle. It’s like finding your way down an iceberg of risks: at the tip it’s really simple, I just send a request, get a reply, and update the UI.

How hard can it be?

But look at all the things that can go wrong in the process when you’re in a situation where connectivity is switching on and off all the time. It just explodes, and most people simply ignore it and build the app. It’ll work 90% of the time, but when it doesn’t, you get all these annoying bug reports.

So, that’s the world we live in, especially once you have really complex flows: tons of requests, searches where you get tons of replies back, and requests that have to be coordinated in long sequences.

What can we do? (11:41)

In the mobile world, the big question, of course, is: what can we do? Is there a better way? Is there a way to accept the problem and say, okay, how do we find something that will actually work in a world where networking is just not guaranteed?

At Realm we’ve been working a lot on this, because we come from a different perspective: a database perspective, or even more, I would say, a data perspective. We are a database, but we’re very different from every other database out there. We’re really an object database, which is so close to the language that it’s almost hard to call it a database. It’s just objects. The fact that they can be persisted and created, and all the other things you can do with a database, is kind of a side benefit.

What is it we really want? We’re connecting to this server and we want to do something together with it. We want the two sides to be able to share some state, in a manner that’s independent of connectivity. We have all these mobile devices that have data on them and want to do stuff, and then we have the servers, and really, we just want to keep a synchronized state between all of them.

What we discovered is that most people use this as a sort of real-time sync. With database sync you can collaborate and share data, and that’s all really cool, but one thing we noticed is that it becomes so much more powerful when you use it to replace your APIs.

Objects as APIs (13:23)

So, think about the same problem we had before with the shopping cart, where I want to buy something. Instead of thinking, ‘I have all these REST APIs I have to connect to’, think: I have some shared data with my server; what if I just post an object to the server and let the server react to that?

So, in this example, on the mobile side I create a new object, a buy object, that says I want to buy something. The buy object links to my shopping cart object, with all the stuff I want to buy. And just like with the new serverless frameworks, AWS Lambda and Azure Functions, on the server you want to be able to react to things and be triggered by events as they happen.
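As a sketch of what the client side can look like, here it is using Realm’s JavaScript SDK. The schemas and the placeOrder helper are assumptions made up for this example (the demos in the talk used the native iOS and Android SDKs):

```typescript
import Realm from "realm";
import { randomUUID } from "node:crypto";

// Assumed schemas for this example: a shopping cart and a "Buy" request object.
const CartSchema: Realm.ObjectSchema = {
  name: "Cart",
  primaryKey: "id",
  properties: { id: "string", items: "string[]" },
};

const BuySchema: Realm.ObjectSchema = {
  name: "Buy",
  primaryKey: "id",
  properties: { id: "string", cart: "Cart", status: "string" },
};

// Creating the object is the whole "API call": no request, no JSON parsing,
// no retry logic. Sync delivers it to the server whenever connectivity allows.
function placeOrder(realm: Realm, cart: Realm.Object): void {
  realm.write(() => {
    realm.create("Buy", { id: randomUUID(), cart, status: "pending" });
  });
}
```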

Since this is synchronized in real time, the same object transparently pops up on the server. The moment it appears, you run some code on the server that reacts to the object and triggers an action. In this case, since we’re in a shopping cart, that action could be sending a request to Stripe to actually process the user’s credit card. While it does that, it can update the status to say ‘now processing’. Since this is magically synchronized, that change goes right back to the object on the mobile device, which is also reactive, so you can connect it directly to the UI.
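The server-side reaction can be sketched in the same style. Here I use Realm’s collection change listeners as a stand-in for the Realm Object Server’s actual event framework, and chargeCard is a hypothetical helper in place of the real Stripe call:

```typescript
import Realm from "realm";

// Hypothetical payment helper standing in for the real Stripe request.
async function chargeCard(cart: unknown): Promise<void> { /* ... */ }

function watchForOrders(realm: Realm): void {
  const pending = realm.objects("Buy").filtered('status == "pending"');

  // Fires whenever synced changes arrive, including Buy objects that were
  // created while the device was offline and only uploaded later.
  pending.addListener((orders, changes) => {
    // Snapshot the newly inserted orders before writing to them, since the
    // writes below remove them from this filtered collection.
    const newOrders = changes.insertions.map((i) => orders[i] as any);

    for (const order of newOrders) {
      // This status update syncs straight back to the device's copy of the object.
      realm.write(() => { order.status = "processing"; });

      chargeCard(order.cart)
        .then(() => realm.write(() => { order.status = "done"; }))
        .catch(() => realm.write(() => { order.status = "failed"; }));
    }
  });
}
```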

So, as a developer you don’t even have to update the UI; it’ll just show the content of that object, and show that we’re processing. And while this happens, you can have connectivity issues, but it doesn’t really make much of a difference: you just won’t see any updates on the mobile device for a while. The app could even get completely restarted while, on the server side, the status changes are made, Stripe returns its reply, and the order goes through. At some point you get connectivity again, the app restarts, and it all just synchronizes back, and you see the ‘done’ status in the app.

You’ve abstracted away all the networking: not just all the issues of going online, offline, and getting shut down, but also the ‘should I parse all the JSON?’ and ‘how do I handle the networking?’ It all just goes away.

How does this model change development? (16:08)

The developer only did two things: created an object and connected it to the UI. On the server side, all they did was register an event handler that says: when an object has been added, run this code, connect to Stripe, process the order, update the object, and return. That’s all. In that way, you’ve removed all of these problems.

You’ve put a proxy between the mobile device and all the troublesome backend APIs that were never designed for mobile, which simplifies the whole architecture and makes it really robust, hardened against all the issues you have in the mobile world.

It’s a really simple mental model, but it has fascinating effects. There are a lot of side benefits: since the server knows everything that’s on the client, it can optimize and never send the same thing twice. The buy object is a link to the order info; it can have a long list of items and a status on the order, and you can just keep updating the object, and the mobile device will always know the current status.

From a development perspective, the actual code the developer has to write is minimal. On iOS you just use KVO, which lets you observe the object for changes: every time this object changes, call this callback and update the UI. It’s extremely simple, and there’s no networking code on the client side, on the mobile device, at all.
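The JavaScript equivalent of that KVO pattern is a per-object change listener. A minimal sketch, assuming the Buy object from the earlier example and a hypothetical render function for the UI:

```typescript
import Realm from "realm";

// Re-render the order status whenever the object changes, including changes
// made on the server and synced back down (e.g. "processing" -> "done").
function bindOrderToUI(
  order: Realm.Object & { status: string },
  render: (status: string) => void // hypothetical UI update function
): void {
  render(order.status); // show the initial state immediately
  order.addListener(() => render(order.status));
}
```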

How can this be used? (21:53)

So, if you are building serverless services, I think this is something you should consider: when our users are mobile, how are they going to interact with this? How are we going to protect them, no matter what happens in the app or with the network connectivity? How do we ensure that the app still works, that it doesn’t break in all kinds of unpredictable ways? Because that’s a really bad experience for everybody.

Having shared state can be used for all kinds of things; the most obvious one is just sharing data. It can be used for collaboration, where you have live, shared data. You want apps to be live and engaging, where you can play with things and it feels instant and real-time.

But that’s just one thing; the API bridging is actually really important. The idea that you can access the whole world of APIs in a manner that’s safe and extremely easy for developers to work with is powerful.

We have the same thing with connectors, where you connect to existing databases. If you have a database as a service, you can connect to it, and what if you could just expose that as objects? That way you don’t need any calls back and forth. You want to be able to push data, and you want to be able to stream data back from the devices, but it’s all the same: what’s on one side is on the other.

So, basically, you take all of these seemingly separate things and generalize them into one single concept. That way, the developer only has to learn one thing: it’s all just objects. It really simplifies the whole stack.

Key Benefits (23:56)

So, what are the key benefits?

You abstract away the networking. That is really huge: networking basically doesn’t exist for you; you just work with objects. It also creates a unified data model where you have the same objects and the same reactivity – the ability to observe and react to changes in objects – both on the client and on the server. It’s one mental model of how you work with things, and you really have the same language on both sides.

That’s why Node.js is so popular: the fact that you can have just one language. The idea that you can have one data model and one way of interacting with it is really powerful. When you have objects, you can bind them directly into the UI, removing tons of complexity.

Summary (24:50)

So, to summarize, the first thing to learn is that mobile is really different from the web. You have to think about things in a very different way. The fact that you have high latency, connectivity that’s unstable, and apps that can get restarted at any time makes it a different world. You have to adapt to dealing with all these small problems.

It is dangerous to just build apps the way you used to. They will work to begin with. Then you start getting a lot of users, and you start getting into Asia, where there are all kinds of devices with very little memory and processing power. How do you even start debugging that?

The whole idea of doing API calls is just brittle in that kind of environment. So you really want something better, and shared objects are one way. There might be other ways, but it’s definitely something you want to keep in mind.

We think the right solution is just to say that objects are the new API. It’s such a simple model: you just post objects and you can react to them, and you can do anything any REST API can do, anything something like Draft.js can do, and so on.

Questions (26:29)

Q: How does that scale on the server side? What kind of limitations are you running into?

Suddenly you have a lot of state, which is a problem. You have to track state for every single object, and that’s a lot, because you can have millions of devices and users with millions of endpoints out there. Our perspective on this is what we call endpoint virtualization: you want to think of it as having a virtual instance of every endpoint on your server. The idea is that this is extremely lightweight; objects take up hardly any space, a tiny fraction of the resources, so you just spin up a small data container, which we call a Realm. In our experience, single servers have been able to handle up to around 100,000 connections. Of course, if you have millions and millions of users, you’re going to have scaling issues, as with anything.

Q: If you’re offline, and people change the same object at the same time, how do you deal with the conflict when they meet?

That is actually the main issue we deal with; how to deal with conflicts was the very reason for Realm’s existence. The consequence of being offline, and even just of having high latency, is that there will be conflicts: even with perfect connectivity, if two changes are made to the same thing at the same time, they conflict purely because of latency. This also happens with a REST API, but in that case you’re just in trouble. Because we have an object model and a real database underneath, we know everything the user does: not just that they updated the object, but how they updated it and in what context. That means we can send extremely rich information to the server about what happened, and that allows us to handle merge conflicts.

Essentially, our data structure is one big CRDT (conflict-free replicated data type). What that means, in essence, is that all changes, when they conflict, get resolved in a deterministic manner. As a developer, you know it will end up with the same result on both sides. Some of the rules are very intuitive: if you have a list and two people append data to it at the same time, you know, because of the nature of the data structure, that you end up with one list where all the items appear in the order they were appended. They’ll interweave as they go, and the whole data structure is built like that.
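To make the list example concrete, here is a toy merge in TypeScript. This is not Realm’s actual algorithm, just an illustration of the property that matters: both sides apply the same deterministic rule, so after exchanging their changes they converge on the same interleaved list.

```typescript
interface Append {
  value: string;
  timestamp: number; // logical clock on the device that made the append
  deviceId: string;  // tie-breaker, so the ordering is total and deterministic
}

// Every device (and the server) merges with the same rule, so they all end up
// with an identical list regardless of the order the changes arrive in.
function mergeAppends(a: Append[], b: Append[]): string[] {
  return [...a, ...b]
    .sort((x, y) => x.timestamp - y.timestamp || x.deviceId.localeCompare(y.deviceId))
    .map((op) => op.value);
}

// Two offline devices append concurrently...
const deviceA: Append[] = [{ value: "milk", timestamp: 1, deviceId: "A" }];
const deviceB: Append[] = [{ value: "eggs", timestamp: 1, deviceId: "B" }];

// ...and both converge on ["milk", "eggs"], whichever side merges first.
mergeAppends(deviceA, deviceB);
mergeAppends(deviceB, deviceA);
```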

The key thing is that we make sure that, as a developer, you don’t have to worry too much about that. You have to understand the basic rules, but the moment you understand the consistency rules you can just let it go, and you never have to manually resolve conflicts.

Q: Does Realm support different scopes, like a per-user section, for object loading or sharing? Can you have different sharing rules for different objects?

Think of a Realm as a mini data container that you can spin up as many of as you want. You can literally have a million of them, and every one of them can have its own sharing rules. Basically, each Realm has an ACL, where you can say: this Realm is shared with these two users, this Realm is public for everybody, this Realm is just for this single user, this user only has read access to it, this user only has write access to it. In most of the apps we see, you have multiple different Realms open in the same app with different sharing rules.

So, for example, if you want to do a chat app, you would say every channel is just a Realm and it’s just shared with those people who are members of that channel.
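A sketch of that idea, based on the Realm Mobile Platform JavaScript API of that era; the server URL, schema, and user handling here are placeholders, and the per-Realm permissions would be granted separately on the server:

```typescript
import Realm from "realm";

const MessageSchema: Realm.ObjectSchema = {
  name: "Message",
  properties: { sender: "string", text: "string", sentAt: "date" },
};

// Each channel gets its own synced Realm: its own small data container,
// shared only with the members of that channel.
async function openChannel(user: Realm.Sync.User, channelId: string): Promise<Realm> {
  return Realm.open({
    schema: [MessageSchema],
    sync: {
      user,
      url: `realms://chat.example.com/shared/channel-${channelId}`,
    },
  });
}
```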

About the content

This talk was delivered live in October 2016 at Serverless London. The video was transcribed by Realm and is published here with the permission of the conference organizers.

Alexander Stigsen

Formerly a Nokia systems engineer, Alexander Stigsen is CEO & co-founder of Realm, a mobile database for iOS and Android, designed to help app developers build fast apps, faster. Realm is a YCombinator startup based in San Francisco with funding from Khosla Ventures, Scale VP, Andreessen Horowitz, Greylock, and more.
