
Interacting with Your App Through the Command Line

The Android command line is a very powerful tool. In this Mobilization 2016 talk, Roman Mazur briefly reviews which Android shell commands can be useful for app development, testing, and automation, concentrating on two approaches: the first is based on Android framework tools, and the second is the core of Facebook's Stetho tool.


Introduction (00:00)

There are two sets of instruments that Android developers use almost every day. You probably start with your IDE, and after writing some code, you switch to the command line to run the application; somehow your APK is uploaded to the device and it starts.

If you go a bit deeper into what is happening, you’ll discover that there is a command called adb, which stands for Android Debug Bridge. You can actually get a list of connected devices with the simple adb devices command. And then you might start thinking that you probably can do something useful and automate a lot of stuff.

One day, I thought: I have a set of automated tests for our app. I would like to have a machine that I can connect a new device to and have all the tests run on it automatically. To do that, you have to detect the event of a device being plugged in, identify which device it is, connect to it, and start your tests.

You start solving it with the same adb devices command and thinking about how to react to USB events. But when you open Android Studio and its run interface, it displays the set of devices connected to your machine and updates this list in real time. You doubt that it actually invokes adb devices every five seconds, and you start thinking that these people know some magic that you do not understand.

We use these tools because they work for us (they work perfectly, they solve our tasks). But sometimes you want to know more. You take a look at what other people do, and you come across a tool developed by Facebook called Stetho.

Stetho (02:45)

Stetho makes it possible to integrate your application with the Chrome developer tools. Frontend web engineers love these tools: they use them to inspect the requests that go from the browser to the back end and to see the view hierarchy (the HTML structure, in their terms), and they just love it.


Facebook made a tool that integrates your application with these dev tools. Open a Chrome window, go to the chrome://inspect page, and you'll see your device in the list. If your application includes Stetho as a library, your app will show up there too. Click on Inspect and you'll see the standard dev tools that web engineers use, but working with your application.

Let's take a look at our app. You can see how the layout is structured, and the views are highlighted as you browse the hierarchy. You get the same ability to intercept network requests and see their details. You can even script some interaction with your app using JavaScript: Stetho lets you embed a JavaScript interpreter into your application, so you can type JavaScript in the console and it will be executed on your device.

For basic integration, you only need one line of code. So Google knows some magic, and Facebook knows some magic. How do you get this knowledge?
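As a rough sketch of that one line (Stetho.initializeWithDefaults is the documented entry point; the Application subclass name here is just an example):

```java
import android.app.Application;
import com.facebook.stetho.Stetho;

// Requires the com.facebook.stetho:stetho dependency.
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // The single line that enables the default Stetho integrations.
        Stetho.initializeWithDefaults(this);
    }
}
```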

Let's look at Stetho. They not only provide the integration with the development tools, they also provide a script called "dumpapp". Dumpapp allows you to talk to your application from the command line. It is organized as a set of plug-ins that you can extend from your application, providing your application-specific logic. By default, you get a set of plug-ins that you can view with the list command: you can browse all the files present in your application sandbox, modify your shared preferences, or print the shared preferences and see what is stored there.
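A session might look roughly like this (a sketch, assuming the dumpapp script from the Stetho repository is on your PATH and the default plug-ins are enabled):

```sh
./dumpapp -l            # list the available plug-ins
./dumpapp files ls      # browse files in the application sandbox
./dumpapp prefs print   # print the shared preferences
```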

There is a file called stetho_open.py. It opens a TCP connection to port 5037 on localhost. At this point you realize: "some server is running on my machine; let's prove it." Take a look at what is running on the machine now that this port is open: there is a server listening for incoming connections on this port. Android Studio connects to this port, and Google Chrome connects to this port. This is how they all do this magic.
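One way to see this for yourself (lsof is just one of several tools that can show the listener):

```sh
# Show which process listens on the ADB server port (5037 by default).
lsof -iTCP:5037 -sTCP:LISTEN
# The listener is the adb server, which is started on demand by:
adb start-server
```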

ADB Server and ADB Daemon (08:56)

Next I looked at the Android source code. (Fortunately, this part of the Android internals is documented much better than the rest.) There is a special protocol for ADB. The debug bridge is not called a "bridge" just for the sake of the name; it actually is a bridge.

On our machine, we have the ADB server listening on that port. On the device, we have what is called the ADB daemon. If you go to the device via adb shell, you'll be able to see it in the list of processes: there is an adbd process, which runs as the shell user on the device.
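You can check for the daemon like this (on older Android versions plain ps may be enough):

```sh
adb shell ps -A | grep adbd
```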

This daemon provides a set of services that the server can connect to. Examples of these services include invoking shell commands or connecting to a TCP socket. Communication works as follows: your client (adb itself, dumpapp in the case of Facebook Stetho, or Android Studio) talks to the ADB server, and since the ADB server can be connected to multiple devices, the client first says which specific device it wants to talk to. Once the device is selected, the client sends another command saying which service on that device it wants to communicate with.
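At the wire level this is a simple, length-prefixed text protocol over TCP. A rough sketch (the request names come from the ADB protocol documentation; the socket name at the end is illustrative):

```sh
# Each request to the ADB server is prefixed with its length as four hex digits.
# "host:devices" is 12 characters long, hence the 000c prefix.
# (Depending on your nc variant, you may need a flag to keep it open for the reply.)
printf '000chost:devices' | nc localhost 5037

# A typical client session then uses requests such as:
#   host:track-devices            stream device-list changes (how IDEs update the list in real time)
#   host:transport:<serial>       select a specific device
#   shell:ls /sdcard              run a command via the daemon's shell service
#   localabstract:<socket-name>   connect to an app's local abstract socket, as Stetho's dumpapp does
```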

In Stetho's case, the service requested is a connection to a local socket, which is opened by your own application when you embed the library. Once your process has started, the socket is up, and the script can establish a connection between your machine and your application.

How does the client know which socket it should connect to? It has to discover the socket and identify it by name. This is solved easily: you can go to the device, list all the open local sockets (cat /proc/net/unix), and search for "remote" (grep remote).
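In other words, from your machine:

```sh
adb shell cat /proc/net/unix | grep remote
```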

You will see that there are sockets with special names. Everything that ends with devtools_remote is treated by the Chrome tools as a server they can connect to. Chrome and the Stetho dumpapp client enumerate these open sockets, figure out what they can connect to, and start communication.

The Chrome dev tools start an HTTP connection and use WebSockets to get the data from the application, and then render the view hierarchy, show intercepted network requests, and so on. Dumpapp looks at sockets with a stetho prefix and sends its own commands to communicate with the plug-ins that you define in your application. Now we understand the magic.

Other use case (13:12)

We can think about another use case: if you combine this with a recent Android API that allows you to obtain a media projection, render it to a virtual display, then encode it and transmit it over this ADB connection, you get a solution like the one I'm currently using to stream the device picture for you, called Vysor (I'm not Vysor's author, but having this knowledge, I can guess that's how it works). What comes next, though, is different from establishing an actual socket connection and having two-way communication; it's more trivial and relies purely on ADB shell's abilities.

ADB shell (13:47)

When we use adb shell, the picture is the same: we have adb as a client which talks to the ADB server, which establishes a connection to the ADB daemon and uses its shell service to invoke the commands. Because the ADB daemon runs as the shell user, we always get that shell user's permissions when we type adb shell. We can do more than a regular application can, because every application on Android runs as a specific user and has a restricted set of permissions.

The shell user has a wider set of permissions. For instance, the shell user can inject input events into other applications. That is not something you can do from a regular application: your app can't inject events into Google Inbox, but the shell user can.

We can use ADB shell to communicate with the device and with the application. Let’s start with a simple example of how we can turn the screen on.

If we use the input command, which is available in the Android shell, we can send key code 4 to simulate a back button press; the device reacts and switches the screen on. We can inject key code 3 to navigate to the home screen.
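Concretely, the commands look like this:

```sh
adb shell input keyevent 4   # KEYCODE_BACK (here it also wakes the screen)
adb shell input keyevent 3   # KEYCODE_HOME
```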

dumpsys activity (16:40)

We can use a command called dumpsys to communicate with the different services that run in the Android system. These are the same services you talk to from your code: when you use the AlarmManager class, you talk to the alarm service; when you use Context.startActivity, you talk to the activity service. We can do similar things using dumpsys.

When we call dumpsys activity to talk to the activity service, we can pass -h and get a brief picture of the available commands. The most useful one for me is dumpsys activity top, which provides information about the activity on top of the task stack.
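For example:

```sh
adb shell dumpsys activity -h     # list available sub-commands
adb shell dumpsys activity top    # dump the activity on top of the task stack
```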

Let's take a closer look. At the beginning, we see the package name of this activity; this is the launcher. Note that its package is not actually called launcher, it's the Google quick search box (com.google.android.googlequicksearchbox). You can also see the view hierarchy of this activity. For the launcher it's impressive, rather long, and you will see a lot of additional debugging information.

You see the heading "debug logs" (this part is not standard). If you dump another activity, you will not see this information; it is explicitly written by the launcher activity when we talk to it and request the dump.

For instance, let's start the Inbox activity with the am command (the activity manager client) and then request its dump. After the view hierarchy, you will not see anything like that. This gives us a clue that we can customize how our activity processes this dump request. And this is true.
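On the command line the sequence looks roughly like this (with a placeholder component name rather than Inbox's real one):

```sh
adb shell am start -n com.example.app/.MainActivity   # hypothetical component name
adb shell dumpsys activity top
```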

Let's start my own custom activity. The am command supports starting activities and services and sending broadcasts to broadcast receivers. If we look at my activity and request its dump, you will see that it prints the possible commands that we can pass to it. The activity can actually react to what we type.

The code that implements this is simple. Almost every component in your application (an activity, a service, or a content provider) has a method called dump which can be overridden. This is the method the activity service calls when it requests a dump. Among the parameters of this method, you get a writer for the output. If no arguments are passed, my implementation displays some help text; if there are arguments, it tries to interpret them.
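A minimal sketch of such an override inside an Activity (the dump signature comes from the Android framework; the "state" command and the currentState field are made-up examples):

```java
import java.io.FileDescriptor;
import java.io.PrintWriter;

// Inside your Activity subclass:
@Override
public void dump(String prefix, FileDescriptor fd, PrintWriter writer, String[] args) {
    super.dump(prefix, fd, writer, args);
    if (args == null || args.length == 0) {
        // No arguments: print a short help text.
        writer.println(prefix + "Possible commands: state");
        return;
    }
    if ("state".equals(args[0])) {
        // currentState is a hypothetical field holding some internal state of this screen.
        writer.println(prefix + "current state: " + currentState);
    } else {
        writer.println(prefix + "Unknown command: " + args[0]);
    }
}
```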

We used this in practice with a blurring algorithm that dynamically generated the background of our application. We wanted to try it on different pictures, so we implemented a small piece of code that could download any picture by URL; we submitted the URL via these commands and saw in real time what the image would look like.

Another use case would be to have real-time conversations with a designer and try things out on the spot. The same thing can be done with a service. A service also has a dump method, and we can use a variation of the am command line to talk to it: I can type startservice, specify the name of my service, and then request its dump in a similar way.

My service supports a custom action called "state", which toggles its state to the opposite value. This can be useful when you have a service that works on a background task and you just want, at an arbitrary moment, to see what state it is in (e.g., dumping your queue state and analyzing what is happening).
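On the command line this looks roughly as follows (hypothetical component name; anything after the component is passed as arguments to the service's dump method):

```sh
adb shell am startservice -n com.example.app/.MyService
adb shell dumpsys activity service com.example.app/.MyService          # plain dump (help text)
adb shell dumpsys activity service com.example.app/.MyService state    # invoke the custom "state" command
```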

Communicating with your application is not that hard. Now you know what happens behind the scenes. I hope this gives you some inspiration for better development tools.

References (25:30)

ADB sources. These link to the ADB documentation, which you can analyze to see how to implement this kind of connection to your app.

Stetho tool

"Embedded Android" book. I highly recommend the Embedded Android book, which is not mine, but I enjoyed reading it. It has information about dumpsys commands and other things that happen in the Android internals, and how you can view them. Thank you again.

Q & A (26:00)

Q: When you have access to network logging with Stetho, does it bypass SSL and any encryption, or do you still have to install a custom certificate like in Charles Proxy?

Roman: It doesn't work like Charles Proxy or anything like that. The integration is done separately for each way you perform your network operations: there is a custom integration for URLConnection and a custom integration for OkHttp. Basically it works by taking the input that you pass to the framework and rendering it in the view you see in the Chrome dev tools. It doesn't require any additional steps to decrypt anything or actually work with the traffic; it's just the input you provide to the libraries or the framework, rendered in the tools. The same goes for the output, and really anything your application has access to.

Q: You showed the JavaScript console and that looked really nice, but then there was no demo. I want to see what you can do with that!

Roman: Yes, because I wanted to concentrate on a different thing. You can import some variables, for instance the application context. If you want to work with an activity it becomes a bit more complex. You usually set up all this data in your Application class, which makes it a bit difficult to deal with activities and other components that have a lifecycle different from your process. But with some tricks you can achieve that if you need it. I personally didn't.

Q: We can inject events like clicking at x and y coordinates and working with the buttons and the keys, yes? But swiping, for example, or pinching on the screen takes many actions on the console. On Stack Overflow you can find an easy way to define a swipe by coordinates, but otherwise the only solution is to record your finger movements… do you know any libraries that make this easier?

Roman: No, unfortunately I do not. I remember that some time ago there was an application called "gesture detector", and it can probably help. But I haven't done it myself, so I can't help.


About the content

This talk was delivered live in October 2016 at Mobilization. The video was transcribed by Realm and is published here with the permission of the conference organizers.

Roman Mazur

Roman has been working with Android for more than 6 years. You can find him delivering fixes and new features to plenty of Android libraries and frameworks, including the support library, Robolectric, Madge, Spoon, Retrofit, and Helium. At Stanfy he is working on a customized Android OS for in-room tablets installed at hotels by KEYPR.
