
Creating Sensor-Aware & VR Apps with Processing for Android

This talk will give you an introduction to Processing for Android. It will cover the basics of the Processing language that allows you to effectively program interactive graphics in 2D and 3D, and will describe the application of these techniques to different types of Android devices: smartphones, tablets, wearables and smartwatches, as well as Cardboard-compatible devices in order to create VR experiences. Processing started in 2001 at the MIT Media Lab as a project to increase software literacy in the arts and design, and today it is used around the world as a teaching and production tool. An advantage of Processing for Android over more complex programming environments is that it allows users to focus on the interactions and visual output of their code rather than on the implementation details of the Android platform.


Intro (0:00)

I’m Andres Colubri, and I am super excited to be here. This is a project I’ve been working on for some time, and I really want to show you what we have been adding: a lot of new features for virtual reality, wearables, and many other things that I will show you today.

Processing (0:23)

Before getting into all these details, I want to tell you what Processing is. It’s often the first tool people use to do some kind of visualization or to create cool visuals with code. Today, Processing has been growing in many different directions. Hopefully, this will be exciting for everyone.

For those who don’t know, Processing is a language and development environment for artists and designers. I think this is a fine answer, but a little bit limited; it doesn’t give you enough detail to know what it really is. So I was thinking about how to define it in a little more detail, more technically.

One way you can think about it is that it’s really an API, an API for drawing. It allows you to get visual output on the screen really easily. It has been implemented in many different languages: although it was originally in Java, now there are JavaScript and Python versions. It also comes with a minimal editor.

I think that may have given you a more detailed view of what Processing is, but it’s not really that either. I think it’s better to say that Processing is really a community: a community of artists, developers, students, and scientists who try to merge art and technology. One of this community’s core goals is to bring in people from different backgrounds, from different places, with different interests. It’s really about inclusion and diversity. Having people with different interests in it, and making the community larger that way, is really one of the most important missions of the project and of this community.

History of Processing (3:39)

The way I want to frame it today is thinking of Processing as a sketchbook, a way to sketch out ideas quickly. That was one of the initial motivations of Processing: to sketch out ideas and iterate. I’ll give you a little history.

Processing is not a project that started just a little while ago; it has been in development since 2001. The project started at the MIT Media Lab, just across the river in Cambridge. It was created by Ben Fry and Casey Reas, and it was inspired by an earlier project called Design by Numbers by John Maeda.

When Ben and Casey started working on this project, they really wanted to use it for their own work as a prototyping tool and as a medium for teaching programming to artists and designers. This was the initial motivation for the project. It built ideas about sketching, prototyping, and iteration into how you use the programming environment. Very importantly, it also showed how to obtain visual output quickly. This project has been around for more than 15 years, so people have been doing lots of very cool stuff with it.


One user made a music video. I really like this music video because when I first got involved in this project, I started working on the OpenGL renderer, improving it and making it faster so people could do more complex graphics using Processing. Because Processing is based on Java, there were some problems around performance. When you see people doing this amazing stuff, it’s incredible. They are using the tool you developed, but doing things that you cannot really do yourself. It’s really amazing to see how the tool enables people to do things like that.

Processing, the tool itself and the language, is part of a larger initiative, a community. There is the Processing Foundation, which was started in 2012. The foundation runs a number of initiatives and fellowships, figuring out ways in which this open source development can be sustained over time. It’s a really important effort to keep all of this working in the future.

Processing for Android (11:15)

There is a version of Processing for Android, which is what I will be talking about today. There are a few other related projects; there is even a Ruby version of Processing. We have a ton of sister projects within this community.

What is Processing for Android? Essentially, it’s two things.

One is a library: you can think of it as a library that you can use to develop Android apps using the Processing API. The other is an extension of the Processing environment, so you write the code in the Processing Development Environment and, from there, run it either on the device or on the emulator.

What is really important about Processing for Android is the idea of making it easier to get started with development on Android. It’s more focused on how to do prototyping and how to experiment with ideas of interaction and visualization.

Given this aim, it’s also important to understand what Processing for Android is not. It doesn’t try to be a replacement for Android Studio; it’s more of a complement to it. It’s not a library for doing everything; it’s specialized for visual sketching. Processing is also not a library for creating UIs, although you can do that.

Demo (13:27)

With these things in mind, I want to give you a quick demo of how Processing works and how this idea of getting visual output quickly functions. I’m simply working in the Java mode; there are a number of modes: Java, JavaScript, Python, and Android. I can start working in the Java mode if I want to do something very simple.

This is the structure that all Processing programs have: a setup function and a draw function. In draw, we put the stuff we want to do, and here we simply draw ellipses.


	void setup() {
		size(800, 400);
	}

	void draw() {
		ellipse(mouseX, mouseY, 20, 20);
	}
  

Very quickly we have put together a very simple drawing application. One thing to notice: we don’t have to worry about setting up the context of the output. We simply define our setup function and our draw function, and there we go.

Now we can try the same code on Android; I just switch the mode. There are options to set the permissions of your app, and you can choose the type of output you want.

We are writing a regular app, a fragment-based app with an activity and everything, but all of these details are hidden from you. You just worry about getting your output right. Internally, it creates an Android project, puts in all the dependencies, and pushes the resulting debug package to the phone. The same thing I just ran on the desktop is running on the phone. I didn’t change anything in the code. That’s pretty cool.

I just wanted to give you a very quick intro into how Processing works in general, and also how these same ideas in Java translate into the Android mode.

More History (18:23)

Processing for Android itself is not new. The project started when Andy Rubin contacted Ben Fry asking him, “Hey, can we use Processing to make wallpapers for Android?” That was what started the Processing development for Android.

That is a little problematic, because some people now say that Processing is this language to create wallpapers, which Casey and Ben really hate.

There was some initial development, then the project languished a little bit, and then there was a renewed effort when new code was added in 2014. Through 2014 and 2015, there was a much more energized push to add all these new features for VR and Android Wear into Processing for Android, and that work is ongoing.

With this recent push in the development of Processing for Android, we are going beyond what we had until last year, which was essentially regular apps, support for sensors, and the option to export apps for distribution; that was version three of the mode, released last year. What we are adding now is live wallpapers.

Finally after seven years, we have the option to export and install sketches as a live wallpaper.

We have Android Wear, Google VR, and we are also working on the integration with Android Studio: you can do your prototyping with Processing for Android and then export the project so you can pick it up in Android Studio and continue adding other elements.

New Modes (20:52)

This is the new version that is in development right now, moving forward. I’ll go through some of these features and hopefully get you excited, and give you a sense of what you can do with it.

I’ll now go over how to install the mode. There is a mode installation UI in the Processing Development Environment (PDE). You say, “Okay, I want to add a mode,” and you get a contribution manager where you can select the Android mode. Once you install it, if you don’t have the SDK, it will ask you to install one; you can even install an SDK that is separate from the SDK you might use in Android Studio, to avoid conflicts there.

Demo, Part 2 (21:44)

I already showed you some very simple examples; this is another kind of hello-world example where, again, there is a basic setup function. We simply draw a rectangle over one half of the screen if we press that side, and over the other half if we press the other side.


	void setup() {
		fullScreen();
		noStroke();
		fill(0);
	}

	void draw() {
		background(204);
		if (mousePressed) {
			if (mouseX < width/2) {
				rect(0, 0, width/2, height);
			} else {
				rect(width/2, 0, width/2, height);
			}
		}
	}
  

What do we do next?

We can export our sketches as signed packages ready for distribution. The PDE will create the keystore for you and ask for the information that is needed for that.

Let’s get into a more involved example: how to create a live wallpaper with some more interesting visuals. Why this example? Why particle systems? In general, particle systems are pretty cool. If you want to do something visually interesting, you can define particles that interact in very interesting ways, and that’s what we will try to do.

We want to make a live wallpaper that is pleasing and moves naturally in the background. A particle system can give us a way to do that. As I mentioned, it’s also a good example of a more complex sketch.

The process starts by thinking visually about what I want to do. I want to do something that is like a fluid, or has a fluid nature. One can start looking at some inspirations, for example “The Starry Night” by Van Gogh, or at things people have done with non-photorealistic rendering. There are some very interesting resources online where people upload Processing sketches that you can run in the browser. You can learn a lot from looking at different examples.

I often do preliminary sketching on paper before actually starting to work with code. At the point when I started work on this example, I wasn’t sure how to go ahead; I wanted to get fluid motion that looks really natural and could be nice for a wallpaper.

To translate this idea into an actual result, Daniel Shiffman’s work was really helpful. I really want to mention him, because he’s a force behind the Processing community in terms of making all this knowledge available for teaching, showing how to use code to express your ideas. He has been working in the community for many years. In particular, among all the tutorials and books that he has online, I found something that could really work here. I’ll show you a little bit of that.

This uses the idea of autonomous agents. I want particles whose behavior is autonomous, so I don’t need to control all the details. By defining how the particles move with a certain velocity, following a certain field of velocities, I can get this flow. That is essentially what I want to do. I started playing with the idea of a field of velocities and how particles move in that field, and then started doing some sketches around that.
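
To make the idea concrete, here is a minimal sketch of the approach, not the actual wallpaper code: each particle stores a position, and every frame it is steered by a velocity field. I use a Perlin-noise field here as a stand-in for the touch- or image-driven flow described next.

	// Minimal flow-field sketch: particles steered by a velocity field.
	int numParticles = 500;
	PVector[] pos = new PVector[numParticles];

	void setup() {
		size(800, 400);
		background(0);
		stroke(255, 40);
		for (int i = 0; i < numParticles; i++) {
			pos[i] = new PVector(random(width), random(height));
		}
	}

	void draw() {
		for (int i = 0; i < numParticles; i++) {
			PVector p = pos[i];
			// Sample the field: noise() returns 0..1, mapped to an angle 0..TWO_PI
			float angle = TWO_PI * noise(p.x * 0.005, p.y * 0.005);
			float px = p.x, py = p.y;
			p.x += 2 * cos(angle);
			p.y += 2 * sin(angle);
			line(px, py, p.x, p.y);
			// Wrap around the edges so the flow never empties out
			if (p.x < 0) p.x = width; else if (p.x > width) p.x = 0;
			if (p.y < 0) p.y = height; else if (p.y > height) p.y = 0;
		}
	}

Because the frame is never cleared, the particle trails accumulate and produce the fluid-looking streaks.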

This is the first experiment, where the flow comes from the mouse if I’m testing on the desktop, or from the touch screen if I’m on the device, and the particles move along it. What I wanted, though, is a completely autonomous system, so one can take the velocity flow from an image. I grabbed a random image, a texture of Jupiter: if you calculate the intensity of each pixel, you can derive a velocity flow from it.

This is great, because then I can take photos from, say, the storage of the device and have this evolve autonomously. In other words, the Processing sketch can read the photos stored on the device and generate the particle system based on them. It’s brilliant.
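
The image-driven version works the same way; the only change is where the angle comes from. Here is a sketch of just that lookup step, with “jupiter.jpg” as a placeholder file name (any photo in the sketch’s data folder would do):

	// Deriving a flow direction from the brightness of an image
	PImage img;

	void setup() {
		size(800, 400);
		img = loadImage("jupiter.jpg");  // placeholder image name
		img.resize(width, height);
		img.loadPixels();
	}

	// Map the pixel brightness at (x, y) to a direction between 0 and TWO_PI
	float flowAngle(float x, float y) {
		int ix = constrain(int(x), 0, img.width - 1);
		int iy = constrain(int(y), 0, img.height - 1);
		float b = brightness(img.pixels[iy * img.width + ix]);  // 0..255
		return map(b, 0, 255, 0, TWO_PI);
	}

The particles from the previous sketch would then call flowAngle(p.x, p.y) instead of the noise() lookup.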

This is an example of something that is a little more involved, and it gives you a good idea of the kinds of things you can do. It’s not just a little drawing sketch; you can really do much more complicated things.

Watch Example (28:06)

I’m going to jump into watch faces. We can build on all of these ideas about measuring time now that we have smartwatches. The great thing is that we simply use the same API to write our sketches and create watch faces. This is a very simple example where we just use time to draw an arc.


	void setup() {
		fullScreen();
		strokeCap(ROUND);
		stroke(255);
		noFill();
	}

	void draw() {
		background(0);
		// Use a thinner stroke while the watch is in ambient mode
		if (wearAmbient()) strokeWeight(1);
		else strokeWeight(10);
		// Map the position within the current minute to an angle and draw the arc
		float angle = map(millis() % 60000, 0, 60000, 0, TWO_PI);
		arc(width/2, height/2, width/2, width/2, 0, angle);
	}
  

The structure of the sketch is the same as the ones I showed you before: there’s a setup, there’s a draw. Now we have some additional functions that are specific to wear, like the function that tells me whether the watch is in ambient mode or not. Essentially, it is the same as what I was doing before.

In addition, we can use body sensors. In 1911, to get your electrocardiogram measured, you had to put your limbs in buckets of saline water. Luckily, we’ve come a long way in measuring heart rate.

We can access the sensors on a smartwatch very easily. This is an example that just shows an ellipse following the rhythm of your heart rate.


	void draw() {
		background(0);
		translate(0, wearInsets().bottom/2);
		if (wearAmbient()) {
			// In ambient mode, just print the current BPM value
			fill(255);
			text(bpm + " bpm", 0, 0, width, height);
		} else {
			// Derive the pulse period (in milliseconds) from the BPM reading
			int period = 750;
			if (0 < bpm) period = 60000 / bpm;
			float x = (millis() % period) / 1000.0;
			float k = 1/(0.25 * (period/1000.0));
			// An impulse curve drives the size of the pulsing ellipse
			float a = impulse(k, x);
			float r = (0.75 + 0.15 * a) * width;
			translate(width/2, height/2);
			fill(247, 47, 47);
			ellipse(0, 0, r, r);
		}
	}

	float impulse(float k, float x) {
		float h = k * x;
		return h * exp(1.0 - h);
	}
  

It simply takes the beats per minute from the sensor and uses that to control the ellipse. In the code, you import the packages that you need to define a sensor, and there is also a function to request the permission to access the body sensors; in this case, I just need it for the heart rate.

Then you initialize the context: you get the context, add the sensor, attach a listener to it, and from there it’s just the draw.
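
For reference, the setup side of that sketch looks roughly like the following. This is an outline based on the Android mode’s sensor examples; the exact accessors, such as getContext() and the requestPermission() callback signature, may differ between versions of the mode, and the listener class name here is my own.

	// Outline of the sensor setup described above (Android mode; API details may vary)
	import android.content.Context;
	import android.hardware.Sensor;
	import android.hardware.SensorManager;
	import android.hardware.SensorEvent;
	import android.hardware.SensorEventListener;

	SensorManager manager;
	Sensor sensor;
	HeartRateListener listener;
	int bpm;

	void setup() {
		fullScreen();
		// Body sensors is a dangerous permission, so request it at runtime;
		// the named callback receives the result.
		requestPermission("android.permission.BODY_SENSORS", "initHeartRate");
	}

	void initHeartRate(boolean granted) {
		if (!granted) return;
		// Get the Android context, look up the heart rate sensor, attach a listener
		Context context = getContext();
		manager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
		sensor = manager.getDefaultSensor(Sensor.TYPE_HEART_RATE);
		listener = new HeartRateListener();
		manager.registerListener(listener, sensor, SensorManager.SENSOR_DELAY_NORMAL);
	}

	class HeartRateListener implements SensorEventListener {
		public void onSensorChanged(SensorEvent event) {
			bpm = (int) event.values[0];  // the heart rate sensor reports BPM as a float
		}
		public void onAccuracyChanged(Sensor sensor, int accuracy) { }
	}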

We have the same structure in the Processing sketch, extended to be able to create visual output on smartwatches. We can do more complex things, too.

We can make activity tracking more exciting. For example, if you can grow a tree or produce some other visual output, it becomes more engaging.

Virtual Reality (32:54)

I know a lot of people are interested in virtual reality, so I’ll cover it quickly. There are many people doing very interesting things in VR. I don’t know if you have tried Tilt Brush, for example; it’s an amazing experience. You draw in space, and it’s amazing.

There is another very interesting project, more like an art installation, where you have these crazy headsets. The idea is that you experience how an animal sees: you can see as a bat or as a deer. The people who created this installation imagined how animals see so that you can experience it.

It’s really a new medium, so we are just starting to learn how to use it. I think we are helping here with Processing by making visual sketching and experimentation easier. Hopefully this will give people another tool to explore this new medium.

How difficult is it to use VR in Processing? It’s the same as what I showed you before; it’s pretty easy. You use the same basic structures from Processing that people are used to on the desktop. It’s the same for wear, for wallpapers, and for VR.


	import processing.vr.*;

	void setup() {
		// Use the stereo VR renderer for the whole screen
		fullScreen(PVR.STEREO);
	}

	void draw() {
		background(157);
		lights();
		// Spin a box in front of the viewer
		rotateX(frameCount * 0.01f);
		rotateY(frameCount * 0.01f);
		box(500);
	}

It’s very easy to go from a 3D sketch in Processing to VR: it’s the same code, you just run it as a VR app, and that’s it. You don’t need any configuration, nothing special. All you need to do is add import processing.vr.*; and say, “I want a full-screen image using the stereo renderer.” The rest of the code is the same as what you can find in a tutorial on 3D in Processing. There’s nothing new, really; everything is handled internally.

There’s one thing that is very technical and not shown here: the coordinate system. Unlike a regular sketch, you don’t need to center the screen yourself; things are centered automatically when you are using VR, and the y axis points up instead of pointing down. You also have to make sure that your app doesn’t make people motion sick. That’s important.

So there is the stereo renderer, which you get by importing processing.vr. All I need to do is say that instead of running my app as a regular app or as a wallpaper, I want to run it as a VR app, and that should be enough. This next demo simply shows how to create an aim to select objects, which is very important especially on Cardboard: with other headsets you typically have controllers for interaction, but on Cardboard it’s much more limited, so you typically select things by looking at them.

All you need to do is say, “I have my camera, I have the forward vector,” and that gives me the pointer. The rest is all the same as you would normally do in 3D sketching in Processing. There’s not really much else to do.
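
As a rough illustration of that idea, not the exact demo code: treat the gaze as a ray from the camera position along the forward vector, and test whether that ray passes close to an object’s center. The helper below is hypothetical; how you actually read the camera position and forward vector each frame depends on the VR renderer’s API.

	// Hypothetical gaze-selection test: does the ray from camPos along forward
	// pass within `radius` of a target point?
	boolean isLookingAt(PVector camPos, PVector forward, PVector target, float radius) {
		PVector dir = forward.copy().normalize();
		PVector toTarget = PVector.sub(target, camPos);
		float along = toTarget.dot(dir);            // distance along the gaze ray
		if (along < 0) return false;                // target is behind the viewer
		PVector closest = PVector.add(camPos, PVector.mult(dir, along));
		return PVector.dist(closest, target) < radius;  // how far the ray misses by
	}

In the demo, you would feed this the camera and forward vectors from the renderer each frame and highlight the box when it returns true.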

It’s just about trying out ideas and experimenting with this. Hopefully this gave you some sense of the things you can do.

Conclusion (38:42)

We want to integrate Processing with Android Studio; for example, when we have a visualization that we want to integrate into a larger app, we can do that. This is actually what I’m doing in my day job, where I work in biomedicine and create health apps.

We have a medical app where we input data from patients and then create a visualization of some sort, in this case the mortality risk for the patient. You can then select treatment options. That part, the interactive visualization, is done with Processing; the rest is just regular data-input forms.

It’s part of a larger project where we’re trying to deploy these tools in the field and do data collection and machine learning. We just export the sketch as an Android project and then bring that into Android Studio. What you get is essentially the sketch that you had in the PDE, but now it is a class in the project.

How do you embed the Processing sketch into the main activity in this case? You have a layout, you create a sketch object, and then you say, “I want that sketch in this layout.” That’s what we are working on to really integrate both things.
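
In code, the pattern looks roughly like the following. This is a sketch of the approach described in the Android Studio integration tutorial; the exported sketch class is assumed to be called Sketch, and class and method names such as PFragment and setView may differ between versions of the Android mode.

	// Rough outline of embedding an exported Processing sketch in an activity
	import android.os.Bundle;
	import android.support.v4.app.FragmentActivity;
	import android.view.View;
	import android.widget.FrameLayout;
	import processing.android.PFragment;
	import processing.core.PApplet;

	public class MainActivity extends FragmentActivity {
		private PApplet sketch;

		@Override
		protected void onCreate(Bundle savedInstanceState) {
			super.onCreate(savedInstanceState);
			// A plain layout that will host the sketch's drawing surface
			FrameLayout frame = new FrameLayout(this);
			frame.setId(View.generateViewId());  // any unique view id works here
			setContentView(frame);

			// Create the sketch object and attach it to the layout through a fragment
			sketch = new Sketch();
			PFragment fragment = new PFragment(sketch);
			fragment.setView(frame, this);
		}
	}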

I’m also playing around with the idea of using Processing to write both the wearable sketch and the handheld sketch. Imagine you are tracking activity or some other information and you want to visualize it on the phone; that is the typical thing you do in an activity tracker. We’re working on that. You can do that with Processing.

Right now it’s a little bit of a hack, because you have to create the two apps separately, merge them together, and import the result into Android Studio. It’s still experimental, something we are working on. You use Processing to create the handheld and wear apps, then manually move the handheld project into the mobile folder of the wear project, do some updates, and add whatever other code you need for the data. You can then open that with Android Studio, compile it there, and install it on the device.

So you really can integrate your visualizations or your sketches from Processing into Android Studio, even if you’re working on something more complex.

Thank you.

Questions (42:14)

Going back to the VR example, one of the things that often trips me up is the difference between diegetic and non-diegetic drawing. When I’m going from a regular mobile or web app, where the window is the view, to VR, there’s a significant difference between where the object is being drawn in space and the way it’s actually being mirrored on the screen. Are there considerations like that, or do you have to accommodate that and think about it when you’re working with Processing for VR?

In Processing, we don’t do anything special; really, it’s all what the Google VR SDK does. It has some corrections, which I believe address what you’re saying, but I’m not actually 100 percent sure. I think those things are taken care of by Google VR already, but we can talk a little more about that afterward, because I’m interested to know whether it’s exactly that or something else that we are missing.

Thank you!



About the content

This talk was delivered live in April 2017 at Droidcon Boston. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.

Andres Colubri

Andres Colubri is a researcher working in the visualization and modeling of biomedical data, as well as with interactive graphics for arts and design. He originally obtained a doctoral degree in mathematics back in his native Argentina, and later pursued an MFA at the Design Media Arts department at the University of California, Los Angeles. He is also a member of the Processing project, a programming language, environment, and community focused on making code-based art. He is currently developing the next version of Processing for Android, which includes support for wearable devices and VR.
