
Video Processing on Android

An increasing number of social media apps such as Instagram, Snapchat and Messenger are using video features. Modern Android devices have enough processing power to run video processing algorithms locally, without the need to send videos to a backend for processing. Overlays, transcoding, and cropping are just a few of the challenges you face when working with video on-device.

In this talk, I will go through my experience experimenting with built-in as well as third-party libraries for video manipulation on Android. We will look at the benefits of two leading media processing libraries, and how you can leverage them to enhance Android’s native MediaCodec API to accomplish these tasks. You will walk away with a head start on how to tackle the most common challenges with videos on Android.

Introduction

I’m Namrata Bandekar, and I will go over how to get started with video processing in your Android apps. I am an iOS engineer at OANDA, and I’m also a part of the Android tutorial team at raywenderlich.com, a development tutorial site.

Video Processing

In this talk, I will refer to video processing in the context of video editing; things like scaling, cropping, trimming or overlaying data on a video.

Video processing is used in popular social media apps like Snapchat and Instagram. Many of these apps have special effects you can apply to videos, like filters, and overlaying text and images. Some support merging videos together, as well as cropping and trimming them.

In early 2015, I started working at Stagename as an Android developer on a product called WeatherGIF. The app provided a feed of data through looping videos. By the time I joined, the iOS app was already complete. On iOS, Apple provides a great library called AVFoundation for working with video.

I started looking at what native support Android offers, and I came across a few promising APIs; a short sketch of how they fit together follows the list.

  • MediaCodec, which lets you encode and decode video.
  • MediaExtractor, which lets you pull the audio and video tracks out of a file separately.
  • MediaMuxer, which lets you combine audio and video tracks back into a single file.
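
Here is a minimal sketch (my own example, not from the talk, with error handling stripped down) that uses MediaExtractor and MediaMuxer to pull the audio track out of an MP4 without re-encoding:

	import java.nio.ByteBuffer;
	import android.media.MediaCodec;
	import android.media.MediaExtractor;
	import android.media.MediaFormat;
	import android.media.MediaMuxer;

	// Demux the first audio track of an MP4 into its own file; no decoding needed.
	void extractAudio(String inputPath, String outputPath) throws Exception {
		MediaExtractor extractor = new MediaExtractor();
		extractor.setDataSource(inputPath);

		MediaMuxer muxer = new MediaMuxer(outputPath,
				MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

		// Find the first audio track and select it for reading.
		int dstTrack = -1;
		for (int i = 0; i < extractor.getTrackCount(); i++) {
			MediaFormat format = extractor.getTrackFormat(i);
			if (format.getString(MediaFormat.KEY_MIME).startsWith("audio/")) {
				extractor.selectTrack(i);
				dstTrack = muxer.addTrack(format);
				break;
			}
		}
		if (dstTrack < 0) throw new RuntimeException("No audio track found");

		muxer.start();
		ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
		MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
		while (true) {
			info.size = extractor.readSampleData(buffer, 0);
			if (info.size < 0) break; // no more samples
			info.offset = 0;
			info.presentationTimeUs = extractor.getSampleTime();
			info.flags = extractor.getSampleFlags(); // sync-sample flag passes through
			muxer.writeSampleData(dstTrack, buffer, info);
			extractor.advance();
		}
		muxer.stop();
		muxer.release();
		extractor.release();
	}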


These were the only media classes Android provided natively, and there was no built-in support for features like cropping, scaling, or overlaying data.

I then looked at third-party libraries, such as GStreamer, which proved to be too complex and large to include in the app, and FFmpeg, the popular open-source library for anything related to video.

FFmpeg supports filters, like the ones in Instagram, as well as the things I needed to do like cropping and scaling.

I started looking at how I could use FFmpeg on Android, and I found two ways to do this:

  • Build it as an executable: run it in your Android app’s process, and pass it commands as you would on the command line.
  • Build it as a shared object library: a .so file, and use the Java Native Interface (JNI) to call into the FFmpeg libraries.

Because of time constraints, I decided to start with the executable approach; it was much faster to get working than going the .so route.

Building the Executable

On a MacBook, I first had to install several dependencies: autoconf (used to automatically configure your native source code), automake (used to automatically generate makefiles), and libtool (used for creating portable compiled libraries).
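
If you use Homebrew, for example, these tools can be installed in one step (yasm is included here because the configure script shown later enables it):

	brew install autoconf automake libtool yasm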

The first step to building the executable is to download the Android NDK, and then download the FFmpeg source code from its repository. Then put all of the code you downloaded, the FFmpeg source and any third-party libraries, under the NDK’s sources folder.

Next, configure your FFmpeg executable so that you can control the features that it has. You can use FFmpeg on a lot of different platforms and architectures.

First, define the target OS, which is Linux in our case, along with the architecture you’re compiling for, either ARM or x86. Then specify the directory of your NDK using the sysroot flag. The compiler uses this path to search for headers when compiling the native code.

	./configure \
	--target-os="$TARGET_OS" \
	--arch="$NDK_ABI" \
	--cpu="$CPU" \
	--sysroot="$NDK_SYSROOT" \
	--enable-pic \
	--enable-libx264 \

Next, enable and disable specific flags in the configure script. FFmpeg actually consists of a bunch of different libraries, and here we are enabling and disabling them individually. Enabling the decoders and encoders pulls in the libavcodec library, and enabling the filters pulls in the libavfilter library, which provides the video processing tasks like cropping and scaling.


\
	--enable-decoders \
	--enable-encoders \
	--enable-muxers \
	--enable-demuxers \
	--enable-filters \
	\
	--enable-hwaccels \
	--disable-debug \
	\
	--enable-ffmpeg \
	--disable-ffplay \
	--disable-ffprobe \
	--disable-ffserver \
	--disable-network \
	\
	--enable-yasm \
	--disable-shared

After configuring, I run make, which creates the FFmpeg executable. It takes at least 20 minutes per architecture to compile the code. Once the build finishes, you get an executable binary under the bin folder in your NDK. Copy it to the raw folder in your app, which is under your resources directory, and then write Java wrapper classes to pass commands to this executable.
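
The build step itself looks something like this (the -j flag, which parallelizes the build, is optional; make install copies the binary into the prefix set up by the configure script):

	make -j4
	make install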

To make sure that this file can be executed inside your Android app’s process, you need to change the permissions on the file. Start by copying the FFmpeg executable out of the raw folder into the app’s bin directory:

	try {
		File f = new File(ctx.getDir("bin", 0), "ffmpeg");
		if (f.exists()) {
			f.delete();
		}
		// Copy the executable out of res/raw into the app's bin directory
		copyRawFile(ctx, R.raw.ffmpeg, f);
		// Make the binary executable (note the space after the mode)
		String filePath = f.getCanonicalPath();
		Runtime.getRuntime().exec("chmod 0755 " + filePath).waitFor();
	} catch (Exception e) {
		String errorMsg = e.getLocalizedMessage();
		Log.e(TAG, "installBinary failed: " + errorMsg);
		return null;
	}

This builds the file path inside the app’s bin directory, then uses the chmod command to change the permissions to 755 so the binary is executable.

Use the ProcessBuilder class to pass commands to FFmpeg. ProcessBuilder is instantiated by passing the list of commands to its constructor, and you also specify the directory in which your FFmpeg executable is stored. Then you call the start method on the ProcessBuilder object, which returns a Process object and kicks off execution of those commands.
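
Here is a rough sketch of that flow (the method name runFfmpeg is my own, not from the talk; ProcessBuilder itself is the standard java.lang API):

	import java.io.BufferedReader;
	import java.io.File;
	import java.io.InputStreamReader;
	import java.util.List;
	import android.util.Log;

	// Run the FFmpeg binary with the given arguments and wait for it to finish.
	int runFfmpeg(List<String> cmd, File binDir) throws Exception {
		ProcessBuilder pb = new ProcessBuilder(cmd); // cmd[0] is the path to the ffmpeg binary
		pb.directory(binDir);                        // the directory containing the executable
		pb.redirectErrorStream(true);                // merge stderr into stdout

		Process process = pb.start();

		// FFmpeg logs progress to its output; drain it so the process
		// doesn't block on a full pipe buffer.
		BufferedReader reader = new BufferedReader(
				new InputStreamReader(process.getInputStream()));
		String line;
		while ((line = reader.readLine()) != null) {
			Log.d("FFmpeg", line);
		}
		return process.waitFor(); // 0 means FFmpeg exited successfully
	}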

FFmpeg takes its editing commands as filters: the different video editing tasks like cropping, scaling, and overlaying are all run as filters, and when you chain them together, it’s called a filter chain.

Here are the commands:


	ArrayList<String> cmd = new ArrayList<String>();

	// Path to the FFmpeg binary installed earlier
	cmd.add(mFfmpegBin);

	// Overwrite the output file if it already exists
	cmd.add("-y");
	// The input video
	cmd.add("-i");
	cmd.add(new File(inputVideo.path).getCanonicalPath());
	// The filter chain: load the watermark, then scale, crop, and overlay
	cmd.add("-vf");
	cmd.add("movie="+watermarkImg.path+" [logo]; [in] scale="
		+newWidth+":"+newHeight+" "+"[scaled]; [scaled] crop="
		+newDimension+":"+newDimension+" [cropped]; [cropped][logo] overlay=0:0 [out]");

	result.path = outputPath;
	result.mimeType = "video/mp4";

	// The output file
	cmd.add(new File(result.path).getCanonicalPath());

	execFFMPEG(cmd, sc);

Shared Object Library

I then explored the second way of implementing this, which was to build FFmpeg as a shared object library. The setup is similar to the executable process: you download the source code, put it under the NDK sources folder, and put your JNI wrapper project code in the same directory as your NDK.

JNI is the Java Native Interface; you use it to write wrapper code, in C or C++, that accesses the methods in FFmpeg’s native library code.

Next, I made an Android.mk makefile. This file is defined per library, so I had to make one Android.mk file for the JNI wrapper library and another for the FFmpeg library.

	LOCAL_PATH := $(call my-dir)
	include $(CLEAR_VARS)
	LOCAL_MODULE := videokit
	ANDROID_LIB := -landroid
	LOCAL_CFLAGS := -I$(NDK)/sources/ffmpeg
	LOCAL_SRC_FILES := videokit.c ffmpeg.c cmdutils.c
	LOCAL_SHARED_LIBRARIES := libavcodec libavutil libavfilter

	include $(BUILD_SHARED_LIBRARY)
	$(call import-module,ffmpeg/android/$(CPU))

This file specifies variables that the NDK build system passes to the compiler.

The next step is to create the Application.mk file - this defines properties for all the modules in your project. It defines those properties for the videokit library as well as the FFmpeg library.
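
A minimal Application.mk might look like this (the ABI and platform level below are assumptions; set them for the architectures you actually target):

	APP_ABI := armeabi-v7a
	APP_PLATFORM := android-14
	APP_OPTIM := release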

Place both the Android.mk and Application.mk files in the same folder, and run ndk-build. It uses the same configure script as the executable technique, and it produces three .so files. Place these in the jniLibs folder under the architecture that you compiled them for.

To access the .so file, write a Java wrapper class that loads the library and declares the methods implemented in the .so file using the native keyword.
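
A minimal sketch of such a wrapper (the class name VideoKit and the native method run are illustrative, matching the videokit module above; System.loadLibrary is the standard mechanism):

	public class VideoKit {
		static {
			// Load the FFmpeg libraries first, then the JNI wrapper that uses them.
			System.loadLibrary("avutil");
			System.loadLibrary("avcodec");
			System.loadLibrary("avfilter");
			System.loadLibrary("videokit");
		}

		// Implemented in native code (videokit.c); takes FFmpeg-style arguments.
		private native int run(String[] args);

		public int process(String[] ffmpegArgs) {
			return run(ffmpegArgs);
		}
	}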

FFmpeg Licensing

FFmpeg is distributed under the LGPL license, but some of the libraries that you might want to use with it are distributed under the GPL license, like libx264. FFmpeg has a license compliance checklist on its website, and I would stress following every point on it, because it’s all there to satisfy legal requirements.

If your app is for commercial or proprietary use, you shouldn’t include the “--enable-gpl” or “--enable-nonfree” flags in your configure script. Using “--enable-gpl”, for example, puts your build under the GPL license.

The other things to note are that FFmpeg can only be used as a dynamically linked library if you want to comply with the LGPL, and that even if you’re not modifying the FFmpeg source code, you’re required to publish the FFmpeg source code you used, for example in a public repository.

Limitations of FFmpeg

  • I found the license compliance checklist to be pretty tedious.

  • FFmpeg is slow, even for small videos.

  • A large binary: if you include all of FFmpeg’s features, the library you build is going to be really large, about 30 megabytes in size.

MP4Parser

I looked at other options for anyone who doesn’t need features like cropping, scaling, and overlaying, and I came across MP4Parser, a lightweight Java library.

MP4Parser supports merging multiple videos into a single one, trimming a video to make it shorter in duration, and muxing audio and video into a single file, as well as demuxing them. Here is an example using the RxJava wrapper, RxMp4Parser:


	RxMp4Parser.concatenateInto(
		// The output file where the resulting movie should be stored
		output,
		// A video trimmed to the portion between 8.5 and 13 seconds
		RxMp4Parser.crop(f, 8.5f, 13f),
		// A full video
		RxMp4Parser.from(f)
	)
	.subscribe(new Action1<File>() {
		@Override
		public void call(File file) {
			mProgressDialog.dismiss();
		}
	}, new Action1<Throwable>() {
		@Override
		public void call(Throwable throwable) {
			Toast.makeText(MainActivity.this, "Concatenation failed! " +
				throwable.getMessage(), Toast.LENGTH_SHORT).show();
		}
	});

An advantage of using this library is that it is available as a plain Java library, so it’s unnecessary to build an executable or a shared object library. It has a clean API and is very lightweight.

What are the limitations of MP4Parser?

It is limited in functionality. You can only merge, trim, mux, and demux the videos. It doesn’t support any encoding or decoding.

Resources

The first resource is the Guardian Project: it has well-documented steps on how to build your FFmpeg executable, and it also has a Java wrapper class that you can use to pass commands to the executable. The Yelp repository has a script that builds FFmpeg as an executable for all the different architectures.

I used the videokit library to build the .so file.

There is also the FFmpeg license compliance checklist, which I highly recommend you look at if you’re going to use FFmpeg in your app.

Finally, there are the MP4Parser repository and the reactive version of it, RxMp4Parser.

Questions

Is RxMp4Parser a React Native library, like Facebook’s React Native?

It is actually an RxJava library, a reactive Java library. It’s not the same as React Native.

MediaCodec is slow for some use cases; do you have any suggestions around this?

You could use FFmpeg. It supports a lot of different codec formats for decoding and encoding. You can also use other third-party codec libraries, like libvpx.

As for which apps use these libraries: Telegram uses MP4Parser, but I haven’t really seen any other apps. I believe Snapchat and Instagram use their own code.

If someone wanted to make an app with video processing, is it viable to use FFmpeg, or what would you recommend?

I would say if you’re working with really low-quality videos like I was, it’s completely viable. Processing wasn’t that bad, about eight to ten seconds, and it wasn’t very noticeable.



About the content

This talk was delivered live in July 2017 at 360 AnDev. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.

Namrata Bandekar

Namrata is a Software Engineer at OANDA and has experience doing native Android and iOS development. She is a member of the Ray Wenderlich Android tutorial team. Apart from building apps, she is passionate about travelling, dancing and hiking with her dog.

