
Fun and Fast Audio with Swift

Even though iOS and OS X are rich multimedia environments, app sound is often overlooked, and true innovation in audio technology has lagged behind. Aurelius Prochazka shows how easy it is to develop audio expertise using Swift playgrounds to learn about audio synthesis, processing, and analysis with real-time audio feedback. He discusses how to use audio in apps, from traditional synthesizer emulations and sound-enhanced apps to more innovative applications. At the end, he dives deep into the digital signal processing internals of AudioKit's audio units.


Interactive Playgrounds (0:14)

Newton’s Cradle (0:14)

I have used Xcode 7.3 for most of my demos because it has the new interactive playgrounds. Apple's example interactive playground is neat, but it's missing something. I updated that playground to make sounds for the collisions in the Newton's Cradle.

You can add sound using AVAudioEngine, but I am using a toolkit I built called AudioKit. In my demo, I load a WAV file (placed in the Resources folder of the playground), and then I play it on the collisions of each ball in the Newton's Cradle.
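A minimal sketch of that setup, assuming AudioKit 3's playground API; the file name `drip.wav` is a stand-in for whatever sample you drop into the Resources folder:

```swift
import AudioKit

// Load a sample from the playground's Resources folder
// ("drip.wav" is a placeholder for your own file).
let file = try AKAudioFile(readFileName: "drip.wav")
let player = try AKAudioPlayer(file: file)

AudioKit.output = player
AudioKit.start()

// Call this from the collision handler of each ball.
func ballDidCollide() {
    player.play()
}
```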

The playground gives you access to the physical parameters of the space the cradle lives in. For instance, I can create a reverb that is proportional to the gravity parameter, so that gravity affects the feedback and the cutoff frequency of the reverb.
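Roughly, the mapping could look like this; a sketch assuming AKCostelloReverb (which exposes both feedback and cutoffFrequency) applied to the player from the previous example, with a hypothetical gravityDidChange hook:

```swift
import AudioKit

let reverb = AKCostelloReverb(player)
AudioKit.output = reverb
AudioKit.start()

// Hypothetical hook: called whenever the playground's gravity value moves.
func gravityDidChange(gravity: Double) {       // normalized 0.0 ... 1.0
    reverb.feedback = 0.4 + 0.5 * gravity      // more gravity, longer tail
    reverb.cutoffFrequency = 500 + 5000 * gravity
}
```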


XY Pad, Telephone (2:21)

Another playground, based on Erica Sadun's blog post about interactive playgrounds, is a drawing tool. I attached a frequency-modulating oscillator to the XY locations of the drawing, so moving around the canvas changes the frequency and modulation parameters accordingly.
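In sketch form, assuming AudioKit 3's AKFMOscillator and a hypothetical touchMoved handler fed by the drawing view:

```swift
import AudioKit

let fm = AKFMOscillator()
AudioKit.output = fm
AudioKit.start()
fm.start()

// Hypothetical touch handler: map the drawing point to FM parameters.
func touchMoved(x: Double, y: Double) {   // both normalized 0 ... 1
    fm.baseFrequency = 200 + 800 * x      // left-right sweeps the pitch
    fm.modulationIndex = 20 * y           // up-down adds sidebands
}
```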

One last quick example: the iOS Simulator does not have a working telephone in the Phone app, but you can mock one up easily in a playground. By playing two oscillators at different frequencies at certain times, you can make all the standard phone sounds.
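The standard tones are just pairs of sine waves: a US dial tone is 350 Hz plus 440 Hz, and each keypad digit has its own DTMF pair. A sketch with two plain oscillators, assuming AudioKit 3's AKOscillator and AKMixer:

```swift
import AudioKit

let low = AKOscillator()    // defaults to a sine wave
let high = AKOscillator()
AudioKit.output = AKMixer(low, high)
AudioKit.start()

// US dial tone: 350 Hz + 440 Hz, played continuously
low.frequency = 350
high.frequency = 440
low.start()
high.start()

// DTMF pair for the "5" key: 770 Hz + 1336 Hz
func pressFive() {
    low.frequency = 770
    high.frequency = 1336
}
```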

Status Quo (5:20)

Swift is a welcoming community, and people are rethinking how things are done: questioning the status quo, and even taking ideas from other programming areas and bringing them to Swift. For audio programming, the status quo is to record, find or purchase a song or sound, and play it back at a certain time or on a certain event. Maybe you add a filter to the sound or spatialize it in 3D space, but not much else.

Pd, Wwise, FMOD (5:52)

You may be drawn to one of the visual multimedia programming environments, Max/MSP or Pd. Creative people, especially in academia, use Pd. However, it may or may not scale well or be friendly to distributed collaboration. Perhaps your company has the resources for one of the professional solutions that power iOS games.

Does it seem right that one of the top audio solutions, Wwise, distributes its authoring engine as a Windows program that runs on your Mac in a Wine bottle? I thought the Mac was the audio platform.

More generally, is it right that audio is done outside of our normal developer toolset, by a completely different person who has years of skill in a particular piece of software?

When I worked for MCA Universal Pictures, we had graphic artists using Photoshop and Painter in one room, and they would hand files to the HTML programmers in the other room, with no communication between them. Over time, we got people who were good at both graphics and programming, and tools like CSS and Bootstrap that bridged the gap. But for audio, we are still back in 1994, where audio people are still very separated from the programmers.

We could try to make the argument: “Let’s do it all in Swift!”

Core Audio (7:47)

You can access Core Audio and AVAudioEngine from Swift. I represent AudioKit, but I will also direct you to some great work by:

  • Michael Tyson of The Amazing Audio Engine, and his Loopy and Loopy Masterpiece apps
  • Gabor Szanto from Superpowered, who is a force of nature
  • Haris Ali from EZAudio, a friend to AudioKit

… yet I built my own. Why?

objc.io Issue 24 (9:02)

In Issue 24 of objc.io, Aaron Dave suggested that, to conquer your fear of adopting audio, you have to Play, Fail, Iterate. You have to learn how to work in audio, and playgrounds in Swift make this possible.

AKDelay (11:32)

In this example, I attach the microphone to a delay processor. The delay effect repeats what I say; bringing up the feedback increases the number of repeats, and the cutoff frequency makes those repeats more muffled. If I reset these parameters, I can show you the interesting part: changing the time parameter. With that, you can get into some trippy effects (e.g., pitch shifting, robotic voices, etc.).
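The whole demo is only a few lines; this sketch assumes AudioKit 3's AKMicrophone and AKDelay, with illustrative parameter values (wear headphones, or the microphone will feed back):

```swift
import AudioKit

let mic = AKMicrophone()

// Wrap the microphone in a delay effect.
let delay = AKDelay(mic,
                    time: 0.5,            // seconds between repeats
                    feedback: 0.0,        // 0 ... 1: strength of the repeats
                    lowPassCutoff: 15000, // Hz: lower values muffle repeats
                    dryWetMix: 0.5)

AudioKit.output = delay
AudioKit.start()

// Bring up the feedback to multiply the repeats,
// and drop the cutoff to muffle them.
delay.feedback = 0.8
delay.lowPassCutoff = 2000

// Sweeping the time parameter is where the trippy effects live.
delay.time = 0.1
```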

AudioKit for iOS Playgrounds (13:00)

I have over 100 playgrounds in AudioKit. The table of contents starts with a Getting Started playground, plus some fun ones to get people interested. There is also a set of basic tutorials on how to run playgrounds and on the audio processing node, plus playgrounds for plotting, for analyzing the frequency and amplitude of ambient sound, and so on.
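The frequency and amplitude analysis, for instance, boils down to just a few lines; this sketch assumes AudioKit 3's AKFrequencyTracker, which reports the pitch and loudness of whatever flows through it:

```swift
import AudioKit

let mic = AKMicrophone()
let tracker = AKFrequencyTracker(mic)

AudioKit.output = tracker
AudioKit.start()

// Poll the tracker; in a playground you might read it on a timer.
print("frequency: \(tracker.frequency) Hz, amplitude: \(tracker.amplitude)")
```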

There are also playgrounds that inspired our synthesizer example, covering playback, sampling, MIDI, audio synthesis, oscillators, noise generation, and physical models, plus an array of effects processing: delays, distortions, reverbs, and filters.

Finally, you can get down and develop your own node with custom processes, called “operations.” At each level, as you learn, you get a chance to go deeper; in fact, you could go all the way down to programming in C.
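Operations are assembled from small signal-processing primitives and wrapped in a generator node. A minimal sketch, assuming AKOperation and AKOperationGenerator from AudioKit 3:

```swift
import AudioKit

// A custom node: a sine wave whose pitch is wobbled by a slower sine.
let vibrato = AKOperation.sineWave(frequency: 5) * 20   // ±20 Hz, 5 times/sec
let tone = AKOperation.sineWave(frequency: vibrato + 440)

let generator = AKOperationGenerator(operation: tone)
AudioKit.output = generator
AudioKit.start()
generator.start()
```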

AudioKit V3 (14:44)

In terms of architecture, the top layer of AudioKit is written in Swift. It is a Swift framework, so you can drag it into your project. However, the Swift layer calls down into Objective-C and C++, all the way down to pure C. It uses Apple's version 3 Audio Unit API, requires iOS 9 or OS X Yosemite, and works across Apple devices.

AudioKit is 100% free and open source, and includes many custom effects, most of which are built on Paul Batchelor's Soundpipe and Sporth modules. You can wrap anything from Csound or FAUST to the Synthesis ToolKit and ChucK. If you email us about something you have seen in the open source world that sounds cool, we will probably add it to the list too.

AudioKit was created with the help of two Realm projects: Jazzy, which is awesome, and a daily run of SwiftLint.

AudioKitParticles (17:05)

The example app that comes with AudioKit is a full-fledged analog synthesizer with oscillators whose waveforms can be morphed, an ADSR envelope, and filters (see the video for a demo). The synthesizer was made by AudioKit core team member Matthew Fecher from Colorado.
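That signal chain can be approximated with a few stock nodes: a morphing oscillator feeding an ADSR envelope feeding a filter. A sketch, assuming AudioKit 3's AKMorphingOscillator, AKAmplitudeEnvelope, and AKMoogLadder:

```swift
import AudioKit

// An oscillator that morphs between stored waveforms via its index parameter.
let osc = AKMorphingOscillator(waveformArray:
    [AKTable(.Sine), AKTable(.Triangle), AKTable(.Sawtooth), AKTable(.Square)])

// An ADSR envelope shapes each note's attack and release.
let env = AKAmplitudeEnvelope(osc,
                              attackDuration: 0.05,
                              decayDuration: 0.1,
                              sustainLevel: 0.8,
                              releaseDuration: 0.3)

// A Moog-style low-pass filter caps off the chain.
let filter = AKMoogLadder(env, cutoffFrequency: 2000, resonance: 0.5)

AudioKit.output = filter
AudioKit.start()
osc.start()

osc.index = 1.5   // morph halfway between triangle and sawtooth
env.start()       // note on
env.stop()        // note off
```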

Another core team member, Simon Gladman, took the sound analysis tools for frequency and amplitude and connected them to his Metal-based particle simulator.

Minim, by Livid Instruments (18:26)

Minim is an iPhone-sized MIDI controller by Livid Instruments. One of its engineers, Jeff Cooper (also an AudioKit core team member), is responsible for AudioKit's sequencing, sampling, and MIDI implementation.

Unity iOS + AudioKit (21:33)

Alexander Hodge (from the University of Waterloo) is combining Unity and AudioKit with the Echo Nest API to let users play background music and have the game's sound effects harmonized with the backing track.

Getting Involved (22:25)

On the AudioKit core team, we can use help from all walks of life:

  • Grizzled C/C++ Veterans: Optimize and improve our base code
  • Audio Programmers: Make more nodes and port from other open audio libraries
  • Swift Trailblazers: Help us try to make audio feel as “Swifty” as possible
  • Bright-eyed Optimists: We want to see what you dream up
  • Bloggers / Educators: Help us use playgrounds as a platform for building understanding of audio fundamentals





Aurelius Prochazka

Aurelius Prochazka started AudioKit in 2012 and has rewritten the code to take advantage of Swift and audio features in iOS 9. He is the co-author of the Ruby on Rails Tutorial. Aurelius is an independent contractor in Mill Valley, and has been dedicated to building free, open-source software for almost twenty years.

