
Advanced Graphics with Core Animation

Animation is one of the cornerstones of the UI experience on iOS, and thanks to the animation APIs in UIKit, it is incredibly easy. However, by dropping down to Core Animation level, it is possible to create even more dynamic and impressive animations that can help make your app stand out. In this talk from try! Swift, Tim Oliver explains the concepts of how Core Animation works in iOS, how to work with the API, as well as examples on what types of animations and effects it can enable in your app.

Introduction (00:00)

Hi, I’m Tim Oliver! In today’s post, I’d like to talk about advanced graphics with Core Animation.

Before we dive in, some notes:

As a side project, I am working on a DRM-free comic reader app for iOS: iComics. The goal is that you buy comics yourself off the internet, put them onto your iOS device, and then you can read them anywhere. There are websites that republish old comics that have lost their copyright status, as well as newer sources (e.g. Humble Bundle and Image Comics). On the Japanese side, there is Mangatoshokan Z, where you can buy comics in a DRM-free format.

It is challenging: when you have an app that relies on user input data, you usually cannot tell what format the entered data will come in. Sometimes it is small, but often it is big and needs to be treated before it can be effectively rendered on an iOS device. As a result, I have spent much time over the years playing with various techniques: image scaling, image rendering, and playing with Core Animation to get it to work.

This is where I am coming from with my presentation today. I would like to show you everything I have learned from my experience.

Note: I am happy to say that iComics uses Realm. Even though I work at the company, coming from a customer perspective, I recommend it.

Overview (00:36)

  1. Introduction to Core Animation, and how it differs from UIKit. This will include the CALayer object, and what effects you can apply to these layer objects to create effects not possible with UIKit.

  2. APIs: what APIs are available, and what effects you can achieve.

  3. CALayer subclass APIs, and GPU level effects.

What is Core Animation? (04:16)

Core Animation is one of my favorite frameworks in iOS. The API has not changed much between Objective-C and Swift, but it is the type of framework that many developers do not know about, as it is not necessary when you are developing apps. But, once you get to know how to use it, you can create great effects.

Core Animation was introduced in OS X as an optional component: you could opt in if you wanted that extra performance for your graphics. In iOS, by contrast, it sits one level below UIKit and is an integral part of the system.

It is the graphics rendering and animation system available in iOS, and it is deeply integrated with UIKit. Many properties in UIKit, especially on UIView objects, map directly to properties in Core Animation. The two are linked together: setting a property on one automatically updates the corresponding property on the other.

Core Animation is not executed on the CPU. Its commands are instead offloaded to the GPU, which creates the graphics shown on the screen, powered by the graphics hardware.

I like to think of it as akin to game development. Back in the days when I modded game engines, the UI had the same hierarchy: a series of GPU-backed squares, quads, on the screen. That is, texture-mapped polygons that you manipulate at the GPU level. In fact, dare I say it, if you have done low-level graphics programming (e.g. game development or other technologies that incorporate OpenGL), many of these properties in Core Animation will seem familiar. Hence, an understanding there will make it easier to work with the graphics hardware.

But, is it possible to make a game on top of Core Animation? I found out that Disney does a few of their games on top of Core Animation. No game engine, no Unity, no Unreal. If you turn debugging mode on, you can even see the layer shapes and transformations happening in real time.

Why is it Good to Know About Core Animation? (07:00)

This is not a fundamental framework you have to be aware of when you make apps. You can make functional and well-designed apps without needing it. However, knowing it helps you boost the quality of your apps, because you understand what is inherently happening in the back end, and how content is being drawn on the screen.


This can potentially be of great benefit when you are making UI layouts that are possibly more complex than normal. If you have many effects and layers blended over each other, the performance can drop heavily if you do not know what you are doing, or where the bottlenecks are. In this case, by learning more about how the system works, you can optimize your animations and your graphics to ensure that your apps run at 60 frames per second (FPS). Also, the amount of Core Animation’s functionality that is exposed to UIKit is limited. There is more you can do when you drop down to that level and start accessing the APIs yourself. It can help you make a polished app: one that performs smoothly, looks nice, and comes with effects that would not otherwise be in the native system implementation (which users ❤️).

What About Core Graphics? (08:05)

Core Animation is supposed to be the graphics system of iOS, but there is also Core Graphics.

Core Graphics is entirely done on the CPU, and cannot be performed on the GPU. Because it is an entirely CPU-bound operation, it is sometimes slower on older devices (e.g. the iPad 2 or the iPad 3rd Generation). This is something you need to keep in mind when using it in your apps. But, the good thing about Core Graphics is you can combine it with Core Animation.

You can use Core Graphics to create the actual bitmaps, and use Core Animation to display them to create some cool effects. A cool desktop app I recommend is PaintCode, which lets you import graphics, or even draw them freehand in the app itself, and it will create the code necessary to render that content in Core Graphics.

So. Core Animation? (09:33)

It is comprised of a series of layer objects. The fundamental class that represents these layer objects is CALayer. It looks similar to how UIViews are implemented. If you execute the code below, you will get a red square on the screen. One of the well-known properties of CALayer is .cornerRadius, which achieves a nice rounded-edge effect.

import UIKit

let newLayer = CALayer()
newLayer.frame = CGRectMake(0, 0, 100, 100)
newLayer.backgroundColor = UIColor.redColor().CGColor
newLayer.cornerRadius = 10

Where is it in UIKit? (10:09)

Everything you see when you look at a UIView is not being done on the UIView level, but by a backing layer attached to that view.

public class UIView {
   public var layer: CALayer { get }
}

Everything you see is being done by a CALayer; while the layer provides the visual content, at the UIKit level, UIView provides everything else (Auto Layout functionality, touch recognition, gesture recognizers). All of that is layered on top through UIKit.

Deeply Integrated with UIView (10:37)

CALayer and UIView are tightly integrated. Many times, when interacting with the UIView, you are implicitly affecting the properties of the layout underneath. For example, when you change the .frame property on a view, you are manipulating the bounds and the center properties on the layer underneath, which are then implicitly returned back as the .frame property of the view.

public class UIView {
   public var frame: CGRect {
      get {
         return self.layer.frame
      }
      set {
         self.layer.frame = newValue
      }
   }
}

While I do not expect the actual implementation of UIView to look like this, many of the UIView level properties are simple access overrides that are manipulating the properties for the layer underneath.

Why is it Not a Superclass? (11:13)

Why is UIView not a subclass of CALayer? Why is the layer being exposed to a property on UIView instead?

Apart from CALayer, Apple also provides subclasses (e.g. tiled and gradient layers). You can insert them into a UIView subclass by swapping out which type of layer is exposed by that property. This model would not have been possible if UIView were simply a subclass of CALayer. Let’s say you wanted to make a UIView subclass that displays a dynamic gradient. It is very easy to do if you override the layerClass() class method of the view and specify CAGradientLayer instead.

public class MyGradientClass : UIView {

    override class func layerClass() -> AnyClass {
       return CAGradientLayer.self
    }
}

Mapping Contents to CALayer (11:56)

It is very easy to create a red quad on the screen, but that’s boring. Instead, it is possible to map a bitmap to a layer by accessing the .contents property (see code example).

let trySwiftLogo = self.trySwiftLogo() as UIImage

let trySwiftLayer = CALayer()
trySwiftLayer.contents = trySwiftLogo.CGImage

This is very similar to a UIImageView. We can take the “try! Swift” logo we generated with Core Graphics earlier, export that as a UIImage, convert it to a CGImage, and directly apply that to the contents property of the layer.

Managing the Scale of CALayer Contents (12:30)

Another cool thing is that the .contents property is animatable, which can have some cool effects. It is possible to configure a layer to change how the bitmap is rendered depending on the size of the frame of the layer itself.

This property is called .contentsGravity, and it behaves similar to content mode on the UIView level. The default value is kCAGravityResize: whatever size the frame is, the image is distorted and squished to match exactly the same size. When you change the setting to kCAGravityResizeAspectFill, the aspect ratio of the image stays the same, but it is scaled up to fill all of the layer, and anything outside of the region is just disregarded. Changing to kCAGravityResizeAspect will make sure the image stays at its normal aspect ratio and all of it is visible within the bounds of the frame, regardless of the size. When set to center (kCAGravityCenter), the image is not resized, but placed in the middle of the frame regardless of its size.
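A minimal sketch of how these modes are set (the image name is a hypothetical asset; the constants are from the iOS 9-era SDK used throughout this post):

```swift
let logoLayer = CALayer()
logoLayer.contents = UIImage(named: "TrySwiftLogo")?.CGImage // hypothetical asset

// Keep the bitmap's aspect ratio, fill the whole layer,
// and crop anything that falls outside the bounds
logoLayer.contentsGravity = kCAGravityResizeAspectFill

// Resizing the frame now rescales the bitmap entirely on the GPU
logoLayer.frame = CGRectMake(0, 0, 320, 180)
```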

With this capability, what type of effects are possible in iOS?

  • One of my favorite apps, Tweetbot, uses this functionality in a cool way: as the content scroll goes down, the image in the background gets larger. The width of the frame does not change, only the height. As the frame gets bigger, the aspect fill capability of the layer increases the image to fit at the same aspect ratio.

  • I found a cool way to leverage this capability of CALayers in my own app, iComics. When iOS 7 came out, there was a shift in the design of iOS apps to be more minimal. I thus wanted the slider graphics in my own app to be transparent and the background to still be poking through from behind. Two elements needed to work together (challenging!): the background layer that had the page numbers in it, and a transparent handle button that the user manipulated. But, how do you clip out the portion of the numbers layer directly behind the sliding button? The solution turned out to be very easy: you render one instance of the numbers graphic and then you apply it to two separate layers on either side of the button. If you set the contents gravity of the lefthand layer to left (kCAGravityLeft) and the righthand one to right (kCAGravityRight), when you manipulate their frames around the sliding button, it gives the illusion that they are not changing any position.

Because this method leverages the GPU, it is incredibly performant. There are other ways you could have gone about doing this. For example, using a masking layer, or doing it in Core Graphics. But, because both of them would have leveraged the CPU, it would have been slower.

Bitmap Sampling in CALayer (15:13)

Core Animation also exposes settings that let you configure which resampling algorithm the GPU uses. Whenever you change the size of a layer so that it no longer matches the original size of the bitmap mapped to it, resampling needs to be done to make sure it does not look jagged or distorted.

By default, the sampling mode that Core Animation uses is called bilinear filtering (kCAFilterLinear), a simple linear interpolation between two pixels. Sometimes, even bilinear filtering is too slow. If you are rapidly resizing a frame during animation, you might get stuttering.

In cases where you are changing a frame rapidly, it might be best to set the filter to nearest (kCAFilterNearest). Nearest mode completely disables pixel resampling. If you have made a layer very big or very small, it is obvious that no sampling is applied, and it will look jaggy. But when the animation happens so fast that you might not even notice, it is good for getting better performance.

The final mode is called trilinear filtering (kCAFilterTrilinear), where the GPU will generate differently sized versions of the same bitmap, and blend them together to create resizing of the texture in question. This can result in slowdowns because there is a CPU element involved - it depends on your needs, but you can get superior resampling quality by doing it this way.
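As a sketch, switching filters around a fast animation might look like this (the image name is a hypothetical asset):

```swift
let thumbnailLayer = CALayer()
thumbnailLayer.contents = UIImage(named: "Thumbnail")?.CGImage // hypothetical asset

// Disable resampling while the layer is being rapidly resized
thumbnailLayer.minificationFilter = kCAFilterNearest
thumbnailLayer.magnificationFilter = kCAFilterNearest

// ... perform the fast frame animation ...

// Restore the default bilinear filtering once the animation completes
thumbnailLayer.minificationFilter = kCAFilterLinear
thumbnailLayer.magnificationFilter = kCAFilterLinear
```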

A good example of where nearest filtering can become useful in iOS is the open and close animation for apps. When an app closes, not only is it rescaling very quickly, but it is also cross-fading back to the original icon. This combination can cause FPS stuttering on older devices. But because the animation happens so quickly, you can use nearest filtering and it is barely visible to the user.

Masking CALayer Objects (16:48)

Masking CALayer objects involves taking a layer and setting it as the mask property of another layer.

let myLayer = CALayer()
myLayer.contents = self.makeRedCircleImage().CGImage

let myMask = CALayer()
myMask.contents = self.makeMaskImage().CGImage

myLayer.mask = myMask

A layer will be clipped by this mask, but it will still be functional, interactive, and animatable. At iComics, I wanted to make a tutorial view that explained what a setting did in the app. I used PaintCode to generate a series of images, and an alpha channel mask to composite these images together. The cool thing about the final effect is it does not matter what the color or pattern of the background is. It will work on any background, even while scrolling. It is a dynamic and useful effect.

Adding Shadows to CALayer (17:37)

Another common effect in iOS (though less so since iOS 7) is the use of shadows. What would otherwise be a long and arduous process, Core Animation makes easy: generating dynamic shadows and attaching them to any shape.

The following code will indeed render a shadow. However, because the system has to do a per-pixel comparison of the layer’s contents to work out the shape of the shadow, it will be incredibly slow to render and animate.

let myLayer = view.layer
myLayer.shadowColor = UIColor.blackColor().CGColor
myLayer.shadowOpacity = 0.75
myLayer.shadowOffset = CGSizeMake(5, 10)
myLayer.shadowRadius = 10

let myShadowPath = UIBezierPath(roundedRect: view.bounds, cornerRadius: 10)
myLayer.shadowPath = myShadowPath.CGPath

As a result, whenever you are working with shadows in Core Animation, you should always make sure to set the .shadowPath property. This property will tell Core Animation in advance what the shape of the shadow will be, reducing render time.

Transforming a CALayer (18:15)

Core Animation also provides a transform property on CALayer. Unlike the transform property on UIView, which is purely 2D, the one on CALayer provides 3D transformations.

let myLayer = CALayer()
myLayer.contents = self.makeTrySwiftLogoImage().CGImage

var transform = CATransform3DIdentity
transform.m34 = 1.0 / -500
transform = CATransform3DRotate(transform, CGFloat(45.0 * M_PI / 180.0), 0, 1, 0)
myLayer.transform = transform

While the design paradigm for iOS 7 is ostensibly flat and minimal design, there are instances in the system where 3D transformations are used. The most famous example of a third party application using this effect is the app Flipboard. Flipboard uses 3D transformations to create a page-turning effect whenever you are transitioning between articles in its app. While that is not possible in the UIView level, it is easy to do on the Core Animation level.

Blend Modes with CALayer (19:04)

The ability to add blending modes to CALayers is a cool thing in Core Animation, but it uses some private system APIs. While I cannot officially recommend this, it is still cool to check out. That being said, it would be very easy to obfuscate; if you want to consider it for an official app, it is a possibility.

let myBlendLayer = CALayer()
myBlendLayer.setValue(false, forKey: "allowsGroupBlending") // PRIVATE
myBlendLayer.compositingFilter = "screenBlendMode"
myBlendLayer.allowsGroupOpacity = false

For example, if you have two layers, and you set the top one to be screen blend mode, you will get this effect where the top layer is being additively blended on the one below.

One day, I got curious and decided to reverse engineer the “slide to unlock” UIView inside iOS. It is not a simple gradient, as most people would assume. There is a subtle fractal shimmering effect. I wondered how many layers it had. Five layers, it turns out!

The top layer is a mask-blending layer that clips everything to its alpha channel. The next layer is a flat base color that serves as the background of the effect. The next three layers are gradient layers that create a wedge-shaped highlight sheen and a highlight color. The final layer is the most complicated: an outline of the text, stroked, dashed, and blurred, then overlaid to create that shimmering effect when blended with the gradients. I have also published this code on GitHub, if you are interested in checking out how it works in more detail.

When I was inspecting the official “slide to unlock” view, one app that was handy was Reveal. Reveal is a debugging tool that lets you inspect the views of your app from the device on your Mac as it is executing. This is extremely valuable if you are developing an app with a complex UI and you need to debug it in real time. I could not have done this introspection without it.

Animating with Core Animation (20:52)

How do you implement animations in Core Animation, and how does it compare to the UIView animation APIs at the UIKit level? I will start with how to implement animations at the UIKit level, and then show you how to drop down to the Core Animation level.

Compared to UIKit (21:17)

let trySwiftView = UIImageView(image: trySwiftLogo)
trySwiftView.center = CGPointZero

UIView.animateWithDuration(2, delay: 0, options: .CurveEaseInOut, animations: {
   trySwiftView.center = CGPointMake(0, 500)
}, completion: nil)

On the UIKit level, creating an animation is easy. All it takes is one method call, and you supply a closure that modifies the properties you want to animate; optionally, you supply a second closure that executes once the animation is finished. Core Animation is trickier, with more code and management required.

CABasicAnimation (21:29)

Compared to the previous UIKit example, to create a similar effect in Core Animation, you need to create a CABasicAnimation object and fill that with all the necessary parameters. Once the object is created, you add it to the layer object to start the animation.

let trySwiftLayer = //...

let myAnimation = CABasicAnimation(keyPath: "position.x")
myAnimation.duration = 2
myAnimation.fromValue = trySwiftLayer.position.x
myAnimation.toValue = trySwiftLayer.position.x + 500
myAnimation.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionEaseInEaseOut)
myAnimation.repeatCount = .infinity

trySwiftLayer.addAnimation(myAnimation, forKey: "myAnimationKeyName")

When you create an animation in UIKit, this is what is happening in the background. You can look up the active animations through the layer’s animationKeys() method.

Timing Function (21:32)

One of the major advantages of creating animations at the Core Animation level is that you can create custom timing functions to control how an animation progresses over time. A good resource for this is a website that lets you directly manipulate an animation timing curve and provides the floating point control points that you can copy straight into Core Animation.

let timingFunction = CAMediaTimingFunction(controlPoints: .08, .04, .08, .99)

let myAnimation = CABasicAnimation()
myAnimation.timingFunction = timingFunction

This lets you manipulate the speed at which an animation occurs over time with an extremely high level of precision.

Animating a CALayer’s Contents (22:38)

This is how you can animate the contents property of a CALayer.

let imageView = UIImageView()
let onImage = UIImage()
let offImage = UIImage()

let myAnim = CABasicAnimation(keyPath: "contents")
myAnim.fromValue = offImage.CGImage
myAnim.toValue = onImage.CGImage
myAnim.duration = 0.15

imageView.layer.addAnimation(myAnim, forKey: "contents")

imageView.image = onImage

If you had one image and you wanted it to crossfade to another image, you would normally create two views and animate the respective alphas: one from 100% to 0%, the other from 0% to 100%. However, when both layers hit 50% opacity, there is a brief moment where you can see through both of them. That creates a dimming effect that is easy for the human eye to pick up. If instead you animate the contents property of a CALayer, you get a crossfade without that dimming artifact. This is especially useful when both images have transparency, because the dimming effect is more pronounced in that case. (This visual style is more iOS 7; it changed in iOS 8.)

CAKeyframeAnimation (23:33)

CAKeyframeAnimation is a powerful class: you can chain multiple animation points within one object. But not just that: each keyframe point can have a CGPath object assigned, which lets you create animations that are not just linear, point-to-point transitions, but curves.

let rect = CGRectMake(0, 0, 200, 200)
let circlePath = UIBezierPath(ovalInRect: rect)

let circleAnimation = CAKeyframeAnimation()
circleAnimation.keyPath = "position"
circleAnimation.path = circlePath.CGPath
circleAnimation.duration = 4

// Manually specify keyframe points
// circleAnimation.values = //...
// circleAnimation.keyTimes = //...

let trySwiftLayer = //...
trySwiftLayer.addAnimation(circleAnimation, forKey: "position")

CAAnimationGroup (23:53)

While at the UIKit level you could add multiple animations within one closure, in Core Animation you must make multiple CABasicAnimation objects. In that case, you can create your animation objects and add them to a CAAnimationGroup object, which controls their shared timing (and consequently makes Core Animation more efficient).

let myPositionAnimation = CABasicAnimation(keyPath: "position")
let myAlphaAnimation = CABasicAnimation(keyPath: "opacity")

let animationGroup = CAAnimationGroup()
animationGroup.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionEaseInEaseOut)
animationGroup.duration = 2
animationGroup.animations = [myPositionAnimation, myAlphaAnimation]

let trySwiftLayer = CALayer()
trySwiftLayer.addAnimation(animationGroup, forKey: "myAnimations")

Animation Completion Handling (23:53)

On a similar note to UIKit animations, you can be notified once the animation has finished, either via a delegate or a completion closure.

// Set a delegate object
let myAnimation = CABasicAnimation()
myAnimation.delegate = self

// Animation completion is sent to 'animationDidStop(anim:finished:)'

// ———

// Or, set a closure to be executed at the end of the current transaction
CATransaction.setCompletionBlock {
   // Logic to be performed, post animation
}
As you can see, there is a lot of code that needs to be written if you want to write animations at the Core Animation level. Similar to PaintCode, there is another app, Core Animator: you create the animations in the app itself, and it generates the relevant Core Animation code that you can then copy into your app (I definitely recommend you check it out). It will probably save you time in the long run.

Features of Core Animation Subclasses (24:41)

In iOS, Apple provides a variety of CALayer subclasses, with many different features.

  • Some of these subclasses rely on the CPU for the operations they perform; it may be necessary to test them on specific devices to make sure they meet your needs.

  • To insert a CALayer subclass into a UIView, all you need to do is subclass the UIView and override its layerClass() class method.

 public class MyGradientClass : UIView {
      override class func layerClass() -> AnyClass {
         return CAGradientLayer.self
      }
 }
CATiledLayer (25:15)

Tiled layers are a staple of vector-based drawing on iOS. They automatically manage a grid of tile regions and handle the asynchronous drawing of content to those regions via Core Graphics. When placed in a zooming scroll view, they can be configured to redraw their content at a much bigger resolution as they are zoomed in. This is incredibly useful for content that can be rendered at any resolution dynamically, for example, text in a PDF file or topology information in a mapping application.
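A minimal sketch of wiring a CATiledLayer into a view, following the layerClass() pattern from earlier (the drawing logic itself is left as a stub):

```swift
class TiledContentView: UIView {
    override class func layerClass() -> AnyClass {
        return CATiledLayer.self
    }

    override func drawRect(rect: CGRect) {
        // Called asynchronously, once per visible tile; draw only
        // the region in `rect` with Core Graphics (e.g. part of a PDF page)
    }
}

let tiledView = TiledContentView()
let tiledLayer = tiledView.layer as! CATiledLayer
tiledLayer.tileSize = CGSizeMake(512, 512)
tiledLayer.levelsOfDetailBias = 2 // redraw at up to 4x scale while zooming in
```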

CAGradientLayer (25:49)

CAGradientLayer is a simple layer in that all it takes is a series of points and colors to produce a dynamic gradient effect. The gradient is produced on the GPU, so it is incredibly fast. It is often placed over layers that have been 3D-transformed in order to add more perception of depth to the effect.
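A minimal sketch of a two-color vertical gradient:

```swift
let gradientLayer = CAGradientLayer()
gradientLayer.frame = CGRectMake(0, 0, 320, 200)

// Fade from solid black at the top to fully transparent at the bottom
gradientLayer.colors = [UIColor.blackColor().CGColor, UIColor.clearColor().CGColor]
gradientLayer.locations = [0.0, 1.0]
gradientLayer.startPoint = CGPointMake(0.5, 0.0) // top center
gradientLayer.endPoint = CGPointMake(0.5, 1.0)   // bottom center
```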

CAReplicatorLayer (26:10)

The next CALayer subclass is reminiscent of the thumbnail tiling effect in the iOS 7 Music app. CAReplicatorLayer allows you to specify one layer and have it duplicated multiple times over on the GPU. The copies can then be tweaked, for example by changing their color or position, to create amazing effects. Because it is all done on the GPU, it is much faster than if you tried to replicate the layers manually.
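A minimal sketch: one small dot layer replicated into a fading row, entirely on the GPU:

```swift
let replicatorLayer = CAReplicatorLayer()
replicatorLayer.instanceCount = 10
// Offset each copy 20 points to the right of the previous one
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(20, 0, 0)
// Make each successive copy slightly more transparent
replicatorLayer.instanceAlphaOffset = -0.1

let dotLayer = CALayer()
dotLayer.frame = CGRectMake(0, 0, 10, 10)
dotLayer.cornerRadius = 5
dotLayer.backgroundColor = UIColor.whiteColor().CGColor
replicatorLayer.addSublayer(dotLayer)
```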

CAShapeLayer (26:33)

The next example is one I highlighted in the “slide to unlock” example. CAShapeLayer allows you to insert a CG path property and customize that visually. For example, it is very easy to create a circle that you can either completely fill or add a stroke to. This layer is useful for dynamically creating effects that line up with the iOS 7’s design style (e.g. loading spinners and buttons). I recommend you check it out and also check out some of the libraries that people have already built around it on GitHub, e.g. UAProgressView.
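A minimal sketch of a stroked circle; the animatable strokeEnd property is what makes loading spinners and progress rings so easy:

```swift
let circleLayer = CAShapeLayer()
circleLayer.path = UIBezierPath(ovalInRect: CGRectMake(0, 0, 100, 100)).CGPath
circleLayer.fillColor = nil
circleLayer.strokeColor = UIColor.blueColor().CGColor
circleLayer.lineWidth = 3

// Draw only the first 75% of the circle; animating strokeEnd
// from 0 to 1 produces a progress-ring effect
circleLayer.strokeEnd = 0.75
```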

CAEmitterLayer (27:18)

As seen in the Disney app earlier, this layer serves as a particle emitter, which emits other layers configured to animate in a specific way. I am not sure why this effect exists on iOS, but I get the feeling it was probably built for OS X, where there are particle effects when you perform actions in the Dock, such as dragging an icon out.

You could certainly add these effects to your own app if you want a magical touch in your transitions. If you are looking at working with CAEmitterLayer, I recommend the app Particle Playground. Available in the Mac App Store, it lets you configure your particle settings with a visual previewer, and export the lot as a class that you can copy straight into your app.
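A minimal sketch of a particle emitter (the sparkle image is a hypothetical asset):

```swift
let emitterLayer = CAEmitterLayer()
emitterLayer.emitterPosition = CGPointMake(160, 0)
emitterLayer.emitterShape = kCAEmitterLayerLine

let sparkle = CAEmitterCell()
sparkle.contents = UIImage(named: "Sparkle")?.CGImage // hypothetical asset
sparkle.birthRate = 20      // particles emitted per second
sparkle.lifetime = 3        // seconds each particle lives
sparkle.velocity = 100
sparkle.emissionRange = CGFloat(M_PI) / 4
sparkle.scale = 0.5

emitterLayer.emitterCells = [sparkle]
```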

Other Layer Subclasses (27:59)

There are a few more classes available, but I have not tried them myself, and I get the feeling that most of them are not appropriate for iOS, leaning more towards OS X capabilities:

  • CATextLayer behaves similarly to UILabel, except it renders text at the top of the frame instead of in the middle. That being said, because we have UILabel available on iOS, with features like Auto Layout, I recommend you use it whenever you can.

  • CAScrollLayer is probably more oriented towards OS X, because we have UIScrollView on iOS, but it provides the same functionality for scrolling large amounts of content.

  • Unlike the transform property we talked about earlier, which manipulated a single flattened layer, CATransformLayer lets you manipulate a whole layer hierarchy in 3D space and create effects such as spinning cubes. A good example of this might be the old iBooks store animation in iOS 5.

  • Two types of layers are low-level GPU layers: CAEAGLLayer and CAMetalLayer. They both let you set up a rendering context in which you can perform low-level graphics rendering. These are the layers you use when you are making a game and want direct access to the GPU, writing your own rendering code and bypassing Core Animation.

Conclusion (29:03)

Try and work on the UIView level first, and when you hit an instance where you cannot get the control you want, then drop down to the Core Animation level. As long as you understand how UIView and how Core Animation work, even if you have to make compromises, there should be few times where you are sacrificing 60 frames per second for the animation you want.

While Core Animation offers more control and functionality, it is also more code that needs to be written and maintained. I always recommend you try and achieve the effect you want in UIKit first.

Let’s all aim to make amazing, beautiful apps!

About the content

This talk was delivered live in March 2017 at try! Swift Tokyo. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.

Tim Oliver

Tim Oliver hails from Perth, Australia! He has been an iOS developer for 6 years, and recently joined Realm in March 2015. Tim has a cool app called iComics and he loves karaoke! He does, in fact, also sometimes have the problem of too many kangaroos in his backyard.
