TLDR: Table view (and collection view) data source should be backed by a snapshot of your data store; not a real-time view of it.

I attended last week’s iOS KW meetup, which was a talk by Chris Liscio about Collection View Controllers on iOS and macOS. The talk was good, featuring a lot of interesting details of how Chris implemented the main view in his music app, Capo. But what was really interesting is what I learned in chatting with him afterwards. I’ve been doing iOS development for more than 6 years, and this finally flipped a switch in my brain about why managing updates in table views (and collection views) was always so difficult for me.

Animating updates to a UITableView’s contents is ostensibly pretty easy—you get notifications about the contents of your data store, and you tell your table view which rows were added, removed, or updated. In order to have things animate all together nicely, you do this in a transaction:

```swift
tableView.performBatchUpdates {
    tableView.insertRows(at: [IndexPath(row: 3, section: 1)], with: .automatic)
    tableView.reloadRows(at: [IndexPath(row: 0, section: 0)], with: .automatic)
}
```

However, there are some arcane rules about the order in which these updates are processed—not in the order in which you submit them, but all reloads first, then deletes, then insertions. In other words, reloads and deletes are processed relative to the state of the table before you begin making any updates, and insertions are processed relative to the state of the table once deletions have completed.
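To make those rules concrete, here’s a toy simulation using a plain array in place of table rows (illustrative only, no UIKit involved): deletions are resolved against the pre-update contents, and insertions against the contents after the deletions have been applied.

```swift
// Illustrative only: simulating UITableView's batch-update index semantics
// with a plain array standing in for table rows.
var rows = ["A", "B", "C", "D"]

let deletions = [1]           // delete "B": index relative to the original contents
let insertions = [(2, "X")]   // insert "X" at 2: index relative to the post-deletion contents

// Deletions first, processed high-to-low so earlier removals don't shift later indices...
for index in deletions.sorted(by: >) {
    rows.remove(at: index)
}
// ...then insertions, low-to-high, applied to the already-shrunken array.
for (index, element) in insertions.sorted(by: { $0.0 < $1.0 }) {
    rows.insert(element, at: index)
}

print(rows) // ["A", "C", "X", "D"]
```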

In the past, when I had to make updates based on a completed network request, for instance, I would call tableView.beginUpdates(), then respond to KVO notifications by calling insertRows() and its friends as the network response handler added, updated, and removed items in the data store, and then call endUpdates() when processing was complete. Since the updates to the data store could happen in any order, it was extremely difficult to re-order everything so that it would make sense in the order that UITableView performs its updates. Crashes occurred often. This was also a source of high coupling between classes that should have little or no knowledge of each other.

It turns out the answer is very simple. As Chris described, your data source should represent a snapshot of your data, not a real-time view of it. After changes have finished, you move your snapshot forward, compute a diff, and send the necessary insert / delete / refresh messages to the table view. This is apparently what NSFetchedResultsController does under the covers, and other technologies such as Realm let you advance your thread’s snapshot either manually or at the beginning of a run loop.
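Here’s a minimal sketch of the idea. The diff below is naive and purely illustrative (it is not how NSFetchedResultsController or Capo actually computes changes), but it shows the shape of the approach: hold a snapshot, diff it against the new state once changes have settled, and only then advance.

```swift
// Illustrative sketch: a data source holds a snapshot and advances it by diffing.
struct Snapshot {
    var items: [String]

    // Naive identity-based diff. Deletions are indices into the old snapshot,
    // insertions are indices into the new one, matching UITableView's rules.
    func changes(to newer: Snapshot) -> (deletions: [Int], insertions: [Int]) {
        let deletions = items.enumerated()
            .filter { !newer.items.contains($0.element) }
            .map { $0.offset }
        let insertions = newer.items.enumerated()
            .filter { !items.contains($0.element) }
            .map { $0.offset }
        return (deletions, insertions)
    }
}

var current = Snapshot(items: ["A", "B", "C"])
let incoming = Snapshot(items: ["A", "C", "D"])

let (deletions, insertions) = current.changes(to: incoming)
print(deletions, insertions) // [1] [2]

// Inside tableView.performBatchUpdates you would map these indices to
// IndexPaths, call deleteRows/insertRows, and advance the snapshot:
current = incoming
```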

The key is that any changes visible through your UITableViewDataSource should only happen during a table view updates transaction. If your data source reflects the real-time state of your backing store, values might have already changed by the time you’re responding to them, which can get you into trouble with the frameworks.

So, lesson learned, 6+ years later: `UITableViewDataSource` shouldn’t represent live data. Move your snapshots forward during your table view’s update transactions. Have fewer crashes.

I wrote in August of last year about some potential confusion coming out of the decision to statically dispatch calls to methods defined in a protocol extension that were not defined in the protocol itself. (Whereas calls to methods defined in the protocol are always dynamically dispatched.) This allowed the protocol / extension designer to differentiate between customization points (methods defined in the protocol) and non-customizable code (methods not defined in the protocol, but implemented in an extension).

It looks like Apple’s received some feedback around this, and Doug Gregor acknowledges it in his “Completing Generics” manifesto. (Update May 5: Austin Zheng has created a formatted version of the manifesto, available on GitHub.) However, I’m not optimistic that this will get changed in future versions of Swift, as he puts it in his “Maybe” section:


There are a number of features that get discussed from time-to-time, while they could fit into Swift’s generics system, it’s not clear that they belong in Swift at all. The important question for any feature in this category is not “can it be done” or “are there cool things we can express”, but “how can everyday Swift developers benefit from the addition of such a feature?”. Without strong motivating examples, none of these “maybes” will move further along.

Dynamic dispatch for members of protocol extensions

Only the requirements of protocols currently use dynamic dispatch, which can lead to surprises:

```swift
protocol P {
  func foo()
}

extension P {
  func foo() { print("P.foo()") }
  func bar() { print("P.bar()") }
}

struct X : P {
  func foo() { print("X.foo()") }
  func bar() { print("X.bar()") }
}

let x = X()
x.foo() // X.foo()
x.bar() // X.bar()

let p: P = X()
p.foo() // X.foo()
p.bar() // P.bar()
```

Swift could adopt a model where members of protocol extensions are dynamically dispatched.

Fingers crossed, but I think the ship has probably sailed on this – I wonder if the change of behaviour would result in more confusing bugs than this quirk of the language did in the first place.

Mike Ash’s Friday Q&A this week takes on Swift’s funny-looking String API. Most of what you’d normally want to do with strings is absent — for the time being, we should be using NSString methods — and it doesn’t allow us to iterate over its characters in the way we’d expect. As of Swift 2.0, we can’t do:

```swift
for char in "some string" { /* ... */ }
```

That’s because there are several ways to interpret the above code — do we want to iterate over all bytes in the string, assuming it’s represented as UTF-8? All Unicode code points? All grapheme clusters? The String API’s design in Swift 2 forces the developer to explicitly decide by exposing these possibilities as views on the string, leaving the system’s internal representation of the string as an invisible implementation detail.
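For example, a string built from a combining accent makes the three counts differ. (This snippet uses later Swift spellings: the grapheme count below is `s.count`, where Swift 2 spelled it `s.characters.count`.)

```swift
// "é" written as a plain 'e' followed by a combining acute accent (U+0301).
let s = "e\u{301}"

print(s.utf8.count)           // 3 (UTF-8 bytes)
print(s.unicodeScalars.count) // 2 (Unicode code points)
print(s.count)                // 1 (one grapheme cluster, i.e. one user-perceived character)
```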

From Mike’s post:

Swift’s String type takes an unusual approach to strings. Many other languages pick one canonical representation for strings and leave you to fend for yourself if you want anything else. Often they simply give up on important questions like “what exactly is a character?” and pick something which is easy to work with in code but which sugar-coats the inherently difficult problems encountered in string processing. Swift doesn’t sugar-coat it, but instead shows you the reality of what’s going on. This can be difficult, but it’s no more difficult than it needs to be.

This is an excellent companion to the must-read NSString and Unicode, showing the implications for Swift.

If you want the Cole’s Notes version, Apple has a pretty good, much shorter explainer on their Swift blog.

Update March 7, 2016: Doug Gregor wrote about this problem on swift-evolution.

This post is also available as a playground.

Seemingly the most-talked-about session from this year’s WWDC was Session 408: “Protocol-Oriented Programming in Swift”. Since Apple made all WWDC videos available to the public this year, you can watch it on Apple’s Website, or read a transcribed version on ASCIIwwdc. As promised, it’s a really interesting talk about how most abstractions should be modelled in Swift as protocols instead of base classes, and how that meshes well with generics in order to produce nicely architected code. Lots has been written on the subject, so I won’t repeat it all here – you can watch the session or read any of the number of blog posts on it.

However, as I was watching the talk, I noticed a quirk with how protocol extensions work that I think is going to confuse people and lead to difficult-to-track-down bugs. I tweeted a screenshot of a playground illustrating this. But unless you’ve been following along closely with Swift’s development, it probably doesn’t make much sense:

So here’s the more in-depth explanation.

The setup

Apple’s vision of protocol-oriented programming is built on top of Swift’s protocol extensions and default protocol implementations. (If you’re coming from a non-Apple development background, substitute “interface” for “protocol.”)

As an aside, the main feature that enables Apple’s vision of protocol-oriented programming (default protocol implementations) would have solved a big architectural problem for us at work. We use Core Data for many of our data objects, but not all of them. Therefore we have two base classes for model objects: one that inherits from NSObject and one from NSManagedObject. These two base classes provide almost identical base implementations of a bunch of functionality, which we’ve formalized by making both of them conform to a Model protocol. Each of the required methods from that protocol is implemented – usually identically – in the two base classes that conform to it. Default protocol implementations would have saved an awful lot of copy and paste in this situation!

Let’s say we have a protocol for a piece of our app that prints out textual representations of emotions:

```swift
protocol EmotionPrinter {
    func printHappiness() -> Void
    func printSadness() -> Void
}
```

This allows various implementations of the protocol to decide:

  • what they’re going to print
  • how they’re going to print it

But we want to be nice to implementers. We want them to be able to ignore one (or both) of these decisions if they don’t care about it. Let’s provide some default implementations:

```swift
extension EmotionPrinter {
    func printHappiness() -> Void {
        output("HAPPY")
    }

    func printSadness() -> Void {
        output("SAD")
    }

    func output(emotion: String) -> Void {
        print(emotion)
    }
}
```

To check it works, we can create an implementation that just uses the default implementations:

```swift
class BasicEmotionPrinter : EmotionPrinter {}
let basicEmotionPrinter = BasicEmotionPrinter()

basicEmotionPrinter.printHappiness() // Prints "HAPPY"
```

Now we’ll create a custom implementation which wants control over both what is printed and how it’s printed.

I’m too lazy here to invent an alternate method of printing. You could imagine rendering this in a UILabel or something. I’m just prepending some text and using print in this implementation of output.

```swift
class EmojiEmotionPrinter : EmotionPrinter {
    func printHappiness() -> Void {
        output("😀")
    }

    func printSadness() -> Void {
        output("😭")
    }

    func output(emotion: String) -> Void {
        print("Emoji Printer 3000 says \(emotion)")
    }
}

let emojiPrinter = EmojiEmotionPrinter()

emojiPrinter.printSadness()    // Prints "Emoji Printer 3000 says 😭"
```

Additionally, (as expected) even if the compiler doesn’t know the concrete type of an EmotionPrinter, the right version of output still gets called from inside each implementation of printHappiness or printSadness. Here the compiler types printer as EmotionPrinter.

```swift
let printerArray : [EmotionPrinter] = [basicEmotionPrinter, emojiPrinter]

for printer in printerArray {
    printer.printHappiness()
}
```

results in:

```
HAPPY
Emoji Printer 3000 says 😀
```

The confusing bit

Okay, so everything has been pretty straightforward so far. We have a default implementation that does one thing, and an implementation that overrides the defaults, and successfully does its thing, too.

Here’s where things get tricky. Let’s say we do something funky, like calling output directly on an instance of a type that implements EmotionPrinter. (And here’s where my example falls down a bit. In this example, it would take a very foolish programmer to call myEmotionPrinter.output("blah"), but we’re going to do exactly that.)

```swift
for printer in printerArray {
    printer.output("I am a banana 🍌")
}
```

I’d expect this code to produce the output:

```
I am a banana 🍌
Emoji Printer 3000 says I am a banana 🍌
```

But instead the output is:

```
I am a banana 🍌
I am a banana 🍌
```

Here’s an even weirder way to illustrate the same thing:

```swift
let castEmojiPrinter : EmotionPrinter = emojiPrinter

emojiPrinter.output("🍌")      // Prints "Emoji Printer 3000 says 🍌"
castEmojiPrinter.output("🍌")  // Prints "🍌"
```

emojiPrinter and castEmojiPrinter are two references to the same object in memory. I find it very surprising that what the compiler knows about the type (as opposed to the actual runtime type) can cause method dispatch to behave differently.

Apple’s rationale

As Dave Abrahams says in the WWDC talk:

You might ask, ‘what does it mean to have a requirement that’s also fulfilled immediately in an extension?’ Good question.

The answer is that a protocol requirement creates a customization point.

A protocol requirement is anything that goes in your protocol declaration, like printHappiness and printSadness in my example, and is considered a customization point. Anything else that goes into a protocol extension is not. The latter is something that the default implementation wants to maintain control over.

This makes sense on the face of it. When you’re doing class-based polymorphism, you mark some methods as public or protected to allow subclasses to customize behaviour, and others as private or final to prevent that. In many cases what Apple’s done here is desired behaviour, and I think it forms an important feature in their approach to protocol-oriented programming.
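A rough class-based analogue of that distinction might look like this (hypothetical names; output appends to an array rather than printing so the dispatch behaviour is easy to observe):

```swift
// A record of everything "printed", so we can see which method ran.
var log: [String] = []

class BaseEmotionPrinter {
    // Overridable: a customization point, like a protocol requirement.
    func printHappiness() { output("HAPPY") }

    // `final`: subclasses may not override this, like a method that exists
    // only in a protocol extension.
    final func output(_ emotion: String) { log.append(emotion) }
}

class ShoutingPrinter: BaseEmotionPrinter {
    override func printHappiness() { output("VERY HAPPY") } // allowed
    // Overriding `output` here would be a compile-time error.
}

// With classes, dispatch stays dynamic even through the base type.
let printer: BaseEmotionPrinter = ShoutingPrinter()
printer.printHappiness()
print(log) // ["VERY HAPPY"]
```

Unlike the protocol-extension case, upcasting to BaseEmotionPrinter changes nothing here: the subclass’s customization point still wins.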

But it’s a subtle and important difference in how the entire rest of the type system works in Swift. Everywhere else we have some form of dynamic dispatch – the runtime type determines which implementation of a method gets run, but here we have the compile-time type making that decision.

We made a mistake

With this “customization point” rationale in mind – that protocol requirements are customization points and methods in a protocol extension are not – this means that we made a mistake in modeling our protocol. I said that we wanted implementations to be able to decide what to print and how to print it. That means that output is a customization point and should have been included in the protocol definition:

```swift
protocol EmotionPrinter {
    func printHappiness() -> Void
    func printSadness() -> Void
    func output(emotion: String) -> Void
}
```

Try it out in the playground – it works the way you’d expect.

Language design is hard

If I were designing Swift (which, I should make clear, I should not be allowed to do) I think I would favour consistency with the rest of the type system over the expressiveness of “customization points” mixed in with other methods.

My first thought would be to mark the protocol extension’s implementation of output as final. That way by default everything is a customization point, similar to traditional subclassing, except for things specifically marked as “you can’t override this.”

But that doesn’t work because of the nature of protocol extensions. It’s conceivable that the EmotionPrinter protocol and EmojiEmotionPrinter implementation could be provided as part of a system library or 3rd party framework. Then I might have written the protocol extension on EmotionPrinter, providing default implementations for any types I was writing that conformed to that protocol.

And here’s the crux: by definition, any type that conforms to a protocol was designed with the knowledge of the protocol’s requirements. But it may not have been designed with the knowledge of any extensions on that protocol. EmojiEmotionPrinter implements a method output. But if I add a protocol extension with a final implementation of a method called output, that would have to cause a compile error, since EmojiEmotionPrinter is now overriding a method marked as final. And I think Apple chose to avoid this situation because it’s nonsense for code that a consumer of a framework writes to cause compile errors in the framework on which that code depends.

That situation would be both confusing and destructive. Code X that doesn’t depend on code Y shouldn’t fail to compile because of some change in code Y.


In a traditional subclassing pattern, EmojiEmotionPrinter would inherit from a base class that provided default implementations – and by definition would be aware of the existence of an output method in its superclass. But in this case it’s being modified after the fact, by a third party, to conform to a protocol and its default implementation. Apple’s rules on dynamic vs. static dispatch here are just one of many equally-or-more-confusing solutions to this problem. In the end, it’s just another quirk of Swift (and there are many!) that we’re going to have to keep an eye out for.