064: Garbage Collection (Android) vs Reference Counting (iOS)

In this mini-Fragment episode, Kaushik talks about the process of garbage collection and how it compares to reference counting (the equivalent process on iOS).

How does each work? What are the differences? Which is better? Listen on for all the juicy details…

Download directly

Show Notes

Garbage collection (Android)

Reference counting (iOS)

Contact

Transcript

Kaushik Gopal: I was talking to one of my colleagues who works on our iOS applications the other day. We started off the conversation talking about advantages and disadvantages of the different local database options, like Realm and SQLite.

Then he mentioned something that blew my mind. A parent-child relationship is a typical use case in any marginally complex system, and we were talking about how to model them in a local database. Then he said, “Yeah, you’ve got to be extra careful with those relationships. You don’t want retain cycles.” This sounded a lot like a memory leak to me, so I probed a bit further, and that led me to the idea behind this mini-Fragment episode: the world of garbage collection and reference counting.

As I chatted more with my colleague and did some research, I began to understand the differences between how garbage collection (GC) is done in Java (specifically, on Android) and how the equivalent concept (reference counting) is done on iOS.

Let’s start off with a little theory. Most of us are already aware of what GC is. In a nutshell, it’s the process by which we free up memory. If you’re done using a whole bunch of variables and objects in your method, class, or activity, those still take up memory in what’s called the “heap”. Since you don’t have infinite memory, that space has to be released at some point, and the process that releases it is called garbage collection.

To be accurate, there are many different algorithms for garbage collection, but the most common one (which Android uses, for the most part) is called "mark and sweep". The advantage of the "mark and sweep" algorithm is that it was the first that could actually reclaim cyclic data structures. This is important, especially when you compare it to iOS's method of reference counting, but we'll get to that in due course.

When using “mark and sweep,” unreferenced objects are not reclaimed immediately. Instead, the garbage is allowed to accumulate until all the available memory has been exhausted. When that happens, the program’s execution is suspended temporarily while the algorithm collects all of the garbage, so to speak.

By the way, when this happens, it actually halts your UI for a brief moment. This is what we usually call "UI stutter" or "jank". If you run low on memory and this begins to happen a lot, it's then called "thrashing". Garbage collection thrashing happens, unfortunately. But coming back to the topic, once all of the unreferenced objects have been reclaimed, the normal execution of the program just resumes.

An interesting question you may be asking at this point is, "How exactly does the OS detect if an object is unused?" The evaluation for whether something needs to be collected or not starts at a point called a "garbage collection root". Think of this as a linked structure, where you have a starting node, like the root of a tree. From that node, you keep tracing the next reference. If you started with object A, and it references another object, you jump to that object and look at all of its references. You continue this process, marking the objects as you go along the way. Then, at the very end, the garbage collection process steps back and says, "Which objects were not marked? If they weren't marked, then they're not referenced by anyone. That basically amounts to garbage, so I can release the resources that were used up by those objects." That's how the "mark and sweep" algorithm functions, in a very simple and basic way.
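To make that concrete, here's a toy mark-and-sweep sketch in Java. It's purely illustrative, not how ART's collector is actually implemented, and all of the names (`Obj`, `heap`, `roots`) are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

// A toy heap object: a name, outgoing references, and a mark bit.
class Obj {
    final String name;
    final List<Obj> refs = new ArrayList<>();
    boolean marked = false;

    Obj(String name) { this.name = name; }
}

public class MarkAndSweep {
    // Every allocated object lives on the "heap".
    static final List<Obj> heap = new ArrayList<>();
    // GC roots: think statics, locals on thread stacks, etc.
    static final List<Obj> roots = new ArrayList<>();

    // Mark phase: walk outward from a root, marking everything reachable.
    static void mark(Obj obj) {
        if (obj.marked) return; // already visited; this also keeps cycles from looping forever
        obj.marked = true;
        for (Obj ref : obj.refs) mark(ref);
    }

    // Sweep phase: anything left unmarked is garbage.
    static void sweep() {
        heap.removeIf(obj -> {
            if (!obj.marked) {
                System.out.println("collecting " + obj.name);
                return true;        // "free" the object
            }
            obj.marked = false;     // reset the mark for the next GC cycle
            return false;
        });
    }

    public static void main(String[] args) {
        Obj a = new Obj("A"), b = new Obj("B"), c = new Obj("C");
        heap.add(a); heap.add(b); heap.add(c);
        a.refs.add(b);   // A -> B
        roots.add(a);    // only A is a root, so C is unreachable

        for (Obj root : roots) mark(root);
        sweep();         // prints: collecting C
    }
}
```

Notice that the mark phase only follows references outward from the roots; whether objects point at each other in a cycle never enters into it.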

Now you should remember that this mechanism also accounts for cyclic references, which is pretty cool. We'll touch on that again a little later when we try to compare and contrast the differences between these methods.

We've talked about how GC works at this point, so we have a pretty good understanding of that. Let's dive into some dark territory and see how iOS does its version of garbage collection: reference counting. Reference counting is similar to GC in that the objective is the same, but the way that it's achieved is radically different. In the early days of iOS programming (or more accurately, Objective-C programming), programmers had to manually count and release the references that are automatically taken care of for us in GC. So every time you created a variable, you actually had to write code saying, "Hey, retain this," or "Release that." Retain and release were real Objective-C methods that you had to write. I know what you're thinking: "That's barbaric!" Well, that's what the process was for the longest time. They called it Manual-Retain-Release (MRR).

But then, in Xcode 4.2, Apple introduced a concept called Automatic Reference Counting (ARC). ARC automatically inserts all of these retain and release calls that we just talked about, so that you don't have to write them manually. As a developer, it's similar to GC in that you don't have to take care of it. It just happens on its own. But you still have to remember that under the hood the process is the same. It's very much a reference counting process, which is different from the garbage collection process that we previously discussed.

Reference counting basically works by marking heap objects with the number of parties or variables that refer to them. It actually keeps a running tally of the number of references on the side. Say person #1 needs this resource. It would bump the reference count up to '1'. Then person #2 needs the same thing, and it bumps the count up to '2'. Then another person needs this, and it bumps the reference count up to '3'. Now person #1 says, "Actually, I no longer need it, so release it." Then the reference count goes down to '2', and so on. Basically, it keeps a running tally of references throughout the program for an object.

When that number finally drops to zero, it assumes that the object is no longer needed, because it has zero references. That’s how the memory is released.
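Java doesn't use reference counting, but just to make that running tally concrete, here's a hypothetical sketch of the bookkeeping in Java (the `RefCounted` class and its methods are invented for illustration):

```java
// A toy model of the reference-count tally described above.
class RefCounted {
    private final String name;
    private int count = 0;

    RefCounted(String name) { this.name = name; }

    void retain() {      // "person N needs this": bump the tally
        count++;
        System.out.println(name + " retained, count = " + count);
    }

    void release() {     // "person N is done": drop the tally
        count--;
        System.out.println(name + " released, count = " + count);
        if (count == 0) {
            // Zero references left: the memory is freed right away.
            System.out.println(name + " deallocated");
        }
    }
}

public class Tally {
    public static void main(String[] args) {
        RefCounted resource = new RefCounted("resource");
        resource.retain();   // person #1 -> count 1
        resource.retain();   // person #2 -> count 2
        resource.retain();   // person #3 -> count 3
        resource.release();  // person #1 is done -> count 2
        resource.release();  // count 1
        resource.release();  // count 0 -> deallocated immediately
    }
}
```

Under MRR you wrote those retain and release calls by hand; ARC simply has the compiler insert them for you.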

As you can see, it’s very, very different from garbage collection. While GC waits for some time before kickstarting this side process, reference counting is always happening. It’s constantly checking references and releasing them at every single step. It’s a very different approach. GC clumps the complete process and does it all at once, but reference counting distributes the work. Instead of doing it all in one big chunk, it breaks it into extremely small chunks, and constantly keeps doing that process.

Now that you have an idea about how each of these processes work, let’s enter the holy war territory. Which is better: GC or reference counting? It’s really hard to answer that question, because there are so many variables. In different situations, one approach works better, and in other situations, the other works better. How much total memory do you have? How often do you do object allocations? All of that plays into this decision.

Instead of trying to come up with a single answer, let's talk about the advantages and disadvantages of each approach in general. That way, we can make an educated guess about situations where one would beat the other. Let's start off with GC. The biggest ding against garbage collection is that it's unpredictable. We know it will happen, but we don't exactly know when. The garbage collector can essentially kick off anytime. The GC will just decide, "You know, I feel like now is a good time to run the process." This can be a problem: if you're running this crazy cool animation in your UI and the GC decides to kick in at exactly that moment, it will cause jank. There's no guarantee of a consistent delta time between one frame and the next, so some frames will be skipped because your GC process decided to kick in at that point. Then everyone who cares about buttery smooth UI performance will call us out.

Reference counting, though, is super predictable. These retain and release calls are automatically jotted down at compile time. With the ARC mechanism, you know exactly when the references will go down, because that code is effectively right in front of you, in the variables that you see.

I should say that it's not a written guarantee of the exact instant, but it's what we call "deterministic". The lifetime of every object is clearly defined, and you know with a great deal of assurance when the release is going to kick in. Also, the actual collection happens in a very incremental fashion, as we discussed. It isn't one big chunk, but is distributed in very small chunks, so it's constantly happening over very small time periods. Because of this, there are no long pauses for collection cycles, so the chances of jank happening are far lower. You can clearly see the state of all of your objects and can reason about when a release will kick in and when it won't.

Many folks say that this is the primary reason why Android was considered to be pretty janky in its early days. It never had the buttery smooth experience that our iOS counterparts enjoyed. With recent efforts, like Google's Project Butter, this has changed significantly, and the arguments don't necessarily hold anymore. But the underlying mechanics are still the same: on Android you still have a GC that kicks in, while on iOS reference counting happens incrementally at each point.

Along the same lines, the other disadvantage of GC is RAM usage. With reference counting, objects are released as soon as they can no longer be referenced. The minute an object's side tally drops to zero, it gets collected immediately. In memory-constrained environments like mobile devices, this is great. You don't want to hold onto a reference any longer than you actually need to, because once the reference count goes down, you immediately open up usable memory. Your RAM is reclaimed faster. With garbage collection, on the other hand, you keep accumulating garbage for a longer period of time. Until the GC finally decides to kick in, your RAM is basically going to stay used up.

This is also why (again, in the early days) Android phone manufacturers used to advertise, "Our phones now have 2 GB or 3 GB of RAM," but the iPhone (for the longest time) had just 1 GB of RAM. People would go nuts online: "How is it that Apple is so cheap? They don't want to spend any of their own money, so they put just 1 GB of RAM in their phones, and it still manages to beat the UI from Android phones in terms of performance!"

In the early days of Android, the big reason for this difference was that the garbage collection wasn't really optimized. Again, I keep saying "the early days". Now the GC process is tuned to run much smarter: instead of waiting for the RAM to get full, it can kick in earlier, at points where it isn't going to stall the UI. There are other episodes that we've done, and other podcasts that talk in detail about this process. Just keep that in mind.

At this point, you’re probably thinking, “Wait a minute, this whole reference counting thing is probably a better idea than garbage collection.” You’ve already seen its advantages. Well, it’s not all rosy there, either. Let’s talk about some of the disadvantages of ARC and the advantages of garbage collection.

Retain cycles were actually the topic that spawned this all for me. My iOS colleague was saying that retain cycles were a pain in the butt to deal with. They happen because of circular references. What does that mean? Let me give an example: say that Object A references Object B, and Object B references Object C, so you have this straight "A depends on B depends on C" structure. But then, Object C goes all the way back and references Object A. Now you have a circular reference: "A depends on B, B depends on C, and C comes back around and depends on A."

If you think about it, this is actually a big problem for reference counting. Even today, iOS engineers have nightmares about this, because it's so simple to introduce a circular reference. Think about the reference count tally that's happening in the background. If A depends on B, B depends on C, and C depends back on A, then every object in the loop always has at least one reference pointing at it, so none of those counts can ever drop to zero, and none of that memory is ever released.

This is very easy to introduce. You may think that it's a contrived example, but it really happens. Take a simple example: say you have a parent object in your application, a father or a mother. That parent has a child, so there's a strong reference from the parent to his or her child. But the child object also holds a reference back to its parent, so there's a strong reference going the other way too. Parent depends on child in a strong reference; child depends on parent in a strong reference. It's a retain cycle.
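Sketched in Java, just to show the shape of the problem (in actual Java this particular code is fine, because the GC copes with cycles, as we'll see shortly; under reference counting, this same shape leaks):

```java
// Two classes holding strong references to each other.
class Parent {
    Child child;       // strong reference: parent -> child
}

class Child {
    Parent parent;     // strong reference: child -> parent
}

public class Cycle {
    public static void main(String[] args) {
        Parent parent = new Parent();
        Child child = new Child();
        parent.child = child;    // child's count can never drop below 1
        child.parent = parent;   // parent's count can never drop below 1
        // Under reference counting, even after both locals go out of scope,
        // each object still keeps the other's count at 1: a retain cycle.
    }
}
```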

I'm being a little dramatic, but the way that you deal with this is to essentially say, "No, no, no. One of these references has to be a weak reference." Conventionally, you keep the parent-to-child reference strong and make the child's back reference to the parent weak. This type of weak reference is very similar to weak references in Java, so I'm not going to go into those details.
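The Java analog of that fix would look something like the sketch below: the same Parent/Child shape, but with the back reference held through java.lang.ref.WeakReference, which doesn't keep its target alive:

```java
import java.lang.ref.WeakReference;

class Parent {
    Child child;                      // strong: parent -> child
}

class Child {
    WeakReference<Parent> parent;     // weak: child -> parent, which breaks the cycle

    void greetParent() {
        Parent p = parent.get();      // may return null once the parent is gone
        if (p != null) {
            System.out.println("parent is still alive");
        }
    }
}

public class WeakCycle {
    public static void main(String[] args) {
        Parent parent = new Parent();
        Child child = new Child();
        parent.child = child;
        child.parent = new WeakReference<>(parent); // doesn't pin the parent in memory
        child.greetParent();
    }
}
```

Swift's `weak` references are zeroed out automatically in much the same spirit, though the mechanics differ.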

But this is still a real problem. Even with Swift, Objective-C, and all of the cool new things that they have, iOS developers still have to worry about retain cycles, because circular references are an inherent problem, one that's pretty easy to run into. As an iOS developer, you have to remember not to shoot yourself in the foot with these bi-directional references.

If you're curious, you may ask, "Wait a second, isn't that also the case for us as Android developers?" If you think about it, those references never go away; they're all still intact. How does the GC figure this out? Doesn't it also get tricked into thinking that we still need these references? Won't they live in memory forever and never be released? If you remember what I mentioned earlier about how GC works, this actually won't happen, which is pretty cool. Because of the way that GC works, the evaluation starts from the GC root and keeps traversing and marking those objects.

But if it so happens that a chain of objects is not reachable at some spot, which is exactly what happens here, the GC is free to collect all of those objects. Starting from the GC root, the traversal never touches those references. It doesn't matter if you're in your happy, private, merry-go-round reference circle: as long as you're not reachable from a GC root, you will be collected. If you dig into the computer science, the "mark and sweep" algorithm was a kind of breakthrough precisely because it was the first that could actually reclaim circular references. Even to this day, ARC still cannot deal with circular references on its own.
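You can actually watch this happen on the JVM. Here's a small experiment; note that System.gc() is only a hint to the runtime, so the collection isn't guaranteed to have happened by the time we check, but on a typical JVM it usually has:

```java
import java.lang.ref.WeakReference;

class Node {
    Node other;   // used to form the cycle
}

public class CycleCollected {
    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.other = b;
        b.other = a;                       // a <-> b: a circular reference

        // A WeakReference lets us observe 'a' without keeping it alive.
        WeakReference<Node> watcher = new WeakReference<>(a);

        a = null;                          // drop the only root references;
        b = null;                          // the cycle is now unreachable from any GC root

        System.gc();                       // politely ask for a collection

        // Despite the internal cycle, both objects are eligible for reclamation,
        // because reachability is judged from the GC roots.
        System.out.println(watcher.get() == null
                ? "cycle was collected"
                : "not collected (yet)");
    }
}
```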

The final thing is that, when you look at how these memory-management concepts have evolved, garbage collection is supposed to be more efficient for larger, more complicated systems. As our systems become more complex, GC will eventually win. This is because large applications tend to have non-trivial object life cycles, and we already have that today. So you wind up spending a lot of your time trying to figure out, "How exactly do these life cycles work? Do I have a reference here? Do I have a reference there? Is there a retain cycle here?" With garbage collection, you get to write your program much faster, and (in the long run) you'll just be more efficient. Therein lies the huge advantage.

I’ll leave you with some parting thoughts that I found on a Reddit thread. Android was originally designed to be a general purpose computing platform, so there were many complex things like background services and true multitasking that came to us early. A large part of the reason why we were able to get these things early was because we had garbage collection that was built to accommodate these complicated systems. But what that meant was that there were compromises made, and UI fluidity and RAM usage suffered. We bit the bullet there, but we got some of these other features.

By the way, the Android Runtime, which is what we currently use, is super optimized. With initiatives like Project Butter, most of these differences are now almost indistinguishable. The performance and UI fluidity are almost neck and neck.

iOS, on the other hand, was designed from the ground up to be very smooth and responsive. Low input latency and smooth redraws were top priorities, which is why early versions of iOS were super smooth compared to early versions of Android. But again, they had to make compromises. Reference counting is much easier to do with simple systems, but it's very hard to pull off in an efficient way with complicated systems. This, in my opinion, is the main reason why iOS got features like background services and multitasking super late relative to the Android world. Obviously, they've caught up with Android and have a lot of similar features these days.

So, in the end, it's neck and neck. Everyone is pretty much in the same boat. There isn't necessarily one better way. It's not easy to say that garbage collection is better than reference counting or vice versa, because each of them has its advantages. Both Google and Apple are obviously coming up with some really cool and innovative ways of circumventing the problems that are inherently present in each of their mechanisms.

I hope you enjoyed this deep dive into a rather pithy topic. If you have questions, feel free to hit me up on Twitter (@kaushikgopal). I had to do a lot of research, especially in the iOS part of the world, since I don't really do iOS development. I could have said something that's not factually accurate about iOS, so I'll make sure to include all the different links and resources that I used for my research in the show notes. That's all, folks!