Realm Migrations Supercharged with Dagger

My current tech obsessions are Realm, Dagger, and Unit Testing. Therefore, I'm always looking for opportunities to improve my code in some way that involves one or more of the above. With that in mind, I realized that the recommended way of handling migrations in Realm can be improved significantly by means of Dagger 2.

We're going to be refactoring the following Migration class:
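To set the stage, here's a representative sketch of that kind of monolithic migration class; the Person schema and its age and email fields are illustrative assumptions, not the original gist code:

```java
import io.realm.DynamicRealm;
import io.realm.RealmMigration;
import io.realm.RealmSchema;

public class Migration implements RealmMigration {

    @Override
    public void migrate(DynamicRealm realm, long oldVersion, long newVersion) {
        RealmSchema schema = realm.getSchema();

        // Migrate from version 1 to 2: add an age field to Person
        if (oldVersion == 1) {
            schema.get("Person").addField("age", int.class);
            oldVersion++;
        }

        // Migrate from version 2 to 3: add an email field to Person
        if (oldVersion == 2) {
            schema.get("Person").addField("email", String.class);
            oldVersion++;
        }
    }
}
```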

With only two version updates, we already have a decent-sized method to deal with. What's more, if you didn't start out by writing tests for your migrations, you probably never will once this method grows much longer. But all is not lost: Dagger's Multibinding Support is coming to the rescue. Let's take a look!

Getting Set Up

The first thing we're going to do is create a new Interface, VersionMigration. This will have only the following:

The migrate method takes in a DynamicRealm instance and a long representing the previous version of the schema that you want to migrate from.
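Expressed in code, the interface is a single method; the exact parameter names are assumptions:

```java
import io.realm.DynamicRealm;

public interface VersionMigration {
    void migrate(DynamicRealm realm, long previousVersion);
}
```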

With this available, we can now create two VersionMigration classes that implement our new Interface. Here's the implementation for the Version1Migration class:
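Here's a sketch of what that class might look like; the Person schema and its age field are illustrative assumptions:

```java
import javax.inject.Inject;

import io.realm.DynamicRealm;
import io.realm.RealmObjectSchema;

public class Version1Migration implements VersionMigration {

    @Inject
    public Version1Migration() {
    }

    @Override
    public void migrate(DynamicRealm realm, long previousVersion) {
        // Migrate from version 1 to 2: add an age field to Person
        RealmObjectSchema personSchema = getObjectSchema(realm, "Person");
        personSchema.addField("age", int.class);
    }

    // Pulled into its own method so a test can override it with a mock
    protected RealmObjectSchema getObjectSchema(DynamicRealm realm, String className) {
        return realm.getSchema().get(className);
    }
}
```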

In the migrate method, notice that we've mostly just pasted in the same code we had in our original Migration class. The key difference is that we use the getObjectSchema method to retrieve the schema instead of grabbing it directly from the Realm. I'll explain why we did it this way momentarily.

Next we are going to create a new Dagger module, named MigrationsModule. Then, by means of the @Provides, @IntoMap, and @IntKey annotations, I'm defining how I want my VersionMigrations to be created and injected. The combination of these annotations allows all of the VersionMigration Providers to be injected into a Map. The key to the map will be an Integer that corresponds to the previous schema version that the given VersionMigration should be used for.
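A sketch of that module, assuming Version1Migration and Version2Migration both have @Inject constructors; each key is the previous schema version that the class migrates from:

```java
import dagger.Module;
import dagger.Provides;
import dagger.multibindings.IntKey;
import dagger.multibindings.IntoMap;

@Module
public class MigrationsModule {

    @Provides
    @IntoMap
    @IntKey(1) // migrates from schema version 1 to 2
    VersionMigration provideVersion1Migration(Version1Migration migration) {
        return migration;
    }

    @Provides
    @IntoMap
    @IntKey(2) // migrates from schema version 2 to 3
    VersionMigration provideVersion2Migration(Version2Migration migration) {
        return migration;
    }
}
```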

For example, if my previous schema version is 2 and the current schema version is 3, then I would use the Version2Migration class. There would be no need for me to use any other VersionMigration. However, if my previous schema version is 1 and the current schema version is 3, then I would use both the Version1Migration and Version2Migration classes. This will all come together once we take a look at our updated Migration class.
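A sketch of what the updated Migration class might look like; field and variable names are assumptions:

```java
import java.util.Map;

import javax.inject.Inject;
import javax.inject.Provider;

import io.realm.DynamicRealm;
import io.realm.RealmMigration;

public class Migration implements RealmMigration {

    private final Map<Integer, Provider<VersionMigration>> versionMigrations;

    @Inject
    public Migration(Map<Integer, Provider<VersionMigration>> versionMigrations) {
        this.versionMigrations = versionMigrations;
    }

    @Override
    public void migrate(DynamicRealm realm, long oldVersion, long newVersion) {
        // Apply each VersionMigration in order until we reach the new version
        for (long version = oldVersion; version < newVersion; version++) {
            Provider<VersionMigration> migrationProvider =
                    versionMigrations.get((int) version);
            if (migrationProvider != null) {
                migrationProvider.get().migrate(realm, version);
            }
        }
    }
}
```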

Our class has now been "Daggerized"! We start out by having a Map of VersionMigration Providers injected into our constructor. Recall that in our MigrationsModule we used those three annotations: @Provides, @IntoMap, and @IntKey. Based on that, Dagger is clever enough to gather both of our Providers together and store them in a Map that uses the Integer constant we defined as the key.

Moving down to our migrate method, we have a simple for loop that starts with the oldVersion and runs until we reach the newVersion. Keep in mind that the oldVersion corresponds to the schema version that is active on the user's device; it's the version that we want to migrate from so that we can be on the newVersion. The main Dagger awesomeness happens inside the loop: we look in our versionMigrations Map for the correct Provider using the loop variable's value, get an instance of the appropriate VersionMigration, and execute its migrate method. That's it. This means that no matter how many migrations we need in the future, we never have to touch this class again.

What's more we also have our migration logic isolated into testable bits. Here's a look at some unit tests for the Version1Migration class using JUnit and Mockito.
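A sketch of one such test; the expected field name is an assumption:

```java
import org.junit.Test;

import io.realm.DynamicRealm;
import io.realm.RealmObjectSchema;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class Version1MigrationTest {

    @Test
    public void migrateAddsAgeFieldToPersonSchema() {
        DynamicRealm realm = mock(DynamicRealm.class);
        final RealmObjectSchema personSchema = mock(RealmObjectSchema.class);

        // Override getObjectSchema to sidestep mocking RealmSchema directly
        Version1Migration migration = new Version1Migration() {
            @Override
            protected RealmObjectSchema getObjectSchema(DynamicRealm realm,
                                                        String className) {
                return personSchema;
            }
        };

        migration.migrate(realm, 1);

        verify(personSchema).addField("age", int.class);
    }
}
```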

Remember I said I would explain the use of the getObjectSchema method? When I create my migration object in the test, I override getObjectSchema to return a mock of the RealmObjectSchema class. I ran into several weird exceptions when I tried to mock the RealmSchema class directly, but this workaround worked just fine.

I hope you can use a similar approach to make your code more testable. All of the code snippets can be found in this gist. Thanks for reading!

If you would like to view additional Android content, I encourage you to check out my video tutorials available on

Conference Speaking One Year Later: Every Single Thing I've Learned

I started speaking at Technical Conferences a year ago and I've learned so many things since then. Things about the process of putting on a conference, choosing speakers, writing talks, crafting slides, interacting with the audience, and more.

I've never had a fear of public speaking per se; I just felt like I didn't know "enough" to get up in front of dozens of people and share what "little" I did know. A few wonderful women in the Android Community helped me come to my senses and realize that I did have "something to say". Not only that, but what I wanted to share was valid, entertaining, and useful.

If you're interested in becoming a Conference Speaker, I hope that you find at least one thing in this post that can help you on your journey.

Background Work with Android Job and Dagger

Background work on Android can be challenging when you have to support a wide range of API levels. Specifically, you can use AlarmManager, JobScheduler, or GCM Network Manager depending on your minimum API level and whether the device has Play Services. To help abstract away which implementation you're using to perform background work, the good folks at Evernote have open-sourced Android Job.

Android Job works by first allowing you to define how you want your jobs to be created, by means of the Job Creator class. Then you can schedule requests using the Job Manager and have the confidence that they will be run when the requirements are met. In the image below you can see a representation of the various components involved with using Android Job in your application. Each Job is identified by a tag; this is just a simple String that is used to differentiate the various jobs in your application.

Let's take a look at each piece!

Getting Set Up

In order to use Android Job you just need to add the dependency to your build.gradle file. If you plan on taking advantage of GCM Network Manager for older devices, you will need to include that dependency as well.
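The dependency block might look like this; the version numbers are assumptions and should be checked against the latest releases:

```groovy
dependencies {
    implementation 'com.evernote:android-job:1.2.6'

    // Only needed if you want GCM Network Manager as a fallback on older devices
    implementation 'com.google.android.gms:play-services-gcm:16.0.0'
}
```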

Next, you'll want to create your application's Job Creator. This class is responsible for providing the correct Job class that corresponds to the tag it receives in its create() method. The Job Creator is a Singleton that uses Dagger's Multibinding Support to house a Map of Job Providers. This allows you to add new Jobs to your application without having to modify the Creator class. Inside create(), you look in the Map for the correct Provider and return an instance of the appropriate Job.
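A sketch of such a Job Creator; the AppJobCreator class name is an assumption:

```java
import java.util.Map;

import javax.inject.Inject;
import javax.inject.Provider;
import javax.inject.Singleton;

import com.evernote.android.job.Job;
import com.evernote.android.job.JobCreator;

@Singleton
public class AppJobCreator implements JobCreator {

    private final Map<String, Provider<Job>> jobs;

    @Inject
    public AppJobCreator(Map<String, Provider<Job>> jobs) {
        this.jobs = jobs;
    }

    @Override
    public Job create(String tag) {
        // Look up the Provider for this tag and return a fresh Job instance;
        // returning null tells Android Job the tag is unknown
        Provider<Job> provider = jobs.get(tag);
        return provider != null ? provider.get() : null;
    }
}
```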

I like to keep the definition of my Jobs in a separate Dagger module, named JobsModule. This helps to isolate those dependencies. Notice that I'm first defining my Job Manager, which uses the Job Creator that we defined previously. Then, by means of the @Provides, @IntoMap, and @StringKey annotations, I'm defining how I want my Jobs to be created and injected. The combination of these annotations allows all of the Job Providers to be injected into the Map, with their associated tags, for later use when a Job Request is received.
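A sketch of that module; the AppJobCreator and AddRecordJob names (and the TAG constant) are assumptions used for illustration:

```java
import android.content.Context;

import javax.inject.Singleton;

import com.evernote.android.job.Job;
import com.evernote.android.job.JobManager;

import dagger.Module;
import dagger.Provides;
import dagger.multibindings.IntoMap;
import dagger.multibindings.StringKey;

@Module
public class JobsModule {

    @Provides
    @Singleton
    JobManager provideJobManager(Context context, AppJobCreator jobCreator) {
        // Create the manager once and register our creator with it
        JobManager jobManager = JobManager.create(context);
        jobManager.addJobCreator(jobCreator);
        return jobManager;
    }

    @Provides
    @IntoMap
    @StringKey(AddRecordJob.TAG)
    Job provideAddRecordJob(AddRecordJob job) {
        return job;
    }
}
```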

Creating a Job

The heavy lifting is done by each Job class in its onRunJob() method. A simple example is queuing a Job that adds a new record to an API endpoint. Let's assume you're using Retrofit for your API needs. You would annotate the constructor of your Job with @Inject to receive the needed dependencies, namely your API resource. Then you would perform the network operation inside of the onRunJob() method as usual. What's unique is that you must return a Result from the method so that the system knows whether to attempt to run your Job again at a later time.
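A sketch of such a Job, assuming a hypothetical Retrofit interface RecordService with an addRecord call:

```java
import java.io.IOException;

import javax.inject.Inject;

import com.evernote.android.job.Job;

import retrofit2.Response;

public class AddRecordJob extends Job {

    public static final String TAG = "add_record_job";

    private final RecordService recordService; // hypothetical Retrofit interface

    @Inject
    public AddRecordJob(RecordService recordService) {
        this.recordService = recordService;
    }

    @Override
    protected Result onRunJob(Params params) {
        try {
            String name = params.getExtras().getString("record_name", "");
            Response<Void> response = recordService.addRecord(name).execute();
            // SUCCESS ends the job; RESCHEDULE asks the system to retry later
            return response.isSuccessful() ? Result.SUCCESS : Result.RESCHEDULE;
        } catch (IOException e) {
            return Result.RESCHEDULE;
        }
    }
}
```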

Scheduling a Request

The final piece of the puzzle is building and then scheduling your requests. In order to send a request, you would simply use the schedule() method of the Job Manager class. This method takes a Job Request as its input.

The Job Request is built inside the containing Job class. You can specify the required network connection, device state, and extra data to be used when the request is executed. Below is a typical request that I use when scheduling jobs in my application.
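A sketch of a typical request along those lines, as a static helper inside the Job class; the tag, extras key, and timing window are assumptions:

```java
// Inside the containing Job class (e.g. AddRecordJob)
public static void scheduleJob(JobManager jobManager, String recordName) {
    PersistableBundleCompat extras = new PersistableBundleCompat();
    extras.putString("record_name", recordName);

    JobRequest request = new JobRequest.Builder(TAG)
            .setRequiredNetworkType(JobRequest.NetworkType.CONNECTED)
            .setExecutionWindow(30_000L, 60_000L) // run 30-60 seconds from now
            .setRequirementsEnforced(true)
            .setExtras(extras)
            .build();

    jobManager.schedule(request);
}
```

The extras travel with the request, so the values put into the PersistableBundleCompat here are what params.getExtras() returns later inside onRunJob().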

I've really enjoyed working with this library and once all the pieces are set up it's been easy to use. All the code snippets can be found in this gist. Thanks for reading!

If you would like to view some of my video content, I encourage you to check out my video course and bite-sized tutorials available on