Kotlin interop: mixing Kotlin and Java ButterKnife-annotated activities

I’ve been working with Kotlin for a while, mainly for side projects and toy projects. Since the Google I/O 2017 announcements it has become clear that there are no more reasons or excuses not to use it in production.

One of the big selling points of Kotlin is that you can start small, converting a class or two, or creating new ones in Kotlin, while keeping all the remaining code in Java. So, interop between the two languages is almost 100% transparent. Almost.

While migrating a small project step by step, I started converting activities to Kotlin. Those activities use ButterKnife (I’m using the current version, 8.7.0) to inject the views. After converting the first activity I stumbled upon a problem with the annotation processor: in the Gradle script you have to use either annotationProcessor or kapt, but not both at the same time. So, you have to choose:

  • using only annotationProcessor means the processor will not see the Kotlin classes, so injection silently fails at runtime,
  • using only kapt makes compilation fail.

The final workaround I found was:

  • using kapt3 (by applying the kotlin-kapt plugin in the Gradle script; see the build-script sketch after this list), and
  • adding the @JvmField annotation next to the ButterKnife annotations, so the Kotlin compiler generates plain public fields instead of properties with getters and setters.
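For reference, this is roughly what the build script ends up looking like. The sketch below uses the Gradle Kotlin DSL to keep all the snippets in this post in Kotlin; the sample project itself most likely uses the equivalent Groovy script (apply plugin: 'kotlin-kapt' plus a kapt dependency):

// build.gradle.kts (sketch, Gradle Kotlin DSL): the important part is that the
// ButterKnife compiler goes through kapt, with the kotlin-kapt plugin applied.
plugins {
    id("com.android.application")
    kotlin("android")
    kotlin("kapt")   // kapt3
}

dependencies {
    implementation("com.jakewharton:butterknife:8.7.0")   // 'compile' in older setups
    // kapt instead of annotationProcessor, so Kotlin classes are processed too
    kapt("com.jakewharton:butterknife-compiler:8.7.0")
}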

By applying kapt3 we fix the compilation error involving “kotlin.jvm.internal.FunctionReference.<init>(ILjava/lang/Object;)V”, and by exposing the Kotlin properties as plain old Java fields we allow the ButterKnife compiler to find the fields to inject, as it is unable to find Kotlin properties. The converted activity ends up looking like the sketch below.
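A minimal sketch of the converted activity (the layout and view ids are placeholders, not taken from the sample project):

import android.app.Activity
import android.os.Bundle
import android.widget.TextView
import butterknife.BindView
import butterknife.ButterKnife

class NextActivity : Activity() {

    // @JvmField turns the property into a plain public field,
    // so the ButterKnife processor can generate code that assigns it directly.
    @JvmField @BindView(R.id.next_text)
    var nextText: TextView? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_next)
        ButterKnife.bind(this)
        // Replacing the text proves the view was injected
        nextText?.text = "Injected from Kotlin"
    }
}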

You can find the source code with different options in different branches (the one with the final solution is kotlin-workaround) in this GitHub project.

The project has two activities: MainActivity, which is written in Java and kept in that language, and NextActivity, which is converted to Kotlin. A simple test suite checks that both activities are injected correctly; each activity contains a TextView whose text is replaced from code to prove that injection succeeded.
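The actual tests in the repo may look different, but the kind of check described above corresponds roughly to this Espresso sketch (the view id and expected text are placeholders):

import android.support.test.espresso.Espresso.onView
import android.support.test.espresso.assertion.ViewAssertions.matches
import android.support.test.espresso.matcher.ViewMatchers.withId
import android.support.test.espresso.matcher.ViewMatchers.withText
import android.support.test.rule.ActivityTestRule
import android.support.test.runner.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class NextActivityTest {

    @get:Rule
    val activityRule = ActivityTestRule(NextActivity::class.java)

    @Test
    fun textIsReplacedAfterInjection() {
        // If injection had silently failed, the text would never have been replaced
        onView(withId(R.id.next_text)).check(matches(withText("Injected from Kotlin")))
    }
}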

Hope this tip is useful!

Off-topic: mechanical calculators (cardboard and wood!)

A few days ago I stumbled upon this fun article about a mechanical 4-bit calculator made of cardboard. Even though it’s quite unreliable, it is interesting to see the methodical approach and all the creativity needed to make this work with only cardboard and marbles.

In the comments of the article there is a YouTube video of a wooden calculator that implements 6-bit addition. It is very reliable, and it takes a completely different approach.

Disabling (and removing) code on release builds

Repo updated on 2017-03-13: added extra debug configurations to test the different options for enabling/disabling minification, optimization and obfuscation. See the discussion in Optimize without obfuscate your debug build, including this comment.

Sometimes you have to add code to your applications purely for debugging purposes. This can be very useful, and it is often kept around because it helps with the development and debugging of different parts of the application. But some of this code can have unintended consequences:

  • it can reveal sensitive data to a potential attacker (internal URLs, session cookies, etc.),
  • it can have a performance impact on your application (excessive logging, operations that are not needed in release builds, etc.),
  • it can lower the security of your application (backdoor-like features that help while debugging but can disable or completely bypass certain security features, etc.).
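The full post goes into the details, but as a rough illustration of the kind of code involved (my own sketch, not taken from the post): debug-only behaviour is often guarded by BuildConfig.DEBUG, which is a compile-time constant, so minification can strip the whole branch from release builds.

// Sketch: BuildConfig is the module's generated class; the logging is just an example
object DebugTools {
    fun dumpSessionInfo(tag: String, sessionCookie: String) {
        if (BuildConfig.DEBUG) {  // constant false in release builds, so the branch is dead code
            android.util.Log.d(tag, "session cookie: $sessionCookie")
        }
    }
}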

Continue reading Disabling (and removing) code on release builds

Jenkins builds with a different Java version

We have a Jenkins server taking care of CI for an Android project. On the server we were using Java 7, but after updating a few dependencies we needed Java 8 to run some Gradle plugins. After the change, we suddenly started getting this error in the builds:

* What went wrong:
A problem occurred evaluating project ':my-project-app'.
> java.lang.UnsupportedClassVersionError: com/android/build/gradle/AppPlugin : Unsupported major.minor version 52.0

After googling it, I found the following SO answer, which points to the need for Java 8 (class-file major version 52): How to fix java.lang.UnsupportedClassVersionError: Unsupported major.minor version.
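As a quick way to double-check which Java version a class file targets, the major version can be read straight from the bytecode header. This small standalone sketch (not part of the original post) prints it for any .class file passed as an argument:

import java.io.DataInputStream
import java.io.FileInputStream

fun main(args: Array<String>) {
    // Class-file header: magic (4 bytes), minor version (2 bytes), major version (2 bytes)
    DataInputStream(FileInputStream(args[0])).use { input ->
        require(input.readInt() == 0xCAFEBABE.toInt()) { "Not a class file" }
        val minor = input.readUnsignedShort()
        val major = input.readUnsignedShort()
        println("major.minor = $major.$minor (51 = Java 7, 52 = Java 8)")
    }
}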

So, given that:

  1. the server has both Java 7 and Java 8 installed,
  2. Jenkins itself is configured to run on Java 7, and changing that could cause problems (and right now any failure could put our schedule in jeopardy),

we decided to just run the Gradle build with Java 8. To configure it, we made the following change in the job configuration: in the Build Environment section, enable Inject environment variables to the build process and add the following to Properties Content:

JAVA_HOME=(path_to_jdk)

in our case:

JAVA_HOME=/usr/lib/jvm/java-8-oracle

And hit Rebuild!

npm install not installing all the packages

A few days ago I was trying to install a project’s dependencies with a simple npm install, but in the instructions I was given there was an extra step to manually install one additional package. When I checked package.json, the package was there… So, what was going on!?

After searching for a while, it turned out that a previous developer had used npm shrinkwrap “to predownload the packages” (or something like that, from what he said). This command “freezes” the list of packages at their current versions, so the next time you run npm install you get exactly the same dependency versions, even if you use imprecise version specifications in your package.json. It does this by creating an npm-shrinkwrap.json file next to your package.json (similar to a pip freeze, but it also affects npm install). The problem is that once you have this file, package.json is ignored, and even if you add a new package there it will not be installed.

So, while this is good for the reproducibility of environments, when you update your package list you also need to update the npm-shrinkwrap.json file. The steps are:

$ npm install my_new_package --save
$ npm shrinkwrap

Or, to update:

$ npm update
$ npm shrinkwrap

If your dependencies are screwed up, as happened to me, the easiest way to clean up the mess is:

$ rm -rf node_modules
$ npm install
$ npm shrinkwrap


Fixing Python virtualenv on OS X

This weekend I’ve been toying with an old Django project I created a while (~2 years) ago. The project uses virtualenv, as is common for this kind of project. It was originally created on a different machine and later migrated to the current one. The original machine ran OS X Lion, I think… or Snow Leopard, not sure. The point is, the project had been migrated through 4 or 5 major OS X versions… and the installed Python was a different version from the one originally used to create the virtualenv.

So, back to the problem: trying to use Anaconda for Sublime Text, I found that the Python interpreter I was specifying in the configuration was not working. After some googling, I found a link (which I’m not able to find again…) on how to troubleshoot the configuration, which in the end boiled down to executing the Python interpreter inside the virtualenv with the following command:

~/Virtualenvs/myvirtualenv/bin/python -c pass

In my case, the result was the following:

dyld: Library not loaded: @executable_path/../.Python
  Referenced from: ~/Virtualenvs/myvirtualenv/bin/python
  Reason: image not found
[1]    74178 trace trap  ~/Virtualenvs/myvirtualenv/bin/python -c pass

After some googling about this problem, I found the following post: “dyld: Library not loaded: @executable_path/../.Python”. Basically: “dump the package list, recreate the virtualenv, reinstall the packages, and move everything there”. In my case, the “move everything there” part was a bit tricky, and “reinstall the packages” was even worse, as the project used some unsupported packages that were impossible to install (no pip support, and installing them manually was quite time consuming…). So I tried something in between, and finally I found the real difference and the reason the interpreter was failing (notice the different Python versions):

$ ls -la old-virtualenv
...
lrwxr-xr-x   1 myuser  staff      78 29 nov  2014 .Python -> /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Python
...

$ ls -la new-virtualenv
...
lrwxr-xr-x   1 myuser  staff      80  5 mai 22:42 .Python -> /usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/Python
...

So, the real fix in my case was:

$ cd old-virtualenv
$ rm .Python
$ ln -s /usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/Python .Python

Problems with Instant Run (Android Studio 2.0 beta 4) and Retrofit

This morning I was playing with a toy app of mine that uses Retrofit, and I found the following problem with it (and Instant Run):

02-16 07:41:55.550 2976-2976/com.test.android A/art: art/runtime/thread.cc:1329] Throwing new exception 'length=227; index=1017' with unexpected pending exception: java.lang.ArrayIndexOutOfBoundsException: length=227; index=1017

The exception was thrown in the call itself, so it was either Retrofit doing strange things or something deeper. The problem turned out to be caused by some interaction between ART and Instant Run, and it had already been reported.

The original reporter has opened a bug in the AOSP tracker, and it is still unresolved. See also (Retrofit).

All in all, there is currently no fully reliable solution. So, if you hit this problem in a call that uses Retrofit, the best course of action is to disable Instant Run and rebuild: Preferences -> Build, Execution, Deployment -> Instant Run -> uncheck Enable Instant Run to hot swap code/resource changes on deploy (enabled by default).

Disable Instant Run