Security & Pen Tests

The security of a product (backend, iOS and Android apps) can be evaluated with penetration tests, often done by specialized companies. Android apps are already protected by the Android sandbox, and critical operations like authentication, authorization and content storage should happen in a secure backend, so critical findings in apps are rare. However, pen tests usually produce common findings that are the same for most apps.

Management or legal might treat these common findings like a checklist where everything must be implemented, because they are unsure of the costs and benefits and err on the side of safety. Instead, advise them on a case-by-case basis.

In my experience the costs of addressing these common findings (complexity, bad UX, alienating users, false positives breaking the app) are often high and the potential benefits are low, because the OS leaves little room for improvement, so often no action is necessary at all. Not all apps are equal though: for example, the benefits of making the app harder to analyze might match the costs for high-value targets like banking apps.

Security Tips

  • Beware of security by obscurity. It is incredibly easy to decompile apps or look at their resources. It’s also possible to modify the code (remove protections) and recompile it. Consider your app’s code fully transparent and modifiable.
  • Adversaries can run and fully analyze your app in an environment that you have no control over. Most detection heuristics are trivially circumvented. This is not an issue if your app is secure by design.
  • Let the OS handle security instead of rolling your own limited solutions. The most influential thing you can do is to raise the minSdkVersion, because old OS versions (older than 3 to 5 years) no longer receive security patches.
  • Storing sensitive data in external storage used to be a critical issue, which is why it was solved on the OS side. The app’s internal storage directory is sandboxed, so it cannot be accessed by other apps, and it has been encrypted automatically since Android 10. Nevertheless, it’s always a good idea to minimize persistent client-side storage of sensitive data.
  • Don’t roll your own crypto (algorithms), instead use the Android crypto APIs.
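To illustrate the last point, here is a sketch of authenticated encryption using only the standard JCA APIs (AES-GCM). The class and method names are my own; on Android, the key should additionally be generated in, and never leave, the AndroidKeyStore:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CryptoSketch {
    private static final int GCM_TAG_BITS = 128; // 16-byte authentication tag
    private static final int IV_BYTES = 12;      // recommended GCM nonce size

    public static SecretKey newKey() {
        try {
            KeyGenerator gen = KeyGenerator.getInstance("AES");
            gen.init(256);
            return gen.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Returns iv + ciphertext + tag; a fresh random IV is prepended.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
            byte[] ciphertext = cipher.doFinal(plaintext);
            byte[] out = new byte[IV_BYTES + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, IV_BYTES);
            System.arraycopy(ciphertext, 0, out, IV_BYTES, ciphertext.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Throws if the ciphertext or authentication tag was tampered with.
    public static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key,
                    new GCMParameterSpec(GCM_TAG_BITS, ivAndCiphertext, 0, IV_BYTES));
            return cipher.doFinal(ivAndCiphertext, IV_BYTES,
                    ivAndCiphertext.length - IV_BYTES);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Note that GCM authenticates as well as encrypts, so tampered data fails loudly instead of decrypting to garbage.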

Common Findings

Clear Text Traffic

This finding is likely a false positive: since Android 9, clear text communication (not using HTTPS) is blocked by default. Typical false positives include deeplink URLs in the manifest (so the app is also opened for HTTP deeplinks) or support for local dev servers.

If you are actually using clear text communication, you should have a very good reason, e.g. an exception in the network security config for a legacy backend without a maintainer.
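Such an exception can be scoped to a single domain via the network security config, so the rest of the app keeps the secure default. A sketch, where the domain is a placeholder:

```xml
<!-- res/xml/network_security_config.xml, referenced from the manifest
     via android:networkSecurityConfig. legacy.example.com is a placeholder. -->
<network-security-config>
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="false">legacy.example.com</domain>
    </domain-config>
</network-security-config>
```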

Tapjacking

Other apps may use overlays to listen for touch events (e.g. to get passwords).

There are two ways this can happen:

  • Overlays:
    • User has Android 6.0.1 without the June security patches or Android <4.0.3. These versions allowed apps to show a Toast (transparent or with misleading content) in front of other apps and intercept touch events.
    • On newer OS versions the user has to explicitly give permission for apps to draw over other apps.
    • StackOverflow solutions will recommend using android:filterTouchesWhenObscured. This will break your app for users of legitimate overlay apps like blue light filters. Instead consider raising the minSdkVersion, because this is fixed on the OS side, and these old versions also contain other security issues, like Heartbleed in Android 4.1.1.
  • User has a malicious keyboard app:
    • While iOS has an API for disabling third party keyboards, there is no such thing on Android.
    • Unlike iOS there is no “first party” keyboard, because every OEM can preinstall their own keyboard app.
    • Apart from that, I think users should be allowed to use their preferred keyboard.

You can add FLAG_SECURE to windows so they don’t appear in screenshots or the recent-tasks preview. Preventing screenshots will annoy users, so this only makes sense in very rare cases like password managers that show plaintext passwords.
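Setting the flag is a single framework call in the Activity (sketch only, not runnable outside Android):

```java
// In onCreate(), before setContentView(): exclude this window from
// screenshots, screen recording and the recent-tasks preview.
getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
        WindowManager.LayoutParams.FLAG_SECURE);
```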

Secrets in Code

Findings will usually assign the same severity to every secret (API keys, passwords, keystores, etc.). But not all secrets are equal. We have to consider these points:

  • Does the secret have to be included in the app (APK or at runtime)?
  • What can an adversary do with a leaked secret?
  • Can we easily revoke and replace this secret?
  • Is the codebase hosted in a public repository (e.g. open source project) or a private repository (proprietary project)?

For public projects it makes sense to just hide all kinds of secrets in local.properties or environment variables and provide default secrets. This makes it easier to configure forks and prevents automated bots from grabbing them. If you pushed a secret to a public repository, consider it compromised.
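For example, a Gradle snippet along these lines reads the key from the untracked local.properties and falls back to a harmless default; the property name and placeholder value are my own:

```groovy
// build.gradle: read MAPS_API_KEY from local.properties, with a dummy default
def localProps = new Properties()
def propsFile = rootProject.file('local.properties')
if (propsFile.exists()) {
    propsFile.withInputStream { localProps.load(it) }
}

android {
    defaultConfig {
        manifestPlaceholders['mapsApiKey'] =
                localProps.getProperty('MAPS_API_KEY', 'dummy-key-for-forks')
    }
}
```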

For private projects it is preferable to save complexity and make building and deploying the app as easy as possible (convention over configuration). Here we can distinguish between secrets that must end up in the app and those that don’t:

  • Many libraries (e.g. Google Maps) store their API keys in the manifest, where anyone can simply open the file and copy them. This is by design, because there are hard restrictions on what these keys can do. As long as a key has to end up in the app at some point, a determined hacker can intercept it, even if you provide it at runtime via a backend. You can consider this kind of API key publicly available, and keeping it in a private code repository is fine: it’s probably easier to extract the key from the app than to gain access to the codebase.
  • For other cases, check whether the secret needs to appear in the app at all. Maybe it is only used at compile time or for an internal test variant of the app. Maybe it can be stored in (and never leave) the backend, which then acts as a proxy to external APIs. Especially risky general purpose secrets like AWS tokens should end up in neither the app nor the codebase. Be sure to also rewrite your git commit history when removing them.

The keystore with the signing key can be checked into private repositories. In my opinion this is completely fine, for the following reasons:

  • The keystore is useless without the keystore password and key password (unless the passwords can be brute forced; in that case move the key to a keystore with a longer password). If an adversary also has access to the passwords, e.g. via a secrets manager or CI server, they most likely also have access to other secrets like the keystore itself.
  • If the keystore contains the upload key for Google Play App Signing, then it would be of no use to an adversary (even if they had the passwords), as they also need access to Google Play Console (and the passwords) and you can just invalidate the upload key and generate a new one.
  • If it contains the actual signing key, it is still preferable to keep it safe in the repository rather than risk losing access to it. For Google Play, adversaries would additionally need access to the Google Play Console; however, with the keystore password and key password they could sign builds for delivery outside of Google Play.

Information Leakage

If you use Logcat for logging API requests, the logs may contain sensitive user data, tokens, etc. Other apps cannot access your app’s logs (since Android 4.1). They can still be read via adb logcat, but this requires direct access to the user’s device.

If this is an issue, make sure that these logs do not contain sensitive data, or remove them completely with the following ProGuard rules (which make debugging issues in production apps harder):

# Disable logging
-assumenosideeffects class android.util.Log {
    public static boolean isLoggable(java.lang.String, int);
    public static int v(...);
    public static int d(...);
    public static int i(...);
    public static int w(...);
    public static int e(...);
}

No Root Detection

The app can react (e.g. stop working) to running on a rooted device, where the sandbox is not guaranteed.

The benefit is small:

  • It can prevent this scenario: a user runs your app on a rooted device, installs a malicious app and grants it root access. The malicious app can now read your app’s internal data directory. With root detection the user could not use your app at all, so there would be no data to access.
  • Root detection is fragile and easily circumvented.

The cost is high:

  • Implementing effective root detection requires a lot of effort and complexity (see tamper protection).
  • About 3.6% of devices are rooted. Most are custom ROMs, which users install at their own risk. Some devices (OnePlus, Xiaomi) are pre-rooted. Affected users cannot use the app, will be frustrated and post negative reviews.

In my opinion, this is only useful for very high risk apps (banking), if at all. Even then it may be preferable to show users a message that their device is not safe and that they proceed at their own risk.
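If you nevertheless want a lightweight check to trigger such a warning, a naive sketch looks for common su binary locations. The path list is my own selection, and real root hiding tools defeat this trivially, which illustrates how fragile the heuristic is:

```java
import java.io.File;

public class RootCheck {
    // Common locations of the su binary on rooted Android devices
    // (an illustrative, incomplete list).
    private static final String[] SU_PATHS = {
            "/system/bin/su", "/system/xbin/su", "/system/sbin/su", "/su/bin/su"
    };

    // Returns true if a su binary is found at a known path.
    // A heuristic only: root hiding tools remove or rename these paths.
    public static boolean isLikelyRooted() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }
}
```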

No Tamper Protection

This checks the integrity of the app and its environment to find out whether an adversary has recompiled the app or is running it in a hostile environment to analyze it. This usually involves:

  • Root detection (see above)
  • Emulator detection
  • Checking whether a debugger is attached

The benefit is small:

  • It is marginally harder to analyze or recompile the app. Checks can be removed and circumvented.

The cost is high:

  • Implementing effective tamper protection requires a lot of effort and complexity. It is an arms race between protection tools and bypass tools.
  • This might make development harder, e.g. if the app can not run on emulators anymore. Test automation might break.

A small-scale version might make sense: just check the app certificate at runtime to prevent automated recompilation with injected malware.
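A sketch of such a runtime certificate check using PackageManager (API 28+). EXPECTED_SHA256 is a placeholder you would compute from your release certificate; this is a framework fragment, not runnable outside Android:

```java
// Compare the hash of the APK's signing certificate to a pinned value.
PackageInfo info = context.getPackageManager().getPackageInfo(
        context.getPackageName(), PackageManager.GET_SIGNING_CERTIFICATES);
byte[] cert = info.signingInfo.getApkContentsSigners()[0].toByteArray();
String hash = Base64.encodeToString(
        MessageDigest.getInstance("SHA-256").digest(cert), Base64.NO_WRAP);
boolean tampered = !hash.equals(EXPECTED_SHA256);
```

Keep in mind that an adversary who recompiles the app can also remove this check; it only stops fully automated repackaging.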

No Obfuscation

Obfuscation with R8 makes it harder to analyze the app, but will not stop a determined hacker. You should enable it in any case, because R8 can dramatically reduce your APK’s download size through code and resource shrinking.

You have to manage a proguard-rules.pro file and check your release builds for shrinking issues. If this was not done from the start of the project, it might be too late to get it working, because of conflicting and hard-to-debug errors. This is especially difficult if you use proprietary libraries that do not provide their own consumer-rules.pro.
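Typical entries in proguard-rules.pro keep classes that are accessed via reflection; the package name below is a placeholder:

```
# Keep model classes that a JSON library instantiates via reflection
-keep class com.example.app.model.** { *; }
# Keep file names and line numbers for readable crash reports
-keepattributes SourceFile,LineNumberTable
```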

It will slow down builds, so it should only be enabled for releases:

defaultConfig {
    ...
    proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}

buildTypes {
    debug {
        // set to true, if you want to debug code shrinking:
        minifyEnabled false
        shrinkResources false
    }
    release {
        minifyEnabled true
        shrinkResources true
    }
}

No Certificate Pinning

Certificate pinning is not recommended by Google.

The benefit is low. Pinning prevents man-in-the-middle interception of traffic between app and backend only in these additional special cases:

  • One of the roughly 150 certificate authorities is compromised and its certificate is not immediately revoked by an OS update.
  • Developers analyzing the app in a hostile environment (though they can just recompile the app without certificate pinning).
  • Only on rooted devices or devices with Android <7: a third party adds a certificate to the user device’s certificate store (e.g. companies installing certificates on employee devices).

The cost is high:

  • Requires considerable organizational overhead, as you need to keep the pinned certificates up to date.
  • Old app versions will break and you will get negative reviews, unless the app has a force-update mechanism.
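If you decide to pin despite the costs, the platform-supported way is declarative via the network security config rather than custom TrustManager code. The domain, pin values and date below are placeholders; note the expiration attribute, which lets an expired pin set fall back to normal certificate validation instead of breaking the app:

```xml
<!-- res/xml/network_security_config.xml; domain, pins and date are placeholders -->
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">api.example.com</domain>
        <pin-set expiration="2027-01-01">
            <pin digest="SHA-256">primaryKeyPinBase64=</pin>
            <!-- Always include a backup pin for key rotation -->
            <pin digest="SHA-256">backupKeyPinBase64=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```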