In May 2019, Kotlin, a programming language for modern multi-platform applications, became Google’s preferred language for Android app development. As a result, many developers have shifted from using Java, the original language for building Android apps, to embracing Kotlin. According to a recent survey, 62% of developers are now using Kotlin to build mobile apps, with an additional 41% using Kotlin to build web-backend projects, meaning the language is here to stay.
In tandem with Kotlin’s emergence, prominent organizations, including the U.S. Government, are placing greater emphasis on mobile application security. The government’s recent Study on Mobile Device Security, commissioned through the Department of Homeland Security (DHS) in consultation with the National Institute of Standards and Technology (NIST), found that vulnerabilities in applications usually result from a failure to follow secure coding practices, and that these vulnerabilities typically lead to some compromise of a user’s data. The findings serve as a wake-up call to the industry at large.
Now, more than ever, and in light of National Cybersecurity Awareness Month taking place throughout October, it’s important for developers to familiarize themselves with Kotlin and with secure coding best practices for mobile apps built in the language. To that end, let’s look at some common pitfalls:
- Insecure data storage
The Android ecosystem provides several ways to store data for an app. The kind of storage used by developers depends on the type of data stored, the usage of the data, and whether the data should be kept private or shared with other apps.
Unfortunately, a very common coding error is storing sensitive information in clear text. For instance, it is common to find API keys, passwords, and Personally Identifiable Information (PII) stored in the app’s Shared Preferences or databases. This oversight increasingly leads to loss of confidential data: an attacker who can access the app’s database (by rooting the device, backing up the app, etc.) can retrieve the credentials of other users of the app.
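One mitigation is to encrypt sensitive values before they are persisted. The sketch below uses plain `javax.crypto` AES-GCM so it runs on any JVM; on Android the key should instead come from the Android Keystore, or you can avoid hand-rolled crypto entirely with Jetpack Security’s `EncryptedSharedPreferences`. The `SecretBox` object and its method names are illustrative, not a library API.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Illustrative helper: encrypt a sensitive value before it is persisted.
// On Android, prefer a key held in the Android Keystore (or Jetpack
// Security's EncryptedSharedPreferences); a plain KeyGenerator is used
// here only so the sketch is runnable on any JVM.
object SecretBox {
    private const val GCM_TAG_BITS = 128
    private const val IV_BYTES = 12

    fun newKey(): SecretKey =
        KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    fun seal(key: SecretKey, plaintext: ByteArray): ByteArray {
        val iv = ByteArray(IV_BYTES).also { SecureRandom().nextBytes(it) }
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        // Prepend the IV so it can be recovered at decryption time.
        return iv + cipher.doFinal(plaintext)
    }

    fun open(key: SecretKey, sealed: ByteArray): ByteArray {
        val iv = sealed.copyOfRange(0, IV_BYTES)
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        return cipher.doFinal(sealed.copyOfRange(IV_BYTES, sealed.size))
    }
}
```

The ciphertext returned by `seal` is what goes into Shared Preferences or the database, so a stolen backup no longer yields usable credentials without the key.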
- Insecure communication
Currently, most mobile applications exchange data in a client-server fashion at some point. When these communications happen, data traverses either the mobile carrier’s network or a Wi-Fi network on its way to the Internet.
Although exploiting the mobile carrier’s network is not impossible, abusing a Wi-Fi network is usually much easier. If communications lack SSL/TLS, an adversary can not only monitor traffic transmitted in clear text but also steal the exchanged data and execute Man-in-the-Middle (MitM) attacks. To prevent insecure communication, always assume that the network layer is insecure and ensure that all communication between mobile apps and backend servers is encrypted.
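At minimum, the client can refuse to talk to any endpoint that is not HTTPS. The guard below is an illustrative helper, not a library API; on Android it belongs alongside a Network Security Config that disables cleartext traffic, plus certificate pinning for high-value hosts.

```kotlin
import java.net.URI

// Minimal guard (illustrative): reject any endpoint that is not HTTPS
// before a connection is ever opened. Defense in depth on Android also
// means a cleartext-free Network Security Config and certificate pinning.
fun requireHttps(endpoint: String): URI {
    val uri = URI(endpoint)
    require(uri.scheme.equals("https", ignoreCase = true)) {
        "Refusing cleartext connection to $endpoint"
    }
    return uri
}
```

A check like this is cheap insurance against a debug URL or a misconfigured `http://` base URL slipping into a release build.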
- Insecure authentication
Weak or insecure authentication is fairly prevalent in mobile applications due to the input constraints of mobile devices: 4-digit PINs are a great example. Either a weak password policy adopted for usability, or authentication based solely on device features like Touch ID, can leave your application vulnerable.
Unless there is a functional requirement, mobile applications do not need a back-end server against which users must authenticate in real time, and even when such back-end servers exist, users are typically not required to be online at all times. This poses a great challenge for mobile application authentication: whenever authentication happens locally, it can be bypassed on jailbroken devices through runtime manipulation or modification of the binary.
Insecure authentication is not just about guessable passwords, default user accounts, or data breaches. Sometimes the authentication mechanism can be bypassed entirely, so the system fails to identify the user and to log their (malicious) activity.
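For the weak-credential side of the problem, one concrete step is enforcing a real password policy rather than a 4-digit PIN. The thresholds below are assumptions for illustration, not a standard; a production policy should also check candidate passwords against breached-password lists.

```kotlin
// Illustrative password policy; the length and character-class rules are
// assumptions, not a standard. Critically, this check must also run on the
// server: any check that exists only inside the APK can be patched out on
// a jailbroken or rooted device.
fun isAcceptablePassword(candidate: String): Boolean {
    if (candidate.length < 12) return false          // 4-digit PINs fail here
    val hasLetter = candidate.any { it.isLetter() }
    val hasDigit = candidate.any { it.isDigit() }
    return hasLetter && hasDigit
}
```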
- Code tampering
Once a mobile application is downloaded and installed on a device, both its code and data reside there. Since most mobile apps are publicly distributed, adversaries can directly modify the code, manipulate memory contents, change or replace the system APIs, or simply modify an application’s data and resources. This is known as code tampering.
Today, rogue mobile apps often play an important role in fraud-based attacks, becoming even more prevalent than malware. Typically, attackers exploit code modification by distributing maliciously modified versions of legitimate apps and tricking users into installing them via phishing attacks.
To prevent code tampering, it’s important that the mobile app can detect at runtime that code has been added or changed. From there, it should react accordingly, for example by reporting the code integrity violation to the server or shutting down execution entirely.
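A basic form of such a runtime check is hashing a code artifact and comparing it to a value recorded at build time. This is a sketch under assumptions: on Android the artifact would be, say, the installed APK or a DEX file, the expected hash would be baked in at build time (and ideally verified server-side), and the function names are illustrative.

```kotlin
import java.security.MessageDigest

// Sketch of a runtime integrity check: hash the bytes of a code artifact
// (on Android, e.g. the installed APK or a DEX file) and compare against
// a value recorded at build time. On mismatch, the app would report the
// violation to the backend or shut down execution entirely.
fun sha256Hex(bytes: ByteArray): String =
    MessageDigest.getInstance("SHA-256").digest(bytes)
        .joinToString("") { "%02x".format(it) }

fun integrityIntact(artifact: ByteArray, expectedSha256Hex: String): Boolean =
    sha256Hex(artifact) == expectedSha256Hex
```

Determined attackers can also patch out the check itself, so checksums like this are one layer among several (signature verification, server-side attestation), not a complete defense.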
Exploitation techniques are always evolving, and new vulnerabilities may surface in dependencies that expose new application tampering points. By watching for these coding errors, developers can build more secure Android apps and avoid pitfalls that lead to these avoidable scenarios. Additionally, developers can stay up to date by referring to the OWASP Mobile Top 10 security weaknesses list and reviewing Google Codelabs’ recent training modules, including Android Kotlin Fundamentals, Kotlin Bootcamp for Programmers, and Refactoring from Java to Kotlin.