
Mobile OS Resource Management

Inside the Low Memory Killer and Jetsam

Explore the internal mechanisms Android and iOS use to prioritize and terminate processes during memory pressure and how to adjust your app's priority.

Mobile Development · Advanced · 12 min read

The Hierarchy of Survival: Understanding Process Priority

In a mobile environment, hardware resources are finite and must be shared among dozens of competing applications. Unlike desktop operating systems that often rely on disk-based swap files to handle memory overflow, mobile systems avoid this to preserve battery life and prevent the degradation of flash storage. When the physical memory becomes full, the system cannot simply move data to the disk; it must reclaim space by terminating existing processes.

The decision of which process to terminate is not random but is dictated by a complex ranking system. Both Android and iOS maintain a dynamic list where every running application is assigned a priority score based on its current visibility and importance to the user experience. An app currently being interacted with is highly protected, while an app that has not been opened for several hours is the first candidate for eviction.

This prioritization logic ensures that the user perceives a fast and responsive interface even when the system is under extreme pressure. Developers who fail to understand these priority buckets often find their apps disappearing unexpectedly or failing to complete background work. By mastering the internal scoring mechanisms, you can design applications that remain resident in memory longer and provide a seamless transition when the user returns.

The mobile operating system is a zero-sum environment where the performance of the active application is bought with the termination of background processes.

Android Low Memory Killer and OOM Scores

Android manages memory through a userspace daemon called the Low Memory Killer Daemon (lmkd), which replaced the older in-kernel low memory killer driver. Every process carries an out-of-memory adjustment score (oom_score_adj) ranging from -1000 to 1000. A lower score indicates a higher priority, making the process less likely to be killed during a resource crunch.

Foreground applications usually have the lowest possible scores because they are currently drawing to the screen and interacting with the user. Visible processes, such as an activity that is partially obscured by a dialog, sit in the next bucket of importance. Below these are service processes, which perform background tasks like music playback, and finally cached processes which are kept around only to speed up subsequent launches.

  • Foreground Process: Hosting an activity the user is interacting with or a foreground service with a notification.
  • Visible Process: Doing work that the user is aware of but not directly interacting with, such as a visible dialog.
  • Service Process: Running a background service that the user does not see but expects to continue, like data syncing.
  • Cached Process: A process that is currently not needed and can be killed safely if memory is required elsewhere.
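The bucket ordering above can be sketched as a small Kotlin model. This is purely illustrative, not the real lmkd implementation: the process names and score values are hypothetical, chosen only to mirror the foreground-to-cached hierarchy.

```kotlin
// Hypothetical model of Android's kill-order logic; the names and
// scores are illustrative, not real system values.
data class AppProcess(val name: String, val oomScoreAdj: Int)

// A higher oom_score_adj means lower priority, so the process with the
// highest score is the first candidate for eviction.
fun nextEvictionCandidate(processes: List<AppProcess>): AppProcess? =
    processes.maxByOrNull { it.oomScoreAdj }
```

Given a foreground app at score 0, a perceptible music service around 200, and a cached browser near 900, the cached browser is selected first, which matches the bucket ordering described above.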

The iOS Jetsam Mechanism

iOS handles memory pressure through a kernel mechanism commonly called Jetsam, which monitors system-wide memory pressure and enforces limits. Unlike Android's sliding scale of scores, Jetsam organizes running applications and system daemons into priority bands. When the kernel detects that available memory has dropped below a critical threshold, Jetsam walks these bands from the lowest priority upward, terminating processes until enough memory is reclaimed.

Each application on iOS is also subject to a hard memory limit that varies based on the specific hardware device. If an individual app exceeds its allotted memory footprint, Jetsam will terminate it immediately with a specific exception code, even if there is still global memory available. This dual-pronged approach protects the system from both global exhaustion and individual resource leaks.
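The dual-pronged check can be modeled in a few lines. This is a toy sketch in Kotlin (Jetsam itself is kernel C code), and the field names and megabyte limits are assumptions, not real iOS values.

```kotlin
// Illustrative model of Jetsam's two independent kill conditions;
// names and limits are hypothetical, not real iOS values.
data class IosApp(val name: String, val footprintMb: Int, val limitMb: Int)

// Condition 1: an app exceeding its own hard limit is killed immediately,
// even if plenty of global memory remains.
fun exceedsPerProcessLimit(app: IosApp): Boolean =
    app.footprintMb > app.limitMb

// Condition 2: under global pressure, lower-priority entries are
// reclaimed before higher-priority ones.
fun globalEvictionOrder(apps: List<IosApp>, priority: Map<String, Int>): List<IosApp> =
    apps.sortedBy { priority[it.name] ?: 0 }
```

The point of the model is that the per-process check is evaluated independently of global pressure, which is why a single leaking app can be terminated on an otherwise idle device.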

The Lifecycle of Targeted Processes

Before the operating system resorts to the finality of process termination, it provides several warnings to the application. These signals are the first line of defense, allowing your code to proactively release non-essential resources like image caches or temporary data structures. Ignoring these warnings is a common pitfall that leads to a poor reputation with users and lower retention rates.

When an app receives a memory warning, it should prioritize releasing memory that can be easily recreated. This includes clearing bitmap caches, flushing database cursors, and nullifying references to large objects that are not currently in use. By reducing the memory footprint voluntarily, the app can often lower its ranking in the termination queue and avoid being killed entirely.

Responding to Memory Pressure on Android

```kotlin
import android.content.ComponentCallbacks2
import android.content.res.Configuration

class DataCacheManager : ComponentCallbacks2 {
    override fun onTrimMemory(level: Int) {
        // Check the severity of the memory pressure signal
        if (level >= ComponentCallbacks2.TRIM_MEMORY_MODERATE) {
            // The system is reclaiming memory; clear the heavy caches
            evictInternalMemoryCache()
            clearNetworkResponseBuffer()
        } else if (level == ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
            // The UI is no longer visible; release UI-specific resources
            releaseLargeViewObjects()
        }
    }

    override fun onConfigurationChanged(newConfig: Configuration) {
        // Handle orientation or language changes if necessary
    }

    override fun onLowMemory() {
        // Last-resort callback, kept for compatibility with older API levels
        clearEverythingPossible()
    }
}
```

Interpreting System Signals

On Android, the onTrimMemory callback provides specific levels that describe the context of the memory pressure. Some levels indicate that the system as a whole is running low, while others specifically signal that your app has moved into the background. Distinguishing between these scenarios allows you to be surgical about what data you discard.
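The trim levels are plain integer constants, so the decision logic reduces to threshold comparisons. The constant values below match Android's documented ComponentCallbacks2 values, but the policy mapping is only one reasonable strategy, not a platform requirement.

```kotlin
// Values as documented for Android's ComponentCallbacks2 constants;
// the response policy below is an illustrative sketch.
const val TRIM_MEMORY_RUNNING_MODERATE = 5
const val TRIM_MEMORY_RUNNING_LOW = 10
const val TRIM_MEMORY_RUNNING_CRITICAL = 15
const val TRIM_MEMORY_UI_HIDDEN = 20
const val TRIM_MEMORY_BACKGROUND = 40
const val TRIM_MEMORY_MODERATE = 60
const val TRIM_MEMORY_COMPLETE = 80

// Translate a trim level into a coarse eviction action.
fun actionForTrimLevel(level: Int): String = when {
    level >= TRIM_MEMORY_COMPLETE -> "drop everything non-essential"
    level >= TRIM_MEMORY_BACKGROUND -> "clear heavy caches"
    level >= TRIM_MEMORY_UI_HIDDEN -> "release UI resources"
    else -> "trim opportunistically" // app is still in the foreground
}
```

Note that levels below TRIM_MEMORY_UI_HIDDEN arrive while your app is still running in the foreground, which is exactly the "system as a whole is running low" scenario the text distinguishes from backgrounding.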

On iOS, the system notifies your app through the view controller method didReceiveMemoryWarning(), the app delegate method applicationDidReceiveMemoryWarning(_:), or the UIApplication.didReceiveMemoryWarningNotification notification. Because iOS is more aggressive with termination, these notifications should be treated as urgent requests rather than suggestions. Modern iOS development also involves using the Combine framework or Swift Concurrency to listen for these system-level pressure events in a reactive manner.

Proactive Resource Eviction Strategies

A robust eviction strategy involves categorizing memory into tiers of importance. Tier one data consists of critical state that cannot be lost, while tier three data consists of decorative images or pre-fetched API responses that are nice to have. Under moderate pressure, you should clear tier three; under extreme pressure, you clear everything except tier one.
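A tiered cache like the one described can be sketched in Kotlin as follows. The tier names and the two trim methods are assumptions made for illustration; a production cache would also track sizes in bytes.

```kotlin
// Sketch of tier-based eviction; tier names and policy are assumptions.
enum class Tier { CRITICAL, IMPORTANT, DISPOSABLE } // tiers one through three

class TieredCache {
    private val entries = mutableMapOf<String, Tier>()

    fun put(key: String, tier: Tier) { entries[key] = tier }

    // Moderate pressure: drop only the disposable, nice-to-have data.
    fun trimModerate() { entries.entries.removeAll { it.value == Tier.DISPOSABLE } }

    // Extreme pressure: keep only critical state that cannot be lost.
    fun trimExtreme() { entries.entries.removeAll { it.value != Tier.CRITICAL } }

    fun keys(): Set<String> = entries.keys.toSet()
}
```

Wiring trimModerate() to a mid-range trim level and trimExtreme() to the most severe levels gives the graduated response the paragraph describes.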

Using specialized data structures like LruCache on Android or NSCache on iOS can automate much of this work. These containers are designed to automatically purge older or less-used items when system resources become scarce. Relying on these built-in tools reduces the amount of boilerplate code you need to write and ensures your app behaves as a good citizen within the OS ecosystem.
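To show the mechanism behind such containers, here is a minimal LRU cache built on LinkedHashMap's access-order mode. It is a simplified sketch in the spirit of android.util.LruCache, not the platform class itself: it counts entries rather than bytes and omits thread safety.

```kotlin
// Simplified LRU cache in the spirit of android.util.LruCache;
// this sketch evicts by entry count rather than by byte size.
class SimpleLruCache<K, V>(private val maxEntries: Int) {
    // access-order LinkedHashMap moves touched entries to the end
    private val map = object : LinkedHashMap<K, V>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>): Boolean =
            size > maxEntries
    }

    fun put(key: K, value: V) { map[key] = value }
    fun get(key: K): V? = map[key]
    fun snapshot(): Set<K> = map.keys.toSet()
}
```

Because the map is in access order, reading an entry refreshes it, so the least recently used entry is always the one purged when the limit is exceeded.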

Negotiating Longevity with the System

There are valid scenarios where an application must continue running even when it is not in the foreground. Tasks such as navigating with GPS, playing a podcast, or uploading a large file to a server require the app to stay alive. To support these needs, both platforms offer APIs that allow an application to temporarily elevate its priority.

Elevating priority is a social contract between the developer and the user. The OS allows the app to stay active, but the app must fulfill a specific purpose that provides value to the user while minimizing power consumption. Misusing these APIs to stay alive for data mining or unnecessary polling will likely lead to rejection from app stores or manual termination by the user.

Registering a Background Task on iOS

```swift
import UIKit

func performHeavyDataSync() {
    var backgroundTaskID: UIBackgroundTaskIdentifier = .invalid

    // Request additional time from the OS
    backgroundTaskID = UIApplication.shared.beginBackgroundTask(withName: "SyncData") {
        // This expiration handler is called if time runs out
        print("Background task expired; cleaning up resources")
        UIApplication.shared.endBackgroundTask(backgroundTaskID)
        backgroundTaskID = .invalid
    }

    // Perform the actual work on a background queue
    DispatchQueue.global().async {
        uploadUserStatistics()

        // Always signal completion to avoid being penalized
        UIApplication.shared.endBackgroundTask(backgroundTaskID)
    }
}
```

Android Foreground Services

In modern Android versions, running a service in the background is highly restricted. To ensure the process stays alive with high priority, you must promote the service to the foreground by attaching a persistent notification. This notification informs the user that your app is actively consuming battery and provides a way for them to stop the work if desired.

Foreground services are assigned a specific type, such as mediaPlayback, location, or dataSync. The OS uses these types to apply specific optimizations and power management rules. Using the correct service type is critical because the system may terminate a location service if it detects that the app is not actually accessing the GPS hardware.

iOS Background Execution Modes

iOS uses a more restrictive set of execution modes compared to Android. Most background work on iOS is handled by the Background Tasks framework, which allows the OS to schedule your code for execution at a time that is optimal for battery life. You do not get to decide exactly when your code runs; instead, you provide a window of time and a set of constraints.

For tasks that must happen immediately after the user leaves the app, you can use the beginBackgroundTask API to get a few minutes of extra execution time. This is ideal for finishing a database write or sending a final analytics event. If you fail to end this task before the system-allotted time expires, the OS will terminate your process immediately.

The Inevitable: Implementing State Restoration

Despite your best efforts to optimize memory and manage priorities, your app will eventually be killed by the operating system. This is a normal part of the mobile lifecycle and should be handled as a standard use case rather than an error. The goal is to make the process of termination and subsequent restart completely invisible to the user.

State restoration is the process of saving the current UI configuration and user data to a persistent store before the process is destroyed. When the user returns, the app reads this saved state and reconstructs the previous view hierarchy. To the user, it appears as if the app was running in the background the entire time, even if it was actually dormant for hours.

  • Persistence Strategy: Use a combination of lightweight key-value stores for UI state and a robust database for user data.
  • Trigger Points: Save state when the app moves to the background, as you cannot rely on a callback occurring at the moment of death.
  • Data Integrity: Ensure that partial state saves do not lead to corrupted UI layouts when the application is restored.
  • User Expectation: Evaluate whether restoring to the exact scroll position is helpful or if the user would prefer a fresh start after a long absence.
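The points above reduce to a small save-and-restore round trip: persist compact identifiers when backgrounding, then rebuild the full state from them on relaunch. This Kotlin sketch uses a plain string map as the persistent store; the keys and the ScreenState shape are illustrative assumptions.

```kotlin
// Illustrative state-restoration round trip: persist only small
// identifiers, then rebuild full state from them after a relaunch.
data class ScreenState(val articleId: String, val scrollOffset: Int)

fun saveState(state: ScreenState): Map<String, String> = mapOf(
    "articleId" to state.articleId,
    "scrollOffset" to state.scrollOffset.toString()
)

fun restoreState(saved: Map<String, String>): ScreenState? {
    // No saved identifier means process death before any save: fresh start
    val id = saved["articleId"] ?: return null
    return ScreenState(id, saved["scrollOffset"]?.toIntOrNull() ?: 0)
}
```

Defaulting a missing or corrupt scroll offset to zero, rather than failing, is one way to satisfy the data-integrity point: a partial save degrades to a slightly less precise restore instead of a broken layout.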

Identifying Termination Signatures

It is important to distinguish between an app that crashed and an app that was gracefully terminated by the OS for resources. A crash usually indicates a bug that should be fixed, while a resource-based termination is a signal that the user has been busy with other tasks. Analytics tools can help you track these signatures by looking for exits that were not initiated by the user.

On Android, you can use the ActivityManager to query the reason for the last process exit. This API provides detailed information, such as whether the kill was due to low memory, a system update, or an unhandled exception. Knowing these reasons allows you to tune your resource management strategy and identify memory leaks that might be causing premature evictions.

Restoration Implementation Patterns

The most effective way to handle restoration is to treat your UI as a pure function of your data state. On Android, the SavedStateHandle in ViewModels provides a convenient way to persist small amounts of data across process death. On iOS, the State Restoration API allows you to encode the entire view controller hierarchy into a set of restorable objects.

Avoid saving large objects like bitmaps or complete JSON strings into the restoration bundle. These bundles have strict size limits and saving too much data can actually trigger another round of memory pressure. Instead, save unique identifiers that allow you to quickly fetch the required data from a local database or a network cache upon restart.
