Mobile apps break in ways web apps never will. A user walks into an elevator, loses signal mid-transaction, and your app corrupts their cart data. A phone call interrupts a form submission and the app forgets everything they typed. An OS update changes permission dialogs and your camera feature silently stops working.
I learned these lessons the hard way testing Orocube's POS products across Android and iOS devices. After enough production incidents caused by things I should have caught, I built a checklist that I run before every mobile release. This is that checklist.
Functional Testing
This is the foundation. The app does what it is supposed to do.
Core functionality
- All primary user flows work end to end
- Navigation between screens is correct
- Back button behavior is consistent
- Deep links open the correct screen
- Search returns accurate results
- Filters and sorting work as expected
Data handling
- Forms validate input correctly
- Required fields enforce validation
- Data persists after app restart
- Data syncs correctly with the server
- Pagination loads additional content
- Pull-to-refresh updates data
Authentication
- Login with valid credentials works
- Login with invalid credentials shows clear error
- Session timeout redirects to login
- Biometric login (Face ID, fingerprint) works
- "Remember me" persists across app restarts
- Logout clears sensitive data from local storage
I test every item on this list manually at least once per release. Automation covers the regression, but the first pass is always hands-on.
Installation and Updates
This is the category most testers skip. Do not skip it.
Install
- Fresh install on supported OS versions
- App launches correctly after first install
- Onboarding flow completes without errors
- Permissions requested at the right time (not all at once on first launch)
Update
- Update from previous version preserves user data
- Update does not require re-login
- Database migrations run without data loss
- New features are accessible after update
- App works if user skips one version (v1.0 to v1.2, skipping v1.1)
Uninstall
- Reinstall after uninstall works as fresh install
- No orphaned data left on device after uninstall
The version skip test is critical. Not every user updates immediately. If your migration script only handles v1.1 to v1.2 but a user is still on v1.0, you need to know what happens.
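The version-skip scenario is easiest to reason about as a sequential migration runner: whatever version the user is on, every intermediate step still runs, in order. This is a minimal sketch; the version numbers and migration steps are illustrative, not from any real schema.

```python
# Minimal sketch of a sequential migration runner. Versions and the
# fields each step adds are hypothetical examples.

MIGRATIONS = {
    # from_version -> function that upgrades the stored data one step
    "1.0": lambda data: {**data, "currency": data.get("currency", "USD")},  # added in v1.1
    "1.1": lambda data: {**data, "cart": data.get("cart", [])},             # added in v1.2
}

VERSION_ORDER = ["1.0", "1.1", "1.2"]

def migrate(data: dict, installed: str, target: str = "1.2") -> dict:
    """Apply every migration between installed and target, in order.

    A user jumping from 1.0 straight to 1.2 still runs the 1.0 -> 1.1
    step first, so no intermediate migration is skipped.
    """
    start = VERSION_ORDER.index(installed)
    end = VERSION_ORDER.index(target)
    for version in VERSION_ORDER[start:end]:
        data = MIGRATIONS[version](data)
    return data

# A v1.0 user skipping v1.1 still gets both new fields:
print(migrate({"user": "alice"}, "1.0"))
```

Testing this means installing the oldest supported build, creating data, then updating directly to the release candidate and confirming nothing is lost.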
Interruption Testing
This is where mobile testing gets interesting. Users do not use your app in a vacuum. Their phone is doing twenty things at once.
Phone calls
- Incoming call during form entry (does the form data persist?)
- Incoming call during file upload (does the upload resume or restart?)
- Incoming call during payment (is the transaction safe?)
- Return to app after call ends (correct screen and state?)
Notifications
- Push notification while app is in foreground
- Push notification while app is in background
- Tapping notification opens correct screen
- Multiple notifications do not crash the app
- Notification badges update correctly
System interruptions
- Low battery warning during operation
- OS update prompt during operation
- Another app requesting camera/mic access
- Alarm or timer going off during use
- Screenshot or screen recording started
I test interruptions by literally calling my test device from another phone while performing critical flows. It sounds basic, but this technique has surfaced some of the worst bugs I have ever found.
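What "does the form data persist?" means in code is a draft-autosave hook on the lifecycle pause event. This is a platform-neutral sketch: on Android the save would happen in `onPause`/`onSaveInstanceState`, on iOS in `sceneWillResignActive`, and a plain temp file stands in here for SharedPreferences or UserDefaults.

```python
import json
import os
import tempfile

# Sketch of the draft-autosave pattern: persist form state when the app
# loses focus, restore it on resume. The file path and field names are
# illustrative.

DRAFT_PATH = os.path.join(tempfile.gettempdir(), "form_draft.json")

def on_interruption(form_fields: dict) -> None:
    """Called when the app loses focus (incoming call, home button)."""
    with open(DRAFT_PATH, "w") as f:
        json.dump(form_fields, f)

def on_resume() -> dict:
    """Called when the user returns; restore whatever they had typed."""
    if os.path.exists(DRAFT_PATH):
        with open(DRAFT_PATH) as f:
            return json.load(f)
    return {}

on_interruption({"name": "Alice", "email": "alice@example.com"})
restored = on_resume()
assert restored["name"] == "Alice"
```

If the app lacks this hook, the phone-call test above will show the form wiped on return, which is exactly the bug the checklist item exists to catch.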
Network Conditions
Mobile apps must handle bad networks gracefully. This is non-negotiable.
Connection states
- App works on WiFi
- App works on cellular (4G/5G)
- App works on slow connection (3G simulation)
- App handles complete offline mode
- App recovers when connection returns
- Switching from WiFi to cellular mid-operation
Offline behavior
- Offline indicator shown to user
- Cached data available offline
- Actions queued for sync when online
- Queued actions sync correctly on reconnect
- Conflict resolution when offline edits clash with server changes
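The queue-and-sync bullets above follow one pattern: actions performed while offline are stored locally and replayed in order on reconnect. A minimal sketch, with `send_to_server` as a placeholder for the real network call:

```python
from collections import deque

# Sketch of an offline action queue. The action shape and send_to_server
# callback are hypothetical stand-ins for a real sync layer.

class OfflineQueue:
    def __init__(self):
        self.pending = deque()
        self.online = False

    def perform(self, action: dict, send_to_server) -> None:
        if self.online:
            send_to_server(action)
        else:
            self.pending.append(action)  # queue for later sync

    def on_reconnect(self, send_to_server) -> None:
        self.online = True
        while self.pending:              # replay in original order
            send_to_server(self.pending.popleft())

sent = []
q = OfflineQueue()
q.perform({"op": "add_item", "id": 1}, sent.append)  # offline: queued
q.perform({"op": "add_item", "id": 2}, sent.append)
q.on_reconnect(sent.append)
assert [a["id"] for a in sent] == [1, 2]
```

The checklist test is the same shape: go offline, perform several actions, reconnect, and verify they arrive at the server once each and in order.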
Error handling
- Timeout errors show user-friendly message
- Retry mechanism for failed requests
- No infinite loading spinners on network failure
- Large file downloads resume after interruption
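The retry bullet pairs with the "no infinite spinners" bullet: retries must be bounded, with growing delays, and then surface a real error. A sketch of exponential backoff with jitter; the attempt counts and delays are illustrative, and the `sleep` parameter exists only so the example runs instantly.

```python
import random
import time

# Sketch of retry with exponential backoff and a hard attempt cap.

def with_retries(request, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    for attempt in range(max_attempts):
        try:
            return request()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of retries: surface a real error to the UI
            # exponential backoff with jitter: ~0.5s, ~1s, ~2s
            sleep(base_delay * (2 ** attempt) * (1 + random.random() * 0.1))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated timeout")
    return "ok"

assert with_retries(flaky, sleep=lambda s: None) == "ok"
assert calls["n"] == 3  # failed twice, succeeded on the third attempt
```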
How I simulate network conditions:
On Android, there is no built-in Network Link Conditioner; use the emulator's network speed settings or set up a proxy with Charles/Proxyman that throttles bandwidth. On iOS, go to Settings, then Developer, then Network Link Conditioner (available once the device has been used with Xcode). You can simulate 3G, Edge, or a 100% packet loss scenario.
Test scenario: User submits an order on 3G
Steps:
1. Enable 3G simulation on device
2. Add items to cart
3. Proceed to checkout
4. Submit order
5. Observe: Does the app show a loading state?
6. Observe: Does it timeout gracefully?
7. Observe: Is the order created or safely rolled back?
Performance Testing
Users uninstall apps that feel slow. Performance is not a nice-to-have.
Launch time
- Cold start under 3 seconds
- Warm start under 1 second
- Launch with large local database (does it degrade?)
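On Android, the cold-start budget can be checked from the command line: `adb shell am start -W <package>/<activity>` reports a `TotalTime` in milliseconds. A sketch that parses that output and enforces the 3-second budget; the sample output and package name are illustrative.

```python
import re

# Sketch: parse `adb shell am start -W` output and check the cold-start
# budget. SAMPLE_OUTPUT is a hand-written illustration of the format.

SAMPLE_OUTPUT = """\
Status: ok
Activity: com.example.pos/.MainActivity
TotalTime: 2741
WaitTime: 2760
"""

def cold_start_ms(am_start_output: str) -> int:
    match = re.search(r"TotalTime:\s*(\d+)", am_start_output)
    if not match:
        raise ValueError("TotalTime not found in am start output")
    return int(match.group(1))

ms = cold_start_ms(SAMPLE_OUTPUT)
assert ms < 3000, "cold start {}ms exceeds 3s budget".format(ms)
```

Run it against a freshly killed app (`adb shell am force-stop` first) so the measurement is a true cold start, not a warm one.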
Scrolling and rendering
- Long lists scroll smoothly (no jank)
- Images load without blocking the UI
- Animations run at 60fps
- Complex screens render within 2 seconds
Memory and battery
- No memory leaks during extended use (monitor with Xcode Instruments or Android Profiler)
- Battery drain is reasonable (not more than 5% per hour of active use)
- App does not drain battery in background
Storage
- App size is reasonable after extended use
- Cache can be cleared without losing user data
- App handles low storage gracefully (does not crash, shows warning)
How I check for memory leaks:
On Android, open Android Studio's Profiler. Navigate through the app for 10 minutes, visiting every screen multiple times. The memory graph should stay roughly flat. If it keeps climbing, there is a leak. On iOS, use Xcode Instruments with the Leaks template. Same process. Navigate, observe, look for the upward trend.
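The "roughly flat" judgment can also be made programmatically: sample the app's memory periodically (for example by parsing `adb shell dumpsys meminfo <package>`) and flag a sustained climb. A sketch with illustrative thresholds; the 5 MB tolerance and 80% rising figure are assumptions, not platform rules.

```python
# Sketch of a leak heuristic over periodic heap samples (in MB).
# Thresholds are illustrative and should be tuned per app.

def looks_like_leak(samples_mb: list, tolerance_mb: float = 5.0) -> bool:
    """True if memory grows steadily beyond tolerance over the session."""
    if len(samples_mb) < 2:
        return False
    growth = samples_mb[-1] - samples_mb[0]
    rising_fraction = sum(
        1 for a, b in zip(samples_mb, samples_mb[1:]) if b >= a
    ) / (len(samples_mb) - 1)
    # A leak grows AND keeps growing; a healthy sawtooth from GC does not.
    return growth > tolerance_mb and rising_fraction > 0.8

assert looks_like_leak([120, 128, 135, 141, 150])      # steady climb: leak
assert not looks_like_leak([120, 131, 124, 129, 122])  # GC sawtooth: healthy
```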
Device and OS Compatibility
The matrix of devices and OS versions is the most time-consuming part of mobile testing. Be strategic about it.
OS versions
- Latest stable OS version
- Previous major OS version
- Minimum supported version
- Beta OS version (if accessible)
Screen sizes
- Small phone (iPhone SE, Galaxy A series)
- Standard phone (iPhone 16, Pixel 9)
- Large phone (iPhone 16 Pro Max, Galaxy Ultra)
- Tablet (iPad, Galaxy Tab) if supported
- Foldable devices if supported
Device-specific
- Notch and Dynamic Island rendering
- Devices with and without home button
- Dark mode and light mode
- Text size accessibility settings (small, default, large, largest)
- Display zoom enabled
- RTL language layout (Arabic, Hebrew)
You cannot test every device. Pick a matrix that covers the extremes: smallest screen, largest screen, oldest supported OS, newest OS. If it works on the extremes, it usually works in between.
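Picking the extremes can itself be mechanical. A sketch that selects the boundary devices from a pool; the device pool and its screen/OS numbers are made up for illustration.

```python
# Sketch: choose the "extremes" matrix (smallest/largest screen,
# oldest/newest OS) from a device pool. All device data is illustrative.

DEVICES = [
    {"name": "iPhone SE",         "screen_in": 4.7, "os": 16},
    {"name": "iPhone 16",         "screen_in": 6.1, "os": 18},
    {"name": "iPhone 16 Pro Max", "screen_in": 6.9, "os": 18},
    {"name": "Pixel 6a",          "screen_in": 6.1, "os": 14},
    {"name": "Galaxy S24 Ultra",  "screen_in": 6.8, "os": 15},
]

def extremes_matrix(devices):
    picks = {
        min(devices, key=lambda d: d["screen_in"])["name"],  # smallest screen
        max(devices, key=lambda d: d["screen_in"])["name"],  # largest screen
        min(devices, key=lambda d: d["os"])["name"],         # oldest OS
        max(devices, key=lambda d: d["os"])["name"],         # newest OS
    }
    return sorted(picks)

print(extremes_matrix(DEVICES))
```

Four or five devices chosen this way cover far more risk per hour of testing than a dozen mid-range phones that all behave alike.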
Security Checklist
Mobile security is different from web security. The device itself is a threat surface.
Data storage
- Sensitive data encrypted at rest
- No sensitive data in plain text logs
- API keys not hardcoded in the app binary
- Secure keychain/keystore for tokens
Network security
- All API calls use HTTPS
- Certificate pinning implemented
- No sensitive data in URL parameters
- API responses do not leak internal details
Authentication
- Session tokens expire appropriately
- Biometric auth cannot be bypassed
- Jailbroken/rooted device detection (if required)
- Clipboard cleared after pasting passwords
I do not do full penetration testing myself. But I check these basics on every release. If the app stores a JWT in plain text in SharedPreferences, that is a bug I can and should catch.
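The SharedPreferences check above is scriptable: pull the prefs files from a debuggable build (for example via `adb pull` or `adb shell run-as <package>`) and scan them for JWT-shaped strings, which are three base64url segments and almost always start with `eyJ`. The file content below is a fabricated example.

```python
import re

# Sketch of a plain-text JWT scan over files pulled from the device.
# The prefs string and token are fake, for illustration only.

JWT_PATTERN = re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+")

def find_jwts(text: str) -> list:
    return JWT_PATTERN.findall(text)

prefs = '<string name="auth_token">eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.c2ln</string>'
leaks = find_jwts(prefs)
assert leaks, "plain-text JWT found in SharedPreferences -> file a bug"
```

Any hit means a token is sitting unencrypted on disk instead of in the keystore, which is exactly the class of bug a release tester can and should catch.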
Accessibility
Accessibility is not a separate testing phase. It is part of every checklist item above.
Screen readers
- VoiceOver (iOS) reads all interactive elements
- TalkBack (Android) reads all interactive elements
- Focus order is logical (top to bottom, left to right)
- Images have meaningful alt text
- Decorative images are hidden from screen readers
Interaction
- Touch targets are at least 44x44 points (iOS) or 48x48 dp (Android)
- Color is not the only indicator of state (error fields have icons, not just red borders)
- Text is readable at 200% system font size
- Sufficient contrast ratio (4.5:1 minimum)
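The 4.5:1 figure comes from the WCAG 2.x formula: compute each color's relative luminance, then take (lighter + 0.05) / (darker + 0.05). A sketch for spot-checking a color pair from a design spec:

```python
# Sketch of the WCAG 2.x contrast-ratio calculation behind the
# 4.5:1 minimum for normal text.

def relative_luminance(rgb) -> float:
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum, 21:1; mid-grey on white fails 4.5:1.
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
assert contrast_ratio((150, 150, 150), (255, 255, 255)) < 4.5
```

In practice I pull the actual hex values from a screenshot rather than trusting the design file, since overlays and translucency change what ends up on screen.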
My Pre-Release Workflow
Here is how I actually run through this checklist on a real release:
Day 1: Smoke test on primary devices. Install the build on one iOS and one Android device. Run through every core flow. If anything fundamental is broken, send it back immediately.
Day 2: Full functional testing. Work through the functional and authentication checklists methodically. File bugs as I go.
Day 3: Edge cases and interruptions. This is where I find the bugs that would embarrass us in production. Network conditions, interruptions, device rotation, memory pressure.
Day 4: Device matrix and OS versions. Test on the expanded device set. Focus on layout issues, font rendering, and OS-specific behaviors.
Day 5: Regression and sign-off. Verify all bug fixes. Run the automated suite. Final exploratory pass. Either sign off or block with clear reasons.
Five days is the ideal. Some releases get compressed to three. When that happens, I prioritize: Day 1 stays the same, Day 2 covers functional plus the highest-risk edge cases, Day 3 is device matrix plus sign-off. The interruption and network testing gets reduced but never eliminated.
The Checklist Is a Starting Point
Every project has its own quirks. A POS app needs payment terminal testing. A social media app needs media upload testing across file types and sizes. A banking app needs transaction integrity testing under every failure mode. Start with this checklist and add to it based on what your app actually does. After three releases, you will have a checklist that catches the specific bugs your app tends to produce. That is when mobile testing stops feeling like guesswork and starts feeling like a system.