Testing your React Native App with Expo & Appium — Handling Cross Platform Differences
— Testing, JavaScript, Automation, Software Development — 6 min read
This is the second post in a series I’ll be making about Appium and how I’m using it to test my React Native app (JiffyCV) within the Expo Client.
You can read my previous post which covers how to set up Appium and how to get it to load your Expo backed React Native app within the Expo Client in order to run automation against it.
In this post I’ll be covering some of the differences I’ve seen between Android and iOS and techniques that can be used to abstract these differences away, providing a single interface that makes automation easier and cleaner.
Abstracting away complexity with Atomic Page Objects
Atomic Page Objects are those that follow the Atomic Design principles set out by Brad Frost and are a great way of having a clear separation of concerns in your automation code base.
The benefits of this separation of concerns really shine through when dealing with the different locator strategies you might find yourself using across devices, and when combined with the strategy design pattern it lets you maintain a clear interface between the atomic levels.
Additionally, building Atomic Page Objects may help expose components in your React Native app that could benefit from refactoring to decouple them further.
There are five levels to Atomic Design; each has its own concerns and interacts with the level below:
Atom
The lowest level. Atoms represent the raw elements of the UI, such as text, inputs and buttons, and how to interact with them.
Atom level Page Objects don’t track state; similar to pure functional components in React, their main purpose is to be used by the higher levels.
Atoms are where the majority of the heavy lifting takes place, dealing with clicking buttons, reading text, entering input and any actual interaction with the UI elements.
It’s at the atom level that you’ll model the differences in structure across platforms, such as how a button on iOS is a single entity whereas on Android the text is a sub-object of the button.
If you utilise the strategy pattern to call the appropriate function for the specific platform, you can have an atom level object with a simple interface for performing what can be a complex task.
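Below is a minimal sketch of what an atom level object using this approach could look like. It assumes the WebdriverIO-based Appium client from the previous post; the Button name, selectors and strategy names are illustrative rather than lifted from JiffyCV's test code.

```typescript
// An atom level Button that uses the strategy pattern to hide the structural
// difference between platforms behind one simple interface.
interface ButtonTextStrategy {
  readText(driver: WebdriverIO.Browser, selector: string): Promise<string>;
}

// On iOS the button is a single entity, so its text can be read directly.
const iosStrategy: ButtonTextStrategy = {
  readText: (driver, selector) => driver.$(selector).getText(),
};

// On Android the label lives in a child TextView of the button.
const androidStrategy: ButtonTextStrategy = {
  readText: (driver, selector) =>
    driver.$(selector).$('android.widget.TextView').getText(),
};

class Button {
  constructor(private driver: WebdriverIO.Browser, private selector: string) {}

  // Callers see one getText(); the strategy deals with the platform.
  getText(): Promise<string> {
    const strategy = this.driver.isIOS ? iosStrategy : androidStrategy;
    return strategy.readText(this.driver, this.selector);
  }

  async press(): Promise<void> {
    await this.driver.$(this.selector).click();
  }
}
```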
Molecule
Molecules represent a collection of low level UI elements combined into a more usable UI element, such as a text input with a label and optional text that informs the user if there’s a validation error.
When we build our UIs we would refer to this collection of atom level objects as a single element, and the molecule level object serves to interact with the atoms contained within while providing a single interface to do so.
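As a rough sketch of the labelled input molecule described above, the atom interfaces and names here are illustrative only; in practice they'd be atom level objects like the Button sketch earlier.

```typescript
// Hypothetical atom interfaces this molecule composes.
interface TextAtom {
  getText(): Promise<string>;
  isDisplayed(): Promise<boolean>;
}

interface InputAtom {
  setValue(value: string): Promise<void>;
}

// A molecule for a text input with a label and an optional validation message.
class LabelledTextInput {
  constructor(
    private label: TextAtom,
    private input: InputAtom,
    private validationMessage: TextAtom,
  ) {}

  // One interface for filling in the field.
  enterText(value: string): Promise<void> {
    return this.input.setValue(value);
  }

  // One interface for checking the validation state of the whole molecule.
  async getValidationError(): Promise<string | null> {
    return (await this.validationMessage.isDisplayed())
      ? this.validationMessage.getText()
      : null;
  }
}
```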
Organism
Organisms represent a collection of UI elements that would form a high level UI component that the user would interact with. In the case of our text input example, the organism level would be a collection of text inputs used to create a form.
In a similar manner to how the molecule level provides a single interface for interacting with multiple elements, the organism provides an interface for interacting with the page objects contained within, but it can also aggregate information across those page objects to allow for more advanced interactions.
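Continuing the sketch, an organism might be a form built from the LabelledTextInput molecule and Button atom above; the field names are examples only.

```typescript
// An organism: a form composed of molecules, with an aggregate helper
// that collects information across the page objects it contains.
class SignUpForm {
  constructor(
    private nameField: LabelledTextInput,
    private emailField: LabelledTextInput,
    private submitButton: Button,
  ) {}

  async fillIn(details: { name: string; email: string }): Promise<void> {
    await this.nameField.enterText(details.name);
    await this.emailField.enterText(details.email);
  }

  // Aggregates the validation state of every molecule in the form.
  async getValidationErrors(): Promise<string[]> {
    const errors = await Promise.all([
      this.nameField.getValidationError(),
      this.emailField.getValidationError(),
    ]);
    return errors.filter((error): error is string => error !== null);
  }

  submit(): Promise<void> {
    return this.submitButton.press();
  }
}
```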
Template
Templates represent a collection of organisms as an abstract layout of a page.
A template has no concrete implementation behind it but is useful for creating a clean interface for interacting with common layouts used within the app, especially in more data driven apps where a single layout is populated by data.
If your app only has a few screens or each screen is different to the others with no re-use then you might find it easier to jump straight to the page level.
Page
Pages represent the highest level in Atomic Design and are the concrete implementations of a template or, if templates aren’t used, a collection of organisms.
While the level is called ‘Page’, for React Native it’s probably easier to think of it as a ‘Screen’ object: with React Navigation a navigator works with a collection of screens, and you should look to mirror that separation of concerns when building page level objects.
One anti-pattern at this level is to navigate to a different screen but store the logic for interacting with the new screen in the page level object of the previous screen, because it’s the only means of accessing that screen. This leaves one page level object dealing with the interactions of two screens.
Another anti-pattern is to use the page level object to track state. This should be handled in the code calling the page level object, so the page object can be kept clean, dealing only with interacting with the lower levels and providing aggregate data when needed.
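A screen level sketch that avoids both anti-patterns might look like this, building on the SignUpForm organism above; the screen names are hypothetical.

```typescript
// A screen object composed of organisms. Navigation hands over to the next
// screen's page object rather than driving two screens from one object, and
// no state is stored on the screen itself.
class SignUpScreen {
  constructor(private driver: WebdriverIO.Browser, private form: SignUpForm) {}

  async signUp(details: { name: string; email: string }): Promise<HomeScreen> {
    await this.form.fillIn(details);
    await this.form.submit();
    // Return the page object for the screen the app navigates to.
    return new HomeScreen(this.driver);
  }
}

// Placeholder for the next screen; it would have its own organisms.
class HomeScreen {
  constructor(private driver: WebdriverIO.Browser) {}
}
```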
Selectors
The best means of ensuring cross-platform automation is to use an accessibility ID, which is available on both iOS and Android, although the prop you use on your React Native components differs: testID on iOS and accessibilityLabel on Android.
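A minimal sketch of how that might look on a component (this snippet isn't from JiffyCV; one common approach is to set both props to the same value so one accessibility ID works on either platform):

```tsx
import React from 'react';
import { Text, TouchableOpacity } from 'react-native';

export const SaveButton = ({ onPress }: { onPress: () => void }) => (
  <TouchableOpacity
    // Picked up as the accessibility ID on iOS.
    testID="save-button"
    // Picked up as the content-desc on Android.
    accessibilityLabel="save-button"
    onPress={onPress}
  >
    <Text>Save</Text>
  </TouchableOpacity>
);
```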
Both platforms support XPath but I don’t see this as a viable selector strategy, as the structure of the UI in Android and iOS is so different that you’d never have an XPath you could use across both platforms.
Reading the accessibility ID from an element is also different between the platforms: iOS stores the accessibility ID under the name attribute and Android stores it under content-desc.
Appium handles platform specific selectors using the android= and ios= prefixes, although I’ve found that more often than not it’s Android I end up needing platform specific selectors for, as iOS has pretty good accessibility labelling on its UI that can be used.
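For illustration, here's roughly how those strategies look in WebdriverIO selector syntax; the element names are examples and the UiSelector/predicate fallbacks are just one way you might use the platform specific prefixes.

```typescript
// Prefer the cross-platform accessibility ID, falling back to a
// platform specific selector only when one isn't available.
async function findSaveButton(driver: WebdriverIO.Browser) {
  // Accessibility ID selector works on both platforms.
  const saveButton = await driver.$('~save-button');
  if (await saveButton.isExisting()) return saveButton;

  return driver.isAndroid
    ? // Android specific selector using the android= prefix and UiSelector.
      driver.$('android=new UiSelector().text("Save")')
    : // iOS specific selector using a predicate string.
      driver.$('-ios predicate string:label == "Save"');
}
```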
In order to abstract out differences in selectors, I found it useful to write a function that takes the Appium driver instance and returns the appropriate value, such as a function that returns the name of the attribute the accessibility ID is stored under; you can then just call element.getAttribute(getAccessibilityId(driver)).
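A sketch of that helper, based on the attribute names mentioned above (the exact implementation in my code base may differ):

```typescript
// Returns the attribute name the accessibility ID is stored under
// on the platform the driver is currently running against.
function getAccessibilityId(driver: WebdriverIO.Browser): string {
  return driver.isIOS ? 'name' : 'content-desc';
}

// Usage, on either platform:
// const id = await element.getAttribute(getAccessibilityId(driver));
```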
Interacting with UI elements
On the whole, interacting with UI elements across iOS and Android is pretty uniform; however, I did encounter one instance where this wasn’t the case when automating a drag and drop interface in my app.
On iOS I was able to use the ‘standard’ approach to defining a more advanced touch pattern, which is to use touchAction and supply it a set of commands: a long press to pick up the item to be dragged, a move to where another element in the list is, and a release to drop it where it needs to be.
On Android, however, this would long press to pick the item up but it wouldn’t drag the item, instead performing the movement action in a split second.
After a lot of reading and troubleshooting I resorted to raising a bug with Appium, who provided resources for understanding how to carry out more controlled touch actions using performActions, which works similarly to touchAction but has you define how an individual finger interacts with the UI, so the same action becomes:
- Finger 1 moves to where the element to be dragged is
- Finger 1 is pressed down
- Finger 1 remains held down in the same place for 2 seconds to be counted as a long press
- Finger 1 moves to where the other element is
- Finger 1 is released
It’s a lot more verbose than touchAction, but I ended up wrapping both approaches in a function that allowed me to pass in the element to be dragged and the element to drag it onto, which gave me a simple interface for the complex action (see the sketch below).
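Here's a sketch of the performActions side of that wrapper, mapping directly onto the five finger steps above; the durations and the way element centres are calculated are assumptions rather than my exact implementation.

```typescript
// Drag one element onto another using the W3C actions that
// driver.performActions() accepts.
async function dragOnto(
  driver: WebdriverIO.Browser,
  source: WebdriverIO.Element,
  target: WebdriverIO.Element,
): Promise<void> {
  // Work out the centre of an element so the touch lands in the middle of it.
  const centre = async (el: WebdriverIO.Element) => {
    const { x, y } = await el.getLocation();
    const { width, height } = await el.getSize();
    return { x: Math.round(x + width / 2), y: Math.round(y + height / 2) };
  };

  const from = await centre(source);
  const to = await centre(target);

  await driver.performActions([
    {
      type: 'pointer',
      id: 'finger1',
      parameters: { pointerType: 'touch' },
      actions: [
        // Finger 1 moves to where the element to be dragged is.
        { type: 'pointerMove', duration: 0, x: from.x, y: from.y },
        // Finger 1 is pressed down.
        { type: 'pointerDown', button: 0 },
        // Held in place for 2 seconds so it registers as a long press.
        { type: 'pause', duration: 2000 },
        // Finger 1 moves to where the other element is.
        { type: 'pointerMove', duration: 1000, x: to.x, y: to.y },
        // Finger 1 is released.
        { type: 'pointerUp', button: 0 },
      ],
    },
  ]);
  await driver.releaseActions();
}
```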
Enabling code re-use with TypeScript decorators
TypeScript has an experimental decorators feature that allows you to extend the prototype of a class to implement shared behaviour.
This is really useful for building atom level objects where you might find yourself implementing behaviours for pressing elements or reading text values more than once.
In order to use a decorator you first define the decorator function, which extends the prototype, and then on your atom level class you use the decorator annotation to have the class decorated with that behaviour.
Decorators I found good for cutting down duplicate implementations were @Pressable, which handled clicking an element, @Readable, which handled getting the text content of an element, and @hasVisibilityDetection, which implemented a number of methods to help with handling whether an element was visible or not.
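As a sketch of the idea (requiring the experimentalDecorators compiler flag), here's roughly what a @Pressable decorator could look like; it assumes the atom exposes driver and selector properties, and the names are illustrative rather than my exact code.

```typescript
// A class decorator that adds a press() method to the decorated atom's prototype.
function Pressable(constructor: Function): void {
  constructor.prototype.press = async function (
    this: { driver: WebdriverIO.Browser; selector: string },
  ): Promise<void> {
    const element = await this.driver.$(this.selector);
    await element.click();
  };
}

// TypeScript doesn't know about methods added at runtime, so the atom also
// declares (via interface merging) the behaviour the decorator provides.
interface ButtonAtom {
  press(): Promise<void>;
}

@Pressable
class ButtonAtom {
  constructor(public driver: WebdriverIO.Browser, public selector: string) {}
}

// Usage: const button = new ButtonAtom(driver, '~save-button'); await button.press();
```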
Next Steps: Running automation in CI
Hopefully if you’ve read this and my previous post in this series you’ve got a good grip on how to get Appium working with Expo locally, but the end goal is to run it in CI as a means of exercising your app as you build it and catching any regressions.
If you’re working with Appium or looking to start using it feel free to leave a comment below with any questions you may have and I’ll try to answer. Give me a follow if you want to read my next posts in this series.