Colin Wren

Testing your React Native App with Expo & Appium — Handling Cross Platform Differences

Testing, JavaScript, Automation, Software Development6 min read

Photo by Hardik Sharma on Unsplash

This is the second post in a series I’ll be making about Appium and how I’m using it to test my React Native app (JiffyCV) within the Expo Client.

You can read my previous post which covers how to set up Appium and how to get it to load your Expo backed React Native app within the Expo Client in order to run automation against it.

In this post I’ll be covering some of the differences I’ve seen between Android and iOS, along with techniques that can be used to abstract these differences away, providing a single interface that makes automation easier and cleaner.

Abstracting away complexity with Atomic Page Objects

Atomic Page Objects are those that follow the Atomic Design principles set out by Brad Frost and are a great way of having a clear separation of concerns in your automation code base.

The benefits of this separation of concerns really shine through when dealing with the different locator strategies you might find yourself using across devices, and when combined with the strategy design pattern you can maintain a clear interface between the atomic levels.

Additionally, building Atomic Page Objects may help expose components in your React Native app that could benefit from refactoring to decouple them further.

There are five levels to Atomic Design; each has its own concerns and interacts with the level below:

Atom

The lowest level. Atoms represent the raw elements of the UI such as text, inputs & buttons and how to interact with them.

Atom level Page Objects don’t track state and, similar to pure functional components in React, their main purpose is to be used by the higher levels.

Atoms are where the majority of the heavy lifting takes place, dealing with clicking buttons, reading text, entering input and any actual interaction with the UI elements.

It’s at the atom level that you’ll model the differences in structure across platforms, such as how a button on iOS is a single entity whereas on Android the text is a sub-object of the button.

If you utilise the strategy pattern to call the appropriate function for the specific platform, you can have an atom level object with a simple interface for performing what can be a complex task.

export class Button {
  constructor(client, element) {
    this.client = client;
    this.element = element;
  }

  async init() {
    // on Android the button's text lives in a sub-element
    this.textEl = await this.element.$('~text');
  }

  async press() {
    await this.element.click();
  }

  async getText() {
    if (this.client.isAndroid) {
      return this.textEl.getText();
    }
    return this.element.getText();
  }
}
The atom level class for a button provides a single interface for getting text regardless of the underlying component structure

Molecule

Molecules represent a collection of low level UI elements combined into a more usable UI element, such as a text input with a label and optional text that informs the user if there’s a validation error.

When we build our UIs we refer to this collection of atom level objects as a single element, and the molecule level object serves to interact with the atoms contained within it while providing a single interface to do so.

export class Tabs {
  constructor(client, element) {
    this.client = client;
    this.element = element;
  }

  async getTabWithText(text) {
    if (this.client.isIOS) {
      return await this.element.$(`~${text}`);
    }
    return await this.element.$(`android=new UiSelector().text("${text}")`);
  }

  async getNavigatorTab() {
    const tabEl = await this.getTabWithText('NAVIGATOR');
    return new Button(this.client, tabEl);
  }

  async getPreviewTab() {
    const tabEl = await this.getTabWithText('PREVIEW');
    return new Button(this.client, tabEl);
  }
}
At the molecule level you create a means of interacting with a collection of atom level objects such as tabs in a tab bar
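As a quick usage sketch (the ~tab-bar locator here is an example, not from the app), the molecule hands back an atom and the test code only ever talks to those two interfaces:

const tabBarEl = await client.$('~tab-bar'); // example locator
const tabs = new Tabs(client, tabBarEl);
const navigatorTab = await tabs.getNavigatorTab();
await navigatorTab.init(); // resolves the Android text sub-element
await navigatorTab.press();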

Organism

Organisms represent a collection of UI elements that would form a high level UI component that the user would interact with. In the case of our text input example, the organism level would be a collection of text inputs used to create a form.

In a similar manner to how the molecule level provides a single interface for interacting with multiple elements, the organism provides an interface for interacting with the page objects contained within, but it can also aggregate information across those page objects to allow for more advanced interactions.

export class DraggableList {
  constructor(client, element) {
    this.client = client;
    this.element = element;
  }

  async init() {
    // getAccessibilityId is the selector helper covered later in this post
    this.testID = await this.element.getAttribute(getAccessibilityId(this.client));
  }

  async performDragiOS(xPos, fromYPos, toHandle) {
    await this.client.touchAction([
      { action: 'longPress', x: xPos, y: fromYPos },
      { action: 'wait', ms: 10000 },
      { action: 'moveTo', element: toHandle },
      { action: 'wait', ms: 3000 },
      { action: 'release' }
    ]);
  }

  async performDragAndroid(xPos, fromYPos, toYPos) {
    await this.client.performActions([{
      type: 'pointer',
      id: 'finger',
      parameters: { pointerType: 'touch' },
      actions: [
        { type: 'pointerMove', duration: 0, x: xPos, y: fromYPos },
        { type: 'pointerDown', button: 0 },
        { type: 'pause', duration: 6000 },
        { type: 'pointerMove', duration: 5000, x: xPos, y: toYPos },
        { type: 'pointerUp', button: 0 }
      ]
    }]);
  }

  async dragItem(fromIndex: number, toIndex: number) {
    const handles = await this.element.$$(`~${this.testID} dragHandle`);
    const toHandle = handles[toIndex];
    const fromHandle = handles[fromIndex];
    const fromLoc = await fromHandle.getLocation();
    const posDelta = this.client.isIOS ? 11 : 30;
    const xPos = Math.abs(fromLoc.x + posDelta);
    const fromYPos = Math.abs(fromLoc.y + posDelta);
    if (this.client.isIOS) {
      await this.performDragiOS(xPos, fromYPos, toHandle);
    } else {
      const toLoc = await toHandle.getLocation();
      const toYPos = Math.abs(toLoc.y + posDelta);
      await this.performDragAndroid(xPos, fromYPos, toYPos);
    }
  }

  async getItems() {
    return this.element.$$(`~${this.testID} item`);
  }

  async getItemAtIndex(index: number) {
    const items = await this.getItems();
    return items[index];
  }
}
At the organism level you can provide interfaces for complex interactions across a number of molecule and atom level objects

Template

Templates represent a collection of organisms as an abstract layout of a page.

A template has no concrete implementation behind it but is useful for creating a clean interface for interacting with common layouts that are used within the app, especially in more data driven apps where there’s a single layout populated by data.

If your app only has a few screens or each screen is different to the others with no re-use then you might find it easier to jump straight to the page level.
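As a rough sketch (the ListScreenTemplate and Header names here are hypothetical, assuming the DraggableList organism from earlier), a template just wires together the organisms that every screen using that layout shares:

export class ListScreenTemplate {
  constructor(client, headerElement, listElement) {
    this.client = client;
    this.header = new Header(client, headerElement); // hypothetical organism
    this.list = new DraggableList(client, listElement); // organism from earlier
  }

  async init() {
    await this.list.init();
  }
}

A page level object for a concrete screen would then compose or extend the template, supplying the elements that screen actually renders.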

Page

Pages represent the highest level in Atomic Design and are the concrete implementations of a template or, if templates aren’t used, a collection of organisms.

While the level is called ‘Page’, for React Native it’s probably easier to think of it as a ‘Screen’ object; similar to how a React Navigation navigator works with a collection of screens, you should look to mirror that separation of concerns when building page level objects.

One anti-pattern at this level is to navigate to a different screen but store the logic for interacting with that new screen in the page level object of the previous screen, because it’s the only means of accessing it. This leaves one page level object dealing with two screens.

Another anti-pattern is to use the page level object to track state. This should be handled in the code calling the page level object, so that the page object can be kept clean, dealing only with interacting with the lower levels and providing aggregate data when needed.

export class Storybook {
  constructor(client) {
    this.client = client;
  }

  async init() {
    this.tabs = new StorybookTabBar(this.client); // organism level object
    await this.tabs.init();
    this.navigator = new StorybookNavigator(this.client); // organism level object
  }

  async goToFilledButtons() {
    await this.tabs.goToNavigator();
    await this.navigator.goToFilledButtons();
    await this.tabs.goToPreview();
  }
}
At the page level you coordinate the lower level objects to interact or get the state of the page

Selectors

The best means of ensuring cross-platform automation is to use an accessibility ID, which is available on both iOS and Android, although the React Native prop that sets it differs: testID on iOS and accessibilityLabel on Android.

Both platforms support XPath, but I don’t see this as a viable selector strategy as the structure of the UI on Android and iOS is so completely different that you’d never have an XPath you could use across both platforms.

Reading the accessibility ID from an element is also different between the platforms: iOS stores the accessibility ID as the name attribute and Android stores it as content-desc.

Appium handles platform specific selectors using the android= and ios= prefixes, although I have found that more often than not it’s Android that I end up having to use platform specific selectors for, as iOS has pretty good accessibility labelling on its UI that can be used.
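For illustration, here’s roughly what a platform specific selector looks like on each side (the locator values are made up, and on iOS I’m showing the XCUITest predicate string strategy as an alternative for the rare case the accessibility ID isn’t enough):

// Android: UiAutomator selector via the android= prefix (example values)
const androidTab = await client.$('android=new UiSelector().text("PREVIEW")');

// iOS: an XCUITest predicate string (example values)
const iosTab = await client.$('-ios predicate string:label == "PREVIEW"');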

In order to abstract out differences in selectors I found it useful to write a function that takes the Appium driver instance and returns the appropriate value, such as a function that returns the name of the attribute the accessibility ID is stored under, as you can then just call element.getAttribute(getAccessibilityId(driver)).

// Used in the React Native app, so needs the Platform module
import { Platform } from 'react-native';

// Returns the appropriate prop to pass to the component based on the platform it's running on
export function getTestIDProp(testID) {
  if (Platform.OS === 'android') {
    return { accessibilityLabel: testID };
  }
  return { testID };
}

// Returns the appropriate attribute to get the accessibility ID from based on the platform the client is targeting
export function getAccessibilityId(client) {
  if (client.isAndroid) {
    return 'content-desc';
  }
  return 'name';
}
These methods can really speed up development as they make it easier to have a consistent means of dealing with accessibility IDs
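For example (using a hypothetical save-button test ID), the same ID is set in the app with one helper and read back in the test with the other:

// In the React Native app
<Button title="Save" {...getTestIDProp('save-button')} />

// In the automation code
const saveButton = await client.$('~save-button');
const accessibilityId = await saveButton.getAttribute(getAccessibilityId(client));
// accessibilityId === 'save-button' on both platforms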

Interacting with UI elements

On the whole, interacting with UI elements across iOS and Android is pretty uniform; however, I did encounter one instance where this wasn’t the case when automating a drag and drop interface in my app.

On iOS I was able to use the ‘standard’ approach to defining a more advanced touch pattern, which is to use touchAction and supply it with a set of commands: a long press to pick up the item to be dragged, a move to where another element in the list is, and a release to drop the item where it needs to be.

On Android, however, this would long press to pick the item up but it wouldn’t drag the item, instead performing the movement action in a split second.

After a lot of reading and trying to troubleshoot I resorted to raising a bug with Appium, who provided resources for understanding how to carry out more controlled touch actions using performActions. This works similarly to touchAction but you define how an individual finger would interact with the UI, so the same action becomes:

  • Finger 1 moves to where the element to be dragged is
  • Finger 1 is pressed down
  • Finger 1 remains held down in the same place for 2 seconds to be counted as a long press
  • Finger 1 moves to where the other element is
  • Finger 1 is released

It’s a lot more verbose than touchAction, but I ended up wrapping both approaches in a function that allowed me to pass in the element to be dragged and the element to perform the drag to, which gave me a simple interface for the complex action.

const xPos = 10;
const fromYPos = 10;
const toYPos = 250;

// iOS
await this.client.touchAction([
  { action: 'longPress', x: xPos, y: fromYPos },
  { action: 'wait', ms: 10000 },
  { action: 'moveTo', x: xPos, y: toYPos },
  { action: 'wait', ms: 3000 },
  { action: 'release' }
]);

// Android
await this.client.performActions([
  {
    type: 'pointer',
    id: 'finger',
    parameters: {
      pointerType: 'touch',
    },
    actions: [
      { type: 'pointerMove', duration: 0, x: xPos, y: fromYPos },
      { type: 'pointerDown', button: 0 },
      { type: 'pause', duration: 6000 },
      { type: 'pointerMove', duration: 5000, x: xPos, y: toYPos },
      { type: 'pointerUp', button: 0 }
    ]
  }
]);
Because of the lack of control that Android gives you with touchAction, you need to use the lower level performActions functionality

Enabling code re-use with TypeScript decorators

TypeScript has an experimental decorators feature, which can be used to extend the prototype of a class to implement shared behaviour.

This is really useful for building atom level objects where you might find yourself implementing behaviours for pressing elements or reading text values more than once.

In order to use a decorator you first define the decorator function, which returns a function that extends the prototype, and then on your atom level class you use the decorator annotation to have the class decorated with that behaviour.

Decorators I found good for cutting down duplicate implementations were @Pressable, which handled clicking an element; @Readable, which handled getting the text content of an element; and @HasVisibilityDetection, which implemented a number of methods to help with handling whether an element was visible or not.

export function Pressable() {
  return function(target) {
    target.prototype.press = async function() {
      await this.element.click();
    }
  }
}

@Pressable()
export class StorybookItem {
  constructor(private driver: BrowserObject, private element: Element) {}
}
Thanks to the decorator the StorybookItem class gains the press() method without you needing to write any new code
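The other decorators mentioned above follow the same pattern; as a minimal sketch of how @Readable could be implemented:

export function Readable() {
  return function(target) {
    // adds a getText() method that reads the element's text content
    target.prototype.getText = async function() {
      return this.element.getText();
    }
  }
}

@Pressable()
@Readable()
export class StorybookItem {
  constructor(private driver: BrowserObject, private element: Element) {}
}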

Next Steps: Running automation in CI

Hopefully if you’ve read this and my previous post in this series you’ve got a good grip on how to get Appium working with Expo locally, but the end goal is to run it as a means of exercising your app as you build it and catching any regressions.

If you’re working with Appium or looking to start using it feel free to leave a comment below with any questions you may have and I’ll try to answer. Give me a follow if you want to read my next posts in this series.