5 Lessons Learned In Automated Browser Testing

We really love testing our tools and products to verify they're working correctly and efficiently. We've been expanding our set of automated browser tests for the addthis.com website and our suite of publisher tools, using the Selenium browser automation framework, of which we are big fans. Our philosophy has been to automate the simple workflows so that QA can focus on the more intricate ones. Here are five guidelines we've learned for writing more effective browser automation tests.

0. Use Simple CSS Selectors to Traverse the DOM

This tip is numbered 0 because we are computer scientists, but also because it's a no-brainer. Many introductory tutorials stress this point, and you should pay attention: shorter CSS selectors are always preferable to longer ones. Select by id whenever possible, by class as a secondary alternative, and by a complex parent/child relationship only if necessary. Beware of older Selenium tutorials that use XPath selectors; don't copy that style. On rare occasions I will cheat when I need to select the parent of some child element that has an id, using a CSS selector to select the child and then an XPath selector ('..') to select the parent. But this is a code smell: consider adding more ids to your DOM if you find yourself using a lot of XPath selectors.
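As a rough illustration, the order of preference looks something like the following sketch (the ids and class names are invented for the example):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class SelectorExamples {
        public static void locate(WebDriver driver) {
            // Best: select by id.
            WebElement byId = driver.findElement(By.id("share-button"));

            // Next best: select by class.
            WebElement byClass = driver.findElement(By.cssSelector(".share-button"));

            // Only if necessary: a complex parent/child relationship.
            WebElement byRelationship = driver.findElement(By.cssSelector("#toolbar > button"));

            // The cheat described above: CSS down to a child that has an id, then
            // XPath '..' back up to its parent. Treat this as a code smell.
            WebElement parent = driver.findElement(By.id("share-button"))
                    .findElement(By.xpath(".."));
        }
    }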

1. Webdriver Commands May Return Before the Operation has Completed

Many webdriver operations will return a response while DOM changes are still occurring on the page. While this is a totally reasonable behavior of the API, it is an important fact to keep in mind. Two operations with stronger guarantees are WebElement#click() and WebDriver#get().
WebElement#click()
“If this causes a new page to load, this method will attempt to block until the page has loaded.”
WebDriver#get()
“This is done using an HTTP GET operation, and the method will block until the load is complete.”
One gotcha is that WebElement#click() has much stronger semantics than Actions#click(WebElement element). This has led to some really hard-to-debug race conditions. See the next item for how we resolved that issue.
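When a command can return early, an explicit wait bridges the gap. Here is a minimal sketch of that pattern (the element ids are hypothetical):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class ExplicitWaitExample {
        public static void openMenu(WebDriver driver) {
            // This click may return while the menu is still being added to the DOM.
            driver.findElement(By.id("menu-toggle")).click();

            // Block until the DOM change we care about has actually happened.
            WebElement menu = new WebDriverWait(driver, 10)
                    .until(ExpectedConditions.visibilityOfElementLocated(By.id("menu")));
            menu.findElement(By.id("menu-item-settings")).click();
        }
    }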

2. Not All Operations Are Supported by All Drivers

You start writing some tests. You run them on your local machine and everything seems to be in working order. Then you start running them on different platforms with different web drivers and (surprise!) not all operations are supported by all drivers. Our solution to this problem has been twofold. First, we try to keep the actions performed in our tests fairly simple: basic mouse clicks and filling in text boxes. Second, we've written our tests so that any potentially unsupported operations are optional. In other words, the tests are still valid when the optional operations are not performed.
We've written the AvailableActions class to abstract these optional operations. Note that this implementation also has a robustClick() method, which bypasses the issue with Actions#click(WebElement element) by using WebElement#click() instead.
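The real class is internal to our codebase, but a minimal sketch of the idea might look like the following (the method bodies, the optionalHover() example, and the Runnable-based queue are illustrative assumptions, not our actual implementation):

    import java.util.ArrayList;
    import java.util.List;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebDriverException;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.interactions.Actions;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class AvailableActions {
        private final WebDriver driver;
        private final List<Runnable> sequence = new ArrayList<Runnable>();

        public AvailableActions(WebDriver driver) {
            this.driver = driver;
        }

        // robustClick: prefer WebElement#click(), which blocks if the click loads
        // a new page, over Actions#click(WebElement), which does not.
        public AvailableActions robustClick(final WebElement element) {
            sequence.add(new Runnable() {
                public void run() {
                    element.click();
                }
            });
            return this;
        }

        // waitAction: pause the sequence until the element becomes visible.
        public AvailableActions waitAction(final WebElement element, final long timeoutSeconds) {
            sequence.add(new Runnable() {
                public void run() {
                    new WebDriverWait(driver, timeoutSeconds)
                            .until(ExpectedConditions.visibilityOf(element));
                }
            });
            return this;
        }

        // An optional operation: some drivers cannot perform native mouse
        // movement, so a test must stay valid when this step is skipped.
        public AvailableActions optionalHover(final WebElement element) {
            sequence.add(new Runnable() {
                public void run() {
                    try {
                        new Actions(driver).moveToElement(element).perform();
                    } catch (WebDriverException e) {
                        // The driver couldn't hover; skip the optional step.
                    }
                }
            });
            return this;
        }

        // Run the single, uninterrupted action sequence for the test.
        public void perform() {
            for (Runnable step : sequence) {
                step.run();
            }
        }
    }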

3. A Disciplined Use of Retrying Operations is OK

Traditionally, tests that fail randomly usually indicate that you're writing your tests incorrectly. In those instances I normally advocate adding optional causal dependencies to your APIs, perhaps trading performance for determinism, so that these dependencies are only used during testing. But a different strategy is necessary when writing automated browser tests. Interacting with a WebElement will throw a StaleElementReferenceException if the element is no longer attached to the DOM, and the challenge when testing a web application is that your application is very likely to be changing the DOM on many user interactions.
We use two guidelines to write a good test. The first is to write each test in two phases. The first phase retrieves references to all the DOM elements you need to interact with or test; this requires that all elements be present in the DOM at the same time, but not necessarily visible. The second phase uses AvailableActions to construct one uninterrupted sequence of actions per test. We've found that having exactly one action sequence per test improves reliability. Use the AvailableActions#waitAction() method if you need to wait for an element to become visible before interacting with it.
The second guideline is, sadly, to wrap your test in a retry loop. Refactor your tests to isolate these pain points and use the retry loop only where necessary. An example of this design is shown in this gist. Using this approach we have had success maintaining a near-zero false positive rate without damaging the true positive rate.
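For illustration, a sketch of a test following both guidelines might look like this (this is not the gist itself; the element ids, the use of the AvailableActions sketch from item 2, and the retry count are all invented for the example):

    import org.openqa.selenium.By;
    import org.openqa.selenium.StaleElementReferenceException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class ShareMenuTest {
        private static final int MAX_ATTEMPTS = 3;

        public static void testShareMenuOpens(WebDriver driver) {
            StaleElementReferenceException lastFailure = null;
            for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
                try {
                    // Phase 1: look up every element the test will touch. They must
                    // all be in the DOM now, but they do not have to be visible yet.
                    WebElement shareButton = driver.findElement(By.id("share-button"));
                    WebElement shareMenu = driver.findElement(By.id("share-menu"));

                    // Phase 2: one uninterrupted action sequence for the whole test.
                    new AvailableActions(driver)
                            .robustClick(shareButton)
                            .waitAction(shareMenu, 10)
                            .perform();
                    return; // success
                } catch (StaleElementReferenceException e) {
                    // The page re-rendered underneath us; retry the whole test.
                    lastFailure = e;
                }
            }
            throw lastFailure;
        }
    }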

4. Consider Xvfb for Headless Testing

There are some neat projects in the open source community for headless Selenium testing, such as the PhantomJSDriver and the HtmlUnitDriver. We've had some difficulty getting these drivers to run all of our automated tests (see item 2 above). A very simple alternative is to run a virtual framebuffer such as Xvfb; then you can automate your tests on a headless box to your heart's content. Xvfb is a piece of cake to configure.
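As a sketch of how little setup is involved (assuming a Selenium 2.x-era Java API and a machine with Xvfb installed; the display number and screen geometry are arbitrary):

    import java.io.IOException;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxBinary;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.firefox.FirefoxProfile;

    public class XvfbExample {
        public static void main(String[] args) throws IOException {
            // Start a virtual framebuffer; equivalent to running
            // "Xvfb :99 -screen 0 1280x1024x24" from a shell. In practice you may
            // start Xvfb once per machine or per CI job rather than per test run.
            Process xvfb = new ProcessBuilder("Xvfb", ":99", "-screen", "0", "1280x1024x24")
                    .inheritIO()
                    .start();

            // Point Firefox at the virtual display instead of a real one.
            FirefoxBinary binary = new FirefoxBinary();
            binary.setEnvironmentProperty("DISPLAY", ":99");
            WebDriver driver = new FirefoxDriver(binary, new FirefoxProfile());

            try {
                driver.get("http://www.addthis.com");
                System.out.println(driver.getTitle());
            } finally {
                driver.quit();
                xvfb.destroy();
            }
        }
    }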
