Friday, November 13, 2015

Random Thoughts On Protractor

Killing time awaiting my carpool

The issue

End-to-end automated testing. I won't get into the value of this activity. I will simply say that I am faced with a requirement to implement an end-to-end testing solution that will serve as one of our final release gates. The basic structure is a set of tests that simulate minimal user browser input, track the flow through our code base, and evaluate the responses from the live downstream systems. These tests will pass or fail, but there is no guarantee of stability; they will fail frequently. Failures will not break the release gate. They will simply be an indication that someone needs to review the results in order to evaluate the health of the product. The tests will be limited to one instance each of the user-critical workflows. That means a grand total of somewhere between 9 and 12 tests forever...until such time as a distinctly new feature is introduced.

The Features

After some discussion we came up with a set of options. All of the options meet our core needs, and considering our environment, all team members are expected to have roughly the same skill set as related to each option. Further research and proofs of concept show that all options have the same core features:

Web Driver

All of the options wrap Selenium. All we're doing here is driving a browser. Selenium has significant support in the community.

Logging

Each solution has a different logging mechanism, but each offers the same critical features: control over log format and redirection of output. In addition, regardless of the solution we choose, we'll be implementing an abstraction layer around logging such that the test developer will likely never explicitly log anything.
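To make that abstraction concrete, here is a minimal sketch of what such a layer might look like in JavaScript. The names (`formatEntry`, `TestLogger`) and the format are hypothetical, not our actual design; the point is that format and output redirection live in one place, and test code never calls a logger directly (framework hooks would).

```javascript
// Hypothetical logging abstraction: one place owns the format,
// and redirection is just a different sink function.
function formatEntry(level, message, timestamp) {
  // Consistent format: ISO timestamp, upper-cased level, message.
  return `${timestamp.toISOString()} [${level.toUpperCase()}] ${message}`;
}

class TestLogger {
  constructor(sink) {
    // sink is any function taking a string: console.log, a file
    // stream's write method, or a remote collector.
    this.sink = sink;
  }
  info(message) {
    this.sink(formatEntry('info', message, new Date()));
  }
  error(message) {
    this.sink(formatEntry('error', message, new Date()));
  }
}
```

Swapping `console.log` for a file stream's `write` is all "redirection of output" amounts to under this scheme.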

Test Runner

Again, each solution has a language-specific test runner. Each of them is already in use in our environment, and all engineers should have the ability to work competently in each.

The Options

With the above set of required features defined, we jumped into our technology options. The team self-organized into groups based on areas of interest and built some tools. Since the above features are well known and work equally well, the proofs of concept came together along the lines of the three languages in use in our environment. I won't bother breaking out the details because ultimately they end up being essentially the same thing: a test runner (xUnit.net/JUnit/Jasmine), a web driver (Selenium, as previously mentioned), and a logger. All of the options work equally well because, well, at the core they're the same thing. They all have similar drawbacks because UI automation testing.

In the end we went with a tool called Protractor. Like the others, it has the same benefits and drawbacks. The reasons for the choice were a series of features that come from the fact that Protractor is built on Node.js. Some of the features apply to the Java option as well, but only a subset.
  • Cross platform out of the box
  • Parallel execution with a configuration change
  • Remote debugging with no tools beyond items already on the target machine
  • Small footprint
  • Development support across multiple platforms. Our devs are split between Windows and Mac.
  • Support for multiple IDEs. Our devs work in multiple tools including Visual Studio, Eclipse, and IntelliJ.
  • Fewer dependencies to track. Yes, all options require the same tools (logging, test runner, web driver), but as a created platform there's none of that for us to track.
  • No need to create build scripts. We can accomplish our needs with little more than a file copy.
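The "parallel execution with a configuration change" point really is just a few lines in the Protractor config file. A minimal sketch of a `protractor.conf.js` (the paths, address, and instance count here are illustrative, not our actual setup):

```javascript
// Illustrative protractor.conf.js. Setting shardTestFiles runs each
// spec file in its own browser instance, up to maxInstances at a time;
// removing those two lines runs the suite serially.
exports.config = {
  framework: 'jasmine',
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['specs/*.spec.js'],
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,
    maxInstances: 4
  }
};
```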
The only articulated argument against the Protractor option, in an environment where every engineer is required to have basic skills in C#, Java, and JavaScript, was "We have limited skills in JavaScript." Naturally, that argument didn't fly.

Maybe I'll come back later and talk a little about our design.