Automation of Graphical User Interface Testing

Of late there has been a proliferation of GUI applications, and with it a proliferation of ways to test them. Though there are many libraries that assist in testing a graphical user interface, none of them seem very reliable, for reasons such as a changing UI in a product breaking your test suite. Test suites are not smart enough to understand that the UI has changed, and that the new UI is not a test-case failure but something that itself needs to be tested. Unlike a CLI application, a GUI system has a great many operations that should be tested: even a small application like Microsoft WordPad has over 300 operations, and the number grows rapidly as the system becomes more complex.

The process of verification and validation has three stages:

  • Test case generation
  • Test case execution
  • Test execution reporting

In this post I will talk more about how to run test cases in an automated way. The earliest strategies were migrated and adapted from CLI testing strategies.

Mouse Position Capture

A popular method used in the CLI environment is capture/playback. In this method, one records various user interactions with the user interface, such as mouse clicks and keystrokes. The tester also takes screenshots at various stages of the test case in order to perform validations. The recorded test case is replayed every time the user interface has to be tested, repeating the same actions and validations.
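
To make this concrete, here is a minimal sketch of what a recorded capture/playback test might look like in Python, using the pyautogui automation library together with PIL for image comparison. The coordinates, the typed text, and the baseline image file are illustrative assumptions, not values from any real test suite.

    # Capture/playback sketch: replay recorded interactions, then compare
    # the current screen against a screenshot taken during recording.
    # Coordinates and file names are illustrative assumptions.
    import pyautogui
    from PIL import Image, ImageChops

    def replay_recorded_actions():
        pyautogui.click(120, 240)           # recorded click, e.g. a menu
        pyautogui.click(140, 300)           # recorded click, e.g. a menu item
        pyautogui.write("Hello, world!")    # recorded keystrokes

    def matches_baseline(baseline_path):
        current = pyautogui.screenshot().convert("RGB")
        baseline = Image.open(baseline_path).convert("RGB")
        # getbbox() returns None when the difference image is all black,
        # i.e. when the two screenshots are pixel-identical.
        return ImageChops.difference(current, baseline).getbbox() is None

    replay_recorded_actions()
    assert matches_baseline("baseline_step1.png"), "screen does not match recording"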

Using capture/playback worked quite well in the CLI world, but there are significant problems when one tries to apply it to a GUI-based system. The most obvious problem is that the screen in a GUI system may look different while the state of the underlying system is the same, making automated validation extremely difficult. This is because a GUI allows graphical objects to vary in appearance and placement on the screen: fonts may be different, window colors or sizes may vary, but the system output is basically the same. This would be obvious to a user, but not to an automated validation system. The other problem is that the test cases become difficult to maintain. With the UI changing constantly, debugging the test cases becomes as tedious as testing the UI manually. Facebook, for example, made 16 major changes to a user's home page alone in half a decade, let alone the numerous minor changes that also need to be tested.
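
The brittleness of pixel-exact validation is easy to demonstrate. In the sketch below (pure PIL, with made-up text), the same message is rendered one pixel to the right of the baseline, as a font or layout tweak might do; the system output is identical, yet the bitmap comparison from the previous sketch would report a failure.

    # Why exact pixel comparison is brittle: identical output, shifted by
    # a single pixel, no longer matches the baseline bitmap.
    from PIL import Image, ImageChops, ImageDraw

    def render(text, offset):
        img = Image.new("RGB", (240, 40), "white")
        ImageDraw.Draw(img).text(offset, text, fill="black")
        return img

    baseline = render("File saved successfully", (10, 10))
    shifted = render("File saved successfully", (11, 10))  # 1 px to the right

    # A non-None bounding box means the bitmaps differ, so a naive
    # capture/playback validation would flag this as a test failure.
    print(ImageChops.difference(baseline, shifted).getbbox())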

One approach to solving this problem is to write reusable shared libraries and call those in the test cases, rather than calling basic operations such as MouseClick(“likeButton”) or KeyboardType(“CommentTextArea”) directly. This way, a change in the user interface does not require you to debug every test case, only the shared library or API that uses the user-interface element that changed.
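
Here is a minimal sketch of that layering, assuming hypothetical low-level primitives named after the MouseClick and KeyboardType operations mentioned above; the element IDs, helper names, and print-based stubs are illustrative.

    # Hypothetical low-level primitives, stubbed out for illustration.
    def MouseClick(element_id):
        print(f"click {element_id}")

    def KeyboardType(element_id, text):
        print(f"type {text!r} into {element_id}")

    # Shared library: the only place that knows concrete element IDs.
    class PostActions:
        LIKE_BUTTON = "likeButton"          # update here if the UI changes
        COMMENT_AREA = "CommentTextArea"

        def like_post(self):
            MouseClick(self.LIKE_BUTTON)

        def comment_on_post(self, text):
            MouseClick(self.COMMENT_AREA)
            KeyboardType(self.COMMENT_AREA, text)

    # Test case: expressed entirely in terms of the shared library.
    def test_like_and_comment():
        page = PostActions()
        page.like_post()
        page.comment_on_post("Nice post!")

    test_like_and_comment()

Because the concrete element IDs live in one place, a renamed like button means editing a single constant rather than every test case that exercises it.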

Part 2 Coming Soon
