Monday 21 January 2013

Automation Frameworks


A framework is an integrated system or infrastructure that sets the rules of automation for a specific product. This system integrates the function libraries, test data sources, object details and various reusable modules. These components act as small building blocks which need to be assembled to represent a business process. The framework provides the basis of test automation and simplifies the automation effort. Let's discuss the types of frameworks used in automation.

Linear Framework:-

  • The capture/playback approach means that tests are performed manually while the inputs and outputs are captured in the background.
  • During subsequent automated playback, the script repeats the same sequence of actions to apply the inputs and compare the actual responses to the captured results; differences are reported as errors.
  • Capture/playback is available from almost all automated test tools, although it may be implemented differently. A short sketch of what such a recorded script might look like is shown below.
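To make the idea concrete, here is a minimal sketch of what a recorded linear script might look like, written here with Selenium WebDriver in Python. The URL, element IDs and values are hypothetical stand-ins for whatever the operator did during capture, hard-coded exactly as a capture/playback tool would store them.

    # Minimal sketch of a linear (capture/playback style) script.
    # Everything below is hard-coded: the hypothetical URL, element IDs,
    # the input values and the expected result captured during the manual run.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("http://example.com/accounts/new")                        # hypothetical URL

    driver.find_element(By.ID, "account_name").send_keys("John Smith")   # fixed input
    driver.find_element(By.ID, "account_type").send_keys("Savings")      # fixed input
    driver.find_element(By.ID, "save").click()

    # The captured expected output; any difference is reported as an error.
    assert driver.find_element(By.ID, "status").text == "Account created"

    driver.quit()

Note that the script handles exactly one account: entering another account means recording (or copying and editing) the whole sequence again.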

Advantages:-
  • The main advantage of this approach is that it requires the least training and setup time. The learning curve is relatively short, even for non-technical test operators.
  • Tests need not be developed in advance, as they can be defined on the fly by the test operator. This allows experienced users to contribute to the test process on an ad hoc basis.
  • This approach also provides an excellent audit trail for ad hoc or usability testing; in the event an error occurs, the precise steps that created it are captured for later diagnosis or reproduction.
Disadvantages:- There are, however, several disadvantages of capture/playback, many of which have led to more advanced and sophisticated test tools, such as scripting languages.

  • Except for reproducing errors, this approach offers very little leverage in the short term; since the tests must be performed manually in order to be captured, there is no real leverage or time savings. In the sketch above, for example, the entire sequence of steps must be repeated for each account to be added, updated or deleted.
  • Also, because the application must already exist and be stable enough for manual testing, there is little opportunity for early detection of errors; any test that uncovers an error will most likely have to be recaptured after the fix in order to preserve the correct result.
  • Unless an overall strategy exists for how the functions to be tested will be distributed across the test team, the probability of redundancy and/or omission is high: each individual tester will decide what to test, resulting in some areas being repeated and others ignored. Assuring efficient coverage means you must plan for traceability of the test scripts to functions of the application so you will know what has been tested and what hasn't.
  • It is also necessary to give overall consideration to what will happen when the tests are combined; this means you must consider naming conventions and script development standards to avoid the risk of overwriting tests or the complications of trying to execute them as a set.
  • Although subsequent replay of the tests may offer time savings for future releases, this benefit is greatly curtailed by the lack of maintainability of the test scripts. Because the inputs and outputs are hard-coded into the scripts, relatively minor changes to the application may invalidate large groups of test scripts. For example, changing the number or sequence of controls in a window will impact any test script that traverses it, so a window which has one hundred test transactions executed against it would require one hundred or more modifications for a single change.
  • This issue is exacerbated by the fact that the test developer will probably require additional training in the test tool in order to be able to locate and implement necessary modifications. Although it may not be necessary to know the script language to capture a test, it is crucial to understand the language when making changes. As a result, the reality is that it is easier to discard and recapture scripts, which leads to a short useful life and a lack of cumulative test coverage.
  • Note also that there is no logic in the script to be sure that the expected window is in fact displayed, or that the cursor or mouse is correctly positioned before input occurs - all the decisions about what to do next are made by the operator at the time of capture and are not explicit in the script. This lack of any decision-making logic in the scripts means that any failure, regardless of its true severity, may abort the test execution session and/or invalidate all subsequent test results. If the application does not behave precisely as it did when the test was captured, the odds are high that all following tests will fail because of an improper context, resulting in many duplicate or false failures which require time and effort to review.
Data Driven Framework:-

  • The difference between classic capture/playback and Data-Driven is that in the former case the inputs and outputs are fixed, while in the latter the inputs and outputs are variable.
  • This is accomplished by performing the test manually, then replacing the captured inputs and expected outputs with variables whose corresponding values are stored in data files external to the script.
  • The sequence of actions remains fixed and is stored in the test script. Data-Driven is available from most test tools that employ a script language with variable data capability, but may not be possible with pure capture/playback tools. A minimal sketch of a data-driven script follows below.
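As an illustration, the same account-creation flow could be rewritten in data-driven style roughly as follows. This is only a sketch: the file name accounts.csv, its columns, the URL and the element IDs are assumptions for the example. The fixed sequence of actions stays in the script, while the inputs and expected outputs come from the external file.

    # Sketch of a data-driven script: the actions are fixed, the data is external.
    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()

    # accounts.csv (hypothetical) holds one test case per row: name,type,expected_status
    with open("accounts.csv", newline="") as f:
        for row in csv.DictReader(f):
            driver.get("http://example.com/accounts/new")                 # hypothetical URL
            driver.find_element(By.ID, "account_name").send_keys(row["name"])
            driver.find_element(By.ID, "account_type").send_keys(row["type"])
            driver.find_element(By.ID, "save").click()

            # Compare the actual response with the expected value from the data file.
            actual = driver.find_element(By.ID, "status").text
            if actual != row["expected_status"]:
                print(f"FAIL for {row['name']}: expected {row['expected_status']}, got {actual}")

    driver.quit()

Adding a new test case now means adding a row to the data file, not touching the script.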

Advantages:-
  • Data-Driven allows test cases - the inputs and expected outputs - to be created in advance of the application. The software does not have to be stable enough to operate before test cases can be prepared as data files; only the actual script has to await the application.
  • Because they are stored as data, the sets of inputs and outputs can be entered through a spreadsheet, word processor, database or other familiar utility, then stored for later use by the test script.
  • Familiarity with, or even use of, the test tool is not required to create test cases. Data-Driven provides leverage in the sense that a single test script can be used to apply many test cases, and test cases can be added later without modifications to the test script. Notice that a data-driven script like the sketch above could be used to enter one or one thousand different accounts, while the capture/playback script enters only one. Cut and paste facilities of the selected utility can be used to rapidly “clone” and modify test cases, providing further leverage.
  • This approach reduces required maintenance by not repeating the sequence of actions and logic to apply each test case; therefore, should the steps to enter an account change, they would have to be changed only one time, instead of once for each account. A sample of the kind of external data file involved is shown below.
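For illustration, the external data file referred to above might look something like this; the file name, columns and values are hypothetical, and each row is one test case applied by the same script:

    name,type,expected_status
    John Smith,Savings,Account created
    Jane Doe,Checking,Account created
    ,Savings,Name is required
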
Disadvantages:- The main disadvantage of this framework is the requirement for additional expertise in the test tool and in data file management.
  • In order to convert the script to process variable data, at least one of the testers must be proficient in the test tool and understand the concept of variable values, how to implement external data files, and programming logic such as if/then/else expressions and processing loops.
  • Similarly, the test case data will require someone with expertise in creating and managing the test files; large numbers of data elements in a test case may lead to long, unwieldy test case records and awkward file management. Depending on the utility used, this may require expertise in creating and manipulating spreadsheet macros, database forms or word processor templates, then exporting the data into a file compatible with the test tool.
Keyword/Table Driven Framework:-

  • Table-Driven differs from Data-Driven in that the sequence of actions used to apply and evaluate the inputs and outputs is also stored external to the script. This means that the test does not have to be performed manually at all. The inputs and expected outputs, as well as the sequence of actions, are created as data records; the test scripts are modular, reusable routines that are executed in a sequence based on the test data.
  • The logic to process these records and respond to application results is embedded in these routines. Also, these script routines are reusable across applications. They are completely generic in that they are based on the type of field or object and the action to be performed. The exact instance of the field or object is defined in the Application Map and is provided to the routine when the test executes.
  • A keyword in its simplest form is an aggregation of one or more atomic test steps. The keyword-driven testing methodology divides test creation into two stages:-
  • Planning Stage:- Preparing the test resources and testing tools.
  • Implementation Stage:- The implementation stage differs depending on the tool or framework. Often, automation engineers implement a framework that provides keywords like “check” and “enter”. Testers or test designers (who do not need to know how to program) write test cases based on the keywords defined in the planning stage that have been implemented by the engineers. The test is executed using a driver that reads the keywords and executes the corresponding code. Other methodologies use an all-in-one implementation stage: instead of separating the tasks of test design and test engineering, the test design is the test automation. Keywords such as “edit” or “check” are created using tools in which the necessary code has already been written. This removes the necessity for extra engineers in the test process, because the implementation for the keywords is already a part of the tool. Examples include GUIdancer and QTP. A bare-bones sketch of a keyword table and its driver follows below.
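As a rough illustration of the mechanics, the sketch below shows a bare-bones keyword table and the driver that dispatches each row to a reusable routine. The keywords, element IDs, URL and values are all assumptions for the example; a real framework would read the table from a spreadsheet or database rather than an in-line list, and would look up objects through an Application Map.

    # Bare-bones sketch of a keyword/table-driven driver.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # The "table" a test designer would edit: (keyword, target object, value).
    test_table = [
        ("open",  None,           "http://example.com/accounts/new"),   # hypothetical URL
        ("enter", "account_name", "John Smith"),
        ("enter", "account_type", "Savings"),
        ("click", "save",         None),
        ("check", "status",       "Account created"),
    ]

    driver = webdriver.Chrome()

    # Reusable, generic routines keyed by keyword -- the script library.
    keywords = {
        "open":  lambda target, value: driver.get(value),
        "enter": lambda target, value: driver.find_element(By.ID, target).send_keys(value),
        "click": lambda target, value: driver.find_element(By.ID, target).click(),
        "check": lambda target, value: print(
            "PASS" if driver.find_element(By.ID, target).text == value else f"FAIL on {target}"
        ),
    }

    # The driver walks the table and executes the routine for each keyword.
    for keyword, target, value in test_table:
        keywords[keyword](target, value)

    driver.quit()

If a control changes, only its entry in the table (or the object map) needs updating; the routines themselves stay untouched.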

Advantages:-
  • The main advantage to this approach is that it provides the maximum maintainability and flexibility.
  • Test cases can be constructed much earlier in the development cycle, and can be developed as data files through utilities such as spreadsheets, word processors, databases, and so forth. The elements of the test cases can be easily modified and extended as the application itself becomes more and more defined. The scripts that process the data can be created as soon as the application objects and their methods are known.
  • By constructing the test script library out of modular, reusable routines, maintenance is minimized. The script routines need not be modified unless a new type of object is added that supports different methods; adding new windows or controls simply requires additions to the tool’s variable file or GUI map, where the objects are defined.
  • Another key advantage of Table-Driven is that the test library can be easily ported from one application to another. Since most applications are composed of the same basic components - screens, fields and keys for character-based applications; windows and controls for graphical applications - all that is needed to move from one to another is to change the names and attributes of the components. Most of the logic and common routines can be left intact.
  • This approach is also portable between test tools. As long as the underlying script language has the equivalent set of commands, test cases in this format could be executed by a script library created in any tool. This means you are free to use different tools for different platforms if necessary, or to migrate to another tool.
  • Because logic is defined and stored only once per method, and there is substantial implied logic for verifying the context and state of the application to assure proper playback, individual testers may create test cases without understanding logic or programming. All that is needed is an understanding of the application components, their names, and the valid methods and values which apply to them.

Disadvantages:-
  • In the Table-Driven approach, the amount of training and setup time required to implement it is high.
  • In order to properly design and construct the script library, extensive programming skills and test tool expertise are needed. Programming skills are needed to implement a library of modular scripts and the surrounding logic that ties them together and uses external data to drive them.
  • Because all test assets are maintained as data, it is essential to have a means of enforcing conventions and managing the data. While this can be done in spreadsheets, a database is far more powerful.

Hybrid Framework:- 
  • The hybrid framework is what most frameworks evolve into over time and multiple projects.
  • The most successful automation frameworks generally accommodate both keyword and data driven frameworks.
  • This allows data driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword driven architecture.
  • The framework utilities can make the data driven scripts more compact and less prone to failure than they otherwise would have been. The utilities can also facilitate the gradual and manageable conversion of existing scripts to keyword driven equivalents when and where that appears desirable.
  • On the other hand, the framework can use scripts to perform some tasks that might be too difficult to re-implement in a pure keyword driven approach, or where the keyword driven capabilities are not yet in place. A brief sketch of this combination follows below.
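A brief sketch of the hybrid idea, under the same assumptions as the earlier examples (hypothetical URL, element IDs and accounts.csv file): keyword-style utility routines from the framework library do the GUI work, while a data-driven loop feeds them external test records.

    # Sketch of a hybrid framework: keyword-style utilities plus a data-driven loop.
    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()

    # Keyword-style routines that would normally live in the shared framework library.
    def enter(field_id, value):
        driver.find_element(By.ID, field_id).send_keys(value)

    def click(button_id):
        driver.find_element(By.ID, button_id).click()

    def check(field_id, expected):
        actual = driver.find_element(By.ID, field_id).text
        print("PASS" if actual == expected else f"FAIL on {field_id}: got {actual}")

    # Data-driven loop kept compact by delegating all GUI handling to the keywords.
    with open("accounts.csv", newline="") as f:                # hypothetical data file
        for record in csv.DictReader(f):
            driver.get("http://example.com/accounts/new")      # hypothetical URL
            enter("account_name", record["name"])
            enter("account_type", record["type"])
            click("save")
            check("status", record["expected_status"])

    driver.quit()
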
Modular Framework:-
  • The test script modularity framework requires the creation of small, independent scripts that represent modules, sections, and functions of the application-under-test.
  • These small scripts are then used in a hierarchical fashion to construct larger tests, realizing a particular test case.
  • Of all the frameworks, this one should be the simplest to grasp and master. It is a well-known programming strategy to build an abstraction layer in front of a component to hide the component from the rest of the application.
  • This insulates the application from modifications in the component and provides modularity in the application design. The test script modularity framework applies this principle of abstraction or encapsulation in order to improve the maintainability and scalability of automated test suites. A short sketch of this approach is shown below.
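A short sketch of the modular idea, again with hypothetical URLs, element IDs and credentials: each small script wraps one section of the application behind a function, and a higher-level test case is assembled from those modules.

    # Sketch of test script modularity: small scripts composed into a larger test.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def login(driver, user, password):
        # Module for the login screen; other scripts never touch its fields directly.
        driver.get("http://example.com/login")                  # hypothetical URL
        driver.find_element(By.ID, "user").send_keys(user)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "submit").click()

    def add_account(driver, name, acct_type):
        # Module for the account-entry section of the application.
        driver.get("http://example.com/accounts/new")           # hypothetical URL
        driver.find_element(By.ID, "account_name").send_keys(name)
        driver.find_element(By.ID, "account_type").send_keys(acct_type)
        driver.find_element(By.ID, "save").click()

    def test_create_savings_account():
        # Higher-level test case built hierarchically from the modules above.
        driver = webdriver.Chrome()
        try:
            login(driver, "tester", "secret")
            add_account(driver, "John Smith", "Savings")
            assert driver.find_element(By.ID, "status").text == "Account created"
        finally:
            driver.quit()

    test_create_savings_account()

If the login screen changes, only the login module needs updating; every test case built on top of it continues to work unchanged.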
