Dollop is a record-and-playback style automated test tool for Android.

About

The Dollop Test Tool is a desktop application that you can use to create tests of your Android apps.

  • No programming is required, though you can modify the generated Python test module if you like.
  • Convert your manual test to an automated test just by pressing 'Record' and interacting with the tool.
  • Test your apps, third-party apps, web pages - almost everything.
  • Understands taps, drags, long presses, text and keycode input.
  • Image-based - works well with images but also recognizes text.
  • Screenshots are saved of both recording and playback sessions, giving you a better idea of what went wrong during your test.

License and Copyright Information

This project was created and is maintained by Brian Kyckelhahn, an independent mobile app developer in Austin, Texas. You are welcome to contribute to it, and you will be acknowledged in the contributors file for doing so, but you surrender your copyright to Brian when you make contributions.

This project is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported license. That means that you cannot distribute changes you make to this project. You are also prohibited from distributing the binaries, bytecodes, and other derivatives of code that has been changed. However, if you make changes to this project in the course of working at a company and want to distribute those changes to your co-workers at that company so that they can use those changes in the course of their work at that company, Brian has no intention of siccing lawyers on you for doing so.

Installation and Running

Prerequisites

Dollop runs on Windows. With at most a few hours of effort, it should be possible to edit the code to run on Linux. It was originally written for, and ran well on, Linux and was then ported to Windows.

Install the Android SDK. You might want to experiment with the version of the SDK that you use. I have seen good results with version 12, which is available for Windows via an installer or as a zip file. Later versions have sometimes been sluggish while handling the multiple simultaneous requests that the tool makes. You can install multiple SDKs on a single computer without any conflict. Go to Edit > Configuration after launching the tool and specify the full path to adb in the SDK of your choice. The components the tool uses from the SDK include adb, which is in the platform-tools folder, and monkeyrunner.bat, which is in the tools folder.

Install wxPython and OpenCV (with its Python bindings).

Installation

The tool requires very little configuration. If Android Debug Bridge (adb) is not on your system path, point to it in the field provided within the tool at Edit > Configuration > System. If your device has a custom keycode mapping, you can edit the default one at Edit > Configuration > Device. The tool communicates with your device using adb over USB. Nothing needs to be installed on your device. So that adb can communicate with it, however, you'll need to enable USB debugging on your device: navigate to something like Settings > Applications > Development and enable "USB Debugging" (the exact path varies by device and Android version). Also, open the "USB Connection" dialog from the notification bar and select "PC Mode" or something similar.

To Run

To run the tool:
cd Dollop/src
python gui.py

Recording and Playing Tests

  1. With your device already connected by USB to the workstation on which the Dollop Test Tool is installed, start the tool. If you have not already done so, the tool will ask you to tap or tap-and-hold (i.e. long press) on different corners of the screen. It will also ask you to point it to the monkeyrunner.bat file of the Android SDK.

  2. Press the record button, the one with the red circle on it. While you're recording, this button will stay depressed until you press it again to stop recording.
  3. A dialog will appear, asking you to provide a name and location for the Python test module that will be automatically created by the tool.
  4. After finishing with the dialog, you have begun recording the test. Interact with your phone or tablet through the tool to create test events, which include taps, drags, long presses, text verification, waits, and text and keycode entries. Here is how to create each of these events:

    • Tap: to tap on something, click your cursor within the device image. A bitmap centered on your cursor point (of maximum size 60 pixels square) will be used as the tap target when the test is played back. If the target image or a similar one is found during playback, that image will be tapped. If there is text near the cursor when the test is being recorded, it may be identified, and the tool will look for it during playback. If the target image and any associated text are not found during playback, the test fails.

    • Drag: dragging is similar to tapping. However, during playback, if the target image is not found, the tool will still play the drag back, with the point of initial drag contact being in the same general region as the original touch. Unlike taps, the test will not fail if the image around the original drag touch cannot be found.

    • Type: click near the device screen image to be sure that that GUI window is active, and then begin typing with your keyboard. There will be a small delay as the tool buffers input. adb sometimes skips the first few characters of input, so it's best to type some characters, delete them with backspace, and then type the text you want to send.

    • Send keycodes: use the drop-down list to choose the keycode you want to send, and then press the 'Send' button.
    • Verify text: to verify that particular text appears somewhere on the screen, use the text field at left. Note that the third-party OCR software the tool uses is not very accurate, though you can edit the test script to specify that the text match does not have to be perfect. See the API for details regarding text verification.

    • Wait: to insert a point in the test being recorded where the tool will wait before continuing, use the provided text field.
  5. Un-press the 'Record' button to stop recording.
  6. To keep the tool responsive, the test is not completely processed during recording. Go to Test > Load Test and load the test you just finished recording. This will process the test and make it available for running. Simple tests are processed in a few seconds; very large tests may require a few minutes.

  7. Open the processed Python test module in a text editor. The tool makes choices for method parameters that you can override; see the API. Ensure that any text the OCR software found near your taps and drags is correct. The OCR software will usually produce the same text for similar input images, so you may want to rely on the image alone if the text found is not what you want.

  8. You can continue to modify the Python test module in any way you like. Add control structures such as for loops, add new routines, import modules, etc.
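
For illustration, here is a minimal sketch of what such an edit might look like. The image path and text are invented, the scaffolding the tool generates around the recorded steps is omitted, and only the device methods documented in the API section below are used:

# Hypothetical edit of a recorded test: repeat the recorded flow three times.
for attempt in range(3):
    device.tap(targetImagePath="images/new_note_button.png")  # invented image path
    device.keyEvent(['aaa', 8, 8, 8, 'hello', 13])  # garbage chars, backspaces, text, return
    device.verifyText('hello')
    device.wait(2)  # give the app time to settle before the next pass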

API

The tool automatically converts your interaction with your device into a test script in the form of a Python module. It is not necessary to learn this API to use the tool, but it is documented here for those who want to edit test scripts.

device.tap(targetImagePath="", characters=None, chinBarImagePath=None, maxWaitTime=11, dragSearchPermitted=True)

Searches the device screen image, which is constantly being retrieved from the device during playback, for an image within it matching that at targetImagePath and containing characters (if characters is provided). This method appends the image of the chin bar (which is created by the tool by default), if it is provided, to the device screen image to create the image that the tool searches. A chin bar is an extension of the LCD that does not display pixels, but does display icons for actions, such as those for Menu, Home, Back, and Search. Most phones do not have chin bars; the Droid 2 is an example of one that does. If the image is found, the tool taps it in its center. If it is not found, a drag search is conducted, which involves attempting to drag the screen forward, and, later, back, to search for the image. (If a drag upward was performed before this tap, forward, here, means up; otherwise it means down. The drag search continues the motion of any preceding drag.) maxWaitTime is the number of seconds that the tool will search for the target image before conducting a drag search or indicating failure.
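
For example, a call like the following (the image path and button text are hypothetical) taps the center of the first match of the target image, conducting a drag search if no match appears within 11 seconds:

device.tap(targetImagePath="images/ok_button.png", characters="OK")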

device.longPress(targetImagePath="", characters=None, chinBarImagePath=None, maxWaitTime=11, dragSearchPermitted=True)

This method is just like tap(), though it presses on the target, if found, for long enough to cause the device to interpret the press as a long press, rather than a tap. As you may know, a long press often causes the device to respond differently than it would to a tap, such as by popping up a menu, rather than launching an application.
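
For instance, a call like the following (the image path is hypothetical) might be used to pop up a context menu on a list row:

device.longPress(targetImagePath="images/message_row.png")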

device.drag(targetImagePath=None, dragRightUnits=None, dragDownUnits=None, dragStartRegion=None, characters=None, waitForStabilization=False)

drag is like tap() in that it searches the device screen image for the smaller, target image and characters, if provided, but it will proceed immediately with the drag even if the target is not found. For this method, the screen is conceptually divided into 9 sections, with three divisions across and three vertically. dragStartRegion is a two-element tuple, the first element being an integer representing the (1-based) index of the column in this conceptual matrix, and the second being the integer representing the index of the row. In other words, dragStartRegion is (x, y), with the coordinate system centered at the upper left corner of the screen, and with x increasing to the right and y increasing down. (Downward-increasing y is a convention often used in the image processing field.) For example, (1, 3) represents the section of your screen taking up about 1/9th of the total screen area and located in the bottom left corner of the screen.

If it is critical that the tool drag the item, and not just the region of the screen where the item was found during testing, and you find that the tool is not dragging the item, it may be because the tool is designed to be quick in conducting drags: if the screen has just changed and the update has not yet made it to the tool, the tool will be working with an old image. One way to make your test behave the way you want is to put in a sleep before the drag, to ensure that the update gets to the tool. Another is to put a tap on the drag's target image just before the drag.
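
For example, a call like the following (the image path is hypothetical, and the region uses the (column, row) convention described above) drags the matched item or, if no match is found, starts the drag in the bottom-left region of the screen:

device.drag(targetImagePath="images/contact_row.png", dragStartRegion=(1, 3))
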
device.keyEvent(keycodes)

This method sends key events to the device. You could conceive of three types of key events that can be sent: printing characters, such as 'a'; non-printing characters, such as the return command; and special device keycodes, such as HOME, which tells the device to go to the home screen. For this method, printing characters are placed in quotes, non-printing characters are represented by their ASCII code, and special device keycodes are specified using the NEGATIVE_KEYCODES dictionary that is written to every test script by the tool. Note that we recommend sending three garbage characters and then immediately removing them with backspace to counteract the problem that many devices (or perhaps adb) have of ignoring the first characters sent after a period of inactivity. For example, to send the string hello to your device, press enter, and then navigate the device to the home screen, do the following:

device.keyEvent(['aaa', 8, 8, 8, 'hello', 13, NEGATIVE_KEYCODES['HOME']])

Here, 'aaa' represents the garbage characters, 8 is the ASCII code for backspace, and 13 is the ASCII code for return. These commands are sent in sequence as fast as adb and the device allow.

device.verifyText(textToVerify, maxWaitTime=11, dragSearchPermitted=True, isRE=False, maximumAcceptablePercentageDistance=0)

This method searches the entire device screen image for textToVerify, which is a literal string if isRE is False, or a string specifying a regular expression if isRE is True. Characters are produced from the device screen image using optical character recognition (OCR). Because the OCR software used is frequently off by some amount, maximumAcceptablePercentageDistance is provided to allow you to specify the accuracy you require. If maximumAcceptablePercentageDistance is 0, the OCR software must find a string matching textToVerify exactly. Otherwise, maximumAcceptablePercentageDistance specifies the maximum Levenshtein distance between any string found by OCR and textToVerify. When dragSearchPermitted is True, the tool drags to find textToVerify just as it does in tap().
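
For example (the strings and threshold are illustrative):

device.verifyText('Welcome back', maximumAcceptablePercentageDistance=10)  # tolerate small OCR errors
device.verifyText(r'\d+ unread', isRE=True)  # regular-expression match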

device.wait(seconds)

This method calls time.sleep(seconds).
