
py.saunter's Introduction

Saunter is an opinionated automation framework for use with the Selenium RC and WebDriver libraries. It is designed to remove a lot of the overhead and cruft that hinders teams when they first start out with automation. For documentation around Saunter see http://element34.ca/products/saunter.

Note: The configuration structure changed with 2.0.0. Either pin your install to the old version, or see the link above for a brief description of the change. In a nutshell, .ini has been replaced with .yaml.

py.saunter's People

Contributors

adamgoucher, makkers, santiycr


py.saunter's Issues

.xml output from failing test doesn't provide line number

See http://i.imgur.com/R6lD7.gif for a snapshot of this issue. The bottom browser is displaying output from a failing test executed in the old Python Page Objects framework. Note the line number of the failing assertion. The upper browser is displaying output from the same failing test executed in the new Py.Saunter framework. Note the absence of the line number for the failing assertion.

Could Py.Saunter be modified to always output this line number in the .xml results file?

sauce profiles

Double-check that the profile platform, when run in Sauce, is the target platform, not the platform the script is being run on.

Matchers.py assert_text_present

Using 0.63, I found that the locator block in matchers.py's assert_text_present fails for me because driver is not prefixed with self.

Code:
e = driver.find_element_by_tag_name('body')
assert text in e.text

Error:
File "/Library/Python/2.7/site-packages/py.saunter-0.63-py2.7.egg/saunter/matchers.py", line 345, in assert_text_present
e = driver.find_element_by_tag_name('body')
NameError: global name 'driver' is not defined

Fixing that to be ...
e = self.driver.find_element_by_tag_name('body')

... resolved that error, but something was odd in the timing: my test was passing, yet my WebUI threw a handled exception in a popup. It was perhaps timing-related, but reproducible every time in pysaunter and never when testing manually.

So, I modified the locator usage (and added some handling to raise an assertion error) with the following, and this works for me every time.

    if isinstance(self.driver, tailored.remotecontrol.RemoteControl):
        assert self.driver.is_text_present(text)
    else:
        try:
            locator = "xpath=//*[contains(text(), '%s')]" % text
            self.driver.find_element_by_locator(locator)
            return True
        except Exception:
            if len(msg) == 0:
                raise AssertionError('Text "%s" was not found.' % text)
            else:
                raise AssertionError('Text "%s" was not found.  %s' % (text, msg))

please add SaunterWebDriver find_elements_by_locator() method

The SaunterWebDriver class has a helpful method, find_element_by_locator, to find a single element by any locator.

It would be great if there were also a find_elements_by_locator method to find a list of matching elements by locator, as exists in WebDriver itself.

I tried to extend WebDriver via a "tailored" class similar to the example for extending remotecontrol, but I could not figure it out. I'm a bit of a novice.
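The plural method could be sketched along these lines. The dispatch table, helper name, and supported strategies are my assumptions, inferred from the "strategy=value" locators saunter uses elsewhere; this is not saunter's actual code:

```python
def split_locator(locator):
    """Split a 'strategy=value' locator into its two parts."""
    strategy, _, value = locator.partition("=")
    if not value:
        raise ValueError("locator must look like 'strategy=value': %r" % locator)
    return strategy, value

def find_elements_by_locator(driver, locator):
    """Plural counterpart to find_element_by_locator: return all matches."""
    strategy, value = split_locator(locator)
    finders = {
        "css": driver.find_elements_by_css_selector,
        "id": driver.find_elements_by_id,
        "name": driver.find_elements_by_name,
        "xpath": driver.find_elements_by_xpath,
    }
    if strategy not in finders:
        raise ValueError("unknown locator strategy: %r" % strategy)
    return finders[strategy](value)
```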

need an easy way to determine which version of py.saunter is installed

How about a --version?

adobe-MacBookPro:echosign mmaypump$ pysaunter.py -h
usage: pysaunter.py [-h] [--new] [-v] [-s] [--tb TB] [-p P] [-m M]
[--traceconfig] [--pdb] [--maxfail MAXFAIL]
[--collectonly]

optional arguments:
-h, --help show this help message and exit
--new creates a new Py.Saunter environment
-v increase verbosity
-s don't capture output
--tb TB traceback print mode (long/short/line/native/no)
-p P early-load given plugin (multi-allowed)
-m M filter based on marks
--traceconfig trace considerations of conftest.py files
--pdb start the interactive Python debugger on errors
--maxfail MAXFAIL exit after first num failures or errors.
--collectonly only collect tests, don't execute them
adobe-MacBookPro:echosign mmaypump$ more `which pysaunter.py`

#!/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python

# EASY-INSTALL-SCRIPT: 'py.saunter==0.28','pysaunter.py'

__requires__ = 'py.saunter==0.28'
import pkg_resources
pkg_resources.run_script('py.saunter==0.28', 'pysaunter.py')
adobe-MacBookPro:echosign mmaypump$
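One way to cover this would be argparse's built-in version action, roughly like the following sketch. The version string is a placeholder; the real value could be looked up via pkg_resources at runtime:

```python
import argparse

# Sketch of wiring a --version flag into pysaunter's existing argparse
# setup. "0.28" below is illustrative, not the real lookup.
parser = argparse.ArgumentParser(prog="pysaunter.py")
parser.add_argument("--version", action="version", version="%(prog)s 0.28")
```

Running `pysaunter.py --version` would then print the version string and exit.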

How to support firefox profile in pysaunter 0.63 WebDriver

When I tried to start a WebDriver Selenium test with version 0.63, the newly opened browser showed the error [This Connection is Untrusted]. I'm thinking of using a Firefox profile to bypass this problem, but it looks like pysaunter 0.63 does not yet support profiles on the WebDriver side. Searching for solutions online, I found suggestions (in Ruby/Watir) like:

profile = Selenium::WebDriver::Firefox::Profile.new
profile.assume_untrusted_certificate_issuer = false
browser = Watir::Browser.new(:firefox, :profile => profile)

Any suggestions on what I should do here? Many thanks!
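The Ruby snippet above corresponds to the legacy JSON Wire Protocol capability acceptSslCerts. A minimal Python sketch of building such capabilities follows; the helper name is mine, and whether pysaunter 0.63 forwards custom capabilities to the driver is exactly the open question in this issue:

```python
def untrusted_cert_capabilities(base=None):
    """Return a capabilities dict asking the browser to accept untrusted
    SSL certificates. 'acceptSslCerts' is the legacy JSON Wire Protocol
    key; pysaunter would still need a hook to pass this dict through."""
    caps = dict(base or {})
    caps["acceptSslCerts"] = True
    return caps
```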

single-quoted -m argument on Windows fails; double-quoted one works

The following command fails on Windows...

pysaunter.py -m '(smoke or regression) and not sort and not old_search' --collectonly

but this one works...

pysaunter.py -m "(smoke or regression) and not sort and not old_search" --collectonly

On MacOS, both work.

The output from the first command on Windows (which I cannot figure out how to copy/paste from my VM!) is the usage message, followed by this line:

pysaunter.py: error: unrecognized arguments: or regression) and not sort and not old_search'

feature request: option for specifying frequency of snapshots during test execution

We're finding the snapshots taken during execution more and more useful. For example, a bug I filed yesterday involved an extraneous horizontal white line in the middle of a blue pop-up. I detected it by absent-mindedly glancing at my test machine during a run. However, that image wasn't captured by pysaunter because it happened in the middle of one of the intervals between captures. If we could specify a smaller interval via the cmdline (or conf.ini), that would allow us to capture more of the action. (And no--I'm not asking for video here!)

Another example: I recently discovered when setting up a new test account that an upgrade page included some extraneous text. This kind of problem is virtually impossible to discover in a straightforward automated way. But we could have caught this one by quickly flicking through all the images captured during the pysaunter execution run, as we have a test that lands on this page.

overly short limit on length of -m argument

There seems to be a fairly short limit on the length of the -m argument. My manager came up with a short list of 13 tests he wanted me to run for a network upgrade this weekend. Eventually, I will add a 'heartbeat' mark to each of these tests to indicate that they're an even smaller set than the 'smoke' tests, and run them with "-m heartbeat". But in the meantime, I just constructed a query to run them. However, the query failed until I shortened it by several tests. Below are the full failing query and a query that worked after trimming it from the right end....

NON-WORKING QUERY BELOW

adobe-MacBookPro:echosign mmaypump$ pysaunter.py -m 'archive_from_home_via_upload_ea981 or authoring_default_sig_and_initials_required_ea2754 or esign_ssn_upload_ea2132 or filter_by_waiting_for_me_to_sign_documents_ea793 or new_search_for_doc_name_ea625 or mega_sign_with_ssn_ea15 or regexp_groupon_es_ea3887 or register_trial_enterprise_account_ea2157 or register_free_account_ea583 or remind_right_now_one_recipient_ea809 or string_any_validation_ea1324 or upgrade_from_enterprise_trial_to_enterprise_ea606 or widget_with_email_verification_ea992' -v
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/bin/pysaunter.py", line 5, in
pkg_resources.run_script('py.saunter==0.38', 'pysaunter.py')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 489, in run_script
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 1207, in run_script
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py.saunter-0.38-py2.7.egg/EGG-INFO/scripts/pysaunter.py", line 186, in
run_status = pytest.main(args=arguments, plugins=[marks.MarksDecorator()])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/core.py", line 467, in main
config = _prepareconfig(args, plugins)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/core.py", line 460, in _prepareconfig
pluginmanager=_pluginmanager, args=args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/core.py", line 419, in call
return self._docall(methods, kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/core.py", line 430, in _docall
res = mc.execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/core.py", line 348, in execute
res = method(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/helpconfig.py", line 25, in pytest_cmdline_parse
config = multicall.execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/core.py", line 348, in execute
res = method(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/config.py", line 10, in pytest_cmdline_parse
config.parse(args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/config.py", line 343, in parse
self._preparse(args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/config.py", line 314, in _preparse
self._initini(args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/config.py", line 309, in _initini
self.inicfg = getcfg(args, ["pytest.ini", "tox.ini", "setup.cfg"])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pytest-2.2.3-py2.7.egg/_pytest/config.py", line 447, in getcfg
if p.check():
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/common.py", line 182, in check
return self.Checkers(self)._evaluate(kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/common.py", line 65, in _evaluate
if bool(value) ^ bool(meth()) ^ invert:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/local.py", line 111, in exists
return self._stat()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/local.py", line 99, in _stat
self._statcache = self.path.stat()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_path/local.py", line 424, in stat
return Stat(self, py.error.checked_call(os.stat, self.strpath))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py-1.4.7-py2.7.egg/py/_error.py", line 82, in checked_call
raise cls("%s%r" % (func.name, args))
py.error.ENAMETOOLONG: [File name too long]: stat('/Users/mmaypump/Desktop/qa/PYSAUNTER/echosign/archive_from_home_via_upload_ea981 or authoring_default_sig_and_initials_required_ea2754 or esign_ssn_upload_ea2132 or filter_by_waiting_for_me_to_sign_documents_ea793 or new_search_for_doc_name_ea625 or mega_sign_with_ssn_ea15 or regexp_groupon_es_ea3887 or register_trial_enterprise_account_ea2157 or register_free_account_ea583 or remind_right_now_one_recipient_ea809 or string_any_validation_ea1324 or upgrade_from_enterprise_trial_to_enterprise_ea606 or widget_with_email_verification_ea992/pytest.ini',)
adobe-MacBookPro:echosign mmaypump$

WORKING QUERY BELOW

adobe-MacBookPro:echosign mmaypump$ pysaunter.py -m 'archive_from_home_via_upload_ea981 or authoring_default_sig_and_initials_required_ea2754 or esign_ssn_upload_ea2132 or filter_by_waiting_for_me_to_sign_documents_ea793 or new_search_for_doc_name_ea625 or mega_sign_with_ssn_ea15 or regexp_groupon_es_ea3887' --collectonly

pysaunter summary of results doesn't match xml summary of results

pysaunter's final line of output from a run...

=============== 9 tests deselected by "-m 'smoke or regression'" ===============
2 failed, 117 passed, 2 skipped, 9 deselected, 1 xfailed, 1 xpassed in 6063.91 seconds

doesn't match latest.xml's first line of output...

testsuite errors="0" failures="2" name="" skips="4" tests="119" time="6063.896"

ISSUES:

  1. If latest.xml is correct, 119 total minus 2 failures minus 4 skips = 113 passed. But pysaunter's final line indicates that 117 passed. ?!?
  2. The four skips reported by latest.xml comprise the 2 skipped, 1 xfailed, and 1 xpassed from pysaunter's output. Could latest.xml's output be made to mimic the finer level of granularity from pysaunter?

third_party/selenium directory structure s/b set up by "pysaunter.py --new"

"pysaunter.py --new" does not create the required third_party/selenium directory needed by the pysaunter framework, which is confusing to somebody getting started....

adobe-MacBookPro:huh mmaypump$ pysaunter.py --new
adobe-MacBookPro:huh mmaypump$ ls
conf conftest.py logs modules pytest.ini scripts support
adobe-MacBookPro:huh mmaypump$

SUGGESTED FIX: Have pysaunter.py create the third_party/selenium structure, and then output a message that says:

"Download the latest Selenium standalone .jar file to third_party/selenium, then create a symbolic link named latest.jar that points to the downloaded .jar file."

-r run_tests_list & -s skip_tests_list options would really be nice

It would be super if we could run a totally miscellaneous set of tests via not just a lengthy query provided to -m but also via a one-testname/line file, the name of which would be provided as argument to a new -r option.

It would be equally super if we could EXCLUDE a totally miscellaneous set of tests via a one-testname/line file, the name of which would be provided as argument to a new -s option.

We need -r in order to be able to quickly and easily re-run a large number of previously failing tests when a fix has been provided.

And we need -s so that we can deal with temporarily failing tests without having to check in a skip()'ed version of the test case (which then has to have the skip() removed whenever the app fix becomes available, and then be checked in again).
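A sketch of how such options could sit on top of the existing -m machinery, reading a one-testname-per-line file and producing a marks expression; the option names and the helper are assumptions, not anything saunter ships:

```python
def marks_expression_from_file(path, exclude=False):
    """Build a pytest -m expression from a one-testname-per-line file.
    exclude=False models the proposed -r (run list); exclude=True models
    the proposed -s (skip list)."""
    with open(path) as f:
        names = [line.strip() for line in f if line.strip()]
    if not names:
        raise ValueError("no test names found in %s" % path)
    expr = " or ".join(names)
    return "not (%s)" % expr if exclude else expr
```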

pytest stops collecting tests when pysaunter release is newer than 0.50

Hi Adam,

Currently I have the following packages installed and everything works fine:

pytest==2.2.4
pytest-marks==0.3
py.saunter==0.50

As soon as I update py.saunter to a newer release (e.g. 0.59), pytest fails to collect any tests. Collection starts but returns zero scripts for execution. As soon as I downgrade py.saunter, everything returns to normal.

Cheers
Alexis

Complete list of packages:
Brlapi==0.5.4
CouchDB==0.6
GnuPGInterface==0.3.2
HARPy==0.2.0
Mako==0.2.5
PAM==0.4.2
PIL==1.1.7
Twisted-Core==10.0.0
Twisted-Names==10.0.0
Twisted-Web==10.0.0
adium-theme-ubuntu==0.1
apturl==0.4.1ubuntu4.1
argparse==1.2.1
autopep8==0.8
browsermob-proxy==0.2.0
command-not-found==0.1
configglue==0.2dev
cups==1.0
distribute==0.6.28
execnet==1.1
fstab==1.4
fuzzywuzzy==0.1
gnome-app-install==0.4.2ubuntu2
httplib2==0.7.2
jockey==0.5.8
launchpadlib==1.6.0
lazr.restfulclient==0.9.11
lazr.uri==1.0.2
lettuce==0.2.6
louis==1.7.0
mechanize==0.2.5
nose==1.1.2
nvidia-common==0.0.0
oauth==1.0a
onboard==0.93.0
papyon==0.4.8
pep8==1.3.3
pexpect==2.3
proboscis==1.2.5.3
protobuf==2.2.0
py==1.4.9
py.saunter==0.50
pyOpenSSL==0.10
pycrypto==2.0.1
pycurl==7.19.0
pyinotify==0.8.9
pyserial==2.3
pytest==2.2.4
pytest-marks==0.3
pytest-xdist==1.8
python-apt==0.7.94.2ubuntu6.4
python-debian==0.1.14ubuntu2
pyxdg==0.18
rdflib==2.4.2
requests==0.13.5
screen-resolution-extra==0.0.0
selenium==2.25.0
simplejson==2.0.9
smbc==1.0
speechd==0.3
speechd-config==0.0
sure==1.0.0alpha
system-service==0.1.6
ubuntuone-storage-protocol==1.2.0
ufw==0.30pre1-0ubuntu2
unattended-upgrades==0.1
unittest2==0.5.1
usb-creator==0.2.22
virtkey==0.01
virtualenv==1.7.2
wadllib==1.1.4
wsgiref==0.1.2
xkit==0.0.0
zope.interface==3.5.3

SeleniumServer.py should not have hard-coded name of Selenium server .jar file

Line 55 of /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py.saunter-0.5-py2.7.egg/saunter/SeleniumServer.py contains a hard-coded Selenium server .jar filename, i.e.,

s = subprocess.Popen(['java', '-jar', 'third_party/selenium/selenium-server-standalone-2.0b2.jar'],

IMHO, it would be better to use something like "third_party/selenium/latest-selenium.jar" where the latter was a symbolic link to the desired .jar file within the third_party/selenium directory.
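Alternatively, the server starter could resolve the newest jar at runtime, along these lines; the helper name and the newest-by-mtime policy are my assumptions:

```python
import glob
import os

def latest_selenium_jar(directory="third_party/selenium"):
    """Return the newest selenium-server jar in the directory rather than
    hard-coding one version into the Popen call."""
    jars = glob.glob(os.path.join(directory, "selenium-server-standalone-*.jar"))
    if not jars:
        raise RuntimeError("no selenium server jar found in %s" % directory)
    # newest by modification time, i.e. the most recently downloaded one
    return max(jars, key=os.path.getmtime)
```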

screenshots - rc

Double-check that the screenshot handling is correct for both WebDriver and RC.

"1 tests deselected by ''" is insufficiently informative

  1. Observe the seeming parallelism between the "shirts" example and my own "logout" test in one respect:

adobe-MacBookPro:PYSAUNTER mmaypump$ grep "pytest.marks.*shirts" scripts/*py
scripts/DressShirts.py: @pytest.marks('shirts')
adobe-MacBookPro:PYSAUNTER mmaypump$ grep "pytest.marks.*logout" scripts/*py
scripts/Login.py: @pytest.marks('logout')
adobe-MacBookPro:PYSAUNTER mmaypump$

  2. Now observe what happens when I run the "shirts" example along with a nonexistent "skirts" test case:

adobe-MacBookPro:PYSAUNTER mmaypump$ pysaunter.py -f shirts skirts
========================================= test session starts =========================================
platform darwin -- Python 2.7.2 -- pytest-2.1.1
collected 1 items

scripts/DressShirts.py .

----------- generated xml file: /Users/mmaypump/Desktop/PYSAUNTER/logs/2011-09-06-48-04.xml -----------
====================================== 1 tests deselected by '' =======================================
=============================== 1 passed, 1 deselected in 11.05 seconds ===============================
adobe-MacBookPro:PYSAUNTER mmaypump$

The "shirts" example ran as expected, and the nonexistent "skirts" test case caused the "1 tests deselected by ''" message to appear.

  3. Now observe that the same error message is generated for my own (existing) test case...

adobe-MacBookPro:PYSAUNTER mmaypump$ pysaunter.py -f logout
========================================= test session starts =========================================
platform darwin -- Python 2.7.2 -- pytest-2.1.1
collected 1 items

----------- generated xml file: /Users/mmaypump/Desktop/PYSAUNTER/logs/2011-09-06-47-09.xml -----------
====================================== 1 tests deselected by '' =======================================
==================================== 1 deselected in 0.13 seconds =====================================
adobe-MacBookPro:PYSAUNTER mmaypump$

CONCLUSION: The error message in question is obviously being displayed for two very different problems--I'm just not clear what the second problem is yet!

SUGGESTED FIX: The error message doesn't work even for the first case. I'd like to suggest something more along the lines of:

       Testcase "skirts" not found

And of course, I'd like to see a more actionable error message for the second case also. ;-)

conftest.py s/b moved to pysaunter egg if possible

Would it be possible for conftest.py to be moved into the pysaunter egg, so that users don't have to have the latest copy installed in their main test dir (parallel to modules, scripts, etc.)?

Our test suite has all its components (conf, conftest.py, modules, scripts, support, pytest.ini, etc.) checked into Perforce. And our Jenkins site has an artifact which is a downloadable tar.gz file of the latest version of the testsuite from Perforce. Anyone who wants to run the latest version of the suite just needs to download, edit conf/saunter.ini, and run. But since these "anyone"s who do this download/edit/run may have a different version of pysaunter installed than the one matching the checked-in conftest.py, all sorts of problems are possible. And it seems really inconvenient to make all of these "anyone"s copy the appropriate conftest.py from the pysaunter version they're using.

Yet another enhancement for your contemplation!

RFE: framework should show testname with P/F status in real-time

Both the original Python Page Objects framework (pre-"pysaunter") and the very latest version of pysaunter (0.12) present our QA group with the same reporting issue. We run the smoke tests sequentially on our staging environment (sometimes more than once if staging gets updated) and then once more on production in a big hurry right after we've pushed the new code there. If a test fails or times out, we give it a second chance by re-running it individually, before we spend any time investigating. The problem is that we don't know which test is producing the "E" or "F" until after the entire run has completed. We'd like to know immediately which test has produced the "E" or "F" so that we could be re-running that test asap.

Below is output from running the pysaunter 0.12 saucelabs examples that show the issue. Instead of ".F....", we'd like to see something more like:

test_incorrect_login .
test_incorrect_login_fails F
test_incorrect_login_from_csv .
test_incorrect_login_from_sqlite3 .
test_incorrect_login_with_random_username_and_password .
test_incorrect_login_with_soft_assert .

adobe-MacBookPro:saucelabs mmaypump$ pysaunter.py -f login
================================================================== test session starts ===================================================================
platform darwin -- Python 2.7.2 -- pytest-2.1.1
collected 6 items

scripts/LoginExample.py .F....

======================================================================== FAILURES ========================================================================
/Users/mmaypump/Desktop/PYSAUNTER-20110920/examples/saucelabs/scripts/LoginExample.py:93: AssertionError: 'Incorrect username or password.' != 'This message is deliberately incorrect to trigger a failed test.'
---------------------- generated xml file: /Users/mmaypump/Desktop/PYSAUNTER-20110920/examples/saucelabs/logs/2011-09-20-35-32.xml -----------------------
========================================================== 1 failed, 5 passed in 101.52 seconds ==========================================================
adobe-MacBookPro:saucelabs mmaypump$

sauce platforms

Seems the platforms Sauce wants in the JSON have changed since originally written. Need to figure out the mapping ... especially when it comes to OSX which has a different string depending on the browser.

deletion of dir from whence server was started => "'NoneType' object is not subscriptable"

Okay, granted this one involves an even "dumber user" than in issue 8. However, I still think it would be super if the harness could give an improved error message, maybe one that lists all the known possible causes of this TypeError?

TO REPRODUCE:

  1. Start the server up from a specific directory.
  2. Remove that directory.
  3. Run a test under pysaunter.
  4. Get the less than helpful message:

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium-2.13.1-py2.7.egg/selenium/selenium.py:221: TypeError: 'NoneType' object is not subscriptable

Requests version must be >= 1.1.0

When I install and run py.saunter on a host with requests==0.7.3 already installed, I get the following error trace:

Traceback (most recent call last):
  File "/bitly/local/bin/pysaunter", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/bitly/local/lib/python2.7/site-packages/distribute-0.6.40-py2.7.egg/pkg_resources.py", line 2825, in <module>
    parse_requirements(__requires__), Environment()
  File "/bitly/local/lib/python2.7/site-packages/distribute-0.6.40-py2.7.egg/pkg_resources.py", line 598, in resolve
    raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (requests 0.7.3 (/bitly/local/lib/python2.7/site-packages), Requirement.parse('requests>=1.1.0'))

I believe this is because py.saunter requires requests>=1.1.0, but installing it does not upgrade the already-present requests 0.7.3.

matchers verify_true fails when expr is false

This toy test...

@pytest.marks('matchers')
def test_verify_true_with_true_condition(self):
    self.matchers.verify_true(1==1,"Expected to pass because expr true")
    self.matchers.verify_true(1==2,"Expected to fail because expr false")
    self.matchers.assert_true(3==3,"Expected to pass because expr pass")
    log_info("Finished!")

fails in two ways:

  1. It produces a stack trace instead of outputting our msg for the second verify_true call.
  2. It does NOT proceed on to the assert_true call as a soft assert should.

adobe-MacBookPro:WD mmaypump$ pysaunter -m matchers -v
================================================================================ test session starts =================================================================================
platform darwin -- Python 2.7.2 -- pytest-2.3.4 -- /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
plugins: marks, xdist
collected 186 items
version = 66
version = 66

scripts/Footer.py:13: CheckFooter.test_verify_true_with_true_condition FAILED

====================================================================================== FAILURES ======================================================================================
__________________________________________________________________ CheckFooter.test_verify_true_with_true_condition __________________________________________________________________
Traceback (most recent call last):
File "/Users/mmaypump/Desktop/conftest.py", line 26, in pytest_runtest_makereport
assert([] == item.parent.obj.verificationErrors)
AssertionError: assert [] == ['Expected to fail because e...assert bool(False) is True']
Right contains more items, first extra item: 'Expected to fail because expr false:\nassert bool(False) is True'
---------------------------------------------------------------------------------- Captured stderr -----------------------------------------------------------------------------------
10:42:58 INFO: Starting new HTTPS connection (1): secure.stage.echosign.com
10:42:58 DEBUG: "GET /version HTTP/1.1" 200 77
10:43:01 INFO: Finished!
---------------------------------------------- generated xml file: /Users/mmaypump/Desktop/qa/PYSAUNTER/WD/logs/2013-05-14-10-42-56.xml ----------------------------------------------
====================================================================== 185 tests deselected by "-m 'matchers'" =======================================================================
====================================================================== 1 failed, 185 deselected in 4.44 seconds ======================================================================
adobe-MacBookPro:WD
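For reference, the soft-assert behavior being asked for can be sketched like this (a minimal stand-in, not saunter's actual matchers code): verify_true should record the failure and continue, while assert_true fails immediately; the collected errors then fail the test at teardown.

```python
class Matchers(object):
    """Minimal sketch of soft vs. hard asserts."""

    def __init__(self):
        self.verification_errors = []

    def verify_true(self, expr, msg=""):
        # soft assert: record the failure and keep going
        if not expr:
            self.verification_errors.append(msg or "expected expression to be true")

    def assert_true(self, expr, msg=""):
        # hard assert: fail the test on the spot
        assert expr, msg
```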

pytest.ini's relationship to py.saunter framework needs to be documented

I've finally figured out why I couldn't get any of my pre-py.saunter Python Page Object tests to run in py.saunter! I had modified them to import and use the new classes provided with py.saunter, but.... My test classes didn't start with "Check" and my test function names didn't start with "test"!

I heartily recommend saving future py.saunter new users who aren't familiar with pytest from the pain I've been suffering by documenting how pytest.ini fits in with the py.saunter framework in at least one (preferably all!) of these three places:

  1. An updated README.md at the bottom of https://github.com/Element-34/py.saunter
  2. The py.saunter product page at http://element34.ca/products/saunter/pysaunter
  3. An expansion of the documentation available at http://packages.python.org/py.saunter/
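For example, it is pytest.ini settings along these lines that tell pytest to collect only classes starting with "Check" and functions starting with "test" (illustrative; the exact file pysaunter generates may differ):

```ini
[pytest]
python_classes = Check
python_functions = test
```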

SeleniumServer.py isn't incrementing "waiting" variable inside start_server()

Lines 64-75 of /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/py.saunter-0.5-py2.7.egg/saunter/SeleniumServer.py use a variable named "waiting" inside the start_server function. However, it is never incremented, which allows a totally silent infinite loop if the Selenium server process fails to start earlier in the function. Here are the problematic lines:

# make sure the server is actually up

server_up = False
waiting = 0
while server_up == False and waiting < 60:
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("localhost", 4444))
        s.close()
        server_up = True
    except socket.error:
        time.sleep(1)
        server_up = False

return server_up

In my case, I was having problems getting my own tests (which are based on the original python-selenium-pageobjects) to run in py.saunter so I backed up to trying the ebay example from github. The test was not working with Firefox 6.0, so I figured I'd try updating the .jar file in third_party/selenium to the latest version. At that point, since SeleniumServer.py expects to start up selenium-server-standalone-2.0b2.jar (which I had deleted before putting the latest version in third_party/selenium), I got an infinite silent loop.
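For reference, a version of the loop with the increment restored, and the host, port, and timeout parameterized (a sketch, not a patch against saunter's exact source):

```python
import socket
import time

def wait_for_server(host="localhost", port=4444, timeout=60):
    """Poll until the Selenium server port accepts connections, or give up
    after roughly `timeout` seconds."""
    waiting = 0
    while waiting < timeout:
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.connect((host, port))
            s.close()
            return True
        except socket.error:
            time.sleep(1)
            waiting += 1  # the missing increment that caused the silent infinite loop
    return False
```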

output from failing test not as readable/useful as in original Python Page Objects framework

See http://i.imgur.com/QRRZ3.png for an image of the output from one of my failing tests in the old Python Page Objects framework (bottom) and in the new Py.Saunter framework (top). I prefer the old framework's output for two reasons:

  1. The line of code that caused the failure looks just like I entered it when displayed by the old framework. In the new framework, it looks like some low-level debugger output, and is hence, much harder to read.
  2. The test case that failed is fairly clearly identified in the old framework via its docstring--"Search for 'NDA'". In the new framework, one has to look into logs/latest.xml to find any identifying information.

Is there any way that Py.Saunter can be tweaked/configured to make the default output better in these two areas?

feature enhancement: simple html page needed for quickly flicking through all image files

Many of our app's bugs can be found not by failures of my Selenium/pysaunter tests but by either watching the tests run OR by looking at all the image files afterwards. The first method is too slow, and the second is painful primarily because of the need to keep switching from one test dir to another with Mac's Finder's slideshow feature. (Plus that feature doesn't let one know when one is at the last image.)

I'd really like to see pysaunter enhanced to include a simple web app that would allow one to rapidly click through all of the images in the logs/test_* dirs. Each image would need dirname/filename so that when the tester/clicker saw one with a problem, s/he could note the image for later investigation/results-reproducing/bug-filing.
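A feature like this could be prototyped in a few lines outside the framework. A hedged sketch (a hypothetical standalone script, not part of py.saunter) that builds a single HTML page listing every screenshot under the `logs/test_*` dirs, labelled with `dirname/filename` and numbered so the viewer knows when the last image is reached:

```python
import glob
import html
import os

def build_gallery(logs_dir="logs", out_file="gallery.html"):
    """Write one HTML page showing every .png under logs/test_*/,
    each captioned with dirname/filename for later bug-filing."""
    images = sorted(glob.glob(os.path.join(logs_dir, "test_*", "*.png")))
    rows = []
    for i, path in enumerate(images, 1):
        # Keep only the test-dir and file name for the caption.
        label = html.escape(os.path.join(*path.split(os.sep)[-2:]))
        rows.append('<figure><img src="%s" style="max-width:100%%">'
                    '<figcaption>%d of %d: %s</figcaption></figure>'
                    % (html.escape(path), i, len(images), label))
    with open(out_file, "w") as f:
        f.write("<!doctype html><title>Test screenshots</title>\n"
                + "\n".join(rows))
    return len(images)
```

Opening the generated file in a browser gives one long scrollable page of screenshots, and the "N of M" caption answers the "am I at the last image?" problem directly.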

full path to browser in saunter.ini on Windows => no browser session started

WHAT I DID:

  1. Specified a full path for *chrome in conf/saunter.ini on Windows:
    browser: *chrome "C:/Program Files (x86)/Mozilla Firefox36"

(Specifying this second argument for the browser path works great on my MacBook.)

  2. Ran a test, which FAILED with a "TypeError: 'NoneType' object is not subscriptable" error message.

I've never been able to get this second argument to browser to work on Windows, possibly because of the necessity of double-quoting it to protect the embedded spaces.
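The double-quoting itself may not be the culprit. A quick check (illustrative only — py.saunter may parse the value differently, and the Firefox path below is made up) shows that standard shell-style splitting handles a double-quoted path with embedded spaces and forward slashes cleanly:

```python
import shlex

# A browser line as it might appear in saunter.ini (path is illustrative).
value = '*chrome "C:/Program Files (x86)/Mozilla Firefox/firefox.exe"'

# shlex keeps the quoted path together as a single argument.
parts = shlex.split(value)
```

If the framework splits the setting this way, `parts[1]` comes back as one intact path, which suggests the Windows failure lies elsewhere (e.g. in how the path is later handed to the launcher) rather than in the quoting.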

Snapshot link below, which shows the error message from pysaunter, the server output, and the saunter.ini lines in question. Note that I tried with both forward and backward slashes. Both produced the same result.

http://imgur.com/pAGtp

invalid path to browser in conf/saunter.ini => inexplicable error message

TO REPRODUCE:

  1. Put a deliberately invalid path to the browser inside conf/saunter.ini:

browser: *chrome /Applications/Firefox-3.7.app/Contents/MacOS/firefox-bin

  2. Run a known-to-work-all-the-time test:

adobe-MacBookPro:echosign mmaypump$ pysaunter.py -f logout
collected 20 items
F
========================================================= FAILURES =========================================================
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/selenium-2.6.0-py2.7.egg/selenium/selenium.py:224: TypeError: 'NoneType' object is not subscriptable
--------------- generated xml file: /Users/mmaypump/Desktop/qa/PYSAUNTER/echosign/logs/2011-09-23-48-09.xml ----------------
================================================ 19 tests deselected by '' =================================================
1 failed, 19 deselected in 0.62 seconds
adobe-MacBookPro:echosign mmaypump$

BUG: The average user needs a better error message than the TypeError one above for this particular scenario. In my case, I was attempting to bring up my pysaunter tests on a second system, which didn't have exactly the same path to the browser of interest.
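A friendlier failure could come from validating the configured binary up front, before any Selenium call runs. A minimal sketch, assuming the setting has the form `*chrome /path/to/binary` (the function name and error wording are hypothetical, not py.saunter's):

```python
import os

def check_browser_path(browser_setting):
    """Raise a clear error if the configured browser binary doesn't exist.
    Expects a value like '*chrome /path/to/firefox-bin'."""
    parts = browser_setting.split(None, 1)
    if len(parts) == 2 and not os.path.exists(parts[1]):
        raise ValueError(
            "Browser binary not found: %s (check the 'browser' line in "
            "conf/saunter.ini)" % parts[1])
    return parts
```

Run at session setup, a check like this would replace the opaque `TypeError: 'NoneType' object is not subscriptable` with a message that names the bad path and the config file to fix.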

server killing is confused?

Not only is the grammar wrong in the message shown when the server cannot be killed; the logic for deciding when to start and stop it also doesn't seem quite right.

--collectonly needs to show count of tests that would get executed

Several of us have been making boo-boos w.r.t. test-counting lately, because of some --collectonly confusion. When one specifies -m with a valid argument along with the --collectonly option and then sees a "collected N items" message, one assumes that N reflects both the collection parameters specified in pytest.ini AND the -m argument. Instead, the count reflects just those specified in pytest.ini. Thus, in the example below, in order to determine how many "smoke" tests would get run during an actual execution run, one has to calculate 60 - 25 = 35. Is there some way to easily improve this usability issue? A couple of suggestions:

  1. Annotate the "collected 60 items" to say something like "collected 60 items based on pytest.ini parameters."
  2. Output the actual count (35) somewhere in the output, clearly labeled.

adobe-MacBookPro:echosign mmaypump$ pysaunter.py -m smoke --collectonly
============================================================== test session starts ==============================================================
platform darwin -- Python 2.7.2 -- pytest-2.2.0
collected 60 items
.
.
.
------------------------ generated xml file: /Users/mmaypump/Desktop/qa/PYSAUNTER/echosign/logs/2012-01-11-18-48-51.xml -------------------------
====================================================== 25 tests deselected by "-m 'smoke'" ======================================================
========================================================= 25 deselected in 0.26 seconds =========================================================
adobe-MacBookPro:echosign mmaypump$
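Until the framework does this itself, the selected count can be surfaced with a small conftest.py hook. A sketch, assuming a reasonably recent pytest (the `pytest_collection_finish` hook is real; the message wording is ours):

```python
# conftest.py
def pytest_collection_finish(session):
    """Report how many tests survived -m/-k filtering.

    By the time this hook runs, session.items holds only the tests
    that were actually selected, so len(session.items) is the number
    that would run -- e.g. 35 when 60 are collected and 25 deselected.
    """
    print("selected %d item(s) after filtering" % len(session.items))
```

Dropping this into the project root makes both numbers visible: pytest's own "collected 60 items" line plus an explicit "selected 35 item(s) after filtering".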
