autotest / autotest-docker

Autotest client test containing functional & integration subtests for the Docker project

License: Other

Makefile 0.55% Python 97.95% Shell 1.35% Dockerfile 0.15%

autotest-docker's Issues

multiple --name args to docker run

You can run `docker run --name foo --name bar ...` with a very long list of --name arguments, and docker should pick up the last one. This task is to create a test that verifies this is always the case.
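A minimal sketch of the property such a test would assert, assuming plain argv-style flag parsing (this mimics the expected CLI behavior, not docker's actual parser):

```python
def last_name(argv):
    """Return the value of the final --name flag, as docker is expected
    to do when the flag is repeated."""
    name = None
    for i, arg in enumerate(argv):
        if arg == "--name" and i + 1 < len(argv):
            name = argv[i + 1]
        elif arg.startswith("--name="):
            name = arg.split("=", 1)[1]
    return name
```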

Docker Autotest depends on unspecified autotest features

When something new is added or changed in autotest.client.utils and is referenced from docker autotest, an unchecked (hidden) dependency is established. To fix this, the framework must verify that whichever autotest client is installed meets some point-in-time criteria (or better). If not, the operator should be given a warning or instructions to update their autotest client.
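A hedged sketch of such a point-in-time check; the version format is taken from the autotest_version value seen elsewhere in these logs ("0.16.0-master-32-g050cd"), and the comparison rule is an assumption:

```python
def meets_minimum(installed, minimum):
    """Compare dotted release prefixes, ignoring any -git/-branch suffix.

    A real check would also want to warn (not just return False) so the
    operator knows to update their autotest client.
    """
    def release(version):
        # "0.16.0-master-32-g050cd" -> (0, 16, 0)
        return tuple(int(part) for part in version.split('-')[0].split('.'))
    return release(installed) >= release(minimum)
```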

dockerimport test fail by KeyError if docker-py is installed from github

When docker-py is installed from github (https://github.com/dotcloud/docker-py.git), the API is different:
image['Repository'] and image['Tag'] no longer exist.

In [1]: import docker
In [2]: c= docker.Client()
In [5]: c.images()
Out[5]:
[{u'Created': 1396317040,
u'Id': u'660dacc178661df26203ccc0a03ed4a592688a2289c444d1227d8404e90209a5',
u'ParentId': u'e9bfd7e799221ad2605b8805683b36ac5d09094ea1d971d1d52ccdd993e3b90f',
u'RepoTags': [u'fedora:docker'],
u'Size': 0,
u'VirtualSize': 1758593509},

Traceback (most recent call last):
  File "/var/www/html/autotest/client/shared/test.py", line 411, in _exec
    _call_test_function(self.execute, *p_args, **p_dargs)
  File "/var/www/html/autotest/client/shared/test.py", line 830, in _call_test_function
    raise error.UnhandledTestFail(e)
UnhandledTestFail: Unhandled KeyError: 'Repository'
Traceback (most recent call last):
  File "/var/www/html/autotest/client/shared/test.py", line 823, in _call_test_function
    return func(*args, **dargs)
  File "/var/www/html/autotest/client/tests/docker/dockertest/subtest.py", line 104, in execute
    *args, **dargs)
  File "/var/www/html/autotest/client/shared/test.py", line 298, in execute
    self.postprocess()
  File "/var/www/html/autotest/client/tests/docker/dockertest/subtest.py", line 627, in postprocess
    subsubtest.postprocess()
  File "/var/www/html/autotest/client/tests/docker/subtests/docker_cli/dockerimport/empty.py", line 54, in postprocess
    self.sub_stuff['image_tag'])
  File "/var/www/html/autotest/client/tests/docker/subtests/docker_cli/dockerimport/empty.py", line 103, in lookup_image_id
    if ((str(image['Repository']) == image_name) and
KeyError: 'Repository'
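A possible compatibility shim, sketched from the output above: newer docker-py reports a combined 'RepoTags' list ("repo:tag" strings) instead of separate 'Repository'/'Tag' keys. The matches helper is hypothetical, not the empty.py code:

```python
def matches(image, image_name, image_tag):
    """True if an image record refers to image_name:image_tag, tolerating
    both the old (Repository/Tag) and new (RepoTags) docker-py schemas."""
    if 'RepoTags' in image:
        return "%s:%s" % (image_name, image_tag) in image['RepoTags']
    return (str(image.get('Repository')) == image_name
            and str(image.get('Tag')) == image_tag)
```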

Unhandled TypeError

outputgood = OutputGood(cmdresult)

Unhandled TypeError: __init__() takes exactly 2 arguments (1 given)

Traceback (most recent call last):
  File "/var/www/html/autotest/client/job.py", line 510, in _runtest
    parallel.fork_waitfor_timed(self.resultdir, pid, timeout)
  File "/var/www/html/autotest/client/parallel.py", line 116, in fork_waitfor_timed
    _check_for_subprocess_exception(tmp, pid)
  File "/var/www/html/autotest/client/parallel.py", line 67, in _check_for_subprocess_exception
    e = pickle.load(file(ename, 'r'))
  File "/usr/lib64/python2.6/pickle.py", line 1370, in load
    return Unpickler(file).load()
  File "/usr/lib64/python2.6/pickle.py", line 858, in load
    dispatch[key](self)
  File "/usr/lib64/python2.6/pickle.py", line 1133, in load_reduce
    value = func(*args)
TypeError: __init__() takes exactly 2 arguments (1 given)
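The traceback shows the unpickler calling func(*args) to rebuild the exception and passing too few arguments. A minimal illustration of how an exception class whose __init__ takes a required argument can fail or survive a pickle round-trip (BadError/GoodError are illustrative names, not dockertest classes):

```python
import pickle

class BadError(Exception):
    """__init__ requires an argument but does not forward it to
    Exception.__init__, so self.args stays empty and unpickling
    calls BadError() with no arguments -> TypeError."""
    def __init__(self, detail):
        super().__init__()          # bug: detail not passed along
        self.detail = detail

class GoodError(Exception):
    """Forwarding detail makes it land in self.args, which pickle
    records and replays on load."""
    def __init__(self, detail):
        super().__init__(detail)
        self.detail = detail

blob = pickle.dumps(BadError("boom"))
try:
    pickle.loads(blob)              # unpickler calls BadError(*())
except TypeError as exc:
    print("unpickle failed:", exc)

restored = pickle.loads(pickle.dumps(GoodError("boom")))
print("restored detail:", restored.detail)
```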

[RFC] Improved Subtest-bypass control

In 0.7.x we have a disabled CSV list that can be more or less hard-coded to force subtests and sub-subtests to raise TestNAError. However, this exploits faulty logic (that exception is supposed to signal environment or prerequisite problems). The central issue is that we have no way to read configuration from within the control file, and no existing way to pass information from control into subtests.

One possible way to resolve this is to give the control file write-only access to record informational/advisory values into a dedicated config file. In other words, every time the harness runs, control could write a new config_custom/control.ini. Subtests/sub-subtests could then consult this file to decide on appropriate action.

Could not clean up containers by executing DockerCmd(self, 'rm', "container_name").execute()

It expands "container_name" into the character list ['c', 'o', 'n', 't', 'a', 'i', 'n', 'e', 'r', '_', 'n', 'a', 'm', 'e']:

$ tail -n 7 workdir.py

def cleanup(self):
    super(workdir, self).cleanup()
    if self.config['remove_after_test']:
        dkrcmd = DockerCmd(self, 'rm', "exist_dir")
        cmd = dkrcmd.execute()
        print cmd

$ ./autotest-local run docker --args=docker_cli/workdir
17:59:45 INFO | Writing results to /var/www/html/autotest/client/results/default
17:59:46 INFO | START ---- ---- timestamp=1398333586 localtime=Apr 24 17:59:46
17:59:46 INFO | START docker/subtests/docker_cli/workdir.test_1-of-1 docker/subtests/docker_cli/workdir.test_1-of-1 timestamp=1398333586 timeout=600 localtime=Apr 24 17:59:46
17:59:46 INFO | RUNNING ---- ---- timestamp=1398333586 localtime=Apr 24 17:59:46 INFO: initialize()
17:59:46 INFO | RUNNING ---- ---- timestamp=1398333586 localtime=Apr 24 17:59:46 INFO: run_once() iteration 1 of 1
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: postprocess_iteration(), iteration #1
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: postprocess()
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: Commands: /usr/bin/docker -D run --workdir=/var/log --name=exist_dir fedora:20 pwd
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: workdir /var/log set successful for container
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: Commands: /usr/bin/docker -D run --workdir=/tmp/mdAhsaPSTVxC --name=nonexist_dir fedora:20 pwd
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: workdir /tmp/mdAhsaPSTVxC set successful for container
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: Commands: /usr/bin/docker -D run --workdir=tmp/mdAhsaPSTVxC --name=invalid_dir fedora:20 pwd
17:59:50 ERROR| RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 ERROR: Intend to fail:
2014/04/24 17:59:49 The working directory is invalid. It needs to be an absolute path.
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: Commands: /usr/bin/docker -D run --workdir=/etc/hosts --name=file_as_dir fedora:20 pwd
17:59:50 ERROR| RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 ERROR: Intend to fail:
[debug] utils.go:267 [hijack] End of stdout
[debug] commands.go:1905 End of CmdRun(), Waiting for hijack to finish.
2014/04/24 17:59:50 Error: Cannot start container e8f5283749b462491372cfa838ba62be0bcf2d91b1d141b001b0a3cbdc3e82f6: Cannot mkdir: /etc/hosts is not a directory
17:59:50 INFO | RUNNING ---- ---- timestamp=1398333590 localtime=Apr 24 17:59:50 INFO: cleanup()
17:59:50 INFO | * Command:
17:59:50 INFO | /usr/bin/docker -D rm e x i s t _ d i r
17:59:50 INFO | Exit status: 1
17:59:50 INFO | Duration: 0.019348859787
17:59:50 INFO |
17:59:50 INFO | stderr:
17:59:50 INFO | Error: No such container: e
17:59:50 INFO | Error: No such container: x
17:59:50 INFO | Error: No such container: i
17:59:50 INFO | Error: No such container: s
17:59:50 INFO | Error: No such container: t
17:59:50 INFO | Error: No such container: _
17:59:50 INFO | Error: No such container: d
17:59:50 INFO | Error: No such container: i
17:59:50 INFO | Error: No such container: r
17:59:50 INFO | 2014/04/24 17:59:50 Error: failed to remove one or more containers
17:59:52 INFO | GOOD docker/subtests/docker_cli/workdir.test_1-of-1 docker/subtests/docker_cli/workdir.test_1-of-1 timestamp=1398333592 localtime=Apr 24 17:59:52 completed successfully
17:59:52 INFO | END GOOD docker/subtests/docker_cli/workdir.test_1-of-1 docker/subtests/docker_cli/workdir.test_1-of-1 timestamp=1398333592 localtime=Apr 24 17:59:52
17:59:52 INFO | END GOOD ---- ---- timestamp=1398333592 localtime=Apr 24 17:59:52
17:59:52 INFO | Report successfully generated at /var/www/html/autotest/client/results/default/job_report.html
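The failure mode reduces to passing a bare string where a list of arguments is expected; anything that iterates the value then sees single characters, which is exactly the `rm e x i s t _ d i r` command in the log. A minimal illustration (build_cmdline is hypothetical, not DockerCmd itself):

```python
def build_cmdline(subcmd, subargs):
    """Mimics an API that iterates its subargs to build a command line."""
    return ["docker", subcmd] + [str(arg) for arg in subargs]

buggy = build_cmdline("rm", "exist_dir")    # string iterated char by char
fixed = build_cmdline("rm", ["exist_dir"])  # wrap names in a list
```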

xceptions module Exception base consistency

In order to distinguish between regular python Exceptions, Autotest Exceptions, and Dockertest Exceptions, we need to reorganize the xceptions module & callers:

  • Alias all autotest.client.shared.error exception names inside the xceptions module
    for easy access by sub/sub-subtests.
  • Add a DockertestException() base class that all package module exceptions derive from.
  • Add a module-level exception for each module, inheriting from DockertestException.
  • Add a class-level exception for each class, inheriting from the module-level exception.
  • Add interface- or behavior-specific exceptions that provide some details, and send the rest to the debug log.
  • Modify all dockertest modules & unittests to use the above.
  • Modify all sub/sub-subtests that catch exceptions to catch the correct base classes.

The purpose is to allow sub/sub-subtests to catch higher-level exceptions without having to handle each more detailed one. For example:

>>> class DockertestImagesError(DockertestException):
...     pass
... 
>>> try:
...     raise DockertestImagesError()
... except DockertestException:
...     print "caught it"
... 
caught it
>>> 

In this way, a sub/sub-subtest may target expected exceptions at various levels of detail, always coming from the dockertest API only, while allowing python/autotest exceptions to pass through.

See also #210 (DockerImage.full_name_from_defaults should raise exception)

Make container & image cleanup option default

Nearly every test uses a remove_after_test (or similar) option to control cleanup. This should be generalized separately for images, stopped containers, running containers, etc. These options should be added to defaults.ini and all subtests and their configurations modified to use the new option names.

RFC: Use __getitem__ and __setitem__ instead of stuff/sub_stuff

Hi guys,

after a couple of tests it really annoys me to always write self.sub_stuff['whatever']. I also noticed that neither Subtest nor SubSubtest uses __getitem__, __setitem__, or get(). Using these would shorten the code significantly, and it makes some sense that Subtest.get() returns items from the Subtest's stuff object.

What do you think about this? I don't see any issues apart from possible misinterpretation of SubSubtestCaller's __iter__ or __contains__ (someone might expect it to return the list of subtests...). But SubSubtestCaller can override those methods...
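A sketch of what the proposal could look like (Subtest and its stuff attribute are names from the issue; this is not the dockertest implementation):

```python
class Subtest(object):
    """Dict-style access forwarding to the test's stuff mapping, so
    subtests can write self['whatever'] instead of self.stuff['whatever']."""

    def __init__(self):
        self.stuff = {}

    def __getitem__(self, key):
        return self.stuff[key]

    def __setitem__(self, key, value):
        self.stuff[key] = value

    def __contains__(self, key):
        return key in self.stuff

    def get(self, key, default=None):
        return self.stuff.get(key, default)
```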

rework needed for "kill" subtest

There are programmatic errors in this file and pep8 fails:

Subtest module: subtests/docker_cli/kill/kill.py 
************* Module kill
W0511: 25,0: : TODO: Not all named signals seems to be supported with docker0.9
W0511:247,0: : TODO: Signals 20, 21 and 22 are not reported after SIGCONT
W0511:350,0: : TODO: Should these be ignored? They can be caught when not STOPPED
W0511:385,0: : TODO: 0 is accepted even thought it's bad signal
W0703:183,19: kill_base.cleanup: Catching too general exception Exception
R0914:195,4: kill_check_base.run_once: Too many local variables (21/20)
C1001:196,8: kill_check_base.run_once.Output: Old-style class defined.
R0912:195,4: kill_check_base.run_once: Too many branches (23/12)
R0915:195,4: kill_check_base.run_once: Too many statements (57/50)
W0703:587,19: parallel_stress.run_once: Catching too general exception Exception
E1103:600,20: parallel_stress.run_once: Instance of 'int' has no 'remove' member (but some types could not be inferred)
R0912:547,4: parallel_stress.run_once: Too many branches (14/12)

Reading through this, the test is complicated enough that this is not a simple fix. This is a task for cleaning up and simplifying this code.

Don't stop the subTestCaller in case of DockerOutputError/...

Hi @cevich, I was struggling with the fact that sometimes, even though cleanup() passed, the whole subtest was aborted.

Now I noticed it in Jiří's test too, so I really tried to understand the issue. I debugged it and found out that when init/run/cleanup raises an AutotestException and cleanup passes, the subTestCaller continues testing.

On the other hand, with a non-AutotestException the test fails. This seems intentional, but the problem is that DockerOutputError and a couple of other widely used errors do not inherit from error.AutotestError.

If I understand your intention, the solution should be simple: just inherit from both the python and the autotest error in all xceptions (e.g.: class DockerValueError(ValueError, error.AutotestError):)

What do you think?

Additionally, I'd log a message that the subtest was interrupted because of a non-AutotestException, and that people should try/except in their tests so it reports a TestFailure rather than an unhandled exception. It took me a while to understand the issue.
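A minimal sketch of the suggested dual inheritance; AutotestError here stands in for autotest.client.shared.error.AutotestError, and classify is a hypothetical stand-in for the harness's decision:

```python
class AutotestError(Exception):
    """Stand-in for autotest.client.shared.error.AutotestError."""

class DockerValueError(ValueError, AutotestError):
    """Behaves as a ValueError for callers, but the harness can also
    recognize it as an AutotestError and keep the SubSubtestCaller going."""

def classify(exc):
    """Mimics the harness decision described above."""
    return "autotest" if isinstance(exc, AutotestError) else "unhandled"
```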

restructure error_check into no_error_found and error_found

There are several negative tests that are expected to fail, which looks bad in the test report; it would be better to turn them into a GOOD result.

Suggest splitting error_check into no_error_found and error_found:
for normal testing, skip the error_found method (maybe adding it to the default config file is a good idea);
for negative testing, skip the no_error_found method when calling OutputGood.

@staticmethod
def no_error_found(output):
    """
    Return False if Go panic string found in output

    :param output: Stripped output string
    :return: True if Go panic pattern **not** found
    """
    regex = re.compile(r'\s*panic:\s*.+error.*')
    for line in output.splitlines():
        if bool(regex.search(line.strip())):
            return False  # panic message found
    return True  # panic message not found

@staticmethod
def error_found(output):
    """
    Return True if Go panic string found in output

    :param output: Stripped output string
    :return: False if Go panic pattern **not** found
    """
    regex = re.compile(r'\s*panic:\s*.+error.*')
    for line in output.splitlines():
        if bool(regex.search(line.strip())):
            return True  # panic message found
    return False  # panic message not found

Centralize/Standardize image name generation

Every subtest does its own name generation. This makes it difficult to adapt if requirements change. Add a method to the images module, similar to containers.DockerContainersBase.get_unique_name(), that supports the current needs/usage of all existing tests. Convert some/all existing tests to use the new method.

images & containers modules too sensitive to table-column contents

We shouldn't rely on a 'multiple-whitespace' delimiter when parsing the images and containers tables; for example, a command could contain extra spaces, or there could be an odd line-wrap somewhere. Instead, the character offset of each header field seems more accurate and I think would be less error-prone.
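A hedged sketch of offset-based parsing: slice each row at the header fields' character offsets rather than splitting rows on whitespace runs. Single-word headers are assumed for brevity; real docker headers such as "IMAGE ID" would need an explicit name list:

```python
def parse_table(text):
    """Parse a fixed-width table by the header's column offsets."""
    lines = text.splitlines()
    header = lines[0]
    names = header.split()
    starts = [header.index(name) for name in names]
    ends = starts[1:] + [None]
    return [{name: line[start:end].strip()
             for name, start, end in zip(names, starts, ends)}
            for line in lines[1:] if line.strip()]
```

Note how a value containing a space ("0 B" below) survives intact, which is exactly where whitespace splitting breaks.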

\r in the log output causing autotest failure

Today this occurred while developing the sigproxy test. It's based on #65 code; when the (already existing) parallel_stress execution fails, the output contains \r, which autotest's status_log_entry cannot log.

04/28 14:46:59 INFO |   subtest:0127|       RUNNING ----    ----    timestamp=1398689219    localtime=Apr 28 14:46:59   INFO: SubSubtest parallel_stress INFO: parallel_stress postprocess()
04/28 14:46:59 INFO |   subtest:0127|       RUNNING ----    ----    timestamp=1398689219    localtime=Apr 28 14:46:59   INFO: SubSubtest parallel_stress INFO: parallel_stress cleanup()
04/28 14:47:02 ERROR|      test:0414| Exception escaping from test:
Traceback (most recent call last):
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/shared/test.py", line 411, in _exec
    _call_test_function(self.execute, *p_args, **p_dargs)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/shared/test.py", line 830, in _call_test_function
    raise error.UnhandledTestFail(e)
UnhandledTestFail: Unhandled ValueError: Invalid character in message 'ERROR: parallel_stress failed to postprocess: DockerOutputError: Good: [\'crash_check_stdout\', \'usage_check_stdout\', \'crash_check_stderr\', \'usage_check_stderr\', \'error_check_stderr\']; Not Good: [\'error_check_stdout\']; Details: (error_check_stdout, Command exit 255 stdout "Received 1, ignoring...\r'
Traceback (most recent call last):
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/shared/test.py", line 823, in _call_test_function
    return func(*args, **dargs)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/dockertest/subtest.py", line 132, in execute
    *args, **dargs)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/shared/test.py", line 291, in execute
    postprocess_profiled_run, args, dargs)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/shared/test.py", line 212, in _call_run_once
    self.run_once(*args, **dargs)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/dockertest/subtest.py", line 545, in run_once
    self.run_all_stages(name, self.new_subsubtest(name))
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/dockertest/subtest.py", line 518, in run_all_stages
    self.try_all_stages(name, subsubtest)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/dockertest/subtest.py", line 493, in try_all_stages
    detail)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/dockertest/subtest.py", line 247, in logtraceback
    self.logerror(error_head)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/dockertest/subtest.py", line 233, in logerror
    return self._log('error', message, *args)
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/dockertest/subtest.py", line 125, in _log
    sle = base_job.status_log_entry("RUNNING", None, None, message, {})
  File "/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/shared/base_job.py", line 529, in __init__
    raise ValueError('Invalid character in message %r' % self.message)
ValueError: Invalid character in message 'ERROR: parallel_stress failed to postprocess: DockerOutputError: Good: [\'crash_check_stdout\', \'usage_check_stdout\', \'crash_check_stderr\', \'usage_check_stderr\', \'error_check_stderr\']; Not Good: [\'error_check_stdout\']; Details: (error_check_stdout, Command exit 255 stdout "Received 1, ignoring...\r'

04/28 14:47:02 INFO |   subtest:0127|       RUNNING ----    ----    timestamp=1398689222    localtime=Apr 28 14:47:02   INFO: cleanup()

I'll have to investigate further to see whether autotest or docker needs to be modified. This issue exists just so we don't forget about it.
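One possible fix on the harness side would be scrubbing control characters before the message reaches status_log_entry, which rejects them with the ValueError above (the regex and sanitize helper are assumptions, not autotest code):

```python
import re

# Control characters other than tab/newline, including the offending \r.
_CONTROL_CHARS = re.compile(r'[\x00-\x08\x0b-\x1f\x7f]')

def sanitize(message):
    """Replace characters autotest's status log refuses to record."""
    return _CONTROL_CHARS.sub(' ', message)
```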

Unhandled AttributeError with six 1.5.2 and above

If the six package is removed, the test passes, but six appears to be used by many other packages.

docker version: 0.9.1
system: Fedora 20

[root@fbox client]# ./autotest-local run docker --args=docker_cli/version
11:17:56 INFO | Writing results to /var/www/html/autotest/client/results/default
11:17:56 INFO | START ---- ---- timestamp=1397791076 localtime=Apr 18 11:17:56
11:17:56 INFO | START docker/subtests/docker_cli/version.test_1-of-1 docker/subtests/docker_cli/version.test_1-of-1 timestamp=1397791076 timeout=600 localtime=Apr 18 11:17:56
11:17:56 INFO | RUNNING ---- ---- timestamp=1397791076 localtime=Apr 18 11:17:56 INFO: initialize()
11:17:56 INFO | RUNNING ---- ---- timestamp=1397791076 localtime=Apr 18 11:17:56 INFO: run_once() iteration 1 of 1
11:17:59 INFO | RUNNING ---- ---- timestamp=1397791079 localtime=Apr 18 11:17:59 INFO: postprocess_iteration(), iteration #1
11:17:59 INFO | RUNNING ---- ---- timestamp=1397791079 localtime=Apr 18 11:17:59 INFO: Found docker versions client: 0.9.1 server 0.9.1
11:17:59 INFO | RUNNING ---- ---- timestamp=1397791079 localtime=Apr 18 11:17:59 INFO: Docker cli version matches docker client API version
11:17:59 INFO | RUNNING ---- ---- timestamp=1397791079 localtime=Apr 18 11:17:59 INFO: cleanup()
11:18:00 INFO | GOOD docker/subtests/docker_cli/version.test_1-of-1 docker/subtests/docker_cli/version.test_1-of-1 timestamp=1397791080 localtime=Apr 18 11:18:00 completed successfully
11:18:00 INFO | END GOOD docker/subtests/docker_cli/version.test_1-of-1 docker/subtests/docker_cli/version.test_1-of-1 timestamp=1397791080 localtime=Apr 18 11:18:00
11:18:00 ERROR| JOB ERROR: Unhandled AttributeError: name
Traceback (most recent call last):
  File "/var/www/html/autotest/client/job.py", line 1036, in _run_step_fn
    exec('__ret = %s(*__args, **__dargs)' % fn, local_vars, local_vars)
  File "<string>", line 1, in <module>
  File "/var/www/html/autotest/client/tests/docker/control", line 140, in run_test
    if module is not None and module.__name__.count('docker')]
  File "/usr/lib/python2.7/site-packages/six.py", line 123, in __getattr__
    raise AttributeError(attr)
AttributeError: name

11:18:00 INFO | END ABORT ---- ---- timestamp=1397791080 localtime=Apr 18 11:18:00 Unhandled AttributeError: name
Traceback (most recent call last):
  File "/var/www/html/autotest/client/job.py", line 1036, in _run_step_fn
    exec('__ret = %s(*__args, **__dargs)' % fn, local_vars, local_vars)
  File "<string>", line 1, in <module>
  File "/var/www/html/autotest/client/tests/docker/control", line 140, in run_test
    if module is not None and module.__name__.count('docker')]
  File "/usr/lib/python2.7/site-packages/six.py", line 123, in __getattr__
    raise AttributeError(attr)
AttributeError: name

11:18:00 INFO | Report successfully generated at /var/www/html/autotest/client/results/default/job_report.html

Add negative "usage" test

  • run with conflicting names (one set & one autogenerated) and erroneous names (invalid chars)
  • run with dangerous outputs: to /dev/mapper/docker*pool, the daemon socket,
    a r/w file on loop0, etc.
  • run with invalid volumes (zero, non-existent source, invalid/reserved chars)
  • run with stdout => stdin

Does this "[debug] client.go" show up randomly?

Does this debug information show up randomly in docker autotest? It's partly hidden and partly visible in my log [1]. Sometimes I encounter this output even when I execute the docker command [2] directly. What is "hijack"? How can I finish the hijack manually?

[debug] client.go:2283 [hijack] End of stdout
[debug] client.go:1870 End of CmdRun(), Waiting for hijack to finish.

[1] http://10.66.100.116/home/xhe/backup/docker/code/invalid0425/docker/subtests/docker_cli/invalid.test_1-of-1/debug/invalid.test_1-of-1.DEBUG
[2] docker run fedora:20 abc

Test container access host /tmp as a volume

It's possible to do this with the "run_volumes" test, but only by configuring /tmp as one of the volumes to test. We could use a stand-alone test that checks read/write/list on a host /tmp shared volume.

attach sig_proxy off/on fails

Due to b5dfa20:
...
13:12:53 ERROR| attach: sig_proxy_on failed to postprocess
DockerTestFail
Docker command wasn't killed by attached docker when sig-proxy=true. It shouldn't happened.
...
13:14:02 ERROR| attach: sig_proxy_on failed to postprocess
DockerTestFail
Docker command wasn't killed by attached docker when sig-proxy=true. It shouldn't happened.

The original behavior was correct, but the code was hard to maintain/read. We need more generalized methods on the super-classes for sub-classes to override.

Implement docker detach/reattach test

Docker supports detach in tty mode using ctrl+p ctrl+q. We should test it somehow. The problem is how to send the ctrl+p ctrl+q sequence. According to:
http://stackoverflow.com/questions/11295550/python-pexpect-sendcontrol-key-characters

it should be possible, I succeeded when using screen:

from autotest.client.shared import aexpect
b=aexpect.ShellSession("TERM=xterm screen -x")
b.sendline('\x01c')
b.sendline('a')
b.sendline('\x01\x01')
b.sendline('b')

This attached to the running screen, created a new terminal, executed command a (not found), then switched to the previously used terminal and executed command b (not found).

Anyway when I tried the same with docker:

from autotest.client.shared import aexpect
a=aexpect.ShellSession("TERM=xterm docker run -t -i fedora bash")
a.sendline('\x11\x12')
print a.read_nonblocking()
a.sendline("echo $HOSTNAME")
print a.read_nonblocking()

it was still inside the container (note: '\x11\x12' is ctrl+q ctrl+r, while ctrl+p ctrl+q would be '\x10\x11', which may explain the failure). I even tried a=aexpect.ShellSession("TERM=xterm bash -c 'docker run -t -i fedora bash'") to make sure there is a bash underneath, without luck.

@cevich, @jzupka do you have any suggestions? (btw ctrl+c works fine, so we can add it to the --sig-proxy test)

run_interactive.....forever :(

Something odd is going on with the run_simple/run_interactive sub-subtest:

16:55:08 INFO |         run_signal: cleanup()
16:55:09 INFO |         run_interactive: initialize()
16:55:09 INFO |         run_interactive: Starting background docker command, timeout 60 seconds
16:55:11 INFO |         run_interactive: Waiting up to 10 seconds for exit
16:55:11 INFO |         run_interactive: Container running, waiting 2 seconds to finish.
17:04:40 INFO | Timer expired (600 sec.), nuking pid 15027
17:04:42 INFO |         ERROR   docker/subtests/docker_cli/run_simple.test_6-of-35  docker/subtests/docker_cli/run_simple.test_6-of-35  timestamp=1398978281    localtime=May 01 17:04:41   Test timeout expired, rc=15

Generate subtest & sub-subtest docs from content

It's tedious and error-prone to maintain test and configuration documentation far away from the source tests and configs themselves. We already have a Makefile that builds documentation; we need one that can scrape together test/config docs by examining the files directly. This is not otherwise possible via Sphinx due to the "plug-in" nature of the tests and the non-python nature of their config.

stop test too timing-sensitive

@ldoktor this test seems awfully timing-sensitive, e.g.

DockerTestFail: 'docker stop' cmd execution took shorter, than expected: 0.224925041199s (30.0+-2s)

Normally people don't care when things happen sooner than expected :D These will ultimately be automated tests. If something gets stuck and doesn't stop after the default 10 minutes, that's fine; nobody will be sitting there waiting. Otherwise, unless you think it's critical for some reason, I don't see a reason to check whether things finish sooner than expected, as long as the results are verified (which will be easier later).

After 0.6.1, can you take a look into this?

All tests should cleanup after themselves

This mainly pertains to the default image pulled down from the repo for testing, but also to any other images and containers created directly or indirectly through testing. This may seem wasteful, since the next test will probably pull down the default image again.

However, doing this pushes the kernel, disk, docker daemon, etc. harder, and is therefore more likely to expose bugs (i.e. the point of testing). Also, at some point we'll add in-between-subtest checks to verify the environment state. For that to work, every subtest must assert cleanup duty over everything it knows about. This way, anything that slips through the cracks (unexpected container executions, leftover <none> images, etc.) will be easily detectable.

Also, having a verifiably clean environment between every subtest means the state of one is unlikely to affect the next. The cleanup check would be configurable, so we keep the option of forcing "dirty" testing in case that also helps find problems.

top subtest IndexError

Docker autotest 0.7.3 docker-0.11.1-10.el7.x86_64

06/02 13:44:01 INFO |       job:0215|   START   docker/subtests/docker_cli/top.test_22-of-37    docker/subtests/docker_cli/top.test_22-of-37    timestamp=1401731041    timeout=600     localtime=Jun 02 13:44:01
06/02 13:44:01 DEBUG|  base_job:0393| Persistent state client._record_indent now set to 2
06/02 13:44:01 DEBUG|  base_job:0393| Persistent state client.unexpected_reboot now set to ('docker/subtests/docker_cli/top.test_22-of-37', 'docker/subtests/docker_cli/top.test_22-of-37')
06/02 13:44:01 DEBUG|       job:0509| Waiting for pid 32350 for 600 seconds
06/02 13:44:02 INFO |   subtest:0219|   top: initialize()
06/02 13:44:02 DEBUG|   subtest:0219|   top: Subtest top configuration:
                                        config_version = "0.7.2"
                                        run_options_csv = "--tty=true,--interactive=true,--detach=true"
                                        docker_path = "/usr/bin/docker"
                                        envcheck_skip = ""
                                        docker_repo_tag = ""
                                        docker_registry_user = "******"
                                        docker_timeout = "60"
                                        remove_after_test = "True"
                                        docker_registry_host = "registry.***********"
                                        disable = "debug,example,subexample"
                                        try_remove_after_test = "True"
                                        docker_repo_name = "rhel7beta"
                                        autotest_version = "0.16.0-master-32-g050cd"
                                        envcheck_ignore_iids = "f5f7ddddef7d,1606650b4a7f"
                                        container_name_prefix = "test"
                                        docker_options = "-D"

06/02 13:44:02 DEBUG|   subtest:0219|   top: Execute /usr/bin/docker -D run --tty=true --interactive=true --detach=true --name test_RkiR r*********************/redhat/rhel7beta bash
06/02 13:44:04 DEBUG|   subtest:0219|   top: Subcommand: run
                        Subargs: ['--tty=true', '--interactive=true', '--detach=true', '--name test_RkiR', '************', 'bash']
                        Executed: 1
                        Timeout: 60
                        Command: /usr/bin/docker -D run --tty=true --interactive=true --detach=true --name test_RkiR ***********************/redhat/rhel7beta bash
                        Exit code: 0
                        Standard Out:bc49afe3c5393825bdb01d40a0bd91a3c654325bdffa35d686f280f12b6ec3ae
                        Standard Error:[debug] commands.go:1976 End of CmdRun(), Waiting for hijack to finish.

06/02 13:44:04 DEBUG|   subtest:0219|   top: Async-execute: /usr/bin/docker -D attach test_RkiR
06/02 13:44:04 INFO |   subtest:0219|   top: run_once() iteration 1 of 1
06/02 13:44:04 DEBUG|   subtest:0219|   top: Execute /usr/bin/docker -D top test_RkiR all
06/02 13:44:04 DEBUG|   subtest:0219|   top: Subcommand: top
                        Subargs: ['test_RkiR', 'all']
                        Executed: 1
                        Timeout: 60
                        Command: /usr/bin/docker -D top test_RkiR all
                        Exit code: 0
                        Standard Out:
                        F  UID  PID    PPID   PRI  NI  VSZ    RSS   WCHAN   STAT  TTY    TIME  COMMAND
                        4  0    32385  32635  20   0   11740  1364  n_tty_  Ss+   pts/2  0:00  bash
                        Standard Error:

06/02 13:44:04 ERROR|      test:0414| Exception escaping from test:
Traceback (most recent call last):
  File "/usr/local/autotest/client/shared/test.py", line 411, in _exec
    _call_test_function(self.execute, *p_args, **p_dargs)
  File "/usr/local/autotest/client/shared/test.py", line 830, in _call_test_function
    raise error.UnhandledTestFail(e)
UnhandledTestFail: Unhandled IndexError: list index out of range
Traceback (most recent call last):
  File "/usr/local/autotest/client/shared/test.py", line 823, in _call_test_function
    return func(*args, **dargs)
  File "/usr/local/autotest/client/tests/docker/dockertest/subtest.py", line 132, in execute
    *args, **dargs)
  File "/usr/local/autotest/client/shared/test.py", line 291, in execute
    postprocess_profiled_run, args, dargs)
  File "/usr/local/autotest/client/shared/test.py", line 212, in _call_run_once
    self.run_once(*args, **dargs)
  File "/usr/local/autotest/client/tests/docker/subtests/docker_cli/top/top.py", line 132, in run_once
    last_idx = self._gather_processes()
  File "/usr/local/autotest/client/tests/docker/subtests/docker_cli/top/top.py", line 118, in _gather_processes
    if out[-1].startswith("bash"):  # cut the last 'bash #' line
IndexError: list index out of range

06/02 13:44:04 INFO |   subtest:0219|   top: cleanup()

In .ini files 'yes' and 'no' parse to 'True' and 'False' through the Config class

If I put test_yes = yes in an .ini file, it parses to True when I access it in a subtest case, like:

[docker_cli/test]
test_yes = yes

print self.config['test_yes']

True

I'm not sure whether this is expected, but IMO the Config class, like the Python configparser library, should only do the job of parsing strings to strings. Am I right?
Tested against the latest autotest-docker/master.
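For comparison, here is a small sketch of the stock configparser behaviour (Python 3 names): plain access keeps the raw string, and boolean coercion only happens when explicitly requested via getboolean():

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("[docker_cli/test]\ntest_yes = yes\n")

# Plain access returns the raw string 'yes'...
print(repr(cfg.get("docker_cli/test", "test_yes")))

# ...while boolean coercion is explicit and opt-in.
print(cfg.getboolean("docker_cli/test", "test_yes"))
```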

Should the framework be more tolerant of cleanup() exceptions?

Question: #36 (comment)

Instead of raising exceptions you would just record the failure into self.cleanup_fails and proceed with cleanup. Then the framework would check test.cleanup_fails and raise an exception once cleanup is done. What do you think about this? (I want to avoid hanging containers because cleanup failed before we got the chance to smash them.)
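A hypothetical sketch of the proposed pattern (the cleanup_fails name comes from the suggestion above; the collector class itself is invented here): every cleanup step is attempted, failures are recorded, and a single exception is raised only after all steps have run:

```python
class CleanupCollector(object):
    """Run every cleanup step, then fail once if anything went wrong."""

    def __init__(self):
        self.cleanup_fails = []

    def attempt(self, func, *args, **dargs):
        try:
            func(*args, **dargs)
        except Exception as detail:  # record, keep smashing containers
            self.cleanup_fails.append(
                (getattr(func, '__name__', str(func)), detail))

    def finish(self):
        # Framework-side check once cleanup is done
        if self.cleanup_fails:
            raise RuntimeError("cleanup failures: %r" % (self.cleanup_fails,))
```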

Failing tests do not capture failure details

When a test fails, it always reports:
DockerTestFail: Good: ['error_check_stdout', 'crash_check_stdout']; Not Good: ['error_check_stderr', 'crash_check_stderr']; Details: (error_check_stderr, No details); (crash_check_stderr, No details)

with no details captured.

It looks like self.details in environment.py is never assigned a value.

Fail test in case subsubtest import fails

Reporting the test result GOOD when a sub-subtest fails to import is misleading.

This problem can be simulated by changing the SubSubTest type to the SubTest type, which just logs the import failure. Results from my test run using a modified run_simple test:

08:55:48 INFO | START   ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48
08:55:48 DEBUG| Persistent state client._record_indent now set to 1
08:55:48 DEBUG| Persistent state client.steps now set to [([], 'step_init', (), {})]
08:55:48 DEBUG| Persistent state client.steps now set to []
08:55:48 DEBUG| Persistent state client.steps now set to [([], 'run_test', ('/home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker', 'docker/subtests/docker_cli/run_simple', 'test_1-of-1', 3600, None), {})]
08:55:48 DEBUG| Persistent state client.steps now set to []
08:55:48 DEBUG| Test has timeout: 3600 sec.
08:55:48 INFO |         START   docker/subtests/docker_cli/run_simple.test_1-of-1       docker/subtests/docker_cli/run_simple.test_1-of-1       timestamp=1395561348        timeout=3600    localtime=Mar 23 08:55:48
08:55:48 DEBUG| Persistent state client._record_indent now set to 2
08:55:48 DEBUG| Persistent state client.unexpected_reboot now set to ('docker/subtests/docker_cli/run_simple.test_1-of-1', 'docker/subtests/docker_cli/run_simple.test_1-of-1')
08:55:48 DEBUG| Waiting for pid 19683 for 3600 seconds
08:55:48 INFO |                 RUNNING ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48       INFO: initialize()
08:55:48 INFO | /home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/subtests/docker_cli/run_simple
08:55:48 ERROR|                 RUNNING ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48       ERROR: Failed importing sub-subtest run_true
08:55:48 INFO | /home/medic/Work/Projekty/autotest/autotest-ldoktor/client/tests/docker/subtests/docker_cli/run_simple
08:55:48 ERROR|                 RUNNING ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48       ERROR: Failed importing sub-subtest run_false
08:55:48 INFO |                 RUNNING ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48       INFO: run_once() iteration 1 of 1
08:55:48 INFO |                 RUNNING ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48       INFO: postprocess_iteration(), iteration #1
08:55:48 INFO |                 RUNNING ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48       INFO: postprocess()
08:55:48 INFO |                 RUNNING ----    ----    timestamp=1395561348    localtime=Mar 23 08:55:48       INFO: cleanup()
08:55:50 INFO |                 GOOD    docker/subtests/docker_cli/run_simple.test_1-of-1       docker/subtests/docker_cli/run_simple.test_1-of-1       timestamp=1395561350        localtime=Mar 23 08:55:50       completed successfully
08:55:50 INFO |         END GOOD        docker/subtests/docker_cli/run_simple.test_1-of-1       docker/subtests/docker_cli/run_simple.test_1-of-1       timestamp=1395561350        localtime=Mar 23 08:55:50
08:55:50 DEBUG| Persistent state client._record_indent now set to 1
08:55:50 DEBUG| Persistent state client.unexpected_reboot deleted
08:55:50 INFO | END GOOD        ----    ----    timestamp=1395561350    localtime=Mar 23 08:55:50

Bug 1097877 - Seemingly arbitrary image-name restrictions

https://bugzilla.redhat.com/show_bug.cgi?id=1097877

All docker sub-commands which result in an image being added or tagged to the repository are limited to only lower-case letters and numbers. There is no clear explanation or mention of this requirement in any --help, man, or online docs regarding this. This limitation seems to needlessly restrict image naming where case-sensitivity could be a useful distinction for users.

Comment 1 on the bug suggests a possible sub-subtest here covering the addition of the ':' character:
https://bugzilla.redhat.com/show_bug.cgi?id=1097877#c1
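As a rough illustration only (the exact separator characters docker accepted are an assumption beyond the "lower-case letters and numbers" stated in the bug), a check like the following reproduces the reported restriction:

```python
import re

# Assumed pattern: lower-case letters and digits, plus common separators.
NAME_RE = re.compile(r'^[a-z0-9._-]+$')

def is_valid_repo_name(name):
    """True when 'name' satisfies the restriction described in the bug."""
    return bool(NAME_RE.match(name))

print(is_valid_repo_name('rhel7beta'))  # accepted
print(is_valid_repo_name('RHEL7Beta'))  # rejected: upper-case letters
```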

Do we need to wait a long time to generate a random container name?

I tried to generate a random container name with DockerContainers.get_unique_name(), but generation is very slow. After checking, I found the default is get_size = True; when it is True, execution in _get_container_list() takes the "else" branch, which is why get_unique_name() takes so long.

In fact, "--size" is not needed when generating a container name, so it would be better to default get_size = False; after all, the default value should fit the common case. Also, class DockerContainersBase carries the comment "Gathering layer-size data is potentially very slow, skip by default", and I am not clear what "skip by default" means here.

I called the function like this:

    def initialize(self):
        ...
        docker_containers = DockerContainers(self)
        docker_containers.get_size = False
        generated_name = docker_containers.get_unique_name('docker', 'test', 4)
        ...

However, no matter how I set docker_containers.get_size = False in my test case (short of changing it in the class definition), get_size still equals True in the end.
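As a minimal sketch (assuming uniqueness only needs to hold against the set of existing container names, which makes the size query unnecessary; the function below is invented for illustration, not the real DockerContainers API):

```python
import random
import string

def get_unique_name(existing_names, prefix='docker', suffix='test', length=4):
    """Generate a container name absent from existing_names; no size data."""
    while True:
        rand = ''.join(random.choice(string.ascii_lowercase)
                       for _ in range(length))
        name = '%s_%s_%s' % (prefix, rand, suffix)
        if name not in existing_names:
            return name
```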

Unhandled TypeError when OutputGood tests fail

When trying to run an expected-failure test, the statement `outputgood = OutputGood(cmdresult)` raises an unhandled TypeError.

Running `outputgood = OutputGood(cmdresult, ignore_error=False)` gives the same unhandled TypeError.

Running `outputgood = OutputGood(cmdresult, ignore_error=True)` avoids the error, but then the expected exception never happens either, which is also not what is wanted.

Unhandled TypeError: __init__() takes exactly 2 arguments (1 given)

Traceback (most recent call last):
File "/var/www/html/autotest/client/job.py", line 510, in _runtest
parallel.fork_waitfor_timed(self.resultdir, pid, timeout)
File "/var/www/html/autotest/client/parallel.py", line 116, in fork_waitfor_timed
_check_for_subprocess_exception(tmp, pid)
File "/var/www/html/autotest/client/parallel.py", line 67, in _check_for_subprocess_exception
e = pickle.load(file(ename, 'r'))
File "/usr/lib64/python2.6/pickle.py", line 1370, in load
return Unpickler(file).load()
File "/usr/lib64/python2.6/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib64/python2.6/pickle.py", line 1133, in load_reduce
value = func(*args)
TypeError: __init__() takes exactly 2 arguments (1 given)

events test fails with KeyError

Docker Autotest 0.7.3
docker-0.11.1-10.el7.x86_64

13:39:41 INFO |     START   docker/subtests/docker_cli/events.test_15-of-37 docker/subtests/docker_cli/events.test_15-of-37 timestamp=1401730781    timeout=600 localtime=Jun 02 13:39:41   
13:39:42 INFO |     events: initialize()
13:39:42 INFO |     events: run_once() iteration 1 of 1
13:39:45 INFO |     events: Removing test container...
13:39:48 INFO |     events: Sleeping 5 seconds for events to catch up
13:39:53 INFO |     events: postprocess_iteration(), iteration #1
13:39:53 INFO |     events: postprocess()
13:39:53 ERROR| Exception escaping from test:
Traceback (most recent call last):
  File "/usr/local/autotest/client/shared/test.py", line 411, in _exec
    _call_test_function(self.execute, *p_args, **p_dargs)
  File "/usr/local/autotest/client/shared/test.py", line 830, in _call_test_function
    raise error.UnhandledTestFail(e)
UnhandledTestFail: Unhandled KeyError: '563ef73e7d627bfe86526087a3a7d76e9fbaf1c2f902cba62f85e62fa039ed81'
Traceback (most recent call last):
  File "/usr/local/autotest/client/shared/test.py", line 823, in _call_test_function
    return func(*args, **dargs)
  File "/usr/local/autotest/client/tests/docker/dockertest/subtest.py", line 132, in execute
    *args, **dargs)
  File "/usr/local/autotest/client/shared/test.py", line 298, in execute
    self.postprocess()
  File "/usr/local/autotest/client/tests/docker/subtests/docker_cli/events/events.py", line 229, in postprocess
    test_events = cid_events[self.stuff['nfdc_cid']]
KeyError: '563ef73e7d627bfe86526087a3a7d76e9fbaf1c2f902cba62f85e62fa039ed81'

13:39:53 INFO |     events: cleanup()

wait test is non-deterministic

This method changes testing behaviour randomly:

def init_use_names(self, use_names=False):
    if use_names:
        conts = self.sub_stuff['containers']
        containers = DockerContainers(self.parent_subtest)
        containers = containers.get_container_list()
        cont_ids = [cont['id'] for cont in conts]
        for cont in containers:
            if cont.long_id in cont_ids:
                if use_names is not True and random.choice((True, False)):
                    continue    # 50% chance of using id vs. name
                # replace the id with name
                cont_idx = cont_ids.index(cont.long_id)
                conts[cont_idx]['id'] = cont.container_name

If there's a bug in either path, there is only one shot per test run to find it, so this test will fail only about 50% of the time. It also makes debugging really difficult since the precise behaviour is always somewhat undefined.
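One hypothetical deterministic alternative (the variants() helper and the dict keys below are invented for illustration) is to exercise both lookup paths on every run instead of picking one at random:

```python
def variants(containers):
    """Yield (identifier, kind) pairs so both id and name paths always run."""
    for cont in containers:
        yield cont['id'], 'id'
        yield cont['name'], 'name'

# Every container is waited on twice: once by id, once by name.
for ident, kind in variants([{'id': 'abc123', 'name': 'test_RkiR'}]):
    print(kind, ident)
```

This doubles the number of wait operations, but every run covers both code paths, so a bug in either one fails the test 100% of the time.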

Make error message more friendly

When subsubtests are being run, if there is no 'subsubtests =' line defined in the config for the subtest, you get an error message that looks like this:

Traceback (most recent call last):
  File "/home/jmolet/Projects/autotest/client/shared/test.py", line 378, in _exec
    _cherry_pick_call(self.initialize, *args, **dargs)
  File "/home/jmolet/Projects/autotest/client/shared/test.py", line 738, in _cherry_pick_call
    return func(*p_args, **p_dargs)
  File "/home/jmolet/Projects/autotest/client/tests/docker/dockertest/subtest.py", line 434, in initialize
    self.subsubtest_names = self.config['subsubtests'].strip().split(",")
KeyError: 'subsubtests'

This should probably be a nicer exception, e.g. DockertestNAError("No subsubtests enabled in config!"), both to make it easier to figure out where it is coming from and to cover the case where you purposely don't enable any tests.
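A minimal sketch of the suggested translation (DockertestNAError is the name proposed above; the standalone helper function is hypothetical, standing in for the code at subtest.py line 434):

```python
class DockertestNAError(Exception):
    """Raised when a subtest is not applicable, e.g. missing configuration."""

def subsubtest_names(config):
    """Return the enabled sub-subtest names, with a friendly error if none."""
    try:
        return config['subsubtests'].strip().split(",")
    except KeyError:
        raise DockertestNAError("No subsubtests enabled in config!")
```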

[RFC] Subtest directory organization

I was thinking of renaming subtests/docker_cli to subtests/integration then creating subtests/bugzilla and subtests/functional and whatever other categories we need. Simply to help keep these directories of subtests from growing too deep, and make them wider instead.

Make pretty autotest client console output

This is just completely wrong for a single line of console-only output:

16:12:05 INFO |         RUNNING ----    ----    timestamp=1396037524    localtime=Mar 28 16:12:04   INFO: SubSubtest good_extra_tag INFO: Pulling...

If we can muddle our way through the autotest client logging code deep enough, we should be able to add enough duct-tape (here or in autotest) so that line appears like this:

16:12:05 INFO |         INFO: SubSubtest good_extra_tag INFO: Pulling...

HTB Req. Test: Image can NOT be pushed to docker.io

An upcoming capability is license-enforcement related; we need a test that is able to confirm:

  • A specific image can NOT be docker-pushed to index.docker.io
  • Any image built from or committed on top of that specific image also can NOT be pushed to index.docker.io

Run sub-subtest rerun_long_term_app fails when image needs to be downloaded

[root@docker docker]# rpm -q docker-io
docker-io-0.9.1-3.collider.el7.x86_64
[root@docker docker]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[root@docker docker]# ../../shared/version.py 
0.16.0-master-63-g35e93
[root@docker docker]# grep version config_defaults/defaults.ini 
config_version = 0.6.3
# Autotest version dependency for framework (or override for individual tests)
autotest_version = 0.16.0-master-32-g050cd

Verbose output:

reset; ../../autotest-local run docker --verbose --args=docker_cli/save_load,docker_cli/run_simple,docker_cli/start
...
48  INFO: SubSubtest rerun_long_term_app INFO: rerun_long_term_app initialize()
11:37:18 WARNI| run process timeout (30.0) fired on: /usr/bin/docker -D run -d fedora ping 127.0.0.1
11:37:19 ERROR|         RUNNING ----    ----    timestamp=1398181039    localtime=Apr 22 11:37:19   ERROR: rerun_long_term_app failed to initialize: DockerCommandError: Command </usr/bin/docker -D run -d fedora ping 127.0.0.1> failed, rc=2
  * Command: 
      /usr/bin/docker -D run -d fedora ping 127.0.0.1
  Exit status: 2
  Duration: 31.1690530777

  stderr:
  Unable to find image 'fedora' locally
Repository fedora already being pulled by another client. Waiting.
11:37:19 DEBUG|         RUNNING ----    ----    timestamp=1398181039    localtime=Apr 22 11:37:19   DEBUG: Traceback (most recent call last):
    File "/usr/local/autotest/client/tests/docker/dockertest/subtest.py", line 536, in call_subsubtest_method
      method()
    File "/usr/local/autotest/client/tests/docker/subtests/docker_cli/start/start.py", line 157, in initialize
      results = prep_changes.execute()
    File "/usr/local/autotest/client/tests/docker/dockertest/dockercmd.py", line 136, in execute
      raise DockerCommandError(self.command, detail.result_obj)
  DockerCommandError: Command </usr/bin/docker -D run -d fedora ping 127.0.0.1> failed, rc=2
  * Command: 
      /usr/bin/docker -D run -d fedora ping 127.0.0.1
  Exit status: 2
  Duration: 31.1690530777

  stderr:
  Unable to find image 'fedora' locally
Repository fedora already being pulled by another client. Waiting.
11:37:19 INFO |         RUNNING ----    ----    timestamp=1398181039    localtime=Apr 22 11:37:19   INFO: SubSubtest rerun_long_term_app INFO: rerun_long_term_app cleanup()
