
multi-mechanize's Issues

An optional arg for specifying target host:port

Again, not a bug but a possible feature request.

It would be great to be able to do:
 $ python multi-mechanize.py -host my.host.com:8000

and have that passed through to each Transaction.run() method (hmm, that would 
break the existing API) so that each test does not need to hard-code the host name.

The reason for this is so that tests can be run against different hosts without 
changing the actual test code, or even the configuration file.

Currently, I'm working around this by setting the host in the python 
environment and picking it up from the Transaction class :/

Thanks either way!
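
A minimal sketch of how such a flag could be parsed and handed to each test, assuming argparse; the wiring into Transaction is hypothetical and the real multi-mechanize option handling may differ:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical sketch only; the real multi-mechanize CLI may differ.
    parser = argparse.ArgumentParser(prog="multi-mechanize.py")
    parser.add_argument("project")
    parser.add_argument("--host", default=None,
                        help="target host:port, e.g. my.host.com:8000")
    return parser.parse_args(argv)

args = parse_args(["load_tests", "--host", "my.host.com:8000"])
# Each test could then read args.host instead of hard-coding the target:
# trans = Transaction(); trans.host = args.host; trans.run()
print(args.host)   # -> my.host.com:8000
```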

Original issue reported on code.google.com by [email protected] on 8 Mar 2011 at 8:01

Try-Finally to catch results even on exception

What steps will reproduce the problem?
1. Currently if there is an exception in the agent or manually killed with 
ctrl+c the results are lost

What version of the product are you using? On what operating system?
trunk -r 406 on ubuntu

Please provide any additional information below.
The provided patch places a try/finally around the main agent processing and 
the output. This makes sure that the output is always shown and all results 
files are generated up to the point where the exception occurred. 

Unfortunately, to avoid merge conflicts this patch must be applied after issues 
11, 12 and 13, or the solution will have to be applied manually. To apply this 
patch, run the following:
cd <multimechanizefolder>
mv multimechanize.py multi-mechanize.py
patch -p0 -i /<pathtofile>/alwaysAnalyseResults.diff
mv multi-mechanize.py multimechanize.py
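
The shape of the fix, sketched with hypothetical names (agents and write_results stand in for the real agent loop and output code):

```python
def run_test(agents, write_results):
    # Sketch of the patch's idea: always flush output and results files,
    # even when an agent raises or the run is killed with ctrl+c
    # (KeyboardInterrupt propagates through the finally block too).
    try:
        for agent in agents:
            agent()              # may raise at any point
    finally:
        write_results()          # runs on success, exception, or ctrl+c
```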

Original issue reported on code.google.com by [email protected] on 27 Sep 2011 at 11:28

Attachments:

Throughput values less than one are truncated in graph

=== What steps will reproduce the problem?
1. Create some data where throughput is less-than-1 for some time intervals
2. Observe that the throughput graph has zeroes for those points

=== What is the expected output? What do you see instead?
I expect less-than-1 values to be reported. They are truncated to 0.

=== What version of the product are you using? On what operating system?
multi-mechanize version 1.011

=== Please provide any additional information below.

Here is a patch that fixes it:


diff --git a/multi-mechanize/lib/results.py b/multi-mechanize/lib/results.py
--- a/multi-mechanize/lib/results.py
+++ b/multi-mechanize/lib/results.py
@@ -128,7 +128,7 @@
     interval_secs = ts_interval
     splat_series = split_series(trans_timer_points, interval_secs)
     for i, bucket in enumerate(splat_series):
-        throughput_points[int((i + 1) * interval_secs)] = (len(bucket) / interval_secs)
+        throughput_points[int((i + 1) * interval_secs)] = (len(bucket) / float(interval_secs))
     graph.tp_graph(throughput_points, 'All_Transactions_throughput.png', results_dir)


@@ -150,7 +150,7 @@
         interval_secs = ts_interval
         splat_series = split_series(custom_timer_points, interval_secs)
         for i, bucket in enumerate(splat_series):
-            throughput_points[int((i + 1) * interval_secs)] = (len(bucket) / interval_secs)
+            throughput_points[int((i + 1) * interval_secs)] = (len(bucket) / float(interval_secs))
         graph.tp_graph(throughput_points, timer_name + '_throughput.png', results_dir)

         report.write_line('<hr />')
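
The root cause is Python 2's integer division: when both operands of `/` are ints, the result is truncated, so any throughput below one transaction per interval collapses to 0. Casting the divisor to float restores true division. Python 3's two operators make the difference explicit:

```python
# In Python 2, len(bucket) / interval_secs truncated when both were ints.
# Python 3's // reproduces that truncation; plain / is true division,
# which is what the float(interval_secs) cast achieved under Python 2.
transactions, interval_secs = 3, 5    # 0.6 transactions per second
truncated = transactions // interval_secs
fixed = transactions / interval_secs
print(truncated, fixed)   # -> 0 0.6
```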

Original issue reported on code.google.com by [email protected] on 19 Aug 2011 at 4:59

Provide own script for report generation

KIND: FEATURE-REQUEST

It would be nice if there were a separate, standalone script for report 
generation.
This would:

  * make it simpler to adapt reports to different formats (PDF, ...)
  * allow post-processing of test-run data (from files or from a database)
  * allow other benchmarking utilities to store data in multi-mechanize format 
     and use it for report generation

NOTE:
I have certain use cases where I cannot use Python for benchmarking or load 
testing, so multi-mechanize itself is not an option. But using it for 
post-processing and report generation would be nice.



Original issue reported on code.google.com by [email protected] on 2 Apr 2011 at 7:18

Misleading # of threads displayed when you run the load test.

What steps will reproduce the problem?
1. Have multiple sections in config.cfg
2. Have different # of threads for each.
3. Run multi-mechanize

What is the expected output? What do you see instead?
 Expected threads: the sum of the number of threads in each section.
 Instead we see: the number of threads in the last section * the number of sections.


What version of the product are you using? On what operating system?
Latest.

Please provide any additional information below.

See the code at lines 93 and 94 in multi-mechanize.py:

print '\n  user_groups:  %i' % len(user_groups)
print '  threads: %i\n' % (ug_config.num_threads * len(user_groups))
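
A corrected summary would sum each group's configured thread count rather than multiplying the last group's count by the number of groups. Sketched with a hypothetical stand-in for the parsed config objects:

```python
# UserGroupConfig and ug_configs are illustrative stand-ins for the
# per-group configuration that multi-mechanize parses from config.cfg.
class UserGroupConfig:
    def __init__(self, num_threads):
        self.num_threads = num_threads

ug_configs = [UserGroupConfig(10), UserGroupConfig(25)]

# Sum per-group counts instead of last_group_threads * group_count.
total_threads = sum(ug.num_threads for ug in ug_configs)
print('  user_groups:  %i' % len(ug_configs))   # -> 2
print('  threads: %i' % total_threads)          # -> 35, not 25 * 2 = 50
```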

Original issue reported on code.google.com by [email protected] on 16 Dec 2011 at 4:36

results.html not accurate

What steps will reproduce the problem?
1. Configure projectX with 20 user_groups, each with 100 threads.
2. Use the other config settings:
run_time: 300
rampup: 300
console_logging: on
results_ts_interval: 5

3. Execute "python multi-mechanize.py projectX"

4. Looking at the "results.html" for this test run shows 
Download_PDF_procedure = 
count min   avg   80pct  90pct  95pct  max    stdev 
317   0.687 6.515 10.835 15.334 15.345 20.440 5.563 
(the key here being the max value of 20.440)

But looking at the results.csv shows "Download_PDF_procedure': 331.61391496658325,"
"3574,2180.750,1299697691,user_group-15,1919.354296,,{'Releases_page_load_time':
 184.1017701625824, 'Download_PDF_procedure': 331.61391496658325, 
'Login_page_load_time': 25.920620203018188, 'Notams_page_load_time': 
171.20861792564392, 'portal': 16, 'user': 2, 'Time_to_login': 
26.438215017318726, 'RNP_forcast_page_load_time': 10.634931087493896}" 

What is the expected output? What do you see instead?
That the results.html report uses the same values found in the results.csv


What version of the product are you using? On what operating system?
I am using multi-mechanize_1.010 on ubuntu 10.04 server and Python 2.6.5.


Please provide any additional information below.
It appears that the results.html tables and graphics are completely inaccurate. 
In the example above, the count says 317, but the csv file has 1431 entries. 
These report errors only seem to happen when I use large numbers of 
user_groups.


Original issue reported on code.google.com by [email protected] on 9 Mar 2011 at 8:00

Attachments:

Remote tests ending early for an unknown reason.

Environment description:

Using Amazon's EC2 instances with multi-mechanize running in 'server mode'.

Config file looks like the following:
[global]
run_time: 300
rampup: 30
console_logging: off
results_ts_interval: 5


[user_group-1]
threads: 400 
script: glb.py

(glb.py is a renamed version of example_mock.py, with a different target 
hardcoded in).



What steps will reproduce the problem?
1. Add the ec2 host to the local machine where grid_gui.py will be used.
2. Run the test
3. When test shows as 'complete', view the results.html

I get a summary that looks something like this:

transactions: 46757
errors: 46757
run time: 300 secs
rampup: 30 secs

test start: 2011-01-20 02:46:33
test finish: 2011-01-20 02:48:35

time-series interval: 5 secs


workload configuration:

group name  threads script name
user_group-1    400 glb.py



Notice that the test finish is barely over 2 minutes after the start time, when 
the test should have lasted 5 minutes.



What version of the product are you using? On what operating system?

I'm running the latest version of multi-mechanize on ubuntu 10.04.  EC2 
instances are being run via command line.


Original issue reported on code.google.com by [email protected] on 20 Jan 2011 at 7:48

ImportError: No module named mechanize

What steps will reproduce the problem?
1.
$ python multi-mechanize.py default_project
Traceback (most recent call last):
  File "multi-mechanize.py", line 47, in <module>
    exec('import %s' % f)
  File "<string>", line 1, in <module>
  File "projects/default_project/test_scripts/example_mechanize_simple.py", line 9, in <module>
    import mechanize
ImportError: No module named mechanize

2.
3.

What is the expected output? What do you see instead?
no errors

What version of the product are you using? On what operating system?
ubuntu & mm downloaded 10 mins ago

Please provide any additional information below.
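The scripts import the mechanize package at startup, so it must be installed first (typically `pip install mechanize`). A small guard along these lines, sketched for a modern Python, can give a friendlier message than a traceback:

```python
import importlib.util

def has_mechanize():
    """Return True if the mechanize package is importable."""
    return importlib.util.find_spec("mechanize") is not None

if not has_mechanize():
    # Illustrative: fail early with a hint instead of an ImportError.
    print("mechanize is missing; install it with: pip install mechanize")
```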


Original issue reported on code.google.com by osde.info on 8 Mar 2011 at 3:30

Make multi-mechanize.py importable

What steps will reproduce the problem?
1. From the python command line
2. import multi-mechanize

What is the expected output? What do you see instead?
import multi-mechanize
  File "<stdin>", line 1
    import multi-mechanize
                ^
SyntaxError: invalid syntax


What version of the product are you using? On what operating system?
trunk -r 406 on Ubuntu

Please provide any additional information below.
I have attached a patch file to make the module importable by moving the 
parsing of args into a function that is called automatically when the code is 
run on the command line but can be called manually when imported. The patch 
must be applied and then the file must be renamed to not include a '-', which 
is causing the above error. 

Commands to run to apply patch:
cd <multimechanizefolder>
patch -p0 -i /<pathtofile>/makeImportable.diff
cp multi-mechanize.py multimechanize.py
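
The layout the patch aims for, sketched with illustrative names: arg parsing moves into a function that runs under the `__main__` guard, and the file is renamed because a hyphen makes the module name unimportable:

```python
# multimechanize.py -- note the rename: 'import multi-mechanize' is a
# SyntaxError because '-' cannot appear in a module name.
import sys

def main(argv=None):
    """Parse args and start the run (illustrative stand-in)."""
    argv = sys.argv[1:] if argv is None else argv
    project_name = argv[0] if argv else "default_project"
    return project_name

if __name__ == "__main__":
    main()   # executed when run from the command line, not on import
```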

Original issue reported on code.google.com by [email protected] on 27 Sep 2011 at 9:53

Attachments:

Expand documentation for test example code to be on par with Pylot

Since Multi-Mechanize is supposed to be the successor to Pylot, it would seem 
appropriate to at least have documentation equivalent to Pylot's getting 
started guide. Not everyone who plans to use Multi-Mechanize is a Python user.

Some of this stuff appears scattered in the Google Group for the tool, but 
would be worthwhile to consolidate onto the project site wiki pages.

Some notable areas to document:

* Making POST requests
* Making a POST request with a (binary) file (to upload)
* Adding headers to a request
* Parameterizing tests with a text, CSV, or XML file
* Parameterizing tests using command-line arguments
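
As an illustration of the CSV item above, a hypothetical test script that reads per-user data from a CSV instead of hard-coding it (the inline CSV_DATA stands in for a users.csv file next to the script):

```python
import csv
import io

# Stand-in for a users.csv file shipped alongside the test script.
CSV_DATA = "username,password\nalice,secret1\nbob,secret2\n"

class Transaction(object):
    def __init__(self):
        # In a real project: open('users.csv') instead of io.StringIO.
        self.users = list(csv.DictReader(io.StringIO(CSV_DATA)))

    def run(self):
        # Each virtual user would pick a row and log in with it.
        return [u["username"] for u in self.users]

print(Transaction().run())   # -> ['alice', 'bob']
```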

Original issue reported on code.google.com by [email protected] on 24 Sep 2011 at 2:25

Run project for an infinite time

What steps will reproduce the problem?
1. We require multi-mechanize to run for an unspecified time, anything from 
a minute to a week, at which point we can kill the processes. In order to do 
this we suggest allowing run_time: -1 in the config.

What version of the product are you using? On what operating system?
trunk -r 406 on ubuntu

Please provide any additional information below.
We have created a patch file (find attached).

If this is applied after the issue 11 patch, then the following commands should 
be run:
cd <multimechanizefolder>
mv multimechanize.py multi-mechanize.py
patch -p0 -i /<pathtofile>/runIndefinately.diff
mv multi-mechanize.py multimechanize.py

Otherwise the following can be run:
cd <multimechanizefolder>
patch -p0 -i /<pathtofile>/runInfinately.diff

However, we strongly recommend using the command-line argument (added by issue 
12):
 --daemonize 
as this will allow ALL processes to be killed with ctrl+c. 
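
The proposed convention, sketched as a predicate the main loop could consult (names are illustrative):

```python
import time

def should_continue(start_time, run_time):
    # run_time == -1 means run until the processes are killed;
    # any other value is the usual fixed duration in seconds.
    if run_time == -1:
        return True
    return (time.time() - start_time) < run_time

print(should_continue(time.time() - 10, 5))    # -> False: 5s budget used up
print(should_continue(time.time() - 10, -1))   # -> True: run indefinitely
```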

Original issue reported on code.google.com by [email protected] on 27 Sep 2011 at 11:10

Attachments:

Enable projects directory to be set via an optional arg

This is a feature request more than a bug.

It'd be great to be able to do:
 $ python multi-mechanize --projects-dir=myproject load_tests
or similar.

The situation I have is that I'd like to store our load test configurations 
in our VCS without including multi-mech itself (we just pull it when building 
a dev environment).
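
A sketch of how such an option could look, assuming argparse (the real CLI and any eventual patch may differ):

```python
import argparse
import os

# Illustrative: resolve the project relative to a user-supplied
# directory instead of the hard-coded 'projects/' folder.
parser = argparse.ArgumentParser(prog="multi-mechanize")
parser.add_argument("project")
parser.add_argument("--projects-dir", default="projects")
args = parser.parse_args(["load_tests", "--projects-dir=myproject"])

project_path = os.path.join(args.projects_dir, args.project)
print(project_path)
```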

Original issue reported on code.google.com by [email protected] on 8 Mar 2011 at 7:54

Daemonize the child processes

What steps will reproduce the problem?
1. If you start the multi-mechanize process, it starts a child process for 
each agent. 
2. When the parent process is killed using ctrl+c, the child processes are 
not killed and remain active until run_time is reached.

What is the expected output? What do you see instead?
If, after killing the parent process, you run:
ps auwx | grep "multi-mechanize" 
you will see that there are still child processes running.

What version of the product are you using? On what operating system?
trunk -r 406 on ubuntu

Please provide any additional information below.
To solve this, the Agent class should set self.daemon to True. 
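
With Python's multiprocessing module, setting a child's daemon flag before starting it makes the interpreter terminate that child when the parent exits; a minimal illustration (agent_work is a stand-in):

```python
import multiprocessing

def agent_work():
    pass   # stand-in for the agent's load-generation loop

p = multiprocessing.Process(target=agent_work)
p.daemon = True   # child will be terminated when the parent exits
print(p.daemon)   # -> True
```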

The following patch adds a command-line argument (defaulting to false) that 
sets the child processes' daemon flag to true. Note that this patch relies on 
the issue 11 patch having already been applied (to avoid patch conflicts). To 
apply this patch, run the following:
cd <multimechanizefolder>
mv multimechanize.py multi-mechanize.py
patch -p0 -i /<pathtofile>/daemonize.diff
mv multi-mechanize.py multimechanize.py

Original issue reported on code.google.com by [email protected] on 27 Sep 2011 at 10:30

Attachments:
