
svenskaspel / locust-plugins

A set of useful plugins/extensions for Locust

License: Apache License 2.0

Python 97.88% Makefile 0.55% Dockerfile 0.29% Shell 1.28%

locust-plugins's People

Contributors: alexeiser, balaji2711, bogdanlivadariu, ctstone, cyberw, cylarez, denniskrone, dmitry-rozhd, freshwlnd, guilhermeslucas, howardosborne, ionutab, jmfiaschi, joaogfarias, kolo-vrat, mabellou, maffey, martinpeck, nemmeviu, recolada, samposh, srenatus, srininara, stasgrishaev, tomasgareau, victormf2


locust-plugins's Issues

Docker image with locust-plugin

First of all a big thank you for those great plugins!
This is not an issue but more of an FYI: since I couldn't find an existing docker image based on the vanilla Locust image that includes your plugins, I created one myself and published it on Docker Hub: https://github.com/sebader/locust-with-plugins

If there is another maintained version on dockerhub already, I'm happy to remove mine again

TimescaleListener - Failed to write user count to Postgresql: FeatureNotSupported

Hi there @cyberw !
I spent quite a bit of time trying to add your TimescaleListener plugin to display graphs in Grafana, and I am stuck on the INSERT into the table.
What I tried:

  • creating a new PG database via pgAdmin and from the command line via psql (versions 9 and 11, with the corresponding Timescale version of the plugin) from your dump listeners_timescale_table.sql, then restarting the PG server (judging by the logs, everything was created successfully)
    (screenshot of the error stack; the selected text shows the values for the INSERT operation)

What am I missing and what can I try?


Use 'options.csv_prefix' when writing transaction stats file

The main stats and summary csv files from locust can be configured to use a prefix string (--csv CSV_PREFIX), however the transaction csv file is always written to the current directory. Since the transaction file can grow large for long runs, it would be ideal to store it on a dedicated volume.

I can work around this with the following code, but it would be great if it was automatic:

from locust.main import parse_options
from locust_plugins.transaction_manager import TransactionManager


options = parse_options()
if options.csv_prefix and not TransactionManager.transactions_filename.startswith(options.csv_prefix):
    TransactionManager.transactions_filename = f'{options.csv_prefix}_transactions.csv'
if options.csv_prefix and not TransactionManager.transactions_summary_filename.startswith(options.csv_prefix):
    TransactionManager.transactions_summary_filename = f'{options.csv_prefix}_transactions_summary.csv'

Is there a desire for refactoring the Listener?

Inspired by your listeners, I have performed my own analysis. You can see all of the adapted code in https://github.com/Midnighter/starlette-delay/. I have put some effort into refactoring the listener, and I think it's more flexible now and a bit cleaner, with a clearer separation of responsibilities.

If you find these changes valuable, I can add tests and open a pull request. I'm also happy to change the license of my code to Apache 2.0 so it is compatible with this project.

install error~

Installing collected packages: confluent-kafka, locust, locust-plugins
Running setup.py install for confluent-kafka ... error
ERROR: Command errored out with exit status 1:
command: 'd:\python\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\setup.py'"'"'; file='"'"'C:\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\setup.py'"'"';f=getattr(tokeniz
e, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\DELL\AppData\Local\Temp\pip-record-jnpwxu7z\install-record.txt' --single-version-externally-managed --compile --install-headers 'd:\python\Inc
lude\confluent-kafka'
cwd: C:\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka
Complete output (51 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.8
creating build\lib.win-amd64-3.8\confluent_kafka
copying confluent_kafka\deserializing_consumer.py -> build\lib.win-amd64-3.8\confluent_kafka
copying confluent_kafka\error.py -> build\lib.win-amd64-3.8\confluent_kafka
copying confluent_kafka\serializing_producer.py -> build\lib.win-amd64-3.8\confluent_kafka
copying confluent_kafka_init_.py -> build\lib.win-amd64-3.8\confluent_kafka
creating build\lib.win-amd64-3.8\confluent_kafka\admin
copying confluent_kafka\admin_init_.py -> build\lib.win-amd64-3.8\confluent_kafka\admin
creating build\lib.win-amd64-3.8\confluent_kafka\avro
copying confluent_kafka\avro\cached_schema_registry_client.py -> build\lib.win-amd64-3.8\confluent_kafka\avro
copying confluent_kafka\avro\error.py -> build\lib.win-amd64-3.8\confluent_kafka\avro
copying confluent_kafka\avro\load.py -> build\lib.win-amd64-3.8\confluent_kafka\avro
copying confluent_kafka\avro_init_.py -> build\lib.win-amd64-3.8\confluent_kafka\avro
creating build\lib.win-amd64-3.8\confluent_kafka\kafkatest
copying confluent_kafka\kafkatest\verifiable_client.py -> build\lib.win-amd64-3.8\confluent_kafka\kafkatest
copying confluent_kafka\kafkatest\verifiable_consumer.py -> build\lib.win-amd64-3.8\confluent_kafka\kafkatest
copying confluent_kafka\kafkatest\verifiable_producer.py -> build\lib.win-amd64-3.8\confluent_kafka\kafkatest
copying confluent_kafka\kafkatest_init_.py -> build\lib.win-amd64-3.8\confluent_kafka\kafkatest
creating build\lib.win-amd64-3.8\confluent_kafka\schema_registry
copying confluent_kafka\schema_registry\avro.py -> build\lib.win-amd64-3.8\confluent_kafka\schema_registry
copying confluent_kafka\schema_registry\error.py -> build\lib.win-amd64-3.8\confluent_kafka\schema_registry
copying confluent_kafka\schema_registry\json_schema.py -> build\lib.win-amd64-3.8\confluent_kafka\schema_registry
copying confluent_kafka\schema_registry\protobuf.py -> build\lib.win-amd64-3.8\confluent_kafka\schema_registry
copying confluent_kafka\schema_registry\schema_registry_client.py -> build\lib.win-amd64-3.8\confluent_kafka\schema_registry
copying confluent_kafka\schema_registry_init_.py -> build\lib.win-amd64-3.8\confluent_kafka\schema_registry
creating build\lib.win-amd64-3.8\confluent_kafka\serialization
copying confluent_kafka\serialization_init_.py -> build\lib.win-amd64-3.8\confluent_kafka\serialization
creating build\lib.win-amd64-3.8\confluent_kafka\avro\serializer
copying confluent_kafka\avro\serializer\message_serializer.py -> build\lib.win-amd64-3.8\confluent_kafka\avro\serializer
copying confluent_kafka\avro\serializer_init_.py -> build\lib.win-amd64-3.8\confluent_kafka\avro\serializer
running build_ext
building 'confluent_kafka.cimpl' extension
creating build\temp.win-amd64-3.8
creating build\temp.win-amd64-3.8\Release
creating build\temp.win-amd64-3.8\Release\Users
creating build\temp.win-amd64-3.8\Release\Users\DELL
creating build\temp.win-amd64-3.8\Release\Users\DELL\AppData
creating build\temp.win-amd64-3.8\Release\Users\DELL\AppData\Local
creating build\temp.win-amd64-3.8\Release\Users\DELL\AppData\Local\Temp
creating build\temp.win-amd64-3.8\Release\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va
creating build\temp.win-amd64-3.8\Release\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka
creating build\temp.win-amd64-3.8\Release\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\confluent_kafka
creating build\temp.win-amd64-3.8\Release\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\confluent_kafka\src
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Id:\python\include -Id:\python\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\ATLMFC\include" "-IC:\Program F
iles (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\include" "-ID:\Windows Kits\10\include\10.0.18362.0\ucrt" "-ID:\Windows Kits\10\include\10.0.18362.0\shared" "-ID:\Windows Kits\10\include\10.0.18362.0\um" "-ID:\Windows Kits\10\include\10.0.18362.0\winrt" "-ID:\Windows Kits\10\include\1
0.0.18362.0\cppwinrt" /TcC:\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\confluent_kafka\src\confluent_kafka.c /Fobuild\temp.win-amd64-3.8\Release\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\confluent_kafka\src\confluent_kafka.obj
confluent_kafka.c
C:\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\confluent_kafka\src\confluent_kafka.h(22): fatal error C1083: Cannot open include file: "librdkafka/rdkafka.h": No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\bin\HostX86\x64\cl.exe' failed with exit status 2
----------------------------------------
ERROR: Command errored out with exit status 1: 'd:\python\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka\setup.py'"'"'; file='"'"'C:\Users\DELL\AppData\Local\Temp\pip-install-1jhg23va\confluent-kafka
\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record 'C:\Users\DELL\AppData\Local\Temp\pip-record-jnpwxu7z\install-record.txt' --single-version-externally-managed --compile
--install-headers 'd:\python\Include\confluent-kafka' Check the logs for full command output.

AttributeError: 'TransactionManager' object has no attribute 'runner'

I'm trying to use TransactionManager and I'm following the example in the repo, however when I call tm.start_transaction I receive this error:
AttributeError: 'TransactionManager' object has no attribute 'runner'

here is the full traceback
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/locust/user/task.py", line 284, in run
    self.execute_next_task()
  File "/usr/local/lib/python3.8/site-packages/locust/user/task.py", line 309, in execute_next_task
    self.execute_task(self._task_queue.pop(0))
  File "/usr/local/lib/python3.8/site-packages/locust/user/task.py", line 321, in execute_task
    task(self)
  File "/mnt/locust/test_scenarios/simulate_compliance_life_cycle.py", line 188, in create_compliance
    self.tm.start_transaction("compliance_creation")
  File "/home/locust/.local/lib/python3.8/site-packages/locust_plugins/transaction_manager.py", line 46, in start_transaction
    transaction["user_count"] = self.runner.user_count if self.runner else 0
AttributeError: 'TransactionManager' object has no attribute 'runner'

I'm using locust version 1.1.1 and locust-plugins version 1.0.15.

Appreciate your support.

Interested in a PR For MLLP/Socket Based Protocols?

Hello,

I work in the health care industry and am interested in contributing a new user class, MllpUser, to locust-plugins to support load testing clinical system throughput. The MLLP protocol is based on the HL7 standard. The standard supports socket-based message exchange and is (at least in the US) the primary means of exchanging clinical data, so this could be useful to other developers working in health care.

The messaging format used within the MLLP protocol is rather simple. Each message is enclosed within a block.

<0x0B>  [start block 1 byte]
datadatadatadata..etc [variable data]
<0x1C> [end block 1 byte]
 <0x0D> [carriage return 1 byte]

Communication with a server is blocking/synchronous. A client has to wait for a server ACK before additional messages may be sent.
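A minimal Python sketch of that framing (the delimiter bytes come from the block description above; `frame`/`unframe` are illustrative names, not part of any existing library):

```python
# MLLP block delimiters as described above.
START_BLOCK = b"\x0b"      # <0x0B>, start of block
END_BLOCK = b"\x1c"        # <0x1C>, end of block
CARRIAGE_RETURN = b"\x0d"  # <0x0D>, trailing carriage return

def frame(message: bytes) -> bytes:
    """Wrap an HL7 payload in an MLLP block."""
    return START_BLOCK + message + END_BLOCK + CARRIAGE_RETURN

def unframe(block: bytes) -> bytes:
    """Strip the MLLP envelope, checking the delimiters first."""
    trailer = END_BLOCK + CARRIAGE_RETURN
    if not (block.startswith(START_BLOCK) and block.endswith(trailer)):
        raise ValueError("not a well-formed MLLP block")
    return block[len(START_BLOCK):-len(trailer)]
```

Since communication is synchronous, a user class built on this would send one framed message, then block on the socket until the server's ACK arrives before sending the next.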

If you think this is something which could be a good addition to the repository, I will be happy to work on a PR. I could try to make things more generic and parameterize some of the "MLLP" spec, if you would prefer to have a "SocketUser" which could be fine-tuned for different use cases through configuration.

Thanks!

Dixon

-i option with master/slave

I'm trying to run a master with multiple slaves, however the -i option does not appear to have any impact when used, either in the master instance or in the slaves.

Am I missing something, or is it not supposed to work?
The master is run like this:
locust -f main.py --headless --master --expect-workers=3
And the workers:
locust -f main.py GetUser --headless -u 10 -r 10 -i 100 --worker

As I mentioned, moving any of those options to the master does nothing. Even setting the options to a very low threshold like -u 1 -r 1 -i 1 on both the master and slave will just make it run forever.

TimescaleDB plugin doesn't work on Windows

Line 65 of listeners.py has a git check that includes a grep call, which Windows does not have installed by default, causing an error.
Why does this plugin need to check for a git repo in the first place?
At a minimum, the call should be wrapped in a try/except block.
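One possible defensive shape, as a sketch rather than the plugin's actual fix (`get_git_remote` is an invented name; the shell pipeline is the one listeners.py runs):

```python
import subprocess
from typing import Optional

def get_git_remote() -> Optional[str]:
    """Best-effort lookup of the git remote URL.

    Returns None when git/grep/sed are unavailable (e.g. a stock Windows
    install) or when there is no usable remote, instead of raising.
    """
    try:
        out = subprocess.check_output(
            "git remote show origin -n 2>/dev/null | grep h.URL | sed 's/.*://;s/.git$//'",
            shell=True,
            universal_newlines=True,
        )
    except (subprocess.CalledProcessError, OSError):
        return None
    return out.strip() or None
```

The try/except swallows both a non-zero exit status and a missing shell/tool, so the listener could degrade gracefully on platforms without the GNU userland.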

Application Insights - TypeError: 'bool' object is not callable

Hey, running locust & locust-plugins using Python 3.9.5 with the following:

  • locust==1.5.1
  • locust-plugins==1.2.0
  • numpy==1.20.2
  • pip-tools==6.1.0
  • toml==0.10.2

I noticed there's an issue when attempting to use the ApplicationInsights logger:

[2021-05-05 14:14:16,483] REDACTED/INFO/root: Successfully downloaded Global CA pem file
[2021-05-05 14:14:16,485] REDACTED/INFO/root: Verified using GlobalCAs.pem
[2021-05-05 14:14:16,488] REDACTED/ERROR/root: Uncaught exception in event handler:
Traceback (most recent call last):
File "c:\data\sources\performance-testing\venv\lib\site-packages\locust\event.py", line 40, in fire
handler(**kwargs)
File "C:\Data\Sources\performance-testing\locust_tests\locust_extension.py", line 40, in app_insights
ApplicationInsights(env=environment, instrumentation_key="REDACTED")
File "c:\data\sources\performance-testing\venv\lib\site-packages\locust_plugins\appinsights_listener.py", line 19, in init
self.logger.propagate(propagate_logs)
TypeError: 'bool' object is not callable

[2021-05-05 14:14:16,488] REDACTED/INFO/locust.main: Run time limit set to 30 seconds
[2021-05-05 14:14:16,488] REDACTED/INFO/locust.main: Starting Locust 1.5.1
[2021-05-05 14:14:16,489] REDACTED/INFO/locust.runners: Spawning 1 users at the rate 1 users/s (0 users already running)...
[2021-05-05 14:14:16,489] REDACTED/INFO/locust.runners: All users spawned: ApiUser: 1 (1 total running)
[2021-05-05 14:14:46,496] REDACTED/INFO/locust.main: Time limit reached. Stopping Locust.
[2021-05-05 14:14:46,497] REDACTED/INFO/locust.runners: Stopping 1 users
[2021-05-05 14:14:46,500] REDACTED/INFO/locust.runners: 1 Users have been stopped, 0 still running
[2021-05-05 14:14:46,526] REDACTED/INFO/locust.main: Running teardowns...
[2021-05-05 14:14:46,526] REDACTED/INFO/locust.main: Shutting down (exit code 2), bye.
[2021-05-05 14:14:46,526] REDACTED/INFO/locust.main: Cleaning up runner...

So nothing gets logged to application insights.

Seems that line 19 in appinsights_listener.py is attempting to set propagation via the following:
self.logger.propagate(propagate_logs)

I understand that I am running on an unsupported version of Python, however according to the logging docs (for all 3.X versions at least), propagate has always been an attribute, not a function, hence the error.
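A minimal stdlib demonstration of that distinction (illustrative only, not the plugin's code):

```python
import logging

# Logger.propagate is a plain boolean attribute, so it must be assigned,
# not called; calling it reproduces the TypeError from the traceback above.
logger = logging.getLogger("appinsights-demo")
logger.propagate = False  # correct: attribute assignment

try:
    logger.propagate(False)  # wrong: False(False) is not callable
except TypeError as exc:
    message = str(exc)  # "'bool' object is not callable"
```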

If I change to this:
self.logger.propagate = propagate_logs
It fixes the issue:

[2021-05-05 14:10:26,897] REDACTED/INFO/root: Successfully downloaded Global CA pem file
[2021-05-05 14:10:26,899] REDACTED/INFO/root: Verified using GlobalCAs.pem
[2021-05-05 14:10:26,902] REDACTED/WARNING/root: request_success event deprecated. Use the request event.
[2021-05-05 14:10:26,902] REDACTED/WARNING/root: request_failure event deprecated. Use the request event.
[2021-05-05 14:10:26,902] REDACTED/INFO/root: Added Application Insights listener
[2021-05-05 14:10:26,902] REDACTED/INFO/locust.main: Run time limit set to 30 seconds
[2021-05-05 14:10:26,902] REDACTED/INFO/locust.main: Starting Locust 1.5.1
[2021-05-05 14:10:26,903] REDACTED/INFO/locust.runners: Spawning 1 users at the rate 1 users/s (0 users already running)...
[2021-05-05 14:10:26,903] REDACTED/INFO/locust.runners: All users spawned: ApiUser: 1 (1 total running)
[2021-05-05 14:10:27,825] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 920.9999999984575 Number of Threads: 1.
[2021-05-05 14:10:29,372] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 250.0 Number of Threads: 1.
[2021-05-05 14:10:32,275] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 250.0 Number of Threads: 1.
[2021-05-05 14:10:35,099] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 281.00000000267755 Number of Threads: 1.
[2021-05-05 14:10:37,848] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 234.00000000037835 Number of Threads: 1.
[2021-05-05 14:10:40,227] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 250.0 Number of Threads: 1.
[2021-05-05 14:10:42,074] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 295.9999999984575 Number of Threads: 1.
[2021-05-05 14:10:44,490] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 235.00000000058208 Number of Threads: 1.
[2021-05-05 14:10:46,066] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 235.00000000058208 Number of Threads: 1.
[2021-05-05 14:10:48,365] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 282.0000000028813 Number of Threads: 1.
[2021-05-05 14:10:50,188] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 250.0 Number of Threads: 1.
[2021-05-05 14:10:52,505] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 234.00000000037835 Number of Threads: 1.
[2021-05-05 14:10:54,840] REDACTED/INFO/locust_plugins.appinsights_listener: Success: GET /api-call Response time: 297.0000000022992 Number of Threads: 1.
[2021-05-05 14:10:56,903] REDACTED/INFO/locust.main: Time limit reached. Stopping Locust.
[2021-05-05 14:10:56,904] REDACTED/INFO/locust.runners: Stopping 1 users
[2021-05-05 14:10:56,907] REDACTED/INFO/locust.runners: 1 Users have been stopped, 0 still running
[2021-05-05 14:10:56,950] REDACTED/INFO/locust.main: Running teardowns...
[2021-05-05 14:10:56,951] REDACTED/INFO/locust.main: Shutting down (exit code 0), bye.
[2021-05-05 14:10:56,951] REDACTED/INFO/locust.main: Cleaning up runner...


I am performing some custom logging (lines 1 & 2 in each log) but I don't think they would cause this issue.

Use action='store_true' for transaction manager '--log_transactions_in_file'

When using a boolean argument, it's more intuitive to let the flag itself assign a True value via the action attribute.

Here's an example from the locust argparser:

stats_group.add_argument(
    "--print-stats",
    action="store_true",
    help="Print stats in the console",
    env_var="LOCUST_PRINT_STATS",
)

Since this would be a breaking change, it might be a good opportunity to deprecate this flag and switch it to --log-transactions-in-file, for consistency with the other locust options that use dashes instead of underscores.
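As a sketch of the suggestion (the flag name follows the proposal above; this is not the plugin's current code):

```python
import argparse

# Hypothetical registration of the renamed flag with store_true:
# the bare flag sets the value to True, no explicit value needed.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--log-transactions-in-file",
    action="store_true",
    help="Log transactions to a file",
)

args = parser.parse_args(["--log-transactions-in-file"])
assert args.log_transactions_in_file is True

args = parser.parse_args([])
assert args.log_transactions_in_file is False  # defaults to False when omitted
```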

Connect method call in socketio_ex.py is missing header parameter

  • Repro step:
    Command:
    locust -f socketio_ex.py --host http://example.com -u 2 -r 10 --headless

  • Result:

    File "/venv/lib/python3.10/site-packages/locust/user/task.py", line 319, in run
      self.execute_next_task()
    File "/venv/lib/python3.10/site-packages/locust/user/task.py", line 344, in execute_next_task
      self.execute_task(self._task_queue.pop(0))
    File "/env/lib/python3.10/site-packages/locust/user/task.py", line 457, in execute_task
      task(self.user)
    File "socketio_ex.py", line 13, in my_task
      self.connect("wss://example.com/socket.io/?EIO=3&transport=websocket")
    SocketIOUser.connect() missing 1 required positional argument: 'header'

  • Expected:

  1. The example should be updated.
  2. A comment on how the user should run it.

Selenium server URL

How do I set another URL for the selenium server besides the default "127.0.0.1:4444"?

I was trying to run a docker-compose project but couldn't find in the documentation how to pass my selenium-hub container's IP address to the WebdriverUser. I tried to change the command_executor property, but with no success.

Listener plugin performance to be improved with batching requests to timescaleDB

Hi,
Thanks for the plugins. I was experimenting with your listener plugin and Timescale DB in a docker container. My test script was doing 3000 GET requests per second, and inserts were not performant with the plugin.

My Observations:

  1. With the ts_insert_blockers triggers, updates were not going through to the DB, so I dropped the triggers and updates were fine.
  2. Locust seems to open 6 connections to TimescaleDB, but only one is used, as the ps | grep output below shows.
  3. With 3000 requests per second, Grafana was not able to show results in real time. Looking at the TimescaleDB documentation, batched inserts are faster in TimescaleDB, while individual updates suffer from performance issues.
  4. I don't think the create ....chunk queries are necessary in the sql schema file. Could you please confirm?
     Once again, thank you.

70 17906 3577 1 11:16 ? 00:00:00 postgres: postgres locust 172.17.0.1(41684) INSERT
70 17907 3577 0 11:16 ? 00:00:00 postgres: postgres locust 172.17.0.1(41688) idle
70 17908 3577 0 11:16 ? 00:00:00 postgres: postgres locust 172.17.0.1(41692) idle
70 17909 3577 0 11:16 ? 00:00:00 postgres: postgres locust 172.17.0.1(41696) idle
70 18469 3577 1 11:16 ? 00:00:00 postgres: postgres locust 172.17.0.1(42326) idle
70 18653 3577 0 11:17 ? 00:00:00 postgres: postgres locust 172.17.0.1(42398) idle
root 18655 27576 0 11:17 pts/2 00:00:00 grep --color=auto locust
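Point 3 above suggests batching the writes. A rough, hypothetical sketch of buffering rows and flushing them in one multi-row INSERT (class and method names are invented; the commented psycopg2 call marks where a real write would go):

```python
# Illustrative buffering writer, not the plugin's code: instead of one
# INSERT per request, rows accumulate and are written in batches.
class BatchingWriter:
    def __init__(self, flush_size: int = 100):
        self.flush_size = flush_size
        self.buffer = []   # rows waiting to be written
        self.flushed = []  # stands in for the DB in this sketch

    def add(self, row):
        """Queue one request row; flush when the batch is full."""
        self.buffer.append(row)
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self):
        """Write all buffered rows in a single multi-row statement."""
        if self.buffer:
            # Real code could use e.g.:
            # psycopg2.extras.execute_values(cur, "INSERT INTO request VALUES %s", self.buffer)
            self.flushed.append(list(self.buffer))
            self.buffer.clear()
```

A background greenlet calling flush() on a short interval would bound how stale the Grafana view can get while still keeping writes batched.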

Timeline plugin example

Hello community,

Is there any example of how to integrate the Timeline plugin for Grafana dashboards? I am trying to integrate that plugin, but I am not sure if there is something to include in the Python locust file, because the charts are showing blank.

I will really appreciate any help, thanks!

Locust 1.4.4 + Locust-Plugins 1.2.0 - Listener.py Error Timescale Integration

I have an implementation using Locust and locust-plugins inside a docker container.
I use Taurus as a wrapper around Locust for test execution. My config is set up in such a way that I pip install locust-plugins as part of the pre-execution setup tasks. Today I observed that the latest update to locust-plugins broke my integration with the Timescale DB.
Below is the exception I'm getting.

[2021-05-08 02:42:36,138] 90f0f3cdb2a5/INFO/root: Follow test run here: http://localhost:3000/d/qjIIww4Zz/locust-ht?orgId=1&from=now-15m&to=now&var-testplan=taurus_cci_listener_ex&from=1620441756138&to=now
[2021-05-08 02:42:36,159] 90f0f3cdb2a5/ERROR/root: Uncaught exception in event handler:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/locust/event.py", line 40, in fire
handler(**kwargs)
File "/bzt-configs/locustfile-cci.py", line 40, in on_locust_init
listeners.Timescale(env=environment, testplan="taurus_cci_listener_ex")
File "/tmp/artifacts/python-packages/3.8.5/locust_plugins/listeners.py", line 93, in init
events.request.add_listener(self.on_request)
AttributeError: 'Events' object has no attribute 'request'

[2021-05-08 02:42:36,159] 90f0f3cdb2a5/INFO/locust.main: Run time limit set to 360 seconds
[2021-05-08 02:42:36,160] 90f0f3cdb2a5/INFO/locust.main: Starting Locust 1.4.4
[2021-05-08 02:42:36,161] 90f0f3cdb2a5/INFO/locust.runners: Spawning 250 users at the rate 4.16667 users/s (0 users already running)...
[2021-05-08 02:42:36,163] 90f0f3cdb2a5/INFO/root: Terminal was not a tty. Keyboard input disabled
[2021-05-08 02:43:36,666] 90f0f3cdb2a5/INFO/locust.runners: All users spawned: MyWebsiteUser: 250 (250 total running)

locustfile-cci.py.zip

It's pointing to line 93 of listeners.py.
I started seeing this error today. To confirm, I tested with the previous plugin version, locust-plugins==1.1.7, and with it I'm not observing this issue and can successfully log the transaction data to the Timescale DB.

Locust version 1.4.4
locust-plugins 1.2.0
timescaledb:2.0.1-pg12

Am I missing something as part of my locustfile? Any pointer would be greatly appreciated.

Suggestion: break up the plugins by functionality

I was looking to add the --iterations functionality to my tests and the implementation here works well. However, installing locust-plugins also installs a bunch of dependencies (as specified in setup.py) and my company is stringent with bringing in new packages that we don't need (rightfully so, IMO).

Would be nice to be able to add just that functionality..

What I ended up doing was copying most of __init__.py into my project with just what I needed, but I would much rather install a package with just that functionality.

Same for the other plugins.

  • locust-iterations
  • locust-mongo-reader
  • etc.

Granted, this would result in very small projects, which would likely make them harder to maintain, but it would avoid the dependency overload.

When distributed, Grafana diagrams only show users


master code

from locust import FastHttpUser, task
import locust_plugins
# from common.load_shapes import StagesShape


class Dummy(FastHttpUser):

    @task
    def hello(self):
        pass

worker code

from locust import FastHttpUser
from locust import task


class Dummy(FastHttpUser):
    host = "xxxxxx"
    @task
    def hello(self):
        self.client.get("/posts")

my shell

locust --timescale --headless -f master.py --pguser postgres --pgpassword password --master --expect-workers=1

locust -f worker.py --worker

use docker-compose.yml

version: "3"

services:
  postgres:
    image: cyberw/locust-timescale:2
    networks:
      - timenet
    expose:
      - 5432
    ports:
      # remove the 127.0.0.1 to expose postgres to other machines (including load gen workers outside this machine)
      - 127.0.0.1:5432:5432
    environment:
      # change the password if you intend to expose postgres to other machines
      - POSTGRES_PASSWORD=password
      - TIMESCALEDB_TELEMETRY=off
    volumes:
      - postgres_data:/var/lib/postgresql/data

  grafana:
    image: cyberw/locust-grafana:2
    environment:
      # these settings are not particularly safe, dont go exposing Grafana externally without changing this.
      - GF_AUTH_DISABLE_LOGIN_FORM=true
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_SECURITY_ALLOW_EMBEDDING=true
      - GF_LOG_LEVEL=warn # reduce log spamming. Remove this if you need to debug grafana.
    ports:
      - 127.0.0.1:3000:3000
    networks:
      - timenet
    volumes:
      - grafana_data:/var/lib/grafana

networks:
  timenet: null

volumes:
  postgres_data: null
  grafana_data: null

I'm very sorry. Could you tell me what I should do? Thank you

Websocket test issues

Trying to use the websocket/socket.io plugin, I got the error No User class found!

My code:

import json
import logging
import re
import time
import gevent
import websocket
from locust import HttpUser


class SocketIOUser(HttpUser):
    abstract = True

    def __init__(self, parent):
        super().__init__(parent)
        ws_host = re.sub(r"https://REDACTED.com", "", self.host)
        self.ws = websocket.create_connection(f"wss://{ws_host}")
        gevent.spawn(self.receive)

    def receive(self):
        message_regex = re.compile(r"(\d*)(.*)")
        description_regex = re.compile(r"<([0-9]+)>$")
        response_time = None
        while True:
            message = self.ws.recv()
            logging.debug(f"WSR: {message}")
            m = message_regex.match(message)
            if m is None:
                # uh oh...
                raise Exception(f"got no matches in {message}")
            code = m.group(1)
            json_string = m.group(2)
            if code == "0":
                name = "0 open"
            elif code == "3":
                name = "3 heartbeat"
            elif code == "40":
                name = "40 message ok"
            elif code == "42":
                # this is rather specific to our use case. Some messages contain an originating timestamp,
                # and we use that to calculate the delay & report it as locust response time
                # see it as inspiration rather than something you just pick up and use
                obj = json.loads(json_string)
                name = f"{code} {obj[0]} apiUri: {obj[1]['apiUri']}"
                if obj[1]["value"] != "":
                    description = obj[1]["value"]["draw"]["description"]
                    description_match = description_regex.search(description)
                    if description_match:
                        sent_timestamp = int(description_match.group(1))
                        current_timestamp = round(time.monotonic() * 1000)
                        response_time = current_timestamp - sent_timestamp
                    else:
                        # differentiate samples that have no timestamps from ones that do
                        name += "_"
                else:
                    name += "_missingTimestamp"
            else:
                print(f"Received unexpected message: {message}")
                continue
            self.environment.events.request_success.fire(
                request_type="WSR", name=name, response_time=response_time, response_length=len(message)
            )

    def send(self, body):
        if body == "2":
            action = "2 heartbeat"
        else:
            m = re.search(r'(\d*)\["([a-z]*)"', body)
            assert m is not None
            code = m.group(1)
            action = m.group(2)
            url_part = re.search(r'"url": *"([^"]*)"', body)
            assert url_part is not None
            url = re.sub(r"/[0-9_]*/", "/:id/", url_part.group(1))
            action = f"{code} {action} url: {url}"

        self.environment.events.request_success.fire(
            request_type="WSS", name=action, response_time=None, response_length=len(body)
        )
        logging.debug(f"WSS: {body}")
        self.ws.send(body)

    def sleep_with_heartbeat(self, seconds):
        while seconds >= 0:
            gevent.sleep(min(15, seconds))
            seconds -= 15
            self.send("2")

Issue with Timescale setup due to git remote show origin

Hello I'm getting this issue with the Timescale plugin:

[2021-06-01 13:39:42,184] USER/ERROR/root: Uncaught exception in event handler:
Traceback (most recent call last):
  File "c:\python39\lib\site-packages\locust\event.py", line 40, in fire
    handler(**kwargs)
  File "C:\Users\HMG28\Desktop\performance\locust_files\locustfile.py", line 123, in on_locust_init
    listeners.Timescale(env=environment, testplan="timescale_listener_ex")
  File "c:\python39\lib\site-packages\locust_plugins\listeners.py", line 64, in __init__
    subprocess.check_output(
  File "c:\python39\lib\site-packages\gevent\subprocess.py", line 404, in check_output
    raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command 'git remote show origin -n 2>/dev/null | grep h.URL | sed 's/.*://;s/.git$//'' returned non-zero exit status 255.

My code:

@events.init.add_listener
def on_locust_init(environment, **_kwargs):
    os.environ["PGHOST"] = "192.168.XXX.XXX"
    os.environ["PGPORT"] = "5432"
    os.environ["PGUSER"] = "test"
    os.environ["PGPASSWORD"] = "test"
    os.environ["PGDATABASE"] = "test_database"
    os.environ["LOCUST_GRAFANA_URL"] = "http://192.168.XXX.XXX:3000/d/YYYYYYY/locust?orgId=1"
    print("On init")
    listeners.Timescale(env=environment, testplan="timescale_listener_ex")

I am using the latest version installed through pip, and the locust version is 1.5.3.

If I comment out these lines, it seems to work:

        # self._gitrepo = (
        #     subprocess.check_output(
        #         "git remote show origin -n 2>/dev/null | grep h.URL | sed 's/.*://;s/.git$//'",
        #         shell=True,
        #         stderr=None,
        #         universal_newlines=True,
        #     )
        #     or None  # default to None instead of empty string
        # )      
        self._gitrepo = None

Selenium WebDriver fails when launch rate is higher than 0.25

Env:
Ubuntu 18.04
Selenium Grid v 4.1.2 (launched via docker like docker run --log-driver=journald -e JAVA_OPTS=-Xmx512m -e SE_OPTS="--log-level FINE" -e SE_NODE_SESSION_TIMEOUT=1800 -e SE_SESSION_REQUEST_TIMEOUT=60 -e SE_SESSION_RETRY_INTERVAL=5 -e SE_NODE_OVERRIDE_MAX_SESSIONS=true -e SE_NODE_MAX_SESSIONS=120 -d -p 4444:4444 -p 7900:7900 --rm --shm-size="5g" --name sgrid selenium/standalone-chrome:96.0 or natively using xvfb-run java -Dwebdriver.chrome.driver=/usr/bin/chromedriver -jar selenium-server-standalone.jar standalone --session-timeout 10

Using a simple webdriver based test like:

class GoogleUser(WebdriverUser):
    def on_start(self):
        self.client.set_window_size(1000, 1000)

    @task
    def homepage(self):
        with self.request(name="homepage") as request:
            self.client.get("https://google.com")

If the spawn rate is higher than 0.25, the Selenium grid will start sessions, but locust will never register them as users.

Removing the sleep step at
https://github.com/SvenskaSpel/locust-plugins/blob/master/locust_plugins/users/webdriver.py#L52
seems to let the spawn rate work, although the second sleep (https://github.com/SvenskaSpel/locust-plugins/blob/master/locust_plugins/users/webdriver.py#L258) means it won't start more than one user per second.

Removing both does seem to let the spawn rate work somewhat consistently.

I don't know if this is an issue with my env, but I don't see any reason why the code as written wouldn't work. If you disable headless and monitor over VNC, Chrome windows are spawned for each user, but locust doesn't start sending them commands.

Catch pkill and killall exceptions on WebdriverUser

When experimenting with the package, I found that when running a containerized worker with locust-plugins, some images need psmisc installed since they don't have the killall command available. Come to think of it, the termination logic wouldn't be useful in that case anyway, because running containerized probably means the Selenium server runs in its own container as well.

Maybe putting that part in a try-except statement could prevent future headaches for containerized/non-Unix systems.
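The suggestion above can be sketched as follows. This is a hedged, minimal sketch, not the plugin's actual cleanup code: the helper name and process names are assumptions for illustration.

```python
import subprocess

def best_effort_cleanup(process_names=("chromedriver",)):
    # Hedged sketch: wrap the killall termination step so that containers
    # without psmisc (i.e. no killall binary) don't crash the test run.
    for name in process_names:
        try:
            # check=False: a nonzero exit (no matching process) is not an error
            subprocess.run(["killall", name], check=False, capture_output=True)
        except (FileNotFoundError, OSError):
            # killall not installed (e.g. a slim container image); skip cleanup
            pass
```

Since the cleanup is best-effort anyway, swallowing a missing-binary error loses nothing on containerized or non-Unix systems.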

Where can users find the meaning of all env vars for locust-plugins and locust-swarm?

As a beginner, I finally got Grafana monitoring Locust working, after quite a struggle. Along the way I was confused by some env vars like LOCUST_TEST_ENV: I checked the code but could not find where it is set, how to set it, or what role it plays. I had the same problem when I tried to use locust-swarm; the guide says there are many env vars prefixed with 'LOCUST_' that can be used to control the swarm strategy. My questions are:

  1. Could you tell me where I can find out exactly what LOCUST_TEST_ENV means?
  2. If possible, could you make a list of the commonly used env vars for locust-swarm?

I want to use TimescaleListener and swarm together. Thanks.

Use a global KafkaProducer instead of creating one for each user

This is not exactly an issue, but an observation/a suggestion with reference to the Kafka user.

Do you think it would make more sense to have the end user create a producer globally in their script and pass it in to the KafkaUser, instead of creating one for each instance of User? I was thinking something along these lines:

Instead of
bootstrap_servers: str = None @ kafka.py#L10

if we had
producer: Producer = None

It might make the code more optimised from a performance standpoint.
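The pattern being proposed can be sketched without any Kafka dependency. This is a hypothetical illustration, not the plugin's actual API: the class and method names are assumptions, and the real KafkaUser may differ. The point is that the producer lives on the class, so all user instances share one connection instead of each opening its own.

```python
class SharedProducerUser:
    # Hypothetical sketch: one producer object shared by every user instance,
    # injected once at test start instead of created per-user in __init__.
    producer = None  # e.g. a kafka.KafkaProducer, supplied by the test script

    @classmethod
    def set_producer(cls, producer):
        cls.producer = producer

    def send(self, topic, value):
        if self.producer is None:
            raise RuntimeError("call SharedProducerUser.set_producer() first")
        # Every instance delegates to the same shared producer object
        self.producer.send(topic, value)
```

With many simulated users, sharing one producer avoids opening a broker connection per user, which is where the performance gain would come from.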

Getting module 'select' has no attribute 'epoll'

Platform: Linux 16
python3.9
locust==2.8.2
locust-plugins==2.5.1

Log:
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/locust_plugins/users/playwright.py", line 2, in <module>
    from playwright.async_api import async_playwright
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/playwright/async_api/__init__.py", line 23, in <module>
    import playwright.async_api._generated
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/playwright/async_api/_generated.py", line 25, in <module>
    from playwright._impl._accessibility import Accessibility as AccessibilityImpl
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/playwright/_impl/_accessibility.py", line 17, in <module>
    from playwright._impl._connection import Channel
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/playwright/_impl/_connection.py", line 22, in <module>
    from pyee import AsyncIOEventEmitter
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/pyee/__init__.py", line 120, in <module>
    from pyee.trio import TrioEventEmitter as _TrioEventEmitter  # noqa
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/pyee/trio.py", line 7, in <module>
    import trio
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/trio/__init__.py", line 18, in <module>
    from ._core import (
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/trio/_core/__init__.py", line 29, in <module>
    from ._run import (
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/trio/_core/_run.py", line 2384, in <module>
    from ._io_epoll import EpollIOManager as TheIOManager
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/trio/_core/_io_epoll.py", line 188, in <module>
    class EpollIOManager:
File "/opt/jenkins/virtual_env/bugsy_3_9/lib/python3.9/site-packages/trio/_core/_io_epoll.py", line 189, in EpollIOManager
    _epoll = attr.ib(factory=select.epoll)
AttributeError: module 'select' has no attribute 'epoll'

Question about timescale DB initialization

Hello! First, thank you for taking the time to publish this project for the benefit of the programming community! I'm not sure how much time it would have taken me to connect Locust to Grafana without it.

There's just something I couldn't understand on my own. IIUC, timescale_schema.sql is an initialization file meant to prepare the Postgres DB for receiving data from Locust. I understand why you set some DB settings there, as well as create the tables. But what are the "_hyper" tables for? To me they look like a forgotten part of an SQL dump from a running installation. I removed all of them from the file and still managed to successfully run a small load test (and view the results in Grafana). Could you please clarify this?

"Failed to reach target rps, even after rampup has finished" with limited load

I'm running a simple script with a task like that:

def mytask(self):
	self.rps_sleep(self.rps)
	jObs = {
		...
	}
	head = {'content-type': 'application/json'}
	self.client.request('POST', '/api/observations', data=json.dumps(jObs), headers=head)

and even with 2 clients and an rps of 1 I'm getting "Failed to reach target rps, even after rampup has finished" as soon as I run locust. Any clue?

locust + timescaledb integration on new timescaledb versions

Masters, please look at this:

I'm trying to run the locust + timescaledb integration using timescale_schema.sql.
For me it only works on the docker image timescale/timescaledb:1.3.1-pg11-bitnami (bitnami or not, but only version 1.3.1).

QUESTION: what am I doing wrong when loading timescale_schema.sql on newer versions like timescale/timescaledb:2.1.0-pg13?

My step-by-step:

locust + timescaledb

Step by step to create a timescaledb and locust integration.

Official links:

timescaledb docker image

  • timescale/timescaledb:1.3.1-pg11-bitnami

Steps to deploy

Create a database

> psql -U postgres -h 127.0.0.1
# create database locust;
# \c locust
# CREATE EXTENSION IF NOT EXISTS timescaledb;

Populate a locust database

> psql -U postgres -d locust -f ~/timescale_schema.sql -h 127.0.0.1

Remove a TRIGGER on request

> psql -U postgres -d locust -h 127.0.0.1
locust=# DROP TRIGGER ts_insert_blocker ON public.request;

Local Grafana TestRuns Page Hyperlink to TestRun is Broken

Describe the bug

Local Grafana TestRuns Page Hyperlink to TestRun is Broken

Expected behavior

When clicking on a testrun in testruns table page, the Hyperlink should go to
http://localhost:3000/d/qjIIww4Zz/locust?orgId=1&var-testplan=*****&from=1648534349103&to=1648534409375

Actual behavior

https://grafana.internt.test.svenskaspel.se/d/qjIIww4Zz/locust?orgId=1&var-testplan=*****&from=1648534349103&to=1648534409375
It uses https://grafana.internt.test.svenskaspel.se instead of http://localhost:3000.


Steps to reproduce

Always reproduce-able on local env

Environment

  • OS: MacOS
  • Python version: Python 3.9.10
  • Locust version: locust 2.8.5 locust-plugins 2.6.1
  • Locust command line that you ran: locust --config=master.conf
  • Locust file contents (anonymized if necessary):

locustfile.py

from locust import HttpUser, TaskSet, task, between, events
import os
import locust_plugins
from locust_plugins import listeners
import locust.env
import requests

os.environ["USER"] = "****"

@events.init.add_listener
def on_locust_init(environment, **_kwargs):
	environment.parsed_options.grafana_url = "http://localhost:3000/d/qjIIww4Zz?"
	listeners.Timescale(env=environment)

class LocalTester(HttpUser):
	wait_time = between(1, 2)
	host = "http://localhost:5001"
	@task(1)
	class test(TaskSet):
		@task
		def test1(self):
			self.client.get("/test1")
		@task
		def test2(self):
			self.client.post("/test2")
		@task
		def test3(self):
			self.client.post("/test3")

master.conf in current directory

test-env = ***
test-version = 22.1
description = "Testing ****"
override-plan-name = *****
grafana-url = http://localhost:3000/d/qjIIww4Zz?
headless = true
users = 48
spawn-rate = 5
run-time = 1m
locustfile = locustfile.py
html=report.html
csv=stats
pguser = "postgres"
pgpassword = "password"

Application Insights PlugIn Generates Lots of Noise on StdOut

When running locust tests, the state of the tests (the summary) is output to the console (stdout).
When using the Application Insights plugin, this summary test information is unreadable because the AI events are written to the parent logger's console handler as well as the AzureLogHandler.

One way to fix this might be to prevent the logger created by this plugin from propagating events to the parent.

(I have a PR that should address this)
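The fix hinted at above can be sketched with the standard logging module. This is a hedged sketch: the logger name is an assumption for illustration, and the actual plugin may name its logger differently.

```python
import logging

# Give the Application Insights telemetry its own logger and stop records
# from propagating to the parent/root logger, so Locust's console summary
# on stdout stays readable.
ai_logger = logging.getLogger("locust_plugins.appinsights")  # assumed name
ai_logger.setLevel(logging.INFO)
ai_logger.propagate = False  # don't echo AI events via the parent's handlers
# ai_logger.addHandler(AzureLogHandler(connection_string=...))  # real handler
```

With propagate set to False, records sent to this logger go only to its own handlers (the AzureLogHandler), never to the root logger's console handler.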

Using plugin with locust==1.6.0

Hi, Thanks for providing the plugin.
I am trying to use the -i add-on for the command line.
However, this command cannot be recognized by locust:

locust: error: unrecognized arguments: -i

I ran locust -f locustfile-that-imports-locust_plugins.py --help as well, but it only shows the options from locust itself.
Am I missing some configuration?

How to set the Postgres env variable for TimescaleListener

I am trying to set the env variable for Postgres credentials like below,
locust --timescale --headless -f LocustScript.py --pghost=localhost --pguser=admin --pgpassword=admin
Getting the following error.

fe_sendauth: no password supplied

I tried below method also.

@events.init.add_listener
def on_locust_init(environment, **_kwargs):
    TimescaleListener(env=environment, testplan="timescale_listener_ex", target_env="myTestEnv", PGPASSWORD="admin")

How can I set env variable?

Originally posted by @ashanmuga in #18 (comment)

How to create the testrun, events and user_count tables

When I use TimescaleListener, I found that the listeners_timescale_table.sql file only contains the CREATE TABLE statement for request.
My question is:
how do I create the testrun, events and user_count tables?

thx!

Support for Selenium WAIT constructs

With the webdriver.py class overriding find_element, using the native WebDriverWait constructs causes a lot of extra exceptions, screenshots and log activity.

e.g.

wait = WebDriverWait(self.client, 20, poll_frequency=0.5, ignored_exceptions=[exception.RescheduleTask])
submit = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,"button[name=submit]")))

Will generate multiple find events:

find	              css s button[name=submit]           	101    	'no such element: {"method":"css selector","selector":"button[name=submit]"} (waited 0.0s, chrome=96.0.4664.93)'	
find	              css s button[name=submit]           	3548   	         	

The customized find_element has some benefits (retry, screen capture on failure, improved debugging output), but it's not really as flexible as the native functionality.

For example, I found myself adding a "sleep" to the retry function because I had a dynamic UI element that animated as it loaded, and I couldn't find a workaround other than migrating the locust_plugins find_element to a different function name.

Extending the existing find_element with additional options, or detecting whether it's being used inside a WebDriverWait, would work, but it feels like working against Selenium's design choices.

My thought would be to relabel the existing find_element to locust_find_element, and let the two interaction models co-exist.

I am happy to provide a PR to do the renaming (and associated doc updates), but wanted to ask whether it would be accepted, or if I had missed something obvious, before providing the code change.
