I'm having problems scanning a big NFS share - some of the bots are giving me these errors:
22:30:25 TransportError: TransportError(429, u'es_rejected_execution_exception', u'rejected execution of org.elasticsearch.transport.TransportService$7@5392962b on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@431b1929[Running, pool size = 4, active threads = 4, queued tasks = 200, completed tasks = 446339]]')
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/rq/worker.py", line 793, in perform_job
rv = job.perform()
File "/usr/lib/python2.7/site-packages/rq/job.py", line 599, in perform
self._result = self._execute()
File "/usr/lib/python2.7/site-packages/rq/job.py", line 605, in _execute
return self.func(*self.args, **self.kwargs)
File "/opt/mdc/bin/diskover/diskover_bot_module.py", line 1003, in scrape_tree_meta
es_bulk_add(worker, tree_dirs, tree_files, cliargs, totalcrawltime)
File "/opt/mdc/bin/diskover/diskover_bot_module.py", line 865, in es_bulk_add
es.index(index=cliargs['index'], doc_type='worker', body=data)
File "/usr/lib/python2.7/site-packages/elasticsearch5/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/lib/python2.7/site-packages/elasticsearch5/client/init.py", line 300, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/usr/lib/python2.7/site-packages/elasticsearch5/transport.py", line 312, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/lib/python2.7/site-packages/elasticsearch5/connection/http_urllib3.py", line 129, in perform_request
self._raise_error(response.status, raw_data)
File "/usr/lib/python2.7/site-packages/elasticsearch5/connection/base.py", line 125, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
TransportError: TransportError(429, u'es_rejected_execution_exception', u'rejected execution of org.elasticsearch.transport.TransportService$7@5392962b on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@431b1929[Running, pool size = 4, active threads = 4, queued tasks = 200, completed tasks = 446339]]')
22:30:25 Moving job to u'failed' queue
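From the traceback, the failing call is the plain `es.index(...)` in `es_bulk_add`, and the 429 means the cluster's bulk queue (capacity 200, pool size 4) is full, so Elasticsearch rejects the request outright. One workaround I'm experimenting with is retrying on 429 with exponential backoff before letting rq move the job to the failed queue. A minimal sketch (the retry count and delays are made-up values; it only inspects the status code the client's `TransportError` carries as its first argument):

```python
import time

def index_with_backoff(index_fn, max_retries=5, base_delay=0.5):
    """Call index_fn(), retrying with exponential backoff whenever
    Elasticsearch rejects the request with HTTP 429 (bulk queue full).
    Any other error, or exhausting the retries, re-raises as before."""
    for attempt in range(max_retries):
        try:
            return index_fn()
        except Exception as e:
            # The elasticsearch TransportError carries the HTTP status
            # as its first argument; anything else is not retryable here.
            status = e.args[0] if e.args else None
            if status != 429 or attempt == max_retries - 1:
                raise
            # Back off 0.5s, 1s, 2s, ... to let the bulk queue drain.
            time.sleep(base_delay * (2 ** attempt))
```

In `es_bulk_add` that would wrap the existing call as `index_with_backoff(lambda: es.index(index=cliargs['index'], doc_type='worker', body=data))`, so transient queue-full rejections get absorbed instead of killing the job. Throttling the number of bots, or raising `thread_pool.bulk.queue_size` on the cluster, would be the other levers.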