
Comments (39)

norick5243 commented on August 26, 2024

I'm a beginner and don't know much about Docker, so I'm doing some research.
I want to change the max_input_length of CTranslate2.
Editing a Docker image after it has been built seems like a hassle, so I would like to change the setting before creating the image.

from nllb-api.

winstxnhdw commented on August 26, 2024

docker run --rm \
  -e APP_PORT=7860 \
  -e CT2_FORCE_CPU_ISA="" \
  -p 7860:7860 \
  ghcr.io/winstxnhdw/nllb-api:main

You can try this instead.


winstxnhdw commented on August 26, 2024

> I'm a beginner and don't know much about Docker, so I'm doing some research. I want to change the max_input_length of CTranslate2. Editing a Docker image after it has been built seems like a hassle, so I would like to change the setting before creating the image.

Oh, I guess I can expose that as an environment variable instead.


norick5243 commented on August 26, 2024

https://opennmt.net/CTranslate2/python/ctranslate2.Translator.html
If I want to set max_input_length=0, which variable should I use?
CT2_FORCE_CPU_ISA seems to be the CPU type...
Many apologies.


winstxnhdw commented on August 26, 2024

Give me a while. I'll push a commit to expose it soon; I am having dinner right now. CT2_FORCE_CPU_ISA is the CPU architecture. Yours is likely AVX2.
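If you are unsure which ISA your CPU supports, on Linux you can check the flags column of /proc/cpuinfo. Below is a small illustrative helper, not part of nllb-api; the ISA names it returns are the commonly documented CT2_FORCE_CPU_ISA values for CTranslate2 (with GENERIC as a safe fallback):

```python
def supported_isa(cpuinfo_text: str) -> str:
    """Return the best CTranslate2 ISA name found in a /proc/cpuinfo dump.

    Checks CPU flags from newest to oldest; returns 'GENERIC' if none match.
    """
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith('flags'):
            flags.update(line.split(':', 1)[1].split())
    for cpu_flag, isa_name in (('avx512f', 'AVX512'), ('avx2', 'AVX2'), ('avx', 'AVX')):
        if cpu_flag in flags:
            return isa_name
    return 'GENERIC'

# Example with a trimmed cpuinfo excerpt:
sample = "processor : 0\nflags : fpu sse sse2 avx avx2\n"
print(supported_isa(sample))  # AVX2
```

On a real machine you would pass `open('/proc/cpuinfo').read()` instead of the sample string.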


norick5243 commented on August 26, 2024

Got it.
My CPU supports AVX2.
My goal is to remove the translation character limit.
Sorry for the inconvenience.


winstxnhdw commented on August 26, 2024

Hey, you can now set the MAX_INPUT_LENGTH with the following.

docker run --rm \
  -e APP_PORT=7860 \
  -e MAX_INPUT_LENGTH=0 \
  -e CT2_FORCE_CPU_ISA=AVX2 \
  -p 7860:7860 \
  ghcr.io/winstxnhdw/nllb-api:main


norick5243 commented on August 26, 2024

Thank you.
I was able to confirm that it works.
However, if I send a long message, an Internal Server Error is returned.

Looking at the log, it says:

[2024-03-23 12:43:53 +0000] [INFO] 200 "POST /v2/translate" 172.17.0.1 "NIL" in 2387.9415 ms
{"level":"error","ts":1711197854.1078668,"logger":"http.log.error","msg":"context deadline exceeded","request":{"remote_ip":"172.17.0.1","remote_port":"34910","client_ip":"172.17.0.1","proto":"HTTP/1.0","method":"POST","host":"lenovo:7860","uri":"/api/v2/translate","headers":{"Connection":["close"],"Content-Length":["1852"],"Content-Type":["application/json; charset=UTF-8"]}},"duration":10.000821137}
{"level":"error","ts":1711197854.1081665,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":"localhost:5000","duration":0.002623851,"request":{"remote_ip":"172.17.0.1","remote_port":"34910","client_ip":"172.17.0.1","proto":"HTTP/1.0","method":"POST","host":"lenovo:7860","uri":"/v2/translate","headers":{"Date":["Sat, 23 Mar 2024 12:44:04 UTC"],"X-Forwarded-For":["172.17.0.1"],"User-Agent":[""],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["lenovo:7860"],"Content-Length":["1852"],"Content-Type":["application/json; charset=UTF-8"]}},"error":"reading: context canceled"}

Do you know why? The cause seems to be a timeout.
Is there a timeout setting?


winstxnhdw commented on August 26, 2024

Looks like the reverse proxy is timing out. I'll expose that as an environment variable as well.


winstxnhdw commented on August 26, 2024

Hmm, actually, it is probably better if you batch your texts.


norick5243 commented on August 26, 2024

The test text was about 2000 characters.
The error is returned about 7 seconds after POSTing to the API with curl.


winstxnhdw commented on August 26, 2024

Okay, you can now set the CACHE_TIMEOUT.

docker run --rm \
  -e APP_PORT=7860 \
  -e MAX_INPUT_LENGTH=0 \
  -e CACHE_TIMEOUT=300 \
  -e CT2_FORCE_CPU_ISA=AVX2 \
  -p 7860:7860 \
  ghcr.io/winstxnhdw/nllb-api:main


norick5243 commented on August 26, 2024

Thank you.
I tried it right away, but the error still occurs.

[2024-03-23 13:16:40 +0000] [INFO] 200 "POST /v2/translate" 172.17.0.1 "NIL" in 2535.6766 ms
{"level":"error","ts":1711199812.7662432,"logger":"http.log.error","msg":"context deadline exceeded","request":{"remote_ip":"172.17.0.1","remote_port":"58714","client_ip":"172.17.0.1","proto":"HTTP/1.0","method":"POST","host":"lenovo:7860","uri":"/api/v2/translate","headers":{"Content-Length":["1852"],"Content-Type":["application/json; charset=UTF-8"],"Connection":["close"]}},"duration":10.000553699}
{"level":"error","ts":1711199812.7670348,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":"localhost:5000","duration":0.004413636,"request":{"remote_ip":"172.17.0.1","remote_port":"58714","client_ip":"172.17.0.1","proto":"HTTP/1.0","method":"POST","host":"lenovo:7860","uri":"/v2/translate","headers":{"Content-Length":["1852"],"User-Agent":[""],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["lenovo:7860"],"Content-Type":["application/json; charset=UTF-8"],"Date":["Sat, 23 Mar 2024 13:16:42 UTC"],"X-Forwarded-For":["172.17.0.1"]}},"error":"reading: context canceled"}


winstxnhdw commented on August 26, 2024

Can you try again? It seems like you are still on the old image. Try clearing your Docker cache as well.


norick5243 commented on August 26, 2024

I reinstalled it about 5 minutes ago, but I will try pulling the image again.


norick5243 commented on August 26, 2024

I ran docker builder prune and installed again, but the situation remains the same.
What else could be the cause?


winstxnhdw commented on August 26, 2024

docker system prune -af


norick5243 commented on August 26, 2024

I reinstalled after running docker system prune -af, but the situation remains the same.
The process still seems to be aborted after about 7 seconds.


winstxnhdw commented on August 26, 2024

What did you set CACHE_TIMEOUT to?


norick5243 commented on August 26, 2024

docker run --name my_nllb -e APP_PORT=7860 -e MAX_INPUT_LENGTH=0 -e CACHE_TIMEOUT=300 -e CT2_FORCE_CPU_ISA=AVX2 -p 7860:7860 ghcr.io/winstxnhdw/nllb-api:main


winstxnhdw commented on August 26, 2024

I've triggered another rebuild. Can you try again? If it still doesn't work, can you send me your dataset so I can give it a try as well? Or just send me your full logs.


norick5243 commented on August 26, 2024

I will try the installation again.
Although I am a beginner, I can send you the curl command I am using and the error output.


norick5243 commented on August 26, 2024

I tried installing it again, but the result remained the same.
After installation, I was able to translate short sentences without any problems.
It just occurred to me: even when I sent a POST during installation (while the model was still downloading), an error came back in about 7 seconds.
Since short sentences do translate, it may not be necessary, but do you also need the docker run logs?
The server error remains the same.
I can send you the curl command.


winstxnhdw commented on August 26, 2024

Please wait for the model to finish downloading.


norick5243 commented on August 26, 2024

The result is the same even after the download finishes.
This time I waited for the download to complete before sending the first API request.


norick5243 commented on August 26, 2024

I tested by sending various texts. I gradually increased the length from a short sentence, and the error seems to occur once the text takes about 7 seconds to translate.


norick5243 commented on August 26, 2024

After some more digging, I realized something.
I am translating Japanese into English, and the error occurs at around 100 Japanese characters.
When I sent an even longer text, the server returned an error after about 7 seconds, but the CPU usage kept rising for a while, and once the translation completed, something like

[2024-03-23 15:45:45 +0000] [INFO] 200 "POST /v2/translate" 172.17.0.1 "NIL" in 52242.9479 ms

was returned. If I send 2000 characters, a completely different error occurs.

[2024-03-23 15:47:22 +0000] [23] [ERROR] Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/starlette/responses.py", line 264, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/usr/local/lib/python3.12/site-packages/starlette/responses.py", line 260, in wrap
    await func()
  File "/usr/local/lib/python3.12/site-packages/starlette/responses.py", line 237, in listen_for_disconnect
    message = await receive()
              ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 568, in receive
    await self.message_event.wait()
  File "/usr/local/lib/python3.12/asyncio/locks.py", line 212, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7ffa8342c740

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
  |     return await self.app(scope, receive, send)
  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
  |     raise exc
  |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py", line 83, in __call__
  |     await self.app(scope, receive, send)
  |   File "/home/user/app/server/middlewares/logging.py", line 76, in __call__
  |     await self.app(scope, receive, self.inner_send_factory(send, status_code))
  |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 758, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 778, in app
  |     await route.handle(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 299, in handle
  |     await self.app(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 79, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
  |     await response(scope, receive, send)
  |   File "/usr/local/lib/python3.12/site-packages/starlette/responses.py", line 257, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/usr/local/lib/python3.12/site-packages/starlette/responses.py", line 260, in wrap
    |     await func()
    |   File "/usr/local/lib/python3.12/site-packages/starlette/responses.py", line 249, in stream_response
    |     async for chunk in self.body_iterator:
    |   File "/usr/local/lib/python3.12/site-packages/starlette/concurrency.py", line 65, in iterate_in_threadpool
    |     yield await anyio.to_thread.run_sync(_next, as_iterator)
    |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/usr/local/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
    |     return await get_async_backend().run_sync_in_worker_thread(
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    |     return await future
    |            ^^^^^^^^^^^^
    |   File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    |     result = context.run(func, *args)
    |              ^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/usr/local/lib/python3.12/site-packages/starlette/concurrency.py", line 54, in _next
    |     return next(iterator)
    |            ^^^^^^^^^^^^^^
    |   File "/home/user/app/server/features/translator.py", line 62, in
    |     return (
    |     ^
    |   File "/usr/local/lib/python3.12/site-packages/ctranslate2/extensions.py", line 82, in translator_translate_iterable
    |     yield from _process_iterable(
    |   File "/usr/local/lib/python3.12/site-packages/ctranslate2/extensions.py", line 554, in _process_iterable
    |     yield queue.popleft().result()
    |           ^^^^^^^^^^^^^^^^^^^^^^^^
    | RuntimeError: No position encodings are defined for positions >= 1024, but got position 1657
    +------------------------------------

{"level":"error","ts":1711208842.7998471,"logger":"http.handlers.reverse_proxy","msg":"reading from backend","error":"unexpected EOF"}
{"level":"error","ts":1711208842.8048394,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":"localhost:5000","duration":0.00879943,"request":{"remote_ip":"172.17.0.1","remote_port":"40232","client_ip":"172.17.0.1","proto":"HTTP/1.0","method":"POST","host":"lenovo:7860","uri":"/v2/translate","headers":{"X-Forwarded-For":["172.17.0.1"],"User-Agent":[""],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["lenovo:7860"],"Content-Length":["6819"],"Content-Type":["application/json; charset=UTF-8"],"Date":["Sat, 23 Mar 2024 15:47:22 UTC"]}},"error":"reading: unexpected EOF"}


norick5243 commented on August 26, 2024

My guess is that the first problem is that the API server times out after about 7 seconds:

"http.log.error","msg":"context deadline exceeded"
"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response"

The second problem appears with even longer text:

RuntimeError: No position encodings are defined for positions >= 1024, but got position 4861

There seems to be a limit of 1024.
With the first problem, I think the translation still continues in the background; with the second, it throws an error and the translation is not executed at all.
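The 1024 in that RuntimeError is the model's position-encoding budget: NLLB-200 only defines token positions up to 1024, so a request that tokenizes to more positions fails no matter what MAX_INPUT_LENGTH is set to. A hedged sketch of a client-side pre-check, using whitespace words as a crude stand-in for the real tokenizer (which emits at least one subword token per word, usually more):

```python
MAX_POSITIONS = 1024  # the position-encoding limit from the RuntimeError above

def rough_token_count(text: str) -> int:
    """Lower-bound token estimate: one whitespace word is at least one token.

    Because the real SentencePiece tokenizer produces one or more tokens per
    word, reaching MAX_POSITIONS here already guarantees the request will
    exceed the model's position encodings.
    """
    return len(text.split())

text = ' '.join(['word'] * 1657)
if rough_token_count(text) >= MAX_POSITIONS:
    print('too long for one request: split the text first')
```

This only catches guaranteed failures; a text can still be too long even when this check passes, since real token counts exceed word counts.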


winstxnhdw commented on August 26, 2024

Yeah, it seems like you aren't even getting the new images.


norick5243 commented on August 26, 2024

I tried installing it on a completely different, new PC, but the result remained the same.
What could be the cause?

PS C:\Users\norick> docker run --name my_nllb -e APP_PORT=7860 -e MAX_INPUT_LENGTH=0 -e CACHE_TIMEOUT=300 -p 7860:7860 ghcr.io/winstxnhdw/nllb-api:main
Unable to find image 'ghcr.io/winstxnhdw/nllb-api:main' locally
main: Pulling from winstxnhdw/nllb-api
8a1e25ce7c4f: Pull complete
1103112ebfc4: Pull complete
e71929d49167: Pull complete
c529235f83c8: Pull complete
a47354887c31: Pull complete
bab5667827e6: Pull complete
3b31cad95c05: Pull complete
f3db419b7994: Pull complete
f3beb8ad61dc: Pull complete
fd66263818ce: Pull complete
0b19c1e9a8a5: Pull complete
Digest: sha256:93b03626da9ede70507c3399900fe03ead750ae07396ca1de6c0294f34926980
Status: Downloaded newer image for ghcr.io/winstxnhdw/nllb-api:main
2024-03-24 01:25:37,834 INFO supervisord started with pid 1
2024-03-24 01:25:38,838 INFO spawned: 'server' with pid 7
2024-03-24 01:25:38,842 INFO spawned: 'caddy' with pid 8
{"level":"info","ts":1711243538.8956883,"msg":"using provided configuration","config_file":"Caddyfile","config_adapter":"caddyfile"}
{"level":"info","ts":1711243538.9008093,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1711243538.9014416,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0004ad980"}
{"level":"info","ts":1711243538.9125662,"logger":"http.handlers.cache","msg":"Set nextTxnTs to 0"}
{"level":"info","ts":1711243538.915696,"logger":"http.handlers.cache","msg":"Set backend timeout to 10s"}
{"level":"info","ts":1711243538.9157453,"logger":"http.handlers.cache","msg":"Set cache timeout to 10s"}
{"level":"info","ts":1711243538.915751,"logger":"http.handlers.cache","msg":"Souin configuration is now loaded."}
{"level":"info","ts":1711243538.9165044,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"warn","ts":1711243538.9165168,"logger":"tls","msg":"unable to get instance ID; storage clean stamps will be incomplete","error":"open /home/user/.local/share/caddy/instance.uuid: no such file or directory"}
{"level":"info","ts":1711243538.9169295,"msg":"autosaved config (load with --resume flag)","file":"/home/user/.config/caddy/autosave.json"}
{"level":"info","ts":1711243538.9169784,"msg":"serving initial configuration"}
{"level":"info","ts":1711243538.9221144,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/home/user/.local/share/caddy"}
{"level":"info","ts":1711243538.9223886,"logger":"tls","msg":"finished cleaning storage units"}
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

  • /v2/translate route found!
  • /v2/index route found!
    [2024-03-24 01:25:42 +0000] [7] [INFO] Starting gunicorn 21.2.0
    [2024-03-24 01:25:42 +0000] [7] [INFO] Listening at: http://0.0.0.0:5000 (7)
    [2024-03-24 01:25:42 +0000] [7] [INFO] Using worker: uvicorn.workers.UvicornWorker
    [2024-03-24 01:25:42 +0000] [27] [INFO] Booting worker with pid: 27
    [2024-03-24 01:25:42 +0000] [27] [INFO] Started server process [27]
    [2024-03-24 01:25:42 +0000] [27] [INFO] Waiting for application startup.
    Fetching 9 files: 22%|██▏ | 2/9 [00:01<00:05, 1.25it/s]2024-03-24 01:25:49,141 INFO success: server entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
    2024-03-24 01:25:49,141 INFO success: caddy entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
    Fetching 9 files: 100%|██████████| 9/9 [03:43<00:00, 24.86s/it]
    [2024-03-24 01:29:29 +0000] [27] [INFO] Application startup complete.

[2024-03-24 01:29:54 +0000] [INFO] 200 "POST /v2/translate" 172.17.0.1 "NIL" in 2920.4108 ms
{"level":"error","ts":1711243808.1974733,"logger":"http.log.error","msg":"context deadline exceeded","request":{"remote_ip":"172.17.0.1","remote_port":"43668","client_ip":"172.17.0.1","proto":"HTTP/1.0","method":"POST","host":"game:7860","uri":"/api/v2/translate","headers":{"Connection":["close"],"Content-Length":["1852"],"Content-Type":["application/json; charset=UTF-8"]}},"duration":10.001060666}
{"level":"error","ts":1711243808.1982503,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":"localhost:5000","duration":0.002788804,"request":{"remote_ip":"172.17.0.1","remote_port":"43668","client_ip":"172.17.0.1","proto":"HTTP/1.0","method":"POST","host":"game:7860","uri":"/v2/translate","headers":{"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["game:7860"],"Date":["Sun, 24 Mar 2024 01:29:58 UTC"],"X-Forwarded-For":["172.17.0.1"],"Content-Length":["1852"],"Content-Type":["application/json; charset=UTF-8"],"User-Agent":[""]}},"error":"reading: context canceled"}
[2024-03-24 01:30:17 +0000] [INFO] 200 "POST /v2/translate" 172.17.0.1 "NIL" in 73569.3316 ms


norick5243 commented on August 26, 2024

I tried installing it on Ubuntu as well, but got the same error.


norick5243 commented on August 26, 2024

Sorry to bother you yet again.
From my further investigation, as far as I can tell, isn't there a 10-second timeout set somewhere in Caddy or CTranslate2? The same thing happens when translating from English to Japanese.
Do translations that take more than 10 seconds work for other people?

"duration":10.000816066}
"duration":10.001197174}


winstxnhdw commented on August 26, 2024

CACHE_TIMEOUT defaults to milliseconds. The issue was that you didn't set a long enough CACHE_TIMEOUT. If you want it to be 300 seconds, you have to use 300s instead.

docker run --rm \
  -e APP_PORT=7860 \
  -e MAX_INPUT_LENGTH=0 \
  -e CACHE_TIMEOUT=300s \
  -e CT2_FORCE_CPU_ISA=AVX2 \
  -p 7860:7860 \
  ghcr.io/winstxnhdw/nllb-api:main

Should be fine now.
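To illustrate the unit pitfall described above: if a bare number is read as milliseconds, CACHE_TIMEOUT=300 is only 0.3 seconds, while 300s is five minutes. A hypothetical duration parser, purely to show the arithmetic (the real parsing in Caddy/Souin may differ):

```python
def parse_duration_seconds(value: str) -> float:
    """Convert a duration string to seconds, treating a bare number as
    milliseconds, as described in the comment above. Illustrative only."""
    value = value.strip()
    if value.endswith('ms'):
        return float(value[:-2]) / 1000.0
    if value.endswith('s'):
        return float(value[:-1])
    if value.endswith('m'):
        return float(value[:-1]) * 60.0
    return float(value) / 1000.0  # bare number: milliseconds

print(parse_duration_seconds('300'))   # 0.3
print(parse_duration_seconds('300s'))  # 300.0
```

So the earlier CACHE_TIMEOUT=300 gave a timeout shorter than even a fast translation, which is why the suffixed form matters.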


norick5243 commented on August 26, 2024

Thank you.

I'll try installing again.


norick5243 commented on August 26, 2024

I executed:

docker run --name my_nllb -e APP_PORT=7860 -e MAX_INPUT_LENGTH=0 -e CACHE_TIMEOUT=300s -e CT2_FORCE_CPU_ISA=AVX2 -p 7860:7860 ghcr.io/winstxnhdw/nllb-api:main

The timeout is longer than before, but now an error occurs after 30 seconds.
It seems that context deadline exceeded has changed to context canceled.

"duration":30.033858361}
"duration":30.015550372}

[2024-03-24 07:54:32 +0000] [INFO] 200 "POST /v2/translate" 172.17.0.1 "NIL" in 2625.4808 ms
{"level":"error","ts":1711266907.353249,"logger":"http.log.error","msg":"context canceled","request":{"remote_ip":"172.17.0.1","remote_port":"36742","client_ip":"172.17.0.1","proto":"HTTP/1.1","method":"POST","host":"lenovo:7860","uri":"/api/v2/translate","headers":{"Accept":["*/*"],"Content-Type":["application/json; charset=UTF-8"],"Content-Length":["1852"],"Expect":["100-continue"]}},"duration":30.015550372}
{"level":"error","ts":1711266907.353249,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":"localhost:5000","duration":0.002236774,"request":{"remote_ip":"172.17.0.1","remote_port":"36742","client_ip":"172.17.0.1","proto":"HTTP/1.1","method":"POST","host":"lenovo:7860","uri":"/v2/translate","headers":{"Date":["Sun, 24 Mar 2024 07:54:37 UTC"],"User-Agent":[""],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["lenovo:7860"],"Accept":["*/*"],"Content-Type":["application/json; charset=UTF-8"],"Content-Length":["1852"],"Expect":["100-continue"],"X-Forwarded-For":["172.17.0.1"]}},"error":"reading: context canceled"}

I also tried

docker run --name my_nllb -e APP_PORT=7860 -e MAX_INPUT_LENGTH=0 -e CACHE_TIMEOUT=3000s -e CT2_FORCE_CPU_ISA=AVX2 -p 7860:7860 ghcr.io/winstxnhdw/nllb-api:main

but it still failed after the same 30 seconds.


norick5243 commented on August 26, 2024

Sorry to bother you again.

By extending the curl timeout, I was able to get past 30 seconds. That was a configuration error on my side.

If I make the text to translate a little longer, I still get the error I posted earlier:

RuntimeError: No position encodings are defined for positions >= 1024, but got position 1657

Isn't this the remaining problem?


norick5243 commented on August 26, 2024

As far as I can tell now, measuring with a stopwatch, after one minute the translation is abandoned midway: the last sentence or word is repeated continuously, or sentences are cut off.
It is exactly one minute. Can this timeout be disabled?

When I translate a Japanese sentence, it seems that after one minute the same translations appear one after another.

(response)
The Prime Minister of Japan has also raised doubts about his own party. In June 2022, the Prime Minister of Japan has raised doubts about his own party. In June 2022, the Prime Minister of Japan has also raised doubts about his own party. In June 2022, the Prime Minister of Japan has announced that the Prime Minister of Japan will announce the annual meeting of the Prime Minister of Japan after the opening of the annual meeting of the Prime Minister of Japan. "Mr. Kim Min-Hyo, this is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will announce itself. This is a political party that will anno

(response)
The party's head of state, Prime Minister Shinzo Abe, said that he was "doubting whether he would celebrate the party's opening again". The party's head of state, Prime Minister Kim Jong-un, said that the party's head of state, Prime Minister Kim Jong-un, would celebrate the 20th anniversary of the inauguration of the party's head of state, Kim Jong-un. The party's head of state, Kim Jong-un, said that the party's head of state, Kim Jong-un, would celebrate the 20th anniversary of the inauguration of the party's head of state, Kim Jong-un. The party's head of state, Kim Jong-un, said that the party's head of state, Kim Jong-un, would celebrate the 20th anniversary of the inauguration of the party's head of state, Kim Jong-un. The party's head, Kim Jong-un, said that the party's head, Kim Jong-un, would celebrate the 20th anniversary of the inauguration of the party's head, Kim Jong-un. This is the first political party to


winstxnhdw commented on August 26, 2024

> RuntimeError: No position encodings are defined for positions >= 1024, but got position 1657

This seems to be an issue with ctranslate2. It looks like the max_input_length argument is not being accepted properly by translate_iterable. To be honest, you should batch your texts instead.

> It's exactly one minute. Can't this timeout be disabled?

I don't set any timeouts. It's likely that whatever you are using to make the requests is timing out.


winstxnhdw commented on August 26, 2024

You can see how to batch your texts here: https://github.com/winstxnhdw/nllb-api/blob/main/tests/test_translate.py
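As a rough illustration of what batching means here: split the input on sentence boundaries and send each batch as its own /v2/translate request, keeping every batch well under the model's limits. The sentence-splitting regex and the 500-character budget below are illustrative assumptions, not nllb-api's own code:

```python
import re

def batch_sentences(text: str, max_chars: int = 500) -> list[str]:
    """Group sentences into batches of at most `max_chars` characters,
    so each batch can be sent as a separate translation request.

    Splits after ., !, ? or the Japanese full stop (。), since the thread
    concerns Japanese-to-English translation.
    """
    sentences = re.split(r'(?<=[.!?。])\s+', text.strip())
    batches, current = [], ''
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            batches.append(current)
            current = sentence
        else:
            current = f'{current} {sentence}'.strip()
    if current:
        batches.append(current)
    return batches

batches = batch_sentences('First sentence. ' * 50)
print(len(batches), all(len(b) <= 500 for b in batches))
```

Each batch would then be POSTed to /v2/translate in turn and the results concatenated, which also keeps every request short enough to avoid client-side timeouts.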

