
pjlab-adg / drivelikeahuman

337 stars · 8 watchers · 32 forks · 5.21 MB

Drive Like a Human: Rethinking Autonomous Driving with Large Language Models

License: MIT License

Languages: Python 69.56%, HTML 3.84%, Jupyter Notebook 26.60%

drivelikeahuman's People

Contributors

fdarco · friskit-china · sankin97 · zijinoier


drivelikeahuman's Issues

Reasoning ability with common sense?

I was trying to reproduce the "Reasoning ability with common sense" example with the picture below:
[image: cones_on_truck_1]

But the result does not match the paper:
[screenshot from 2023-07-10]

I am wondering whether there is any way to fix this, or whether the result is simply not reproducible.
Cheers.

ValueError

Bro, why is the video I record only two or three seconds long? I'm using OpenAI, not Azure; then this error appears:
laneID, vid = inputs.replace(' ', '').split(',')
ValueError: not enough values to unpack (expected 2, got 1)
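
The unpack fails because the tool input contained only one comma-separated value. A more defensive version of that line (my own sketch, not the repo's code) fails with a clearer message about what the agent actually passed in:

# Hypothetical defensive rewrite; `inputs` is assumed to be a string
# like "lane_1,veh3", as the original split(',') expects.
parts = inputs.replace(' ', '').split(',')
if len(parts) != 2:
    raise ValueError(f"Expected 'laneID,vid' but got {inputs!r}; "
                     "check what the agent passed to the tool.")
laneID, vid = parts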

Error when reproducing

Hello, I followed the method in this article and executed python HELLM.py. The run hangs at the point shown below and never progresses:
[screenshot]

Change the large model

Does your project support the replacement of large language models, such as replacing gpt3.5 in the project with the ChatGlm3 large model? Because our school does not have the conditions to unlock the restrictions of gpt3.5, that is, it does not have a credit card to upgrade the openai account. Thank you.
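
The repo does not document this, but since HELLM.py builds its driver agent on LangChain's ChatOpenAI, one plausible route is to serve ChatGLM3 behind an OpenAI-compatible HTTP endpoint (the ChatGLM3 repository ships such a demo server) and point the client at it. A rough sketch; the URL, model name, and key value below are assumptions, not tested against this repo:

from langchain.chat_models import ChatOpenAI

# Sketch only: assumes an OpenAI-compatible server for ChatGLM3
# is listening locally.
llm = ChatOpenAI(
    temperature=0,
    model_name='chatglm3-6b',                    # assumed server-side model name
    openai_api_base='http://localhost:8000/v1',  # assumed endpoint
    openai_api_key='EMPTY',                      # local servers typically ignore the key
    max_tokens=1024,
)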

Error with http://llama-adapter.opengvlab.com/

Hi, thank you so much for your amazing work! I'm experimenting with the CaseReasoning notebook, which ran smoothly locally a few weeks ago. Now, when I run client = Client("http://llama-adapter.opengvlab.com/"), it raises
ValueError: Could not get Gradio config from: http://llama-adapter.opengvlab.com/

It appears that http://llama-adapter.opengvlab.com/ is down. Do you happen to know of any updates on this? Thank you so much!

Observation: Error

Hi, your work is amazing! However, I have noticed that when I use ChatGPT to generate the thoughts, it often returns "Observation: Check your output and make sure it conforms the format instructions!"
Have you tried other ways to better constrain the output of large models?
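
That "Observation: ..." message appears when the agent's output parser rejects a malformed reply. One common mitigation (a sketch of the general retry idea, not the repo's code) is to validate the reply against the expected format yourself and re-ask on failure:

import re

# Deliberately loose check for the 'Final Answer:' block the prompt demands.
FINAL_ANSWER = re.compile(r'Final Answer:\s*"decision"')

def ask_until_valid(ask, prompt, max_retries=3):
    # `ask` is any callable mapping a prompt string to the model's reply text.
    reply = ask(prompt)
    for _ in range(max_retries):
        if FINAL_ANSWER.search(reply):
            return reply
        # Re-ask, reminding the model of the required format.
        reply = ask(prompt + "\nYour last reply did not follow the required "
                             "format. Repeat it as a 'Final Answer:' block.")
    raise ValueError("No correctly formatted answer after retries.")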

Experimental data

I would like to ask why some of your experimental data are not reported in the paper, such as the 30 s success rate in the environment with road 4 and density 1. We look forward to hearing from you!

It doesn't work as smoothly as the video you showed

Thank you for the open-source spirit; the ideas in the paper are great. However, I keep running into a problem when running your code, and although I have asked several times the problem remains unsolved, so I would like to ask you in detail. Here is the same code as yours:
[screenshots of the code]
The results are as follows:
[screenshot]
It just stays stuck like this, waits until the final decision is made before moving, and then gets stuck waiting again.
I was wondering whether this is normal, because your demo videos looked very smooth.
Hence this issue. I think there are the following anomalies in my conda environment, which I list one by one; please kindly help me see which of them is the problem:
[screenshots of warnings]
Those are roughly the exceptions: there are warnings and many repeated reasoning passes, and I do not know whether these problems cause the simulator to stall and run unevenly. I would really appreciate your help!

TypeError encountered when recording video: Must be a real number, not NoneType

Description:
I encountered an issue when trying to record videos in my environment using the gymnasium.wrappers.RecordVideo wrapper. I receive an error indicating that the fps value is of NoneType when the code tries to save the video. Do I need to specify a frame rate (fps) when initializing the environment or in the configuration? (A possible workaround is sketched after the environment details below.)

Steps to Reproduce:

  1. Initialize a 'highway-v0' environment and wrap it using gymnasium.wrappers.RecordVideo.
  2. Run the environment and perform actions.
  3. Try to save the video upon environment termination or reset.

Expected Behavior:
The video should save successfully without any errors.

Actual Behavior:
A TypeError is thrown indicating that the fps value is NoneType.

Error Logs:
Traceback (most recent call last):
File "E:\DriveLikeAHuman-main\DriveLikeAHuman-main\HELLM.py", line 107, in
obs, reward, done, info, _ = env.step(output["action_id"])
File "C:\Users\ASUS\anaconda3\lib\site-packages\gymnasium\wrappers\record_video.py", line 180, in step
self.close_video_recorder()
File "C:\Users\ASUS\anaconda3\lib\site-packages\gymnasium\wrappers\record_video.py", line 193, in close_video_recorder
self.video_recorder.close()
File "C:\Users\ASUS\anaconda3\lib\site-packages\gymnasium\wrappers\monitoring\video_recorder.py", line 161, in close
clip.write_videofile(self.path, logger=moviepy_logger)
File "C:\Users\ASUS\anaconda3\lib\site-packages\decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\decorators.py", line 54, in requires_duration
return f(clip, *a, **k)
File "C:\Users\ASUS\anaconda3\lib\site-packages\decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\decorators.py", line 135, in use_clip_fps_by_default
return f(clip, *new_a, **new_kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\decorators.py", line 22, in convert_masks_to_RGB
return f(clip, *a, **k)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 300, in write_videofile
ffmpeg_write_video(self, filename, fps, codec,
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 213, in ffmpeg_write_video
with FFMPEG_VideoWriter(filename, clip.size, fps, codec = codec,
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 88, in init
'-r', '%.02f' % fps,
TypeError: must be real number, not NoneType

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\DriveLikeAHuman-main\DriveLikeAHuman-main\HELLM.py", line 111, in
env.close()
File "C:\Users\ASUS\anaconda3\lib\site-packages\gymnasium\wrappers\record_video.py", line 217, in close
self.close_video_recorder()
File "C:\Users\ASUS\anaconda3\lib\site-packages\gymnasium\wrappers\record_video.py", line 193, in close_video_recorder
self.video_recorder.close()
File "C:\Users\ASUS\anaconda3\lib\site-packages\gymnasium\wrappers\monitoring\video_recorder.py", line 161, in close
clip.write_videofile(self.path, logger=moviepy_logger)
File "C:\Users\ASUS\anaconda3\lib\site-packages\decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\decorators.py", line 54, in requires_duration
return f(clip, *a, **k)
File "C:\Users\ASUS\anaconda3\lib\site-packages\decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\decorators.py", line 135, in use_clip_fps_by_default
return f(clip, *new_a, **new_kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\decorators.py", line 22, in convert_masks_to_RGB
return f(clip, *a, **k)
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 300, in write_videofile
ffmpeg_write_video(self, filename, fps, codec,
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 213, in ffmpeg_write_video
with FFMPEG_VideoWriter(filename, clip.size, fps, codec = codec,
File "C:\Users\ASUS\anaconda3\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 88, in init
'-r', '%.02f' % fps,
TypeError: must be real number, not NoneType
Moviepy - Building video E:\DriveLikeAHuman-main\DriveLikeAHuman-main\results-video\highwayv0-episode-0.mp4.
Moviepy - Writing video E:\DriveLikeAHuman-main\DriveLikeAHuman-main\results-video\highwayv0-episode-0.mp4

Moviepy - Building video E:\DriveLikeAHuman-main\DriveLikeAHuman-main\results-video\highwayv0-episode-0.mp4.
Moviepy - Writing video E:\DriveLikeAHuman-main\DriveLikeAHuman-main\results-video\highwayv0-episode-0.mp4

C:\Users\ASUS\anaconda3\lib\site-packages\gymnasium\wrappers\monitoring\video_recorder.py:182: UserWarning: WARN: Unable to save last video! Did you call close()?

Environment:
OS: Windows 11
Python version: 3.10.13
gymnasium version: 0.29.1
moviepy version: 1.0.3
IDE: PyCharm 2023.2.1 (Professional Edition)
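
gymnasium's video recorder appears to take the frame rate from the environment's metadata, so a NoneType fps suggests render_fps is unset or None on the wrapped env. An untested workaround sketch (the 30 fps value is an arbitrary choice of mine):

import gymnasium as gym
import highway_env  # registers 'highway-v0'
from gymnasium.wrappers import RecordVideo

env = gym.make('highway-v0', render_mode='rgb_array')
# If render_fps is None, moviepy receives fps=None and raises the
# TypeError above; pin it to a concrete value before recording.
if env.metadata.get('render_fps') is None:
    env.metadata['render_fps'] = 30
env = RecordVideo(env, video_folder='results-video')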

Rate Limit Error from ChatGPT

Hi there,

Thanks for sharing this great work.
The code worked well when I first tried it a month ago. However, when I tried it yesterday I got a Rate Limit Error from the OpenAI API after running about 10 frames, so the simulation terminated. I wonder whether you have seen the same error recently and whether you have any thoughts on it. Looking forward to your reply.

Best,
Yixuan
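
For reference, one possible cause is that free-trial keys have far lower request limits than paid ones; beyond that, OpenAI rate limits are normally handled client-side with exponential backoff. A minimal sketch against the pre-1.0 openai package this repo uses:

import time
import openai

def call_with_backoff(fn, max_retries=5, base_delay=2.0):
    # Retry `fn` on rate limits, doubling the wait each attempt.
    for attempt in range(max_retries):
        try:
            return fn()
        except openai.error.RateLimitError:
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("Still rate-limited after retries.")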

Could we use GPT-4 model in this scope?

Is it possible for us to use GPT-4 model (with GPT-4 API key in config file) in this project?
I have tried to replace the API key with a GPT-4 one in config.yaml and set the model_name of the llm in HELLM.py as below:
os.environ["OPENAI_API_KEY"] = OPENAI_CONFIG['OPENAI_KEY']
llm = ChatOpenAI(
    temperature=0,
    model_name='gpt-4-0613',
    max_tokens=1024,
    request_timeout=60,
)
But we got this error:
openai.error.InvalidRequestError: The model: gpt-4-0613 does not exist

Thank you in advance for any helpful insight!
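
This error usually means the account behind the API key has no GPT-4 access (GPT-4 access was gated separately from GPT-3.5 at the time), not that the config is malformed. With the pre-1.0 openai package you can list the models your key can actually reach; a quick check, assuming OPENAI_API_KEY is already set as HELLM.py does:

import openai

# Lists every model this API key is allowed to use.
available = [m.id for m in openai.Model.list().data]
print('gpt-4 variants:', [m for m in available if m.startswith('gpt-4')])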

Token length exceeds during running simulation

Hi,

This is Haowei; thanks for the open-source code. When trying to run the repo, I encountered the following error:

Exception has occurred: InvalidRequestError
This model's maximum context length is 4097 tokens. However, you requested 4270 tokens (3246 in the messages, 1024 in the completion). Please reduce the length of the messages or completion.

I am currently using Windows 11, Python 3.9.13 (Anaconda), and the OpenAI API (rather than the Azure API). I was wondering if you could help solve this issue. Please let me know if any other information is required.

Thanks.
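
Since 4097 tokens is the model's total context window (prompt plus completion), the usual fixes are to leave less room for the completion or to switch to a larger-context variant. A sketch against the repo's ChatOpenAI setup; the exact values are suggestions, not the authors' settings:

from langchain.chat_models import ChatOpenAI

# Option 1: shrink the completion budget (the repo defaults to 1024),
# leaving more of the 4097-token window for the messages.
llm = ChatOpenAI(temperature=0, max_tokens=512, request_timeout=60)

# Option 2: use the 16k-context variant, which fits the 4270 tokens
# that overflowed the 4097-token window above.
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo-16k',
                 max_tokens=1024, request_timeout=60)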

Version of urllib3 and HTTPSConnection error

When I save the video result, I run into a problem:
Example:
[screenshot]

Error:

> Finished chain.
Moviepy - Building video /data/GitHub/DriveLikeAHuman/results-video/highwayv0-episode-0.mp4.
Moviepy - Writing video /data/GitHub/DriveLikeAHuman/results-video/highwayv0-episode-0.mp4

                                                  
Moviepy - Done !
Moviepy - video ready /data/GitHub/DriveLikeAHuman/results-video/highwayv0-episode-0.mp4

---------------------------------------------------------------------------
TimeoutError                              Traceback (most recent call last)
File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connection.py:203, in HTTPConnection._new_conn(self)
    202 try:
--> 203     sock = connection.create_connection(
    204         (self._dns_host, self.port),
    205         self.timeout,
    206         source_address=self.source_address,
    207         socket_options=self.socket_options,
    208     )
    209 except socket.gaierror as e:

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/util/connection.py:85, in create_connection(address, timeout, source_address, socket_options)
     84 try:
---> 85     raise err
     86 finally:
     87     # Break explicitly a reference cycle

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/util/connection.py:73, in create_connection(address, timeout, source_address, socket_options)
     72     sock.bind(source_address)
---> 73 sock.connect(sa)
     74 # Break explicitly a reference cycle

TimeoutError: [Errno 110] Connection timed out

The above exception was the direct cause of the following exception:

ConnectTimeoutError                       Traceback (most recent call last)
File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connectionpool.py:790, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
    789 # Make the request on the HTTPConnection object
--> 790 response = self._make_request(
    791     conn,
    792     method,
    793     url,
    794     timeout=timeout_obj,
    795     body=body,
    796     headers=headers,
    797     chunked=chunked,
    798     retries=retries,
    799     response_conn=response_conn,
    800     preload_content=preload_content,
    801     decode_content=decode_content,
    802     **response_kw,
    803 )
    805 # Everything went great!

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connectionpool.py:491, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
    490         new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
--> 491     raise new_e
    493 # conn.request() calls http.client.*.request, not the method in
    494 # urllib3.request. It also calls makefile (recv) on the socket.

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connectionpool.py:467, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
    466 try:
--> 467     self._validate_conn(conn)
    468 except (SocketTimeout, BaseSSLError) as e:

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connectionpool.py:1092, in HTTPSConnectionPool._validate_conn(self, conn)
   1091 if conn.is_closed:
-> 1092     conn.connect()
   1094 if not conn.is_verified:

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connection.py:611, in HTTPSConnection.connect(self)
    610 sock: socket.socket | ssl.SSLSocket
--> 611 self.sock = sock = self._new_conn()
    612 server_hostname: str = self.host

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connection.py:212, in HTTPConnection._new_conn(self)
    211 except SocketTimeout as e:
--> 212     raise ConnectTimeoutError(
    213         self,
    214         f"Connection to {self.host} timed out. (connect timeout={self.timeout})",
    215     ) from e
    217 except OSError as e:

ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f0c0bb01480>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)')

The above exception was the direct cause of the following exception:

MaxRetryError                             Traceback (most recent call last)
File ~/anaconda3/envs/llama/lib/python3.10/site-packages/requests/adapters.py:486, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
    485 try:
--> 486     resp = conn.urlopen(
    487         method=request.method,
    488         url=url,
    489         body=request.body,
    490         headers=request.headers,
    491         redirect=False,
    492         assert_same_host=False,
    493         preload_content=False,
    494         decode_content=False,
    495         retries=self.max_retries,
    496         timeout=timeout,
    497         chunked=chunked,
    498     )
    500 except (ProtocolError, OSError) as err:

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/connectionpool.py:844, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
    842     new_e = ProtocolError("Connection aborted.", new_e)
--> 844 retries = retries.increment(
    845     method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
    846 )
    847 retries.sleep()

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/urllib3/util/retry.py:515, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
    514     reason = error or ResponseError(cause)
--> 515     raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    517 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)

MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0c0bb01480>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)'))

During handling of the above exception, another exception occurred:

ConnectTimeout                            Traceback (most recent call last)
File /data/GitHub/DriveLikeAHuman/HELLM.py:105
    103 while not (done or truncated):
    104     sce.upateVehicles(obs, frame)
--> 105     DA.agentRun(output)
    106     da_output = DA.exportThoughts()
    107     output = outputParser.agentRun(da_output)

File /data/GitHub/DriveLikeAHuman/LLMDriver/driverAgent.py:72, in DriverAgent.agentRun(self, last_step_decision)
     70     last_step_explanation = "Not available"
     71 with get_openai_callback() as cb:
---> 72     self.agent.run(
     73         f"""
     74         You, the 'ego' car, are now driving a car on a highway. You have already drive for {self.sce.frame} seconds.
     75         The decision you made LAST time step was `{last_step_action}`. Your explanation was `{last_step_explanation}`. 
     76         Here is the current scenario: \n ```json\n{self.sce.export2json()}\n```\n. 
     77         Please make decision for the `ego` car. You have to describe the state of the `ego`, then analyze the possible actions, and finally output your decision. 
     78 
     79         There are several rules you need to follow when you drive on a highway:
     80         {TRAFFIC_RULES}
     81 
     82         Here are your attentions points:
     83         {DECISION_CAUTIONS}
     84         
     85         Let's think step by step. Once you made a final decision, output it in the following format: \n
     86         ```
     87         Final Answer: 
     88             "decision":{{"ego car's decision, ONE of the available actions"}},
     89             "expalanations":{{"your explaination about your decision, described your suggestions to the driver"}}
     90         ``` \n
     91         """,
     92         callbacks=[self.ch]
     93     )
     94 print(cb)
     95 print('[cyan]Final decision:[/cyan]')

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/langchain/chains/base.py:315, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    313     if len(args) != 1:
    314         raise ValueError("`run` supports only one positional argument.")
--> 315     return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    316         _output_key
    317     ]
    319 if kwargs and not args:
    320     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    321         _output_key
    322     ]

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/langchain/chains/base.py:183, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
    181     raise e
    182 run_manager.on_chain_end(outputs)
--> 183 final_outputs: Dict[str, Any] = self.prep_outputs(
    184     inputs, outputs, return_only_outputs
    185 )
    186 if include_run_info:
    187     final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/langchain/chains/base.py:257, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    255 self._validate_outputs(outputs)
    256 if self.memory is not None:
--> 257     self.memory.save_context(inputs, outputs)
    258 if return_only_outputs:
    259     return outputs

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/langchain/memory/token_buffer.py:48, in ConversationTokenBufferMemory.save_context(self, inputs, outputs)
     46 # Prune buffer if it exceeds max token limit
     47 buffer = self.chat_memory.messages
---> 48 curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
     49 if curr_buffer_length > self.max_token_limit:
     50     pruned_memory = []

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/langchain/chat_models/openai.py:499, in ChatOpenAI.get_num_tokens_from_messages(self, messages)
    497 if sys.version_info[1] <= 7:
    498     return super().get_num_tokens_from_messages(messages)
--> 499 model, encoding = self._get_encoding_model()
    500 if model.startswith("gpt-3.5-turbo"):
    501     # every message follows <im_start>{role/name}\n{content}<im_end>\n
    502     tokens_per_message = 4

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/langchain/chat_models/openai.py:477, in ChatOpenAI._get_encoding_model(self)
    475 # Returns the number of tokens used by a list of messages.
    476 try:
--> 477     encoding = tiktoken_.encoding_for_model(model)
    478 except KeyError:
    479     logger.warning("Warning: model not found. Using cl100k_base encoding.")

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/tiktoken/model.py:67, in encoding_for_model(model_name)
     65     for model_prefix, model_encoding_name in MODEL_PREFIX_TO_ENCODING.items():
     66         if model_name.startswith(model_prefix):
---> 67             return get_encoding(model_encoding_name)
     69 if encoding_name is None:
     70     raise KeyError(
     71         f"Could not automatically map {model_name} to a tokeniser. "
     72         "Please use `tiktoken.get_encoding` to explicitly get the tokeniser you expect."
     73     ) from None

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/tiktoken/registry.py:63, in get_encoding(encoding_name)
     60     raise ValueError(f"Unknown encoding {encoding_name}")
     62 constructor = ENCODING_CONSTRUCTORS[encoding_name]
---> 63 enc = Encoding(**constructor())
     64 ENCODINGS[encoding_name] = enc
     65 return enc

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/tiktoken_ext/openai_public.py:64, in cl100k_base()
     63 def cl100k_base():
---> 64     mergeable_ranks = load_tiktoken_bpe(
     65         "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken"
     66     )
     67     special_tokens = {
     68         ENDOFTEXT: 100257,
     69         FIM_PREFIX: 100258,
   (...)
     72         ENDOFPROMPT: 100276,
     73     }
     74     return {
     75         "name": "cl100k_base",
     76         "pat_str": r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+""",
     77         "mergeable_ranks": mergeable_ranks,
     78         "special_tokens": special_tokens,
     79     }

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/tiktoken/load.py:116, in load_tiktoken_bpe(tiktoken_bpe_file)
    114 def load_tiktoken_bpe(tiktoken_bpe_file: str) -> dict[bytes, int]:
    115     # NB: do not add caching to this function
--> 116     contents = read_file_cached(tiktoken_bpe_file)
    117     return {
    118         base64.b64decode(token): int(rank)
    119         for token, rank in (line.split() for line in contents.splitlines() if line)
    120     }

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/tiktoken/load.py:48, in read_file_cached(blobpath)
     45     with open(cache_path, "rb") as f:
     46         return f.read()
---> 48 contents = read_file(blobpath)
     50 os.makedirs(cache_dir, exist_ok=True)
     51 tmp_filename = cache_path + "." + str(uuid.uuid4()) + ".tmp"

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/tiktoken/load.py:24, in read_file(blobpath)
     22         return f.read()
     23 # avoiding blobfile for public files helps avoid auth issues, like MFA prompts
---> 24 resp = requests.get(blobpath)
     25 resp.raise_for_status()
     26 return resp.content

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/requests/api.py:73, in get(url, params, **kwargs)
     62 def get(url, params=None, **kwargs):
     63     r"""Sends a GET request.
     64 
     65     :param url: URL for the new :class:`Request` object.
   (...)
     70     :rtype: requests.Response
     71     """
---> 73     return request("get", url, params=params, **kwargs)

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/requests/api.py:59, in request(method, url, **kwargs)
     55 # By using the 'with' statement we are sure the session is closed, thus we
     56 # avoid leaving sockets open which can trigger a ResourceWarning in some
     57 # cases, and look like a memory leak in others.
     58 with sessions.Session() as session:
---> 59     return session.request(method=method, url=url, **kwargs)

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/requests/sessions.py:589, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    584 send_kwargs = {
    585     "timeout": timeout,
    586     "allow_redirects": allow_redirects,
    587 }
    588 send_kwargs.update(settings)
--> 589 resp = self.send(prep, **send_kwargs)
    591 return resp

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/requests/sessions.py:703, in Session.send(self, request, **kwargs)
    700 start = preferred_clock()
    702 # Send the request
--> 703 r = adapter.send(request, **kwargs)
    705 # Total elapsed time of the request (approximately)
    706 elapsed = preferred_clock() - start

File ~/anaconda3/envs/llama/lib/python3.10/site-packages/requests/adapters.py:507, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
    504 if isinstance(e.reason, ConnectTimeoutError):
    505     # TODO: Remove this in 3.0.0: see #2811
    506     if not isinstance(e.reason, NewConnectionError):
--> 507         raise ConnectTimeout(e, request=request)
    509 if isinstance(e.reason, ResponseError):
    510     raise RetryError(e, request=request)

ConnectTimeout: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0c0bb01480>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)'))

My versions:

python==3.10
urllib3==2.0.4
moviepy==1.0.3
tiktoken==0.4.0

I guess it may be a problem with the urllib3 version; my current version is urllib3==2.0.4.
May I ask which versions of urllib3, python, moviepy, and tiktoken you use?

Thank you.
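
The traceback is not a urllib3 bug: tiktoken is trying to download cl100k_base.tiktoken from openaipublic.blob.core.windows.net at runtime and the connection times out, so this looks like a network/firewall problem. tiktoken honors a TIKTOKEN_CACHE_DIR environment variable, so one workaround is to fetch the file once on a machine with access and drop it into a local cache; a sketch (the cache path is an arbitrary choice of mine):

import hashlib
import os

# tiktoken caches downloads under TIKTOKEN_CACHE_DIR, keyed by the
# SHA-1 of the source URL; place the pre-downloaded file at the
# printed path before running HELLM.py.
url = ("https://openaipublic.blob.core.windows.net/"
       "encodings/cl100k_base.tiktoken")
cache_dir = "/data/tiktoken_cache"  # assumed location
os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir
print(os.path.join(cache_dir, hashlib.sha1(url.encode()).hexdigest()))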
