azure-samples / openai
The repository for all Azure OpenAI Samples complementing the OpenAI cookbook.
Home Page: https://aka.ms/azure-openai
License: MIT License
Please provide us with the following information:
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [x] regression (a behavior that used to work and stopped in a new release)
I have been using Azure OpenAI for the last few months. I noticed a pattern where a new api-version seemed to become usable on the first day of each month. For example, the current month is 11, so I can use https://xxxxxxx.openai.azure.com/openai/deployments/gpt-4-32k/chat/completions?api-version=2023-11-01-preview.
In this URL, 11 denotes the month and 01 is the first day of the month. It was working well for the first few days and then suddenly stopped: 2023-10-01-preview still works, but 2023-11-01-preview gives an error like "resource not found". Is there a specific reason why this api-version was suddenly removed?
The API was working fine earlier as expected, but suddenly started giving the error "resource not found". Earlier I was getting proper responses.
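For what it's worth, Azure OpenAI api-versions are specific dated labels published in the service documentation, not a guaranteed monthly cadence, and preview versions can be retired. A minimal sketch that pins a documented version instead of deriving one from the calendar (the resource and deployment names are placeholders):

```python
# Pin a documented api-version rather than assuming a new
# "<yyyy>-<mm>-01-preview" exists each month; resource and deployment
# names here are placeholders.
def chat_completions_url(resource: str, deployment: str,
                         api_version: str = "2023-05-15") -> str:
    """Build an Azure OpenAI chat-completions URL with a pinned api-version."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

print(chat_completions_url("myresource", "gpt-4-32k"))
```

Pinning a version also makes failures like the one above reproducible, since the URL no longer changes with the date.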
Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
Thanks! We'll be in touch soon.
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
All the apps are individually up and running, but when chatting with the bot webapp the orchestrator is not able to access the OpenAI completions endpoint, and the error below is displayed in the command prompt. If anyone has resolved this, please share how.
400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
[2024-04-02 10:26:07,000] ERROR in app: Exception on /query [POST]
Traceback (most recent call last):
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\cognition\openai\api\client.py", line 32, in completions
response.raise_for_status()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\requests\models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\main.py", line 24, in run_flow
agent_response = orchestrator.run_query(conversation, user_id, conversation_id, query)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\orchestrator.py", line 50, in run_query
classification = self.topic_classifier.run(query)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\tasks\topic_classifier.py", line 24, in run
response = topic_classifier.generate_dialog(classifier_payload)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\cognition\openai\model_manager.py", line 58, in generate_dialog
response_choice = self.client.completions(text_prompt, self.model_params)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 325, in iter
raise retry_exc.reraise()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 158, in reraise
raise self.last_attempt.result()
File "C:\Users\ArjunM\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 433, in result
return self.__get_result()
File "C:\Users\ArjunM\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\cognition\openai\api\client.py", line 40, in completions
raise Exception("Error making request to Open AI completions endpoint.")
Exception: Error making request to Open AI completions endpoint.
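A possible cause worth noting for readers hitting the same trace: GPT-4 family deployments serve the chat route, and posting to the legacy /completions route typically returns 400. A stdlib-only sketch of building the request against the chat route instead (endpoint, deployment, and key are placeholders):

```python
import json
import urllib.request

def build_chat_request(endpoint: str, deployment: str, api_version: str,
                       api_key: str, user_text: str) -> urllib.request.Request:
    """Build a POST against /chat/completions (not the legacy /completions)."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = {"messages": [{"role": "user", "content": user_text}]}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("https://example.openai.azure.com", "GPT4",
                         "2024-02-15-preview", "<key>", "hello")
print(req.full_url)
```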
Windows 10
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Run
azd up
against the AOAISearchDemo code. When there are failures, you are not bubbling up the failure.
Look at line 65 of the script; this is but one example, it's everywhere in that script:
if ($process.ExitCode -ne 0) {
    Write-Host ""
    Write-Warning "Installing post-deployment dependencies failed with non-zero exit code $LastExitCode."
    Write-Host ""
    exit $process.ExitCode
}
You are effectively printing 0 even when there is a failure: the warning interpolates $LastExitCode, which is not the same value as the $process.ExitCode the script just tested.
It should report the correct exit code.
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
deploy code to webapp
2023-07-08T02:05:04.538815817Z from backend.contracts.error import OutOfScopeException, UnauthorizedDBAccessException
2023-07-08T02:05:04.538820117Z ModuleNotFoundError: No module named 'backend'
I have it running locally and confirmed it works prior to deployment
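One hedged guess at the cause, based on similar App Service reports: locally the app is started from the directory that contains backend/, while the deployed startup command runs from a different working directory, so 'backend' never lands on sys.path. A defensive sketch (the wwwroot path is an assumption about the App Service layout):

```python
import sys

def ensure_on_path(project_root: str) -> None:
    """Prepend the folder that contains the 'backend' package to sys.path."""
    if project_root not in sys.path:
        sys.path.insert(0, project_root)

# On Linux App Service the code is typically unpacked under /home/site/wwwroot.
ensure_on_path("/home/site/wwwroot")
```

Alternatively, setting the startup command's working directory to the folder that contains backend/ avoids touching sys.path at all.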
Windows 10
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Hello, I followed the steps to index over 100 of our own PDF documents into Azure Cognitive Search. The answers I get through Bring Your Own Data in the Chat playground, even for pretty simple questions, are simply wrong.
No log messages because the system functions.
A more precise answer to my questions.
Windows 10, but I am using Azure AI Studio
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
azd auth login --tenant-id 'REDACTED' or azd auth login --client-id 'REDACTED' --client-secret 'REDACTED' --tenant-id 'REDACTED'
azd up
The indexing of docs eventually fails because fetching the token times out. I have tried both UPN and SPN, but neither worked.
Indexing sections from 'Surface Deployment Accelerator.pdf' into search index 'gptkbindex'
Indexed 4 sections, 4 succeeded
Processing 'REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo/data/surface_device_documentation/Deploy & manage\Automate deployment\Upgrade Surface devices to Windows 10 with MDT.pdf'
Uploading blob for page 0 -> Upgrade Surface devices to Windows 10 with MDT-0.pdf
Extracting text from 'REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo/data/surface_device_documentation/Deploy & manage\Automate deployment\Upgrade Surface devices to Windows 10 with MDT.pdf' using Azure Form Recognizer
AzureDeveloperCliCredential.get_token failed: Failed to invoke the Azure Developer CLI
Unable to retrieve continuation token: cannot pickle '_io.BufferedReader' object
Traceback (most recent call last):
File "REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\.venv\lib\site-packages\azure\identity\_credentials\azd_cli.py", line 153, in _run_command
return subprocess.check_output(args, **kwargs)
File "REDACTED\anaconda3\lib\subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "REDACTED\anaconda3\lib\subprocess.py", line 505, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "REDACTED\anaconda3\lib\subprocess.py", line 1154, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "REDACTED\anaconda3\lib\subprocess.py", line 1530, in _communicate
raise TimeoutExpired(self.args, orig_timeout)
subprocess.TimeoutExpired: Command '['cmd', '/c', 'azd auth token --output json --scope https://cognitiveservices.azure.com/.default --tenant-id REDACTED']' timed out after 10 seconds
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\prepdocs.py", line 285, in <module>
page_map = get_document_text(filename)
File "REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\prepdocs.py", line 155, in get_document_text
form_recognizer_results = poller.result()
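The root failure here is the `azd auth token` subprocess exceeding the credential's 10-second default window, which the standard library surfaces as TimeoutExpired. A stdlib repro of that exact failure mode (a deliberately slow Python one-liner stands in for azd); if your azure-identity version supports it, the credential constructor's process_timeout parameter can widen that window — verify the parameter name against the azure-identity docs for your version:

```python
import subprocess
import sys

# Reproduce the TimeoutExpired seen in the trace with a slow child
# process and a tiny timeout.
timed_out = False
try:
    subprocess.run([sys.executable, "-c", "import time; time.sleep(5)"],
                   timeout=0.2, check=True)
except subprocess.TimeoutExpired as exc:
    timed_out = True
    print(f"timed out after {exc.timeout}s, as in the azd token fetch")
```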
azd up finished successfully
Windows 11
- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Hi,
I'm using openai-python library v1.3.5 with the AsyncAzureOpenAI client.
I'd like to get the rate limiting information in the response headers, see this issue for more context: openai/openai-python#416 (comment)
But the response headers do not seem to contain this information when using Azure OpenAI.
Does the Azure OpenAI API return this information? If yes, how can I retrieve it?
Thanks
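Not an official answer, but in openai-python the raw HTTP headers are reachable through the .with_raw_response accessor, so if Azure OpenAI emits x-ratelimit-* headers for your api-version they can be read there. A sketch with a locally testable helper (the client call in the comment is illustrative; header availability on Azure may vary by api-version):

```python
def ratelimit_info(headers) -> dict:
    """Pick the x-ratelimit-* entries out of a response-header mapping."""
    return {k: v for k, v in headers.items()
            if k.lower().startswith("x-ratelimit-")}

# Usage sketch with the openai-python client (deployment name is a placeholder):
#   raw = client.chat.completions.with_raw_response.create(
#       model="my-deployment",
#       messages=[{"role": "user", "content": "hi"}])
#   print(ratelimit_info(raw.headers))

print(ratelimit_info({"x-ratelimit-remaining-requests": "99",
                      "Content-Type": "application/json"}))
```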
There are many examples of using Server-Sent Events directly. I spent days searching for an example that uses the @azure/openai package, but found nothing.
Hi,
I tried the Azure OpenAI Chat playground and gave it a CSV file as input, but that is not supported. I also converted the data into text, but the model is not able to understand it because the data is in numeric format. I also tried indexing, with the same issue: the model is not able to understand the given data.
Track issues found during testing for C# notebooks.
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
When I run azd up following the starting-from-scratch installation instructions at https://github.com/Azure-Samples/openai/blob/main/End_to_end_Solutions/AOAISearchDemo/README.md, it goes through the steps and then generates an error; see the section below. Nothing is generated in Azure. How can I review the logs to see if additional error information is available?
C:\VSCode\AOAISearchDemo\openai\End_to_end_Solutions\AOAISearchDemo>azd up
Executing prepackage hook => C:\Users\xxxxxx\AppData\Local\Temp\azd-prepackage-637729247.ps1
up to date, audited 123 packages in 2s
11 packages are looking for funding
run npm fund
for details
2 vulnerabilities (1 moderate, 1 high)
To address all issues, run:
npm audit fix
Run npm audit
for details.
[email protected] build
tsc && vite build
vite v4.2.2 building for production...
✓ 1247 modules transformed.
../backend/static/assets/github-fab00c2d.svg 0.96 kB
../backend/static/index.html 1.15 kB
../backend/static/assets/index-d82ced22.css 7.84 kB │ gzip: 2.23 kB
../backend/static/assets/index-be339f02.js 620.18 kB │ gzip: 203.99 kB │ map: 5,818.53 kB
(!) Some chunks are larger than 500 kBs after minification. Consider:
Packaging services (azd package)
(✓) Done: Packaging service backend
(✓) Done: Packaging service data
Error: accepts 2 arg(s), received 1
ERROR: failed running pre hooks: 'preprovision' hook failed with exit code: '1', Path: 'C:\Users\xxxxxx\AppData\Local\Temp\azd-preprovision-76052707.ps1'. : exit code: 1
A successful provision of the environment.
Windows 11
Hi,
For some reason I'm not able to clone the repo; I tried using the desktop app and the URL with no success. My username is gbissio, can you please help me?
Thank you!
Hi, I'm trying to run this demo, but unfortunately I'm getting the below-mentioned error:
"not enough values to unpack (expected 2, got 0)". It is due to there being no value in the "sentiment_aspects" variable.
Can anyone guide me regarding this?
Please note I'm following the exact guidelines provided in the readme file.
Thanks
I don't know if this is the correct place to report this, but I was trying to test Chat Completions since GPT-3.5 doesn't support completions. When you click View code, you are given this code:
Response<ChatCompletions> responseWithoutStream = await client.GetChatCompletionsAsync(
    "GPT35",
    new ChatCompletionsOptions()
    {
        Messages =
        {
            new ChatMessage(ChatRole.System, @"You are an AI assistant that helps people find information."),
        },
        Temperature = (float)0.7,
        MaxTokens = 800,
        NucleusSamplingFactor = (float)0.95,
        FrequencyPenalty = 0,
        PresencePenalty = 0,
    });
ChatCompletions response = responseWithoutStream.Value;
ChatMessage does not exist. I searched and found ChatRequestMessage; is that what this is supposed to be?
Thanks
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
I just noticed I have beta12 installed while the code expects beta5. My guess is the code should be updated, since the ChatMessage class did not make it into beta12.
View code should compile with the latest betas if you want people to be up to date with your current code base.
I have Windows 11. You should probably update your bug report template to list Microsoft's latest Windows offering.
As mentioned above, I have Azure.AI.OpenAI beta12 installed. This is probably different in beta5
I will try beta5 and see if that compiles. It is just a little frustrating to someone trying to learn this when the sample code doesn't compile.
I'm encountering an authorization error when attempting to prompt my chatbot locally on my system. Once I send a request or a question, the UI runs for about 10 seconds and returns an error. The error appears to be related to Azure Search service authorization.
- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Open Command Prompt.
Activate the Conda environment:
Navigate to the project directory: cd
Log in to Azure CLI: az login.
Select the appropriate subscription and tenant.
Run the python script: python .py
Authorization failed
Prompt the chatbot UI and get a text response as you would in ChatGPT. See the attached screenshot of the error message I get instead on the user interface.
OS: Windows 11
Environment Summary
Name: azure-core
Version: 1.30.2
Summary: Microsoft Azure Core Library for Python
Home-page: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/core/azure-core
Author: Microsoft Corporation
Author-email: [email protected]
License: MIT License
Location: c:\users\t-grbabalola\appdata\local\anaconda3\envs\chatosp\lib\site-packages
Requires: requests, six, typing-extensions
Required-by: azure-identity, azure-search-documents, msrest
- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
I am following the example below for deployment: https://github.com/Azure-Samples/openai/tree/main/End_to_end_Solutions/AOAISearchDemo. When I run the command azd up, I get the following error.
Windows 11
status message
{
"status": "Failed",
"error": {
"code": "InvalidTemplateDeployment",
"message": "The template deployment 'openai' is not valid according to the validation procedure. The tracking id is 'c858cf6f-91ce-47cb-86a3-3e39d171e2c3'. See inner errors for details.",
"details": [
{
"code": "InvalidResourceProperties",
"message": "The specified scale type 'Standard' of account deployment is not supported by the model 'gpt-4'."
}
]
}
}
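For readers hitting the same validation error: newer Microsoft.CognitiveServices deployment API versions express capacity through a sku block instead of a scaleSettings scale type. A hedged sketch of what the deployment resource might look like after such a change (account name, model version, and capacity are placeholders; verify against the template under the repo's infra/ folder):

```json
{
  "type": "Microsoft.CognitiveServices/accounts/deployments",
  "apiVersion": "2023-05-01",
  "name": "myaccount/gpt-4",
  "properties": {
    "model": { "format": "OpenAI", "name": "gpt-4", "version": "0613" }
  },
  "sku": { "name": "Standard", "capacity": 10 }
}
```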
After making the changes mentioned in the repo README, I run the following command on a MacBook:
azd up
I am getting the below error:
Executing prepackage hook => /var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1
/var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1: pwsh: command not found
ERROR: failed running pre hooks: 'prepackage' hook failed with exit code: '127', Path: '/var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1'. : exit code: 127
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
azd up on MacBook VSCode
It should deploy the repo
macOS (M2 chip)
/Basic_Samples/Dotnet
I forked this repository, and when I tried to clone it with GitHub Desktop I got an error because this file's path is too long. I tried downloading the zip file and got the same error, but I was able to skip the file when extracting the repo from the zip.
azure-samples-openai-main/End_to_end_Solutions/AOAISearchDemo/data/surface_device_documentation/Commercial service & repair/Service & repair options/Service & repair features/Surface Australia On-Site service and repair.pdf
- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Cloning into 'C:\azure-samples-openai\azure-samples-openai'...
remote: Enumerating objects: 2030, done.
remote: Counting objects: 100% (835/835), done.
remote: Compressing objects: 100% (360/360), done.
remote: Total 2030 (delta 474), reused 654 (delta 378), pack-reused 1195
Receiving objects: 100% (2030/2030), 195.65 MiB | 35.65 MiB/s, done.
Resolving deltas: 100% (851/851), done.
error: unable to create file End_to_end_Solutions/AOAISearchDemo/data/surface_device_documentation/Commercial service & repair/Service & repair options/Service & repair features/Surface Australia On-Site service and repair.pdf: Filename too long
Updating files: 100% (623/623), done.
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
Would you like to retry cloning ?
It should clone without a problem.
Windows 11 Pro
- [x] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
- pip install openai
- run the code below
import os

from dotenv import load_dotenv
from typing_extensions import override
from openai import AzureOpenAI, AssistantEventHandler

load_dotenv()
api_key = os.environ.get("AZURE_OPENAI_API_KEY")
api_version = os.environ.get("OPENAI_API_VERSION")
azure_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
assistant_id = os.environ.get("AZURE_OPENAI_ENDPOINT")

class EventHandler(AssistantEventHandler):
    @override
    def on_text_created(self, text) -> None:
        print("\nassistant > ", end="", flush=True)

    @override
    def on_text_delta(self, delta, snapshot):
        print(delta.value, end="", flush=True)

    def on_tool_call_created(self, tool_call):
        print(f"\nassistant > {tool_call.type}\n", flush=True)

    def on_tool_call_delta(self, delta, snapshot):
        if delta.type == 'code_interpreter':
            if delta.code_interpreter.input:
                print(delta.code_interpreter.input, end="", flush=True)
            if delta.code_interpreter.outputs:
                print("\n\noutput >", flush=True)
                for output in delta.code_interpreter.outputs:
                    if output.type == "logs":
                        print(f"\n{output.logs}", flush=True)

client = AzureOpenAI(api_key=api_key, api_version=api_version, azure_endpoint=azure_endpoint)
thread = client.beta.threads.create(messages=[])
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="here are some messages..."
)
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant_id,
    event_handler=EventHandler()
) as stream:
    stream.until_done()
Traceback (most recent call last):
File "/Users/admin/project/aoai-assistant-demo/main.py", line 47, in <module>
with client.beta.threads.runs.stream(
thread_id=thread.id,
assistant_id=assistant_id,
event_handler=EventHandler()
) as stream:
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/lib/streaming/_assistants.py", line 444, in __enter__
self.__stream = self.__api_request()
^^^^^^^^^^^^^^^^^^^^
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1213, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 902, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 993, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'stream'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
I expect the code to work fine.
macOS Sonoma 14.0
python 3.12
openai 1.16.2
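Two details in the snippet above look worth checking independently of the library (this is a reader-side sketch, not a confirmed fix): assistant_id is read from AZURE_OPENAI_ENDPOINT rather than an assistant-id variable, and older preview api-versions may reject the Assistants 'stream' parameter, which would match the 400 above. The AZURE_OPENAI_ASSISTANT_ID name and the version cutoff below are assumptions:

```python
def check_assistant_env(env: dict) -> list:
    """Return a list of likely configuration problems (heuristic checks)."""
    problems = []
    # The variable name AZURE_OPENAI_ASSISTANT_ID is hypothetical.
    if env.get("AZURE_OPENAI_ASSISTANT_ID", "").startswith("https://"):
        problems.append("assistant id looks like an endpoint URL")
    # Dated api-version strings compare correctly as plain strings;
    # the cutoff below is an assumption, check the Assistants docs.
    if env.get("OPENAI_API_VERSION", "") < "2024-05-01-preview":
        problems.append("api-version may predate Assistants streaming support")
    return problems

print(check_assistant_env({
    "AZURE_OPENAI_ASSISTANT_ID": "https://myres.openai.azure.com",
    "OPENAI_API_VERSION": "2024-02-15-preview",
}))
```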
I am facing an intermittent issue with the @azure/openai 1.0.0-beta.2 library on Node.js v18.15.0. While attempting to send messages to an OpenAI deployment, I sporadically encounter the ECONNRESET error. Despite multiple attempts, this problem persists. Below is my code snippet along with the error message:
for (let attempt = 0, keyIdx = 0; attempt < retries; attempt++, keyIdx++) {
const openai = await this.createAPI();
const messages = [
{
role: 'system',
content: prompt,
},
{ role: 'user', content: content },
];
try {
let res = '';
const result = await openai.getChatCompletions(
OPENAI_DEPLOYMENT_ID,
messages,
);
for (const choice of result.choices) {
console.log(choice.message.content);
res += choice.message.content;
}
return res;
} catch (error) {
if (attempt === retries - 1) {
console.log('Failed after maximum attempts', error);
throw error;
}
}
}
Failed after maximum attempts RestError: read ECONNRESET
{
"name": "RestError",
"code": "ECONNRESET",
"request": {
"url": "https://[xxx].openai.azure.com/openai/deployments/[openai-model]/chat/completions?api-version=2023-08-01-preview",
"headers": {
// Request headers
},
"method": "POST",
"timeout": 0,
"disableKeepAlive": false,
// ...
},
"message": "read ECONNRESET"
}
Expected Resolution: I am seeking assistance in resolving the intermittent ECONNRESET error, which is impeding my ability to consistently send messages to the OpenAI deployment. I need help identifying the root cause of this issue and obtaining a solution.
Additional Information:
I am using Node.js v18.15.0.
The version of the @azure/openai library I am using is 1.0.0-beta.2.
Thank you for your support!
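The retry loop above retries immediately, which tends not to help with transient connection resets. A generic jittered exponential-backoff sketch (shown in Python for illustration; the wrapped call is a placeholder for the getChatCompletions invocation):

```python
import random
import time

def with_backoff(call, retries: int = 5, base: float = 0.5):
    """Retry `call` on any exception, sleeping base * 2**attempt (+ jitter)."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base * (2 ** attempt) * (1.0 + random.random()))

# Usage sketch: with_backoff(lambda: client.getChatCompletions(dep, messages))
```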
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Windows 11 & macOS
1.0.0-beta.2
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
On a JavaScript application, install the package @azure/openai with npm install @azure/openai. This installs version 1.0.0-beta.5.
Within the script, include:
const { OpenAIClient, AzureKeyCredential } = require("@azure/openai");
const client = new OpenAIClient(CHATGPT_EP, new AzureKeyCredential(CHATGPT_KEY));
const deploymentId = "DEPLOYMENT";
const data = await client.getChatCompletions(deploymentId, CHAT_LIST);
console.log(data);
I expected the response to include a field "usage" (with all the token consumption details) as presented in the example response in:
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#example-response-2
Windows Chrome
Package version is v1.0.0-beta.5 (azure/openai on npm)
I tried asking in the Microsoft Learn forum but was asked to raise an issue on the GitHub repo; I'm not even sure this is the right place.
However, I hope to find out whether it is possible to get those usage details included in the response without manually adding another tokenization package locally.
Appreciate your assistance.
### Minimal steps to reproduce
> run azd init -t AOAISearchDemo
### Any log messages given by the failure
> ERROR: init from template repository: fetching template: failed to clone repository https://github.com/Azure-Samples/AOAISearchDemo: exit code: 128, stdout: , stderr: Cloning into 'C:\Users\xxxxx\AppData\Local\Temp\az-dev-template2468296xxx'...
remote: Repository not found.
fatal: repository 'https://github.com/Azure-Samples/AOAISearchDemo/' not found
### Expected/desired behavior
> The template should clone the repo and deploy resources successfully.
### OS and Version?
> Windows 11
### Versions
>
### Mention any other details that might be useful
> ---------------------------------------------------------------
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
When deploying the End_to_end_Solutions/AOAISearchDemo application, I ran into the issue below when I ran the azd up command after following the starting-from-scratch steps: https://github.com/Azure-Samples/openai/tree/main/End_to_end_Solutions/AOAISearchDemo#starting-from-scratch
The template deployment 'openai' is not valid according to the validation procedure.
The specified scale type 'Standard' of account deployment is not supported by the model
I tried with both gpt35turbo and gpt4. Please let me know how to fix this deployment error.
Every resource except openai got deployed successfully
The application should get deployed successfully on Azure infrastructure. Once the app is up, I should be able to query the application.
Windows 10
- [x] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
- pip install openai
- run the code below
import os

from dotenv import load_dotenv
from typing_extensions import override
from openai import AzureOpenAI, AssistantEventHandler

load_dotenv()
api_key = os.environ.get("AZURE_OPENAI_API_KEY")
api_version = os.environ.get("OPENAI_API_VERSION")
azure_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
assistant_id = os.environ.get("AZURE_OPENAI_ENDPOINT")

class EventHandler(AssistantEventHandler):
    @override
    def on_text_created(self, text) -> None:
        print("\nassistant > ", end="", flush=True)

    @override
    def on_text_delta(self, delta, snapshot):
        print(delta.value, end="", flush=True)

    def on_tool_call_created(self, tool_call):
        print(f"\nassistant > {tool_call.type}\n", flush=True)

    def on_tool_call_delta(self, delta, snapshot):
        if delta.type == 'code_interpreter':
            if delta.code_interpreter.input:
                print(delta.code_interpreter.input, end="", flush=True)
            if delta.code_interpreter.outputs:
                print("\n\noutput >", flush=True)
                for output in delta.code_interpreter.outputs:
                    if output.type == "logs":
                        print(f"\n{output.logs}", flush=True)

client = AzureOpenAI(api_key=api_key, api_version=api_version, azure_endpoint=azure_endpoint)
thread = client.beta.threads.create(messages=[])
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="here are some messages..."
)
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant_id,
    event_handler=EventHandler()
) as stream:
    stream.until_done()
Traceback (most recent call last):
File "/Users/admin/project/aoai-assistant-demo/main.py", line 47, in <module>
with client.beta.threads.runs.stream(
thread_id=thread.id,
assistant_id=assistant_id,
event_handler=EventHandler()
) as stream:
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/lib/streaming/_assistants.py", line 444, in __enter__
self.__stream = self.__api_request()
^^^^^^^^^^^^^^^^^^^^
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1213, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 902, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 993, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'stream'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
Expected the code to work fine.
macOS Sonoma 14.0
python 3.12
openai 1.16.2
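A likely cause (not confirmed in this thread) is the configured OPENAI_API_VERSION: Azure rejects request parameters that the selected api-version predates, which surfaces exactly as the 400 "Unknown parameter: 'stream'" above. A minimal sketch of such a version check, assuming api-version strings always start with a YYYY-MM-DD date; the threshold used below is illustrative, not the documented one:

```python
from datetime import date

def api_version_at_least(api_version: str, minimum: str) -> bool:
    """Compare Azure OpenAI api-version strings (e.g. '2024-05-01-preview')
    by their leading YYYY-MM-DD date component."""
    def to_date(v: str) -> date:
        year, month, day = v.split("-")[:3]
        return date(int(year), int(month), int(day))
    return to_date(api_version) >= to_date(minimum)

# Illustrative threshold only -- check the Azure OpenAI docs for the
# api-version that actually introduced Assistants streaming.
print(api_version_at_least("2023-07-01-preview", "2024-02-15-preview"))  # False
print(api_version_at_least("2024-05-01-preview", "2024-02-15-preview"))  # True
```

If the check fails, bump the OPENAI_API_VERSION environment variable rather than changing the code.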
Please provide us with the following information:
- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
git clone
cd openai/End_to_end_Solutions/AOAISearchDemo
azd auth login --tenant-id
azd up
Running "populate_sql.py"
Connecting to SQL Server
Traceback (most recent call last):
File ".\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\prepopulate\populate_sql.py", line 15, in <module>
cnxn = pyodbc.connect(args.sql_connection_string)
pyodbc.InterfaceError: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')
The azd up command finishes successfully. The documentation is missing a prerequisite for the latest ODBC driver: I had ODBC Driver 17 installed, but the SQL connection string was expecting ODBC Driver 18.
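Until the documentation lists the driver prerequisite, the Driver= segment of the connection string can be aligned with whatever is actually installed (pyodbc.drivers() returns the installed ODBC drivers). A small sketch, assuming the usual Driver={...};Key=Value connection-string shape; set_odbc_driver is a hypothetical helper, not part of the repo:

```python
def set_odbc_driver(conn_str: str, driver: str) -> str:
    """Replace (or insert) the Driver=... segment of an ODBC connection string,
    keeping every other Key=Value segment untouched."""
    parts = [p for p in conn_str.split(";")
             if p and not p.lower().startswith("driver=")]
    return ";".join([f"Driver={{{driver}}}"] + parts)

conn = "Driver={ODBC Driver 18 for SQL Server};Server=tcp:example.database.windows.net;Database=demo"
# Downgrade to the driver that is actually installed locally.
print(set_odbc_driver(conn, "ODBC Driver 17 for SQL Server"))
```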
Windows 11
Please provide us with the following information:
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langchain_openai import AzureChatOpenAI

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

@tool
def add(first_int: int, second_int: int) -> int:
    """Add two integers."""
    return first_int + second_int

@tool
def exponentiate(base: int, exponent: int) -> int:
    """Exponentiate the base to the exponent power."""
    return base ** exponent

llm = AzureChatOpenAI(**config)  # config: Azure endpoint/deployment settings (placeholder in the original report)
tools = [multiply, exponentiate, add]
llm_with_tools = llm.bind_tools(tools)
tool_map = {tool.name: tool for tool in tools}

def call_tools(msg: AIMessage) -> list:
    """Invoke every tool call returned by the model and attach the outputs."""
    tool_calls = msg.tool_calls.copy()
    for tool_call in tool_calls:
        tool_call["output"] = tool_map[tool_call["name"]].invoke(tool_call["args"])
    return tool_calls

chain = llm_with_tools | call_tools
input_text = "What's 23 times 7, and what's five times 18 and add a million plus a billion and cube thirty-seven"
result = chain.invoke(input_text)
Expected: multiple tools called in a single turn (parallel tool calling).
Linux
I was wondering whether Azure supports parallel tool calling through LangChain's AzureChatOpenAI. Even when specifying the model name, chat completions always use gpt-4-32k for some reason. When I use ChatOpenAI directly, parallel tool calling works.
Anyone having this issue?
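For what it's worth, the dispatch side of the chain above is model-agnostic: stripped of LangChain, it is just a loop over the tool calls returned in one assistant turn. A plain-Python sketch (the call payloads are made up to mirror the example query, and whether the model emits one call or several per turn depends on the deployed model version, which on Azure is selected by the deployment rather than the model name):

```python
def multiply(first_int: int, second_int: int) -> int:
    return first_int * second_int

def add(first_int: int, second_int: int) -> int:
    return first_int + second_int

def exponentiate(base: int, exponent: int) -> int:
    return base ** exponent

tool_map = {"multiply": multiply, "add": add, "exponentiate": exponentiate}

def call_tools(tool_calls: list) -> list:
    """Execute every tool call from a single model turn (parallel calling)."""
    for call in tool_calls:
        call["output"] = tool_map[call["name"]](**call["args"])
    return tool_calls

# What a single turn with parallel tool calls might carry:
turn = [
    {"name": "multiply", "args": {"first_int": 23, "second_int": 7}},
    {"name": "add", "args": {"first_int": 1_000_000, "second_int": 1_000_000_000}},
    {"name": "exponentiate", "args": {"base": 37, "exponent": 3}},
]
for call in call_tools(turn):
    print(call["name"], "->", call["output"])
```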
Please provide us with the following information:
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
{
  "code": "InvalidTemplate",
  "message": "Deployment template validation failed: 'The template resource 'cog-634zta4uccph4/' for type 'Microsoft.CognitiveServices/accounts/deployments' at line '1' and column '1710' has incorrect segment lengths. A nested resource type must have identical number of segments as its resource name. A root resource type must have segment length one greater than its resource name. Please see https://aka.ms/arm-syntax-resources for usage details.'.",
  "additionalInfo": [
    {
      "type": "TemplateViolation",
      "info": { "lineNumber": 1, "linePosition": 1710, "path": "properties.template.resources[1].type" }
    }
  ]
}
Windows 11 with VS Code
Please provide us with the following information:
- [x] bug report -> please search issues before submitting (the metadata attribute is not available in Azure OpenAI)
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
pip install openai==1.14.1, then call the metadata attribute on the AzureOpenAI object.
AttributeError: type object 'AzureOpenAI' has no attribute 'metadata'
Couldn't load the metadata attribute in Azure OpenAI.
Please provide us with the following information:
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Hello, for the time being it seems that the model can determine which function to call, but it appears to be limited to only one function per response, unless I missed something. In a scenario where I split the calculator into four distinct functions (add, divide, subtract, multiply) and input the following query:
Calculate the total of 10/2 multiplied by 3
the model will determine that divide must be called, but it will not understand that multiply must also be called. Is this something that will be available? Expected response:
{
  "role": "assistant",
  "function_calls": [
    { "name": "divide", "arguments": "{\n\"num1\": 10,\n\"num2\": 2\n}" },
    { "name": "multiply", "arguments": "{\n\"num1\": 5,\n\"num2\": 3\n}" }
  ]
}
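Until parallel calls are available, the usual workaround is a loop: execute the single function the model picked, append the result as a function message, and call the model again, so dependent calls happen across turns. A minimal sketch with a scripted stand-in for the model (fake_model and the message shapes are illustrative, not the real API):

```python
def divide(num1: float, num2: float) -> float:
    return num1 / num2

def multiply(num1: float, num2: float) -> float:
    return num1 * num2

functions = {"divide": divide, "multiply": multiply}

def run(messages: list, functions: dict, model) -> str:
    """Loop: execute one function call per model turn until a final answer."""
    while True:
        reply = model(messages)
        if "function_call" not in reply:
            return reply["content"]
        call = reply["function_call"]
        result = functions[call["name"]](**call["arguments"])
        # Feed the result back so the next turn can build on it.
        messages.append({"role": "function", "name": call["name"], "content": str(result)})

# Scripted replies standing in for the model: divide first, then multiply
# with the previous result, then a final answer.
scripted = [
    {"function_call": {"name": "divide", "arguments": {"num1": 10, "num2": 2}}},
    {"function_call": {"name": "multiply", "arguments": {"num1": 5, "num2": 3}}},
    {"content": "10/2 is 5, and 5 times 3 is 15."},
]

def fake_model(messages):
    return scripted.pop(0)

messages = [{"role": "user", "content": "Calculate the total of 10/2 multiplied by 3"}]
answer = run(messages, functions, fake_model)
print(answer)  # 10/2 is 5, and 5 times 3 is 15.
```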
Please provide us with the following information:
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
git clone
cd openai/End_to_end_Solutions/AOAISearchDemo
azd auth login --tenant-id
azd up
Running "prepopulate.py"
usage: prepopulate.py [-h] [--entities_path ENTITIES_PATH] [--permissions_path PERMISSIONS_PATH] [--cosmos_db_endpoint COSMOS_DB_ENDPOINT]
[--cosmos_db_key COSMOS_DB_KEY] [--cosmos_db_name COSMOS_DB_NAME]
[--cosmos_db_entities_container_name COSMOS_DB_ENTITIES_CONTAINER_NAME]
[--cosmos_db_permissions_container_name COSMOS_DB_PERMISSIONS_CONTAINER_NAME]
prepopulate.py: error: argument --cosmos_db_endpoint: expected one argument
The azd up command finishes successfully. The Azure Cosmos DB endpoint secret is not created as a Key Vault secret. See file .\End_to_end_Solutions\AOAISearchDemo\infra\core\database\cosmos-database.bicep
Missing piece:
module azureCosmosKeySecret '../keyvault/keyvault_secret.bicep' = if (addKeysToVault) {
  name: 'AZURE-COSMOS-ENDPOINT'
  params: {
    keyVaultName: keyVaultName
    secretName: 'AZURE-COSMOS-ENDPOINT'
    secretValue: account.properties.documentEndpoint
  }
}
Windows 11