microsoft / chat-copilot
License: MIT License
There are important files that Microsoft projects should all have that are not present in this repository. A pull request has been opened to add the missing file(s). When the PR is merged, this issue will be closed automatically.
Microsoft teams can learn more about this effort and share feedback within the open source guidance available internally.
Exposing DocumentLineSplitMaxTokens and DocumentParagraphSplitMaxLines for override in the API and app provides customers with an avenue to tune these parameters on a per-document basis.
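A minimal sketch of how the webapp could merge per-document overrides with server defaults before calling the import API. The names (`ImportOptions`, `resolveImportOptions`), the default values, and the clamp ranges are illustrative assumptions, not the actual chat-copilot API surface.

```typescript
// Hypothetical per-document override shape; field names are assumptions.
interface ImportOptions {
  documentLineSplitMaxTokens?: number;
  documentParagraphSplitMaxLines?: number;
}

// Illustrative defaults, not necessarily the service's shipped values.
const DEFAULTS = {
  documentLineSplitMaxTokens: 30,
  documentParagraphSplitMaxLines: 100,
};

// Merge overrides with defaults, clamping to a sane range so a single
// request cannot pick pathological values for the chunker.
function resolveImportOptions(overrides: ImportOptions = {}): Required<ImportOptions> {
  const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));
  return {
    documentLineSplitMaxTokens: clamp(
      overrides.documentLineSplitMaxTokens ?? DEFAULTS.documentLineSplitMaxTokens, 1, 2048),
    documentParagraphSplitMaxLines: clamp(
      overrides.documentParagraphSplitMaxLines ?? DEFAULTS.documentParagraphSplitMaxLines, 1, 1000),
  };
}
```

Validating on the client keeps bad values out of the request; the webapi would still want to enforce the same bounds server-side.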
Describe the bug
After deploying the package to Azure, the api/serviceOptions endpoint returns a 404. It works when I run it from local files, but the package deployed to Azure gives that error.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Expect a 200 response, as when it is run locally.
Platform
Additional context
Add any other context about the problem here.
If multiple docs are uploaded, import in parallel
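The idea above can be sketched as a small helper that starts every upload at once instead of awaiting them one by one. `uploadOne` is a stand-in for the real per-document import call, which is not shown here.

```typescript
// Upload all selected documents concurrently instead of sequentially.
// `uploadOne` is hypothetical: it represents whatever call performs one import.
async function importDocuments<T>(
  docs: readonly T[],
  uploadOne: (doc: T) => Promise<string>,
): Promise<string[]> {
  // Promise.all starts every upload immediately and resolves when all finish;
  // a single rejection rejects the whole batch.
  return Promise.all(docs.map((doc) => uploadOne(doc)));
}
```

If per-document failure reporting is preferred over all-or-nothing behavior, `Promise.allSettled` would let the UI show which uploads failed without aborting the batch.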
Describe the bug
Clicking "Deploy to Azure" fails in the deploy section; screenshot below.
There was an error downloading the template from URI 'https://raw.githubusercontent.com/microsoft/copilot-chat/main/deploy/main.json'. Ensure that the template is publicly accessible and that the publisher has enabled CORS policy on the endpoint. To deploy this template, download the template manually and paste the contents in the 'Build your own template in the editor' option below.
Groups are identified by the chat ID.
Retrieve the user ID from the claims and call this method from the OnConnectedAsync method instead of the frontend.
After importing a PDF document and asking questions about it, the bot has no knowledge of the content in the PDF. Is this still under development? I am using Qdrant for embeddings, and no errors appear in the console.
Source Repo: https://github.com/microsoft/semantic-memory
Looking at:
The deployed web API service has "Always On" turned on, which causes the front-end load balancer to send a request to the application root every 5 minutes: https://learn.microsoft.com/en-us/azure/app-service/configure-common?tabs=portal.
This can be seen from Application Insights:
We probably don't want to disable "Always On". We can map the application root to the health check endpoint or come up with a better solution.
If the user made edits to the plan (i.e., edited inputs or removed a step), regenerate the user intent.
When setting up the app for the first time with AAD authentication, a new AAD application is required. This needs to be documented, including the required settings.
The setup instructions should document all necessary steps to get the application running.
While running ./Configure.sh in my terminal with values for each element, the output keeps printing the following even when I enter a value for --aiservice:
Please specify an AI service (AzureOpenAI or OpenAI) for --aiservice.
To Reproduce
For macOS:
Download the Chat Copilot repo
Run ./Install-brew.sh
Run ./Configure.sh --aiservice {AzureOpenAI} --apikey {XXXXXXXXXXXXXXX} --endpoint {https://xxxx.openai.azure.com/} --clientid {XXXX}
Output: Please specify an AI service (AzureOpenAI or OpenAI) for --aiservice.
Expected behavior
It should either accept the config or throw an error, but it always says "Please specify an AI service (AzureOpenAI or OpenAI) for --aiservice." even when one is specified.
Platform
Additional context
The app still launches but gives an invalid URL error. That might be a different bug, but this is what my server says:
In the Documents tab I upload a PDF file and it shows in the list.
But on the Chat tab, there is a message from the bot with the file name.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
The file should be in the list and the chat should suggest what to do with it.
I should be able to ask about the file. No?
Screenshots
If applicable, add screenshots to help explain your problem.
Platform
Help fixing it please
Thank you
After adjusting the Authorization section of appsettings.json for Copilot Chat to use AzureAd I receive a 401 Unauthorized error with the following www-authenticate response header:
Bearer error="invalid_token", error_description="The signature is invalid"
In the Custom Plugin flow on the webapp, we need to add form validation to verify the OpenAPI spec once it's parsed from the manifest file.
I'm thinking we expose an API from the webapi that calls the OpenAPI parser to validate the form, and then call that from the webapp.
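A hand-rolled sketch of the kind of check that endpoint could perform after parsing the manifest: verify the fields an OpenAPI v3 document must carry. This is not the real OpenAPI parser, just an illustration of the validation shape.

```typescript
interface ValidationResult {
  valid: boolean;
  errors: string[];
}

// Minimal structural validation of a parsed OpenAPI v3 document.
// A real implementation would delegate to a full OpenAPI parser.
function validateOpenApiSpec(spec: unknown): ValidationResult {
  if (spec === null || typeof spec !== "object") {
    return { valid: false, errors: ["spec is not an object"] };
  }
  const doc = spec as Record<string, unknown>;
  const errors: string[] = [];
  if (typeof doc.openapi !== "string" || !doc.openapi.startsWith("3.")) {
    errors.push("missing or unsupported 'openapi' version");
  }
  const info = doc.info as Record<string, unknown> | undefined;
  if (!info || typeof info.title !== "string") {
    errors.push("missing 'info.title'");
  }
  if (typeof doc.paths !== "object" || doc.paths === null) {
    errors.push("missing 'paths' object");
  }
  return { valid: errors.length === 0, errors };
}
```

Returning the error list rather than a boolean lets the webapp surface each problem next to the manifest field that caused it.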
Currently the Deploy to Azure buttons use an ARM template that defaults the memories selection to "Volatile" but does allow the user to change this to Qdrant or ACS. The problem is that the webapi package that is deployed is configured to use volatile memories and so changing this in the ARM deployment results in a resource that is deployed but is not being used.
One option to fix this is to publish webapi artifacts with all configurations in the pipeline, and then use the memories selection in the ARM template to pick the correct one.
Need to re-evaluate our ExternalInformationSkill to better support SequentialPlanner inputs and outputs
Currently the provided devcontainer does not support .NET 6, which is required for samples/apps/copilot-chat-app. When you try to install the dotnet SDK following the instructions, you will face an error. This can block adoption for first-time users who want to develop via GitHub Codespaces or the VS Code devcontainer.
E: Conflicting values set for option Signed-By regarding source https://packages.microsoft.com/ubuntu/20.04/prod/ focal: /usr/share/keyrings/microsoft-archive-keyring.gpg != E: The list of sources could not be read.
I haven't been able to resolve it yet, but I will open a PR if I do.
The code mixes strings and Guids without being explicit about the serialization format.
Describe the bug
Plugins are not saved when you refresh the page
To Reproduce
Steps to reproduce the behavior:
Currently the ServiceOptionsController has separate APIs for each option. It'd be nice to have one API that returns all the options, reducing the number of requests required.
Note: the current implementation only exposes the memory store type, because that is the only option requested to show in the webapp. In the future, if we want to show more options, we should combine them into a single API with a single response model.
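A sketch of what that single combined response model might look like on the webapp side. The type and function names here are illustrative assumptions, not the actual ServiceOptionsController contract.

```typescript
// One GET returns every service option instead of one endpoint per option.
interface ServiceOptionsResponse {
  memoryStore: { types: string[]; selectedType: string };
  // Future options would be added as further properties on this model,
  // keeping the request count at one.
}

// Hypothetical server-side assembly of the combined response.
function buildServiceOptionsResponse(
  availableStores: string[],
  selected: string,
): ServiceOptionsResponse {
  return { memoryStore: { types: availableStores, selectedType: selected } };
}
```

Grouping options under named sub-objects (rather than a flat bag of fields) keeps the response backward-compatible as new option categories are added.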
Section "(Optional) Enable backend authorization via Azure AD" of readme says "Ensure you created the required application registration mentioned in Start the WebApp FrontEnd application" and links to the #start-the-webapp-frontend-application anchor which doesn't exist.
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
The readme should also be updated to cover single-tenant deployment as an option.
Describe the bug
When enabling my custom plugin through Chat-Copilot and running it, the plugin doesn't provide up-to-date results. Specifically, I've created a custom plugin named "Grocery App Plugin," allowing users to add, list, and remove items. While using Swagger to add items to the list, I can retrieve the complete list of items. However, when running the same plugin through Chat-Copilot, it seems to fetch outdated data, appearing as if the planner is providing historical details. I'm utilizing the "Action"/"Sequential" Planner with the gpt-35-turbo model.
To Reproduce
Steps to reproduce the behavior:
1. Enable the custom plugin "Grocery App Plugin".
2. List the grocery items using "Grocery App Plugin".
3. Attempt to add an item using the plugin or Swagger.
4. Run the plugin to list the items.
5. Compare the listed items with the actual up-to-date list.
Expected behavior
I expected the plugin to fetch and display the most recent data when using Chat-Copilot, similar to the behavior when interacting with Swagger directly. This would ensure that users receive accurate and current information from the plugin.
Screenshots
Please find the screenshots of the outdated results from Chat-Copilot and the expected up-to-date results from Swagger.
Platform
Additional context
I noticed that this issue is specific to Chat-Copilot's interaction with my custom plugin. Swagger integration seems to work correctly and provides real-time updates. This context could potentially help in identifying whether the problem lies within my plugin's implementation or Chat-Copilot's integration with both "Action"/ "Sequential" Planner along with the gpt-35-turbo model.
Describe the bug
Hi, I'm getting the following error in the Chat Copilot app with volatile memory, on line 81 of the webapi\CopilotChat\Skills\ChatSkills\DocumentMemorySkill.cs file.
The inner exception is:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Retrieve similar records from volatile memory.
Platform
Additional context
InnerException {"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.\r\nStatus: 404 (Not Found)\r\nErrorCode: DeploymentNotFound\r\n\r\nContent:\r\n{"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}\r\n\r\nHeaders:\r\nOpenAI-Processing-Ms: REDACTED\r\nx-ms-client-request-id: 1652580f-151c-46c0-a75b-bc71e7b8a4b5\r\napim-request-id: REDACTED\r\nStrict-Transport-Security: REDACTED\r\nX-Content-Type-Options: REDACTED\r\nx-ms-region: REDACTED\r\nDate: Tue, 08 Aug 2023 09:19:46 GMT\r\nContent-Length: 198\r\nContent-Type: application/json\r\n"} System.Exception {Azure.RequestFailedException}
Stack trace:
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.d__251.MoveNext()
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.<InternalGetEmbeddingsAsync>d__14.MoveNext()
at Microsoft.SemanticKernel.AI.Embeddings.EmbeddingGenerationExtensions.<GenerateEmbeddingAsync>d__0`2.MoveNext()
at Microsoft.SemanticKernel.Memory.SemanticTextMemory.d__7.MoveNext()
at Microsoft.SemanticKernel.Memory.SemanticTextMemory.d__7.System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult(Int16 token)
at SemanticKernel.Service.CopilotChat.Skills.ChatSkills.DocumentMemorySkill.d__4.MoveNext() in C:\Work\chat-copilot-main\webapi\CopilotChat\Skills\ChatSkills\DocumentMemorySkill.cs:line 81
We will create diagrams explaining how authentication works within copilot chat. These diagrams will be shared with the community via docs and used internally to ramp new engineers onto the project and to perform a Microsoft security review.
Hi Everyone,
Love what you've done with the chat-copilot web app. I am going to use this internally for testing and would like to turn off Azure Active Directory authentication, as using Microsoft accounts is overkill and unnecessary.
I see that the webapp/.env.example file has some configuration options that are used to define two users who can log in without a Microsoft account. It looks like the unit tests leverage these to bypass the Microsoft login.
REACT_APP_TEST_USER_ACCOUNT1=
REACT_APP_TEST_USER_PASSWORD1=
REACT_APP_TEST_USER_ACCOUNT2=
REACT_APP_TEST_USER_PASSWORD2=
How can I enable these hardcoded accounts to be the only authorized users who can sign in while also disabling the Microsoft authentication altogether?
Thanks for your help in advance.
The Bicep defaults to deploying the Static Web App to westus2, and neither deploy-azure.sh nor deploy-azure.ps1 passes this parameter in during deployment. Therefore, it will always deploy to westus2 regardless.
Possible solutions:
- Default location if not specified.
- Update deploy-azure.ps1 and deploy-azure.sh to allow setting this value.
I'm happy to submit a PR to resolve this if one of the options is preferable. It is a minor issue, but I prefer all my resources in the same region if possible.
Much like we're able to show the full context when a stepwise planner returns some result, we should show at least the prompt template that was used in each semantic dependency of ChatSkill
See: #149
Create a plan model to be parsed from JSON in the webapp to avoid unsafe assignment.
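A sketch of what such a model could look like: a typed plan shape plus a runtime type guard, so the webapp narrows parsed JSON instead of using an unsafe `as` assignment. The field names below are illustrative, not the actual planner wire format.

```typescript
// Hypothetical plan shapes; real field names may differ.
interface PlanStep {
  skill: string;
  function: string;
  parameters: Record<string, string>;
}

interface PlanModel {
  description: string;
  steps: PlanStep[];
}

// User-defined type guard: validates the structure at runtime and narrows
// the static type from `unknown` to `PlanModel` when it passes.
function isPlanModel(value: unknown): value is PlanModel {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.description === "string" &&
    Array.isArray(v.steps) &&
    v.steps.every(
      (s) =>
        typeof s === "object" && s !== null &&
        typeof (s as PlanStep).skill === "string" &&
        typeof (s as PlanStep).function === "string",
    )
  );
}

// Parsing then returns a safely-typed plan or undefined, never a blind cast.
function parsePlan(json: string): PlanModel | undefined {
  const parsed: unknown = JSON.parse(json);
  return isPlanModel(parsed) ? parsed : undefined;
}
```

This keeps malformed planner responses from silently flowing into the UI as the wrong shape.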
Current Memory store options are:
While deploying to Azure, the only PaaS available directly on Azure is Azure Cognitive Search.
However, this is limited to 50 indexes, which might limit usage when adding more users to the chat.
I'd like to add CosmosDB as a vector database (using the PostgreSQL connector).
Action Planner intermittently returns a 400 error
In scenario test:
If a plugin has been enabled, the action planner is invoked to perform the evaluation. This leads to a weird JSON exception and a crash. To work around this, I'm performing the evaluation after disabling the plugin.
Describe the bug
A clear and concise description of what the bug is.
I am trying to run the project with my Azure OpenAI credentials. However, it leads to the error:
Error: Invalid request: The request is not valid, HTTP status: 404. Details: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
I have already created the deployment on Azure OpenAI; it is up and running.
To Reproduce
Steps to reproduce the behavior:
Run dotnet run and yarn start
Expected behavior
A clear and concise description of what you expected to happen.
The chatbot should return a response.
Screenshots
If applicable, add screenshots to help explain your problem.
This is the error trace
Microsoft.AspNetCore.Server.Kestrel[0]
Overriding address(es) 'http://localhost:37240'. Binding to endpoints defined via IConfiguration and/or UseKestrel() instead.
info: Microsoft.Hosting.Lifetime[14]
Now listening on: https://localhost:40443
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /Users/ekdnam/Projects/copilot-chat-app/webapi
info: SemanticKernel.Service.Program[0]
Health probe: https://localhost:40443/healthz
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET https://localhost:40443/messageRelayHub - -
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET https://localhost:40443/healthz - -
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET https://localhost:40443/healthz - - - 200 - text/plain 13.6845ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET https://localhost:40443/messageRelayHub - -
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET https://localhost:40443/messageRelayHub - - - 101 - - 714.8757ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 OPTIONS https://localhost:40443/chatSession/getAllChats/4cd0f668-8aa0-46f8-99f5-bbd098cd0494.b5a078fe-4334-4ee6-9122-352bf41674ec - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 OPTIONS https://localhost:40443/chatSession/getAllChats/4cd0f668-8aa0-46f8-99f5-bbd098cd0494.b5a078fe-4334-4ee6-9122-352bf41674ec - - - 204 - - 1.7864ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET https://localhost:40443/chatSession/getAllChats/4cd0f668-8aa0-46f8-99f5-bbd098cd0494.b5a078fe-4334-4ee6-9122-352bf41674ec application/json -
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET https://localhost:40443/chatSession/getAllChats/4cd0f668-8aa0-46f8-99f5-bbd098cd0494.b5a078fe-4334-4ee6-9122-352bf41674ec application/json - - 200 - application/json;+charset=utf-8 39.2631ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 OPTIONS https://localhost:40443/chatSession/create - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 OPTIONS https://localhost:40443/chatSession/create - - - 204 - - 0.6206ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 POST https://localhost:40443/chatSession/create application/json 127
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 POST https://localhost:40443/chatSession/create application/json 127 - 201 - application/json;+charset=utf-8 38.1543ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 OPTIONS https://localhost:40443/chatSession/getChatMessages/73d94922-f2be-43a8-b3ed-769fc4f96add?startIdx=0&count=1 - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 OPTIONS https://localhost:40443/chatSession/getChatMessages/73d94922-f2be-43a8-b3ed-769fc4f96add?startIdx=0&count=1 - - - 204 - - 0.2834ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET https://localhost:40443/chatSession/getChatMessages/73d94922-f2be-43a8-b3ed-769fc4f96add?startIdx=0&count=1 application/json -
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET https://localhost:40443/chatSession/getChatMessages/73d94922-f2be-43a8-b3ed-769fc4f96add?startIdx=0&count=1 application/json - - 200 - application/json;+charset=utf-8 19.2040ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 OPTIONS https://localhost:40443/speechToken - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 OPTIONS https://localhost:40443/speechToken - - - 204 - - 0.1704ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 OPTIONS https://localhost:40443/speechToken - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 OPTIONS https://localhost:40443/speechToken - - - 204 - - 0.0550ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET https://localhost:40443/speechToken application/json -
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET https://localhost:40443/speechToken application/json - - 200 - application/json;+charset=utf-8 5.7146ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET https://localhost:40443/speechToken application/json -
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET https://localhost:40443/speechToken application/json - - 200 - application/json;+charset=utf-8 0.2466ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 OPTIONS https://localhost:40443/chat - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 OPTIONS https://localhost:40443/chat - - - 204 - - 0.4357ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 POST https://localhost:40443/chat application/json 270
info: SemanticKernel.Service.Auth.PassThroughAuthenticationHandler[0]
Allowing request to pass through
fail: Microsoft.SemanticKernel.IKernel[0]
Something went wrong while rendering the semantic function or while executing the text completion. Function: ChatSkill.funcf7c0625a835e4d588e88223f37c30a42. Error: Invalid request: The request is not valid, HTTP status: 404. Details: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
Status: 404 (Not Found)
ErrorCode: DeploymentNotFound
Content:
{"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}
Headers:
OpenAI-Processing-Ms: REDACTED
x-ms-client-request-id: 71fb1aa3-6d1b-411c-89bc-34de587346be
apim-request-id: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
x-ms-region: REDACTED
Date: Mon, 07 Aug 2023 05:07:51 GMT
Content-Length: 198
Content-Type: application/json
Microsoft.SemanticKernel.AI.AIException: Invalid request: The request is not valid, HTTP status: 404
---> Azure.RequestFailedException: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
Status: 404 (Not Found)
ErrorCode: DeploymentNotFound
Content:
{"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}
Headers:
OpenAI-Processing-Ms: REDACTED
x-ms-client-request-id: 71fb1aa3-6d1b-411c-89bc-34de587346be
apim-request-id: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
x-ms-region: REDACTED
Date: Mon, 07 Aug 2023 05:07:51 GMT
Content-Length: 198
Content-Type: application/json
at Azure.Core.HttpPipelineExtensions.ProcessMessageAsync(HttpPipeline pipeline, HttpMessage message, RequestContext requestContext, CancellationToken cancellationToken)
at Azure.AI.OpenAI.OpenAIClient.GetChatCompletionsAsync(String deploymentOrModelName, ChatCompletionsOptions chatCompletionsOptions, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.RunRequestAsync[T](Func`1 request)
--- End of inner exception stack trace ---
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.RunRequestAsync[T](Func`1 request)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.InternalGetChatResultsAsync(ChatHistory chat, ChatRequestSettings chatSettings, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.InternalGetChatResultsAsTextAsync(String text, CompleteRequestSettings textSettings, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.SkillDefinition.SKFunction.<>c__DisplayClass19_0.<<FromSemanticConfig>g__LocalFunc|1>d.MoveNext()
fail: Microsoft.SemanticKernel.IKernel[0]
Invalid request: The request is not valid, HTTP status: 404: Microsoft.SemanticKernel.AI.AIException: Invalid request: The request is not valid, HTTP status: 404
---> Azure.RequestFailedException: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
Status: 404 (Not Found)
ErrorCode: DeploymentNotFound
Content:
{"error":{"code":"DeploymentNotFound", "message":"The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."}}
Headers:
OpenAI-Processing-Ms: REDACTED
x-ms-client-request-id: 71fb1aa3-6d1b-411c-89bc-34de587346be
apim-request-id: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
x-ms-region: REDACTED
Date: Mon, 07 Aug 2023 05:07:51 GMT
Content-Length: 198
Content-Type: application/json
at Azure.Core.HttpPipelineExtensions.ProcessMessageAsync(HttpPipeline pipeline, HttpMessage message, RequestContext requestContext, CancellationToken cancellationToken)
at Azure.AI.OpenAI.OpenAIClient.GetChatCompletionsAsync(String deploymentOrModelName, ChatCompletionsOptions chatCompletionsOptions, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.RunRequestAsync[T](Func`1 request)
--- End of inner exception stack trace ---
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.RunRequestAsync[T](Func`1 request)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.InternalGetChatResultsAsync(ChatHistory chat, ChatRequestSettings chatSettings, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.InternalGetChatResultsAsTextAsync(String text, CompleteRequestSettings textSettings, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.SkillDefinition.SKFunction.<>c__DisplayClass19_0.<<FromSemanticConfig>g__LocalFunc|1>d.MoveNext()
fail: Microsoft.SemanticKernel.IKernel[0]
Function call fail during pipeline step 0: ChatSkill.Chat. Error: Invalid request: The request is not valid, HTTP status: 404
Hello, world!
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 POST https://localhost:40443/chat application/json 270 - 400 - text/plain;+charset=utf-8 2070.9510ms
Platform
Additional context
Add any other context about the problem here.
Currently RLHF is only implemented on the front end for demonstration purposes. We need to pass feedback to the model somehow (few-shot?) or save it in the chat message for persistence.
It's disabled, so it is not actually shown in the webapp.
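One way the persistence half could be sketched: store the thumbs-up/down rating on the chat message itself so it survives a refresh, and later collect positively rated messages as few-shot candidates. The shapes below are illustrative assumptions, not the actual chat-copilot message models.

```typescript
type UserFeedback = "positive" | "negative" | "none";

// Hypothetical message shape carrying its own feedback field.
interface ChatMessage {
  id: string;
  content: string;
  userFeedback: UserFeedback;
}

// Return a new object (immutably) so a Redux-style store picks up the change;
// the updated message would then be persisted via the chat-message API.
function applyFeedback(message: ChatMessage, feedback: UserFeedback): ChatMessage {
  return { ...message, userFeedback: feedback };
}

// Messages rated positively could later be surfaced as few-shot examples.
function positiveExamples(messages: ChatMessage[]): string[] {
  return messages.filter((m) => m.userFeedback === "positive").map((m) => m.content);
}
```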
Describe the bug
I'm trying to chat with the bot locally; however, this doesn't go anywhere, and in the web API logs I'm seeing a lot of errors like the ones below. How do I debug this?
fail: SemanticKernel.Service.CopilotChat.Skills.ChatSkills.ChatSkill[0]
Cannot search collection f375dd31-e341-4ab0-a2c6-dc5087cb3aa6-LongTermMemory
Microsoft.SemanticKernel.AI.AIException: Service error: The service failed to process the request, HTTP status:500
---> Azure.RequestFailedException: Internal server error
Status: 500 (Internal Server Error)
Content:
{ "statusCode": 500, "message": "Internal server error", "activityId": "19ccf22c-8e3d-4a93-8eb9-028cef52fd3a" }
Headers:
x-ms-client-request-id: 29beecdc-e4bc-4929-b5be-28594a1db6e1
apim-request-id: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
x-ms-region: REDACTED
Date: Tue, 08 Aug 2023 00:13:25 GMT
Content-Length: 111
Content-Type: application/json
at Azure.Core.HttpPipelineExtensions.ProcessMessageAsync(HttpPipeline pipeline, HttpMessage message, RequestContext requestContext, CancellationToken cancellationToken)
at Azure.AI.OpenAI.OpenAIClient.GetEmbeddingsAsync(String deploymentOrModelName, EmbeddingsOptions embeddingsOptions, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.RunRequestAsync[T](Func`1 request)
--- End of inner exception stack trace ---
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.RunRequestAsync[T](Func`1 request)
at Microsoft.SemanticKernel.Connectors.AI.OpenAI.AzureSdk.ClientBase.InternalGetEmbeddingsAsync(IList`1 data, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.AI.Embeddings.EmbeddingGenerationExtensions.GenerateEmbeddingAsync[TValue,TEmbedding](IEmbeddingGeneration`2 generator, TValue value, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.Memory.SemanticTextMemory.SearchAsync(String collection, String query, Int32 limit, Double minRelevanceScore, Boolean withEmbeddings, CancellationToken cancellationToken)+MoveNext()
or
fail: SemanticKernel.Service.CopilotChat.Skills.ChatSkills.ChatSkill[0]
Cannot search collection f375dd31-e341-4ab0-a2c6-dc5087cb3aa6-WorkingMemory
Microsoft.SemanticKernel.AI.AIException: Service error: The service failed to process the request, HTTP status:500
---> Azure.RequestFailedException: Internal server error
Status: 500 (Internal Server Error)
Content:
{ "statusCode": 500, "message": "Internal server error", "activityId": "95f34a4a-c52c-4121-a0e5-d58ab0d6a6e3" }
Headers:
x-ms-client-request-id: 8647cf4a-e191-4fa7-8480-b58c37977d0b
apim-request-id: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
x-ms-region: REDACTED
Date: Tue, 08 Aug 2023 00:12:21 GMT
Content-Length: 111
Content-Type: application/json
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Get responses reliably.
Platform
Since MS aligned on Plugins, we need to rename everything with "Skill" in it to "Plugin"
No dependency on core team but here's an item to track the work on their side: microsoft/semantic-kernel#2119
Describe the bug
Application Insights Telemetry creates quite verbose logs and does not filter "Authorization: Bearer" headers. I tried to disable it, but I still see it in my logs:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
No logs from Application Insights Telemetry
Screenshots
My appsettings.json
Platform