Comments (6)
I'm not very familiar with the higher-level ChatSession stuff, but shouldn't the first message be an AuthorRole.System message instead of a user message?
@philippjbauer worked on an overhaul of ChatSession
and associated classes, so he might have a better answer.
from llamasharp.
I just tried replacing the User role with System in the first call, but now the first call breaks with the error message: "Message must be a user message (Parameter 'message')"
Basically I have an implementation similar to the one in the example ChatSessionStripRoleName.cs.
In this example, the ChatAsync method seems to use only the user role.
The LLamaSharp web project seems to run without any problems; maybe I can learn something from it next weekend. I suspect it's because of my initial text prompt. Maybe it needs to be worded a little differently for the new parameter. We'll see.
@hswlab can you post your code here?
The chat session requires an optional System message and then alternating User and Assistant messages. I suspect there is an issue in your calling code, perhaps?
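The alternation rule can be illustrated with a short sketch (the message texts are made up; ChatHistory, AddMessage, and AuthorRole are the LLamaSharp types discussed in this thread):

```csharp
var history = new ChatHistory();
// Optional System message first...
history.AddMessage(AuthorRole.System, "You are a helpful assistant.");
// ...then strictly alternating User/Assistant turns.
history.AddMessage(AuthorRole.User, "Hello!");
history.AddMessage(AuthorRole.Assistant, "Hi, how can I help?");
// The next message passed to ChatAsync must again be AuthorRole.User.
```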
@philippjbauer
Unfortunately I haven't found the time to look into it yet, but I can post an excerpt from my code here.
This method is called only once at the start of a conversation. In it I tell the chatbot how it should behave for the rest of the conversation.
private static async Task CreateNewTopic()
{
    try
    {
        ...
        // reset session
        if (_chatSession?.Executor?.Context != null)
        {
            try
            {
                _chatSession.Executor.Context.Dispose();
                _chatSession = null;
            }
            catch { } // ignore errors from disposing an already-broken context
        }
        ...
        if (!string.IsNullOrEmpty(modelPath))
        {
            ...
            // split a possible "name firstname" into "name:", "firstname:"
            string prompt = string.Empty;
            string userName = SettingsManager.GetDbSetting("UserName") ?? string.Empty;
            List<string> antiPrompts = userName.Split(new char[] { ',', ' ' }).Select(x => $"{x.Trim()}:").ToList();
            List<string> ignoreAntiPrompts = antiPrompts;
            ILLamaExecutor executor = null;
            switch (modelName)
            {
                ...
                case "vicuna-7b":
                case "llama-2-7B":
                case "openchat_3.5":
                {
                    prompt =
                        $"A chat between a curious user named {userName} and an artificial intelligence assistant named Kati. " +
                        $"Kati gives helpful, detailed, and polite answers to the user's questions, addresses the user informally and simulates emotions by using emojis. " +
                        $"Code inside Kati's answers is always wrapped in ``` for readability. " +
                        $"Kati prefers to answer in {LanguageManager.Translate("German")}, but she can also answer in another language if she is asked. ";
                    // append the turn template; "{prompt}" stays a literal placeholder because of the doubled braces
                    prompt += $"{userName}: {{prompt}} Kati:\r\n";
                    antiPrompts =
                        antiPrompts.Concat(new List<string> { "USER:" }).ToList();
                    ignoreAntiPrompts =
                        ignoreAntiPrompts.Concat(new List<string> { "Kati:", "ASSISTANT:" }).ToList();
                    // Load a model
                    ModelParams parameters = new ModelParams(modelPath)
                    {
                        ContextSize = 1024,
                        Seed = 1337,
                        GpuLayerCount = 5
                    };
                    // Session Executor
                    using LLamaWeights model = LLamaWeights.LoadFromFile(parameters);
                    LLamaContext context = model.CreateContext(parameters);
                    executor = new InteractiveExecutor(context);
                    break;
                }
                ...
            }
            _antiPrompts = antiPrompts.ToArray();
            _chatSession = new ChatSession(executor)
                .WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(ignoreAntiPrompts, redundancyLength: 8));
            _chatCancellationTokenSource = new CancellationTokenSource(
                TimeSpan.FromMinutes(int.Parse(SettingsManager.GetDbSetting("RequestTimeout") ?? "0")));
            await foreach (var text in _chatSession.ChatAsync(
                message: new ChatHistory.Message(AuthorRole.User, prompt),
                inferenceParams: new InferenceParams() { Temperature = 0.6f, AntiPrompts = _antiPrompts, MaxTokens = -1 },
                cancellationToken: _chatCancellationTokenSource.Token
            ))
            {
                break;
            }
            ...
        }
        MakeNewConversation = false;
    }
    catch (Exception)
    {
        throw;
    }
}
This method is always called when the user submits a message. At the start, the session is uninitialized, so the method above is called first.
internal static async Task<ChatResponse?> DoChatAsync(string message, OnUpdateCallback callback)
{
    ChatResponse? chatResponse = null;
    try
    {
        // New Topic
        if (MakeNewConversation == true || _chatSession == null)
        {
            await CreateNewTopic();
        }
        // Do Chat
        if (_chatSession != null)
        {
            // Response Stream
            string resultMessage = string.Empty;
            _chatCancellationTokenSource = new CancellationTokenSource(
                TimeSpan.FromMinutes(int.Parse(SettingsManager.GetDbSetting("RequestTimeout") ?? "0")));
            await foreach (var text in _chatSession.ChatAsync(
                message: new ChatHistory.Message(AuthorRole.User, message),
                inferenceParams: new InferenceParams() { Temperature = 0.6f, AntiPrompts = _antiPrompts, MaxTokens = -1 },
                cancellationToken: _chatCancellationTokenSource.Token
            ))
            {
                ...
            }
            ...
        }
    }
    catch (Exception)
    {
        throw;
    }
    return chatResponse;
}
After CreateNewTopic() is called, I can't call ChatAsync a second time without getting an error. I suspect the problem is the wording of my prompt in the CreateNewTopic method. Originally there were no problems with it; this has probably changed with the last update.
I think I have found the problem.
I replaced the first ChatAsync call (the one with my initial prompt) with:
_chatSession.History.AddMessage(AuthorRole.System, prompt);
Now there is no error anymore.
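Based on that change, the session setup at the end of CreateNewTopic would look roughly like this (a sketch reusing the variable names from the code above, not the full method):

```csharp
// Build the session as before...
_chatSession = new ChatSession(executor)
    .WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(
        ignoreAntiPrompts, redundancyLength: 8));

// ...but seed the instruction text into the history as a System message
// instead of sending it through ChatAsync as a (fake) User turn.
_chatSession.History.AddMessage(AuthorRole.System, prompt);
```

DoChatAsync can then keep sending only genuine user input as AuthorRole.User messages, which satisfies the "optional System, then alternating User/Assistant" requirement.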
@hswlab can you post your code here?
The chat session requires an optional System message and then alternating User and Assistant messages. I suspect there is an issue in your calling code, perhaps?
You're right, the normal flow should follow the format you mentioned. However, there are instances where third-party requests do not adhere to this format. For example, I've previously encountered cases where KernelMemory produced two consecutive User messages (though I don't recall the exact details due to the passage of time). When using OpenAI's service, this wasn't an issue, but switching to a local chat session resulted in errors due to these constraints.
Ideally, the chat session should primarily serve as a wrapper for the inference format. Imposing overly strict restrictions might not be beneficial.
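One way to cope with such third-party histories on the caller's side is to normalize them before handing them to the session. A minimal, self-contained sketch (tuples are used as stand-ins here, not LLamaSharp's ChatHistory type):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical pre-processing step: collapse consecutive messages from the
// same role (e.g. two User messages in a row, as KernelMemory can produce)
// so the history satisfies a strict User/Assistant alternation requirement.
static class HistoryNormalizer
{
    public static List<(string Role, string Text)> MergeConsecutive(
        IEnumerable<(string Role, string Text)> messages)
    {
        var result = new List<(string Role, string Text)>();
        foreach (var m in messages)
        {
            if (result.Count > 0 && result[^1].Role == m.Role)
                // Same role twice in a row: join the texts into one message.
                result[^1] = (m.Role, result[^1].Text + "\n" + m.Text);
            else
                result.Add(m);
        }
        return result;
    }
}
```

Merging consecutive same-role messages keeps a strict alternation check from throwing without losing any content.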