
Comments (6)

martindevans commented on September 26, 2024

I'm not very familiar with the higher-level ChatSession stuff, but shouldn't the first message be an AuthorRole.System message instead of a user message?

@philippjbauer worked on an overhaul of ChatSession and associated classes, so he might have a better answer.


hswlab commented on September 26, 2024

I just tried replacing the User role with System in the first call, but now that call fails with the error message: "Message must be a user message (Parameter 'message')".

Basically my implementation is similar to the example ChatSessionStripRoleName.cs.
In this example, the ChatAsync method seems to use only the user role.

The LLamaSharp web project seems to run without any problems; maybe I can learn something from it next weekend. I suspect the issue is my initial text prompt. Maybe it needs to be worded a little differently for the new parameter. We'll see.


philippjbauer commented on September 26, 2024

@hswlab can you post your code here?

The chat session requires an optional System message first, followed by alternating User and Assistant messages. I suspect there is an issue in your calling code.
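
For reference, a minimal sketch of that expected ordering, using the same ChatSession, ChatHistory, and InferenceParams APIs that appear later in this thread (executor stands in for any configured ILLamaExecutor):

    // One optional System message first, then strictly alternating User/Assistant turns.
    var session = new ChatSession(executor);
    session.History.AddMessage(AuthorRole.System, "You are a helpful assistant.");

    // Each ChatAsync call passes a User message; the session appends the
    // Assistant reply to its history as the response streams in.
    await foreach (var token in session.ChatAsync(
        message: new ChatHistory.Message(AuthorRole.User, "Hello!"),
        inferenceParams: new InferenceParams() { AntiPrompts = new[] { "User:" } }))
    {
        Console.Write(token);
    }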


hswlab commented on September 26, 2024

@philippjbauer
Unfortunately I haven't found time to look into it yet, but I can post an excerpt from my code here.
This method is called only once, at the start of a conversation. In it I tell the chatbot how to behave for the rest of the conversation.

 private static async Task CreateNewTopic()
 {
     try
     {

	...

         // Reset any previous session before starting a new topic.
         if (_chatSession?.Executor?.Context != null)
         {
             try
             {
                 _chatSession.Executor.Context.Dispose();
             }
             catch { /* ignore errors from disposing an already-released context */ }
             _chatSession = null;
         }

	...

         if (!string.IsNullOrEmpty(modelPath))
         {

	     ...
             // Split a possible "lastname, firstname" user name into separate
             // anti-prompts like "lastname:", "firstname:".
             string prompt = string.Empty;
             string userName = SettingsManager.GetDbSetting("UserName") ?? string.Empty;
             List<string> antiPrompts = userName.Split(new char[] { ',', ' ' }, StringSplitOptions.RemoveEmptyEntries).Select(x => $"{x.Trim()}:").ToList();
             List<string> ignoreAntiPrompts = antiPrompts;
             ILLamaExecutor executor = null;

             switch (modelName)
             {
		 ...
                 case "vicuna-7b":
                 case "llama-2-7B":
                 case "openchat_3.5":
                     {

                         prompt =
                            $"A chat between a curious user named {userName} and an artificial intelligence assistant named Kati. " +
                            $"Kati gives helpful, detailed, and polite answers to the user's questions, addresses the user informally and simulates emotions by using emojis. " +
                            $"Code inside Kati's answers is always wrapped in ``` for readability. " +
                            $"Kati prefers to answer in {LanguageManager.Translate("German")}, but she can also answer in another language if she is asked. ";
                         // Append the first-turn template to the behaviour description
                         // (note '+='; a plain '=' would overwrite the text built above).
                         prompt += $"{userName}: {{prompt}} Kati:\r\n";


                         antiPrompts =
                             antiPrompts.Concat(new List<string> { "USER:" }).ToList();


                         ignoreAntiPrompts =
                             ignoreAntiPrompts.Concat(new List<string> { "Kati:", "ASSISTANT:" }).ToList();


                         // Load a model
                         ModelParams parameters = new ModelParams(modelPath)
                         {
                             ContextSize = 1024,
                             Seed = 1337,
                             GpuLayerCount = 5
                         };


                         // Session executor.
                         // Caution: this 'using' disposes the weights at the end of the case
                         // block, while the context/executor created from them outlive it; if
                         // that causes problems, keep the model referenced for the session's
                         // lifetime instead.
                         using LLamaWeights model = LLamaWeights.LoadFromFile(parameters);
                         LLamaContext context = model.CreateContext(parameters);
                         executor = new InteractiveExecutor(context);

                         break;
                     }
                 ...
             }


             _antiPrompts = antiPrompts.ToArray();
             _chatSession = new ChatSession(executor).WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(ignoreAntiPrompts, redundancyLength: 8));
             // Note: a "RequestTimeout" setting of 0 makes this token cancel immediately.
             _chatCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(int.Parse(SettingsManager.GetDbSetting("RequestTimeout") ?? "0")));

             // Warm-up call: sends the behaviour prompt as a *User* message and
             // discards the streamed reply (this is the call discussed below).
             await foreach (var text in _chatSession.ChatAsync(
                 message: new ChatHistory.Message(AuthorRole.User, prompt),
                 inferenceParams: new InferenceParams() { Temperature = 0.6f, AntiPrompts = _antiPrompts, MaxTokens = -1 },
                 cancellationToken: _chatCancellationTokenSource.Token
                 ))
             {
                 break;
             }


             ...
         }

         MakeNewConversation = false;

     }
     catch (Exception)
     {
         throw;
     }
 }

This method is called every time the user submits a message. At the start no session exists yet, so the method above is called first.

        internal static async Task<ChatResponse?> DoChatAsync(string message, OnUpdateCallback callback)
        {
            ChatResponse? chatResponse = null;

            try
            {

                // New Topic
                if (MakeNewConversation || _chatSession == null)
                {
                    await CreateNewTopic();
                }


                // Do Chat
                if (_chatSession != null)
                {

                    // Response Stream
                    string resultMessage = string.Empty;
                    _chatCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(int.Parse(SettingsManager.GetDbSetting("RequestTimeout") ?? "0")));
                    await foreach (var text in _chatSession.ChatAsync(
                        message: new ChatHistory.Message(AuthorRole.User, message),
                        inferenceParams: new InferenceParams() { Temperature = 0.6f, AntiPrompts = _antiPrompts, MaxTokens = -1 },
                        cancellationToken: _chatCancellationTokenSource.Token
                        ))
                    {

                       ...
                    }

		...

                }
            }
            catch (Exception)
            {
                throw;
            }

            return chatResponse;

        }

After CreateNewTopic() is called, I can't call ChatAsync a second time without getting an error. I suspect the problem lies in the wording of my prompt in the CreateNewTopic method. Originally there were no problems with it; this probably changed with the last update.


hswlab commented on September 26, 2024

I think I have found the problem.

I replaced the first ChatAsync call carrying my initial prompt with:
_chatSession.History.AddMessage(AuthorRole.System, prompt);
Now the error is gone.
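
In context, a condensed sketch of the adjusted flow (same APIs as the code above, with the warm-up ChatAsync call replaced by seeding the history):

    // Seed the behaviour prompt as a System message instead of sending it
    // through ChatAsync, which only accepts User messages.
    _chatSession = new ChatSession(executor)
        .WithOutputTransform(new LLamaTransforms.KeywordTextOutputStreamTransform(ignoreAntiPrompts, redundancyLength: 8));
    _chatSession.History.AddMessage(AuthorRole.System, prompt);

    // From here on, DoChatAsync only ever sends User messages:
    await foreach (var text in _chatSession.ChatAsync(
        message: new ChatHistory.Message(AuthorRole.User, message),
        inferenceParams: new InferenceParams() { Temperature = 0.6f, AntiPrompts = _antiPrompts, MaxTokens = -1 },
        cancellationToken: _chatCancellationTokenSource.Token))
    {
        // stream the response to the UI
    }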


sangyuxiaowu commented on September 26, 2024

> @hswlab can you post your code here?
>
> The chat session requires an optional System message first, followed by alternating User and Assistant messages. I suspect there is an issue in your calling code.

You're right, the normal flow should follow the format you mentioned. However, there are cases where third-party callers don't adhere to it. For example, I've previously seen KernelMemory produce two consecutive User messages (though I no longer recall the exact details). With OpenAI's service this wasn't an issue, but switching to a local chat session produced errors because of these constraints.

Ideally, the chat session should primarily serve as a wrapper for the inference format. Imposing overly strict restrictions might not be beneficial.
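
One possible client-side workaround is to normalize such a history before handing it to the session. A sketch, assuming it is acceptable to merge adjacent same-role messages (NormalizeHistory is a hypothetical helper, it assumes ChatHistory.Message.Content is settable, and it needs System.Linq for LastOrDefault):

    // Hypothetical helper: collapse consecutive messages from the same role so
    // the history satisfies the session's System/User/Assistant alternation rule.
    static ChatHistory NormalizeHistory(ChatHistory history)
    {
        var normalized = new ChatHistory();
        foreach (var message in history.Messages)
        {
            var last = normalized.Messages.LastOrDefault();
            if (last != null && last.AuthorRole == message.AuthorRole)
                last.Content += "\n" + message.Content; // merge into the previous turn
            else
                normalized.AddMessage(message.AuthorRole, message.Content);
        }
        return normalized;
    }

Whether merging is the right call depends on the producer; dropping or re-tagging messages are alternatives, but merging at least preserves all of the text.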

