Describe the bug
Sometimes Ollama returns a suggestion containing newlines, but Twinny displays the suggestion with the newlines stripped out.
This is a screenshot of a YAML file with a pretty predictable pattern. Ollama correctly returned a response beginning with a newline, followed by the text alias: dl, but in my editor the suggestion had lost the newline. As you can see from the existing entries in the YAML file, there should be a newline before the alias key.
Twinny's HTTP request, from Wireshark:
{"model":"codellama:13b-code","prompt":"<PRE> \n\n# Language: YAML (yaml) \n# File uri: file:///Users/someguy/src/provisioning/roles/fauxpilot/files/ops.yml (yaml) \nactions:\n start: docker compose up -d --build\n stop: docker compose down\n restart: docker compose restart\n bounce: ops stop && ops start\n logs:\n command: docker compose logs --tail 200 --follow\n alias: l\n status:\n command: docker compose ps\n alias: ps\n start-with-logs:\n command: ops start && ops logs\n alias: sl\n bounce-with-logs:\n command: ops bounce && ops logs\n alias: bl\n down-with-logs:\n command: ops stop && ops start && ops logs <SUF>\n restart-with-logs:\n command: ops restart 88 ops logs\n alias: rl\n <MID>","stream":true,"n_predict":-1,"temperature":0.1,"options":{"temperature":0.1,"num_predict":-1}}
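For context, the prompt above uses CodeLlama's fill-in-the-middle format: the text before the cursor follows the <PRE> token, the text after the cursor follows <SUF>, and the model generates the infill after <MID>. A minimal sketch of how a client assembles such a prompt (variable and function names here are illustrative, not Twinny's actual code):

```python
# Sketch of the CodeLlama fill-in-the-middle (FIM) prompt layout.
# prefix/suffix stand for the editor text before/after the cursor;
# build_fim_prompt is a hypothetical helper, not Twinny's implementation.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # CodeLlama infill models expect: <PRE> {prefix} <SUF>{suffix} <MID>
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = build_fim_prompt(
    "  down-with-logs:\n    command: ops stop && ops start && ops logs",
    "\n  restart-with-logs:",
)
```

The model's completion is then the text that belongs between the prefix and the suffix, which is why a stripped leading newline changes where the suggestion lands in the file.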
Ollama's response, from Wireshark:
6a
{"model":"codellama:13b-code","created_at":"2024-01-30T14:06:04.294959315Z","response":"\n","done":false}
6b
{"model":"codellama:13b-code","created_at":"2024-01-30T14:06:04.313395311Z","response":" ","done":false}
6e
{"model":"codellama:13b-code","created_at":"2024-01-30T14:06:04.327937513Z","response":" alias","done":false}
69
{"model":"codellama:13b-code","created_at":"2024-01-30T14:06:04.343851477Z","response":":","done":false}
6a
{"model":"codellama:13b-code","created_at":"2024-01-30T14:06:04.359633046Z","response":" d","done":false}
69
{"model":"codellama:13b-code","created_at":"2024-01-30T14:06:04.377242264Z","response":"l","done":false}
78
{"model":"codellama:13b-code","created_at":"2024-01-30T14:06:04.391557372Z","response":" \u003cEOT\u003e","done":false}
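Concatenating the response fields from the stream above (and dropping the <EOT> stop token) shows that the completion Ollama sent does start with a newline, so the newline is lost somewhere on the client side. A quick check, using the chunks from the capture (with the fields not relevant here omitted):

```python
import json

# The streamed response chunks from the Wireshark capture above.
chunks = [
    '{"response": "\\n"}',
    '{"response": " "}',
    '{"response": " alias"}',
    '{"response": ":"}',
    '{"response": " d"}',
    '{"response": "l"}',
    '{"response": " \\u003cEOT\\u003e"}',
]

completion = "".join(json.loads(c)["response"] for c in chunks)
completion = completion.replace(" <EOT>", "")  # drop the stop token
assert completion == "\n  alias: dl"  # the leading newline is present in the stream
```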
Personally, I like just getting one line at a time back from the LLM. However, if a multiline response is returned but the newlines are stripped, the suggested completion won't be usable without manual tweaking.
To Reproduce
Steps to reproduce the behavior:
- Use Twinny until you get a completion that should have a newline
- Note the lack of a newline
Expected behavior
Suggested completion is shown with newlines.