aidungeon's Introduction

AIDungeon2


aidungeon's Issues

Separate development/master branch

It is standard development practice to set aside a branch for the stable release so that users who install will not get a broken version between bugfixes. It would not be difficult to commit and merge pull requests into a development branch first. You could then update the instructions to offer the option of cloning the development branch: you still receive bug reports, anons still do the work for you, and other anons can choose to download a stable version, so everybody's happy.

[Patch] Fix up play.py

As of 5133672 or earlier, play.py got a bit munged; the attached patch fixes the strange formatting so the script actually runs.

[space reserved for finding culprit commit]

 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/play.py b/play.py
index c8b51e0..95718f8 100644
--- a/play.py
+++ b/play.py
@@ -223,9 +223,8 @@ def play_aidungeon_2():
     generator = GPT2Generator()
     story_manager = UnconstrainedStoryManager(generator, upload_story=upload_story, cloud=False)
     print("\n")
-
-     ranBanner =  bannerRan()
-     openingPass = (ranBanner.banner_number)
+    ranBanner = bannerRan()
+    openingPass = (ranBanner.banner_number)
         
     with open(openingPass, "r", encoding="utf-8") as file:
         starter = file.read()

/retry command causes index out of range error in story generator

This issue was recreated on the colab on the newest version (commit 9e3ab21)

  1. Start the game.
  2. Start a new game.
  3. Start a non-random, custom story, with both a story context and starting story prompt.
  4. Progress the story by entering a prompt in.
  5. Enter the /retry command.

Upon using the /retry command, the game will crash and produce the following error:

Traceback (most recent call last):
  File "play.py", line 608, in <module>
    play_aidungeon_2()
  File "play.py", line 461, in play_aidungeon_2
    story_manager.act_with_timeout(last_action)
  File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 208, in act_with_timeout
    return func_timeout(self.inference_timeout, self.act, (action_choice,))
  File "/usr/local/lib/python3.6/dist-packages/func_timeout/dafunc.py", line 108, in func_timeout
    raise_exception(exception)
  File "/usr/local/lib/python3.6/dist-packages/func_timeout/py3_raise.py", line 7, in raise_exception
    raise exception[0] from None
  File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 203, in act
    result = self.generate_result(action_choice)
  File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 211, in generate_result
    block = self.generator.generate(self.story_context() + action)
  File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 198, in story_context
    return self.story.latest_result()
  File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 78, in latest_result
    latest_result += self.actions[-mem_ind] + self.results[-mem_ind]
IndexError: list index out of range
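The crash comes from indexing past the start of `self.actions`/`self.results` after /retry pops the latest entries. A bounds-checked sketch of `latest_result` (function and list names taken from the traceback; the memory depth of 5 is an assumption, not the repo's actual value) might look like:

```python
def latest_result(actions, results, memory=5):
    # Hypothetical bounds-checked rewrite of latest_result: never look
    # further back than the shorter of the two lists, so /retry (which
    # removes the most recent entries) cannot index out of range.
    latest = ""
    depth = min(memory, len(actions), len(results))
    for mem_ind in range(depth, 0, -1):
        latest += actions[-mem_ind] + results[-mem_ind]
    return latest
```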

Can't run and install on Windows?

I have a question, because there are no detailed installation instructions in README.md.
When I run install.sh, something seems to run, but the window disappears quickly.
download_model.sh behaves the same way.
I tried downloading the trained model by running download_model.py, but the downloaded files don't work.
The error message is as follows:

AccessDenied: Access denied.

Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist).

All the .json files contained the same content.
Where on earth do I download the pre-trained model?

Windows line endings and nonexecutable files, Linux problem.

install.sh, play.py, and other script files have Windows line endings, which makes their shebang headers useless. This means they cannot be run without explicitly naming the interpreter on the command line. They are also marked as non-executable, which makes running them even more tedious.
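A one-off cleanup along these lines (a sketch; the function name is mine, not repo code) could normalize the line endings and set the executable bit from Python:

```python
import os
import stat

def fix_script(path):
    # Convert CRLF to LF so the shebang line is honored,
    # then mark the file executable for everyone.
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```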

Suggestion: custom names for local saves.

Managing several saves can be a hassle when they all look the same, so custom save names would help. Such a feature existed in a mod for an earlier version I played; I could upload it if needed.

I keep getting a syntax error, yet I don't know why

The error reads:

Traceback (most recent call last):
  File "play.py", line 9, in <module>
    from story.story_manager import *
  File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 191
    try:
       ^
SyntaxError: invalid syntax

Is there a way I can fix this? The fork used to work for me.

Password prompts

When playing the game after manually fixing the "missing parenthesis" crash, the input box behaves like a password box and doesn't show what you're typing.

It only happens when the game is actually waiting for input, i.e. it doesn't happen while the game is initializing, only once it asks for new or load game.
Editing the box element to make it a "text" input instead of "password" only fixes it until the next input prompt.

I suspect it might be related to the newly committed encryption feature, since that uses passwords, but it might have nothing to do with it.

Missing things

The colab version is missing the tracery package, which install_model.sh does not install; I was able to add it manually.

There are also problems copying the files to the google drive.

Small visual bug: "spaces" after output linebreaks

Latest versions have the linebreaks on the output looking weird:

It looks like this, it goes on until it linebreaks
 and the next line starts a bit more to the right.
 It happens again on every linebreak, but never
 on the first line after each player input.

(alt+255 used here to simulate the spaces otherwise it wouldn't show)

Suggestion: possible name change

Considering how far we've diverged from the original source material, and how the name "AI Dungeon" may soon be associated mainly with a proprietary, paid program, it may be time to re-brand. Many anons have suggested the name "OpenCYOAI", which I think would be appropriate. What do you all think?

gen_num unused?

There's a bunch of code setting it and restoring its default value, but it is currently unused.

A suggestion

I suggest that you use GitHub's Projects feature so people can get a clear picture of what needs to be completed and what is already finished, e.g. bug fixes or improvements to AI outputs. It would also reduce how often questions like these are asked.

Suggestion: simplify code for toggleable actions

This applies to the commands /censor, /ping, and, in the upcoming #29, /saving and /cloud. Each one needs to include logic for:

  • Displaying the current setting if no argument is supplied
  • Message indicating the setting is changed successfully
  • Error message for invalid argument

I propose that instead of doing:

AIDungeon/play.py

Lines 288 to 308 in 135efb4

elif command == "censor":
    if len(args) == 0:
        if generator.censor:
            console_print("Censor is enabled.")
        else:
            console_print("Censor is disabled.")
    elif args[0] == "off":
        if not generator.censor:
            console_print("Censor is already disabled.")
        else:
            generator.censor = False
            console_print("Censor is now disabled.")
    elif args[0] == "on":
        if generator.censor:
            console_print("Censor is already enabled.")
        else:
            generator.censor = True
            console_print("Censor is now enabled.")
    else:
        console_print(f"Invalid argument: {args[0]}")

we can simplify it to:

elif command == "censor":
    generator.censor = not generator.censor
    console_print("Censor is now {}.".format("enabled" if generator.censor else "disabled"))

Benefits for doing so:

  • Far fewer lines of code to maintain, and easier to copy if another toggleable command is added in the future
  • No need to validate and handle args
  • Settings can already be inspected without being modified via the /showstats command
  • If the command is called inadvertently, it can easily be reverted by repeating the same command
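If more toggle commands are added later, the pattern could even be factored into a helper. This is a sketch of my own, not existing repo code:

```python
def toggle(obj, name):
    # Hypothetical helper: flip a boolean attribute and return a status
    # message, so each toggleable command body becomes a single line.
    value = not getattr(obj, name)
    setattr(obj, name, value)
    return "{} is now {}.".format(name, "enabled" if value else "disabled")
```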

Suggestion: an ability to change generate_num midgame

Sometimes I want to change the amount of generated text. For dialogue I'd sometimes like to set it lower, so the game spouts less bullshit from my perspective, and when describing locations I'd like to set it higher. So the ability to change it midgame, as you can already do with top_p and temp, would be nice.
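A command handler for this could mirror the existing temp/top_p ones. The handler name and the DummyGenerator stub below are mine, just to make the sketch self-contained; the real code would use GPT2Generator:

```python
class DummyGenerator:
    # Stand-in for GPT2Generator, only so the sketch runs on its own.
    generate_num = 60

def handle_generate_num(generator, args):
    # Hypothetical /generate_num command: with no argument, report the
    # current value; otherwise set it, rejecting non-integer input.
    if not args:
        return "generate_num is {}.".format(generator.generate_num)
    try:
        generator.generate_num = int(args[0])
        return "generate_num set to {}.".format(generator.generate_num)
    except ValueError:
        return "Invalid argument: {}".format(args[0])
```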

Suggestion: a config file for consistent options

It would be nice to have the options saved to a config file so you don't have to change everything every time you run the game. You could also back it up and carry it over when updating the game.
Clover already does this; it might be worth taking a look at.
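A minimal version could be a JSON file overlaid onto defaults. The path and option keys here are illustrative assumptions, not the game's actual settings names:

```python
import json
import os

CONFIG_PATH = "config.json"  # hypothetical location
DEFAULTS = {"temp": 0.4, "top_p": 0.9, "censor": True}

def load_config(path=CONFIG_PATH):
    # Start from the defaults and overlay whatever the file provides,
    # so options added in later versions still get sensible values.
    cfg = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            cfg.update(json.load(f))
    return cfg

def save_config(cfg, path=CONFIG_PATH):
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
```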

Crashing on startup

The game crashes instantly when running with the latest commits:

Traceback (most recent call last):
  File "play.py", line 9, in <module>
    from story.story_manager import *
  File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 191
    try:
      ^
SyntaxError: invalid syntax

"cut_trailing_sentence()" sometimes removes complete dialogues

I noticed that the game would on occasion produce results that would cut off just as a character is about to speak. I think I identified at least one condition that causes this.


Let's say for example the raw output of one run ends with:

[...]
You walk pass your neighbour mowing his lawn, he turns to you and says, "Hey how's it going?"
[incomplete sentence fragment]

When passed to cut_trailing_sentence, the function removes everything after the question mark, including the double quote:

[...]
You walk pass your neighbour mowing his lawn, he turns to you and says, "Hey how's it going?

The text is then passed to cut_trailing_quotes, which removes the rest:

[...]
You walk pass your neighbour mowing his lawn, he turns to you and says, 

This issue only affects dialogue that ends with ? or !, not ., because an earlier step of result_replace contains this line:

result = result.replace('."', '".')

I don't understand why this line is here, since a dialogue's punctuation should end up inside the quotes. *shrug*

I don't know the best way to fix this. My present solution is to move cut_trailing_quotes near the top of the function so it removes actual incomplete quotes first, then change

last_punc = max(text.rfind("."), text.rfind("!"), text.rfind("?"))

to include " as a valid sentence-ending character.
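With the quote added to the terminator set, the changed function might look like this (a sketch of the described change, assuming cut_trailing_quotes has already run; not the repo's actual code):

```python
def cut_trailing_sentence(text):
    # Proposed fix sketch: count a closing double quote as a valid
    # sentence terminator so complete dialogue lines survive the cut.
    last_punc = max(text.rfind("."), text.rfind("!"),
                    text.rfind("?"), text.rfind('"'))
    if last_punc >= 0:
        text = text[: last_punc + 1]
    return text
```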


One problem I can see with this: if the last sentence of the dialogue contains the string you ask or you say, it will get removed by cut_trailing_action, potentially resulting in an incomplete quote if the dialogue is two or more sentences long.

Trying to get past "choose temp value"

When I try to get past the part where it asks whether I want to change the temp value, the program stops. It gives me this error if I input N:

Traceback (most recent call last):
  File "play.py", line 382, in <module>
    play_aidungeon_2()
  File "play.py", line 150, in play_aidungeon_2
    story_manager.generator.generate_num = story_manager.generator.default_gen_num
AttributeError: 'GPT2Generator' object has no attribute 'default_gen_num'

And this if I input Y:

Traceback (most recent call last):
  File "play.py", line 382, in <module>
    play_aidungeon_2()
  File "play.py", line 138, in play_aidungeon_2
    story_manager.generator.change_temp(float(input("Enter a new temp (default 0.4): ")))
AttributeError: 'GPT2Generator' object has no attribute 'change_temp'

Any tips for fixing this?

Unable to turn cloudsaving off?

If you use the /cloud command to turn cloud saving on, using it again still prints a message saying it's turning the setting on.
Maybe it's just visual and it is actually turning it off, but it does happen.

broken /censor?

/censor fails its check and crashes instead of enabling/disabling itself:
> /censor off

Traceback (most recent call last):
  File "play.py", line 753, in <module>
    play_aidungeon_2()
  File "play.py", line 412, in play_aidungeon_2
    if not generator.censor:
AttributeError: 'NoneType' object has no attribute 'censor'

Suggestion: Final custom prompts

Hi, one feature which would be great is the ability to set a final custom prompt, or to edit the first story prompt.

I am not talking about the context; I mean the first prompt generated by the AI.

It is extremely annoying when I want a specific opening scene in my custom stories and the AI runs away with its own story creation. I can't even force the AI to do what I want, because the first story prompt can't be edited.

[Bug] Mismatch of storage directory name

In story_manager.py, load_from_storage() has the save directory set to "saved_stories", while save_to_storage() and start_new_story() use "saves"; this apparently breaks the loading and saving features.

Note that the cloud save location was moved from root to "saved_stories" in patch latitudegames/AIDungeon#101

Sorry I can't actually test it because I'm unfamiliar with how this works in Colab.
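One low-risk fix (a sketch, also untested on Colab; the helper name is mine) is to define the directory once so the three functions can't disagree again:

```python
# Hypothetical module-level constant for story_manager.py, shared by
# load_from_storage, save_to_storage, and start_new_story.
SAVE_DIR = "saved_stories"

def story_file_path(story_id):
    # Build the save-file path from the single shared directory name.
    return "{}/story{}.json".format(SAVE_DIR, story_id)
```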

"cut_down_prompt" causes infinite loop when in raw mode

When playing in raw mode, after the story gets to a certain length, it gets stuck in an infinite loop:

while len(prompt) > 3500:
    prompt = self.cut_down_prompt(prompt)

This is because cut_down_prompt uses the player action indicator > to split and reduce the prompt, which doesn't exist in raw mode, leaving it unchanged.

There's probably a more elegant solution for this, but this is the workaround I'm using for my offline copy for the time being. It just reuses the code from #27 to cut down the prompt sentence by sentence if the generator is in raw mode.

def cut_down_prompt(self, prompt):
    if not self.raw:
        split_prompt = prompt.split(">")
        expendable_text = ">".join(split_prompt[2:])
        return split_prompt[0] + (">" + expendable_text if len(expendable_text) > 0 else "")
    else:
        sentences = string_to_sentence_list(prompt.lstrip())
        sentences = sentences[1:]
        new_text = ""
        for i in range(len(sentences)):
            if sentences[i] == "<break>":
                new_text = new_text + "\n"
            else:
                new_text = new_text + " " + sentences[i]
        return new_text.lstrip()

Suggestion: /altergen command

I wrote this command the other day, as a variant of my /alter command, which is in this fork.

            elif command == "altergen":
                if len(story_manager.story.results) == 0:
                    console_print("There are no results to alter.\n")
                    continue
                console_print("\nThe AI thinks this was what happened:\n")
                print(story_manager.story.results[-1])
                action = story_manager.story.actions[-1]
                # The most recent action and result need to be temporarily
                # removed so story_context works.
                story_manager.story.actions = story_manager.story.actions[:-1]
                story_manager.story.results = story_manager.story.results[:-1]
                result = input("\nWrite the first part of the new result (use \\n for new line):\n\n")
                result = result.replace("\\n", "\n")
                result += story_manager.generator.generate(story_manager.story_context() + "\n> " + action + "\n" + result)
                print(result)
                story_manager.story.add_to_story(action, result)

Math nerd wanted for PyTorch

I opened a branch for running AIDungeon on PyTorch: https://github.com/thadunge2/AIDungeon/tree/pytorch-model/generator/gpt2

It's plug-and-play: just run play.py and it should install everything it needs (unless you're on Windows, in which case it will tell you what to do). However, it's unusably slow until we rework the generate method to use hidden past states. This is beyond my ken, so if one of you wants to step up and do it, be my guest.

Here's the generate function we use: https://github.com/huggingface/transformers/blob/ce50305e5b8c8748b81b0c8f5539a337b6a995b9/src/transformers/modeling_utils.py#L699

outputs = self(**model_inputs) needs to take a "past" parameter and change like so: outputs, pasts = self(**model_inputs). I don't have the time or knowledge to make it do this, since it turns the 3D matrix into a 2D one and fucks everything up. So drop a PR on the pytorch-model branch fixing that and we can roll this feature out.
