How about implementing the handling of multiple characters like https://gitlab.com/ai-dungeon-2-mods-unofficial/characters or https://gist.github.com/MightyAlex200/af9b5f1290cb68629ad81813da28c87f
It is standard development practice to set aside a branch for the stable release so that users installing will not get a broken version between bugfixes. It would not be difficult to commit and merge pull requests into a development branch first. Then you can update the instructions to include the option of cloning the development branch so that you still receive bug reports; anons still do the work for you, and other anons can choose to download a stable version, so everybody's happy.
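A minimal sketch of that workflow (the branch names `develop`/`master` are assumptions, and the demo sets up a throwaway repo so the commands are runnable as-is; in the real project you'd run the checkout/merge steps inside your existing clone):

```shell
set -e
# Throwaway demo repo standing in for the project clone.
repo=$(mktemp -d); cd "$repo"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"

# One-time: create the development branch (then: git push -u origin develop).
git checkout -q -b develop

# Day-to-day: commit and merge PRs into develop first...
git commit -q --allow-empty -m "bugfix from an anon"

# ...then promote tested work to the stable branch.
git checkout -q master
git merge -q --no-ff develop -m "promote tested fixes"
git log --oneline
```

The `--no-ff` merge keeps a visible merge commit each time tested work is promoted, so the stable branch history shows exactly when each batch of fixes landed.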
As of 5133672 or earlier, play.py got a bit munged; the attached patch fixes the strange formatting so the script actually runs.
[space reserved for finding culprit commit]
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/play.py b/play.py
index c8b51e0..95718f8 100644
--- a/play.py
+++ b/play.py
@@ -223,9 +223,8 @@ def play_aidungeon_2():
generator = GPT2Generator()
story_manager = UnconstrainedStoryManager(generator, upload_story=upload_story, cloud=False)
print("\n")
-
- ranBanner = bannerRan()
- openingPass = (ranBanner.banner_number)
+ ranBanner = bannerRan()
+ openingPass = (ranBanner.banner_number)
with open(openingPass, "r", encoding="utf-8") as file:
starter = file.read()
This issue was reproduced on the Colab on the newest version (commit 9e3ab21). Upon using the /retry command, the game will crash and produce the following error:
Traceback (most recent call last):
File "play.py", line 608, in <module>
play_aidungeon_2()
File "play.py", line 461, in play_aidungeon_2
story_manager.act_with_timeout(last_action)
File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 208, in act_with_timeout
return func_timeout(self.inference_timeout, self.act, (action_choice,))
File "/usr/local/lib/python3.6/dist-packages/func_timeout/dafunc.py", line 108, in func_timeout
raise_exception(exception)
File "/usr/local/lib/python3.6/dist-packages/func_timeout/py3_raise.py", line 7, in raise_exception
raise exception[0] from None
File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 203, in act
result = self.generate_result(action_choice)
File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 211, in generate_result
block = self.generator.generate(self.story_context() + action)
File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 198, in story_context
return self.story.latest_result()
File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 78, in latest_result
latest_result += self.actions[-mem_ind] + self.results[-mem_ind]
IndexError: list index out of range
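The traceback suggests latest_result() indexes further back into the actions/results lists than they reach after /retry trims them. A hedged sketch of a bounds guard; this is a hypothetical minimal reconstruction mirroring the names in story_manager.py, not the real class:

```python
# Hypothetical stand-in for the Story class in story_manager.py.
class Story:
    def __init__(self, memory=20):
        self.memory = memory
        self.actions = []
        self.results = []

    def latest_result(self):
        latest_result = ""
        # Never look further back than we actually have history for --
        # this is the guard the IndexError suggests is missing.
        depth = min(self.memory, len(self.actions), len(self.results))
        for mem_ind in range(depth, 0, -1):
            latest_result += self.actions[-mem_ind] + self.results[-mem_ind]
        return latest_result
```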
I have a question because there are no detailed installation instructions in README.md.
When I run install.sh, it seems like something runs, but it disappears quickly.
download_model.sh behaves the same way.
I tried downloading the trained model by running download_model.py, but only files that don't work are downloaded.
The error message is as follows:
AccessDenied: Access denied.
install.sh, play.py and other script files have Windows line endings, which makes their shebang headers pointless. This also means they are impossible to run without explicitly naming the interpreter on the command line. They are also marked as non-executable, making them even more tedious to run.
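The fix on a Linux/Colab checkout is a one-liner per file (assuming GNU sed; `dos2unix` also works where installed), demoed here on a stand-in script:

```shell
set -e
# Stand-in for play.py etc.: Windows line endings, no exec bit.
printf '#!/bin/sh\r\necho hello\r\n' > demo.sh

# Strip the carriage returns in place so the shebang line is valid...
sed -i 's/\r$//' demo.sh
# ...and mark the script executable so the shebang actually gets used.
chmod +x demo.sh
./demo.sh
```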
Managing several saves can be a hassle when they all look same-ey, so custom save names would help. Such a feature was in a mod for an earlier version I played; I could upload it if needed.
Is raw input functionally identical to #21?
The error reads
Traceback (most recent call last):
File "play.py", line 9, in <module>
from story.story_manager import *
File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 191
try:
^
SyntaxError: invalid syntax
Is there a way I can fix this? The fork used to work for me.
When playing the game after manually fixing the "missing parenthesis" crash, the input box works like a password box and doesn't show what you're typing.
It only happens when it's actually waiting for input, ie: doesn't happen while the game is initializing until it asks for new or load game
Editing the box element and making it a "text" instead of "password" only fixes it until it asks your next input.
I suspect it might be related to the newly committed encryption feature, since that uses passwords, but it might have nothing to do with it.
The Colab version is missing the tracery package, and install_model.sh does not install it; I was able to add it manually.
There are also problems copying the files to Google Drive.
Latest versions have the linebreaks on the output looking weird:
It looks like this, it goes on until it linebreaks
and the next line starts a bit more to the right.
It happens again on every linebreak, but never
on the first line after each player input.
(alt+255 used here to simulate the spaces otherwise it wouldn't show)
Considering how far we've diverged from the original source material, and how the name "AI Dungeon" might soon mainly be associated with a proprietary, paid program, it may be time to re-brand. Many anons have suggested the name "OpenCYOAI", which I think would be appropriate. What do you all think?
There's a bunch of code setting it and restoring its default value, but it is currently unused.
I suggest that you utilize GitHub's Projects feature so people can have a clear understanding of what needs to be completed and what's already finished (e.g. bug fixes or improvements to AI outputs). It would also reduce the frequency with which questions like these are asked.
For the commands /censor, /ping, and, in the upcoming #29, /saving and /cloud: each one needs to include logic for:
I propose that instead of doing:
Lines 288 to 308 in 135efb4
elif command == "censor":
    generator.censor = not generator.censor
    console_print("Censor is now {}.".format("enabled" if generator.censor else "disabled"))
Benefits for doing so:
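One way to sketch the "instead of" half of this proposal is a table-driven toggle handler; the settings object and the exact command set here are hypothetical, not the repo's real API:

```python
# Commands that flip a boolean setting; each maps to an attribute of the
# same name on a (hypothetical) settings object.
TOGGLE_COMMANDS = {"censor", "ping", "saving", "cloud"}

def handle_toggle(settings, command):
    """Flip the named boolean and return the status message, or None
    if the command isn't a known toggle."""
    if command not in TOGGLE_COMMANDS:
        return None
    new_value = not getattr(settings, command)
    setattr(settings, command, new_value)
    return "{} is now {}.".format(
        command.capitalize(), "enabled" if new_value else "disabled")
```

With this shape, adding a new toggle command is one entry in the set plus one attribute, rather than another elif block duplicating the print logic.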
/showstats command
Sometimes I want to change the amount of generated text. For example, for dialogues I'd sometimes like to set it to a lower amount so that the game spouts less bullshit from my perspective, or when describing locations I'd like to set it higher. So the ability to change it midgame, as you can do with top_p and temp now, would be nice.
It would be nice to have the options saved to a config file so that you don't have to change stuff every time you run the game. You could also back it up and carry it over when updating the game.
Clover already does this; it might be worth taking a look at it.
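A hedged sketch of what such persistence could look like; `config.json` and the field names are placeholders, not an existing format in this repo:

```python
import json
import os

# Placeholder defaults; real field names would follow the game's settings.
DEFAULTS = {"temp": 0.4, "top_p": 0.9, "generate_num": 60, "censor": False}

def load_config(path="config.json"):
    """Return saved settings merged over defaults; a missing file just
    means defaults, so first run works without any setup."""
    if not os.path.exists(path):
        return dict(DEFAULTS)
    with open(path, "r", encoding="utf-8") as f:
        cfg = dict(DEFAULTS)
        cfg.update(json.load(f))
        return cfg

def save_config(cfg, path="config.json"):
    """Write the current settings back out so they survive restarts."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cfg, f, indent=2)
```

Because unknown keys are merged over defaults, a config file carried over from an older version keeps working when new settings are added.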
Hello everyone who's contributed to the repo thus far:
Please comment on this issue affirming your consent to have your changes included under the new AGPL license, as per http://boards.4channel.org/v/thread/489594729#p489602649
@tshatrov @ShnitzelKiller @marktaiwan @pirone2 @KennethSamael @PapayasTehSkeletor
Shit happens instantly when running with the latest commits:
Traceback (most recent call last):
File "play.py", line 9, in <module>
from story.story_manager import *
File "/content/gdrive/My Drive/AIDungeon/story/story_manager.py", line 191
try:
^
SyntaxError: invalid syntax
I noticed that the game would on occasion produce results that would cut off just as a character is about to speak. I think I identified at least one condition that causes this.
Let's say for example the raw output of one run ends with:
[...]
You walk pass your neighbour mowing his lawn, he turns to you and says, "Hey how's it going?"
[incomplete sentence fragment]
When passed to cut_trailing_sentence, the function removes everything after the question mark, including the double quote:
[...]
You walk pass your neighbour mowing his lawn, he turns to you and says, "Hey how's it going?
The text is then passed to cut_trailing_quotes, which removes the rest:
[...]
You walk pass your neighbour mowing his lawn, he turns to you and says,
This issue only affects dialogues that end with ? or ! but not ., because in one of the earlier steps of result_replace, there is this line:
I don't know the best way to go about fixing this. My present solution is to move cut_trailing_quotes near the top of the function so it removes actual incomplete quotes first, then change
last_punc = max(text.rfind("."), text.rfind("!"), text.rfind("?"))
to include " as a valid character to end a sentence with. One problem I can see with this is that if the last sentence of the dialogue contains the string you ask or you say, it will get removed by cut_trailing_action, potentially resulting in an incomplete quote if the dialogue is 2 or more sentences long.
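To make the proposed change concrete, here is a sketch of the modified terminator search; the function name and surrounding body are assumptions reconstructed from the description above, not the repo's actual code:

```python
def cut_trailing_sentence_fixed(text):
    """Trim text back to the last sentence terminator, treating a closing
    double quote as valid (the proposed change) so dialogue ending in ?"
    or !" keeps its quote."""
    last_punc = max(text.rfind("."), text.rfind("!"),
                    text.rfind("?"), text.rfind('"'))
    if last_punc == -1:
        # No terminator at all: leave the text untouched.
        return text
    return text[: last_punc + 1]
```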
When I try to get past the part where it asks whether or not I want to change the temp value, the program stops. It gives me this error if I input N:
Traceback (most recent call last):
File "play.py", line 382, in <module>
play_aidungeon_2()
File "play.py", line 150, in play_aidungeon_2
story_manager.generator.generate_num = story_manager.generator.default_gen_num
AttributeError: 'GPT2Generator' object has no attribute 'default_gen_num'
and this if I input Y:
Traceback (most recent call last):
File "play.py", line 382, in <module>
play_aidungeon_2()
File "play.py", line 138, in play_aidungeon_2
story_manager.generator.change_temp(float(input("Enter a new temp (default 0.4): ")))
AttributeError: 'GPT2Generator' object has no attribute 'change_temp'
Any tips for fixing this?
In light of recent events, and in case it's escaped attention.
If you use the /cloud command to turn cloud saving on, using it again still prints a message saying it's turning the setting on.
Maybe it's just visual, and it is actually turning it off... but it does happen.
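If it's more than visual, the likely fix is deriving the message from the post-toggle value rather than printing a fixed string; a hedged sketch with a hypothetical handler (the real command handling lives in play.py and may differ):

```python
def toggle_cloud(story_manager):
    """Flip cloud saving and report the state *after* toggling, so the
    message can never disagree with the actual setting."""
    story_manager.cloud = not story_manager.cloud
    state = "on" if story_manager.cloud else "off"
    return "Cloud saving is now {}.".format(state)
```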
/censor fails to pass a check and enable/disable itself:
> /censor off
Traceback (most recent call last):
File "play.py", line 753, in <module>
play_aidungeon_2()
File "play.py", line 412, in play_aidungeon_2
if not generator.censor:
AttributeError: 'NoneType' object has no attribute 'censor'
Hi, one feature which would be great is the ability to make custom final prompts, or to edit the first story prompt.
I am not talking about the context; I am talking about the first prompt made by the AI.
It is extremely annoying if I want to make a specific opening scene in my custom stories and the AI runs away with its story creation. I can't even force the AI to do what I want, because the first story prompt can't be edited.
In the Clover Edition fork of AI Dungeon (https://github.com/cloveranon/Clover-Edition), they use half-precision floating point, which uses significantly less GPU memory ("4GB of VRAM to run at our reduced 16 bit mode"). Is it possible to use the same technique here?
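For context (not this repo's code): in PyTorch-based forks like Clover, the switch is typically a `model.half()` call after loading. The memory saving itself is just the element size, as this stdlib-only illustration shows:

```python
import struct

# Half precision stores each weight in 2 bytes instead of 4, so a model's
# weight memory footprint is halved. Weight count here is arbitrary.
NUM_WEIGHTS = 1_000_000
fp32_bytes = NUM_WEIGHTS * struct.calcsize("f")  # 32-bit float
fp16_bytes = NUM_WEIGHTS * struct.calcsize("e")  # 16-bit half float
print(fp32_bytes, fp16_bytes)
```

Whether the same trick applies to this repo's TensorFlow 1.x GPT-2 checkpoint is a separate question; the savings above only describe the storage side.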
In story_manager.py, load_from_storage() has the save directory set to "saved_stories" while save_to_storage() and start_new_story() use "saves"; this is apparently breaking the loading and saving features.
Note that the cloud save location was moved from the root to "saved_stories" in patch latitudegames/AIDungeon#101.
Sorry I can't actually test it because I'm unfamiliar with how this works in Colab.
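A minimal sketch of the fix is a single shared constant so the three functions can't disagree again; the names and path layout here are illustrative, not the repo's exact code:

```python
import os

# One source of truth for the save location, matching where the cloud
# saves were moved in latitudegames/AIDungeon#101.
SAVE_DIR = "saved_stories"

def save_path(story_id):
    """Build the save-file path; every load/save/new-story code path
    should go through this instead of hardcoding a directory."""
    return os.path.join(SAVE_DIR, "story{}.json".format(story_id))
```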
When playing in raw mode, after the story gets to a certain length, it gets stuck in an infinite loop:
AIDungeon/generator/gpt2/gpt2_generator.py
Lines 89 to 90 in c198371
This is because cut_down_prompt uses the player action indicator > to split and reduce the prompt; the indicator doesn't exist in raw mode, so the prompt is left unchanged.
There's probably a more elegant solution for this, but this is the workaround I'm using for my offline copy for the time being. It just reuses the code from #27 to cut down the prompt sentence by sentence if the generator is in raw mode.
def cut_down_prompt(self, prompt):
    if not self.raw:
        split_prompt = prompt.split(">")
        expendable_text = ">".join(split_prompt[2:])
        return split_prompt[0] + (">" + expendable_text if len(expendable_text) > 0 else "")
    else:
        sentences = string_to_sentence_list(prompt.lstrip())
        sentences = sentences[1:]
        new_text = ""
        for i in range(len(sentences)):
            if sentences[i] == "<break>":
                new_text = new_text + "\n"
            else:
                new_text = new_text + " " + sentences[i]
        return new_text.lstrip()
I wrote this command the other day, as a variant of my /alter command, which is in this fork.
elif command == "altergen":
    if len(story_manager.story.results) == 0:
        console_print("There's no results to alter.\n")
        continue
    console_print("\nThe AI thinks this was what happened:\n")
    print(story_manager.story.results[-1])
    action = story_manager.story.actions[-1]
    # The most recent action and result need to be temporarily removed
    # so that story_context() works.
    story_manager.story.actions = story_manager.story.actions[:-1]
    story_manager.story.results = story_manager.story.results[:-1]
    result = input("\nWrite the first part of the new result (use \\n for new line):\n\n")
    result = result.replace("\\n", "\n")
    result += story_manager.generator.generate(story_manager.story_context() + "\n> " + action + "\n" + result)
    print(result)
    story_manager.story.add_to_story(action, result)
I opened a branch for running AIDungeon on PyTorch: https://github.com/thadunge2/AIDungeon/tree/pytorch-model/generator/gpt2
It's plug-and-play, just run play.py and it should install everything it needs to (unless you're on Windows, in which case it will tell you what to do). However, it's unusably slow until we rework the generate method to use hidden past states. This is beyond my ken, so if one of you wants to step up and do it, be my guest.
Here's the generate function we use: https://github.com/huggingface/transformers/blob/ce50305e5b8c8748b81b0c8f5539a337b6a995b9/src/transformers/modeling_utils.py#L699
outputs = self(**model_inputs) needs to take a "past" parameter and change like so: outputs, pasts = self(**model_inputs)
I don't have the time or knowledge to make it do this, since it turns the 3D matrix into a 2D one and fucks everything up. So drop a PR on the pytorch-model branch fixing that and we can roll this feature out.
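As a toy illustration of why threading the past state through matters (no transformers dependency; the "model" here is fake and only counts tokens processed): without the cache, every step re-feeds the whole prefix, so total work grows quadratically; with it, each step feeds one token.

```python
# Tally of fake forward-pass work per strategy.
calls = {"nocache": 0, "cache": 0}

def forward(tokens, past=None, mode="nocache"):
    """Fake model step: 'processes' the tokens it is given and returns
    (next_token, new_state). Only the token-count bookkeeping is real."""
    calls[mode] += len(tokens)
    state = (past or 0) + len(tokens)   # stand-in for hidden past state
    next_token = state % 50257          # stand-in for a sampled token id
    return next_token, state

def generate_nocache(prompt, steps):
    seq = list(prompt)
    for _ in range(steps):
        tok, _ = forward(seq, mode="nocache")  # re-feed the whole sequence
        seq.append(tok)
    return seq

def generate_cache(prompt, steps):
    seq = list(prompt)
    tok, past = forward(seq, mode="cache")     # process the prefix once
    seq.append(tok)
    for _ in range(steps - 1):
        # Each later step feeds only the newest token plus the cached state.
        tok, past = forward([tok], past=past, mode="cache")
        seq.append(tok)
    return seq
```

The real rework would do the analogous thing inside the Hugging Face generate loop: capture the pasts returned by the model and pass them back in, so each iteration's input shrinks to the newest token.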