
2020's Introduction

This is for the 2020 NaNoGenMo. See here for the 2021 repo!

NaNoGenMo 2020


National Novel Generation Month - based on an idea Darius tweeted on a whim, where people are challenged to write code that writes a novel.

Hey, who wants to join me in NaNoGenMo: spend the month writing code that generates a 50k word novel, share the novel & the code at the end

This is the 2020 edition. For previous years, see the earlier NaNoGenMo repositories.

The Goal

Spend the month of November writing code that generates a novel of 50k+ words. This is in the spirit of National Novel Writing Month's interesting definition of a novel as 50,000 words of fiction.

The Rules

The only rule is that you share at least one novel and also your source code at the end.

The source code does not have to be licensed in a particular way, so long as you share it. The code itself does not need to be on GitHub, either. We use this repo as a place to organize the community. (Convenient because many programmers have GitHub accounts and the Issues section works like a forum with excellent syntax highlighting.)

The "novel" is defined however you want. It could be 50,000 repetitions of the word "meow" (and yes it's been done!). It could literally grab a random novel from Project Gutenberg. It doesn't matter, as long as it's 50k+ words.

Please try to respect copyright. We're not going to police it, as ultimately it's on your head if you want to just copy/paste a Stephen King novel or whatever, but the most useful/interesting implementations are going to be ones that don't engender lawsuits.

This activity starts at 12:01am GMT on Nov 1st and ends at 12:01am GMT Dec 1st.

How to Participate

Open an issue on this repo and declare your intent to participate. If you already have some inkling of the kind of project you'll be doing, please title your issue accordingly. You may continually update the issue as you work over the course of the month. Feel free to post dev diaries, sample output, etc.

If you have more than one project you're attempting, feel free to post a new issue for that project and keep that one up to date as well.

Also feel free to comment on other participants' issues.

Admins

Official admins for NaNoGenMo are @dariusk, @hugovk, @MichaelPaulukonis, and @mewo2. We'll be doing our best to keep the issues section well organized and tagged.

Resources

There's an open issue where you can add resources (libraries, corpuses, APIs, techniques, etc).

There are already a ton of resources on the old resources threads from 2013, 2014, 2015, 2016, 2017, 2018, and 2019.

You might want to check out corpora, a repository of public domain lists of things: animals, foods, names, occupations, countries, etc.

That's It

Have fun!


2020's Issues

Strange Animalia - Intent to participate

Back in the days of the mid-2000s internet I was part of a fan forum for His Dark Materials, where a bunch of nerdy, mostly teenage members compiled and wrote animal analyses mapped to personality traits. I have a 10 MB CSV file with about 10 years of community-sourced animal traits I scraped, and I'd like to do something weird with it.

Current idea:

  • Take the titles of animal names and parse them into a grammar to generate new animal forms
  • Train a model on a whole lot of animal descriptions from post bodies, or use a Markov chain (not sure yet!)
  • Use the new animals as input and see what comes out
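A minimal sketch of the recombination step; the animal list here is a placeholder for the scraped forum data, and the halfway split is just one naive way to fragment names:

```python
import random

# Placeholder names standing in for the scraped His Dark Materials data.
ANIMALS = ["armadillo", "peregrine falcon", "snow leopard", "octopus", "wolverine"]

def fragments(name):
    # Naive split point: halfway through one word of the name.
    word = random.choice(name.split())
    mid = len(word) // 2
    return word[:mid], word[mid:]

def new_animal():
    head, _ = fragments(random.choice(ANIMALS))
    _, tail = fragments(random.choice(ANIMALS))
    return head + tail

print(", ".join(new_animal() for _ in range(5)))
```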

In!

Not sure what to do yet.

Maybe I'll update and re-run NaNoGenMo/2016#138, four years on.

I may also check some unrealised ideas from 2019, 2018, 2017, 2016 and 2015 and 2014.

Or maybe I'll figure out how to physically print an old entry instead.

Sounds interesting, I'd love to participate

Stumbled upon a link to this on Twitter, and it sounds like a fun time. In November it'll be time to code a novel! Will my story be grammatically correct? Maybe. Will it be coherent? Not likely! But will it be 50,000 words? Yep!

activity logs

hi! i've been a big fan of nanogenmo for a while. this year i have an idea for it.

i found out earlier this year that a friend of mine has to keep exhaustive logs of all the work activity he does every day. it includes things like "11am-11:15am: Nothing". this really blew me away before i learned that it's actually common practice in some positions. i think it could be cool to generate a prose version of it. it might also be really boring. i guess i'll find out.
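One hedged guess at what the log-to-prose step might look like, with invented sample rows in the format described above:

```python
# Invented sample rows in the "11am-11:15am: Nothing" format.
LOG = [
    ("11:00am", "11:15am", "Nothing"),
    ("11:15am", "12:00pm", "Answered email"),
    ("12:00pm", "12:30pm", "Nothing"),
]

def prose(log):
    lines = []
    for start, end, activity in log:
        if activity == "Nothing":
            lines.append(f"From {start} to {end}, nothing happened, and I watched it not happen.")
        else:
            lines.append(f"Between {start} and {end} I {activity.lower()}, dutifully.")
    return " ".join(lines)

print(prose(LOG))
```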

I'm Participating

I heard about NaNoGenMo a couple of years ago and I always wanted to participate. This year, after much vacillation, I'm finally joining.

Walking around, eating mangos and strawberries, on a deserted island

in a similar vein to NaNoGenMo/2019#81, but hopefully I'll be able to come up with a "deeper" & more interesting simulation this year.

my basic process will be: make a procedurally-generated text-adventure with emphasis on simulating basic needs, crafting, "survival sandbox" stuff. then make an AI to play the game. and then generate narrative from the experiences of that AI.
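A toy sketch of that pipeline under heavy simplification: one agent, one need (hunger), and an action log that doubles as the narration:

```python
import random

FOODS = ["mango", "strawberry"]  # the island's two advertised crops

def simulate(days=3):
    hunger, log = 0, []
    for day in range(1, days + 1):
        hunger += 1
        if hunger > 1:
            food = random.choice(FOODS)
            log.append(f"On day {day}, gnawing hunger drove me to pick a {food}.")
            hunger = 0
        else:
            log.append(f"On day {day}, I walked the shoreline and watched the waves.")
    return " ".join(log)

print(simulate())
```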

🤞

life & death: a machine's interpretation of the human condition

hey, all. first time participating here. i'm going to do some research and populate a gigantic list of important events that my friends and family have experienced at different points in their lives. specifically, i'm planning on splitting the data sets into:

  • infancy & toddler (0-3)
  • middle childhood (6-11)
  • teenage (13-19)
  • young adulthood (20-29)
  • middle adulthood (30-55)
  • seniority (55-death)

based on the information i get, i will have my program create a memoir of its own. will it manifest a desire to live? let's see!
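A guess at the templating core, with hypothetical event pools standing in for the researched family data:

```python
import random

# Hypothetical event pools keyed by the life stages listed above.
STAGES = {
    "infancy": ["took a first step", "spoke a first word"],
    "teenage years": ["failed a driving test", "fell in love"],
    "seniority": ["retired", "planted a garden"],
}

def memoir():
    return "\n".join(
        f"In my {stage}, I {random.choice(events)}."
        for stage, events in STAGES.items()
    )

print(memoir())
```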

Intent to participate: Crash Blossoms, or: History Above the Fold

NaNoGenMo is a solution to a problem that I have: a dataset with no purpose. Within the month, I intend to use the R language to convert my data into a generated novel of (hopefully) 50k words or more. The title hints at the dataset: "Crash Blossoms, or: History Above the Fold".

This is my first time doing a NaNo*Mo, wish me luck!

Intent of participation

I have been looking at this for a couple of years now. I think this year is the one, I want to try it out. Not sure what I am making but let's figure it out along the way.

NaNoGenMo Generated Stories Continued by AI Dungeon

My project will pick up where my NaNoGenMo 2019 project left off, utilizing GPT-2 and GPT-3, the language models developed by OpenAI.

Last year, thanks to Max Woolf, I learned how to fine-tune two different GPT-2 language models with Google Colab. I created "Writing Prompt Prompter," trained on thousands of writing prompts posted on /r/WritingPrompts, and "Writing Prompt Responder," trained on thousands of responses to those prompts on /r/WritingPrompts.

After running those language models for weeks straight, I generated a 50,000-word story collection, with a series of stories generated by "Writing Prompt Responder" in response to story prompts generated by "Writing Prompt Prompter." The stories were spooky and lovely.

Late in 2019, Nick Walton created AI Dungeon, a text-based video game that used the GPT-2 language model as its storytelling engine. Walton shared his AI Dungeon code for free through Google Colab and I spent a few amazing weeks running the early version on my laptop. Walton and his team have since developed a standalone AI Dungeon app.

In June, OpenAI announced the creation of GPT-3, the third generation of its AI language model. The company said it trained GPT-3 on a massive selection of datasets, including the Common Crawl corpus (around one trillion words gathered through eight years of web crawling), two "internet-based books corpora," and the English-language version of Wikipedia.

I can't access GPT-3 on my own, but AI Dungeon now runs on the powerful new language model. I could never code something as elegant as AI Dungeon and I could never replicate the millions of training hours its users have spent on the game.

So for my NaNoGenMo project this year, I will use AI Dungeon as my interface to generate 50,000 words. I will feed AI Dungeon my favorite computer-generated writing prompts and writing prompt responses generated during NaNoGenMo 2019. AI Dungeon will continue where those stories left off last year.

It will be like a game of Exquisite corpse played between GPT-2 and GPT-3.

I will keep an eye on Tra38's project that also uses AI Dungeon.

Using "AI Dungeon" To Create A RPG Sourcebook [Complete!]

AI Dungeon is a free-to-play single-player and multiplayer text adventure game which uses artificial intelligence to generate unlimited content. (Wikipedia)

The important thing about AI Dungeon is that it currently uses GPT-3 on the backend, which is a very powerful machine learning algorithm, able to generate human-readable text based on the "prompt" you provide it. The free version uses a fairly limited version of GPT-3 (Griffin), but the paid version (Dragon) is much better. I currently have the paid version.

I have used AI Dungeon to generate characters for tabletop RPGs, so having it generate fluff for a roleplaying game seems doable.

Here's a problem with this project though. AI Dungeon is a closed-source project. The only "source code" I can provide is the initial prompt I used to generate the text, and I can open-source that prompt. But AI Dungeon doesn't allow for reproducible output. In other words, it is based almost entirely on the honor system.

So, I may write my own script, along the lines of The Computer Crashes or The Track Method, and open-source that script, merely using AI Dungeon as the generator of a corpus to be remixed.

I don't know if I'll have time to work on this project, but I think it might be something for me to think about, at least. So I'm making this issue, just in case. If I get, say, only 10,000 words out of this, I'll just add a bunch of meows at the end and declare victory.

Horror Story with Monster

I will be coding a horror story with a monster in it. I plan to do a mix of templating and something else (probably simulation). This is because I'd like to produce something with a coherent sense of plot, while still wanting to be "surprised" by what the characters do. I'd like to do something with synonyms in this work too.

I'm not sure how all of this will go, but I am sure I will have a ton of fun coding it.

I will program it in Ruby.

Naked Fear, Loathing, Pride, Prejudice, and Brunch at Tiffany's (in Las Vegas).

This is going to be a continuation of my ideas from last year in NaNoGenMo/2019#65

Treating vocabularies as numbering systems, and works composed from them as large numbers, to be manipulated.

Following some very good advice last year, I switched focus towards the end of the month to ensuring I actually had 50k words in some kind of readable format, rather than bug-free code that was pure and true to a half-baked concept that only I was judging. It was a good exercise in project management: focus on the results that matter.

I was happy enough with the results last year. Some of the bugs / issues with the tokenisation of the source material seemed to make the output more interesting, and my attempts last year to fix it resulted in (if I remember correctly) less interesting output, so I embraced the glitches and accomplished the goal of producing a generated novel using a simple arithmetic operation on a text.

This round I want to:

  • Generalise the tokenisation to be robust against many kinds of input (I'll be using a mix of properly edited text and some OCR'd source content)
  • Work on formalising the tokenisation algorithm so it is repeatable / comprehensible
  • Overcome the challenge of converting a >100K-word text like Pride and Prejudice into an integer. With the current code this requires more than 4 GB of RAM
  • Work on a shared vocab across more than one source work (4) and do some more interesting averaging or combinations.
  • Figure out if there is a conceptually pure way to make the text output interesting, or whether the output will really be as interesting as reading a large integer.
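The core trick, a work as one big base-V number, is straightforward with Python's arbitrary-precision integers. A minimal sketch (note that a text whose first token has index 0 would need a sentinel to round-trip cleanly):

```python
def text_to_int(tokens, vocab):
    # Treat each token as a digit in base len(vocab).
    index = {w: i for i, w in enumerate(vocab)}
    n = 0
    for tok in tokens:
        n = n * len(vocab) + index[tok]
    return n

def int_to_text(n, vocab):
    out = []
    while n:
        n, d = divmod(n, len(vocab))
        out.append(vocab[d])
    return list(reversed(out))

tokens = "it was a truth universally acknowledged".split()
vocab = sorted(set(tokens))
n = text_to_int(tokens, vocab)
assert int_to_text(n, vocab) == tokens  # round-trips; now do arithmetic on n
```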

server-side graphics

Nothing sexy, and things that were done a decade ago, BUT NOT BY ME and not tools that work with the rest of my toolchain, so why not?!!

Maybe I won't get anything done - I haven't the last few years. But maybe I will!

Possibly using:

Driving around America with Mark Twain

Since last year's NaNoGenMo, I made a "novel" that consists of Google Maps driving directions, where the itinerary follows the mentions of U.S. place names from Mark Twain novels.

Let's drive around America with Mark Twain!

I hope to share the text and code soon.
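The itinerary-extraction step might look something like this sketch; the tiny gazetteer is hypothetical, and each consecutive pair of stops would then be fed to a driving-directions API:

```python
import re

GAZETTEER = {"Hannibal", "St. Louis", "New Orleans", "Virginia City"}  # stand-in list

def itinerary(text):
    stops, seen = [], set()
    for place in re.findall(r"[A-Z][a-z]+(?:\.? [A-Z][a-z]+)*", text):
        if place in GAZETTEER and place not in seen:
            seen.add(place)
            stops.append(place)
    return list(zip(stops, stops[1:]))  # consecutive legs of the drive

print(itinerary("We left Hannibal for St. Louis, then on to New Orleans."))
```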

Something Something Cyberwitches

It's definitely going to have cyberwitches. There may be Markov chains. There will be javascript. I may even write some of the code myself.

Chess Dreams (a "Sports Novel")

In 2017 I wrote "White to Play and Win", a novel in which a Chess engine narrates its thought process as it determines the best move to play. This year I intend to do something chess-themed again, but this time, in the reverse direction: rather than hyperfixation on a single move, it will tell the story of four players at a tournament competing for victory.

The tone will be similar to a "sports novel", with the players coming from different backgrounds and temperaments, and having different reasons for wanting to win the championship. Interspersed with their personal (simulated? Tracery?) stories of growing up, training, and accepting defeat or glory... will be actual simulated games. Chess engines are quite advanced now and many online services offer post-game analysis, identifying the significant moves, blunders, and alternate tactics for both players. The challenge will be to put a narrative spin on the analysis performed between the players.

For the past several months my time has been consumed by Blaseball, an online baseball simulation where 20 teams of randomly-generated players compete every week to win the Internet League Series championship. Over the sparse simulation mechanics an entire community of fans have grown, cheering on their favorite teams and building elaborate lore and fan work to tell stories from the rolled dice. Given a slight nudge, people are quite happy to identify their own patterns in the material and project much more into it than what is actually there. I hope I can translate some of that experience into this work.
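For the "actual simulated games" half, a toy sketch assuming the python-chess package (pip install chess); random legal moves stand in for a real engine, which would come from chess.engine plus something like Stockfish:

```python
import random
import chess

board = chess.Board()
for _ in range(6):
    move = random.choice(list(board.legal_moves))  # engine analysis goes here
    san = board.san(move)  # get the notation before the move is pushed
    player = "White" if board.turn == chess.WHITE else "Black"
    print(f"{player}'s hand hovered over the clock, then committed: {san}.")
    board.push(move)
```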

Simple Dialogues

Simple dialogues converted into ridiculously detailed phonetic descriptions.
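One way the phonetic expansion could start, assuming Allison Parrish's pronouncing package (a CMU Pronouncing Dictionary wrapper):

```python
import pronouncing  # pip install pronouncing

def phoneticize(line):
    # Expand each word into its ARPAbet phones; fall back to the word itself.
    out = []
    for word in line.lower().split():
        phones = pronouncing.phones_for_word(word.strip(".,!?"))
        out.append(phones[0] if phones else word)
    return " | ".join(out)

print(phoneticize("Hello, how are you?"))
```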

Pessoatron

Roughly: a symmetry group governs combinations of elements (characters, moods, locations, etc) for each chapter: these are then used as elements to drive a text generator.

This is just a vague idea which may change once I start actually coding.

  • Symmetry group generator - based on https://math.ucr.edu/home/baez/six.html
  • Decide on a theme - the heteronyms of the Portuguese poet Fernando Pessoa
  • Write heteronym persona-templates
  • Write poem or first line templates
  • Write vocab lists
  • Finish off generator
  • Publish results

Completed results: https://etc.mikelynch.org/nanogenmo2020/
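A minimal sketch of the group-drives-generator idea: the six elements of S3 reassign three moods among three characters, one chapter per group element (the names and moods are placeholders, not the actual heteronym templates):

```python
from itertools import permutations

characters = ["Caeiro", "Reis", "de Campos"]
moods = ["serene", "stoic", "feverish"]

# Each permutation of (0, 1, 2) is one element of the symmetry group S3.
for chapter, perm in enumerate(permutations(range(3)), 1):
    print(f"Chapter {chapter}:")
    for character, j in zip(characters, perm):
        print(f"  {character} is {moods[j]} today.")
```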

Mean Kings: NaNoGenMo non-fiction

To the title -- well, kinda.

I'm still working on my other ridiculously stupid idea (#7), but as I get frustrated there, I turn to another smaller project: Mean Kings.

The goal here is to crawl Wikipedia for articles on ancient rulers, perform sentiment analysis on the articles and tell readers who was (or maybe wasn't) a mean king and some reasons why.

This began as an idea for a bot that my partner and I once had, but it never found its expression. Maybe I'll turn it into a bot after this. Probably.
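A stand-in for the sentiment step, scoring an invented excerpt with a tiny hand-rolled lexicon; the real project would fetch articles (for example with the wikipedia package) and use a proper sentiment model:

```python
NEGATIVE = {"cruel", "tyrant", "murdered", "ruthless", "feared"}
POSITIVE = {"just", "beloved", "wise", "generous", "peaceful"}

def meanness(text):
    words = [w.strip(".,;") for w in text.lower().split()]
    return sum(w in NEGATIVE for w in words) - sum(w in POSITIVE for w in words)

excerpt = "A cruel and ruthless ruler, he was feared rather than beloved."
verdict = "a mean king" if meanness(excerpt) > 0 else "maybe not so mean"
print(f"Verdict: {verdict} (score {meanness(excerpt)}).")
```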

Whimsical Forest Field Guide

I want to write a field guide to help you survive a magical, whimsical forest.
Sections on:

  • Author's notes
  • Table of Contents
  • Flora
  • Fauna

Entries will consist of

  • a name
  • text describing the item
  • any warnings about it.

The name will most likely be generated from a format with specific words pulled from corpuses. The text will likely be Markov-chain generated (source undecided). The warnings will be added periodically and will most likely be a format filled with specifics based on the name and/or key words in the entry.
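A minimal sketch of such an entry generator; the word lists are tiny placeholders for the corpus-backed ones:

```python
import random

ADJECTIVES = ["whispering", "gilded", "moonlit"]
NOUNS = ["fern", "toadstool", "bramble"]
WARNINGS = ["Do not touch the {name} after dusk.", "The {name} bites."]

def entry():
    name = f"{random.choice(ADJECTIVES)} {random.choice(NOUNS)}"
    warning = random.choice(WARNINGS).format(name=name)
    description = "A curious denizen of the forest."  # Markov text goes here
    return f"{name.title()}\n  {description}\n  WARNING: {warning}"

print(entry())
```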


Stretch goals: If I complete what I have fast enough and don't want to start another project, I will possibly attempt these extra tasks:

  • NLP
  • Hook the gen script up to Processing or Blender or other similar software to generate a few plants for artwork throughout the text.

Participation Intent

Here I declare my intent to participate 🙌

Haven't decided about style/topic yet but we'll see

Cosmic Voyage & Triptography 2

I've been writing on cosmic.voyage for a while and noticed some common tropes that I thought would make it amenable to generative text. Other writers on the platform may be doing NaNoWriMo, so this could be fun.

Last year I attempted to generate a dadaist zine but got hung up on the markdown+latex formatting. I'd like to fix the problems I had and make some improvements that I could re-implement in my other triptograph project.

The Cruxwick File / Any Day In 2020

I'm considering a puzzle book in the vein of the Sherlock Holmes Consulting Detective boardgame and Obra Dinn, where the reader is given a wealth of in-universe newspaper clippings, maps, directories and other documents, and a mystery can be solved by successfully cross-referencing information and eliminating possibilities.

The autogeneration idea would be to start with a single fact ("Mr Black was killed by Colonel Mustard") and then to recursively obfuscate it in a variety of ways ("Mr Black was killed with a skull-topped malacca cane" + "Colonel Mustard has a skull-topped malacca cane"), hiding all of these facts in various documents, until the case was complex enough to require some effort to solve. The script would generate as many non-overlapping cases as was needed to fill 50,000 words, along with a lot of randomly generated background filler to bulk out newspaper pages and address listings.
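The obfuscation step might be sketched like this: each pass replaces the culprit's name with an invented attribute and adds a fact tying the name to that attribute, so the printed facts form a solvable inference chain (the attributes are placeholders):

```python
import random

ATTRIBUTES = ["a skull-topped malacca cane", "a limp", "a Prussian accent"]

def obfuscate(statement, culprit, depth=2):
    facts = []
    for _ in range(depth):  # depth must not exceed len(ATTRIBUTES)
        attr = random.choice(ATTRIBUTES)
        ATTRIBUTES.remove(attr)  # each attribute is used only once
        facts.append(statement.replace(culprit, f"someone with {attr}"))
        statement = f"{culprit} has {attr}"
    facts.append(statement)
    return facts

for fact in obfuscate("Mr Black was killed by Colonel Mustard", "Colonel Mustard"):
    print(fact)
```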

I'm imagining a standalone file of documents with a simple introduction and no further mechanics, rather than a Consulting Detective door-knocking system, but will see how it goes.

Markov text with citations

A common criticism of GPT language models is that they plagiarise text from the internet. As an experiment in smoothing over this issue, I will make a Markov chain language model that tags each n-gram observation with the location of the original in the source text.

This means that in the text generation stage, each output token can cite the n-gram it was drawn from in the source text. In the generated novel, I'll put this info in footnotes. This should make the resulting text much better sourced, and give the reader clarity about the true origin of any deep insights found in the novel.

Haven't decided what source text to use. Maybe Shakespeare (all lines have a standard identifier), GPT research papers, Moby Dick...
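A minimal sketch of the tagged model, with bracketed source offsets standing in for the eventual footnotes:

```python
import random
from collections import defaultdict

def build(tokens):
    # Each bigram observation remembers where it was seen in the source.
    model = defaultdict(list)
    for i in range(len(tokens) - 1):
        model[tokens[i]].append((tokens[i + 1], i))
    return model

def generate(model, word, n=8):
    out = [word]
    while len(out) <= n:
        choices = model.get(word)
        if not choices:
            break
        word, pos = random.choice(choices)
        out.append(f"{word}[{pos}]")  # bracketed citation of the source offset
    return " ".join(out)

tokens = "call me ishmael some years ago never mind how long".split()
print(generate(build(tokens), "call"))
```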

Caveats:

  • I'll probably need to generate LaTeX to keep the footnotes organised.
  • The procedure would be difficult to port into GPT models.
  • Most of the 50,000 words would be in the footnotes.

AutoNovel: Auto-DL's RNN Writing a Novel

I am co-founder and maintainer of an open-source org (Auto-DL).
We need to work on RNNs anyway; writing a novel generator (given a list of topics, generate a 50k+ word novel) would be a challenging yet rewarding exercise.

I am participating :)

K. is lazy: Der Process (durch Epizeuxis)

A Kafkaesque, esoteric, Nano NaNoGenMo, Meow inspired epizeuxical "lengthy work of fiction" attempt in Lazy-K.

It may not be possible to bring the final code length under 256 characters of SKI combinators to make this 'Nano', but I'll try.

This began as an attempt to understand SKI combinator calculus by programming "something practical" using it.

An initial version of the code, which produces the exactly-50K-word-long text, comes in at 1276 bytes (not nano :( ):

K(IS(SI(K(S(K(S(S(KS)K)(S(S(KS)K)I)))(S(S(KS)K)I(S(S(KS)K)(SII(S(S(KS)K)I)))))))(K(S(SI(K(S(S(KS)K)(S(K(S(S(KS)K)(SII(S(S(KS)K)I))))(S(S(KS)K)I(S(S(KS)K)(S(S(KS)K)I)))))))(K(S(SI(K(S(K(S(S(KS)K)I))(S(SII)I(S(S(KS)K)I)))))(K(S(SI(K(S(S(KS)K)(S(K(SII(S(S(KS)K)I)))(S(S(KS)K)(S(S(KS)K)I(S(S(KS)K)(SII(S(S(KS)K)I)))))))))(K(S(SI(K(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)(S(S(KS)K)I))))))))))(K(S(S(KS)K)(S(S(KS)K)I)(S(S(KS)K)(S(S(KS)K)I(S(S(KS)K)(SII(S(S(KS)K)I))))(S(S(KS)K)(S(K(S(S(S(KS)K))(SII)(S(S(KS)K)I)))(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)I)))))(S(K(S(K(S(SI(K(S(K(S(S(KS)K)I))(S(SII)I(S(S(KS)K)I)))))))K))(S(K(S(K(S(SI(K(S(K(SII(S(S(KS)K)I)))(SII(S(S(KS)K)(S(S(KS)K)I))))))))K))(S(K(S(K(S(SI(K(S(SII)I(S(S(KS)K)I)(S(S(KS)K))(S(SII)(S(S(KS)K))(S(S(KS)K)I)))))))K))(S(K(S(K(S(SI(K(S(S(KS)K)(S(S(KS)K)I(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)I)))))))))))K))(S(K(S(K(S(SI(K(S(S(KS)K)I(S(S(KS)K)(S(K(S(S(KS)K)I))(S(S(KS)K)(SII(S(S(KS)K)I))))))))))K))(S(K(S(SI(K(S(S(KS)K)(S(SII)I(S(S(KS)K)I))(S(S(KS)K))(SII(S(S(KS)K)(S(S(KS)K)I))))))))K))))))))(S(SI(K(S(S(KS)K)(S(S(KS)K)I)(S(S(KS)K)I))))(K(S(SI(K(S(S(KS)K)(S(K(S(S(KS)K)(SII(S(S(KS)K)I))))(S(S(KS)K)I(S(S(KS)K)(S(S(KS)K)I)))))))(K(K(SII(SII(S(S(KS)K)I)))))))))))))))))))

Plan:

  • Attempt to reduce hardcoded text by accepting input -- Church encoded ASCII values take up a lot of space. This seems like the best bet for reducing code size.
  • If I can't make input work, or if the input-processing code is still way more than 256 bytes, I will create an internationalised suite of generators to produce the same "novel" in a variety of languages. Planned translations: English, German, French, Latin, and Spanish.
  • Ideal goal: have the language selection handled in Lazy-K, but I doubt I'm going to manage that.

Inspirational quote:

"A short while later, K. was lying in his bed. He very soon went to sleep, but before
he did he thought a little while about his behaviour, he was satisfied
with it but felt some surprise that he was not more satisfied."

Important note on pronunciation: For the purposes of this project, 'K.' refers to Franz Kafka's fictional character Josef K., and is to be pronounced as the German letter: 'kah'.

PhD poetry: creating poems from phd theses

Started a project earlier this year to create rhymes from Jordan Peterson's dissertation. Decided to use this project as a basis for NaNoGenMo 2020 and participate a second time! As opposed to last year I aim to update my work throughout the month a bit better, so let's see!

The Mathematician (π)

For reasons unknown, a mathematician sits down to write out their favorite (or at least second-favorite) number.

There will be at least one surprise.
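For the digit stream itself, an unbounded spigot (after Gibbons) is one compact starting point; the surprise is not included in this sketch:

```python
def pi_digits():
    # Unbounded spigot for the decimal digits of pi (Gibbons, 2006).
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = 10 * q * j * (2 * j - 1), 10 * u * (q * (5 * j - 2) + r - y * t), t * u, j + 1

digits = pi_digits()
print("".join(str(next(digits)) for _ in range(10)))  # 3141592653
```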

NaNo MetaPoetry

I've done NaNoWriMo successfully a few times and feel like it is now time to move onto more confusing ventures! This is my first time participating in NaNoGenMo. I'm looking forward to figuring out what on earth I'll make with my novice Python skills. Will it be AI generated fanfiction? Will it be a horror novel based on cookery websites? Who knows? Not me! :D

Describe your workflow in three sentences or fewer

I have almost no clue what I am doing, but rather than ask for help I hit upon the idea of asking you to tell me what you do so I can just copy it. (I won't be able to, lol)

I'll go first:

I will follow a bunch of tutorial videos using P5.JS to generate a lot of very random text. I will then attempt to impose some structure and sort of fill the structure in. Then I'll give up and just generate 50k words of nonsense.

God, i'm tired

Intent to participate
Collate all the weary moments of the bible into one long rest
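One guess at the collation, assuming a local plain-text Bible (the kjv.txt path is hypothetical):

```python
import re

WEARY = re.compile(r"\b(weary|wearied|rest|slumber|sleep|faint)\w*\b", re.IGNORECASE)

with open("kjv.txt") as f:  # hypothetical local copy of the text
    verses = f.read().splitlines()

one_long_rest = " ".join(v.strip() for v in verses if WEARY.search(v))
print(one_long_rest[:200])
```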

Anabasis-Inspired Comic

They say that the most important part of writing is to show, not tell. Comics tend to do a pretty good job of that, and are much more readable than a wall of textual simulation output.

I'm planning to make a comic for this year. I will aim for ~150 pages instead of 50,000 words, because comics are naturally sparser in words than prose, but I'll also make sure to generate a 50,000 word version to please the rule pedants.

As far as I know, this has only been done once, by atduskgreg in 2014. Unlike atduskgreg's entry, I will take the strictly simulationist approach with my generator. My plan is to generate a rough page-by-page plot through a simulator, then generate individual pages a little more loosely (while still maintaining as much cohesiveness as possible, especially in the small details), and finally generate the appropriate frames to go in the pages. I am hoping to generate a story that echoes the Anabasis, though I haven't decided to what extent.

I opened a repository which is empty for now, but I hope it won't stay so for long.

Intent to participate

I'm not sure if I want to write a fake diary for a supermarket cashier or generate logs for a food delivery company, but I'll try to participate this year. I hope to use Rust together with an ECS to generate the activities, otherwise I'll just use Python/Ruby.

Labyrinth of text

Intent: lay out existing text as a generated multicursal labyrinth, where branches occur only when the same word occurs multiple times in the source text.
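A sketch of one way to read that rule: walk the text word by word, and at any word with multiple occurrences allow a jump to any other occurrence, which is where the labyrinth branches:

```python
import random
from collections import defaultdict

def walk(words, steps=12):
    where = defaultdict(list)
    for i, w in enumerate(words):
        where[w].append(i)  # every occurrence of every word
    i, path = 0, []
    for _ in range(steps):
        path.append(words[i])
        i = random.choice(where[words[i]]) + 1  # branch at repeated words
        if i >= len(words):
            break
    return " ".join(path)

source = "the cat sat on the mat and the dog sat by the door".split()
print(walk(source))
```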

Writing E-Team

Here is the code.
Here is the 50,000-word output.

Initial intent

Hi there,
Using GPT-2, I'm interested in exploring tricks to bring narrative coherence across longer texts, possibly recycling some ideas from https://www.scritturacollettiva.org/metodo.html. Looking forward to emulation from other participants.
KR Francois

Execution

I started out with the naive idea that GPT-2 could understand the structure of a text if trained accordingly. So the training material includes text snippets (pompously called chapters) with a prompt containing:
• a summary of the snippet created with Bert Summarizer and tagged with <|summary|>,
• the third paragraph of the snippet tagged <|line-03|>,
• the final paragraph of the snippet tagged <|line-last|>,
• the first paragraph of the snippet tagged <|chapter-begin|>.
And indeed, even though GPT-2 writes sequence by sequence, it did recognize a pattern and often picked up the first sentence and the third one. The final one mostly got lost. But the picking of the third sentence seemed to be mostly statistics-based: it does not stick to the rest of the text.
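A sketch of how each tagged training example might be assembled; summarize() here is a crude stand-in for Bert Summarizer:

```python
def summarize(paragraphs):
    return paragraphs[0]  # placeholder for an extractive summary

def make_example(chapter):
    paragraphs = [p for p in chapter.split("\n\n") if p.strip()]
    return "\n".join([
        "<|summary|> " + summarize(paragraphs),
        "<|line-03|> " + (paragraphs[2] if len(paragraphs) > 2 else ""),
        "<|line-last|> " + paragraphs[-1],
        "<|chapter-begin|> " + paragraphs[0],
    ])

chapter = "K. woke early.\n\nThe bank was quiet.\n\nHe thought of the trial.\n\nSleep came late."
print(make_example(chapter))
```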

B1naryGh0sts

It was only a month ago that I trawled through all the sad mecha tabletop RPG entries for a sad mech game jam on itch, so I'm gonna see what I can do with parsing and text injection to make a story that at least starts out fairly straightforward, but... eventually have some digital ghosts or echoes or memories start making noise in the text, and see what it comes up with.

Here's hoping I can even find some time to spend on this ;0;

---Update---
I think I missed the actual official time, but I made it just in time, with like an hour to spare for my time zone, so nyeh. Of course, as I type this it's still compiling the final novel, but I know where it will be once finished.

I used Markovify to build new sentences from the sentences that matched a ghost's emotions. And each ghost had a word bank that would be depleted as it overwrote sentences it didn't like. Then I had multiple ghosts all reading and writing at once at different rates. So they were learning and rewriting from each other too.
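A toy version of one ghost, assuming the markovify package; the corpus and mood list are tiny placeholders for the real source novel:

```python
import re
import markovify  # pip install markovify

corpus = ("The hallway was silent. I missed her voice. The static sang. "
          "I missed the old machines. The lights died slowly.")
mood_words = {"missed", "silent", "died"}

# Keep only sentences matching this ghost's emotions, then model them.
sentences = re.split(r"(?<=\.)\s+", corpus)
ghost_lines = [s for s in sentences if mood_words & set(s.lower().strip(".").split())]
model = markovify.Text(" ".join(ghost_lines))
print(model.make_sentence(tries=100) or ghost_lines[0])  # may be None on tiny corpora
```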

My code is available here
And the finished, rewritten text is here (The source story was a NaNoWriMo story I wrote in 2010 that was terrible)

Resources

This is an open issue where you can comment and add resources that might come in handy for NaNoGenMo.

There are already a ton of resources on the old resources threads of previous editions:

Art is an art

It's been said that the definition of insanity is doing the same thing over and over and expecting different results.

WELL HERE I AM AT NANOGENMO AGAIN

MANIAC Suite

Introduction

This is both an intent to participate and the beginning of a more comprehensive program log.

Description

This November, I plan to write a project using Intel 8031/8051 assembler to produce a work told via punched IBM 5081 cards (or another standard, yet to be decided). The goal is to one day produce one physical copy of the work as a computational artist book. The project may involve Python for the convenient parts, but I am going to try to stay as much as I can within my technical constraint.

This work takes some inspiration from the titular computational machine developed at Los Alamos National Laboratory, and intends to be a kind of notional auto-nonfiction. But I digress: that sounds far too literary a pretension to talk about at this point. As we all know from NGM projects, things can go quite differently.

More later. But for now, this is enough.
