Using a linear regression model and publicly available data, my code predicts the final letter in five-letter words. The accuracy is ~20%, which is significantly higher than a random guess (~3.8%).
With modern transformer models (GPT-4, Claude, etc.), AI labs have found clever solutions for accurately predicting the next word in a sequence. In this repository, I test a similar idea with a regression model: could a regression model also predict the next character? To test this, I provided the model with a small amount of data, trained it, and recorded the results.
There are two files that do significant work to create the model in this repository:
- words_to_tensorflow_readable.c: a file written in C that takes the csv file in the oldCSVfiles folder containing common words and their part of speech (wordFrequency.csv) and creates a new csv in a format that TensorFlow can easily consume (ASCII codes instead of letters, with a separate column for each letter). By modifying the global variables for the start line and end line at the top of the C file, I was able to create training.csv and eval.csv (one file to train the model and the other to evaluate it). The output containing every line of the file is included as output.csv in the oldCSVfiles folder.
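For illustration, the same letters-to-ASCII transformation can be sketched in Python (the repo does this in C, and the exact column layout of the real program is an assumption here):

```python
import csv

def word_to_ascii_row(word):
    """Turn a five-letter word into a list of five ASCII codes, one per column."""
    assert len(word) == 5
    return [ord(c) for c in word]

def convert(in_path, out_path):
    """Read a csv whose first column holds words; write ASCII-code rows."""
    with open(in_path, newline="") as f_in, open(out_path, "w", newline="") as f_out:
        reader = csv.reader(f_in)
        writer = csv.writer(f_out)
        for row in reader:
            word = row[0].strip().lower()
            if len(word) == 5 and word.isalpha():
                writer.writerow(word_to_ascii_row(word))
```

For example, `word_to_ascii_row("flame")` yields `[102, 108, 97, 109, 101]`.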
- letterpredictor.ipynb: the notebook that contains the code that trains and evaluates the linear-regression model. Linear regression is a common machine-learning algorithm that predicts outcomes from a labeled dataset like the one in this example.
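As a rough illustration of the idea (not the notebook's actual TensorFlow code), a linear model for this task can be sketched in plain NumPy: regress one-hot encodings of the final letter on the ASCII codes of the first four letters, then pick the highest-scoring letter. The word list below is made up for the sketch:

```python
import numpy as np

# Hypothetical toy training set; the real data comes from training.csv.
words = ["flame", "blame", "frame", "shame", "plant", "slant", "grant", "spent"]

X = np.array([[ord(c) for c in w[:4]] for w in words], dtype=float)
labels = np.array([ord(w[4]) - ord("a") for w in words])
Y = np.eye(26)[labels]                      # one-hot targets, one column per letter

X1 = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)  # least-squares fit of the linear model

def predict_final_letter(prefix):
    """Score all 26 letters for a four-letter prefix and return the best one."""
    x = np.array([ord(c) for c in prefix] + [1.0])
    return chr(ord("a") + int(np.argmax(x @ W)))
```

With a real dataset the notebook's approach is the same in spirit: the model produces a score per candidate letter, and the highest-scoring letter is the prediction.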
Overall, the model performed at about 20% accuracy (though more consistently a bit lower, around ~18%), which is fairly strong considering that a model choosing letters uniformly at random would score only about 3.8%. Given the size of the dataset and the difficulty of the problem, this is solid performance.
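The random baseline follows directly from there being 26 possible letters:

```python
baseline = 1 / 26                 # a uniform random guess over 26 letters
print(f"{baseline:.1%}")          # prints "3.8%"

model_accuracy = 0.20
print(f"{model_accuracy / baseline:.1f}x better than chance")  # prints "5.2x better than chance"
```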
Take the word "flame" for example. The model correctly predicted that the final letter after the string "flam" was the character "e" (ASCII code 101), assigning it roughly a 45% probability, as shown below:
The model also seems good at assigning near-zero probability to characters that are extremely unlikely to end a five-letter word. For the string "flam", the model gives a close-to-zero probability that the final character is an "x" (ASCII code 120).
To be fair, its performance is far from perfect. For the same string "flam", the model gave a 31% likelihood that the final character was a "t" (ASCII code 116), which doesn't seem right at all from a human perspective.
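The per-letter percentages quoted above come from turning the model's raw scores into a probability distribution. The notebook's model does this internally, but the mechanics can be sketched with a softmax (the scores below are made-up numbers, not the model's real output):

```python
import numpy as np

def softmax(scores):
    """Convert raw per-letter scores into probabilities that sum to 1."""
    shifted = scores - np.max(scores)   # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

# Hypothetical scores for three candidate final letters of "flam": "e", "t", "x".
scores = np.array([2.0, 1.6, -4.0])
probs = softmax(scores)                 # highest score -> highest probability
```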
- The list of words and part-of-speech data used to train the model was borrowed from www.wordfrequency.info
- The linear regression section of the course https://www.youtube.com/watch?v=tPYj3fFJGjk was heavily referenced to build this model. The course, at the time it was made, was mostly an in-depth walkthrough of the TensorFlow documentation.