I've been programming for a few years now and I love it. I started in 2019 with Python, then switched to p5.js and JavaScript because it made sharing my work easier.
I then became interested in chaos theory after reading a book about it, and I made several programs plotting strange attractors and the Mandelbrot set.
My programming accounts:
I've been a big fan of programming for a while now, and these are some of my projects.
Because many of my p5.js projects' dependencies have introduced breaking changes in the years since I worked on them, many of these sketches will only run with p5.js 1.x.x. You can change this in the upper right corner of the editor by pressing the gear icon and selecting an older version of the p5.js library. I have spent many years working here, and some of my favorite work is here.
Word2vec implementation (from 8th grade; updated recently, on Dec 8, 2025, to fix dependency versions)
This program finds vectors that capture the meanings of the words it's given. It takes all the words and symbols from the corpus, splits them up, records the relative positions between the words, and then strongly compresses that information by training a neural network on it. The lower the blue line during training gets, the more compressed the data is. Once the neural network is trained on this information, different parts of the network store relevant information for different words. The incredible part of this is that this word-specific information in the neural network actually describes the coordinates of a meaningful point in an abstract space - not a normal 2D or 3D space as we are used to, but a much more complex 100D space.
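To make the data-preparation step concrete, here's a simplified sketch of the idea (not the exact code in my project; the names here are made up for illustration). The corpus is split into tokens, and each word is paired with the words that appear within a small window around it:

```javascript
// Illustrative sketch: turn a corpus into (center word, context word)
// pairs. The network is then trained to relate centers to contexts.
function buildContextPairs(corpus, windowSize = 2) {
  const tokens = corpus.toLowerCase().split(/\s+/);
  const pairs = [];
  for (let i = 0; i < tokens.length; i++) {
    const lo = Math.max(0, i - windowSize);
    const hi = Math.min(tokens.length - 1, i + windowSize);
    for (let j = lo; j <= hi; j++) {
      if (j !== i) pairs.push([tokens[i], tokens[j]]); // (center, context)
    }
  }
  return pairs;
}
```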
It is basically impossible to visualize anything in a 100 dimensional space, but many of the same rules and mathematics we are used to in lower dimensional spaces still apply in 100D. For example, we can still find the distance between two points the same way we would in 2D space, by applying the Pythagorean theorem. These 100D points are interesting because words whose points are close to each other have similar meanings.
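As a quick sketch, the distance function looks exactly the same no matter how many dimensions the points have:

```javascript
// The Pythagorean theorem generalizes directly: sum the squared
// differences along every axis, then take the square root.
function euclideanDistance(a, b) {
  let sumSq = 0;
  for (let i = 0; i < a.length; i++) {
    sumSq += (a[i] - b[i]) ** 2;
  }
  return Math.sqrt(sumSq); // works the same for 2, 3, or 100 dimensions
}
```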
And it gets even better: when this method is applied to much larger collections of text, you can actually do literal arithmetic on the points the words correspond to. Google's paper describing the word2vec system gives the example of vector("King") - vector("Man") + vector("Woman"), which lands closest to vector("Queen").
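As a rough illustration (assuming a hypothetical lookup table `vecs` that maps words to their arrays of numbers - this isn't code from my project), the arithmetic is just done component by component, and the answer to the analogy is whichever known word's vector ends up nearest the result:

```javascript
// Component-wise arithmetic on word vectors.
function addVectors(a, b) { return a.map((v, i) => v + b[i]); }
function subVectors(a, b) { return a.map((v, i) => v - b[i]); }

// e.g. const target =
//   addVectors(subVectors(vecs["king"], vecs["man"]), vecs["woman"]);
// then search vecs for the word whose vector is closest to target.
```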
By now it would probably be a good idea to say that 'vector' and 'point' mean mostly the same thing in this context. Sometimes in the field of machine learning it's helpful to measure the distance between two points in terms of the angle between them as seen from the origin, and other times it's helpful to rotate the points by an angle around the origin, so the word 'vector' - a quantity with both direction and magnitude - is often used.
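That angle-based comparison is usually computed as cosine similarity. A minimal sketch:

```javascript
// Cosine similarity compares two vectors by the angle between them
// (1 = same direction, 0 = perpendicular, -1 = opposite),
// ignoring how long the vectors are.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] ** 2;
    magB += b[i] ** 2;
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}
```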
I find this system really interesting because it is one of the shortest ways I could find to automatically quantify the meanings of words. I first learned how this process works in 8th grade, and now (11th grade as I write this) I understand much more about it. The idea of turning words into vectors is a very powerful one, and is actually the first step in the pipeline LLMs like ChatGPT use to try to understand what people ask them.
Inside the code, there is a variable named 'corpus'; I would recommend changing its contents and rerunning the program.
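For example, something like this (the exact declaration in my sketch may look a little different; this is just illustrative):

```javascript
// Swap in your own text before rerunning the sketch:
let corpus = "paste a paragraph or two of your own text here";
```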