This AI Does Not Exist 🤖
This AI Does Not Exist generates realistic descriptions of made-up machine learning models. Modelexicon is what it was called before I bought the domain.

This AI Does Not Exist is built with Oak and Torus. EleutherAI's GPT-J-6B is used for text and code generation at time of writing, though this may change as state-of-the-art models improve.
How it works
At the core of This AI Does Not Exist are two text generation pipelines:
- One takes a "model name" and generates a brief description in academic-sounding prose
- Another takes that model description and writes a Python code snippet that demonstrates how to "use" the described model
These are both generated using a language model called GPT-J-6B, which sits somewhere between the well-known GPT-2 and GPT-3 models in terms of performance.
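In Python, the two pipelines might be sketched roughly like this. (The real site is written in Oak; the endpoint URL, prompt templates, and response shape below are my assumptions, not the project's actual code.)

```python
# A sketch of the two-stage pipeline. The Huggingface Inference endpoint,
# prompts, and payload shape here are assumptions for illustration.
import json
import urllib.request

HF_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B"  # assumed
HF_TOKEN = "hf_..."  # your Huggingface API key

def generate(prompt, max_new_tokens=128):
    """Request a completion from the Huggingface Inference API."""
    req = urllib.request.Request(
        HF_URL,
        data=json.dumps({
            "inputs": prompt,
            "parameters": {"max_new_tokens": max_new_tokens},
        }).encode(),
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        # The API echoes the prompt back; strip it to keep only the new text.
        return json.load(resp)[0]["generated_text"][len(prompt):]

def description_prompt(model_name):
    # Stage 1: a model name seeds an academic-sounding description.
    return f"{model_name} is a machine learning model that"

def snippet_prompt(model_name, description):
    # Stage 2: the generated description seeds a Python "usage" snippet.
    return f'"""{model_name}: {description}"""\nimport'
```

The second stage feeds on the first stage's output, which is what lets the generated code snippet stay (loosely) consistent with the generated description.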
When you simply open thisaidoesnotexist.com, the model names you'll see are hand-curated and pre-generated by me. There are a few reasons I chose to pre-generate a set of model names:
- Most importantly, this saves compute costs. Most users are statistically going to click through the first few sample/pre-generated models, and try one or two of their own model ideas. Some visitors may even bounce after having seen only pre-generated examples. I expect pre-generated model data to save me 2-4x on API bills, which — my goodness — language models are expensive to run!
- Manually curating the first few samples ensures that visitors' first encounter with this project meets at least a baseline of quality and fun.
The script in `scripts/pregenerate_models.oak` pre-generates this dataset into `models.json`, which the server round-robins through at runtime on each request. Any user-entered model names are obviously routed to the right APIs for text generation.
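The round-robin over the pre-generated dataset could be sketched like so (a hypothetical Python sketch; the real server is written in Oak, and the shape of `models.json` is assumed):

```python
import itertools
import json

def load_models(path="models.json"):
    # Assumed shape: a JSON array of pre-generated model entries.
    with open(path) as f:
        return json.load(f)

def make_round_robin(models):
    """Return a function that yields the next pre-generated model per request."""
    cycle = itertools.cycle(models)
    return lambda: next(cycle)
```

The idea is that the dataset is loaded once at startup, and each request that doesn't carry a custom model name is answered with the next pre-generated entry in the cycle.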
Development
Running Modelexicon requires a `config.oak` configuration file, which currently includes API access information for text generation. There are two supported text generation backends:
- Huggingface Inference, which requires `HuggingfaceURL` set to the right model and `HuggingfaceToken` set to your API key.
- My personal private language model API, which you probably can't use because you are not me. This requires setting `CalamityURL` to the API endpoint and `CalamityToken` to the app-specific API token I generate for my projects.
With these defined in `config.oak`, `oak src/main.oak` should start up the app.
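For illustration, a `config.oak` for the Huggingface backend might look something like this. The key names come from the list above, but the exact Oak syntax and file shape are my guess, not verified against the project:

```oak
// hypothetical config.oak — shape assumed, key names from the README
{
	HuggingfaceURL: 'https://api-inference.huggingface.co/models/...'
	HuggingfaceToken: '<your API token>'
}
```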
Like many of my projects, Modelexicon is built and managed with Oak. There's a short Makefile that wraps common `oak` commands:
- `make` runs the web server, and is equivalent to `oak src/main.oak` mentioned above
- `make fmt` or `make f` auto-formats any tracked changes in the repository
- `make build` or `make b` builds the client JavaScript bundle from `src/app.js.oak`
- `make watch` or `make w` watches for file changes and runs `make build` on any change
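Reconstructed as a Makefile, those targets might look like this. The `oak` CLI flags and output path are my assumptions based on similar Oak projects, not copied from this repository:

```makefile
all: run

run:
	oak src/main.oak

f: fmt
fmt:
	# format only files with tracked changes (flags assumed)
	oak fmt --changes --fix

b: build
build:
	# compile the client bundle from src/app.js.oak (flags and path assumed)
	oak build --entry src/app.js.oak --output static/js/bundle.js --web

w: watch
watch:
	# re-run the build on any source change (file-watcher tool assumed)
	ls src/*.oak | entr -cr make build
```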