AIWorld6
A simulation for causal studies of lifeforms
Install / Use
/learn @DickingAround/AIWorld6README
TODO:
- Increase the speed ** It's the new agent creation:
- save the last place someone died as the next one to use. Or perhaps loop through forever.
- Just have a cache
- Make a way to prove that they're using the signaling system
- Need to make a terrain and signal loader
- Need to make a brain editor/renderer
- Test! - If we make energy-take a constant vs. it being a function of size, do we still get speciation??
- Get the tests up and running again (preferably without killing the existing simulation)
- Build a testing suite to actually A/B test things (but how to measure the outcome??)
- How will I detect if they're actually using the communication channels?? How can I measure what parts of their brain they're using at all? ** Let's say I know the decision. I know the connections and values that went into it. (I don't know what went into choosing 'not it' though) ** I could just give a map of what connections existed
- I could make a brain-builder. An app that lets you design a brain and then it adds it to the system to see how it fares
- I could make a multi-machine version in which the machines share the agents you have in your world.
- I could make a file-combiner, which mixes two different worlds together. Take file A and file B and half the world is one and half the world is the other.
- There are a lot of extra parts in the brain.c replications and connections that don't need to be there
- Make eating a constant and see if we still get predation
- We should put positive pressure on the connections but then cost them? How do we make it latent in their genes??
- Add energy-give
- Add terrain view to the UI
- Add terrain deformation
- Create the first test, running many iterations and then an automatic analysis of it. What metrics were statistically significantly different? ** The first test is: are they using their signaling or not? We turn off signaling. What then?? Or perhaps even, how long does it take for them to react without it?? No, the real first test is an A/A test!... but then signaling.
- Load needs to also load the terrain
GETTING STARTED
- You will need to run 'sudo apt-get install python-pygame', which is used for the UI and rendering
- You will need to run 'sudo apt-get install python-numpy python-scipy', which is used for the species differentiation and clustering in the Python UI
- You may want to run 'sudo apt-get install libav-tools', which provides avconv, used for turning all the images into a movie
- You may want openshot for joining movies and audio
- You may want ImageMagick, which is used for turning all the images into a gif
- You may want audacity to record audio over videos you make
- Run the command 'bash make' to build and run the program.
RUNNING EXPERIMENTS
Experiments should have code in three places:
- In the config.h as a #define statement telling the compiler to include/exclude code
- In the main.c file printing out the name of the experiment so it's always clear what code is running
- In the simulation itself wherever needed. You can always locate all impacts of the experiment easily by searching for the name in the initial define statement.
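The three-place pattern above might look like the following sketch. The macro and function names here are illustrative, not the actual identifiers in config.h or main.c:

```c
#include <stdio.h>

/* config.h -- toggle the experiment at compile time (illustrative name) */
#define EXPERIMENT_NO_COMMUNICATION 1

/* main.c -- always print which experiment build is running */
void print_experiment_banner(void) {
#if EXPERIMENT_NO_COMMUNICATION
    printf("Experiment: EXPERIMENT_NO_COMMUNICATION\n");
#else
    printf("Experiment: none (baseline)\n");
#endif
}

/* simulation code -- exclude the affected behavior wherever needed;
 * searching for EXPERIMENT_NO_COMMUNICATION finds every impact site */
int agent_read_signal(int raw_signal) {
#if EXPERIMENT_NO_COMMUNICATION
    (void)raw_signal;
    return 0;            /* channel silenced for this experiment */
#else
    return raw_signal;
#endif
}
```

Because the flag is a compile-time define, the excluded branch costs nothing at runtime, and grepping for the macro name locates every place the experiment touches.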
Experiments that have been coded:
- A vs. A test
- No communication
- Aging - NOT IMPLEMENTED
- More communication - NOT IMPLEMENTED
- Only sexual or only asexual reproduction - NOT IMPLEMENTED (should be just another call to asex)
Running remotely: 'screen -dmSL main_aiworld bash make'
IMPLEMENTATION
PROGRAM STRUCTURE
There is a multi-threaded C program that runs the main simulation. It outputs files to ./outputs/, which are then consumed by a Python UI program. The make script noted above compiles the C program, launches the Python UI, runs the unit tests on the C program, and then launches the C program to actually run the simulation.
USE OF MEMORY
There are no 'malloc' memory allocations in this program. All memory used by the simulation is allocated on startup and then maintained within the program. This was done for speed, given that many agents are created and destroyed very rapidly. It also provides a degree of memory stability: if we allocated more memory as life grew, you might only find out that you let the life grow too much when it crashes the program many hours or days into a run.
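The no-malloc policy amounts to a fixed pool claimed and released by flag. The real pool lives inside world.c; this standalone sketch (with an invented cap and field names) shows the general pattern:

```c
#include <string.h>

#define MAX_AGENTS 1024   /* illustrative cap; the real limit is in config.h */

struct agent {
    int alive;
    double energy;
};

/* All agent memory is one fixed global array, sized at compile time. */
struct agent agent_pool[MAX_AGENTS];

/* "Allocating" an agent just claims a dead slot; no malloc ever runs. */
struct agent *agent_pool_claim(void) {
    for (int i = 0; i < MAX_AGENTS; i++) {
        if (!agent_pool[i].alive) {
            memset(&agent_pool[i], 0, sizeof agent_pool[i]);
            agent_pool[i].alive = 1;
            return &agent_pool[i];
        }
    }
    return 0;  /* world is full: life cannot outgrow the preallocated block */
}

/* Releasing is just clearing the flag; the slot is reused later. */
void agent_pool_release(struct agent *a) {
    a->alive = 0;
}
```

The failure mode is explicit: when every slot is claimed, `agent_pool_claim` returns null instead of growing, so memory use is bounded for the whole run.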
USE OF CLASSES
The program has a rough class structure where structs are used as object classes. They're always saved in a .h file. All functions that operate on a class start with the name of the class. For example, there are the files 'world.h'/'world.c', and world's functions start with 'world_' and take a pointer to a world struct as their first argument.
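The convention can be sketched as follows. The fields and functions here are invented for illustration, not the actual contents of world.h:

```c
/* world.h-style sketch: the struct plays the role of the class. */
struct world {
    int width;
    int height;
    long tick;
};

/* Every "method" is prefixed with the class name and takes a
 * pointer to the struct as its first argument. */
void world_init(struct world *w, int width, int height) {
    w->width = width;
    w->height = height;
    w->tick = 0;
}

void world_step(struct world *w) {
    w->tick++;   /* advance one time-step */
}
```

This is the standard C substitute for methods: the explicit first pointer argument is what an object-oriented language would pass implicitly as `this`.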
FILE DESCRIPTIONS
main_ui.py - The python UI
main.c - The main program, just input parsing and calling simulationManager
config.h - This defines many constants used within the program. Some can be changed without having to change anything else such as 'NUMBER_OF_THREADS' or 'AG_MUTATION_RATE'. Others are more tightly coupled such as the mapping of outputs from the brain of an agent.
simulationManager.h/.c - There is a single global instance of this class so that everything can access it without having to be given a pointer to it. The simulation manager is in charge of running the time iterations of the simulation and knowing when to run statistics. Each time-step of the world is broken into a decision phase and an action phase. The simulation manager launches and signals several threads that do the decision making, and then its own thread handles the actions. The actions can't be easily multi-threaded because there is so much cross-involvement that it wasn't worth doing in V1.
simulationManager_thread.c - This is not a class, but just a file with some complex mutex locks that the simulationManager uses to wake up and communicate with the threads. It's implemented using several locks, which is complex, but it allows the simulationManager and the threads to communicate without either of them ever having to use a sleep command and thus lose any time in the back-and-forth.
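The file itself uses several raw mutexes; the same no-busy-wait handoff can be sketched with POSIX condition variables. This is an illustrative sketch of the idea, not the actual implementation, and all names are invented:

```c
#include <pthread.h>

/* Sketch of sleep-free handoff between the manager and one worker.
 * pthread_cond_wait blocks the caller until signaled, so neither side
 * ever spins or calls sleep while waiting for the other. */
struct thread_control {
    pthread_mutex_t lock;
    pthread_cond_t  work_ready;
    pthread_cond_t  work_done;
    int have_work;
    int done;
};

void manager_post_work(struct thread_control *tc) {
    pthread_mutex_lock(&tc->lock);
    tc->have_work = 1;
    tc->done = 0;
    pthread_cond_signal(&tc->work_ready);   /* wake the worker immediately */
    pthread_mutex_unlock(&tc->lock);
}

void worker_wait_for_work(struct thread_control *tc) {
    pthread_mutex_lock(&tc->lock);
    while (!tc->have_work)                  /* blocks; never spins or sleeps */
        pthread_cond_wait(&tc->work_ready, &tc->lock);
    tc->have_work = 0;                      /* claim the work */
    pthread_mutex_unlock(&tc->lock);
}

void worker_report_done(struct thread_control *tc) {
    pthread_mutex_lock(&tc->lock);
    tc->done = 1;
    pthread_cond_signal(&tc->work_done);    /* wake the waiting manager */
    pthread_mutex_unlock(&tc->lock);
}
```

The `while` loop around `pthread_cond_wait` is required because condition-variable waits can wake spuriously; the predicate, not the wake-up, is what is trusted.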
simulationManager_thread_control.h - This is the data the threads need to be passed initially.
world.h/.c - The world maintains a 2D array of all 'locations'. It also maintains the array of agents, which is a block of memory that it hands out when other parts of the program call it to 'allocate' a new agent. The simulation manager and simulation monitor use that array when they want to iterate through all the agents that exist.
location.h/.c - A location knows what the cost to pass over this location is, how much food is there, and what agent is at this location.
agent.h/.c - An agent has an energy level, is facing a direction, and has a brain. Its functions include the implementation of the actions it can take.
brain.h/.c - A brain is a 2-level sparse neural network. That means that instead of every possible connection being defined, it's a list with only some connections defined. This is done because the agents' brains are more likely to act like decision trees or fuzzy deciders than like complex pattern matchers. As such, it's more efficient computationally to model them this way. The brain also contains the complex replication logic.
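A sparse connection list like the one described can be sketched as below. The sizes and field names are invented for illustration; the real layout and the output mapping live in brain.h and config.h:

```c
#define BRAIN_INPUTS    8
#define BRAIN_OUTPUTS   4
#define MAX_CONNECTIONS 16   /* illustrative caps, not the real config.h values */

/* One sparse connection: which input feeds which output, and how strongly.
 * Only connections that exist are stored, instead of a full weight matrix. */
struct connection {
    int from;       /* input index  */
    int to;         /* output index */
    double weight;
};

struct brain {
    int n_connections;
    struct connection conn[MAX_CONNECTIONS];
};

/* Evaluate the 2-level network: each output sums only the inputs that
 * actually connect to it, so cost scales with the connection count,
 * not with BRAIN_INPUTS * BRAIN_OUTPUTS. */
void brain_think(const struct brain *b, const double *in, double *out) {
    for (int o = 0; o < BRAIN_OUTPUTS; o++)
        out[o] = 0.0;
    for (int i = 0; i < b->n_connections; i++)
        out[b->conn[i].to] += b->conn[i].weight * in[b->conn[i].from];
}
```

With a handful of connections per agent this is far cheaper than a dense matrix multiply, which matches the decision-tree-like behavior the brains are expected to have.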
intelligenceTests.c - This is a series of tests used to rate how intelligent the agents are. It's not yet complete or tested.
TESTING Each class has its own unit tests. In each there is a roll-up function named '<className>_test' that captures all of them and returns either 1 or 0, for example 'world_test()'. All the tests are called by main.c when you pass -t as the command-line input. They output their results to the console but won't block the continued execution of the program.
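The roll-up shape can be sketched as follows; the checks here are placeholders invented for illustration, not the project's real tests:

```c
/* Minimal sketch of the '<className>_test' convention.
 * Each small check returns 1 on pass, 0 on fail. */
int location_test_food(void) {
    int food = 5;              /* placeholder setup */
    return food >= 0;
}

int location_test_cost(void) {
    int cost = 1;              /* placeholder setup */
    return cost > 0;
}

/* The roll-up ANDs every check so main.c gets one pass/fail bit,
 * prints it, and then continues running the simulation regardless. */
int location_test(void) {
    int pass = 1;
    pass &= location_test_food();
    pass &= location_test_cost();
    return pass;
}
```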
DESIGN GOALS / NOTES
- use audacity for audio recording
- use TunesToTube for audio upload to YouTube
- use pinta to edit the still images
- use openshot to edit the videos
SIMULATION GOALS:
- Evolve sexual reproduction
- Evolve cooperative behavior / multi-cellular behavior
IMPLEMENTATION GOALS:
- Avoid the world stagnation problem
- Allocate all memory at the beginning
- Multi-threading in the C itself
IMPLEMENTATION: Species
- We are throwing away a lot of species data. We know there are commonalities in their brains. How might we discover what the brains are exactly? ** Connections are location-persistent. We could look at what locations are most common. For the most common connection, who has it? Who doesn't? Build a tree. Let's say the population is a[1,2,3], b[1,2,2], c[1,2,4], d[3,2,9], e[3,4,10] *** 1 conn: 2 - a,b,c,d e *** 0 conn: 1 - a,b,c d e *** 3 conn: x - a b b d e
- Why not something easier? Like just picking a number and either adding to or subtracting from it over time??
- I did a proof of concept and I bet this is going to work. The species do wander all over the damn place. So, now we're going to give every agent a number, and over time that number is going to wander. Initially agents get a set of numbers at random (across the spectrum of color). We let the numbers wander anywhere. When we display them, we hash them down. When we do species, we can look at the real numbers and run a clustering algorithm. (We may need to learn how to implement that algorithm.) We'll probably also need a number-wander modifier. Perhaps they shouldn't be an int but instead a float?? Nah. With an int, we can still compress the color spectrum as much as we want.
- It's even ok to have a variable color spectrum I think.
- Now we have a speciation hash we trust. How do we use the species algorithm to learn about them?? We have the algorithm in python, we could try finding a
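The wander-then-hash idea above can be sketched like this. The spectrum size, step size, and function names are all assumptions for illustration:

```c
/* Sketch of the wandering species number: each agent carries an int
 * "color" that drifts by a small step at each replication, and the
 * display hashes the unbounded value down to a visible spectrum. */
#define COLOR_SPECTRUM    256   /* illustrative display range */
#define COLOR_WANDER_STEP 1     /* the "number-wander modifier" */

int species_color_wander(int color, int coin_flip) {
    /* coin_flip is 0 or 1; the real code would draw from the RNG */
    return coin_flip ? color + COLOR_WANDER_STEP
                     : color - COLOR_WANDER_STEP;
}

/* Hash the unbounded wandering int down to a display color.  Species
 * clustering would run on the raw ints, not on this compressed hash. */
int species_color_display(int color) {
    int c = color % COLOR_SPECTRUM;
    return c < 0 ? c + COLOR_SPECTRUM : c;
}
```

Because the raw int is unbounded, two lineages that drift far apart stay distinguishable to the clustering step even when their hashed display colors happen to collide.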
