Elon Musk comments on quantum physics and computer science. People gather. Finally the audience I’ve been waiting for :D
I volunteered to share some facts about how the software running our Universe’s simulation works.
Warning: silliness ahead.
The most popular quantum mechanics quirks can be explained in terms of simulation software optimizations or resource constraints, convincingly enough for someone who has only read a popular science article. I wish I were a decent sci-fi author…
— Ζbγѕzеk (@naugtur) September 19, 2018
Here are all the explanations I gave in the AMA nobody requested:
Quantum entanglement is a glitch that occurs when the hashing function used for indexing particles has collisions.
— Ζbγѕzеk (@naugtur) September 19, 2018
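To make that glitch concrete, here's a minimal TypeScript sketch (every type, name and constant below is invented for illustration): a particle index with a deliberately weak hash, so two particles can land in the same slot and measuring one gives you the other.

```typescript
// Hypothetical particle index with a weak hash function that collides easily.
type Particle = { id: string; spin: number };

const BUCKETS = 16;

// Deliberately weak hash: sums character codes, so collisions are common.
function weakHash(id: string): number {
  let sum = 0;
  for (const ch of id) sum += ch.charCodeAt(0);
  return sum % BUCKETS;
}

const index = new Map<number, Particle>();

function writeParticle(p: Particle): void {
  // A collision silently overwrites the slot instead of chaining entries.
  index.set(weakHash(p.id), p);
}

function readParticle(id: string): Particle | undefined {
  return index.get(weakHash(id));
}

writeParticle({ id: "ab", spin: +1 }); // "ab" and "ba" hash to the same bucket
writeParticle({ id: "ba", spin: -1 });

// Measuring either particle now returns the same record: spooky action via hash collision.
console.log(readParticle("ab"), readParticle("ba"));
```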
The evolutionary algorithm running in our simulation has the goal of not just training an artificial intelligence but of discovering, in an unattended process, architectures for trainable AI capable of interfacing with the network adapter on the host machine.
— Ζbγѕzеk (@naugtur) September 19, 2018
The simulation resolves circular references by asynchronously delaying each step with exponential backoff to ensure a good approximation of the expected results with finite resources.
— Ζbγѕzеk (@naugtur) September 19, 2018
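Roughly how I picture that, sketched in TypeScript with invented names and a toy circular definition that gets iterated instead of resolved:

```typescript
// Retry a circular computation with exponential backoff and accept the
// approximation once the attempt budget runs out.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function approximate(
  step: () => number, // one iteration of the circular computation
  maxAttempts = 6,
  baseDelayMs = 10
): Promise<number> {
  let estimate = step();
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    // Exponential backoff: each pass waits twice as long as the previous one.
    await sleep(baseDelayMs * 2 ** attempt);
    estimate = step();
  }
  return estimate; // good enough, given finite resources
}

// Toy circular definition: x depends on itself, x = 1 + x / 2, iterated.
let x = 0;
approximate(() => (x = 1 + x / 2)).then((v) => console.log(v)); // converges toward 2
```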
The speed-of-light limitation and the expanding universe are there to ensure that the increasing complexity of simulating this area of space is balanced by the diminishing number of objects within what we call the observable Universe. It's an idea superior to fog and map edges in video games.
— Ζbγѕzеk (@naugtur) September 19, 2018
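If you squint, it's render-distance culling where the edge itself keeps receding. A toy sketch, with every name and distance made up:

```typescript
// Only objects within the observable radius make it into this frame's workload.
interface SpaceObject {
  id: string;
  distance: number; // light-years
}

let observableRadius = 4.6e10; // grows every tick, quietly shedding work

function objectsToSimulate(all: SpaceObject[]): SpaceObject[] {
  return all.filter((o) => o.distance <= observableRadius);
}

const catalogue: SpaceObject[] = [
  { id: "andromeda", distance: 2.5e6 },
  { id: "very-distant-quasar", distance: 9.9e10 },
];

console.log(objectsToSimulate(catalogue).map((o) => o.id)); // ["andromeda"]
observableRadius *= 1 + 1e-10; // the edge recedes before the next pass
```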
It's easier to scale the simulation workload horizontally across an astronomical number of CPU cores if each observer can be processed separately, even at the cost of some overlap, and smart eventual consistency algorithms can converge minor differences unnoticed.
— Ζbγѕzеk (@naugtur) September 19, 2018
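Something like the sketch below, assuming (purely for illustration) that the "smart eventual consistency algorithm" is a plain last-write-wins merge:

```typescript
// Each observer simulates its own copy of an overlapping region; a converge
// pass merges the copies with a last-write-wins rule.
type Cell = { value: number; updatedAt: number };
type Region = Map<string, Cell>;

function simulateStep(region: Region, observerBias: number): void {
  for (const [key, cell] of region) {
    region.set(key, { value: cell.value + observerBias, updatedAt: Date.now() });
  }
}

// Last-write-wins merge: the most recently updated cell survives.
function converge(a: Region, b: Region): Region {
  const merged: Region = new Map(a);
  for (const [key, cell] of b) {
    const existing = merged.get(key);
    if (!existing || cell.updatedAt >= existing.updatedAt) merged.set(key, cell);
  }
  return merged;
}

const shared: Region = new Map([["photon-42", { value: 0, updatedAt: 0 }]]);
const observerA: Region = new Map(shared);
const observerB: Region = new Map(shared);

simulateStep(observerA, 0.001); // tiny disagreement between observers
simulateStep(observerB, -0.001);

console.log(converge(observerA, observerB).get("photon-42")); // one version quietly wins
```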
In the physics of the universe hosting the simulation it's way cheaper to use CPU to regenerate the same data multiple times than to store it in memory, because a different value of the Planck constant allows for high frequencies but large minimum quanta of energy; storing 1 bit takes a lot of power.
— Ζbγѕzеk (@naugtur) September 19, 2018
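In other words: no cache. A minimal sketch with invented names, in which the memoized variant never made it past code review:

```typescript
// Deterministic field function that the simulation recomputes on every
// observation instead of caching the result.
function fieldValueAt(x: number, y: number, t: number): number {
  // Pure and deterministic, so recomputing always gives the same answer.
  return Math.sin(x * 0.1) * Math.cos(y * 0.1) * Math.exp(-t * 0.01);
}

// The memoized version was rejected for wasting precious host-universe memory:
// const cache = new Map<string, number>();

function observe(x: number, y: number, t: number): number {
  return fieldValueAt(x, y, t); // pay CPU again, store nothing
}

console.log(observe(3, 7, 100) === observe(3, 7, 100)); // true, and zero bits cached
```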
The waves interfering in this experiment are artifacts of a generic class for handling dimensions. An off-by-one programming error caused it to use one unnecessary dimension in calculations. The error was discovered too late and stayed for backwards compatibility across simulations.
— Ζbγѕzеk (@naugtur) September 19, 2018
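An off-by-one of that flavor could look like this hypothetical snippet (the Dimensions class and its sizes are pure invention):

```typescript
// Generic helper for dimension math, shipped with a loop bound bug.
class Dimensions {
  constructor(private readonly sizes: number[]) {}

  // The off-by-one: `<=` instead of `<` drags one extra dimension into every
  // calculation. Harmless when the extra size is missing, not so much otherwise.
  volume(count: number): number {
    let v = 1;
    for (let i = 0; i <= count; i++) {
      v *= this.sizes[i] ?? 1;
    }
    return v;
  }
}

const space = new Dimensions([10, 10, 10, 2]); // a fourth size nobody asked about
console.log(space.volume(3)); // caller expects 1000, gets 2000
```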
If we continue growing our space observation reach, the speed of the universe's expansion will have to increase to make up for the influx of work by removing objects from sight more quickly, pushing them beyond the edge of the observable universe.
— Ζbγѕzеk (@naugtur) September 19, 2018
The idea behind entropy is to reduce the concentration of unnecessary structures that haven't been used for a longer period of time. It allows freeing up simulation state memory faster, and as I said earlier, memory is expensive in the host universe.
— Ζbγѕzеk (@naugtur) September 19, 2018
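So entropy is just an idle-eviction sweep. A toy sketch, with all names and timeouts invented:

```typescript
// Free structures that haven't been touched for a while to reclaim state memory.
type Structure = { atoms: number; lastAccessed: number };

const universeState = new Map<string, Structure>();
const MAX_IDLE_MS = 60_000; // anything untouched this long starts to decay

function touch(id: string): void {
  const s = universeState.get(id);
  if (s) s.lastAccessed = Date.now();
}

function entropySweep(now = Date.now()): void {
  for (const [id, s] of universeState) {
    if (now - s.lastAccessed > MAX_IDLE_MS) {
      universeState.delete(id); // the sandcastle quietly falls apart
    }
  }
}

universeState.set("sandcastle", { atoms: 1e9, lastAccessed: Date.now() - 120_000 });
universeState.set("mountain", { atoms: 1e20, lastAccessed: 0 });
touch("mountain"); // recently observed, so it survives the sweep
entropySweep();
console.log([...universeState.keys()]); // ["mountain"]
```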
Superposition of states is an optimization: it's cheaper to store an equation defining possible particle positions than to keep updating the position constantly and propagating it through the network graph of everything affected by it. The graph executes only when attached to an observer.
— Ζbγѕzеk (@naugtur) September 19, 2018
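Which is plain lazy evaluation. A small sketch with made-up names: the position is stored as a thunk, and the "collapse" is simply its first call:

```typescript
// Position stored as a lazy equation, evaluated only when an observer attaches.
type PositionEquation = (t: number) => number;

class LazyParticle {
  private collapsed: number | null = null;

  constructor(private readonly equation: PositionEquation) {}

  // Nothing is computed until someone actually looks.
  observe(t: number): number {
    if (this.collapsed === null) {
      this.collapsed = this.equation(t); // the collapse happens here, once
    }
    return this.collapsed;
  }
}

// Cheap to store: just a closure, with no per-tick position updates propagated
// through the graph of everything it could affect.
const electron = new LazyParticle((t) => Math.cos(t) * 1e-10);

console.log(electron.observe(42)); // first observation collapses the state
console.log(electron.observe(43)); // later observations reuse the result
```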
If it fulfills the goal of the simulation before our neural network design does, it wins as the better architecture to be used in future models.
— Ζbγѕzеk (@naugtur) September 19, 2018
Simulation inspection is evaluated lazily; missing parts are retroactively filled in. Host-universe observers monitor the processes running the simulation; they don't interact with simulated particles on a daily basis. Nobody can afford simulating everything unnecessarily.
— Ζbγѕzеk (@naugtur) September 19, 2018
Due to horizontal scaling across many cores, each observer's scope is computed independently and overlap between observers is converged with eventual consistency. See also my post on the off-by-one error in the generic dimensions class.
— Ζbγѕzеk (@naugtur) September 19, 2018
The goal of the evolutionary algorithm building a new neural network architecture is not affected by simulation limitations unless they skew the output of the fitness function. Results should generalize to a higher-definition universe.
— Ζbγѕzеk (@naugtur) September 19, 2018
When an edge in a proximity graph of particles connects to a consciousness, the graph cannot be frozen or garbage collected in the next pass.
— Ζbγѕzеk (@naugtur) September 19, 2018
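In garbage-collector terms, consciousness acts as a GC root. A toy mark-and-sweep pass over an invented proximity graph:

```typescript
// Mark-and-sweep where consciousness nodes are the roots: anything reachable
// from them stays simulated, the rest can be frozen or reclaimed.
type NodeId = string;
interface GraphNode {
  id: NodeId;
  edges: NodeId[];
  isConsciousness: boolean;
}

function sweep(graph: Map<NodeId, GraphNode>): Set<NodeId> {
  const alive = new Set<NodeId>();
  const stack = [...graph.values()].filter((n) => n.isConsciousness).map((n) => n.id);

  // Mark phase: walk every edge reachable from a consciousness.
  while (stack.length > 0) {
    const id = stack.pop()!;
    if (alive.has(id)) continue;
    alive.add(id);
    stack.push(...(graph.get(id)?.edges ?? []));
  }

  // Sweep phase: whatever wasn't marked gets collected in this pass.
  for (const id of graph.keys()) {
    if (!alive.has(id)) graph.delete(id);
  }
  return alive;
}

const graph = new Map<NodeId, GraphNode>([
  ["you", { id: "you", edges: ["coffee"], isConsciousness: true }],
  ["coffee", { id: "coffee", edges: [], isConsciousness: false }],
  ["rock-on-pluto", { id: "rock-on-pluto", edges: [], isConsciousness: false }],
]);

console.log(sweep(graph)); // Set { "you", "coffee" }, the distant rock gets frozen
```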
The evolutionary algorithm running in the simulation is designing a neural network architecture capable of being trained to interface with the API. When we use the API correctly, our network architecture design graduates to production use.
— Ζbγѕzеk (@naugtur) September 20, 2018
Consciousness interacts with particles, so observing oneself in that sense keeps particles actively simulated. Consciousness itself runs in a thread. If one stops observing any particles for long enough, the consciousness thread might be stopped and its resources assigned elsewhere.
— Ζbγѕzеk (@naugtur) September 20, 2018
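Sketched as a hypothetical idle-timeout scheduler (every name and number below is made up):

```typescript
// A consciousness "thread" gets suspended when it hasn't observed any
// particle for longer than the idle timeout.
const IDLE_TIMEOUT_MS = 8 * 60 * 60 * 1000; // the host admins call it sleep

class ConsciousnessThread {
  private lastObservation = Date.now();
  private suspended = false;

  observeParticle(): void {
    this.lastObservation = Date.now();
    this.suspended = false; // any observation wakes the thread back up
  }

  schedulerTick(now = Date.now()): void {
    if (now - this.lastObservation > IDLE_TIMEOUT_MS) {
      this.suspended = true; // resources reassigned elsewhere
    }
  }

  isSuspended(): boolean {
    return this.suspended;
  }
}

const me = new ConsciousnessThread();
me.observeParticle();
me.schedulerTick(Date.now() + IDLE_TIMEOUT_MS + 1);
console.log(me.isSuspended()); // true, see you next simulation tick
```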
There's a maximum number of specimens in the evolutionary algorithm, but the maximum limits all consciousness threads, not only humans.
— Ζbγѕzеk (@naugtur) September 20, 2018