Humans of Structured Taste
Bootstrapping the Biological Bootloader
This is, of course, a riff on Dario Amodei's "Machines of Loving Grace".
What are humans good at? We create *things*. Beautiful things. We can also take something abstract and fuzzy, and convert it to something concrete and discrete. A structure.
Humans can take an abstract input, add structure and concreteness, and create something truly new. In global aggregate, they can verify even the most complex of problems. Human history is little more than a stacked series of human-verifiable facts, formulas, and events.
I believe humans' job, post-AGI or not, is to encode this structure. I believe truth (structure) seeking is the last job.
The emergence of markets, most recently prediction markets, is a simplistic expression of this. Moving through time, you can probabilistically map truthiness/value from a disparate set of inputs. Through this lens, something is 20% likely to happen, then 60%, then 100%.
AI is a fuzzy machine; it cannot deterministically "solve" discrete problems. By nature of being fuzzy, it is fallible. My theory is that humans must act as the biological bootloader, taking fuzzy (approximate) logic, validating it, and converting it into discrete (exact) logic. Evals for Everything.
Humans are fallible individually, but they converge in aggregate.
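This convergence-in-aggregate claim has a classical formal counterpart: Condorcet's jury theorem, which says that if each voter is individually right more often than not, a majority of independent voters approaches certainty as the group grows. A minimal simulation (all names and parameters here are illustrative, not from the essay):

```python
import random

def majority_verdict(n_judges: int, p_correct: float, rng: random.Random) -> bool:
    """One round: n fallible, independent judges vote; True if the majority is right."""
    correct_votes = sum(rng.random() < p_correct for _ in range(n_judges))
    return correct_votes > n_judges / 2

def aggregate_accuracy(n_judges: int, p_correct: float = 0.7,
                       trials: int = 10_000, seed: int = 0) -> float:
    """Estimate how often a majority of fallible judges reaches the truth."""
    rng = random.Random(seed)
    hits = sum(majority_verdict(n_judges, p_correct, rng) for _ in range(trials))
    return hits / trials

# Each judge is only ~70% reliable; the aggregate is far more so.
print(aggregate_accuracy(1))    # roughly 0.70
print(aggregate_accuracy(101))  # very close to 1.0
```

The point carried over from the essay: individual fallibility is compatible with collective truth-seeking, provided the individual judgments are better than chance and reasonably independent.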
Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind" argues the mind used to have two parts: a part that says, and a part that does.
I posit that post-"singularity"/"AGI"/whatever, we are re-separating the bicameral mind: humans say, and AI does.
I believe we, as humans, will be responsible for the inputs and structures that lead to progress and truth. Our learned experiences form the basis of our "self", which outputs our tastes/thoughts/feelings as actions. This is where impulse comes from, where anxiety comes from: learned experience emerging as nature-plus-nurture behavior.
The naive way to think about this is market structure: yes or no. A more fun way, in my mind, is to visualize it as a 2-3D plane of logical "is"es and "isn't"s. Imagine the plane, filled with objects of various sizes. An infinite cosmos of planets is how I think about it. Always forming.
One such object in this plane can be thought of as a calculator.
So when we consider an object in this plane, like the calculator, how do we know it’s a sufficient expression of logic to model a calculator?
It must be validated. It must perform like a calculator.
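"Evals for Everything" can be made concrete as a table of discrete, human-verified cases that any candidate implementation must reproduce exactly. The sketch below is illustrative; the names (`CALCULATOR_EVALS`, `passes_evals`) are my own, not from the essay:

```python
from typing import Callable

# Discrete, human-verified ground truth: (a, op, b) -> exact expected result.
CALCULATOR_EVALS = [
    (2, "+", 2, 4),
    (7, "-", 3, 4),
    (6, "*", 7, 42),
    (9, "/", 3, 3),
]

def passes_evals(candidate: Callable[[float, str, float], float]) -> bool:
    """A candidate counts as a 'calculator' only if every eval matches exactly."""
    return all(candidate(a, op, b) == expected
               for a, op, b, expected in CALCULATOR_EVALS)

def reference_calculator(a, op, b):
    """One hardened, re-usable implementation others can simply reach for."""
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
           "*": lambda x, y: x * y, "/": lambda x, y: x / y}
    return ops[op](a, b)

print(passes_evals(reference_calculator))    # True: behaves like a calculator
print(passes_evals(lambda a, op, b: a + b))  # False: only models "+"
```

The eval table, not the implementation, is the discrete artifact humans encode; any number of fuzzy attempts can be judged against it, and only the ones that pass are worth re-using.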
As we harden this definition of a calculator, others can choose whether to use it. Given sufficient buy-in to use or evolve this calculator, at some point we reach a suitable definition, and others can simply re-use this invention; this "calculator".
In this world, it is both cheaper and more accurate for AI to simply reach for this "calculator", because AI cannot create a "calculator" perfectly 100% of the time. Asking the fuzzy system, which can only ever be 99.999…% accurate, to create a discrete structure without validation means no two sufficiently advanced "calculators" may ever be the same.
Now, imagine the infinite plane again, filled with sufficiently advanced logical representations, that can be composed into someone else’s logic. For others to use and benefit.
I believe humans must encode this discrete logic. Our job is to move the fuzzy (GPU) to the discrete (CPU).
The most accurate model of a calculator will win, and other humans will cooperate to validate its truthiness, because it is cheaper to pay someone else for a calculator than to perform the work of re-inventing it yourself.
We already do this. This is how open-source works, how apps work, how markets work, how tooling works, how social media works, etc.
So the job of the future is the job of the past, which is to take the fuzzy and bring it to the discrete.
