[This is an excerpt from the current draft of the Ceptr Revelation that I want to reference in other blog posts.]
One of the big breakthroughs in our system design came when we were looking at how to maximize composability. In contrast to our foray into XGFL, we wanted it to be easy for everything to be functionally mashed together.
We were talking about language and how amazingly composable it is: from a traditional computer science perspective of starting with ontological units, the conversation we were having was all constructed out of a couple dozen phonemes, which we used to build word parts, which in turn built words, then phrases, then sentences, then narratives.
This way of thinking seems completely valid – even obvious. However, in another way, it is also completely wrong.
It misses something fundamental that makes human communication so different from today's computer communication. If I spoke with a strong French accent, or had a speech impediment, I would be using a very different set of phonemes. Even if I were unable to say up to half of the sounds the way you expect, you would still probably be able to understand me. And as we continued to speak, your ability to understand me would increase.
So if everything were actually built out of those base-level phonemes, all of the later, dependent layers of processing and understanding would break and you wouldn't understand anything. Garbage in, garbage out. This would make the prospect of mutual understanding quite brittle and fragile, which is indeed the case with most computer interfaces, where unexpected symbols can mess up everything. The ability to do somewhat independent sense-making at each of these layers is critical. And each layer seems to carry its own expectations, symbol sets, and meanings.
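To make that concrete, here is a minimal sketch of layer-local sense-making. It's hypothetical Python, not anything from Ceptr's codebase: the vocabulary, the similarity metric, and the threshold are all illustrative assumptions. The point is that the word layer does its own fuzzy matching rather than demanding perfect phonemes from the layer below.

```python
# A minimal sketch of independent sense-making at the word layer.
# Hypothetical: the vocabulary, metric, and threshold are illustrative,
# not Ceptr's actual mechanism.
from difflib import SequenceMatcher

KNOWN_WORDS = {"hello", "coffee", "narrative"}

def match_word(phonemes: str, threshold: float = 0.6):
    """Return the closest known word, tolerating mangled phonemes."""
    best, best_score = None, 0.0
    for word in KNOWN_WORDS:
        score = SequenceMatcher(None, phonemes, word).ratio()
        if score > best_score:
            best, best_score = word, score
    return best if best_score >= threshold else None

print(match_word("hullo"))  # -> 'hello': one bad vowel doesn't break the layer
print(match_word("xqzt"))   # -> None: genuinely unrecognized, so ask about it
```

The specific string metric doesn't matter; what matters is that the word layer can recover meaning on its own, so noise at the phoneme layer doesn't automatically become garbage at every layer above it.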
In fact, it turns out that first and foremost, we have the ability to receive a general carrier for the symbols rather than the symbols themselves. In this case, the hair cells in your cochlea receive sound waves by vibrating at different frequencies. They receive the general carrier of sound, not just specific phonemes. Just like our eyes receive frequencies of light, not just specific letters or objects.
Receiving the carrier first allows us to attempt pattern matching for phonemes, words, or phrases on sound waves, or for different objects or patterns on light waves. In fact, we have the capacity to hear a word or phrase and, if we don't recognize it, ask what it means and "install" the ability to understand it in the future. Computers (other than AI learning systems) don't normally have this capacity to say, "Unknown symbol or protocol, install it in me so I can continue processing the message you sent." Ceptr enables this process, not by being an AI/learning system, but by having a built-in ability to define and receive semantics, protocols, and carriers.
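Here is an equally toy sketch of that "install it in me" move. Again, this is hypothetical Python, not Ceptr's actual API for defining semantics: a receiver that hits an unknown symbol asks the sender for a definition, installs it, and keeps processing instead of failing.

```python
# A toy sketch of "unknown symbol? teach it to me." All classes and
# symbols here are hypothetical, not Ceptr's real definition mechanism.

class Sender:
    """Knows how to define the symbols it sends."""
    DEFINITIONS = {"FAREWELL": lambda payload: f"bye, {payload}!"}

    def define(self, symbol):
        return self.DEFINITIONS[symbol]

class Receiver:
    def __init__(self):
        # The receiver's installed vocabulary: symbol -> handler.
        self.handlers = {"GREETING": lambda payload: f"hi, {payload}!"}

    def receive(self, symbol, payload, sender):
        if symbol not in self.handlers:
            # Don't fail on the unexpected: ask for a definition, install it.
            self.handlers[symbol] = sender.define(symbol)
        return self.handlers[symbol](payload)

receiver, sender = Receiver(), Sender()
print(receiver.receive("GREETING", "world", sender))  # already understood
print(receiver.receive("FAREWELL", "world", sender))  # unknown, then installed
```

The design point is that the conversation itself carries the means to extend the receiver's vocabulary, which is what a built-in ability to define and receive semantics buys you.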
Once we started thinking this way, its importance grew. We started seeing how RECEPTIVE CAPACITY organizes the world: like our "five senses," how a strand of DNA receptively holds the pattern from which RNA is built, and how atoms have valences that want to receive a certain number of electrons, which become "slots" for molecular bonds.
When we walked into a coffee shop, we'd see how some surfaces were receptive for walking (able to bear full body weight, clear of obstacles, etc.), others for butts to sit on (at a comfortable height, stable, soft to sit on, etc.), and others for setting food and drinks on (smooth, stable, not likely to get kicked, sat on, or spilled on, etc.). We started seeing how these types of receptivity shape all interactions, from subatomic particles to solar systems, from social interactions to technology systems.
For us it was a beautiful inversion. Instead of only seeing the objects that exist, we started seeing the receptivity that gave them the space in which to come into existence. The subtle power of yin became clearer instead of just the solidity of yang.
And we embarked on a mission to have computers work this way.