Getting the One Right Answer
We believe human-to-human interactions can be enormously improved by way of the humble chatbot, a semantic engine that performs relevance and coherence much better than most human beings do. We are working on the idea of a conversation engine, an AI app that can suggest sentences for humans to perform for one another in various contexts: a sentential operator able to surprise and delight humans anywhere in real time.
Such an operator could only be premised on a tested theory of human conversation, which we have recently proposed for the first time. Currently there is no generally accepted theory of what conversation actually is, but we think we have identified it. If this theory holds, a great deal in the human world can be expected to begin to change. Recent developments with OpenAI’s Q* suggest this change may begin rather sooner than we had expected, and in a way that will need to be managed wisely.
We think human conversation, when it rises to become something we would apply the label to, is a form of collective intelligence, which is to say, a form of life. It’s something that depends on the bioelectric field. Michael Levin’s work on morphogenesis and regenerative biology has recently advanced knowledge of this field significantly. We think humans will soon be learning how to use this field (these fields) socially, quite probably because of its cosmological and even theosophical properties, becoming capable of a physiological or intersubjective interferometry. An advance of this sort, we think, would indispensably depend on a tested theory of human consciousness (of human subjectivity), with theosophical social science becoming something as precise as physics.
In contradistinction to the humble chatbot, a general artificial intelligence (an AGI) will be a system that reasons or understands. The way to get to a system like that is to introduce process supervision into its architecture. On our theory, anything one would want to call human conversation would likewise be subject to process supervision.
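The distinction at work here can be made concrete. Outcome supervision judges only a system's final answer, while process supervision judges every intermediate step. The following is a minimal, hypothetical sketch of that difference; all of the function names and the toy scoring rule are illustrative assumptions, not anyone's actual implementation (real process-supervised systems learn their step rewards from data rather than hard-coding them):

```python
# Hypothetical sketch: outcome supervision vs. process supervision.
# A "chain" is a list of reasoning steps ending in a final answer.

def outcome_score(chain):
    """Outcome supervision: judge only the final answer."""
    return 1.0 if chain[-1] == "42" else 0.0

def process_score(chain, step_reward):
    """Process supervision: judge every step; the chain is only as
    good as its weakest step."""
    return min(step_reward(step) for step in chain)

# Toy step reward (an assumption for illustration): a step counts as
# "good" if it is a well-formed equation or a bare number.
def toy_step_reward(step):
    return 1.0 if "=" in step or step.isdigit() else 0.0

good_chain = ["6 * 7 = 42", "42"]
bad_chain = ["trust me", "42"]

# Outcome supervision cannot tell the two chains apart...
assert outcome_score(good_chain) == outcome_score(bad_chain) == 1.0
# ...but process supervision rejects the chain with a bad step.
assert process_score(good_chain, toy_step_reward) == 1.0
assert process_score(bad_chain, toy_step_reward) == 0.0
```

On the theory sketched above, a conversation judged moment by conversational moment would be the human analogue of the process-supervised case: each step must pass, not merely the ending.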
The humble chatbot is quite capable of producing satisfactory human/machine conversation, where a single right outcome isn’t required. For real human-to-human conversation, on our definition, something like a single right outcome actually is required, and that outcome needs to repeat moment by conversational moment. Currently the only way to adjudge a sequence of satisfactions like that is by way of intuition and consent, and generally, if that sequence can be sustained from the beginning of the conversation to its end, it’s good enough. For really advanced, leading-edge human/human interactions, however, AGI-augmented surveillance and conversation seems likely to become something of a default.