The lack of an explicit brain model has held back AI progress, leading to applications with no theoretical model of language. This paper explains Patom theory (PT), a theoretical brain model, and its application to human language emulation.
Patom theory explains what a brain does, rather than how it does it. If brains simply store, match and use patterns composed of hierarchical, bidirectional linked-sets (sets and lists of linked elements), memory becomes distributed and is matched both top-down and bottom-up by a single algorithm. Linguistics shows the top-down nature: meaning, not word sounds or characters, drives language. For example, the pattern-atom (Patom) "object level," which represents the multisensory interaction of things, is stored uniquely and then associated as many times as needed with sensory memories so that the object can be recognized accurately in each modality. This resembles a template theory, but with multiple templates connected to a single representation and resolved by layered agreement.
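As a minimal illustration of the data structure just described, the sketch below models patterns as nodes linked both downward (to their parts) and upward (to the wholes that contain them), with one traversal algorithm running in each direction. This is a hypothetical sketch, not the paper's implementation; all class and function names are invented for illustration, and the "any part activates the whole" rule stands in for the paper's layered-agreement resolution.

```python
class Patom:
    """A pattern node in a bidirectional linked-set:
    linked downward to its parts and upward to its wholes."""
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)   # downward links (members of the set)
        self.wholes = []           # upward links, maintained automatically
        for p in self.parts:
            p.wholes.append(self)

def match_bottom_up(observed):
    """Propagate activation upward: any matched part activates
    the wholes associated with it (recognition in any modality)."""
    matched = set(observed)
    frontier = list(observed)
    while frontier:
        node = frontier.pop()
        for whole in node.wholes:
            if whole not in matched:
                matched.add(whole)
                frontier.append(whole)
    return matched

def predict_top_down(node):
    """Propagate expectation downward: a whole predicts all the
    sensory patterns associated with it."""
    expected = set()
    frontier = [node]
    while frontier:
        n = frontier.pop()
        for part in n.parts:
            if part not in expected:
                expected.add(part)
                frontier.append(part)
    return expected

# A multisensory "object level" node associated with two sensory memories.
red_round = Patom("visual:red-round-shape")
spoken = Patom("sound:'apple'")
apple = Patom("object:apple", parts=[red_round, spoken])

# Bottom-up: the visual pattern alone recognizes the object.
print(apple in match_bottom_up([red_round]))   # True
# Top-down: the object predicts its associated sound pattern.
print(spoken in predict_top_down(apple))       # True
```

The point of the sketch is that one storage scheme and one matching procedure serve both directions: recognition rises from any single modality, while the object node, once active, predicts its other sensory associations.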
In combination with Role and Reference Grammar (RRG), a linguistics framework that models the world's diverse languages in syntax, semantics and discourse pragmatics, many human-like language capabilities become demonstrable. Today's natural language understanding (NLU) systems built on intent classification cannot, even in theory, deal with natural language beyond simplistic sentences, because the science they are built on is too simplistic. Adoption of the principles in this paper provides a theoretical way forward for NLU practitioners based on existing, tested capabilities.