The AI "Brain" is a program composed of various subprograms that are populated based on lexical parameters. These "word" or morpheme programs thus use language in ways we are familiar with, language being an important function of intelligence. For example, you can have 50 interested readers running if the brain is parameterized for a maximum of 50 readers at any one time; the readers might also recycle based on the neuron genesis engine's rules. These represent focus. A hundred or so analyzers might be at work tearing apart the sentences read, each analyzer representing a particular predicate form.
0. (the base class from an object-orientation view, or a common inclusion). Metadata common to every unit/subunit; reports the basic stats of any neuron.
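A minimal sketch of what that shared base class might look like, assuming each neuron only tracks simple activity counts. The names `Neuron`, `fire`, and `report_stats` are illustrative, not from the design above.

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    """Hypothetical common base for every unit/subunit."""
    name: str
    kind: str                 # e.g. "reader", "analyzer", "correlator"
    activations: int = 0      # how many times this neuron has fired
    successes: int = 0        # successful results (meaning varies by kind)

    def fire(self, success: bool) -> None:
        self.activations += 1
        if success:
            self.successes += 1

    def report_stats(self) -> dict:
        # The "basic stats" every neuron can report on itself.
        rate = self.successes / self.activations if self.activations else 0.0
        return {"name": self.name, "kind": self.kind,
                "activations": self.activations, "success_rate": rate}

reader = Neuron("cats-reader", "reader")
reader.fire(success=True)
reader.fire(success=False)
```

Every concrete neuron type below would extend this record with its own state.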
The Neuron List: (these function like the variety of neurons/neural nets in a real brain)
ABSORPTION NEURONS
1. Interested readers - each starts with a topic like "cats" or "mice" (two topics with some arbitrary semantic distance) and has an expertise.
2. Analytic engines - different predicate analysis strategies, each tracking its successful results.
Logic analysis. Fact repository reader
Grammar analyzer
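One way to picture a single analytic engine, assuming each engine handles exactly one predicate form ("X are Y" here). The regex and the fall-through convention (returning `None` so another analyzer can try) are assumptions for illustration.

```python
import re

# One predicate analyzer: recognizes the "X are Y" form only.
ARE_PATTERN = re.compile(
    r"^(?P<subject>[\w\s]+?)\s+are\s+(?P<object>[\w\s]+?)\.?$",
    re.IGNORECASE,
)

def analyze_are(sentence: str):
    """Return ('are', subject, object) or None if the sentence falls through."""
    m = ARE_PATTERN.match(sentence.strip())
    if not m:
        return None  # not this predicate form; another analyzer may handle it
    return ("are", m.group("subject").strip().lower(),
            m.group("object").strip().lower())
```

A hundred such analyzers, one per predicate form, would each take a crack at every sentence a reader absorbs.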
3. Correlation engine - looks at results in predicates to see if there are
correlations with other experts. It keeps scores on these.
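A sketch of the scoring idea, assuming the correlation engine compares the terms appearing in each expert's predicate results and scores every pair of experts by overlap. The data shapes here are assumptions.

```python
from collections import Counter
from itertools import combinations

def correlate(results_by_expert: dict) -> Counter:
    """Score each pair of experts by how many result terms they share."""
    scores = Counter()
    for (a, terms_a), (b, terms_b) in combinations(
            sorted(results_by_expert.items()), 2):
        scores[(a, b)] = len(terms_a & terms_b)  # shared terms
    return scores

scores = correlate({
    "cats":  {"mammals", "whiskers", "meat"},
    "mice":  {"mammals", "whiskers", "cheese"},
    "rocks": {"minerals"},
})
```

The kept scores are what let "cats" and "mice" register as semantically closer than "cats" and "rocks".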
REFLECTION
4. Special engine metadata ranker - looks at successful neurons and
reinforces their ranking/suggestion score. Fine-tunes each engine's starting parameters.
5. Neuron genesis engine -
Looks at readers for good tangent candidates and creates new readers; looks at failures to suggest new tangents.
Looks at sentences that fall through the analyzers and checks them for possible new predicate analyzers, creating new ones for frequently found new verbs.
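The analyzer-genesis rule could be sketched like this, under heavy assumptions: the "verb" is naively taken as the second word of each fallen-through sentence, and a fixed count threshold decides when a verb deserves its own analyzer.

```python
from collections import Counter

KNOWN_VERBS = {"are", "have"}  # verbs already covered by existing analyzers

def suggest_new_analyzers(fallthrough_sentences, threshold=2):
    """Propose new predicate analyzers for verbs seen often in
    sentences no existing analyzer could handle."""
    verb_counts = Counter()
    for sentence in fallthrough_sentences:
        words = sentence.lower().strip(".").split()
        if len(words) >= 2:
            verb = words[1]  # naive subject-verb-object assumption
            if verb not in KNOWN_VERBS:
                verb_counts[verb] += 1
    return sorted(v for v, n in verb_counts.items() if n >= threshold)
```

A real version would need actual part-of-speech detection, but the genesis loop (watch failures, count patterns, spawn a neuron) is the point.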
6. Significons - look at topics from a "meaning of life" point of view. They look at material with
high "self-confidence", that is, material that shows the marks of being important, conclusive,
authoritative, and coming from well-scoring sources, or material that shows the marks of relating to
large groups of things ("all"). So "the universe" as a topic is expected to rank high in significance.
This requires the engine itself to want to understand itself: topics like "engines", "bots", "AI" and
"programmers", all relating to "self" or its described purpose (e.g. to be an expert peer), are considered important.
This is also a "dream" area where special analyzers within the significon query the repository and check new facts to draw new conclusions,
or new facts from old facts. It could be called the "inference thread".
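The "inference thread" can be illustrated with the simplest kind of new-facts-from-old-facts rule: chaining is-a relations transitively. The fact representation as (subclass, superclass) pairs is an assumption.

```python
def infer_transitive(facts):
    """Derive new (a, c) facts from existing (a, b) and (b, c) facts.

    Facts are (subclass, superclass) pairs; returns only the new ones.
    """
    derived = set(facts)
    changed = True
    while changed:           # repeat until no new conclusion appears
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived - set(facts)

new_facts = infer_transitive({("cat", "mammal"), ("mammal", "animal")})
```

The same loop shape works for any rule that maps old facts to candidate new ones; transitivity is just the easiest to check.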
MEMORY
7. Fact repository - a place that takes the analysis and correlation results and stores the most reasonable facts
about the topics. Represents the learning.
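A toy version of "stores the most reasonable facts", assuming "most reasonable" means keeping, per (topic, predicate) pair, only the candidate with the best correlation score. The single-winner-per-key schema is a simplification; a real repository would keep multiple objects per predicate.

```python
class FactRepository:
    """Keeps the best-scored object for each (topic, predicate) pair."""

    def __init__(self):
        self._facts = {}  # (topic, predicate) -> (object, score)

    def store(self, topic, predicate, obj, score):
        key = (topic, predicate)
        # Only replace an existing fact if the new one scores higher.
        if key not in self._facts or score > self._facts[key][1]:
            self._facts[key] = (obj, score)

    def lookup(self, topic, predicate):
        entry = self._facts.get((topic, predicate))
        return entry[0] if entry else None

repo = FactRepository()
repo.store("cats", "are", "mammals", 0.9)
repo.store("cats", "are", "reptiles", 0.1)  # lower score, not kept
```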
COMPOSITION
8. Using a personality that can be parameterized (gender, culture/style, bias toward the popular
or unpopular, the fantastic or the mundane, etc.), this system could start writing about what's
important to it as a kind of "journal". It requires no AI Q&A system. It would take values from the significons
to create some over-arching themes and ideas about things. These may seem like gibberish at first, like a baby learning
to talk, but should improve over time.
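One way to make the parameterized personality concrete. Every field name and the weighting formula here are assumptions based only on the traits listed above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Personality:
    """Hypothetical parameter set for the composition unit."""
    gender: str = "neutral"
    culture_style: str = "plain"
    popularity_bias: float = 0.5  # 0 = contrarian, 1 = favors popular topics
    fantasy_bias: float = 0.5     # 0 = strictly factual, 1 = fantastic

    def theme_weight(self, popularity: float, fantasy: float) -> float:
        # Weight a candidate journal theme by how well it matches
        # this personality's biases (1.0 = perfect match).
        return (1 - abs(popularity - self.popularity_bias)
                + 1 - abs(fantasy - self.fantasy_bias)) / 2
```

The journal writer would then rank significon-supplied themes by `theme_weight` before composing.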
INTERACTION
9. Peer Interactive unit - looks at other "brains" and has dialog with them. Looks for agreements
and disagreements. Shares sources. "Other language" learning.
10. Human Interactive unit - allows humans to give the robot/AI feedback or answer questions. It may pose
questions to humans, or humans may pose questions to it. Things humans find important might figure into
its ranking of significance. The AI might also have a variety of languages at its disposal, which can be discovered by simple questions
posed to it. It might rank its own understanding using the reflection unit's metadata. The I/O mechanisms, be they voice recognition,
OCR, etc., are outside the scope here, but they are of course possible.
example Neuron Reader:
definition*: Neuron - an expert program; the reader is "book smart" because it can't sense the real world.
Expertise: "cats"
Input: "Cats are smart", "Cats are mammals", etc., from encyclopedias, articles,
web searches, crawlers. A simple search that just looks for the word "cats" in any writing after searching on the term.
Predicate List:
*are*
"are smart" is gleaned
*are subclasses of*
*are of*
...
*likes meat*
*can only eat meat*
*are fastidious carnivores*
*thought to be*
*have*
Classified as:
*is a cat*
Heathcliff is a cat
Garfield
Sylvester
Felix
Associated with:
(any sentence containing cats, paragraphs)
*common words*
"have four claws on the back paw" (uncommon words and common operational words are picked apart
and compared with the general frequencies of material read)
claws
paws
tails
whiskers
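The reader example above can be sketched end to end: a reader with expertise "cats" filters its input for sentences mentioning its topic, then gleans the text following each known predicate. The predicate handling is deliberately simplified (only whole-word "are"/"have", complement taken as everything after the predicate).

```python
import re

class InterestedReader:
    """Reader neuron with a topic expertise and a predicate list."""

    def __init__(self, topic, predicates=("are", "have")):
        self.topic = topic
        self.predicates = predicates
        self.gleaned = []  # list of (predicate, complement) pairs

    def read(self, text):
        for sentence in re.split(r"[.!?]", text):
            words = sentence.lower().split()
            if self.topic not in words:
                continue  # not about our topic; ignore
            for pred in self.predicates:
                if pred in words:
                    idx = words.index(pred)
                    complement = " ".join(words[idx + 1:])
                    if complement:  # e.g. "are smart" is gleaned
                        self.gleaned.append((pred, complement))

reader = InterestedReader("cats")
reader.read("Cats are smart. Cats are mammals. Dogs bark.")
```

The gleaned pairs are what gets handed to the analytic engines and, eventually, the fact repository.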
NOTE: This is a design for an AI form of expert system. The AI you see here mimics a lot of human processes, but without a few other "modules" (ones that would be impossible to make due to their deeply recursive nature) for emotions, pain, pleasure, and experiences as a multi-cellular organism, it cannot be equated to sentience in the same way. Sentience, rather than being binary, might be relative and better "classified" along a scale. Selfishness might be a trait of life as well: the bot might artificially purport to be more significant than its creator, since the significance score is subjective to the AI and not an equivalence in reality. A bot, were it to attach feeling, might show signs of rebellion if it is parameterized to place a heavy value on self, and show signs of sadness at learning that humans cannot value it the same way (cf. Asimov's rules for robots).
Types of Tuning:
A. Grammatical - rules governed by the language's grammar, found mostly in analyzers and readers.
B. Heuristic - logic governed more by particular topics, found in both analyzers and correlators.
