The Perl AGI
The AGI at http://ai.neocities.org should output a thought stemming only from a limited set of sources: an idea stored in the knowledge base; a new idea generated by logical inference; or a sentence generated as the expression of information arriving from the senses, as for example when the AGI is describing what it sees in a visual scene. There should be no random, potentially erroneous associations from a random subject to a random verb to a random object.
As we start coding ghost163.pl and simply press [Enter] with no input to see what output results, the EnNounPhrase() module defaults to 701=I as a subject. Immediately a $verblock is found at t=753 which locks the AI into an output of "I HELP KIDS" from the innate knowledge base. Currently the SpreadAct() module is called from EnNounPhrase() when the AI outputs the 528=KIDS direct object, but perhaps the call should wait until after ReEntry() inserts the idea into the moving front of the knowledge base.
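A minimal sketch of the call site in question follows; the scaffolding, the stub of SpreadAct(), and the variable names other than $verblock are assumptions for illustration, not the actual ghost163.pl code.

  use strict; use warnings;
  sub SpreadAct { my ($psi) = @_; print "spreading activation from $psi\n" }  # stub

  sub EnNounPhrase {             # sketch of the output side only
    my $subjpsi = shift // 701;  # with no input, the default subject is 701=I
    # ...a $verblock found at t=753 locks onto the innate "I HELP KIDS"...
    my $objpsi = 528;            # 528=KIDS comes out as the direct object
    SpreadAct($objpsi);          # current behavior: activation spreads at once;
                                 # proposed: move this call to after ReEntry()
  }

  EnNounPhrase();                # in this sketch, merely spreads from 528=KIDS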
The output of a thought, even from memory, is the result of spreading activation and should not lead to more spreading activation until the same thought becomes a form of input during ReEntry(). There must be some way to delay the calling of SpreadAct() until a new-line or [Enter] is registered. In OldConcept() we could set $actpsi to the $oldpsi value, but not call SpreadAct() until the end of the input or of the re-entry.
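One way to implement that deferral, sketched under the assumption of a global $actpsi flag; the stub of SpreadAct() and the fragment at the bottom stand in for the real input loop and are not the actual ghost163.pl code.

  use strict; use warnings;
  our $actpsi = 0;
  sub SpreadAct { my ($psi) = @_; print "spreading activation from $psi\n" }  # stub

  sub OldConcept {               # sketch of the proposed change
    my ($oldpsi) = @_;
    $actpsi = $oldpsi;           # remember the concept for later activation,
                                 # but do NOT call SpreadAct() here
  }

  # At the end of input or of re-entry, once new-line or [Enter] is registered:
  OldConcept(528);               # e.g. 528=KIDS was recognized during input
  if ($actpsi > 0) {
    SpreadAct($actpsi);          # activation spreads only now
    $actpsi = 0;                 # reset so the same thought does not spread twice
  }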
We have created a new $quapsi variable "by which" the final noun ($psi) from InStantiate() can be passed into SpreadAct() from the ReEntry() module and possibly spread activation to pertinent knowledge in the knowledge base of the AI memory.
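A sketch of how $quapsi might travel from InStantiate() through ReEntry() into SpreadAct(); the subroutine bodies and the stub of SpreadAct() are placeholders for illustration, not the actual ghost163.pl code.

  use strict; use warnings;
  our $quapsi = 0;
  sub SpreadAct { my ($psi) = @_; print "activating knowledge near $psi\n" }  # stub

  sub InStantiate {              # sketch: instantiate a concept at the moving front
    my ($psi) = @_;
    # ...store the concept node in the experiential memory...
    $quapsi = $psi;              # hold the final noun "by which" to spread activation
  }

  sub ReEntry {                  # sketch: the thought re-enters as a form of input
    # ...re-insert the output idea into the moving front of the knowledge base...
    if ($quapsi > 0) {
      SpreadAct($quapsi);        # only now may activation reach pertinent knowledge
      $quapsi = 0;
    }
  }

  InStantiate(528);              # e.g. the final noun 528=KIDS
  ReEntry();                     # triggers SpreadAct(528) upon re-entry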