Axiom of Derivative: Mechanical and Computational Hyper


This post is titled Axiom of Derivative, meaning "Truth of Origin". I have been doing AI research for quite some time now. The next step towards Artificial General Intelligence or Super Intelligence is decision-making systems: whether they can infer true statements from the knowledge already provided to them, and whether they are truly conscious. What I am trying to explain here is an algorithm of sorts: how can a machine infer correct knowledge, discover a scientific or mathematical fact, and prove it? To infer a correct statement in a proof, the machine needs to decide which statements are true, which are false, and which step comes next in the derivation of the proof.
I see four clear steps for a machine to discover axioms through derivation. The work is done by a machine (Mechanical), through computation (Computational), and faster than a human could do it (Hyper). Alright then:
  1. Learn the existing knowledge through neural networks. The computer stores the meaning of words and symbols as weights and biases (Natural Language Processing, Pattern Recognition, etc.). These weights and biases represent the "sense" or "intent" of those words and symbols. This knowledge can be divided into four parts: Understanding, Logic, Decision and Action. These four parts are actually models which learn from each other in that order, in a cyclic fashion. Thus there are four types of statements in knowledge: facts (Understanding), true or false judgements (Logic), conclusions (Decision), and a predefined set of actions for a particular situation based on outcomes (Action).
  2. The particular weights and biases the computer holds for a word are the computer's "representation" or "sense" of that word, its understanding of the word's "intent". Now consider two words: true and false. Through deep learning the computer also has a sense of those two words. Compare the sense of every statement to the sense of true and to the sense of false; in this way it is established whether a statement is true or false, and truth is distinguished from falsehood. All statements are compared to the two words, and the true statements are separated out (a minimal sketch of this comparison follows the list).
  3. What we get from the previous step is a high-level collection of true statements. Run them through an inference engine to put them in order and obtain conclusions, or in other words, compile them. Then identify the sense (weights and biases) of the phrase "saturation point" and compare the compiled knowledge against it. In this way the machine knows when the knowledge in a particular domain has been discovered completely (100%); the second sketch after the list shows the compilation step.
  4. Repeat the previous three steps: learning, inferring and attaining saturation points. The last statement reached at each saturation point is the "Origin/Derivative/Root". At that point, recompile the whole knowledge model against this root, adjusting weights and biases so that fallacies in the previous knowledge are removed and the model is refined. It is important to keep repeating these four steps as needed so that the knowledge model keeps getting refined. The process is top-down, bottom-up and continuous at the same time, as needed (↓⟷↑); the third sketch after the list outlines this outer loop.
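
To make steps 1 and 2 concrete, here is a minimal sketch in Python. The embed() function is a toy, hash-based stand-in for the learned weights and biases of a real language model, so it carries no actual semantics; it only illustrates the data flow of comparing each statement's sense to the sense of the words "true" and "false". All names and example statements are illustrative, not a definitive implementation.

```python
# Sketch of steps 1-2: statement "sense" as vectors, compared to the sense
# of the words "true" and "false". embed() is a toy stand-in for a trained
# model's learned weights; in practice a sentence-embedding model would be used.
import hashlib
import numpy as np

DIM = 64

def word_vector(word: str) -> np.ndarray:
    """Deterministic pseudo-embedding for one word (stand-in for learned weights)."""
    seed = int(hashlib.sha256(word.lower().encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(DIM)

def embed(text: str) -> np.ndarray:
    """Sense of a statement = average of the senses of its words."""
    return np.mean([word_vector(w) for w in text.split()], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def separate_true_statements(statements: list[str]) -> list[str]:
    """Step 2: keep the statements whose sense sits closer to 'true' than 'false'."""
    true_sense, false_sense = embed("true"), embed("false")
    return [s for s in statements
            if cosine(embed(s), true_sense) > cosine(embed(s), false_sense)]

if __name__ == "__main__":
    candidates = ["water boils at 100 degrees Celsius at sea level",
                  "the sun orbits the earth"]
    print(separate_true_statements(candidates))
```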
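
For step 3, a small forward-chaining inference engine can play the role of the "compiler" of true statements. This is a sketch under the assumption that the true statements have already been expressed as facts and simple if-then rules; the fixed point at which no new conclusion appears stands in for the saturation point of that domain. The facts and rules below are illustrative.

```python
# Sketch of step 3: compile true statements into an ordered derivation by
# forward chaining, stopping when no new conclusion appears (saturation).

Fact = str
Rule = tuple[frozenset, str]   # (premises, conclusion)

def compile_knowledge(facts: set, rules: list) -> list:
    """Fire every rule whose premises are known, collecting conclusions in
    derivation order, until a fixed point (the saturation point) is reached."""
    known = set(facts)
    derivation = list(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                derivation.append(conclusion)
                changed = True
    return derivation

if __name__ == "__main__":
    facts = {"socrates is a man"}
    rules = [(frozenset({"socrates is a man"}), "socrates is mortal"),
             (frozenset({"socrates is mortal"}), "socrates will die")]
    print(compile_knowledge(facts, rules))
    # The last statement of the derivation is the candidate "Origin/Root".
```

Forward chaining is chosen here because it naturally produces a derivation order and terminates at a fixed point, which matches the idea of a saturation point; the author's own inference engine could of course order statements differently.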
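
Finally, a skeleton of the outer loop of step 4. The functions learn(), infer(), is_saturated() and refine() are hypothetical placeholders for the earlier sketches and for the re-adjustment of weights and biases against the root; only the control flow is meant to be taken literally.

```python
# Sketch of step 4: learn, infer, check saturation, take the last derived
# statement as the Origin/Root, refine the model, and repeat as needed.

def learn(corpus: list) -> list:
    return list(corpus)              # placeholder: learn senses, return statements

def infer(statements: list) -> list:
    return list(statements)          # placeholder: separate truths and compile them

def is_saturated(old: list, new: list) -> bool:
    return old == new                # placeholder: no new conclusions => saturated

def refine(model: list, root: str) -> list:
    return [s for s in model if s]   # placeholder: drop fallacies relative to the root

def derive_axioms(corpus: list, max_rounds: int = 10) -> list:
    """Repeat learning and inference; at each saturation point take the last
    statement as the Origin/Derivative/Root and refine the knowledge model."""
    model = learn(corpus)
    for _ in range(max_rounds):
        conclusions = infer(model)
        if is_saturated(model, conclusions):
            root = conclusions[-1]   # last statement = Origin/Derivative/Root
            model = refine(conclusions, root)
            break
        model = conclusions
    return model

if __name__ == "__main__":
    print(derive_axioms(["socrates is a man", "socrates is mortal"]))
```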