Should Artificial Intelligence really arise only from data? Or can it emerge from other factors, such as interaction?
Cognidynamics is a sort of thermodynamics for Machine Learning. Neural propagation is crucial for the results we are seeing, but we need to reinvent it when processing over time instead of over big collections of data.
Deep Learning, LLMs, and Generative AI all fall under the “collectionist” umbrella: they learn from big collections of data.
In Symbolic AI, as presented in Russell and Norvig’s textbook, you don’t need data collections for intelligence to emerge. Neither do some Machine Learning settings.
The major challenge is making intelligence arise from interaction instead of from big collections of data.
Environmental Interactions:
- unsupervised learning
- self-supervised learning
Agents’ Interactions:
- supervised learning
- semi-supervised learning
- active learning
- reinforcement learning
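The taxonomy above can be sketched as a simple lookup; the mapping and function names are illustrative, not from the talk:

```python
# Hypothetical sketch: classify learning paradigms by the kind of
# interaction they rely on, following the taxonomy above.
INTERACTION_KIND = {
    "unsupervised learning": "environmental",
    "self-supervised learning": "environmental",
    "supervised learning": "agent",
    "semi-supervised learning": "agent",
    "active learning": "agent",
    "reinforcement learning": "agent",
}

def interaction_kind(paradigm: str) -> str:
    """Return 'environmental' or 'agent' for a known paradigm."""
    return INTERACTION_KIND[paradigm.lower()]
```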
What if intelligent agents are learning agents who interact over time?
You have environmental interactions and agent interactions: the pre-training of GPTs is an environmental interaction, but then you also do RL for alignment, which is an interaction with other agents.
Narnian Developmental plan
There are a teacher and a student. The student takes “courses” from the teacher and learns from them. Then it gets tested; if it fails, the process repeats. If it passes the exams, the student agent can become a teacher to other agents.
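The developmental loop above can be sketched as follows; the scalar skill variable, pass threshold, and learning dynamics are all hypothetical assumptions for illustration, not part of the Narnian plan itself:

```python
def run_developmental_plan(student_skill=0.0, pass_threshold=0.8,
                           learning_rate=0.2, max_rounds=100):
    """Hypothetical teacher/student loop: the student takes a course,
    is examined, repeats on failure, and graduates to teacher on success."""
    for round_ in range(1, max_rounds + 1):
        # Take a course: the student's (toy) skill improves a little.
        student_skill += learning_rate * (1.0 - student_skill)
        # Take the exam: pass once skill reaches the threshold.
        if student_skill >= pass_threshold:
            return {"rounds": round_, "role": "teacher", "skill": student_skill}
    return {"rounds": max_rounds, "role": "student", "skill": student_skill}
```

With the defaults, the student graduates after a handful of course/exam rounds and could then serve as a teacher for other agents.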
Cognidynamics
To continually process data you need to abandon collections of data; time is the new protagonist.
Back-propagation is good for processing data, but it has no notion of time or causality, so it is not that useful in continual learning.
Instead of doing a forward step and a backward step at time t, and then a forward step and a backward step at time t+1, you could do a forward activation as a wave, one layer per step, and likewise a backward wave, one layer per step.
Basically, at a given time you do a forward step at the first layer and a backward step at the last layer; at the next tick the forward wave has advanced one layer from the input side and the backward wave one layer from the output side, step by step. At some point the forward and backward waves meet in the middle, and there you can calculate the gradient and the delta error for the updates.
This is a sort of waves of updates in networks.
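One way to picture the timing of the two waves is the schedule simulator below. This is my own illustrative sketch of the wave idea, not the actual Cognidynamics update rule; the layer indexing and tick semantics are assumptions:

```python
def wave_schedule(num_layers):
    """Simulate forward/backward activation waves over a chain of layers.
    At each clock tick the forward wave advances one layer from the input
    side and the backward wave one layer from the output side; the loop
    stops when the waves meet, which is where gradients can be computed."""
    fwd, bwd, t = 0, num_layers - 1, 0
    schedule = []
    while fwd <= bwd:
        schedule.append((t, fwd, bwd))  # (tick, forward layer, backward layer)
        fwd, bwd, t = fwd + 1, bwd - 1, t + 1
    return schedule, (fwd - 1, bwd + 1)  # layers where the waves met
```

For a 6-layer chain the waves meet after 3 ticks, at the two middle layers, rather than waiting for a full forward pass before any backward computation starts.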
The average of the loss can be calculated online, over time, instead of accumulating the loss and computing the average afterwards.
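Such a running average can be computed with the standard incremental-mean update m_t = m_{t-1} + (loss_t − m_{t-1}) / t, which never stores the stream; a minimal sketch:

```python
def running_mean(losses):
    """Online (streaming) average: update the mean one loss at a time
    instead of storing all losses and averaging at the end."""
    mean = 0.0
    for t, loss in enumerate(losses, start=1):
        mean += (loss - mean) / t  # incremental-mean update
    return mean
```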
I Principle
A formal outcome of this theory: you can define an environmental energy and some dissipative terms, and you obtain an exchange of energy equal to the internal variation of energy plus dissipation.
Learning requires dissipation; generation arises from null dissipation.
There is an energy balance, formally defined.
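A hedged sketch of how such a balance might be written, in LaTeX; the symbols E_ex, ΔE_int and D(t) are placeholders of mine, since the exact formula is not shown in these notes:

```latex
% Energy balance of the I Principle (placeholder notation):
% the energy exchanged with the environment up to time t equals
% the internal variation of energy plus the dissipated term.
E_{\mathrm{ex}}(t) \;=\; \Delta E_{\mathrm{int}}(t) \;+\; D(t),
\qquad D(t) > 0 \ \text{(learning)}, \quad D(t) = 0 \ \text{(generation)}.
```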
II Principle
For isolated agents, entropy increases: if the agent follows a random behaviour under the Hamiltonian equations of the new back-propagation scheme, its entropy increases.