Showing posts with label artificial general intelligence. Show all posts

Wednesday, June 4, 2025

Reaction to Predictive Coding: Biologically Plausible? Not So Fast!


YouTube comment on a video about pattern completion, intelligence, brain function, and their relation to prediction at the neural and synaptic level:

"I will say, in addition to the reverse-signal issue, there is another issue with backpropagation.  That said, some researchers have come up with biologically plausible implementations of backpropagation in recent years.  But the other issue I mention is that backpropagation learns too slowly.  Current AI systems experience millennia of training and still fail at basic tasks; despite the extreme slowness of the brain and the scarcity of data, the brain learns to perform even outside the bounds of its training data within months or a few years.  It is said very gifted children can even learn several languages, musical instruments, and decent math within a few years of birth.  Yet despite theoretically being able to fire at around 1000 Hz, neurons tend to operate well under 300 Hz, and from what I have heard the vast majority of the brain is usually silent; activation is sparse.

Regarding predictive coding, I agree with your assessment.  But I do believe something similar is taking place through a Generative Adversarial Network (GAN)-like Turing-learning mechanism via synaptic competition.  Synaptic competition is implemented via the capture of proteins related to the strengthening of synapses.  All the synapses that fire close to the time of neural activity, iirc, generate molecular tags that capture synapse-strengthening proteins, and these synapses strengthen, while those that fail to do so weaken over time.  How does this cause a similar pattern-completion comparison mechanism?  Input from sensory organs arrives at some synapses, while signals from other neurons that are part of a larger pattern arrive at other synapses.  Normally, given that the sensory input is part of the larger pattern, the two signals match, but if there is noise there can be a mismatch.

In essence, the surrounding pattern can be seen as completing, or trying to activate, the neuron in case noise blocks the sensory signal, so that a partial match appears complete.  This can be seen in language, for example: when people speak, they sometimes unconsciously omit small chunks of sentences, but a receiver who knows the language never even becomes aware that the speech is missing tiny chunks, iirc.  It is also one of the things that makes learning a foreign language more difficult.

But understood from a GAN/Turing-learning perspective, the surrounding neurons can be viewed as counterfeit- or fake-signal-generating networks trying to trick the neuron, while the real sensory signals can be seen as the real thing.  And due to synaptic competition, the surrounding tissue gets better at faking, predicting, or completing the signal.  (What it actually does is merely connect the most related pattern detectors from nearby tissue to the neuron or pattern detector.)  This is because the competition rewards successful completion, or successful prediction, at the level of the neuron's synapses."
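The synaptic-competition dynamic described in the comment can be sketched as a toy simulation: synapses that fire near the time of neural activity set a tag and split a limited pool of strengthening proteins, while untagged synapses slowly decay. Everything here (the function name, the gain and decay constants, the pool size) is an illustrative assumption, not a model from the literature.

```python
def competition_step(weights, active, pool=1.0, gain=0.1, decay=0.02):
    """One round of synaptic competition.

    Synapses in 'active' (those that fired near the neuron's activity)
    are tagged and split a limited pool of strengthening proteins;
    untagged synapses slowly decay. Purely illustrative dynamics.
    """
    tagged = set(active)
    share = pool / len(tagged) if tagged else 0.0
    new_weights = []
    for i, w in enumerate(weights):
        if i in tagged:
            new_weights.append(w + gain * share)      # capture -> strengthen
        else:
            new_weights.append(max(0.0, w - decay))   # no capture -> weaken
    return new_weights

weights = [0.5, 0.5, 0.5, 0.5]
for _ in range(20):
    weights = competition_step(weights, active={0, 1})  # synapses 0,1 co-fire
print(weights)  # synapses 0 and 1 strengthen; 2 and 3 decay toward zero
```

Under these toy dynamics the co-firing synapses grow while the silent ones fade, which is the "competition rewards successful prediction" intuition in its simplest form.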

Turing Learning Enables Machines to Learn Through Observation Alone

Thursday, March 7, 2019

Making Sense with Sam Harris #116 - AI: Racing Toward the Brink (with El...



Comment:  I think one can simplify from the ability to accomplish goals to incomplete-pattern or partial-pattern completion, which is pretty much the same.

Everything is a pattern, spatial or temporal, though the real world is spatiotemporal.  That is all that enters the brain: spatiotemporal patterns.  Complete the higher pattern, and you have the source of the smaller patterns: the cause, object, or thing producing the action or phenomenon observed.  Potentially, a population of interconnected partial spatiotemporal pattern detectors|completers|predictors would have a sort of collective intelligence higher than that of any component partial pattern detector|completer.

If you can detect and complete arbitrary patterns, you can complete the goal to which a pattern leads.  A sequence of actions leading to a particular goal is but a pattern that produces what is sought, or accomplishes the goal.

A partial pattern detector|completer|predictor can be seen as a modified pattern detector|completer that also responds to portions or fragments of the pattern it detects with some level of activity, activity that can potentially affect other portions of the system.
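Pattern completion from a fragment can be illustrated with a classic Hopfield-style autoassociative memory: store a pattern with a Hebbian outer-product rule, then feed in a corrupted fragment and let the network settle back to the stored pattern. This is a standard textbook stand-in for the idea, not the specific mechanism the post proposes.

```python
# Store one pattern with a Hebbian outer-product rule, then complete a
# corrupted fragment by repeatedly applying the weights (Hopfield-style).
def sign(x):
    return 1 if x >= 0 else -1

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
n = len(pattern)
# Hebbian weights, no self-connections
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

partial = pattern[:4] + [1, 1, 1, 1]  # second half lost to noise

state = partial[:]
for _ in range(5):  # synchronous updates until the state settles
    state = [sign(sum(W[i][j] * state[j] for j in range(n)))
             for i in range(n)]

print(state == pattern)  # True: the fragment is completed to the stored pattern
```

Each unit here is a crude "partial pattern completer": it responds to whatever fragment of the pattern reaches it, and the population collectively recovers the whole, which is the collective-intelligence point made above.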

Thursday, October 25, 2018

Another Free will vs determinism comment

      Free will vs determinism comment

Was listening to Molyneux's audio book:
Universally Preferable Behavior


Molyneux argues that if you're not responsible, arguing with you is like arguing with a television set.  But that does not follow.  A TV normally has no way to causally interact with complex language in a meaningful way.  A language interaction, like a physical interaction with a pile of rocks, can change the configuration and responses of more complex matter in a causal manner.  Causality, and meaningful interaction, care not about personal responsibility.

All that is required for arguing to be effective is that the entity being argued with has the capacity to understand the language and arguments being made and to be causally affected by them.  When you set a clock, it matters not whether the clock has free will; it only matters that the knobs are able to causally influence it.  Likewise, an argument received by the sensory organs interacts with the nervous tissue and causally influences it.

There are several lines of attack against free will, a few examples follow.

1. Relativity of simultaneity seems to imply block time: the existence of a four-dimensional chunk of spacetime in which both future and past are identical in quality and immutable.

2. Postdiction: there is evidence to suggest that conscious sensation occurs after an event has taken place.  If you're conscious only of past events, and the past is immutable, then any idea that you're changing what you're conscious of, insofar as it refers to the past, is nothing more than an illusion.

3. It is believed there are mechanisms producing action selection.  If a bunch of components following rules creates a mechanism that chooses actions, it is the components and the mechanism that explain and cause your choice; your sensation of choosing is but an illusion.

That said, if there is any basis for preferable behavior, or preferable long-term goals, determinism would ensure the optimal outcome if one or more entities increased in capacity and knowledge without bound.  If there is anything that allows deviation, then in an infinite universe with potentially vast numbers of entities undergoing unbounded growth in capacity, the optimal outcome might not be guaranteed.

PS
More on free will

free will molyneux youtube series link

Our capacity to reason and compare does not mean we somehow transcend mechanisms that should be explainable in a reductionist manner, mechanisms that result from the interaction of our components.  We can posit a computer program with similar capacity, yet it cannot transcend the limits of computation.

That said, it may still be preferable to behave as if people are responsible and there is morality.  Eventually we may be able to rewire brain circuitry to behave more in accordance with our "moral standards".  We will be able to tell with high probability whether a criminal will behave "morally" or "immorally", and detain them indefinitely if they do not rewire to behave "morally", given a high probability of serious "immoral" criminal behavior if they are released.


If you could rapidly put something in the way of a falling boulder, or fire a projectile at the boulder, it might change its course.  Nervous tissue can likewise react and change in response to an argument that falls upon the sensory organs.

If the TV had true AI, you could argue with it.  But your exchange of information with a TV is usually more meaningful through the buttons or knobs; it may respond to basic voice commands, but not to complex language interaction.  With a clock, if you want meaningful change, you turn the knobs.  A human you can change through exposure to arguments.

I think we can potentially have determinism and preferable behavior by some standard.  Things can change, but in a determined manner.  Emotions can be elicited in a determined manner, and as said, change too.  That said, determinism shifts the perspective on "moral" wrongdoing from punishment and blame to rehabilitation and help.

____________________________________________
Humans have language; perhaps some other kinds of animals have some kind of language, but it is unreasonable to use language with entities that lack the capacity for it.  Humans also have general intelligence.  It is expected that artificial machines may eventually have general intelligence and the ability to use language.

A rock can't change its mind, but its path can be changed if a gust of wind, a projectile, or something else gets in its way.  A mind can be changed in the same way: the underlying mechanisms behind it can take a different turn upon exposure to external information.

The question with regard to changing minds is whether, given a set of information, the outcome is determined or not.  Say you were planning to invest in company X, and you got information from a very reliable source that company X is a scam.  Is your decision whether to still invest determined?  Or is there any way the outcome of your decision is not determined?  As regards determinists and changing minds: they simply believe that whether they'll succeed or fail in changing someone's mind is already predetermined; not that they can't change minds, but that the outcome of the attempt is already set in stone.

Saturday, September 22, 2018

Comment on goals, general intelligence, goal selection regarding quote from Ben Goertzel


"Importantly, there is no room here for the AGI to encounter previously unanticipated aspects of its environment (or itself) that cause it to realize its previous goals were formulated based on a disappointingly limited understanding of the world... 
In Yudkowsky’s idealized vision of intelligence, it seems there is no room for true development, in the sense in which young children develop. Development isn’t just a matter of a mind learning more information or skills, or learning how to achieve its goals better. Development is a matter of a mind becoming interested in fundamentally different things. Development is triggered, in the child’s mind, by a combination of what the child has become (via its own learning processes, its own goal-seeking and its own complex self-organization) and the infusion of external information."-Superintelligence: Fears, Promises and Potentials, Ben Goertzel Source


This is the sort of thing I mean when I talk about being able to flexibly change or update goals with acquired knowledge of the nature of the world.  Without the ability to change or update goals, the outcome is nonsensical.

There is a reason why humans have such a flexible capacity to change goals and even to go against innate drives.  Nature could have programmed humans to follow certain goals to the letter, but general intelligences that are open-ended in the rigidity of their goals are likely simpler to evolve, and probably simpler to design.

In any case, without the capacity for flexible goal selection, is it a truly autonomous general intelligence?  It is no truly autonomous agent; at most it is a tool, a tool of whoever set its goals.

Monday, August 22, 2016

Possibility of synthetic biological neural chips

If you look at a human, the volume of non-nervous supporting tissue is vast, but this is not the case for all animals.  Some animals have a vast nervous-system volume with seemingly minimal supporting-tissue volume.

"We discovered that the central nervous systems of the smallest spiders fill up almost 80 percent of their total body cavity, including about 25 percent of their legs." - ScienceDaily news article



In theory such neural tissue could be adapted to work in an artificial biochip, as it requires very little supporting-tissue volume.  It would be an engineering challenge, but one can imagine an interweaving of minute support organ systems creating a large sheet of neural tissue.  With some spacing and oxygenation and nutrient systems, even volumetric arrangements are possible.

One of the problems observed in nature, including in the human brain, is that as neural communication distance grows, natural systems switch from analog neural transmission to digital action-potential transmission, which is less energy efficient; long-range transmission, even with myelination, is also relatively slow.  But an artificial synthetic-biology neural chip could make use of optical interconnects that transmit at the speed of light between sections of the chip.
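The speed gap can be made concrete with rough numbers: fast myelinated axons conduct at up to roughly 100 m/s, while light in a dielectric waveguide travels at roughly 2×10^8 m/s (about two-thirds of c). The figures below are order-of-magnitude textbook estimates, not measurements of any particular system.

```python
# Rough one-way latency across a 10 cm path, comparing myelinated
# axonal conduction with an on-chip optical link.
# Speeds are order-of-magnitude estimates, not measurements.
distance_m = 0.10            # 10 cm path across the chip
axon_speed = 100.0           # m/s, fast myelinated axon
light_in_waveguide = 2.0e8   # m/s, ~2/3 of c in a dielectric

axon_latency_ms = distance_m / axon_speed * 1e3
optical_latency_ms = distance_m / light_in_waveguide * 1e3

print(f"axon:    {axon_latency_ms:.3f} ms")     # 1.000 ms
print(f"optical: {optical_latency_ms:.9f} ms")  # ~0.0000005 ms
print(f"speedup: {axon_latency_ms / optical_latency_ms:.0f}x")  # 2000000x
```

Even with these generous assumptions for the axon, the optical link is about six orders of magnitude faster over the same distance, which is the motivation for replacing long-range biological wiring with optical interconnects.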

Depending on the species, insects can go for days without water or food; some can even withstand prolonged oxygen deprivation for tens of hours.  If metabolic-suspension capabilities are brought into the equation, such systems could hypothetically last for decades without food, water, or oxygen.  So these would be very resilient chips if designed appropriately, not fickle.  (On a side note: I'm not very fond of petri dishes and growing things with finicky settings, growth media, etc.  I prefer full multicellular support structures that allow biological cells to basically sit out in the open environment, at room temperature and with minimal maintenance.)

Eventually, self-contained gas exchange and nutrient recycling with electric-energy conversion (electrical to chemical, to carry out recycling and power the molecular machinery) could make the entire computing system fully enclosed, needing neither external water nor external food nor external oxygen.

Once the genetic regulation that governs connectivity and learning is optimized, such a system could be used for arbitrary applications.  In the unlikely event that it is true, as some neuroscientists seem to imply, that even a single biological neuron operates at basically supercomputer level, or, as some hypothesize, can even do quantum computations (very unlikely), then a biochip with billions of such neurons would vastly outcompete any digital computer.  Even if those performance hypotheses aren't true, the energy efficiency, durability, and performance of such systems would be excellent.