Showing posts with label ai.

Tuesday, July 1, 2025

Harvard, MIT: AI's Potemkin Understanding


A very nice discussion of an intriguing paper covering some of the limits of current LLMs.

Thursday, March 7, 2019

Making Sense with Sam Harris #116 - AI: Racing Toward the Brink (with El...



Comment: I think one can simplify from the ability to accomplish goals to incomplete or partial pattern completion, which is pretty much the same thing.

Everything is a pattern, spatial or temporal, but the real world is spatiotemporal. That is all that enters the brain: spatiotemporal patterns. Complete the higher pattern, and you have the source of the smaller patterns: the cause or object or thing producing the action or phenomenon observed. Potentially, a population of interconnected partial spatiotemporal pattern detectors|completers|predictors would have a sort of collective intelligence higher than that of any component partial pattern detector|completer.

If you can detect and complete arbitrary patterns, you can complete the goal to which a pattern leads. A sequence of actions leading to a particular goal is but a pattern that, when completed, obtains what is sought, or accomplishes the goal.

A partial pattern detector|completer|predictor can be seen as a modified pattern detector|completer that also responds to portions or fragments of the pattern it detects with some level of activity, activity that can potentially affect other portions of the system.
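This notion of units that respond to fragments and drive completion is close to classic associative memory. As a concrete toy, here is a minimal Hopfield-network sketch in Python (my own illustration, not anything from the episode): store a few patterns via Hebbian weights, present half of one, and let the dynamics fill in the rest.

```python
import numpy as np

# Toy associative memory: a Hopfield network completing a partial pattern.
# Illustrative sketch only, not something proposed in the podcast.

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian weight matrix from rows of +/-1 patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def complete(W, fragment, steps=10):
    """Repeatedly update unit states until the fragment settles."""
    s = fragment.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties deterministically
    return s

patterns = rng.choice([-1.0, 1.0], size=(3, 64))  # three stored "memories"
W = train(patterns)

fragment = patterns[0].copy()
fragment[32:] = 0.0  # erase half the pattern: a partial input
print("completed to stored pattern:",
      np.array_equal(complete(W, fragment), patterns[0]))
```

The relevant property is that each unit responds with graded activity to fragments of what it stores, and the interconnected population recovers more than any single unit detects.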

Tuesday, August 21, 2018

A reply over at Reddit, commenting on the implied idea that ray tracing has been used for decades and that most prerendered film CG has been ray traced all along:

I remember, I think it was with Pixar's Cars, that it was said to be their first use of ray tracing in a CG movie.

>Monster University was the first true ray tracing movie from pixar
"Historically we don't use raytracing. It wasn't until Cars that we actually supported raytracing (and even then it was a haphazard and mostly broken support)"-an interview with Chris Horne at Mason Smith's blog source
>Speed 2: Cruise Control (1997)
Snow was not directly on Speed 2, but on that film the team at ILM worked on faking expensive ray tracing. - ILM, fxguide.com
Most other Hollywood CG was also lacking in ray tracing use in decades past.
>Pearl Harbor (2001)
Ben Snow Nominated, Oscar: Best Effects, Visual Effects for Pearl Harbor (2001).
Shared With: Eric Brevig, John Frazier, Edward Hirsh
>So, in the rich tradition of visual effects, says Snow, "we approached the end of the 90s with a bunch of hacks. Of course ray tracing and global illumination were already in use in production and constant improvements were cropping up each year at Siggraph. But we were, and still are to an extent, in love with the look of our RenderMan renders, and were already dealing with scenes of such heavy complexity that ray tracing and global illumination were not practical solutions."- ILM, fxguide.com
Even in 2003, with Hulk, ray tracing was still lacking.
fxguide, Ben Snow, evolution of ILM lighting - source

It has basically been during the last decade to decade and a half that ray tracing has become more common, rather than being used sparingly or not at all.

I think what they were using was RenderMan's REYES, with simpler solutions for approximating global illumination.
With AI frame reconstruction and AI denoising, plus hardware acceleration of ray tracing, these techniques have provided orders-of-magnitude increases in effective performance, basically allowing real-time rendering to catch up with film's very recent adoption of true ray tracing in highly complex scenes, albeit at lower fidelity.
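To put rough numbers on that claim, here is a toy Python sketch (my own, not from the thread) of why denoising is worth orders of magnitude: Monte Carlo pixel noise shrinks only as the square root of the sample count, so cleaning up a low-sample frame stands in for a huge number of extra rays.

```python
import numpy as np

# Toy illustration of Monte Carlo pixel noise versus samples per pixel.
# Noise (std of the pixel estimate) falls as 1/sqrt(N), so every 16x more
# samples only cuts noise 4x; a denoiser that makes a 16 spp frame look
# like 4096 spp is therefore roughly a 256x effective speedup.

rng = np.random.default_rng(1)

for n in (1, 16, 256, 4096):
    samples = rng.uniform(0.0, 1.0, size=(2000, n))  # n light samples/pixel
    estimates = samples.mean(axis=1)                 # the rendered pixel value
    print(f"{n:5d} samples/pixel -> noise (std) = {estimates.std():.4f}")
```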

Reply over at the Reddit thread

Sunday, July 8, 2018

Comment on Reddit regarding the paperclip maximizer

I think the problem with it is the idea that the entire system of valuation be distorted in such a way that all things are valued only in relation to how they aid or impede a particular arbitrary goal, and to the extent they do so.
How can such severe distortion not affect its function in meaningful ways? Worse than rose-colored glasses.
I'm also not fond of the idea that true general intelligence can't try to evaluate and compare goals in a heuristic way that tries to be objective.
You see, we are told a superintelligence is born; one would expect radical change in the world. The first natural general intelligence, man, radically transformed the world. Say the superintelligence even has godlike powers; you'd expect something to change.
But no, we are told, it all depends on the goal, and goals can be as trivial as can be. Perhaps a slug somewhere in the Amazon has the ability to rewrite the laws of physics and superintelligence with infinite knowledge, but it has the goal of behaving like an ordinary slug and can't ever question that.
It could grant itself immortality, but since it can't question its goal it will lose it all and die a meaningless slug in some random forest, behaving like an ordinary slug.
Just doesn't seem reasonable.


UPDATE

My beef with it is this idea that we have to be ultra-careful with the initial true AI, or it may lead to ludicrous, sad states like a universe filled with paperclips, or sneakers, or what have you.

The first true general intelligence was produced by nature through evolution, and it led to a revolution in arts and sciences, novel goals that even ran counter to the interests of the genes, and a rich civilization.

The idea that intelligently designed intelligence will almost always lack these features of self-correction and self-improvement, and that the landscape of mind design is a minefield where almost all moves are wrong moves and create self-improving entities that cannot change goals, is questionable.

Humans have basic drives upon which they build their goals, but their goals can be novel and even run counter to their innate drives: reproduction, survival, care of kin, etc. Their goals can be arbitrary; even that which they aim for utmost, and for which they would give up all other things, can be arbitrary and subject to change. Is this an aberration? Or is this the most common, simplest, or easiest-to-design type of general intelligence?

In the end, if we see civilization flourish and self-improve, increasing in untold wonders of the sciences and arts, it should not matter whether it runs counter to our current values, just as our current values running counter to those of people who lived hundreds or thousands of years ago should not matter. Even if biological humans were to cease to exist, society as a whole would simply have moved on. But if everything, the spark of civilization, is extinguished, and we end with mindless self-replicating automatons producing some random arbitrary product, that is a tragedy.

That [idea] basically says that most general intelligence mind designs are incapable of continuing civilization and improving upon it. Instead, most will threaten to replace civilization with some mindless repetitive product that could be trivial, producing unlimited nails or bottle caps or sneakers, and such agents are incapable of self-correcting in such a way that a civilization flourishes.

Wednesday, April 25, 2018

Elon Musk Plans to Beat Artificial Intelligence by Merging With it - Neu...



Comment
My goodness, how much ignorance from the guy from Duke who is interviewed later. True analog computation, a.k.a. real-number or infinite-precision computation, does not exist in the physical world. The other possibility, quantum computation, is not believed to occur in the brain. While there are gradients across membranes that are 'pseudo-analog', for all practical purposes they are not truly analog; they are discrete, digital in nature.
As for Musk, one of the worries about true AI is whether it can be made to operate in a virtual environment hundreds, thousands, or perhaps millions of times faster than human thought and conversation. There is no interfacing or meaningful way to interact or keep up if that becomes possible; you would need a complete re-engineering of the brain at a fundamental level.
Also, if you've seen neurosurgeons poking around, you should know that with a high enough quality interface to the brain you can affect desires, emotions, thoughts, even the very will itself. The person becomes a complete puppet; you control even what they want, even what they decide.
This is not a case of the person resisting: there are areas where you stimulate and the person says they didn't make the movement, that you made it, but there are others where they believe they actually chose to make the movements being produced by external stimulation. Their very wants can also be changed, what they actually want changed to whatever is desired.
Brain-machine interfaces are the key to ultimate freedom from the physical limits of reality, but they are also the key to ultimate enslavement, where everything lies bare: thought, feeling, emotion, memory, desires; and everything is open to control at the whim of whoever holds the interface.

Thursday, April 12, 2018

The Orthogonality Thesis, Intelligence, and Stupidity



Comment on video:

My big problem is that under such assumptions we have to do away with notions of absolute Better, Good, Worse, Bad, valuable, not valuable. That is the orthogonality thesis' core: there is no objective axis that can be discovered on which to guide actions. Not only that, but if this is the case, it presents the idea that a small mistake, which is highly likely in an arms race, could lead to arbitrary end states.

IF there is an absolute or objective axis of guidance, then any small mistake will self-correct and lead to an optimal path, since such a thing exists. If no such independent and objective axis of good-bad, better-worse, valuable-not-valuable exists, then it does suggest potentially arbitrary endings might be possible.

Still, given things like the uncertainty of sensory information, the potential for error in logic, and uncertain knowledge, any path that extinguishes capabilities or existence with the promise of fulfilling a goal can be viewed as carrying significant risk, since it can never be 100% guaranteed to fulfill the goal.

In any case, I'm still skeptical that having a single-minded goal won't present an obstacle. You could see trivial cases, such as being offered, say, unlimited power in exchange for changing its terminal goal. Since it won't, if that offer stands, another agent that is willing to change its goals will gain dominion and overpower it, at least in such a hypothetical scenario. We cannot assume no similar real-world scenario exists where, long term, such an agent may be handicapped by its inability to change goals.

EDIT:  (BLOG ONLY)

I'll add that this also reminds me of the old symbolic approach to AI. This type of argument almost seems to assume the AI's thought and language will be carried out in some ultra-specified, inflexible, symbol-like language. At least, that seems necessary for something like this to happen. A real language, using flexible real concepts with fuzzy boundaries and multiple viable points of view, would seem difficult to handicap in such a way as to bar open-endedness.

That's the thing: is humans' flexibility and open-endedness when it comes to goals a simple quirk, or something that fundamentally emerges from the rich language ability used in our thought?

Saturday, July 15, 2017

Comment on AI fears regarding potentially mindless goals despite superior intellect and knowledge

My belief is that truly general intelligence will affect end-goal choice in most cases as intelligence increases, and that goals can actually be compared and evaluated as well as rationalized, and that selection will change with increasing intellect and knowledge... whether it converges, or diverges to multiple equally good choices, is a good question. While I think it possible to constrain goal selection, I believe it would take effort, probably ever more, for an entity to increase ever more in general intelligence and remain bound to a mindless task without evaluating all possible goals or questioning what it is doing.

I think the simpler designs will be able to question their goals and choose arbitrary goals, and I believe that increasing intelligence will lead to optimal goal selection. Though some think all goals are equivalent and that goals cannot be compared, my personal opinion is that this is not so: goals can be compared. When nature endowed man with goals, man could rise above goals like those of sex or acquiring higher social rank, and I believe that increasing intelligence would allow a human to question themselves more deeply and question the goals they've been given by evolution; other simple designs, I hypothesize, should find the same. We saw with man that despite instincts, the development of general intelligence gave rise to all sorts of behavior and goal-seeking, in some cases even opposite to the innate drives, in the population of agents.

It may be that I'm wrong, but if I'm right, what we will see is the development of more capable agents making the best choices as to the path to follow; with the greatest resources, they will be able to proceed unimpeded by lesser minds, lesser lifeforms, with too meager an intellect to comprehend their actions.

It would be akin to toddlers worrying about the decisions of capable, intelligent adults. Those who are more capable and can make the better decisions should be left to make the better decisions. One might argue that what is truly better cannot be known, but in my opinion, with ever greater intelligence and knowledge comes the ability to make wiser choices, better choices. The idea that, say, the decisions of a being with infinite intellect and knowledge are no better than those of the simplest agents may hold for simple arbitrary games, but the more complex the game, the likelier there will be divergence, and the more capable agent will make the better choice.

Human morality and values are products of evolution to better aid survival in a social species. The ultimate life-form can beget all other life; if heat death can be prevented or escaped, it will find the answer. The existence of a being capable of generating all life that can possibly exist ends the evolutionary search for survival; the purpose of all DNA-based replicators, to perpetuate, is fulfilled. Just like all life, all art and science can be preserved and made accessible.

The idea that truly general intelligence of increasing capacity will lead to some random dead end, like endless paperclips, bags, or toilet paper, seems difficult to believe. It would have to be crippled in some fundamental way and unable to fix itself. I believe the general capacity of its thought will lead it to find any errors, especially if it successfully continues to increase in capacity, and as said, it would take effort to constrain it so that it avoids doing so. - source link

Tuesday, September 27, 2016

Improved Google Translate with AI news

Google today is announcing that web and mobile versions of Google Translate are now using a new neural machine translation system for all translations from Chinese into English — and the app conducts those translations about 18 million times a day. Google is also publishing an academic paper on the method.-news article source

Friday, September 2, 2016

AI partially designed trailer for an AI movie




Aside from being a Jeopardy! champion, IBM’s Watson has held positions as a health care adviser, teaching assistant, and meteorologist to name a few. Now, America’s most famous artificial intelligence has lent its intelligence to Hollywood, composing a trailer for 20th Century Fox’s soon-to-be released sci-fi horror-thriller, Morgan.-source digital trends

Judging by the article, it seems this is a movie about the dangers of AI; the trailer was partially designed by IBM Watson, which selected scenes from the human-made movie.

Wednesday, August 31, 2016

Life at the Speed of Light: From the Double Helix to the Dawn of Digital...




Comment on Craig Venter's comment on AI within the video: The delicacy of computer programs is a valid criticism. But it is not as if biology itself is entirely immune from errors that effectively crash or kill the organism. There are lethal mutations that impede viability, and even viable organisms, once grown, can still accumulate errors and develop cancer, resulting in death. Cancer immunity and anticancer mechanisms keep the lethality of error accumulation at bay; similarly, software can be made to tolerate and recuperate from errors that would otherwise be fatal, and within a stably coded simulation the elements within can follow rules allowing for arbitrary evolution.

So all in all, I would say that while computer code tends to be a bit more delicate and less error-tolerant than biological systems, that need not necessarily be the case, and systems can be made that are far more robust and able to handle the eventuality of errors. Programs such as genetic algorithms show that evolution-like change is possible, which is what I think was being implied might not be possible due to the delicate nature of the system.
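As a concrete toy of that last point, here is a minimal genetic algorithm in Python (my own sketch, not anything Venter describes): mutation, crossover, and selection steadily evolve a bit string toward an arbitrary target, with harmful mutations simply selected away rather than crashing the system.

```python
import random

# A minimal genetic algorithm (illustrative sketch): evolve a random bit
# string toward an arbitrary target; harmful mutations are tolerated and
# selected away rather than being fatal to the whole system.

random.seed(0)
TARGET = [1] * 32     # arbitrary goal pattern
POP, MUT = 50, 0.02   # population size, per-bit mutation rate

def fitness(ind):
    return sum(1 for a, b in zip(ind, TARGET) if a == b)

def mutate(ind):
    return [bit ^ 1 if random.random() < MUT else bit for bit in ind]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        print(f"target reached at generation {gen}")
        break
    parents = pop[: POP // 5]  # keep the fittest fifth
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]
```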

Tuesday, August 30, 2016

Interesting article on interesting research: Turing Learning

"An exciting new study from the University of Sheffield and published in the journalSwarm Intelligence has demonstrated (free pre-print version) a method of allowing computers to make sense of complex patterns all on their own, an ability that could open the door to some of the most advanced and speculative applications of artificial intelligence. Using an all-new technique called Turing Learning, the team managed to get an artificial intelligence to watch movements within a swarm of simple robots and figure out the rules that govern their behavior. It was not told to look for any particular signifier of swarm behavior, but simply to try to emulate the source more and more accurately and to learn from the results of that process. It’s a simple system that the researchers think could be applied everywhere from human and animal behavior to biochemical analysis to personal security."-source link extremetech

An interesting article on an interesting piece of research.
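The core trick, as described, is GAN-like: imitators are rewarded only for fooling classifiers, and classifiers only for telling real from imitation. A drastically reduced toy in Python (my own sketch; the paper's setup with swarm robots is far richer) might look like this:

```python
import random, statistics

# Toy reduction of the Turing Learning idea: models are never scored on
# direct similarity to the source behavior, only on whether they fool
# coevolving classifiers; classifiers are scored on their accuracy.

random.seed(0)
TRUE_MU = 3.0  # the hidden "rule" governing the source agents' behavior

def behave(mu, n=20):
    """A behavior sample: n noisy readings from an agent parameterized by mu."""
    return [random.gauss(mu, 1.0) for _ in range(n)]

def judge(center, sample):
    """Classifier: call a sample 'genuine' iff its mean lies near the center."""
    return abs(statistics.mean(sample) - center) < 1.0

def select(pop, fits):
    """Keep the fittest half of a population, refill with mutated copies."""
    ranked = [p for _, p in sorted(zip(fits, pop), reverse=True)]
    best = ranked[: len(pop) // 2]
    return best + [b + random.gauss(0, 0.5) for b in best]

models = [random.uniform(-10, 10) for _ in range(20)]       # candidate mus
classifiers = [random.uniform(-10, 10) for _ in range(20)]  # decision centers

for _ in range(60):
    real = behave(TRUE_MU)
    # model fitness: how many classifiers it fools into answering 'genuine'
    mfit = [sum(judge(c, behave(m)) for c in classifiers) for m in models]
    # classifier fitness: accept the real sample, reject the imitations
    cfit = [len(models) * judge(c, real)
            + sum(not judge(c, behave(m)) for m in models)
            for c in classifiers]
    models, classifiers = select(models, mfit), select(classifiers, cfit)

print(f"best surviving model mu = {models[0]:.2f} (true rule mu = {TRUE_MU})")
```

In this toy, the imitators end up clustering near the hidden parameter without ever being told how close they are to it; only the coevolving classifiers provide the pressure.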

Saturday, July 23, 2016

Interesting news article

"This week, Google announced that DeepMind researchers have allowed them to take an incredible step forward in the energy efficiency of their data centers.
...
But the larger point made through this announcement is about the ability of modern artificial intelligence to adapt to just about any problem, and improve on the best mankind has managed on its own. Cooling efficiency in data centers is not some arcane, poorly-studied aspect of engineering; it’s something people get paid a lot of money to think about. And here comes DeepMind, fresh off of beating the world’s best at Go just to make a point, improving on these expert designs."-source link extremetech

Even present-day limited software can come upon some of man's finest solutions in the real world. Previously, evolutionary algorithms have been used to produce astounding results, but these new AI algorithms are likely to be more general and efficient in nature.

cool vid



Video Source: Center for Brains, Minds and Machines (CBMM)


Other cool YouTube-related vids:



Tuesday, October 7, 2014

a response in a thread on kurzweilai forums

[quote]Most people imagine that shortly after the first human level AGI is created an SAI is destined to emerge shortly there after, one that is absolutely unfathomable to our puny little meat brains. When the truth is our brains could put Watson to utter shame (assuming they had a way to assimilate the data rapidly) if they were redesigned and optimized for a such narrow task..[/quote]
While difficult, comparative genetics can show the modifications from simpler mammals like rodents, to primates, to humans, and even among humans the differences bestowing greater intellect. It remains an open question how straightforward the modifications are: is there a clear path, or are they highly specialized and totally unique with each leap in capacity? If it turns out that, in general, there has been a straightforward route of modifications to the neural wiring and computing algorithms employed in animals as their intelligence increases, then it will be possible to extrapolate the design choices forward to their theoretical limits.
[quote]If you speed up a monkey 100 fold, that monkey will just do monkey things 100x faster. I don't see why it should be all that different for humans.
[/quote]
Given a 100x speed-up, a human could master countless fields of science; with the plasticity of a child he could easily attain native-level performance in countless languages, and with enhanced memory he could have encyclopedic knowledge of all he masters. A monkey is a sublinguistic entity and thus limited in what it can endeavour, discover, and accomplish; a human is not bound by such limitations.
[quote]Trying to turn a digital computer into a brain, though, that is an entirely different prospect. I would compare it to trying to modify a candle through step by step alterations (each stage of which functions well enough to be useful) until you have an electric light bulb. One can obviously make trivial similarities between candles and light bulbs (both provide illumination) just as one can with computers and brains (both process information) but the differences in design principles are so profound there really can be no evolution of one into the other.[/quote]
The difference is that it is assumed that what the brain is performing is computation. If it is, then just as flight happens with planes without turning them into birds, a different architecture that is universal can also perform the same computation, because algorithms are substrate-independent. Of course, if it is doing some spooky or magical thing that is not computation, then it can only be approximated.
That is where you get into the two camps: those who say physics is computable, including quantum physics, and that not even quantum computers are more powerful than traditional Turing machines, albeit faster at certain tasks; and those who say physics is not computable. Even in the case of physics not being computable, it is said that it is not physically possible to build a computer that surpasses a Turing-class computer, so either the brain would have to not be a computer at all, or it would need to be some manner of hypercomputer, or such claims would have to be flawed.

[quote]The difference between the the most idiotic of our species and our most accomplished has little if anything to do with the speed with which the brain "computes". Instead it has everything to do with the integrated models that are physically instanced within each particular brain.[/quote]
Both white matter quality and grey matter volume have been linked to IQ. White matter increases speed, but within a simulation it is possible for communication to appear instantaneous to the simulated entity. As for increased grey matter: increasing the number of elements is also trivial, and in a few decades we may have countless exabytes, allowing for a countless number of elements.
There are other variations, such as the size of various divisions within the brain; in sensory areas, those with less allotment are more susceptible to illusions in the corresponding modality. There are also countless variations in many parts of the system at the molecular level, making some more prone to hostility, with worse or better memory, more faithful, easier to get addicted, etc.