Monday, July 25, 2016

cool vids

Comment on the above: I find it likely that the ultimate rules are deterministic. Uncaused, truly random events don't seem sensible to me.


Comment on the above: while Wolfram seems to say that an AGI agent might not have its own self-produced goals, that we'd give it its goals, I don't necessarily agree. My hypothesis is that the simplest designs for sufficiently complex artificial agents will tend to acquire many, if not most, of their goals by learning from the behavior and goals of the agents they are exposed to and can conceive of: they build internal models of other agents and use those models to guide their own behavior. At least I think that's the easier path; explicitly programming complex, specific goals, whether by genes or by software, is likely harder than letting goal-directed behavior self-organize from environmental exposure. I also find it hard to believe that an agent can be truly generally intelligent without being able to evaluate and compare goals by various means, and even to compare the means themselves, eventually choosing among those goals.
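To make the "learn goals by modeling other agents" idea concrete, here is a toy sketch, purely my own illustration and nothing Wolfram describes: a one-dimensional world, some noisy goal-directed walkers whose goals are hidden, and an observer that infers their goals only from where their walks end up, then adopts the most commonly inferred goal as its own. All names (WORLD_SIZE, simulate_other_agent, etc.) and the setup are invented assumptions for the sketch.

```python
# Toy sketch: an observer agent infers other agents' goals from behavior alone,
# then adopts the most common inferred goal as its own. Purely illustrative.

from collections import Counter
import random

WORLD_SIZE = 10  # positions 0..9 on a line


def simulate_other_agent(goal, start=0, steps=30):
    """An 'other agent' that noisily walks toward its hidden goal position."""
    pos, path = start, [start]
    for _ in range(steps):
        if random.random() < 0.8:  # mostly goal-directed steps
            pos += (goal > pos) - (goal < pos)
        else:                      # occasional random step, clamped to the world
            pos = max(0, min(WORLD_SIZE - 1, pos + random.choice([-1, 1])))
        path.append(pos)
    return path


def infer_goal(trajectory):
    """Observer's crude model of another agent: its goal is where it settled."""
    return trajectory[-1]


def learn_goals_from_others(hidden_goals):
    """Build an internal model of the observed agents: a tally of inferred goals."""
    inferred = Counter()
    for g in hidden_goals:
        inferred[infer_goal(simulate_other_agent(g))] += 1
    return inferred


def act_on_learned_goal(inferred, start=0, steps=30):
    """Adopt the most commonly inferred goal and walk straight toward it."""
    adopted_goal = inferred.most_common(1)[0][0]
    pos = start
    for _ in range(steps):
        pos += (adopted_goal > pos) - (adopted_goal < pos)
    return adopted_goal, pos


if __name__ == "__main__":
    random.seed(0)
    hidden_goals = [7, 7, 7, 2, 9]   # goals of the observed agents, unknown to the observer
    model_of_others = learn_goals_from_others(hidden_goals)
    goal, final_pos = act_on_learned_goal(model_of_others)
    print("inferred goal tallies:", dict(model_of_others))
    print("adopted goal:", goal, "final position:", final_pos)
```

The point of the sketch is just that the observer is never handed a goal explicitly; everything it ends up pursuing comes from watching what the other agents do, which is the sense in which I mean goals can emerge from environmental exposure rather than from detailed programming.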
