10. My longstanding commitment is to certain cognitive models for emergent and adaptive computation, models that are inherently both distributed and parallel
11.
12. Mike also responded with interest to my suggestion that my own recent move into the world of GPGPU research could result in a joint effort to produce a framework similar to SPEEDES for that rapidly growing technique
13. *(caveat): I was unable to follow up with Mike very quickly due to administrative obligations of our new certificate program; John stepped in to facilitate, so here we are today (thanks!)
14.
15. ...which generated unexpected excitement for me, as I have been ruminating on both distributed and parallel possibilities for that architecture; indeed, I have been developing a closer-to-the-silicon version of Starcat in support of multi-agent simulations leveraging the hundreds of parallel cores on the Nvidia Tesla board (GPU)
16. ...but which may have as much or more chance of success in a mature CPU-based framework such as SPEEDES.
17. The ultimate goal would be to produce self-organizing behavior in a computational system that adapts to a changing environment and learns new behavior in a manner more cognitively plausible than much of the body of work in traditional machine learning (e.g., Hawkins)
18.
19. Developing an analog of SPEEDES for leveraging GPGPU hardware is a completely independent line of research we could begin as soon as feasible; there are likely sources of support for something of this nature, or at least a clear need (and thus a client base) for such a framework (I resisted the urge to invent a cute cousin acronym for it)
20. A subsequent slide will present some very preliminary thoughts regarding the feasibility of such a “port” of the SPEEDES product to this different hardware configuration
21.
22. It is FULL of possibilities of interest to everyone from the DOD to, for example, the Neurosciences Institute here in SD
23. I've been wanting to bring the two ideas together for a couple of years now
24. I'd love to get involved in such a collaboration! Publications guaranteed. Follow-on funding and applications too...
25. Again, following a bit later are some slides to motivate some technical discussion about adaptive computation and the Starcat framework
30. Internal stigmergy occurs in a complex system when changes in internal dynamics persist and have influence on future action.
31. When that persistence is closely coupled with particular interactions with the environment, the changes can be called emergent representations of phenomena in the environment.
32.
33. Some have gone so far as to eschew representation completely (Brooks, 1991).
34. But a representation that emerges as a side-effect of the interaction of very many agents with the environment and each other, guided by long-trained system dynamics, is another matter.
36. Such a representation might also be leveraged externally, as a usable “product” of the system’s processes.
37.
38. So the system’s own “trail” of behavior plays a role in guiding the system’s future actions.
39. Coupled with the fact that global behavior arises from the interaction of very many locally acting agents, we have the possibility of adaptation of behavior.
40. It requires only that we define the possible microbehaviors and the concepts that motivate them.
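The stigmergy idea above can be sketched in a few lines. This is an illustrative toy, not Starcat code: the class and names are assumed for the example. Each action reinforces a persistent internal trace, traces decay over time, and future action selection is biased by the trail of past behavior rather than by a central controller.

```python
import random

class StigmergicAgent:
    """Toy agent whose action selection is biased by its own behavioral trail."""

    def __init__(self, microbehaviors, decay=0.9):
        self.microbehaviors = microbehaviors              # possible local actions
        self.traces = {m: 1.0 for m in microbehaviors}    # persistent internal marks
        self.decay = decay

    def act(self, rng):
        # Roulette-wheel selection weighted by trace strength.
        total = sum(self.traces.values())
        pick = rng.uniform(0, total)
        cumulative = 0.0
        for m, strength in self.traces.items():
            cumulative += strength
            if pick <= cumulative:
                chosen = m
                break
        # All traces decay; the chosen action reinforces its own trace,
        # so only behavior coupled to ongoing activity persists.
        for m in self.traces:
            self.traces[m] *= self.decay
        self.traces[chosen] += 1.0
        return chosen

agent = StigmergicAgent(["explore", "flee", "eat"])
rng = random.Random(0)
history = [agent.act(rng) for _ in range(20)]
```

The trail (`traces`) here plays the role the notes describe: changes in internal dynamics that persist and influence future action.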
69. Initial results here; a complete redevelopment using a refined Starcat architecture and extending the Copycat capabilities is underway
70.
71. We want to explore the interactions between multiple instantiations of the framework
72. We have an ALife application in which each individual agent has a slipnet and a coderack regulating its behavior, and all of the agents interact in a shared workspace
74. Fear, hunger, and contentedness nodes rise and fall in activation in response to the environment, driving individual agents’ behaviors (exploring, fleeing, attacking, eating)
75. Hunter and prey agents (both lazy and active varieties) have slipnets with different biases and different generated codelets
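A minimal sketch of the activation dynamics described above, with names and update rules assumed for illustration (they are not taken from the actual ALife application): fear, hunger, and contentedness activations rise and fall with sensed conditions, and the most active node drives the agent's current behavior.

```python
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

class PreySlipnet:
    """Toy slipnet: three drive nodes whose activations select a behavior."""

    def __init__(self):
        self.activation = {"fear": 0.0, "hunger": 0.5, "contentedness": 0.5}

    def sense(self, predator_near, food_near):
        a = self.activation
        # Illustrative update rules: danger pumps fear, food relieves hunger,
        # and contentedness is high only when neither drive is pressing.
        a["fear"] = clamp(a["fear"] + (0.6 if predator_near else -0.1))
        a["hunger"] = clamp(a["hunger"] + (-0.3 if food_near else 0.05))
        a["contentedness"] = clamp(1.0 - max(a["fear"], a["hunger"]))

    def behavior(self):
        # The most active node wins.
        node = max(self.activation, key=self.activation.get)
        return {"fear": "flee", "hunger": "eat", "contentedness": "explore"}[node]

prey = PreySlipnet()
prey.sense(predator_near=True, food_near=False)
action = prey.behavior()  # fear now dominates, so the prey flees
```

The hunter/prey and lazy/active varieties mentioned above would differ only in these biases and in which codelets they generate.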
83. The tuning space is huge, and truly emergent effects are not clearly seen… an issue to consider
84.
85. Madcat was the first system to embody the emergent representation and fluid concept network of Copycat in ongoing interaction with a problem domain
104. Populations of 50-100 Botcat agents, initially randomly configured, are evolved over 1500 generations.
105. This ability to train the network with a GA has been a goal of the project since the beginning.
106. We also want to provide an evolutionary process over the development of codelets.
107. Together these might bring about something like learning, provided the GA is always running, similar to the classifier system.
108. Botcat experimented with different configurations of the framework and with new types of codelets (fuzzy and composite).
109. Demonstrated two interesting, truly “emergent” phenomena: ramming and raking, which arose to cope with the underlying Robocode framework’s evaluator patterns
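The GA training loop described above (populations of 50–100 agents evolved over many generations) can be sketched as follows. The genome encoding, fitness function, and parameters here are illustrative stand-ins, not the actual Botcat encoding, and the run is shortened from the 1500 generations cited on the slide.

```python
import random

GENOME_LEN = 8     # stand-in for slipnet biases / codelet weights
POP_SIZE = 50      # slides cite populations of 50-100
GENERATIONS = 100  # slides cite 1500; shortened for this sketch

def fitness(genome):
    # Placeholder objective standing in for battle performance against
    # Robocode's evaluator: reward genomes whose weights sum high.
    return sum(genome)

def evolve(rng):
    # Initially randomly configured population, as on the slide.
    pop = [[rng.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP_SIZE // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, GENOME_LEN)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(GENOME_LEN)] = rng.random()  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(random.Random(0))
```

Keeping such a loop running continuously, as the notes suggest, is what would make this resemble a classifier system's ongoing adaptation rather than one-shot offline training.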
110.
111. We want this network to self-organize its configuration (who is talking to whom, who is monitoring the signal, how to collaborate for increased confidence, etc.)
123. Autonomy arises because the system simply produces ongoing behavior in response to a changing environment without external intervention
Editor’s Notes
Nothing to add here
Caveat that this definition of Starcat is the ultimate goal; it takes development of applications along the way to discover just how these long-term goals can be realized. Presented here are some of those intermediate application efforts
Anything here?...
Comment on ordering of components (it doesn’t have to be in this cycle; swimming in a sea; but no applications so far require another ordering, even though we can imagine ones that do) …
Type constrained access? Experiments in workspace access methods? What else from grads?
Preceded the current framework but helped to develop the ideas in it… How does autopoiesis figure in here?
Remember here and everywhere to narrate what the slipnet looks like, what the codelets are, and what the structures are… That is the set of features that occur across applications and that helps speak to the primary question: HOW DOES THE STARCAT FRAMEWORK REALIZE THE POSSIBILITY OF A SOFTWARE SYSTEM BUILT ON PRINCIPLES OF COMPLEX SYSTEMS!!
One point: we can talk about the resulting predator-prey interactions, but what are the larger lessons here?: Can we build multi-instance systems, yes. What changes in design are required (thread cycling/synchronizing) … ?
Would we want to talk about structural coupling? See Crossing the barrier
First, the spatial patterns are built by codelets. Then temporal patterns in those structures are tended to. These give rise to structures in the mapnet indicating both what’s there and how much is known (coherence)
Removed bullet: Leads to next-level problem of data fusion; also expect to address this using Starcat. Mention LM. Use language about events like: a bright flash of light accompanied by a ground-carried acoustic signal
Genericity: classifier systems have some of it, but then there are many different versions… Principles: many interacting agents. New issues: think SONUS. Mention GUI with “partially possible”