AAAI Spring 2014 Symposium - March 24-26, 2014 / Stanford University, Palo Alto, California, USA
Implementing Selves with Safe Motivational Systems & Self-Improvement
Artificial Intelligence (AI) and Artificial General Intelligence (AGI) research most often focuses on tools for collecting knowledge, solving problems, or achieving goals rather than on self-reflecting entities. This implementation-oriented symposium will instead focus on guided self-creation and self-improvement – particularly as a method of achieving human-level intelligence in machines through iterative improvement ("seed AI").
In I Am a Strange Loop, Douglas Hofstadter argues that the key to understanding selves is the "strange loop", a complex feedback network inhabiting our brains and, arguably, constituting our minds. Further, humans have both unconscious and conscious minds (Daniel Kahneman's System 1 and System 2, respectively), attention, emotions, partial self-reflection, a moral sense, and many other aspects that are rarely addressed – yet these seem critical for the creation of a safe, self-sufficient autonomous system.
This symposium will focus on the integration of these components into a coherent, self-improving self. Ideally, the ultimate end product will be a successful entity with extensive self-knowledge and a safe, moral/ethical motivational system – one that applies context- and ecologically sensitive egoistic/altruistic discrimination to promote cooperation with, and contribution to, the community through iterative improvement of its self, its tools, and its theoretical constructs of relational dynamics and of resource utilization, allocation, and sharing.
This symposium is proposed as an implementation-oriented exploration of self, to include:
- integrative architectures with explicit motivations
- implementing "self" as operating system with "plug-ins"
- implementing "self" as society of mind (Marvin Minsky)
- implementing "self" as economy of idiots (Eric Baum)
- implementing "self" as global workspace/consciousness (Baars/Franklin)
- implementing "self" as authorship (Dennett, Wegner)
- "safe" and/or moral/ethical motivational systems
  - value sets vs. goal hierarchies
  - context/ecological sensitivity
  - "safe"/moral values and goal content
  - evaluation schemes
- self-modeling & self-knowledge
  - goal-based self-evaluation for self-improvement
- attention & emotions
  - as interoceptive responses to environmental stimuli
  - as knowledge/rules of thumb/"actionable qualia"
  - as helpful & unhelpful biases (and how to intelligently improve them)
  - as evaluation & enforcement mechanisms
- integrating different knowledge and action representation schemes
  - coordination & translation between various schemes
  - analyzing trade-offs & knowing when to switch between schemes
- via automated tool/method incorporation & theory-inductive heuristics
  - goal-based tool/method discovery
  - tool/method integration and evaluation
  - tool-to-theory heuristics
- via learning (knowledge incorporation)
  - (re-)building schemes and models
  - discovery (refactoring, modularization, encapsulation and scale-invariance)
  - theory-to-tool heuristics
  - automated (re-)construction of probabilistic graphical models
  - Meta-Optimizing Semantic Evolutionary Search (MOSES)
  - Frequent & Interesting Sub-HyperGRAph Mining (FISHGRAM)
While solutions need to be grounded and extensible, the symposium prefers approaches that start with some initial structure over tabula rasa, lowest-level bootstrapping approaches or first-causes explanations (except where these are fully extended to initial structures and/or used to justify such structures). Also, while autopoiesis and "functional consciousness" are obviously key topics, we would prefer that phenomenal-consciousness arguments be considered off-topic.
Questions can be emailed to Mark Waser (MWaser@DigitalWisdomInstitute.org).