Viewing Autonomic Computing through the Lens of Embodied Artificial Intelligence
A topic of central interest to the autonomic computing community is how to manage computing and other self-adaptive systems in dynamic environments. In years past, I advocated an approach in which high-level goals are expressed as utility functions, and optimization and/or feedback-control techniques are used in conjunction with system models to adjust resources and tuning parameters so as to maximize utility. After outlining and illustrating the general concepts behind this idea, I will point out a significant flaw that the autonomic computing community (including myself) has historically ignored.
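To make the idea concrete, here is a minimal sketch of utility-driven resource allocation in that spirit. The system model, utility function, cost parameter, and all numbers below are hypothetical illustrations chosen for this sketch, not details taken from the talk.

```python
def response_time(servers: int, demand: float) -> float:
    """Toy system model: queueing-style response time for a given allocation."""
    capacity = servers * 100.0          # requests/sec each server can handle
    if demand >= capacity:
        return float("inf")             # saturated: unbounded response time
    return 1.0 / (capacity - demand)    # seconds

def utility(rt: float, sla: float = 0.05) -> float:
    """Toy utility function: full utility when the SLA is met, decaying beyond it."""
    return 1.0 if rt <= sla else max(0.0, 1.0 - (rt - sla) / sla)

def best_allocation(demand: float, max_servers: int = 10) -> int:
    """Optimization step: choose the allocation that maximizes utility net of a
    per-server cost -- the knob an autonomic controller would turn."""
    cost_per_server = 0.02
    return max(
        range(1, max_servers + 1),
        key=lambda n: utility(response_time(n, demand)) - cost_per_server * n,
    )

if __name__ == "__main__":
    for demand in (150.0, 450.0, 850.0):
        n = best_allocation(demand)
        print(f"demand={demand:6.1f} req/s -> {n} servers, "
              f"rt={response_time(n, demand) * 1000:.1f} ms")
```

In a deployed system, the grid search would be replaced by an online optimizer or feedback controller, and the system model would be estimated from measurements rather than assumed.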
Then, I will introduce my recent work on embodied Artificial Intelligence agents, which are somewhat like Alexa, Siri, and other voice-driven assistants, with two major differences. First, they are designed to operate in the business realm, where they assist humans with data analysis and decision making. Second, they interact with people multi-modally, using speech in conjunction with non-verbal modalities like pointing and facial expression.
In the final third of my talk, I will explain why I believe embodied AI agents can solve the fundamental flaw of utility-based autonomic computing and speculate about how autonomic computing can contribute to the growth of embodied AI, especially given recent AI advances such as ChatGPT.
Slides: Keynote_Jeff_ACSOS2023.pdf (9.50 MiB)
Tue 26 Sep (displayed time zone: Eastern Time, US & Canada)
09:30 - 11:00 | Keynote (90 min): Viewing Autonomic Computing through the Lens of Embodied Artificial Intelligence | Jeffrey Kephart, IBM Thomas J Watson Research Center | Main Track