Filene Auditorium, Moore Hall
Conference notes by Meg Houston Maker
Survival as a Digital Ghost
A "digital ghost" is an instantiation of you. [MHM: A machine doppelganger.] The name comes from William Gibson. It's a personalized artificial intelligence, and it could maybe pass a personalized Turing test, convincing your friends and loved ones the machine was you. So it would have to know your history, beliefs, etc.
People now have massive amounts of personal information; there is a digital vapor trail you leave behind as you go through the world: videos, images, bookmarks, etc. You go through life generating data, and much of it is never saved or archived. But if you were to start archiving it, you'd get a big temporal database. A little work is being done on this: there's an ACM SIG for it, and Microsoft has its MyLifeBits project.
Think of this collection as a digital diary. Of course you would have to tag all that information -- that's my wife, that's my dog. Tagging and indexing aren't there yet -- they're too cumbersome. So instead, you might want a personalized AI that could look at the diary and interpret its data as if it were you. The AI tool would have to be a model of your psychology, one that interprets the diary the way YOU would. Some recommendation engines (Amazon.com) are pretty promising and pretty good at capturing preferences. Mate selection on online dating sites could certainly use this technology. E.g. the website Hot or Not, where you rate pictures of people; those ratings could be fed to dating sites.
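The preference capture those recommendation engines perform can be sketched very simply: infer what a person would like by comparing their rating profile to others'. Everything below -- the profiles, items, and the `recommend` helper -- is invented for illustration; it is a minimal collaborative-filtering sketch, not any particular engine's method.

```python
from collections import defaultdict
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two {item: rating} profiles."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(target, others, top_n=2):
    """Score items the target hasn't rated by similarity-weighted peer ratings."""
    scores = defaultdict(float)
    for other in others:
        sim = cosine(target, other)
        for item, rating in other.items():
            if item not in target:
                scores[item] += sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical profiles: item -> rating on a 1-5 scale.
me     = {"jazz": 5, "opera": 2, "hiking": 4}
peer_a = {"jazz": 4, "opera": 1, "hiking": 5, "sailing": 5}
peer_b = {"jazz": 1, "opera": 5, "hiking": 1, "museums": 4}

print(recommend(me, [peer_a, peer_b]))  # sailing ranks above museums
```

The same machinery, fed diary data instead of product ratings, is one crude way a digital ghost might start to mimic its owner's tastes.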
So, how do you get this? One recommendation is a questionnaire with 20,000 questions you could answer to construct a profile. No way! Too onerous. But maybe we could analyze your telephone conversations, GPS data about where you are at any given time. Or you could do simple query-response about where you were or why you did a certain action. Descriptive and explanatory, in other words. Brain implants are another possibility, or non-invasive brain scanning approaches, to create a highly-personalized analog of your own architecture.
Digital ghosts should also simulate your body. These could be largely generic models, tuned with personal history and preferences. Faces and voices are finely tuned personal features, and are interesting to others, so it would be essential to incorporate them. The ghost would have speech synthesizers that could replicate your voice. Your medical data could be incorporated. We could then build a model that looks like you in addition to having your history.
How might we interact with something like this? Maybe initially it would be chat-based, but in a more advanced state it would be an animated VR simulation. It raises a lot of privacy questions, and issues of restricted access.
C. T. A. Schmidt, Le Mans and Sorbonne
Did You Learn That "Contraption" Alone with Your Little Sister?
Schmidt's research areas: the dialogical aspects of cognition and communication, and the context for learning -- the physical environment including other people. Key question that interests him: how can the machine learn if it can't communicate?
Robotics-embedded AI should be, or maybe must be, dialogical. In order for advanced humanoid robotics to be fully accepted by others, they will need the proper identity features or they will remain at the fringe of human communities.
Social roles and human institutional involvement seem to have been left out in all forms of AI. To make robots dialogical, we need to work on the pragmatic aspects of communication.
Michael Anderson, U. of Hartford
Susan Leigh Anderson, UCONN
The Status of Machine Ethics: A Report from the AAAI Symposium
Michael is a computer scientist, and Susan is a philosopher. They're here presenting summaries of the papers presented recently at the AAAI Symposium on machine ethics.
The time has come for adding an ethical dimension to machines -- CareBots, unmanned aircraft, defense uses, etc. This will help ensure their actions, especially in self-evolving or learning systems, remain ethical. (Note the contrast with computer ethics, which concerns hacking and the like.)
The Nature of Machine Ethics:
1) Normative computer agents: computers are normative because they're designed with a purpose in mind, but not necessarily an ethical purpose. Their performance is assessed according to how well they do what we've told them to do.
2) Ethical impact agents: these not only perform certain tasks, but have an ethical impact on the world. E.g. a robot jockey that guides camels in races, replacing the enslaved young boys otherwise forced to do this.
3) Implicit ethical agents: these are machines that are programmed to behave ethically and are designed to perform ethically. E.g. ATMs that are programmed not to cheat the bank or its customers, and automatic pilots entrusted with human safety.
4) Explicit ethical agents: machines that are able to calculate the best action in ethical dilemmas.
5) Autonomous ethical agents: these can calculate the best action in an ethical dilemma and function independently. E.g. a robo-soldier, sent into battle, which makes ethical decisions that guide its own behavior.
6) Full ethical agents: this term describes human ethical decision makers. Are intentionality, consciousness, and free will essential to genuine ethical decision making? Would it be sufficient for machines to have "as if" versions of these qualities? Could a machine pass a "Moral Turing Test" for understanding ethics?
If humans create laws that allow them to mistreat entities that resemble human beings, it increases the chances that they will find it easier to mistreat human beings.
Developing an explicit ethical agent is a compelling goal of AI. Many approaches are being pursued. Democracy-dependent algorithms have been created, wherein agents could look up ethical information on the web, giving the machine a kind of "average" or "averaged" ethics. This is probably not good enough. Other methods use neural nets, or offer the human user a case-based reasoning engine with natural language inputs and outputs. Other researchers recommend using deontic and default logics to iteratively construct a theory of ethics. The Andersons have developed a system that extrapolates from experts' intuitions about particular ethical dilemmas.
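The last approach -- generalizing a decision principle from experts' judgments on particular dilemmas -- can be sketched with a toy learner. The prima facie duties, case data, and perceptron-style update below are all invented for illustration; the Andersons' actual system is considerably more sophisticated.

```python
# Each action is scored on three hypothetical prima facie duties
# (say autonomy, beneficence, non-maleficence), with values in [-2, 2].
cases = [
    # (duty profile of action A, duty profile of action B, expert-preferred action)
    ((2, -1, 0), (-2, 1, 1), "A"),
    ((-1, 2, -2), (1, -1, 2), "B"),
    ((0, 1, 1), (0, -2, 2), "A"),
]

def learn_weights(cases, epochs=100, lr=0.1):
    """Perceptron-style learning of duty weights so that the expert-preferred
    action's weighted duty total exceeds the alternative's."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for a, b, label in cases:
            diff = [x - y for x, y in zip(a, b)]          # A minus B
            margin = sum(wi * di for wi, di in zip(w, diff))
            want = 1 if label == "A" else -1
            if margin * want <= 0:                        # misclassified: update
                w = [wi + lr * want * di for wi, di in zip(w, diff)]
    return w

def choose(w, a, b):
    """Apply the learned principle to a new dilemma between actions a and b."""
    sa = sum(wi * xi for wi, xi in zip(w, a))
    sb = sum(wi * xi for wi, xi in zip(w, b))
    return "A" if sa > sb else "B"

w = learn_weights(cases)
print([choose(w, a, b) for a, b, _ in cases])  # reproduces the experts' verdicts
```

The learned weights amount to an extracted principle ("how much each duty counts"), which is the general shape of extrapolating from expert intuitions.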
Marcello Guarini, University of Windsor
Computation, Coherence, and Ethical Reasoning
Thagard-Verbeurgt theory of coherence as constraint satisfaction (1998): hypotheses, evidence statements, and positive and negative constraints. Used in moral reasoning problems. The system can be encoded in an associative neural network.
Thagard identifies four types of coherence contributing to ethical reasoning: explanatory, deductive, deliberative, and analogical. Ethical reasoning is a "multi-coherence" problem. The idea is that it can provide prescriptive or normative recommendations.
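The constraint-satisfaction picture can be made concrete with a tiny brute-force sketch: partition the elements into accepted and rejected so as to maximize the total weight of satisfied constraints. The elements and weights here are invented, and exhaustive search blows up exponentially with the number of elements -- one reason the brute-force picture is psychologically implausible.

```python
from itertools import product

elements = ["h1", "h2", "e1", "e2"]  # hypothetical hypotheses and evidence
# (element, element, weight): positive weight = positive constraint (satisfied
# when both get the same status); negative = negative constraint (satisfied
# when their statuses differ).
constraints = [
    ("h1", "e1", 1.0),
    ("h2", "e2", 1.0),
    ("h1", "h2", -2.0),   # rival hypotheses: accept at most one
    ("e1", "e2", 0.5),
]

def coherence(assignment):
    """Total weight of satisfied constraints under an accept/reject assignment."""
    total = 0.0
    for a, b, w in constraints:
        same = assignment[a] == assignment[b]
        if (w > 0 and same) or (w < 0 and not same):
            total += abs(w)
    return total

# Exhaustive search over all 2^n accept/reject partitions.
best = max(
    (dict(zip(elements, values)) for values in product([True, False], repeat=len(elements))),
    key=coherence,
)
print(best, coherence(best))
```

In this toy problem the maximizing partition accepts one rival hypothesis along with its evidence and rejects the other, which is the qualitative behavior the theory is after.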
People obviously don't arrive at ethical decisions through brute force computation. We don't work out coherence values using internal computation. And we certainly don't do so consciously.
The roll-up: Guarini is critical of the Thagard-Verbeurgt approach and of the claim that coherence is required for moral reasoning in machines. Read his paper for more.