Artificial Intelligence FAQ:1/6 General Questions & Answers [Monthly posting]


From: [email protected] (Ric Crabbe and Amit Dubey)
Newsgroups: comp.ai
Subject: Artificial Intelligence FAQ:1/6 General Questions & Answers [Monthly posting]
Date: Wed, 5 May 2004 19:13:57 +0000 (UTC)
Sender: [email protected]
Message-ID: <[email protected]>
Reply-To: [email protected], [email protected]
Summary: Frequently asked questions about AI

Archive-name: ai-faq/general/part1
Posting-Frequency: monthly
Last-Modified: 1-Apr-04 rc by Ric Crabbe
Version: 2.1
Maintainer: Ric Crabbe <[email protected]> and Amit Dubey <[email protected]>
URL: http://www.faqs.org/faqs/ai-faq/general
Size: 46550 bytes, 1051 lines

;;; ****************************************************************
;;; Answers to Questions about Artificial Intelligence *************
;;; ****************************************************************
;;; Maintained by: Amit Dubey <[email protected]>
;;;		   Ric Crabbe <[email protected]>
;;;                           <http://www.cs.usna.edu/~crabbe>
;;; Written by Ric Crabbe, Amit Dubey, and Mark Kantrowitz
;;; ai_1.faq 

If you think of questions that are appropriate for this FAQ, or would
like to improve an answer, please send email to the maintainers.

*** Copyright:

Some portions of this FAQ are Copyright (c) 1992-94 by Mark
Kantrowitz.  The rest are Copyright (c) 1999,2000-04 by Ric Crabbe and Amit
Dubey 

*** Disclaimer:

       This article is provided as is without any express or implied
       warranties.  While every effort has been made to ensure the
       accuracy of the information contained in this article, the
       author/maintainer/contributors assume(s) no responsibility for
       errors or omissions, or for damages resulting from the use of
       the information contained herein. 

*** What's new?
;;; 01-Apr-04 rc	Replaced "game of life" question with
			information theory.  Other assorted fixes.
;;; 29-Jun-03 rc	Have begun a section on commercial AI software.
			Added question on "tell me all about AI"
;;; 29-May-03 rc	Added question on A*

*** Topics Covered:

Part 1:

  [1-0]  What is the purpose of this newsgroup?
  [1-1]  I have a Question not covered in the FAQ...
  [1-2]	 What is AI?
  [1-3]	 What's the difference between strong AI and weak AI?
  [1-4]  I have little/no background in CompSci/AI, can you tell
	 me in detail how AI works?
  [1-5]	 I'm a programmer interested in AI.  Where do I start?
  [1-6]  What's an agent?
  [1-7]  History of AI.
  [1-8]	 What has AI accomplished?
  [1-9]	 What are the branches of AI?
  [1-10]	 What are good programming languages for AI?
  [1-11]  What's the difference between "classical" AI and "statistical" AI?
  [1-12]  I have the idea for an AI Project that will solve all of AI...
  [1-13]  Glossary of AI terms.
  [1-14]  In A*, why does the heuristic have to always underestimate?
  [1-15]  I'm considering studying AI. What information is there for me? 
  [1-16]  What are good graduate schools for AI?
  [1-17]  No really, just give me a ranking of the best
	  graduate schools for AI!
  [1-18]  What are the ratings of the various AI journals?
  [1-19]  Where can I find conference information?
  [1-20]  How can I get the email address for Joe or Jill Researcher?
  [1-21]  What does it mean to say a game is 'solved'?  Is tic-tac-toe
	  solved? How about X?
  [1-22]  What's this Information Theory thing?
  [1-23]  What AI competitions exist?
  [1-24]  Open source software and AI.
  [1-25]  AI Job Postings
  [1-26]  Future Directions of AI
  [1-27]  Where are the FAQs for...neural nets? natural language?
	  artificial life? fuzzy logic? genetic algorithms?
	  philosophy? Lisp? Prolog? robotics?

Part 2 (AI-related News, Newsgroups and Mailing Lists):

  -  List of all known AI-related newsgroups, newsgroup archives, mailing
     lists, and electronic bulletin board systems.

     http://www.faqs.org/faqs/ai-faq/general/part2/preamble.html

Part 3 (AI-related Associations and Journals):

  -  List of AI-related associations and journals, organized by subfield.

     http://www.faqs.org/faqs/ai-faq/general/part3/preamble.html

Part 4 (Bibliography):

  -  Bibliography of introductory texts, overviews and references
  -  Addresses and phone numbers for major AI publishers
  -  Finding conference proceedings
  -  Finding PhD dissertations

     http://www.faqs.org/faqs/ai-faq/general/part4/preamble.html

Part 5 (FTP and WWW Resources and Repositories):

  -  Information on Web resources and software repositories for AI.
  -  Information on Technical Papers in AI
  -  Web journals
  -  Part 5 concentrates mostly on documents and collections of links
     to other AI resources

     http://www.faqs.org/faqs/ai-faq/general/part5/preamble.html

Part 6 (AI Open-Source Software by Sub-field)
  - An A-Z (well, A-T anyway) of open-source (or at least free)
    software relating to AI.
  - A nascent list of commercial AI software.

    http://www.faqs.org/faqs/ai-faq/general/part6/preamble.html
  

Search for [#] to get to question number # quickly.

*** Introduction:

Certain questions and topics come up frequently in the various network
discussion groups devoted to and related to Artificial Intelligence
(AI).  This file/article is an attempt to gather these questions and
their answers into a convenient reference for AI researchers. It is
posted on a monthly basis. The hope is that this will cut down on the
user time and network bandwidth used to post, read and respond to the
same questions over and over, as well as providing education by
answering questions some readers may not even have thought to ask.

The latest version of this FAQ is NO LONGER available via anonymous
FTP from:
   ftp://ftp.cs.ucla.edu/pub/AI/
as the files ai_[1-7].faq.

The canonical source is now:
   http://www.faqs.org/faqs/ai-faq/general

The FAQ postings are also archived in the periodic posting archive on

   rtfm.mit.edu:/pub/usenet/news.answers/ai-faq/general/ [18.181.0.24]

If you do not have anonymous ftp access, you can access the archive by
mail server as well.  Send an E-mail message to [email protected]
with "help" and "index" in the body on separate lines for more
information.


Subject: [1-0] What is the purpose of the newsgroup comp.ai?

Comp.ai is a moderated newsgroup whose topic is Artificial Intelligence.
It has existed since the early days of USENET (at least 10 years) and
has been a moderated newsgroup since 5th May 1999.  An introduction for
new readers, including the official charter, moderation policies and
posting guidelines, may be found at <http://www.cs.mu.oz.au/~dnk/comp.ai>.
The current moderator is David Kinny, but the actual moderation is done
largely automatically by an intelligent :-) agent (the AI-mod-bot).

The group is meant for general discussion of AI topics (but not about
those for which specialized subgroups already exist), including:

   o announcements of AI conferences, reports, books, products and jobs.
   o questions and discussion about AI theory and practice, algorithms,
     systems and applications, problems, history and future trends.
   o distribution of AI source code (preferably indirectly by weblinks)

All contributions should be of potential interest to the general AI
community, and in English plain text without attachments.  See part 2
of this FAQ for a list of other more specialized newsgroups and lists.

Every so often, somebody posts an inflammatory message, such as
   Will computers ever really think?
   AI hasn't done anything worthwhile.
These "religious" issues serve no real purpose other than to waste
bandwidth.  If you feel the urge to respond to such a post, please do
so through a private e-mail message, or post redirecting follow-ups to
comp.ai.philosophy.  We suspect this will be less of a problem now that
the group is moderated.

We've tried to minimize the overlap with the FAQ postings to the
comp.lang.lisp, comp.lang.prolog, comp.ai.neural-nets, and
comp.ai.shells newsgroups, so if you don't find what you're looking
for here, we suggest you try the FAQs for those newsgroups.  These FAQs
should be available by anonymous ftp in subdirectories of
   ftp://rtfm.mit.edu/pub/usenet/
or by sending a mail message to [email protected] with
subject "help".  http://www.faqs.org/ has a nice webified version.

Subject: [1-1] I have a Question not covered in the FAQ...

This FAQ tries to answer many introductory questions about Artificial
Intelligence, but there are many questions it cannot or does not
answer.  While the FAQ maintainers welcome email about the FAQ and AI
in general, the proper place to ask AI questions is the comp.ai
newsgroup itself - that's what it's for.  As a practical matter, the
maintainers reply to FAQ-related mail on a monthly basis, so replies
to questions are likely to be delayed.

Subject: [1-2] What is AI?

Artificial intelligence ("AI") can mean many things to many people.
Much confusion arises because the word 'intelligence' is ill-defined.
The phrase is so broad that people have found it useful to divide AI
into two classes: strong AI and weak AI.

Subject: [1-3] What's the difference between strong AI and weak AI?

Strong AI makes the bold claim that computers can be made to think on a
level (at least) equal to humans and possibly even be conscious of
themselves.  Weak AI simply states that some "thinking-like" features
can be added to computers to make them more useful tools... and this
has already started to happen (witness expert systems, drive-by-wire
cars and speech recognition software).  What do 'think' and
'thinking-like' mean?  That's a matter of much debate.

Subject: [1-4] I have little/no background in CompSci/AI, can you tell me in detail how AI works?

No.  AI is a scientific and engineering discipline depending on
sophisticated Computer Science techniques, mathematics, etc.  It is
also sub-divided into many distinct subfields.  At the International
Joint Conference on Artificial Intelligence in 2003, the program
committee divided the papers into nearly forty different topic areas.
It is not really practical to expect to understand the technical
details of AI from a USENET forum.

On the other hand, it is possible to get the general gist of the field
from several books.  If you have a computer science background, you
should investigate one of the texts listed in question [4-0].  If you
don't, then you may be interested in Raymond Kurzweil's "The Age of
Intelligent Machines".

Subject: [1-5] I'm a programmer interested in AI.  Where do I start?

There's a list of introductory AI texts in the bibliography section of
the FAQ [4-0].  Also, check out the web links in section [5-2].

[1-5a] I'm writing a game that needs AI.

It depends what the game does.  If it's a two-player board game, look
into the "Mini-max" search algorithm for games (see [4-1]).  In most
commercial games, the AI is a combination of high-level scripts and
low-level, efficiently-coded, real-time, rule-based systems.  Often,
commercial games tend to use finite state machines for computer players
(a toy example is sketched below).  Recently, discrete Markov models
have been used to simulate unpredictable human players (the
buzzword-compliant name being "fuzzy" finite state machines).

A recent popular game, "Black and White", used machine learning
techniques for the non-human controlled characters.  Basic
reinforcement learning, perceptrons and decision trees were all parts
of the learning system.  Is this the beginning of academic AI in video
games?

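As a toy illustration of the finite state machines mentioned above,
here is a minimal sketch in Python of a guard character that patrols,
chases and attacks.  The states, events and transition table are
invented for this example; a real game would generate the events from
its sensing and scripting code.

   # A toy finite state machine for a computer-controlled guard: the
   # character is always in exactly one state, and game events move it
   # between states via a fixed transition table.  States, events and
   # the table are made up for illustration.

   TRANSITIONS = {
       # (current state, event)        -> next state
       ('patrol', 'player_spotted'):      'chase',
       ('chase',  'player_in_range'):     'attack',
       ('chase',  'player_lost'):         'patrol',
       ('attack', 'player_out_of_range'): 'chase',
       ('attack', 'player_dead'):         'patrol',
   }

   class GuardAI:
       def __init__(self):
           self.state = 'patrol'

       def handle(self, event):
           # Stay in the current state if the event is irrelevant to it.
           self.state = TRANSITIONS.get((self.state, event), self.state)
           return self.state

   guard = GuardAI()
   for event in ['player_spotted', 'player_in_range', 'player_dead', 'noise']:
       print(event, '->', guard.handle(event))
   # player_spotted -> chase
   # player_in_range -> attack
   # player_dead -> patrol
   # noise -> patrol

Part of the appeal of this approach in commercial games is that the
behavior is cheap to evaluate every frame and easy to author and debug.
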
Subject: [1-6] What's an agent?

A very misused term.  Today, an agent seems to mean a stand-alone piece
of AI-ish software that scours the internet doing something
"intelligent."  Russell and Norvig define it as "anything that can be
viewed as perceiving its environment through sensors and acting upon
that environment through effectors."  Several papers I've read treat it
as 'any program that operates on behalf of a human,' similar to its use
in the phrase 'travel agent'.

Marvin Minsky has yet another definition in the book "Society of Mind."
Minsky's hypothesis is that a large number of seemingly-mindless agents
can work together in a society to create an intelligent society of
mind.  Minsky theorizes that not only will this be the basis of
computer intelligence, but it is also an explanation of how human
intelligence works.

Andrew Moore at Carnegie Mellon University once remarked that "The only
proper use of the word 'agent' is when preceded by the words 'travel',
'secret', or 'double'."

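To make the Russell and Norvig sensors/effectors definition concrete,
here is a minimal sketch in Python of an agent as a mapping from
percepts to actions, using a toy domain along the lines of the
two-square vacuum world from their textbook.  The class and function
names are ours, invented for illustration.

   # A minimal sketch of the sensors/effectors view of an agent: the
   # environment supplies percepts, the agent maps each percept to an
   # action.  Names here are illustrative, not from any library.

   class Agent:
       def act(self, percept):
           """Map a percept to an action."""
           raise NotImplementedError

   class ReflexVacuumAgent(Agent):
       """Toy reflex agent for a two-square vacuum world."""
       def act(self, percept):
           location, dirty = percept
           if dirty:
               return "Suck"
           return "Right" if location == "A" else "Left"

   def run(agent, percepts):
       return [agent.act(p) for p in percepts]

   if __name__ == "__main__":
       percepts = [("A", True), ("A", False), ("B", True), ("B", False)]
       print(run(ReflexVacuumAgent(), percepts))
       # -> ['Suck', 'Right', 'Suck', 'Left']
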
Subject: [1-7] History of AI.

The appendix to Ray Kurzweil's book "The Age of Intelligent Machines"
(MIT Press, 1990, ISBN 0-262-11121-7, $39.95) gives a timeline of the
history of AI.

   Pamela McCorduck, "Machines Who Think", Freeman, San Francisco, CA,
   1979.

   Allen Newell, "Intellectual Issues in the History of Artificial
   Intelligence", Technical Report CMU-CS-82-142, Carnegie Mellon
   University Computer Science Department, October 28, 1982.

See also:

   Charniak and McDermott's book "Introduction to Artificial
   Intelligence", Addison-Wesley, 1985, which contains a number of
   historical pointers.

   Daniel Crevier, "AI: The Tumultuous History of the Search for
   Artificial Intelligence", Basic Books, New York, 1993.

   Henry C. Mishkoff, "Understanding Artificial Intelligence", 1st
   edition, Howard W. Sams & Co., Indianapolis, IN, 1985, 258 pages,
   ISBN 0-67227-021-8, $14.95.

   Margaret A. Boden, "Artificial Intelligence and Natural Man", 2nd
   edition, Basic Books, New York, 1987, 576 pages.

The introductory material in Russell, S and Norvig, P, "Artificial
Intelligence: A Modern Approach", Prentice Hall, 1995, is also quite
good.

Subject: [1-8] What has AI accomplished?

Quite a bit, actually.  In 'Computing machinery and intelligence', Alan
Turing, one of the founders of computer science, made the claim that by
the year 2000, computers would be able to pass the Turing test at a
reasonably sophisticated level; in particular, that the average
interrogator would not be able to identify the computer correctly more
than 70 per cent of the time after a five minute conversation.  AI
hasn't quite lived up to Turing's claims, but quite a bit of progress
has been made, including:

 - Deployed speech dialog systems by firms like IBM, Dragon and
   Lernout&Hauspie
 - Financial software, which is used by banks to scan credit card
   transactions for unusual patterns that might signal fraud.  One
   piece of software is estimated to save banks $500 million annually.
 - Applications of expert systems/case-based reasoning: a computerized
   Leukemia diagnosis system did a better job checking for blood
   disorders than human experts.
 - Machine translation for Environment Canada: software developed in
   the 1970s translated natural language weather forecasts between
   English and French.  Purportedly still in use.
 - Deep Blue, the first computer to defeat a reigning human world chess
   champion in a match.
 - Physical design analysis programs, such as for buildings and
   highways.
 - Fuzzy controllers in dishwashers, etc.

Here is a cute A-Z list made by [email protected] (Lauren Vincent):

   AnswerBus (http://www.answerbus.com/)
   Babel Fish (http://babel.altavista.com/)
   Connexor (http://www.connexor.com/)
   Deep Blue (http://www.research.ibm.com/deepblue/)
   Emdros (http://emdros.org/)
   Flip Dog (http://flipdog.monster.com/)
   Gigablast (http://www.gigablast.com/)
   Hermit Crab (http://www.sil.org/computing/hermitcrab/)
   InDiGen (http://www.coli.uni-sb.de/cl/projects/indigen.html)
   Jack the Ripper (http://www.triumphpc.com/jack-the-ripper/)
   KartOO (http://www.kartoo.com/)
   Loebner Prize (http://www.loebner.net/Prizef/loebner-prize.html)
   Mamma (http://www.mamma.com/)
   NEGRA (http://www.coli.uni-sb.de/sfb378/2002-2004/projects.phtml?action=2&w=2&l=en)
   OpenFind (http://www.openfind.com/en.web.php)
   PolyWorld (http://homepage.mac.com/larryy/larryy/PolyWorld.html)
   Questia (http://www.questia.com/)
   RiniNet (http://sourceforge.net/projects/rininnlib/)
   SIGS (http://www.acm.org/sigs/)
   Turing Test (http://cogsci.ucsd.edu/~asaygin/tt/ttest.html)
   Useroo (http://useroo.businessresearchsources.com/)
   Vivisimo (http://www.vivisimo.com/)
   WordNet (http://www.cogsci.princeton.edu/~wn/)
   Xconq (http://sources.redhat.com/xconq/)
   YY (http://www.yy.com/)
   Zabaware (http://www.zabaware.com/)

One persistent 'problem' is that as soon as an AI technique truly
succeeds, in the minds of many it ceases to be AI and becomes something
like Engineering.  For example, when Deep Blue defeated Kasparov, there
were many who said Deep Blue wasn't AI, since after all it was just a
brute force parallel minimax search, despite minimax search being one
of the great early successes of AI.  Nowadays, people are still
studying all sorts of things that are currently considered the
prerequisites of intelligence, such as intuition and emotion, but you
can bet that if and when they solve some part, some will say "oh,
that's just Engineering..."

ref:
   Alan M. Turing.  Computing machinery and intelligence.  Mind,
   LIX(236):433-460, October 1950.
   (http://www.abelard.org/turpap/turpap.htm)

   Shieber, S, "Lessons from a Restricted Turing Test".  Communications
   of the Association for Computing Machinery, volume 37, number 6,
   pages 70-78, 1994.

Subject: [1-9] What are the branches of AI?

There are many; some are 'problems' and some are 'techniques'.

   Automatic Programming - The task of describing what a program should
   do and having the AI system 'write' the program.

   Bayesian Networks - A technique of structuring and inferencing with
   probabilistic information.  (Part of the "machine learning"
   problem.)

   Constraint Satisfaction - Solving NP-complete problems, using a
   variety of techniques.  (A small example is sketched at the end of
   this answer.)

   Knowledge Engineering/Representation - Turning what we know about a
   particular domain into a form in which a computer can understand it.

   Machine Learning - Programs that learn from experience or data.

   Natural Language Processing (NLP) - Processing and (perhaps)
   understanding human ("natural") language.  Also known as
   computational linguistics.

   Neural Networks (NN) - The study of programs that function in a
   manner similar to how animal brains do.

   Planning - Given a set of actions, a goal state, and a present
   state, decide which actions must be taken so that the present state
   is turned into the goal state.

   Robotics - The intersection of AI and robotics, this field tries to
   get (usually mobile) robots to act intelligently.

   Speech Recognition - Conversion of speech into text.

   Search - The finding of a path from a start state to a goal state.
   Similar to planning, yet different...

   Visual Pattern Recognition - The ability to reproduce the human
   sense of sight on a machine.

AI problems (speech recognition, NLP, vision, automatic programming,
knowledge representation, etc.) can be paired with techniques (NN,
search, Bayesian nets, production systems, etc.) to make distinctions
such as search-based NLP vs. NN NLP vs. statistical/probabilistic NLP.
Then you can combine techniques, such as using neural networks to guide
search.  And you can combine problems, such as positing that knowledge
representation and language are equivalent.  (Or you can combine AI
with problems from other domains.)

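As a small worked example of one of the branches above (Constraint
Satisfaction), here is a sketch in Python of backtracking search for a
map-coloring problem.  The map, colors and function names are chosen
for illustration only; real constraint solvers add techniques such as
constraint propagation and clever variable ordering.

   # Color a small map so that no two neighboring regions share a
   # color, by plain backtracking search.  The map (a few Australian
   # states) is a textbook favourite; the code is illustrative.

   NEIGHBORS = {
       'WA':  ['NT', 'SA'],
       'NT':  ['WA', 'SA', 'Q'],
       'SA':  ['WA', 'NT', 'Q', 'NSW', 'V'],
       'Q':   ['NT', 'SA', 'NSW'],
       'NSW': ['SA', 'Q', 'V'],
       'V':   ['SA', 'NSW'],
   }
   COLORS = ['red', 'green', 'blue']

   def consistent(region, color, assignment):
       return all(assignment.get(nbr) != color for nbr in NEIGHBORS[region])

   def backtrack(assignment):
       if len(assignment) == len(NEIGHBORS):
           return assignment
       region = next(r for r in NEIGHBORS if r not in assignment)
       for color in COLORS:
           if consistent(region, color, assignment):
               result = backtrack({**assignment, region: color})
               if result is not None:
                   return result
       return None                  # dead end: signal failure to the caller

   print(backtrack({}))
   # -> a complete assignment, e.g. WA=red, NT=green, SA=blue, ...
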
Subject: [1-10] What are good programming languages for AI?

This topic can be somewhat sensitive, so I'll probably tread on a few
toes, please forgive me.  There is no authoritative answer for this
question, as it really depends on what languages you like programming
in.  AI programs have been written in just about every language ever
created.  The most common seem to be Lisp, Prolog, C/C++, recently
Java, and even more recently, Python.

LISP - For many years, AI was done as research in universities and
laboratories, thus fast prototyping was favored over fast execution.
This is one reason why AI has favored high-level languages such as
Lisp.  This tradition means that current AI Lisp programmers can draw
on many resources from the community.  Features of the language that
are good for AI programming include: garbage collection, dynamic
typing, functions as data, uniform syntax, interactive environment, and
extensibility.  Read Paul Graham's essay, "Beating the Averages", for a
discussion of some serious advantages:
   http://www.paulgraham.com/avg.html

PROLOG - This language wins the 'cool idea' competition.  It wasn't
until the 70s that people began to realize that a set of logical
statements plus a general theorem prover could make up a program.
Prolog combines the high-level and traditional advantages of Lisp with
a built-in unifier, which is particularly useful in AI.  Prolog seems
to be good for problems in which logic is intimately involved, or whose
solutions have a succinct logical characterization.  Its major drawback
(IMHO) is that it's hard to learn.

C/C++ - The speed demon of the bunch, C/C++ is mostly used when the
program is simple and execution speed is the most important
consideration.  Statistical AI techniques such as neural networks are
common examples of this.  Backpropagation is only a couple of pages of
C/C++ code, and needs every ounce of speed that the programmer can
muster.

Java - The newcomer, Java uses several ideas from Lisp, most notably
garbage collection.  Its portability makes it desirable for just about
any application, and it has a decent set of built-in types.  Java is
still not as high-level as Lisp or Prolog, and not as fast as C, making
it best when portability is paramount.

Python - This language does not have widespread acceptance yet, but
several people have suggested to me that it might end up passing Java
soon.  Apparently the new edition of the Russell-Norvig textbook will
include Python source as well as Lisp.  According to Peter Norvig,
"Python can be seen as either a practical (better libraries) version of
Scheme, or as a cleaned-up (no $@&%) version of Perl."  For more
information, especially on how Python compares to Lisp, go to
   http://norvig.com/python-lisp.html

Also see section [6-1] for implementations of new languages that might
be pertinent to AI practitioners and researchers.

(Some of the above material is due to the comp.lang.prolog FAQ and
Norvig's "Paradigms of Artificial Intelligence Programming: Case
Studies in Common Lisp".)

Subject: [1-11] What's the difference between "classical" AI and "statistical" AI?

Statistical AI, arising from machine learning, tends to be more
concerned with "inductive" thought: given a set of patterns, induce the
trend.  Classical AI, on the other hand, is more concerned with
"deductive" thought: given a set of constraints, deduce a conclusion.
Another difference, as mentioned in the previous question, is that C++
tends to be a favourite language for statistical AI while LISP
dominates in classical AI.

A system can't be truly intelligent without displaying properties of
both inductive and deductive thought.  This leads many to believe that
in the end, there will be some kind of synthesis of statistical and
classical AI.

Subject: [1-12] I have the idea for an AI Project that will solve all of AI...

Great!  Welcome to the club and tell us all about it.  Most people in
the community genuinely want new people to be thinking about AI.  You
should be aware that you will probably not get a whole lot of
enthusiasm from the established scientists, for a few reasons:

 - We receive or hear about such proposals about once a month.  The
   vast majority are naive.
 - Many smart people have been thinking about the AI problem for a long
   time.  There have been many ideas that have been pursued by
   sophisticated research teams which turned out to be dead ends.  This
   includes all of the obvious ideas.  Most grand solutions proposed
   have been seen before (about 70% seem to be recapitulations of
   Minsky proposals).
 - The grand ideas are almost always far too vague to implement.  One
   of the tough lessons of graduate school is how to turn a vague idea
   into something that is implementable and testable.  Unless you have
   experience at it, it is unlikely your first try will have the needed
   precision.
 - It is the general opinion of the research community that we're just
   not ready to solve the general AI problem yet (cf. question on CYC).
   Why that is should be addressed in another question.

OK, now that we've covered the harsh reality, you shouldn't get
discouraged.  If you're having fun with it, keep doing it.  You're
guaranteed to learn something while participating in a fascinating
hobby.  Who knows - you may still come up with a really great and new
idea.  Finally, [and this is just Ric's opinion] most of the really
interesting AI people started out because they had the same kind of
idea to make AI better than it is now.

Subject: [1-13] Glossary of AI terms.

This is the start of a simple glossary of short definitions for AI
terminology.  The purpose is not to present the gory details, but to
give a general idea.

A*: A search algorithm to find the shortest path through a search space
   to a goal state using a heuristic.  See 'Search', 'Problem Space',
   'Admissibility', and 'Heuristic'.

Admissibility: An admissible search algorithm is one that is guaranteed
   to find an optimal path from the start node to a goal node, if one
   exists.  In A* search, an admissible heuristic is one that never
   overestimates the distance remaining from the current node to the
   goal.

Agent: "Anything that can be viewed as perceiving its environment
   through sensors and acting upon that environment through effectors."
   [Russell, Norvig 1995]

ai: 1. A three-toed sloth of genus Bradypus.  This forest-dwelling
   animal eats the leaves of the trumpet-tree and sounds a high-pitched
   squeal when disturbed.  (Based on the Random House dictionary
   definition.)  2. An ancient Canaanite city that was occupied by the
   Israelites and is mentioned in the Bible as well as in other ancient
   texts.  (Thanks to Omri Safren.)

Alpha-Beta Pruning: A method of limiting search in the MiniMax
   algorithm.  The coolest thing you learn in an undergraduate course.
   If done optimally, it reduces the effective branching factor from B
   to the square root of B.

Animat Approach: The design and study of simulated animals or adaptive
   real robots inspired by animals.  (From www-poleia.lip6.fr/ANIMATLAB
   - click on "English page".)

Backward Chaining: In a logic system, reasoning from a query to the
   data.  See Forward Chaining.

Belief Network (also Bayesian Network): A mechanism for representing
   probabilistic knowledge.  Inference algorithms in belief networks
   use the structure of the network to generate inferences efficiently
   (compared to joint probability distributions over all the
   variables).

Breadth-first Search: An uninformed search algorithm where the
   shallowest node in the search tree is expanded first.

Case-based Reasoning: Technique whereby "cases" similar to the current
   problem are retrieved and their "solutions" modified to work on the
   current problem.

Closed World Assumption: The assumption that if a system has no
   knowledge about a query, it is false.

Computational Linguistics: The branch of AI that deals with
   understanding human language.  Also called natural language
   processing.

Data Mining: Also known as Knowledge Discovery in Databases (KDD); has
   been defined as "the nontrivial extraction of implicit, previously
   unknown, and potentially useful information from data" in Frawley
   and Piatetsky-Shapiro's overview.  It uses machine learning,
   statistical and visualization techniques to discover and present
   knowledge in a form which is easily comprehensible to humans.

Depth-first Search: An uninformed search algorithm, where the deepest
   non-terminal node is expanded first.

Embodiment: An approach to Artificial Intelligence that maintains that
   the only way to create general intelligence is to use programs with
   'bodies' in the real world (i.e. robots).  It is an extreme form of
   Situatedness, first and most strongly put forth by Rod Brooks at
   MIT.

Evaluation Function: A function applied to a game state to generate a
   guess as to who is winning.  Used by MiniMax when the game tree is
   too large to be searched exhaustively.

Forward Chaining: In a logic system, reasoning from facts to
   conclusions.  See Backward Chaining.

Fuzzy Logic: In Fuzzy Logic, truth values are real values in the closed
   interval [0..1].  The definitions of the boolean operators are
   extended to fit this continuous domain.  By avoiding discrete
   truth-values, Fuzzy Logic avoids some of the problems inherent in
   either-or judgments and yields natural interpretations of utterances
   like "very hot".  Fuzzy Logic has applications in control theory.
   (A tiny code sketch of fuzzy operators appears after this glossary.)

Generate and Test: The basic model for performing search in any search
   space.  "The purest form of 'generate and test' is:
      1. generate all the possible [options] that I would even remotely
         consider taking next,
      2. test each [option] in the generated set to filter out bad
         ones, and possibly to prioritize the rest.
   How much you move away from this 'pure' form depends on how much of
   the testing you try to move into the generation stage.  What we
   often strive for in intelligent systems is:
      1. generate only the most appropriate action,
      2. no testing is needed.
   But what we usually end up with is:
      1. generate only the best candidates (moving some of the testing
         conditions into the generator),
      2. perform a more strenuous test on the small set of generated
         actions, for a final selection."
   -Randolph M. Jones <[email protected]>

Heuristic: The dictionary defines it as a method that serves as an aid
   to problem solving.  It is sometimes defined as any 'rule of thumb'.
   Technically, a heuristic is a function that takes a state as input
   and outputs a value for that state - often a guess of how far away
   that state is from the goal state.  See also: Admissibility, Search.

Information Extraction: Getting computer-understandable information
   from human-readable (i.e. natural language) documents.

Iterative Deepening: An uninformed search that combines good properties
   of Depth-first and Breadth-first search.

Iterative Deepening A*: The ideas of iterative deepening applied to A*.

Language Acquisition: A relatively new sub-branch of AI; traditionally
   computational linguists tried to make computers understand human
   language by giving the computer grammar rules.  Language acquisition
   is a technique for the computer to generate the grammar rules
   itself.

Machine Learning: A field of AI concerned with programs that learn.  It
   includes Reinforcement Learning and Neural Networks among many other
   fields.

MiniMax: An algorithm for game playing in games with perfect
   information.  See Alpha-Beta Pruning.

Modus Ponens: An inference rule that says: if you know x, and you know
   that 'if x is true then y is true', then you can conclude y.

Nonlinear Planning: A planning paradigm which does not enforce a total
   (linear) ordering on the components of a plan.

Natural Language (NL): Evolved languages that humans use to communicate
   with one another.

Natural Language Queries: Using human language to get information from
   a database.

Partial Order Planner: A planner that only orders steps that need to be
   ordered, and leaves unordered any steps that can be done in any
   order.

Planning: A field of AI concerned with systems that construct sequences
   of actions to achieve goals in real-world-like environments.

Problem Space (also State Space): The formulation of an AI problem into
   states and operators.  There is usually a start state and a goal
   state.  The problem space is searched to find a solution.

Search: The finding of a path from a start state to a goal state.  See
   'Admissibility', 'Problem Space', and 'Heuristic'.

Situatedness: The property of an AI program being located in an
   environment that it senses.  Via its actions, the program can select
   its sensory input, as well as change its environment.  Situatedness
   is often considered necessary in the Animat approach.  Some
   researchers claim that situatedness is key to understanding general
   intelligence.  (See Embodiment.)

Strong AI: The claim that computers can be made to actually think, just
   like human beings do.  More precisely, the claim that there exists a
   class of computer programs, such that any implementation of such a
   program is really thinking.

Unification: The process of finding a substitution (an assignment of
   constants and variables to variables) that makes two logical
   statements look the same.

Validation: The process of confirming that one's model uses measurable
   inputs and produces output that can be used to make decisions about
   the real world.

Verification: The process of confirming that an implemented model works
   as intended.

Weak AI: The claim that computers are important tools in the modeling
   and simulation of human activity.

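To illustrate the Fuzzy Logic entry above, here is a tiny sketch in
Python of one common choice of extended boolean operators (min for AND,
max for OR, complement for NOT); other definitions are also used in
practice.

   # Truth values live in [0, 1]; the boolean operators are extended to
   # that interval.  The min/max/complement definitions used here are
   # one common choice; other extensions exist.

   def fuzzy_and(a, b):
       return min(a, b)

   def fuzzy_or(a, b):
       return max(a, b)

   def fuzzy_not(a):
       return 1.0 - a

   # "The water is hot" with degree 0.8, "the water is plentiful" 0.3:
   hot, plentiful = 0.8, 0.3
   print(fuzzy_and(hot, plentiful))   # 0.3
   print(fuzzy_or(hot, plentiful))    # 0.8
   print(fuzzy_not(hot))              # about 0.2 (floating point)
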
Subject: [1-14] In A*, why does the heuristic have to always underestimate?

Recall that in A*, a number is assigned to each node, its f-cost.
f-cost is defined as f(n) = g(n) + h(n), where g(n) is the cost of
traveling to node n, and h(n) is the heuristic guess at the cost of
traveling from node n to the goal.  A* expands nodes based on minimal
f-cost (i.e. it looks at all the nodes it knows about but hasn't yet
examined closely, and picks the one with the smallest f(n)).  Let's
look at the following situation:

                +-+
                |n|
                +-+
               /   \
            +-+     +-+
            |o|     |p|
            +-+     +-+
             |       |
            +-+     +-+
            |g|-----|q|
            +-+     +-+

n is an already expanded node, and A* is trying to decide if it wants
to expand o or p.  If g is the goal node, then o is on the shorter
path, so we want A* to pick o.  Let's assume that g(n) = 5 and the cost
between nodes is always 1.  Therefore g(o) = 6 and g(p) = 6.  Now let's
assume that our heuristic sometimes overestimates, so that h(o) = 5,
h(p) = 3 and h(q) = 2.  In this case,

   f(o) = g(o) + h(o) = 6 + 5 = 11
   f(p) = g(p) + h(p) = 6 + 3 = 9,

so A* would expand p next, discovering node q.  Then it decides which
node to expand:

   f(o) = g(o) + h(o) = 6 + 5 = 11
   f(q) = g(q) + h(q) = 7 + 2 = 9,

so A* would expand q next, discovering g.  Then it decides which node
to expand:

   f(o) = g(o) + h(o) = 6 + 5 = 11
   f(g) = g(g) + h(g) = 8 + 0 = 8.

So A* would discover node g, notice that it is a goal, and return the
path n->p->q->g, which is _not_ the shortest path (n->o->g is shorter,
with cost 7 rather than 8).  The intuition here is that the
overestimate of h(o) led A* to look at another path where the
overestimate was less bad.

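Here is a minimal sketch in Python of the example above, so you can
watch the overestimating heuristic mislead A*.  It is illustrative
code, not a production A* (there is no closed list); the graph, costs
and heuristic values are exactly those in the text, with g(n) = 5
modeled as the start node's initial path cost.

   import heapq

   graph = {  # node -> list of (neighbor, edge cost)
       'n': [('o', 1), ('p', 1)],
       'o': [('g', 1)],
       'p': [('q', 1)],
       'q': [('g', 1)],
       'g': [],
   }

   def astar(start, goal, h, start_cost=0):
       frontier = [(start_cost + h[start], start_cost, start, [start])]
       while frontier:
           f, g_cost, node, path = heapq.heappop(frontier)
           if node == goal:          # goal popped: return its path
               return path, g_cost
           for nbr, cost in graph[node]:
               g_nbr = g_cost + cost
               heapq.heappush(frontier,
                              (g_nbr + h[nbr], g_nbr, nbr, path + [nbr]))
       return None, None

   # Overestimating heuristic from the text: h(o)=5, h(p)=3, h(q)=2.
   h_bad = {'n': 0, 'o': 5, 'p': 3, 'q': 2, 'g': 0}
   print(astar('n', 'g', h_bad, start_cost=5))
   # -> (['n', 'p', 'q', 'g'], 8)   (the non-optimal path)

   # An admissible heuristic (the true remaining distances are h(o)=1,
   # h(p)=2, h(q)=1, so these never overestimate) recovers the optimum.
   h_good = {'n': 0, 'o': 1, 'p': 2, 'q': 1, 'g': 0}
   print(astar('n', 'g', h_good, start_cost=5))
   # -> (['n', 'o', 'g'], 7)        (the optimal path)
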
Subject: [1-15] I'm a student considering further study in AI.  What information is there for me?

Aaron Sloman has written an essay addressing this question, aimed at
people who know little about AI.  Please see
   http://www.cs.bham.ac.uk/~axs/misc/aiforschools.html

Subject: [1-16] What are good graduate schools for AI?

The short answer is: MIT, CMU, and Stanford are historically the
powerhouses of AI and are still the top 3 today.  There are, however,
hundreds of schools all over the world with at least one or two active
researchers doing interesting work in AI.  What is most important in
graduate school is finding an advisor who is doing something YOU are
interested in.  Read about what's going on in the field and then
identify the people in the field who are doing the research you find
most interesting.  If a professor and his students are publishing
frequently, then that should be a place to consider.

Subject: [1-17] No really, just give me a ranking of the best graduate schools for AI!

[stolen from Randy Crawford [email protected]:]

"A single number that assesses a CS department's worth across the
spectrum of AI topics has increasingly less meaning as you descend the
rankings.  Few schools can claim to have outstanding profs that
represent all areas in AI.  Perhaps no schools can.  As you descend
through school reputations from CMU/MIT/Stanford/UC Berkeley down, you
lose breadth as much as excellence.  It then becomes increasingly
important that you clearly define the subtopic in AI that you want
ranked...

What's more, reputations can change quickly.  Ten years ago Yale was
among the best in AI, and five years ago, Chicago was quite strong.
Today, not."

Subject: [1-18] What are the ratings of the various AI journals?

ISI (the Institute for Scientific Information, NOT the Information
Sciences Institute) produces an annual database called the Journal
Impact Factors.  Check your library.  Lee Giles has done a nice job of
extracting this information for some AI journals.  See:
   http://www.neci.nj.nec.com/homepages/giles/Citation.index/
Thanks to Bob Fisher and Dean Hougen for this answer.

You might also want to look at CiteSeer, "Earth's largest free
full-text index of scientific literature."  It also tracks citations.

For a somewhat complete list of AI journals listed by area, see part 3
of this FAQ.

Subject: [1-19] Where can I find conference information?

Georg Thimm maintains a webpage that lets you search for upcoming or
past conferences in a variety of AI disciplines.  Check out:
   http://www.drc.ntu.edu.sg/users/mgeorg/enter.epl

Subject: [1-20] How can I get the email address for Joe or Jill Researcher?

This question is an anachronism.  The correct way to get someone's
email address is to Google them.  If that fails, try posting to the
comp.ai newsgroup.

Subject: [1-21] What does it mean to say a 2-player game is 'solved'?  Is tic-tac-toe solved?  How about game z?

We say a game is solved when we know for sure the result when both
players play optimally.  The result is either a guaranteed win for the
first player, a guaranteed win for the second player, or a draw.  We
find this out by searching the mini-max game tree down to the
game-ending positions.  If you do this for 3x3 tic-tac-toe, it is easy
to see that it is a forced draw (a small search sketch is given below).

Other games:
   3x3x3 tic-tac-toe: win for the first player.
   4x4x4 tic-tac-toe: win for the first player.
   Connect-4: win for the first player.
   Go-Moku: win for the first player.

[Maintainer's note: Please let us know about your favorite solved
game.]

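Here is a small sketch in Python of solving 3x3 tic-tac-toe by
searching the game tree to the game-ending positions, as described
above.  The function names are ours; the value returned is from the
point of view of the player to move.  It confirms the forced draw
(and may take a few seconds, since it does no caching of positions).

   # +1 = forced win for the player to move, 0 = forced draw,
   # -1 = forced loss for the player to move.

   LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
            (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
            (0, 4, 8), (2, 4, 6)]              # diagonals

   def winner(board):
       for a, b, c in LINES:
           if board[a] != ' ' and board[a] == board[b] == board[c]:
               return board[a]
       return None

   def solve(board, to_move):
       """Game value for the player to move, searching to the end."""
       if winner(board) is not None:
           return -1                 # the previous player just won
       if ' ' not in board:
           return 0                  # board full: draw
       best = -1
       other = 'O' if to_move == 'X' else 'X'
       for i, cell in enumerate(board):
           if cell == ' ':
               child = board[:i] + to_move + board[i + 1:]
               best = max(best, -solve(child, other))
               if best == 1:         # cannot do better than a forced win
                   break
       return best

   if __name__ == "__main__":
       value = solve(' ' * 9, 'X')
       print({1: 'first-player win', 0: 'draw',
              -1: 'second-player win'}[value])
       # -> draw   (3x3 tic-tac-toe is a forced draw)
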
Subject: [1-22] What's this Information Theory thing?

Information Theory was developed to describe properties of
communication across networks.  It turns out that it has all sorts of
applications in AI and machine learning as well.  A good tutorial can
be found at:
   http://www-2.cs.cmu.edu/~dst/Tutorials/Info-Theory/

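As a small taste of the kind of quantity Information Theory provides,
here is a sketch in Python of the Shannon entropy of a discrete
distribution, roughly the average number of bits needed per symbol;
entropy underlies, for example, the information-gain criterion used by
decision-tree learners.  (Illustrative code, not from any particular
library.)

   import math

   def entropy(probs):
       """Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i)."""
       return sum(-p * math.log2(p) for p in probs if p > 0)

   print(entropy([0.5, 0.5]))            # 1.0 bit  (a fair coin)
   print(entropy([1.0]))                 # 0.0 bits (a certain outcome)
   print(entropy([0.25] * 4))            # 2.0 bits (four equally likely symbols)
   print(round(entropy([0.9, 0.1]), 3))  # ~0.469 bits (a biased coin)
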
Subject: [1-23] What AI competitions exist?

The Loebner Prize, based on a fund of over $100,000 established by New
York businessman Hugh G. Loebner, is awarded annually for the computer
program that best emulates natural human behavior.  During the contest,
a panel of independent judges attempts to determine whether the
responses on a computer terminal are being produced by a computer or a
person, along the lines of the Turing Test.  The designers of the best
program each year win a cash award and a medal.  If a program passes
the test in all its particulars, then the entire fund will be paid to
the program's designer and the fund abolished.

For further information about the Loebner Prize, see the URL
   http://www.loebner.net/Prizef/loebner-prize.html
or write to Cambridge Center for Behavioral Studies, 11 Waterhouse
Street, Cambridge, MA 02138, or call 617-491-9020.

Also look at:
   http://www.eecs.harvard.edu/~shieber/papers/loebner-rev-html/loebner-rev-html.html
for a published criticism of the Loebner Prize.  Hugh G. Loebner has
written a reply to Prof. Shieber's critique.  It may be found at:
   http://loebner.net/Prizef/In-response.html

---

The Robot World Cup Initiative (RoboCup) is an attempt to foster AI and
intelligent robotics research by providing a standard problem where a
wide range of technologies can be integrated and examined.  For this
purpose, RoboCup chose the game of soccer, and organizes RoboCup: The
Robot World Cup Soccer Games and Conferences.  In order for a robot
team to actually perform a soccer game, various technologies must be
incorporated, including: design principles of autonomous agents,
multi-agent collaboration, strategy acquisition, real-time reasoning,
robotics, and sensor fusion.  RoboCup is a task for a team of multiple
fast-moving robots in a dynamic environment.  RoboCup also offers a
software platform for research on the software aspects of RoboCup.
Information can be found at:
   http://www.robocup.org/02.html

---

The BEAM Robot Olympics is a robot exhibition/competition started in
1991.  For more information about the competition, write to BEAM Robot
Olympics, c/o: Mark W. Tilden, MFCF, University of Waterloo, Ontario,
Canada, N2L-3G1, 519-885-1211 x2454, [email protected].

---

The Gordon Bell Prize competition recognizes outstanding achievements
in the application of parallel processing to practical scientific and
engineering problems.  Entries are considered in performance,
price/performance, compiler parallelization and speedup categories, and
a total of $3,000 will be awarded.  The prizes are sponsored by Gordon
Bell, a former National Science Foundation division director who is now
an independent consultant.  Contestants should send a three- or
four-page executive summary to 1993 Gordon Bell Prize, c/o Marilyn
Potes, IEEE Computer Society, 10662 Los Vaqueros Cir., PO Box 3014, Los
Alamitos, CA 90720-1264, before May 31, 1993.

---

AAAI has an annual robot building competition.  The anonymous FTP site
for the contest is/was
   aeneas.mit.edu:/pub/ACS/6.270/AAAI/
This site has the manual and the rules.  To be added to the
[email protected] mailing list for discussing the AAAI robot
building contest, send mail to [email protected].  See also
the 6.270 robot building guide in part 4 of this FAQ.

---

The CASC theorem prover competition is held annually at the CADE
conference.  First-order logic theorem provers compete for recognition
and plaques.  The web page for this year's contest (1999) is found at:
   http://www.cs.jcu.edu.au/~tptp/CASC-16/

---

The International Computer Chess Association presents an annual prize
for the best computer-generated annotation of a chess game.  The output
should be reminiscent of that appearing in newspaper chess columns, and
will be judged on both the correctness and depth of the variations and
also on the quality of the program's written output.  The deadline is
December 31, 1994.  For more information, write to Tony Marsland
<[email protected]>, ICCA President, Computing Science Department,
University of Alberta, Edmonton, Canada T6G 2H1, call 403-492-3971, or
fax 403-492-1071.

Subject: [1-24] Open Source Software and AI

Some of the more interesting AI programs end up getting released to the
Web, usually with a license granting redistribution for non-commercial
purposes or, increasingly, under the GNU General Public License.  See
Part 6 of the FAQ for more information.

Subject: [1-25] AI Job Postings

Computists International publishes a list of AI-related jobs, which is
also posted periodically to comp.ai at the request of the moderator.
Computists International also publishes a set of informative
newsletters that may be subscribed to at http://www.computists.com with
membership.  Student fees are (as of 3/30/00) $22.50 and professional
fees are $47.50.

For neural networks, the Neuron Digest and Connectionists mailing lists
are a good source of job postings.  For computer vision, the
VISION-LIST digest includes occasional job announcements.  A good
source for general AI is Computists' Communique.  For postdoctoral
appointments, see sci.research.postdocs.

A new list (as of 16 Jan 2003) is available at
   http://groups.yahoo.com/group/Artificial_Intelligence_Jobs/

Subject: [1-26] Future Directions of AI

[Note: as of 2002, this is out of date.]

The purpose of this question is to compile a list of major ongoing and
future thrusts of AI.  To be included in this list a research problem
or application must have the following characteristics:

  [1] Collaborative Community Effort:  It must span several subfields
      of AI, requiring some degree of collaboration between AI
      researchers of different specialties.  The idea is to help unify
      the fragmented subfields with a common purpose or purposes.

  [2] High Impact:  It must address important problems of widespread
      interest.  Solving the problem must matter to many people and not
      simply be adding another grain of sand on the anthill.  This will
      help motivate and excite researchers, and justify the field to
      outsiders.

  [3] Short Horizon for Progress:  It must be possible to make
      incremental progress; it must not be an all-or-nothing problem.
      For example, problems where we can reasonably expect to make
      significant measurable progress over the next 10 years or so.

  [4] Drive Basic Research:  It should involve more than just applying
      current technology; it should drive basic research and the
      development of new technology (possibly in completely new
      directions).

In short, these problems should be "Grand Challenges" for AI.  If you
were trying to describe the field of AI to a layman, what concrete
problems would you use to illustrate the overall vision of the field?
Saying that the goal of AI is to produce "thinking machines that solve
problems" doesn't quite cut it.

  o Knowbots/Infobots, Web Agents and Intelligent Help Desks
    Unified NLU, NLG, Information Retrieval, KR, Reasoning, Intelligent
    User Interfaces, Qualitative Reasoning.

  o Autonomous Vehicles
    Unified Robotics, Machine Vision, Machine Learning, Intelligent
    Control, Planning.

  o Machine Translation
    Unified NLU, NLG, Knowledge Representation, Speech Understanding,
    Speech Synthesis.

It seems appropriate to mention, in this context, some of the early
goals of AI.  In 1958 Newell and Simon predicted that computers would
-- by 1970 -- be capable of composing classical music, discovering
important new mathematical theorems, playing chess at grandmaster
level, and understanding and translating spoken language.  Although
these predictions were overly optimistic, they did represent a set of
focused goals for the field of AI.  [See H. A. Simon and A. Newell,
"Heuristic Problem Solving: The Next Advance in Operations Research",
Operations Research, pages 1-10, January-February 1958.]

Subject: [1-27] Where are the FAQs for...neural nets? natural language? artificial life? fuzzy logic? genetic algorithms? philosophy? Lisp? Prolog? robotics?

The FAQs for various related AI fields can be found here (this list is
obviously incomplete):

   comp.ai.neural-nets:  ftp://ftp.sas.com/pub/neural/FAQ.html
   comp.ai.nat-lang:     http://www.cs.columbia.edu/~acl/nlpfaq.txt
   comp.ai.alife:        ?
   comp.ai.fuzzy:        ?
   comp.ai.genetic:      ftp://rtfm.mit.edu/pub/usenet/comp.ai.genetic/
   comp.ai.philosophy:   ?
   comp.lang.lisp:       ftp://ftp.think.com/public/think/lisp/

In general, http://www.faqs.org/ is a good place to check for the
latest FAQs in most areas.

---
[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <[email protected]>, and ]
[ ask your news administrator to fix the problems with your system. ]