Artificial Intelligence and Neurocognition - Introduction

Artificial Intelligence and Neurocognition - Leiden University, 2019

Lecture 1: Introduction

What is Artificial Intelligence?

Cognitive psychology:

- The study of the computations that make it possible to perceive, reason and act

Artificial intelligence:

- The study of how to build or program computers to enable them to do what minds can do

AI and other scientific disciplines

AI ≠ psychology and AI ≠ computer science

However, AI draws from these disciplines:

- AI puts greater emphasis on computation than psychology 

- AI puts greater emphasis on perception, reasoning, and action than computer science

So why AI in psychology?

Psychology is one big inverse problem: we try to reason about the mind but we can't really measure it

-We have a set of observations (behavior, psychophysiological measurements, EEG, fMRI, etc.)

-We then try to infer the processes producing such observations

-Such inferences are limited, and sometimes even impossible to make

AI can use forward modeling

-We design a (simple) system and see how it behaves

-Examples: cognitive robotics

-This is where AI and computational psychology meet

 

How did the field of AI develop?

Philosophy of mind

How does the physical brain give rise to the mental mind?

-René Descartes (1596–1650): dualism, because the mind is not physical

- Materialists: wrong! All mental states are caused by (or identical to) physical states

 

Searle:

John Searle: a collection of cells can lead to thought/action/consciousness

Consciousness requires the actual physical-chemical properties of actual human brains

Only brains cause minds!

Chinese Room Experiment (Searle):

I am situated in a room containing only a large book and a door under which pieces of paper can be passed

Chinese people on the outside of the room can ask me questions by writing them down and passing pieces of paper under the door

The large book contains every possible question–answer mapping, so I can answer (in Chinese!) all questions correctly

Rule-based manipulation of symbols does not constitute intelligence: the inhabitant of the Chinese room does not understand Chinese (weak AI)

Chinese room criticism: AI really is “the ongoing research program of showing Searle’s Chinese Room Argument to be false” (Hayes)

No matter how intelligent machine behavior may seem, it does not reflect true intelligence or sentience

Strong AI

“The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds”

 Strong AI proponents believe that intelligent systems can actually think

Most people believe that strong AI should have a connectionist architecture (later)

Can machines think?

- “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” —Edsger Dijkstra

In other words: are we asking the right questions?

- Strong AI assumes that the human mind is an information processing system, and that thinking is a form of computing

-The mind as an information processor is one of the basic tenets of cognitive psychology

 

Important dates within AI:

1st phase:

1943 McCulloch–Pitts neuron:

Warren McCulloch and Walter Pitts’ three principles:

1. Basic physiology

2. Propositional logic

3. Turing’s theory of computation

-Any computable function can be computed by a network of neurons

-All logical operators can be implemented by simple neural networks
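
To make this concrete, here is a minimal Python sketch (my own toy illustration with hand-chosen weights, not the notation of the original 1943 paper) of a binary threshold neuron implementing basic logical operators:

# A toy binary threshold neuron: fire (output 1) if the weighted
# sum of the binary inputs reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)   # fires only if both inputs fire
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)   # fires if at least one input fires
NOT = lambda a:    mp_neuron([a],    [-1],   0)   # inhibitory weight inverts the input

print(AND(1, 1), OR(0, 1), NOT(1))   # prints: 1 1 0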

 

1950 Computing Machinery and Intelligence:

Turing’s (1950) imitation game: a machine is intelligent if we cannot distinguish it from a human in conversation

It makes no claims about the underlying mechanisms

-How does the judge determine intelligence in a Turing test? By complex grammatical structures and realistic world knowledge (De Kleijn et al., 2018)

 

1951 SNARC: a neural net machine designed by Marvin Minsky - the first neural network computer, with 40 neurons

 

1956 Dartmouth Conference - the birth of AI:

Pioneers in the fields of computer science, mathematics and cognitive science got together for a month-long conference at Dartmouth College

Here they coined the term: artificial intelligence

 

In the 50s and 60s, there were several successes:

Computers playing checkers, proving theorems

Invention of Lisp, the dominant high-level AI language

Neural network (connectionist) research was pushed to the background

Intelligence is thought of as symbols and the relations between them:

-Symbolic AI (GOFAI) does not concern itself with neurophysiology

-Human thinking is a kind of symbol manipulation --> IF (A > B) AND (B > C) THEN (A > C)

-Knowledge-based, or expert systems were hugely successful

 

2nd phase: 

1965 ELIZA

Weizenbaum (1965): ELIZA was an early natural language processor

Used simple techniques to create the illusion of understanding

Nevertheless, some people felt that the computer did understand them: “Computers can have conversations!”

“It was meant to mimic a psychotherapist, which allowed it to adopt the pose of knowing almost nothing of the real world.”:

- ELIZA looks for keywords in its input: father, mother, boyfriend, girlfriend, angry, sad, happy, etc.

-Using a database of rules, new sentences are constructed from these keywords (see the sketch below)

--> “I hate my father.”

--> “Why do you hate your father?”

-What if there are no keywords present in the input?

-->“I see.”  or “Please go on.”

The anthropomorphization (humanization) of computers is just a mind trick 
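
A minimal sketch of this keyword-plus-template idea (the patterns and responses below are invented for illustration; Weizenbaum's actual rule base was far richer):

import random
import re

# Toy ELIZA-style rules: a keyword pattern plus response templates.
rules = [
    (r"\bI hate my (\w+)", ["Why do you hate your {0}?"]),
    (r"\bI feel (\w+)", ["How long have you felt {0}?"]),
    (r"\b(father|mother)\b", ["Tell me more about your {0}."]),
]
fallbacks = ["I see.", "Please go on."]   # used when no keyword matches

def respond(sentence):
    for pattern, templates in rules:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(fallbacks)

print(respond("I hate my father."))      # "Why do you hate your father?"
print(respond("The weather is nice."))   # "I see." or "Please go on."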

 

1971 STRIPS

Stanford Research Institute Problem Solver: an automated planner

Realization of goals

Divide the task into subgoals, identify necessary actions

Early action planners were susceptible to the Sussman anomaly:

Goal stack planning

In the problem, three blocks (labeled A, B, and C) rest on a table. The agent must stack the blocks such that A is atop B, which in turn is atop C. However, it may only move one block at a time.

This problem is illustrated well here: https://en.wikipedia.org/wiki/Sussman_anomaly
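
A minimal blocks-world sketch of the anomaly (a toy illustration, not the STRIPS planner itself): pursuing the two subgoals one at a time forces the planner to undo its own work.

# State maps each block to what it rests on. Start: C is on A; A and B on the table.
# Goal: A on B and B on C.
def move(state, block, dest):
    if any(support == block for support in state.values()):
        raise ValueError(block + " has something on top of it")
    state[block] = dest

state = {"A": "table", "B": "table", "C": "A"}

# Subgoal 1 (A on B): clear A by moving C away, then stack A on B.
move(state, "C", "table")
move(state, "A", "B")
print(state)   # {'A': 'B', 'B': 'table', 'C': 'table'} -- subgoal 1 achieved

# Subgoal 2 (B on C): B now sits under A, so A must come off again,
# which destroys subgoal 1. A naive goal-stack planner cycles here;
# the correct plan interleaves the subgoals: C to table, B on C, A on B.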

 

1972 PARRY:

Kenneth Colby (1972, Stanford): modified Turing test

PARRY simulated a patient with paranoid schizophrenia

- Often inconsistent or meaningless sentences, but therefore realistic!

33 psychiatrists were asked to classify transcripts of conversations with PARRY or paranoid schizophrenics - only 48% were correct

 

1972 MYCIN:

A system that emulates the decision-making ability of a human expert

Example: MYCIN (Stanford, 1970s) was designed to diagnose and recommend treatment for certain blood infections

Simple if–then rules with certainty factors

MYCIN reached an accuracy of ~69%, which was better than physicians at Stanford Medical School (it was never used in practice due to ethical and legal difficulties)
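
A much-simplified sketch of the idea of if-then rules with certainty factors (the rules and numbers below are made up for illustration and are not MYCIN's actual knowledge base):

rules = [
    # (premises that must all be observed, conclusion, certainty factor of the rule)
    ({"gram_negative", "rod_shaped"}, "e_coli", 0.7),
    ({"grows_in_blood_culture"}, "e_coli", 0.4),
]

def combine(cf_old, cf_new):
    # Classic combination of two positive certainty factors for the same conclusion.
    return cf_old + cf_new * (1 - cf_old)

def infer(observations):
    belief = {}
    for premises, conclusion, cf in rules:
        if premises <= observations:   # all premises present in the observations
            belief[conclusion] = combine(belief.get(conclusion, 0.0), cf)
    return belief

print(infer({"gram_negative", "rod_shaped", "grows_in_blood_culture"}))
# {'e_coli': 0.82}, i.e. 0.7 + 0.4 * (1 - 0.7)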

 

1974–1980s AI winter: no funding for AI research

Many unanswered questions: how do we deal with perception, robotics, learning and pattern recognition?

AI is not that powerful - example of translation:

“The spirit is willing but the flesh is weak”, translated into Russian and back, allegedly came out as “The vodka is good but the meat is rotten”

Symbolic AI does not suffice:

-It is unclear how processes like pattern recognition would work in a purely symbolic way

-Representations dealing with noisy input are needed

 

3rd phase:

1986 PDP handbook

After the AI winter, connectionism was revived with the Rumelhart & McClelland PDP (parallel distributed processing) research group.

3 main pros of connectionism:

1. Biologically inspired: connectionism is based on the structure of the human brain and recognizes that processing takes place in parallel (which is more efficient)

2. Lesion tolerant: Lesioned or damaged networks can still process information

3. Capable of generalization: ANNs (artificial neural networks) are capable of learning and can generalize learned rules to novel input

 

Human memory is content-addressable

First explicit theory on data storage in the brain

Memory is not stored in neurons, but in the connections between them

There are excitatory and inhibitory connections

But how do these neural networks compute anything?

- Neurons output a signal based on their input signal

-Multi-layer perceptrons are able to implement all logical operators, such as AND, OR, XOR
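
For example, a single threshold unit can compute AND and OR (see the earlier sketch), but not XOR; a two-layer perceptron can. A minimal sketch with hand-chosen (not learned) weights:

import numpy as np

step = lambda z: (z > 0).astype(int)   # threshold activation

def two_layer_xor(x):
    # Hidden layer: one unit computes OR, the other NAND.
    h = step(np.array([[1, 1], [-1, -1]]) @ x + np.array([-0.5, 1.5]))
    # Output unit: AND of the two hidden units, which yields XOR.
    return int(np.array([1, 1]) @ h - 1.5 > 0)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", two_layer_xor(np.array(x)))   # 0, 1, 1, 0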

 

Connectionism is useful because it allows us to:

-Find properties of one particular member

-Identify a member by properties 

-Identify general characteristics of the members of a gang, or of members sharing a certain characteristic; this demonstrates generalization

-Have a visual demonstration

-Make no a priori assumptions about the problem space or statistical distribution

-Artificial neural networks can compute any computable function (remember McCulloch & Pitts!)

-Pattern recognition

Connectionist AI principles:

-Mental states are represented as N-dimensional vectors of numeric activation values over neural network units

-Memory is created by modifying the connection strength (weight) between units
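
A minimal sketch of the second principle (a Hebbian-style update of my own choosing, not something specified in the lecture): units that are active together strengthen their connection, and a partial cue later re-activates the stored pattern, illustrating content-addressable memory.

import numpy as np

def hebbian_update(W, activation, lr=0.1):
    # Strengthen the weight between every pair of co-active units.
    return W + lr * np.outer(activation, activation)

n_units = 4
W = np.zeros((n_units, n_units))
pattern = np.array([1.0, 0.0, 1.0, 0.0])   # a mental state as an activation vector

for _ in range(5):                          # repeated experience strengthens the trace
    W = hebbian_update(W, pattern)

cue = np.array([1.0, 0.0, 0.0, 0.0])        # partial cue: only unit 0 is active
print(np.round(W @ cue, 2))                 # units 0 and 2 receive the most input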

 

1997 Deep Blue vs. Kasparov

1997: the first time a computer (IBM’s Deep Blue) beat a reigning world chess champion (Garry Kasparov) under tournament conditions

2005: last time a human beat a top chess computer under tournament conditions

2009: an HTC Touch HD smartphone running chess software equaled Deep Blue’s performance

4th phase

2010s Deep reinforcement learning

Data mining offers huge quantities of data

Deep learning offers representation at many levels

Bayesian networks deal with uncertain knowledge

Deep reinforcement learning can learn to act from rich, noisy data:

-Adding more layers adds to dimensionality of classification

-Multiple representations offer multiple levels of abstraction

-Recurrent connections can maintain context, temporal information

-Combining these is a hot topic: Google, for instance, is investigating motion classification and content classification
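
The underlying reinforcement-learning idea can be shown without the "deep" part. Below is a minimal tabular Q-learning sketch on an invented five-state corridor (reward only at the far right); in deep RL the table is replaced by a neural network. Actions are chosen at random here (pure exploration), and because the Q-learning update is off-policy it still learns which action is best.

import numpy as np

n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:                # episode ends at the rightmost state
        a = rng.integers(n_actions)         # behave randomly (explore)
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.round(2))   # after learning, "right" has the higher value in every state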

 

2012: Personal assistants Siri, Google Go

The software used deep reinforcement learning

2015: AlphaGo

2016: Google DeepMind’s AlphaGo defeated the world’s number one player

 

What next? Machine learning

If we don’t want to preprogram all knowledge, systems should be able to learn.

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E

Types of learning

Supervised learning

- An external, knowledgeable supervisor presents the system with correctly labeled training data

Unsupervised learning

-Discover hidden structure in data without labeled data

Reinforcement learning

-Learning from a feedback signal

Classification

- Determining group membership based on input data

- Does this MRI image of someone’s head show a brain tumor?

Regression

- Predict outcome data based on input data

- Given its location, surface area, and number of rooms, can we predict the value of this house?
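
A minimal sketch of both task types, using scikit-learn (assumed to be installed) and made-up toy data; a real application would of course use real datasets such as labeled MRI scans or housing records:

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: predict a binary label (e.g., tumor yes/no) from two features.
X_clf = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_clf = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X_clf, y_clf)
print(clf.predict([[0.85, 0.75]]))          # -> [1]

# Regression: predict a continuous value (e.g., house price) from
# location score, surface area (m2), and number of rooms.
X_reg = np.array([[7, 80, 3], [8, 120, 4], [5, 60, 2], [9, 150, 5]])
y_reg = np.array([250_000, 400_000, 180_000, 520_000])
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[7, 100, 3]]))           # an estimated price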

 

Conclusion

Philosophical implications:

- Weak AI: machines can simulate human intelligence using clever tricks

- Strong AI: a well-programmed machine that exactly emulates the human brain is a mind, and thereby intelligent

Approaches to AI:

 - Symbolic AI: intelligent behavior through manipulation of symbols

- Connectionist AI: representations in the brain are distributed, processing massively parallel

 

 

 

 

 
