Word Recognition Models

Lucas Rizoli
Thursday, September 30
PSYC 365*, Fall 2004
Queen’s University, Kingston
Human Word Recognition
●   Text interpreted as it is perceived
    –   Stroop test (Red, Green, Yellow)
    –   Aware of results, not of processes
         ●   Likely involves many areas of the brain
              –   Visual
              –   Semantic
              –   Phonological
              –   Articulatory
●   How can we model this?
Creating a Word Recognition Model
●   Assumptions
    –   Working in English
    –   Only monosyllabic words
         ●   FOX, CAVE, FEIGN...
    –   Concerned only with simple word recognition
         ●   Symbols → sounds
         ●   Visual, articulatory systems function independently
         ●   Context of word is irrelevant
Creating a Word Recognition Model
●   Rules by which to recognize CAVE
    –   C → /k/
    –   A → /A/
    –   VE → /v/
●   Describe grapheme-phoneme correspondences
    (GPC)
    –   Grapheme → phoneme
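As a toy sketch (not Coltheart’s actual rule set), GPCs can be modeled as an ordered spelling-to-sound table, with longer graphemes tried before single letters:

    # Toy GPC rules -- illustrative only, not the DR93 rule set.
    # Multi-letter graphemes (VE) are listed before single letters.
    GPC_RULES = [("VE", "v"), ("C", "k"), ("A", "A"), ("H", "h")]

    def apply_gpcs(word):
        """Translate a spelling into phonemes by greedy left-to-right matching."""
        phonemes, i = [], 0
        while i < len(word):
            for grapheme, phoneme in GPC_RULES:
                if word.startswith(grapheme, i):
                    phonemes.append(phoneme)
                    i += len(grapheme)
                    break
            else:
                raise ValueError("no rule for " + word[i:])
        return "/" + "".join(phonemes) + "/"

    print(apply_gpcs("CAVE"))  # /kAv/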
Creating a Word Recognition Model
●   Recognize HAVE
    –   H → /h/
    –   A → /A/
    –   VE → /v/
    –   So HAVE → /hAv/ ?
●   Rules result in incorrect pronunciation
Creating a Word Recognition Model
●   English is quasi-regular
    –   Can be described as systematic, but with exceptions
    –   English has a deep orthography
         ●   grapheme → phoneme rules inconsistent
              –   GAVE, CAVE, SHAVE end with /Av/
              –   HAVE ends with /@v/
Creating a Word Recognition Model
●   Model needs to recognize irregular words
●   Check for irregular words before applying GPCs
    –   List irregular words and their pronunciations
         ●   HAVE → /h@v/, GONE → /gon/, ...
    –   Have separate look-up process
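A minimal sketch of the look-up-first idea, reusing apply_gpcs from the sketch above (the irregular list is just the slide’s two examples):

    # Irregular words are checked before the GPC route is tried.
    IRREGULARS = {"HAVE": "/h@v/", "GONE": "/gon/"}

    def recognize(word):
        """Dual-route sketch: irregular look-up first, else apply GPCs."""
        if word in IRREGULARS:
            return IRREGULARS[word]
        return apply_gpcs(word)

    print(recognize("CAVE"))  # /kAv/, via the GPC route
    print(recognize("HAVE"))  # /h@v/, via the look-up route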
Our Word Recognition Model
From Visual System
          |
   Orthographic Input
      /          \
 Irregular       GPCs
   Words
      \          /
   Phonological Output
          |
 To Articulatory System
The Dual-Route Model
●   Proposed by Max Coltheart in 1978
    –   Supported by Pinker, Besner
    –   Revised throughout the ’80s, ’90s, and ’00s
         ●   Context sensitive rules
         ●   Rule frequency checks
         ●   Lots of other complex stuff
●   We’ll follow his 1993 model (DR93)
DR93 Examples
[Figure: example DR93 rules; note that /a/ in the figure should be /@/]
[Figure: a context-sensitive GPC]
What’s Good About DR93
●   Regular word pronunciation
    –   Goes well with rule-based theories
         ●   Berko’s Wug test (This is a wug, these are two wug_)
         ●   Childhood over-regularization
●   Nonword pronunciation
    –   NUST, FAIJE, NARF are alright
What’s Not Good About DR93
●   Irregular word pronunciation
    –   GONE → /gOn/, ARE → /Ar/
●   GPCs miss subregularities
    –   OW → /aW/, from HOW, COW, PLOW
    –   SHOW, ROW, KNOW are exceptions
●   Biological plausibility
    –   Do humans need explicit rules in order to read?
The SM89 Model
●   Implemented by Seidenberg and McClelland in 1989
    –   Response to dual-route model
    –   Neural network/PDP model
    –   “As little as possible of the solution built in”
    –   “As much as possible is left to the mechanisms of learning”
●   We’ll call it SM89
The SM89 Model

From Visual System
          |
 Orthographic Units (400 units)
          |
   Hidden Units (200 units)
          |
 Phonological Units (460 units)
          |
 To Articulatory System
The SM89 Model
●   Orthographic units are triples
    –   Three characters: letters or a word-border
    –   CAVE
         ●   _CA, CAV, AVE, VE_
    –   Context-sensitive
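A minimal sketch of carving a string into boundary-padded triples (the underscore marks a word border); the same function works for the phonological triples two slides on:

    def triples(symbols, pad="_"):
        """Return all three-symbol windows over a boundary-padded string."""
        padded = pad + symbols + pad
        return [padded[i:i + 3] for i in range(len(padded) - 2)]

    print(triples("CAVE"))  # ['_CA', 'CAV', 'AVE', 'VE_']
    print(triples("kAv"))   # ['_kA', 'kAv', 'Av_']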
The SM89 Model
●   Hidden units needed for complete neural network
●   Encode information in an unspecified way
●   Learning occurs by changing weights on
    connections to and from hidden units
    –   Process of back-propagation
The SM89 Model
●   Phonological units are also triples
    –   /kAv/
         ●   _kA, kAv, Av_
●   Triples are generalized
         ●   [stop, vowel, fricative]
●   Number of units is sufficient for English monosyllables
How SM89 Learns
●   Orthographic units artificially stimulated
●   Activation spreads to hidden, phonological units
    –   Feedforward from ortho. to phono. units
●   Model response is pattern of activation in
    phonological units
How SM89 Learns
●   Difference in activation between the model’s response and the correct activation
●   Error computed as the sum of squared differences over the units:
         E = Σ_j (t_j − a_j)², where t_j is the correct activation and a_j the response
●   Weights of all connections between units adjusted to reduce this error
How SM89 Learns
[Equations: the back-propagation weight-update rules]
●   Simply, it learns to pronounce words properly
    –   Don’t worry about the equations
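For the curious, a hedged numpy sketch of the loop just described: spread activation forward, score the response with the summed squared error E, and push every weight down the gradient. This is a generic two-layer back-propagation demo with made-up sizes, not the SM89 implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Made-up sizes; SM89 itself used 400 -> 200 -> 460 units.
    n_orth, n_hidden, n_phon = 8, 4, 6
    W1 = rng.normal(0.0, 0.1, (n_orth, n_hidden))   # orthographic -> hidden
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_phon))   # hidden -> phonological

    x = rng.integers(0, 2, n_orth).astype(float)    # fake orthographic pattern
    t = rng.integers(0, 2, n_phon).astype(float)    # fake target phonology

    lr = 0.5
    for epoch in range(2000):
        h = sigmoid(x @ W1)            # activation spreads to hidden units...
        a = sigmoid(h @ W2)            # ...then to phonological units
        error = np.sum((t - a) ** 2)   # summed squared error E
        # Back-propagation: gradient of E with respect to each weight matrix.
        da = -2.0 * (t - a) * a * (1.0 - a)
        dh = (da @ W2.T) * h * (1.0 - h)
        W2 -= lr * np.outer(h, da)
        W1 -= lr * np.outer(x, dh)

    print(error)  # close to 0: the network has learned this pattern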
How SM89 Learns
●   Trained using a list of ~ 3000 English
    monosyllabic words
    –   Includes homographs (WIND, READ) and irregulars
●   Each training session called an epoch
●   Words appeared roughly in proportion to their
    frequency in written language
Practical Limits on SM89’s Training
●   Activation calculated in a single step
    –   Impossible to record how long it took to respond
    –   Correlated error scores with latency
         ●   Error → time
●   Frequency of words was compressed
    –   Would’ve required ~ 34 times more epochs
    –   Saved computer time
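To picture why compression saves epochs: sampling words in proportion to raw corpus counts would almost never present rare words. A hedged sketch with made-up counts, assuming a logarithmic squashing (the slide does not give the exact transform):

    import numpy as np

    # Hypothetical corpus counts for four words, most to least frequent.
    counts = np.array([70000.0, 10000.0, 1500.0, 2.0])
    raw = counts / counts.sum()
    compressed = np.log(counts + 1.0) / np.log(counts + 1.0).sum()
    print(raw.round(4))         # rare word almost never presented
    print(compressed.round(4))  # rare word still seen often enough to learn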
How SM89 Performed
[Figures: naming performance of humans compared with SM89]
What’s Good About SM89
●   Regular word pronunciation
●   Irregular word pronunciation
●   Similar results to human studies
    –   Word naming latencies
    –   Priming effects
●   Behaviour the result of learning
    –   Ability develops in a human-like fashion
What’s Not Good About SM89
●   Nonword pronunciation
    –   Significantly worse than skilled readers
    –   JINJE, FAIJE, TUNCE pronounced strangely
●   Design was awkward
    –   Triples
    –   Feedforward network
    –   Compressed word frequencies
    –   Single-step computation
The SM94 Model
●   Seidenberg, Plaut, and McClelland revise SM89 in 1994
    –   Response to criticism of SM89’s poor nonword performance
●   We’ll call this model SM94
●   Compared humans’ nonword responses with the model’s responses
The SM94 Model

From Visual System
          |
  Graphemic Units (108 units)
          |
   Hidden Units (100 units)
          |
 Phonological Units (50 units)
          |
 To Articulatory System
How SM94 Differs From SM89
●   Feedback loops for hidden and phonemic units
●   Weights adjusted using cross-entropy method
    –   Complicated math, results in better learning
●   Not computed in a single step
●   No more triples
    –   Graphemes for word input
    –   Phonemes for word output
    –   Input based on syllable structure
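A hedged sketch of the loss in question: compared with squared error, cross-entropy penalizes confidently wrong output units far more heavily, which is one reason it trains these networks better:

    import numpy as np

    def squared_error(target, output):
        return np.sum((target - output) ** 2)

    def cross_entropy(target, output, eps=1e-12):
        """Cross-entropy for independent sigmoid output units."""
        output = np.clip(output, eps, 1 - eps)  # avoid log(0)
        return -np.sum(target * np.log(output)
                       + (1 - target) * np.log(1 - output))

    target = np.array([1.0, 0.0])
    for output in (np.array([0.9, 0.1]), np.array([0.1, 0.9])):
        print(squared_error(target, output), cross_entropy(target, output))
    # The confidently wrong response costs ~4.6 (vs ~0.2 when right) under
    # cross-entropy, but only ~1.6 (vs ~0.02) under squared error.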
Examples of SM94’s Units
[Figure: example graphemic and phonological unit codings]
Nonwords
●   May be similar to regular words
    –   SMURF ← TURF
●   In many cases several responses are plausible
    –   BREAT
        ●   ← EAT ?
        ●   ← GREAT ?
        ●   ← YEAH ?
Nonwords
[Figure: human nonword pronunciations]
How SM94 and DR93 Performed
[Figure: nonword naming results; “PDP” is SM94, “Rules” is DR93]
Comparing SM94 and DR93
●   Both perform well with the list of ~ 3000 words
    –   SM94 responds 99.7% correctly, DR93 78%
●   Both do well with nonwords
    –   SM89’s weakness caused by design issues
         ●   SM94 avoids such issues
    –   Neural networks equally capable for nonwords
Comparing SM94 and DR93
●   SM94 is a good performer
    –   Regular, irregular words
    –   Behaviour similar to human
         ●   Latency effects
         ●   Nonword pronunciation
●   DR93 still has problems
    –   Trouble with irregular words
    –   More likely to regularize words
Models and Dyslexia
●   Consider specific types of dyslexia
    –   Phonological Dyslexia
         ●   Trouble pronouncing nonwords
    –   Surface Dyslexia
         ●   Trouble with irregular words
    –   Developmental Dyslexia
         ●   Inability to read at age-appropriate level
●   How can word recognition models account for
    dyslexic behaviour?
DR93 and Dyslexia
●   Phonological dyslexia as damage to GPC route
    –   Cannot compile sounds from graphemes
    –   Relies on look-up
●   Surface dyslexia as damage to look-up route
    –   Cannot remember irregular words
    –   Relies on GPCs
●   Developmental dyslexia
    –   Problems somewhere along either route
         ●   Cannot form GPCs, slow look-up, for example
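Continuing the toy dual-route sketch from earlier, these damage patterns amount to disabling one route (purely illustrative):

    def recognize_lesioned(word, gpc_ok=True, lookup_ok=True):
        """Toy lesion demo: disable a route to mimic a dyslexia pattern."""
        if lookup_ok and word in IRREGULARS:
            return IRREGULARS[word]
        if gpc_ok:
            return apply_gpcs(word)
        raise RuntimeError("no intact route can pronounce this word")

    # Phonological dyslexia: GPC route damaged; known words survive via look-up.
    print(recognize_lesioned("HAVE", gpc_ok=False))     # /h@v/
    # Surface dyslexia: look-up damaged; irregulars get regularized by GPCs.
    print(recognize_lesioned("HAVE", lookup_ok=False))  # /hAv/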
SM89 and Dyslexia
●   Developmental dyslexia as damaged or missing hidden units
[Figure: a network with 200 hidden units vs. one with 100 hidden units]
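Continuing the numpy toy network from the learning sketch (a hypothetical demo, not the 1996 simulations), the damage can be mimicked by deleting hidden units from the trained net and comparing errors:

    # Reuses W1, W2, x, t, sigmoid from the earlier learning sketch.
    import numpy as np

    keep = np.array([True, True, False, False])  # delete 2 of the 4 hidden units
    y_intact = sigmoid(sigmoid(x @ W1) @ W2)
    y_lesion = sigmoid(sigmoid(x @ W1[:, keep]) @ W2[keep, :])
    print(np.sum((t - y_intact) ** 2))  # trained network: near 0
    print(np.sum((t - y_lesion) ** 2))  # lesioned network: error typically rises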
The 1996 Models and Dyslexia
●   Plaut, McClelland, Seidenberg, and Patterson
    study networks and dyslexia (1996)
    –   Variations of the SM89/SM94 models
         ●   Feedforward
         ●   Feedforward with actual word-frequencies
         ●   Feedback with attractors
         ●   Feedback with attractors and semantic processes
    –   Compare each to case studies of dyslexics
Feedforward and Dyslexia Case-Studies
[Figure: feedforward models compared with case studies]
Feedback, with Attractors and Semantics, and Dyslexia Case-Studies
[Figure: attractor/semantic models compared with case studies]
The 1996 Models and Dyslexia
●   The most complex damage produced the closest match to the case studies
    –   Not as simple as removing hidden units
         ●   Severing semantics
         ●   Distorting attractors
●   Results are encouraging
Questions or Comments
