Relation of spoken sentence length and processing time required

I would like to know whether there is good research that has tried to relate spoken sentence length directly to processing time, without focusing on the meaning of the words. It would be nice to see results for a wide range of sentence lengths (from 2 to at least 10 words). I've found some answers for reading time on this site (where the relation appears to be linear), but that is not quite what I'm interested in.


Introduction: Conceptual Short Term Memory

Conceptual short term memory (CSTM) is a construct based on the observation that most cognitive processing occurs without review or rehearsal of material in standard working memory and with little or no conscious reasoning. CSTM proposes that when one perceives a meaningful stimulus such as a word, picture, or object, it is rapidly identified and in turn activates associated information from long term memory (LTM). New links among concurrently active concepts are formed in CSTM, shaped by parsing mechanisms of language or grouping principles in scene perception, and by higher-level knowledge and current goals. The resulting structure is conscious and represents one’s understanding of the gist of a picture or the meaning of a sentence. This structured representation is consolidated into LTM if time permits. Momentarily activated information that is not incorporated into such structures either never becomes conscious or is rapidly forgotten. Figure 1 shows a cartoon of CSTM in relation to LTM and one component of conventional STM.

Figure 1. Conceptual short term memory (CSTM) is represented in this cartoon as a combination of new perceptual information and associations from long term memory (LTM) out of which structures are built. Material that is not included in the resulting structure is quickly forgotten. The articulatory loop system that provides a limited, rehearsable phonological short term memory (STM) is separate from CSTM. Adapted from Figure 1 in Potter (1993).

CSTM in Relation to Other Memory Systems and Other Models

Conceptual short term memory is a processing and memory system that differs from other forms of short term memory. In vision, iconic memory (Sperling, 1960) maintains a detailed visual representation for up to about 300 ms, but it is eliminated by new visual stimulation. Meaning plays little or no role. Visual short term memory (VSTM) holds a limited amount of visual information (about four items’ worth) and is somewhat resistant to interference from new stimulation as long as the information is attended to (Coltheart, 1983; Phillips, 1983; Luck and Vogel, 1997; Potter and Jiang, 2009). Although VSTM is more abstract than perception in that the viewer does not mistake it for concurrent perception, it maintains information about many characteristics of visual perception, including spatial layout, shape, color, and size. In audition, the phonological loop (Baddeley, 1986) holds a limited amount (about 2 s worth) of recently heard or internally generated auditory information, and this sequence can be maintained as long as the items are rehearsed (see Figure 1).

Conceptual short term memory differs from these other memory systems in one or more ways: in CSTM, new stimuli are rapidly categorized at a meaningful level, associated material in LTM is quickly activated, this information is rapidly structured, and information that is not structured or otherwise consolidated is quickly forgotten (or never reaches awareness). In contrast, standard models of working memory, such as Baddeley’s articulatory/phonological loop and visuospatial sketchpad together with a central executive (Baddeley, 1986, 2007), focus on memory systems that support cognitive processes taking place over several seconds or minutes. A memory system such as the phonological loop is unsuited for conceptual processing that takes place within a second of the onset of a stream of stimuli: it takes too long to be set up, and it does not represent semantic and conceptual information directly. Instead, Baddeley’s working memory directly represents articulatory and phonological information or visuospatial properties: these representations must be reinterpreted conceptually before further meaning-based processing can occur.

More recently, Baddeley (2000) proposed an additional system, the episodic buffer, that represents conceptual information and may be used in language processing. The episodic buffer is “a temporary store of limited capacity… capable of combining a range of different storage dimensions, allowing it to collate information from perception, from the visuo-spatial and verbal subsystems and LTM… representing them as multidimensional chunks or episodes…” (Baddeley and Hitch, 2010). Baddeley notes that this idea is similar to CSTM as it was described in 1993 (Potter, 1993).

Although Baddeley’s multi-system model of working memory has become the dominant model of short term memory, it neglects the evidence that stimuli in almost any cognitive task rapidly activate a large amount of potentially pertinent information, followed by rapid selection and then decay or deactivation of the rest. That can happen an order of magnitude faster than the setting up of a standard, rehearsable STM representation, permitting the seemingly effortless processing of experience that is typical of cognition. Of course, not all cognitive processing is effortless: our ability to engage in slower, more effortful reasoning, recollection, and planning may well draw on conventional short term memory representations.

Relation to Other Cognitive Models

Many models of cognition include some form of processing that relies on persistent activation or memory buffers other than standard working memory, tailored to the particular task being modeled. CSTM may be regarded as a generalized capacity for rapid abstraction, pattern recognition, and inference that is embodied in a more specific form in models such as ACT-R (e.g., Budiu and Anderson, 2004), the construction–integration model of discourse comprehension (Kintsch, 1988), the theory of long term working memory (Ericsson and Kintsch, 1995), and models of reading comprehension (e.g., Just and Carpenter, 1992; see Potter et al., 1980; Verhoeven and Perfetti, 2008).



Materials and Methods

Participants

Eleven female and nine male participants with normal hearing took part in the experiment, with an average age of 23 years (range: 19 to 36 years). The participants had pure-tone hearing thresholds of 15 dB hearing level (HL) or better at the standard audiometric frequencies from 125 to 8000 Hz. All participants performed better than 20/50 on the Snellen chart, indicating normal or corrected-to-normal vision (according to Hetherington, 1954). All experiments were approved by the Science Ethics Committee for the Capital Region of Denmark.

Stimuli

Speech Material

Thirty-nine items from the German Oldenburg Linguistically and Audiologically Controlled Sentence corpus (OLACS, see Uslar et al., 2013) were translated into Danish and recorded. Each sentence describes two characters and an action performed by one of the characters. All sentences contained a transitive full verb such as filme (“film” in Table 1), an auxiliary verb vil (“will”), a subject noun phrase den sure pingvin (“the angry penguin”), and an object noun phrase den søde koala (“the sweet koala”). Each speech item was recorded with two different sentence structures in order to vary the complexity of the sentences without changing the word elements: each sentence was realized both with a subject-verb-object structure (SVO I and II in Table 1) and with a syntactically complex object-verb-subject structure (OVS I and II in Table 1). While the SVO structure is canonical in Danish syntax and considered easy to process, written and spoken OVS sentences in Danish are more difficult to process (see Boeg Thomsen and Kristensen, 2014; Kristensen et al., 2014).

Table 1

Examples of the two sentence structures that were presented in the audio-visual picture-matching task.

Structure | Words 1–4 | Words 5–8 (PTD at onset of word 5) | English gloss
SVO I | Den sure pingvin vil | filme den søde koala | The angry penguin will film the sweet koala.
SVO II | Den søde koala vil | filme den sure pingvin | The sweet koala will film the angry penguin.
OVS I | Den sure pingvin vil | den søde koala filme | The angry penguin, the sweet koala will film.
OVS II | Den søde koala vil | den sure pingvin filme | The sweet koala, the angry penguin will film.

In both sentence structures (SVO and OVS), the participants needed to identify the semantic roles of the involved characters. Assigning the role of the character that carries out the action (the agent) and the character that is affected by the action (the patient) is possible only after the auxiliary verb vil. Up to the auxiliary verb, both sentence structures are ambiguous with respect to the grammatical roles of the involved characters and, thus, no thematic role assignment can be made. The auxiliary verb vil is either followed by the transitive verb filme (“film”; see word 5 in Table 1), indicating a subject noun phrase at the beginning of the sentence, or by the article den (“the”; see word 5 for OVS I and II), informing the listener that the first noun has the object role. Since word 5 of each sentence provides the information required to perform the comprehension task, the onset of word 5 is defined as the point of target disambiguation (PTD) for all sentence structures (see Table 1). Care was taken to select actions, agents, and objects that were non-stereotypical for any of the characters (baking, for example, would be a stereotypical action for a baker). This constraint was employed to make sure that the participants did not make premature role assignments based on anticipation of an agent’s characteristic action.
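To make the disambiguation rule concrete, here is a minimal Python sketch (the function name and representation are illustrative, not part of the study's materials): the first four words are shared across structures, so the structure becomes identifiable only at word 5.

```python
def classify_structure(words):
    """Classify a Danish OLACS-style sentence as SVO or OVS at the point
    of target disambiguation (PTD), i.e., the onset of word 5."""
    if len(words) < 5:
        return "ambiguous"   # before word 5, no thematic roles can be assigned
    if words[4].lower() == "den":
        return "OVS"         # article -> the first noun phrase is the object
    return "SVO"             # transitive verb (e.g., "filme") -> subject first

# Examples: SVO I and OVS I from Table 1
print(classify_structure(["Den", "sure", "pingvin", "vil",
                          "filme", "den", "søde", "koala"]))   # SVO
print(classify_structure(["Den", "sure", "pingvin", "vil",
                          "den", "søde", "koala", "filme"]))   # OVS
```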

Visual Material

Pictures from the OLACS picture set, originally created for eye-tracking purposes, were used (see Wendt et al., 2014, 2015). Each sentence was presented with either a target or a competitor picture. The picture illustrating the situation described in the spoken sentence was defined as the target picture (left panel of Figure 1). The competitor picture showed the same characters and action but with the roles of agent and patient interchanged (right panel of Figure 1). The competitor and the target picture were of the same size, and within each picture the agent was always shown on the left side in order to facilitate fast comprehension of the depicted scene. There were always two sentences that potentially matched a given picture (i.e., an SVO and an OVS sentence for each picture). For instance, the left picture shown in Figure 1 was used as the target picture for sentences SVO I and OVS II in Table 1. All pictures were shown to the participants before they performed the audio-visual picture-matching paradigm to familiarize them with the visual stimuli. All pictures are publicly available 1 .

Example of a visual stimulus pair used in the audio-visual picture-matching paradigm. The left figure shows a target picture corresponding to the sentences Den sure pingvin vil filme den søde koala (“The angry penguin will film the sweet koala”; SVO I in Table 1) or Den søde koala vil den sure pingvin filme (“The sweet koala, the angry penguin will film”; OVS II in Table 1). The right figure shows an example of the corresponding competitor picture for the same sentences. Only one of the pictures, either the target or the competitor picture, was presented during the paradigm.

Audio-Visual Picture-Matching Paradigm

The trial procedure for the audio-visual picture-matching paradigm is shown in Figure 2. After an initial silent baseline showing a fixation cross (for 1 s), the participants were shown a picture (either target or competitor) for a period of 2 s. This was followed by a 3-s background noise baseline, after which a sentence was presented in the same background noise. After the sentence offset, the background noise continued for an additional 3 s. A fixation cross was presented during the sound stimulus presentation. After the final noise offset, the participants were prompted to decide via a button press (left or right mouse button) whether the sentence matched the picture. After the comprehension task, the participants were instructed to rate how difficult it was to understand the sentence using a continuous visual analog scale (McAuliffe et al., 2012). They indicated their rating by positioning the mouse cursor on a continuous slider marked “easy” and “difficult” at the extremes.

Trial structure of the audio-visual picture-matching paradigm. Participants saw a picture on screen for 2000 ms, followed by a visual fixation cross and the simultaneous acoustic presentation of a sentence in background noise. The background noise started 3000 ms before sentence onset and ended 3000 ms after sentence offset. After the acoustic presentation, the participants’ task was to decide whether the picture matched the sentence. Pupil dilations were measured from picture onset until the participants’ response in the comprehension task. The comprehension task was followed by a subjective rating of the experienced difficulty.
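As a compact summary of the trial protocol, the following sketch lists the event schedule; the representation and names are illustrative, the fixed durations are those stated above, and the sentence, response, and rating phases have no fixed duration.

```python
# Illustrative event schedule for one trial (durations in seconds; None
# marks stimulus-dependent or self-paced phases). Not code from the study.
TRIAL_SCHEDULE = [
    ("fixation_cross_silent", 1.0),    # initial silent baseline
    ("picture", 2.0),                  # target or competitor picture
    ("noise_baseline", 3.0),           # background noise before the sentence
    ("sentence_in_noise", None),       # duration depends on the sentence
    ("noise_after_sentence", 3.0),     # noise continues after sentence offset
    ("match_response", None),          # button press: match / no match
    ("difficulty_rating", None),       # visual analog scale, self-paced
]

for event, seconds in TRIAL_SCHEDULE:
    duration = "stimulus-dependent / self-paced" if seconds is None else f"{seconds:.1f} s"
    print(f"{event}: {duration}")
```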

First, the participants performed one training block containing 10 trials. After training, each participant listened to 159 sentences, divided into two blocks. Both SVO and OVS sentences were presented either in a lower-level noise condition (+12 dB SNR) or in a higher-level noise condition (-6 dB SNR). The noise masker was a stationary speech-shaped noise with the long-term frequency spectrum of the speech. Filler trials were included in which the picture did not match either the characters or the action of the spoken sentence.

Cognitive Tests

At the end of the test session, the participants performed two cognitive tests: a digit-span test and a reading-span task. The digit-span test was conducted in a forward and a backward version. The forward version is thought to primarily assess working memory size (i.e., the number of items that can be stored), whereas the backward version reflects the capacity for online manipulation of the contents of working memory (e.g., Kemper et al., 1989; Cheung and Kemper, 1992). In the forward version, a chain of digits was presented aurally and the participants were asked to repeat back the sequence. In the backward version, the participants were asked to repeat back the sequence in reversed order. To calculate the scores for the digit-span test, one point was awarded for each correctly repeated sequence (according to the traditional scoring; see Tewes, 1991). The scores were expressed as percentage correct, i.e., how many of the 14 sequences were repeated correctly. In addition, pupil dilations were recorded while the participants performed the digit-span tests, to obtain a physiological correlate of effort.

In the reading-span task, the participants were presented with sequences of sentences on the screen and instructed to determine, after each sentence, whether the sentence made sense or not (Daneman and Carpenter, 1980). After each sentence, a letter was presented on the screen, and the participant was asked to remember it. After a set of sentences (set length varied between 3 and 11 sentences), the participant was prompted to recall the letters presented between the sentences. The number of letters correctly recalled was scored regardless of the order in which they were reported, and the reading-span score was defined as the aggregated number of letters correctly recalled across all sets in the test. Letters were used as targets rather than sentence words in order to make the task less reliant on reading abilities.
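The two scoring rules can be summarized in a short Python sketch (function names and data layout are assumptions for illustration, not taken from the study):

```python
def digit_span_score(correct_flags):
    """Percentage of digit sequences repeated correctly
    (one point per correct sequence, traditional scoring)."""
    return 100.0 * sum(correct_flags) / len(correct_flags)

def reading_span_score(recalled_sets, presented_sets):
    """Total letters correctly recalled across all sets,
    scored regardless of report order."""
    total = 0
    for recalled, presented in zip(recalled_sets, presented_sets):
        total += len(set(recalled) & set(presented))
    return total

# 10 of 14 sequences correct -> ~71.4%
print(digit_span_score([True] * 10 + [False] * 4))
# two of three presented letters recalled, order ignored -> 2
print(reading_span_score([["B", "K", "T"]], [["T", "B", "Q"]]))
```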

Apparatus

The experiment was performed in a sound-proof booth. Participants were seated 60 cm from the computer screen, and a chin rest was used to stabilize the head. Visual stimuli were presented on a 22″ computer screen with a resolution of 1680 × 1050 pixels. The auditory stimuli were delivered through two loudspeakers (ADAM A5X) located next to the screen. An eye-tracker (EyeLink 1000 desktop system, SR Research Ltd.) recorded participants’ pupil dilation at a sampling rate of 1000 Hz throughout the experiment. The eye-tracker was calibrated at the beginning of the experiment using a nine-point fixation stimulus. During each trial, pupil size and the pupil x- and y-traces were recorded, the latter for detecting horizontal and vertical eye movements. The eye-tracker sampled only the left eye.

Pupil Data Analysis

The recorded data were analyzed for the 20 participants in a similar way as reported in previous studies (Piquado et al., 2010; Zekveld et al., 2010, 2011) 2 . First, eye-blinks were removed from the recorded data by classifying samples for which the pupil value was more than 3 standard deviations below the mean pupil dilation. After removing the eye-blinks, linear interpolation was applied starting 350 ms before and ending 700 ms after each detected eye-blink. Trials for which more than 20% of the data required interpolation were removed from further data analysis. For one participant, more than 50% of the trials required interpolation, and this participant was therefore excluded from further data analysis (Siegle et al., 2003). The data of the de-blinked trials were smoothed with a four-point moving average filter. In order to control for individual differences in pupil range, the minimum pupil value of the entire trial time series (from the onset of the picture presentation until the comprehension task) was subtracted from each data point for each individual participant, and the pupil data were then divided by the range of the pupil size within the entire trial. Finally, the pupil data were normalized by subtracting a baseline value, defined as the average pupil value across the 1 s before sentence presentation (when listening to noise alone; see Figure 3). The pupil responses were averaged across all participants for each condition. Averaged pupil data were analyzed within three time epochs (see Figure 3). Epoch 1 covers the time from sentence onset until the point of target disambiguation. Epoch 2 is defined as the time from the point of target disambiguation until sentence offset. Epoch 3 covers the 3 s following sentence offset, during which the participants had to retain the sentence in memory until the comprehension question.

Normalized pupil dilation averaged across all participants for all four conditions. The time axis starts at the onset of sentence presentation. Horizontal lines indicate the interval used for baseline correction and the different epochs over which the mean pupil response was calculated.
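For readers who want to reproduce a comparable analysis, the following Python sketch implements the preprocessing steps described above, using the stated parameter values (3 SD blink criterion, 350/700 ms interpolation margins, 20% rejection threshold, four-point smoothing, range normalization, 1-s pre-sentence baseline). The array layout and function names are assumptions; details such as edge handling would need to match the original studies.

```python
import numpy as np

FS = 1000                    # eye-tracker sampling rate (Hz)
PRE_MS, POST_MS = 350, 700   # interpolation margins around a blink (ms)

def preprocess_trial(pupil, baseline_slice):
    """De-blink, smooth, range-normalize, and baseline-correct one trial."""
    pupil = np.asarray(pupil, dtype=float).copy()
    # 1) Detect blinks: samples more than 3 SD below the mean pupil size.
    blink = pupil < pupil.mean() - 3 * pupil.std()
    # 2) Widen each blink by the interpolation margins.
    pre, post = PRE_MS * FS // 1000, POST_MS * FS // 1000
    mask = np.zeros(pupil.shape, dtype=bool)
    for i in np.flatnonzero(blink):
        mask[max(0, i - pre): i + post + 1] = True
    # 3) Reject trials with more than 20% interpolated samples.
    if mask.mean() > 0.20:
        return None
    # 4) Linearly interpolate across the masked samples.
    bad, good = np.flatnonzero(mask), np.flatnonzero(~mask)
    if bad.size:
        pupil[bad] = np.interp(bad, good, pupil[good])
    # 5) Smooth with a four-point moving average.
    pupil = np.convolve(pupil, np.ones(4) / 4, mode="same")
    # 6) Normalize to the within-trial pupil range.
    pupil = (pupil - pupil.min()) / (pupil.max() - pupil.min())
    # 7) Subtract the 1-s pre-sentence (noise-only) baseline.
    return pupil - pupil[baseline_slice].mean()

# Usage on synthetic data: 8 s trace with one simulated blink.
rng = np.random.default_rng(0)
raw = rng.normal(5.0, 0.02, size=8000)
raw[3000:3060] = 2.0                              # simulated eye-blink dip
trial = preprocess_trial(raw, slice(1500, 2500))  # baseline: 1 s of noise alone
```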


Human Language and the Brain

Several areas of the brain must function together in order for a person to develop, use, and understand language.

Learning Objectives

Describe the role of each brain structure involved in language production

Key Takeaways

Key Points

  • Broca’s area is primarily responsible for language production; damage to this area results in productive aphasia.
  • Wernicke’s area is primarily responsible for language comprehension; damage to this area results in receptive aphasia.
  • The primary auditory cortex identifies pitch and loudness of sounds.
  • The angular gyrus is responsible for several language processes, including (but not limited to) attention and number processing.


Without the brain, there would be no language. The human brain has a few areas that are specific to language processing and production. When these areas are damaged or injured, capabilities for speaking or understanding can be lost, a disorder known as aphasia. These areas must function together in order for a person to develop, use, and understand language.

Language and the brain: The areas of the brain necessary for processing language: Broca’s area, Wernicke’s area, the primary motor cortex, the posterior middle temporal gyrus, and the middle and posterior superior temporal gyrus.

Broca’s Area

Broca’s area, located in the frontal lobe of the brain, is linked to speech production, and recent studies have shown that it also plays a significant role in language comprehension. Broca’s area works in conjunction with working memory to allow a person to use verbal expression and spoken words. Damage to Broca’s area can result in productive aphasia (also known as Broca’s aphasia), an impaired ability to produce speech. Patients with Broca’s aphasia can often still understand language, but they cannot speak fluently.

Wernicke’s Area

Wernicke’s area, located in the cerebral cortex, is the part of the brain involved in understanding written and spoken language. Damage to this area results in receptive aphasia (also called Wernicke’s aphasia). This type of aphasia manifests as a loss of comprehension: the patient can often still speak, but the language produced is nonsensical and incomprehensible.

Language and the brain: The areas of the brain necessary for language. Spoken word, cognition, and written word all are processed in different parts of the brain in different orders.

Auditory Cortex and Angular Gyrus

The primary auditory cortex, located in the temporal lobe and connected to the auditory system, is organized tonotopically: neighboring cells in the cortex respond to neighboring sound frequencies. It is responsible for identifying the pitch and loudness of sounds.

The angular gyrus, located in the parietal lobe of the brain, is responsible for several language processes, including number processing, spatial recognition and attention.


Conclusion

Wason and Reich (1979) referred to their emblematic sentence, “No head injury is too trivial to be ignored,” as a “verbal illusion” because most people derive a meaning that is not actually conveyed by its lexico-syntactic content. It is possible that the anomaly in Wason and Reich’s exemplar sentence overloads the parser, but that instead of causing a breakdown in understanding, the anomaly is resolved by a reversal of meaning (Kizach et al., 2016). This remains a possibility. Along with others, however, we suggest that shallow processing is in fact a processing preference: a first-pass system that, besides making rapid comprehension possible with minimal effort, also allows for pragmatic normalization and an ability to understand a speaker’s (or writer’s) intended meaning (Sanford and Graesser, 2010; Christianson, 2016).

In the present experiment, we have highlighted the interplay between syntactic complexity, plausibility, and cognitive effort in sentence comprehension among older and younger adults. It is important to note that the speech materials were presented under ideal listening conditions and in the absence of distraction. As such, we may well have underestimated the relative frequency with which shallow processing and presumed plausibility underlie everyday understanding of spoken discourse.


Do People Think Differently When Speaking Different Languages?

For a long time, the idea that people who speak different languages think differently because of the language itself was considered simply wrong, or just too difficult to test. However, as more and more multilingual people have reported thinking differently depending on the language they are speaking, the issue deserves renewed attention.

Current situation and research

Today, more than half of the world’s population can use two or more languages fluently (ilanguages.org). In some regions, people grow up learning more than one language in their social environment. For example, people in China are often trilingual, speaking a home dialect and Mandarin and later learning English at school; in Morocco, because of colonial influences, the population speaks Arabic, French, English, and Spanish, or some subset of these (ilanguages.org). The share of this population is steadily increasing because of more accessible migration and cultural integration: people moving to a country with a different language are required by the social environment to learn it. Thus, multilingual people make up a growing portion of the total population.

Research has categorized some key reasons why people speak multiple languages, as shown below (ilanguages.org):

· If you speak one language at home and another outside with friends, at school or at work.

· If your native language is full of foreign words.

· If you live in a country influenced by other cultures and eventually other languages.

· If you live in a country with open borders with other nations speaking different languages.

· If you love languages and are willing to dedicate time to learning them.

· Ability to learn and memorize a good number of new words.

· Interest in grammar and attention to detail.

From the list above, we can see that speaking multiple languages can be attributed to both external (social environment) and internal (desire to learn) factors.

There are several reasons why it is important to study how thinking differs across the languages one speaks. First, as mentioned above, this idea has long been considered wrong or untestable. Second, an ever-greater share of the population speaks multiple languages, so studies on this topic will affect, and might help, more people. Third, research supporting the idea already exists, so it is important to correct the earlier misunderstanding. In the rest of this chapter, we discuss how different languages shape different ways of thinking, drawing on experiments and research by specialized psychologists.

One of the first and most famous proponents of the idea that “each language holds a worldview that influences its speakers” (Whorf, 1930s) was Benjamin Lee Whorf, an American linguist. Whorf’s idea is often referred to as “Whorfianism” (Fishman, 1982) in later researchers’ discussions.

An exemplary study showing that people think differently when switching languages was conducted by Prof. Susan Ervin-Tripp, who tested Japanese-American women who spoke Japanese and English fluently (Ervin-Tripp). Participants were asked to complete sentences beginning with “When my wishes conflict with my family…” in both languages, and they ended the sentences with different meanings depending on the language. For example, the same participant ended with “it is a time of great unhappiness” in Japanese but “I do what I want” in English (Ervin-Tripp). Such a group-level pattern cannot be explained by coincidence, so we can say that people tend to think differently when speaking different languages.

Since this early speculation, more and more experiments around the world have supported the argument. Following up on Prof. Ervin-Tripp’s idea, Prof. David Luna asked Hispanic American bilinguals to interpret pictures of women in one language and, six months later, in the other (Luna, 2007). The findings match Ervin-Tripp’s results: participants’ descriptions differed in meaning between the two sessions. In the Spanish session, the women were generally depicted as extroverted and self-sufficient, while in the English session they were interpreted as more traditional and family-oriented (Luna, 2007). A series of such tests indicates that people do think differently in different language settings.

Psycholinguist Panos Athanasopoulos of Lancaster University conducted an experiment in which people used English and German to describe the same event. He found that participants talked about the same scene differently in the two languages: in English they tended to say “A man is walking,” while in German they said “A man leaves the house and walks to the store” (Athanasopoulos, 2015).

A different type of experiment was carried out in Israel, where researchers tested whether Israeli Arabs show a different mindset when speaking Arabic versus Hebrew, given the tension between the populations speaking the two languages. Participants saw an adjective, in Arabic or Hebrew, together with a name characteristic of one of the two languages. It was hypothesized that participants would “think more positively about Arabs when placed in an Arabic environment than in a Hebrew one” (Danziger & Ward, 2010), that is, that they would be more likely to connect positive adjectives with names from the same language. The results confirmed the hypothesis: the Israeli Arab participants associated Arab names more positively when speaking Arabic and less so in a Hebrew environment. This demonstrates an association between language environment and judgments.

All the studies above point to the idea that thinking differs across languages. This raises the question of what factors contribute to the phenomenon, and psycholinguists have proposed three main reasons.

The first is that the nature of a language’s vocabulary, grammar, and etymology determines the frame of thinking in that particular language (Brown, 1960). Most of the time we think in language, so it is not surprising that our thinking is influenced by the language in which we think. In vocabulary, English, for example, has words for many different colors, while Dani in New Guinea distinguishes only two, dark and light; and where English has one word for blue, Russian has two, one for light blue and one for dark blue (LSA). Verb grammar is another important difference: English marks present, past, and future with different verb tenses, while Mandarin verbs do not, so in Mandarin the interpretation of time comes from the sentence itself. Moreover, many languages place the verb at the beginning of a sentence to emphasize the action, whereas English usually places the verb after the subject noun.

Another important reason is “priming” in our thinking process when speaking a particular language (Economist, 2013). That is, although we do not notice it, we tend to link a language with the context in which we use it, and thus the context (the external environment) shapes the associations we form with that language. For example, a Chinese-American who speaks Mandarin at home is likely to be more family-oriented when speaking Mandarin and more likely to tell a story about family in a Mandarin environment; English, in contrast, is more likely to be linked with work or school, where it is the prevalent language. When an individual is symmetrically competent in both languages, understanding the meaning and context behind the words and sentences spoken, this is called biculturalism; when this is not the case, we refer to it as bilingualism.

The third factor mainly affects people who are bilingual but not bicultural. Because speakers process their two languages with different fluency, people are observed to be more rational and to make fewer assumptions in their weaker language. They tend to think faster, but end up choosing obvious-seeming yet wrong answers, in their first language, while they take longer and rule out the wrong options in their second language. People undoubtedly “feel looser, more spontaneous, perhaps more assertive or funnier or blunter, in the language they were reared in from childhood” (Keysar, 2012).

Although it has become clear that people are influenced differently when speaking different languages, it is still unclear whether learning a new language also influences the way we think in our mother tongue. Two follow-up experiments could address this:

1. Ask bi- and multilingual students at Haverford to recount an unforgettable childhood experience (5 min) in their mother tongue, and six months later ask the same students to do it again in their second language. Record, translate, and compare the responses for differences in meaning and mood.

2. Show bi- and multilingual students at Haverford a silent film clip by Charlie Chaplin and ask them to describe what they have watched in their mother tongue. One hour later, show the same clip and ask them to recount it in their second language. Record, translate, and compare the responses for differences in meaning.

Robb, A. (2014, April 23). Multilinguals have multiple personalities. Retrieved December 3, 2015, from https://newrepublic.com/article/117485/multilinguals-have-multiple-personalities

Luna, D., & Peracchio, L. (2007, December 12). Visual and linguistic processing of ads by bilingual consumers. Retrieved December 3, 2015, from http://www.researchgate.net/publication/228874579_Visual_and_linguistic_processing_of_ads_by_bilingual_consumers

Athanasopoulos, P., Damjanovic, L., Burnand, J., & Bylund, E. (2015). Learning to think in a second language: Effects of proficiency and length of exposure in English learners of German. The Modern Language Journal, 99, 138–153. doi: 10.1111/j.1540-4781.2015.12183.x




Key points

Dyslexia and AS are associated with distinct cognitive profiles.

Children with comorbid dyslexia+AS exhibit an additive combination of the cognitive deficits associated with dyslexia-only and AS-only.

Deficits in duration discrimination associated with dyslexia are mediated by symptoms of inattention resulting from the comorbidity between these disorders.

It is important to consider the impact of ADHD symptoms when investigating the neuropsychological profile of children with dyslexia.

