Can we enter data at the speed of thought?

In my subjective experience, the speed at which I can enter information into a computer through a keyboard is so slow, and my thoughts "run" so fast, that I find composing a text or doing a translation on a computer especially frustrating.

Is there any research comparing computer keyboard data-entry speed with the speed at which thinking happens (or with the speed of speech, which at least is measurable), or even just a paper discussing a correlation between data-input speed and the subjective feeling of smooth mental work?

Maybe research with stenographers (who are supposed to be able to write at the speed of speech) would be a good direction? An article suggesting such a correlation: Plover, the Open Source Steno Program: Writing and Coding With Steno.
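For a rough sense of the gap being described, here is a minimal back-of-the-envelope sketch in Python. The words-per-minute figures are ballpark values commonly quoted for each mode of entry, assumed here purely for illustration rather than taken from any study.

```python
# Rough comparison of how long a 500-word passage takes to enter at
# commonly quoted (approximate) rates. The rates are illustrative
# assumptions, not measurements from any particular study.
WORDS = 500

rates_wpm = {
    "average typist (~40 wpm)": 40,
    "fast typist (~80 wpm)": 80,
    "conversational speech (~150 wpm)": 150,
    "court stenographer (~225 wpm)": 225,
}

for label, wpm in rates_wpm.items():
    minutes = WORDS / wpm
    print(f"{label:34s} {minutes:5.1f} minutes")
```

With these assumed rates, a stenographer finishes the passage in roughly a fifth of the time an average typist needs, which is the kind of gap the Plover article discusses.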


I am unfamiliar with such research. However, considering this from a User Interface design perspective, as well as from a cognitive perspective, I would add one bit of clarification to the question.

Can we enter data at the speed of organized thought?

If the limitations in user interface technology are overcome to the point where the input gate is no longer the limitation, it is quite possible that we would discover that our thought processes are so chaotic as to render the new technology virtually useless.

As support for this I would point out that sitting and typing a message, while slow, is already a method by which thoughts evolve into lucid communication.


You could compare this with other kinds of communication limitations, or bottlenecks if you wish.

For example, one could have problems keeping up with one's own thoughts while speaking. This would cause some information to be lost and create a gap between the two interlocutors' representations of the subject.

I think it's like the "lost idea" phenomenon: too much new information comes into consciousness, and one cannot communicate quickly enough to keep up with one's "flow of ideas" while sharing them.

Who knows, maybe you are faster with a pencil, or a paintbrush.


How the Right Banking Vendor Strategy Can Speed Digital Transformation

Banks and credit unions are more eager than ever to leverage the latest technologies to boost efficiencies, improve the customer experience and enhance their brand. From instant account opening to marketing automation, digital capabilities enable banking providers to capture both profits and market share while developing relevant, one-to-one digital relationships with consumers.

While these capabilities were once nice to have, they’re now a necessary component just to survive. According to the Digital Banking Report, the lack of digital maturity now threatens the survival of many banks and credit unions, and more than half of those with assets under $10 billion say they haven’t made substantial progress toward their goals.

The Complex Journey to Digital Transformation

Many financial institutions are still working with siloed legacy systems that lack data functionality and can’t support modern data needs, digital channels, or marketing automation. As a result, financial institutions aren’t always using their consumer data the way they need to if they’re to meet their consumers’ growing expectations. According to Accenture, 80% of COOs believe their organization’s existence is threatened if they don’t update their technology to support rapid innovation.

While these technology upgrades are now mandatory, they’re not always so simple. It can take three to five years to complete a core conversion, and on top of that many financial institutions are already locked into contracts that can span several years. Even when the path is visible, overhauling technology can impact internal resources and operations.

As a result, many banks and credit unions still have yet to take the first steps to address their underlying infrastructure requirements. Given the added pressures of changing consumer behaviors and the impact of the pandemic, alternative approaches that make this long and complex process more agile are emerging and gaining popularity.

Laying the Groundwork: 3 Approaches to Digital Transformation

Some say that if you have to do something painful, it’s better to “rip off the Band-aid” and overhaul everything at once. Others will argue that it’s better to break it down in smaller chunks.

What this looks like for each institution is different. When considering a digital transformation, here are three schools of thought (one of which we feel is superior).

Approach One: Incremental

Financial institutions typically have four major milestones in their digital transformation journey: (1) upgrading legacy systems, (2) updating digital channels, (3) developing business intelligence and data analytics, and (4) implementing marketing automation.

One approach to tackling these requirements is to go incrementally. Selecting a new core system and making the conversion is usually the first milestone (and the most time-intensive, often taking multiple years). With a core system in place, financial institutions can then establish their digital channels (which can take another year to go live).

After that, they can add in a layer to query and report so the institution can extract data and use it in a meaningful way. Typically the last of the four tech integrations involves choosing a marketing platform that can be integrated with the infrastructure that’s been put in place.

Some financial institutions find the incremental approach less intimidating because it’s easier to get started. It also makes it possible to select best-of-breed vendors (smaller, specialized providers that offer an innovative or superior solution for a specific area of focus).

One downside of this approach is that tackling elements one at a time can be slow. And by the time they finish the last step, the first one may already need an update.

Approach Two: “Do-it-all” vendor

Banks and credit unions that prefer to manage fewer moving parts might opt to work with a single, large, do-it-all vendor that offers most of the functionality required for their digital transformation.

While integration may be relatively straightforward, this approach has its downsides. An important one is that financial institutions are unlikely to get best-of-breed technology, which means that in the areas outside the large vendor’s focus, the functionality will not be state-of-the-art and might require another update sooner rather than later. Additionally, many financial institutions have concerns about putting all their eggs in one basket and would rather hedge their bets by working with more than one vendor. Last but not least, large vendors usually have many clients and a more standardized approach, which can be especially hard on smaller institutions.

Approach Three: Concurrent

A third option, one that our partner Tyfone — a provider of digital banking solutions — excels at and that has become our recommended approach, is where a financial institution identifies the key projects that will propel their digital transformation and evaluates best-of-breed vendors who are suited to meet the specific strategic objectives of those projects. Providers are assessed at the same time for their agile mindset, interoperability and synergy.

Josh DeTar, Tyfone’s VP of Sales & Marketing, puts the importance of synergy among vendors succinctly: “As any financial institution evaluates their path for digital transformation, a few key points should be kept in mind. Move at the speed of digital, not the pace of banking. We can’t do it alone, find the right partners. Find partners who focus on building long-term relationships centered around a culture of continuous innovation.”

There are clear benefits to looking to clusters where solutions are already aligned and vendors have synergy with one another; the examples below show this synergy in action.

Vendor Synergy in Action

One great example of synergy in action comes from SIU, a credit union that has been serving Southern Illinois since 1938. As they researched a new core, they were aware that whatever choice they made would also define the set of vendors they could work with in the future. In other words, choosing a core meant choosing an ecosystem. That is how their choice of Corelation Keystone as their core led to Bankjoy as their digital channel provider and Prisma Campaigns as their marketing platform.

According to Mark Dynis, SIU’s VP of Marketing, “The longer you wait, the further behind you get. There’s no perfect time for it and we wanted to jump in as soon as we could. While features and cost are important, we do appreciate vendors that are good at relationships. We value their existing ones, as it makes integration easier for us, just as much as we value their openness to establish new ones with anyone else we may bring on board. Also, we’ve learned that even when the features are there, you can’t take full advantage of them if the vendor treats you as just another client. So how they work with us, intimately knowing our credit union, also matters.”

Numerica Credit Union, as another example, has been serving Eastern Washington state and the Northern Idaho Panhandle for over 80 years. When choosing providers, they evaluated and considered Prisma Campaigns as their marketing solution at the same time as their digital channel upgrade with Tyfone. Here’s what KayCee Murray, Senior Vice President of Information Technology, said about how vendor synergy has benefitted their credit union:

“Numerica has benefitted from Prisma Campaigns’ partnership with Tyfone as we’ve prepared for our digital banking platform upgrade. Tyfone already understands the power Prisma brings to the table, while Prisma already has a blueprint for how to best match their solution with Tyfone’s platform. Numerica, Prisma, and Tyfone are collaborating to create a plan to ensure our members have a seamless transition to our new digital banking platform.”

Increasing Speed of Digital Deployment

Despite the urgent need to evolve and adopt the latest capabilities, many financial institutions still lack the underlying infrastructure to support automation.

Upgrading infrastructure can often be a multi-year (and sometimes unending) process, and while financial institutions can’t rush digital transformation, or avoid building their foundation, there are undoubtedly ways to ease transitions and smooth the road ahead.

Banks and credit unions can reap the benefits of digital transformation sooner by getting clear on their strategic objectives and, based on that, choosing clusters of best-of-breed solution providers that have synergy with one another and are open to tackling each client’s unique needs with a collaborative, agile and iterative approach.

About Prisma Campaigns:
Prisma Campaigns is an all-in-one marketing automation platform for banks and credit unions that integrates with 20+ cross-technology solutions, including data providers, core banking systems and digital channels.

This article was originally published on June 3, 2021. All content © 2021 by The Financial Brand and may not be reproduced by any means without permission.


Key Study: Leading questions and the misinformation effect – "the car crash study" (Loftus and Palmer, 1974)

Memory is a reconstructive process, which means memories are actively and consciously rebuilt when we are trying to remember certain things. Elizabeth Loftus, her colleagues and others studying this cognitive phenomenon have shown that during the reconstruction phase our memories can be distorted if we are given false information about the event – this is called the misinformation effect.

Background Information

Some of Elizabeth Loftus’s first studies focused on how language can influence memories of particular events. Research prior to the following two 1974 experiments suggested that people are quite inaccurate when asked to report numerical details regarding events. Also, as memory has been shown to be reconstructive in nature, Loftus and Palmer predicted that the wording of a question could influence recall. They define a leading question as "one that, either by its form or content, suggests to the witness what answer is desired or leads him (sic) to the desired answer."

Have you ever seen this in a film or in a court-room drama on TV? The lawyer asks the question and the opposing lawyer shouts, "Objection! Leading the witness, your Honour". They are objecting to the use of a leading question – asking a question that guides (or leads) the respondent towards a particular answer.

For example, I would be asking a leading question if I asked you, "How much do you like Psychology?" I’m already implying in my question that you do in fact like Psychology; I simply want to know how much. You’re led to answer in a way that suggests you like this subject. What if you hate it, or find it immensely boring? It would be more difficult to respond that way to this particular question.

In the following two experiments, Loftus and Palmer first studied the effects of verbs in questions on speed estimates, and then whether these verbs could impact memory in other ways.

The following information has been adapted from our textbook, IB Psychology: A Student’s Guide.

Key Study Experiment #1: 5 verbs in leading questions.

In this first experiment, 45 college participants were divided into five groups of nine and watched seven short videos (5–30 seconds) taken from driver’s education courses, each involving a traffic accident of some kind. The participants were first asked an open-ended question, "Give an account of the accident you have just seen", which was followed by a series of specific questions about the accident. There was one critical question: "About how fast were the cars going when they … each other?" Each of the five groups was given a different verb in the blank; i.e., one group was asked with "hit", another with "smashed", and so on.

The results were as follows (mph):

A note on the films and speed estimates: four of the seven films were staged crashes made specifically for education purposes, so the precise speed of the vehicles in mph (miles per hour) is known. The results show the actual speed of the car in the video (first number) and the mean estimate from all participants (second number).

These results show that different verbs can lead to different speed estimates. The researchers provided two possible explanations for these results. The first explanation is that the participants might not have been sure about the speed and the verb simply led them towards a particular answer. If they were not sure of the speed and thought it was around 30 to 40 mph, the verb would have biased their answer in a particular direction. This doesn’t tell us much about the reconstructive nature of memory and is more a possible limitation of the research methodology, if anything.

However, they also hypothesized that the verb "smashed" may have caused the participants to remember the crash differently. During the process of imagining the crash in order to remember the details and answer the questions, the verb may have affected the memory itself. Because of the leading question, the participants might actually have been imagining a more severe crash and a faster speed than was really portrayed in the video; when remembering the incident and playing it over in their minds, the verb "smashed" might have led to an actual change in their memory of the video.

But these data don’t provide strong support for this hypothesis, so they conducted a second experiment, which is explained in the next section.

Experiment #2: The broken glass manipulation

In this study, 150 participants were put into three different groups, but all watched the same film (in smaller viewing groups). The film showed an accident involving many cars; the entire film lasted less than one minute, and the accident itself lasted four seconds. After the participants watched the film, they were given a questionnaire. The first question was again open-ended and asked the participants to describe the accident in their own words. This was followed by a series of specific questions, with one critical question.

  • 50 participants were asked “About how fast were the cars going when they smashed into each other?”
  • 50 participants were asked “About how fast were the cars going when they hit each other?”
  • 50 participants weren’t asked any questions about speed.

If the verb "smashed" significantly increased reports of broken glass when there was none, this would be stronger evidence that the verb was acting as false information that was actually changing the memories of participants in this condition.

One week later all participants returned and were asked a series of ten questions, but they did not watch the film again. One of the ten questions, which appeared in a random position for each participant, asked: "Did you see any broken glass?" with a check-box for Yes or No.

Once again the results showed that the speed estimates of those asked about the cars with the verb "smashed" were higher than those with the verb "hit" (10.46 mph and 8.00 mph respectively).

Here are the results regarding the memory of seeing broken glass:

Distribution of "Yes" and "No" responses across conditions

Response   Smashed     Hit         Control
Yes        16 (32%)    7 (14%)     6 (12%)
No         34 (68%)    43 (86%)    44 (88%)
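Because the counts above are nominal (frequency) data, the standard analysis here is a chi-square test of independence on the 2x3 contingency table. A minimal sketch in Python, assuming SciPy is available and using the counts reported in the table:

```python
# Chi-square test of independence on the broken-glass counts above
# (rows: Yes / No responses; columns: Smashed / Hit / Control conditions).
from scipy.stats import chi2_contingency

observed = [
    [16, 7, 6],    # "Yes, I saw broken glass"
    [34, 43, 44],  # "No, I did not"
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

A significant result indicates that reports of broken glass depend on which verb (if any) was used in the earlier question.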

These results provide some evidence for the explanation that the misinformation effect was occurring. Perhaps the verb “smashed” was influencing people’s recollections of the crash and they were remembering it as being more severe than it really was, which is why they could remember seeing broken glass even when there wasn’t any in the original video.

Loftus and Palmer argue that two types of information make up someone’s memory of an event. The first is the perception of the details during the actual event, and the second is information that is processed after the event itself. In this case, information from our environment might impact our memory processes, which could lead to distortions. They argue that the verb "smashed" provides additional external information because it suggests that the cars did actually smash into each other. A verb with connotations of a stronger and more severe impact than "hit" or "collided" could result in a memory of something that never happened, like remembering broken glass when there was none. Remember that the second question was asked an entire week after the original videos were viewed and the leading questions asked. The participants were reconstructing their memories after one week, and the difference between the scores is quite significant.

  • These studies can be used to show the reconstructive nature of memory.
  • If asked about “one study” it would be fine to write about both of these versions of the same experiment – the focus should be on the second one, though.
  • The second study is the important one to be able to explain in exams as it shows the reconstructive nature of memory.
  • For the IA, I would not use the broken glass version of the experiment, as it gathers nominal data and this makes the inferential statistics a little more difficult. The best option is to choose two verbs from the first study and replicate that (see the sketch after this list).
  • This study could be used for schema theory, but I prefer other studies (e.g. Bransford and Johnson)
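As a sketch of the replication idea mentioned in the notes above, the analysis for a two-verb version boils down to an independent-samples t-test on the two groups' speed estimates. The code below simulates hypothetical data purely to show the analysis steps; the group means, spreads, and sample sizes are arbitrary assumptions, not results from the original study.

```python
# Hypothetical two-verb replication: compare speed estimates (mph) for a
# "smashed" group and a "hit" group with an independent-samples t-test.
# The data are simulated for illustration only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
smashed = rng.normal(loc=40, scale=8, size=20)  # assumed mean/SD/N for the demo
hit = rng.normal(loc=34, scale=8, size=20)

result = ttest_ind(smashed, hit)
print(f"smashed mean = {smashed.mean():.1f} mph, hit mean = {hit.mean():.1f} mph")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```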

Critical Thinking Questions

  • How do these experiments show that memory is reconstructive?
  • Is this study limited in population validity? For example, look at the accuracy of their guesses in the first experiment – is this evidence that perhaps these results might not apply to other groups of people? (Think about experience).
  • What are the possible practical implications of these findings?
  • What are the ethical considerations involved in these experiments?
  • Can you find any other limitations with this study?

Loftus, Elizabeth F., and John C. Palmer. “Reconstruction of Automobile Destruction: An Example of the Interaction between Language and Memory.” Journal of Verbal Learning and Verbal Behavior 13.5 (1974): 585-89.

Travis Dixon is an IB Psychology teacher, author, workshop leader, examiner and IA moderator.


Are we ready for the genetic revolution?

When the time comes, and experts believe it is coming sooner than we expect or are prepared for, genetic meddling with the human genome may drive social inequality to an unprecedented level, with differences not just in wealth distribution but in what kind of being you become and who retains power. This is the kind of nightmare that Nobel Prize-winning biochemist Jennifer Doudna talked about in a recent Big Think video.

Video: "CRISPR 101: Curing Sickle Cell, Growing Organs, Mosquito Makeovers" – Jennifer Doudna, Big Think (www.youtube.com)

At the heart of these advances is the dual-use nature of science, its light and shadow selves. Most technological developments are perceived and sold as spectacular advances that will either alleviate human suffering or bring increasing levels of comfort and accessibility to a growing number of people. Curing diseases is what motivated Doudna and other scientists involved with CRISPR research. But with that also came the potential for altering the genetic makeup of humanity in ways that, again, can be used for good or evil purposes.

This is not a sci-fi movie plot. The main difference between biohacking and nuclear hacking is one of scale. Nuclear technologies require industrial-level infrastructure, which is very costly and demanding. This is why nuclear research and its technological implementation have been mostly relegated to governments. Biohacking can be done in someone's backyard garage with equipment that is not very costly. The Netflix documentary series Unnatural Selection brings this point home in terrifying ways. The essential problem is this: once the genie is out of the bottle, it is virtually impossible to enforce any kind of control. The genie will not be pushed back in.


Automatic and Controlled Processes in Social Cognition

A great deal of social cognition theory and research is concerned with questions about the degree to which social information processing involves active, conscious analysis of the social environment. Historical models of person perception and attribution regarded the perceiver as operating as a “lay scientist” (e.g., Heider, 1958; Kelley, 1967), examining evidence and reasoning about its logical implications; research in this tradition was largely mute, however, with respect to whether these putative mental processes involved the conscious application of deductive principles or processes of a more preconscious variety. As Gilbert (1998) observes, it is quite possible for a mental system to follow a reasoning algorithm without requiring that the conscious mind know or consciously apply the relevant principles. Mental processes that do not involve active, conscious ratiocination have come to be called automatic or implicit social cognition and have been the subject of a massive amount of recent research.

The contrast between conscious, effortful, controlled mental processes on one hand and unconscious, automatic ones on the other became a prominent issue in cognitive psychology largely due to influential papers by Posner and Snyder (1974), Shiffrin and Schneider (1977), and Hasher and Zacks (1979), yet there is quite a history of interest in the extent to which the mind might be operating in ways unknown to the conscious self. For example, researchers interested in human performance have long been interested in the processes involved in skill acquisition, whereby an initially novel task that requires considerable effort and attention becomes relatively automatic with practice (e.g., Fitts & Posner, 1967). After they become automated, skills can be triggered and used without much involvement of the conscious mind. In a different vein, psychoanalytically oriented researchers have been interested in how unconscious motivations might shape processes of perception and cognition (e.g., Erdelyi, 1974). Cognitive research of this sort addresses profound questions concerning who is running the show. Does the conscious self call the shots, or is the brain going about its business without much interference from the conscious thinker? In this section, we first review research on automatic aspects of social cognition, and then we consider the case that can be made for the capacity of the conscious mind to control and regulate processes of social cognition. Finally, we consider some of the ways in which automatic and effortful processes can interact to determine jointly the course of perception, thought, and action.

Automatic Social Cognition

The foundations for social-psychological treatments of the issue of automaticity have been established in the work of Bargh (e.g., 1982; Bargh & Chartrand, 1999; Bargh & Ferguson, 2000). Synthesizing the insights emerging from disparate research areas touching on the issue of automaticity, Bargh (1994) argued that the notion of automatic mental processes is complex and multifaceted. He argued that the term has been used to refer to four distinct qualities of information processing: awareness, intention, efficiency, and control. That is, a process tends to be considered automatic if it (a) occurs without the person’s awareness, (b) occurs without the person’s intention, (c) occurs with great efficiency and does not require much mental capacity, or (d) occurs in a manner that is difficult to prevent or stop. Not all four criteria are necessary for a process to be considered automatic. When one or more of these characteristics is present, the relevant process is often deemed to be relatively automatic.
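The "one or more of the four qualities" rule described above can be made concrete with a small toy sketch; the class and function names here are invented for illustration and are not part of any published formalization.

```python
# Toy encoding of Bargh's (1994) four qualities of automaticity, as described
# above: a process is often treated as relatively automatic when at least one
# of the four features is present. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class ProcessProfile:
    lacks_awareness: bool   # occurs without the person's awareness
    lacks_intention: bool   # occurs without the person's intention
    is_efficient: bool      # requires little attentional capacity
    hard_to_control: bool   # difficult to prevent or stop once started

def relatively_automatic(p: ProcessProfile) -> bool:
    return any([p.lacks_awareness, p.lacks_intention,
                p.is_efficient, p.hard_to_control])

# A well-practiced skill: intentional and known to the actor, but highly efficient.
print(relatively_automatic(ProcessProfile(False, False, True, False)))  # True
```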

A particularly compelling and influential demonstration of the implicit operation of the mind was provided by Warrington and Weiskrantz (1968). Their research documented that individuals suffering from anterograde amnesia, who are unable to consciously recollect their recent experiences, nevertheless showed a clear benefit from that experience in the performance of indirect tests of memory, such as completing word fragments. Although these patients have no explicit memory for the words they saw during a study period, they nevertheless were better able to complete word fragments when the corresponding word had indeed been previously studied. This research clearly indicates that memories can be quite influential even when there is no conscious awareness of the relevant prior episodes.

Social cognition researchers have sought to investigate the role of awareness in social cognition in several ways. One approach has simply been to demonstrate that individuals are often unable to articulate accurately the factors that are important in shaping their behavioral choices (e.g., Nisbett & Wilson, 1977). This fact obviously implies that people are generally unaware of the processes at work behind the scenes in the preconscious mind. Another approach to documenting that some processes occur without awareness has been adopted in research on priming. The basic idea of priming research is quite straightforward. Individuals are exposed to a task or environmental context that is designed to activate a particular mental representation. Then a second, ostensibly unrelated task is performed, and the researcher seeks to determine whether the previously activated representation exerts any influence on information processing in the second task. Research of this sort conclusively demonstrates that concepts that have been activated in one context can continue to influence social cognition in subsequent, unrelated contexts, by virtue of their enhanced accessibility (Higgins, 1996). A common effect of such priming is that subsequently encountered information is assimilated toward the activated concept. For example, Srull and Wyer (1979) showed that activating hostile concepts in a language-processing task caused participants to form more negative impressions of an ambiguous social target in a subsequent impression formation task, compared to participants who never had the hostile concepts activated in the initial task. It is typically assumed that this assimilation process occurs because the fortuitously activated concepts are used to disambiguate later information, and the perceiver is presumed to be oblivious to the fact that it is occurring.

Perhaps the best evidence that priming effects occur without the perceiver’s awareness comes from research that employs subliminal priming techniques. In this research, concepts are activated by exposing participants to extremely brief stimulus presentations (see Bargh & Chartrand, 2000, for procedural details). Although perceivers are unable to describe the stimuli to which they have been exposed, they nevertheless show evidence of priming effects. We have already described one experiment by Devine (1989) that showed that subliminal activation of words associated with the African American stereotype caused perceivers to view an ambiguously aggressive target as more hostile, compared to individuals who had not been primed with the stereotypic concepts. Similar findings have been reported by other researchers (e.g., Bargh & Pietromonaco, 1982), confirming that priming effects can occur outside of the perceiver’s conscious awareness.

It is usually assumed that for these assimilative priming effects to occur, not only must the relevant concept be accessible, but it must also be applicable (Higgins, 1996). In line with this proposition, Banaji, Hardin, and Rothman (1993) demonstrated that priming gender stereotypes resulted in more stereotypical impressions of ambiguous targets, but only when the target was a member of the relevant gender group—that is, activating masculine concepts resulted in the perception of ambiguous male targets in a more stereotypical manner, but it largely did not affect perceptions of female targets. Conversely, activating feminine concepts resulted in perceiving ambiguous female targets in a more stereotypical manner, but it did not affect perceptions of male targets. Although priming effects do operate under the constraints of applicability, the processes involved in using or failing to use activated concepts as a basis for disambiguating social targets appear to operate largely without any awareness on the perceiver’s part.

It is not inevitably the case that priming results in assimilation to the primed concepts. For example, Herr (1986) demonstrated that when activated concepts are sufficiently extreme, they can produce contrast effects. A contrast effect is said to occur when an object is judged more extremely in the direction opposite to the activated concept. For example, if an ambiguous target were judged to be significantly less hostile after an African American stereotype had been activated (compared to an unprimed control group), this would constitute a contrast effect. The mechanism producing contrast effects involves using the activated concept as a comparison standard rather than as an interpretive frame. Thus, in the case of Herr’s research, for example, the target person is compared to the activated standard and is consequently seen as relatively less hostile, given the extremity of the standard. The question of whether contrast effects occur automatically has been a matter of continuing theoretical dispute (e.g., Martin, Seta, & Crelia, 1990; Stapel & Koomen, 1998).

Another hallmark of automatic processing is the occurrence of unintended effects. The assimilative priming effects just reviewed certainly meet this criterion of automaticity, because it is clearly not the case that individuals intend to use subliminally activated concepts to guide subsequent impressions. Another domain providing compelling evidence for unintended aspects of impression formation is research on spontaneous trait inferences. The question at stake in this research concerns whether social perceivers spontaneously infer that observed behavior implies that the actor has a corresponding personality trait. In historical models of this process of dispositional inference (e.g., Jones & Davis, 1965), it was typically assumed that perceivers engage in a fairly extensive deductive reasoning process to determine the trait implications of observed behavior, comparing the effects of the observed behavior with the simulated effects of not performing it or of performing an alternative option. In contrast, more recent research on spontaneous trait inferences suggests that perceivers automatically infer the trait implications of behavioral information, even if that is not their conscious intention. For example, Winter and Uleman (1984) presented participants with behavioral descriptions (e.g., Billy hit the ballerina) and subsequently asked participants to recall the presented descriptions with the aid of cues. The cues were either semantically related to the theme of the description (e.g., dance) or were related to the trait implications of the behavior (e.g., hostile). Cued recall performance was markedly better when trait cues were available. In a different paradigm, Uleman, Hon, Roman, and Moskowitz (1996) showed that people spontaneously made trait inferences when processing behavioral descriptions, even when such inferences actually impaired performance of their focal task. In this paradigm, participants read behavioral descriptions on a computer screen. Immediately after the presentation of a description, a word appeared on the screen and participants had to indicate whether that exact word had appeared in the preceding sentence. When the target word was a trait that was implied by the behavioral description, reaction times were slower and error rates were higher than they were when the same target words followed similar descriptions that did not imply the traits in question. This kind of evidence suggests that fundamental aspects of social perception can occur quite spontaneously, without any conscious instigation on the part of the perceiver.

Trait inferences are but one manifestation of unintended social cognition. In a growing program of research, Bargh and colleagues have shown that without the formation of any conscious intention, primed or salient stimuli can trigger spontaneous behavior (e.g., Bargh, Chen, & Burrows, 1996). For example, Bargh et al. showed that activating stereotypes about elderly persons resulted in slower rates of walking. Similarly, Chen and Bargh (1997) showed that subliminal presentation of African American (as compared with European American) faces resulted in more hostile behavior in a subsequent verbal game played with an unprimed partner. Moreover, the unprimed partner’s behavior also became more hostile as a consequence, showing that self-fulfilling prophecies can emerge in a very automatic manner—even when participants are unaware that stereotypical concepts have even been activated and have formed no conscious intention to act in a manner consistent with these concepts. Although the precise mechanisms responsible for these fascinating effects have not been isolated, the very existence of the phenomenon provides a potent demonstration of the potential automaticity of not only social thought, but also interpersonal interaction.

A principal advantage of automatic reactions lies in the fact that they are largely not dependent on the availability of processing resources. Because of the great efficiency with which they unfold, automatic processes do not require much investment of attentional capacity or perceiver motivation. Whereas novice drivers can find it harrowing to coordinate all of the requisite activities (shifting gears, monitoring traffic, steering, braking, etc.), after the process has been automated, not only can these tasks be easily performed, but the driver may also have sufficient reserve capacity available for singing along with the stereo or engaging in mobile phone conversations. Empirical confirmation of the resource-conserving properties of automatic mental processes was provided in a series of experiments by Macrae, Milne, and Bodenhausen (1994). In one of their studies, they asked participants to engage in two tasks simultaneously: a visual impression-formation task that involved reading personality descriptions of four different persons, and an audio task that involved listening to a description of the geography and economy of Indonesia. For half of the participants, stereotypes were activated in the impression-formation task (by providing information about a social group to which each target belonged). Some of the personality information was consistent with stereotypes about the relevant group, and the rest was irrelevant to such stereotypes. One might expect that giving these participants an additional piece of information to integrate would simply make their task all that much harder—but in fact, the introduction of the stereotype provided a framework that participants could spontaneously use to organize their impressions, making the process of impression formation much more automatic and efficient. As a consequence, participants who knew about the group memberships of the social targets not only recalled more information about the targets (as revealed in a free recall measure), they also learned more information about Indonesia (as revealed in a multiple-choice test). The automatic reactions triggered by stereotype activation provided a clear functional benefit to perceivers by making the process of impression formation more efficient, thereby freeing up attentional resources that could be devoted to the other pressing task.

When automatic effects of these sorts occur without awareness, intention, or much attentional investment, is there any hope of preventing them or stopping them after they start? In the realm of automatic stereotyping effects, Bargh (1999) has argued that the prospects for controlling such effects are slim to none. Indeed, the final hallmark of an automatic process is its imperviousness to control. In line with Bargh’s assertion, the previously described research of Devine (1989) showed that even low-prejudice individuals who disavow racist stereotypes are still prone to showing automatic effects of stereotype activation. Similarly, Dunning and Sherman (1997) found that implicit gender stereotyping occurred independently of participants’ level of sexism. However, other research has begun to suggest that at least some of the time, it may be possible to develop control over automatic processes. Uleman et al. (1996), for example, found that with practice, people could learn to avoid making spontaneous trait inferences. Similarly, it seems that egalitarian individuals can also learn to control automatic stereotyping effects, at least under some circumstances (e.g., Wittenbrink, Judd, & Park, 1997). It is toward the processes through which mental control can be achieved that we now turn our attention.

Controlled Social Cognition

The process of controlling thought and action, at least in relatively novel and unpracticed domains, requires attention. Whereas automatic processes occur efficiently and thus require little expenditure of mental resources, effortful, controlled processes come with an attentional price to pay. Moreover, controlled processes typically require intentional deployment, and they occur in a manner that is at least partially accessible to the conscious mind. Whereas many computational processes of implicit cognition are regarded to be massively parallel, attention and consciousness represent a processing bottleneck that results in highly selective and serial information processing (e.g., Simon, 1994). As Simon notes, connecting one’s motives to one’s thought processes requires a system that can cope with the constraints imposed by limitations of attentional capacity.

Attentional capacity has turned out to be a major theoretical construct in social cognition research (for a review, see Sherman, Macrae, & Bodenhausen, 2001) precisely because it plays such a fundamental role in determining whether it will be possible for the perceiver to engage in controlled processing. Without sufficient mental resources, automatic mental processes are presumed to operate in an unchecked manner, and it is difficult or impossible for perceivers to impose their will and exercise control over the workings of their own minds. Early theorizing about attentional capacity assumed a simple, unitary structure to the mental resources that are used in conscious, controlled information processing. However, advances in cognitive neuroscience have made it possible to identify a more differentiated set of working memory resources (e.g., Roberts, Robbins, & Weiskrantz, 1998). Baddeley (1998) proposed that there are three principal facets to working memory, each with a limited capacity for holding information: a phonological buffer, a visuospatial sketch pad, and a central executive. It is the latter resource that is most important to social-cognitive theorizing, because it is the central executive that governs the conscious planning, execution, and regulation of behavior. When these executive resources are in ample supply, individuals are generally able to exercise a considerable degree of control over their conscious thought processes and behavioral responses; when these finite resources have been usurped by other ongoing processes, however, the resulting executive dysfunction can put perceivers in the position of failing to produce intended patterns of thinking and responding. Under this circumstance, thought and action will be dictated more by potent automatic reactions than by the force of the conscious will.

Research on mental control has undergone a dramatic resurgence in the past decade (for an excellent sampling of research topics, see Wegner & Pennebaker, 1993). Wegner’s research on thought suppression has been a major impetus for this explosion of research attention (e.g., Wegner, 1994; Wenzlaff & Wegner, 2000). In this research, the prospects for mental self-control have been investigated by providing participants with a self-regulatory injunction to consciously pursue (e.g., don’t think about white bears or don’t be sexist). Success is measured simply by the number of times the unwanted response is generated, and success rates can be considerable—provided that the person has ample attentional resources. However, if a cognitive load is imposed on the person (e.g., a secondary task must be completed simultaneously, such as rehearsing an eight-digit number), not only are unwanted responses likely to emerge, but they are also likely to occur with even greater frequency than they would if the person had never tried to suppress them in the first place (i.e., a rebound effect).

Wegner (1994) proposed a theoretical account for this state of affairs; his account rests on the assumption that mental control reflects the operation of two separate processes. A monitoring process is responsible for checking to see whether undesired responses (e.g., sexist thoughts) are occurring. If it should detect such responses, an operating process is triggered that serves to squelch the unwanted response by finding an acceptable substitute response (e.g., thoughts about a target’s occupation rather than her gender). Crucial to his model are two additional assumptions. First, the monitoring process can do its work in a relatively automatic manner, but must of necessity keep active in memory (even if only at a relatively low level) a representation of the undesirable response so that it can be recognized if it should appear. Thus, the monitoring process ironically keeps an unwanted thought or response salient in the perceiver’s mind. This recurrent activation of the undesired target stimulus is not a big problem, so long as the operating process can counteract the unwanted response whenever it does exceed the threshold necessary for conscious awareness. However, a second assumption of the model is that the operating process is relatively effortful and requires sufficient attentional resources. Hence, if these resources are being depleted by other tasks (e.g., rehearsing a digit string), the enhanced accessibility created as a byproduct of the monitoring process cannot be effectively checked, and the stage is set for rebound effects.
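To make the two-process account above more concrete, here is a toy simulation of the monitoring and operating processes. The numerical parameters are arbitrary illustrative assumptions, not values from Wegner's model; the point is only that when the effortful operating process is disabled (as under cognitive load), the accessibility maintained by the automatic monitoring process accumulates unchecked.

```python
# Toy simulation of Wegner's (1994) ironic process account, as described above.
# On every step the automatic monitoring process keeps the unwanted thought
# weakly active; the effortful operating process dampens it only when
# attentional resources are free. All parameter values are arbitrary.
def unwanted_thought_accessibility(steps: int, under_load: bool) -> float:
    accessibility = 0.0
    for _ in range(steps):
        accessibility += 0.10  # monitoring: ironic, low-level priming
        if not under_load:
            accessibility = max(0.0, accessibility - 0.09)  # operating: suppression
    return accessibility

print("ample resources:", round(unwanted_thought_accessibility(50, under_load=False), 2))
print("under load     :", round(unwanted_thought_accessibility(50, under_load=True), 2))
```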

These assumptions have been explored in the domain of stereotype suppression by several researchers. In the contemporary social world, it has become largely taboo to respond to many stigmatized social groups in terms of negative stereotypes and prejudices that have historically been prevalent. In the previous section, we reviewed several pieces of evidence suggesting that stereotypes can exert numerous automatic effects on information processing. If so, what are the prospects for success when perceivers strive to follow the dictates of cultural injunctions against thinking discriminatory thoughts about these stigmatized groups? In an initial demonstration, Macrae, Bodenhausen, Milne, and Jetten (1994) showed that individuals who strive to prevent stereotypical reactions from entering their thoughts can succeed as long as they are actively pursuing that objective. However, consistent with the implications of Wegner’s ironic model of mental control, this process rendered the unwanted thoughts hyper-accessible, and Macrae et al. found that after the suppression motivation had dissipated, rebound effects emerged when subsequent members of the stereotyped group were encountered. That is, participants reported even more stereotypical reactions to the subsequent group members than did individuals who had never engaged in any previous stereotype suppression. These findings confirm that intentionally suppressing stereotypes ironically involves repeatedly priming them, albeit at relatively low levels—and this in turn renders the stereotypes all the more accessible. If the operating process that is commissioned to direct attention away from unwanted thoughts should be compromised either by the imposition of a cognitive load or by the dissipation of the motivation required for its activity (being a relatively effortful, controlled process), this in turn can lead to rebound effects.

Additional ironic implications of stereotype suppression were uncovered in subsequent research. For example, trying not to think stereotypical thoughts about an elderly target resulted in better memory for the most stereotypical characteristics displayed by the target (Macrae, Bodenhausen, Milne, & Wheeler, 1996). Moreover, these effects are not limited to situations in which an overt, external requirement for thought suppression is imposed; even when suppression motivation was self-generated in a relatively spontaneous manner, ironic effects were observed to result (Macrae, Bodenhausen, & Milne, 1998). Other research suggests that rebound effects of this sort are more likely to emerge in high-prejudice persons (Monteith, Spicer, & Toomen, 1998) and in situations in which the perceiver is unlikely to have chronically high levels of suppression motivation (Wyer, Sherman, & Stroessner, 2000). These qualifications are quite consistent with the general idea that even the process of mental control itself is subject to some degree of automation. With practice, the initial effortfulness of stereotype suppression may be replaced by relative efficiency.

Another form of controlled processing that has received considerable attention from social cognition researchers is judgmental correction. When perceivers suspect that their judgments have been contaminated by unwanted or inappropriate biases, they may take steps to adjust their judgments in a manner that will remove the unwanted influence (e.g., Wilson & Brekke, 1994). Whereas the initial processes that produced the bias are likely to be automatic ones, the processes involved in correcting for them are usually considered to be effortful. Hence, they require perceiver motivation and processing capacity for their deployment. One particularly noteworthy domain in which such hypotheses have been investigated is research on person perception. In particular, it has long been established that people are susceptible to a correspondence bias, in which they tend to perceive the behavior of others to be a reflection of corresponding internal dispositions—even when there are clear and unambiguous situational constraints on the behavior (e.g., Jones & Harris, 1967; Gilbert & Malone, 1995). The previously described research on spontaneous trait inference is consistent with the idea that people often immediately assume that behavior reflects the actor’s dispositions. In an influential theoretical assessment of this bias, Gilbert (e.g., 1998) proposed that dispositional inferences involve three distinct stages. In the categorization stage, the observed behavior is construed in terms of its trait implications (e.g., Hannah shared her dessert with her brother could be categorized as kind). Then the inferred trait is ascribed to the actor in the characterization stage. Both of these stages are assumed to be relatively automatic—that is, they occur spontaneously, efficiently, and without intention. In a third correction stage, individuals may consider the situational constraints that might have influenced the behavior (e.g., Mommy threatened Hannah with retribution if she failed to share her dessert) and adjust their dispositional inferences accordingly (e.g., perhaps Hannah isn’t so kind after all). This correction process is assumed to be a controlled activity that requires motivation and processing capacity for its execution.
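A minimal sketch of the three-stage sequence just described, with the correction stage gated on available resources; the function and its inputs are illustrative inventions, not a published implementation of Gilbert's model.

```python
# Toy sketch of Gilbert's three-stage account of dispositional inference:
# categorization and characterization run automatically, while the effortful
# correction stage is skipped when the perceiver is under cognitive load.
def infer_disposition(implied_trait: str,
                      situational_constraint: bool,
                      under_cognitive_load: bool) -> str:
    # Stage 1 (categorization) + Stage 2 (characterization): automatic.
    inference = f"The actor is {implied_trait}."
    # Stage 3 (correction): resource-dependent, so it runs only when resources are free.
    if situational_constraint and not under_cognitive_load:
        inference += " (Corrected: the behavior may reflect the situation instead.)"
    return inference

print(infer_disposition("anxious", situational_constraint=True, under_cognitive_load=False))
print(infer_disposition("anxious", situational_constraint=True, under_cognitive_load=True))
```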

In numerous experiments, Gilbert and colleagues have pursued the implications of this model by demonstrating that situational constraints are often not taken into account when perceivers are given a taxing mental task to perform that occupies their central executive resources (e.g., rehearsing a random digit string). For example, when watching a nervous-looking woman, people spontaneously assume that she is an anxious person; only subsequently do they correct this initial assumption in light of the fact that she is in an anxiety-provoking situation (e.g., a job interview). If they have to watch the seemingly nervous person while rehearsing a digit string, they still automatically infer the trait of anxiety, but they no longer engage in corrective adjustments in light of the situational constraint. This pattern of results is quite consistent with the idea that correction is a controlled, resource-dependent process. When attentional resources are diminished, the automatic tendencies of the system remain unchecked by more effortful control mechanisms.

A more general treatment of the nature of correction processes has been provided by Wegener and Petty (1997) in their flexible correction model. According to this model, correction processes operate on the basis of lay theories about the direction and extent of biasing influences. When people suspect that they may have fallen prey to some untoward influence, they rely on their intuitive ideas about the nature of the bias to make compensatory corrective adjustments. For example, if they believe that their judgments of a particular person have been assimilated to stereotypes about the person’s gender group, then they would adjust those judgments in the opposite direction to make them less stereotypical in nature. Conversely, if they believe that their judgment of a target has been contrasted away from a salient standard of comparison, they will make adjustments that result in judgments in which the target is seen as more similar to the comparison standard. Several points are important to keep in mind with regard to this correction process. First, it requires that the perceiver detect the biasing influence before the process can initiate (Stapel, Martin, & Schwarz, 1998; Strack & Hannover, 1996). Many automatic biasing influences are likely to be subtle and hence escape detection; as a result, no correctional remedy is pursued. Second, as a controlled process, it is likely to require motivation and attentional capacity for its successful execution. Third, if correctional mechanisms are to result in a less biased judgment, the perceiver must have a generally accurate lay theory about the direction and extent of the bias. Otherwise, corrections could go in the wrong direction, they could go insufficiently in the right direction, or they could go too far in the right direction, leading to overcorrection. Indeed, many examples of overcorrection have been documented (see Wegener & Petty, 1997, for a review), indicating that even when a bias is detected and capacity and motivation are present, controlled processes are not necessarily effective in accurately counteracting automatic biases.

Wegner and Bargh (1998) categorize several ways in which automatic and controlled mental processes interact with one another. The examples we have just described fall into the category of regulation—when a controlled process overrides an automatic one. When an automatic process overrides a controlled one, as in the rebound effect, intrusion is said to occur. Controlled processes can also launch automatic processes that subserve the achievement of the actor’s momentary intentions, and this is termed delegation. For example, delegation would be said to occur if a conscious goal to go to the shopping mall triggered the many automatic aspects of driving behavior. Conversely, automatic processes can serve an orienting function in which they launch controlled processes, as in Wegner’s model of mental control: When the automatic monitoring process detects an unwanted thought, it triggers the more effortful operating process to banish the thought from conscious awareness. Finally, controlled processes can be transformed into automatic processes via automatization, as when perceivers become so skilled at suppressing stereotypes that it happens automatically, and automatic processes can be transformed into controlled processes via disruption, as when one starts thinking too much about the steps involved in a well-learned task and subsequently performs the task more poorly.

In many ways, the tension between automatic and controlled processes has become the heart of social cognition research. Most contemporary social cognition research programs are oriented toward this issue in a fundamental way. One of the key insights to emerge from this research is that our perceptions of and reactions to the social world are often shaped by rapid, automatic processes over which we commonly exercise very little control. By virtue of their very automaticity, the impressions that are constructed on this basis often have the phenomenological quality of being direct representations of objective reality. We feel, for example, that Mary is objectively a kind and caring person rather than recognize the role that our own biases (e.g., gender stereotypes) may have played in shaping this necessarily subjective interpretation. It may be possible to exercise control over these processes. If we pause long enough to entertain the possibility that our perceptions of the world may contain systematic biases, we can engage in suitable corrective action. This action, however, requires awareness, motivation, and attentional capacity. Without them, we may function more like automatons than like the rational agents we often fancy ourselves to be.


Cognitive Psychology Journal

The Cognitive Psychology Journal is a peer-reviewed scientific journal published by Elsevier. It is a Transformative Journal that follows a hybrid model of subscription and open access, meaning it is actively working toward becoming a fully open access journal.

It publishes original empirical, theoretical, and tutorial papers, methodological articles, and critical reviews. These publications cover different areas of cognitive psychology, including attention, memory, language processing, perception, problem-solving, and thinking.

The journal thus focuses on human cognition, but it also covers work from related areas when it is of direct interest to cognitive psychologists. These research areas include:

  • Developmental Psychology
  • Artificial Intelligence
  • Linguistics
  • Neurophysiology
  • Social Psychology

The Cognitive Psychology Journal is edited by G.D. Logan, who works on the journal together with a team of Associate Editors.

Furthermore, the Cognitive Psychology Journal has a CiteScore of 5.9 and an Impact Factor of 3.029.

CiteScore is the average number of citations received per peer-reviewed document published under a specific journal title.

The Impact Factor, by contrast, is the average number of citations that papers published in the journal during the two preceding years received in a given year.
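
As a rough illustration of how these two metrics are computed (using invented counts, not the journal's actual citation data, and assuming the current four-year CiteScore window), here is a short Python sketch:

    # Hypothetical counts, for illustration only.
    def impact_factor(citations_in_year, papers_in_prev_two_years):
        # Citations received in year Y to items published in years Y-1 and Y-2,
        # divided by the number of citable items published in Y-1 and Y-2.
        return citations_in_year / papers_in_prev_two_years

    def cite_score(citations_in_window, documents_in_window):
        # Citations over a four-year window to documents published in that window,
        # divided by the number of those documents.
        return citations_in_window / documents_in_window

    print(round(impact_factor(320, 106), 3))  # ~3.019 with these invented numbers
    print(round(cite_score(1180, 200), 1))    # 5.9 with these invented numbers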

Further, the Cognitive Psychology Journal publishes extensive articles across the field of cognitive psychology. The research reported in these publications has a substantial impact on cognitive theory and contributes to new theoretical developments in the field.

Model of Publishing

Elsevier publishes Cognitive Psychology journal articles under two separate business models:

I. Subscription Articles

Under the subscription model, subscribing institutions and individuals pay for access; in other words, subscribers fund the publication of these articles.

Furthermore, subscribers, patients, and readers in developing countries can access these articles through Elsevier's various access programs.

These access programs include:

(i) Partners

The following are the partners with whom Elsevier works to make the world of research more transparent and collaborative.

  • Wikipedia Library’s Access Donation Program
  • Sense About Science
  • REISearch
  • Science Media Centre
  • White City Maker Challenge Program, Imperial College London
  • Pint of Science
  • Load2Learn
  • Bookshare
(ii) Science Literacy

Elsevier also distributes research and promotes the understanding of science among both specialist and broader audiences. It connects with these audiences through the following programs:

  • Elsevier Connect – an online community that publishes articles about science, technology, and health research from Elsevier journals
  • Science and People – a series of events that Elsevier organizes to bring researchers and the interested public together for discussions on science, technology, and medical research
  • Media Promotion of Research – free access that Elsevier offers to the media, via press releases and alerts for journalists, to help them cover stories and promote the latest research
(iii) Research Integrity

Elsevier already runs numerous initiatives to ensure that the integrity of research is maintained.

Maintaining the integrity of research means (i) following a proper design methodology, (ii) submitting articles ethically, (iii) reviewing publications properly, and (iv) making research data available for re-use, among other things.

The initiatives Elsevier has undertaken include:

  • Organizing information, education, and training sessions encompassing online lectures and interactive courses conducted by leading experts.
  • Contributing to data initiatives like Force11, Scholix, and Research Data Alliance to make the research data accessible, discoverable, and reusable.
  • Detecting plagiarism through Crossref Similarity Check – a service built in collaboration with the STM publishing community to verify a paper's originality.
  • Screening images, currently a manual process, while running pilots and sponsoring research into software for this purpose.
  • Ensuring structured and transparent reporting through the CONSORT checklist, author checklists, and the adoption of STAR Methods.
  • Maintaining transparency in authorship and contributor roles
  • Sponsoring World Conference on Research Integrity
  • Publishing reproducibility papers
  • Promoting the Lancet REWARD Campaign

II. Open Access Articles

Open access articles are freely available to both subscribers and the public at large. Further, the authors of open access articles either pay the publication fee themselves or have it covered by their research funder.

It is important to note that the Cognitive Psychology Journal is a transformative journal. Accordingly, funds that were previously used to pay for subscriptions would now be redirected to pay for Open Access services.

Thus, to transition to full open access, an increasing number of transformative agreements must be negotiated with publishers around the globe.

These transformative agreements are, however, intended to be temporary and transitional.

From 2021, authors funded by organizations implementing Plan S principles can publish open access articles in this journal and can receive funding for their article publishing charges, provided they meet the Plan S requirements.

Remember, Coalition S members provide the funding that supports the publication fees of journals through such arrangements.


Recognizing the rush to judgment

It’s not clear that quick decisions are always bad. Sometimes snap judgments are remarkably accurate and they can save time. It would be crippling to comb through all the available information on a topic every time a decision must be made. However, misunderstanding how much information we actually use to make our judgments has important implications beyond making good or bad decisions.

Take the problem of self-fulfilling prophecies. Imagine a situation in which a manager forms a tentative opinion of an employee that then cascades into a series of decisions affecting that employee's entire career trajectory. A manager who sees an underling make a small misstep on an insignificant project may avoid assigning challenging projects in the future, which in turn would hamstring this employee's career prospects. If managers are unaware of how willing they are to make quick, data-poor initial judgments, they'll be less likely to nip these destructive self-fulfilling cycles in the bud.

Another example is the human tendency to rely on stereotypes when judging other people. Although you may believe that you'll consider all the information available about another person, people in fact tend to consider very little information and let stereotypes creep in. It may be a failure to appreciate how quickly judgments get made that makes it so hard to exclude the influence of stereotyping.

Modern technology allows virtually any decision made today to be better informed than the same decision made a few decades ago. But the human reliance on quick judgments may keep that promise from being realized. In the quest for more informed decision-making, researchers will need to explore ways to encourage people to slow down the speed of judgment.


The mind rebels

Of course, whenever one toys with the human mind, the human mind comes up with all kinds of reasons to misbehave.

If you reduce the speed of a road but don't design in those external distractions, you run the risk that drivers will start focusing on what one researcher called "in-vehicle tasks." Think messing about with your cellphone, or fiddling with your radio dial.

So this slower roads thing has to be done right.

Designers and planners say you can't just slap a new speed limit down and say "obey."

It's that part of the conversation where Calgary now finds itself.


Definition of thought

Let's try to define this - without getting too smarmy. We're talking physics here, after all! It is obvious that there's neural activity inside our heads all the time. However, it is impossible to pin a thought to any particular electron or to a change in the quantum states of atoms. We can observe brain activity on a macro scale, but that tells us nothing about the product of the activity, only its observable side effects.

This leads me to believe that thoughts are complex functions with possibly non-real eigenvalues, which would explain why they cannot be observed without contradicting the physics. However, this still poses a problem if we limit neurons and synaptic activity to classical electrochemical signaling; in that case, we are left with wavefunctions that take real values only.
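
For reference - this is standard textbook quantum mechanics rather than anything claimed above - measurable observables correspond to Hermitian operators, whose eigenvalues are necessarily real:

    \hat{A}^{\dagger} = \hat{A}
    \quad\Longrightarrow\quad
    \hat{A}\,\lvert\psi\rangle = a\,\lvert\psi\rangle
    \;\text{ with }\; a \in \mathbb{R}.

So anything modeled by an operator with non-real eigenvalues would not correspond to a standard observable, which is the sense in which such "thought functions" would evade direct measurement.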

An alternative explanation for the physical manifestation of thoughts would be the use of lesser-known particles for signaling, similar to what I proposed in my Telepathy & Telekinesis article. Or we could be dealing with extremely low energies that are undetectable by modern equipment.

A third explanation is that thoughts are a direct, linear product of synaptic activity, albeit composed from many thousands or millions of events. In fact, you might treat thoughts as the product of biological cryptography, with electric signals as the raw data bits and external and internal stimuli as the encryption key, which would explain why any attempt to measure thoughts from outside the mind would look like pure random noise.
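
As a loose illustration of that cryptography analogy (a toy sketch only, not a model of actual neural coding; the variable names are invented), a simple XOR stream cipher in Python shows how a signal combined with an unknown key looks like random noise to an outside observer:

    import secrets

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        # Combine each "signal" byte with the corresponding byte of the "stimulus key".
        return bytes(d ^ k for d, k in zip(data, key))

    signal = b"a fleeting thought"                    # stand-in for raw synaptic activity
    stimulus_key = secrets.token_bytes(len(signal))   # stand-in for internal/external context

    observed = xor_bytes(signal, stimulus_key)   # what an outside measurement would "see"
    print(observed)                              # indistinguishable from random noise without the key
    print(xor_bytes(observed, stimulus_key))     # with the key, the original "thought" is recovered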

Depending on which explanation you prefer, the debate about what we can do with our minds changes. If you stick with the quantum theory, then we can easily go beyond simple physics. Unknown particles and ultra-low energies place us in uncharted territory that human instrumentation has not yet been able to probe. The third explanation is the simplest and purely classical. Now, let's try to answer some fairly basic questions based on these ramblings.


Watch the video: How Fast Does A Thought Travel? (August 2022).