Free software to run online visual category learning experiment

Is there any open-source software to support a visual category learning experiment? It should be open source so that I can add functions I design myself to implement the experiment.


You might want to take a look at jsPsych, which is an open-source JavaScript library for building experiments. There is an example of running a visual categorization task in the documentation for the library.
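To give a flavour of the library, here is a minimal sketch of a single categorization trial, assuming jsPsych 7 and its image-keyboard-response plugin; the stimulus path, response keys, and scoring rule are placeholders rather than the documentation's example:

```javascript
// Minimal jsPsych 7 sketch of one image categorization trial.
// Assumes the jspsych and @jspsych/plugin-image-keyboard-response
// scripts are loaded; stimulus file and keys below are illustrative.
const jsPsych = initJsPsych();

const categorizationTrial = {
  type: jsPsychImageKeyboardResponse,
  stimulus: 'stimuli/exemplar_01.png',   // hypothetical stimulus file
  choices: ['f', 'j'],                   // e.g. 'f' = category A, 'j' = category B
  prompt: '<p>Press F for category A, J for category B.</p>',
  on_finish: (data) => {
    // Score against the (hypothetical) true category of this exemplar.
    data.correct = (data.response === 'f');
  }
};

jsPsych.run([categorizationTrial]);
```

Because jsPsych is plain JavaScript, you can also write your own plugins for any custom trial logic your design requires.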


Here are some open-source software packages for you:


I've been developing an online platform to run HTML5/JavaScript experiments, recruit participants via email, Facebook, or Twitter, and collect and evaluate results in real time.

The platform itself is not open source, but many of the experiments are right now, and more will be soon. You may also add your own experiments if you are comfortable developing in HTML5/JavaScript.

Please see http://statode-demo.herokuapp.com/mental-rotation/ for a demo; it does not require signup.


What Makes Psychology Research Ethical?

Whether psychological research is conducted with animals or human participants, it must be done in a manner that follows rules of conduct that protect the participants from harm.

But what makes psychology research ethical goes well beyond not bringing physical or emotional harm to a research subject.

Instead, ethical guidelines for research are quite broad, covering everything from using inducements to entice people to participate in studies to using deception as a research tool – and much more.

Today’s guidelines arose, in part, as a reaction to classic psychological research (e.g., Milgram’s obedience study and Zimbardo’s prison experiment) that caused significant emotional harm to some participants.

In this guide, we’ll explore some of the most important ethical guidelines psychology requires of researchers to ensure their conduct is ethical and that participants are protected.


How to Get Started Teaching Online

Set Goals
The first thing you'll want to think about is what you're trying to achieve with this online course. Consider whether the entire course will live on an online teaching platform, whether it is meant as a supplement to a course, or whether it is meant as a prerequisite before joining a future course (e.g., an online course on basic math skills that might be required before taking an advanced live class).

Create a Course Plan
Just like a face-to-face course, courses on online learning platforms need to have outlines or course plans for what you'll cover each week. If these are courses that you've previously offered live, you're one step ahead of the game. Your course plans will be your maps for what kinds of materials you will need to create for your lessons.

Gather Your Equipment
After you've considered what lessons you want to teach and what online learning platform you want to use, think about what types of equipment, software, and other tools you have at your disposal. Do you have a video camera or other device capable of capturing HD video? Do you have a screencasting software program? If not, sign up for a free trial of Camtasia Studio, or check out screencasting programs such as Screencast-O-Matic or Jing. Will you need to create PowerPoints? Do you have a microphone to capture audio? Once you figure out what your technological capabilities and limitations are, you'll know what kinds of content you'll want to create.

Set Aside Time for Creation and Editing
If you're going to create videos and screencasts for your lessons, consider not only the time it takes to record these, but also how long it takes to edit them, create title slides, render and upload them to online teaching platforms, etc.

Get Started!
Once you've completed all of that planning, jump right in and start building your online course. It's really only through experimenting on online learning platforms that you'll know what you're doing wrong and what's working. Ask for feedback from your friends and social networks and get started teaching online!


PsychoJS

PsychoJS is a JavaScript library that makes it possible to run neuroscience, psychology, and psychophysics experiments in a browser. It is the online counterpart of the PsychoPy Python library.

You can create online experiments from the PsychoPy Builder, you can find and adapt existing experiments on pavlovia.org, or create them from scratch: the PsychoJS API is available here.

PsychoJS is an open-source project. You can contribute by submitting pull requests to the PsychoJS GitHub repository, and discuss issues and current and future features on the Online category of the PsychoPy Forum.

Many studies in behavioural sciences (e.g. psychology, neuroscience, linguistics or mental health) use computers to present stimuli and record responses in a precise manner. These studies are still typically conducted on small numbers of people in laboratory environments equipped with dedicated hardware.

With high-speed broadband, improved web technologies and smart devices everywhere, studies can now go online without sacrificing too much temporal precision. This is a “game changer”. Data can be collected on larger, more varied, international populations. We can study people in environments they do not find intimidating. Experiments can be run multiple times per day, without data collection becoming impractical.

The idea behind PsychoJS is to make PsychoPy experiments available online, from a web page, so that participants can run them on any device equipped with a web browser, such as a desktop, laptop, or tablet. In some circumstances, they can even use their phone!

Running PsychoPy experiments online requires generating an index.html file and a JavaScript file containing the code that describes the experiment. Those files need to be hosted on a web server to which participants point their browsers in order to run the experiment. The server also needs to host the PsychoJS library.

The recommended approach to creating experiments is to use PsychoPy Builder to generate the JavaScript and HTML files. Many existing Builder experiments should "just work", provided their Components are compatible between PsychoPy and PsychoJS.

We built the PsychoJS library to make the JavaScript experiment files look and behave in very much the same way as the Builder-generated Python files. PsychoJS offers classes such as Window and ImageStim, with attributes very similar to those of their Python equivalents. Experiment designers familiar with the PsychoPy library should feel at home with PsychoJS, and can expect the same level of control they have with PsychoPy, from the structure of trials/loops all the way down to frame-by-frame updates.

There are, however, notable differences between the PsychoJS and PsychoPy libraries, most of which have to do with the way a web browser interprets and runs JavaScript, deals with resources (such as images, sounds, or videos), or renders stimuli. To manage those web-specific aspects, PsychoJS introduces the concept of a Scheduler. As the name indicates, Schedulers offer a way to organise the various tasks of a PsychoJS experiment along a timeline: downloading resources, running a loop, checking for keyboard input, saving experiment results, and so on. As an illustration, a Flow in PsychoPy can be conceptualised as a Schedule with various tasks on it. Some of those tasks, such as trial loops, can themselves schedule further events (i.e. the individual trials to be run).
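To make the Scheduler idea concrete, here is a minimal sketch loosely modelled on the structure of Builder-generated code; the import path, library version, and stimulus file are illustrative assumptions, and resource downloading is omitted:

```javascript
// Sketch of the Scheduler idea (structure adapted from Builder-generated code).
// Import path/version and the stimulus file are illustrative assumptions.
import { core, util, visual } from './lib/psychojs-2021.2.3.js';
const { PsychoJS } = core;
const { Scheduler } = util;

const psychoJS = new PsychoJS({ debug: true });
psychoJS.openWindow({ fullscr: false, units: 'height' });

// The experiment's Flow is a Schedule: tasks run in order, and a task
// (e.g. a trial loop) can itself schedule further tasks.
const flowScheduler = new Scheduler(psychoJS);
flowScheduler.add(() => {
  const stim = new visual.ImageStim({ win: psychoJS.window, image: 'face.png' });
  stim.setAutoDraw(true);        // draw on every frame until turned off
  return Scheduler.Event.NEXT;   // hand control to the next scheduled task
});

psychoJS.schedule(flowScheduler);
psychoJS.start({ expName: 'demo', expInfo: {} });
```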

Under the hood, PsychoJS relies on PixiJS to present stimuli and collect responses. PixiJS is a multi-platform, accelerated 2-D renderer that runs in most modern browsers. It uses WebGL wherever possible and silently falls back to HTML5 canvas where it is not available. WebGL directly addresses the graphics card, thereby considerably improving rendering performance.

A convenient way to make experiments available to participants is to host them on pavlovia.org, an open-science server. PsychoPy Builder offers the possibility of uploading experiments directly to pavlovia.org.

Which PsychoPy Components are supported by PsychoJS?

For the list of PsychoPy Builder Components supported by PsychoJS, see the PsychoPy/JS online status page.


Exogeneity in Nonrandomized Experiments

The use of randomized experiments is not the only means of achieving exogeneity. Several other research designs and methods achieve at least partial exogeneity. These quasi-experimental designs, which include natural experiments, regression discontinuity, and instrumental variable estimation, are more frequently utilized in fields outside of criminology but are making inroads within it. These techniques allow researchers to accurately estimate causal relationships and draw causal inferences in certain situations.

Natural experiments are one means of establishing exogenous variation in a cause variable when researcher-led random assignment is not feasible. A natural experiment is a study in which external factors such as natural events, serendipity, or policy changes “assign” research subjects to the various experimental conditions of interest. Because the assignment process is external to the research subjects under observation, it is exogenous, or at least arguably exogenous. This exogenous variation allows researchers to accurately estimate causal relationships.

Natural experiments seem to be increasingly common in the social sciences (see Dunning, 2012). As an example, Kirk (2009) wished to learn the causal effect of relocating previously incarcerated offenders from their old neighborhoods of residence to new, less criminogenic neighborhoods. This is an important theoretical and practical issue: we know that many parolees return to the same criminogenic neighborhoods and social networks that contributed to their involvement in offending in the first place, so we shouldn't be surprised that recidivism is often alarmingly high. While it is not impossible to conduct a randomized experiment on this issue, it would be difficult for a variety of reasons. However, a recent natural event, Hurricane Katrina, forced many parolees who resided in high-crime areas of New Orleans hard hit by the storm to move to other neighborhoods. In essence, Hurricane Katrina exogenously assigned some parolees to new neighborhoods, which made it possible to estimate the causal effect of relocation on recidivism. Kirk found that parolees who moved to a new area were substantially less likely to be reincarcerated within three years of release than parolees who did not move.

Another research design capable of establishing exogenous variation is the regression discontinuity design (see Murnane and Willett, 2011). The key element of this design is the use of some “forcing variable” that establishes a cut point (or threshold), assigning research subjects below the cut point to one experimental condition and those above it to another. The cut point serves as an exogenous source of variation: research subjects just below and just above the cut point are compared to estimate the causal relationship between the variables of interest. As an example of a regression discontinuity design, Berk and Rauma (1983) assessed the causal effect of providing financial assistance, in the form of unemployment insurance, to recently released former prison inmates. In order to qualify for the financial assistance, former prison inmates had to have made at least $1,500 in the year prior to release; this criterion was the forcing variable used to assign former inmates to either the control (no financial assistance) or the treatment (financial assistance) condition. Berk and Rauma found that financial assistance caused a 13% reduction in recidivism.
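In standard notation (an illustration, not from the original text): with forcing variable $X$ and cut point $c$, the sharp regression discontinuity estimand is the jump in the expected outcome at the threshold,

$$\tau_{\text{RD}} = \lim_{x \downarrow c} E[Y \mid X = x] \; - \; \lim_{x \uparrow c} E[Y \mid X = x].$$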

Outside of criminology, the most popular means of estimating causal relationships without a randomized experiment is instrumental variable estimation. The logic begins by noting that if some part of the variation in an endogenous variable of interest can be established as exogenous, then that part of the variation can be used to accurately estimate the variable's causal effect on the outcome of interest. An “instrumental variable” is used to identify such exogenous variation. An instrumental variable is one that satisfies two assumptions: (1) it is uncorrelated with the error term of the regression of the outcome variable on the endogenous independent variable, and (2) it is correlated with the endogenous variable of interest. The first assumption means that the instrumental variable can have no effect on the outcome variable except via its indirect effect through the endogenous independent variable. This is a strong assumption: the instrumental variable must be related to the outcome variable only through the endogenous independent (causal) variable, and it cannot be correlated with other factors that affect the outcome variable. If an instrumental variable meeting these criteria can be found, then estimating the causal relationship between the endogenous variable and the outcome of interest is straightforward and can be accomplished using several statistical techniques, of which two-stage least squares is the most popular.
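As a standard illustration (again, not from the original text), two-stage least squares with a single instrument $z$ first regresses the endogenous variable $x$ on $z$, then regresses the outcome $y$ on the fitted values:

$$\text{Stage 1: } x_i = \pi_0 + \pi_1 z_i + u_i \;\rightarrow\; \hat{x}_i, \qquad \text{Stage 2: } y_i = \beta_0 + \beta_1 \hat{x}_i + \varepsilon_i.$$

In this just-identified case the estimator reduces to $\hat{\beta}_1 = \operatorname{Cov}(z, y) / \operatorname{Cov}(z, x)$: only the variation in $x$ driven by the instrument contributes to the causal estimate.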

Instrumental variable estimation has rarely been applied in criminology. Apel, Bushway, Paternoster, Brame, and Sweeten (2008) is one example. Apel and colleagues examined the relationship between hours worked by youth and delinquency. Prior research typically finds that youth who work more hours are more likely to be involved in delinquency; however, the number of hours worked is likely to be endogenously related to delinquency, as youth who work more hours are likely to differ from other youth on a host of factors that are also related to delinquency. Apel and colleagues used variation in state child labor laws as an instrumental variable to identify exogenous variation in the number of hours worked. Contrary to prior research, these authors found that the number of hours worked by youth reduces delinquency.

Ojmarrh Mitchell is an associate professor and graduate director in the Department of Criminology at the University of South Florida. Professor Mitchell earned his PhD in criminal justice and criminology from the University of Maryland with a doctoral minor in measurement, statistics, and evaluation. His research interests include drugs and crime, corrections and sentencing, race and crime, and research methods.


Predict new automobile prices

Now that we've trained the model using 75 percent of our data, we can use it to score the other 25 percent of the data to see how well our model functions.

Find and drag the Score Model module to the experiment canvas. Connect the output of the Train Model module to the left input port of Score Model. Connect the test data output (right port) of the Split Data module to the right input port of Score Model.

Run the experiment and view the output from the Score Model module by clicking the output port of Score Model and selecting Visualize. The output shows the predicted values for price alongside the known values from the test data.

Finally, we test the quality of the results. Select and drag the Evaluate Model module to the experiment canvas, and connect the output of the Score Model module to the left input of Evaluate Model. The final experiment should look something like this:

To view the output from the Evaluate Model module, click the output port, and then select Visualize.

The following statistics are shown for our model:

  • Mean Absolute Error (MAE): The average of absolute errors (an error is the difference between the predicted value and the actual value).
  • Root Mean Squared Error (RMSE): The square root of the average of squared errors of predictions made on the test dataset.
  • Relative Absolute Error: The average of absolute errors relative to the absolute difference between actual values and the average of all actual values.
  • Relative Squared Error: The average of squared errors relative to the squared difference between the actual values and the average of all actual values.
  • Coefficient of Determination: Also known as the R squared value, this is a statistical metric indicating how well a model fits the data.

For each of the error statistics, smaller is better. A smaller value indicates that the predictions more closely match the actual values. For Coefficient of Determination, the closer its value is to one (1.0), the better the predictions.
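To unpack these definitions, here is a small JavaScript sketch that derives all five statistics from arrays of actual and predicted prices (illustrative only; the Evaluate Model module computes them for you):

```javascript
// Illustrative sketch of the five evaluation statistics listed above,
// computed from arrays of actual and predicted prices.
function evaluateRegression(actual, predicted) {
  const n = actual.length;
  const mean = actual.reduce((sum, y) => sum + y, 0) / n;
  let absErr = 0, sqErr = 0, absDev = 0, sqDev = 0;
  for (let i = 0; i < n; i++) {
    const err = predicted[i] - actual[i];
    absErr += Math.abs(err);
    sqErr += err * err;
    absDev += Math.abs(actual[i] - mean);
    sqDev += (actual[i] - mean) ** 2;
  }
  return {
    mae: absErr / n,            // Mean Absolute Error
    rmse: Math.sqrt(sqErr / n), // Root Mean Squared Error
    rae: absErr / absDev,       // Relative Absolute Error
    rse: sqErr / sqDev,         // Relative Squared Error
    r2: 1 - sqErr / sqDev       // Coefficient of Determination (R squared)
  };
}

// Example with made-up prices:
// evaluateRegression([10000, 20000, 30000], [12000, 19000, 28000])
```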


Limitless customization

Simul8’s APIs and powerful scripting tool, Visual Logic, allow for even deeper customization when you need it.

Visual Logic

A powerful yet intuitive scripting tool to control any aspect of your simulation - from specific rules of your process, to building your own custom tools or interfaces.

Control Simul8 from anywhere with connections to Python and C# to integrate simulation directly into your technology stack.


T Test Calculator

A t test compares the means of two groups. For example, compare whether systolic blood pressure differs between a control and treated group, between men and women, or any other two groups. Don't confuse t tests with correlation and regression. The t test compares one variable (perhaps blood pressure) between two groups. Use correlation and regression to see how two variables (perhaps blood pressure and heart rate) vary together. Also don't confuse t tests with ANOVA. The t tests (and related nonparametric tests) compare exactly two groups. ANOVA (and related nonparametric tests) compare three or more groups. Finally, don't confuse a t test with analyses of a contingency table (Fisher's or chi-square test). Use a t test to compare a continuous variable (e.g., blood pressure, weight or enzyme activity). Use a contingency table to compare a categorical variable (e.g., pass vs. fail, viable vs. not viable).
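For reference, the classic unpaired two-sample t statistic with pooled variance (a standard formulation, not specific to this calculator) is

$$t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{1/n_1 + 1/n_2}}, \qquad s_p^2 = \frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2},$$

with $n_1 + n_2 - 2$ degrees of freedom.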

Analyze, graph and present your scientific work easily with GraphPad Prism. No coding required.


“Top Hat allowed me to truly interact with students, see what they’re doing at any given time and monitor their progress.”

Christina Alevras, Instructor of Biology, University of Saint Joseph

“Top Hat is a one-stop shop students can directly access from their devices, wherever they are. It saves them from having to navigate different resources.”

Frank Spors, Associate Professor, College of Optometry, Western University of Health Sciences

“When I realized I could customize the textbook to make it more specific to my classroom, I was able to make the textbook an extension of my lecture.”


From survey data to brilliant insights, in half the time.

Flexible live reporting software built to make every stage of your analysis & reporting faster and easier.

Cut your analysis and reporting time in half

With Displayr, every stage of the analysis and reporting process is faster and easier.

Learn how you can use Displayr for:

The only tool you’ll ever need to quickly uncover and share the stories in your data

Easy for the novice, powerful for the expert. Analyze, report, and publish dashboards. Automatically update everything with new data. Everything’s reproducible. Work collaboratively in real time. Secure cloud platform. Nothing’s missing.

Basic analysis and crosstabs

Quickly create tables using drag and drop or automatically churn them out by the thousand.

Easily manipulate data

Create filters and new variables with the click of a mouse, and merging categories is as easy as drag and drop.

Effortlessly see the stories in the data

Automated, state-of-the-art statistical tests that highlight the key results. You can automatically sort and delete uninteresting tables.

Update everything automatically

Update your raw data and watch all your work update automatically.

Nothing’s missing

From rim weighting to complicated filters to recoding to automatic detection of flat-lining, it’s all there, built in.

Text analysis and coding

Rapidly analyze and code text

With a click of the mouse, you can automatically categorize your data, perform sentiment analysis and entity extraction, and create word clouds.

Greater accuracy

For the most accurate results, invest a little more time and train Displayr’s machine learning models to understand the nuances of your data.

Instant updating with new text data

To save you even more time, when new text data comes in, Displayr automatically and accurately assigns it to existing categories.

Designed for market research data

All the tools to categorize any type of text data, including single response, multiple response, brand lists, and back-coding.

Nothing’s missing

There’s no text analysis or coding that you cannot perform in Displayr.

Advanced analysis

All analysis techniques are built-in

Everything you’ll ever need including regression, PCA, clustering, latent class analysis, machine learning, MaxDiff, conjoint, TURF, and so much more.

Designed for survey data

All the analysis techniques work with categorical data, sampling weights, and filters.

Analysis for everyone

Displayr guides you through your analyses, alerting you when there are problems and suggesting solutions along the way.

Nothing’s missing

There’s no advanced analysis you cannot perform in Displayr.

Professional-quality reporting

Write your reports in Displayr

Displayr allows you to author your reports at the same time as doing your analysis.

Automatic updating of reports

When your data and analyses update, the report automatically updates as well.

Any report exports to PowerPoint

Automatically update entire PowerPoint reports with new data (e.g., waves of survey data, country-level reports, automatic batch exports).

Nothing’s missing

From webpages and dashboards, to exporting to PowerPoint, Excel, or as a PDF, to embedding interactive visualizations in your blog, Displayr’s got all your reporting needs covered.

Dashboards

Easily create beautiful, interactive dashboards and online reports. Empower viewers to self-serve with interactive visualizations, filter-based queries, and easy-to-use calculators. Build your dashboard once and automatically update it forever. All your analysis, text coding, visualizations and summaries will update automatically. Add drop-downs, interactive text and visualizations, set page masters. When it comes to dashboards, nothing’s missing.

"A dashboard that would've taken a week to do previously is now ready in two days.”

Michelle Mercer, Lewers Research

Your plan to cut analysis and reporting time in ½

Step 1 Demo

・Customized to your needs
・See how to cut your analysis and reporting time in half

Step 2 Purchase

・Annual subscription
・Customer support
・Quantity discounts

Step 3 Implement

・Review & optimize your data file
・Custom 3 hour new user training
・Regular check-ins

Your painful manual tasks are costing you time, resources and money.

How Displayr cuts your analysis and reporting time in ½

You want to turn data into insight and share this insight in a meaningful and instantly understandable way, right? But so much of your analysis and reporting time is spent on painful manual tasks - endlessly cutting, pasting and formatting data, checking for mistakes, redoing work, cobbling together different software, and trying to figure stuff out.

And the problem is that these painful manual tasks don't leave you with much time to find and report your brilliant insights - stressful! We, at Displayr, don't think you should have to do these manual tasks at all. So, we combined our expertise in quantitative analysis and software engineering to create Displayr.

It's analysis and reporting software that automatically does all your painful manual tasks for you. It also makes it easy to do the tricky things that usually take ages to figure out - like machine learning, automatic text coding, statistical testing, driver analysis, and even building dashboards.

The result? Displayr will cut your analysis and reporting times in half. That's what 1000s of researchers who already use our software tell us.

Imagine blending PowerPoint, SPSS, Excel, and Tableau into one tool that's also fast, easy to use, and excellent for survey data - well, that's Displayr! If you analyze data, it'll make you thrive. And by using fewer resources to do more work, Displayr won't just save you time, it will make you money. You'll produce higher quality work faster, without needing to outsource or use other tools.

So, stop throwing away your valuable time on painful manual tasks and quit spending hours figuring stuff out! Schedule a demo of Displayr and discover how you can transform your survey data into brilliant insights in a fraction of the time.

