
Is there a rule of thumb regarding sample size when neuroscientific measurements are used?



According to this paper, 10-25 subjects are enough for an fMRI study. Regarding EEG, however, this paper only mentions that some "authors concluded that the relatively small sample sizes (average 20 subjects) in these studies are likely responsible for the inconsistent result"; it does not give a recommended sample size.

Is there a rule of thumb regarding how many subjects should be used for neuroscientific measurements in general? Are there papers on this?

Note that I am not currently designing an experiment but writing a literature review on applications of neuroscience, so I don't think I can conduct a power analysis, as an answerer suggested.


The minimum sample size required to plausibly reject the null hypothesis depends not on the field of study (e.g., neuroscience) but on your study design and the statistical tests you perform.

What you need is a power analysis. Questions about power analysis have been asked on Cross Validated; you may want to read up on it in your favourite statistics textbook and, if you have questions, ask them there.
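To illustrate the point, the core arithmetic of a simple power analysis for a two-group comparison can be sketched in a few lines of Python. This is a normal-approximation sketch assuming SciPy is available; the effect size, alpha, and power values are arbitrary illustrations, not recommendations for any particular neuroscience measure:

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample t-test,
    using the normal approximation n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (Cohen's d = 0.5) at alpha = 0.05 and 80% power:
print(n_per_group(0.5))  # 63 subjects per group
```

Exact t-based calculations (e.g., statsmodels' TTestIndPower) give slightly different numbers, but the takeaway is the same: the required n follows from the design and the effect size you assume, not from the field.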


Opportunistic use of dual-energy X-ray absorptiometry to evaluate lumbar scoliosis

Low bone mineral density is associated with spinal deformity. Dual-energy X-ray absorptiometry (DXA), a modality that assesses bone density, therefore offers a theoretical means to also assess spinal deformity. We found that DXA can reliably assess spine alignment. DXA may permit surveillance of spine alignment, i.e., scoliosis, in the clinical setting.

Purpose

Osteoporosis and scoliosis are interrelated disease processes. Dual-energy X-ray absorptiometry (DXA), used to assess bone density, can also be used to evaluate spinal deformity since it captures a posteroanterior (PA) image of the lumbar spine. We assessed the use of DXA to evaluate lumbar spine alignment.

Methods

A lumbar spine DXA phantom was used to assess the effects of axial and sagittal plane rotation on lumbar bone mineral content (BMC), density (BMD), and L1–L4 Cobb angle measurements. Using two subject cohorts, intra- and inter-observer reliability and validity of using DXA for L1–L4 Cobb angle measurements in the coronal and sagittal planes were assessed.

Results

Axial and sagittal plane rotation greater than 15° and 10°, respectively, significantly reduced measured BMD and BMC; there was minimal effect on Cobb angle measurement reliability. In human subjects, excellent intra- and inter-observer reliability was observed using lumbar PA DXA images for Cobb angle measurements. Agreement between Cobb angles derived from lumbar PA DXA images and AP lumbar radiographs ranged from good to excellent. The mean difference in Cobb angles between supine lumbar PA DXA images and upright AP lumbar radiographs was 2.8° in all subjects and 5.8° in those with scoliosis.

Conclusions

Lumbar spine rotation does not significantly affect BMD and BMC within 15° and 10° of axial and sagittal plane rotation, respectively, and minimally affects Cobb angle measurement. Spine alignment in the coronal plane can be reliably assessed using lumbar PA DXA images.


Root Cause Analysis: the Core of Problem Solving and Corrective Action [2nd ed.] 9780873899826, 0873899822

Table of contents:
Title page
CIP data
Contents
List of Figures and Tables
Preface to the Second Edition
Preface to the First Edition
Chapter 1_Getting Better Root Cause Analysis
The Problem
The Impact
Approaches to Root Cause Analysis
Existing Problem-Solving Models
A Proposed Model
Chapter 2_Multiple Causes and Types of Action
Initial Problem Response
The Diagnosis
Actions to Prevent Future Problems
The Need for Filters
Chapter 3_Step 1: Define the Problem
Selecting the Right Problem
Scoping the Problem Appropriately
The Problem Statement
Chapter 4_Step 2: Understand the Process
Setting Process Boundaries
Flowcharting the Process
Why Process is So Important
Additional Values of the Flowchart
Chapter 5_Step 3: Identify Possible Causes
Using the Flowchart for Causes
Using a Logic Tree for Causes
Using Brainstorming and the Cause-and-Effect Diagram for Causes
Using Barrier Analysis for Causes
Using Change Analysis for Causes
Eliminating Possible Causes
Sources for Possible Causes
Chapter 6_Step 4: Collect the Data
A Basic Concept
Types of Data
Using Existing Versus New Data
Where to Collect Data
Special Tests
Sample Size and Time Frame
Data Collection Tools for Both Low- and High-Frequency Problems
Additional Tools for High-Frequency Problems
Enhancing Data Collection Value
Organizing the Data Collection Process
Chapter 7_Step 5: Analyze the Data
Tools for Low-Frequency Data
Additional Tools for High-Frequency Data
Questioning the Data
Data Analyses Summaries
Analyzing Variation
Cautions on Data Analysis
Where to Go Next?
Can't Find the Cause?
Chapter 8_Identify and Select Solutions
Step 6: Identify Possible Solutions
Step 7: Select Solution(s) to Be Implemented
Chapter 9_Implement, Evaluate, and Institutionalize
Step 8: Implement the Solution(s)
Step 9: Evaluate the Effect(s)
Step 10: Institutionalize the Change
Chapter 10_Organizational Issues
Cognitive Biases
Emotional Barriers
Resistance to Change
Organizational Culture
Project Ownership
Coaching/Facilitation Skills
Other Issues
Chapter 11_Human Error and Incident Analysis
Human Error
Incident Analysis
Chapter 12_Improving Corrective Action
Critical Thinking
Buddhism
Stoic Philosophy
Summary of Root Cause Analysis
Appendix A_Example Projects
A Need for Focus
How Would They Know?
How Proficient is That?
Getting the Shaft Back
Got it in the Bag!
Appendix B_Root Cause Analysis Process Guides
Generic Process Thinking
SIPOC Analysis Form
Data Collection and Analysis Tools
Do It2 Root Cause Analysis Guide
Do It2 Problem-Solving Worksheet
Checklist for Reviewing the Corrective Action Process
Expanded List of Seven Ms
Forms for Tracking Causes and Solutions
Appendix C_Enhancing the Interview
Basic Interview Problems and Process
Types of Interviews and Questions
Leveraging How Memory Works
The Importance of Time and Reflection


Also available from ASQ Quality Press:

Musings on Internal Quality Audits: Having a Greater Impact, Duke Okes
Performance Metrics: The Levers for Process Management, Duke Okes
The ASQ Pocket Guide to Root Cause Analysis, Bjørn Andersen and Tom Natland Fagerhaug
Handbook of Investigation and Effective CAPA Systems, Second Edition, José Rodríguez-Pérez
Introduction to 8D Problem Solving: Including Practical Applications and Examples, Donald W. Benbow and Ali Zarghami
Managing Organizational Risk Using the Supplier Audit Program: An Auditor’s Guide Along the International Audit Trail, Lance B. Coleman, Sr.
The Quality Toolbox, Second Edition, Nancy R. Tague
The Certified Six Sigma Green Belt Handbook, Second Edition, Roderick A. Munro, Govindarajan Ramu, and Daniel J. Zrymiak
The Certified Manager of Quality/Organizational Excellence Handbook, Fourth Edition, Russell T. Westcott, editor
The Certified Six Sigma Black Belt Handbook, Second Edition, T.M. Kubiak and Donald W. Benbow
The ASQ Auditing Handbook, Fourth Edition, J.P. Russell, editor
The ASQ Quality Improvement Pocket Guide: Basic History, Concepts, Tools, and Relationships, Grace L. Duffy, editor

To request a complimentary catalog of ASQ Quality Press publications, call 800-248-1946, or visit our Web site at http://www.asq.org/quality-press.

Root Cause Analysis The Core of Problem Solving and Corrective Action Second Edition

ASQ Quality Press Milwaukee, Wisconsin

American Society for Quality, Quality Press, Milwaukee, WI 53203
© 2019 by Duke Okes. All rights reserved. Published 2019.
Printed in the United States of America.
24 23 22 21 20 19  5 4 3 2 1

Library of Congress Cataloging-in-Publication Data
Names: Okes, Duke, 1949- author.
Title: Root cause analysis: the core of problem solving and corrective action / Duke Okes.
Description: Second edition. | Milwaukee, Wisconsin: ASQ Quality Press, [2019] | Includes bibliographical references and index.
Identifiers: LCCN 2018055702 | ISBN 9780873899826 (hardcover: alk. paper)
Subjects: LCSH: Problem solving. | Decision making. | Management. | Root cause analysis.
Classification: LCC HD30.29 .O44 2019 | DDC 658.4/03—dc22
LC record available at https://lccn.loc.gov/2018055702

No part of this book may be reproduced in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Director, Quality Press and Programs: Ray Zielke
Managing Editor: Paul Daniel O’Mara
Sr. Creative Services Specialist: Randy L. Benson

ASQ Mission: The American Society for Quality advances individual, organizational, and community excellence worldwide through learning, quality improvement, and knowledge exchange.

Attention Bookstores, Wholesalers, Schools, and Corporations: ASQ Quality Press books, video, audio, and software are available at quantity discounts with bulk purchases for business, educational, or instructional use. For information, please contact ASQ Quality Press at 800-248-1946, or write to ASQ Quality Press, P.O. Box 3005, Milwaukee, WI 53201-3005.

To place orders or to request ASQ membership information, call 800-248-1946. Visit our Web site at www.asq.org/quality-press.

Printed on acid-free paper

Appendix D_Analyzing Problem Responses
Appendix E_Additional Resources
Books
Websites
References
Index

List of Figures and Tables

Figure 1.1  The DO IT2 problem-solving model
Table 1.1  Problem solving model comparison
Figure 1.2  Visual depiction of the model
Figure 1.3  Corrective action, root cause analysis, and problem solving

Figure 2.1  Differentiating between symptoms and causes (physical and system)
Figure 2.2  Levels of causes for machine problem
Figure 2.3  Manifestations of multiple causes
Figure 2.4  Filters for the corrective action process
Figure 2.5  Nonconformity risk matrix

Table 3.1  Project decision matrix
Table 3.2  Scrap analysis data for first quarter
Figure 3.1  Scrap analysis using Paretos
Figure 3.2  Scrap analysis using pivot tables
Figure 3.3  Effect of Pareto categories
Figure 3.4  Using run charts to inform the problem statement

Figure 4.1  Setting boundaries for pizza taste problem
Figure 4.2  Standard process flowchart
Figure 4.3  Example flowchart symbols
Figure 4.4  Deployment flowchart for an engineering change request
Figure 4.5  Primary versus administrative/support processes
Figure 4.6  Generic process for policy development and implementation
Figure 4.7  SIPOC diagram
Figure 4.8  Drilling down to find cause


Figure 5.1  Copier process flow
Figure 5.2  Logic tree for copier problem
Figure 5.3  Deeper logic tree for copier problem
Figure 5.4  Frequent flier points error
Figure 5.5  Process-focused logic tree
Figure 5.6  Employee voluntary turnover problem
Figure 5.7  Losing market share problem
Figure 5.8  Logic tree for lack of training
Figure 5.9  Partial cause-and-effect diagram related to a hotel reservation problem
Figure 5.10  Barrier analysis
Figure 5.11  Change analysis
Table 5.1  Possible cause tool selection
Figure 6.1  X and Y (cause and effect)
Figure 6.2  Component swap for two identical lines
Table 6.1  Lab multi-vari data sheet
Figure 6.3  Multi-vari plot for lab
Figure 6.4  Concentration diagram for injuries
Table 6.2  Check sheet for hotel room availability problem
Table 6.3  Data collection sheet for insurance overpays
Table 6.4  Data collection plan for playground accidents

Figure 7.1  Procedure compliance versus results
Figure 7.2  G-chart indicating number of days between failures
Figure 7.3  Affinity diagram for factors affecting problem-solving effectiveness
Figure 7.4  Interrelationship digraph for problem-solving effectiveness
Figure 7.5  Pareto chart of hotel checklist data
Figure 7.6  Drilling down deeper
Table 7.1  Contingency table indicating number of students earning As
Figure 7.7  Patterns in run charts
Figure 7.8  Run chart using reordered data
Figure 7.9  Histogram analysis
Figure 7.10  Scatter diagrams
Figure 7.11  Dot plot
Figure 7.12  Multi-vari plot for help desk call completion time
Figure 7.13  Multi-vari plot for printing press registration error amount
Figure 7.14  Multi-vari plot for shape of honed cylinder diameter


Table 7.2  Is/is-not table for packing-line problems
Table 7.3  Cause analysis table
Table 7.4  Drilling down into problems
Figure 7.15  Two perspectives of the same data set
Figure 7.16  Spatial relationship analysis

Figure 8.1  Mind map for room-improvement ideas
Table 8.1  Use of analogies for gardening tool marketing problem
Figure 8.2  Perceived limits to solution space
Figure 8.3  9 boxes/windows technique example
Figure 8.4  Level of system to address
Table 8.2  Decision table
Table 8.3  Paired comparison

Action plan tracking form
Solution–outcome matrix

Figure 10.1  Reasons people resist change
Figure 10.2  Clarifying what we can/can’t affect
Figure 10.3  Force field analysis
Figure 10.4  Changing a complex adaptive system
Figure 10.5  Impact of a punitive culture on root cause analysis
Figure 10.6  Types of helpers
Figure 10.7  Facilitator roles

Figure 11.1  Macro causes of human error
Figure 11.2  Flowchart of an incident
Figure 12.1  Preventive and corrective action
Figure 12.2  Major components of problem diagnosis
Table 12.1  6S approaches to finding causes
Figure 12.3  Thinking as a process

Figure A.1  Process flowchart for continuous line
Figure A.2  Line downtime by machine
Figure A.3  Causes of downtime for machine B
Figure A.4  Logic tree highlighting no audits being conducted
Figure A.5  PT process flow
Figure A.6  PT logic tree
Figure A.7  Pareto analysis of causes for returned parts


Figure A.8  SIPOC diagram for pinion problem
Figure A.9  Pinion manufacturing process
Table A.1  Runout data collection table
Figure A.10  Distributions of runout
Figure A.11  Bagging process flow
Figure A.12  Bag weights over time

Figure B.1  Generic process thinking
Figure B.2  SIPOC analysis form
Table B.1  Data collection and analysis tools
Table B.2  Analysis of variable data
Table B.3  Analysis of attribute data
Table B.4  DO It2 root cause analysis guide
Table B.5  DO It2 problem solving worksheet
Table B.6  Checklist for reviewing the corrective action process
Table B.7  Assistant for Steps 3, 4, & 5
Table B.8  Assistant for Steps 7, 8, & 9

Preface to the Second Edition

A lot has changed since the first edition of this book. Many more management system standards requiring corrective action have been developed. Many more failures of products, service processes, and organizations have occurred. And thankfully, root cause analysis (along with a close cousin, risk management) has been recognized as a critical component of organizational governance. Unfortunately, the weakest component of most systems, the human being, has not become more reliable. And while this book cannot directly impact the social factors underlying these weaknesses, it can help anyone whose role is to find specific causes for failures. It does that by providing a way of thinking that will produce evidence identifying specific causes for which solutions can then be implemented.

As I mentioned in the preface to the first edition, a lot of credit is due to past classroom instructors and work experiences that helped shape my thinking in an appropriate direction. However, I left out one major contributor, Dale Patterson, who as Director of Training for a major corporation requested a course on root cause analysis for some of their workforce. The newly developed course was a raging success, and I felt obligated to document the content in the form of a book so it could be more widely disseminated.

Since the first edition was published, the course has been conducted another several hundred times. Each offering brings participants from new sectors, industries, and organizations who are dealing with the same types of issues: customer complaints, product or process noncompliance, and other performance problems. But as my wife reminded me one time, if there were no problems, I’d be out of work!


The best part of teaching the course is that I also get to learn, so it’s time to update the book to add a bit more detail in some areas (and correct three errors that folks were kind enough to contact me about), while still keeping the page length to something people are willing to tackle. Although the chapters are still ordered the same, more examples have been added and the appendices also have significant additions. I hope you find it worth your time to read and apply.

Preface to the First Edition

Although many organizations have invested considerable time and effort to improve their processes, it isn’t unusual to see the same problems popping up over and over. The impacts on customers, end users, employees, profitability, and competitiveness have been well documented in management literature. One factor making such problems highly visible is the formalized management systems guided by documents such as ISO 9001. They require organizations to collect and analyze data on process performance using audits, internal performance indicators, and customer feedback, and problems identified are to have corrective action taken to prevent recurrence.

Unfortunately, insufficient effort has been placed on providing guidance on how to carry out an effective diagnosis to identify the causes of problems. Organizations often implement what a participant in one of my courses called a “duct tape solution,” hoping it will address the problem.

Meanwhile, the risks associated with repeat problems have significantly increased. Not only is there much greater competition in just about any niche, but organizations and individuals who suffer from failures often expect significant monetary compensation. The increase in transparency brought about by the Internet and various social and legal movements also makes problems more visible. Although the identification of problems is more rigorous, the ability to solve them has not necessarily improved at the same rate. Much of the training that is provided is too high level and philosophical, or is focused on creative rather than analytical problem solving. People are not being taught how to think logically and deductively.


This book provides detailed steps for solving problems, focusing more heavily on the analytical process involved in finding the actual causes of problems. It does this using a large number of figures, diagrams, and tools useful for helping to make our thinking visible. This increases our ability to see what is truly significant and to better identify errors in our thinking. It is not the intent of the book to teach the tools themselves, as this has been covered well elsewhere. However, methods for using the tools to make better decisions will be presented.

The topic of statistics has intentionally been left out of this book. Although various statistical methodologies are valuable for validating measurement and process variation, making probabilistic decisions about hypothesis validity, and designing and analyzing complex multivariate tests, these topics are beyond the scope of this book due to their extensive nature. The focus of the book is instead on the logic of finding causes, or, as often described in training workshops, it is Six Sigma lite: problem solving without all the heavy statistics. The primary focus is on solving repetitive problems, rather than performing investigations for major incidents/accidents.

Most of the terminology used is everyday language; thus readers can also use it for applications in their personal lives. Many of the examples involve situations with which the reader will likely be familiar. Chapters 1 and 2 provide a solid foundation for understanding what root cause analysis is all about, and Chapters 3–7 provide details on each of the five critical steps necessary for diagnosing problems. Chapters 8 and 9 provide guidance for identifying, selecting, and implementing solutions, and Chapters 10–12 look at the subject matter from other angles. Three appendices provide additional information to help the reader understand, apply, and learn more about root cause analysis.
It is important for the reader to understand that this book is designed to supplement, not replace, any guidance provided by regulators, customers, or other stakeholders who define requirements for an organization or industry. Also, while many examples are included, they are used only to help demonstrate specific concepts and should not be taken as recommendations for any specific problem situation the reader might face. One philosophical aspect reinforced throughout is that one can use the Pareto concept (the 80/20 rule) during the problem-solving process, thereby better utilizing resources in ways that will give a higher probability of success. However, given the level of risk involved, some organizations or situations may not lend themselves to this approach.

The book focuses primarily on the technical process of root cause analysis, although other issues that can affect the ability of the process to be carried out effectively will be highlighted. And while many examples are used, the data or other factors have typically been normalized or otherwise adjusted to keep original sources anonymous.

I would like to recognize some of the individuals and organizations that have contributed significantly to my knowledge of problem solving, whether through formal training or experience. The first is a high school physics instructor, Al Harper, who embedded a module on logic in the course. Then there were college and continuing-education instructors Hugh Broome and Jim White, who introduced me to statistical quality methods that helped me understand the importance of variation and its sources. As an employee of TRW Automotive I had the opportunity to continually diagnose product, equipment, and process problems, an experience worth millions. One of Dr. Joseph Juran’s early books, Managerial Breakthrough, also greatly influenced me. Of course, what really solidified and validated my knowledge was applying and teaching it for numerous organizations, including the government, military, education, manufacturing, healthcare, and financial sectors. My thanks to course participants and their organizations for the wide range of examples and their contributions to my learning.

Thanks also to the people at ASQ Quality Press for the opportunity to again publish with them. They really make the process seem easy, although it sometimes doesn’t feel that way when I give up a Saturday to work on a chapter. I encourage readers to contact me with comments or questions on the book, or about workshops based on the book. Go to http://www.aplomet.com.

1 Getting Better Root Cause Analysis

We live in a complex world. People and organizations often don’t believe they have the time to perform the in-depth analyses required to solve problems. Instead, they take remedial actions to make the problem less visible and implement a patchwork of ad hoc solutions they hope will prevent recurrence. Then when the problem returns, they get frustrated, and the cycle repeats.

The risks of repeated problems in today’s world are significant. Most customers have many potential sources for their purchases, and this competition means firms cannot afford the waste created by resources producing less than adequate results. While viral marketing and the Internet can help make a new product or service an instant hit, rapid communication about problems can just as quickly wipe out a success. And more than a few consumers and legal firms are willing to take advantage of failures to create for themselves a financial bonanza through class action lawsuits.

This is not to say that all problems need to be given the same attention. However, those with a greater potential impact do need to get the appropriate focus. Repeated failures can be interpreted as a lack of due diligence, given the knowledge gained over the past century for how to effectively design, produce, and deliver reliable products and services.

THE PROBLEM

According to the International Organization for Standardization (ISO 2007), by the end of 2006 more than a million certificates had been issued worldwide for compliance with quality management system standards such as ISO 9001, IATF 16949, and ISO 13485. While many of these certificates were issued to manufacturing firms, there also exist many other standards and/or guidelines used by other sectors and for other management systems. Some more widely known examples are the ISO 14001 standard for environmental management systems, the standards of the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO), the Capability Maturity Model Integration (CMMI) and Information Technology Infrastructure Library (ITIL) for information technology, and generally accepted accounting principles (GAAP) for financial accounting. In more recent years even more families of management system standards have been developed. Some examples include food safety management (ISO 22000), information system services management (ISO 20000), information security management (ISO 27000), occupational health and safety management (ISO 45001), energy management (ISO 50000), and asset management (ISO 55000). In 2018 the ISO survey counted more than 1.5 million certifications, excluding IATF 16949.

Such documents provide general descriptions of management systems that allow organizations flexibility for their unique characteristics. An important component of most of the documents is the recognition that systems do occasionally fail, and therefore provision is made to help the organization identify the failures, diagnose their causes, and take action to prevent recurrence. However, the guidance given for corrective action by the standards (as well as most organizations’ internal procedures) is primarily for administrative purposes and thus provides no help for how to perform the diagnosis.

Meanwhile, most people have not been trained in root cause analysis. The author’s years of experience in training people in problem solving indicate that using root cause analysis is not a widely held skill.
Schools certainly don’t teach it, even for professions where it’s obviously needed (Groopman 2007). Instead, they describe how to diagnose specific problems related to the technology under study (for example, medical problems if one is studying to be a physician, or computer technologies if one is studying computer science). However, root cause analysis is a generic skill that can be applied to nearly any type of problem. Some people learn it over time from repeated experiences solving problems, but this takes a lot of time, and many mistakes are likely to be made along the way before one becomes highly proficient.


THE IMPACT

Some people simply accept problems as part of life, as they appear to be everywhere you look. Here are just a few statistics to indicate the widespread failure of systems:

• A study by the Institute of Medicine estimated that in the United States as many as 98,000 people die each year due to medical errors (Kohn, Corrigan, and Donaldson 1999), and a 2016 Johns Hopkins study put that number at more than 250,000.

• Wikipedia lists more than 60 accidents or incidents involving commercial flights throughout the world from 2015 through 2018.

• According to the National Highway Traffic Safety Administration (NHTSA) web site, there were more than 50 product recalls announced just during April 2008.

• The Food and Drug Administration (FDA) web site listed more than 100 recalls in just the last three months of 2018.

While these public numbers are important, equally significant in their own ways are the day-to-day problems consumers and businesspeople must deal with. Maybe it’s a hotel room door that doesn’t open when the keycard is inserted, an error in a bank statement, a new TV that doesn’t work, or a late airplane departure. At the workplace it may be a document that wasn’t signed, a computer system that’s down, an invoice that was paid twice, or a product that doesn’t work but needs to be shipped.

Such problems can cost people their jobs, their life savings, and their lives. They also reduce the trust people have in one another and in our institutions. As people and organizations become more averse to risk, they are less willing to explore and innovate. Yet it is these latter types of activities that have created the technological, economic, and societal breakthroughs that have made the world as advanced and complex as it is today.

APPROACHES TO ROOT CAUSE ANALYSIS

There are many methodologies for conducting root cause analysis. A U.S. Department of Energy (DOE 2003) guideline lists the following five:

• Events and causal factor analysis: This process is widely used for major, single-event problems such as a refinery explosion. It uses evidence gathered quickly and methodically to establish a timeline for the activities leading up to the accident. Once the timeline has been established, the causal and contributing factors can be identified.

• Change analysis: This approach is applicable to situations where a system’s performance has shifted significantly. It explores changes made in people, equipment, information, and so forth, that may have contributed to the change in performance.

• Barrier analysis: This technique focuses on what controls are in place in the process to either prevent or detect a problem, and which might have failed.

• Management oversight and risk tree analysis: One aspect of this approach is the use of a tree diagram to look at what occurred and why it might have occurred.

• Kepner-Tregoe Problem Solving and Decision Making: This model provides four distinct phases for resolving problems: (1) situation analysis, (2) problem analysis, (3) solution analysis, and (4) potential problem analysis.

There are, of course, overlaps among these five approaches, and the model presented in this book, based on more than 40 years of experience troubleshooting a wide range of problems, incorporates aspects of each. A major focus of the book is to help problem solvers differentiate among the generic steps involved in (1) identifying a problem, (2) performing a diagnosis, (3) selecting and implementing solutions, and (4) leveraging and sustaining results. The major emphasis is placed on diagnosis, which at its core is logical, deductive analysis carried out using critical thinking.

One barrier to effective root cause analysis is a lack of logical thinking about cause-and-effect relationships. An example from a television news broadcast makes this point. The anchor stated that there had been an increase in the number of bank robberies during the previous year and attributed it to the fact that there had been an increase in the number of banks.
Yet had there not been an increase in the number of bank robbers (or robbery activities), the number of actual robberies would not have been higher. That is, although banks are necessary for bank robberies, they are not sufficient.

Another barrier is our reliance on intuition or previous experience. Daniel Kahneman (2013) states that there are two modes of thinking, System 1 and System 2. System 1 is rapid, subconscious, and without much depth, while System 2 is slow and methodical. System 1 relies on previous experience and often includes a lot of biases that cause the individual to jump to conclusions. When trying to dig down to find causes of a problem, it is useful to slow down and be more rigorous in our use of facts and data.

Such errors in thinking carry over to problems in technical, organizational, and social arenas. Individuals often focus on what is most visible, who has the deepest pockets, or whatever is the most politically convenient, rather than what will solve the problem. If such lapses in judgment continue to occur, the same problems will, of course, also continue. Just think what it feels like in an organization when everyone knows what the real cause is but no one is willing to speak up. Of course, as Dr. W. Edwards Deming often said, survival is not mandatory (Lowenthal 2002). Capitalism has a way of weeding out organizations that are less effective, but unfortunately it often takes a long time and causes a lot of pain.

EXISTING PROBLEM-SOLVING MODELS

So how can organizations overcome the lack of guidance in root cause analysis? One way is to provide a model that gives people sufficient details about the discrete mental activities required. However, it is also useful to understand the potential weaknesses of some of the existing models used within organizations.

The ISO 9001 Corrective Action Process

A corrective action procedure is what most organizations provide for employees who must perform root cause analysis and take corrective action. Unfortunately, the procedure tends to mimic the ISO standard by requiring the following: (1) problems are identified and documented, (2) causes are determined, (3) corrective action is taken, and (4) effectiveness of the action is evaluated. While the procedure typically includes a bit more information about who is to oversee and sign off on corrective actions, what forms and databases are to be used to document the diagnosis/actions/results, and the required reporting channels and timing, it usually does not provide any help for how to go about finding causes.

Six Sigma DMAIC

The Define-Measure-Analyze-Improve-Control (DMAIC) model used for Six Sigma process improvement is certainly a good one. It helps an organization make sure that it is working on the right problems, has the right people involved, is considering critical-to-customer measures, is evaluating reliability/stability/capability of the process data, is identifying the most important factors contributing to performance, is changing the process to reduce the impact of those factors, and is maintaining the gains. The three steps of define, measure, and analyze are excellent for identifying root causes, but a Six Sigma Black Belt who guides project teams through such an analysis typically receives four weeks of training on how to apply the model and the various tools that support it. So just providing such a high-level model to assist the corrective action process would not be adequate, since untrained personnel would have insufficient knowledge of how to follow it.

Other Models

There are, of course, many other problem-solving models available. Plan-Do-Check-Act (PDCA), developed by Dr. Walter Shewhart and communicated and modified by Deming to PDSA (Plan-Do-Study-Act), has been widely used but provides little detail on how to find a root cause. The Eight Discipline (8D) model, developed by the Ford Motor Company in the 1980s, has been widely adopted by many organizations, and the enhanced Global 8D version is quite good. But again, the raw form (for example, just the list of 8Ds) does not provide much cognitive guidance.

A PROPOSED MODEL

Due to the demand for root cause analysis training, the author took his 7-step problem-solving model and expanded it to provide more depth in the diagnostic steps. Figure 1.1 is the resulting 10-step model, while Table 1.1 shows how it compares to other common models. The model consists of two major phases: Steps 1–5 are the diagnostic phase (finding the root cause), and Steps 6–10 are the solution phase (fixing the problem). And while the model looks linear, a unique feature is the iterative nature of the five diagnostic steps.


Figure 1.1 The 10-step model:

1. Define the problem
2. Understand the process
3. Identify possible causes
4. Collect the data
5. Analyze the data
6. Identify possible solutions
7. Select solution(s) to be implemented
8. Implement the solution(s)
9. Evaluate the effect(s)
10. Institutionalize the change


Appendices contents page

  1. Psychiatry, Nursing & Midwifery Research Ethics Subcommittee (PNM RESC) original approval (18.11.2014)
  2. Psychiatry, Nursing & Midwifery Research Ethics Subcommittee (PNM RESC) modification approval (27.02.2015)
  3. Psychiatry, Nursing & Midwifery Research Ethics Subcommittee (PNM RESC) modification approval (16.07.2015)
  4. Questionnaires completed at each time point
  5. Study questionnaires
    1. The Habitual Index of Negative Thinking
    2. Self-Critical Rumination Scale
    3. Work and Social Adjustment Scale
    4. Patient Health Questionnaire (PHQ-9)
    5. Generalised Anxiety Disorder (GAD-7)
    6. Rosenberg’s Self-Esteem Scale
    7. The Multi-Dimensional Perfectionism Scale
    8. Self-Compassion Scale
    9. The Emotion Regulation Questionnaire
    10. Beliefs about Emotions scale
    11. The Forms of Self-Criticizing/Attacking and Self-Reassuring Scale
    12. The Functions of Self-Criticizing/Attacking Scale
    1. Session 1
    2. Session 2
    3. Session 3
    4. Session 4
    5. Session 5

    Appendix 1. Psychiatry, Nursing & Midwifery Research Ethics Subcommittee (PNM RESC) original approval (18.11.2014)

    Institute of Psychiatry, Psychology and Neuroscience

    De Crespigny Park
    London SE5 8AF

    PNM/14/15-33 Self-criticism: Development of a new intervention

    Review Outcome: Full Approval

    Thank you for submitting your application for ethical approval. This was reviewed by the PNM RESC on 18 November 2014. As a result, the Committee have granted full ethical approval for your study.

    Provisos
    Your approval is based on the following provisos being met:

    1. Sections 2.2 and 2.3: Please note that ethical approval for doctoral studies is normally granted for a period of 3 years.
    2. Section 7.1:
    1. The recruitment documents should clearly indicate that the study is a research project. Therefore, the Committee strongly recommends that paragraphs beginning with ‘We are offering…’ are reworded to reflect this.
    2. The Committee recommends that participants are allowed at least 24 hours to consider whether to take part after reading the Information Sheet.
    1. Information Sheet:
    1. Remove the paragraph entitled ‘What if there is a problem?’
    2. Insert the paragraph beginning with ‘If this study has harmed you in any way…’ before the contact details for your academic supervisors.

    You are not required to provide evidence to the Committee that these provisos have been met, but your ethical approval is only valid if these changes are made. You must not commence your research until these provisos have been met.

    Please ensure that you follow all relevant guidance as laid out in the King’s College London Guidelines on Good Practice in Academic Research (http://www.kcl.ac.uk/college/policyzone/index.php?id=247).

    For your information, ethical approval is granted until 20 November 2017. If you need approval beyond this point, you will need to apply for an extension to approval at least two weeks prior to this date, explaining why the extension is needed (please note, however, that a full re-application will not be necessary unless the protocol has changed). You should also note that if your approval is for one year, you will not be sent a reminder when it is due to lapse.

    Ethical approval is required to cover the duration of the research study, up to the conclusion of the research. The conclusion of the research is defined as the final date or event detailed in the study description section of your approved application form (usually the end of data collection when all work with human participants will have been completed), not the completion of data analysis or publication of the results.
    For projects that only involve the further analysis of pre-existing data, approval must cover any period during which the researcher will be accessing or evaluating individual sensitive and/or un-anonymised records.
    Note that after the point at which ethical approval for your study is no longer required due to the study being complete (as per the above definitions), you will still need to ensure all research data/records management and storage procedures agreed to as part of your application are adhered to and carried out accordingly.

    If you do not start the project within three months of this letter please contact the Research Ethics Office.

    Should you wish to make a modification to the project or request an extension to approval you will need approval for this and should follow the guidance relating to modifying approved applications: http://www.kcl.ac.uk/innovation/research/support/ethics/applications/modifications.aspx

    Please would you also note that we may, for the purposes of audit, contact you from time to time to ascertain the status of your research.

    If you have any query about any aspect of this ethical approval, please contact your panel/committee administrator in the first instance (http://www.kcl.ac.uk/innovation/research/support/ethics/contact.aspx)
    We wish you every success with this work.

    James Patterson – Senior Research Ethics Officer

    For and on behalf of

    Professor Gareth Barker, Chairman

    Psychiatry, Nursing and Midwifery Research Ethics Subcommittee (PNM RESC)

    Appendix 2. Psychiatry, Nursing & Midwifery Research Ethics Subcommittee (PNM RESC) modification approval (27.02.2015)

    King’s College London
    PO78, Addiction Sciences Building
    London SE5 8AF

    PNM/14/15-33 Self-criticism: Development of a new intervention

    Thank you for submitting a modifications request for the above study. I am writing to confirm approval of these. The approved modifications are summarised broadly below:

    1. Section 4:
    1. Collection of GAD-7, PHQ-9 and Rosenberg’s Self-Esteem Scale measures at T1.
    2. Use of SurveyMonkey to collect responses to measures T1 to T5.
    2. Section 6.2: Addition of anorexia nervosa to the exclusion criteria.
    3. Section 6.3: Use of the Mini International Neuropsychiatric Interview for screening.

    If you have any queries, please do not hesitate to contact the Research Ethics Office.

    Appendix 3. Psychiatry, Nursing & Midwifery Research Ethics Subcommittee (PNM RESC) modification approval (16.07.2015)

    PNM/14/15-33 Self-criticism: Development of a new intervention

    Thank you for submitting a modifications request for the above study. I am writing to confirm approval of this. The approved modification is broadly summarised below:

    If you have any questions, please let me know.

    James Patterson – Senior Research Ethics Officer

    Appendix 4. Questionnaires completed at each time point

    Questionnaire | Type of measure | Time point
    The Habitual Index of Negative Thinking | Primary outcome measure | Every time point
    Self-Critical Rumination Scale | Primary outcome measure | Every time point
    Work and Social Adjustment Scale | Primary outcome measure | Every time point
    Patient Health Questionnaire (PHQ-9) | Secondary outcome measure | Screening, S1, S3, S6 & follow-up
    Generalised Anxiety Disorder (GAD-7) | Secondary outcome measure | Screening, S1, S3, S6 & follow-up
    Rosenberg’s Self-Esteem Scale | Secondary outcome measure | Screening, S1, S3, S6 & follow-up
    The Multi-Dimensional Perfectionism Scale | Secondary outcome measure | S1, S3, S6 & follow-up
    Self-Compassion Scale | Process measure | S1 – S6 & follow-up
    The Emotion Regulation Questionnaire | Process measure | S1, S3, S6 & follow-up
    Beliefs about Emotions scale | Process measure | S1, S3, S6 & follow-up
    The Forms of Self-Criticizing/Attacking and Self-Reassuring Scale | To aid formulation | S1 only
    The Functions of Self-Criticizing/Attacking Scale | To aid formulation | S1 only

    Notes: S1 = session 1; S3 = session 3; S6 = session 6.

    Appendix 5. Study questionnaires

    Occasionally we think about ourselves. Such thoughts may be positive, but may also be negative. In this study we are interested in negative thoughts you may have about yourself. Please indicate how much you agree or disagree with the following statements.


    Sample size estimation for power and accuracy in the experimental comparison of algorithms

    Experimental comparisons of performance represent an important aspect of research on optimization algorithms. In this work we present a methodology for defining the required sample sizes for designing experiments with desired statistical properties for the comparison of two methods on a given problem class. The proposed approach allows the experimenter to define desired levels of accuracy for estimates of mean performance differences on individual problem instances, as well as the desired statistical power for comparing mean performances over a problem class of interest. The method calculates the required number of problem instances, and of runs of the algorithms on each test instance, so that the accuracy of the estimated differences in performance is controlled at the predefined level. Two examples illustrate the application of the proposed method and its ability to achieve the desired statistical properties with a methodologically sound definition of the relevant sample sizes.



    A Validity and Reliability Study of the Multidimensional Trust in Health-Care Systems Scale in a Turkish Patient Population

    The importance of trust within health care is widely acknowledged. Measuring patients’ trust in health care systems may contribute to plans for the financing, delivery, and outcomes of health services. Although many scales are available to measure patient trust, less attention has been paid to the multidimensional nature of trust in health care systems. The purpose of this methodological study was to adapt the Multidimensional Trust in Health-Care Systems Scale into Turkish and to evaluate its psychometric properties for a Turkish patient population. The scale was adapted into Turkish through a translation and back-translation process. The content validity of the scale was assessed using expert approval. The psychometric properties of the scale were investigated by collecting data from 232 hospitalised patients in Ankara during the period of 1 January–30 December 2010. An exploratory factor analysis identified that the eigenvalues for the three factors of the scale were 7.30, 2.61, and 1.21; these three factors explained 65% of the variance. A confirmatory factor analysis indicated a sufficient model fit for the construct validity of the scale. Cronbach’s α for the total scale was 0.87, as well as 0.91, 0.82, and 0.61 for the three subscales; the Spearman-Brown split-half reliability coefficient was 0.67. Despite the low internal consistency of the third subscale, evidence from this study supports the validity and reliability of the Multidimensional Trust in Health-Care Systems Scale. This instrument can be used to measure multiple aspects of trust in the health care system; however, as trust is a contextual phenomenon, further work is needed to test the psychometric properties of this scale both in Turkish and in different cultures.



    Objectives

    We aim to introduce the discussion on the crisis of confidence to sport and exercise psychology. We focus on an important aspect of this debate, the impact of sample sizes, by assessing sample sizes within sport and exercise psychology. Researchers have argued that publications in psychological research contain numerous false-positive findings and inflated effect sizes due to small sample sizes.

    Method

    We analyse the four leading journals in sport and exercise psychology regarding sample sizes of all quantitative studies published in these journals between 2009 and 2013. Subsequently, we conduct power analyses.

    Results

    A substantial proportion of published studies does not have sufficient power to detect effect sizes typical for psychological research. Sample sizes and power vary between research designs. Although many correlational studies have adequate sample sizes, experimental studies are often underpowered to detect small-to-medium effects.

    Conclusions

    As sample sizes are small, research in sport and exercise psychology may suffer from false-positive results and inflated effect sizes, while at the same time failing to detect meaningful small effects. Larger sample sizes are warranted, particularly in experimental studies.



    Larger sample sizes generally lead to increased precision when estimating unknown parameters. For example, if we wish to know the proportion of a certain species of fish that is infected with a pathogen, we would generally have a more precise estimate of this proportion if we sampled and examined 200 rather than 100 fish. Several fundamental facts of mathematical statistics describe this phenomenon, including the law of large numbers and the central limit theorem.

    In some situations, the increase in precision for larger sample sizes is minimal, or even non-existent. This can result from the presence of systematic errors or strong dependence in the data, or if the data follows a heavy-tailed distribution.

    Sample sizes may be evaluated by the quality of the resulting estimates. For example, if a proportion is being estimated, one may wish to have the 95% confidence interval be less than 0.06 units wide. Alternatively, sample size may be assessed based on the power of a hypothesis test. For example, if we are comparing the support for a certain political candidate among women with the support for that candidate among men, we may wish to have 80% power to detect a difference in the support levels of 0.04 units.

    Estimation of a proportion

    A relatively simple situation is estimation of a proportion. For example, we may wish to estimate the proportion of residents in a community who are at least 65 years old.

    The estimator of a proportion is p̂ = X/n, where X is the number of 'positive' observations (e.g. the number of people out of the n sampled who are at least 65 years old). When the observations are independent, this estimator has a (scaled) binomial distribution (and is also the sample mean of data from a Bernoulli distribution). The variance of this estimator, p(1 − p)/n, attains its maximum, 0.25/n, when the true parameter is p = 0.5. In practice, since p is unknown, this maximum variance is often used for sample size assessments. If a reasonable estimate for p is known, the quantity p(1 − p) may be used in place of 0.25.
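The conservative role of p = 0.5 can be checked numerically. This is a minimal sketch; the function name is our own, not from the text:

```python
import math

def proportion_se(p: float, n: int) -> float:
    """Standard error of the estimator p-hat = X/n: sqrt(p(1 - p)/n)."""
    return math.sqrt(p * (1 - p) / n)

# p(1 - p) is maximised at p = 0.5, so p = 0.5 gives the worst-case
# (largest) standard error for any given n.
print(proportion_se(0.5, 100))
print(proportion_se(0.3, 100))
```

Any other value of p yields a smaller standard error, which is why 0.25 is a safe substitute for p(1 − p) when p is unknown.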

    For sufficiently large n, the distribution of p̂ will be closely approximated by a normal distribution. [1] Using this approximation and the Wald method for the binomial distribution yields a confidence interval of the form

    p̂ ± Z √(p̂(1 − p̂)/n),

    where Z is the standard normal quantile for the desired confidence level (e.g. Z = 1.96 for 95% confidence).

    If we wish to have a confidence interval that is W units total in width (W/2 on each side of the sample mean), we would solve

    2Z √(0.25/n) = W

    (using the conservative value p(1 − p) = 0.25) for n, yielding the sample size

    n = Z²/W².

    For example, if we are interested in estimating the proportion of the US population who supports a particular presidential candidate, and we want the width of the 95% confidence interval to be at most 2 percentage points (0.02), then we would need a sample size of 1.96²/0.02² = 9604. It is reasonable to use the estimate p = 0.5 in this case, both because presidential races are often close to 50/50 and because it is prudent to use a conservative estimate. The margin of error in this case is 1 percentage point (half of 0.02).
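The calculation above can be sketched in a few lines of Python; the function name and its default are our own choices, not part of the original text:

```python
import math

def sample_size_proportion(z: float, width: float, p: float = 0.5) -> int:
    """Smallest n for which the Wald interval p-hat +/- z*sqrt(p(1-p)/n)
    has total width at most `width`; p = 0.5 is the conservative default."""
    n = (2 * z) ** 2 * p * (1 - p) / width ** 2
    # Guard against floating-point noise before taking the ceiling.
    return math.ceil(round(n, 6))

# The polling example from the text: 95% confidence (z = 1.96), width 0.02.
print(sample_size_proportion(1.96, 0.02))  # 9604
```

If a smaller proportion is plausible, passing that estimate for p shrinks the required sample considerably.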

    The foregoing is commonly simplified. Taking Z ≈ 2 and the conservative p = 0.5, the interval

    p̂ ± 1/√n

    will form an approximate 95% confidence interval for the true proportion. If this interval needs to be no more than W units wide, the equation

    2/√n = W

    can be solved for n, yielding [2] [3] n = 4/W² = 1/B², where B is the error bound on the estimate, i.e., the estimate is usually given as within ± B. So, for B = 10% one requires n = 100, for B = 5% one needs n = 400, for B = 3% one needs n = 1112 (1/0.03² ≈ 1111.1, rounded up), while for B = 1% a sample size of n = 10000 is required. These numbers are often quoted in news reports of opinion polls and other sample surveys. Since the computed n is a minimum, reported sample sizes are rounded up, and the number of respondents must be at least that minimum.
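The shorthand n = 1/B² can be tabulated directly. A minimal sketch (the function name is ours):

```python
import math

def simplified_sample_size(error_bound: float) -> int:
    """n = 1/B**2: the z ~ 2 shorthand for a 95% interval on a proportion."""
    return math.ceil(round(1 / error_bound ** 2, 6))

# The familiar poll figures for B = 10%, 5%, 3%, 1%.
for b in (0.10, 0.05, 0.03, 0.01):
    print(f"B = {b:.0%}: n = {simplified_sample_size(b)}")
```

Note that B = 3% gives n = 1112 (1/0.03² ≈ 1111.1), a figure often loosely quoted as roughly 1000.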

    Estimation of a mean

    A proportion is a special case of a mean. When estimating the population mean using an independent and identically distributed (iid) sample of size n, where each data value has variance σ², the standard error of the sample mean is:

    σ/√n.

    This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields a confidence interval of the form

    x̄ ± Z σ/√n.

    If we wish to have a confidence interval that is W units total in width (W/2 on each side of the sample mean), we would solve

    2Z σ/√n = W

    for n, yielding the sample size

    n = 4Z²σ²/W².

    For example, if we are interested in estimating the amount by which a drug lowers a subject's blood pressure with a 95% confidence interval that is six units wide, and we know that the standard deviation of blood pressure in the population is 15, then the required sample size is 4 × 1.96² × 15²/6² = 96.04, which is rounded up to 97 because sample sizes must be integers and must meet or exceed the calculated minimum.
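The blood-pressure example can be reproduced with a short helper; the function name is our own:

```python
import math

def sample_size_mean(z: float, sigma: float, width: float) -> int:
    """n = 4 z^2 sigma^2 / W^2: interval of total width W for a mean
    with known standard deviation sigma."""
    n = 4 * z ** 2 * sigma ** 2 / width ** 2
    # Guard against floating-point noise before taking the ceiling.
    return math.ceil(round(n, 6))

# The example from the text: z = 1.96, sigma = 15, W = 6.
print(sample_size_mean(1.96, 15, 6))  # 97
```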

    A common problem faced by statisticians is calculating the sample size required to yield a certain power for a test, given a predetermined Type I error rate α. This can be estimated using pre-determined tables for certain values, using Mead's resource equation, or, more generally, using the cumulative distribution function:

    Tables

    A pre-calculated table can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group of equal size (that is, the total number of individuals in the trial is twice that of the number given), for a desired significance level of 0.05. [4] The parameters used are:

    • the desired statistical power of the trial, and
    • Cohen's d (= effect size), which is the expected difference between the means of the target values of the experimental group and the control group, divided by the expected standard deviation.

    Mead's resource equation

    Mead's resource equation is often used for estimating sample sizes of laboratory animals, as well as in many other laboratory experiments. It may not be as accurate as other methods of estimating sample size, but it gives a hint of the appropriate sample size when parameters such as expected standard deviations or expected differences in values between groups are unknown or very hard to estimate. [5]

    The equation is

    E = N − B − T,

    where all the parameters are in fact degrees of freedom of the corresponding counts; hence, each count is reduced by 1 before insertion into the equation:

    • N is the total number of individuals or units in the study (minus 1)
    • B is the blocking component, representing environmental effects allowed for in the design (minus 1)
    • T is the treatment component, corresponding to the number of treatment groups (including control group) being used, or the number of questions being asked (minus 1)
    • E is the degrees of freedom of the error component, and should be somewhere between 10 and 20.

    For example, if a study using laboratory animals is planned with four treatment groups (T=3), with eight animals per group, making 32 animals total (N=31), without any further stratification (B=0), then E would equal 28, which is above the cutoff of 20, indicating that sample size may be a bit too large, and six animals per group might be more appropriate. [6]
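The arithmetic of the animal example can be written out directly; the function name is our own:

```python
def mead_error_df(n_units: int, n_blocks: int, n_treatments: int) -> int:
    """E = N - B - T, with each component expressed in degrees of freedom
    (i.e. the raw count minus one)."""
    return (n_units - 1) - (n_blocks - 1) - (n_treatments - 1)

# The example from the text: 4 treatment groups of 8 animals, no blocking.
print(mead_error_df(32, 1, 4))  # 28, above the 10-20 target range
# Reducing to 6 animals per group brings E back to the upper limit.
print(mead_error_df(24, 1, 4))  # 20
```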

    Cumulative distribution function

    Let Xi, i = 1, 2, …, n be independent observations taken from a normal distribution with unknown mean μ and known variance σ². Consider two hypotheses, a null hypothesis:

    H0: μ = 0

    and an alternative hypothesis:

    Ha: μ = μ*

    for some 'smallest significant difference' μ* > 0. This is the smallest value for which we care about observing a difference. Now, if we wish to (1) reject H0 with a probability of at least 1 − β when Ha is true (i.e. a power of 1 − β), and (2) reject H0 with probability α when H0 is true, then we need the following:

    If zα is the upper α percentage point of the standard normal distribution, then

    reject H0 if x̄ > zα σ/√n

    is a decision rule which satisfies (2). (This is a 1-tailed test.)

    Now we wish for this to happen with a probability of at least 1 − β when Ha is true. In this case, our sample average will come from a normal distribution with mean μ*. Therefore, we require

    Pr(x̄ > zα σ/√n) ≥ 1 − β,  with x̄ ~ N(μ*, σ²/n).

    Through careful manipulation, this can be shown (see Statistical power#Example) to happen when

    n ≥ ((zα + zβ) σ/μ*)²,

    where zβ is the upper β percentage point of the standard normal distribution.
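Assuming the standard one-sided z-test result n ≥ ((zα + zβ)σ/μ*)², the minimum sample size can be computed with the standard library; the function name and the example figures are our own:

```python
import math
from statistics import NormalDist

def sample_size_power(alpha: float, beta: float, sigma: float, mu_star: float) -> int:
    """Smallest n achieving power 1 - beta for the one-sided z-test of
    H0: mu = 0 against Ha: mu = mu*, with known sigma."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # upper-alpha point
    z_beta = NormalDist().inv_cdf(1 - beta)    # upper-beta point
    return math.ceil(((z_alpha + z_beta) * sigma / mu_star) ** 2)

# Illustrative numbers: alpha = 0.05, power 0.80, sigma = 1, mu* = 0.5.
print(sample_size_power(0.05, 0.20, 1.0, 0.5))  # 25
```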

    With more complicated sampling techniques, such as stratified sampling, the sample can often be split up into sub-samples. Typically, if there are H such sub-samples (from H different strata), then each of them will have a sample size nh, h = 1, 2, …, H. These nh must conform to the rule that n1 + n2 + … + nH = n (i.e. the total sample size is given by the sum of the sub-sample sizes). Selecting these nh optimally can be done in various ways, using (for example) Neyman's optimal allocation.

    There are many reasons to use stratified sampling: [7] to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually. A useful, partly non-random method would be to sample individuals where easily accessible, but, where not, sample clusters to save travel costs. [8]

    In general, for H strata, a weighted sample mean is

    x̄ = Σh Wh x̄h,

    with variance Var(x̄) = Σh Wh² Var(x̄h). The weights, Wh, frequently, but not always, represent the proportions of the population elements in the strata, in which case Wh = Nh/N. For a fixed sample size, that is n = Σh nh, this variance can be made a minimum if the sampling rate within each stratum is made proportional to the standard deviation within each stratum: nh/Nh = k Sh, where Sh is the standard deviation within stratum h and k is a constant such that Σh nh = n.

    An "optimum allocation" is reached when the sampling rates within the strata are made directly proportional to the standard deviations within the strata and inversely proportional to the square root of the sampling cost per element within the strata, Ch:

    nh/Nh = K Sh/√Ch,

    where K is a constant such that Σh nh = n or, more generally, such that the nh are proportional to Wh Sh/√Ch.
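A small numerical sketch of Neyman allocation under equal sampling costs (the function name and figures are our own, purely illustrative):

```python
def neyman_allocation(n: int, stratum_sizes: list, stratum_sds: list) -> list:
    """Split a total sample of n across strata with n_h proportional to
    N_h * S_h (Neyman allocation, assuming equal costs per element)."""
    products = [N_h * S_h for N_h, S_h in zip(stratum_sizes, stratum_sds)]
    total = sum(products)
    return [round(n * p / total) for p in products]

# Two strata of equal size, the second twice as variable, so it
# receives twice the sample.
print(neyman_allocation(300, [1000, 1000], [10.0, 20.0]))  # [100, 200]
```

Because each nh is rounded independently, the allocations may sum to slightly more or less than n in general; a production implementation would reconcile the remainder.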

    Sample size determination in qualitative studies takes a different approach. It is generally a subjective judgment, taken as the research proceeds. [13] One approach is to continue to include further participants or material until saturation is reached. [14] The number needed to reach saturation has been investigated empirically. [15] [16] [17] [18]

    There is a paucity of reliable guidance on estimating sample sizes before starting the research, with a range of suggestions given. [16] [19] [20] [21] A tool akin to a quantitative power calculation, based on the negative binomial distribution, has been suggested for thematic analysis. [22] [21]


    Design and rationale of a mixed methods randomized control trial: ADdressing Health literacy, bEliefs, adheRence and self-Efficacy (ADHERE) program to improve diabetes outcomes

    Improving medication adherence is one of the most effective approaches to improving the health outcomes of patients with diabetes. To date, efforts to enhance diabetes medication adherence have focused on improving diabetes-related knowledge. Unfortunately, behavior change often does not follow knowledge change. Enhancing communication between patients and healthcare professionals by addressing health literacy-related psychosocial attributes is therefore critical.

    Objective

    To examine whether augmenting usual care with a patient-centered health literacy and psychosocial support intervention will improve medication adherence for patients with diabetes, compared to usual care alone.

    Methods

    This study is a randomized controlled trial with an intervention mixed methods design. The fifty participants being enrolled are English-speaking adults aged 18–80 years with diagnosed diabetes who take at least one diabetes medication, have low diabetes medication adherence (proportion of days covered less than 80%, or based on clinical notes), and have poor diabetes control (hemoglobin A1c ≥ 8%). Participants will be allocated either to a control group receiving usual care (n = 25) or to an intervention group (n = 25) receiving usual care plus a 6-session intervention focusing on the modifiable psychosocial factors that may influence medication adherence. A questionnaire will be administered to all participants at baseline and at the end of the intervention to assess the effectiveness of the intervention. Fifteen participants from the intervention group will be interviewed to explore their experiences and perceptions of the intervention processes and outcomes.

    Conclusions

    The trial will examine if a patient-centered intervention that addresses patients’ health literacy and focuses on modifiable psychosocial factors will improve medication adherence among patients with diabetes.


    Future research directions

    Psychiatry is in urgent need of approaches that enable tailored precision therapies. For designing efficient treatments, we also require a better understanding of the neurobiological mechanisms underlying pathology at a transdiagnostic level. While more traditional hypothesis-driven statistical approaches to these issues have not brought the necessary breakthroughs, modern ML algorithms like DNNs provide new hope given their outstanding performance in other medical domains. At first sight, the complexity (and thus computational strength) of DNNs comes at a cost—large sample sizes. However, as we tried to discuss here, there are several ways to make DNNs suitable even for much smaller sample sizes. We have discussed various concrete steps to enable the development of efficient schemes using complex models for individualized person-centered predictions (see also [9, 87]). Models first trained on group data may provide one future avenue (Fig. 5), if it can be achieved that these capture sufficient particularities at the (individualized) single-subject level to yield meaningful forecasts, and not just reflect common group characteristics.

    A deeper understanding of hidden network representations in DNNs, i.e. ‘opening the black box’, could on the other hand reveal new insights or generate new hypotheses regarding pathological neurobiological mechanisms. Indeed, several studies have already demonstrated that DNN representations may yield interpretable features (e.g., [33, 94, 99, 163]). For instance, by examining the weights of their DNN, Zeng et al. [94] observed that cortical-striatal-cerebellar functional connectivity features were most relevant to the classification of schizophrenia. After training a deep AE on brain volume data from a large set of healthy individuals, Pinaya et al. [138] assessed the region specific reconstruction error made by the network when predicting psychiatric patients to pinpoint the most relevant brain regions involved in separating patients from controls. Li et al. [163] developed a visualization framework to decipher regions of interest important in the detection of individuals with autism spectrum disorder compared to controls based on fMRI recordings. Visualization approaches for assessing DNNs are currently a hot topic in ML, and future developments in this direction may help uncover interpretable multi-modal biomarkers of psychiatric disease. The interplay between the bench and the bedside, pathophysiological understanding and tailored treatment, continues in the age of AI, aided by the new tools discussed in this paper.