Members can now go directly from the ASSBI website to Brain Impairment (BIM) at the Cambridge University Press site: sign in to the ASSBI website, then click on this link.
The 2018 Douglas and Tate Prize for the best research paper published in 2017 was shared this year. The winners were Renee Roelofs, lead author of the article “Social Cognitive Interventions in Neuropsychiatric Patients: A Meta-Analysis”, and Nicholas Ryan, whose article was entitled “Examining the Prospective Relationship between Family Affective Responsiveness and Theory of Mind in Chronic Paediatric Traumatic Brain Injury”. Each was awarded a certificate and a cash prize donated by Cambridge University Press at the ASSBI Conference in Adelaide.
The Douglas and Tate Prize, named after the two founding Editors of Brain Impairment (Professors Jacinta Douglas and Robyn Tate), is presented annually to the best research article of the year at the ASSBI Conference Awards Ceremony.
Volume 19, Issue 1
The first issue of Volume 19 starts another year for Brain Impairment with a world-class special issue on Quantitative Data Analysis, guest edited by Robyn Tate and Michael Perdices.
Quantitative data analysis for single-case methods, between-groups designs, and instrument development
We are pleased to bring you this special issue of Brain Impairment on quantitative data analysis, an area of increasing complexity and sophistication. In planning the special issue, our intention was to bring together a set of articles covering diverse and topical areas in the field, with the idea of having the volume serve as a “go-to” resource. The special issue is aimed at researchers, clinicians engaged in research, and advanced students, all of whom may have a passing familiarity with a particular data analytic technique but wish to know more about it and how to apply it. Accordingly, our aim is to equip the reader with concrete, hands-on information that can be applied in the day-to-day world of research. The authors of the articles comprising the special issue, each of whom is an expert in their field, were charged with the task of writing a practical guide and providing worked examples to illustrate the application of their selected technique(s). The papers in the special issue cover three domains: the single-case method, the between-groups design, and psychometric aspects of instrument development.
Single-case research is increasingly used in the neurorehabilitation field. Perusal of evidence databases such as PsycBITE (www.psycbite.com) demonstrates the exponential growth of publications over the past 40 years, numbering almost 1,500 single-case intervention studies in the field of acquired brain impairment alone. There is increasing recognition of the importance of scientific rigour in single-case experimental designs (SCEDs; e.g., Kratochwill et al., 2013; Tate et al., 2013), part and parcel of which is the critical role of data evaluation. Three of the papers in the special issue describe various approaches to data evaluation. Traditionally, SCEDs have focused on the visual analysis of graphed data, the argument being that if you cannot see a treatment effect, then it is likely not to be very important. In their paper on the systematic use of visual analysis, Ledford, Lane and Severini provide a heuristic tutorial on the steps a researcher should follow to conduct a comprehensive visual analysis, examining level, trend, variability, consistency, overlap and immediacy within and/or between phases.
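To make these features concrete, here is a minimal sketch in Python of how the descriptive quantities that underpin visual analysis of a two-phase (A-B) design might be computed. The data and phase lengths are invented for illustration; this is not code from Ledford and colleagues.

```python
# Minimal sketch: descriptive features behind visual analysis of an A-B design.
# All data are hypothetical and chosen only to illustrate the computations.
from statistics import mean, stdev

baseline = [4, 5, 3, 4, 5, 4]        # phase A (hypothetical scores)
intervention = [6, 7, 8, 8, 9, 10]   # phase B (hypothetical scores)

def slope(y):
    """Least-squares trend (slope) of a phase across session numbers 0..n-1."""
    n = len(y)
    x_bar, y_bar = (n - 1) / 2, mean(y)
    num = sum((x - x_bar) * (v - y_bar) for x, v in enumerate(y))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

level_change = mean(intervention) - mean(baseline)            # level
trends = (slope(baseline), slope(intervention))               # trend per phase
spread = (stdev(baseline), stdev(intervention))               # variability
overlap = sum(v <= max(baseline) for v in intervention) / len(intervention)
immediacy = mean(intervention[:3]) - mean(baseline[-3:])      # first 3 vs last 3

print(f"level change {level_change:.2f}, trends {trends[0]:.2f} -> {trends[1]:.2f}")
print(f"SDs {spread[0]:.2f}/{spread[1]:.2f}, overlap {overlap:.0%}, "
      f"immediacy {immediacy:.2f}")
```

A visual analyst would, of course, inspect these features on a graph rather than compute them in isolation; the point of the sketch is simply to show what “level, trend, variability, overlap and immediacy” refer to.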
Following on, Manolov and Solanas delineate a variety of descriptive and inferential techniques available for SCEDs. In so doing, they offer an approach that integrates the visual and statistical camps, noting that they themselves “rely heavily on visual representation of the data to enhance the interpretation of the numerical results”. Analytic techniques in this area are rapidly evolving, but with this welcome growth comes the challenge of selecting the technique most suitable for the dataset. The authors re-analyse previously published data to illustrate the application of different statistical techniques, along with the rationale for using each one. Among the helpful directions provided in the paper is knowing that (a) as with between-groups analyses, there is no single analytic technique that can be regarded as the gold standard, but (b) unlike between-groups analyses, it is not advisable to determine the analytic technique a priori; rather, the data need to be inspected for trend, variability and other features to determine a suitable technique that will not produce misleading results. Readers will appreciate the direction to websites where the intricacies of complicated procedures advocated by the authors, such as piecewise regression, can be conducted without angst.
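As an illustration of one such technique, the sketch below fits a simple piecewise regression to the same invented A-B data as above, estimating baseline level and trend together with the change in level and trend at the intervention point. It is not the authors' code, nor that of the websites they reference; it is a bare-bones version of the idea.

```python
# Piecewise (interrupted time-series) regression sketch for an A-B design.
# Hypothetical data; the parameterisation codes "sessions since intervention"
# as 0 at the first intervention session.
import numpy as np

y = np.array([4, 5, 3, 4, 5, 4, 6, 7, 8, 8, 9, 10], dtype=float)
n_a = 6                                   # length of the baseline phase
t = np.arange(len(y))                     # session number
phase = (t >= n_a).astype(float)          # 0 in baseline, 1 in intervention
t_since = np.where(t >= n_a, t - n_a, 0)  # sessions since intervention onset

# Design matrix: intercept, baseline trend, level change, trend change
X = np.column_stack([np.ones_like(y), t, phase, t_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = coef
print(f"baseline level {b0:.2f}, baseline trend {b1:.2f}, "
      f"level change {b2:.2f}, trend change {b3:.2f}")
```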
Onghena and colleagues provide an introduction to, and step-by-step demonstration of, the application of statistical techniques with both the unilevel model (evaluating level, trend, and serial dependency) and their cutting-edge work on multilevel meta-analytical procedures, as well as alternative approaches (e.g., the use of randomisation tests). Serendipitously, the authors use one of the published datasets from the previous paper to illustrate the application of increasingly sophisticated regression-based models. In a fitting conclusion to the first section of this special issue, Onghena et al. make thoughtful suggestions for furthering work in the field of single-case methods in general and single-case data evaluation in particular.
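For readers unfamiliar with randomisation tests, the following sketch shows the basic logic for an A-B design in which the intervention start point was randomly selected from a set of admissible sessions. The data, design and admissible set are invented; this is not taken from Onghena et al.

```python
# Randomisation test sketch for an A-B design with a randomly chosen start
# point: compare the observed between-phase mean difference with the
# differences obtained under every admissible start point.
from statistics import mean

y = [4, 5, 3, 4, 5, 4, 6, 7, 8, 8, 9, 10]  # hypothetical session scores
actual_start = 6                            # intervention began at session 6
admissible = range(4, 9)                    # start points the design allowed

def mean_diff(series, start):
    """Mean of the intervention phase minus mean of the baseline phase."""
    return mean(series[start:]) - mean(series[:start])

observed = mean_diff(y, actual_start)
as_extreme = sum(mean_diff(y, s) >= observed for s in admissible)
p = as_extreme / len(admissible)
print(f"observed difference {observed:.2f}, randomisation p = {p:.2f}")
```

Note that with only five admissible start points the smallest attainable p-value is 0.20, which is why such designs typically build in many admissible randomisation points.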
Researchers are generally more familiar with data analysis for the traditional between-groups design, covered by three articles in section 2 of the special issue. Here we have endeavoured to present papers that provide novel perspectives on familiar themes, which both the newcomer to the field and the seasoned researcher will appreciate. Everyone will want to know about the 50 tips for randomised controlled trials (RCTs) from Harvey, Glinsky and Herbert. The authors provide pragmatic step-by-step guidelines to help researchers avoid the many pitfalls that can befall the design and conduct of clinical trials. Their tips and advice are sage, honed from their extensive experience in conducting clinical trials, and the breadth of coverage is complete, going beyond ‘standard’ methodological and theoretical issues. For example, the item entitled “Try not to ask for too much from participants” cautions the investigator not to make the burden of participation in the trial too onerous and thus risk losing participants, potentially compromising the trial results. Eminently sensible advice, not usually found in textbooks.
The theme of points 41 to 43 from Harvey and colleagues (viz., don’t be misled by p-values, estimate the size of the effect of the intervention, and consider how much uncertainty there is in your estimate of that effect, respectively) is further developed in the article by Perdices. The paper reviews misconceptions regarding null hypothesis significance testing that have been entrenched for many decades in psychological and behavioural research. Null hypothesis testing does not really deliver what many researchers think it does, and p-values do not have the significance generally attributed to them. The American Psychological Association’s recommendations for the use of effect sizes and confidence intervals, made over two decades ago, are still not universally implemented. The paper presents a brief guide to commonly used effect sizes and worked examples of how to calculate them. References to online calculators for both effect sizes and confidence intervals provide added value.
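By way of illustration, here is a minimal sketch of reporting an effect size with a confidence interval rather than a bare p-value, using Cohen's d for two independent groups and its usual large-sample standard error. The data are invented; this is not drawn from Perdices' paper or the calculators it cites.

```python
# Cohen's d with an approximate 95% confidence interval for two independent
# groups. Group scores are hypothetical.
from statistics import mean, variance
from math import sqrt

control = [12, 15, 11, 14, 13, 16, 12, 15]
treated = [16, 18, 15, 19, 17, 20, 16, 18]

n1, n2 = len(control), len(treated)
sp = sqrt(((n1 - 1) * variance(control) + (n2 - 1) * variance(treated))
          / (n1 + n2 - 2))                        # pooled standard deviation
d = (mean(treated) - mean(control)) / sp          # standardised mean difference
se = sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))  # approx. SE of d
print(f"d = {d:.2f}, 95% CI [{d - 1.96*se:.2f}, {d + 1.96*se:.2f}]")
```

The interval conveys what a p-value cannot: a plausible range for the size of the effect, not merely whether it differs from zero.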
Systematic reviews and meta-analyses provide Level 1 evidence and hence are a valued resource in bibliographic databases. Yet, like RCTs and SCEDs, the scientific quality of systematic reviews varies enormously. All of these methodologies have critical appraisal tools that assist the reader to identify sound research with minimal bias and credible results; for example, the PEDro Scale for RCTs (Maher, Sherrington, Herbert, Moseley, & Elkins, 2003), the Risk of Bias in N-of-1 Trials (RoBiNT) Scale for SCEDs (Tate et al., 2013), and A MeaSurement Tool to Assess systematic Reviews (AMSTAR) for systematic reviews (Shea et al., 2017). The most influential repository of systematic reviews in the health field is the Cochrane Database of Systematic Reviews. The article by Gertler and Cameron demonstrates the stages involved in conducting a Cochrane systematic review, focusing on data analysis techniques. If you want to know about assessing heterogeneity, understanding forest plots depicting the results of meta-analyses, funnel plots to detect bias, GRADE analyses to take account of risk of bias, and other tantalising techniques, then this is the paper for you!
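As a taste of what lies behind a forest plot, the sketch below computes an inverse-variance pooled effect, Cochran's Q, and the I-squared heterogeneity statistic. The per-study estimates are fabricated for illustration and are not drawn from any Cochrane review.

```python
# Core meta-analytic quantities behind a forest plot (fixed-effect model).
# Study effects and standard errors below are hypothetical.
effects = [0.40, 0.25, 0.60, 0.10, 0.35]    # per-study effect estimates
ses = [0.15, 0.20, 0.25, 0.18, 0.12]        # their standard errors

weights = [1 / se**2 for se in ses]          # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100            # % variability beyond chance
print(f"pooled effect {pooled:.2f}, Q = {q:.2f}, I-squared = {i2:.0f}%")
```

In a forest plot, each study's effect and confidence interval appear as a row, with the pooled estimate as the diamond at the bottom; I-squared summarises how much the studies disagree beyond what sampling error alone would produce.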
The third section of the special issue contains two papers addressing aspects of instrument development at the psychometric level. Approaches to instrument development and validation in the health field have taken a quantum leap in recent decades, and item response theory (IRT), as a mathematical extension of classical test theory, is increasingly used in instrument development and evaluation. As Kean and colleagues point out in their paper, although the origins of the mathematical processes of IRT can be traced back to the work of Thurstone almost a century ago, its application in the health sciences is more recent. We can expect to see more studies using IRT because of its precision of measurement. The authors’ paper on IRT takes the reader through the why, what, when, and how of IRT and Rasch analysis.
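For orientation, the dichotomous Rasch model at the heart of this approach can be written in a few lines: the probability that a person passes an item depends only on the difference between person ability and item difficulty. The values in the sketch below are illustrative only and are not from Kean et al.'s EAT-10 analysis.

```python
# The dichotomous Rasch model: P(pass) as a logistic function of the
# difference between person ability (theta) and item difficulty (b).
from math import exp

def rasch_p(theta, b):
    """Probability of a correct/passing response under the Rasch model."""
    return exp(theta - b) / (1 + exp(theta - b))

for b in (-1.0, 0.0, 1.0):                  # easy, medium and hard items
    print(f"difficulty {b:+.1f}: P(pass | theta=0) = {rasch_p(0.0, b):.2f}")
```

Because ability and difficulty sit on the same logit scale, the model yields the measurement precision referred to above: a person's ability estimate is informed most by items whose difficulty is close to that ability.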
In the final paper, Rosenkoetter and Tate address evaluation of the scientific quality of psychometric studies. No longer is it sufficient to report high reliability and validity coefficients; rather, the method by which such results are obtained is also of critical importance. They note that “the results of a study are trustworthy if the study design and methodology are sound. If they are not, the trustworthiness of the findings remains unknown”. The authors provide a head-to-head comparison of six instruments specifically developed to critically appraise psychometric studies in the behavioural sciences. The paper concludes with an application of the COSMIN checklist, along with the Terwee-m statistical quality criteria, and a levels-of-evidence synthesis.
We thank the authors who contributed to this special issue of Brain Impairment. Each of the articles has been carefully constructed to fulfil our brief and each also makes a unique, timely and erudite contribution to the field. Consequently, we believe that this volume will be a valuable resource and hold something new for every researcher, clinician and advanced student.
Robyn Tate and Michael Perdices
Gertler, P., & Cameron, I.D. (2018). Making sense of data analytic techniques used in a Cochrane Systematic Review. Brain Impairment, 19(1).
Harvey, L.A., Glinsky, J.V., & Herbert, R.D. (2018). 50 tips for clinical trialists. Brain Impairment, 19(1).
Kean, J., Bisson, E.F., Brodke, D.S., Biber, J., & Gross, P.H. (2018). An introduction to item response theory and Rasch analysis: Application using the Eating Assessment Tool (EAT-10). Brain Impairment, 19(1).
Kratochwill, T.R., Hitchcock, J., Horner, R.H., Levin, J.R., Odom, S.L., Rindskopf, D.M., & Shadish, W.R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34(1), 26–38.
Ledford, J.R., Lane, J.D., & Severini, K.E. (2018). Systematic use of visual analysis for assessing outcomes in single case design studies. Brain Impairment, 19(1).
Maher, C.G., Sherrington, C., Herbert, R.D., Moseley, A.M., & Elkins, M. (2003). Reliability of the PEDro scale for rating quality of RCTs. Physical Therapy, 83, 713–721.
Manolov, R., & Solanas, A. (2018). Analytic options for single-case experimental designs: Review and application to brain impairment. Brain Impairment, 19(1).
Onghena, P., Michiels, B., Jamshidi, L., Moeyaert, M., & Van den Noortgate, W. (2018). One by one: Accumulating evidence by using meta-analytical procedures for single-case experiments. Brain Impairment, 19(1).
Perdices, M. (2018). Null hypothesis significance testing, p-values, effect sizes and confidence intervals. Brain Impairment, 19(1).
Rosenkoetter, U., & Tate, R.L. (2018). Assessing features of psychometric assessment instruments: A comparison of the COSMIN checklist with other critical appraisal tools. Brain Impairment, 19(1).
Shea, B.J., Reeves, B.C., Wells, G., Thuku, M., Hamel, C., Moran, J., … Kristjansson, E. (2017). AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ, 358, j4008.
Tate, R.L., Perdices, M., Rosenkoetter, U., Wakim, D., Godbee, K., Togher, L., & McDonald, S. (2013). Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: The 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychological Rehabilitation, 23(5), 619–638.
Jennifer Fleming and Grahame Simpson