Validity, Reliability and Acceptability of the Team Standardized Assessment of Clinical Encounter Report*

Camilla L. Wong, Mireille Norris, Samir S. Sinha, Maria L. Zorzitto, Sushma Madala, Jemila S. Hamid



The Team Standardized Assessment of a Clinical Encounter Report (StACER) was designed for use in Geriatric Medicine residency programs to evaluate Communicator and Collaborator competencies.


The Team StACER was completed by two geriatricians and interdisciplinary team members based on their observations during a geriatric medicine team meeting. Postgraduate trainees were recruited from July 2010 to November 2013. Inter-rater reliability was determined between the two geriatricians and among all team members. Internal consistency of the items for the Communicator and Collaborator constructs was calculated. Raters completed a survey previously administered to Canadian geriatricians to assess face validity, and trainees completed a survey on the usefulness of the instrument as a feedback tool.


Thirty postgraduate trainees participated. Prevalence-adjusted bias-adjusted kappa values for inter-rater reliability ranged from 0.87 to 1.00 for Communicator items and from 0.86 to 1.00 for Collaborator items. The Cronbach's alpha coefficients for Communicator and Collaborator items were 0.997 (95% CI: 0.993–1.00) and 0.997 (95% CI: 0.997–1.00), respectively. The instrument lacked discriminatory power, as all trainees scored "meets requirements" in the overall assessment. Ninety-three per cent and 86% of trainees found the feedback useful for developing Communicator and Collaborator competencies, respectively.
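For readers unfamiliar with the two statistics reported above, the following is an illustrative sketch (not the authors' analysis code) of how prevalence-adjusted bias-adjusted kappa (PABAK) for two raters on binary items and Cronbach's alpha for a set of item scores can be computed; variable names and the toy data are hypothetical.

```python
def pabak(rater_a, rater_b):
    """Prevalence-adjusted bias-adjusted kappa for two raters scoring
    the same binary items: PABAK = 2 * (observed agreement) - 1."""
    agree = sum(a == b for a, b in zip(rater_a, rater_b))
    return 2 * agree / len(rater_a) - 1

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of item-score lists, one list per item,
    each scored over the same set of trainees."""
    k = len(items)          # number of items
    n = len(items[0])       # number of trainees

    def var(xs):            # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))

# Hypothetical toy data: two raters agree on 3 of 4 binary items.
print(pabak([1, 1, 0, 0], [1, 1, 0, 1]))            # 0.5
# Two perfectly correlated items yield alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

Note that PABAK corrects observed agreement only for chance under fixed 50/50 marginals, which is why it can exceed Cohen's kappa when one response category dominates, as it would when nearly all trainees "meet requirements."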


The Team StACER has adequate inter-rater reliability and internal consistency, but its poor discriminatory power and limited face validity challenge the merit of using this evaluation tool. Trainees nonetheless felt the tool provided useful feedback on their Collaborator and Communicator competencies.


Keywords: communication; collaboration; postgraduate; assessment; feedback


ISSN: 1925-8348 (Online)