Background: The Moorthy checklist (MC) and the laparoscopic skill competency assessment tool (LS-CAT) are commonly used to evaluate the quality of laparoscopic suturing. The current assessment model is a single measurement by multiple raters. Our aim was to examine the reliability of this assessment model and of both tools.

Methods: With IRB approval, participants from three different backgrounds (medical students, trainees, and surgeons) were enrolled. Each participant performed a standardized laparoscopic suturing task. The performances were video-recorded and reviewed independently by three blinded raters using the LS-CAT and the MC. Intraclass correlation coefficients (ICCs) were calculated for inter-rater and intra-rater reliability.

Results: 26 participants were enrolled, comprising 10 students, 10 trainees, and 6 surgeons. With regard to inter-rater reliability, overall ICC values (95% CI) were 0.909 (0.768–0.961) and 0.868 (0.608–0.948) for the LS-CAT and the MC, respectively. For students, ICC values were 0.908 (0.682–0.976) and 0.815 (0.408–0.951) for the LS-CAT and the MC, respectively. For trainees, ICC values were 0.812 (0.426–0.947) and 0.717 (0.102–0.925), respectively. For surgeons, ICC values were 0.720 (0.064–0.955) and 0.868 (0.608–0.948), respectively. With regard to intra-rater reliability, ICC values of the mean scores from the three raters were 0.956 (0.905–0.980) and 0.925 (0.842–0.966) for the LS-CAT and the MC, respectively.

Conclusion: Both the LS-CAT and the MC are qualified assessment tools for laparoscopic suturing. The LS-CAT is more reliable, particularly for medical students and trainees. The current assessment model of a single measurement by multiple raters provides excellent reliability.
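The abstract does not state which ICC form was computed. As an illustration only, assuming a two-way random-effects, absolute-agreement, single-rater model (commonly denoted ICC(2,1)), a minimal sketch of the inter-rater calculation on an n-subjects-by-k-raters score matrix might look like this; the function name and the toy data are hypothetical, not from the study:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: n_subjects x k_raters array of scores (hypothetical example data,
    not the study's measurements).
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # Two-way ANOVA sums of squares: subjects (rows), raters (columns), residual
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Three raters scoring five subjects (toy numbers): perfect agreement gives 1.0
perfect = [[3, 3, 3], [5, 5, 5], [2, 2, 2], [4, 4, 4], [1, 1, 1]]
print(icc2_1(perfect))  # → 1.0
```

Established packages such as pingouin (`pingouin.intraclass_corr`) report all six Shrout-Fleiss ICC forms along with 95% confidence intervals, which is closer to what the study would have used in practice.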