Study Objectives: Polysomnography is the gold standard for identifying sleep stages; however, technicians apply the scoring standards inconsistently. Because organizing meetings to evaluate these discrepancies and/or reach a consensus among multiple sleep centers is time-consuming, we developed an artificial intelligence system to efficiently evaluate the reliability and consistency of sleep scoring, and hence sleep center quality.

Methods: An interpretable machine learning algorithm was used to evaluate the interrater reliability (IRR) of sleep stage annotation among sleep centers. The artificial intelligence system was trained to learn the scoring patterns of raters from one hospital and was then applied to patients from the same or other hospitals. Its output was compared with the experts' annotations to estimate IRR. Intracenter and intercenter assessments were conducted on 679 patients without sleep apnea from 6 sleep centers in Taiwan. Centers with potential quality issues were identified from the estimated IRR.

Results: In the intracenter assessment, the median accuracy ranged from 80.3% to 83.3%, with the exception of one hospital, whose accuracy was 72.3%. In the intercenter assessment, the median accuracy ranged from 75.7% to 83.3% when that hospital was excluded from testing and training. The performance of the proposed method was higher for the N2, wake, and REM sleep stages than for the N1 and N3 stages. The marked IRR discrepancy at the one outlying hospital suggested a quality issue, which was confirmed by the physicians in charge of that hospital.

Conclusions: The proposed artificial intelligence system proved effective in assessing IRR and hence sleep center quality.
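The IRR estimate described above rests on epoch-level agreement between the system's predicted sleep stages and the experts' annotations. A minimal sketch of such agreement metrics (per-epoch accuracy and Cohen's kappa over the five AASM stages) is shown below; this is an illustrative reconstruction, not the authors' code, and the stage labels and function names are assumptions.

```python
import numpy as np

# Assumed AASM stage labels: wake, N1, N2, N3, REM
STAGES = ["W", "N1", "N2", "N3", "REM"]

def accuracy(a, b):
    """Fraction of epochs on which two scorers assign the same stage."""
    return float(np.mean(np.asarray(a) == np.asarray(b)))

def cohen_kappa(a, b, labels=STAGES):
    """Cohen's kappa: agreement corrected for chance, from a confusion matrix."""
    idx = {s: i for i, s in enumerate(labels)}
    n = len(a)
    cm = np.zeros((len(labels), len(labels)))
    for x, y in zip(a, b):
        cm[idx[x], idx[y]] += 1
    po = np.trace(cm) / n                 # observed agreement (= accuracy)
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2  # chance agreement from marginals
    return float((po - pe) / (1 - pe))

# Toy usage: one scorer's epochs vs. a second scorer's epochs
expert = ["W", "N1", "N2", "N2", "N3", "REM"]
model  = ["W", "N2", "N2", "N2", "N3", "REM"]
print(accuracy(expert, model))           # 5 of 6 epochs agree
print(cohen_kappa(expert, model))
```

In practice such metrics would be computed per 30-second epoch over a whole night's recording, and summarized per center (e.g., as the median accuracy reported in the Results).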