Label-Efficient Sleep Staging Using Transformers Pre-trained with Position Prediction
Authors: Sayeri Lala, Hanlin Goh, Christopher Sandino
Summary: Sleep staging is a clinically important task for diagnosing various sleep disorders, but it remains challenging to deploy at scale because it is both labor-intensive and time-consuming. Supervised deep learning-based approaches can automate sleep staging, but at the expense of large labeled datasets, which can be infeasible to procure for various settings, e.g., uncommon sleep disorders. While self-supervised learning (SSL) can mitigate this need, recent studies on SSL for sleep staging have shown that performance gains saturate after training with labeled data from only tens of subjects, and hence cannot match the peak performance attained with larger datasets. We hypothesize that this rapid saturation stems from applying a sub-optimal pretraining scheme that pretrains only a portion of the architecture, i.e., the feature encoder but not the temporal encoder; we therefore propose adopting an architecture that seamlessly couples the feature and temporal encoding, together with a suitable pretraining scheme that pretrains the entire model. On a sample sleep staging dataset, we find that the proposed scheme offers performance gains that do not saturate with the amount of labeled training data (e.g., a 3–5% improvement in balanced sleep staging accuracy across low- to high-labeled data settings), reducing the amount of labeled training data needed for high performance (e.g., by 800 subjects). Based on our findings, we recommend adopting this SSL paradigm for subsequent work on SSL for sleep staging.
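To make the position-prediction pretext task concrete, below is a minimal PyTorch sketch of one plausible instantiation, not the paper's actual code: all module names, dimensions, and hyperparameters are illustrative assumptions. Epochs within each sequence are shuffled, and the full model (feature encoder plus transformer temporal encoder) is trained end-to-end to predict each epoch's original position, so the temporal encoder is pretrained along with the feature encoder.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PositionPredictionPretrainer(nn.Module):
        """Sketch: jointly pretrain feature and temporal encoders by
        predicting the original position of each shuffled sleep epoch."""

        def __init__(self, epoch_dim=128, d_model=128, seq_len=20,
                     n_heads=4, n_layers=4):
            super().__init__()
            self.seq_len = seq_len
            # Feature encoder: per-epoch embedding (a stand-in for a
            # CNN over raw EEG epochs).
            self.feature_encoder = nn.Linear(epoch_dim, d_model)
            # Temporal encoder: transformer over the epoch sequence.
            layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.temporal_encoder = nn.TransformerEncoder(
                layer, num_layers=n_layers)
            # Head classifies each epoch's original position
            # (seq_len classes).
            self.position_head = nn.Linear(d_model, seq_len)

        def forward(self, epochs):
            # epochs: (batch, seq_len, epoch_dim) epoch embeddings.
            batch = epochs.size(0)
            # Shuffle epochs within each sequence; the permutation
            # gives the original position of each shuffled epoch,
            # which serves as the prediction target.
            perm = torch.stack(
                [torch.randperm(self.seq_len, device=epochs.device)
                 for _ in range(batch)])
            shuffled = torch.gather(
                epochs, 1,
                perm.unsqueeze(-1).expand(-1, -1, epochs.size(-1)))
            # Both encoders receive gradients, so the entire model is
            # pretrained, not just the feature encoder.
            feats = self.temporal_encoder(self.feature_encoder(shuffled))
            logits = self.position_head(feats)  # (batch, seq_len, seq_len)
            return F.cross_entropy(
                logits.reshape(-1, self.seq_len), perm.reshape(-1))

    # Toy usage: pretrain on a batch of 8 sequences of 20 epochs.
    model = PositionPredictionPretrainer()
    loss = model(torch.randn(8, 20, 128))
    loss.backward()

After pretraining under this scheme, the position head would be swapped for a sleep-stage classification head and the whole model fine-tuned on the available labeled data.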