This paper studies the use of eye movement data as a criterion for judging whether pilots perceive emergency information such as cockpit warnings. In the experiment, 12 subjects encountered randomly presented warning information while flying a simulated helicopter, and their eye movement data were recorded synchronously. First, the importance of each eye movement feature was calculated by ANOVA (analysis of variance). Based on the importance ranking and the Euclidean distance of each eye movement feature, warning information samples with different eye movement features were obtained. Second, residual shrinkage modules were added to a CNN (convolutional neural network) to construct a DRSN (deep residual shrinkage network) model. Finally, the processed warning information samples were used to train and test the DRSN model. To verify the superiority of this method, the DRSN model was compared with three machine learning models: SVM (support vector machine), RF (random forest), and BPNN (backpropagation neural network). Among the four models, the DRSN performed best. When all eye movement features were selected, the model detected pilot perception of warning information with an average accuracy of 90.4%, and its highest detection accuracy reached 96.4%. The experiments show that the DRSN model has clear advantages in detecting pilot perception of warning information.
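The distinctive element of a DRSN is the soft-thresholding (shrinkage) operation inserted into each residual block, which suppresses small, noise-like activations while preserving larger ones. The sketch below is a minimal NumPy illustration of that idea, not the paper's implementation: the function names and the per-channel threshold rule (mean absolute activation, standing in for the attention subnetwork a real DRSN learns) are assumptions for demonstration only.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: shrink values toward zero by tau, zeroing small ones.
    This is the shrinkage operation at the heart of a residual shrinkage block."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def residual_shrinkage_block(x, weight):
    """Toy residual shrinkage block (hypothetical minimal form).
    x: (channels, length) feature map; weight: (channels, channels) mixing matrix
    standing in for the block's conv/BN/ReLU stack. The threshold tau is derived
    per channel from the activations themselves (a learned attention subnetwork
    in the real model), and the shrunken features are added to the shortcut."""
    h = weight @ x                                 # linear transform of the feature map
    tau = np.abs(h).mean(axis=1, keepdims=True)    # per-channel threshold estimate
    return x + soft_threshold(h, tau)              # identity shortcut + shrunken residual
```

For example, `soft_threshold(np.array([-2.0, 0.5, 3.0]), 1.0)` yields `[-1.0, 0.0, 2.0]`: the small activation is zeroed and the larger ones are shrunk by the threshold, which is what makes the block robust to noisy eye movement signals.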