EEG-Based Eye Movement Tech Detects Low-Quality Video

Beijing Institute of Technology Press Co., Ltd

In a research paper, scientists from the Beijing Institute of Technology proposed an event-related potential (ERP) extraction method to solve the asynchrony problem in low-quality video target detection, designed time-frequency features based on the continuous wavelet transform, and established an EEG decoding model based on neural characterization. An average decoding accuracy of 84.56% was achieved in a pseudo-online test.
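The asynchrony problem arises because, in continuous video viewing, there is no fixed stimulus onset to which ERP segments can be time-locked; the paper uses eye movement signals to estimate when the observer actually recognized the target. The press release gives no implementation details, so the following is only a minimal sketch of what such eye-movement-aligned ERP extraction could look like, assuming hypothetical inputs (a continuous EEG array and recognition times derived from eye movement data) and plain NumPy slicing.

```python
import numpy as np

def extract_erp_epochs(eeg, fs, recognition_times, tmin=-0.2, tmax=0.8):
    """Slice ERP epochs from continuous EEG, time-locked to recognition
    instants estimated from eye movement signals (hypothetical inputs).

    eeg               : (n_channels, n_samples) continuous EEG
    fs                : sampling rate in Hz
    recognition_times : recognition instants in seconds
    tmin, tmax        : epoch window relative to each instant (tmin < 0)
    """
    pre = int(round(tmin * fs))   # negative sample offset before recognition
    post = int(round(tmax * fs))  # positive sample offset after recognition
    epochs = []
    for t in recognition_times:
        onset = int(round(t * fs))
        start, stop = onset + pre, onset + post
        if start < 0 or stop > eeg.shape[1]:
            continue  # skip epochs that fall outside the recording
        epoch = eeg[:, start:stop]
        # baseline-correct each channel with the pre-recognition interval
        baseline = epoch[:, :-pre].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)
    return np.stack(epochs)  # (n_epochs, n_channels, n_times)
```

In practice the recognition times would come from classifying eye movement types (e.g. fixations versus saccades) as the paper describes, not from a fixed trigger channel.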

The new research paper, published July 4 in the journal Cyborg and Bionic Systems, introduces a low-quality video object detection technique based on EEG signals and an ERP alignment method based on eye movement signals, and demonstrates their effectiveness and feasibility. The technology is expected to find wide use in military, civil, and medical fields.

According to Fei, "Machine vision technology has developed rapidly in recent years, and image processing and recognition are very efficient. However, identifying low-quality targets remains a challenge for machine vision." To address these problems, Fei, the author of this study, proposed a solution: (a) a new experimental paradigm for low-quality video target detection was designed to simulate UAV reconnaissance video in complex environments; (b) a synchronization method based on eye movement signals was designed to determine the target recognition time by analyzing different eye movement types, so that ERP segments can be extracted accurately; (c) the neural representations involved in target recognition were analyzed in the time domain, frequency domain, and source space; and (d) time-frequency features based on the continuous wavelet transform were designed and used to build an EEG decoding model for low-quality video targets.
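Step (d) pairs wavelet-based time-frequency features with a decoding model. The exact feature layout and classifier are not specified in this release, so the sketch below is only an illustration under assumed choices: a standard Python stack (NumPy, PyWavelets, scikit-learn), a complex Morlet wavelet, mean wavelet power per channel and frequency as features, and a logistic-regression decoder standing in for whatever model the authors actually used.

```python
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cwt_features(epochs, fs, freqs=np.arange(2, 31, 2), wavelet="cmor1.5-1.0"):
    """Time-frequency features from ERP epochs via the continuous wavelet transform.

    epochs : (n_epochs, n_channels, n_times) array
    fs     : sampling rate in Hz
    freqs  : frequencies of interest in Hz
    Returns a (n_epochs, n_features) matrix of mean wavelet power per
    channel and frequency.
    """
    # convert target frequencies to wavelet scales: scale = f_c * fs / f
    scales = pywt.central_frequency(wavelet) * fs / freqs
    feats = []
    for ep in epochs:
        powers = []
        for ch in ep:
            coef, _ = pywt.cwt(ch, scales, wavelet, sampling_period=1.0 / fs)
            powers.append(np.abs(coef).mean(axis=1))  # mean power over time
        feats.append(np.concatenate(powers))
    return np.asarray(feats)

# Hypothetical usage: 'epochs' and 'labels' would come from the ERP
# extraction step and the experimental paradigm, respectively.
# X = cwt_features(epochs, fs=250)
# clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# clf.fit(X, labels)  # labels: 1 = target present, 0 = background
```

Averaging wavelet power over the whole epoch is the simplest possible reduction; a closer reproduction of the method would keep the full time-frequency maps or window them around the ERP components of interest.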
