This paper aims to enhance the quality of images generated by electroencephalography (EEG) encoding and decoding models by improving the neurological plausibility of the generated images through the emulation of key mechanisms of human visual processing.
Motivated by the center-periphery organization of visual perception, an EEG segmentation module and a feature fusion module are introduced into a diffusion-based EEG-to-image generation framework. The EEG segmentation module decomposes the input signal into a segment with high fluctuation amplitude and the remaining signal, which are separately encoded and subsequently fused to enable differentiated visual modeling. In addition, a neuro-inspired application framework is proposed to extend the EEG-to-image generation approach to Web search scenarios, where EEG-generated images serve as implicit visual representations of user intent.
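To make the segmentation-and-fusion idea concrete, the following minimal Python sketch splits a multi-channel EEG window into the sub-window with the highest peak-to-peak fluctuation and the remaining samples, encodes the two parts with separate encoders, and fuses the embeddings into a single conditioning vector. The peak-to-peak detection criterion, the GRU encoders, and the layer sizes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def split_by_fluctuation(eeg, win=32):
    """Split an EEG window (channels x time) into the sliding sub-window with the
    highest peak-to-peak fluctuation and the remaining samples.
    The peak-to-peak criterion is an assumption; the paper does not specify
    how the high-fluctuation segment is detected."""
    scores = [
        (eeg[:, s:s + win].max(axis=1) - eeg[:, s:s + win].min(axis=1)).mean()
        for s in range(0, eeg.shape[1] - win + 1)
    ]
    s = int(np.argmax(scores))
    high = eeg[:, s:s + win]
    rest = np.concatenate([eeg[:, :s], eeg[:, s + win:]], axis=1)
    return high, rest

class SegmentFusionEncoder(nn.Module):
    """Encode the high-fluctuation segment and the remaining signal separately,
    then fuse the two embeddings into one conditioning vector
    (hypothetical encoder choice and dimensions)."""
    def __init__(self, n_ch=64, d=256):
        super().__init__()
        self.enc_high = nn.GRU(n_ch, d, batch_first=True)
        self.enc_rest = nn.GRU(n_ch, d, batch_first=True)
        self.fuse = nn.Linear(2 * d, d)

    def forward(self, high, rest):
        # inputs: (batch, time, channels)
        _, h_high = self.enc_high(high)
        _, h_rest = self.enc_rest(rest)
        z = torch.cat([h_high[-1], h_rest[-1]], dim=-1)
        return self.fuse(z)  # conditioning vector for the diffusion decoder

# toy usage on a single 64-channel, 440-sample EEG window
eeg = np.random.randn(64, 440).astype(np.float32)
high, rest = split_by_fluctuation(eeg)
model = SegmentFusionEncoder()
cond = model(
    torch.from_numpy(high.T).unsqueeze(0),
    torch.from_numpy(rest.T).unsqueeze(0),
)
print(cond.shape)  # torch.Size([1, 256])
```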
Experimental results demonstrate that integrating EEG segmentation and feature fusion leads to measurable improvements in the perceptual quality and structural coherence of EEG-generated images.
