fNIRS-Vise: Decoding Visual Experiences and More from fNIRS Brain Signals

fNIRS-Vise is a research project that explores the potential of functional near-infrared spectroscopy (fNIRS) for brain-computer interfaces (BCIs) by decoding a range of cognitive states directly from fNIRS brain signals and reconstructing the visual experiences behind them. The project combines deep learning techniques, including transformers and transfer learning, to push the boundaries of what fNIRS can achieve.

🎯 Key Objectives

Decode Cognitive States: Develop and refine deep learning models that interpret fNIRS data and extract meaningful representations of visual stimuli, emotional states, mental workload, and speech, and that support disease detection (see the decoding sketch after this list).

Reconstruct Visual Experiences: Use generative models such as Stable Diffusion to transform decoded neural patterns into visual imagery, effectively "seeing" through the mind's eye (see the reconstruction sketch after this list).

Advance fNIRS Applications: Demonstrate the feasibility of fNIRS-based brain decoding, paving the way for innovative BCI applications, neurofeedback systems, and clinical diagnostics.
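
As a concrete starting point for the decoding objective, the sketch below shows a minimal transformer decoder for fNIRS trials in PyTorch. All names, shapes, and hyperparameters (channel count, model width, number of classes) are illustrative assumptions rather than the project's finalized architecture; positional encodings and signal preprocessing are omitted for brevity.

```python
import torch
import torch.nn as nn

class FNIRSTransformerDecoder(nn.Module):
    """Classify cognitive states from fNIRS trials of shape (batch, channels, time)."""

    def __init__(self, n_channels=24, d_model=64, n_heads=4, n_layers=4, n_classes=5):
        super().__init__()
        # Project each timestep's channel vector into the model dimension.
        self.input_proj = nn.Linear(n_channels, d_model)
        # Learned classification token, pooled for the final prediction.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        x = self.input_proj(x.transpose(1, 2))  # -> (batch, time, d_model)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = self.encoder(torch.cat([cls, x], dim=1))
        return self.head(x[:, 0])               # logits read from the class token

# Example: a batch of 8 trials, 24 optode channels, 200 time samples.
logits = FNIRSTransformerDecoder()(torch.randn(8, 24, 200))
```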
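For the reconstruction objective, one plausible wiring (assuming the Hugging Face diffusers library and Stable Diffusion v1.5) is to map a decoded neural latent into the CLIP text-embedding space and pass it through the pipeline's prompt_embeds argument. The to_clip projection head here is hypothetical and would need to be trained, e.g. against CLIP embeddings of the stimulus images; this is a sketch of the idea, not the project's implementation.

```python
import torch
import torch.nn as nn
from diffusers import StableDiffusionPipeline

latent_dim, seq_len, clip_dim = 64, 77, 768    # SD v1.5 expects (77, 768) text embeddings

# Hypothetical trained mapper from an fNIRS latent to pseudo text embeddings.
to_clip = nn.Linear(latent_dim, seq_len * clip_dim)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

fnirs_latent = torch.randn(1, latent_dim)      # e.g. pooled output of the decoder above
prompt_embeds = to_clip(fnirs_latent).view(1, seq_len, clip_dim)
image = pipe(prompt_embeds=prompt_embeds.half().to("cuda")).images[0]
image.save("reconstruction.png")
```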

🧠 Data & Methodology

Proprietary fNIRS Data: High-quality fNIRS recordings collected during various cognitive tasks, providing a unique resource for model training and validation.

Open-Source fNIRS Datasets: Leveraging publicly available datasets (e.g., fNIRS2MW) to enhance model generalizability and robustness.

Hybrid Deep Learning Architecture: Integrating the strengths of models like fNIRS-T, fNIRSNet, and MinD-Vis into a novel architecture optimized for decoding diverse cognitive states from fNIRS signals.

Transfer Learning: Exploring transfer learning between fMRI and fNIRS, as well as across different EEG and fNIRS datasets, to improve model performance and reduce data requirements (a minimal sketch follows this list).
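
A minimal sketch of the transfer-learning step, reusing the FNIRSTransformerDecoder class from the decoding sketch above. It assumes a checkpoint pretrained on another modality whose encoder weights are shape-compatible; the checkpoint filename is hypothetical.

```python
import torch

# Reuses the FNIRSTransformerDecoder class from the decoding sketch above.
model = FNIRSTransformerDecoder(n_channels=24, n_classes=5)
state = torch.load("pretrained_fmri_encoder.pt")   # hypothetical checkpoint file

# Keep only pretrained weights whose names and shapes match this model.
own = model.state_dict()
compatible = {k: v for k, v in state.items() if k in own and v.shape == own[k].shape}
model.load_state_dict(compatible, strict=False)

# Freeze the transferred encoder; fine-tune only modality-specific layers.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```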

💡 Inspiration & Background

MinD-Vis: This fMRI-based visual reconstruction framework serves as a key inspiration, particularly for applying transfer learning to fNIRS-based decoding.

Theoretical and Practical Insights: Building on the theoretical framework and challenges identified in the accompanying Master's thesis, this project critically evaluates current approaches and identifies promising directions for future work.

🌐 Broader Impact

fNIRS-Vise has the potential to revolutionize our understanding of the human brain and unlock new possibilities in:

Brain-Computer Interfaces: Enabling more intuitive and immersive communication and control systems.

Neurological Research: Providing insights into the neural mechanisms underlying visual perception, emotion recognition, and cognitive workload.

Clinical Applications: Developing diagnostic and therapeutic tools for neurological and psychological conditions.

🤝 Get Involved

We welcome collaborations and contributions from researchers, developers, and enthusiasts interested in brain decoding, BCIs, and fNIRS. Contact me ([email protected]) to explore potential partnerships and contribute to this cutting-edge research.

📜 License

This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.
