Data processing pipeline for iEEG (Neuralynx and Blackrock) recordings.
- **Set montage**: Set the montage information, which maps device channels to brain regions.
- **Unpack data**: Read binary data and save CSC (Continuously Sampled Channel) signals and timestamps to .mat files.
- **Automatic spike sort**: Detect spikes and cluster them into units.
- **Extract LFP**: Remove spikes from the raw CSC signals and downsample to 2 kHz.
- **Manual spike sort**: Select spike clusters by visual inspection.
- **Export to NWB**: Export data to NWB (Neurodata Without Borders) format for data sharing.
- **Read NWB with Python**
- **Tools**: Scripts for data hotfixes.
It is recommended to download the latest release, which is smaller (it does not contain the test data), rather than the main branch.
- `scripts`: Pipelines to run on the local machine.
- `batch`: Pipelines to run on SGE.
Run `MontageConfigUI.m` to open the UI and set the montage:
- Select `Custom` to input the channel label if it is not in the popup menu.
- To skip a channel, set `Micros` to 0.
- Select channels with checkboxes, move them up/down, and remove or add new channels below.
- Use `shift` to select/unselect multiple channels.
- For macro channels, use `shift` to select multiple cells in the table and delete their contents with `backspace`/`delete`. The empty ports will be filled automatically with the following rules (see the sketch below):
  - If `Port Start` is empty, it will be set to the `Port End` of the row above + 1.
  - If `Port End` is empty:
    - It will be set to the `Port Start` of the row below - 1, if that is not empty.
    - Otherwise, it will be set to the `Port Start` of the current row.

If there are no skipped ports, you only need to set `Port Start`; `Port End` will be filled automatically. When both `Port Start` and `Port End` are empty at the end of the table, they will be filled assuming each channel takes only one port.
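The fill rules can be summarized in code. The following is an illustrative sketch only, not the actual UI code; the `portStart`/`portEnd` vectors and the use of `NaN` for empty cells are assumptions made for illustration:

```matlab
% Illustrative auto-fill of empty ports; NaN marks an empty table cell.
portStart = [1; NaN; NaN];
portEnd   = [8; 16; NaN];
for i = 1:numel(portStart)
    if isnan(portStart(i)) && i > 1
        portStart(i) = portEnd(i - 1) + 1;        % Port End of row above + 1
    end
    if isnan(portEnd(i))
        if i < numel(portStart) && ~isnan(portStart(i + 1))
            portEnd(i) = portStart(i + 1) - 1;    % Port Start of row below - 1
        else
            portEnd(i) = portStart(i);            % Port Start of current row
        end
    end
end
% Result: portStart = [1; 9; 17], portEnd = [8; 16; 17].
```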
After setting the montage, click `Confirm` to save the configuration file (used to set up the Neuralynx device) and a JSON file, which stores the information entered in the UI and can be loaded back later.
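The saved JSON file can also be loaded programmatically, e.g. to check a configuration outside the UI. A minimal sketch; the fields of the resulting struct depend on what the UI saves:

```matlab
% Read a saved montage configuration back into Matlab.
configFile = 'montageConfig/montage_Patient-1702_exp-46_2024-06-10_16-52-31.json';
config = jsondecode(fileread(configFile));  % decode the JSON into a struct
disp(config)
```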
Run in Matlab:

```
scripts/run_unpackNeuralynx
```

You can either define the I/O paths in the script or use the UI to select the file path by removing the path definition from the script.
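For example, the path definitions to keep or remove look roughly like this (a hypothetical sketch; the actual variable names are the ones used in `scripts/run_unpackNeuralynx.m`):

```matlab
% Hypothetical I/O path definitions; delete these lines to pick the
% paths through the UI instead.
inputPath  = '/path/to/raw/neuralynx/recording';
outputPath = '/path/to/unpacked/output';
```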
If you want to rename the channels, set the montage config file (created by `MontageConfigUI.m`) in the script:

```matlab
montageConfigFile = '/Users/XinNiuAdmin/Documents/MATLAB/nwbPipeline/montageConfig/montage_Patient-1702_exp-46_2024-06-10_16-52-31.json';
```

Otherwise, set it to empty:

```matlab
montageConfigFile = [];
```
Define the experiment IDs and file path in `scripts/run_spikeSorting.m` to run spike sorting:

```matlab
expIds = (4:7);
filePath = '/Users/XinNiuAdmin/HoffmanMount/data/PIPELINE_vc/ANALYSIS/MovieParadigm/570_MovieParadigm';
```

and run in Matlab:

```
scripts/run_spikeSorting
```

This will run spike detection using the minimal threshold across all selected experiments and combine the spikes into a single .mat file for each channel.
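The idea of the shared threshold can be sketched as follows. This is illustrative only, not the pipeline's actual code: `cscByExp` is a hypothetical cell array of CSC signals, and the noise estimate shown is the one commonly used by wave_clus-style detectors:

```matlab
% Estimate a detection threshold per experiment, then take the minimum
% across experiments so spikes are detected consistently in all of them.
cscByExp = {randn(1, 1e4), 1.2 * randn(1, 1e4)};            % hypothetical signals
thresholds = cellfun(@(x) 5 * median(abs(x)) / 0.6745, cscByExp);
detectionThreshold = min(thresholds);
```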
Or define `expIds` and `job_name` in `batch/runbatch_spikeSorting.m` and run on SGE (hoffman2):

```bash
qsub batch/runbatch_spikeSorting.sh
```
Similar to spike sorting, define `expIds` and `filePath` in `scripts/run_extractLFP.m` and run in Matlab:

```
scripts/run_extractLFP
```

Or define `expIds` and `job_name` in `batch/runbatch_extractLFP.m` and run on SGE:

```bash
qsub batch/runbatch_extractLFP.sh
```
To do manual spike sorting, run `wave_clus` in the Matlab command window, or open `wave_clus.m` and press the Run button. Press `Load Data` and select the `*_spike.mat` file created by automatic spike sorting.

You need to run all three steps of automatic spike sorting before the manual spike sort.
To export data to a .nwb file, you need to add matnwb to the Matlab path first. This script will export the LFP for all micro and macro channels, the spike times, and the mean spike waveform for each unit to a .nwb file:

```
scripts/run_exportToNwb.m
```
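Internally the export follows the usual matnwb pattern. A minimal sketch, not the pipeline's actual code, with made-up session metadata:

```matlab
% Create an NWB file object and write it to disk (requires matnwb on
% the Matlab path). The pipeline script additionally fills in the
% electrodes, LFP series, and units table.
nwb = NwbFile( ...
    'session_description', 'iEEG movie paradigm session', ...  % made-up metadata
    'identifier', 'Patient-570_exp-4', ...                     % made-up ID
    'session_start_time', datetime(2024, 6, 10, 16, 52, 31));
nwbExport(nwb, 'Patient-570_exp-4.nwb');
```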
NWB export has a test module for developers:

```
test/test_exportToNwb.m
```

Tutorials for matnwb:

- https://neurodatawithoutborders.github.io/matnwb/tutorials/html/intro.html
- https://neurodatawithoutborders.github.io/matnwb/tutorials/html/ecephys.html
- https://github.com/NeurodataWithoutBorders/matnwb/blob/master/tutorials/convertTrials.m
- https://github.com/rutishauserlab/recogmem-release-NWB/blob/master/RutishauserLabtoNWB/events/newolddelay/matlab/export/NWBexport_demo.m
It is recommended to create a virtual environment for this project. Then install the Python dependencies:

```bash
pip install -r requirements.txt
```

Start Jupyter Notebook in a terminal:

```bash
jupyter-notebook
```

Open `notebooks/demo_readNwb.ipynb` for a demo of reading data from an .nwb file.
General tools to support the analysis pipeline, including file organization, data manipulation, and configurations.
This script contains the global parameters for the pipeline.
Functions to align TTLs from the recording device and the experiment PC.
Includes name patterns for micro and macro files, files that are ignored when unpacking, etc.
Functions to write data to NWB files.
This is the code to read raw Neuralynx files: https://www.urut.ch/new/serendipity/index.php?/pages/nlxtomatlab.html
Note: For Apple Silicon users, run the Intel version of MATLAB using Rosetta.
Classes and functions for the Neuralynx IO interface (WIP).
Functions to unpack Blackrock data.
UI tools to configure montage settings for Neuralynx and Blackrock (WIP).
The code for automatic and manual spike sorting, modified from PDM (by Emily) and Multi-Exp Analysis (by Chris Dao).
UI for manual spike sorting, adapted from wave_clus.
Functions for screening analysis and raster plots.
Scripts to unpack raw data, detect spikes, cluster spikes, and export data to NWB format.
Scripts to run the pipeline on Hoffman (SGE).
Jupyter notebooks to read NWB data in Python.
Scripts to hotfix data, e.g. renaming files, migrating variables across files, editing variables, and checking for corrupted .mat files.
Errors in the montage configuration file result in incorrect channel names (mostly for macro and misc channels).

- `check_macro_channels.py`: Lists macro channels according to the montage and reads channel names and IDs from Neuralynx files.
- `fix_montage_error.py`: Renames Neuralynx files that do not match the montage.
This folder contains example data and test modules for developers to debug the code.
Functions to qsub jobs to SGE (not used so far).