MEG data analysis

Maxfilter

Using MaxFilter to clean and correct MEG data

Running MaxFilter is compulsory if your data has been recorded with IAS (Internal Active Shielding) on, and is strongly recommended for data where IAS is off.
The MaxFilter manual is available among the Elekta manuals in the MEG lab and the 2nd floor analysis room, as well as on the server in PDF format at /neuro/manuals/.

Note: It is important to carefully inspect the raw data before and after MaxFiltering, regardless of the exact MaxFilter procedure. It is often helpful to compare the data to your notes about the measurement session.

Versions

MaxFilter version 2.2 is the officially accepted and clinically validated version. The newer MaxFilter 3.0 has many advantages and new features, but is still in the development stage. The main differences for the user are improved bad-channel identification and SSS origin fitting in 3.0. However, similar functionality can be obtained with MF 2.2 by using the utility programs xscan (bad-channel detection) and fit_sphere_to_isotrak (origin fitting) and passing their results on the MF 2.2 command line. In addition, MF 3.0 has some improved noise rejection capabilities based on channel cross-validation, which can help in particular in increasing SNR at higher frequencies (gamma band).

The MaxFilter 2.2 GUI can be opened with the command maxfilter_gui, whereas the MF 2.2 terminal command is maxfilter-2.2. MF 3.0 can currently be started only from the command line, with maxfilter-3.0. Command-line options can be inspected with the switch -help. It is a good idea to always specify the intended software version.

Options

Due to the large number of different options, the optimal MaxFilter processing for each data set or project should be detailed by the researchers. For this, understanding of the principles is necessary. Please study the manual and original papers.

The recommended MaxFilter workflow includes the temporal extension, tSSS, applied with the option -st. If there are no temporal projections to be rejected, tSSS reduces to basic SSS, although running it is considerably slower than basic SSS. You can give the inspected window length in seconds after -st. It is good to know that, as a side effect, tSSS high-pass filters the data; the cutoff depends on the selected window length, roughly its inverse (e.g., a 10 s buffer attenuates content below about 0.1 Hz).

If there is a need to compare channel-level data between sessions or subjects, the option -trans <filename> must be used to virtually co-align the head positions between the recordings. The distance between the two head positions (reported by MaxFilter) must not exceed 25 mm, and even then the raw and average signals require inspection. Problems in the head position transform are most often visible in the vertex channels, and they typically arise when correcting an originally low head position. A good way to minimize transformation distances is to use the middle head position (among sessions or subjects) as the transform target.

If continuous HPI (cHPI) has been on during the recording to monitor the subject’s head movement, you need to apply movement correction as well. Use intermittent movement correction; the required command-line option is -movecomp inter.
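Putting the pieces together, a typical tSSS + movement-correction + transform run can be assembled as in the Python sketch below. The file names and the 10 s buffer are example values only, not project defaults; building the argument list first makes each option easy to toggle and record.

```python
import shlex

# Example file names only -- substitute your own raw/output files.
raw_file = "sub01_task_raw.fif"
out_file = "sub01_task_tsss_mc_trans.fif"
trans_target = "sub01_middle_session_raw.fif"  # middle head position as target

# Build the command as a list: no shell-quoting surprises.
cmd = [
    "maxfilter-2.2",
    "-f", raw_file,
    "-o", out_file,
    "-st", "10",             # tSSS with a 10 s window
    "-movecomp", "inter",    # intermittent movement correction (requires cHPI)
    "-trans", trans_target,  # virtual head position transform
    "-v",                    # verbose
]

print(shlex.join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Printing the command before running it also gives you an exact record of the options used, which is worth keeping in your analysis notes.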
Other useful options

  • -ds for downsampling: saves disk space, time and memory during processing
  • -v for verbose: prints much more information about the MaxFilter process
  • -hpicons: adjusts the initial consistency of isotrak and hpifit
  • -waves <filename>: saves the tSSS-rejected waveforms for later inspection
  • -corr: sets the tSSS correlation limit

For problem cases

  • Use option -force to bypass warnings (use with consideration!)
  • Try lower correlation limit for tSSS, down to roughly 0.9, to reduce some artifacts
  • Skip problematic segments of data (option -skip t1 t2)

MaxFiltering scripts

You can find some example scripts on CIBR server in /opt/tools/cibr-meg/.
These are described in more detail in the section "Scripts and stuff".

Importing and processing MRI data

Importing MR images from Synlab optical disks

There are two slightly different options for import: (1) to the general/Elekta format or (2) for FreeSurfer only.

1. To Elekta format

With these instructions you will be able to import MRI data to the CIBR servers. Thereafter, image processing for specific applications requires some further steps. You will end up with the data directories sets (incl. a FIFF-format description per image series) and slices (incl. all individual image slices), with personal ID information replaced by subject codes (pseudonyms). The sets data can be used in Elekta software and MNE, and the slices also in other applications that accept DICOM-formatted data. This job is best done on the 2nd floor analysis class Linux workstations in Kärki.

1) Copy the data from the CD to a temporary directory on cibr-srv using the command scp (the same as in MEG data transfer). The MR data resides in the directory DICOM on the CD. The path to the CD on the computer varies, but on the megprime computers it is something like

/run/media/<username>/. 

An example command would be something like: 

scp -pr /run/media/meguser/DCM_xyz/DICOM/S00001 <username>@cibr-srv2:~/tmp/ 

This will copy the DICOM images for subject 00001 to the user’s home directory. The xyz are replaced by a number. If you don’t have the tmp directory yet, you can create it by running

mkdir ~/tmp 

2) Start a NoMachine connection to the CIBR server and launch the Elekta software DicomAccess.

3) Check and make note of what kind of data you have available:

  • hit "Query"
  • set Source as Directory
  • set Source path to your tmp data directory (where the slices copied from the CD are).
  • set Source format to DICOM 10

4) The program shows you which data sets are available. You can double-click on the Patient name to see available Studies, and the Study ID to see available Series (the MR images / data sets). Double-clicking on a Series shows some metadata about those slices. Perform step 3) for each MRI slice sub-directory. The series we are interested in are “Sag CUBE T1” and “Propeller T2”.

5) Import the data by selecting the name of the interesting series, clicking on "GET", and inserting the destination directory in the pop-up window. The destination directory is usually in a project directory. It is a good idea to have subject-wise directories for MRIs. Thus, the directory would be

 /projects/<project_name>/<subject_code>/.

When the import is complete, the program displays a summary of the imported data. Here you can identify the data set (MRI series / sequence) by the file name -- take note of this! The data are now stored in two directories: slices for individual 2-D slices and sets for stacks of slices that form a single head scan (in FIFF format). Repeat step 5 for all interesting Series.

6) Exit the DicomAccess program.

7) Delete the data in the tmp directory created in step 1. You might need to change the file permissions first. Use either the graphical file manager or a command terminal. The commands

chmod -R 770 tmp

rm -r tmp/*

should do the job.

8) Type DicomBrowser & in the command terminal to start DicomBrowser. You can use this software to pseudonymize the data:

  • First load the imported data in the slices directory (use mouse or Ctrl-a to select all slices)
  • select and double-click the patient name to see the DICOM metadata viewer
  • replace “patient name” and “patient ID” with your subject code (the same as in MEG)
  • click “save” and be sure to select “overwrite existing files”
  • check that no patient name / birthday / other ID remain in the metadata

9) The subject’s name probably also appears in the directory sets. Change the file names to something else; one option is to make them correspond to T1 and T2 and/or the subject code, like:

 mv keijo_hamalainen_01.fif c0123_T1.fif
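If there are many files to rename, this can be scripted. A minimal Python sketch follows; the name-to-pseudonym mapping is invented for illustration, and the example works in a throwaway directory, so point sets_dir at the real sets directory in actual use.

```python
import os
import tempfile

# Hypothetical mapping from original (name-bearing) files to pseudonyms.
rename_map = {
    "keijo_hamalainen_01.fif": "c0123_T1.fif",
    "keijo_hamalainen_02.fif": "c0123_T2.fif",
}

# Demonstrate on a throwaway directory; use the real .../sets in practice.
sets_dir = tempfile.mkdtemp()
for old in rename_map:
    open(os.path.join(sets_dir, old), "w").close()  # stand-ins for FIFF files

for old, new in rename_map.items():
    src = os.path.join(sets_dir, old)
    if os.path.exists(src):
        os.rename(src, os.path.join(sets_dir, new))

print(sorted(os.listdir(sets_dir)))  # ['c0123_T1.fif', 'c0123_T2.fif']
```

Keeping the mapping in one place (or reading it from a file) also gives you a record of which pseudonym corresponds to which original series.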

10) Keep original CDs and other material in a securely locked storage.

2. For FreeSurfer only

1) Copy the data from the CD to a temporary directory on cibr-srv using the command scp (the same as in MEG data transfer). On the CD, the MR data resides in the directory DCM_xyz/DICOM. The xyz are replaced by a study number. The path to the CD on each computer varies, but on the megprime computers it is /run/media/<username>/. An example command for copying the image files would thus be something like:

scp -pr /run/media/meguser/DCM_123/DICOM/S00001 <username>@cibr-srv2:~/tmp/

This will copy the DICOM images for subject S00001 to the directory tmp under the user’s home directory. The user is specified with the 8-letter username. If you don’t have the ~/tmp directory yet, you can create it by running mkdir -p ~/tmp

2) Start a connection to the CIBR server and launch a command terminal.

3) Start DicomBrowser and open the MR data you just copied. Check which series correspond to which images -- there are usually 4 series under /DICOM/S00001. The series we are interested in are most often called “Sag CUBE T1” and “Propeller T2”, if both T1 and T2 have been acquired. You will need this information when running the FreeSurfer pipeline. If needed, you can rename these directories to T1 and T2 and delete the others.

4) Using DicomBrowser, pseudonymize all the data:

  • load all imported data under DICOM/S00001/ directory (use mouse or Ctrl-a to select all imported slices)
  • select and double-click the patient name to open the DICOM metadata viewer
  • replace the fields “patient name”, “patient ID”, “birthday” and other possible identifiers with the subject’s pseudonym code and other non-identifying data
  • click “save” and be sure to select “overwrite existing files”

5) To start FreeSurfer processing, make sure that you have correct $SUBJECTS_DIR exported. A directory for storing the MRIs and FreeSurfer results is usually /projects/<project_name>/mri/.

Please refer to “Preparing head models” for further help.

6) After FreeSurfer processing is completed, remove the temporary MRI data directory created in (1).

Preparing head models from MRI

1. Using FreeSurfer to prepare MNE head models

These instructions walk you through creating anatomically valid models to be used in MEG/EEG source estimation.

Tested with versions FS 5.3, MNE 2.7

What do you need?

  • access to CIBR server
  • MEG data in CIBR server
  • MRI data in CIBR server — see separate instructions for importing and pseudonymizing

Setting up

The software is pre-installed and the commands readily linked. Some users experience problems because of the tcsh shell; in this case, changing to bash with the command bash is a good idea. All subsequent commands are in bash format.

Before starting, we need to set some environment variables, which here mainly control data paths. The first is the directory for FreeSurfer processing results, $SUBJECTS_DIR, which defaults to the shared directory /opt/freesurfer/subjects. You should change this to point to the MRI subjects directory under your project. The other, simply stating the subject ID currently being processed, can be set in a similar way:

export SUBJECTS_DIR='/projects/xbrain/mri'

export SUBJECT='case_0001'

In this example, our project is called xbrain and the subject code is case_0001. Thereafter, you need to locate your MRI slices, usually one series for the T1 contrast and one for the T2 contrast. These inform FreeSurfer about the kind and location of the anatomical data. From now on, $SUBJECTS_DIR will refer to the project’s subjects directory, and $SUBJECT will refer to the subject being processed.

Running and checking FreeSurfer reconstructions

The FreeSurfer recon-all pipeline can be run either as one big run or in three smaller parts. Here, we run everything at once. For more information on the stages, see the Wiki at:

https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all

The recon-all pipeline performs skull-stripping, segmentation, Talairach transformation and spherical morphing, among other steps. The command to run this pipeline for a new subject is:

recon-all -s $SUBJECT -i <a_T1_slice> -T2 <a_T2_slice> -T2pial -wsatlas -all

Replace the arguments <a_T1_slice> and <a_T2_slice> with paths to DICOM slices that have been imported to the server. If you don’t have T2 data, omit the T2 arguments. You can use DicomBrowser or, e.g., the number of slices to identify the T1/T2 series.

Please note that running the whole process takes around 10 to 14 hours per subject.
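When processing many subjects, it can help to assemble the recon-all call programmatically, so that the T2 options are only added when a T2 series actually exists. A Python sketch, with all paths hypothetical:

```python
import shlex

# Hypothetical inputs -- point these at real DICOM slices on the server.
subject = "case_0001"
t1_slice = "/projects/xbrain/tmp/S00001/T1/i0000001.dcm"
t2_slice = None  # set to a T2 slice path if T2 was acquired

cmd = ["recon-all", "-s", subject, "-i", t1_slice]
if t2_slice is not None:
    # The T2 options only make sense together, so add them as a group.
    cmd += ["-T2", t2_slice, "-T2pial"]
cmd += ["-wsatlas", "-all"]

print(shlex.join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

This mirrors the command shown above while making the "omit the T2 arguments" rule explicit in code.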

If FreeSurfer finishes without errors, continue to checking the results. If there are errors, take action according to the software’s suggestions. There are several results to inspect. In particular, check the skull stripping: the result should show only the brain, with no remnants of skull, skin, dura etc. left. You can check the skull-strip result by running

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz $SUBJECTS_DIR/$SUBJECT/mri/brainmask.mgz:colormap=heat:opacity=0.5

If some skull or dura is left, the -gcut option often works reasonably well. It might also help to try different values of -wsthresh. These are explained in https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/SkullStripFix_freeview. Check the recon-all pipeline link above for instructions on manual editing, if needed.

recon-all -skullstrip -clean-bm -gcut -subjid $SUBJECT

Check how the result looks:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/brainmask.mgz $SUBJECTS_DIR/$SUBJECT/mri/brainmask.gcuts.mgz:colormap=heat:opacity=0.3 &

and re-run the relevant parts of the pipeline:

recon-all -autorecon-pial -subjid $SUBJECT

Then, you can also check that the white matter and pial surfaces are correct:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz -f $SUBJECTS_DIR/$SUBJECT/surf/lh.pial $SUBJECTS_DIR/$SUBJECT/surf/rh.pial $SUBJECTS_DIR/$SUBJECT/surf/lh.white:edgecolor=red $SUBJECTS_DIR/$SUBJECT/surf/rh.white:edgecolor=red

Set up MRIs and source space

The original MRI slices and sets will be needed in later stages of the analysis, so let’s set them up for MNE. Now that the environment variables are set, it is enough to run

mne_setup_mri

and image stacks $SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag and brain-neuromag will appear. The brain-neuromag stack will include the skull-stripped slices. You can add the T2-stack by adding --mri T2 to the command:

mne_setup_mri --mri T2

Next, we can proceed to setting up the source space, which defines the possible locations of the estimated neuronal currents in the subject’s brain (see MNE manual Ch. 3.5):

mne_setup_source_space --ico 4

The switch --ico 4 can be replaced by some other options to produce different source spaces — check the MNE manual Ch. 3.5. for this. The option --ico 4 will produce a source space with 5124 potential source dipole locations, each with 3 possible directions.
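The 5124 figure follows from the geometry of the recursively subdivided icosahedron: k subdivisions yield 10 * 4^k + 2 vertices, and one such surface is used per hemisphere. A quick check:

```python
def ico_vertices(k):
    """Vertex count of an icosahedron after k recursive subdivisions."""
    return 10 * 4 ** k + 2

per_hemisphere = ico_vertices(4)   # --ico 4
total = 2 * per_hemisphere         # both hemispheres
print(per_hemisphere, total)       # 2562 5124
```

The same formula tells you what to expect from other --ico settings (e.g., --ico 5 would give 10242 locations per hemisphere).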

The file $SUBJECTS_DIR/$SUBJECT/$SUBJECT-ico-4-src.fif will now contain the Elekta FIFF-format description of the source space. Check these results by loading an MRI stack into the MRIlab software and overlaying the source space points. The points corresponding to the source space <name>-src.fif can be found in $SUBJECTS_DIR/$SUBJECT/bem/<name>-lh.pnt and -rh.pnt. You can import these points into MRIlab via File → Import points.

Building the forward BEM model

The forward model is a mathematical-physical description of the extra-cranial electromagnetic field that would be generated by given neuronal currents. Understandably, an accurate forward model is needed before trying to estimate the inverse, i.e. which sources generated a measured field. The first thing we need is a conductor model, which describes the anatomical regions and their electric conductivities in the head. In the general case, one could employ a simple homogeneously conductive sphere as the head model, but as we have got this far with MRIs already, we will use that anatomical information to establish a more realistic model of the individual subject’s head.

mne_watershed_bem --atlas

will produce descriptions of the inner and outer skull surfaces and the skin. If this is not accurate enough, you can adjust the watershed algorithm’s pre-flooding parameter from 15 to, e.g., 25 using the original mri_watershed function and the switch -h:

mri_watershed -h 25 -atlas -useSRAS -surf $SUBJECTS_DIR/$SUBJECT/bem/watershed/$SUBJECT $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz $SUBJECTS_DIR/$SUBJECT/bem/watershed/ws

You can check the resulting boundaries with Freeview:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz -f $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_inner_skull_surface:edgecolor=red

Identifying the boundary surfaces where the conductivity changes is enough to construct a boundary element model (BEM), which can be used in magnetic field computations (forward modeling). An additional step here is to link these surfaces into the directory bem, where the software expects to find them:

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_inner_skull_surface $SUBJECTS_DIR/$SUBJECT/bem/inner_skull.surf

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_outer_skull_surface $SUBJECTS_DIR/$SUBJECT/bem/outer_skull.surf

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_outer_skin_surface $SUBJECTS_DIR/$SUBJECT/bem/outer_skin.surf
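The three ln -s commands above can also be scripted as a loop. A Python sketch that demonstrates the link layout in a throwaway directory (replace the temporary directory and example subject code with the real $SUBJECTS_DIR and subject in actual use):

```python
import os
import tempfile

# Throwaway stand-ins for $SUBJECTS_DIR and $SUBJECT.
subjects_dir = tempfile.mkdtemp()
subject = "case_0001"
bem = os.path.join(subjects_dir, subject, "bem")
watershed = os.path.join(bem, "watershed")
os.makedirs(watershed)

for surf in ["inner_skull", "outer_skull", "outer_skin"]:
    target = os.path.join(watershed, f"{subject}_{surf}_surface")
    open(target, "w").close()  # stand-in for the watershed output file
    os.symlink(target, os.path.join(bem, f"{surf}.surf"))

print(sorted(f for f in os.listdir(bem) if f.endswith(".surf")))
```

Looping over the surface names keeps the naming convention in one place, which avoids typos in the per-surface commands.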

As can be seen, we are producing a 3-layer BEM model. Now that we have the source locations and conductivity regions all set, we are ready to start working with the forward model. There is a convenience script for setting up the forward model in MNE software:

mne_setup_forward_model --surf --ico 4

Adding the switch --homog gives a homogeneous 1-layer model, which is enough for many MEG applications; if you have EEG data or otherwise prefer it, creating a 3-layer model with the above command is just as easy, though the computations take a bit longer. The script produces the files *-bem.fif and *-bem-sol.fif, which contain the BEM geometry model and the computational solution. In addition, surface points are saved in MRIlab- and FreeSurfer-compatible formats. It’s a good idea to check that the surfaces are in the right places by overlaying them, e.g., on the T1-neuromag images in MRIlab. You can also ask the software to --check or even --checkmore the results. For further details, see Ch. 3.7 of the MNE manual.

We usually don’t need to care about bad channels at this point, as MaxFilter is used to identify, mark and fix them. If this has not been done, give a list of bad channels with the utility mne_mark_bad_channels.

As an alternative to mne_watershed_bem, you can also use the Seglab software by Elekta Neuromag; as some other steps also differ, see the MNE manual appendix A.2.

[END OF MEG-INDEPENDENT PROCESSING]

2. Using the SimNIBS software 

Preparing MNE inverse operators

MEG-MRI co-registration

So far, this work has been independent of the MEG data. Thus, it should suffice to perform these steps only once per subject. From now on, the data from the MEG measurement will be used to include information about measurement sensors and head position.

The first and possibly the most important step is manually co-registering the MEG and MRI data in the head coordinate frame. This informs the forward and inverse solutions about the head location, which is very important for the computations and the validity of the results: any inaccuracy here will be propagated forward and probably amplified in data processing. This is also the step in the pipeline for which you acquired the extra digitisation points during subject preparation and documented the digitised anatomical landmarks. At this point it is also important that the channel info is correct, in particular that EEG channels, and only EEG channels, are named as such. Note: See MNE manual 7.16.1 if you want to utilise a high-resolution head surface tessellation in co-registration.

There are good instructions for this step in the MNE manual, replicated below; other ways to establish the MRI-to-head transform include using “mne coreg” from MNE-Python or the Elekta MRIlab software. This part needs to be redone for each session, i.e. whenever the fiducial digitisation has changed or there are new MRIs (or new MRI preprocessing).

————————————————————————————————————————

12.11.1 Initial alignment 

Follow these steps to make an initial approximation for the coordinate alignment.

In a command terminal, go to the subject’s data directory, and make sure you have exported the correct $SUBJECTS_DIR and $SUBJECT environment variables

Launch mne_analyze

Select File/Load digitizer data... and load the digitizer data from the MEG data file

Load an inflated surface from File/Load surface... 

Bring up the viewer window from View/Show viewer...

Click Options... in the viewer window. Make the following selections: 

  1. Switch left and right cortical surface display off.
  2. Make the scalp transparent.
  3. Switch Digitizer data off.

Bring up the coordinate alignment window from Adjust/Coordinate alignment...

Click on the RAP (Right Auricular Point) button. It turns red, indicating that you should select the point from the viewer window. Click at the approximate location of this point in the viewer. The button jumps up, turns to normal color, and the MRI coordinates of the point appear in the text fields next to the button.

Proceed similarly for the other two landmark points: Nasion and LAP

Press Align using fiducials. Notice that the coordinate transformation changes from a unit transformation (no rotation, no origin translation) to one determined by the identified landmark locations. The rotation matrix (upper left 3 x 3 part of the transformation) should have positive values close to one on the diagonal. The x and y values of the translation should be small and the z value should be negative, around -50 mm.

Make the Digitizer data visible from the options of the viewer window. After a while, the digitizer points will be shown. The color and size of the circles indicate whether and how much the point is inside (blue) or outside (red) of the scalp. The HPI coils are shown in green and the landmarks in light blue or light red color.

12.11.2 Refining the coordinate transformation 

Before proceeding to the refinement procedure, it is useful to remove outlier digitizer points. To discard an outlier point: 

  1. Click on Omit in the Adjust coordinate alignment window.
  2. Enter 10 for the distance of the points to be discarded.
  3. Click done. Points further than 10 mm disappear. 

The coordinate transformation can be adjusted manually with the arrow buttons in the middle of Adjust coordinate alignment dialog.

An automatic iterative procedure, Iterative Closest Point (ICP) matching is also provided. At each iteration step: 

  1. For each digitizer point, the closest point on the triangulated scalp surface is determined. 
  2. The best coordinate transformation aligning the digitizer points with the closest points on the head surface is computed. 

In step 2 of the iteration, the nasion is assigned five times the weight of the other points, since it can be assumed that the nasion is the easiest point to identify reliably from the surface image. 

The ICP alignment can be invoked by entering the desired number of iterations next to the ICP align button followed by return, or simply pressing ICP align. The iteration will converge in 10 to 20 steps. 

Warning: Use the ICP alignment option in mne_analyze with caution. The iteration will not converge to a reasonable solution unless an initial alignment is performed first according to Section 12.11.1. Outlier points should be excluded as described above. No attempt is made to compensate for the possible distance of the digitized EEG electrode locations from the scalp.

You can also rotate the transformation using the arrow keys. At the end of the co-registration process, you should see roughly equal numbers of small red and blue dots on the scalp, and the landmarks and coils in the correct positions. Remember that we are aiming at millimeter-scale accuracy!

12.11.3 Saving the transformation 

To create an MRI FIFF description file that incorporates the coordinate transformation, click Save MRI set in the Adjust coordinate alignment dialog. This will create the MRI set file in the directory $SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag/sets. You can also choose to save the transformation matrix alone, e.g. to the file ./meg-mri-trans.fif, to facilitate scripting.

————————————————————————————————————————

The forward solution

Finally, the source model, the BEM model, the MEG measurement info and MEG/MRI transformation are brought together to compute the forward solution. The BEM model is read from the sub-directory bem, the MEG information is collected from the FIFF data file, and the MEG/MRI transformation from the file “$SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag/sets/COR-<user>-<date>-<time>.fif”, defined above. The forward solution for this subject and measurement is then obtained by the following command:

mne_do_forward_solution --meas <data_file_name>

The output is the forward solution file in the measurement directory, automatically named according to the MEG data file and source space, e.g. 003_block1d_raw-003-ico-4-src-fwd.fif; you can also specify the output file with the switch --fwd. You can specify other inputs, like the source space, BEM and transformation, with the options --src, --bem, and --trans or --mri, respectively. You can also add --megonly to use a 1-layer model.

In MNE-Python, the forward solution is built with:

fwd = mne.make_forward_solution(raw.info, trans='meg-mri-trans.fif', src=src, bem=bem)

A separate forward solution is computed for each measurement file, because head position varies between measurements.

Computing the noise covariance matrix

The role of the noise covariance matrix is to describe the noise components that are expected in the recorded channel data. There is a small discussion in the MNE manual Ch. 3.12 on how this matrix should be computed. In a nutshell, use data that is as similar to your data of interest as possible, but without the neuronal component of interest. Most commonly, for evoked studies we use the pre-stimulus baseline of the epochs, and for studying ongoing activity we use empty-room recordings (or sometimes rest periods). If you have applied some pre-processing to your data, you need to apply the same pre-processing steps to the data used to compute the noise covariance.
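Conceptually, the noise covariance is just the channel-by-channel covariance of noise-only data. A minimal NumPy sketch for intuition (random data stands in for baseline or empty-room segments; this is not a replacement for MNE's estimators):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 5, 10_000
noise = rng.standard_normal((n_channels, n_samples))  # stand-in for noise data

# Empirical covariance: demean each channel, then average outer products.
noise = noise - noise.mean(axis=1, keepdims=True)
cov = noise @ noise.T / (n_samples - 1)

print(cov.shape)  # (5, 5); symmetric, diagonal holds the channel variances
```

In practice, the regularized estimators in MNE-Python are preferred, since raw empirical covariances of short baseline segments can be poorly conditioned.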

MNE-Python offers well-developed tools for covariance estimation and inverse operator preparation, as well as for many subsequent processing steps. Noise covariance estimation is implemented in the functions mne.compute_covariance() and mne.compute_raw_covariance().

You can also use e.g. mne_browse_raw to compute noise covariance matrices (see the MNE manual to learn the details).

Calculating and applying the inverse operator

With the MNE command-line tools, the inverse operator is generated with the following command (options can vary):

mne_do_inverse_operator --fwd /projects/brainx/c007/c007-fwd.fif --loose 0.2 --depth --senscov /projects/brainx/c007/ER-cov.fif

The forward and noise covariance data are loaded from the previously saved files. You can then load the MEG data and start playing with source models!

To generate an inverse operator in MNE-Python, say for example:

inv = mne.minimum_norm.make_inverse_operator(ave.info, fwd, ncov, loose=0.2, depth=0.8)

Here, fwd and ncov are the forward solution and noise covariance matrix instances. Check the possible options for different kinds of inverse models in the manual or the on-line documentation. Note that you can create beamformers in MNE-Python in a very similar fashion!

Available analysis software

Most of the common software needed for MEG analysis is available on the CIBR servers. If you have additional needs, you can request them from cibr-support.

Analysis training

CIBR organizes MEG data analysis training as Methods Clinics. Everyone is free to suggest topics! Current schedule is available in CIBR web pages, and information about Clinics is sent to meg-users mailing list.

Scripts and stuff

Various scripts meant for everyone's enjoyment are located on the CIBR servers, at /opt/tools/cibr-meg/. If you find one useful, copy the script to your own project directory and edit it as needed. A more detailed description of the code base is coming up soon.