MRI-related processing

Importing and processing MRI data

For FreeSurfer:

1) Copy the data from the CD to a temporary directory on the CIBR server using the scp command (the same as for MEG data transfer). On the CD, the MR data may reside in a directory such as DCM_xyz/DICOM or CDROM/a, where "xyz" is replaced by a study number. The path to the CD drive varies between computers, but on the megprime computers it is /run/media/<username>/. An example command for copying the image files could thus be something like:

scp -pr /run/media/meguser/DCM_123/DICOM/S00001 <username>@cibr2:~/tmp/

This will copy the DICOM images for subject S00001 to the directory tmp under the user's home directory. The user is specified with the 8-letter username. If you don't have the ~/tmp directory yet, you can create it with the command mkdir -p ~/tmp.

2) Start a connection to the CIBR server (as described elsewhere in the Intranet instruction pages) and launch a command terminal. To enable modification and later deletion of the data, you need to allow write access to all the files. The most convenient way is the following: first, go to where the data are with cd ~/tmp (if that is where you put your MRIs in step 1). Then, check the name of your imported directory (most often "a" or "S00001") and give yourself write permission there: chmod -R +w S00001.

3) Start DicomBrowser and open all the MR slices you just copied to the server (use Ctrl-A). Check which series correspond to which files by navigating the tree structure in the left pane -- there are usually 4 series and the file names could be something like z123. The series we are interested in are most often called “Sag CUBE T1” and either “T2 FSE” or "Propeller T2", if both T1 and T2 have been acquired. You will need this information soon when running the FreeSurfer reconstruction pipeline.

4) Using DicomBrowser, pseudonymize all the data:

  • Load all imported data under the DICOM/S00001/ directory (use the mouse or Ctrl-A to select all imported slices)
  • Select and double-click the field “patient name” to open the DICOM metadata viewer
  • Replace the fields “patient name”, “patient ID”, “birth date” and any other possible identifiers with the subject’s pseudonym code and other non-identifiable data
  • Click “save” and be sure to select “overwrite existing files”

5) To start FreeSurfer processing, make sure that you have the correct $SUBJECTS_DIR exported. The directory for storing the MRIs and FreeSurfer results is usually /projects/<project_name>/mri/. Please refer to “Preparing head models from MRI” below for further help.

6) After FreeSurfer processing is completed, remove the temporary MRI data directory created in step 1 with rm -rf ~/tmp/S00001 -- be very careful here not to delete anything by accident! Keep the MRI data discs in a safe place.

Preparing head models from MRI

1. Using FreeSurfer to prepare MNE head models

These instructions walk you through creating anatomically valid models to be used in MEG/EEG source estimation.

Tested with versions FS 7.1, MNE 2.7

What do you need?

  • access to CIBR server
  • MEG data in CIBR server
  • MRI data in CIBR server — see separate instructions for importing and pseudonymizing

Setting up

The software is pre-installed and the commands are readily linked. Some users experience problems because of the tcsh shell; in that case, switching to bash with the command bash is a good idea. All subsequent commands are given in bash format.

Before starting, we need to set some environment variables, which in this case mainly control data paths. The first is the directory for FreeSurfer processing results, $SUBJECTS_DIR, which defaults to the shared directory /opt/freesurfer7/7.1.1-1/subjects. You should change this to point to the MRI directory under your project. The subject ID currently being processed can be set in a similar way:

export SUBJECTS_DIR='/projects/xbrain/mri'

export SUBJECT='case_0001'

In this example, our project would be called xbrain and the subject code is case_0001. From now on, $SUBJECTS_DIR will refer to the project's subjects directory, and $SUBJECT will refer to the subject being processed. Thereafter, you need to locate your MRI slice files, one for T1 and one for T2.

Running and checking FreeSurfer reconstructions

The FreeSurfer recon-all pipeline can be run either as one big run or as three smaller parts. Here, we run everything at once. For more information on the stages, see the Wiki at:

https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all

The recon-all pipeline performs skull-stripping, segmentation, Talairach transformation and spherical morphing, among other steps. The command to run this pipeline for a new subject is:

recon-all -s $SUBJECT -i <a_T1_slice> -T2 <a_T2_slice> -T2pial -wsatlas -all

Replace the arguments <a_T1_slice> and <a_T2_slice> with paths to DICOM slice files that you have imported to the server and identified with DicomBrowser. If you don't have T2 data, omit the T2-related arguments.

Please note that the whole process takes around 10–14 hours per subject.

If FreeSurfer finishes without errors, you can proceed to checking the results. If there are errors, take action according to the software's suggestions. There are several results to be inspected. In particular, check the skull stripping: the result should show only the brain, with no remnants of skull, skin, dura etc. left. You can check the skull-strip result with the command

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz $SUBJECTS_DIR/$SUBJECT/mri/brainmask.mgz:colormap=heat:opacity=0.5

If some skull or dura is left, the -gcut option often works reasonably well. It might also help to try different values of -wsthresh. These are explained at https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/SkullStripFix_freeview. Check the recon-all pipeline link above for instructions on manual editing, if needed.

recon-all -skullstrip -clean-bm -gcut -subjid $SUBJECT

Check how the result looks:

freeview -v $SUBJECT/mri/brainmask.mgz $SUBJECT/mri/brainmask.gcuts.mgz:colormap=heat:opacity=0.3 &

and re-run the relevant parts of the pipeline:

recon-all -autorecon-pial -subjid $SUBJECT

Then, you can also check that the white matter and pial surfaces are correct:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz -f $SUBJECTS_DIR/$SUBJECT/surf/lh.pial $SUBJECTS_DIR/$SUBJECT/surf/rh.pial $SUBJECTS_DIR/$SUBJECT/surf/lh.white:edgecolor=red $SUBJECTS_DIR/$SUBJECT/surf/rh.white:edgecolor=red

Set up MRIs and source space

The original MRI slices and sets will be needed in later stages of the analysis, so let's import them. Now that we have set up the environment variables, it is enough to run

mne_setup_mri

and image stacks $SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag and brain-neuromag will appear. The brain-neuromag stack will include the skull-stripped slices. You can add the T2-stack by adding --mri T2 to the command.

Next, we can proceed to setting up the source space, which defines the possible locations of the estimated neuronal currents in the subject's brain (see MNE manual Ch. 3.5):

mne_setup_source_space --ico 4

The switch --ico 4 can be replaced by other options to produce different source spaces; see the MNE manual Ch. 3.5 for details. The option --ico 4 will produce a source space with 5124 potential source dipole locations, each with 3 possible orientations.

The file $SUBJECTS_DIR/$SUBJECT/$SUBJECT-ico-4-src.fif will now contain the Elekta FIFF-format description of the source space. Check these results by loading an MRI stack into the MRIlab software and overlaying the source space points. The points corresponding to source space <name>-src.fif can be found in $SUBJECTS_DIR/$SUBJECT/bem/<name>-lh.pnt and -rh.pnt. You can import these points to MRIlab via File > Import points.
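If you prefer to work in MNE-Python instead of the MNE-C tools, a corresponding source space can be set up directly from the FreeSurfer results. A minimal sketch, assuming the example $SUBJECTS_DIR and subject code used above (the output file name is just a choice):

import mne

subjects_dir = '/projects/xbrain/mri'   # same as $SUBJECTS_DIR above
subject = 'case_0001'                   # same as $SUBJECT above

# ico-4 subdivision gives 2562 sources per hemisphere, 5124 in total
src = mne.setup_source_space(subject, spacing='ico4', subjects_dir=subjects_dir)
mne.write_source_spaces(f'{subjects_dir}/{subject}/bem/{subject}-ico-4-src.fif', src)

The resulting -src.fif file can later be used with mne.make_forward_solution in the same way as the MNE-C output.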

Building the forward BEM model

The forward model is a mathematical-physical description of the extracranial electromagnetic field that given neuronal currents would generate. Understandably, an accurate forward model is needed before trying to estimate the inverse, i.e. which sources generated a measured field. The first thing we need is a conductor model, which describes the anatomical regions of the head and their electric conductivities. In the simplest case, one could employ a homogeneously conducting sphere as the head model, but since we have come this far with the MRIs already, we will use that anatomical information to establish a more realistic model of the individual subject's head.

mne_watershed_bem --atlas

will produce descriptions of the inner and outer skull surfaces and the skin. If this is not accurate enough, you can adjust the pre-flooding parameter of the watershed algorithm from 15 to, e.g., 25 using the original mri_watershed function and the switch -h:

mri_watershed -h 25 -atlas -useSRAS -surf $SUBJECTS_DIR/$SUBJECT/bem/watershed/$SUBJECT $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz $SUBJECTS_DIR/$SUBJECT/bem/watershed/ws

You can check the resulting boundaries with Freeview, for example:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz -f $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_inner_skull_surface:edgecolor=red

Identifying the boundary surfaces where conductivity changes is enough to construct a boundary element model (BEM), which can be used in magnetic field computations (forward modeling). An additional step here is to link these surfaces into the bem directory, where the software expects to find them:

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_inner_skull_surface $SUBJECTS_DIR/$SUBJECT/bem/inner_skull.surf

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_outer_skull_surface $SUBJECTS_DIR/$SUBJECT/bem/outer_skull.surf

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_outer_skin_surface $SUBJECTS_DIR/$SUBJECT/bem/outer_skin.surf

Optionally, these surfaces can be estimated with another software package, SimNIBS. See the instructions below for this.

As can be seen, we are producing a 3-layer BEM model. Now that we have the source locations and conductivity regions all set, we are ready to start working with the forward model. There is a convenience script for setting up the forward model in MNE software:

mne_setup_forward_model --surf --ico 4

Adding the switch --homog produces a homogeneous 1-layer model, which is enough for many MEG applications; if you have EEG data or otherwise prefer to, creating a 3-layer model with the above command is just as easy, although the computations take a bit longer. The script produces the files *-bem.fif and *-bem-sol.fif, which contain the BEM geometry model and the computational solution. In addition, surface points are saved in MRIlab- and FreeSurfer-compatible formats. It's a good idea to check that the surfaces are in the right places by overlaying them e.g. on the T1-neuromag images in MRIlab. You can also ask the software to --check or even --checkmore the results. For further details, see Ch. 3.7 of the MNE manual.
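The same surfaces and BEM solution can also be produced in MNE-Python, which may be convenient if the rest of your pipeline is scripted in Python. This is only a sketch using the same example paths as above; the conductivity defaults correspond to a 3-layer model:

import mne

subjects_dir = '/projects/xbrain/mri'
subject = 'case_0001'

# Run the watershed algorithm and place the surfaces under bem/watershed
mne.bem.make_watershed_bem(subject, subjects_dir=subjects_dir, atlas=True, overwrite=True)

# 3-layer BEM geometry and solution; use conductivity=(0.3,) for a 1-layer MEG-only model
model = mne.make_bem_model(subject, ico=4, subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
mne.write_bem_solution(f'{subjects_dir}/{subject}/bem/{subject}-ico-4-bem-sol.fif', bem)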

We usually don’t need to care about bad channels at this point, as we are using MaxFilter to identify, mark and fix those. If this has not been done properly, give a list of bad channels with the utility mne_mark_bad_channels.

As another alternative to mne_watershed_bem, you can also use the Seglab software by Elekta Neuromag; as some other steps also differ in that case, see appendix A.2 of the MNE manual.

2. Using the SimNIBS software

Magnetic resonance images are needed to construct accurate individual head models for source modelling in MEG. Instructions for importing MRI data provided by Synlab, pseudonymizing them and preparing MNE-compatible head models using the FreeSurfer pipeline can be found above.

SimNIBS is a toolkit originally made for simulating the electromagnetic fields produced by brain stimulation. It should be able to provide better anatomical surface reconstructions than the watershed algorithm shipped with FreeSurfer, and it is actively developed and maintained. To get started, you need the MRI data imported and prepared with recon-all. In the simplest use case, the next step is running the commands in the example SimNIBS script at:

/opt/tools/cibr-meg/MRI/simnibs_example.sh

Processing all steps in the example script takes a couple of hours. The script produces surface files (*.surf) for the inner skull, outer skull and skin surfaces in the subfolder m2m_<subject>/fsmesh. You will use these surfaces in the FreeSurfer-based pipeline for preparing the BEM head models, instead of the surfaces produced by the “watershed” algorithm. Note that you still need the FreeSurfer results, for example for setting up the source space for MNE.

Copy the surface mesh files to the bem subfolder of the subject's FreeSurfer directory, or make symbolic links there, which goes something like:

ln -s /path/to/fsmesh/inner-skull.surf /path/to/subject/bem/inner_skull.surf

The validity of the produced surfaces can be checked using Freeview; for example, in the folder /projects/training/MRI/, the command

freeview -v simnibs_test/cibr_test_adult1_T1fs_conform.nii.gz -f cibr_test_adult1/bem/inner_skull.surf:edgecolor=red cibr_test_adult1/bem/outer_skull.surf:edgecolor=blue

will overlay the inner and outer skull surfaces on the T1-weighted MR images, where you can visually confirm that the surfaces have been extracted correctly. You can then proceed with the FreeSurfer instructions above, preparing the forward model with the command mne_setup_forward_model --homog --surf
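If you prefer checking the surfaces without Freeview, MNE-Python can plot the BEM boundaries found in the subject's bem folder on top of the MRI. A small sketch using the example training data mentioned above (paths are illustrative):

import mne

# Plots the *.surf boundaries in <subjects_dir>/<subject>/bem/ over the T1 slices
mne.viz.plot_bem(subject='cibr_test_adult1', subjects_dir='/projects/training/MRI',
                 orientation='coronal', brain_surfaces='white')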

[END OF MEG-INDEPENDENT PROCESSING]

Preparing MNE inverse operators

MEG-MRI co-registration

So far, this work has been independent of the MEG data. Thus, it should suffice to perform these steps only once per subject. From now on, the data from the MEG measurement will be used to include information about measurement sensors and head position.

The first and possibly the most important thing to do is to manually co-register the MEG and MRI data in the head coordinate frame. This informs the forward and inverse solutions about head location, which is very important for the computations and the validity of the results: any inaccuracy here will be propagated forward and probably amplified in later processing. This is also the step in the pipeline for which you acquired the extra digitisation points during subject preparation and documented the digitised anatomical landmarks. At this point it is also important that the channel info is correct, in particular that EEG channels, and only EEG channels, are named as such. Note: See MNE manual 7.16.1 if you want to utilise a high-resolution head surface tessellation in co-registration.

There are good instructions for this step in the MNE manual, which are replicated below. Other possibilities for establishing the MRI-to-head transform include the “mne coreg” tool from MNE-Python and the Elekta MRIlab software. This part needs to be done for each session, i.e. whenever the fiducial digitisation has changed or there are new MRIs (or new preprocessing).
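If you go the MNE-Python route, the graphical co-registration tool can be launched for example as follows (a sketch; the raw file path is only an example, and you can equally well run mne coreg from the command line):

import mne

# Opens the interactive co-registration GUI; save the resulting -trans.fif when done
mne.gui.coregistration(subject='case_0001', subjects_dir='/projects/xbrain/mri',
                       inst='/projects/xbrain/meg/case_0001/block1_raw.fif')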

————————————————————————————————————————

12.11.1 Initial alignment

Follow these steps to make an initial approximation for the coordinate alignment.

In a command terminal, go to the subject's data directory; make sure you have exported the correct $SUBJECT and $SUBJECTS_DIR environment variables

Launch mne_analyze

Select File/Load digitizer data... and load the digitizer data from the MEG data file

Load an inflated surface from File/Load surface...

Bring up the viewer window from View/Show viewer...

Click Options... in the viewer window. Make the following selections:

  1. Switch left and right cortical surface display off.
  2. Make the scalp transparent.
  3. Switch Digitizer data off.

Bring up the coordinate alignment window from Adjust/Coordinate alignment...

Click on the RAP (Right Auricular Point) button. It turns red, indicating that you should select the point from the viewer window. Click at the approximate location of this point in the viewer. The button jumps up, turns to normal color, and the MRI coordinates of the point appear in the text fields next to the button.

Proceed similarly for the other two landmark points: Nasion and LAP

Press Align using fiducials. Notice that the coordinate transformation changes from a unit transformation (no rotation, no origin translation) to one determined by the identified landmark locations. The rotation matrix (upper left 3 x 3 part of the transformation) should have positive values close to one on the diagonal. The x and y values of the translation should be small and the z value should be negative, around -50 mm.

Make the Digitizer data visible from the options of the viewer window. After a while, the digitizer points will be shown. The color and size of the circles indicate whether and how much the point is inside (blue) or outside (red) of the scalp. The HPI coils are shown in green and the landmarks in light blue or light red color.

12.11.2 Refining the coordinate transformation

Before proceeding to the refinement procedure, it is useful to remove outlier digitizer points. To discard an outlier point:

  1. Click on Omit in the Adjust coordinate alignment window.
  2. Enter 10 for the distance of the points to be discarded.
  3. Click done. Points further than 10 mm disappear.

The coordinate transformation can be adjusted manually with the arrow buttons in the middle of Adjust coordinate alignment dialog.

An automatic iterative procedure, Iterative Closest Point (ICP) matching is also provided. At each iteration step:

  1. For each digitizer point, the closest point on the triangulated scalp surface is determined.
  2. The best coordinate transformation aligning the digitizer points with the closest points on the head surface is computed.

In step 2 of the iteration, the nasion is assigned five times the weight of the other points, since it can be assumed that the nasion is the easiest point to identify reliably from the surface image.

The ICP alignment can be invoked by entering the desired number of iterations next to the ICP align button followed by return, or simply pressing ICP align. The iteration will converge in 10 to 20 steps.

Warning: Use the ICP alignment option in mne_analyze with caution. The iteration will not converge to a reasonable solution unless an initial alignment is performed first according to Section 12.11.1. Outlier points should be excluded as described above. No attempt is made to compensate for the possible distance of the digitized EEG electrode locations from the scalp.

You can also rotate the transformation using the arrow keys. At the end of the co-registration process, you should see roughly the same number of small red and blue dots on the scalp, and the landmarks and coils in the correct positions. Remember that we are aiming at millimeter-scale accuracy!

12.11.3 Saving the transformation

To create an MRI FIFF description file, which incorporates the coordinate transformation, click Save MRI set in the Adjust coordinate alignment dialog. This will create the MRI set file in the directory $SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag/sets. You can also choose to save the transformation matrix alone, e.g. to the file ./meg-mri-trans.fif, to facilitate scripting.

————————————————————————————————————————

The forward solution

Finally, the source model, the BEM model, the MEG measurement info and the MEG/MRI transformation are brought together to compute the forward solution. The BEM model is read from the subdirectory bem, the MEG information is collected from the FIFF data file, and the MEG/MRI transformation from the file “$SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag/sets/COR-<user>-<date>-<time>.fif” created above. The forward solution for this subject and measurement is then obtained with the following command:

mne_do_forward_solution --meas <data_file_name>

The output is the forward solution file in the measurement directory, automatically named according to the MEG data and the source space, for example 003_block1d_raw-003-ico-4-src-fwd.fif; you can also specify the output file with the switch --fwd. You can specify other inputs, such as the source space, BEM and transformation, with the options --src, --bem and --trans or --mri, respectively. You can also add --megonly to use a 1-layer model.

In MNE-Python, the forward solution is built with:

fwd = mne.make_forward_solution(data.info, trans='meg-mri-trans.fif', src=src, bem=bem)

A separate forward solution is computed for each measurement file, because head position varies between measurements.
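The single line above assumes that the source space, BEM solution and transformation are already loaded. A slightly fuller sketch of the same step, with illustrative file names based on the examples above:

import mne

raw = mne.io.read_raw_fif('003_block1d_raw.fif')             # measurement info
src = mne.read_source_spaces('case_0001-ico-4-src.fif')      # source space
bem = mne.read_bem_solution('case_0001-ico-4-bem-sol.fif')   # BEM solution
# trans: the MEG/MRI transformation saved during co-registration
fwd = mne.make_forward_solution(raw.info, trans='meg-mri-trans.fif',
                                src=src, bem=bem,
                                meg=True, eeg=False)  # MEG-only; set eeg=True with a 3-layer BEM
mne.write_forward_solution('003_block1d_raw-fwd.fif', fwd, overwrite=True)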

Computing the noise covariance matrix

The role of the noise covariance matrix is to describe the noise components that are expected in the recorded channel data. There is a small discussion in the MNE manual Ch. 3.12 on how this matrix should be computed. In a nutshell, use data that is as similar to your data of interest as possible, but without the neuronal component of interest. Most commonly, for evoked studies we use the pre-stimulus baseline of the epochs, and for studying ongoing activity we use empty-room recordings (or sometimes rest periods). If you have applied some pre-processing to your data, you need to apply the same pre-processing steps to the data used to compute the noise covariance.

MNE-Python offers well-developed tools for covariance estimation and inverse operator preparation, as well as for many subsequent processing steps. Estimating the noise covariance is implemented in the functions mne.compute_covariance() and mne.compute_raw_covariance().

You can also use e.g. mne_browse_raw to compute noise covariance matrices (see the MNE manual to learn the details).
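As a sketch of both cases in MNE-Python (file names, time windows and the estimator method are only examples):

import mne

# Evoked study: covariance from the pre-stimulus baseline of the epochs
epochs = mne.read_epochs('case_0001_task-epo.fif')
noise_cov = mne.compute_covariance(epochs, tmin=None, tmax=0.0, method='shrunk')

# Ongoing activity: covariance from an empty-room recording (apply the same preprocessing first!)
er_raw = mne.io.read_raw_fif('empty_room_raw.fif')
noise_cov_er = mne.compute_raw_covariance(er_raw, method='shrunk')

mne.write_cov('case_0001-cov.fif', noise_cov)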

Calculating and applying the inverse operator

With the MNE command-line tools, the inverse operator is generated with the following command (options can vary):

mne_do_inverse_operator --fwd /projects/brainx/c007/c007-fwd.fif --loose 0.2 --depth --senscov /projects/brainx/c007/ER-cov.fif

The forward and noise covariance data are loaded from the previously saved files. You can then load the MEG data and start playing with source models!

To generate an inverse operator in MNE-Python, say for example:

inv = mne.minimum_norm.make_inverse_operator(ave.info, fwd, ncov, loose=0.2, depth=0.8)

Here, fwd and ncov are the forward solution and the noise covariance matrix, and ave is the evoked response whose measurement info is used. Check the possible options for different kinds of inverse models in the manual or the online documentation. Note that you can create beamformers in MNE-Python in a very similar fashion!
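Once the inverse operator exists, applying it to evoked data goes roughly like this (a sketch; dSPM and an SNR of 3 are common, but by no means the only, choices):

from mne.minimum_norm import apply_inverse

snr = 3.0
stc = apply_inverse(ave, inv, lambda2=1.0 / snr ** 2, method='dSPM')
stc.save('case_0001_task-dspm')   # writes lh/rh source estimate files; the name is just an example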