MEG data analysis


Using MaxFilter to clean and correct MEG data

Running MaxFilter is compulsory if your data has been recorded with IAS (Internal Active Shielding) on, and is recommended for data where IAS is off.
The MaxFilter manual is available, among other MEG manuals, in the MEG lab and the 2nd floor analysis room, as well as on the CIBR server in PDF format at /neuro/manuals/.

Note: It is important to carefully inspect the raw data before and after MaxFiltering, regardless of the exact MaxFilter procedure. It is often helpful to compare the result to the original data as well as to your notes about the measurement session.


MaxFilter version 2.2 is the officially released and clinically accepted version. The newer MaxFilter 3.0 has many advantages and new features, but is still in the development stage, which means there may be bugs. The main differences for the user are the improved bad channel identification and SSS origin fitting in 3.0. However, similar functionality can be obtained with MF 2.2 using the utility programs xscan for bad channel detection and fit_sphere_to_isotrak for origin fitting. In addition, MF 3.0 has some improved noise rejection capabilities based on channel cross-validation, which can help in particular in increasing SNR at higher frequencies (gamma band).

The MaxFilter 2.2 GUI can be opened with the command maxfilter_gui, whereas for running MF 2.2 in the terminal the command is maxfilter-2.2. MF 3.0 can currently be started only from the command line, with maxfilter-3.0. Command-line options can be listed with the switch -help. It is a good idea to explicitly specify the intended software version and parameters.


Due to the large number of different options, the optimal MaxFilter processing for each data set or project should be detailed by the researchers. For this, understanding of the underlying principles is necessary. Please study the manual and original SSS papers by Taulu et al.

The recommended MaxFilter workflow includes the temporal extension, tSSS. This can be applied with the option -st. If there are no temporal projections to be rejected, tSSS will reduce to basic SSS, although running it will be considerably slower. You can give the inspected window length in seconds after -st. It is good to know that tSSS, as a side effect, high-pass filters the data, with an edge frequency that depends on the selected window length.

If there is a need to compare channel-level data between sessions or subjects, the option -trans <filename.fif> must be used to virtually co-align the head positions between the recordings. The distance between the two head positions (reported by MaxFilter) must not exceed 25 mm, and even then the resulting signals require inspection. Problems in the head position transform are most often visible in the vertex channels, and they are due to trying to fix an originally low head position. A good way to minimize transformation distances is to use the middle head position (among sessions or subjects) as the transform target (see the cibr-meg script collection below).
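The -trans co-alignment described above is often applied to all sessions in a batch. A sketch, with hypothetical file names (here the commands are only echoed, not executed):

```shell
# Hypothetical file names: transform each session to a common target head
# position (e.g. the session with the middle head position).
target=middle_session_raw.fif
for f in sess1_raw.fif sess2_raw.fif; do
    out="${f%_raw.fif}_trans_raw.fif"   # derive the output name from the input
    echo maxfilter-2.2 -f "$f" -o "$out" -trans "$target"
done
```

Remember to inspect the vertex channels of each transformed file afterwards.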

If continuous HPI (cHPI) has been on during the recording to monitor subject’s head movement, you need to apply movement correction as well. Intermittent movement correction, -movecomp inter, is the required command line option.
Other useful options:

  • -ds for downsampling - saves disk space, time and memory in later processing
  • -v for verbose - gives much more information about the processing
  • -hpicons - adjusts the initial consistency of isotrak and hpifit
  • -waves <filename> - saves the tSSS-rejected waveforms for later inspection
  • -corr - sets the tSSS correlation limit

For problem cases:

  • Use option -force to bypass warnings (use with consideration!)
  • Try a lower correlation limit for tSSS, down to roughly 0.9, to further reduce artifacts
  • Skip problematic segments of data (option -skip t1 t2)
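Putting the options above together, a typical tSSS call with movement compensation could look like the following sketch. The file name and parameter values are hypothetical and must be chosen per project; the command is only echoed here, not run:

```shell
# Hypothetical file names and parameter values; the command is only echoed.
raw=subject01_task_raw.fif
out="${raw%_raw.fif}_tsss_mc_raw.fif"   # output name marks tSSS + movecomp
echo maxfilter-2.2 -f "$raw" -o "$out" \
     -st 16 -corr 0.98 -movecomp inter -hpicons -v
```

Encoding the applied processing (here tsss_mc) in the output file name makes it easier to track what has been done to each file.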

MaxFiltering scripts

You can find some example scripts on the CIBR server in /opt/tools/cibr-meg/.
These are described in more detail in the instructions “Scripts and stuff” below.

Importing and processing MRI data

Importing MR images from Synlab optical disks

There are two slightly different options for import: (1) to Megin format or (2) for FreeSurfer only.

1. To Megin format

With these instructions you will be able to import MRI data to the CIBR servers. Thereafter, image processing for specific applications requires some further steps. You will end up with the data directories sets (incl. a FIF-format description per image series) and slices (incl. all individual image slices), with personal ID information replaced by subject codes (pseudonyms). The sets data can be used in the Megin DANA software and MNE, and the slices also in other applications that accept DICOM-formatted data. This job is best done on the Linux workstations in the 2nd floor analysis class in Kärki.

1) Copy the data from the CD to a temporary directory on the CIBR server using the command scp (the same as in MEG data transfer). The MR data resides in the directory DICOM on the CD. The path to the CD on the computer varies, but on the "megprime" computers it is something like /run/media/<username>/. An example command would be something like: scp -pr /run/media/meguser/DCM_xyz/DICOM/S00001 <username>@cibr2:~/tmp/ (the xyz is replaced by a number). This will copy the DICOM images for subject S00001 to the tmp subfolder in the user’s home directory. You can create tmp with the command mkdir -p ~/tmp.

2) Start NoMachine connection to the CIBR server and launch the software DicomAccess.

3) Check and make note of what kind of data you have available:

  • hit Query
  • set Source as Directory
  • set Source path to your tmp data directory (where the slices copied from the CD are).
  • set Source format to DICOM 10

The program shows you which data sets exist. You can double-click the Patient name to see the available Studies, and the Study ID to see the available Series (the MR images / data sets). Double-clicking a Series shows some metadata about those slices. Perform step 3 for each MRI slice sub-directory. The series we are interested in are “Sag CUBE T1” and “Propeller T2”.

4) Import the data by selecting the name of the interesting series, clicking on "GET", and inserting the destination directory in the pop-up window. The destination directory is usually in a project directory. It is a good idea to have subject-wise directories for MRIs. Thus, the directory would be something like /projects/<project_name>/mri/<subject_code>/. When the import is complete, the program displays a summary of the imported data. Here you can identify the data set (MRI series / sequence) with the file name -- take note of these! The data are now stored in two directories: slices for individual 2-D slices and sets for stacks of slices that form a single head scan (in FIFF format). Repeat step 4 for all interesting Series.

5) Exit the DicomAccess program.

6) Delete the data in the tmp directory created in step 1. You might need to change file permissions first. Use either the graphical file manager or the command terminal. The commands chmod -R 770 tmp and rm -r tmp/* should do the job.

7) Type DicomBrowser & in the command terminal to start DicomBrowser, which you use to pseudonymize the data:

  • First load the imported data in the slices directory (use mouse or Ctrl-a to select all slices)
  • Select and double-click the field "patient name" to open the DICOM metadata viewer
  • Replace “patient name” and “patient ID” with your subject code (the same as in MEG)
  • Click “save” and be sure to select “overwrite existing files”
  • Check that no patient name / birthday / other ID remain in the metadata

8) The subject’s name probably also appears in the file names in the directory sets. Rename the files, for example to correspond to T1/T2 and/or the subject code, like: mv keijo_hamalainen_01.fif c0123_T1.fif

9) Keep original CDs and other material in a securely locked storage cabinet.

2. For FreeSurfer only

1) Copy the data from the CD to a temporary directory on cibr server using the command scp (the same as in MEG data transfer). On the CD, the MR data might reside in directory DCM_xyz/DICOM or CDROM/a. The "xyz" are replaced by a study number. The path to the CD on each computer varies, but on the megprime computers it is /run/media/<username>/. An example command for copying the image files could thus be something like: scp -pr /run/media/meguser/DCM_123/DICOM/S00001 <username>@cibr2:~/tmp/

This will copy the DICOM images for subject S00001 to directory tmp under the home directory of the user. The user is specified with the 8-letter username.  If you don’t have the ~/tmp directory yet, you can create it by commanding mkdir -p ~/tmp

2) Start a connection to the CIBR server (as described elsewhere in the Intranet instruction pages) and launch a command terminal. To enable modification and later deletion of the data, you need to allow write access to all the files. This is most conveniently done the following way: first, go to where the data are by commanding cd ~/tmp (if that is where you put your MRIs in step 1). Then, check the name of your imported directory (most often "a" or "S00001") and give yourself write permission there: chmod -R +w S00001.

3) Start DicomBrowser and open all the MR slices you just copied to the server (use Ctrl-A). Check which series correspond to which files by navigating the tree structure in the left pane -- there are usually 4 series, and the file names could be something like z123. The series we are interested in are most often called “Sag CUBE T1” and either “T2 FSE” or "Propeller T2", if both T1 and T2 have been acquired. You will need this information soon when running the FreeSurfer reconstruction pipeline.

4) Using DicomBrowser, pseudonymize all the data:

  • Load all imported data under the DICOM/S00001/ directory (use mouse or Ctrl-a to select all imported slices)
  • Select and double-click the field "patient name" to open the DICOM metadata viewer
  • Replace the fields “patient name”, “patient ID”, “birthday” and other possible identifiers with the subject’s pseudonym code and other non-identifiable data
  • Click “save” and be sure to select “overwrite existing files”

5) To start FreeSurfer processing, make sure that you have correct $SUBJECTS_DIR exported. A directory for storing the MRIs and FreeSurfer results is usually /projects/<project_name>/mri/.

Please refer to “Preparing head models from MRI” below for further help.

6) After FreeSurfer processing is completed, remove the temporary MRI data directory created in step 1 with rm -rf ~/tmp/S00001 -- be very careful here to not delete anything by accident!

Preparing head models from MRI

1. Using FreeSurfer to prepare MNE head models

These instructions walk you through creating anatomically valid models to be used in MEG/EEG source estimation.

Tested with versions FS 7.1, MNE 2.7

What do you need?

  • access to CIBR server
  • MEG data in CIBR server
  • MRI data in CIBR server — see separate instructions for importing and pseudonymizing

Setting up

The software is pre-installed and the commands are readily linked. Some users experience problems because of the tcsh shell; in this case, changing to bash with the command bash is a good idea. All subsequent commands will be in bash format.

Before starting, we need to set some environmental variables, which in this case basically control data paths. The first is the directory for FreeSurfer processing results, $SUBJECTS_DIR, which defaults to the shared directory /opt/freesurfer7/7.1.1-1/subjects. You should change this to point to the MRI directory under your project. The subject ID currently in process can be set in a similar way:

export SUBJECTS_DIR='/projects/xbrain/mri'

export SUBJECT='case_0001'

In this example, our project would be called xbrain and the subject code is case_0001. From now on, $SUBJECTS_DIR will refer to the project’s subjects directory, and $SUBJECT will refer to the subject being processed. Thereafter, you need to locate your MRI slice files, one for T1 and one for T2.

Running and checking FreeSurfer reconstructions

The FreeSurfer recon-all pipeline can be run either as one big run or in three smaller parts. Here, we run everything at once. For more information on the stages, see the FreeSurfer Wiki at:

The recon-all pipeline performs skull-stripping, segmentation, Talairach transformation and spherical morphing, among others. The command to run this pipeline for a new subject is:

recon-all -s $SUBJECT -i <a_T1_slice> -T2 <a_T2_slice> -T2pial -wsatlas -all

Replace the arguments <a_T1_slice> and <a_T2_slice> by paths to DICOM slice files that you have imported to the server and identified with DicomBrowser. If you don’t have T2-data, omit the T2-related arguments. 

Please note that running the whole process takes around 10-14 hours per subject.
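Because each run is long, several subjects are often queued in one go. A sketch of such a batch, with hypothetical subject codes and slice paths (the recon-all commands are only echoed here, not executed):

```shell
# Hypothetical subject codes and slice paths; commands are only echoed here.
export SUBJECTS_DIR=/projects/xbrain/mri
for s in case_0001 case_0002; do
    t1="$SUBJECTS_DIR/$s/dicom/t1_slice.dcm"   # one identified T1 slice
    t2="$SUBJECTS_DIR/$s/dicom/t2_slice.dcm"   # one identified T2 slice
    echo recon-all -s "$s" -i "$t1" -T2 "$t2" -T2pial -wsatlas -all
done
```

Replace the echoed lines with real recon-all calls once the slice paths have been identified with DicomBrowser.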

If FreeSurfer finishes without errors, continue to checking the results. If there are errors, take action according to the software’s suggestions. There are several results to be inspected. In particular, check the skull stripping — the result should show only the brain, without remnants of skull, skin, dura etc. left. You can check the skull-strip result by commanding

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz $SUBJECTS_DIR/$SUBJECT/mri/brainmask.mgz:colormap=heat:opacity=0.5

If some skull/dura is left, the -gcut option often works reasonably well. It might also help to try different values of -wsthresh. Check the recon-all pipeline link above for instructions on manual editing, if needed.

recon-all -skullstrip -clean-bm -gcut -subjid $SUBJECT

Check how the result looks:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/brainmask.mgz $SUBJECTS_DIR/$SUBJECT/mri/brainmask.gcuts.mgz:colormap=heat:opacity=0.3 &

and re-run the relevant parts of the pipeline:

recon-all -autorecon-pial -subjid $SUBJECT

Then, you can also check that the white matter and pial surfaces are correct:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz -f $SUBJECTS_DIR/$SUBJECT/surf/lh.pial $SUBJECTS_DIR/$SUBJECT/surf/rh.pial $SUBJECTS_DIR/$SUBJECT/surf/lh.white:edgecolor=red $SUBJECTS_DIR/$SUBJECT/surf/rh.white:edgecolor=red

Set up MRIs and source space

The original MRI slices and sets will be needed in later stages of the analysis, so let’s import them. Now that we have set up the environmental variables, it is enough to command

mne_setup_mri

and the image stacks $SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag and brain-neuromag will appear. The brain-neuromag stack will include the skull-stripped slices. You can add the T2 stack by adding --mri T2 to the command.

Next, we can proceed to setting up the source space, which will define the possible locations for the estimated neuronal currents in subject’s brain (see MNE manual Ch. 3.5):

mne_setup_source_space --ico 4

The switch --ico 4 can be replaced by some other options to produce different source spaces — check the MNE manual Ch. 3.5. for this. The option --ico 4 will produce a source space with 5124 potential source dipole locations, each with 3 possible orientations.

The file $SUBJECTS_DIR/$SUBJECT/$SUBJECT-ico-4-src.fif will now contain the Elekta FIFF-format description of the source space. Check these results by loading an MRI stack to the MRIlab software and overlaying the source space points. The points corresponding to source space <name>-src.fif can be found in $SUBJECTS_DIR/$SUBJECT/bem/<name>-lh.pnt and -rh.pnt. You can import these points to MRIlab via the File menu -> Import points.

Building the forward BEM model

The forward model is a mathematical-physical description of the extra-cranial electromagnetic field that would be generated by given neuronal currents. Understandably, an accurate forward model is needed before trying to estimate the inverse, i.e. which sources generated a measured field. The first thing we need is a conductor model, which describes the anatomical regions and their electric conductivities in the head. In the general case, one could employ a simple homogeneously conductive sphere as the head model, but as we’ve got this far with MRIs already, we will use that anatomical information to establish a more realistic model of the individual subject’s head.

mne_watershed_bem --atlas

will produce descriptions of the inner and outer skull surfaces and skin. If this is not accurate enough, you can adjust the watershed algorithm pre-flooding parameter from 15 to, e.g., 25 using the original mri_watershed function and the switch -h:

mri_watershed -h 25 -atlas -useSRAS -surf $SUBJECTS_DIR/$SUBJECT/bem/watershed/$SUBJECT $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz $SUBJECTS_DIR/$SUBJECT/bem/watershed/ws

You can check the resulting boundaries with Freeview, for example:

freeview -v $SUBJECTS_DIR/$SUBJECT/mri/T1.mgz -f $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_inner_skull_surface:edgecolor=red

It is known that identifying the boundary surfaces of conductivity changes is enough to construct a boundary element model (BEM), which can be used in magnetic field computations (forward modeling). An additional step here is to link these surfaces to the directory bem to be used by the software:

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_inner_skull_surface $SUBJECTS_DIR/$SUBJECT/bem/

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_outer_skull_surface $SUBJECTS_DIR/$SUBJECT/bem/

ln -s $SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_outer_skin_surface $SUBJECTS_DIR/$SUBJECT/bem/
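The three links above can also be made in one loop. A sketch that assumes $SUBJECTS_DIR and $SUBJECT are exported as earlier in these instructions (the fallback values below are only for illustration):

```shell
# Fallback example values; in real use $SUBJECTS_DIR and $SUBJECT are
# already exported as earlier in these instructions.
SUBJECTS_DIR=${SUBJECTS_DIR:-/tmp/mri_example}
SUBJECT=${SUBJECT:-case_0001}
mkdir -p "$SUBJECTS_DIR/$SUBJECT/bem/watershed"
# Link each watershed surface into the bem directory
for surf in inner_skull outer_skull outer_skin; do
    ln -sf "$SUBJECTS_DIR/$SUBJECT/bem/watershed/${SUBJECT}_${surf}_surface" \
           "$SUBJECTS_DIR/$SUBJECT/bem/"
done
```

Note the ${SUBJECT}_ braces: without them the shell would look for a variable called SUBJECT_inner_skull_surface.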

Optionally, these surfaces can be estimated with another software called SimNIBS. See the instructions below for this.

As can be seen, we are producing a 3-layer BEM model. Now that we have the source locations and conductivity regions all set, we are ready to start working with the forward model. There is a convenience script for setting up the forward model in MNE software:

mne_setup_forward_model --surf --ico 4

Adding the switch --homog produces a homogeneous 1-layer model, which is enough for many MEG applications. If you have EEG data or otherwise prefer it, creating a 3-layer model with the above command is just as easy; only the computations will take a bit longer. The script produces the files *-bem.fif and *-bem-sol.fif, which contain the BEM geometry model and the computational solution. In addition, surface points are saved in MRIlab- and FreeSurfer-compatible formats. It’s a good idea to check that the surfaces are in the right places by overlaying them, e.g., on the T1-neuromag images in MRIlab. You can also ask the software to --check or even --checkmore the results. For further details, see Ch. 3.7 of the MNE manual.

We usually don’t need to care about bad channels at this point, as we are using MaxFilter to identify, mark and fix those. If this has not been done properly, give a list of bad channels with the utility mne_mark_bad_channels.

As another alternative to mne_watershed_bem you can also use the Seglab software by Elekta Neuromag — as some other steps also differ, see the MNE manual appendix A.2.

2. Using the SimNIBS software

Magnetic resonance images are needed to construct accurate individual head models for source modelling in MEG. Instructions for importing MRI data provided by Synlab, pseudonymizing them and preparing MNE-compatible head models using the FreeSurfer pipeline can be found above.

SimNIBS is a toolkit originally made for simulating the electromagnetic fields induced by brain stimulation. It should be able to provide better anatomical surface reconstructions than the watershed algorithm shipped with FreeSurfer. Moreover, SimNIBS is actively developed. To get started, you need the MRI data imported and prepared with recon-all. In the simplest use case, the next step is running the commands in the example SimNIBS script at:


Processing all steps in the example script will take a couple of hours. The script produces surface files (*.surf) for the inner and outer skull and the skin surface in the subfolder m2m_<subject>/fsmesh. You will use these surfaces in the FreeSurfer-based pipeline for preparing the BEM head models, instead of the surfaces produced by the “watershed” algorithm. Note that you still need the FreeSurfer results, for example in setting up the source space for MNE.

Copy the surface mesh files to the bem subfolder in the subject’s FreeSurfer directory or make symbolic links there, which can be done with something like:

ln -s /path/to/fsmesh/ /path/to/subject/bem/

The validity of the produced surfaces can be checked using Freeview; for example, in the folder /projects/training/MRI/:

freeview -v simnibs_test/cibr_test_adult1_T1fs_conform.nii.gz -f cibr_test_adult1/bem/ cibr_test_adult1/bem/

will overlay the inner and outer skull surfaces with the T1-weighted MR images, where you can visually confirm that the surfaces have been extracted correctly. You can then proceed with the FreeSurfer instructions above, preparing the forward model using the command mne_setup_forward_model --homog --surf.


Preparing MNE inverse operators

MEG-MRI co-registration

So far, this work has been independent of the MEG data. Thus, it should suffice to perform these steps only once per subject. From now on, the data from the MEG measurement will be used to include information about measurement sensors and head position.

The first and possibly the most important thing to do is manually co-registering the MEG and MRI data in the head coordinate frame. This will inform the forward and inverse solutions about the head location, which is very important for the computations and the validity of the results — any inaccuracy here will be propagated forward and probably amplified in data processing. This is also the step in the pipeline for which you acquired the extra digitisation points during subject preparation and documented the digitised anatomical landmarks. At this point it is also important that the channel info is correct, in particular that the EEG channels, and only they, are named as such. Note: See MNE manual 7.16.1 if you want to utilise a high-resolution head surface tessellation in co-registration.

There are good instructions for this step in the MNE manual, which are replicated below — other possibilities to establish the MRI-to-head transform include using “mne coreg” from MNE-Python or the Elekta MRIlab software. This part needs to be done for each session, i.e. whenever the fiducial digitisation has changed or there are new MRIs (or new MRI preprocessing).


12.11.1 Initial alignment 

Follow these steps to make an initial approximation for the coordinate alignment.

In the command terminal, go to the subject’s data directory — make sure you have exported the correct $SUBJECTS_DIR and $SUBJECT environmental variables

Launch mne_analyze

Select File/Load digitizer data... and load the digitizer data from the MEG data file

Load an inflated surface from File/Load surface... 

Bring up the viewer window from View/Show viewer...

Click Options... in the viewer window. Make the following selections: 

  1. Switch left and right cortical surface display off.
  2. Make the scalp transparent.
  3. Switch Digitizer data off.

Bring up the coordinate alignment window from Adjust/Coordinate alignment...

Click on the RAP (Right Auricular Point) button. It turns red, indicating that you should select the point from the viewer window. Click at the approximate location of this point in the viewer. The button jumps up, turns to normal color, and the MRI coordinates of the point appear in the text fields next to the button.

Proceed similarly for the other two landmark points: Nasion and LAP

Press Align using fiducials. Notice that the coordinate transformation changes from a unit transformation (no rotation, no origin translation) to one determined by the identified landmark locations. The rotation matrix (upper left 3 x 3 part of the transformation) should have positive values close to one on the diagonal. The x and y values of the translation should be small and the z value should be negative, around -50 mm.

Make the Digitizer data visible from the options of the viewer window. After a while, the digitizer points will be shown. The color and size of the circles indicate whether and how much the point is inside (blue) or outside (red) of the scalp. The HPI coils are shown in green and the landmarks in light blue or light red color.

12.11.2 Refining the coordinate transformation 

Before proceeding to the refinement procedure, it is useful to remove outlier digitizer points. To discard an outlier point: 

  1. Click on Omit in the Adjust coordinate alignment window.
  2. Enter 10 for the distance of the points to be discarded.
  3. Click done. Points further than 10 mm disappear. 

The coordinate transformation can be adjusted manually with the arrow buttons in the middle of Adjust coordinate alignment dialog.

An automatic iterative procedure, Iterative Closest Point (ICP) matching is also provided. At each iteration step: 

  1. For each digitizer point, the closest point on the triangulated scalp surface is determined. 
  2. The best coordinate transformation aligning the digitizer points with the closest points on the head surface is computed. 

In step 2 of the iteration, the nasion is assigned five times the weight of the other points, since it can be assumed that the nasion is the easiest point to identify reliably from the surface image. 

The ICP alignment can be invoked by entering the desired number of iterations next to the ICP align button followed by return, or simply pressing ICP align. The iteration will converge in 10 to 20 steps. 

Warning: Use the ICP alignment option in mne_analyze with caution. The iteration will not converge to a reasonable solution unless an initial alignment is performed first according to Section 12.11.1. Outlier points should be excluded as described above. No attempt is made to compensate for the possible distance of the digitized EEG electrode locations from the scalp.

You can also rotate the transformation using the arrow keys. At the end of the co-registration process, you should see roughly the same number of small red and blue dots on the scalp, and the landmarks and coils in the correct positions. Remember that we are aiming at millimeter-scale accuracy!

12.11.3 Saving the transformation 

To create an MRI FIFF description file, which incorporates the coordinate transformation, click Save MRI set in the Adjust coordinate alignment dialog. This will create the MRI set file in the directory $SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag/sets. You can also choose to save the transformation matrix alone, e.g. to the file ./meg-mri-trans.fif, to facilitate scripting.


The forward solution

Finally, the source model, the BEM model, the MEG measurement info and MEG/MRI transformation are brought together to compute the forward solution. The BEM model is read from the sub-directory bem, the MEG information is collected from the FIFF data file, and the MEG/MRI transformation from the file “$SUBJECTS_DIR/$SUBJECT/mri/T1-neuromag/sets/COR-<user>-<date>-<time>.fif”, defined above. The forward solution for this subject and measurement is then obtained by the following command:

mne_do_forward_solution --meas <data_file_name>

The output is the forward solution file in the measurement directory, automatically named according to the MEG data and source space, maybe something like 003_block1d_raw-003-ico-4-src-fwd.fif — you can also specify the output file with the switch --fwd. You can specify other inputs, like the source space, BEM and transformation, with the options --src, --bem and --trans or --mri, respectively. You can also add --megonly to use a 1-layer model.
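For scripting, the relevant options can be spelled out explicitly. A sketch with hypothetical file names (the command is only echoed here, not run):

```shell
# Hypothetical file names; the command is only echoed, not run.
subj=case_0001
meas=003_block1d_raw.fif
echo mne_do_forward_solution --meas "$meas" \
     --src "${subj}-ico-4-src.fif" \
     --bem "${subj}-bem-sol.fif" \
     --trans meg-mri-trans.fif \
     --megonly
```

Spelling the inputs out like this makes it explicit which source space, BEM and transformation each forward solution was computed with.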

In MNE-Python, the forward solution is built with:

fwd = mne.make_forward_solution(raw.info, trans='trans.fif', src=src, bem=bem)

A separate forward solution is computed for each measurement file, because head position varies between measurements.

Computing the noise covariance matrix

The role of the noise covariance matrix is to describe the noise components that are expected in the recorded channel data. There is a small discussion in the MNE manual Ch. 3.12 on how this matrix should be computed. In a nutshell, use data that is as similar to your data of interest as possible, but without the neuronal component of interest. Most commonly, for evoked studies we use the pre-stimulus baseline of the epochs, and for studying ongoing activity we use empty-room recordings (or sometimes rest periods). If you have applied some pre-processing to your data, you need to apply the same pre-processing steps to the data used to compute the noise covariance.

MNE-Python offers nicely developed tools for covariance estimation and inverse operator preparation, as well as for many subsequent processing steps. Estimating noise covariance is implemented in the functions mne.compute_covariance() and mne.compute_raw_covariance().

You can also use e.g. mne_browse_raw to compute noise covariance matrices (see the MNE manual to learn the details).

Calculating and applying the inverse operator

On the command line, the inverse operator is generated with the following command (options can vary):

mne_do_inverse_operator --fwd /projects/brainx/c007/c007-fwd.fif --loose 0.2 --depth --senscov /projects/brainx/c007/ER-cov.fif

The forward and noise covariance data are loaded from the previously saved files. You can then load the MEG data and start playing with source models!

To generate an inverse operator in MNE-Python, say for example:

inv = mne.minimum_norm.make_inverse_operator(raw.info, fwd, ncov, loose=0.2, depth=0.8)

Here, fwd and ncov are instances of the forward solution and the noise covariance matrix. Check the possible options for different kinds of inverse models in the manual or on-line documentation. Note that you can create beamformers in MNE-Python in a very similar fashion!

Available analysis software

The most common software needed for MEG analysis is available on the CIBR servers. Just type the software name in the terminal to start using it. If you need help or have additional software needs, request these from cibr-support ät


The Python language can be used for almost any purpose, but the installations on the server are mainly meant to support MNE-Python. We manage versions of MNE-Python with virtual Conda environments, so that projects can be completed using a single software version and code doesn’t need to be modified whenever there are updates to MNE. Environments for major releases are called mne_<version> and the development version is in mne-dev. Virtual environments can be listed with “conda env list” and started by commanding “source activate <environment>”. If there is need for other Python packages, please ask.

There are many different ways to work with Python, depending on personal preferences. You can either write your scripts and run them directly from the terminal environment by commanding “python <scriptname>”, use interactive ipython, or even an IDE, like Spyder.


Matlab can be started by commanding “matlab” on the server. If you do not want the graphical user interface, or even the Java virtual machine and graphics at all because they slow down the work, command “matlab -nodesktop” or “matlab -nojvm” instead.

We have many toolboxes available in Matlab, but to avoid confusion between toolboxes you need to call them up first. This is done by saying “use_<toolbox>”, where <toolbox> is replaced by “spm”, “brainstorm”, “eeglab”, “fieldtrip”, “mne” or “fastica”.

Analysis training

CIBR organizes MEG data analysis training as Methods Clinics, and other more specific training sessions every now and then. Everyone is free to suggest topics! The current schedule is available on the CIBR web pages, and information is sent to the meg-users mailing list.


The CIBR servers have a directory /projects/training/, under which you can find several topics for training different analysis approaches yourself. In these directories, you should find a file “instructions.pdf” to get started, and exemplary data or code as needed. This material is still work in progress, so contact the CIBR staff if something is missing or unclear. Go and check!

Scripts and stuff

Different scripts meant for everyone's enjoyment are located on the CIBR servers, in the repository /opt/tools/cibr-meg/. If you find some of them useful, you should first copy the scripts to your own project directory and then edit them as needed. There are subdirectories for MaxFilter scripts and for MRI-related stuff, while other scripts are useful for pre-processing and analysing MEG data. Both Python and Shell scripts are included. A more detailed listing and description of the scripts is coming up here, but they are mostly documented in the scripts themselves.