MEG data analysis

Analysis training

CIBR organizes MEG data analysis training as Methods Clinics, along with more specific training sessions every now and then. Everyone is free to suggest topics! Information about upcoming training events is sent at least to the meg-users mailing list.

The actual methods training is currently being moved to a self-study course in Moodle. 


The CIBR servers have a directory /projects/training/, under which you can find several topics for training yourself in different analysis approaches. In (some of) these directories, you should find a file “instructions.pdf” to get you started, along with example data or code as needed. This material is still a work in progress, so contact CIBR staff if something is missing or unclear. Go and check!


Note that for pre-processing of MEG data, we have a recommended shared Python script. You can find it on the server at /opt/tools/cibr-meg/ - for more information, contact the CIBR support persons!

Using MaxFilter to clean and correct MEG data

Running MaxFilter is compulsory if your data have been recorded with IAS (Internal Active Shielding) on, and recommended for data where IAS is off (IAS is off by default in the CIBR MEG lab).
The MaxFilter manual is available among the other MEG manuals in the MEG lab and the 2nd floor analysis room, as well as in PDF format on the CIBR server at /neuro/manuals/.

Note: It is important to carefully inspect the raw data before and after MaxFiltering, regardless of the exact MaxFilter procedure you are using. It is often helpful to compare the processed data to the original data, as well as to your notes about the measurement session.


MaxFilter version 2.2 is the officially released and clinically accepted version. The newer MaxFilter 3.0 has many advantages and new features, but is not yet official. For the user, the main differences in 3.0 are improved identification of bad channels and improved SSS origin fitting. However, similar functionality can be obtained with MF 2.2 using the utility programs xscan for bad channel detection and fit_sphere_to_isotrak for origin fitting. In addition, MF 3.0 has improved noise rejection capabilities based on channel cross-validation, which can help in particular in increasing SNR at higher frequencies (gamma band).

The MaxFilter 2.2 GUI can be opened with the command maxfilter_gui, whereas for running MF 2.2 in the terminal the command is maxfilter-2.2. MF 3.0 can currently be used only on the command line, as maxfilter-3.0. Command-line options can be listed with the switch -help. It is a good idea to explicitly specify the intended software version and parameters.

We recommend MaxFiltering your project data using shell scripts. This way it is easy to re-run everything if needed, and you will always know which settings were used. There are some example scripts on the CIBR servers under /opt/tools/cibr-meg/maxfilter/ - copy one to your project folder and edit it for your needs.
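The same idea can also be sketched in Python, if you prefer scripting your batch runs that way. The following is only an illustration: the subject IDs, directory layout, and parameter values are hypothetical, and the actual MaxFilter invocation is left disabled so that only the command lines are printed for review.

```python
# Sketch of a batch MaxFilter run (hypothetical subjects, paths and options).
# The maxfilter-2.2 call is only constructed here; uncomment subprocess.run
# to actually execute it on the CIBR servers.
import shlex
import subprocess  # used only when the run is enabled below

subjects = ["sub01", "sub02"]          # hypothetical subject IDs
raw_dir = "/projects/myproject/raw"    # hypothetical project layout
out_dir = "/projects/myproject/maxf"

commands = []
for sub in subjects:
    cmd = [
        "maxfilter-2.2",
        "-f", f"{raw_dir}/{sub}_task_raw.fif",   # input file
        "-o", f"{out_dir}/{sub}_task_tsss.fif",  # output file
        "-st", "10",                             # tSSS with a 10 s window
        "-v",                                    # verbose output
    ]
    commands.append(cmd)
    print(shlex.join(cmd))                       # log the exact command used
    # subprocess.run(cmd, check=True)  # enable to actually run MaxFilter
```

Printing (or logging) the exact command for every file serves the same purpose as the recommended shell scripts: you always know afterwards which settings were used.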


Due to the large number of options, the optimal MaxFilter processing for each data set or project should be worked out by the researchers themselves. This requires understanding of the underlying principles; please study the manual and the original SSS papers by Taulu et al.

The recommended MaxFilter workflow includes the temporal extension, tSSS, which is applied with the option -st. If there are no temporal projections to reject, tSSS reduces to basic SSS, although running it is considerably slower. You can give the length of the inspected window in seconds after -st. It is good to know that, as a side effect, tSSS high-pass filters the data, with an edge frequency that depends on the selected window length.
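As a rough rule of thumb (an assumption here; check the manual for details), a tSSS window of length t seconds cannot represent oscillations slower than about one cycle per window, so the effective high-pass edge is on the order of 1/t Hz:

```python
# Rough high-pass edge implied by the tSSS window length (rule of thumb ~1/t Hz).
def tsss_highpass_edge(window_s: float) -> float:
    """Approximate frequency (Hz) below which tSSS attenuates the data."""
    return 1.0 / window_s

for t in (4, 10, 16):  # example -st window lengths in seconds
    print(f"-st {t}  ->  edge ~ {tsss_highpass_edge(t):.3f} Hz")
```

So if you care about very slow activity, a longer -st window is safer; for example, -st 10 implies attenuation roughly below 0.1 Hz under this approximation.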

If there is a need to compare channel-level data between sessions or subjects, the option -trans <filename.fif> must be used to virtually align the head positions between the recordings. The distance between the two head positions (reported by MaxFilter) must not exceed 25 mm, and even then the resulting signals require inspection. Problems in the head position transform are most often visible in the vertex channels, and they are typically due to correcting an originally low head position. A good way to minimize transformation distances is to use the middle head position (among sessions or subjects) as the transform target (see the cibr-meg script collection below).
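The "middle head position" idea can be sketched as a small geometric exercise: among the candidate recordings, pick as the -trans target the one whose worst-case distance to all the others is smallest. The head origins below are made-up illustrative values (in mm), standing in for the positions MaxFilter reports; the real script collection on the server handles this for you.

```python
# Sketch: choose the "middle" head position among sessions as the -trans
# target, minimizing the largest resulting transform distance.
# Positions are hypothetical 3-D head origins in mm.
import math

positions = {
    "sess1.fif": (0.0, 0.0, 45.0),
    "sess2.fif": (2.0, -3.0, 52.0),
    "sess3.fif": (-1.0, 1.0, 48.0),
}

# For each candidate target, the worst-case distance to the other sessions:
worst = {
    name: max(math.dist(p, q) for q in positions.values())
    for name, p in positions.items()
}
target = min(worst, key=worst.get)
print(f"use -trans {target} (worst-case distance {worst[target]:.1f} mm)")
```

With these example values the middle session wins with a worst-case distance of about 6.4 mm, comfortably under the 25 mm limit; a session recorded with a very different head position would show up here as a large worst-case distance.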

If continuous HPI (cHPI) has been on during the recording to monitor the subject’s head movement, you need to apply movement correction as well. Intermittent movement correction, -movecomp inter, is the required command-line option.
Other useful options:

  • -ds for downsampling - saves disk, time and memory in later processing
  • -v for verbose - gives a lot more information about processing
  • -hpicons - adjusts initial consistency of isotrak and hpifit
  • -waves <filename> - saves tSSS-rejected waveforms for later inspection
  • -corr for tSSS correlation strength

For problem cases:

  • Use option -force to bypass warnings (use with consideration!)
  • Try a lower correlation limit for tSSS, down to roughly 0.9, to further reduce artifacts
  • Skip problematic segments of data (option -skip t1 t2)
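Putting the options above together, a complete command line might look like the following sketch. All file names and parameter values here are examples for illustration only, not recommendations for your data.

```python
# Illustrative full MaxFilter command combining the options discussed above.
# File names and parameter values are hypothetical examples.
import shlex

cmd = [
    "maxfilter-2.2",
    "-f", "sub01_task_raw.fif",      # input file (hypothetical)
    "-o", "sub01_task_tsss.fif",     # output file
    "-st", "10",                     # tSSS with a 10 s window
    "-corr", "0.98",                 # tSSS correlation limit
    "-movecomp", "inter",            # intermittent movement correction (cHPI on)
    "-trans", "target_head.fif",     # transform to a common head position
    "-ds", "4",                      # downsample by a factor of 4
    "-hpicons",                      # adjust isotrak/hpifit consistency
    "-v",                            # verbose output
]
print(shlex.join(cmd))
```

Keeping the options in a script like this (or in the shell scripts under /opt/tools/cibr-meg/maxfilter/) documents exactly which settings were applied to each file.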

MaxFiltering scripts

You can find some example scripts on CIBR server in /opt/tools/cibr-meg/.
These are described in more detail in the section Scripts and stuff below.

Available analysis software

The most common software packages needed for MEG analysis are available on the CIBR servers. Just type the software name in the terminal to start using it. If you need help or have additional software needs, request these from cibr-support ät


The Neuromag / Elekta / MEGIN data analysis suite (DANA) is available on the CIBR server. On the virtual desktop, you can find these tools in the menu via the Neuromag icon. You can also start them from the command line if you know their true names, such as xfit, graph, xplotter and mrilab. Please check the software manuals for more info. You can find the white folders either in the MEG lab or the 2nd floor analysis class. They are also available as PDFs on the server at /neuro/manuals/.


The Python language can be used for almost any purpose, but the installations on the server are mainly meant to support MNE-Python and related software. We manage versions of MNE-Python with virtual Conda environments, so that projects can be completed using a single software version and code doesn’t need to be modified whenever MNE is updated. The virtual Conda environments for major releases of MNE-Python are called "mne_<version>". Environments can be listed with “conda env list” and activated with “conda activate <environment_name>”. If you need other Python packages, please ask.

There are many ways to work with Python, depending on personal preference. You can write your scripts and run them directly from the terminal with “python <scriptname>”, work interactively in ipython, use an IDE such as Spyder, or use Jupyter notebooks.


Matlab can be started by typing “matlab” on the server. If you do not want the graphical user interface, or even the Java virtual machine and graphics at all (they slow down the work), use “matlab -nodesktop” or “matlab -nojvm” instead.

We have many toolboxes available in Matlab, but to avoid conflicts between toolboxes you need to load them first. This is done by running “use_<toolbox>”, where <toolbox> is replaced by “spm”, “brainstorm”, “eeglab”, “fieldtrip”, “mne” or “fastica”.

Scripts and stuff

Various scripts meant for everyone's enjoyment are located on the CIBR servers, in the repository /opt/tools/cibr-meg/. If you find a useful one, first copy the script to your own project directory and then edit it as needed. There are subdirectories for MaxFilter scripts and for MRI-related material, while the other scripts are useful for pre-processing and analysing MEG data. Both Python and shell scripts are included. More detailed documentation of the scripts is coming up, but they are also self-documenting to an extent.