Scott-huberty's Blog

pyTracking: Analyzing eye-tracking data in Python

Scott-huberty
Published: 07/10/2023

I am a PhD candidate in Neuroscience at McGill University with interests in computational neuroscience. My research focuses on EEG, source reconstruction, and eye-tracking. For GSoC 2023, I am working to add support for eye-tracking to MNE-Python, the de facto standard Python package for analyzing neurophysiological data. This will make it easier for researchers to analyze eye-tracking data and gain insights into human cognition. You can learn more about the project and how you can contribute on MNE-Python's website.

 

Week 5

 

In neuroscience research, multi-modal data collection is becoming increasingly common. For example, research labs are recording EEG and fMRI simultaneously to combine the high temporal resolution of EEG with the spatial resolution of MRI.

Research labs are also increasingly combining EEG, MEG, or fMRI with eye-tracking because, for example, it allows one to analyze the brain response at the exact moment that the participant gazed at a stimulus of interest on the screen. Combining brain imaging with eye-tracking could be the key to understanding the relationship between neural mechanisms and behavior.

I spent week 5 of my project building a tutorial for combining simultaneously collected eye-tracking and EEG data and analyzing the signals together. We published the tutorial on MNE-Python's website; you can check it out here!

View Blog Post

pyTracking: Analyzing eye-tracking data in Python

Scott-huberty
Published: 07/03/2023

 

I am a PhD candidate in Neuroscience at McGill University with interests in computational neuroscience. My research focuses on EEG, source reconstruction, and eye-tracking. My PhD work has focused on integrating EEG, neuroimaging, and eye-tracking data acquisition. For GSoC 2023, I am working to add support for eye-tracking to MNE-Python, the de facto standard Python package for analyzing neurophysiological data. This will make it easier for researchers to analyze eye-tracking data and gain insights into human cognition. You can learn more about the project and how you can contribute on MNE-Python's website.

 

Weeks 3-4

By now, we are able to load eye-tracking data, visualize the signals, and check the eye-tracking calibration quality (see week 1's blog post for more on that).

In the last two weeks, I pivoted towards developing helper functions for pre-processing steps that researchers often need to carry out before the data can be analyzed.

For example, take blink interpolation, which is frequently applied in pupillometry research: eye-trackers typically record the x- and y-coordinates of the screen location that the eye is gazing at, as well as the pupil size. Naturally, participants will blink during any given eye-tracking experiment, resulting in data loss (see the figure below).

In pupillometry research in particular, it is common to interpolate the missing pupil size data during blinks. The basic idea is that blinks are very short in duration (< 300ms), while the pupil response (i.e., the rate at which pupil size changes) is fairly slow. Thus, it is generally acceptable to interpolate the missing pupil data.
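To make that concrete, here is a minimal sketch of the idea using plain NumPy. This is only an illustration of the concept, not the actual MNE-Python implementation, and the sampling rate, pupil trace, and blink window below are all made up:

import numpy as np

# Toy example: a 2-second pupil-size trace sampled at 1000 Hz (made-up values)
sfreq = 1000
times = np.arange(0, 2, 1 / sfreq)
pupil = 5 + 0.1 * np.sin(2 * np.pi * 0.5 * times)

# Pretend a blink caused data loss between 0.8 s and 1.05 s
blink_mask = (times >= 0.8) & (times <= 1.05)
pupil[blink_mask] = np.nan

# Linearly interpolate across the missing samples
valid = ~np.isnan(pupil)
pupil[~valid] = np.interp(times[~valid], times[valid], pupil[valid])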

 

Let's plot the same data from above, after we interpolate missing data during the blink periods with our newly developed function:

 

import mne
from mne.datasets.eyelink import data_path
from mne.preprocessing.eyetracking import interpolate_blinks

# Load some data
fpath = data_path() / "sub-01_task-plr_eyetrack.asc"
raw = mne.io.read_raw_eyelink(fpath, create_annotations=["blinks"])
events = mne.find_events(raw, min_duration=0.01, shortest_event=1, uint_cast=True)
event_dict = {"Flash": 2}

# Interpolate missing data during blinks
interpolate_blinks(raw, buffer=(0.025, 0.2), interpolate_gaze=True)
raw.plot(events=events, event_id=event_dict, event_color="g")

 

 

Great! Next week, we'll synchronize this data with some simultaneously recorded EEG data. In my next blog post, I'll explain the experimental task in more detail, and we'll visualize the evoked responses to the experiment stimuli.

View Blog Post

(py)Eye-Tracking: Week 1 of my GSoC project

Scott-huberty
Published: 06/04/2023

Hello everyone - My name is Scott. I am a PhD Candidate in Neuroscience. My GSoC project is focused on building tools for reading, analyzing, and visualizing eye-tracking data in Python. 

Eye-trackers, as you may have guessed, allow us to measure where a person is looking, and are used to study a variety of things, such as visual attention, cognitive load, and emotion.

For this project I am working with MNE-Python, the de facto package for analyzing (neuro)physiological data in Python. I've been using MNE for a couple of years now and have contributed a few minor PRs, so I am happy to get more involved via this project.

MNE is a perfect home for this work, because not only is it a mature and well-maintained package, but it already contains a number of functions that can be directly applied to eye-tracking data. Further, many researchers collect M/EEG or fNIRS data simultaneously with eye-tracking data, and being able to process and analyze these data in a single ecosystem will be a big benefit.

What did I do this week?

Fortunately, prior to starting my GSoC project, I had already completed an initial PR for reading eye-tracking data from a widely used system, SR Research EyeLink, so I already had some groundwork to build on.

This week, I focused on extending this reader to extract some important meta-information from eye-tracking data that will be needed for functions to be developed later in this project.

Namely, I wanted to extract and neatly store information about the eye-tracking calibration. Basically, a calibration is a common routine done at the start of an eye-tracking recording, where the participant is asked to look at points at different locations on the screen, so that the eye-tracker can calibrate itself to that specific participant, leading to more accurate data collection. Reviewing the quality of a calibration is a common and important first step in almost any eye-tracking processing pipeline.

Without going into the nitty gritty details, the code we developed will extract the information on any calibrations recorded in the data file and store it in a new `Calibrations` class. For example, information on the estimated error in the calibration, the eye that was used for the calibration, etc., is all stored in this `dict`-like class. We also developed a `plot` method for the calibration class, so the user can view the participant's gaze to each calibration point.
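Here's a rough sketch of what reviewing a calibration could look like with this new functionality. Note that the function name `read_eyelink_calibration`, the dict keys, and the exact signature shown here are illustrative placeholders, since the API was still being finalized; check the MNE-Python documentation for the final interface:

from mne.datasets.eyelink import data_path
from mne.preprocessing.eyetracking import read_eyelink_calibration

# Parse any calibrations stored in an EyeLink ASCII file
# (file path, function name, and keys are illustrative)
fpath = data_path() / "sub-01_task-plr_eyetrack.asc"
calibrations = read_eyelink_calibration(fpath)  # one entry per calibration in the file

first_cal = calibrations[0]
print(first_cal["eye"], first_cal["avg_error"], first_cal["max_error"])  # dict-like access
first_cal.plot()  # participant's gaze overlaid on the displayed calibration points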

Here's a quick preview. Imagine the plot below is the screen that is used for eye-tracking. At the start of the session, the participant is presented with five points on the screen, one at a time, and is asked to look at them. The grey dots on the plot below are the coordinates of the points that the eye-tracker displayed to the participant. The red dots are the actual gaze positions when the participant looked at the calibration points. The number next to each red dot represents the offset (in visual degrees) between the grey dot (calibration point) and the red dot (actual gaze position). This information gives you a rough idea of the quality of the eye-tracking calibration for the participant. In short, the higher the offset, the worse the calibration. Finally, the text in the top left corner displays the average offset across all the calibration points, and the highest offset out of all the calibration points (in this case it's 0.43, which is the offset for the top calibration point).
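As an aside, the offset itself is just the distance between a grey dot and its red dot, converted from screen pixels to visual degrees. Here's a toy calculation of that conversion; the monitor width, resolution, and viewing distance below are made-up numbers, not values from the actual recording:

import numpy as np

# A calibration point and the participant's gaze to it, in screen pixels (toy values)
point = np.array([960.0, 92.0])
gaze = np.array([983.7, 93.3])

# Made-up screen geometry: 1920 px across a 53 cm wide monitor, viewed from 70 cm
px_per_cm = 1920 / 53
distance_cm = 70

# Pixel error -> centimeters on the screen -> visual degrees
error_px = np.linalg.norm(point - gaze)
error_cm = error_px / px_per_cm
offset_deg = np.degrees(np.arctan(error_cm / distance_cm))
print(round(offset_deg, 2))  # about 0.54 degrees for these toy numbers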

 

What is coming up next?

Next week I will shift gears a bit and turn towards developing a pre-processing function that is pretty fundamental to eye-tracking research: interpolating blinks. Basically, you can think of this process as removing blink artifacts. More on that next week!

Did I get stuck anywhere?

Not necessarily on anything specific, but my first week was a good reminder that when programming, there are always hiccups in what you would otherwise think should be a simple and quick task : )

But here's something new I learned in Python this week...

This week I ran into the concept of structured arrays in NumPy. Structured arrays are essentially regular NumPy arrays, but they were designed to mimic "structs" in the C language, in that they allow you to store data of different types in the same array, each with a "field" name. This "field" name is the part I found interesting, because it allows you to (somewhat) use name-based indexing with NumPy arrays, similar to the way you would with Pandas or Xarray. Take the following code as an example:

import numpy as np

# Make a toy NumPy array and some field names.
calibration_labels = ['point_x', 'point_y', 'offset', 'diff_x', 'diff_y']
calibration_vals = np.array([960., 92., 0.5, 23.7, 1.3])
dtype = [(field, 'f8') for field in calibration_labels]

this_calibration = np.empty(1, dtype=dtype)

# Assign a field name for each calibration value
for field, val in zip(calibration_labels, calibration_vals):
    this_calibration[0][field] = val

print(this_calibration)
# [(960., 92., 0.5, 23.7, 1.3)]

print(this_calibration["point_x"])
# [960.]

Neat!

 

View Blog Post