<h2><a href="https://blogs.python-gsoc.org/en/scott-hubertys-blog/pytracking-analyzing-eye-tracking-data-in-python-1/">pyTracking: Analyzing eye-tracking data in Python</a></h2> <p><em>I am a PhD candidate in Neuroscience at McGill University with interests in computational neuroscience. My research focuses on EEG, source reconstruction, and eye-tracking. For GSoC 2023, I am working to add support for eye-tracking to MNE-Python, the de facto Python package for analyzing neurophysiological data. This will make it easier for researchers to analyze eye-tracking data and gain insights into human cognition. You can learn more about the project and how you can contribute on MNE-Python's <a href="https://mne.tools/stable/index.html">website</a>.</em></p> <h2 style="text-align: center;">Week 5</h2> <p>In neuroscience research, multi-modal data collection is becoming increasingly common. For example, research labs are conducting simultaneous EEG and fMRI recordings to merge the powerful temporal resolution of EEG with the spatial resolution of MRI.</p> <p>Research labs are also increasingly combining EEG, MEG, or fMRI with eye-tracking, because it allows one, for example, to analyze the brain response at the exact moment that the participant gazed at a stimulus of interest on the screen. Combining brain imaging with eye-tracking could be the key to understanding the relationship between neural mechanisms and behavior.</p> <p>I spent week 5 of my project building a tutorial for combining simultaneously collected eye-tracking and EEG data, and analyzing the signals together. We published the tutorial on MNE-Python's website; you can check it out <a href="https://mne.tools/dev/auto_tutorials/preprocessing/90_eyetracking_data.html">here</a>!</p>
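<p>To give a flavor of what the tutorial covers, here is a minimal sketch of the general approach, assuming both systems recorded a shared stimulus event that can be used to align their clocks. The file names and the EEG stim channel name below are hypothetical, and the sketch glosses over details (cropping, matching signal lengths, <code>first_samp</code> offsets) that the tutorial handles properly:</p> <pre><code class="language-python">import mne
from mne.preprocessing import realign_raw

# Hypothetical file names; substitute your own simultaneous recordings
raw_et = mne.io.read_raw_eyelink("sub-01_task-plr_eyetrack.asc", create_annotations=["blinks"])
raw_eeg = mne.io.read_raw_fif("sub-01_task-plr_eeg.fif", preload=True)

# Find the shared stimulus events (e.g., light flashes) in each recording
et_events = mne.find_events(raw_et, min_duration=0.01, shortest_event=1, uint_cast=True)
eeg_events = mne.find_events(raw_eeg, stim_channel="STI 014")  # channel name is an assumption

# Use the shared event times (in seconds) to align the eye-tracker's clock
# to the EEG clock (ignoring first_samp offsets, for simplicity)
realign_raw(
    raw_eeg,
    raw_et,
    t_raw=eeg_events[:, 0] / raw_eeg.info["sfreq"],
    t_other=et_events[:, 0] / raw_et.info["sfreq"],
)

# Once the recordings share a sampling rate and length, the eye-tracking
# channels can be appended to the EEG Raw object for joint analysis
raw_et.resample(raw_eeg.info["sfreq"])
raw_eeg.add_channels([raw_et], force_update_info=True)
</code></pre> <p>The published tutorial is the authoritative reference for this workflow; the sketch above just shows why a shared event is so useful: it turns synchronization into a simple clock-alignment problem.</p>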
<p><em>Posted by Scott Huberty on Mon, 10 Jul 2023</em></p> <h2><a href="https://blogs.python-gsoc.org/en/scott-hubertys-blog/pytracking-analyzing-eye-tracking-data-in-python/">pyTracking: Analyzing eye-tracking data in Python</a></h2> <p><em>I am a PhD candidate in Neuroscience at McGill University with interests in computational neuroscience. My research focuses on EEG, source reconstruction, and eye-tracking. My PhD work has focused on integrating EEG, neuroimaging, and eye-tracking data acquisition. For GSoC 2023, I am working to add support for eye-tracking to MNE-Python, the de facto Python package for analyzing neurophysiological data. This will make it easier for researchers to analyze eye-tracking data and gain insights into human cognition. You can learn more about the project and how you can contribute on MNE-Python's <a href="https://mne.tools/stable/index.html">website</a>.</em></p> <h2 style="text-align: center;">Weeks 3-4</h2> <p>By now, we are able to load eye-tracking data, visualize the signals, and check the eye-tracking calibration quality (<a href="https://blogs.python-gsoc.org/en/scott-hubertys-blog/">see week 1's blog post</a> for more on that).</p> <p>In the last two weeks, I pivoted towards developing helper functions for pre-processing steps that researchers often need to carry out before the data can be analyzed.</p> <p>For example, take <em>blink interpolation</em>, which is frequently applied in pupillometry research. Eye-trackers typically record the x- and y-coordinates of the screen location that the eye is gazing at, along with the pupil size. Naturally, participants will blink during any given eye-tracking experiment, resulting in data loss (see the figure below).</p> <p><img alt="Pupil size trace with missing data during blink periods" height="388" src="/media/uploads/8e864867-fcc6-402e-9c39-e7603f91d9e7.png" width="996"></p> <p>In pupillometry research in particular, it is common to <em>interpolate</em> the missing pupil size data during blinks. The basic idea is that blinks are very short in duration (&lt; 300 ms), while the pupil response (i.e., the rate at which pupil size changes) is fairly slow. Thus, it is generally acceptable to interpolate the missing pupil data.</p>
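<p>To make the idea concrete before showing the real helper, here is a toy illustration of the underlying principle in plain NumPy, assuming the samples lost during blinks are marked as NaN. The function name is made up for this example; the actual MNE-Python helper (used below) works on annotated blink periods and pads a configurable buffer around each blink:</p> <pre><code class="language-python">import numpy as np

def interpolate_nan_spans(pupil, times):
    """Linearly interpolate across NaN gaps in a 1D pupil-size trace (toy example)."""
    pupil = pupil.copy()
    missing = np.isnan(pupil)
    # np.interp fills each missing sample from its nearest valid neighbors
    pupil[missing] = np.interp(times[missing], times[~missing], pupil[~missing])
    return pupil

# A 10-sample trace at 10 Hz with a simulated blink (three NaN samples)
times = np.arange(10) / 10.0  # seconds
pupil = np.array([5.0, 5.1, 5.2, np.nan, np.nan, np.nan, 5.3, 5.3, 5.4, 5.4])
print(interpolate_nan_spans(pupil, times))
# [5.    5.1   5.2   5.225 5.25  5.275 5.3   5.3   5.4   5.4  ]
</code></pre>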
<p>Let's plot the same data from above, after we interpolate the missing data during the blink periods with our newly developed function:</p> <pre><code class="language-python">import mne
from mne.datasets.eyelink import data_path
from mne.preprocessing.eyetracking import interpolate_blinks

# Load some data
fpath = data_path() / "sub-01_task-plr_eyetrack.asc"
raw = mne.io.read_raw_eyelink(fpath, create_annotations=["blinks"])

events = mne.find_events(raw, min_duration=0.01, shortest_event=1, uint_cast=True)
event_dict = {"Flash": 2}

# Interpolate missing data during blinks
interpolate_blinks(raw, buffer=(0.025, 0.2), interpolate_gaze=True)

raw.plot(events=events, event_id=event_dict, event_color="g")
</code></pre> <p><img alt="Pupil size trace after blink interpolation" height="388" src="/media/uploads/b52fcb22-9a79-4642-8141-e9acece951a7.png" width="980"></p> <p>Great! Next week, we'll synchronize these data with some simultaneously recorded EEG data. In my next blog post, I'll explain the experimental task in more detail, and we'll visualize the evoked responses to the experiment stimuli.</p> <p><em>Posted by Scott Huberty on Mon, 03 Jul 2023</em></p> <h2><a href="https://blogs.python-gsoc.org/en/scott-hubertys-blog/py-eye-tracking-week-1-of-my-gsoc-project/">(py)Eye-Tracking: Week 1 of my GSoC project</a></h2> <p>Hello everyone! My name is Scott, and I am a PhD candidate in Neuroscience. My GSoC project is focused on building tools for reading, analyzing, and visualizing eye-tracking data in Python.</p> <p>Eye-trackers, as you may have guessed, allow us to measure where a person is looking, and are used to study a variety of things, such as visual attention, cognitive load, and emotion.</p> <p>For this project I am working with MNE-Python, the de facto package for analyzing (neuro-)physiological data in Python. I've been using MNE for a couple of years now, and have contributed a few minor PRs, so I am happy to get more involved via this project.</p> <p>MNE is a perfect home for this work: not only is it a mature and well-maintained package, but it already contains a number of functions that can be directly applied to eye-tracking data. Further, many researchers collect M/EEG or fNIRS data simultaneously with eye-tracking data, and being able to process and analyze these data in a single ecosystem will be a big benefit.</p> <h2>What did I do this week?</h2> <p>Fortunately, prior to starting my GSoC project, I had already completed an initial PR for reading eye-tracking data from a widely used system, <em>SR Research EyeLink</em>, so I had some groundwork to build on.</p> <p>This week, I focused on extending this reader to extract some important meta-information from eye-tracking data that will be needed by functions developed later in this project.</p> <p>Namely, I wanted to extract and neatly store information about the <i>eye-tracking calibration</i>. A calibration is a common routine done at the start of an eye-tracking recording: the participant is asked to look at points at different locations on the screen, so that the eye-tracker can <em>calibrate</em> itself to that specific participant, leading to more accurate data collection. Reviewing the quality of a calibration is a common and important first step in almost any eye-tracking processing pipeline.</p> <p>Without going into the nitty-gritty details, the code we developed will extract the information on any calibrations recorded in the data file and store it in a new <code>Calibrations</code> class. Information on the estimated <em>error</em> of the calibration, the eye that was used for the calibration, etc., is all stored in this <code>dict</code>-like class. We also developed a <code>plot</code> method for the calibration class, so the user can view the participant's gaze to each calibration point.</p>
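<p>To give a concrete picture of how this is meant to be used, here is a rough sketch. The function name below (<code>read_eyelink_calibration</code>) reflects the API direction at the time of writing; treat the exact names and fields as illustrative rather than final:</p> <pre><code class="language-python">from mne.datasets.eyelink import data_path
from mne.preprocessing.eyetracking import read_eyelink_calibration

# Any EyeLink .asc file that contains a calibration routine should work here
fpath = data_path() / "sub-01_task-plr_eyetrack.asc"

# Returns a list of calibration objects, one per calibration in the recording
calibrations = read_eyelink_calibration(fpath)
cal = calibrations[0]

# The object is dict-like: which eye was calibrated, the calibration model
# (e.g., HV5 for five points), and the average/maximum offset across points
print(cal["eye"], cal["model"], cal["avg_error"], cal["max_error"])

# Plot the participant's gaze relative to each calibration point
cal.plot(show_offsets=True)
</code></pre>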
<p>Here's a quick preview. Imagine the plot below is the screen used for eye-tracking. At the start of the session, the participant is presented with five points on the screen, one at a time, and is asked to look at them. The grey dots on the plot are the coordinates of the points that the eye-tracker displayed to the participant. The red dots are the actual gaze positions when the participant looked at the calibration points. The number next to each red dot is the <em>offset</em> (in visual degrees) between the grey dot (calibration point) and the red dot (actual gaze position). This gives you a rough idea of the quality of the eye-tracking calibration for the participant; in short, the higher the offset, the worse the calibration. Finally, the text in the top-left corner displays the average offset across all the calibration points, and the highest offset of any calibration point (in this case 0.43, the offset for the top calibration point).</p> <p><img alt="Calibration plot showing the calibration points, gaze positions, and their offsets" height="491" src="/media/uploads/fb49e9eb-5051-42a1-8eaa-4ad4b09f5c35.png" width="651"></p>
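<p>A quick aside on units: a "visual degree" is the angle that an on-screen distance subtends at the eye, so it depends on the viewing distance and the screen geometry. The snippet below shows the standard textbook conversion; it is purely illustrative, and not necessarily the exact computation the EyeLink software performs:</p> <pre><code class="language-python">import numpy as np

def pixels_to_visual_degrees(offset_px, px_per_cm, viewing_distance_cm):
    """Convert an on-screen offset in pixels to visual degrees (textbook formula)."""
    offset_cm = offset_px / px_per_cm
    # The angle subtended at the eye by the on-screen offset, in degrees
    return np.degrees(2 * np.arctan(offset_cm / (2 * viewing_distance_cm)))

# Example: a 20-pixel offset on a screen with ~35 px/cm, viewed from 60 cm,
# corresponds to roughly half a visual degree
print(pixels_to_visual_degrees(20, 35, 60))  # ~0.55
</code></pre>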
<h2>What is coming up next?</h2> <p>Next week I will shift gears a bit and turn towards developing a pre-processing function that is fundamental to eye-tracking research: <em>interpolating blinks</em>. Basically, you can think of this process as <em>removing blink artifact</em>. More on that next week!</p> <h2>Did I get stuck anywhere?</h2> <p>Not on anything specific, but my first week was a good reminder that when programming, there are always hiccups in what you would otherwise expect to be a simple and quick task :)</p> <h2>But here's something new I learned in Python this week...</h2> <p>This week I ran into the concept of structured arrays in NumPy. Structured arrays are essentially regular NumPy arrays, but they were designed to mimic <code>structs</code> in the C language, in that they allow you to store data of different types in the same array, each with a "field" name. This "field" name is the part I found interesting, because it allows you to (somewhat) use name-based indexing with NumPy arrays, similar to the way you would with <code>Pandas</code> or <code>Xarray</code>. Take the following code as an example:</p> <pre><code class="language-python">import numpy as np

# Make a toy NumPy array and some field names
calibration_labels = ['point_x', 'point_y', 'offset', 'diff_x', 'diff_y']
calibration_vals = np.array([960., 92., 0.5, 23.7, 1.3])
dtype = [(field, 'f8') for field in calibration_labels]

this_calibration = np.empty(1, dtype=dtype)

# Assign a field name for each calibration value
for field, val in zip(calibration_labels, calibration_vals):
    this_calibration[0][field] = val

print(repr(this_calibration))
# array([(960., 92., 0.5, 23.7, 1.3)],
#       dtype=[('point_x', '&lt;f8'), ('point_y', '&lt;f8'), ('offset', '&lt;f8'),
#              ('diff_x', '&lt;f8'), ('diff_y', '&lt;f8')])

print(this_calibration["point_x"])
# [960.]
</code></pre> <p>Neat!</p> <p><em>Posted by Scott Huberty on Sun, 04 Jun 2023</em></p>