.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "_auto_examples/shortclips/01_plot_explainable_variance.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download__auto_examples_shortclips_01_plot_explainable_variance.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr__auto_examples_shortclips_01_plot_explainable_variance.py:


================================
Compute the explainable variance
================================

Before fitting any voxelwise model to fMRI responses, it is good practice to
quantify the amount of signal in the test set that can be predicted by an
encoding model. This quantity is called the *explainable variance*.

The measured signal can be decomposed into a sum of two components: the
stimulus-dependent signal and the noise. If we present the same stimulus
multiple times and record brain activity for each repetition, the
stimulus-dependent signal will be the same across repetitions while the noise
will vary across repetitions. In voxelwise modeling, the features used to
model brain activity are the same for each repetition of the stimulus. Thus,
encoding models can only predict the repeatable, stimulus-dependent signal.

The stimulus-dependent signal can be estimated by averaging the brain
responses over repeats of the same stimulus or experiment. The variance of
this estimated stimulus-dependent signal, which we call the explainable
variance, is proportional to the maximum prediction accuracy that a voxelwise
encoding model can reach on the test set.

Mathematically, let :math:`y_i, i = 1 \dots N` be the measured signal in a
voxel for each of the :math:`N` repetitions of the same stimulus, and let
:math:`\bar{y} = \frac{1}{N}\sum_{i=1}^N y_i` be the average brain response
across repetitions. For each repeat, we define the residual time series
between the brain response and the average brain response as
:math:`r_i = y_i - \bar{y}`. The explainable variance (EV) is estimated as

.. math::

   \text{EV} = \frac{1}{N}\sum_{i=1}^N \text{Var}(y_i) -
   \frac{N}{N-1} \left[\frac{1}{N}\sum_{i=1}^N \text{Var}(r_i)\right]

where the second term is a bias-corrected estimate of the noise variance.

In the literature, the explainable variance is also known as the *signal
power*. For more information, see [1]_, [2]_, and [3]_.

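
To make the estimator concrete, here is a minimal NumPy sketch of the formula
above, applied to a single voxel stored as an array of shape
``(n_repeats, n_samples)``. It is only an illustration of the equation: the
tutorial itself uses the ``explainable_variance`` function (shown below),
which additionally z-scores each repeat so that the result is expressed as a
fraction of the total variance. The toy signal and noise levels below are made
up for the example.

.. code-block:: Python

    import numpy as np

    def explainable_variance_sketch(y):
        """Estimate EV from an array of shape (n_repeats, n_samples)."""
        n_repeats = y.shape[0]
        residuals = y - y.mean(axis=0)  # r_i = y_i - y_bar
        total_var = y.var(axis=1).mean()  # (1 / N) * sum_i Var(y_i)
        # bias-corrected noise variance: (N / (N - 1)) * (1 / N) * sum_i Var(r_i)
        noise_var = residuals.var(axis=1).mean() * n_repeats / (n_repeats - 1)
        return total_var - noise_var

    # toy example: a shared signal plus repeat-specific noise
    rng = np.random.default_rng(0)
    signal = rng.standard_normal(270)
    y = signal + 0.5 * rng.standard_normal((10, 270))
    print(explainable_variance_sketch(y))  # close to Var(signal), i.e. about 1
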
.. GENERATED FROM PYTHON SOURCE LINES 40-42

Path of the data directory
--------------------------

.. GENERATED FROM PYTHON SOURCE LINES 42-47

.. code-block:: Python

    from voxelwise_tutorials.io import get_data_home

    directory = get_data_home(dataset="shortclips")
    print(directory)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/jlg/mvdoc/voxelwise_tutorials_data/shortclips

.. GENERATED FROM PYTHON SOURCE LINES 48-52

.. code-block:: Python

    # modify to use another subject
    subject = "S01"

.. GENERATED FROM PYTHON SOURCE LINES 53-55

Compute the explainable variance
--------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 55-58

.. code-block:: Python

    import numpy as np
    from voxelwise_tutorials.io import load_hdf5_array

.. GENERATED FROM PYTHON SOURCE LINES 59-61

First, we load the fMRI responses on the test set, which contains brain
responses to ten repeats of the same stimulus.

.. GENERATED FROM PYTHON SOURCE LINES 61-67

.. code-block:: Python

    import os

    file_name = os.path.join(directory, 'responses', f'{subject}_responses.hdf')
    Y_test = load_hdf5_array(file_name, key="Y_test")
    print("(n_repeats, n_samples_test, n_voxels) =", Y_test.shape)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    (n_repeats, n_samples_test, n_voxels) = (10, 270, 84038)

.. GENERATED FROM PYTHON SOURCE LINES 68-69

Then, we compute the explainable variance for each voxel.

.. GENERATED FROM PYTHON SOURCE LINES 69-75

.. code-block:: Python

    from voxelwise_tutorials.utils import explainable_variance

    ev = explainable_variance(Y_test)
    print("(n_voxels,) =", ev.shape)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    (n_voxels,) = (84038,)

.. GENERATED FROM PYTHON SOURCE LINES 76-78

To better understand the concept of explainable variance, we can plot the
measured signal in a voxel with high explainable variance...

.. GENERATED FROM PYTHON SOURCE LINES 78-93

.. code-block:: Python

    import matplotlib.pyplot as plt

    voxel_1 = np.argmax(ev)
    time = np.arange(Y_test.shape[1]) * 2  # one time point every 2 seconds
    plt.figure(figsize=(10, 3))
    plt.plot(time, Y_test[:, :, voxel_1].T, color='C0', alpha=0.5)
    plt.plot(time, Y_test[:, :, voxel_1].mean(0), color='C1', label='average')
    plt.xlabel("Time (sec)")
    plt.title("Voxel with large explainable variance (%.2f)" % ev[voxel_1])
    plt.yticks([])
    plt.legend()
    plt.tight_layout()
    plt.show()

.. image-sg:: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_001.png
   :alt: Voxel with large explainable variance (0.71)
   :srcset: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 94-95

... and in a voxel with low explainable variance.

.. GENERATED FROM PYTHON SOURCE LINES 95-106

.. code-block:: Python

    voxel_2 = np.argmin(ev)
    plt.figure(figsize=(10, 3))
    plt.plot(time, Y_test[:, :, voxel_2].T, color='C0', alpha=0.5)
    plt.plot(time, Y_test[:, :, voxel_2].mean(0), color='C1', label='average')
    plt.xlabel("Time (sec)")
    plt.title("Voxel with low explainable variance (%.2f)" % ev[voxel_2])
    plt.yticks([])
    plt.legend()
    plt.tight_layout()
    plt.show()

.. image-sg:: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_002.png
   :alt: Voxel with low explainable variance (-0.05)
   :srcset: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 107-108

We can also plot the distribution of explainable variance over voxels.

.. GENERATED FROM PYTHON SOURCE LINES 108-116

.. code-block:: Python

    plt.hist(ev, bins=np.linspace(0, 1, 100), log=True, histtype='step')
    plt.xlabel("Explainable variance")
    plt.ylabel("Number of voxels")
    plt.title('Histogram of explainable variance')
    plt.grid(True)
    plt.show()

.. image-sg:: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_003.png
   :alt: Histogram of explainable variance
   :srcset: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 117-123

We see that many voxels have low explainable variance. This is expected, since
many voxels are not driven by the visual stimulus, so their responses change
over repeats of the same stimulus. We also see that some voxels have high
explainable variance (around 0.7). The responses in these voxels are highly
consistent across repetitions of the same stimulus, so they are good targets
for encoding models.

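
Before moving on, one might also count how many voxels exceed a given
explainable-variance threshold, to get a rough sense of how many voxels an
encoding model could plausibly explain. This is only a quick illustration
using the ``ev`` array computed above; the threshold of 0.1 is an arbitrary
choice and is not used elsewhere in this tutorial.

.. code-block:: Python

    # count voxels above an (arbitrary) explainable-variance threshold
    threshold = 0.1
    mask = ev > threshold
    print(f"{mask.sum()} voxels ({100 * mask.mean():.1f} %) have EV > {threshold}")
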
.. GENERATED FROM PYTHON SOURCE LINES 125-145

Map to subject flatmap
----------------------

To better understand the distribution of explainable variance, we map the
values to the subject's brain. This can be done with
`pycortex <https://gallantlab.github.io/pycortex/>`_, which can create
interactive 3D viewers displayed in any modern browser. ``pycortex`` can also
display flattened maps of the cortical surface, to visualize the entire
cortical surface at once.

For privacy reasons, we do not share the subjects' anatomical information.
Instead, we provide two mappers:

- a mapper from voxels to a (subject-specific) flatmap
- a mapper from voxels to the FreeSurfer average cortical surface
  ("fsaverage")

The first mapper is a 2D matrix of shape (n_pixels, n_voxels) that maps each
voxel to a set of pixels in the flatmap image. The matrix is stored
efficiently as a ``scipy`` sparse CSR matrix. The function
``plot_flatmap_from_mapper`` (used below) provides an example of how to use
the mapper and visualize the flatmap.

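
To illustrate what such a mapper does, here is a small self-contained sketch
using a toy ``scipy`` sparse matrix. The shapes and values are made up for
the example and are not taken from the mapper files of this dataset; the real
mappers work in the same way, only with many more pixels and voxels, and the
resulting pixel vector is then arranged into the flatmap image.

.. code-block:: Python

    import numpy as np
    from scipy import sparse

    # toy mapper of shape (n_pixels, n_voxels): each row mixes voxel values
    # into one flatmap pixel
    mapper = sparse.csr_matrix(np.array([
        [1.0, 0.0, 0.0, 0.0],
        [0.5, 0.5, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0],
    ]))

    voxel_values = np.array([0.7, 0.2, 0.0, 0.5])  # e.g. one EV value per voxel
    pixel_values = mapper @ voxel_values  # project voxel values to flatmap pixels
    print(pixel_values.shape)  # (n_pixels,)
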
.. GENERATED FROM PYTHON SOURCE LINES 145-152

.. code-block:: Python

    from voxelwise_tutorials.viz import plot_flatmap_from_mapper

    mapper_file = os.path.join(directory, 'mappers', f'{subject}_mappers.hdf')
    plot_flatmap_from_mapper(ev, mapper_file, vmin=0, vmax=0.7)
    plt.show()

.. image-sg:: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_004.png
   :alt: 01 plot explainable variance
   :srcset: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 153-162

This figure is a flattened map of the cortical surface. A number of regions
of interest (ROIs) have been labeled to ease interpretation. If you have
never seen such a flatmap, we recommend taking a look at a ``pycortex`` brain
viewer, which displays the brain in 3D. In such a viewer, press "I" to
inflate the brain, "F" to flatten the surface, and "R" to reset the view (or
use the ``surface/unfold`` slider in the right-hand menu). Press "H" for a
list of all keyboard shortcuts. The viewer helps you understand the
correspondence between the flattened and the folded cortical surface of the
brain.

.. GENERATED FROM PYTHON SOURCE LINES 164-168

On this flatmap, we can see that the explainable variance is mainly located
in the visual cortex, both in early visual regions (V1, V2, V3) and in
higher-level regions (EBA, FFA, IPS). This is expected, since this dataset
contains responses to a visual stimulus.

.. GENERATED FROM PYTHON SOURCE LINES 170-177

Map to "fsaverage"
------------------

The second mapper we provide maps the voxel data to the FreeSurfer average
cortical surface ("fsaverage"), which can be used in ``pycortex``.

First, let's download the "fsaverage" surface.

.. GENERATED FROM PYTHON SOURCE LINES 177-188

.. code-block:: Python

    import cortex

    surface = "fsaverage"

    if not hasattr(cortex.db, surface):
        cortex.utils.download_subject(subject_id=surface,
                                      pycortex_store=cortex.db.filestore)
        cortex.db.reload_subjects()  # force filestore reload
        assert hasattr(cortex.db, surface)

.. GENERATED FROM PYTHON SOURCE LINES 189-193

Then, we load the "fsaverage" mapper. This mapper is a matrix of shape
(n_vertices, n_voxels), which maps each voxel to a set of vertices on the
fsaverage surface. It is stored as a sparse CSR matrix, and it is applied
with a dot product ``@`` (equivalent to ``np.dot``).

.. GENERATED FROM PYTHON SOURCE LINES 193-201

.. code-block:: Python

    from voxelwise_tutorials.io import load_hdf5_sparse_array

    voxel_to_fsaverage = load_hdf5_sparse_array(mapper_file,
                                                key='voxel_to_fsaverage')
    ev_projected = voxel_to_fsaverage @ ev
    print("(n_vertices,) =", ev_projected.shape)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    (n_vertices,) = (327684,)

.. GENERATED FROM PYTHON SOURCE LINES 202-205

We can then create a ``Vertex`` object in ``pycortex``, containing the
projected data. This object can be used either in a ``pycortex`` interactive
3D viewer or in a ``matplotlib`` figure showing only the flatmap.

.. GENERATED FROM PYTHON SOURCE LINES 205-208

.. code-block:: Python

    vertex = cortex.Vertex(ev_projected, surface, vmin=0, vmax=0.7, cmap='viridis')

.. GENERATED FROM PYTHON SOURCE LINES 209-213

To start an interactive 3D viewer in the browser, we can use the ``webshow``
function in pycortex. (Note that this only works if you are running the
notebook locally.) Change ``run_webshow`` to ``True`` and run the following
cell to start the viewer.

.. GENERATED FROM PYTHON SOURCE LINES 213-218

.. code-block:: Python

    run_webshow = False
    if run_webshow:
        cortex.webshow(vertex, open_browser=False, port=8050)

.. GENERATED FROM PYTHON SOURCE LINES 219-224

Alternatively, to plot a flatmap in a ``matplotlib`` figure, use the
``quickshow`` function. (This function requires Inkscape to be installed;
the rest of the tutorial does not use it, so feel free to skip this step.)

.. GENERATED FROM PYTHON SOURCE LINES 224-231

.. code-block:: Python

    from cortex.testing_utils import has_installed

    fig = cortex.quickshow(vertex, colorbar_location='right',
                           with_rois=has_installed("inkscape"))
    plt.show()

.. image-sg:: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_005.png
   :alt: 01 plot explainable variance
   :srcset: /_auto_examples/shortclips/images/sphx_glr_01_plot_explainable_variance_005.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 232-246

References
----------

.. [1] Sahani, M., & Linden, J. F. (2003). How linear are auditory cortical
   responses? Advances in Neural Information Processing Systems, 125-132.

.. [2] Hsu, A., Borst, A., & Theunissen, F. E. (2004). Quantifying variability
   in neural responses and its application for the validation of model
   predictions. Network: Computation in Neural Systems, 15(2), 91-109.

.. [3] Schoppe, O., Harper, N. S., Willmore, B. D., King, A. J., & Schnupp,
   J. W. (2016). Measuring the performance of neural models. Frontiers in
   Computational Neuroscience, 10, 10.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 28.850 seconds)


.. _sphx_glr_download__auto_examples_shortclips_01_plot_explainable_variance.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: 01_plot_explainable_variance.ipynb <01_plot_explainable_variance.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: 01_plot_explainable_variance.py <01_plot_explainable_variance.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_