Mesmerize Documentation¶

New: Video Tutorials!¶
The Main Overview Tutorial playlist provides a quick overview that takes you from raw imaging data, to downstream analysis and interactive visualizations:
Additional tutorials on other aspects of Mesmerize will be placed in this playlist: https://www.youtube.com/playlist?list=PLgofWiw2s4RF_RkGRUfflcj5k5KUTG3o_
Overview¶
Mesmerize is a platform for the annotation and analysis of neuronal calcium imaging data. It encompasses the entire process of calcium imaging analysis from raw data to semi-final publication figures that are interactive, and aids in the creation of FAIR-functionally linked datasets. It is applicable for a broad range of experiments and is intended to be used by users with and without a programming background.
Welcome Window¶
Share your analysis pipelines and figures along with your publication

The Viewer¶
Explore image sequences, and use various modules for pre-processing and signal-extraction. Annotate regions of interest with any relevant information. Map stimuli/behavior periods.

CaImAn modules¶
Mesmerize contains front-end GUI modules for the CaImAn library. This makes it very easy for users without a programming background to use the library.
CaImAn Elastic Motion Correction

CNMFE

Batch Manager¶
Computationally intense procedures performed can be organized with the Mesmerize Batch Manager.

Project Organization¶
Explore project data and create experimental groups.

Data analysis - pyqtgraph programmable flowcharts¶
Build your own analysis pipelines using flowcharts.

Interactive Plots¶
Create shareable interactive plots where the analysis history of every datapoint is traceable. Explore information associated with a datapoint, such as the spatial localization of its parent ROI and raw data.
Interactive Heatmaps

Interactive Cross-correlation analysis

Other types of plots: Beeswarm, Violins, KShape, Proportions, Scatter
Installation¶
Mesmerize can be installed on Linux, Mac OSX and Windows. On Windows, Mesmerize can be installed in an anaconda environment. For Mac OSX and Linux you may use either virtual environments or conda environments, but we have had much better experience with virtual environments.
All platforms¶
We provide a ready-to-use VM with Mesmerize and all features pre-installed. You can run this VM on Windows, Mac OSX, or Linux. This is the easiest way to get started with Mesmerize if you don't want to set up anaconda or virtual environments yourself. Just install VirtualBox and import the mesmerize-v060-2-vm.ova file.
VirtualBox: https://www.virtualbox.org/wiki/Downloads
Download the VM file mesmerize-v060-2-vm.ova from zenodo: https://zenodo.org/record/4738514
When you start the VM, just double click the mesmerize launcher on the desktop.
You can setup Shared Folders in the settings for the VM to share data between the VM and your host computer.
You can mount network drives etc. from within the VM.
Do not delete the venvs directory; this will remove the virtual environment for Mesmerize.
An example batch with a few examples from the caiman sample data is provided at /home/user/example_batch.
The details for the user account on the VM are:
username: user
password: password
You can use the same password for sudo.
By default the VM is set to use 7 threads and 12GB of RAM. You may modify this according to the resources available on your host computer. You generally want to leave 2-4 threads free on your host computer.
If you get the following error when importing the VM you probably don't have enough space on your computer; I recommend importing the VM on a computer that has a few hundred gigabytes of free space:
E_INVALIDARG (0x80070057)
Video instructions:
To update Mesmerize in the VM:
# activate the environment
source ~/venvs/mesmerize/bin/activate
# get the latest version of mesmerize
pip install --upgrade mesmerize
Note
Virtualization features of your CPU must be enabled in your BIOS. VirtualBox will throw errors if it is not.
Linux¶
pip (PyPI)¶
You will need python==3.6 for tensorflow v1
Install python 3.6:
# Debian & Ubuntu based
sudo apt-get install python3.6

# Fedora/CentOS
sudo dnf install python36
Note
If you’re using Ubuntu 20.04 you’ll need to add a PPA to get python3.6
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.6 python3.6-dbg python3.6-dev python3.6-doc python3.6-gdbm python3.6-gdbm-dbg python3.6-tk python3.6-tk-dbg python3.6-venv
Install build tools and other dependencies:
# Debian & Ubuntu based distros
sudo apt-get install build-essential python3.6-dev python3.6-venv qt5-default tcl graphviz git llvm

# Fedora/CentOS
sudo dnf install @development-tools
sudo dnf install python3-devel tcl graphviz
sudo dnf install llvm
For other distributions install the equivalent meta package to get build tools.
If you're on Fedora/CentOS you'll also need redhat-rpm-config, install it using:
sudo dnf install redhat-rpm-config
Create a new virtual environment:
python3.6 -m venv <new_venv_path>
Activate this environment:
source <new_venv_path/bin/activate>
Make sure you have a recent version of pip and setuptools:
pip install --upgrade pip setuptools
Install numpy & cython:
pip install numpy cython
Install tensorflow v1.15 (v2 is not supported by NuSeT) if you want to use Caiman or NuSeT:

# CPU bound
pip install tensorflow~=1.15

# GPU
pip install tensorflow-gpu~=1.15
Install tslearn & bottleneck (optional):
pip install tslearn~=0.4.1 bottleneck==1.2.1
Install mesmerize:
pip install mesmerize
Now you should be able to launch mesmerize from the terminal:
mesmerize
You will always need to activate the environment for Mesmerize before launching it.
If you want Caiman features you’ll need to install caiman into the same environment as mesmerize:
git clone https://github.com/flatironinstitute/CaImAn
cd CaImAn/
git checkout v1.8.8
source <new_venv_path/bin/activate>
pip install -e .
You might need to set up Caiman using caimanmanager.py. Please see their docs for details: https://caiman.readthedocs.io/en/master/Installation.html#installation-on-macos-and-linux
In order to use some features that launch subprocesses, such as the batch manager, you will need to check your System Configuration settings in Mesmerize to make sure that it activates the environment that mesmerize is installed in. By default the pre-run commands contain # source /<path_to_env>/activate; you will need to uncomment the line (remove the #) and set the path to your environment.
Note
Caiman >=1.8.9 requires tensorflow v2, which is currently not supported by NuSeT. If you want to use the latest version of caiman, you will need to install tensorflow v2 and use python 3.8.
Mac OSX¶
This requires Anaconda and will install Mesmerize in an Anaconda environment. If you want to install into a python virtual environment use the instructions for the Linux installation from step #3 onward. Tested on macOS Catalina 10.15.1
Download Anaconda for Python 3: https://www.anaconda.com/distribution/
First make sure you have xcode:
xcode-select --install
This might take a while.
Create an environment & install Mesmerize
Create a new environment using python 3.6:
conda create --name mesmerize python=3.6
Enter the environment:
source activate mesmerize
Install caiman for Caiman features:

conda install -c conda-forge caiman
Install Mesmerize. On Mac installing tslearn before mesmerize creates problems on anaconda for some reason:
pip install mesmerize
Install cython, and downgrade pandas:
conda install Cython pandas~=0.25.3
Install tslearn~=0.4.1:
conda install -c conda-forge tslearn=0.4.1
Install bottleneck (optional):
pip install bottleneck==1.2.1
To launch Mesmerize call it from the terminal:
mesmerize
You will always need to activate the environment for Mesmerize before launching it.
You might get a matplotlib error similar to below:
Bad val 'qt5' on line #1
"backend: qt5
in file "/Users/kushal/.matplotlib/matplotlibrc"
Key backend: Unrecognized backend string 'qt5': valid strings are ['GTK3Agg', 'GTK3Cairo', 'MacOSX', 'nbAgg', 'Qt4Agg', 'Qt4Cairo', 'Qt5Agg', 'Qt5Cairo', 'TkAgg', 'TkCairo', 'WebAgg', 'WX', 'WXAgg', 'WXCairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template']
To fix this, execute the following which appends the default matplotlib backend-option. Note that this will probably affect matplotlib in all your environments:
echo "backend: qt5" >> ~/.matplotlib/matplotlibrc
You might need to set up Caiman using caimanmanager.py. Please see their docs for details: https://caiman.readthedocs.io/en/master/Installation.html#installation-on-macos-and-linux
In order to use some features that launch subprocesses, such as the batch manager, you will need to check your System Configuration settings in Mesmerize to make sure that it activates the environment that mesmerize is installed in.
Windows¶
Tested on Windows 10, not sure if it’ll work on earlier Windows versions.
Download & install Anaconda for Python 3: https://www.anaconda.com/distribution/
Make sure you select the option to add anaconda to the PATH environment variable during installation.
You will also need git: https://gitforwindows.org/
Warning
It is highly recommended that you use Mesmerize in a new dedicated environment, even if you already have major dependencies (like caiman) installed in another environment.
All commands are to be run in the powershell
You will need anaconda to be accessible through powershell. You may need to run powershell as administrator for this step to work. Close & open a new non-admin powershell after running this:
conda init powershell
You will need a relatively recent version of Anaconda in order to run conda commands through the powershell.
Create a new anaconda environment:
conda create -n mesmerize python=3.6
Activate the environment:
conda activate mesmerize
Install tensorflow v1.15:
conda install tensorflow=1.15
Install caiman:
conda install -c conda-forge caiman
Downgrade pandas, install Cython:
conda install Cython pandas~=0.25.3
Install tslearn (optional):
conda install -c conda-forge tslearn=0.4.1
Install bottleneck (optional):
pip install bottleneck==1.2.1
Install graphviz:
conda install graphviz
Install pywin32:
pip install pywin32
Install Mesmerize:
pip install mesmerize
Allow powershell to execute scripts. Run powershell as administrator to execute these commands. This is required for the batch manager and k-Shape GUI, which launch external processes. This may affect the security of your system by allowing scripts to be executed. I'm not an expert on Windows, so if someone knows a better way to do this please let me know! As long as you don't execute untrusted scripts, this shouldn't be a concern:
Set-ExecutionPolicy RemoteSigned
Set-ExecutionPolicy Bypass -scope Process -Force
Launch Mesmerize:
mesmerize
You might need to set up Caiman using caimanmanager.py. Please see their docs for details: https://caiman.readthedocs.io/en/master/Installation.html#installation-on-macos-and-linux
Note
In order to use some features, such as the batch manager, you will need to check your System Configuration settings in Mesmerize to make sure that it activates the conda environment that mesmerize is installed in. By default the pre-run commands contain # conda activate mesmerize, but you will need to uncomment the line (remove the #) or change it if you are using an environment with a different name.
From GitHub (Development)¶
First, make sure you have compilers & python3.6 (see the details above for various Linux distros or Mac OSX)
Create a virtual environment:
# Choose a path to house the virtual environment
python3.6 -m venv /path/to/venv
Activate the virtual environment:
source /path/to/venv/bin/activate
Upgrade pip & setuptools & install some build dependencies:
pip install --upgrade pip setuptools
pip install Cython numpy
Install tensorflow or tensorflow-gpu, you must use version ~=1.15:

pip install tensorflow~=1.15
Install tslearn (required) & bottleneck (optional):
pip install tslearn~=0.4.1 bottleneck==1.2.1
If you want Caiman features you’ll need to install caiman into the same environment as mesmerize:
git clone https://github.com/flatironinstitute/CaImAn
cd CaImAn/
source <new_venv_path/bin/activate>
pip install -e .
You might need to set up Caiman using caimanmanager.py. Please see their docs for details: https://caiman.readthedocs.io/en/master/Installation.html#installation-on-macos-and-linux
Fork the main repo on github and clone it, or install from our repo:

git clone https://github.com/kushalkolar/MESmerize.git

# or your own fork
# git clone https://github.com/<your_github_username>/MESmerize.git

cd MESmerize
Switch to new branch:
git checkout -b my-new-feature
Install in editable mode:
pip install -e .
Make your changes to the code & push to your fork:
git push origin my-new-feature
Create a pull request if you want to incorporate it into the main Mesmerize repo.
Getting Help¶
If you have questions, encounter an issue, or need help:
- Post an issue on GitHub:
Please provide all the details that the issue template asks for.
- Contact us on gitter (for smaller questions or for discussion).
Please use the GitHub issue tracker for actual issues, error messages/tracebacks etc. Do not post large error messages/tracebacks in the gitter room; it gets very messy and makes it harder for us to help you.
FAQs¶
ROIs¶
- Can I delete an ROI?
- I don’t want to delete ROIs but I want to mark them for exclusion in further analysis, how can I do this?
You can do this by creating an ROI type category. See Add New ROI Type Later, which uses this as an example. You can also create this ROI Type category when you create a New Project; you do not have to wait until you already have a project as in that example.
- Can I tag more than one piece of information to each ROI?
Yes, add as many ROI Type categories as you want in the Project Configuration.
See also
- I already have a Mesmerize project with many Samples in it. Can I add a new ROI Type category?
Yes, just add it to your Project Configuration
- Can some samples in my project have ROIs that originate from CNMF(E) and others that are manually drawn?
Yes, but be aware that you may need to separate the CNMF(E) and manual data in downstream analysis if using flowchart nodes that work with data from specific sources.
CNMFE¶
- I have ROIs that clearly encompass multiple cells instead of just one
Increase min_corr.
It might also help to reduce gSig.
- I have too many bad ROIs around random regions that are clearly noise
Increase min_pnr
- Min_PNR image is completely blue and void of any signals
Increase gSig
- Vmin slider is stuck in Inspect Correlation & PNR GUI.
Close and reopen it. This is a matplotlib issue, not something I can fix.
Caiman Motion Correction¶
- I have video tearing
Try increasing upsample grid
It's possible that the movement is too severe to be motion corrected. When the movement is so severe that the information simply does not exist, it is impossible to motion correct it.
- My animal is growing
This is growth, not motion. Unfortunately it cannot be corrected for. If you have an idea for a technique, I can try it out.
- The output actually has more motion, it has created false motion.
- Try these things:
Reduce Strides & Overlaps by ~25%
Reduce max shifts X & Y by ~25%
Reduce max deviation from rigid by ~25%
Project Organization¶
- Can I modify a sample?
Yes. Double click the Sample ID in the Project Browser to open it in a viewer. You can then make any modifications you want and then go to File -> Add to Project and select the “Save Changes (overwrite)” option at the bottom. If you have not changed the image sequence itself you can uncheck “Overwrite image data”.
- Can I change the SampleID?
No, this is fundamentally impossible.
A work-around is to open that Sample in the viewer (double click it in the project browser), make any modifications if necessary, then go to File -> Add to Project, enter the information for this sample and a new Animal ID (and Trial ID if wanted), and then select the option "Add to Project Dataframe" at the bottom and click Proceed. This will add a new Sample to the project with this Sample ID. You can then delete the previous Sample.
- Can I add a new Custom Column, ROI Column, or Stimulus Column to my project when I already have samples in my project?
Yes, just modify your Project Configuration. In the Welcome Window go to Configure -> Project Configuration. Add anything that you want, and then click “Save and Apply”. It’s best to immediately restart Mesmerize whenever you change your project configuration.
If you are adding a new Custom Column you can enter a “Dataframe replace value”. This will allow you to set a value for all existing Samples in your project for this new column.
If you do not set a Dataframe replace value it will label all existing Samples as "untagged".
Citation guide¶
Mesmerize provides interfaces to many great tools that were created by other developers. Please cite the papers for the following Viewer Modules and analysis methods that you use in addition to citing Mesmerize. I would also suggest citing numpy, pandas, scipy, sklearn, and matplotlib.
Mesmerize relies heavily on pyqtgraph widgets; please also consider citing pyqtgraph.
Viewer¶
CNMF:
- Giovannucci A., Friedrich J., Gunn P., Kalfon J., Brown, B., Koay S.A., Taxidis J., Najafi F., Gauthier J.L., Zhou P., Baljit, K.S., Tank D.W., Chklovskii D.B., Pnevmatikakis E.A. (2019). CaImAn: An open source tool for scalable Calcium Imaging data Analysis. eLife 8, e38173. https://elifesciences.org/articles/38173
- Pnevmatikakis, E.A., Soudry, D., Gao, Y., Machado, T., Merel, J., … & Paninski, L. (2016). Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron 89(2):285-299. http://dx.doi.org/10.1016/j.neuron.2015.11.037
- Pnevmatikakis, E.A., Gao, Y., Soudry, D., Pfau, D., Lacefield, C., … & Paninski, L. (2014). A structured matrix factorization framework for large scale calcium imaging data analysis. arXiv preprint arXiv:1409.2903. http://arxiv.org/abs/1409.2903

CNMFE (in addition to the above CNMF papers):
- Zhou, P., Resendez, S. L., Rodriguez-Romaguera, J., Jimenez, J. C., Neufeld, S. Q., Giovannucci, A., … Paninski, L. (2018). Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data. eLife, 7. https://doi.org/10.7554/eLife.28728.001

Caiman Motion Correction:
- Giovannucci A., Friedrich J., Gunn P., Kalfon J., Brown, B., Koay S.A., Taxidis J., Najafi F., Gauthier J.L., Zhou P., Baljit, K.S., Tank D.W., Chklovskii D.B., Pnevmatikakis E.A. (2019). CaImAn: An open source tool for scalable Calcium Imaging data Analysis. eLife 8, e38173. https://elifesciences.org/articles/38173
- Pnevmatikakis, E.A., and Giovannucci A. (2017). NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data. Journal of Neuroscience Methods, 291:83-92. https://doi.org/10.1016/j.jneumeth.2017.07.031

NuSeT Segmentation:
- Yang L, Ghosh RP, Franklin JM, Chen S, You C, Narayan RR, et al. (2020) NuSeT: A deep learning tool for reliably separating and analyzing crowded cells. PLoS Comput Biol 16(9): e1008193. https://doi.org/10.1371/journal.pcbi.1008193
Nodes/Analysis¶
k-Shape:
- Paparrizos, J., & Gravano, L. (2016). k-Shape. ACM SIGMOD Record, 45(1), 69-76. http://dx.doi.org/10.1145/2723372.2737793
- Romain Tavenard, Johann Faouzi, Gilles Vandewiele, Felix Divo, Guillaume Androz, Chester Holtz, Marie Payne, Roman Yurchak, Marc Rußwurm, Kushal Kolar, & Eli Woods. Tslearn, A Machine Learning Toolkit for Time Series Data. Journal of Machine Learning Research, 21(118):1-6, 2020. http://jmlr.org/papers/v21/20-091.html

Cross-correlation:
- Romain Tavenard, Johann Faouzi, Gilles Vandewiele, Felix Divo, Guillaume Androz, Chester Holtz, Marie Payne, Roman Yurchak, Marc Rußwurm, Kushal Kolar, & Eli Woods. Tslearn, A Machine Learning Toolkit for Time Series Data. Journal of Machine Learning Research, 21(118):1-6, 2020. http://jmlr.org/papers/v21/20-091.html

Numerical differentiation:
- Rick Chartrand, "Numerical Differentiation of Noisy, Nonsmooth Data," ISRN Applied Mathematics, vol. 2011, Article ID 164564, 11 pages, 2011. https://doi.org/10.5402/2011/164564
Scientific Libraries¶
Library | Cite
---|---
numpy | Van Der Walt, S., Colbert, S. C. & Varoquaux, G. The NumPy array: A structure for efficient numerical computation. Comput. Sci. Eng. (2011) doi:10.1109/MCSE.2011.37
pandas | McKinney, W. Data Structures for Statistical Computing in Python. Proc. 9th Python Sci. Conf. (2010)
scipy | Virtanen, P., Gommers, R., Oliphant, T.E. et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods (2020). https://doi.org/10.1038/s41592-019-0686-2
sklearn | Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. (2011)
matplotlib | Hunter, J. D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng. (2007)
pyqtgraph | See the pyqtgraph documentation for citation information
Who uses Mesmerize?¶
Sars Centre - Marios Chatzigeorgiou lab
University of Oslo - Joel Glover lab
NIH - Integrative Bioinformatics Support Group
KOKI Institute of Experimental Medicine
University of Alberta - Ian Winship lab
Feinstein Institute of Bioelectronic Medicine
Create a New Project¶
Video Tutorial¶
This tutorial shows how to create a New Project, open images in the Viewer, use the Stimulus Mapping module and perform Caiman motion correction
Biological Questions¶
Before you create a new Mesmerize Project you must think about the biological questions that you are interested in. Here are some thoughts to help you:
Are you interested in the effects of different types of temporary stimulation or behavior, such as odors or visual stimuli?
Are you interested in neuronal activity during specific behavioral periods?
Differences in calcium dynamics between different anatomical regions or cell types?
Experiments using transgenes to stimulate or suppress specific cells, such as with optogenetics or chemogenetics.
Long-duration/chronic exposure to pharmacological agents. For example, if you are inducing seizures with drugs like PTZ
Differences in calcium dynamics between different stages during development.
Differences in calcium dynamics between different cell types using GCaMP driven by specific promoters.
All of the above information can be encoded by different types of categorical variables within the Mesmerize Project Configuration.
New Project¶
To create a new project click New Project in the Welcome Window. You will then be prompted to choose a location and a name for the project. This will create a directory with the chosen name in the location you previously selected.
Project Configuration¶
After setting a project name you must configure it. This is where your biological questions of interest are important. You can change your project configuration later, but it is most time efficient if you enter all your categories of interest now.

Warning
Restart Mesmerize whenever you change the project configuration.
Note
If you have Samples in your project and you change the project configuration at a later date to add new columns, all existing rows in your project DataFrame are labelled as “untagged” for the new columns.
See also
Add To Project Guide to understand how the project configuration relates to the addition of data samples to your project
Categorical Data Columns¶
Mesmerize allows you to create three main types of categorical data columns (for the project DataFrame), and an unlimited number of each type. These categorical data columns allow you to group your data during analysis, and therefore perform comparisons between experimental groups. In essence, these categorical data columns form a scaffold with which you can create your experimental groups during analysis.
Note
You can change the project configuration at any point in the future by adding new columns or changing the visible/hidden columns.
Note
It is generally advisable to keep the names of your categorical data columns short with lowercase letters. When sharing your project you can provide a mapping for all your keys. This helps maintain consistency throughout your project and makes the data more readable.
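For example, one way to keep such a key mapping is a plain Python dict, saved alongside your project (e.g. as JSON); the column names and descriptions below are hypothetical examples drawn from elsewhere in this guide:
# Hypothetical key mapping shared alongside a project so collaborators can decode short column keys
key_mapping = {
    "anatomical_region": "anatomical region of the ROI",
    "notochord_cell_id": "notochord cell identity on the anterior-posterior axis",
    "drug_state": "drug condition during the recording (control, ptz_1, ...)",
}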
ROI Type Columns¶
Create ROI-bound categories with which you want to group your data. Enter the desired name for the category and click Add. Here are some examples:
If you are interested in calcium dynamics between different anatomical regions, you can create a column named anatomical_region.
If you want to define notochord cell identities on an anterior-posterior axis, such as "cell_1", "cell_2", ... "cell_n", you can create an ROI Type Column named notochord_cell_id.

See also
ROI Manager to understand how labels can be tagged onto ROIs using these categories that you have defined in the ROI Type Columns.
Stimulus Type Columns¶
If you’re interested in mapping temporal information to your traces, such as stimuli or behavioral periods, add a “Stimulus Type column” for each type. This is only for temporary stimulation or behavioral periods that do not span the entire length of the video.
See also
Stimulus Mapping guide to understand how stimuli can be labelled.
Custom Columns¶
Here you can create categories to tag any other piece of useful information to each Sample. i.e. to the entire video recording. For example:
If you are studying seizures, you might perform a 5 minute recording in the medium, and then subsequent 5 minute recordings in PTZ. You can create a category called "drug_state". When you add samples to your project you can tag drug states named "control", "ptz_1", "ptz_2", "ptz_recovery_1" etc.
This is also what you would use for chemogenetics experiments if you are recording for example without CNO for 5 minutes, and then with CNO for another 5 minutes.
Three different data types can be tagged to a category, standard text, whole numbers, and decimal numbers.
Warning
Data types cannot be changed later. If you are familiar with pandas you can manually change it, and the corresponding value in the project config file.
If you want to tag numerical information, such as the animal’s development stage, it can be useful to set the data type to whole numbers. This allows you to sort your data numerically. For example you may want to compare dynamics of all curves between stage 48 and 72.

If you are interested in dynamics between different cell types for which you are using specific GCaMP promoters, you can create a custom column called promoter
or cell_type
and select standard text as the data type.

When you add samples to your project from the viewer, you will be prompted to enter information that is directly based on the Custom Columns that you create here.
See also

Project Browser¶
Browse, edit and sort the project DataFrame
You can open the Project Browser from the Welcome Window after you have opened a project.

The columns that are visible in the Project Browser Window correspond to the Project Configuration. For each column you will see a list which is a set of unique elements from that column in the project DataFrame.
Functions
Open Sample¶
Double-click on a Sample in the SampleID column to open it in the Viewer.
In the viewer you can make changes and then save it by going to File -> Add to Project. You will see a “Save changes (overwrite)” option which will overwrite the data for this project Sample with the current data in the viewer work environment. If you have not changed the image sequence data you can uncheck the “Overwrite image data” checkbox, useful if your image sequences are large.

Note
You can make any changes that you want to the Sample. This may include things such as changing or adding new tags to ROIs, changing stimulus maps, tagging a new custom column etc.
Warning
You can never change the AnimalID or TrialID (i.e. SampleID) since these are partially used as unique identifiers. A workaround is described in the FAQ for Project Organization.
Filter¶
You can sort your Project DataFrame into different groups (such as experimental groups) using text and numerical filters. Type a filter into the text entries that are below the list of elements for a column. You can also click on one or many elements in a column to set those elements as the filters.
If you filter out of the root tab, it will always create a new tab with a name of your choice. If you filter out of any other tab it will apply the filter in-place unless you right click on the “Apply” button and choose “Apply in new tab”
Text filters¶
Partial match¶
To filter out a group based on partial text matches just enter the text into the filter text entry below the column(s) of interest and click “Apply”

Since this is filtering out of the root tab, you will be prompted to give a name for a new tab that will be created based on the filter you have entered.

The result is a DataFrame containing all rows where the cell_name contains ‘aten’

If you go to View -> Current dataframe you can see the whole dataframe.

To see how the filter translates to pandas commands go to View -> Current tab filter history

Multiple filters¶
You can combine filters together by using the | separator, which acts as an "or" operator.

The result is all rows where mn, palp, or pr are in the cell_name column.

Note
This can be combined with Modifiers
Filter multiple columns¶
You can filter multiple columns simultaneously if you are not in the root tab. You can create a new tab that is essentially the same as the root by just keeping the filter entries blank and clicking “Apply”.
Filter out all rows where the cell_name column contains ‘pr’ and promoter column contains ‘pc2’ or ‘cng_ch4’.

Right click on the “Apply” button and choose “Apply all” or “Apply all in new tab”

If you view the pandas filter history (View -> Current tab filter history) you can see that the filters for each column are simply applied sequentially.

The dataframe

Modifiers¶
You can perform other types of matches, such as exact matches, negations, and exact negations. Enter the filter and then right click on the text entry to see available modifiers and choose the desired modifier.

Modifier | Description
---|---
$NOT: | Results in the negation of partial matches
$STR: | Treats the filter as a str, same as Partial Match (see above sub-section)
$STR=: | Exact text match
$STR!=: | Negation of exact text match
Numerical filters¶
By default the filters in all entries are treated as text. If your column contains numerical data you have additional options for modifiers. The first four modifiers are the same as explained above. The rest are self explanatory.

Editor¶
You can view and edit the Project DataFrame directly in a GUI using the DataFrame editor.

Warning
Make sure you know what you are doing when you directly modify the Project DataFrame. Changes cannot be undone but you can restore a backup from the project’s dataframe directory. For example, do not modify data under the following columns: CurvePath, ImgInfoPath, ImgPath, ROI_State, any uuid column.
See also
Uses the Spyder object editor
Console¶
If you are familiar with pandas you can interact with the project DataFrame directly. If you are unfamiliar with pandas it’s very easy to learn.
See also
Useful Callables
Callable | Purpose
---|---
get_dataframe() | Returns the dataframe of the current project browser tab
get_root_dataframe() | Always returns the dataframe of the root tab (entire project DataFrame)
set_root_dataframe() | Pass a pandas.DataFrame instance to set it as the project DataFrame
Usage¶
General usage to modify the project DataFrame would be something like this:
# Get a copy the project DataFrame to modify
df = get_root_dataframe().copy()
# Do stuff to df
...
# Set the project DataFrame with the modified one
set_root_dataframe(df)
Example¶
Let's say you have been inconsistent in naming "ATENA" ROI Tags in the "cell_name" column. You can rename all occurrences of 'atena' to 'ATENA'.
# Get a copy of the project DataFrame
>>> df = get_root_dataframe().copy()
# View all occurrences of 'atena'
>>> df.cell_name[df.cell_name == 'atena']
2 atena
3 atena
4 atena
5 atena
6 atena
205 atena
Name: cell_name, dtype: object
# Rename all occurrences of 'atena' to 'ATENA'
>>> df.cell_name[df.cell_name == 'atena'] = 'ATENA'
# Check that there are no more occurrences of 'atena'
>>> df.cell_name[df.cell_name == 'atena']
Series([], Name: cell_name, dtype: object)
# Check that we have renamed the 'atena' occurrences to 'ATENA'
# Indices 2-6 and 205 were named 'atena'
>>> df.cell_name
0 untagged
1 untagged
2 ATENA
3 ATENA
4 ATENA
5 ATENA
6 ATENA
7 atenp
...
Name: cell_name, Length: 311, dtype: object
# Check index 205
>>> df.cell_name.iloc[205]
'ATENA'
# Finally set the changed DataFrame as the root (project) DataFrame
>>> set_root_dataframe(df)
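Note that assigning through a chained indexer, as in the example above, can trigger pandas' SettingWithCopyWarning. An equivalent form using .loc avoids this:
# Same rename using .loc, which avoids pandas chained-assignment warnings
df = get_root_dataframe().copy()
df.loc[df['cell_name'] == 'atena', 'cell_name'] = 'ATENA'
set_root_dataframe(df)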
Viewer overview¶
Based on the pyqtgraph ImageView widget.
The Viewer allows you to do the following things:
Examine your calcium movies
Use modules to perform things like motion correction, CNMF(E), ROI labeling, and stimulus mapping. See their respective guides for details.
You can also make modifications to an existing Sample in your project by opening it in the Viewer. See Modify Sample and Overwrite guide.
Video Tutorial¶
This tutorial shows how to create a New Project, open images in the Viewer, use the Stimulus Mapping module and perform Caiman motion correction
Layout¶

To access Viewer modules choose the module you want to run from the Modules menu at the top. All modules, except the Batch Manager, are small floating windows which you can dock into the Viewer by dragging them to an edge of the viewer.
3D data¶
When viewing 3D data a slider on the left allows you to move through the z axis.

The image stack shown above is from Martin Haesemeyer’s dataset from the following paper:
Haesemeyer M, Robson DN, Li JM, Schier AF, Engert F. A Brain-wide Circuit Model of Heat-Evoked Swimming Behavior in Larval Zebrafish. Neuron. 2018;98(4):817-831.e6. doi:10.1016/j.neuron.2018.04.013
Work Environment¶
Everything in the viewer is stored in a Work Environment object. The main data attributes of the viewer work environment are outlined below.
See also
Attribute | Description
---|---
imgdata | ImgData object containing the Image Sequence and meta data from the imaging source
roi_manager | The back-end ROI Manager that is currently in use
sample_id | SampleID, if opened from a project Sample
stim_maps | Stimulus maps, if any are defined
history_trace | History log, currently used for logging caiman motion correction, CNMF and CNMFE history
UUID | If opened from a project Sample, it refers to the ImgUUID
You can view everything in the current work environment by going to View -> Work Environment Editor. You cannot edit through this GUI at this time.
Console¶
You can interact directly with the work environment using the console.
See also
Namespace¶
Reference | Description
---|---
vi | Instance of ViewerUtils. Use this to interact with the viewer.
all_modules | List all available modules (includes default and any available plugins/custom modules)
ViewerWorkEnv | Use for creating new instances of ViewerWorkEnv
ImgData | Use for creating new instances of ImgData
get_workEnv() | Get the current viewer work environment (instance of ViewerWorkEnv)
get_image() | Get the current image sequence (returns current ViewerWorkEnv.imgdata.seq). If the data are 3D it returns the current plane only.
get_meta() | Get the current meta data
get_module(<name>) | Pass the name of a module as a string. Returns that module if it is available.
get_batch_manager() | Get the batch manager
update_workEnv() | Update the viewer GUI with the viewer work environment (vi.viewer.workEnv)
clear_workEnv() | Clear the current work environment, clean up the GUI and free the RAM
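For example, the getters listed above can be used like this in the Viewer console (the module name 'stimulus_mapping' is the same one used in the Stimulus Mapping script example later in this guide):
# Get a viewer module by name
smm = get_module('stimulus_mapping')

# Get the Batch Manager instance
bm = get_batch_manager()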
Video Tutorial¶
Examples¶
These examples can be run in the Viewer Console.
Working with meta data¶
# view meta data
>>> get_meta()
{'origin': 'AwesomeImager', 'version': '4107ff58a0c3d4d5d3c15c3d6a69f8798a20e3de', 'fps': 10.0, 'date': '20190426_152034', 'vmin': 323, 'vmax': 1529, 'orig_meta': {'source': 'AwesomeImager', 'version': '4107ff58a0c3d4d5d3c15c3d6a69f8798a20e3de', 'level_min': 323, 'stims': {}, 'time': '152034', 'date': '20190426', 'framerate': 10.0, 'level_max': 1529}}
# manually set meta data entries
>>> get_meta()['fps'] = 30.0
Open image¶
Use the Viewer Core API to open any arbitrary image
This example loads an image stored using numpy.save(), but this is applicable to images stored in any format that can eventually be represented as a numpy array in python. For example, you could also load image files stored in HDF5 format and load the numpy array that represents your image sequence.
import numpy as np

# clear the viewer work environment
clear_workEnv()

a = np.load('/path_to_image.npy')

# check what the axes order is
a.shape

# (1000, 512, 512) # for example
# looks like this is in [t, x, y]
# this can be transposed so we get [x, y, t]
# ImgData takes either [x, y, t] or [x, y, t, z] axes order

# Define a meta data dict
meta = \
    {
        "origin": "Tutorial example",
        "fps": 10.0,
        "date": "20200629_171823",
        "scanner_pos": [0, 1, 2, 3, 4, 5, 6]
    }

# Create ImgData instance
imgdata = ImgData(a.T, meta)  # use a.T to get [x, y, t]

# Create a work environment instance
work_env = ViewerWorkEnv(imgdata)

# Set the current Viewer Work Environment from this new instance
vi.viewer.workEnv = work_env

# Update the viewer with the new work environment
# this MUST be run whenever you replace the viewer work environment (the previous line)
update_workEnv()
Image data¶
Image sequences are simply numpy arrays. For example extract the image sequence between frame 1000 and 2000.
See also
# Get the current image sequence
seq = get_image()

# Trim the image sequence
trim = seq[:, :, 1000:2000]

# Set the viewer work environment image sequence to the trimmed one
vi.viewer.workEnv.imgdata.seq = trim

# Update the GUI with the new work environment
update_workEnv()
View analysis log¶
View the analysis log, such as batch manager processing history.
>>> get_workEnv().history_trace
[{'caiman_motion_correction': {'max_shifts_x': 32, 'max_shifts_y': 32, 'iters_rigid': 1, 'name_rigid': 'Does not matter', 'max_dev': 20, 'strides': 196, 'overlaps': 98, 'upsample': 4, 'name_elas': 'a1_t2', 'output_bit_depth': 'Do not convert', 'bord_px': 5}}, {'cnmfe': {'Input': 'Current Work Environment', 'frate': 10.0, 'gSig': 10, 'bord_px': 5, 'min_corr': 0.9600000000000001, 'min_pnr': 10, 'min_SNR': 1, 'r_values_min': 0.7, 'decay_time': 2, 'rf': 80, 'stride': 40, 'gnb': 8, 'nb_patch': 8, 'k': 8, 'name_corr_pnr': 'a8_t1', 'name_cnmfe': 'a1_t2', 'do_corr_pnr': False, 'do_cnmfe': True}}, {'cnmfe': {'Input': 'Current Work Environment', 'frate': 10.0, 'gSig': 10, 'bord_px': 5, 'min_corr': 0.9600000000000001, 'min_pnr': 14, 'min_SNR': 1, 'r_values_min': 0.7, 'decay_time': 4, 'rf': 80, 'stride': 40, 'gnb': 8, 'nb_patch': 8, 'k': 8, 'name_corr_pnr': '', 'name_cnmfe': 'a1_t2', 'do_corr_pnr': False, 'do_cnmfe': True}}]
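Since history_trace is simply a list of single-key dictionaries, as shown in the output above, you can filter it with ordinary Python. For example, to pull out only the caiman motion correction entries:
# Extract only the caiman motion correction parameter entries from the history log
history = get_workEnv().history_trace

mc_params = [entry['caiman_motion_correction']
             for entry in history
             if 'caiman_motion_correction' in entry]

print(mc_params)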
Running scripts¶
You can use the Script Editor to run scripts in the Viewer console for automating tasks such as batch creation. It basically allows you to use the viewer console more conveniently with a text editor. The execution environment of the viewer console and script editor are identical.
Some examples are provided for caiman modules and stimulus mapping.
Add a Sample to the Project¶
When you are happy with the ROIs in the viewer for the current CNMF(E) derived or manually created ROIs, you can add this as a Sample to your project.
Each sample in your project contains the following:
The imaging data from which ROIs were extracted (the video)
All the ROIs with their spatial location, temporal dynamics, and any tags that you have entered in the ROI Manager.
Stimulus mappings, if your project is configured for this.
Meta data that were associated with the imaging video, such as the date and video framerate.
Any further information that you have chosen to add based on your Project Configuration
Note
If your ROIs were obtained through CNMF/CNMFE the following attributes from the final cnm object are stored: cnm.A, cnm.b, cnm.C, cnm.f, cnm.YrA
How to¶
To add the current viewer work environment (see above) as a sample to your project, go to File -> Add To Project. You will be presented with a window similar to this:

The entries that you are prompted with directly correspond to the custom columns in your Project Configuration.
See also
Every Sample in a project has a unique SampleID which is the combination of AnimalID + TrialID.
Warning
You can never change the AnimalID or TrialID (i.e. SampleID) since these are partially used as unique identifiers. A workaround is described in the FAQ for Project Organization.
Warning
AnimalID and TrialID are separated by the -_- character combination when stored as a SampleID. Therefore do not use that character combination within your AnimalID or TrialID.
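Since a SampleID is just the two IDs joined by that separator, it can be split back into its parts, for example in the Viewer console (the example value in the comment is hypothetical):
# Split a SampleID back into its AnimalID and TrialID parts
sample_id = get_workEnv().sample_id  # e.g. 'animal1-_-trial1' (hypothetical)
animal_id, trial_id = sample_id.split('-_-')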
Video Tutorial¶
Tiff file module¶
To open a tiff file go to Modules -> Load Images -> Tiff files.
Note
You can also use this module through the console and scripts. See Tiff module API.
To open tiff files first click the “Select file” button and choose your file. You can also drag and drop a tiff file (drag and drop doesn’t work properly on Windows).

Next, you must select an appropriate Image Load Method (see next section). You can also import meta data associated with your recording.
Certain meta data, such as the sampling rate of the data, are necessary for some downstream analysis procedures. There are a few ways to import your meta data into the Viewer Work Environment:
Simple JSON files, see json_minimal under the table in the Meta data section
Define your own Custom functions to open meta in other file formats
Manually create a meta data dictionary using the Console
Load Method¶
The options for “Load Method” correspond to the tifffile library method that is used for loading the images.
If you are not sure which method you should use, try all of them and see which one loads your data appropriately. If none of them work, create an issue on GitHub.
asarray: Should work for most tiff files, fast method
asarray - multi series: Also fast. Use this if it is a multi-page tiff. For example if the tiff file was created by a program that appends each frame to a file as they are being acquired by the camera.
imread: Usually slower, should work for most tiff files.
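If you want to check outside of Mesmerize how your file loads, here is a minimal standalone sketch using the tifffile library directly; this is roughly what the asarray and imread options map to, and the file path is a placeholder:
import tifffile

path = '/path/to/your_movie.tiff'  # placeholder path

# "asarray" style
with tifffile.TiffFile(path) as tif:
    img = tif.asarray()

# "imread" style
img2 = tifffile.imread(path)

print(img.shape)  # check the axes order, e.g. (t, x, y)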
Axes order¶
Choose the default axes order or manually enter the axes order if your tiff file uses a different order.
Meta data¶
Check the “Load meta data” checkbox if you want to load meta data. Alternatively, you can uncheck this box and create a meta data dictionary manually using the console (see the Console section)
You can select a meta data format from the list. This list of formats corresponds to the functions available in the module mesmerize.viewer.core.organize_meta.
When you select a meta data format, it will automatically try to find a file with the extension specified by the selected format if it has the same name as the selected tiff file.
If you have questions on meta data formats feel free to drop a message in the Gitter room
Default list of formats that are recognized:
Name | Extension | Description
---|---|---
json_minimal | .json | Recognizes a json file that contains at least the minimal set of necessary keys: origin, fps and date. All other keys in the JSON file are placed in a sub-dictionary with the key orig_meta. See Minimal dict below for more info.
AwesomeImager | .json | Used for 1p imaging in the Chatzigeorgiou group at the Sars Center
ome_tiff | .tiff | Imports OME XML meta data stored within the tiff file. Specification is described here: https://docs.openmicroscopy.org/ome-model/6.2.2/ome-tiff/specification.html
For the ome_tiff format, the following values are computed and added to the meta data for the Viewer Work Environment, and can be accessed through get_meta() in the Viewer Console:
- fps - mean sampling rate in Hz; volumetric sampling rate if 3D, frame-to-frame sampling rate if 2D
- fps_std - standard deviation of the sampling rate
- fps_max_dev - maximum deviation from the mean sampling rate
A warning box is shown if fps_std > 0.01 or fps_max_dev > 0.1.
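To illustrate how these computed values relate to frame timestamps, here is a minimal sketch assuming you have the frame times in seconds as a numpy array; this is an illustration, not the exact code Mesmerize uses:
import numpy as np

timestamps = np.array([0.0, 0.101, 0.199, 0.302, 0.400])  # hypothetical frame times in seconds

rates = 1.0 / np.diff(timestamps)        # instantaneous frame-to-frame sampling rates in Hz

fps = rates.mean()                       # mean sampling rate
fps_std = rates.std()                    # standard deviation of the sampling rate
fps_max_dev = np.abs(rates - fps).max()  # maximum deviation from the mean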
Custom functions¶
You may define your own function to organize your meta data. It MUST return a dict which has at least the following keys: origin, fps and date.
- origin is a str describing the software or microscope the recording comes from. This is for your own record.
- fps is the sampling rate of the recording as a float or int.
- date is the date & time represented by a str in the following format: "YYYYMMDD_HHMMSS"
In addition to these 3 keys, you may include any additional keys as you wish.
If you think your meta data organizing function will be useful for others I'll be happy to review a pull request and it can be included by default in Mesmerize. We're happy to help you create a meta data function, just contact us on Gitter or create an issue on GitHub.
Minimal dict¶
Example of a minimal meta data dict.
{
"origin": "microscope or software origin", # must be a str
"fps": 10.0, # must be a int or float
"date": "20201123_172345" # must be a str formatted as "YYYYMMDD_HHMMSS"
}
Function outline¶
Basic outline of a function that you can add to mesmerize.viewer.core.organize_meta for organizing your meta data:
- The function can only take the path to the meta data file as the argument.
- The expected file extension for the meta data must be specified. The files of a single format are allowed to have multiple different file extensions but you must only specify the most common one.
- The function would generally open the meta data file specified by the path, using any python libraries or other code of your choice, and finally return a dictionary that contains the minimal complement of keys, i.e. origin, fps and date, with values of the appropriate types (see previous section).
def my_meta_organizer(path: str) -> dict:
    """.ext""" # define the file ext in the docstring
    raw_meta = function_to_load_my_file(path)

    # do stuff to organize the raw_meta
    meta = ... # stuff to organize raw meta

    # return the organized meta data dict
    # that mesmerize can use
    return meta
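As a concrete illustration, here is what such a function could look like for a hypothetical JSON-based format; the key names software, frame_rate and timestamp are invented purely for this example:
import json
from datetime import datetime

def my_meta_organizer(path: str) -> dict:
    """.json"""  # expected file extension, defined in the docstring
    # hypothetical raw layout:
    # {"software": "...", "frame_rate": 10.0, "timestamp": "2020-11-23T17:23:45", ...}
    with open(path, 'r') as f:
        raw_meta = json.load(f)

    # convert the ISO timestamp into the "YYYYMMDD_HHMMSS" format Mesmerize expects
    date = datetime.strptime(
        raw_meta['timestamp'], '%Y-%m-%dT%H:%M:%S'
    ).strftime('%Y%m%d_%H%M%S')

    return {
        'origin': raw_meta['software'],        # str, for your own record
        'fps': float(raw_meta['frame_rate']),  # sampling rate as a float
        'date': date,                          # "YYYYMMDD_HHMMSS"
        'orig_meta': raw_meta,                 # optional: keep everything else
    }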
Console/Script usage¶
You can also load tiff files through the Viewer Console or Script Editor.
This example can be run line-by-line through the Viewer Console, or from the Script Editor.
image_path =  # path to tiff file
meta_path =  # path to json meta data file

clear_workEnv()  # Prevents a confirmation dialog from appearing

# Get the tiff module
tio = get_module('tiff_io', hide=True)

# Load the tiff file
tio.load(image_path, method='imread', axes_order='txy', meta_path=meta_path, meta_format='json_minimal')
Alternatively, you may manually create a meta data dictionary after loading a tiff file:
image_path =  # path to tiff file

clear_workEnv()  # Prevents a confirmation dialog from appearing

# Get the tiff module
tio = get_module('tiff_io', hide=True)

# Load the tiff file
tio.load(image_path, method='imread', axes_order='txy')

meta_dict = \
    {
        "origin": "my_microscope_software",  # must be a str
        "fps": 17.25,  # must be an int or float
        "date": "20201123_172345"  # must be a str formatted as "YYYYMMDD_HHMMSS"
    }

get_workEnv().imgdata.meta = meta_dict
See also
Inscopix Importer¶
The Inscopix Importer module can be used to open .isxd movies created by Inscopix acquisition software.
Note
You must have your own license for activating/running the Inscopix Data Processing Software (IDPS) and downloading the IDPS API and isx library. Mesmerize only provides an implementation of isx to read .isxd movies into the application.
In order to use the importer you will need to add the path to the parent dir containing the isx library to your PYTHONPATH environment variable.
For example, if your isx dir is located at:
/home/user/Inscopix Data Processing 1.6.0/Inscopix Data Processing.linux/Contents/API/Python/isx
then you will need to add the path to the parent dir, for example:
export PYTHONPATH="/home/user/Inscopix Data Processing 1.6.0/Inscopix Data Processing.linux/Contents/API/Python:$PYTHONPATH"
Usage:
Enter the path to the .isxd file, or click the ... button and choose the file.
Click the button to load the file into the Viewer Work Environment.
The sampling rate (framerate) of the video is automatically imported from the isxd file.
Note
Memory usage is quite high when loading files, you will need at least twice as much RAM as the size of the file you’re trying to open.
Batch Manager¶
Batch process computationally intensive tasks.
See also
Video Tutorial¶
This tutorial shows how to create a New Project, open images in the Viewer, use the Stimulus Mapping module and perform Caiman motion correction
This is currently used for Caiman Motion Correction, CNMF, CNMF 3D and CNMFE.
The Batch Manager can be accessed in the viewer through Modules -> Batch Manager. If you don’t have a batch open you will be prompted with a dialog to open a batch or to select a location for a new batch.
Warning
The full path to the batch directory must not contain spaces or special characters, only a-z, A-Z, 0-9 and underscores.
The Batch Manager processes the batch items in external processes, allowing you to add new batch items while the batch is being processed.
Layout¶

Window title: Name of batch directory
Top: Parent directory of batch directory
Top left: list of batch items and some controls.
Color | Description
---|---
Green | Finished without exceptions
Red | Did not finish, click on the item to see the exceptions in the bottom right information area
Yellow | Currently being processed
Orange | Item aborted by user
Blue | Output data for this item are being moved from the work dir to the batch dir
Button | Description
---|---
Start | Process the batch from the first item
Start at selection | Process the batch starting from the item that is currently selected in the list
Delete selection | Delete the currently selected item along with the associated data in the batch dir
Export shell scripts | Export bash scripts so that the batch items can be run on a computing cluster
Abort current item | Abort the current batch item and move on to the next item
Abort batch | Abort the current item and stop processing the batch
New batch | Create a new batch
Open batch | Open a batch
View Input | Open the input work environment, in the viewer, for the currently selected item
Use work dir: Check this box to use the work dir that has been set in the System Configuration. This feature is only available on Linux & Mac OSX.
Top right: Standard out from the external processes that are processing the batch items.
Bottom left: Parameters for the selected batch item. The first line is the UUID of the batch item.
Bottom right: Output information area for the currently selected item.
Scheduling¶
You can schedule a batch to run at a later time using the following bash script. Doesn’t work for a snap installation yet.
Usage:
Usage: mesmerize-scheduler -b <batch> -i <start item> -t <start time>
-b full batch path in quotes, no spaces
-i uuid of the batch item to start from, no quotes
-t time at which to start the batch, no quotes
examples of how to specify time:
23:00 7:30Feb30
use 24hr time and no spaces
Full usage example:
mesmerize-scheduler -b "/share/data/temp/kushal/pc2_batch" -i a80d1923-e490-4eb3-ba4f-7e651d4cf938 -t 2:00
Stimulus Mapping¶
Video Tutorial¶
This tutorial shows how to create a New Project, open images in the Viewer, use the Stimulus Mapping module and perform Caiman motion correction
Map temporal information such as stimulus or behavioral periods.
Stimulus Mapping Module

Stimulus periods illustrated on the viewer timeline

The tabs that are available in the stimulus mapping module correspond to the stimulus types in your Project Configuration.
You can add stimulus periods either manually or through a script.
Manual Annotation¶
To add a stimulus manually click the “Add Row” button. This will add an empty row to the current tab page.
Enter a name for the stimulus, start time, end time, and pick a color for illustrating the stimulus periods on the Viewer timeline.
To remove a stimulus click the “Remove stim” button. Stimulus periods do not have to be added in chronological order.
Click “Set all maps” to set the mappings for all stimulus types. You can then choose to illustrate a stimulus on the viewer timeline by selecting it from “Show on timeline”
Import and Export are not implemented yet.
Warning
At the moment, only “frames” are properly supported for the time units.
Note
It is generally advisable to keep your stimulus names short with lowercase letters. When sharing your project you can provide a mapping for all your keys. This helps maintain consistency throughout your project and makes the data more readable.
Script¶
See also
You can also use the Stimulus Mapping module’s API to set the stimulus mappings from a pandas DataFrame.
This example creates a pandas DataFrame from a csv file to set the stimulus mappings. It uses the csv file from the pvc-7 dataset available on CRCNS: http://dx.doi.org/10.6080/K0C8276G
You can also download the csv here: stimulus_pvc7.csv
This example is meant to be run through the Viewer Script Editor
import pandas as pd
from mesmerize.plotting.utils import get_colormap

# Load dataframe from CSV
df = pd.read_csv('path_to_csv_file')

# Sort according to time
df = df.sort_values(by='start').reset_index(drop=True)

# Trim off the stimulus periods that are not in the current image sequence
trim = get_image().shape[2]
df = df[df['start'] <= trim]

# get one dataframe for each of the stimulus types
ori_df = df.drop(columns=['sf', 'tf', 'contrast'])  # contains ori stims
sf_df = df.drop(columns=['ori', 'tf', 'contrast'])  # contains sf stims
tf_df = df.drop(columns=['sf', 'ori', 'contrast'])  # contains tf stims

# Rename the stimulus column of interest to "name"
ori_df.rename(columns={'ori': 'name'}, inplace=True)
sf_df.rename(columns={'sf': 'name'}, inplace=True)
tf_df.rename(columns={'tf': 'name'}, inplace=True)


# Get the stimulus mapping module
smm = get_module('stimulus_mapping')

# set the stimulus map in Mesmerize for each of the 3 stimulus types
for stim_type, _df in zip(['ori', 'sf', 'tf'], [ori_df, sf_df, tf_df]):
    # data in the name column must be `str` type for stimulus mapping module
    _df['name'] = _df['name'].apply(str)

    # Get the names of the stimulus periods
    stimuli = _df['name'].unique()
    stimuli.sort()

    # Create colormap with the stimulus names
    stimuli_cmap = get_colormap(stimuli, 'tab10', output='pyqt', alpha=0.6)

    # Create a column with colors that correspond to the stimulus names
    # This is for illustrating the stimulus periods in the viewer plot
    _df['color'] = _df['name'].map(stimuli_cmap)

    # Set the data in the Stimulus Mapping module
    smm.maps[stim_type].set_data(_df)
ROI Manager¶
Manage and annotate ROIs

The ROI Manager has a manual mode, to draw ROIs manually, and a CNMF(E) mode where ROIs can be imported from CNMF(E) outputs.
Note
You cannot combine manual and CNMF(E) ROIs in the same sample.
The ImageJ ROI import uses the read-roi package by Hadrien Mary https://pypi.org/project/read-roi/
Video Tutorial¶
Layout¶

Controls
UI | Description
---|---
Add ROI button | Add a Polygon ROI (Manual mode). Right click this button to add an elliptical ROI.
Show all | Show all ROIs in the viewer
Live plot | Live update of the curve plot with changes (Manual mode)
Plot | Plot the curves (Manual mode)
Import from ImageJ | Import ROIs from an ImageJ ROIs zip file (Manual mode). Freehand ROIs are downsampled by 5.
Switch to manual … | Switch to Manual mode. Clears CNMF(E) ROIs.
ROIs list | Color-coded list of ROIs. Left click to highlight the ROI in the viewer. Right click to show the context menu allowing you to delete the selected ROI.
Tags list | List of tags for the selected ROI; these correspond to the ROI Type Columns of the Project Configuration.
Add Tag to ROI Def… | Set the tag for the current selection in the Tags list
Set ROI Tag | Click to set the tag, or just press return in the text entry above
Note
It is generally advisable to keep your ROI tags short with lowercase letters. When sharing your project you can provide a mapping for all your keys. This helps maintain consistency throughout your project and makes the data more readable.
Note
When using 3D data, the ROIs are colored randomly along the list (not linearly as shown in the image). If you want to set the colors linearly call this in the Viewer Console: get_workEnv().roi_manager.roi_list.reindex_colormap(random_shuffle=False)
Warning
Importing several thousands of ROIs can take 15-30 minutes. You will be able to track the progress of the import in the Viewer Window’s status bar.
Keyboard shortcuts.
These only work when the ROI manager is docked within the Viewer and while you are typing in the Add Tag to ROI Definition text entry.
| Key | Description |
|---|---|
| Page Up | Select previous ROI |
| Page Down | Select next ROI |
| Right Arrow | Play the video at high speed |
| Left Arrow | Play the video backwards at high speed |
| Home | Go to the beginning of the video |
| End | Go to the end of the video |
Manual ROI Mode¶
When you click the “Add ROI” button to add a Manual Polygon ROI, a new rectangular ROI will be added in the top left corner of the image. You can add new vertices to this polygon by clicking on any of its edges. You can drag the vertices to change the shape of the polygon, and you can drag the entire ROI by clicking and dragging within the ROI region. Similarly, you can reshape elliptical ROIs.
Hovering over the ROI selects it in the ROI list.
Console¶
These examples can be run through the viewer console or Script editor to interact with the ROIs.
See also
Get the back-end ROI Manager, see ROI Manager APIs
>>> get_workEnv().roi_manager
<mesmerize.viewer.modules.roi_manager_modules.managers.ManagerCNMFROI object at 0x7f01b8780668>
Get the ROI List, see ROIList API
>>> get_workEnv().roi_manager.roi_list
[<mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc78b278>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc817630>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc817668>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc7c5438>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc7c5208>]
Work with an ROI object, see ROI Type APIs
# Get the curve data of an ROI
>>> get_workEnv().roi_manager.roi_list[3].curve_data
(array([ 0, 1, 2, ..., 2995, 2996, 2997]), array([ -207.00168389, -161.78229208, -157.62522988, ..., -1017.73174502,
-1030.27047731, -1042.26989668]))
# Get the tags of an ROI
>>> get_workEnv().roi_manager.roi_list[2].get_all_tags()
{'anatomical_location': 'tail', 'cell_name': 'dcen', 'morphology': 'untagged'}
# Get a single tag
>>> get_workEnv().roi_manager.roi_list[2].get_tag('cell_name')
'dcen'
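For example, a minimal sketch (run through the Viewer console or the Script Editor) that collects the tags and curve of every ROI into a pandas DataFrame; it only uses the attributes shown above:

import pandas as pd

rows = []
for roi in get_workEnv().roi_manager.roi_list:
    xs, ys = roi.curve_data        # (frame indices, curve values)
    tags = roi.get_all_tags()      # dict of tag-name: tag-value
    rows.append({**tags, 'curve': ys})

roi_df = pd.DataFrame(rows)
print(roi_df.head())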
Caiman Motion Correction¶
Perform motion correction using the NoRMCorre implementation in the CaImAn library.
I highly recommend going through the following before using this module
- The CaImAn demo notebook, the implementation in Mesmerize is basically from the demo
https://github.com/flatironinstitute/CaImAn/blob/master/demos/notebooks/demo_motion_correction.ipynb

Parameters
Output bit depth: The motion corrected image sequences are of float32 type. You can optionally convert the output to 8 or 16 bit uint types to save disk space. In my experience this doesn’t always work; values might get clipped.
For all other parameters please see the demo notebook mentioned above.
You can also enter parameters as keyword arguments (kwargs) in the text box if you select “Use motion correction kwargs”. This is useful, for example, if you want to enter parameters that cannot be set through the GUI. Use single quotes if you want to enter string kwargs, do not use double quotes.
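For example, you might type something like the following into the kwargs text box (these are standard CaImAn/NoRMCorre parameters shown only as an illustration; note the single quotes around the string value):

pw_rigid=True, border_nan='copy'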
Usage¶
This module adds a “caiman motion correction” item to the batch. Set the desired parameters (see demo notebook) and then enter a name to add it as an item to the batch. After the batch item is processed, double-click the batch item to open the motion corrected image sequence in the viewer. You can then use this motion corrected image sequence for further analysis.
See also
This module uses the Batch Manager.
Note
The parameters used for motion correction are stored in the work environment of the viewer and this log is carried over and saved in the Project Sample as well. To see the parameters that were used for motion correction in the viewer, execute get_workEnv().history_trace
in the viewer console and look for the caiman_motion_correction
entry.
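For example, a quick way to pull out that entry in the Viewer console (assuming motion correction has already been run on the current work environment):

>>> [d for d in get_workEnv().history_trace if 'caiman_motion_correction' in d]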
Warning
If you’re using Windows, large memmap files will linger in your batch dir or work dir; you can clean them out periodically.
Script Usage¶
A script can be used to add caiman motion correction batch items. This is much faster than using the GUI.
See also
Add items¶
This example shows how to add all tiff files (of image sequences) from a directory as batch items with 3 different variants of parameters.
See also
This example uses the Caiman Motion Correction Module API, ViewerWorkEnv API, and Batch Manager API
1# Import glob so we can get all tiff files in a dir
2from glob import glob
3# Import os to get filenames from paths
4import os
5
6# Motion correction params.
7
8mc_kwargs = \
9{
10 "max_shifts": (6, 6),
11 "niter_rig": 2,
12 "max_deviation_rigid": 3,
13 "strides": (196, 196),
14 "overlaps": (98, 98),
15 "upsample_factor_grid": 4,
16 "gSig_filt": (10, 10) # Set to `None` for 2p data
17}
18
19params = \
20{
21 'mc_kwargs': mc_kwargs, # the kwargs we set above
22 'item_name': "will set later per file",
23 'output_bit_depth': "Do not convert" # can also set to `8` or `16` if you want the output in `8` or `16` bit
24}
25
26# Path to the dir containing images
27files = glob("/full_path_to_raw_images/*.tiff")
28# Sort in alphabetical order (should also work for numbers)
29files.sort()
30
31# Open each file and add to batch with 3 diff mot cor params
32for i, path in enumerate(files):
33 print("Working on file " + str(i + 1) + " / " + str(len(files)))
34
35 # get json file path for the meta data
36 meta_path = path[:-5] + ".json"
37
38 # Create a new work environment with this image sequence
39 work_env = ViewerWorkEnv.from_tiff(path, "asarray-multi", meta_path)
40
41 # set it as the current work environment
42 vi.viewer.workEnv = work_env
43 vi.update_workEnv()
44
45    # Get caiman motion correction module, hide=True so the GUI is not shown
46 mc_module = get_module("caiman_motion_correction", hide=True)
47
48 # Set name for this video file
49 name = os.path.basename(path)[:-5]
50 params["item_name"] = name
51
52 # First variant of params
53 params["mc_kwargs"]["strides"] = (196, 196)
54 params["mc_kwargs"]["overlaps"] = (98, 98)
55
56 # Add one variant of params for this video to the batch
57 mc_module.add_to_batch(params)
58
59 # Try another variant of params
60 params["mc_kwargs"]["strides"] = (256, 256)
61 params["mc_kwargs"]["overlaps"] = (128, 128)
62
63 # Set these params and add to batch
64 mc_module.add_to_batch(params)
65
66 # Try one more variant of params
67 params["mc_kwargs"]["strides"] = (296, 296)
68 params["mc_kwargs"]["overlaps"] = (148, 148)
69
70 # Set these params and add to batch
71 mc_module.add_to_batch(params)
72
73# If you want to process the batch after adding the items uncomment the following lines
74#bm = get_batch_manager()
75#bm.process_batch(clear_viewers=True)
Crop and add items¶
This example shows how to crop videos prior to adding them as batch items. This is useful if you want to crop out large unchanging regions of your movies. It uses either simple thresholding or spectral saliency on a standard deviation projection to determine the bounding box for cropping.
See also
This example uses the Caiman Motion Correction Module API, ViewerWorkEnv API, and Batch Manager API
1# Import glob so we can get all tiff files in a dir
2from glob import glob
3# Import os to get filenames from paths
4import os
5
6# Just get a shortcut reference to the auto_crop function
7auto_crop = image_utils.auto_crop
8
9# Parameters for cropping, these should work for everything
10# These worked well for various different constructs
11# If you get non-specific cropping (too much black) try "method" as "spectral_saliency" (See below)
12crop_params = \
13{
14 "projection": "max+std",
15 "method": "threshold",
16 "denoise_params": (32, 32),
17}
18
19# Spectral saliency is another method
20# You can try and play around with the parameters
21# If the cropping is insufficient, you can set "projection" to just "max" or "std"
22# If you get too much junk blackness around the animal try increasing denoise_params
23# or reduce padding. Default padding is 30 (when nothing is specified like above)
24crop_params_salient = \
25{
26 "projection": "max+std",
27 "method": "spectral_saliency",
28 "denoise_params": (16, 16),
29 "padding": 40
30}
31
32# Motion correction params.
33mc_kwargs = \
34{
35 "max_shifts": (6, 6),
36 "niter_rig": 2,
37 "max_deviation_rigid": 3,
38 "strides": (196, 196),
39 "overlaps": (98, 98),
40 "upsample_factor_grid": 4,
41 "gSig_filt": (10, 10) # Set to `None` for 2p data
42}
43
44params = \
45{
46 'mc_kwargs': mc_kwargs, # the kwargs we set above
47 'item_name': "will set later per file",
48 'output_bit_depth': "Do not convert" # can also set to `8` or `16` if you want the output in `8` or `16` bit
49}
50
51# Path to the dir containing images
52files = glob("/full_path_to_raw_images/*.tiff")
53# Sort in alphabetical order (should also work for numbers)
54files.sort()
55
56# Open each file, crop, and add to batch with 3 diff mot cor params
57for i, path in enumerate(files):
58 print("Working on file " + str(i + 1) + " / " + str(len(files)))
59
60 # get json file path for the meta data
61 meta_path = path[:-5] + ".json"
62
63 # Create a new work environment with this image sequence
64 work_env = ViewerWorkEnv.from_tiff(path, "asarray-multi", meta_path)
65
66    # auto-crop the image sequence in the work environment
67 raw_seq = work_env.imgdata.seq
68 # Auto crop the image sequence
69 print("Cropping file: " + str(i + 1))
70
71 cropped = auto_crop.crop(raw_seq, crop_params)
72 # Set work env img seq to the cropped one and update
73 work_env.imgdata.seq = cropped
74
75    # update the work environment
76 vi.viewer.workEnv = work_env
77 vi.update_workEnv()
78
79    # Get caiman motion correction module, hide=True so the GUI is not shown
80 mc_module = get_module("caiman_motion_correction", hide=True)
81
82 # Set name for this video file
83 name = os.path.basename(path)[:-5]
84 params["item_name"] = name
85
86 # First variant of params
87 params["mc_kwargs"]["strides"] = (196, 196)
88 params["mc_kwargs"]["overlaps"] = (98, 98)
89
90 # Add one variant of params for this video to the batch
91 mc_module.add_to_batch(params)
92
93 # Try another variant of params
94 params["mc_kwargs"]["strides"] = (256, 256)
95 params["mc_kwargs"]["overlaps"] = (128, 128)
96
97 # Set these params and add to batch
98 mc_module.add_to_batch(params)
99
100 # Try one more variant of params
101 params["mc_kwargs"]["strides"] = (296, 296)
102 params["mc_kwargs"]["overlaps"] = (148, 148)
103
104 # Set these params and add to batch
105 mc_module.add_to_batch(params)
106
107# If you want to process the batch after adding the items uncomment the following lines
108#bm = get_batch_manager()
109#bm.process_batch(clear_viewers=True)
CNMF¶
Perform CNMF using the implementation provided by the CaImAn library. This module basically provides a GUI for parameter entry.
I highly recommend going through the following before using this module
- CNMFE builds upon CNMF
- CaImAn demo notebook, the implementation in Mesmerize is basically from the demo. The second half of the notebook describes CNMF
https://github.com/flatironinstitute/CaImAn/blob/master/demos/notebooks/demo_pipeline.ipynb

Parameters
Please see the CaImAn demo notebook mentioned above to understand the parameters. The Caiman docs also provide descriptions of the parameters: https://caiman.readthedocs.io/
You can also enter parameters for CNMF and component evaluation as keyword arguments (kwargs) in the respective text boxes if you select “Use CNMF kwargs” or “Use evaluation params”. This is useful, for example, if you want to enter parameters that cannot be set through the GUI. Use single quotes if you want to enter string kwargs, do not use double quotes.
Usage¶
This module adds a “CNMF” item to the batch. Set the desired parameters (see Caiman docs & demos) and then enter a name to add it as an item to the batch. After the batch item is processed, double-click the batch item to import the CNMF output into a Viewer. You can then annotate and curate ROIs, and add the data as a Sample to your project.
See also
This module uses the Batch Manager.
Warning
It’s recommended to open a new Viewer when you want to import 3D CNMF data. Full garbage collection of 3D data in the Viewer work environment is a WIP for when you want to clear & import 3D data into the same viewer. However, when you close the Viewer entirely the data is fully garbage collected.
Note
The parameters used for CNMF are stored in the work environment of the viewer and this log is carried over and saved in Project Samples as well. To see the parameters that were used for CNMF in the viewer, execute get_workEnv().history_trace
in the viewer console and look for the ‘cnmf’ entry.
Warning
Importing several thousands of ROIs into the Viewer can take 15-30 minutes. You will be able to track the progress of the import in the Viewer Window’s status bar.
Warning
If you’re using Windows, large memmap files will linger in your batch dir or work dir; you can clean them out periodically.
Script usage¶
A script can be used to add CNMF batch items. This is much faster than using the GUI. This example sets the work environment from the output of a batch item. See the Caiman Motion Correction script usage examples for how to load images if you want to add CNMF items from images that are not in a batch.
See also
1def reset_params():
2 # CNMF Params that we will use for each item
3 cnmf_kwargs = \
4 {
5 'p': 2,
6 'gnb': 1,
7 'merge_thresh': 0.25,
8 'rf': 70,
9 'stride': 40,
10 'k': 16,
11 'gSig': (8, 8),
12 'gSiz': (33, 33)
13 }
14
15 # component evaluation params
16 eval_kwargs = \
17 {
18 'min_SNR': 2.5,
19 'rval_thr': 0.8,
20 'min_cnn_thr': 0.8,
21 'cnn_lowest': 0.1,
22 'decay_time': 2.0,
23 }
24
25 # the dict that will be passed to the mesmerize caiman module
26 params = \
27 {
28 "cnmf_kwargs": cnmf_kwargs,
29 "eval_kwargs": eval_kwargs,
30 "refit": True, # if you want to perform a refit
31 "item_name": "will set later per file",
32 }
33
34 return params
35
36# Get the batch manager
37bm = get_batch_manager()
38cnmf_mod = get_module('cnmf', hide=True)
39
40# Start index if we want to start processing the new items after they have been added
41start_ix = bm.df.index.size + 1
42
43# This example uses motion corrected output items from the batch manager
44# You can also open image files directly from disk, see the motion correction
45# script examples to see how to open images from disk.
46for ix, r in bm.df.iterrows():
47 # Use output of items 6 - 12
48 # for example if items 6 - 12 were motion correction items
49 if ix < 6:
50 continue
51 if ix > 12: # You need to set a break point, else the batch grows infinitely
52 break
53
54 # get the first variant of params
55    params = reset_params()
56
57 # Get the name of the mot cor item
58 name = r['name']
59
60 # Set the name for the new cnmf item
61 params['item_name'] = name
62
63 # Load the mot cor output
64 bm.load_item_output(module='caiman_motion_correction', viewers=viewer, UUID=r['uuid'])
65
66 # Set the sampling rate of the data
67 params['eval_kwargs']['fr'] = vi.viewer.workEnv.imgdata.meta['fps']
68
69 # Get the border_pix value from the motion correction output
70 # skip this if loading files that don't have NaNs on the image borders
71 history_trace = vi.viewer.workEnv.history_trace
72 border_pix = next(d for ix, d in enumerate(history_trace) if 'caiman_motion_correction' in d)['caiman_motion_correction']['bord_px']
73
74 # Set the border_pix values
75 params['border_pix'] = border_pix
76 params['cnmf_kwargs']['border_pix'] = border_pix
77
78 # Add to batch
79 cnmf_mod.add_to_batch(params)
80
81 # change some of the params and add this variant to batch
82 params['cnmf_kwargs']['gSig'] = (10, 10)
83 params['cnmf_kwargs']['gSiz'] = (41, 41)
84
85 # Add to batch with this params variant
86 cnmf_mod.add_to_batch(params)
87
88 # another parameter variant
89 params['eval_kwargs']['rval_thr'] = 0.7
90 params['eval_kwargs']['min_cnn_thr'] = 0.65
91
92 # Add to batch with this params variant
93 cnmf_mod.add_to_batch(params)
94
95# Cleanup the work environment
96vi._clear_workEnv()
97
98# Uncomment the following line to start the batch as well
99#bm.process_batch(start_ix, clear_viewers=True)
CNMF 3D¶
Perform 3D CNMF using the implementation provided by the CaImAn library. This module basically provides a GUI for parameter entry.
I highly recommend going through the following before using this module
- CNMFE builds upon CNMF
- CaImAn demo notebook, the implementation in Mesmerize is basically from the demo.
https://github.com/flatironinstitute/CaImAn/blob/master/demos/notebooks/demo_caiman_cnmf_3D.ipynb

Parameters
Please see the CaImAn demo notebook mentioned above to understand the parameters. The Caiman docs also provide descriptions of the parameters: https://caiman.readthedocs.io/
You can also enter parameters for CNMF and component evaluation as keyword arguments (kwargs) in the respective text boxes if you select “Use CNMF kwargs” or “Use evaluation params”. This is useful, for example, if you want to enter parameters that cannot be set through the GUI. Use single quotes if you want to enter string kwargs, do not use double quotes.
Note
The parameters used for 3D CNMF are stored in the work environment of the viewer and this log is carried over and saved in Project Samples as well. To see the parameters that were used for 3D CNMF in the viewer, execute get_workEnv().history_trace
in the viewer console and look for the ‘cnmf_3d’ entry.
Warning
Importing several thousands of ROIs into the Viewer can take 15-30 minutes. You will be able to track the progress of the import in the Viewer Window’s status bar.
Usage¶
This module adds a “CNMF_3D” item to the batch. Set the desired parameters (see Caiman docs & demos) and then enter a name to add it as an item to the batch. After the batch item is processed, double-click the batch item to import the CNMF output into a Viewer. You can then annotate and curate ROIs, and add the data as a Sample to your project.
See also
This module uses the Batch Manager.
Warning
It’s recommended to open a new Viewer when you want to import 3D CNMF data. Full garbage collection of 3D data in the Viewer work environment is a WIP for when you want to clear & import 3D data into the same viewer. However, when you close the Viewer entirely the data is fully garbage collected.
Script Usage¶
A script can be used to add 3D CNMF batch items. This is much faster than using the GUI.
See also
This example loads 3D sequences from disk & adds them to a batch with 3 parameter variants.
1# just so we can reset the params for each new image file
2def reset_params():
3 # CNMF Params that we will use for each item
4 cnmf_kwargs = \
5 {
6 'p': 2,
7 'merge_thresh': 0.8,
8 'k': 50,
9 'gSig': (10, 10, 1),
10 'gSiz': (41, 41, 4)
11 }
12
13 # component evaluation params
14 eval_kwargs = \
15 {
16 'min_SNR': 3.0,
17 'rval_thr': 0.75,
18 'decay_time': 1.0,
19 }
20
21 # the dict that will be passed to the mesmerize caiman module
22 params = \
23 {
24 "cnmf_kwargs": cnmf_kwargs,
25 "eval_kwargs": eval_kwargs,
26 "refit": True, # if you want to perform a refit
27 "item_name": "will set later per file",
28 "use_patches": False,
29 "use_memmap": False, # re-use the memmap from a previous batch item, reduces computation time
30        "memmap_uuid": None, # UUID (as a str) of the batch item to use the memmap from
31 "keep_memmmap": False # keep the memmap of this batch item
32
33 }
34
35 return params
36
37# get the 3d cnmf module
38cnmf_mod = get_module('cnmf_3d', hide=True)
39
40# Path to the dir containing images
41files = glob("/full_path_to_raw_images/*.tiff")
42# Sort in alphabetical order (should also work for numbers)
43files.sort()
44
45# Open each file, crop, and add to batch with 3 diff mot cor params
46for i, path in enumerate(files):
47 print("Working on file " + str(i + 1) + " / " + str(len(files)))
48
49 # get json file path for the meta data
50 meta_path = path[:-5] + ".json"
51
52 # Create a new work environment with this image sequence
53 vi.viewer.workEnv = ViewerWorkEnv.from_tiff(path=path, # tiff file path
54 method='imread', # use imread
55 meta_path=meta_path, # json metadata file path
56 axes_order=None) # default axes order
57 # see Mesmerize Tiff file module docs for more info on axes order
58
59 # update the work environment
60 vi.update_workEnv()
61
62 # get the first variant of params
63    params = reset_params()
64
65 # Set name for this video file
66 name = os.path.basename(path)[:-5]
67 params["item_name"] = name
68
69 # add batch item with one variant of params
70 u = cnmf_mod.add_to_batch(params)
71
72 # add the same image but change some params
73 params["cnmf_kwargs"]["gSig"] = (12, 12, 1)
74 params["eval_kwargs"]["min_SNR"] = 2.5
75
76 # use the same memmap as the previous batch item
77 # since it's the same image
78 params["use_memmap"] = True
79 params["memmap_uuid"] = str(u)
80
81 # add this param variant to the batch
82 cnmf_mod.add_to_batch(params)
83
84 # one more variant of params
85 params["eval_kwargs"]["min_SNR"] = 2.0
86
87 # add this param variant to the batch
88 cnmf_mod.add_to_batch(params)
CNMFE¶
Perform CNMFE using the implementation provided by the CaImAn library.
I highly recommend going through the following before using this module
- CNMFE builds upon CNMF
- CaImAn CNMF-E demo notebook, the implementation in Mesmerize is basically from the demo
https://github.com/flatironinstitute/CaImAn/blob/master/demos/notebooks/demo_pipeline_cnmfE.ipynb

Parameters
Ain: Seed spatial components from another CNMFE item by entering its UUID here.
Please see the CaImAn demo notebook mentioned above to understand the rest of the parameters. The Caiman docs also provide descriptions of the parameters: https://caiman.readthedocs.io/
You can also enter parameters for CNMF and component evaluation as keyword arguments (kwargs) in the respective text boxes if you select “Use CNMF kwargs” or “Use evaluation params”. This is useful, for example, if you want to enter parameters that cannot be set through the GUI. Use single quotes if you want to enter string kwargs, do not use double quotes.
Usage¶
This module creates two types of batch items, one where you can inspect the Correlation & PNR images and another that performs CNMFE and extracts components. Here is an outline of typical usage:
Enter a gSig parameter value and a name for “Inspect Correlation and PNR” (the “Stop here” text entry). Click “Add to batch”. Run the batch item.
Double-click the batch item and you will be presented with a GUI to help optimize min_corr and min_pnr. For the correlation image, use the vmin slider to optimize the separation of cells and set the min_corr parameter to this value. Likewise, optimize the vmin for the PNR image until it mostly contains regions that show real signal and few or no regions that are likely to be just noise, and set this vmin value as the min_pnr parameter. You may need to try slightly different variations to optimize the parameters.

Enter the rest of the parameters and give a name under “Perform CNMF-E”, click “Add to batch” and run the item.
Double-click the batch item and you will be presented with 3 options. The first option will display the correlation-pnr images and the second option is currently non-functional (matplotlib Qt issue). The last option will import the components extracted by CNMFE into an open Viewer. The components are managed by the ROI Manager.
See also
This module uses the Batch Manager.
Note
The parameters used for CNMFE are stored in the work environment of the viewer and this log is carried over and saved in Project Samples as well. To see the parameters that were used for CNMFE in the viewer, execute get_workEnv().history_trace
in the viewer console and look for the ‘cnmfe’ entry.
Warning
If you’re using Windows, large memmap files will linger in your batch dir or work dir; you can clean them out periodically.
Script Usage¶
A script can be used to add CNMFE batch items. This is much faster than using the GUI.
See also
Add Corr PNR items¶
Add Corr PNR batch items from a batch that contains motion corrected items. This example adds 2 variants of parameters (just gSig) for each motion corrected item.
See also
This example uses the Caiman CNMFE module API and Batch Manager API
See also
Caiman Motion Correction script usage examples for how to load images if you want to add Corr PNR items from images that are not in a batch.
1# Get the batch manager
2bm = get_batch_manager()
3
4# Get the CNMFE module
5cnmfe_mod = get_module('cnmfe', hide=True)
6
7# Start index to start processing the new items after they have been added
8start_ix = bm.df.index.size + 1
9
10for ix, r in bm.df.iterrows():
11 if ix == start_ix:
12 break
13
14 # Load the output of the motion corrected batch item
15 # The output will load into the viewer that this script
16 # is running in.
17 bm.load_item_output(module='caiman_motion_correction', viewers=viewer, UUID=r['uuid'])
18
19 # Get the currently set params
20 # You just need the dict with all the correct keys
21 # You will just modify the "gSig" and "item_name" keys
22 params = cnmfe_mod.get_params(item_type='corr_pnr', group_params=True)
23
24 # Get the name of the mot cor item
25 name = r['name']
26 params['item_name'] = name
27
28    params['border_pix'] = border_pix  # NOTE: `border_pix` must be defined beforehand, e.g. taken from the motion correction output (see the CNMF script example)
29
30 # Set the gSig and name params
31 params['corr_pnr_kwargs']['gSig'] = 8
32
33 # Add to batch
34 cnmfe_mod.add_to_batch_corr_pnr(params)
35
36 # Another variant of params
37 params['corr_pnr_kwargs']['gSig'] = 10
38
39 # Add to batch with this variant of params
40 cnmfe_mod.add_to_batch_corr_pnr(params)
41
42# Cleanup the work environment
43vi._clear_workEnv()
44
45# Start the batch from the start_ix
46bm.process_batch(start_ix, clear_viewers=True)
Caiman HDF5 Importer¶
You can import HDF5 files containing CNMF results that were produced externally by Caiman. The ROIs produced by CNMF, 3D-CNMF or CNMFE will be imported into the current work environment and placed onto the image that is currently open.

You can also use this module through the viewer console, or in the Script Editor instead of clicking buttons.
Example
1# get the module, hide the GUI
2caiman_importer = get_module('caiman_importer', hide=True)
3
4# import the file
5caiman_importer.import_file('/path/to/file.hdf5')
Suite2p Importer¶
You can load Suite2p output files to import ROIs into the current work environment. This places the Suite2p-derived ROIs onto the image that is currently open.

Video Tutorial¶
Script Usage¶
You can also use this module through the viewer console, or in the Script Editor instead of clicking buttons.
Example
1# get the module, hide the GUI
2s2p_importer = get_module('suite2p_importer', hide=True)
3
4# set the path to the dir containing the suite2p output files
5s2p_importer.data.set_dir('/path/to/dir')
6
7# set the amount of neuropil contamination to subtract
8s2p_importer.data.Fneu_sub = 0.7
9
10# import the suite2p data into the current work environment
11s2p_importer.import_rois()
12
13# clear the data from the importer before importing another directory
14# this doesn't do anything to the viewer work environment, just clears the importer data
15s2p_importer.data.clear()
Nuset Segmentation¶
Deep learning based segmentation, useful for nuclear localized indicators. ROIs segmented through this module can be imported into the Viewer Work Environment.
Note
If you use this tool, please cite the Nuset paper in addition to citing Mesmerize: Yang L, Ghosh RP, Franklin JM, Chen S, You C, Narayan RR, et al. (2020) NuSeT: A deep learning tool for reliably separating and analyzing crowded cells. PLoS Comput Biol 16(9): e1008193. https://doi.org/10.1371
Parameters¶
Projection¶
Choose a projection which maximizes the visibility of your regions of interest
Pre-process¶
| Parameter | Description |
|---|---|
| do_preprocess | Perform pre-processing |
| do_sigmoid | Perform sigmoid correction |
| sigmoid_cutoff | Cutoff; lower values will increase the exposure |
| sigmoid_gain | Gain; high values can be thought of as increasing contrast |
| sigmoid_invert | Invert the image if necessary. Regions of interest should be bright and the background should be dark. |
| do_equalize | Perform adaptive histogram equalization |
| equalize_lower | Lower limit for the histogram; this helps remove background & increase contrast |
| equalize_upper | Upper limit for the histogram |
| equalize_kernel | Kernel size; increase if the pre-processed image is grainy. Start with a value ~1/16-1/8 the size of the image. |
NuSeT¶
| Parameter | Description |
|---|---|
| watershed | Watershed the image; useful if your cells are tightly packed. Uncheck if cells are large and/or sparse. |
| min_score | Decreasing this value will cause more regions to be found, i.e. cells tend to split more |
| nms_threshold | Increasing this value will cause more regions to be found, i.e. cells tend to split more |
| rescale_ratio | Use values less than 1.0 if you have large bright cells; use values higher than 1.0 if you have smaller or dim cells |
Note
min_score & nms_threshold work in opposing ways
Note
Segmentation will utilize all threads available on your system (regardless of the value set in your System Configuration). However, it only takes a few seconds to a few minutes, even when segmenting a large 3D stack.
Note
High rescale_ratio values will increase the time required for segmentation. Values around 3.0 take roughly 1 minute for 512x512 images on a ~16 core CPU.
Post-process¶
Export¶
If you export using a Convex Hull, masks containing only a few pixels, which may be noise, will be removed.
Note
Export will utilize all threads available on your system (regardless of the value set in your System Configuration). However, it only takes a few seconds to export a 2D image, and may take ~10 minutes to export a large 3D stack.
Script Editor¶
A simple text editor for writing scripts that can be run in the viewer console.
The scripts are simply run in the viewer console, and all output is also visible there.

See also
Warning
There is no auto-save function
Femtonics Importers¶
You can import .mes
and .mesc
files containing data recorded by a Femtonics microscope. Access these modules in Viewer through Modules -> Load images -> Femtonics
mesc files¶
You can explore the contents of a .mesc
file using the module’s GUI shown on the left in the image below. To load a recording just double click a selection under Channel. If the recording is an image sequence it will be imported into the Viewer. If the recording is a Curve, a plot will open in a new window to display it.

mes files¶
You can import recordings from a .mes
file using this module, and you can also map metadata from the microscope to specific stimuli.
To load an image sequence into the Viewer work environment, just double click the desired recording from the list.

You can map voltage data from various microscope channels (such as auxiliary outputs) to specific stimuli. The stimulus types which you can choose from will correspond to the Stimulus Type columns in your Project Configuration. You can view & edit the imported stimulus data using the Stimulus Mapping Module

Flowchart Overview¶
The flowchart allows you to analyze samples in your project and create plots by arranging analysis nodes. Each node takes an input, performs an operation, and produces an output. For example, the Derivative node takes user-specified numerical arrays, computes the derivative of these arrays, and then outputs the result.
The Flowchart is based on the pyqtgraph flowchart widgets
Flowchart Window

Add node: Right click -> Add node -> Choose from selection
Click on a node to highlight the Control Widget
Remove node: Right click -> Remove node
Connecting nodes: Click on a node terminal and drag to another terminal
Save the flowchart layout: Click “Save as…” to save the layout to a new file. You must specify the file extension as “.fc”. If you save this file within the “flowcharts” directory of your project it will show up in the Welcome Window when you open your project.
Note
This does not save the data, use the Save node to save data.
Warning
Due to a weird Qt or pyqtgraph bug, certain parameter values (such as those in drop-down menus) can’t be saved. Similarly, parameter values are lost when you save to an existing .fc file. If you’re interested, take a look at pyqtgraphCore.WidgetGroup. Anyway, you shouldn’t be using the flowchart layout to save this information; that’s what the History Trace in Transmission objects is for.
Load an .fc file: Click the “Load” button.
Reset View button: Reset the view, for example if you zoom out or pan too far.
Video Tutorial¶
Parts 5 - 9 of the Main Tutorial series also provide various examples of how the flowchart can be used: https://www.youtube.com/playlist?list=PLgofWiw2s4REPxH8bx8wZo_6ca435OKqg
Transmission¶
Almost every node uses a Transmission object for input and output. A Transmission is basically a DataFrame and a History Trace (analysis log) of the data within the DataFrame.
Transmission DataFrame
The Transmission DataFrame is created from your Project DataFrame (or sub-DataFrame) by the Load_Proj_DF node. This initial DataFrame will contain the same columns as your Project DataFrame, and a new column named _RAW_CURVE. Each element (row) in the _RAW_CURVE column is a 1-D numerical array representing a single raw curve extracted from an ROI.
A new column named _BLOCK_ is also added which contains the UUID for logging the analysis history of this newly created block of DataFrame rows, known as a data block. This allows you to merge Transmissions (see Merge node) and maintain their independent analysis logs prior to the merge.
Naming conventions for DataFrame columns according to the data types
numerical data: single leading underscore ( _ ). All caps if produced by a flowchart node.
categorical data: no leading underscore. All caps if produced by a flowchart node.
special cases: Peak detection data are placed in a column named peaks_bases where each element is a DataFrame.
uuid data: has uuid or UUID in the name
Note
_BLOCK_ is an exception, it contains UUIDs not numerical data.
History Trace
The History Trace of a Transmission is a log containing the discrete analysis steps, known as operations, along with their parameters and any other useful information. When a flowchart node performs an operation it stores the output(s) data in the Transmission DataFrame and appends the operation parameters to this log. A separate log is kept for each data block present in the Transmission DataFrame.
Console¶
You have direct access to the data within the nodes through the console in the flowchart. To show the console go to View -> Console.
See also
If you are unfamiliar with the console see the overview on Consoles
Call get_nodes() to view a dict of all nodes in the flowchart. In most nodes you can access the output Transmission through the attribute t, and the Transmission DataFrame through t.df.
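For example, a minimal sketch (the node name here is hypothetical; check the output of get_nodes() for the names in your own flowchart):

# dict of all nodes currently in the flowchart
>>> get_nodes()

# get the output Transmission of a node, 'LoadProjDF.0' is a hypothetical name
>>> t = get_nodes()['LoadProjDF.0'].t

# the DataFrame of that Transmission
>>> t.df.head()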
See also
See the Transmission API for more information. The sources for the nodes are at mesmerize/pyqtgraphCore/flowchart/library.
Example, directly accessing DataFrame elements through the flowchart console

Transmission Files¶
You can save Transmission files using the Save node and work with the data directly in scripts, jupyter notebooks, etc. You can also save them through the flowchart console (and plot consoles) using Transmission.to_hdf5.
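For example, from the flowchart console (a minimal sketch; the node name and file path are hypothetical):

>>> t = get_nodes()['Normalize.0'].t
>>> t.to_hdf5('/path/to/my_transmission.trn')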
Working with Transmission files¶
Load a saved Transmission instance using Transmission.from_hdf5
1>>> from mesmerize import Transmission
2>>> from uuid import UUID
3
4# load transmission file
5>>> t = Transmission.from_hdf5('/share/data/temp/kushal/data.trn')
6<mesmerize.analysis.data_types.Transmission at 0x7f4d42f386a0>
7
8# The DataFrame is always the 'df' attribute
9>>> t.df.head()
10
11 CurvePath ... FCLUSTER_LABELS
120 curves/a2-_-1-_-843c2d43-75f3-421a-9fef-483d1e... ... 8
131 curves/brn3b_a6-_-2-_-21557a64-6868-4ff4-8db1-... ... 4
142 curves/brn3b_a6-_-2-_-21557a64-6868-4ff4-8db1-... ... 5
153 curves/brn3b_day1_3-_-2-_-ff3e95df-0e15-495c-9... ... 8
164 curves/brn3b_day1_3-_-2-_-ff3e95df-0e15-495c-9... ... 6
17
18[5 rows x 27 columns]
19
20# the `df` is just a pandas dataframe
21# View a list of samples in the current file
22>>> t.df.SampleID.unique()
23
24array(['a2-_-1', 'a5-_-1', 'brn3b_a6-_-2', 'brn3b_day1_3-_-2',
25 'brn3b_day1_a1-_-2', 'brn3b_day1_a2-_-2', 'brn3b_day1_a4-_-2',
26 'brn3b_day2_a1-_-2', 'brn3b_day2_a1-_-t', 'brn3b_day2_a10-_-2',
27 'brn3b_day2_a2-_-1', 'brn3b_day2_a2-_-3', 'brn3b_day2_a8-_-1',
28 'cesa_a1-_-1', 'cesa_a1-_-2', 'cesa_a1_jan_2019-_-1',
29 'cesa_a1_jan_2019-_-2', 'cesa_a2-_-2', 'cesa_a6-_-1',
30 'cesa_a7-_-1', 'cesa_a7-_-2', 'cesa_a8-_-1', 'cesa_a9-_-1',
31 'cng_ch4_day1_a2-_-t1', 'cng_ch4_day1_a2-_-t2',
32 'cng_ch4_day2_a4-_-t1', 'dmrt1_day1_a2-_-2', 'dmrt1_day1_a4-_-t2',
33 'dmrt1_day1_a5-_-', 'dmrt1_day1_a6-_-t', 'dmrt1_day1_a6-_-t2',
34 'dmrt1_day2_a1-_-t1', 'dmrt1_day2_a1-_-t2', 'dmrt1_day2_a2-_-t1',
35 'dmrt1_day2_a3-_-t1', 'dmrt1_day2_a3-_-t2', 'dmrt1_day2_a4-_-t1',
36 'dmrt1_day2_a4-_-t2', 'hnk1_a5-_-2', 'hnk1_a6-_-1', 'hnk1_a7-_-1',
37 'hnk1_a7-_-2', 'hnk1_a8-_-1', 'pc2_a10-_-1', 'pc2_a11-_-1',
38 'pc2_a13-_-1', 'pc2_a14-_-1', 'pc2_a15-_-1', 'pc2_a16-_-1',
39 'pc2_a9-_-1', 'pde9_day1_a2-_-2', 'pde9_day1_a3-_-1',
40 'pde9_day1_a4-_-1', 'pde9_day1_a4-_-2', 'pde9_day2_a2-_-t2',
41 'pde9_day2_a2-_-t4', 'pde9_day2_a4-_-t1', 'pde9_day2_a4-_-t2',
42 'pde9_day2_a4-_-t3', 'pde9_day2_a5-_-t1', 'pde9_day2_a5-_-t2',
43 'pde9_day2_a6-_-t1', 'pde9_day2_a7-_-t1', 'pde9_day2_a7-_-t2'],
44 dtype=object)
45
46# Show data associated with a single sample
47>>> t.df[t.df['SampleID'] == 'brn3b_day1_a1-_-2']
48
49 CurvePath ... FCLUSTER_LABELS
506 curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-... ... 6
517 curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-... ... 6
528 curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-... ... 5
539 curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-... ... 7
5410 curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-... ... 5
55
56# View the data associated with one ROI
57# the `uuid_curve` is a unique identifier for each curve/ROI
58>>> t.df[t.df['SampleID'] == 'brn3b_day1_a1-_-2'].iloc[0]
59
60CurvePath curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...
61ImgInfoPath images/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...
62ImgPath images/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...
63ImgUUID d3c5f225-7039-4abd-a7a1-5e9ef2150013
64ROI_State {'roi_xs': [554, 553, 553, 552, 552, 551, 551,...
65SampleID brn3b_day1_a1-_-2
66anatomical_location palp
67cell_name palp
68comments untagged
69date 20190425_110103
70dorso_ventral_axis untagged
71misc {}
72morphology untagged
73promoter brn3b
74rostro_caudal_axis untagged
75stimulus_name [untagged]
76uuid_curve f44fbd3d-6eaa-4e19-a677-496908565fde
77_RAW_CURVE [81.41972198848178, 75.61356993008134, 70.0493...
78meta {'origin': 'AwesomeImager', 'version': '4107ff...
79stim_maps [[None]]
80_BLOCK_ 3e069e2d-d012-47ee-830c-93d85197e2f4
81_SPLICE_ARRAYS [2.646593459501195, 1.8252819116136887, 1.7422...
82_NORMALIZE [0.0681729940259753, 0.06533186950232853, 0.06...
83_RFFT [443.19357880089615, -66.8777897472859, 55.244...
84_ABSOLUTE_VALUE [443.19357880089615, 66.8777897472859, 55.2443...
85_LOG_TRANSFORM [2.646593459501195, 1.8252819116136887, 1.7422...
86FCLUSTER_LABELS 6
87Name: 6, dtype: object
88
89# Show the ROI object data
90>>> t.df[t.df['SampleID'] == 'brn3b_day1_a1-_-2'].iloc[0]['ROI_State']
91
92{'roi_xs': array([554, 553, 553, 552, 552, 551, 551, 551, 551, 550, 550, 550, 549,
93 548, 547, 547, 546, 546, 545, 545, 544, 543, 543, 542, 541, 541,
94 540, 540, 539, 539, 538, 537, 536, 535, 534, 533, 532, 531, 531,
95 530, 529, 528, 527, 527, 526, 526, 525, 525, 525, 524, 524, 523,
96 522, 522, 521, 521, 520, 521, 521, 521, 521, 521, 522, 522, 522,
97 522, 522, 522, 522, 522, 521, 521, 521, 521, 521, 521, 522, 523,
98 524, 524, 525, 525, 525, 526, 526, 527, 528, 528, 529, 529, 529,
99 530, 530, 531, 532, 532, 533, 534, 535, 535, 536, 536, 537, 538,
100 539, 540, 540, 541, 541, 542, 542, 543, 544, 545, 546, 546, 547,
101 548, 548, 549, 549, 549, 549, 550, 550, 550, 550, 551, 551, 551,
102 552, 552, 552, 553, 553, 553, 554, 554, 554, 553, 554, 554, 554,
103 554, 554]),
104'roi_ys': array([155, 156, 156, 157, 157, 158, 159, 160, 160, 161, 162, 162, 162,
105 162, 163, 163, 164, 164, 165, 165, 165, 166, 166, 166, 167, 167,
106 167, 166, 167, 167, 167, 167, 167, 167, 167, 167, 167, 168, 168,
107 168, 168, 168, 168, 167, 167, 166, 166, 165, 164, 164, 163, 163,
108 163, 162, 162, 161, 161, 160, 160, 159, 158, 157, 156, 156, 155,
109 154, 153, 152, 151, 150, 150, 149, 148, 147, 146, 145, 144, 144,
110 144, 144, 143, 143, 142, 141, 141, 140, 140, 140, 139, 139, 138,
111 137, 137, 136, 136, 136, 135, 135, 135, 136, 136, 137, 137, 137,
112 137, 137, 138, 138, 138, 137, 137, 136, 136, 136, 136, 137, 137,
113 137, 138, 138, 139, 140, 141, 141, 142, 143, 144, 144, 145, 146,
114 146, 147, 148, 148, 149, 150, 150, 151, 151, 152, 152, 153, 154,
115 155, 155]),
116'curve_data': (array([ 0, 1, 2, ..., 2996, 2997, 2998]),
117array([ 81.41972199, 75.61356993, 70.04934883, ..., 195.4416283 ,
118 184.8844155 , 174.76708104])),
119'tags': {'anatomical_location': 'palp',
120'cell_name': 'palp',
121'morphology': 'untagged'},
122'roi_type': 'CNMFROI',
123'cnmf_idx': 2}
View History Log¶
Transmissions have a history_trace attribute which is an instance of HistoryTrace. Use the get_data_block_history and get_operations_list methods to view the history log of a data block.
1# To view the history log, first get the block UUID of the dataframe row of which you want the history log
2
3# Block UUIDs are stored in the _BLOCK_ column
4>>> bid = t.df.iloc[10]._BLOCK_
5>>> bid
6
7'248a6ece-e60e-4a09-845e-188a5199d262'
8
9# Get the history log of this data block
10# HistoryTrace.get_operations_list() returns a list of operations, without parameters
11# HistoryTrace.get_data_block_history() returns the operations list with the parameters
12>>> t.history_trace.get_operations_list(bid)
13
14['spawn_transmission',
15 'splice_arrays',
16 'normalize',
17 'rfft',
18 'absolute_value',
19 'log_transform',
20 'splice_arrays',
21 'fcluster']
22
23# View the entire history log with all params
24>>> t.history_trace.get_data_block_history(bid)
25
26[{'spawn_transmission': {'sub_dataframe_name': 'neuronal',
27'dataframe_filter_history': {'dataframe_filter_history': ['df[~df["promoter"].isin([\'cesa\', \'hnk1\'])]',
28 'df[~df["promoter"].isin([\'cesa\', \'hnk1\'])]',
29 'df[~df["cell_name"].isin([\'not_a_neuron\', \'non_neuronal\', \'untagged\', \'ependymal\'])]']}}},
30{'splice_arrays': {'data_column': '_RAW_CURVE',
31'start_ix': 0,
32'end_ix': 2990,
33'units': 'time'}},
34{'normalize': {'data_column': '_SPLICE_ARRAYS', 'units': 'time'}},
35{'rfft': {'data_column': '_NORMALIZE',
36'frequencies': [0.0,
37 0.0033444816053511705,
38 0.0033444816053511705,
39 0.006688963210702341,
40 ...
41
42# Get the parameters for the 'fcluster' operation
43>>> fp = t.history_trace.get_operation_params(bid, 'fcluster')
44
45# remove the linkage matrix first so we can view the other params
46>>> fp.pop('linkage_matrix');fp
47
48{'threshold': 8.0,
49 'criterion': 'maxclust',
50 'depth': 1,
51 'linkage_params': {'method': 'complete',
52 'metric': 'wasserstein',
53 'optimal_ordering': True}}
54
55# Draw the analysis history as a graph
56# This will open your default pdf viewer with the graph
57>>> t.history_trace.draw_graph(bid, view=True)
58
59# If you are using the API to perform analysis on
60# transmission files, you can use the `HistoryTrace`
61# to log the analysis history
62# For example, add a number `3.14` to all datapoints in a curve
63>>> t.df['_RAW_CURVE'] = t.df['_RAW_CURVE'].apply(lambda x: x + 3.14)
64
65# Append the analysis log
66>>> t.history_trace.add_operation(data_block_id='all', operation='addition', parameters={'value': 3.14})
Nodes¶
Data¶
These nodes are for performing general data related operations
LoadFile¶
Loads a saved Transmission file. If you have a Project open it will automatically set the project path according to the open project. Otherwise you must specify the project path. You can specify a project path different from the project that is currently open (this is untested, weird things could happen). You should not merge Transmissions originating from different projects.
Note
You can also load a saved Transmission file by dragging & dropping it into the Flowchart area. It will create a LoadFile node with the name of the dropped file.
Terminal
Description
Out
Transmission loaded from the selected file.
Parameters
Description
load_trn
Button to choose a .trn file (Transmission) to load
proj_trns
Load transmission file located in the project’s “trns” directory
proj_path
Button to select the Mesmerize project that corresponds to the chosen .trn file.
Note
The purpose of specifying the Project Path when you load a saved Transmission file is so that interactive plots and the Datapoint Tracer can find the raw data that correspond to datapoints.
LoadProjDF¶
Load the entire Project DataFrame (root) of the project that is currently open, or a sub-DataFrame that corresponds to a tab that you have created in the Project Browser.
Output Data Column (numerical): _RAW_CURVE
Each element in this output column contains a 1-D array representing the trace extracted from an ROI.
Terminal
Description
Out
Transmission created from the Project DataFrame or sub-DataFrame.
Parameters
Description
DF_Name
DataFrame name. The list corresponds to Project Browser tabs.
Update
Re-create Transmission from corresponding Project Browser tab.
Apply
Process data through this node
Note
The DF_Name options do not update live with the removal or creation of tabs in the Project Browser; you must create a new node to reflect these types of changes.
Save¶
Save the input Transmission to a file so that it can be re-loaded in the Flowchart for later use.
Usage: Connect an input Transmission to this node’s In terminal, click the button to choose a path to save a new file to, and then click the Apply checkbox to save the input Transmission to the chosen file.
Terminal
Description
In
Transmission to be saved to file
Parameters
Description
saveBtn
Button to choose a filepath to save the Transmission to.
Apply
Process data through this node
Note
You must always save a Transmission to a new file (pandas with hdf5 exhibits weird behavior if you overwrite; this is the easiest workaround). If you try to overwrite the file you will be presented with an error saying that the file already exists.
Merge¶
Merge multiple Transmissions into a single Transmission. The DataFrames of the individual Transmissions are concatenated using pandas.concat and the History Traces are also merged. The History Trace of each individual input Transmission is kept separately.
Warning
At the moment, if you create two separate data streams that originate from the same Transmission and then merge them at a later point, the analysis log (History Trace) of the individual data streams are not maintained. See the information about data blocks in the Transmission.
Terminal
Description
In
Transmissions to be merged
Out
Merged Transmission
ViewTransmission¶
View the input Transmission object using the spyder Object Editor. For example you can explore the Transmission DataFrame and HistoryTrace.
ViewHistory¶
View the HistoryTrace of the input Transmission in a nice Tree View GUI.
TextFilter¶
Include or Exclude Transmission DataFrame rows according to a text filter in a categorical column.
Usage Example: If you want to select all traces that are from photoreceptor cells and you have a categorical column, named cell_type for example, that contains cell type labels, choose “cell_type” as the Column parameter and enter “photoreceptor” as the filter parameter, then select Include. If you want to select everything that is not a photoreceptor, select Exclude.
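Conceptually, the Include and Exclude options behave like the following pandas filters on the Transmission DataFrame (a rough sketch, not the node’s actual implementation):

# Include: keep only rows where the categorical column matches the filter
df_include = t.df[t.df['cell_type'] == 'photoreceptor']

# Exclude: keep everything else
df_exclude = t.df[t.df['cell_type'] != 'photoreceptor']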
Note
It is recommended to filter and group your data beforehand using the Project Browser since it allows much more sophisticated filtering.
Terminal
Description
In
Input Transmission
Out
Transmission with its DataFrame filtered according to the parameters
Parameters
Description
Column
Categorical column that contains the text filter to apply
filter
Text filter to apply
Include
Include all rows matching the text filter
Exclude
Exclude all rows matching the text filter
Apply
Process data through this node
HistoryTrace output structure: Dict of all the parameters for this node
SpliceArrays¶
Splice the arrays in the specified numerical data column and place the spliced output arrays in the output column.
Output Data Column (numerical): _SPLICE_ARRAYS
Terminal
Description
In
Input Transmission
Out
Transmission with arrays from the input column spliced and placed in the output column
Parameters
Description
data_column
Numerical data column containing the arrays to be spliced
indices
The splice indices, “start_index:end_index”
Apply
Process data through this node
DropNa¶
Drop NaNs and Nones (null) from the Transmission DataFrame. Uses DataFrame.dropna and DataFrame.isna methods.
If you choose “row” or “column” as axis, entire rows or columns will be dropped if any or all (see params) of the values are NaN/None.
If you choose to drop NaNs/Nones according to a specific column, it will drop the entire row if that row has a NaN/None value for the chosen column.
Terminal
Description
In
Input Transmission
Out
Transmission NaNs and None’s removed according to the params
Parameters
Description
axis
Choose to drop rows, drop columns, or drop rows according to a specific column.
how
any: Drop if any value in the row/column is NaN/None. all: Drop only if all values in the row/column are NaN/None. Ignored if the “axis” parameter is set to a specific column.
Apply
Process data through this node
NormRaw¶
Source
Scale the raw data such that the min and max values are set to the min and max values derived from the raw spatial regions of the image sequences they originate from. Only for CNMFE data.
The arrays in the _RAW_CURVE column are scaled and the output is placed in a new column named _NORMRAW
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
option
Derive the raw min & max values from one of the following options: top_5: Top 5 brightest pixels; top_10: Top 10 brightest pixels; top_5p: Top 5% of brightest pixels; top_10p: Top 10% of brightest pixels; top_25p: Top 25% of brightest pixels; full_mean: Full mean of the min and max array
Apply
Process data through this node
Note
If the raw min value is higher than the raw max value the curve will be excluded in the output. You will be presented with a warning box with the number of curves that were excluded due to this.
Display¶
These nodes connect input Transmission(s) to various plots for visualization
The actual Plot Widget instance that these nodes use can be accessed through the plot_widget
attribute in the flowchart console.
For example
# Get a heatmap node that is named "Heatmap.0"
>>> hn = get_nodes()['Heatmap.0']
# the plot widget instance
>>> hn.plot_widget
<mesmerize.plotting.widgets.heatmap.widget.HeatmapTracerWidget object at 0x7f26e5d29678>
BeeswarmPlots¶
Based on pyqtgraph Beeswarm plots.
Visualize data points as a pseudoscatter and as corresponding Violin Plots. This is commonly used to visualize peak features and compare different experimental groups.
For information on the plot widget see Beeswarm Plots
Terminal
Description
In
Input Transmission
The DataFrame column(s) of interest must have single numerical values, not arrays
Heatmap¶
Used for visualizing numerical arrays in the form of a heatmap. Also used for visualizing a hierarchical clustering tree (dendrogram) along with a heatmap whose row order corresponds to the order of the leaves of the dendrogram.
For information on the plot widget see Heat Plot
Terminal
Description
In
Input Transmission
The arrays in the DataFrame column(s) of interest must be of the same length
Note
Arrays in the DataFrame column(s) of interest must be of the same length. If they are not, you must splice them using the SpliceArrays node.
CrossCorr¶
Perform Cross-Correlation analysis. For information on the plot widget see CrossCorrelation Plot
Plot¶
For information on the plot widget see Simple Plot
A simple plot.
Terminal
Description
In
Input Transmission
Parameters
Description
data_column
Data column to plot, must contain numerical arrays
Show
Show/hide the plot window
Apply
Process data through this node
Proportions¶
Plot stacked bar chart of one categorical variable vs. another categorical variable.
For information on the plot widget see Proportions Plot
ScatterPlot¶
Create scatter plot of numerical data containing [X, Y] values
For information on the plot widget see Scatter Plot
Signal¶
Routine signal processing functions
I recommend this book by Tom O’Haver if you are unfamiliar with basic signal processing: https://terpconnect.umd.edu/~toh/spectrum/TOC.html
Butterworth¶
Source
Creates a Butterworth filter using scipy.signal.butter and applies it using scipy.signal.filtfilt.
The Wn parameter of scipy.signal.butter is calculated by dividing the sampling rate of the data by the freq_divisor parameter (see below).
Output Data Column (numerical): _BUTTERWORTH
Terminal
Description
In
Input Transmission
Out
Transmission with filtered signals in the output data column
Parameters
Description
data_column
Data column containing numerical arrays to be filtered
order
Order of the filter
freq_divisor
Divisor for dividing the sampling frequency of the data to get Wn
Apply
Process data through this node
SavitzkyGolay¶
Savitzky Golay filter. Uses scipy.signal.savgol_filter.
Output Data Column (numerical): _SAVITZKY_GOLAY
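Roughly, this is what the node applies to each array in the chosen data column (a minimal sketch with example parameter values):

import numpy as np
from scipy.signal import savgol_filter

arr = np.random.rand(500)   # one array from the data column
smoothed = savgol_filter(arr, window_length=11, polyorder=3)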
Terminal
Description
In
Input Transmission
Out
Transmission with filtered signals in the output data column
Parameters
Description
data_column
Data column containing numerical arrays to be filtered
window_length
Size of windows for fitting the polynomials. Must be an odd number.
polyorder
Order of polynomials to fit into the windows. Must be less than window_length
Apply
Process data through this node
PowSpecDens¶
Resample¶
Resample the data in numerical arrays. Uses scipy.signal.resample.
Output Data Column (numerical): _RESAMPLE
Terminal
Description
In
Input Transmission
Out
Transmission with resampled signals in the output data column
Parameters
Description
data_column
Data column containing numerical arrays to be resampled
Rs
New sampling rate in Tu units of time.
Tu
Time unit
Apply
Process data through this node
Note
If Tu = 1, then Rs is the new sampling rate in Hertz.
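As a rough illustration of what the node does to each array (assuming the original sampling rate fs and the new rate Rs are both in Hz):

import numpy as np
from scipy.signal import resample

fs = 10.0                    # original sampling rate of the data (Hz), assumed
Rs = 2.5                     # new sampling rate (Hz)
arr = np.random.rand(1000)   # one array from the data column

n_out = int(round(arr.size * Rs / fs))
resampled = resample(arr, n_out)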
ScalerMeanVariance¶
Uses tslearn.preprocessing.TimeSeriesScalerMeanVariance
Output Data Column (numerical): _SCALER_MEAN_VARIANCE
Terminal
Description
In
Input Transmission
Out
Transmission with scaled signals in the output column
Parameters
Description
data_column
Data column containing numerical arrays to be scaled
mu
Mean of the output time series
std
Standard Deviation of the output time series
Apply
Process data through this node
Note
if mu = 0 and std = 1, the output is the z-score of the signal.
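A minimal sketch of the underlying call, showing the z-score equivalence for mu = 0 and std = 1 (assumes tslearn is installed):

import numpy as np
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

arr = np.random.rand(300)    # one array from the data column

scaled = TimeSeriesScalerMeanVariance(mu=0.0, std=1.0).fit_transform(
    arr.reshape(1, -1, 1)
).ravel()

# equivalent to the z-score of the signal
zscore = (arr - arr.mean()) / arr.std()
assert np.allclose(scaled, zscore)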
Normalize¶
Normalize the signal so that all values are between 0 and 1 based on the min and max of the signal.
Output Data Column (numerical): _NORMALIZE
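This is equivalent to a per-array min-max scaling (sketch):

import numpy as np

arr = np.random.rand(300)    # one array from the data column
normalized = (arr - arr.min()) / (arr.max() - arr.min())   # all values now lie between 0 and 1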
Terminal
Description
In
Input Transmission
Out
Transmission with scaled signals in the output column
Parameters
Description
data_column
Data column containing numerical arrays to be scaled
Apply
Process data through this node
RFFT¶
Uses scipy.fftpack.rfft. “Discrete Fourier transform of a real sequence”
Output Data Column (numerical): _RFFT
Terminal
Description
In
Input Transmission
Out
Transmission with the RFFT of the signals in the output column
Parameters
Description
data_column
Data column containing numerical arrays
Apply
Process data through this node
iRFFT¶
Uses scipy.fftpack.irfft. “inverse discrete Fourier transform of real sequence x”
Output Data Column (numerical): _IRFFT
PeakDetect¶
Simple Peak Detection using derivatives. The “Differentiation” chapter of Tom O’Haver’s book has a section on Peak Detection which I recommend reading. https://terpconnect.umd.edu/~toh/spectrum/TOC.html
Output Data Column (DataFrame): peaks_bases
See also
Terminal
Description
Derivative
Transmission with derivatives of signals. Must have a _DERIVATIVE column. It’s recommended to use a derivative from a normalized filtered signal.
Normalized
Transmission containing normalized signals, used for thresholding. See the Normalize node.
Curve
Transmission containing the original signals. Usually not filtered, to avoid distortions caused by filtering.
PB_Input (optional)
Transmission containing peaks & bases data (peaks_bases column). Useful for visualizing a saved Transmission that has peaks & bases data.
Out
Transmission with the detected peaks & bases as DataFrames in the output column
Warning
The PB_Input terminal overrides all other terminals. Do not connect inputs to PB_Input and other terminals simultaneously.
Parameter
Description
data_column
Data column of the input Curve Transmission for placing peaks & bases onto
Fictional_Bases
Add bases to beginning and end of signal if first or last peak is lonely
Edit
Open Peak Editor GUI, see Peak Editor
SlopeThr
Slope threshold
AmplThrAbs
Absolute amplitude threshold
AmplThrRel
Relative amplitude threshold
Apply
Process data through this node
PeakFeatures¶
Compute peak features. The DataFrame of the output Transmission contains one row for each peak.
Output Data Column
Description
_pf_peak_curve
array representing the peak
_pf_ampl_rel_b_ix_l
peak amplitude relative to its left base
_pf_ampl_rel_b_ix_r
peak amplitude relative to its right base
_pf_ampl_rel_b_mean
peak amplitude relative to the mean of its bases
_pf_ampl_rel_zero
peak amplitude relative to zero
_pf_area_rel_zero
Simpson’s Rule Integral relative to zero
_pf_area_rel_min
Simpson’s Rule Integral relative to the minimum value of the curve. Subtracts the minimum value of the peak curve before computing the integral.
_pf_rising_slope_avg
slope of the line drawn from the left base to the peak
_pf_falling_slope_avg
slope of the line drawn from the right base to the peak
_pf_duration_base
distance between the left and right base
_pf_p_ix
index of the peak maxima in the parent curve
_pf_uuid
peak UUID
_pf_b_ix_l
index of the left base in the parent curve
_pf_b_ix_r
index of the right base in the parent curve
See also
mesmerize/analysis/compute_peak_features
for the code that computes the peak features.
Terminal
Description
In
Input Transmission. Must contain a peaks_bases column that contains peaks_bases DataFrames.
Out
Transmission with peak features in various output columns
Parameter
Description
data_column
Data column containing numerical arrays from which to compute peak features.
Apply
Process data through this node
Warning
If there are issues with a particular peak, a user warning will be displayed in the terminal from which Mesmerize is running and the peak will be ignored. This happens when a peak is 1) not flanked by bases on both sides, 2) a peak or base is out of bounds for the parent curve from the chosen data_column, or 3) there are other index issues w.r.t. the peak. In the terminal, the number after the progress bar shows the index of the parent curve, for example here the parent curve is 319: 41%|████▏ | 319/771. The index of the offending peak within the parent curve will be printed below the progress bar along with a statement that may specify the issue with the peak.
Math¶
Nodes for performing basic Math functions
Derivative¶
Computes the first derivative.
Output Data Column (numerical): _DERIVATIVE
Terminal
Description
In
Input Transmission
Out
Transmission with the derivative placed in the output column
Parameter
Description
data_column
Data column containing numerical arrays
Apply
Process data through this node
TVDiff¶
Based on “Numerical Differentiation of Noisy, Nonsmooth Data”, Rick Chartrand (2011). Translated to Python by Simone Sturniolo.
XpowerY¶
Raises each element of the numerical arrays in the data_column to the exponent Y
Output Data Column (numerical): _X_POWER_Y
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
data_column
Data column containing numerical arrays
Y
Exponent
Apply
Process data through this node
AbsoluteValue¶
Element-wise absolute values of the input arrays. Computes root mean squares if input arrays are complex.
Output Data Column (numerical): _ABSOLUTE_VALUE
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
data_column
Data column containing numerical arrays
Apply
Process data through this node
LogTransform¶
Perform Logarithmic transformation of the data.
Output Data Column (numerical): _LOG_TRANSFORM
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
data_column
Data column containing numerical arrays
transform
log10: Base 10 logarithm
ln: Natural logarithm
modlog10: \(sign(x) * \log_{10} (|x| + 1)\)
modln: \(sign(x) * \ln (|x| + 1)\)
Apply
Process data through this node
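The transforms listed above can be written out with numpy as follows (a sketch with a hypothetical input array, not the node's code):
import numpy as np

sig = np.random.randn(1000)

log10 = np.log10(sig)  # Base 10 logarithm (NaN / -inf for non-positive values)
ln = np.log(sig)       # Natural logarithm

# "modified" logs that are defined for zero and negative values
modlog10 = np.sign(sig) * np.log10(np.abs(sig) + 1)
modln = np.sign(sig) * np.log(np.abs(sig) + 1)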
ArrayStats¶
Perform a few basic statistical functions.
Output Data Column (numerical): Customizable by user entry
Output data are single numbers, not arrays
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
The desired function is applied to each 1D array in the data_column and the output is placed in the Output Data Column.
Parameter
Description
data_column
Data column containing numerical arrays
function
amin: Return the minimum of the input array
amax: Return the maximum of the input array
nanmin: Return the minimum of the input array, ignore NaNs
nanmax: Return the maximum of the input array, ignore NaNs
ptp: Return the range (max - min) of the values of the input array
median: Return the median of the input array
mean: Return the mean of the input array
std: Return the standard deviation of the input array
var: Return the variance of the input array
nanmedian: Return the median of the input array, ignore NaNs
nanmean: Return the mean of the input array, ignore NaNs
nanstd: Return the standard deviation of the input array, ignore NaNs
nanvar: Return the variance of the input array, ignore NaNs
group_by (Optional)
Group by a categorical variable, for example get the mean array of a group
group_by_sec (Optional)
Group by a secondary categorical variable
output_col
Enter a name for the output column
Apply
Process data through this node
ArgGroupStat¶
Group by a categorical variable and return the value of any other column based on a statistic. Basically creates sub-dataframes for each group and then returns values based on the sub-dataframe.
Group by column “group_by” and return value from column “return_col” where data in data_column fits “stat”
Output Data Column (Any): ARG_STAT
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
data_column
Data column containing single numbers (not arrays for now)
group_by
Group by column (categorical variables)
return_col
Return value from this column (any data)
stat
“max” or “min”
Apply
Process data through this node
ZScore¶
Compute Z-Scores of the data. Uses scipy.stats.zscore. The input data are divided into groups according to the group_by parameter. Z-Scores are computed for the data in each group with respect to the data only in that group.
Output Data Column (numerical): _ZSCORE
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
data_column
Input data column containing numerical arrays
group_by
Categorical data column to group by.
Apply
Process data through this node
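A minimal sketch of grouped z-scoring with pandas and scipy, assuming hypothetical _RAW_CURVE and treatment columns; the node's own grouping logic may differ in detail.
import numpy as np
import pandas as pd
from scipy.stats import zscore

df = pd.DataFrame(
    {
        '_RAW_CURVE': [np.random.rand(100) for _ in range(6)],
        'treatment': ['a', 'a', 'a', 'b', 'b', 'b'],
    }
)

df['_ZSCORE'] = None  # object column that will hold one array per row

for _, sub in df.groupby('treatment'):
    # z-score each group only with respect to the data in that group
    stacked = np.vstack(sub['_RAW_CURVE'].values)
    z = zscore(stacked, axis=None)
    for row_ix, row_z in zip(sub.index, z):
        df.at[row_ix, '_ZSCORE'] = row_z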
LinRegress¶
Basically uses scipy.stats.linregress
Performs Linear Regression on numerical arrays and returns slope, intercept, r-value, p-value and standard error
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
data_column
Data column containing 1D numerical arrays. The values are used as the y values and the indices as the x values for the regression.
Output Columns (single numbers): _SLOPE, _INTERCEPT, _R-VALUE, _P-VALUE, _STDERR, as described in scipy.stats.linregress
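A minimal sketch of what this amounts to with scipy, assuming a hypothetical data column; the array indices serve as the x values.
import numpy as np
import pandas as pd
from scipy.stats import linregress

df = pd.DataFrame({'_RAW_CURVE': [np.random.rand(100) for _ in range(3)]})

def regress(y: np.ndarray):
    x = np.arange(len(y))  # indices are the x values
    return linregress(x, y)

results = df['_RAW_CURVE'].apply(regress)
df['_SLOPE'] = results.apply(lambda r: r.slope)
df['_INTERCEPT'] = results.apply(lambda r: r.intercept)
df['_R-VALUE'] = results.apply(lambda r: r.rvalue)
df['_P-VALUE'] = results.apply(lambda r: r.pvalue)
df['_STDERR'] = results.apply(lambda r: r.stderr)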
Biology¶
Nodes for some biologically useful things which I couldn’t categorize elsewhere
ExtractStim¶
Extract the portions of a trace corresponding to stimuli that have been temporally mapped onto it. It outputs one row per stimulus period.
Note: Stimulus extraction is currently quite slow, will be optimized after some planned changes in the Transmission object.
Output Data Column
Description
ST_TYPE
Stimulus type, corresponds to your Project Config
ST_NAME
Name of the stimulus
_ST_CURVE
The extracted array based on the parameters
_ST_START_IX
Start index of the stimulus period in the parent curve
_ST_END_IX
End index of the stimulus period in the parent curve
ST_uuid
UUID assigned for the extracted stimulus period
Parameter
Description
data_column
Data column containing the signals to be extracted based on the stimulus maps
Stim_Type
Type of stimulus to extract
start_offset
Offset the start index of the stimulus mapping by a value (in frames)
end_offset
Offset the end index of the stimulus mapping by a value (in frames)
zero_pos
Zero index of the extracted signal
start_offset: extraction begins at the start_offset value, stops at the end_offset
stim_end: extraction begins at the end of the stimulus, stops at the end_offset
stim_center: extraction begins at the midpoint of the stimulus period plus the start_offset, stops at the end_offset
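To make the zero_pos options more concrete, here is one plausible reading of them as plain Python. This is purely illustrative and not the node's actual implementation; the exact handling of the offsets may differ.
# hypothetical stimulus mapping and offsets, all in frames
stim_start, stim_end = 100, 200
start_offset, end_offset = 10, 50

zero_pos = 'stim_center'

if zero_pos == 'start_offset':
    ix_start = stim_start + start_offset
elif zero_pos == 'stim_end':
    ix_start = stim_end
elif zero_pos == 'stim_center':
    ix_start = (stim_start + stim_end) // 2 + start_offset

ix_end = stim_end + end_offset

# _ST_CURVE would then be curve[ix_start:ix_end]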
DetrendDFoF¶
Source
Uses the detrend_df_f function from the CaImAn library. This node does not use any of the numerical data in a Transmission DataFrame to compute the detrended \(\Delta F / F_0\). It directly uses the CNMF output data for the Samples that are present in the Transmission DataFrame.
Output Data Column (numerical): _DETREND_DF_O_F
StaticDFoFo¶
Perform \(\frac{F - F_0}{F_0}\) without a rolling window. \(F\) is an input array and \(F_0\) is the minimum value of the input array.
Output Data Column (numerical): _STATIC_DF_O_F
Terminal
Description
In
Input Transmission
Out
Transmission with the result placed in the output column
Parameter
Description
data_column
Data column containing numerical arrays
Apply
Process data through this node
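A minimal sketch of this computation with numpy, assuming a hypothetical data column of raw traces (note that a trace whose minimum is 0 would cause a division by zero):
import numpy as np
import pandas as pd

def static_dfof(F: np.ndarray) -> np.ndarray:
    F0 = F.min()  # F0 is the minimum value of the input array
    return (F - F0) / F0

df = pd.DataFrame({'_RAW_CURVE': [np.random.rand(1000) + 1.0]})
df['_STATIC_DF_O_F'] = df['_RAW_CURVE'].apply(static_dfof)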
StimTuning¶
Stimulus Tuning analysis. For more information see Stimulus Tuning Plot
Clustering¶
KShape¶
Perform KShape clustering. For more information see KShape plot.
KMeans¶
Basically sklearn.cluster.KMeans.
Hierarchical¶
These nodes allow you to perform Hierarchical Clustering using scipy.cluster.hierarchy.
If you are unfamiliar with Hierarchical Clustering I recommend going through this chapter from Michael Greenacre: http://www.econ.upf.edu/~michael/stanford/maeb7.pdf
Note
Some of these nodes do not use Transmission objects for some inputs/outputs.
Linkage¶
Compute a linkage matrix which can be used to form flat clusters using the FCluster node.
Based on scipy.cluster.hierarchy.linkage
Terminal
Description
In
Input Transmission
Out
dict containing the Linkage matrix and parameters, not a Transmission object
Parameters
Description
data_column
Numerical data column used for computing linkage matrix
method
linkage method
metric
metric for computing distance matrix
optimal_order
minimize distance between successive leaves, more intuitive visualization
Apply
Process data through this node
FCluster¶
“Form flat clusters from the hierarchical clustering defined by the given linkage matrix.”
Based on scipy.cluster.hierarchy.fcluster
Output Data Column (categorical): FCLUSTER_LABELS
Terminal
Description
Linkage
Linkage matrix, output from Linkage node.
Data
Input Transmission, usually the same input Transmission used for the Linkage node.
IncM (optional)
Inconsistency matrix, output from Inconsistent
Monocrit (optional)
Output from MaxIncStat or MaxInconsistent
Out
Transmission with clustering data that can be visualized using the Heatmap
Parameters: Exactly as described in scipy.cluster.hierarchy.fcluster
HistoryTrace output structure: Dict of all the parameters for this node, as well as the parameters used for creating the linkage matrix and the linkage matrix itself from the Linkage node.
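For orientation, a minimal sketch of the scipy calls these two nodes are based on, using hypothetical example data and parameters:
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(50, 100)  # 50 observations (e.g. traces), 100 features each

# Linkage node: compute the linkage matrix
Z = linkage(X, method='complete', metric='euclidean', optimal_ordering=True)

# FCluster node: form flat clusters from the linkage matrix
labels = fcluster(Z, t=4, criterion='maxclust')  # analogous to FCLUSTER_LABELS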
Inconsistent¶
MaxIncStat¶
MaxInconsistent¶
Transform¶
Nodes for transforming data
LDA¶
Perform Linear Discriminant Analysis. Uses sklearn.discriminant_analysis.LinearDiscriminantAnalysis
Terminal
Description
train_data
Input Transmission containing the training data
predict
Input Transmission containing data on which to predict
T
Transmission with Transformed data and decision function. Output columns outlined below:
_LDA_TRANSFORM: The transformed data, can be visualized with a Scatter Plot for instance
_LDA_DFUNC: Decision function (confidence scores). Can be visualized with a Heatmap
coef
Transmission with LDA Coefficients. Output columns outlined below:
classes: The categorical labels that were trained against
_COEF: LDA Coefficients (weight vectors) for the classes. Can be visualized with a Heatmap
means
Transmission with LDA Means. Output columns outlined below:
classes: The categorical labels that were trained against
_MEANS: LDA means for the classes. Can be visualized with a Heatmap
predicted (optional)
Transmission containing predicted class labels for the data. The class labels are placed in a column named LDA_PREDICTED_LABELS. The names of the class labels correspond to the labels from the training labels.
Parameter
Description
train_data
Single or multiple data columns that contain the input features.
labels
Data column containing categorical labels to train to
solver
svd: Singular Value Decomposition
lsqr: Least Squares solution
eigen: Eigen decomposition
shrinkage
Can be used with lsqr or eigen solvers.
shrinkage_val
shrinkage value if shrinkage is set to “value”
n_components
Number of components to output
tol
Tolerance threshold exponent. The value used is 10^<tol>
score
Displays mean score of the classification (read only)
predict_on
Single or multiple data columns that contain the data used for predicting on. Usually the same name as the data column(s) used for the training data. (optional)
HistoryTrace output structure: Dict of all the parameters for this node
Examples¶
Datasets¶
You can view examples of flowcharts in the demo dataset or one of the other datasets associated with the paper:
Demo dataset: https://doi.org/10.6084/m9.figshare.11370183
intestinalis dataset: https://doi.org/10.6084/m9.figshare.10289162
elegans dataset: https://doi.org/10.6084/m9.figshare.10287113
PVC-7 as a Mesmerize dataset: https://doi.org/10.6084/m9.figshare.10293041
Video Tutorials¶
Screenshots¶
Flowchart screenshots from the C. intestinalis dataset.
Z-score¶

Peak detection¶

Hierarchical clustering¶

Beeswarm¶
Used for visualization of data points using a pseudo-scatter and violin plots.
Layout

You can click on individual datapoints and view the associated data using the Datapoint Tracer. To show the Datapoint Tracer, in the menubar go to View -> Live datapoint tracer
Parameters¶
Parameter |
Description |
---|---|
Data columns |
Multi-select data columns to plot. They must have single numerical values, not arrays
|
Group based on |
Categorical data column used for grouping the data |
Datapoint tracer curve |
Data column, containing numerical arrays, that is shown in the Datapoint Tracer |
UUID column |
Column containing the UUIDs that correspond to the data in the selected data column(s) |
Apply all |
Apply the plot parameters and draw the plot |
Consoles¶
Currently the Heatmap, Scatter, Proportions plot and KShape follow a uniform structure allowing internal access to the data and plot axes. Refer to their Base API. For example, through their consoles you can access the Transmission containing data for the current plot, manually save the plot, etc.
Cross Correlation¶
Explore Cross-correlation functions of all curves from a sample. Normalized cross-correlations are computed using tslearn.cycc.normalized_cc
This is an interactive widget. You can click on the individual cells in the heatmap to view the individual curves, the cross-correlation function of the two curves, and the spatial localization of the ROI that they originate from.

Layout¶
Left: Lag or Maxima Matrix (see below) with thresholds applied and visualized as a heatmap. When you click on the individual cells it will open/update the Datapoint Tracer according to the two curves the cell corresponds to.
Top Center: Parameters.
Center: When you click on a cell in the heatmap you will see Curve 1 (x-axis of heatmap), and Curve 2 (y-axis of heatmap) and their cross-correlation function. The units are in seconds for all of these
Right: List of Samples. Click on a Sample to select it as the current sample.
Lag Matrix¶
Computed as follows:
A 2D array is created where each element is a cross-correlation function (represented by a 1D numerical array).
The x-distance (time) between zero and the global maxima of the cross-correlation function (called \(lag\)) is computed for each of these elements.
The 2D array of cross-correlation functions is reduced to a 2D array of these \(lag\) values.
The result is a matrix where each element is the x-distance between zero and the global maxima of the cross-correlation of the two curves the element represents.
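As a conceptual sketch (using plain numpy rather than the tslearn function used internally, and with hypothetical curves), the lag and maxima values for one pair of curves could be obtained like this:
import numpy as np

a = np.random.rand(500)
b = np.random.rand(500)

# normalized cross-correlation of the two curves (one simple normalization;
# the exact normalization used internally may differ)
a_n = (a - a.mean()) / np.linalg.norm(a - a.mean())
b_n = (b - b.mean()) / np.linalg.norm(b - b.mean())
cc = np.correlate(a_n, b_n, mode='full')

# x-values of the cross-correlation function, centered so that zero lag is in the middle
lags = np.arange(-(len(a) - 1), len(b))

max_ix = np.argmax(cc)
lag = lags[max_ix]    # element of the Lag Matrix (convert to seconds using the sampling rate)
maxima = cc[max_ix]   # element of the Maxima Matrix (see below)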
Maxima Matrix¶
Similar to computation of the Lag Matrix above, but instead of using the \(lag\) between zero and the global maxima it uses the y-value of the global maxima.
Parameters¶
Data column: The data column, containing numerical arrays, that are used as the input curves for computing cross-correlations.
Labels column: The labels column, containing categorical labels, that are used for the row and column labels in the heatmaps.
Pre-normalization: Option to perform 0 - 1 Normalization (same method as the Normalize node) or Z-Score of the input curves prior to computing their cross-correlation functions.
Compute all data: Apply the parameters and compute cross-correlation data for all Samples in the DataFrame of the input transmission.
Thresholds¶
Apply thresholds for \(lag\) and the maxima value. The colormap limits of the heatmap are set according to these thresholds, and all data below the thresholds are set to white on the heatmap (you can still click and explore them).
Thresholds are applied live onto the heatmap.
Datapoint Tracer¶
The Datapoint Tracer is attached to many plots, allowing you to interactively explore the data associated with the datapoints. You can explore the analysis history, the spatial localization of the ROI it originates from, associated numerical or categorical data, and view an additional numerical column (such as the raw trace).
The Datapoint Tracer is embedded in some plots, and in others you can open it by going to View -> Live Datapoint Tracer.
Video Tutorial¶
The latter half of this tutorial shows how the Heatmap plot can be used along with the Datapoint Tracer.
Part 5, 6 & 8 of the main tutorial series also show how the Datapoint Tracer can be used along with other types of plots: https://www.youtube.com/playlist?list=PLgofWiw2s4REPxH8bx8wZo_6ca435OKqg
Layout¶

Top right: Max Projection or Standard Deviation Projection of the image sequence.
Bottom right: Numerical data, based on the “DPT Curve column” that the user has specified in the plot controls. If exploring peak feature based data the temporal span of the peak will be highlighted.
Top left: Analysis log, an ordered list of operations and their parameters.
Bottom left: All other data associated with this datapoint (the data present in the other columns of the row this datapoint is present in, see Transmission)
Open in viewer button: Open the parent Sample of the current datapoint in the viewer.
Heatmap¶
Note
This plot can be saved in an interactive form, see Saving plots
Visualize numerical arrays in the form of a heatmap. Also used for visualization of Hierarchical clustering dendrograms. Datapoint Tracer is embedded.
Video Tutorial¶
The latter half of this tutorial shows how the Heatmap plot can be used along with the Datapoint Tracer.
Layout¶

Left: The heatmap. Clicking the heatmap highlights the selected row and updates the Datapoint Tracer. Right click on the heatmap to clear the selection highlight. You can zoom and pan both the heatmap and the legend using the tools above the plot area. The up and down keys on your keyboard can be used to move the current row selection.
Bottom left: Set the row order of the heatmap according to a categorical column.
Middle: Plot controls.
Very bottom: Status label - displays any issues that were raised while setting the plot data. Click on the status label to see more information.
Parameters¶
Data column: Data column, numerical arrays, that contain the data for the heatmap. Each row of this data column (a 1D array) is represented as a row on the heatmap.
Labels column: Column containing categorical labels that are used to create the row legend for the heatmap.
DPT curve column: Data column, containing numerical arrays, that is shown in the Datapoint Tracer.
Data colormap: Colormap used for representing the data in the heatmap. Default is ‘jet’.
Legend colormap: Colormap used for the row legend.
Live update from input transmission: If checked, this plot receives live updates from the flowchart.
Plot: Updates data input from the flowchart.
Save: Save the plot data and state in an interactive form
Load: Load a plot that has been saved as a “.ptrn” file.
Layout to visualize Hierarchical Clustering

This plot widget can also be used to visualize a dendrogram on top of a heatmap of data.
The differences are:
There are two legend bars
Left: Cluster label
Right: Corresponds to Labels column parameter.
You can also zoom/pan the dendrogram in addition to the legends and heatmap.
Sorting the heatmap rows is disabled because this wouldn’t make sense
Console¶
You can directly access the heatmap widget through the console. This is useful for plot customization and exporting with specific parameters.
Toggle the console’s visibility by clicking on the “Show/Hide Console” button at the bottom of the controls.
See also
Namespace¶
reference |
Description |
---|---|
this |
The higher-level HeatmapTracerWidget instance, i.e. the entire widget |
this.transmission |
Current input Transmission |
get_plot_area() |
Returns the lower-level Heatmap variant instance, basically the actual plot area |
get_plot_area().plot |
Returns the seaborn ClusterGrid instance containing the axes |
get_plot_area().fig |
Returns the matplotlib Figure instance |
Attributes of get_plot_area().plot
For example, the heatmap axes object can be retrieved through get_plot_area().plot.ax_heatmap. See the usage examples.
ax_heatmap |
Heatmap axes |
ax_row_dendrogram |
Row dendrogram axes |
ax_col_dendrogram |
Used for the legend |
cax |
Colorbar axes |
Examples¶
Export¶
See also
matplotlib API for: Figure.savefig, Figure.set_size_inches, Figure.get_size_inches
# Desired size (width, height)
size = (2.0, 2.5)

# Get the figure
fig = get_plot_area().fig

# original size to reset the figure after we save it
orig_size = fig.get_size_inches()

# Set the desired size
fig.set_size_inches(size)

# Save the figure as a png file with 1200 dpi
fig.savefig('/share/data/temp/kushal/amazing_heatmap.png', dpi=1200, bbox_inches='tight', pad_inches=0)

# Reset the figure size and draw()
fig.set_size_inches(orig_size)
get_plot_area().draw()
Note
The entire plot area might go gray after the figure is reset to the original size. I think this is a Qt-matplotlib issue. Just resize the window a bit and the plot will be visible again!
Warning
From my experience I have not been able to open clustermap SVG files saved with very high DPI (600+). Even with 32 cores & 128GB of RAM both inkscape and illustrator just hang ¯\_(ツ)_/¯. Try png or other formats.
x tick labels¶
If the data are in the time domain:
from mesmerize.analysis import get_sampling_rate
import numpy as np

# Get the sampling rate of the data
sampling_rate = get_sampling_rate(this.transmission)

# Number of frames currently displayed in the heatmap
num_frames = get_plot_area().data.shape[1]

# Set an appropriate interval
interval = 30  # This is in seconds, not frames

# Get the recording time in seconds
recording_time = int(num_frames / sampling_rate)

# Set the new ticks
get_plot_area().plot.ax_heatmap.set_xticks(np.arange(0, num_frames, interval * sampling_rate))

# Set the tick labels
# You can change the fontsize here
get_plot_area().plot.ax_heatmap.set_xticklabels(np.arange(0, recording_time, interval), fontdict={'fontsize': 4})

# Set a title for the x axis. You can change the fontsize here
get_plot_area().plot.ax_heatmap.set_xlabel('Time (seconds)', fontdict={'fontsize': 6})

# Draw the plot with these changes
get_plot_area().draw()
Note
You may need to resize the dock widget that the plot is present in to display the newly drawn plot, this is a Qt-matplotlib issue.
If the data are in the frequency domain:
from mesmerize.analysis import get_frequency_linspace
import numpy as np

# Get frequency linspace and Nyquist frequency
freqs, nf = get_frequency_linspace(this.transmission)

# Get the number of frequencies currently shown in the heatmap
num_freqs = get_plot_area().data.shape[1]

# The max frequency currently displayed in the heatmap
max_freq = freqs[num_freqs - 1]

# Set an appropriate interval
interval = 0.25  # This is in Hertz

# Set the new ticks
get_plot_area().plot.ax_heatmap.set_xticks(np.arange(0, num_freqs, (num_freqs * interval) / max_freq))

# Set the tick labels
# You can change the fontsize here
get_plot_area().plot.ax_heatmap.set_xticklabels(np.arange(0, max_freq, interval), fontdict={'fontsize': 4})

# Set a title for the x axis. You can change the fontsize here
get_plot_area().plot.ax_heatmap.set_xlabel('Frequency (Hertz)', fontdict={'fontsize': 6})

# Draw the plot with these changes
get_plot_area().draw()
Note
You may need to resize the dock widget that the plot is present in to display the newly drawn plot, this is a Qt-matplotlib issue.
Colorbar label¶
get_plot_area().plot.cax.set_title('norm. z-score', x=-0.25, y=0.65, fontdict={'fontsize': 6}, rotation=90)
get_plot_area().draw()
Axes visibility¶
Hide/show legend
get_plot_area().plot.ax_col_dendrogram.set_visible(False)
get_plot_area().draw()
Hide/show y axis (similar for x axis)
get_plot_area().plot.ax_heatmap.get_yaxis().set_visible(False)
get_plot_area().draw()
Hide/show colorbar
get_plot_area().plot.cax.set_visible(False)
get_plot_area().draw()
KShape¶
Perform KShape clustering.
I recommend reading the paper on it: Paparrizos, John, and Luis Gravano. “k-Shape: Efficient and Accurate Clustering of Time Series.” In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pp. 1855-1870. ACM, 2015.
This GUI uses the tslearn.clustering.KShape implementation.
See also
Note
This plot can be saved in an interactive form, see Saving plots
Layout

Left: KShape parameters and Plot parameters
Bottom left: Plot of a random sample of input data from a cluster.
Center: Plot of cluster mean and either confidence interval, standard deviation, or neither. Uses seaborn.lineplot
Right: Proportions plot. Exactly the same as Proportions.
Bottom Right: Console
KShape Parameters¶
The parameters and input data are simply fed to tslearn.clustering.KShape
Parameters outlined here are simply as they appear in the tslearn docs.
data_column: Input data for clustering.
n_clusters: Number of clusters to form.
max_iter: Maximum number of iterations of the k-Shape algorithm.
tol: Inertia variation threshold. If at some point, inertia varies less than this threshold between two consecutive iterations, the model is considered to have converged and the algorithm stops.
n_init: Number of times the k-Shape algorithm will be run with different centroid seeds. The final results will be the best output of n_init consecutive runs in terms of inertia.
random_state: Generator used to initialize the centers. If an integer is given, it fixes the seed. Defaults to the global numpy random number generator.
training subset: The subset of the input data that is used for training. After training, the predictions are fit on all the input data.
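For reference, a minimal sketch of the underlying tslearn call, with hypothetical example data and parameter values:
import numpy as np
from tslearn.clustering import KShape

X = np.random.rand(100, 500)  # 100 z-scored time series of length 500

ks = KShape(n_clusters=4, max_iter=100, tol=1e-6, n_init=3, random_state=0)
cluster_labels = ks.fit_predict(X)

centroids = ks.cluster_centers_  # what the "Show centers" option displays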
Plot Options¶
Plot cluster: The cluster from which to plot random samples of input data in the bottom left plot
Show centers: Show the centroids returned by the KShape model
Warning
There’s currently an issue where cluster centroids don’t appear to be indexed correctly. See https://github.com/rtavenar/tslearn/issues/114
max num curves: Maximum number of input data samples to plot
Error band: The type of data to show for the error band in the means plots.
set x = 0 at: The zero position of the means plots with respect to the cluster members in the plot.
Console¶
The console can be useful for formatting plots, inspecting the underlying data etc.
See also
Namespace¶
reference |
Description |
---|---|
this |
The higher-level KShape widget instance, i.e. the entire widget |
this.transmission |
Current input Transmission |
get_plot_means() |
Returns the means plot |
get_plot_raw() |
Returns the raw plot |
get_plot_proportions() |
Returns the proportions plot, which is an instance of Proportions Widget |
Examples¶
See also
Set axis ranges¶
Set equal x & y axis ranges for the means plots. Also removes the top & right spines.
from itertools import product

# Get the means plot
plot = get_plot_means()

# Get the indices of the subplots
ixs = product(range(plot.nrows), range(plot.ncols))

# Set the same x & y axis limits for all subplots
for ix in ixs:

    # The subplot axes
    ax = plot.axs[ix]

    # Set the y limits
    ax.set_ylim([-2, 15.5])

    # Set the x limits
    ax.set_xlim([-30, 1000])

    # Remove the top & right plot spines
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)

# Set a tight layout
plot.fig.tight_layout()

# Draw the plot
plot.draw()
Note
You may need to resize the dock widget that the plot is present in to display the newly drawn plot, this is a Qt-matplotlib issue.
x tick labels¶
Set the x tick labels in time units instead of frames
import numpy as np
from itertools import product
from mesmerize.analysis import get_sampling_rate

# Get the sampling rate of the data
sampling_rate = get_sampling_rate(this.transmission)

# Get the padded number of frames that are shown in the plots
num_frames = this.cluster_centers.shape[1]

# Set an appropriate interval
interval = 5  # This is in seconds, not frames

# Convert the padded frame number to time units
total_time = int(num_frames / sampling_rate)

ixs = product(range(4), range(3))

# Set these time units for all the means plots
# For the raw plots just remove the loop
for ix in ixs:
    # Get the axes
    ax = get_plot_means().axs[ix]

    # Set the new ticks
    ax.set_xticks(np.arange(0, num_frames, interval * sampling_rate))

    # Set the tick labels
    # You can change the fontsize here
    ax.set_xticklabels(np.arange(0, total_time, interval), fontdict={'fontsize': 4}, rotation=90)

    # Set a title for the x axis. You can change the fontsize here
    ax.set_xlabel('Time (seconds)', fontdict={'fontsize': 6})

    # Set ylabel as well
    ax.set_ylabel('z-score', fontdict={'fontsize': 6})

# Set a tight layout
get_plot_means().fig.tight_layout()

# Draw the plot with these changes
get_plot_means().draw()
Note
You may need to resize the dock widget that the plot is present in to display the newly drawn plot, this is a Qt-matplotlib issue.
Hide legend¶
Hide/show legend in the proportions plot
get_plot_proportions().ax.legend().set_visible(True)
get_plot_proportions().draw()
Export¶
You can export any of the plots with a specific size & DPI.
Replace the get_<plot>().fig on line 5 with the desired plot.
See also
matplotlib API for: Figure.savefig, Figure.set_size_inches, Figure.get_size_inches
1  # Desired size (width, height)
2  size = (7.0, 10.0)
3
4  # Get the figure
5  fig = get_<plot>().fig
6
7  # original size to reset the figure after we save it
8  orig_size = fig.get_size_inches()
9
10 # Set the desired size
11 fig.set_size_inches(size)
12
13 # Save the figure as a png file with 600 dpi
14 fig.savefig('/share/data/temp/kushal/amazing_shapes.png', dpi=600, bbox_inches='tight', pad_inches=0)
15
16 # Reset the figure size and draw
17 fig.set_size_inches(orig_size)
18 get_<plot>().draw()
Note
The entire plot area might go gray after the figure is reset to the original size. I think this is a Qt-matplotlib issue. Just resize the window a bit and the plot will be visible again!
Peak Editor¶
Visualize and edit detected peaks & bases. This GUI is accessible through the PeakDetect node.
Video Tutorial¶
Usage¶
Optimize your peaks/bases detection through the datastreams that feed into the Derivative and Normalize terminals of the parent PeakDetect node. For example, play with filtering parameters for the ButterWorth node or SavitzkyGolay node.
Optimize amplitude thresholds of the parent PeakDetect node.
Disconnect from the flowchart (see below).
Edit your peaks/bases
Click “Send all curves to flowchart” (see below) to set the edited data as the output of the parent PeakDetect node.
Layout¶

Bottom¶
List of curves from the Transmission inputted to the Curve or PB_Input terminal. See PeakDetect node
Top¶
Mode buttons: Set the current interactive mode for mouse events.
View: Just view, pan, and zoom the plot.
Drag: Click and drag peaks/bases along the curve.
Add Peak/Base: Click to add a peak/base onto the curve.
Delete: Delete a peak or base.
Dot Size: Move the slider to change the size of the dots representing peaks/bases.
Set Peaks at relative maxima: Not implemented yet.
Save Curve: Save the current curve. A curve is auto-saved when you switch to another one.
Clear Curve: Not implemented.
Disconnect from flowchart: Disconnect the GUI from changes in the flowchart. Edits to the peaks/bases will be lost if this GUI is not disconnected while changes occur in the flowchart.
Send all curves to flowchart: Set the edited data as the output of the parent PeakDetect node
Open in viewer: Open the parent Sample of this curve in a Viewer.
Right¶
History Tree Widget
Proportions¶
Compare proportions of categorical variables between different groups using bar charts.

Parameter |
Description |
---|---|
X column |
DataFrame column containing the categorical labels used for grouping the data
Data in each X column sums to 100% if Show percentages is checked
|
Y column |
DataFrame column containing the categorical labels that are counted for each group |
Show percentages |
When unchecked shows raw counts |
bar plot |
Visualize as bar plots |
heatmap |
Visualize as a heatmap |
Update Plot |
Update plot |
Swap X-Y |
Swap X & Y columns |
Save |
|
Load |
|
Export CSV |
Export the data for the current plot to a csv file. |
Show Console |
Show/hide the console |
Scatter¶
Interactive scatter plot
Video Tutorial¶
From 13:04 onward this tutorial shows how you can perform PCA and visualize the transformed data using the Scatter Plot.
Layout¶

Left: Controls
Control
Data Column
Data column containing numerical arrays of size 2, X & Y values [x, y]
X
Data column containing only X values
Y
Data column containing only Y values
log x
Use \(log_{10}\) of the X data
log y
Use \(log_{10}\) of the Y data
Colors based on
Set spot colors based on categorical labels in this column
Choose colormap
Colormap for the spot colors
Shapes based on
Set spot shapes based on categorical labels in this column
UUID Column
Column containing UUIDs that correspond to the plot data
DPT Curve column
Data column containing numerical arrays to show in the Datapoint Tracer
Spot size
Size of the spots
Alpha
Not implemented yet
Live update…
Update the plot with live inputs from the flowchart
Update Plot
Update the plot according to the input data from the flowchart and the parameters
Save
Load
Export to ma…
Not implemented yet
Export data
Not implemented yet
Below the plot: Status label that displays plotting issues. Click the label to see more information.
Right: Datapoint Tracer. Click datapoints in the plot to set the Datapoint Tracer.
Bottom: Console
Simple Plot¶
Just a very basic time-series plot. It will plot all the data in the selected data column.

SpaceMap¶
Note
This plot can be saved in an interactive form, see Saving plots
Spatially map a categorical variable onto a projection of a Sample’s image sequence

Note
Image produced from the following dataset: Garner, Aleena (2014): In vivo calcium imaging of layer 4 cells in the mouse using sinusoidal grating stimuli. CRCNS.org. http://dx.doi.org/10.6080/K0C8276G
Video Tutorial¶
This tutorial shows how you can view a space map of the tuning of cells. The Space Map plot itself is shown after 3:38.
Controls¶
Parameter |
Description |
---|---|
Patch labels |
Categorical column to use for the patch labels |
Image Colormap |
Colormap for the image |
Patches Colormap |
Colormap for the patches |
Projection |
Show the image as a “Max” or “Standard Deviation” projection |
Fill Patches |
Fill the patches |
Line width |
Line width of the patches |
Alpha |
Alpha level of the patches |
Samples |
Click on the sample to plot |
Save |
|
Load |
Load a plot that has been saved as a “.ptrn” file. |
Console¶
See also
Namespace¶
reference |
Description |
---|---|
this |
The SpaceMapWidget instance, i.e. the entire widget |
this.transmission |
Current input Transmission |
get_plot() |
Returns the plot area |
get_plot().fig |
Returns the matplotlib Figure instance |
get_plot().ax |
Returns the Axes for the current plot matplotlib Axes |
Examples¶
Export¶
See also
matplotlib API for: Figure.savefig, Figure.set_size_inches, Figure.get_size_inches
# Desired size (width, height)
size = (6, 5)

# Get the figure
fig = get_plot().fig

# original size to reset the figure after we save it
orig_size = fig.get_size_inches()

# Set the desired size
fig.set_size_inches(size)

# Save the figure as a png file with 600 dpi
fig.savefig('/share/data/temp/kushal/spacemap.png', dpi=600, bbox_inches='tight', pad_inches=0)

# Reset to original size and draw
fig.set_size_inches(orig_size)
get_plot().draw()
Note
The entire plot area might go gray after the figure is reset to the original size. I think this is a Qt-matplotlib issue. Just resize the window a bit and the plot will be visible again!
Legend Title¶
See also
matplotlib API for matplotlib.axes.Axes.get_legend
get_plot().ax.get_legend().set_title('New Title')
get_plot().draw()
Hide Axis Borders¶
See also
matplotlib API for matplotlib.axes.Axes.axis
get_plot().ax.axis('off')
get_plot().draw()
Stimulus Tuning¶
Get the stimulus tuning and tuning curves of neurons.
The output from this plot node can also be used for various things downstream, such as hierarchical clustering to sort your cells based on their tuning, visualizing the tuning of each neuron using a SpaceMap plot, and much more. See the video tutorial for examples.
Video Tutorial¶
This tutorial uses part of the CRCNS pvc-7 dataset from the Allen Institute to get stimulus tuning curves, perform hierarchical clustering and dimensionality reduction.
Layout¶

Controls¶
Parameter |
Description |
---|---|
data_column |
Data column used to determine the stimulus tuning of the cells |
method |
Use one of |
start offset |
Use a start offset for the stimulus periods (can be either positive or negative) |
end offset |
Use an end offset for the stimulus periods (can be either positive or negative) |
DPT column |
Data column that is shown in the Datapoint Tracer. |
Set |
Set the stimulus extraction parameters defined above. |
Save |
Usage¶
Set the desired parameters for data_column, method, start offset, end offset and DPT column.
Click Set.
Choose a Sample from the list.
Click on an ROI number to view the tuning curve, and corresponding spatial localization and curve in the Datapoint Tracer.
You can use the output of this plot node for further analysis, as shown in the tutorial video.
Welcome Window¶
The Welcome Window is the first window that you are presented with when you launch Mesmerize.
Use the large buttons for opening new Viewer or Flowchart windows.
Open a project using the button, or double-click a recent project from the list.
Create a new project using the button.
You basically have access to all objects in the Mesmerize instance through this console.
See also

Project Structure¶
A Mesmerize project is encapsulated within a single directory. It contains the following:
config file - contains configuration data, such as roi type columns, stimulus type columns, and custom columns with their datatypes.
Warning
Do not manually modify the config file
Directories
Dir |
Purpose |
---|---|
dataframes |
Contains a file storing the project dataframe, root.dfr, and backups. A new backup is created every time a new Sample is added to the project. Restore a backup by renaming it to “root.dfr”. |
images |
Contains the image sequences and work environment data for all samples in the project |
batches |
Used for storing batches used by the Batch Manager if you wish. |
flowcharts |
Used for storing .fc flowchart files that save the layout of nodes in a flowchart. |
plots |
Used for storing .ptrn interactive plot files. |
See also
Warning
Do not manually modify the data under the images or curves directories
Consoles¶
A Python console is embedded in many parts of Mesmerize. You can use it to perform very specific operations, further automate tasks, save an analysis object, format plots, etc.
The console is accessible in many windows through View -> Console. Within the console namespace this
refers to the window. For example this
refers to the Project Browser Window instance in the Project Browser’s console. A list of useful object references and helper functions are listed when you open most consoles.
You can run entire scripts within the console. You can also use import statements to import libraries that you have in your Python environment.
Keyboard controls:
Execute: Shift + Enter
New line: Enter
Scroll up through history: Page Up
Scroll down through history: Page Down
The history is stored in ~/.mesmerize
Saving plots¶
Some plots allow you to save them in an interactive form, along with the plot data and the plot state as a “.ptrn” file. If you save the file in the “plots” directory of your project it will be listed in the Welcome Window when you open your project.
This is currently possible with the following plots: Heatmap, KShape, Proportions, Scatter, and SpaceMap
System Configuration¶
Set system configuration options
This window is accessible through the Welcome Window menubar at Configuration -> System Configuration.

Maximum number of threads to use: The maximum number of threads that Mesmerize is allowed to use; this includes processes started by the Batch Manager, various analysis processes in the flowchart, and the viewer as well.
Python call: Many parts of Mesmerize, such as the Batch Manager use external processes to run a python script. This setting sets which python call should be used. python3
should work for Linux & Mac OSX. We’ve found that this needs to be set to python
to work within Anaconda environments on Windows.
Use CUDA: Use CUDA acceleration if you have a GPU with CUDA cores. You must have pycuda and scikit-cuda (as well as the nvidia CUDA toolkit) installed. CUDA acceleration isn’t used much currently.
Work dir: Many parts of Mesmerize use a working directory for temporary files. If you have a fast filesystem you can use that for this purpose.
Pre-run commands (large text entry): Mesmerize runs some computationally intensive tasks in subprocesses. These commands are run prior to the python script that performs the task.
If you are using Mesmerize in a virtual environment or conda environment you will need to activate the environment, so you must add the line
source /path_to_venv/bin/activate
or
conda activate <env_name>
to the pre-run commands.
If you are using an Intel CPU you should get optimal performance by installing the Math Kernel Library (MKL) and adding
export MKL_NUM_THREADS=1
to the pre-run commands.
If you are using an AMD CPU make sure you have OpenBLAS installed for optimal performance and add
export OPENBLAS_NUM_THREADS=1
to the pre-run commands. You may get better performance by installing the AMD specific libraries.
Plugins directory: If you have a plugins directory, enter its path here.
Nodes¶
The easiest way to create a new node is to create a class that inherits from CtrlNode. You can place this class in one of the existing modules in mesmerize/pyqtgraphCore/flowchart/library
Become familiar with the Transmission object before creating a node. Almost all nodes work with a Transmission object for storing data. Make sure to conform to the conventions for naming of data columns and categorical columns.
Simple node¶
The simplest type of node performs an operation on a user-specified data column and doesn’t take any parameters.
Basic structure
class MyNode(CtrlNode):
    """Doc String that is shown when node is clicked on"""
    nodeName = 'MyNode'
    uiTemplate = <list of tuples, see below>

    def processData(self, transmission: Transmission):
        self.t = transmission.copy()  #: input to this node
        # .. do stuff to the Transmission DataFrame
        params = <dict of analysis params>

        # log the analysis that was done
        self.t.history_trace.add_operation('all', 'mynode', params)

        # Send the output
        return self.t
Required class attributes:
nodeName: (str) The name prefix used when instances of this node are created in the flowchart
uiTemplate: (list) List of UI element tuples (see the uiTemplate section below)
If the node only has one input and one output terminal it is sufficient to create a processData method that performs the node’s analysis operation(s).
Example¶
class Derivative(CtrlNode):
    """Return the Derivative of a curve."""
    nodeName = 'Derivative'
    uiTemplate = [('data_column', 'combo', {}),
                  ('Apply', 'check', {'checked': False, 'applyBox': True})
                  ]

    # If there is only one input and one output terminal, processData will
    # always have a single argument which is just the input transmission,
    # i.e. the output from the previous node.
    def processData(self, transmission: Transmission):
        # the input transmission
        self.t = transmission

        # If a comboBox widget named 'data_column' is specified in the
        # uiTemplate, you can update its contents using the following method.
        # This will populate the comboBox with all the data columns from the
        # input transmission and select the input data column as the
        # output data column from the previous node.
        self.set_data_column_combo_box()

        # Check if the Apply checkbox is checked
        if self.ctrls['Apply'].isChecked() is False:
            return

        # Make a copy of the input transmission so we can modify it to create an output
        self.t = transmission.copy()

        # By convention output columns are named after the node's name and in all caps
        # Columns containing numerical data have a leading underscore
        output_column = '_DERIVATIVE'

        # Perform this node's operation
        self.t.df[output_column] = self.t.df[self.data_column].apply(np.gradient)

        # Set the transmission's `last_output` attribute to the name of the output column
        # This is used by the next node to know what the last output data was
        self.t.last_output = output_column

        # Create a dict of parameters that this node used
        # Usually a dict that captures the state of the uiTemplate
        # The transmission's `last_unit` attribute is the data units of the data
        # in the output column (i.e. `t.last_output`). Change it only if the data units change
        params = {'data_column': self.data_column,
                  'units': self.t.last_unit
                  }

        # Add a log of this node's operation to the transmission's `HistoryTrace` instance
        # Nodes usually perform an operation on all datablocks, so pass 'all' to the data_block_id argument
        # By convention the operation name is the name of the node in lowercase letters
        self.t.history_trace.add_operation(data_block_id='all', operation='derivative', parameters=params)

        # return the modified transmission instance, which is then the output of this node
        return self.t
Complex node¶
For a more complex node with multiple inputs and/or outputs you will need to explicitly specify the terminals when instantiating the parent CtrlNode
and create a simple override of the process() method.
Format of the dict specifying the node’s terminals:
{
    <terminal name (str)>: {'io': <'in' or 'out'>},
    <another terminal name (str)>: {'io': <'in' or 'out'>},
    <another terminal name (str)>: {'io': <'in' or 'out'>},
    ...
}
Override the process() method to simply pass all kwargs to a processData() method and return the output. The processData() method must return a dict. This dict must have keys that correspond to the specified output terminals; the values of these keys are the outputs from the respective terminals.
Here is a trimmed down example from the LDA node:
class LDA(CtrlNode):
    """Linear Discriminant Analysis, uses sklearn"""
    nodeName = "LDA"
    uiTemplate = [('train_data', 'list_widget', {'selection_mode': QtWidgets.QAbstractItemView.ExtendedSelection,
                                                 'toolTip': 'Column containing the training data'}),
                  ('train_labels', 'combo', {'toolTip': 'Column containing training labels'}),
                  ('solver', 'combo', {'items': ['svd', 'lsqr', 'eigen']}),
                  ('shrinkage', 'combo', {'items': ['None', 'auto', 'value']}),
                  ('shrinkage_val', 'doubleSpin', {'min': 0.0, 'max': 1.0, 'step': 0.1, 'value': 0.5}),
                  ('n_components', 'intSpin', {'min': 2, 'max': 1000, 'step': 1, 'value': 2}),
                  ('tol', 'intSpin', {'min': -50, 'max': 0, 'step': 1, 'value': -4}),
                  ('score', 'lineEdit', {}),
                  ('predict_on', 'list_widget', {'selection_mode': QtWidgets.QAbstractItemView.ExtendedSelection,
                                                 'toolTip': 'Data column of the input "predict" Transmission\n'
                                                            'that is used for predicting from the model'}),
                  ('Apply', 'check', {'applyBox': True, 'checked': False})
                  ]

    def __init__(self, name, **kwargs):
        # Specify the terminals with a dict
        CtrlNode.__init__(self, name, terminals={'train': {'io': 'in'},
                                                 'predict': {'io': 'in'},

                                                 'T': {'io': 'out'},
                                                 'coef': {'io': 'out'},
                                                 'means': {'io': 'out'},
                                                 'predicted': {'io': 'out'}
                                                 },
                          **kwargs)
        self.ctrls['score'].setReadOnly(True)

    # Very simple override
    def process(self, **kwargs):
        return self.processData(**kwargs)

    def processData(self, train: Transmission, predict: Transmission):
        self.t = train.copy()  #: Transmission instance containing the training data with the labels
        self.to_predict = predict.copy()  #: Transmission instance containing the data to predict after fitting on the training data

        # function from mesmerize.analysis.utils
        dcols, ccols, ucols = organize_dataframe_columns(self.t.df.columns)

        # Set available options for training data & labels
        self.ctrls['train_data'].setItems(dcols)
        self.ctrls['train_labels'].setItems(ccols)

        dcols, _, _ = organize_dataframe_columns(self.to_predict.df.columns)
        # Set available data column options for predicting on
        self.ctrls['predict_on'].setItems(dcols)

        # Process further only if Apply is checked
        if not self.ctrls['Apply'].isChecked():
            return

        # Get the user-set parameters
        train_column = self.ctrls['train_data'].currentText()

        # ... get other params
        n_components = self.ctrls['n_components'].value()

        # ... do stuff

        # This node outputs separate transmissions that are all logged
        self.t.history_trace.add_operation('all', 'lda', params)
        self.t_coef.history_trace.add_operation('all', 'lda', params)
        self.t_means.history_trace.add_operation('all', 'lda', params)

        # the `to_predict` transmission is logged differently
        self.to_predict.history_trace.add_operation('all', 'lda-predict', params_predict)

        # dict for organizing this node's outputs
        # The keys MUST be the same as those specified for this node's output terminals
        out = {'T': self.t,
               'coef': self.t_coef,
               'means': self.t_means,
               'predicted': self.to_predict
               }

        return out
uiTemplate¶
Specify the uiTemplate attribute as a list of tuples.
One tuple per UI element with the following structure:
(<name (str)>, <type (str)>, <dict of attributes to set>)
Examples:
('dist_metric', 'combo', {'items': ['euclidean', 'wasserstein', 'bah'], 'toolTip': 'distance metric to use'})
('n_components', 'intSpin', {'min': 2, 'max': 10, 'value': 2, 'step': 1, 'toolTip': 'number of components'})
('data_columns', 'list_widget', {'selection_mode': QtWidgets.QAbstractItemView.ExtendedSelection})
The name can be anything. Accepted types and accepted attributes are outlined below
widget type |
attributes that can be set |
---|---|
intSpin |
min (int): minimum value allowed in the spinbox
max (int): maximum value allowed
step (int): step size
value (int): default value
|
doubleSpin |
min (float): minimum value allowed in the spinbox
max (float): maximum value allowed
step (float): step size
value (float): default value
|
check |
checked (bool): default state of the checkBox
applyBox (bool): Whether this is an “Apply checkbox”
|
radioBtn |
checked (bool): default state of this radioButton
|
combo |
items (list): default list of items that will be set in the comboBox
|
list_widget |
items (list): default list of items that will be set in the list_widget
selection_mode: One of the accepted QAbstractItemView selection modes
|
lineEdit |
text (str): default text in the line edit
placeHolder (str): placeholder text
readOnly (bool): set as read only
|
plainTextEdit |
text (str): default text in the text edit
placeHolder (str): placeholder text
|
label |
text (str): default text
|
button |
text (str): default text on the button
checkable (bool): whether this button is checkable
|
color |
Does not take any attributes |
All UI widget types outlined above take ‘toolTip’ as an attribute which can be used to display tooltips
Plots¶
The easiest way to create a plot module is by subclassing the BasePlotWidget. You could also subclass the abstract base if you need to define all the common functionality differently.
General Design¶
This shows how you can design a plot using the SpaceMapPlot as a simple example. It will generally consist of a class for the main plot area, plot control, and the plot window which contains the controls and plot area.
Plot Area¶
A class which holds the actual plot, could be a matplotlib widget or pyqtgraph plot widget for example. In the SpaceMapPlot this is simply a subclass of the pyqtgraph matplotlib widget with a few more attributes and a helper method. The error_label attribute is simply a QLabel used for displaying a plot error summary and is handled by the exceptions_label decorator from qdialogs.
Plot Controls¶
A class which manages the plot controls. Generally useful to use a QDockWidget for this and design the actual GUI layout using QtDesigner. The WidgetRegistry provides a simple way to package the plot control values (plot parameters) into a dict.
Register a widget to the registry using the WidgetRegistry instance’s register() method. The getter method corresponds to the widget’s method that returns the value of the widget (such as text or a number); this value is set in the parameters dict that is created when widget_registry.get_state() is called. Correspondingly, the setter method is the widget’s method that is used to set a value to the widget, and is used when saved plots are restored. In essence, the setter and getter must be interoperable.
The Space Map plot uses a sig_changed class attribute that simply emits when any of the widgets are changed. This is later used in the main plot window to update the plot.
A fill_widget() method is useful for populating the controls in the dock widget when the input data to the plot window changes.
In the Space Map widget, get_state() and set_state() simply wrap the corresponding methods from the WidgetRegistry instance.
Plot Window¶
Subclass from QMainWindow and BasePlotWidget. It is mandatory to specify a drop_opts class attribute of type list. This list contains the names of any widgets in the dict returned from the WidgetRegistry that should be excluded when saving the plot. This should be used if you are using data types that are not JSON serializable, however it is rarely necessary. Support for drop_opts may be removed in the future.
In general, specifying the methods described below should be sufficient to create a saveable plot. If you need finer control of the data structure for saving/opening plots you can subclass from the abstract base class.
__init__¶
Setting things up, connection signals, etc. Useful to have a console dock widget.
set_update_live()¶
A method that interacts with a “live update” checkbox in the plot controls.
set_input()¶
Set the input transmission for this plot if it is in “live update” mode or if the plot instance is new (has not had input data previously).
Useful to have a BasePlotWidget.signal_blocker decorator so that the plot doesn’t constantly update while the new data comes in, since it could cause plot options to change etc.
fill_control_widget()¶
Organize the plot options that are available to the user and set the control widgets.
Useful to have a BasePlotWidget.signal_blocker decorator here as well for same reasons as described above.
update_plot()¶
This is the core of the plot. Use the input transmission and the user-selected plot parameters to draw the plot in the plot area. It generally interacts with the Plot Area instance. You can use the get_state() method of the control widget’s WidgetRegistry to conveniently get a dict of all the user-selected plot parameters.
Useful to have an exceptions_label or present_exceptions decorator from the qdialogs module. The exceptions_label provides a less annoying way to present exceptions that occurred when updating the plot.
get_plot_opts()¶
Usually just returns the dict from the widget registry containing all user-set plot parameters.
set_plot_opts()¶
Usually just calls the widget registry’s set_state() method to set the plot parameters from a dict.
Useful to have a BasePlotWidget.signal_blocker decorator. In general you would use the BasePlotWidget.open_plot() method to open a saved plot and it takes care of updating the plot after the input transmission and plot parameters are set.
show_exception_info()¶
Called when the exceptions_label is clicked. Opens a QMessageBox to show the entire stack trace.
Viewer Modules¶
Viewer modules appear as either QDockWidgets or QWidgets to the user. They must consist of a main ModuleGUI
class which inherits from either QDockWidget or QWidget. They can utilize any additional python modules, classes, etc.
Instructions¶
Create a plugins directory if you don’t have one. Set this plugins directory in the System Configuration. This directory can contain as many custom modules as you want. All python modules within the plugins directory are automatically imported.
Download the __init__.py and place it within the plugins directory.
Your directory structure & System Configuration should look something like this. A __pycache__ directory will automatically be created when you launch mesmerize, this is normal.

Create the main module file for your custom module. In this example we’ll create example_plugin.py and place it in the plugins directory that we specified in the System Configuration, alongside the provided __init__.py. You can create as many plugin modules as you want; they can be named as you wish, but they must use the structure outlined below. In addition to this main module file, you can create a separate subdirectory to house any modules or files associated with this module. You can create Qt templates using Qt Creator, convert them to .py template files using pyuic5, and use them for your custom module.

Basic Structure
from PyQt5 import QtWidgets

module_name = 'Example Module'


# You must define `module_name`
# This is the name that will be displayed in the "Plugins" menu of the Viewer Window.
# You can use this to reference the ModuleGUI instance through the Viewer Console via ``get_module(<module_name>)``


# The main GUI class MUST be named ModuleGUI.
# You can have other classes and more GUIs however ModuleGUI is the one that the Viewer Window directly calls.

class ModuleGUI(QtWidgets.QDockWidget):
    # The Viewer MainWindow will pass its Viewer instance that
    # can be used to interact with the viewer and work environment.
    def __init__(self, parent, viewer_instance):
        QtWidgets.QDockWidget.__init__(self, parent)
        self.setWindowTitle('Example Viewer Module')
        self.setFloating(True)  # Must be floating

        self.viewer_instance = viewer_instance

        # container widget
        self.dockwidget_area = QtWidgets.QWidget()

        # create a vertical layout
        self.vlayout = QtWidgets.QVBoxLayout(self.dockwidget_area)

        # add a button
        self.button = QtWidgets.QPushButton(self.dockwidget_area)
        self.button.setText('button')
        self.button.clicked.connect(self.callback_button_clicked)
        self.vlayout.addWidget(self.button)

        # add a label
        self.label = QtWidgets.QLabel(self.dockwidget_area)
        self.vlayout.addWidget(self.label)

        # set the qdockwidget's layout
        self.setLayout(self.vlayout)
        self.setWidget(self.dockwidget_area)

        # self.show()

    # change the label text when the button is clicked
    def callback_button_clicked(self):
        self.label.setText('button clicked!')
The module will be accessible through the Viewer Window’s “Plugins” menu. The names in the plugins menu will correspond to the aforementioned module_name variable.
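Once the plugin is loaded, its ModuleGUI instance can be retrieved and manipulated from the Viewer Console; a quick sketch using the example class above:

>>> example_mod = get_module('Example Module')
>>> example_mod.label.setText('set from the console')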
Common¶
- mesmerize.common.get_proj_config(proj_path: Optional[str] = None) configparser.RawConfigParser [source]¶
- Parameters
proj_path – Full project path
Utils¶
Some frequently used utility functions
- mesmerize.common.utils.make_workdir(prefix: str = '') str [source]¶
Make a workdir within the mesmerize_tmp directory of the workdir specified in the configuration. The name of the created workdir is the date & time of its creation. You can add a prefix to this name.
- Parameters
prefix – Prefix for the workdir name
- Returns
full workdir path
- Return type
str
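A quick usage sketch (the prefix is arbitrary; the parent directory comes from the workdir set in your System Configuration):

from mesmerize.common.utils import make_workdir

# creates a dated directory under the configured workdir's mesmerize_tmp, with the given prefix
wd = make_workdir(prefix='my_analysis_')
print(wd)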
- mesmerize.common.utils.make_runfile(module_path: str, savedir: str, args_str: Optional[str] = None, filename: Optional[str] = None, pre_run: Optional[str] = None, post_run: Optional[str] = None) str [source]¶
Make an executable bash script. Used for running python scripts in external processes.
- Parameters
module_path (str) – absolute module path
args_str (str) – str of args that is directly passed with the python command in the bash script
savedir (Optional[str]) – working directory
filename (Optional[str]) – optional, specific filename for the script
pre_run (Optional[str]) – optional, str to run before the module is run
post_run (Optional[str]) – optional, str to run after module has run
- Returns
path to the shell script that can be run
- Return type
str
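A minimal sketch of creating a runfile for an external process; the paths and arguments here are hypothetical:

from mesmerize.common.utils import make_runfile

sh_path = make_runfile(
    module_path='/path/to/my_script.py',  # hypothetical python script
    savedir='/path/to/workdir',           # hypothetical working directory
    args_str='--arg1 value1',
)
# sh_path points to a bash script that can then be run in an external process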
- class mesmerize.common.utils.HdfTools[source]¶
Functions for saving and loading HDF5 data
- static save_dataframe(path: str, dataframe: pandas.core.frame.DataFrame, metadata: Optional[dict] = None, metadata_method: str = 'json', raise_meta_fail: bool = True)[source]¶
Save DataFrame to hdf5 file along with a meta data dict.
Meta data dict can either be serialized with json and stored as a str in the hdf5 file, or recursively saved into hdf5 groups if the dict contains types that hdf5 can deal with. Experiment with both methods and see what works best
Currently the hdf5 method can work with these types: [str, bytes, int, float, np.int, np.int8, np.int16, np.int32, np.int64, np.float, np.float16, np.float32, np.float64, np.float128, np.complex].
If it encounters an object that is not of these types it will store whatever that object’s __str__() method returns if raise_meta_fail is False, else it will raise an exception.
- Parameters
path (str) – path to save the file to
dataframe (pd.DataFrame) – DataFrame to save in the hdf5 file
metadata (Optional[dict]) – Any associated meta data to store along with the DataFrame in the hdf5 file
metadata_method (str) – method for storing the metadata dict, either ‘json’ or ‘recursive’
raise_meta_fail (bool) – raise an exception if recursive metadata saving encounters an unsupported object. If False, it will save the unsupported object’s __str__() return value
- static load_dataframe(filepath: str) Tuple[pandas.core.frame.DataFrame, Optional[dict]] [source]¶
Load a DataFrame along with meta data that were saved using
HdfTools.save_dataframe
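A short sketch of the save/load round trip; the file path and contents are arbitrary:

import pandas as pd
from mesmerize.common.utils import HdfTools

df = pd.DataFrame({'roi_ix': [0, 1, 2], 'response': [1.2, 3.4, 5.6]})
meta = {'sample_id': 'animal_1', 'fps': 10.0}

# store the DataFrame and serialize the metadata dict as json
HdfTools.save_dataframe('/path/to/example.h5', df, metadata=meta, metadata_method='json')

# returns the DataFrame and the metadata dict
df_loaded, meta_loaded = HdfTools.load_dataframe('/path/to/example.h5')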
QDialogs¶
Decorators for Qt Dialog GUIs used throughout Mesmerize
- mesmerize.common.qdialogs.present_exceptions(title: str = 'error', msg: str = 'The following error occurred.')[source]¶
Use to catch exceptions and present them to the user in a QMessageBox warning dialog. The traceback from the exception is also shown.
This decorator can be stacked on top of other decorators.
Example:
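A minimal sketch of decorating a widget method; the class and method names here are hypothetical:

from PyQt5 import QtWidgets
from mesmerize.common.qdialogs import present_exceptions

class MyWidget(QtWidgets.QWidget):
    @present_exceptions('Load error', 'The following error occurred while loading the file')
    def load_file(self, path: str):
        # any exception raised in here is shown to the user in a warning dialog
        raise ValueError(f'hypothetical failure for {path}')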
- Parameters
title – Title of the dialog box
msg – Message to display above the traceback in the dialog box
help_func – A helper function which is called if the user clicked the “Help” button
- mesmerize.common.qdialogs.exceptions_label(label: str, exception_holder: Optional[str] = None, title: str = 'error', msg: str = 'The following error occured')[source]¶
Use a label to display an exception instead of a QMessageBox
- Parameters
label – name of a QLabel instance
exception_holder – name of an exception_holder attribute where the exception message is stored. This can be used to view the whole exception when the label is clicked on for example.
title – title supplied for the QMessageBox (if used later)
msg – message supplied for the QMessageBox (if used later)
- mesmerize.common.qdialogs.use_open_file_dialog(title: str = 'Choose file', start_dir: Optional[str] = None, exts: Optional[List[str]] = None)[source]¶
Use to pass a file path, for opening, into the decorated function using QFileDialog.getOpenFileName
- Parameters
title – Title of the dialog box
start_dir – Directory that is first shown in the dialog box.
exts – List of file extensions to set the filter in the dialog box
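A sketch of how the chosen file path is passed into the decorated method; the class and extension filter are hypothetical:

from PyQt5 import QtWidgets
from mesmerize.common.qdialogs import use_open_file_dialog

class MyWidget(QtWidgets.QWidget):
    @use_open_file_dialog('Choose tiff file', None, ['*.tiff', '*.tif'])
    def open_file(self, path, *args, **kwargs):
        print(f'user chose: {path}')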
- mesmerize.common.qdialogs.use_save_file_dialog(title: str = 'Save file', start_dir: Optional[str] = None, ext: Optional[str] = None)[source]¶
Use to pass a file path, for saving, into the decorated function using QFileDialog.getSaveFileName
- Parameters
title – Title of the dialog box
start_dir – Directory that is first shown in the dialog box.
ext – File extension to set the filter in the dialog box
- mesmerize.common.qdialogs.use_open_dir_dialog(title: str = 'Open directory', start_dir: Optional[str] = None)[source]¶
Use to pass a dir path, to open, into the decorated function using QFileDialog.getExistingDirectory
- Parameters
title – Title of the dialog box
start_dir – Directory that is first shown in the dialog box.
Example:
@use_open_dir_dialog('Select Project Directory', '')
def load_data(self, path, *args, **kwargs):
    my_func_to_do_stuff_and_load_data(path)
Viewer Core¶
Video Tutorial¶
ViewerWorkEnv¶
This object stores the data that the Viewer interacts with.
- class mesmerize.viewer.core.ViewerWorkEnv(imgdata: Optional[mesmerize.viewer.core.data_types.ImgData] = None, sample_id='', UUID=None, meta=None, stim_maps=None, roi_manager=None, roi_states=None, comments='', origin_file='', custom_cols=None, history_trace: Optional[list] = None, additional_data: Optional[dict] = None, misc: Optional[dict] = None, **kwargs)[source]¶
- _UUID¶
UUID; if opened from a project Sample, it refers to the ImgUUID
- __init__(imgdata: Optional[mesmerize.viewer.core.data_types.ImgData] = None, sample_id='', UUID=None, meta=None, stim_maps=None, roi_manager=None, roi_states=None, comments='', origin_file='', custom_cols=None, history_trace: Optional[list] = None, additional_data: Optional[dict] = None, misc: Optional[dict] = None, **kwargs)[source]¶
A class that encapsulates the main work environment objects (img sequence, ROIs, and ROI associated curves) of the viewer. Allows a work environment to be easily spawned from different types of sources and saved in different ways, regardless of the type of original data source.
- __weakref__¶
list of weak references to the object (if defined)
- static _organize_meta(meta: dict, origin: str) dict [source]¶
Organize input meta data dict into a uniform structure
- Parameters
meta – meta data dict, origin from a json file for example
origin – name of the origin source of the meta data, such as a program or microscope etc.
- Returns
dict organized with keys that are used throughout Mesmerize
- classmethod from_mesfile(mesfile_object: mesmerize.viewer.core.mesfile.MES, img_reference: str)[source]¶
Return an instance of the work environment with a MesmerizeCore.ImgData object created from the seq returned by MES.load_img (MesmerizeCore.FileInput module), along with any stimulus map that the user may have specified.
- Parameters
mesfile_object – MES object, created from .mes file
img_reference – image reference to load, see
mesmerize.viewer.core.mesfile.MES.get_image_references()
- classmethod from_pickle(pickle_file_path: str, tiff_path: Optional[str] = None)[source]¶
Get pickled image data from a pickle file & image sequence from a npz or tiff. Used after motion correction & to view a sample from a project DataFrame. Create ImgData class object (See MesmerizeCore.DataTypes) and return instance of the work environment.
- Parameters
pickle_file_path – full path to the pickle containing image metadata, stim maps, and roi_states
tiff_path – str of the full path to a tiff file containing the image sequence
- classmethod from_tiff(path: str, method: str, meta_path: Optional[str] = None, axes_order: Optional[str] = None, meta_format: Optional[str] = None)[source]¶
Return instance of work environment with ImgData.seq set from the tiff file.
- Parameters
path – path to the tiff file
method – one of ‘imread’, ‘asarray’, or ‘asarray-multi’. Refers to usage of either tifffile.imread or tifffile.asarray. ‘asarray-multi’ will load multi-page tiff files.
meta_path – path to a file containing meta data
meta_format – meta data format, must correspond to the name of a function in viewer.core.organize_meta
axes_order – Axes order as a 3 or 4 letter string for 2D or 3D data respectively. Axes order is assumed to be “txy” or “tzxy” if not specified.
- history_trace¶
history log
- imgdata: mesmerize.viewer.core.data_types.ImgData¶
ImgData instance
- isEmpty¶
Return True if the work environment is empty
- static load_mesfile(path: str) mesmerize.viewer.core.mesfile.MES [source]¶
Just passes the path of a .mes file to the constructor of class MES in MesmerizeCore.FileInput. Loads .mes file & constructs MES obj from which individual images & their respective metadata can be loaded to construct viewer work environments using the classmethod viewerWorkEnv.from_mesfile.
- Parameters
path – full path to a single .mes file.
- roi_manager¶
reference to the back-end ROI Manager that is currently in use
- sample_id¶
SampleID, if opened from a project Sample
- stim_maps¶
Stimulus maps
- to_pandas(proj_path: str, modify_options: Optional[dict] = None) list [source]¶
Used for saving the work environment as a project Sample.
- Parameters
proj_path – Root path of the current project
modify_options –
- Returns
list of dicts that each correspond to a single curve that can be appended as rows to the project dataframe
- to_pickle(dir_path: str, filename: Optional[str] = None, save_img_seq=True, UUID=None) str [source]¶
Package the current work environment’s ImgData object (see MesmerizeCore.DataTypes) and any parameters, such as those for motion correction, into a pickle & image seq array. Used for batch motion correction and for saving the current sample to the project. The image sequence is saved as a tiff and other information about the image is saved in a pickle.
ImgData¶
ViewerUtils¶
The Viewer is usually not interacted with directly from modules outside of the viewer (such as viewer modules). They instead use the ViewerUtils class, which includes helper functions and a reference to the viewer.
- class mesmerize.viewer.core.ViewerUtils(viewer_reference: mesmerize.pyqtgraphCore.imageview.ImageView.ImageView)[source]¶
Some utility functions for interfacing viewer.core.ViewerWorkEnv with the pyqtgraphCore.ImageView widget
- __init__(viewer_reference: mesmerize.pyqtgraphCore.imageview.ImageView.ImageView)[source]¶
- discard_workEnv(clear_sample_id=False)[source]¶
Ask the user if they want to discard their work environment. If Yes, calls _clear_workEnv()
- set_statusbar(msg)[source]¶
Set the status bar message in the viewer window.
- Parameters
msg – text to display in the status bar
- viewer¶
reference to the pyqtgraph ImageView widget instance (viewer)
- work_env¶
ViewerWorkEnv instance
Mesfile¶
- class mesmerize.viewer.core.mesfile.MES(filename: str)[source]¶
Handles opening .mes files and organizing the images and meta data. The load_img() method returns a 3D array (dims are [time, cols, rows]) of the image sequence and its associated meta data.
Usage: Create a MES instance by passing the path of your mes file, example:
mesfile = MES('/path/to/mesfile/experiment_Feb_31.mes')
Call the get_image_references() method to get a list of references for images that can be loaded.
To load an image that is available in the instance, just pass one of the references from get_image_references() to the load_img method:
img_array, meta_dict = mesfile.load_img('IF0001_0001')
- __weakref__¶
list of weak references to the object (if defined)
Examples¶
These examples can be run in the Viewer Console.
Working with meta data¶
# view meta data
>>> get_meta()
{'origin': 'AwesomeImager', 'version': '4107ff58a0c3d4d5d3c15c3d6a69f8798a20e3de', 'fps': 10.0, 'date': '20190426_152034', 'vmin': 323, 'vmax': 1529, 'orig_meta': {'source': 'AwesomeImager', 'version': '4107ff58a0c3d4d5d3c15c3d6a69f8798a20e3de', 'level_min': 323, 'stims': {}, 'time': '152034', 'date': '20190426', 'framerate': 10.0, 'level_max': 1529}}
# manually set meta data entries
>>> get_meta()['fps'] = 30.0
Open image¶
Use the Viewer Core API to open any arbitrary image
This example loads an image stored using numpy.save(), but this is applicable to images stored in any format that can eventually be represented as a numpy array in python. For example, you could also load image files stored in HDF5 format and load the numpy array that represents your image sequence.
import numpy as np

# clear the viewer work environment
clear_workEnv()

a = np.load('/path_to_image.npy')

# check what the axes order is
a.shape

# (1000, 512, 512) # for example
# looks like this is in [t, x, y]
# this can be transposed so we get [x, y, t]
# ImgData takes either [x, y, t] or [x, y, t, z] axes order

# Define a meta data dict
meta = \
    {
        "origin": "Tutorial example",
        "fps": 10.0,
        "date": "20200629_171823",
        "scanner_pos": [0, 1, 2, 3, 4, 5, 6]
    }

# Create ImgData instance
imgdata = ImgData(a.T, meta)  # use a.T to get [x, y, t]

# Create a work environment instance
work_env = ViewerWorkEnv(imgdata)

# Set the current Viewer Work Environment from this new instance
vi.viewer.workEnv = work_env

# Update the viewer with the new work environment
# this MUST be run whenever you replace the viewer work environment (the previous line)
update_workEnv()
Image data¶
Image sequences are simply numpy arrays. For example, extract the image sequence between frames 1000 and 2000.
See also
# Get the current image sequence
seq = get_image()

# Trim the image sequence
trim = seq[:, :, 1000:2000]

# Set the viewer work environment image sequence to the trimmed one
vi.viewer.workEnv.imgdata.seq = trim

# Update the GUI with the new work environment
update_workEnv()
View analysis log¶
View the analysis log, such as batch manager processing history.
>>> get_workEnv().history_trace
[{'caiman_motion_correction': {'max_shifts_x': 32, 'max_shifts_y': 32, 'iters_rigid': 1, 'name_rigid': 'Does not matter', 'max_dev': 20, 'strides': 196, 'overlaps': 98, 'upsample': 4, 'name_elas': 'a1_t2', 'output_bit_depth': 'Do not convert', 'bord_px': 5}}, {'cnmfe': {'Input': 'Current Work Environment', 'frate': 10.0, 'gSig': 10, 'bord_px': 5, 'min_corr': 0.9600000000000001, 'min_pnr': 10, 'min_SNR': 1, 'r_values_min': 0.7, 'decay_time': 2, 'rf': 80, 'stride': 40, 'gnb': 8, 'nb_patch': 8, 'k': 8, 'name_corr_pnr': 'a8_t1', 'name_cnmfe': 'a1_t2', 'do_corr_pnr': False, 'do_cnmfe': True}}, {'cnmfe': {'Input': 'Current Work Environment', 'frate': 10.0, 'gSig': 10, 'bord_px': 5, 'min_corr': 0.9600000000000001, 'min_pnr': 14, 'min_SNR': 1, 'r_values_min': 0.7, 'decay_time': 4, 'rf': 80, 'stride': 40, 'gnb': 8, 'nb_patch': 8, 'k': 8, 'name_corr_pnr': '', 'name_cnmfe': 'a1_t2', 'do_corr_pnr': False, 'do_cnmfe': True}}]
Running scripts¶
You can use the Script Editor to run scripts in the Viewer console for automating tasks such as batch creation. It basically allows you to use the viewer console more conveniently with a text editor. The execution environment of the viewer console and script editor are identical.
Some examples are provided for the caiman modules and stimulus mapping.
Viewer Modules¶
Batch Manager¶
- class mesmerize.viewer.modules.batch_manager.ModuleGUI(parent, run_batch: Optional[list] = None, testing: bool = False)[source]¶
GUI for the Batch Manager
- add_item(module: str, input_workEnv: mesmerize.viewer.core.viewer_work_environment.ViewerWorkEnv, input_params: dict, name: str = '', info: dict = '') uuid.UUID [source]¶
Add an item to the currently open batch
- Parameters
module – The module to run from /batch_run_modules.
input_workEnv – Input workEnv that the module will use
input_params – Input params that the module will use. Depends on your subclass of BatchRunInterface.process() method
name – A name for the batch item
info – A dictionary with any metadata information to display in the scroll area label.
- Returns
UUID of the added item
- del_item()[source]¶
Delete the currently selected item from the batch and any corresponding dependents of the item’s output
- df¶
pandas.DataFrame that stores a “database” of information on the batch
- get_item_index(u: Union[uuid.UUID, str]) int [source]¶
Get DataFrame index from UUID
- Parameters
u – UUID or str representing UUID
- Returns
numerical index of the DataFrame corresponding to the UUID
- get_selected_items() Tuple[List[int], List[uuid.UUID]] [source]¶
Returns a list of numeric indices and uuids for the currently selected items
- load_item_input(viewers: Union[mesmerize.viewer.main_window.MainWindow, collections.UserList], r: pandas.core.series.Series = None, UUID: uuid.UUID = None, *args)[source]¶
Pass either the batch DataFrame row or the UUID of the item whose input should be loaded into a viewer
- Parameters
viewers – ViewerWindow or list of ImageView
r – Row of batch DataFrame corresponding to the selected item
UUID – UUID of the item to load input from
- load_item_output(module: str, viewers: Union[mesmerize.viewer.main_window.MainWindow, mesmerize.pyqtgraphCore.imageview.ImageView.ImageView, collections.UserList], UUID: uuid.UUID)[source]¶
Calls subclass of BatchRunInterface.show_output()
- Parameters
module – The module name under /batch_run_modules that the batch item is from
viewers – ViewerWindow, ImageView, or list of ViewerWindows
UUID – UUID of the item to load output from
- process_batch(start_ix: Union[int, uuid.UUID] = 0, clear_viewers=False)[source]¶
Process everything in the batch by calling subclass of BatchRunInterface.process() for all items in batch
- Parameters
start_ix – Either DataFrame index (int) or UUID of the item to start from.
clear_viewers – Clear work environments in all viewers that are open
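A rough sketch of adding a batch item from a viewer script. The get_batch_manager() helper, the module name string, and the params dict below are assumptions; check the provided caiman script examples for the exact names and required parameters:

# assumption: the console/script environment exposes the batch manager instance
bm = get_batch_manager()

# hypothetical parameter dict; the required keys depend on the batch module
params = {'example_param': 1}

u = bm.add_item(
    module='CNMFE',               # assumed module name under /batch_run_modules
    input_workEnv=get_workEnv(),  # use the current work environment as the input
    input_params=params,
    name='my_batch_item',
)
print(u)  # UUID of the added item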
Tiff Module¶
Uses the tifffile package created by Christoph Gohlke: https://pypi.org/project/tifffile/
Can be used with scripts within Mesmerize for loading tiff files without using the API of Viewer Core
- class mesmerize.viewer.modules.tiff_io.ModuleGUI(parent, viewer_reference)[source]¶
- check_meta_path() bool [source]¶
check if a file exists with the same name and the meta data extension specified by the selected meta data format
- load(tiff_path: str, method: str, axes_order: Optional[str] = None, meta_path: Optional[str] = None, meta_format: Optional[str] = None) mesmerize.viewer.core.viewer_work_environment.ViewerWorkEnv [source]¶
Load a tiff file along with associated meta data
- Parameters
tiff_path – path to the tiff file
meta_path – path to the json meta data file
method – one of “asarray”, “asarray-multi”, or “imread”. “asarray” and “asarray-multi” use tifffile.asarray(); “asarray-multi” is for multi-page tiffs. “imread” uses tifffile.imread()
axes_order – axes order, examples: txy, xyt, tzxy, xytz etc.
meta_format – name of function from viewer.core.organize_meta that should be used to organize the meta data.
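A sketch of scripted use; the module name string passed to get_module() is an assumption, and the file path is arbitrary:

# assumption: the Tiff module is registered under the name 'tiff_io'
tio = get_module('tiff_io')

# load() returns a ViewerWorkEnv built from the tiff and its meta data
work_env = tio.load(
    '/path/to/movie.tiff',
    method='imread',
    axes_order='txy',
)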
Caiman Motion Correction¶
Front-end for Caiman NoRMCorre parameters entry
Can be used with scripts for adding batch items.
See also
CNMF¶
Front-end for Caiman CNMF parameter entry
Can be used with scripts for adding batch items.
See also
- class mesmerize.viewer.modules.cnmf.ModuleGUI(parent, viewer_reference)[source]¶
CNMFE¶
Front-end for Caiman CNMFE parameter entry
Can be used with scripts for adding batch items.
See also
- class mesmerize.viewer.modules.cnmfe.ModuleGUI(parent, viewer_reference)[source]¶
- get_params(item_type: str, group_params: bool = False) dict [source]¶
Get a dict of the set parameters for the given item_type. If the work environment was loaded from a motion correction batch item, the bord_px value is included in the dict.
- Parameters
item_type – one of corr_pnr or cnmfe
MESc Importer¶
MESc importer for exploring & importing image sequences from .mesc HDF5 files.
ModuleGUI¶
- class mesmerize.viewer.modules.femtonics_mesc.ModuleGUI(parent, viewer_ref)[source]¶
instance of MEScNavigator
- plot_widgets¶
list of plot widgets
- set_file(path: str, *args)[source]¶
Create an h5py file handle from the .mesc file at the given path. *args are not used; they are just there for compatibility with the decorator.
- Parameters
path (str) – path to the .mesc file
args – not used
ROI Manager¶
Video Tutorial¶
ModuleGUI¶
The GUI QDockWidget that communicates with the back-end managers
- class mesmerize.viewer.modules.roi_manager.ModuleGUI(parent, viewer_reference)[source]¶
The GUI front-end for the ROI Manager module
- manager¶
The back-end manager instance.
Delete the currently selected ROI
- start_backend(type_str: str)[source]¶
Choose backend, one of the Manager classes in the managers module.
- start_manual_mode()[source]¶
Start in manual mode. Creates a new back-end manager instance (Uses ManagerManual)
- package_for_project() dict [source]¶
Gets all the ROI states so that they can be packaged along with the rest of the work environment to be saved as a project Sample
Managers¶
The back-end managers that are used by the ROI Manager ModuleGUI
The managers hold instances of ROIs in an instance of ROIList
AbstractBaseManager¶
Subclass this if you want to make your own Manager Back-end.
- class mesmerize.viewer.modules.roi_manager_modules.managers.AbstractBaseManager(parent, ui, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
Base ROI Manager
- __init__(parent, ui, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
Set the common attributes
- Parameters
parent – The ModuleGUI QDockWidget instance
ui – The ui of the ModuleGUI QDockWidget instance,
viewer_interface – A ViewerUtils instance for accessing the Viewer the parent QDockWidget belongs to
- roi_list¶
The ROIList instance that stores the list of ROIs
- abstract add_roi(*args, **kwargs)[source]¶
Method for adding an ROI, must be implemented in subclass
- get_all_states() dict [source]¶
Get the ROI states for all ROIs in self.roi_list so that they can be restored. The appropriate manager is instantiated based on the ‘roi_type’ key of the returned dict
- get_plot_item() mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem [source]¶
Get the viewer plot item that is associated to these ROIs
- __del__()[source]¶
Cleanup of all ROIs in the list and deletes the manager instance. Used when switching modes.
- __weakref__¶
list of weak references to the object (if defined)
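A minimal sketch of a custom back-end; the class name and ROI creation logic are hypothetical, and a real manager would append ROI instances to self.roi_list:

from mesmerize.viewer.modules.roi_manager_modules.managers import AbstractBaseManager

class ManagerCustom(AbstractBaseManager):
    """Hypothetical manager back-end"""
    def __init__(self, parent, ui, viewer_interface):
        # sets the common attributes described above
        super().__init__(parent, ui, viewer_interface)

    def add_roi(self, *args, **kwargs):
        # create an ROI instance here (e.g. a ScatterROI) and append it to self.roi_list
        raise NotImplementedError('sketch only')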
ManagerManual¶
- class mesmerize.viewer.modules.roi_manager_modules.managers.ManagerManual(parent, ui, viewer_interface)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.managers.AbstractBaseManager
The Manager for the Manual mode
- __init__(parent, ui, viewer_interface)[source]¶
Set the common attributes
- Parameters
parent – The ModuleGUI QDockWidget instance
ui – The ui of the ModuleGUI QDockWidget instance,
viewer_interface – A ViewerUtils instance for accessing the Viewer the parent QDockWidget belongs to
- add_roi(shape: str) mesmerize.viewer.modules.roi_manager_modules.roi_types.ManualROI [source]¶
Add an ROI to the list
- Parameters
shape – either “PolyLineROI” or “EllipseROI”
- import_from_imagej(path: str)[source]¶
Uses read-roi package created by Hadrien Mary. https://pypi.org/project/read-roi/
- Parameters
path – Full path to the ImageJ ROIs zip file
ManagerScatterROI¶
- class mesmerize.viewer.modules.roi_manager_modules.managers.ManagerScatterROI(parent, ui, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.managers.AbstractBaseManager
Manager for unmoveable ROIs drawn using scatterplots
- __init__(parent, ui, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
Set the common attributes
- Parameters
parent – The ModuleGUI QDockWidget instance
ui – The ui of the ModuleGUI QDockWidget instance,
viewer_interface – A ViewerUtils instance for accessing the Viewer the parent QDockWidget belongs to
- add_roi(curve: numpy.ndarray, xs: numpy.ndarray, ys: numpy.ndarray, metadata: Optional[dict] = None, dfof_data: Optional[numpy.ndarray] = None, spike_data: Optional[numpy.ndarray] = None) mesmerize.viewer.modules.roi_manager_modules.roi_types.ScatterROI [source]¶
Add a single ROI
xs and ys arguments are 1D numpy arrays.
- Parameters
curve – curve data, 1-D array, y values/intensity values
xs – x-values for the scatter plot to spatially illustrate the ROI
ys – corresponding y-values for the scatter plot to spatially illustrate the ROI
metadata – Any metadata for this ROI
- Returns
ScatterROI object
ManagerVolROI¶
- class mesmerize.viewer.modules.roi_manager_modules.managers.ManagerVolROI(parent, ui, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.managers.ManagerScatterROI
Manager for 3D ROIs
- __init__(parent, ui, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
Set the common attributes
- Parameters
parent – The ModuleGUI QDockWidget instance
ui – The ui of the ModuleGUI QDockWidget instance,
viewer_interface – A ViewerUtils instance for accessing the Viewer the parent QDockWidget belongs to
ManagerVolCNMF¶
- class mesmerize.viewer.modules.roi_manager_modules.managers.ManagerVolCNMF(parent, ui, viewer_interface)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.managers.ManagerVolROI
Manager for 3D CNMF based ROIs
- __init__(parent, ui, viewer_interface)[source]¶
Set the common attributes
- Parameters
parent – The ModuleGUI QDockWidget instance
ui – The ui of the ModuleGUI QDockWidget instance,
viewer_interface – A ViewerUtils instance for accessing the Viewer the parent QDockWidget belongs to
- add_all_components(cnmf_data_dict: dict, input_params_dict: dict)[source]¶
Add all components from a CNMF(E) output. Arguments correspond to CNMF(E) outputs
- Parameters
cnmf_data_dict – CNMF results data directly from the HDF5 file
input_params_dict – dict of input params, from the batch manager
calc_raw_min_max – Calculate raw min & max for each ROI
- Returns
ManagerCNMFROI¶
- class mesmerize.viewer.modules.roi_manager_modules.managers.ManagerCNMFROI(parent, ui, viewer_interface)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.managers.AbstractBaseManager
Manager for ROIs imported from CNMF or CNMFE outputs
- add_all_components(cnmf_data_dict, input_params_dict, calc_raw_min_max=False)[source]¶
Add all components from a CNMF(E) output. Arguments correspond to CNMF(E) outputs
- Parameters
cnmf_data_dict – CNMF results data directly from the HDF5 file
input_params_dict – dict of input params, from the batch manager
calc_raw_min_max – Calculate raw min & max for each ROI
- Returns
ROI List¶
Used for holding instance of ROIs
- class mesmerize.viewer.modules.roi_manager_modules.roi_list.ROIList(ui, roi_types: type, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
A list for holding ROIs of one type
- __init__(ui, roi_types: type, viewer_interface: mesmerize.viewer.core.common.ViewerUtils)[source]¶
Instantiate
- Parameters
ui – The ui from the parent ModuleGUI, used to interact with the ROI list widget etc.
roi_types – The type of ROI that this list will hold
viewer_interface – ViewerUtils instance for interacting with the parent Viewer
- list_widget¶
ROI list widget
- list_widget_tags¶
Tags list widget
- vi¶
ViewerUtils instance
- current_index¶
Current index (int)
- previous_index¶
Previous index (int)
- append(roi: Union[mesmerize.viewer.modules.roi_manager_modules.roi_types.ScatterROI, mesmerize.viewer.modules.roi_manager_modules.roi_types.ManualROI, mesmerize.viewer.modules.roi_manager_modules.roi_types.VolMultiCNMFROI], add_to_list_widget: bool = True)[source]¶
Add an ROI instance to the list
- __delitem__(key)[source]¶
Delete an ROI from the list and cleanup from the viewer, reindex the colors etc.
- reindex_colormap(random_shuffle=False)[source]¶
Reindex the colors so they sequentially follow the HSV colormap
- __getitem__(item) Union[mesmerize.viewer.modules.roi_manager_modules.roi_types.ManualROI, mesmerize.viewer.modules.roi_manager_modules.roi_types.ScatterROI] [source]¶
Get an item (ROI) from the list
- highlight_roi(roi: Union[mesmerize.viewer.modules.roi_manager_modules.roi_types.ManualROI, mesmerize.viewer.modules.roi_manager_modules.roi_types.ScatterROI])[source]¶
Highlight an ROI in white, both the spatial visualization and the curve
- slot_show_all_checkbox_clicked(b: bool)[source]¶
Show all ROIs in the viewer overlay visualization and curves
- _show_graphics_object(ix: int)[source]¶
Show the ROI at the passed index in the viewer overlay visualization
- _hide_graphics_object(ix: int)[source]¶
Hide the ROI at the passed index in the viewer overlay visualization
- plot_manual_roi_regions()[source]¶
Plot the ROI curves from the regions of all ManualROI instances in the list
- set_pg_roi_plot(ix: int)[source]¶
Plot the ROI curve from the region of the ManualROI instance at the passed index
- update_roi_defs_from_configuration()[source]¶
Update ROI_DEFs in the Tags list from the project configuration
- __weakref__¶
list of weak references to the object (if defined)
ROI Types¶
A list of these is held by an instance of ROIList
AbstractBaseROI¶
- class mesmerize.viewer.modules.roi_manager_modules.roi_types._AbstractBaseROI(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict])[source]¶
Abstract base class defining an ROI that works with the ROIList and ROI Managers. Inherit from this or BaseROI to make a new ROI class
- abstract __init__(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict])[source]¶
Minimum required attributes
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
- abstract get_roi_graphics_object() PyQt5.QtWidgets.QGraphicsObject [source]¶
Get the QGraphicsObject used for visualization of the spatial localization of the ROI
- abstract set_roi_graphics_object(*args, **kwargs)[source]¶
Set the QGraphicsObject used for visualization of the spatial localization of the ROI
- abstract set_original_color(color)[source]¶
Set the original color for this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- abstract get_color() numpy.ndarray [source]¶
Get the current color of this ROI
- Returns
1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- Return type
np.ndarray
- abstract set_color(color, *args, **kwargs)[source]¶
Set the current color of this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- abstract set_tag(roi_def: str, tag: str)[source]¶
Set a tag for the passed roi_def
- Parameters
roi_def – The ROI_DEF that should be tagged
tag – The tag to label for the passed ROI_DEF/ROI Type
- abstract classmethod from_state(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: dict)[source]¶
Restore this ROI from a state
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
- __weakref__¶
list of weak references to the object (if defined)
BaseROI¶
Subclass from this if you want to make your own ROI Type.
- class mesmerize.viewer.modules.roi_manager_modules.roi_types.BaseROI(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict] = None, metadata: Optional[dict] = None)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.roi_types._AbstractBaseROI
A base class that is used by ManualROI and CNMFEROI. Inherit from this to make a new ROI class.
- __init__(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict] = None, metadata: Optional[dict] = None)[source]¶
Instantiate common attributes
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
- get_roi_graphics_object() PyQt5.QtWidgets.QGraphicsObject [source]¶
Get the QGraphicsObject used for visualization of the spatial localization of the ROI
- set_roi_graphics_object(*args, **kwargs)[source]¶
Set the QGraphicsObject used for visualization of the spatial localization of the ROI
- set_original_color(color)[source]¶
Set the original color for this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- get_color()[source]¶
Get the current color of this ROI
- Returns
1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- Return type
np.ndarray
- set_color(color: Union[numpy.ndarray, str], *args, **kwargs)[source]¶
Set the current color of this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- set_tag(roi_def: str, tag: str)[source]¶
Set a tag for the passed roi_def
- Parameters
roi_def – The ROI_DEF that should be tagged
tag – The tag to label for the passed ROI_DEF/ROI Type
- classmethod from_state(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: dict)[source]¶
Restore this ROI from a state
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
ManualROI¶
- class mesmerize.viewer.modules.roi_manager_modules.roi_types.ManualROI(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, roi_graphics_object: mesmerize.pyqtgraphCore.graphicsItems.ROI.ROI, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.roi_types.BaseROI
A class for manually drawn ROIs
- __init__(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, roi_graphics_object: mesmerize.pyqtgraphCore.graphicsItems.ROI.ROI, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None)[source]¶
- get_roi_graphics_object() mesmerize.pyqtgraphCore.graphicsItems.ROI.ROI [source]¶
Get the QGraphicsObject used for visualization of the spatial localization of the ROI
- set_roi_graphics_object(graphics_object: mesmerize.pyqtgraphCore.graphicsItems.ROI.ROI)[source]¶
Set the QGraphicsObject used for visualization of the spatial localization of the ROI
- classmethod from_state(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: dict)[source]¶
Restore this ROI from a state
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
- reset_color()¶
Reset the color of this ROI back to the original color
- set_original_color(color)¶
Set the original color for this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- get_color()¶
Get the current color of this ROI
- Returns
1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- Return type
np.ndarray
- set_color(color: Union[numpy.ndarray, str], *args, **kwargs)¶
Set the current color of this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- set_tag(roi_def: str, tag: str)¶
Set a tag for the passed roi_def
- Parameters
roi_def – The ROI_DEF that should be tagged
tag – The tag to label for the passed ROI_DEF/ROI Type
- add_to_viewer()¶
Add this ROI to the viewer.
- remove_from_viewer()¶
Remove this ROI from the viewer
ScatterROI¶
- class mesmerize.viewer.modules.roi_manager_modules.roi_types.ScatterROI(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict] = None, curve_data: Optional[numpy.ndarray] = None, xs: Optional[numpy.ndarray] = None, ys: Optional[numpy.ndarray] = None, metadata: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None, **kwargs)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.roi_types.BaseROI
A class for unmoveable ROIs drawn using scatter points
- __init__(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: Optional[dict] = None, curve_data: Optional[numpy.ndarray] = None, xs: Optional[numpy.ndarray] = None, ys: Optional[numpy.ndarray] = None, metadata: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None, **kwargs)[source]¶
- Parameters
curve_plot_item –
view_box –
state –
curve_data – 1D numpy array of y values
kwargs –
- set_roi_graphics_object(xs: numpy.ndarray, ys: numpy.ndarray)[source]¶
Set the QGraphicsObject used for visualization of the spatial localization of the ROI
- get_roi_graphics_object() mesmerize.pyqtgraphCore.graphicsItems.ScatterPlotItem.ScatterPlotItem [source]¶
Get the QGraphicsObject used for visualization of the spatial localization of the ROI
- classmethod from_state(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: dict, **kwargs)[source]¶
Restore this ROI from a state
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
- reset_color()¶
Reset the color of this ROI back to the original color
- set_original_color(color)¶
Set the original color for this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- get_color()¶
Get the current color of this ROI
- Returns
1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- Return type
np.ndarray
- set_color(color: Union[numpy.ndarray, str], *args, **kwargs)¶
Set the current color of this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- set_tag(roi_def: str, tag: str)¶
Set a tag for the passed roi_def
- Parameters
roi_def – The ROI_DEF that should be tagged
tag – The tag to label for the passed ROI_DEF/ROI Type
- add_to_viewer()¶
Add this ROI to the viewer.
- remove_from_viewer()¶
Remove this ROI from the viewer
VolCNMF¶
- class mesmerize.viewer.modules.roi_manager_modules.roi_types.VolCNMF(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, cnmf_idx: Optional[int] = None, curve_data: Optional[numpy.ndarray] = None, contour: Optional[dict] = None, state: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None, metadata: Optional[dict] = None, zlevel: int = 0)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.roi_types.ScatterROI
3D ROI for CNMF data
- __init__(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, cnmf_idx: Optional[int] = None, curve_data: Optional[numpy.ndarray] = None, contour: Optional[dict] = None, state: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None, metadata: Optional[dict] = None, zlevel: int = 0)[source]¶
- Parameters
curve_plot_item –
view_box –
state –
curve_data – 1D numpy array of y values
kwargs –
- set_roi_graphics_object()[source]¶
Set the QGraphicsObject used for visualization of the spatial localization of the ROI
- set_zlevel(z: int)[source]¶
Set the z-level of the ROI to correspond with the z-level of the image.
Different from setZValue!!
- get_roi_graphics_object() mesmerize.pyqtgraphCore.graphicsItems.ScatterPlotItem.ScatterPlotItem ¶
Get the QGraphicsObject used for visualization of the spatial localization of the ROI
- reset_color()¶
Reset the color of this ROI back to the original color
- set_original_color(color)¶
Set the original color for this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- get_color()¶
Get the current color of this ROI
- Returns
1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- Return type
np.ndarray
- set_color(color: Union[numpy.ndarray, str], *args, **kwargs)¶
Set the current color of this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- set_tag(roi_def: str, tag: str)¶
Set a tag for the passed roi_def
- Parameters
roi_def – The ROI_DEF that should be tagged
tag – The tag to label for the passed ROI_DEF/ROI Type
- add_to_viewer()¶
Add this ROI to the viewer.
- remove_from_viewer()¶
Remove this ROI from the viewer
- classmethod from_state(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: dict, **kwargs)¶
Restore this ROI from a state
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
CNMFROI¶
- class mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, cnmf_idx: Optional[int] = None, curve_data: Optional[numpy.ndarray] = None, contour: Optional[dict] = None, state: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None, metadata: Optional[dict] = None, **kwargs)[source]¶
Bases:
mesmerize.viewer.modules.roi_manager_modules.roi_types.ScatterROI
A class for ROIs imported from CNMF(E) output data
- get_roi_graphics_object() mesmerize.pyqtgraphCore.graphicsItems.ScatterPlotItem.ScatterPlotItem ¶
Get the QGraphicsObject used for visualization of the spatial localization of the ROI
- set_roi_graphics_object(xs: numpy.ndarray, ys: numpy.ndarray)¶
Set the QGraphicsObject used for visualization of the spatial localization of the ROI
- reset_color()¶
Reset the color of this ROI back to the original color
- set_original_color(color)¶
Set the original color for this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- get_color()¶
Get the current color of this ROI
- Returns
1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- Return type
np.ndarray
- set_color(color: Union[numpy.ndarray, str], *args, **kwargs)¶
Set the current color of this ROI
- Parameters
color – 1D numpy array of 4 floating point numbers (range 0 - 255) in RGBA format, [R, G, B, A]
- set_tag(roi_def: str, tag: str)¶
Set a tag for the passed roi_def
- Parameters
roi_def – The ROI_DEF that should be tagged
tag – The tag to label for the passed ROI_DEF/ROI Type
- add_to_viewer()¶
Add this ROI to the viewer.
- remove_from_viewer()¶
Remove this ROI from the viewer
- classmethod from_state(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, state: dict, **kwargs)¶
Restore this ROI from a state
- Parameters
curve_plot_item – The plot item that is used for displaying the curves in the viewer
view_box – ViewBox containing the image sequence, used for overlaying the ROIs on top of the image
state – ROI state, used for restoring the ROIs. Pass None if not restoring an ROI from a state dict
- __init__(curve_plot_item: mesmerize.pyqtgraphCore.graphicsItems.PlotDataItem.PlotDataItem, view_box: mesmerize.pyqtgraphCore.graphicsItems.ViewBox.ViewBox.ViewBox, cnmf_idx: Optional[int] = None, curve_data: Optional[numpy.ndarray] = None, contour: Optional[dict] = None, state: Optional[dict] = None, spike_data: Optional[numpy.ndarray] = None, dfof_data: Optional[numpy.ndarray] = None, metadata: Optional[dict] = None, **kwargs)[source]¶
Instantiate attributes.
- Type
curve_data: np.ndarray
- Parameters
curve_data – 1D numpy array of y values
cnmf_idx – original index of the ROI from cnmf idx_components
Basic Examples¶
These examples can be run through the viewer console or Script editor to interact with the ROIs.
See also
Get the back-end ROI Manager, see ROI Manager APIs
>>> get_workEnv().roi_manager
<mesmerize.viewer.modules.roi_manager_modules.managers.ManagerCNMFROI object at 0x7f01b8780668>
Get the ROI List, see ROIList API
>>> get_workEnv().roi_manager.roi_list
[<mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc78b278>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc817630>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc817668>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc7c5438>, <mesmerize.viewer.modules.roi_manager_modules.roi_types.CNMFROI object at 0x7f01bc7c5208>]
Work with an ROI object, see ROI Type APIs
# Get the curve data of an ROI
>>> get_workEnv().roi_manager.roi_list[3].curve_data
(array([ 0, 1, 2, ..., 2995, 2996, 2997]), array([ -207.00168389, -161.78229208, -157.62522988, ..., -1017.73174502,
-1030.27047731, -1042.26989668]))
# Get the tags of an ROI
>>> get_workEnv().roi_manager.roi_list[2].get_all_tags()
{'anatomical_location': 'tail', 'cell_name': 'dcen', 'morphology': 'untagged'}
# Get a single tag
>>> get_workEnv().roi_manager.roi_list[2].get_tag('cell_name')
'dcen'
Stimulus Mapping¶
ModuleGUI¶
Page¶
Each Page instance contains the mapping data for one stimulus type
- class mesmerize.viewer.modules.stimmap_modules.page.Page(parent, stim_type: str)[source]¶
- set_data(dataframe: pandas.core.frame.DataFrame)[source]¶
Set the stimulus map
- Parameters
dataframe – DataFrame with the appropriate rows (see add_row())
- add_row(pd_series: Optional[pandas.core.series.Series] = None)[source]¶
Add a row to the stimulus map
- Parameters
pd_series – pandas series containing the following: stimulus name, start, end, and color
- Returns
DataFrame Format¶
Page.set_data() expects a DataFrame in the following format
Columns:
name – Stimulus name
start – Start time of stimulus period
end – End time of stimulus period
color – Color to display in the viewer curve plot

Data types:
name – str
start – numpy.float64
end – numpy.float64
color – Tuple in RGBA format, (int, int, int, int). Each int must be within the 0 - 255 range.

Example:
name      start     end       color
control   0.0       328.0     (0, 75, 0, 255)
stim_A    328.0     1156.0    (0, 0, 125, 255)
stim_C    1156.0    2987.0    (125, 0, 0, 255)
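A minimal sketch of building such a DataFrame directly and setting it through the Stimulus Mapping module; the stimulus type key 'my_stimulus' is hypothetical and must exist in your project configuration:

import pandas as pd

df = pd.DataFrame(
    {
        'name': ['control', 'stim_A'],
        'start': [0.0, 328.0],
        'end': [328.0, 1156.0],
        'color': [(0, 75, 0, 255), (0, 0, 125, 255)],
    }
)

# set the map for one stimulus type
smm = get_module('stimulus_mapping')
smm.maps['my_stimulus'].set_data(df)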
Example¶
This example creates a pandas DataFrame from a csv file to set the stimulus mappings. It uses the csv file from the pvc-7 dataset available on CRCNS: http://dx.doi.org/10.6080/K0C8276G
You can also download the csv here: stimulus_pvc7.csv
This example is meant to be run through the Viewer Script Editor
import pandas as pd
from mesmerize.plotting.utils import get_colormap

# Load dataframe from CSV
df = pd.read_csv('path_to_csv_file')

# Sort according to time
df = df.sort_values(by='start').reset_index(drop=True)

# Trim off the stimulus periods that are not in the current image sequence
trim = get_image().shape[2]
df = df[df['start'] <= trim]

# get one dataframe for each of the stimulus types
ori_df = df.drop(columns=['sf', 'tf', 'contrast'])  # contains ori stims
sf_df = df.drop(columns=['ori', 'tf', 'contrast'])  # contains sf stims
tf_df = df.drop(columns=['sf', 'ori', 'contrast'])  # contains tf stims

# Rename the stimulus column of interest to "name"
ori_df.rename(columns={'ori': 'name'}, inplace=True)
sf_df.rename(columns={'sf': 'name'}, inplace=True)
tf_df.rename(columns={'tf': 'name'}, inplace=True)


# Get the stimulus mapping module
smm = get_module('stimulus_mapping')

# set the stimulus map in Mesmerize for each of the 3 stimulus types
for stim_type, _df in zip(['ori', 'sf', 'tf'], [ori_df, sf_df, tf_df]):
    # data in the name column must be `str` type for stimulus mapping module
    _df['name'] = _df['name'].apply(str)

    # Get the names of the stimulus periods
    stimuli = _df['name'].unique()
    stimuli.sort()

    # Create colormap with the stimulus names
    stimuli_cmap = get_colormap(stimuli, 'tab10', output='pyqt', alpha=0.6)

    # Create a column with colors that correspond to the stimulus names
    # This is for illustrating the stimulus periods in the viewer plot
    _df['color'] = _df['name'].map(stimuli_cmap)

    # Set the data in the Stimulus Mapping module
    smm.maps[stim_type].set_data(_df)
Data types used for analysis¶
Transmission¶
Inherits from BaseTransmission
- class mesmerize.Transmission(df: pandas.core.frame.DataFrame, history_trace: mesmerize.analysis.data_types.HistoryTrace, proj_path: Optional[str] = None, last_output: Optional[str] = None, last_unit: Optional[str] = None, ROI_DEFS: Optional[list] = None, STIM_DEFS: Optional[list] = None, CUSTOM_COLUMNS: Optional[list] = None, plot_state: Optional[dict] = None)[source]¶
The transmission object used throughout the flowchart
- __init__(df: pandas.core.frame.DataFrame, history_trace: mesmerize.analysis.data_types.HistoryTrace, proj_path: Optional[str] = None, last_output: Optional[str] = None, last_unit: Optional[str] = None, ROI_DEFS: Optional[list] = None, STIM_DEFS: Optional[list] = None, CUSTOM_COLUMNS: Optional[list] = None, plot_state: Optional[dict] = None)¶
Base class for common Transmission functions
- Parameters
df (pd.DataFrame) – Transmission dataframe
history_trace (HistoryTrace) – HistoryTrace object, keeps track of the nodes & node parameters the transmission has been processed through
proj_path (str) – Project path, necessary for the datapoint tracer
last_output (str) – Last data column that was appended via a node’s operation
last_unit (str) – Current units of the data. Refers to the units of the column in last_output
plot_state (dict) – State of a plot, such as data and label columns. Used when saving interactive plots.
- Variables
df – DataFrame instance
history_trace –
HistoryTrace instance
last_output – Name of the DataFrame column that contains data from the most recent node
last_unit – The data units for the data in the column of ‘last_output’
plot_state – State of a plot, containing user entered plot parameters. Used for storing interactive plot states.
- static empty_df(transmission, addCols: Optional[list] = None) pandas.core.frame.DataFrame ¶
Just a helper method to return an empty DataFrame with the same columns
- Parameters
transmission – Transmission object that forms the basis
addCols – list of columns to add
- Returns
An empty DataFrame with the same columns as the input transmission's DataFrame, plus any additional columns that were passed
- classmethod from_pickle(path)¶
Load Transmission from a pickle.
- Parameters
path – file path, usually ends in .trn
- to_pickle(path: str)¶
Save Transmission as a pickle. Not recommended for sharing data; use to_hdf5() instead.
- Parameters
path – file path, usually ends in .trn
- classmethod from_hdf5(path: str)¶
Create Transmission from an hdf5 file. See HdfTools for information on the file structure.
- Parameters
path – file path, usually ends in .trn (.ptrn for plots)
- to_hdf5(path: str)¶
Save as an hdf5 file. Uses pytables to save the DataFrame and serializes the HistoryTrace using JSON. See HdfTools
- Parameters
path – file path, usually ends in .trn
- get_proj_path() str ¶
Get the project root dir associated with this Transmission.
- Returns
Root directory of the project
- set_proj_path(path: str)¶
Set the project root dir for this transmission.
Used for finding associated project files, for example the Datapoint Tracer uses it to find max and std projections of image sequences.
- Parameters
path – Root directory of the project
- classmethod from_proj(proj_path: str, dataframe: pandas.core.frame.DataFrame, sub_dataframe_name: str = 'root', dataframe_filter_history: Optional[dict] = None)[source]¶
- Parameters
proj_path – root directory of the project
dataframe – Chosen Child DataFrame from the Mesmerize Project
sub_dataframe_name – Name of the sub DataFrame to load
dataframe_filter_history – Filter history of the child dataframe
- static _load_files(proj_path: str, row: pandas.core.series.Series) pandas.core.series.Series [source]¶
Loads npz of curve data and pickle files containing metadata using the paths specified in each row of the chosen sub-dataframe of the project
- classmethod merge(transmissions: list)[source]¶
Merges a list of Transmissions into one transmission. A single DataFrame is created by simple concatenation. HistoryTrace objects are also merged using HistoryTrace.merge.
- Parameters
transmissions – A list containing Transmission objects to merge
- Returns
Merged transmission
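For instance, Transmissions that were saved from separate flowcharts can be combined for downstream analysis. A minimal sketch, assuming two previously saved files at hypothetical paths:

from mesmerize import Transmission

# Load two Transmissions that were previously saved as hdf5 files (hypothetical paths)
t1 = Transmission.from_hdf5('/path/to/flowchart_output_1.trn')
t2 = Transmission.from_hdf5('/path/to/flowchart_output_2.trn')

# Merge into a single Transmission; the DataFrames are concatenated
# and the HistoryTrace objects are merged using HistoryTrace.merge
t_merged = Transmission.merge([t1, t2])

# Save the merged Transmission for later use
t_merged.to_hdf5('/path/to/merged.trn')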
HistoryTrace¶
- class mesmerize.analysis.data_types.HistoryTrace(history: Optional[Dict[Union[uuid.UUID, str], List[Dict]]] = None, data_blocks: Optional[List[Union[uuid.UUID, str]]] = None)[source]¶
Structure of a history trace:
A dict with keys that are the block_ids. Each dict value is a list of operation_dicts. Each operation_dict has a single key which is the name of the operation and the value of that key is the operation parameters.
{block_id_1: [
{operation_1:
{
param_1: a,
param_2: b,
param_n: z
}
},
{operation_2:
{
param_1: a,
param_n: z
}
},
…
{operation_n:
{
param_n: x
}
}
]
block_id_2: <list of operation dicts>,
…
block_id_n: <list of operation dicts>
}
The main dict illustrated above should never be worked with directly.
You must use the helper methods of this class to query or add information
- __init__(history: Optional[Dict[Union[uuid.UUID, str], List[Dict]]] = None, data_blocks: Optional[List[Union[uuid.UUID, str]]] = None)[source]¶
- Parameters
history – Dict containing a data block UUIDs as keys. The values are a list of dicts containing operation parameters.
data_blocks – List of data block UUIDs
- Variables
_history – The dict of the actual data, as illustrated above. Should not be accessed directly. Use the history property or call get_all_data_blocks_history().
_data_blocks – List of all data blocks. Should not be accessed directly, use the data_blocks property instead.
- property data_blocks: list¶
List of data block UUIDs that allow you to pin down specific rows of the dataframe to their history as stored in the history trace data structure (self.history)
- create_data_block(dataframe: pandas.core.frame.DataFrame) Tuple[pandas.core.frame.DataFrame, uuid.UUID] [source]¶
Creates a new UUID, assigns it to the input dataframe by setting the UUID in the _BLOCK_ column
- Parameters
dataframe – Assigns a block ID to this entire DataFrame.
- _add_data_block(data_block_id: uuid.UUID)[source]¶
Adds new datablock UUID to the list of datablocks in this instance. Throws exception if UUID already exists.
- add_operation(data_block_id: Union[uuid.UUID, str], operation: str, parameters: dict)[source]¶
Add a single operation, that is usually performed by a node, to the history trace. Added to all or specific datablock(s), depending on which datablock(s) the node performed the operation on
- Parameters
data_block_id – data_block_id to log the operation onto. Either a UUID or ‘all’ to append the operation to all data blocks
operation – name of the operation, usually the same as the name of the node in all lowercase
parameters – operation parameters.
- get_data_block_history(data_block_id: Union[str, uuid.UUID], copy: bool = False) List[dict] [source]¶
Get the full history trace of a single data block.
Use copy=False if you want to modify the history trace of the data block.
- get_operations_list(data_block_id: Union[uuid.UUID, str]) list [source]¶
Returns just a simple list of operations in the order that they were performed on the given datablock. To get the operations along with their parameters call get_data_block_history()
- get_operation_params(data_block_id: Union[uuid.UUID, str], operation: str) dict [source]¶
Get the parameters dict for a specific operation that was performed on a specific data block
- check_operation_exists(data_block_id: uuid.UUID, operation: str) bool [source]¶
Check if a specific operation was performed on a specific datablock
- static _to_uuid(u: Union[str, uuid.UUID]) uuid.UUID [source]¶
If argument ‘u’ is of type <str> and can be formatted as a UUID, return it as a UUID type. If argument ‘u’ is already a UUID, just return it.
- to_dict() dict [source]¶
Package the HistoryTrace instance as a dict. Converts all UUIDs to <str> representation for JSON compatibility.
- static from_dict(d: dict) dict [source]¶
Format a dict stored using HistoryTrace.to_dict so that it can be used to create a HistoryTrace instance. Converts all the <str> representations of UUID back to <uuid.UUID> types.
- Parameters
d – dict containing appropriate ‘history’ and ‘datablocks’ keys. Must be packaged by HistoryTrace.to_dict()
- Returns
dict formatted so that it can be used to instantiate a HistoryTrace instance recapitulating the HistoryTrace it was packaged from.
- to_json(path: str)[source]¶
Save HistoryTrace to a JSON file.
- Parameters
path – file path, usually ends with .json
- classmethod from_json(path: str)[source]¶
Instantiate HistoryTrace from JSON file (that was saved using HistoryTrace.to_json)
- Parameters
path – file path, usually ends with .json
- classmethod from_pickle(path: str)[source]¶
Load HistoryTrace that was pickled
- Parameters
path – file path
- classmethod merge(history_traces: list)[source]¶
Merge a list of HistoryTrace instances into one HistoryTrace instance. Useful when merging Transmission objects.
- Parameters
history_traces – list of HistoryTrace instances
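As a quick illustration of using these helper methods rather than touching the underlying dict, the sketch below builds a HistoryTrace for a made-up DataFrame and logs a single operation (the data and parameters are purely for illustration):

import pandas as pd
from mesmerize.analysis.data_types import HistoryTrace

# A made-up dataframe standing in for analysis data
df = pd.DataFrame({'_RAW_CURVE': [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]})

ht = HistoryTrace()

# Assign a data block UUID to the dataframe (sets the _BLOCK_ column)
df, block_id = ht.create_data_block(df)

# Log an operation and its parameters onto that data block
ht.add_operation(block_id, operation='normalize', parameters={'data_column': '_RAW_CURVE'})

# Query the log
ht.get_operations_list(block_id)                # ['normalize']
ht.get_operation_params(block_id, 'normalize')  # {'data_column': '_RAW_CURVE'}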
Examples¶
You can save Transmission files using the Save node and work with the data directly in scripts, jupyter notebooks etc. You can also save them from the flowchart console (and plot consoles) via Transmission.to_hdf5.
Working with Transmission files¶
Load a saved Transmission instance using Transmission.from_hdf5
>>> from mesmerize import Transmission
>>> from uuid import UUID

# load transmission file
>>> t = Transmission.from_hdf5('/share/data/temp/kushal/data.trn')
<mesmerize.analysis.data_types.Transmission at 0x7f4d42f386a0>

# The DataFrame is always the 'df' attribute
>>> t.df.head()

                                            CurvePath  ...  FCLUSTER_LABELS
0  curves/a2-_-1-_-843c2d43-75f3-421a-9fef-483d1e...  ...                8
1  curves/brn3b_a6-_-2-_-21557a64-6868-4ff4-8db1-...  ...                4
2  curves/brn3b_a6-_-2-_-21557a64-6868-4ff4-8db1-...  ...                5
3  curves/brn3b_day1_3-_-2-_-ff3e95df-0e15-495c-9...  ...                8
4  curves/brn3b_day1_3-_-2-_-ff3e95df-0e15-495c-9...  ...                6

[5 rows x 27 columns]

# the `df` is just a pandas dataframe
# View a list of samples in the current file
>>> t.df.SampleID.unique()

array(['a2-_-1', 'a5-_-1', 'brn3b_a6-_-2', 'brn3b_day1_3-_-2',
       'brn3b_day1_a1-_-2', 'brn3b_day1_a2-_-2', 'brn3b_day1_a4-_-2',
       'brn3b_day2_a1-_-2', 'brn3b_day2_a1-_-t', 'brn3b_day2_a10-_-2',
       'brn3b_day2_a2-_-1', 'brn3b_day2_a2-_-3', 'brn3b_day2_a8-_-1',
       'cesa_a1-_-1', 'cesa_a1-_-2', 'cesa_a1_jan_2019-_-1',
       'cesa_a1_jan_2019-_-2', 'cesa_a2-_-2', 'cesa_a6-_-1',
       'cesa_a7-_-1', 'cesa_a7-_-2', 'cesa_a8-_-1', 'cesa_a9-_-1',
       'cng_ch4_day1_a2-_-t1', 'cng_ch4_day1_a2-_-t2',
       'cng_ch4_day2_a4-_-t1', 'dmrt1_day1_a2-_-2', 'dmrt1_day1_a4-_-t2',
       'dmrt1_day1_a5-_-', 'dmrt1_day1_a6-_-t', 'dmrt1_day1_a6-_-t2',
       'dmrt1_day2_a1-_-t1', 'dmrt1_day2_a1-_-t2', 'dmrt1_day2_a2-_-t1',
       'dmrt1_day2_a3-_-t1', 'dmrt1_day2_a3-_-t2', 'dmrt1_day2_a4-_-t1',
       'dmrt1_day2_a4-_-t2', 'hnk1_a5-_-2', 'hnk1_a6-_-1', 'hnk1_a7-_-1',
       'hnk1_a7-_-2', 'hnk1_a8-_-1', 'pc2_a10-_-1', 'pc2_a11-_-1',
       'pc2_a13-_-1', 'pc2_a14-_-1', 'pc2_a15-_-1', 'pc2_a16-_-1',
       'pc2_a9-_-1', 'pde9_day1_a2-_-2', 'pde9_day1_a3-_-1',
       'pde9_day1_a4-_-1', 'pde9_day1_a4-_-2', 'pde9_day2_a2-_-t2',
       'pde9_day2_a2-_-t4', 'pde9_day2_a4-_-t1', 'pde9_day2_a4-_-t2',
       'pde9_day2_a4-_-t3', 'pde9_day2_a5-_-t1', 'pde9_day2_a5-_-t2',
       'pde9_day2_a6-_-t1', 'pde9_day2_a7-_-t1', 'pde9_day2_a7-_-t2'],
      dtype=object)

# Show data associated with a single sample
>>> t.df[t.df['SampleID'] == 'brn3b_day1_a1-_-2']

                                             CurvePath  ...  FCLUSTER_LABELS
6   curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...  ...                6
7   curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...  ...                6
8   curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...  ...                5
9   curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...  ...                7
10  curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...  ...                5

# View the data associated with one ROI
# the `uuid_curve` is a unique identifier for each curve/ROI
>>> t.df[t.df['SampleID'] == 'brn3b_day1_a1-_-2'].iloc[0]

CurvePath              curves/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...
ImgInfoPath            images/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...
ImgPath                images/brn3b_day1_a1-_-2-_-d3c5f225-7039-4abd-...
ImgUUID                             d3c5f225-7039-4abd-a7a1-5e9ef2150013
ROI_State              {'roi_xs': [554, 553, 553, 552, 552, 551, 551,...
SampleID                                               brn3b_day1_a1-_-2
anatomical_location                                                 palp
cell_name                                                           palp
comments                                                        untagged
date                                                     20190425_110103
dorso_ventral_axis                                              untagged
misc                                                                  {}
morphology                                                      untagged
promoter                                                           brn3b
rostro_caudal_axis                                              untagged
stimulus_name                                                 [untagged]
uuid_curve                          f44fbd3d-6eaa-4e19-a677-496908565fde
_RAW_CURVE             [81.41972198848178, 75.61356993008134, 70.0493...
meta                   {'origin': 'AwesomeImager', 'version': '4107ff...
stim_maps                                                       [[None]]
_BLOCK_                             3e069e2d-d012-47ee-830c-93d85197e2f4
_SPLICE_ARRAYS         [2.646593459501195, 1.8252819116136887, 1.7422...
_NORMALIZE             [0.0681729940259753, 0.06533186950232853, 0.06...
_RFFT                  [443.19357880089615, -66.8777897472859, 55.244...
_ABSOLUTE_VALUE        [443.19357880089615, 66.8777897472859, 55.2443...
_LOG_TRANSFORM         [2.646593459501195, 1.8252819116136887, 1.7422...
FCLUSTER_LABELS                                                        6
Name: 6, dtype: object

# Show the ROI object data
>>> t.df[t.df['SampleID'] == 'brn3b_day1_a1-_-2'].iloc[0]['ROI_State']

{'roi_xs': array([554, 553, 553, 552, 552, 551, 551, 551, 551, 550, 550, 550, 549,
        548, 547, 547, 546, 546, 545, 545, 544, 543, 543, 542, 541, 541,
        540, 540, 539, 539, 538, 537, 536, 535, 534, 533, 532, 531, 531,
        530, 529, 528, 527, 527, 526, 526, 525, 525, 525, 524, 524, 523,
        522, 522, 521, 521, 520, 521, 521, 521, 521, 521, 522, 522, 522,
        522, 522, 522, 522, 522, 521, 521, 521, 521, 521, 521, 522, 523,
        524, 524, 525, 525, 525, 526, 526, 527, 528, 528, 529, 529, 529,
        530, 530, 531, 532, 532, 533, 534, 535, 535, 536, 536, 537, 538,
        539, 540, 540, 541, 541, 542, 542, 543, 544, 545, 546, 546, 547,
        548, 548, 549, 549, 549, 549, 550, 550, 550, 550, 551, 551, 551,
        552, 552, 552, 553, 553, 553, 554, 554, 554, 553, 554, 554, 554,
        554, 554]),
 'roi_ys': array([155, 156, 156, 157, 157, 158, 159, 160, 160, 161, 162, 162, 162,
        162, 163, 163, 164, 164, 165, 165, 165, 166, 166, 166, 167, 167,
        167, 166, 167, 167, 167, 167, 167, 167, 167, 167, 167, 168, 168,
        168, 168, 168, 168, 167, 167, 166, 166, 165, 164, 164, 163, 163,
        163, 162, 162, 161, 161, 160, 160, 159, 158, 157, 156, 156, 155,
        154, 153, 152, 151, 150, 150, 149, 148, 147, 146, 145, 144, 144,
        144, 144, 143, 143, 142, 141, 141, 140, 140, 140, 139, 139, 138,
        137, 137, 136, 136, 136, 135, 135, 135, 136, 136, 137, 137, 137,
        137, 137, 138, 138, 138, 137, 137, 136, 136, 136, 136, 137, 137,
        137, 138, 138, 139, 140, 141, 141, 142, 143, 144, 144, 145, 146,
        146, 147, 148, 148, 149, 150, 150, 151, 151, 152, 152, 153, 154,
        155, 155]),
 'curve_data': (array([   0,    1,    2, ..., 2996, 2997, 2998]),
 array([ 81.41972199,  75.61356993,  70.04934883, ..., 195.4416283 ,
        184.8844155 , 174.76708104])),
 'tags': {'anatomical_location': 'palp',
 'cell_name': 'palp',
 'morphology': 'untagged'},
 'roi_type': 'CNMFROI',
 'cnmf_idx': 2}
View History Log¶
Transmissions have a history_trace attribute which is an instance of HistoryTrace
.
Use the get_data_block_history
and get_operations_list
methods to view the history log of a data block.
# To view the history log, first get the block UUID of the dataframe row of which you want the history log

# Block UUIDs are stored in the _BLOCK_ column
>>> bid = t.df.iloc[10]._BLOCK_
>>> bid

'248a6ece-e60e-4a09-845e-188a5199d262'

# Get the history log of this data block
# HistoryTrace.get_operations_list() returns a list of operations, without parameters
# HistoryTrace.get_data_block_history() returns the operations list with the parameters
>>> t.history_trace.get_operations_list(bid)

['spawn_transmission',
 'splice_arrays',
 'normalize',
 'rfft',
 'absolute_value',
 'log_transform',
 'splice_arrays',
 'fcluster']

# View the entire history log with all params
>>> t.history_trace.get_data_block_history(bid)

[{'spawn_transmission': {'sub_dataframe_name': 'neuronal',
  'dataframe_filter_history': {'dataframe_filter_history': ['df[~df["promoter"].isin([\'cesa\', \'hnk1\'])]',
   'df[~df["promoter"].isin([\'cesa\', \'hnk1\'])]',
   'df[~df["cell_name"].isin([\'not_a_neuron\', \'non_neuronal\', \'untagged\', \'ependymal\'])]']}}},
 {'splice_arrays': {'data_column': '_RAW_CURVE',
  'start_ix': 0,
  'end_ix': 2990,
  'units': 'time'}},
 {'normalize': {'data_column': '_SPLICE_ARRAYS', 'units': 'time'}},
 {'rfft': {'data_column': '_NORMALIZE',
  'frequencies': [0.0,
   0.0033444816053511705,
   0.0033444816053511705,
   0.006688963210702341,
   ...

# Get the parameters for the 'fcluster' operation
>>> fp = t.history_trace.get_operation_params(bid, 'fcluster')

# remove the linkage matrix first so we can view the other params
>>> fp.pop('linkage_matrix');fp

{'threshold': 8.0,
 'criterion': 'maxclust',
 'depth': 1,
 'linkage_params': {'method': 'complete',
  'metric': 'wasserstein',
  'optimal_ordering': True}}

# Draw the analysis history as a graph
# This will open your default pdf viewer with the graph
>>> t.history_trace.draw_graph(bid, view=True)

# If you are using the API to perform analysis on
# transmission files, you can use the `HistoryTrace`
# to log the analysis history
# For example, add a number `3.14` to all datapoints in a curve
>>> t.df['_RAW_CURVE'] = t.df['_RAW_CURVE'].apply(lambda x: x + 3.14)

# Append the analysis log
>>> t.history_trace.add_operation(data_block_id='all', operation='addition', parameters={'value': 3.14})
Analysis¶
Analysis helper functions
Utils¶
- mesmerize.analysis.utils.get_array_size(transmission: mesmerize.analysis.data_types.Transmission, data_column: str) int [source]¶
Returns the size of the 1D arrays in the specified data column. Throws an exception if their sizes do not match
- Parameters
transmission (Transmission) – Desired Transmission
data_column (str) – Data column of the Transmission from which to retrieve the size
- Returns
Size of the 1D arrays of the specified data column
- Return type
int
- mesmerize.analysis.utils.get_frequency_linspace(transmission: mesmerize.analysis.data_types.Transmission) Tuple[numpy.ndarray, float] [source]¶
Get the frequency linspace.
Throws an exception if all datablocks do not have the same linspace & Nyquist frequencies
- Parameters
transmission – Transmission containing data from which to get frequency linspace
- Returns
tuple: (frequency linspace as a 1D numpy array, nyquist frequency)
- Return type
Tuple[np.ndarray, float]
- mesmerize.analysis.utils.get_proportions(xs: Union[pandas.core.series.Series, numpy.ndarray, list], ys: Union[pandas.core.series.Series, numpy.ndarray], xs_name: str = 'xs', ys_name: str = 'ys', swap: bool = False, percentages: bool = True) pandas.core.frame.DataFrame [source]¶
Get the proportions of xs vs ys.
xs & ys are categorical data.
- Parameters
xs (Union[pd.Series, np.ndarray]) – data plotted on the x axis
ys (Union[pd.Series, np.ndarray]) – proportions of unique elements in ys are calculated per xs
xs_name (str) – name for the xs data, useful for labeling the axis in plots
ys_name (str) – name for the ys data, useful for labeling the axis in plots
swap (bool) – swap x and y
- Returns
DataFrame that can be plotted in a proportions bar graph
- Return type
pd.DataFrame
- mesmerize.analysis.utils.get_sampling_rate(transmission: mesmerize.analysis.data_types.Transmission, tolerance: Optional[float] = 0.1) float [source]¶
Returns the mean sampling rate of all data in a Transmission if it is within the specified tolerance. Otherwise throws an exception.
- Parameters
transmission (Transmission) – Transmission object of the data from which sampling rate is obtained.
tolerance (float) – Maximum tolerance (in Hertz) of sampling rate variation between different samples
- Returns
The mean sampling rate of all data in the Transmission
- Return type
float
- mesmerize.analysis.utils.organize_dataframe_columns(columns: Iterable[str]) Tuple[List[str], List[str], List[str]] [source]¶
Organizes DataFrame columns into data column, categorical label columns, and uuid columns.
- mesmerize.analysis.utils.pad_arrays(a: numpy.ndarray, method: str = 'random', output_size: Optional[int] = None, mode: str = 'minimum', constant: Optional[Any] = None) numpy.ndarray [source]¶
Pad all the input arrays so that they are of the same length. The length is determined by the largest input array. The padding value for each input array is the minimum value in that array.
Padding for each input array is either done after the array’s last index to fill up to the length of the largest input array (method ‘fill-size’) or the padding is randomly flanked to the input array (method ‘random’) for easier visualization.
- Parameters
a (np.ndarray) – 1D array where each element is a 1D array
method (str) – one of ‘fill-size’ or ‘random’, see docstring for details
output_size – not used
mode (str) – one of either ‘constant’ or ‘minimum’. If ‘minimum’, the min value of the array is used as the padding value. If ‘constant’, the value passed to the “constant” argument is used as the padding value.
constant (Any) – padding value if ‘mode’ is set to ‘constant’
- Returns
Arrays padded according to the chosen method. 2D array of shape [n_arrays, size of largest input array]
- Return type
np.ndarray
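A minimal sketch of pad_arrays() and get_proportions() on made-up data (the arrays and labels below are only for illustration):

import numpy as np
import pandas as pd
from mesmerize.analysis.utils import pad_arrays, get_proportions

# A ragged set of 1D curves, wrapped as a 1D object array as pad_arrays expects
curves = np.array([np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])], dtype=object)

# Pad up to the length of the longest curve, using each curve's minimum as the pad value
padded = pad_arrays(curves, method='fill-size', mode='minimum')
# padded.shape == (2, 3)

# Proportions of the categorical "ys" within each "xs" category
xs = pd.Series(['ctrl', 'ctrl', 'stim', 'stim', 'stim'])
ys = pd.Series(['responder', 'non_responder', 'responder', 'responder', 'non_responder'])
props = get_proportions(xs, ys, xs_name='condition', ys_name='response')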
Cross correlation¶
functions¶
Helper functions. Uses tslearn.cycc
- mesmerize.analysis.math.cross_correlation.ncc_c(x: numpy.ndarray, y: numpy.ndarray) numpy.ndarray [source]¶
Must pass 1D array to both x and y
- Parameters
x – Input array [x1, x2, x3, … xn]
y – Input array [y1, y2, y3, … yn]
- Returns
Returns the normalized cross correlation function (as an array) of the two input vector arguments “x” and “y”
- Return type
np.ndarray
- mesmerize.analysis.math.cross_correlation.get_omega(x: Optional[numpy.ndarray] = None, y: Optional[numpy.ndarray] = None, cc: Optional[numpy.ndarray] = None) int [source]¶
Must pass a 1D array to either both “x” and “y” or a cross-correlation function (as an array) to “cc”
- Parameters
x – Input array [x1, x2, x3, … xn]
y – Input array [y1, y2, y3, … yn]
cc – cross-correlation function represented as an array [c1, c2, c3, … cn]
- Returns
index (x-axis position) of the global maximum of the cross-correlation function
- Return type
int
- mesmerize.analysis.math.cross_correlation.get_lag(x: Optional[numpy.ndarray] = None, y: Optional[numpy.ndarray] = None, cc: Optional[numpy.ndarray] = None) float [source]¶
Must pass a 1D array to either both “x” and “y” or a cross-correlation function (as an array) to “cc”
- Parameters
x – Input array [x1, x2, x3, … xn]
y – Input array [y1, y2, y3, … yn]
cc – cross-correlation function represented as an array [c1, c2, c3, … cn]
- Returns
Position of the maximum of the cross-correlation function with respect to the midpoint of the cross-correlation function
- Return type
float
- mesmerize.analysis.math.cross_correlation.get_epsilon(x: Optional[numpy.ndarray] = None, y: Optional[numpy.ndarray] = None, cc: Optional[numpy.ndarray] = None) float [source]¶
Must pass a 1D vector to either both “x” and “y” or a cross-correlation function to “cc”
- Parameters
x – Input array [x1, x2, x3, … xn]
y – Input array [y1, y2, y3, … yn]
cc – cross-correlation function represented as an array [c1, c2, c3, … cn]
- Returns
Magnitude of the global maximum of the cross-correlation function
- Return type
float
- mesmerize.analysis.math.cross_correlation.get_lag_matrix(curves: Optional[numpy.ndarray] = None, ccs: Optional[numpy.ndarray] = None) numpy.ndarray [source]¶
Get a 2D matrix of lags. Can pass either a 2D array of 1D curves or cross-correlations
- Parameters
curves – 2D array of 1D curves
ccs – 2D array of 1D cross-correlation functions represented by arrays
- Returns
2D matrix of lag values, shape is [n_curves, n_curves]
- Return type
np.ndarray
- mesmerize.analysis.math.cross_correlation.get_epsilon_matrix(curves: Optional[numpy.ndarray] = None, ccs: Optional[numpy.ndarray] = None) numpy.ndarray [source]¶
Get a 2D matrix of maxima. Can pass either a 2D array of 1D curves or cross-correlations
- Parameters
curves – 2D array of 1D curves
ccs – 2D array of 1D cross-correlation functions represented by arrays
- Returns
2D matrix of maxima values, shape is [n_curves, n_curves]
- Return type
np.ndarray
- mesmerize.analysis.math.cross_correlation.compute_cc_data(curves: numpy.ndarray) mesmerize.analysis.math.cross_correlation.CC_Data [source]¶
Compute cross-correlation data (cc functions, lag and maxima matrices)
- Parameters
curves – input curves as a 2D array, shape is [n_samples, curve_size]
- Returns
cross correlation data for the input curves as a CC_Data instance
- Return type
CC_Data
CC_Data¶
Data container
Warning
All arguments MUST be of numpy.ndarray type for the CC_Data instance to be saveable as an hdf5 file. Set numpy.unicode as the dtype for the curve_uuids and labels arrays. If the dtype is 'O' (object) the to_hdf5() method will fail.
- class mesmerize.analysis.cross_correlation.CC_Data(input_data: Optional[numpy.ndarray] = None, ccs: Optional[numpy.ndarray] = None, lag_matrix: Optional[numpy.ndarray] = None, epsilon_matrix: Optional[numpy.ndarray] = None, curve_uuids: Optional[numpy.ndarray] = None, labels: Optional[numpy.ndarray] = None)¶
- __init__(input_data: Optional[numpy.ndarray] = None, ccs: Optional[numpy.ndarray] = None, lag_matrix: Optional[numpy.ndarray] = None, epsilon_matrix: Optional[numpy.ndarray] = None, curve_uuids: Optional[numpy.ndarray] = None, labels: Optional[numpy.ndarray] = None)¶
Object for organizing cross-correlation data
types must be numpy.ndarray to be compatible with hdf5
- Parameters
ccs (np.ndarray) – array of cross-correlation functions, shape: [n_curves, n_curves, func_length]
lag_matrix (np.ndarray) – the lag matrix, shape: [n_curves, n_curves]
epsilon_matrix (np.ndarray) – the maxima matrix, shape: [n_curves, n_curves]
curve_uuids (np.ndarray) – uuids (str representation) for each of the curves, length: n_curves
labels (np.ndarray) – labels for each curve, length: n_curves
- ccs¶
array of cross-correlation functions
- lag_matrix¶
lag matrix
- epsilon_matrix¶
maxima matrix
- curve_uuids¶
uuids for each curve
- labels¶
labels for each curve
- get_threshold_matrix(matrix_type: str, lag_thr: float, max_thr: float, lag_thr_abs: bool = True) numpy.ndarray ¶
Get lag or maxima matrix with thresholds applied. Values outside the threshold are set to NaN
- Parameters
matrix_type – one of ‘lag’ or ‘maxima’
lag_thr – lag threshold
max_thr – maxima threshold
lag_thr_abs – threshold with the absolute value of lag
- Returns
the requested matrix with the thresholds applied to it.
- Return type
np.ndarray
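A minimal sketch of computing cross-correlation data for a small set of curves and then thresholding the lag matrix (the curves are random data and the threshold values are arbitrary, purely for illustration):

import numpy as np
from mesmerize.analysis.math.cross_correlation import compute_cc_data

# 5 made-up curves, each 100 timepoints long
curves = np.random.rand(5, 100)

# Compute the cross-correlation functions, lag matrix and maxima (epsilon) matrix
cc_data = compute_cc_data(curves)
# cc_data.lag_matrix.shape == (5, 5)
# cc_data.epsilon_matrix.shape == (5, 5)

# Apply thresholds; pairs outside the thresholds become NaN
thr_lag = cc_data.get_threshold_matrix(
    matrix_type='lag',
    lag_thr=10.0,   # arbitrary lag threshold for illustration
    max_thr=0.5,    # arbitrary maxima threshold for illustration
    lag_thr_abs=True,
)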
Clustering metrics¶
- mesmerize.analysis.clustering_metrics.get_centerlike(cluster_members: numpy.ndarray, metric: Optional[Union[str, callable]] = None, dist_matrix: Optional[numpy.ndarray] = None) Tuple[numpy.ndarray, int] [source]¶
Finds the 1D time-series within a cluster that is the most centerlike
- Parameters
cluster_members – 2D numpy array in the form [n_samples, 1D time_series]
metric – Metric to use for pairwise distance calculation, simply passed to sklearn.metrics.pairwise_distances
dist_matrix – Distance matrix of the cluster members
- Returns
The cluster member which is most centerlike, and its index in the cluster_members array
- mesmerize.analysis.clustering_metrics.get_cluster_radius(cluster_members: numpy.ndarray, metric: Optional[Union[str, callable]] = None, dist_matrix: Optional[numpy.ndarray] = None, centerlike_index: Optional[int] = None) float [source]¶
Returns the cluster radius according to chosen distance metric
- Parameters
cluster_members – 2D numpy array in the form [n_samples, 1D time_series]
metric – Metric to use for pairwise distance calculation, simply passed to sklearn.metrics.pairwise_distances
dist_matrix – Distance matrix of the cluster members
centerlike_index – Index of the centerlike cluster member within the cluster_members array
- Returns
The cluster radius, average between the most centerlike member and all other members
- mesmerize.analysis.clustering_metrics.davies_bouldin_score(data: numpy.ndarray, cluster_labels: numpy.ndarray, metric: Union[str, callable]) Tuple[float, numpy.ndarray] [source]¶
Adapted from sklearn.metrics.davies_bouldin_score to use any distance metric
- Parameters
data – Data that was used for clustering, [n_samples, 1D time_series]
metric – Metric to use for pairwise distance calculation, simply passed to sklearn.metrics.pairwise_distances
cluster_labels – Cluster labels
- Returns
Davies Bouldin Score using EMD
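A minimal sketch of the clustering metrics on a made-up cluster of time series (assuming that, when only a metric is given, the pairwise distance matrix is computed internally):

import numpy as np
from mesmerize.analysis.clustering_metrics import get_centerlike, get_cluster_radius

# A made-up cluster of 10 one-dimensional time series, each 50 points long
cluster_members = np.random.rand(10, 50)

# Most "centerlike" member and its index, using any metric accepted by
# sklearn.metrics.pairwise_distances
center_curve, center_ix = get_centerlike(cluster_members, metric='euclidean')

# Average distance between the centerlike member and all other members
radius = get_cluster_radius(cluster_members, metric='euclidean', centerlike_index=center_ix)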
Nodes¶
Data¶
- class mesmerize.pyqtgraphCore.flowchart.library.Data.DropNa(*args, **kwargs)[source]¶
Drop NaNs from the DataFrame
- class mesmerize.pyqtgraphCore.flowchart.library.Data.LoadFile(name)[source]¶
Load Transmission data object from pickled file
- class mesmerize.pyqtgraphCore.flowchart.library.Data.LoadProjDF(name)[source]¶
Load raw project DataFrames as Transmission
- class mesmerize.pyqtgraphCore.flowchart.library.Data.NormRaw(name, ui=None, terminals=None, **kwargs)[source]¶
Normalize between raw min and max values.
- class mesmerize.pyqtgraphCore.flowchart.library.Data.PadArrays(name, ui=None, terminals=None, **kwargs)[source]¶
Pad 1-D numpy arrays in a particular column
- class mesmerize.pyqtgraphCore.flowchart.library.Data.Save(name)[source]¶
Save Transmission data object
- class mesmerize.pyqtgraphCore.flowchart.library.Data.SelectColumns(name, ui=None, terminals=None, **kwargs)[source]¶
- class mesmerize.pyqtgraphCore.flowchart.library.Data.SelectRows(name, ui=None, terminals=None, **kwargs)[source]¶
- class mesmerize.pyqtgraphCore.flowchart.library.Data.SpliceArrays(name, ui=None, terminals=None, **kwargs)[source]¶
Splice 1-D numpy arrays in a particular column.
- class mesmerize.pyqtgraphCore.flowchart.library.Data.TextFilter(name, ui=None, terminals=None, **kwargs)[source]¶
Simple string filtering in a specified column
- class mesmerize.pyqtgraphCore.flowchart.library.Data.ViewHistory(*args, **kwargs)[source]¶
View History Trace of the input Transmission
Display¶
- class mesmerize.pyqtgraphCore.flowchart.library.Display.AnalysisGraph(name)[source]¶
Graph of the analysis log
- class mesmerize.pyqtgraphCore.flowchart.library.Display.BeeswarmPlots(name)[source]¶
Beeswarm and Violin plots
- class mesmerize.pyqtgraphCore.flowchart.library.Display.FrequencyDomainMagnitude(name, ui=None, terminals=None, **kwargs)[source]¶
Plot Frequency vs. Frequency Domain Magnitude
- class mesmerize.pyqtgraphCore.flowchart.library.Display.Heatmap(name)[source]¶
Stack 1-D arrays and plot visually like a heatmap
- class mesmerize.pyqtgraphCore.flowchart.library.Display.Plot(name)[source]¶
Plot curves and/or scatter points
- class mesmerize.pyqtgraphCore.flowchart.library.Display.Proportions(name)[source]¶
Plot proportions of one categorical column vs another
- class mesmerize.pyqtgraphCore.flowchart.library.Display.ScatterPlot(name)[source]¶
Scatter Plot, useful for visualizing transformed data and clusters
Signal¶
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.ButterWorth(name, ui=None, terminals=None, **kwargs)[source]¶
Butterworth Filter
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.Normalize(name, ui=None, terminals=None, **kwargs)[source]¶
Normalize a column containing 1-D arrays such that values in each array are normalized between 0 and 1
Output Column -> Input Column
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.PeakDetect(name, **kwargs)[source]¶
Detect peaks & bases by finding local maxima & minima. Use this after the Derivative Filter
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.PeakFeatures(*args, **kwargs)[source]¶
Extract peak features after peak detection
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.PowerSpectralDensity(name, ui=None, terminals=None, **kwargs)[source]¶
Return the Power Spectral Density of a curve.
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.RFFT(name, ui=None, terminals=None, **kwargs)[source]¶
Uses fftpack.rfft, ‘Discrete Fourier transform of a real sequence.’
https://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.rfft.html#scipy.fftpack.rfft
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.Resample(name, ui=None, terminals=None, **kwargs)[source]¶
Resample 1D data, uses scipy.signal.resample. “Rs” is the new sampling rate in “Tu” units of time. If “Tu” = 1, then Rs is the new sampling rate in Hertz.
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.SavitzkyGolay(name, ui=None, terminals=None, **kwargs)[source]¶
Savitzky-Golay filter.
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.ScalerMeanVariance(name, ui=None, terminals=None, **kwargs)[source]¶
Scaler for time series. Scales time series so that their mean (resp. standard deviation) in each dimension is mu (resp. std).
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.SigmaMAD(name, ui=None, terminals=None, **kwargs)[source]¶
- class mesmerize.pyqtgraphCore.flowchart.library.Signal.iRFFT(name, ui=None, terminals=None, **kwargs)[source]¶
Uses fftpack.irfft, ‘Return inverse discrete Fourier transform of real sequence.’
https://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.irfft.html#scipy.fftpack.irfft
Input must have an _RFFT column from the RFFT node.
Math¶
- class mesmerize.pyqtgraphCore.flowchart.library.Math.AbsoluteValue(name, ui=None, terminals=None, **kwargs)[source]¶
Performs numpy.abs(<input>). Returns root-mean-square value if <input> is complex
- class mesmerize.pyqtgraphCore.flowchart.library.Math.ArgGroupStat(name, ui=None, terminals=None, **kwargs)[source]¶
Group by a certain column and return value of another column based on a data column statistic
- class mesmerize.pyqtgraphCore.flowchart.library.Math.ArrayStats(name, ui=None, terminals=None, **kwargs)[source]¶
Perform various statistical functions
- class mesmerize.pyqtgraphCore.flowchart.library.Math.Derivative(name, ui=None, terminals=None, **kwargs)[source]¶
Return the Derivative of a curve.
- class mesmerize.pyqtgraphCore.flowchart.library.Math.Integrate(name, ui=None, terminals=None, **kwargs)[source]¶
- class mesmerize.pyqtgraphCore.flowchart.library.Math.LinRegress(name, ui=None, terminals=None, **kwargs)[source]¶
Linear Regression
- class mesmerize.pyqtgraphCore.flowchart.library.Math.LogTransform(name, ui=None, terminals=None, **kwargs)[source]¶
Can perform various log transforms
- class mesmerize.pyqtgraphCore.flowchart.library.Math.TVDiff(name, ui=None, terminals=None, **kwargs)[source]¶
Total Variation Regularized Numerical Differentiation, Chartrand 2011 method
Biology¶
- class mesmerize.pyqtgraphCore.flowchart.library.Biology.ExtractStim(name, ui=None, terminals=None, **kwargs)[source]¶
Extract portions of curves according to stimulus maps
- class mesmerize.pyqtgraphCore.flowchart.library.Biology.ManualDFoF(name, ui=None, terminals=None, **kwargs)[source]¶
Set Fo for dF/Fo using a particular time period. Useful for looking at stimulus responses
Clustering¶
- class mesmerize.pyqtgraphCore.flowchart.library.Clustering.KMeans(name, ui=None, terminals=None, **kwargs)[source]¶
KMeans clustering https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
Output column -> KMEANS_CLUSTER_<data_column>
Hierarchical¶
- class mesmerize.pyqtgraphCore.flowchart.library.Hierarchical.FCluster(name, **kwargs)[source]¶
Basically scipy.cluster.hierarchy.fcluster. Form flat clusters from the hierarchical clustering defined by the given linkage matrix.
- class mesmerize.pyqtgraphCore.flowchart.library.Hierarchical.Inconsistent(name)[source]¶
Calculate inconsistency statistics on a linkage matrix. Returns inconsistency matrix
- class mesmerize.pyqtgraphCore.flowchart.library.Hierarchical.Linkage(name, ui=None, terminals=None, **kwargs)[source]¶
Basically scipy.cluster.hierarchy.linkage. Compute a linkage matrix for hierarchical clustering.
Transform¶
- class mesmerize.pyqtgraphCore.flowchart.library.Transform.LDA(name, **kwargs)[source]¶
Linear Discriminant Analysis, uses sklearn
- class mesmerize.pyqtgraphCore.flowchart.library.Transform.Manifold(name, ui=None, terminals=None, **kwargs)[source]¶
Manifold learning
CtrlNode¶
Base for all nodes
- class mesmerize.pyqtgraphCore.flowchart.library.common.CtrlNode(name, ui=None, terminals=None, **kwargs)[source]¶
Abstract class for nodes with auto-generated control UI
- ctrlWidget()[source]¶
Return this Node’s control widget.
By default, Nodes have no control widget. Subclasses may reimplement this method to provide a custom widget. This method is called by Flowcharts when they are constructing their Node list.
- process(**kwargs)[source]¶
Process data through this node. This method is called any time the flowchart wants the node to process data. It will be called with one keyword argument corresponding to each input terminal, and must return a dict mapping the name of each output terminal to its new value.
This method is also called with a ‘display’ keyword argument, which indicates whether the node should update its display (if it implements any) while processing this data. This is primarily used to disable expensive display operations during batch processing.
- saveState()[source]¶
Return a dictionary representing the current state of this node (excluding input / output values). This is used for saving/reloading flowcharts. The default implementation returns this Node’s position, bypass state, and information about each of its terminals.
Subclasses may want to extend this method, adding extra keys to the returned dict.
Plotting utils¶
A few useful helper functions
- mesmerize.plotting.utils.get_colormap(labels: iter, cmap: str, **kwargs) collections.OrderedDict [source]¶
Get a dict for mapping labels onto colors
Any kwargs are passed to auto_colormap()
- Parameters
labels – labels for creating a colormap. Order is maintained if it is a list of unique elements.
cmap – name of colormap
- Returns
dict of labels as keys and colors as values
- mesmerize.plotting.utils.map_labels_to_colors(labels: iter, cmap: str, **kwargs) list [source]¶
Map labels onto colors according to chosen colormap
Any kwargs are passed to auto_colormap()
- Parameters
labels – labels for mapping onto a colormap
cmap – name of colormap
- Returns
list of colors mapped onto the labels
- mesmerize.plotting.utils.auto_colormap(n_colors: int, cmap: str = 'hsv', output: str = 'mpl', spacing: str = 'uniform', alpha: float = 1.0) List[Union[PyQt5.QtGui.QColor, numpy.ndarray, str]] [source]¶
If non-qualitative map: returns list of colors evenly spread through the chosen colormap. If qualitative map: returns subsequent colors from the chosen colormap
- Parameters
n_colors – Numbers of colors to return
cmap – name of colormap
output – option: ‘mpl’ returns RGBA values between 0-1 which matplotlib likes; option: ‘pyqt’ returns QtGui.QColor instances that correspond to the RGBA values; option: ‘bokeh’ returns hex strings that correspond to the RGBA values which bokeh likes
spacing – option: ‘uniform’ returns evenly spaced colors across the entire cmap range; option: ‘subsequent’ returns subsequent colors from the cmap
alpha – alpha level, 0.0 - 1.0
- Returns
List of colors as either QColor, numpy.ndarray, or hex str, with length n_colors
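A minimal sketch of the colormap helpers (the label names are made up):

from mesmerize.plotting.utils import get_colormap, auto_colormap

# Map categorical labels onto matplotlib-style RGBA colors
labels = ['control', 'stim_A', 'stim_B']
label_colors = get_colormap(labels, 'tab10', output='mpl', alpha=0.8)
# e.g. {'control': (r, g, b, a), 'stim_A': (...), 'stim_B': (...)}

# 5 evenly spaced colors from a non-qualitative colormap, as hex strings for bokeh
hex_colors = auto_colormap(n_colors=5, cmap='hsv', output='bokeh')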
- class mesmerize.plotting.utils.WidgetRegistry[source]¶
Register widgets to conveniently store and restore their states
- register(widget: PyQt5.QtWidgets.QWidget, setter: callable, getter: callable, name: str)[source]¶
Register a widget. The setter and getter methods must be interoperable
- Parameters
widget (QtWidgets.QWidget) – widget to register
setter (callable) – widget’s method to use for setting its value
getter (callable) – widget’s method to use for getting its value. This value must be settable through the specified “setter” method
name (str) – a name for this widget
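A minimal sketch of registering a widget (the widget and name are made up; the registered setter/getter pair must be interoperable as described above):

from PyQt5 import QtWidgets
from mesmerize.plotting.utils import WidgetRegistry

app = QtWidgets.QApplication([])

registry = WidgetRegistry()

# Register a spinbox; value() returns an int which is settable through setValue()
spinbox = QtWidgets.QSpinBox()
registry.register(
    spinbox,
    setter=spinbox.setValue,
    getter=spinbox.value,
    name='n_clusters',
)
# The registry can later be used to store and restore the states
# of all registered widgets (for example when saving a plot).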
Plot Bases¶
AbstractBasePlotWidget¶
- class mesmerize.plotting.widgets.base._AbstractBasePlotWidget[source]¶
- abstract property transmission: mesmerize.analysis.data_types.Transmission¶
The input transmission
- Return type
Transmission
- abstract set_input(transmission: mesmerize.analysis.data_types.Transmission)[source]¶
Set the input Transmission with data to plot
- Parameters
transmission – Input transmission
- abstract get_plot_opts() dict [source]¶
Package all necessary plot parameters that in combination with the transmission property are sufficient to restore the plot
- abstract set_plot_opts(opts: dict)[source]¶
Set plot parameters from a dict in the format returned by get_plot_opts()
BasePlotWidget¶
Inherit from this to create interactive plots that can be saved and restored.
- class mesmerize.plotting.widgets.base.BasePlotWidget[source]¶
Bases:
mesmerize.plotting.widgets.base._AbstractBasePlotWidget
Base for plot widgets.
Subclasses must define the class attribute “drop_opts” which is used to store a list of JSON incompatible keys returned by the get_plot_opts() method
- block_signals_list¶
List of QObjects included in dynamic signal blocking. Used for storing plot parameter widgets so that changing all of them quickly (like when restoring a plot) doesn’t cause the plot to constantly update.
- property transmission: mesmerize.analysis.data_types.Transmission¶
The input transmission
- Return type
Transmission
- set_input(transmission: mesmerize.analysis.data_types.Transmission)[source]¶
Set the input Transmission with data to plot
- Parameters
transmission – Input transmission
- fill_control_widget(data_columns: list, categorical_columns: list, uuid_columns: list)[source]¶
Method for filling the control widget(s) when inputs are set. Must be implemented in subclass
- classmethod signal_blocker(func)[source]¶
Use as a decorator. Block Qt signals from all QObjects instances in the block_signals_list
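As a structural sketch of how a subclass might look (method bodies are placeholders; only the attributes and methods documented above are shown, and the widget is not instantiated here):

from PyQt5 import QtWidgets
from mesmerize.plotting.widgets.base import BasePlotWidget


class MyPlotWidget(QtWidgets.QMainWindow, BasePlotWidget):
    # keys returned by get_plot_opts() that are not JSON compatible
    drop_opts = []

    def fill_control_widget(self, data_columns, categorical_columns, uuid_columns):
        # populate parameter widgets (combo boxes etc.) when a new Transmission is set
        pass

    def get_plot_opts(self, drop: bool = False) -> dict:
        # return everything that, together with self.transmission, restores the plot
        return {}

    def set_plot_opts(self, opts: dict):
        # restore plot parameters from a dict produced by get_plot_opts()
        pass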
Datapoint Tracer¶
- class mesmerize.plotting.DatapointTracerWidget[source]¶
- set_widget(datapoint_uuid: uuid.UUID, data_column_curve: str, row: pandas.core.series.Series, proj_path: str, history_trace: Optional[list] = None, peak_ix: Optional[int] = None, tstart: Optional[int] = None, tend: Optional[int] = None, roi_color: Optional[Union[str, float, int, tuple]] = 'ff0000', clear_linear_regions: bool = True)[source]¶
Set the widget from the datapoint.
- Parameters
datapoint_uuid – appropriate UUID for the datapoint (such as uuid_curve or _pfeature_uuid)
data_column_curve – data column containing an array to plot
row – DataFrame row that corresponds to the datapoint
proj_path – root dir of the project the datapoint comes from, used for finding max & std projections
history_trace – history trace of the datablock the datapoint comes from
peak_ix – Deprecated
tstart – lower bounds for drawing LinearRegionItem
tend – upper bounds for drawing LinearRegionItem
roi_color – color for drawing the spatial bounds of the ROI
Heatmap¶
Widgets¶
Higher level widgets that are directly used by the end-user. Both Heatmap widgets use the same plot variant.
HeatmapSplitterWidget¶
Heatmap with a vertical splitter that can be used to house another widget. The plot is compatible with both ‘row’ and ‘item’ selection modes.
- class mesmerize.plotting.HeatmapSplitterWidget(highlight_mode='row')[source]¶
Widget for interactive heatmaps
- __init__(highlight_mode='row')[source]¶
- Parameters
highlight_mode – Interactive mode, one of ‘row’ or ‘item’
- set_data(dataframes: Union[pandas.core.frame.DataFrame, list], data_column: str, labels_column: str, cmap: str = 'jet', transmission: Optional[mesmerize.analysis.data_types.Transmission] = None, sort: bool = True, reset_sorting: bool = True, **kwargs)[source]¶
Set the data and then set the plot
- Parameters
dataframes – list of dataframes or a single DataFrame
data_column – data column of the dataframe that is plotted in the heatmap
labels_column – dataframe column (usually categorical labels) used to generate the y-labels and legend.
cmap – colormap choice
transmission – Transmission object that the dataframe originates from; used to calculate data units if passed
sort – if False, the sort comboBox is ignored
reset_sorting – reset the order of the rows in the heatmap
kwargs – Passed to Heatmap.set
- _set_sort_order(column: str)[source]¶
Set the sort order of the heatmap rows according to a dataframe column. The column must contain categorical values. The rows are grouped together according to the categorical values.
- Parameters
column – DataFrame column containing categorical values used for sorting the heatmap rows
- set_transmission(transmission: mesmerize.analysis.data_types.Transmission)[source]¶
Set the input transmission
- get_transmission() mesmerize.analysis.data_types.Transmission [source]¶
Get the input transmission
HeatmapTracerWidget¶
Heatmap with an embedded Datapoint Tracer that can be saved and restored.
- class mesmerize.plotting.HeatmapTracerWidget[source]¶
Bases:
mesmerize.plotting.widgets.base.BasePlotWidget
,mesmerize.plotting.widgets.heatmap.widget.HeatmapSplitterWidget
Heatmap with an embedded datapoint tracer
- drop_opts = ['dataframes', 'transmission']¶
keys of the plot_opts dict that are not JSON compatible and not required for restoring this plot
- live_datapoint_tracer¶
The embedded Datapoint Tracer
- set_update_live(*args, **kwds)¶
Must be implemented in subclass
- set_current_datapoint(ix: tuple)[source]¶
Set the currently selected datapoint in the Datapoint Tracer.
- Parameters
ix – index, (x, y). x is always 0 for this widget since it only uses ‘row’ selection mode and not ‘item’
- set_input(*args, **kwds)¶
Set the input Transmission with data to plot
- Parameters
transmission – Input transmission
- get_plot_opts(drop: bool = False) dict [source]¶
Get the plot options
- Parameters
drop – Drop the non-json compatible objects that are not necessary to restore this plot
- set_plot_opts(*args, **kwds)¶
Must be implemented in subclass
- set_data(*args, datapoint_tracer_curve_column: str = None, **kwargs)[source]¶
Set the plot data, parameters and draw the plot. If the input Transmission comes directly from the FCluster it will pass a dict from get_cluster_kwargs() to the cluster_kwargs argument. Else it will pass None to cluster_kwargs.
- Parameters
args – arguments to pass to superclass set_data() method
datapoint_tracer_curve_column – Data column containing curves to use in the datapoint tracer
kwargs – keyword arguments, passed to superclass set_data() method
- property transmission: mesmerize.analysis.data_types.Transmission¶
The input transmission
- Return type
Transmission
- save_plot_dialog(path, *args)¶
Plot save dialog
- save_plot(path)¶
Save the plot as a Transmission in an HDF5 file. Plot parameters are stored as a JSON string within the HDF5 file. See Transmission.to_hdf5
- Parameters
path – Path to save the file to. For easy identification use “.ptrn” extension.
- open_plot_dialog(filepath, dirpath, *args, **kwargs)¶
Open plot dialog
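A minimal sketch of using the widget outside of the flowchart, assuming a Transmission saved at a hypothetical path (plot parameters are still chosen through the widget's controls):

from PyQt5 import QtWidgets
from mesmerize import Transmission
from mesmerize.plotting import HeatmapTracerWidget

app = QtWidgets.QApplication([])

# Load a saved Transmission (hypothetical path)
t = Transmission.from_hdf5('/path/to/data.trn')

w = HeatmapTracerWidget()
w.set_input(t)   # set the input Transmission
w.show()

# After configuring the plot, it can be saved (data + parameters) for later restoration:
# w.save_plot('/path/to/heatmap.ptrn')

app.exec_()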
Variant¶
Lower level widget that handles the actual plotting and user interaction
- class mesmerize.plotting.variants.Heatmap(highlight_mode='row')[source]¶
Bases:
mesmerize.pyqtgraphCore.widgets.MatplotlibWidget.MatplotlibWidget
Heatmap plot variant
- sig_selection_changed¶
Emits indices of data coordinates (x, y) from mouse-click events on the heatmap
- __init__(highlight_mode='row')[source]¶
- Parameters
highlight_mode – The selection mode for the heatmap. One of either ‘row’ or ‘item’
- data¶
2D numpy array of the heatmap data
- selector¶
Selection instance that organizes mouse click events on the heatmap
- plot¶
ClusterGrid object instance containing the plot Axes
- set(data: numpy.ndarray, *args, ylabels: Optional[Union[pandas.core.series.Series, numpy.ndarray, list]] = None, ylabels_cmap: str = 'tab20', cluster_kwargs: Optional[dict] = None, **kwargs)[source]¶
- Parameters
data – 2D numpy array
args – Additional args that are passed to sns.heatmap()
ylabels – Labels used to create the ylabels bar
ylabels_cmap – colormap for the ylabels bar
cluster_kwargs – keyword arguments for visualizing hierarchical clustering
kwargs – Additional kwargs that are passed to sns.heatmap()
KShape¶
- class mesmerize.plotting.KShapeWidget(parent=None)[source]¶
Bases:
PyQt5.QtWidgets.QMainWindow
,mesmerize.plotting.widgets.base.BasePlotWidget
User-end KShape widget
- sig_output_changed¶
Emits output Transmission containing cluster labels
- drop_opts = None¶
Unused by this plot widget
- property input_arrays: numpy.ndarray¶
The input arrays for clustering
- Returns
2D array, shape is [num_samples, padded_peak_curve_length]
- Return type
np.ndarray
- property ks: tslearn.clustering.KShape¶
tslearn KShape object
- property train_data: numpy.ndarray¶
The training data for clustering
- Returns
Training data as a 2D array, shape is [n_samples, padded_curve_length]
- Return type
np.ndarray
- property y_pred: numpy.ndarray¶
Predicted cluster labels after the model converges
- Returns
1D array of cluster labels that correspond to the input_data
- Return type
np.ndarray
- property cluster_centers: numpy.ndarray¶
Cluster centroids
- Returns
2D array, shape is [n_clusters, centroid_array]
- Return type
np.ndarray
- property cluster_means: numpy.ndarray¶
The cluster means
- Returns
2D array, shape is [cluster_label, mean_array]
- Return type
np.ndarray
- set_input(transmission: mesmerize.analysis.data_types.Transmission)[source]¶
Set the input Transmission for the widget
- Parameters
transmission – Input Transmission
- pad_input_data(a: numpy.ndarray, method: str = 'random') numpy.ndarray [source]¶
Pad all the input arrays so that they are of the same length. The length is determined by the largest input array. The padding value for each input array is the minimum value in that array.
Padding for each input array is either done after the array’s last index to fill up to the length of the largest input array (method ‘fill-size’) or the padding is randomly flanked to the input array (method ‘random’) for easier visualization.
- Parameters
a – 1D array of input arrays where each element is a sample array
method – ‘fill-size’ or ‘random’
- Returns
2D array of the padded arrays in the rows
- property transmission: mesmerize.analysis.data_types.Transmission¶
The input transmission
- Return type
Transmission
- class mesmerize.plotting.widgets.kshape.widget.KShapeMeansPlot(parent)[source]¶
Bases:
mesmerize.pyqtgraphCore.widgets.MatplotlibWidget.MatplotlibWidget
Means plots grouped by cluster membership
- axs¶
array of axis objects used for drawing the means plots, shape is [nrows, ncols]
- set_plots(input_arrays: numpy.ndarray, n_clusters: int, y_pred: numpy.ndarray, xzero_pos: str, error_band)[source]¶
Set the subplots
- Parameters
input_arrays – padded input arrays (2D), shape is [num_samples, padded_peak_curve_length]
n_clusters – number of clusters
y_pred – cluster predictions (labels)
xzero_pos – set the zero position as the ‘zero’ position of the input array or the ‘maxima’ of the input array
error_band – Type of error band to show, one of either ‘ci’ or ‘std’
Proportions¶
- class mesmerize.plotting.ProportionsWidget[source]¶
Bases:
mesmerize.plotting.widgets.base.BasePlotWidget
,mesmerize.pyqtgraphCore.widgets.MatplotlibWidget.MatplotlibWidget
- drop_opts = ['xs', 'ys']¶
Drop the ‘xs’ and ‘ys’ since they are pd.Series objects and not required for restoring the plot
- property ax: matplotlib.axes._axes.Axes¶
The Axes object for this plot
- Returns
The Axes object for this plot
- Return type
Axes
- set_input(*args, **kwds)¶
Set the input Transmission with data to plot
- Parameters
transmission – Input transmission
- get_plot_opts(drop: bool = False)[source]¶
Get the plot options
- Parameters
drop – Drop the ‘xs’ and ‘ys’ objects when saving the returned dict for saving to an hdf5 file
- set_plot_opts(*args, **kwds)¶
Must be implemented in subclass
- property transmission: mesmerize.analysis.data_types.Transmission¶
The input transmission
- Return type
Transmission
- save_plot_dialog(path, *args)¶
Plot save dialog
- save_plot(path)¶
Save the plot as a Transmission in an HDF5 file. Plot parameters are stored as a JSON string within the HDF5 file. See Transmission.to_hdf5
- Parameters
path – Path to save the file to. For easy identification use “.ptrn” extension.
- open_plot_dialog(filepath, dirpath, *args, **kwargs)¶
Open plot dialog
Scatter Plot¶
ScatterPlotWidget¶
Higher level widget that is directly used by the end-user. Scatter plot with docked Control Widget, Datapoint Tracer, and Console.
- class mesmerize.plotting.ScatterPlotWidget[source]¶
Bases:
PyQt5.QtWidgets.QMainWindow
,mesmerize.plotting.widgets.base.BasePlotWidget
- live_datapoint_tracer¶
Instance of DatapointTracer
- set_input(*args, **kwds)¶
Set the input Transmission with data to plot
- Parameters
transmission – Input transmission
- get_plot_opts(drop: bool = False) dict [source]¶
Get the plot options
- Parameters
drop – no drop opts are specified for this plot
- set_plot_opts(*args, **kwds)¶
Must be implemented in subclass
- set_current_datapoint(identifier: uuid.UUID)[source]¶
Set the UUID of the current datapoint and update the Datapoint Tracer
- property transmission: mesmerize.analysis.data_types.Transmission¶
The input transmission
- Return type
Transmission
- save_plot_dialog(path, *args)¶
Plot save dialog
- save_plot(path)¶
Save the plot as a Transmission in an HDF5 file. Plot parameters are stored as a JSON string within the HDF5 file. See Transmission.to_hdf5
- Parameters
path – Path to save the file to. For easy identification use “.ptrn” extension.
- open_plot_dialog(filepath, dirpath, *args, **kwargs)¶
Open plot dialog
Variant¶
Lower level widget that interfaces with pyqtgraph.ScatterPlotItem and has some helper methods.
- class mesmerize.plotting.variants.PgScatterPlot(graphics_view: mesmerize.pyqtgraphCore.widgets.GraphicsLayoutWidget.GraphicsLayoutWidget, parent=None)[source]¶
Bases:
PyQt5.QtCore.QObject
- signal_spot_clicked¶
Emits the UUID of a spot when it is clicked
- __init__(graphics_view: mesmerize.pyqtgraphCore.widgets.GraphicsLayoutWidget.GraphicsLayoutWidget, parent=None)[source]¶
- Parameters
graphics_view – This plot will instantiate within this GraphicsLayoutWidget
- add_data(xs: numpy.ndarray, ys: numpy.ndarray, uuid_series: pandas.core.series.Series, color: Union[str, PyQt5.QtGui.QColor, PyQt5.QtGui.QBrush, List[Union[PyQt5.QtGui.QBrush, PyQt5.QtGui.QColor, str]]], size: int = 10, **kwargs)[source]¶
Add data to the plot
- Parameters
xs (np.ndarray) – array of x values, indices must correspond to the “ys” array
ys (np.ndarray) – array of y values, indices must correspond to the “xs” array
uuid_series (pd.Series) – series of UUID values. Each SpotItem on the plot is tagged with these UUIDs, therefore the indices must correspond to the “xs” and “ys” arrays.
color (Union[str, QtGui.QColor, QtGui.QBrush, List[Union[QtGui.QBrush, QtGui.QColor, str]]]) – Either a single color or list of colors that pyqtgraph.fn.mkBrush() can accept
size (int) – spot size
kwargs – any additional kwargs that are passed to ScatterPlotItem.addPoints()
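A minimal sketch of creating the plot and adding data (the data and UUIDs are made up for illustration):

import numpy as np
import pandas as pd
from uuid import uuid4
from PyQt5 import QtWidgets
from mesmerize.pyqtgraphCore.widgets.GraphicsLayoutWidget import GraphicsLayoutWidget
from mesmerize.plotting.variants import PgScatterPlot

app = QtWidgets.QApplication([])

# The plot instantiates within a GraphicsLayoutWidget
glw = GraphicsLayoutWidget()
scatter = PgScatterPlot(graphics_view=glw)

# Made-up data: 50 points, each tagged with a UUID
xs = np.random.rand(50)
ys = np.random.rand(50)
uuids = pd.Series([uuid4() for _ in range(50)])

scatter.add_data(xs, ys, uuid_series=uuids, color='r', size=10)

# signal_spot_clicked emits the UUID of a spot when it is clicked
scatter.signal_spot_clicked.connect(print)

glw.show()
app.exec_()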
SpaceMap¶
- class mesmerize.plotting.SpaceMapWidget[source]¶
Bases:
PyQt5.QtWidgets.QMainWindow
,mesmerize.plotting.widgets.base.BasePlotWidget
- sample_df¶
sub-dataframe of the current sample
- property transmission: mesmerize.analysis.data_types.Transmission¶
The input transmission
- Return type
Transmission
- save_plot_dialog(path, *args)¶
Plot save dialog
- save_plot(path)¶
Save the plot as a Transmission in an HDF5 file. Plot parameters are stored as a JSON string within the HDF5 file. See Transmission.to_hdf5
- Parameters
path – Path to save the file to. For easy identification use “.ptrn” extension.
- open_plot_dialog(filepath, dirpath, *args, **kwargs)¶
Open plot dialog
- open_plot(ptrn_path: str, proj_path: str) Optional[Tuple[str, str]] ¶
Open a plot saved by the save_plot() method
- Parameters
ptrn_path – Path to the HDF5 Transmission file. By convention file extension is “.ptrn”
proj_path – Project path of the associated plot data.
- set_plot_opts(*args, **kwds)¶
Must be implemented in subclass