swharden / pyabf

pyABF is a Python package for reading electrophysiology data from Axon Binary Format (ABF) files

Home Page: https://swharden.com/pyabf

License: MIT License

Batchfile 0.02% Python 17.63% C# 0.62% Jupyter Notebook 81.43% Shell 0.01% HTML 0.24% CSS 0.06%
electrophysiology neuroscience abf physiology neuron hacktoberfest hacktoberfest2022

pyabf's Introduction

pyABF

pyabf is a Python library for reading electrophysiology data from Axon Binary Format (ABF) files. It was created with the goal of providing a Pythonic API for accessing the content of ABF files that is so intuitive to use (with a predictive IDE) that documentation is largely unnecessary. Flip through the pyabf Tutorial and you'll be analyzing data from your ABF files in minutes!

Installation

pip install --upgrade pyabf

Quickstart

import pyabf
abf = pyabf.ABF("demo.abf")
abf.setSweep(3)
print(abf.sweepY) # displays sweep data (ADC)
print(abf.sweepX) # displays sweep times (seconds)
print(abf.sweepC) # displays command waveform (DAC)

Supported Python Versions

The latest version of pyABF runs on all currently supported Python versions.

Users who wish to run pyABF on older versions of Python may do so by installing older pyABF packages available on the Release History page on PyPI. Additional information is available on the pyABF Release History page on GitHub. Note that pyabf 2.1.10 was the last version to support Python 2.7 and Python 3.5.

Resources

pyabf's People

Contributors

akjama, konung-yaropolk, lucarossi147, nilegraddis, pnewstein, saglag, swharden, t-b

pyabf's Issues

support caching of custom stimulus waveforms

Currently a new instance of ATF is created for every loaded protocol. This takes quite some time and makes loading unbearably slow for 20 MB ATF files.

My old approach of caching the results needs to be resurrected so that this is usable again.

I'll provide a PR.
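A minimal sketch of the kind of cache that could avoid re-parsing; parseATF and getCachedATF are hypothetical names standing in for whatever pyABF's ATF reader actually provides:

import os

# Hypothetical module-level cache of parsed ATF stimulus files, keyed by absolute path.
# parseATF stands in for whatever function performs the expensive ATF read.
_atfCache = {}

def getCachedATF(path, parseATF):
    """Return parsed ATF data for a path, parsing each file at most once."""
    key = os.path.abspath(path)
    if key not in _atfCache:
        _atfCache[key] = parseATF(key)  # the expensive parse happens only once
    return _atfCache[key]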

abfDateTime is not accurate

As always, thank you for your contribution.
I found a minor issue while handling the recording datetime of my data.

To get right to the point, abfDateTime seems to have a minor problem.
I checked my file with your getInfoPage().launchTempWebpage() function.

import pyabf
abf = pyabf.ABF('sample.abf')
abf.getInfoPage().launchTempWebpage()

and it returns

abfDateTime = 2018-01-05 18:28:13

uFileStartDate = 20180305
uFileStartTimeMS = 66313103

The datetime above does not match, and I know the second value is correct.

Therefore, I looked at your code and found a minor issue.
In your code, it is written like this:

        startDate = str(self._headerV2.uFileStartDate)
        startTime = round(self._headerV2.uFileStartTimeMS/1000)
        startDate = datetime.datetime.strptime(startDate, "%Y%M%d")
        startTime = datetime.timedelta(seconds=startTime)
        self.abfDateTime = startDate+startTime

As you can see, I guess the strptime format has a small typo.
I think you have to change
"%Y%M%d" into "%Y%m%d" (%M parses minutes, while %m parses the month).

abf1 tags

ABF1 files do support tags. Fix this.

improve auto-detection of channel units

Issue

Currently the code checks whether sADCUnit has the label "mV"; if it does, it assumes voltage clamp (pA units), and otherwise it assumes current clamp (mV units). This happens here:
https://github.com/swharden/pyABF/blob/master/src/pyabf/header.py#L132

This has caused problems in the past, because non-telegraphed instruments (such as the Axopatch) don't include sADCUnit in their header. Additionally, some ABFs have channels recording more than just voltage or current (some ABFs record temperature in the data section). In these cases, the lookup fails, and the default unit is pA.

Solution

Find an alternative field in the header to more reliably determine units. Make it ABF1 and ABF2 compatible.
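Until a better header field is identified, a defensive fallback along these lines might at least make the failure mode visible (a sketch only; the strings and defaults here are assumptions, not pyABF's actual logic):

def guessChannelUnits(adcUnits):
    """Guess per-channel units from the header's ADC unit strings.

    Falls back to an explicit "unknown" instead of silently assuming pA,
    so unusual channels (e.g. temperature) are easier to notice downstream.
    """
    units = []
    for unitString in adcUnits:
        unitString = (unitString or "").strip()
        units.append(unitString if unitString else "unknown")
    return units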

optimizing ABF class design for subsequent analysis

The spirit of the ABF class is to read ABF files... not to analyze them.

Currently things like baseline subtraction and averaging are in the ABF class. These "advanced" features might best be moved outside the core ABF class. Perhaps an "analysis" class which you could feed ABF objects would be a better way to approach analytical processes which lie outside the scope of ABF file reading (and hence the ABF class). This ticket collects notes on these ideas.

abfDateTime returns a string, not a datetime object

Tested on pyabf 2.0.20 and 2.0.19. It returns a datetime object in 2.0.9, which I chose at random because I really need to get some files processed.
Version 2.0.17 also returns a datetime object correctly.

remove autoanalysis project from pyABF

The autoanalysis project (/dev/autoanalysis) started as a small side project initially intended to become a pyABF module (similar to the intrinsic property calculating memtest.py module, or the action potential detection AP.py module). It has since exploded in complexity, is highly experimental, and probably merits its own repo instead of bundling all this code with the pyABF project. Frequent commits to the autoanalysis folder are also drowning out meaningful commits to the core pyABF project.

Remove the autoanalysis project from pyABF, and give it its own repo.

refactor epoch synthesis module

abf.epochPoints and abf.epochValues are now simple to access. If abf.epochTypes is added, epoch waveform synthesis can be dramatically simplified (by simply reading these 3 lists). stimulus.py could be greatly simplified by refactoring to this model.
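A minimal sketch of what the simplified synthesis could look like, assuming the three parallel lists give each epoch's start index, level, and type; this is illustrative, not pyABF's actual implementation:

import numpy as np

def synthesizeWaveform(epochPoints, epochValues, epochTypes, sweepPointCount, holding=0.0):
    """Build a command waveform from parallel epoch lists (types: "step" or "ramp")."""
    waveform = np.full(sweepPointCount, holding, dtype=float)
    boundaries = list(epochPoints) + [sweepPointCount]  # each epoch ends where the next begins
    level = holding
    for i, (start, value, epochType) in enumerate(zip(epochPoints, epochValues, epochTypes)):
        end = boundaries[i + 1]
        if epochType == "step":
            waveform[start:end] = value
        elif epochType == "ramp":
            waveform[start:end] = np.linspace(level, value, end - start)
        level = value  # ramps begin at the previous epoch's level
    return waveform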

autoanalysis paired pulse

example ABF
X:\Data\C57\Tat project\abfs-evoked-ratio\2018_09_05_DIC3_0011.abf

Make it produce something like the two example images attached to the original issue.

creating ABF1 files from scratch

A lot of people seem to use software like MiniAnalysis which expects data to arrive as ABF1 files. It may be useful to create a pyABF module which allows the creation of ABF1 files from scratch.

The goal of this ticket is to track progress toward exporting data as ABF1 files that MiniAnalysis can import. In the process, a sweep-synthesis module will be created to produce simulated data containing events suitable for detection (see the sketch after the task list below).

Tasks

  • code to synthesize whole-cell patch-clamp data from scratch
    • EPSCs and IPSCs (exponential)
    • EPSPs and IPSPs (alpha)
    • action potentials
    • gaussian amplifier noise
    • cell drift (wobble) instability
    • integrate trace synthesis code into pyABF as a module
    • document trace synthesis
  • ABF1 file export
    • suitable for ClampFit to open
    • suitable for MiniAnalysis to open
    • integrate ABF1 export into pyABF
    • document ABF1 export
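For the trace-synthesis tasks above, a hedged sketch of a single exponential EPSC-like event (the function name, parameters, and default values are illustrative, not part of pyABF):

import numpy as np

def synthesizeEPSC(sampleRate=20000, amplitude=-20.0, tauRiseMs=0.5, tauDecayMs=5.0, durationMs=50):
    """Return a biexponential EPSC-shaped event (in pA) sampled at sampleRate (Hz)."""
    pointCount = int(durationMs / 1000 * sampleRate)
    t = np.arange(pointCount) / sampleRate * 1000  # time in milliseconds
    shape = np.exp(-t / tauDecayMs) - np.exp(-t / tauRiseMs)
    shape /= shape.max()  # normalize so the peak equals the requested amplitude
    return amplitude * shape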

clarify byte size of header items

According to the struct format documentation, "i" and "l" are both 4-byte integers. Make it clear in the source code that these are to be read identically. Currently, i and l are used interchangeably. Maybe the distinction means something to C89, but not to Python.

https://docs.python.org/2/library/struct.html#format-characters

class SectionMap:
    """
    Reading three numbers (int, int, long) at specific byte locations
    yields the block position, byte size, and item count of specific
    data stored in sections. Note that a block is 512 bytes. Some of
    these sections are not read by this class because they are either
    not useful for my applications, typically unused, or have an
    unknown memory structure.
    """

    def __init__(self, fb):
        self.ProtocolSection = readStruct(fb, "IIl", 76)
        self.ADCSection = readStruct(fb, "IIl", 92)
        self.DACSection = readStruct(fb, "IIl", 108)
        self.EpochSection = readStruct(fb, "IIl", 124)
        self.ADCPerDACSection = readStruct(fb, "IIl", 140)
        self.EpochPerDACSection = readStruct(fb, "IIl", 156)
        self.UserListSection = readStruct(fb, "IIl", 172)
        self.StatsRegionSection = readStruct(fb, "IIl", 188)
        self.MathSection = readStruct(fb, "IIl", 204)
        self.StringsSection = readStruct(fb, "IIl", 220)
        self.DataSection = readStruct(fb, "IIl", 236)
        self.TagSection = readStruct(fb, "IIl", 252)
        self.ScopeSection = readStruct(fb, "IIl", 268)
        self.DeltaSection = readStruct(fb, "IIl", 284)
        self.VoiceTagSection = readStruct(fb, "IIl", 300)
        self.SynchArraySection = readStruct(fb, "IIl", 316)
        self.AnnotationSection = readStruct(fb, "IIl", 332)
        self.StatsSection = readStruct(fb, "IIl", 348)
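As a quick standard-library illustration of the point above: the two format codes are the same size only when a byte-order prefix forces standard sizes; in native mode the platform's C sizes apply.

import struct

# With a byte-order prefix, struct uses standard sizes: "i" and "l" are both 4 bytes.
print(struct.calcsize("<i"), struct.calcsize("<l"))  # 4 4
# In native mode the platform's C sizes apply, so "l" is often 8 bytes on 64-bit Linux/macOS.
print(struct.calcsize("i"), struct.calcsize("l"))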

should ATFstorage be an ABF class argument?

When using a custom stimulus waveform, ATF stimulus waveforms are supported with abf.py#L35 which uses atf_reader.py and atf_storage.py to cache ATF files and load them in as an ABF class argument.

Should atfStorage=ATFStorage really be an ABF class argument?

This feels odd because this seems like a rare case (ABF files with ATF stimulus files), yet it has made its way all the way into an argument of the primary class of this entire project. It works as it is, but I'm preparing to port this project to C# (re-writing it from scratch), so I want to make sure everything is in its most logical spot.

An alternative way to get the same functionality is just to have the user do the work of reading and caching these files themselves. The tools can still be available with pyABF, just imported outside the ABF class. Perhaps even an ATF class would be useful.

import pyabf

abf = pyabf.ABF("data.abf")
abf.epochsByChannel = pyabf.ATF("stim.atf").epochsByChannel

# then all the usual python code works
print(abf.sweepX, abf.sweepY)
print(abf.sweepX, abf.sweepC)

Although, does it really make sense to call it epochs at all at that point? Why not just keep the stimulus data outside the ABF class forever? This would mean you'd write use-case-based analysis code as needed:

import pyabf

stim = pyabf.ATF("stim.atf")
abf = pyabf.ABF("data.abf")

# then write your custom program as needed
print(abf.sweepX, abf.sweepY)
print(abf.sweepX, stim.sweepC)

I'm leaning toward this, because it keeps the ATF code easy to maintain and the ATF code doesn't live inside the ABF class...

simplify header report

The current method of generating a text-formatted header report isn't intuitive:

abf = pyabf.ABF("demo.abf")
abf.getInfoPage().showText()

Consider something like:

abf = pyabf.ABF("demo.abf")
print(abf.headerText)

Then add that to the quickstart guide.

Improve Python version enforcement

Add better support for Python version enforcement

What Python version should be required?

pyABF uses lots of f-strings, which (per PEP 498) means Python < 3.6 isn't supported. Python 3.6 is probably the minimum version pyABF can support.
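For the enforcement itself, here's a minimal sketch (module name and minimum version are illustrative) that deliberately avoids f-strings so the check can still run, and fail gracefully, on old interpreters:

import sys

MINIMUM = (3, 6)

if sys.version_info < MINIMUM:
    raise ImportError(
        "pyabf requires Python %d.%d or newer (you are running %s)"
        % (MINIMUM[0], MINIMUM[1], sys.version.split()[0])
    )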

Original Issue

Check how this fails if run on Python 2. I got an email from someone saying this happens (which is unclear).

  File "/Users/dir/Library/Python/2.7/lib/python/site-packages/pyabf/__init__.py", line 9, in <module>
    from ._version import __version__
  File "/Users/dir/Library/Python/2.7/lib/python/site-packages/pyabf/_version.py", line 28
    errMsg = f"pyABF version {__version__} < required {versionNeeded}"

improve ABF version checking

In a lot of places in core.py I do things like if self.abfFileFormat == 2. Currently the abfFileFormat property of the ABF class is just a 1 or a 2 depending on whether abfVersion starts with a 1 or a 2. The abfFileFormat property can probably be removed entirely, replaced by a small function which determines whether the file format is ABF1 or ABF2 by looking at abfVersion.
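A minimal sketch of what that helper could look like; the assumed shapes of abfVersion (a dotted string or a mapping with a "major" key) are guesses, not pyABF's documented API:

def isAbf2(abfVersion):
    """Return True for ABF2 files and False for ABF1, based on the major version number."""
    if isinstance(abfVersion, dict):
        major = int(abfVersion["major"])  # assumed dict shape, e.g. {"major": 2, ...}
    else:
        major = int(str(abfVersion).split(".")[0])  # assumed dotted string, e.g. "2.6.0"
    if major not in (1, 2):
        raise ValueError("unsupported ABF major version: %r" % (major,))
    return major == 2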

create stimulus waveform for all epoch types

Currently only epoch types 0 (disabled), 1 (step), and 2 (ramp) are supported. Determine what additional epochs are available and be able to synthesize their waveforms.

Stress test this against new multi-channel ABFs with epochs of variable duration and variable levels. This is likely best done in conjunction with a full overhaul of the epoch processing procedure.

add test for all header values

Just regenerating the data index isn't working, because the output is flagged as different depending on the numpy version number.

Make a test that ensures header values don't change when core restructuring happens.

non-continuous sweeps

I'm not sure what currently happens when a 5 second sweep is set to record every 10 seconds. 50% of that time should be empty.

Locate or create an ABF with known empty space and ensure it's handled properly by pyABF

Type error when file has only one comment

When attempting to read in an .abf file with only one comment, the following error is shown:
"'int' object is not iterable" (from line 77 in abf.py).

My crude solution so far is to replace lines 77-79 with the following (there should be a more elegant way, I think):

if isinstance(self.commentTimes, int):
    self.commentTimesSec = self.commentTimes * self._abfHeader.header["fSynchTimeUnit"] / 1e6
    self.commentTimesMin = self.commentTimesSec / 60
    self.commentSweeps = int(self.commentTimesSec / self.sweepLengthSec)
else:
    self.commentTimesSec = [x * self._abfHeader.header["fSynchTimeUnit"] / 1e6 for x in self.commentTimes]
    self.commentTimesMin = [x / 60 for x in self.commentTimesSec]
    self.commentSweeps = [int(x / self.sweepLengthSec) for x in self.commentTimesSec]
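A slightly more compact alternative (a sketch, not necessarily the fix that was merged) is to normalize commentTimes to a list up front so both cases share one code path:

# Normalize to a list so a single comment is handled the same way as many.
commentTimes = self.commentTimes if isinstance(self.commentTimes, (list, tuple)) else [self.commentTimes]
self.commentTimesSec = [x * self._abfHeader.header["fSynchTimeUnit"] / 1e6 for x in commentTimes]
self.commentTimesMin = [x / 60 for x in self.commentTimesSec]
self.commentSweeps = [int(x / self.sweepLengthSec) for x in self.commentTimesSec]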

On a related note, I would greatly appreciate a fix to issue #3 :D

ability to add/modify tags in ABF files

Often I forget to add a tag or later want to change the time or comment of a tag in an ABF file. Since the tag structure is known, create a module to modify tags by changing the specific bytes of the file associated with tags.

This might be a little complex with files which don't contain tags, as there will be no bytes specifically allocated to store tag information. Investigation will require finding unused space in the ABF file which can be modified, or seeing if adding extra bytes at the end of the file is destructive.

Investigate Python 2.7 support

How hard would it be to make the current code base back-compatible with Python 2.7? Assess feasibility and follow up here. Perhaps a single release supporting Python 2.7 would be useful.

Channel 1 scaling factor is used to scale all channels

Hi Scott,

We have an ABF file with 2 channels. Channel 0 is membrane potential which is recovered correctly. Channel 1 is an analog output from a temperature recorder.

The temperature is scaled as 2.3 + 10*T, and when the ABF file is read into the software that generated it, it displays and formats correctly. However, when it's read into Python with pyabf, the temperature is not transformed and the units appear to be set as mV.

I've included the ABF file in question along with an example use case

example.zip

import pyabf
abf = pyabf.ABF("example.abf")
abf.setSweep(0, channel=0)
Vm = abf.dataY
t = abf.dataX
abf.setSweep(0, channel=1)
Temp = abf.dataY
#Temp = 2.3 + 10* abf.dataY

It's obviously not a critical issue, but it would be a nice feature (or, if I'm doing something wrong, it would be great if you could let me know).

Cheers,
Aaron

time string length is inconsistent

Strip or pad trailing zeros so the seconds timestamp always has three decimal places.

Loading 16426005.abf (6/1570) ...
2016-04-26T13:48:58.265000
Loading 16426006.abf (7/1570) ...
2016-04-26T14:19:26
Loading 16426007.abf (8/1570) ...
2016-04-26T14:47:20.375000
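A standard-library sketch of one way to force a consistent three-decimal timestamp (illustrative only; not necessarily the formatting pyABF settled on):

import datetime

def isoWithMilliseconds(dt):
    """Format a datetime as ISO 8601 with exactly three decimal places of seconds."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + "%03d" % (dt.microsecond // 1000)

print(isoWithMilliseconds(datetime.datetime(2016, 4, 26, 13, 48, 58, 265000)))  # 2016-04-26T13:48:58.265
print(isoWithMilliseconds(datetime.datetime(2016, 4, 26, 14, 19, 26)))          # 2016-04-26T14:19:26.000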

make ABF header class require numpy

Close this ticket when numpy is imported normally.

Initially a lot of effort was put into making the abf header class numpy-agnostic. This just complicates code maintenance, and since non-numpy data arrays are so slow, let's just go ahead and require numpy for pyABF. If a user wants a non-numpy ABF header class, they can modify it like this.

Numpy-agnostic import

try:
    import numpy as np # use Numpy if we have it
except:
    np=False

ABF data reading (with or without numpy)

fb = open("someFile.abf", 'rb')
fb.seek(firstBytePosition)
scaleFactor = self.header['lADCResolution'] / 1e6
if np:
    data = np.fromfile(fb, dtype=np.int16, count=pointCount)
    data = np.multiply(data, scaleFactor, dtype='float32')
else:
    print("WARNING: data is being retrieved without numpy (this is slow). See docs.")
    data = struct.unpack("%dh" % (pointCount), fb.read(pointCount*2))  # 16-bit signed ints
    data = [point*scaleFactor for point in data]  # scaled to Python floats
fb.close()

default gradient colormap

For a while I used winter, then I switched to Dark2. I just now realized Dark2 is stepped, not a gradient. Find a gradient color map which is versatile in many situations.

Examples where demo is used: https://github.com/swharden/pyABF/tree/master/docs/getting-started#advanced-plotting-with-the-pyabfplot-module

Change the default colormap here:

def colorsBinned(bins, colormap="Dark2", reverse=False):

There aren't any great options:
https://github.com/swharden/pyABF/blob/master/docs/advanced/v1%20cookbook/2017-11-12%20colormaps/colormaps.pdf

... so maybe a custom gradient to mimic Dark2?
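A hedged sketch of building such a custom gradient with matplotlib's LinearSegmentedColormap, interpolating between a few Dark2-like hues (the hex values are illustrative):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Build a smooth gradient between a few Dark2-like colors.
customMap = LinearSegmentedColormap.from_list("dark2ish", ["#1b9e77", "#7570b3", "#d95f02"])

# Color one trace per sweep using evenly spaced points along the gradient.
sweepCount = 10
colors = customMap(np.linspace(0, 1, sweepCount))
for i, color in enumerate(colors):
    plt.plot(np.sin(np.linspace(0, 3, 100)) + i, color=color)
plt.show()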
