alleninstitute / bigfeta

Big FeaTure Aligner: a scalable solution for feature alignment

License: BSD 2-Clause "Simplified" License

Shell 1.14% Python 82.17% Makefile 0.17% C 16.51%

bigfeta's Introduction


BigFeta

Global linear-least squares solution of transforms for alignment of features.

Level of support

We plan to update this tool occasionally, with no fixed schedule. Community involvement is encouraged through both issues and pull requests.

Documentation

https://bigfeta.readthedocs.io/en/latest/index.html

Acknowledgement of Government Sponsorship

Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior / Interior Business Center (DoI/IBC) contract number D16PC00004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

bigfeta's People

Contributors

djkapner, russtorres, fcollman, gayathrimahalingam, mlnichols, trautmane


bigfeta's Issues

Avoid modification of matches

The list of input matches can be modified during multi-step in-memory alignments. If I recall correctly, this is because bigfeta.utils.transform_match writes transforms from the apply_list back to a match.

The way I have currently implemented multi-step alignments involves copying matches, which can be expensive on large collections (hence the renderapi.pointmatch.copy_matches_explicit method https://github.com/AllenInstitute/render-python/blob/c4a1ba489055c872730f45a5b2e100881cc211a7/renderapi/pointmatch.py#L41).

Better practice is probably to avoid modifying the inputs at this stage.
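One way to avoid mutating the inputs is to have the transform step return new match dicts instead of writing back into them. The following is a sketch of that idea, not the actual bigfeta.utils.transform_match signature; transform_match_nonmutating and transform_fn are hypothetical names, and the match layout assumed here is the render-style {"matches": {"p": [xs, ys], "q": [xs, ys]}}:

```python
import copy

def transform_match_nonmutating(match, transform_fn):
    # Return a transformed copy of a point-match dict; the input match
    # is left untouched. transform_fn maps the [xs, ys] point arrays
    # of one side of a match to new arrays.
    new_match = copy.deepcopy(match)
    new_match["matches"]["p"] = transform_fn(match["matches"]["p"])
    new_match["matches"]["q"] = transform_fn(match["matches"]["q"])
    return new_match
```

With this shape, multi-step alignments could chain transforms without needing an up-front copy of the whole collection.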

AttributeError: 'NoneType' object has no attribute 'data'

Hi all,
when I try to run solve.py in ASAP, the following error arises:

File "/usr/local/lib/python3.8/dist-packages/bigfeta-1.1.0-py3.8.egg/bigfeta/solve/solve_scipy.py", line 34, in solve
    weights.data = np.sqrt(weights.data)
AttributeError: 'NoneType' object has no attribute 'data'

I also printed the assemble_result returned by assemble_result = self.assemble_from_db(zvals) in bigfeta.py, which is:

{'A': None, 'weights': None, 'reg': <21x21 sparse matrix of type '<class 'numpy.float64'>'
        with 21 stored elements in Compressed Sparse Row format>, 'x': array([[1.0000e+00, 0.0000e+00],
       [0.0000e+00, 1.0000e+00],
       [2.1500e+03, 0.0000e+00],
       [1.0000e+00, 0.0000e+00],
       [0.0000e+00, 1.0000e+00],
       [8.6010e+03, 0.0000e+00],
       [1.0000e+00, 0.0000e+00],
       [0.0000e+00, 1.0000e+00],
       [4.3010e+03, 0.0000e+00],
       [1.0000e+00, 0.0000e+00],
       [0.0000e+00, 1.0000e+00],
       [0.0000e+00, 0.0000e+00],
       [1.0000e+00, 0.0000e+00],
       [0.0000e+00, 1.0000e+00],
       [1.0752e+04, 0.0000e+00],
       [1.0000e+00, 0.0000e+00],
       [0.0000e+00, 1.0000e+00],
       [6.4510e+03, 0.0000e+00],
       [1.0000e+00, 0.0000e+00],
       [0.0000e+00, 1.0000e+00],
       [1.2902e+04, 0.0000e+00]]), 'rhs': None}

It seems that 'weights' is None, but I have no idea why that happens. My settings in solve.py are:

montage_example = {
    "first_section": 10,  # 1020
    "last_section": 10,  # 1020
    "solve_type": "montage",
    "close_stack": True,
    "transformation": "affine",
    "start_from_file": "",
    "output_mode": "stack",
    "input_stack": {
        "owner": "EM_group1",
        "project": "lens_corr1",
        "name": "mm2_mipmap_test",
        "host": "10.8.204.10",
        "port": 9001,
        "mongo_host": "10.8.204.10",
        "mongo_port": 27017,
        "client_scripts": "/home/asap/render-ws-java-client/src/main/scripts",
        "collection_type": "stack",
        "db_interface": "render"
    },
    "pointmatch": {
        "owner": "EM_group1",
        "name": "dev_collection_can_delete",
        "host": "10.8.204.10",
        "port": 9001,
        "mongo_host": "10.8.204.10",
        "mongo_port": 27017,
        "client_scripts": "/home/asap/render-ws-java-client/src/main/scripts",
        "collection_type": "pointmatch",  # pointmatch (default)
        "db_interface": "render"
    },
    "output_stack": {
        "owner": "EM_group1",
        "project": "lens_corr1",
        "name": "python_montage_results",
        "host": "10.8.204.10",
        "port": 9001,
        "mongo_host": "10.8.204.10",
        "mongo_port": 27017,
        "client_scripts": "/home/asap/render-ws-java-client/src/main/scripts",
        "collection_type": "stack",
        "db_interface": "render"
    },
    "hdf5_options": {
        "output_dir": "/home/asap/example_output",
        "chunks_per_file": -1
    },
    "matrix_assembly": {
        "depth": 2,
        "montage_pt_weight": 1.0,
        "cross_pt_weight": 0.5,
        "npts_min": 5,
        "npts_max": 500,
        "inverse_dz": "True"
    },
    "regularization": {
        "default_lambda": 1.0e3,
        "translation_factor": 1.0e-5
    }
}

I have tried a lot to debug this issue, but it persists. Could you please help me solve it? Thank you very much!

Best,
MY
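For reference, the failing line in solve_scipy.py assumes a sparse weights matrix was assembled. The sketch below is not the actual bigfeta code, and the diagnosis (that weights being None means no point matches were assembled for the requested sections) is an assumption based on the printed assemble_result, where 'A', 'weights', and 'rhs' are all None:

```python
import numpy as np

def checked_sqrt_weights(weights):
    # weights is expected to be a scipy.sparse matrix exposing a .data
    # attribute. None here likely means assemble_from_db found no point
    # matches for the requested sections (an assumption, see above).
    if weights is None:
        raise ValueError(
            "weights is None: no point matches were assembled; check the "
            "pointmatch collection owner/name and the first/last sections")
    weights.data = np.sqrt(weights.data)
    return weights
```

If a guard like this fires, the pointmatch collection settings are the first thing to check against what is actually stored in the database.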

How to import images to render-ws when there is no metafile

Hi Russel Torres,
Sorry for disturbing you.

I currently perform EM stitching and alignment based on asap-modules. Typically, in the first step of ASAP, I need to run python -m asap.mesh_lens_correction.do_mesh_lens_correction, and the camera info is stored in a metafile as shown in the example https://github.com/AllenInstitute/asap-modules/blob/master/asap/mesh_lens_correction/do_mesh_lens_correction.py.

A snapshot of the metafile example you provided: [screenshot omitted]

However, how can I directly import EM images into render without setting up a metafile, since I only have EM images without camera info? A snapshot of my images: [screenshot omitted]

Hi Russel Torres, is there any method to solve this problem? Thanks a lot for your help!

Best,
MY

Why are there black boundaries?

Hi Russel Torres,
Sorry for disturbing you. I have tried to stitch some tiles into a montage, as shown below. The montage results look good overall; nevertheless, I wonder why there are black boundaries. I tuned some parameters, such as mask_coords in do_mesh_lens_correction.py, but the black boundaries remain. I also tuned other parameters, but the results seem unchanged. Do I need to tune other parameters to remove the black boundaries? Thanks for your help in advance.
[montage image omitted]

When loading matches from file, matches are collected based on intersection not equality

In utils.py

def get_matches(iId, jId, collection, dbconnection):

    ... [Removed deliberately]

    matches = []
    if collection['db_interface'] == 'file':
        matches = jsongz.load(collection['input_file'])
        matches = [m for m in matches
                   if set([m['pGroupId'], m['qGroupId']]) & set([iId, jId])]

Here we return every match that has at least one of 'pGroupId', 'qGroupId' in common with 'iId', 'jId'.

This behaviour seems wrong to me (let me know if I'm mistaken). The extra matches are filtered out at the next step in bigfeta.py, but this function should still use equality here.

As a sub-example

pGroupId = '1.0'
qGroupId = '2.0'
(i.e. the match is between layer 1 and layer 2)

iId = '2.0'
jId = '3.0'
We want the sections between 2 and 3

set([pGroupId, qGroupId]) & set([iId, jId]) evaluates to {'2.0'} (their intersection), and the if then evaluates to True since the set isn't empty.

It may be that you wanted the set elements to be lists, but lists are unhashable. You could make the elements tuples instead (though you shouldn't, since the order would then matter; while in practice the ids are ordered, that isn't guaranteed as far as I understand), in which case:

set([(pGroupId, qGroupId)]) & set([(iId, jId)]) evaluates correctly (as does set(((pGroupId, qGroupId),)) & set(((iId, jId),)), note the trailing commas, but not set((pGroupId, qGroupId)) & set((iId, jId)), since set() iterates over the tuple's elements when building the set).
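A quick check of the variants above, using the sub-example values:

```python
pGroupId, qGroupId = '1.0', '2.0'   # match between layer 1 and layer 2
iId, jId = '2.0', '3.0'             # we want sections 2 and 3

# Intersection of element sets: truthy even though the pairs differ.
assert set([pGroupId, qGroupId]) & set([iId, jId]) == {'2.0'}

# Wrapping each pair as a single tuple element compares whole pairs,
# so the partial overlap is correctly rejected.
assert not (set([(pGroupId, qGroupId)]) & set([(iId, jId)]))

# Set equality on the elements also rejects the partial overlap.
assert set([pGroupId, qGroupId]) != set([iId, jId])
```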

Assuming you want matches included only when they match the section ids, there are a few different ways to make it work.

The smallest change is to just use set equality, i.e.

matches = [m for m in matches if set([m['pGroupId'], m['qGroupId']]) == set([iId, jId])]

This generates a set per match, which is a little wasteful if the match list is large (though it shouldn't take much time, so if it's conceptually neater it's fine), so I'd probably suggest the following instead.

def get_matches(iId, jId, collection, dbconnection):

    ... [Removed deliberately]

    matches = []
    if collection['db_interface'] == 'file':
        # side note: this load should be cached, or the code refactored
        # to load only once, but that is not relevant to this bug report
        matches = jsongz.load(collection['input_file'])
        section_set = set([iId, jId])
        matches = [m for m in matches
                   if m['pGroupId'] in section_set and m['qGroupId'] in section_set]

mpl_scatter_density requirement

As described in AllenInstitute/asap-modules#243 the pip installation of this repo and asap-modules is complicated by fast-histogram, a dependency of mpl_scatter_density.

The only place where mpl_scatter_density is used is bigfeta.qctools.CheckResiduals -- does it make sense to make this an optional dependency to improve build processes?
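One common pattern for making such a dependency optional is a guarded import that only fails when the feature is actually used. This is a sketch, not the current bigfeta code, and the wrapper name and "pip install" hint are hypothetical:

```python
try:
    import mpl_scatter_density  # noqa: F401
    HAS_SCATTER_DENSITY = True
except ImportError:
    HAS_SCATTER_DENSITY = False


def plot_residual_density(*args, **kwargs):
    # Hypothetical wrapper: raise a helpful error only when the
    # optional density-plotting feature is requested, so the base
    # install never needs fast-histogram.
    if not HAS_SCATTER_DENSITY:
        raise ImportError(
            "mpl_scatter_density is required for density plots; "
            "install it separately, e.g. pip install mpl_scatter_density")
    raise NotImplementedError("plotting body elided in this sketch")
```

With this, the build no longer depends on fast-histogram, and only CheckResiduals users pay the installation cost.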

3D: more flexible z, groupId mappings when building blocks from correspondences

def determine_zvalue_pairs(resolved, depths):

There seems to be an assumption that a match pair (pGroupId, qGroupId) relates to (z, z+dz), where dz is defined by the non-negative integer section depth. In my current use case, p and q do not strictly adhere to this. While it is pretty trivial to swap them in the match input, behavior that is not sensitive to this ordering seems to make more sense.
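One way to make block construction insensitive to (p, q) ordering, sketched under the assumption that group ids map to numeric z values, is to normalize each pair before checking it against the allowed depths (normalized_pair is a hypothetical helper):

```python
def normalized_pair(pz, qz, depths):
    # Return (low_z, high_z) if the pair's dz is within the allowed
    # depths, regardless of which section is p and which is q;
    # return None for pairs outside the depth range.
    lo, hi = sorted((pz, qz))
    return (lo, hi) if (hi - lo) in depths else None
```

With this normalization, (pGroupId, qGroupId) and (qGroupId, pGroupId) produce the same block assignment.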

Are these stitching results correct?

Hi Russel Torres,
The following image is a 2D stitching result following the asap-modules docs, https://asap-modules.readthedocs.io/en/latest/index.html:
[stitching result image omitted]
The metadata I tested is 21617_R1_166_T5_15_20201231140731_20201231140731, provided by you. The following are visualization error maps in HTML format:
[error map images omitted]

Are these stitching results correct? Why do they look as though they have undergone a spatial rotation? In addition, I also tested some EM tiles we sampled ourselves, and those 2D stitching results are also rotated by some degree. How do I solve this problem?

Thanks in advance, Russel Torres. Thank you!

Best and sincere wishes,
MY

marshmallow.exceptions.ValidationError: {'sparkhome': ['%s is not a directory']}

Hi RussTorres,
When I try to generate 3D point matches using python -m asap.pointmatch.create_tilepairs in ASAP, as suggested by https://asap-modules.readthedocs.io/en/latest/readme/rough_alignment.html#step-3-generate-point-matches, the following issue arises:

Traceback (most recent call last):
  File "/home/asap/render-ws-java-client/asap-modules/asap/pointmatch/generate_point_matches_spark.py", line 154, in <module>
    module = PointMatchClientModuleSpark(input_data=example)
  File "/usr/local/lib/python3.8/dist-packages/argschema/argschema_parser.py", line 175, in __init__
    result = self.load_schema_with_defaults(self.schema, args)
  File "/usr/local/lib/python3.8/dist-packages/argschema/argschema_parser.py", line 274, in load_schema_with_defaults
    result = utils.load(schema, args)
  File "/usr/local/lib/python3.8/dist-packages/argschema/utils.py", line 422, in load
    raise mm.ValidationError(errors)
marshmallow.exceptions.ValidationError: {'sparkhome': ['%s is not a directory']}

I use the default sparkhome setting in my file:
"sparkhome": "/allen/programs/celltypes/workgroups/em-connectomics/ImageProcessing/utils/spark/",
since I have no idea where to get the spark directory.

The whole settings in my file are:

example = {
    "render": {
        "host": "10.8.204.10",
        "port": 9001,
        "owner": "EM_group1",
        "project": "lens_corr1",
        "client_scripts": "/home/asap/render-ws-java-client/src/main/scripts"
    },
    "sparkhome": "/allen/programs/celltypes/workgroups/em-connectomics/ImageProcessing/utils/spark/",
    "masterUrl": "spark://10.8.204.10:7077",
    "logdir": "/home/asap/example_output/", 
    "jarfile": "/home/asap/render-ws-java-client/target/render-ws-spark-client-4.0.0-standalone.jar",
    "className": "org.janelia.render.client.spark.SIFTPointMatchClient",
    "baseDataUrl": "http://10.8.204.10:9001/render-ws/v1",
    "owner": "EM_group1",
    "collection": "mm2_rough_align_test",
    "pairJson": "/home/asap/montageTilepairs3D/tile_pairs_python_montage_results_z_100_to_100_dist_0.json",
    "SIFTfdSize": 8,
    "SIFTsteps": 3,
    "matchMaxEpsilon": 20.0,
    "maxFeatureCacheGb": 15,
    "SIFTminScale": 0.38,
    "SIFTmaxScale": 0.82,
    "renderScale": 0.3,
    "matchRod": 0.9,
    "matchMinInlierRatio": 0.0,
    "matchMinNumInliers": 8,
    "matchMaxNumInliers": 200
}

Hi RussTorres, could you tell me how to obtain the correct spark directory? Thank you very much for your help!

Best wishes,
MY
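For context, the marshmallow error means the configured sparkhome path does not exist as a directory on the machine running the client; the Allen-internal default path will not exist outside the Allen Institute network. The sketch below mimics that validation (it is not the actual argschema code), and suggests pointing sparkhome at a local Apache Spark installation:

```python
import os
import tempfile

def validate_sparkhome(path):
    # Mimic the directory check behind the argschema ValidationError
    # above: sparkhome must exist as a directory on the local machine.
    if not os.path.isdir(path):
        raise ValueError("%s is not a directory" % path)
    return path

# Stand-in for a real local install such as an unpacked Apache Spark
# distribution (hypothetical location).
local_spark = tempfile.mkdtemp()
assert validate_sparkhome(local_spark) == local_spark
```

Setting "sparkhome" to wherever Spark is unpacked locally should satisfy the validator.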

Very rough 3D alignment

Hello,

I am having a great deal of difficulty aligning my EM image stack with BigFeta. I'm hoping you guys could provide some guidance in where I might be going wrong. Apologies in advance if this issue would be more appropriate elsewhere.

Some context

I have a fairly small stack of EM data (9 layers / ~320 EM tiles per layer / ~3000 total tiles) that I aligned with EM_aligner_python, which I gather is the predecessor to BigFeta. The montage went quite well, I would say (at least visually; images below). As I typically have just one section, this is the first time I've tried a 3D alignment of my data. When things started to go haywire with EM_aligner_python, I hoped updating to BigFeta would fix up the alignment, but alas.

Here are the 3rd and 4th z layers of the montaged stack:
[montaged image omitted]

And after running the alignment with bigfeta, the result is:
[aligned image omitted]

Attempts at troubleshooting

I've tried sweeping the regularization parameters default_lambda and translation_factor over several orders of magnitude (somewhat aimlessly, just to see what would happen). While the resulting alignment indeed changes, I've not found an alignment much better than the one shown above.

Details

To run bigfeta I make a call to

python -m bigfeta.bigfeta --input_json /path/to/align.json

where my input_json looks like

{
  "first_section": 1.0,
  "last_section": 9.0,
  "solve_type": "3D",
  "close_stack": "True",
  "transformation": "affine",
  "start_from_file": "",
  "output_mode": "stack",
  "input_stack": {
    "owner": "rlane",
    "project": "20191101_RL010",
    "name": "lil_EM_montaged",
    "host": "sonic",
    "port": 8080,
    "mongo_host": "sonic",
    "mongo_port": 27017,
    "client_scripts": "/home/catmaid/render/render-ws-java-client/src/main/scripts",
    "collection_type": "stack",
    "db_interface": "mongo"
  },
  "pointmatch": {
    "owner": "rlane",
    "name": "20191101_RL010_lil_EM_montaged_points",
    "host": "sonic",
    "port": 8080,
    "mongo_host": "sonic",
    "mongo_port": 27017,
    "client_scripts": "/home/catmaid/render/render-ws-java-client/src/main/scripts",
    "collection_type": "pointmatch",
    "db_interface": "mongo"
  },
  "output_stack": {
    "owner": "rlane",
    "project": "20191101_RL010",
    "name": "lil_EM_aligned",
    "host": "sonic",
    "port": 8080,
    "mongo_host": "sonic",
    "mongo_port": 27017,
    "client_scripts": "/home/catmaid/render/render-ws-java-client/src/main/scripts",
    "collection_type": "stack",
    "db_interface": "render"
  },
  "hdf5_options": {
    "output_dir": "",
    "chunks_per_file": -1
  },
  "matrix_assembly": {
    "depth": 2,
    "montage_pt_weight": 1.0,
    "cross_pt_weight": 0.1,
    "npts_min": 5,
    "npts_max": 500,
    "inverse_dz": "True"
  },
  "regularization": {
    "default_lambda": 1000,
    "translation_factor": 0.05
  }
}
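A sweep like the one described above can be scripted by cloning the input parameters and varying default_lambda and translation_factor, tagging each output stack so runs do not overwrite each other. This is a sketch; regularization_sweep and the stack-name scheme are hypothetical:

```python
import copy
import itertools

def regularization_sweep(base_params, lambdas, translation_factors):
    # Yield deep copies of an alignment parameter dict covering a grid
    # of regularization settings; each copy gets a distinct output
    # stack name so solves can be compared side by side.
    for lam, tf in itertools.product(lambdas, translation_factors):
        p = copy.deepcopy(base_params)
        p["regularization"]["default_lambda"] = lam
        p["regularization"]["translation_factor"] = tf
        p["output_stack"]["name"] = "lil_EM_aligned_lam{:g}_tf{:g}".format(
            lam, tf)
        yield p
```

Each yielded dict could then be written to its own json and passed to python -m bigfeta.bigfeta --input_json as above.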
