lucidrains / equiformer-pytorch

Implementation of the Equiformer, an SE(3)/E(3)-equivariant attention network that reaches a new SOTA, and has been adopted by EquiFold for protein folding

License: MIT License

artificial-intelligence deep-learning equivariance transformers attention-mechanisms protein-folding molecules

equiformer-pytorch's Issues

Why does this implementation take up much more memory than EquiFold?

Hi, great work! I find this library very memory-intensive. How can I reduce the memory usage? Do you have any plans to reduce GPU memory consumption?
When I run the following code, I get a CUDA out-of-memory error on an RTX 4090, which has 24 GB of GPU memory.

import torch

from equiformer_pytorch.equiformer_pytorch import Equiformer

model = Equiformer(
    dim=128,
    depth=2,
    l2_dist_attention=True,
    reduce_dim_out=True
).to('cuda')

feats = torch.randn(2, 64, 128, device='cuda')
coors = torch.randn(2, 64, 3, device='cuda')
mask = torch.ones(2, 64, dtype=torch.bool, device='cuda')

out = model(feats, coors, mask)

and the error is:

...
File "/xxx/equiformer-pytorch/equiformer_pytorch/equiformer_pytorch.py", line 426, in forward
    out = out + R[..., i] * B[..., i]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.66 GiB (GPU 0; 23.99 GiB total capacity; 21.60 GiB already allocated; 0 bytes free; 22.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Looking forward to your reply, thanks!
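For what it's worth, the error message itself points at one knob: setting PYTORCH_CUDA_ALLOC_CONF with max_split_size_mb to reduce allocator fragmentation. A minimal sketch; the 128 MB value is an arbitrary starting point, not a tuned recommendation, and this mainly helps when reserved memory far exceeds allocated memory:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when torch initializes its CUDA
# caching allocator, so this must run before the first `import torch`
# in the process (or be exported in the shell instead).
# max_split_size_mb:128 is an arbitrary starting value, not a tuned one.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Separately, if you only need inference, running the forward pass under torch.no_grad() avoids storing activations for backward and typically lowers peak memory substantially.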

Error when using equiformer-pytorch

Hi, I installed equiformer-pytorch, but I get an error when using it in my project:

File "/root/autodl-tmp/project/DeepPROTACs/model.py", line 7, in <module>
    from equiformer_pytorch import Equiformer
File "/root/miniconda3/envs/DeepPROTACs/lib/python3.7/site-packages/equiformer_pytorch/__init__.py", line 1, in <module>
    from equiformer_pytorch.equiformer_pytorch import Equiformer
File "/root/miniconda3/envs/DeepPROTACs/lib/python3.7/site-packages/equiformer_pytorch/equiformer_pytorch.py", line 121, in <module>
    class Linear(nn.Module):
File "/root/miniconda3/envs/DeepPROTACs/lib/python3.7/site-packages/beartype/_decor/main.py", line 193, in beartype
    return beartype_args_mandatory(obj, conf)
File "/root/miniconda3/envs/DeepPROTACs/lib/python3.7/site-packages/beartype/_decor/_core.py", line 123, in beartype_args_mandatory
    return _beartype_type(cls=obj, conf=conf)  # type: ignore[return-value]
File "/root/miniconda3/envs/DeepPROTACs/lib/python3.7/site-packages/beartype/_decor/_core.py", line 313, in _beartype_type
    f'{repr(cls)} not decoratable by @beartype, as '
beartype.roar.BeartypeDecorWrappeeException: <class 'equiformer_pytorch.equiformer_pytorch.Linear'> not decoratable by @beartype, as non-dataclasses (i.e., types not decorated by @dataclasses.dataclass) currently unsupported by @beartype.

I thought it was a problem with the version of the beartype package, but the error persisted across every version I tried. How can I fix it?
Thanks!

Dependency Conflict

Hey there,
I was attempting a pip install and run of Equiformer in Colab and ran into a dependency issue. tensorflow-probability 0.22.0 was the preinstalled version, but it requires typing-extensions 4.6.0 or lower, so I had to update tensorflow-probability before pip-installing equiformer-pytorch. Not a big deal, but I wanted to post it.

Specifying edge index / adjacency

Hi, @lucidrains! Thank you for your impressive work!

Equiformer, as seen in the architecture figure, takes a graph as input. I am, however, not sure how to pass custom edges (an adjacency matrix) to Equiformer.forward. I see that this is already implemented, but I am still confused about how to properly use edges and neighbor_mask. I would be very grateful if you could add a simple example with a minimal graph.
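For anyone else stuck here, a sketch of how one might construct the tensors involved, using only plain torch. The model call at the end is commented out and hypothetical: the keyword names `mask` and `neighbor_mask` are taken from this thread and the repo's usage style, not verified against the current API.

```python
import torch

batch, n = 2, 16

feats = torch.randn(batch, n, 32)             # per-node features
coors = torch.randn(batch, n, 3)              # per-node coordinates
mask  = torch.ones(batch, n, dtype=torch.bool)  # valid-node mask

# Adjacency for a simple chain graph: node i connected to node i + 1,
# symmetrized so both directions are present.
adj = torch.zeros(batch, n, n, dtype=torch.bool)
idx = torch.arange(n - 1)
adj[:, idx, idx + 1] = True
adj[:, idx + 1, idx] = True

# A boolean pair mask restricting attention to adjacent nodes.
neighbor_mask = adj

# Hypothetical call, mirroring the README-style usage in this repo:
# model = Equiformer(dim=32, depth=2)
# out = model(feats, coors, mask=mask, neighbor_mask=neighbor_mask)
```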

Question About Graph Sparsity/Edges

Hey there,
So I'm currently trying to use the Equiformer for a protein/ligand prediction task. I've inherited the dataset from an earlier model I made, and it is in the PyG batching format: one large graph made of subgraphs. I've built adjacency matrices of shape [1, N, N] as shown in the example and am passing them to the model. But the loss scales directly with the batch size, which suggests the subgraphs are talking to one another.

I'm using the settings num_neighbors=0 and max_sparse_neighbors=32. My understanding from the documentation is that this means only 32 neighbors are selected for each node, and those neighbors must come from the adjacency matrix. Is that understanding correct? Or, if some small graphs have more than 32 nodes, am I going to start cross-contaminating? Additionally, if I wanted to convert the dataset to the suggested batching system (with masks), would I simply set num_neighbors to 32 and call it a day?
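Regarding converting a PyG block-diagonal batch to the padded-with-masks batching system: a plain-torch sketch of the conversion (essentially what torch_geometric.utils.to_dense_batch does), assuming nodes are sorted by graph, as PyG's Batch guarantees:

```python
import torch

def to_padded_batch(x, batch_index):
    """Convert PyG-style node features x [total_nodes, d] with a
    graph-assignment vector batch_index [total_nodes] into a padded
    dense batch [num_graphs, max_nodes, d] plus a boolean node mask.
    Assumes nodes are ordered by graph, as in a PyG Batch."""
    num_graphs = int(batch_index.max()) + 1
    counts = torch.bincount(batch_index, minlength=num_graphs)
    max_nodes = int(counts.max())

    out = x.new_zeros(num_graphs, max_nodes, x.size(-1))
    mask = torch.zeros(num_graphs, max_nodes, dtype=torch.bool)

    # Position of each node within its own graph.
    offsets = torch.cat([torch.zeros(1, dtype=torch.long), counts.cumsum(0)[:-1]])
    pos = torch.arange(x.size(0)) - offsets[batch_index]

    out[batch_index, pos] = x
    mask[batch_index, pos] = True
    return out, mask

# Two graphs with 3 and 2 nodes respectively.
x = torch.arange(10.0).view(5, 2)
batch_index = torch.tensor([0, 0, 0, 1, 1])
dense, mask = to_padded_batch(x, batch_index)
```

With this layout, each graph occupies its own batch row and the mask zeroes out padding, so subgraphs cannot attend to one another regardless of the neighbor settings.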

Adapting Equiformer for Efficient Handling of Graphs with Sparse Matrix in COO format?

Thank you for implementing Equiformer. Decoupling the model from the original OC20 tasks significantly broadens its applicability.

In my project, I'm using PyTorch Geometric, where batches are merged into one large graph and edges are represented as a sparse matrix in COO format. I noticed that you have implemented support for sparse matrices; however, the adjacency remains N×N, which becomes impractically large as the number of nodes grows.

Could you consider adapting Equiformer to not always operate on an NxN basis, but instead focus on a subset of nodes at a time, with edges defined in COO format?

Thanks.
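Until native COO support exists, a stopgap is to densify a PyG COO edge_index into per-graph [num_graphs, max_nodes, max_nodes] adjacencies (so the dense dimension is the largest single graph, not the whole batch). A sketch, assuming as elsewhere in this thread that the model consumes a dense boolean adjacency:

```python
import torch

def coo_to_dense_adj(edge_index, batch_index):
    """Turn a PyG COO edge_index [2, num_edges] over a block-diagonal
    batched graph into per-graph dense adjacencies
    [num_graphs, max_nodes, max_nodes].
    Assumes nodes are ordered by graph, as in a PyG Batch."""
    num_graphs = int(batch_index.max()) + 1
    counts = torch.bincount(batch_index, minlength=num_graphs)
    max_nodes = int(counts.max())
    offsets = torch.cat([torch.zeros(1, dtype=torch.long), counts.cumsum(0)[:-1]])

    src, dst = edge_index
    graph = batch_index[src]  # which graph each edge belongs to
    adj = torch.zeros(num_graphs, max_nodes, max_nodes, dtype=torch.bool)
    # Shift global node ids to per-graph local ids before scattering.
    adj[graph, src - offsets[graph], dst - offsets[graph]] = True
    return adj

# Two graphs in one COO batch: a 3-node directed cycle and a 2-node edge.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 0, 4]])
batch_index = torch.tensor([0, 0, 0, 1, 1])
adj = coo_to_dense_adj(edge_index, batch_index)
```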
