Comments (4)
Hi Gökçen,
Your approach sounds right, but we haven't spent much time on this. The main effort will be around parallelizing the embedding generation, which will be specific to the cluster you're on.
@ebetica may be able to share a snippet for generating a faiss index.
Best,
Tom
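The parallelization Tom mentions is cluster-specific, but the usual pattern is to split the sequence set into shards (e.g. one per SLURM array task), embed each shard independently, and write matching `embs.N.pt` / `seqs.N.txt` pairs. A minimal sketch of the sharding step, assuming the shard id comes from an environment variable like `SLURM_ARRAY_TASK_ID` (the helper name and shard count here are hypothetical):

```python
import os


def shard(items, task_id: int, num_tasks: int):
    """Deterministically assign every num_tasks-th item to this task.

    Striding (rather than chunking) keeps shard sizes balanced even
    when the input is sorted by sequence length.
    """
    if not 0 <= task_id < num_tasks:
        raise ValueError("task_id must be in [0, num_tasks)")
    return items[task_id::num_tasks]


# A SLURM array job would read its id from the environment,
# e.g. sbatch --array=0-3 would launch tasks 0..3.
task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))
num_tasks = 4
seqs = [f"seq{i}" for i in range(10)]
my_shard = shard(seqs, task_id, num_tasks)
# Each task then embeds `my_shard` and saves embs.{task_id}.pt
# plus the matching labels in seqs.{task_id}.txt.
```

Because striding partitions the input, concatenating all shards (in any order) recovers every sequence exactly once, which is what the index-building step below relies on.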
Here's a snippet:
```python
from glob import glob
from os import path
from typing import List, Tuple
import time

import faiss
import numpy as np
import torch
from tqdm import tqdm


def load(fn: str) -> np.ndarray:
    # Assumed helper: the shards are .pt files, so presumably torch.load;
    # FAISS expects contiguous float32 numpy arrays.
    return torch.load(fn).to(torch.float32).numpy()


def build_index(
    data_path: str,
    num_clusters: int,
    test: bool = False,
    rebuild: bool = False,
    pca: int = 64,
) -> Tuple[List[str], faiss.Index]:
    cache_fn = f"{data_path}/cache.faiss"
    embfiles = sorted(glob(f"{data_path}/embs.*.pt"))
    seqfiles = sorted(glob(f"{data_path}/seqs.*.txt"))
    should_load = not rebuild and path.exists(cache_fn)
    if test:
        embfiles = embfiles[:2]
        seqfiles = seqfiles[:2]
    mat = load(embfiles[0])
    d = mat.shape[1]
    # Rough estimate: does the full float32 dataset fit in ~200 GB of RAM?
    fits_into_memory = mat.size * len(embfiles) * 4 < 200e9
    # PCAR{pca}: PCA (with random rotation) down to `pca` dims, so the dataset fits in RAM.
    # IVF{n}_HNSW32: inverted file with an HNSW coarse quantizer, recommended for many vectors.
    # SQ8: scalar quantization from 4 bytes per component to 1.
    if should_load:
        print(f"Loading cached index from {cache_fn}...")
        index = faiss.read_index(cache_fn)
    elif test:
        index = faiss.index_factory(d, f"PCAR{pca},IVF32_HNSW32,SQ8")
    elif fits_into_memory:
        index = faiss.IndexFlatIP(d)
    else:
        index = faiss.index_factory(d, f"PCAR{pca},IVF{num_clusters}_HNSW32,SQ8")

    if not should_load:
        # Train on roughly 40 vectors per cluster, a common rule of thumb.
        print("| Loading training set for FAISS...")
        mats = []
        total_train = 0
        with tqdm(total=num_clusters * 40) as pb:
            for fn in embfiles:
                mats.append(load(fn))
                total_train += mats[-1].shape[0]
                pb.update(mats[-1].shape[0])
                if total_train >= num_clusters * 40:
                    break
        print("| Training FAISS quantization scheme...")
        t = time.time()
        index.train(np.concatenate(mats))
        print(f"| Done in {time.time() - t:.1f} seconds")

    keys: List[str] = []
    print("| Adding data to FAISS...")
    for fn, sfn in tqdm(zip(embfiles, seqfiles), total=len(embfiles)):
        if not should_load:
            mat = load(fn)
            index.add(mat)
        # The sequence labels are needed even when the index comes from cache.
        with open(sfn, "r") as f:
            keys += [x.strip() for x in f]

    D, I = index.search(mat[:5], 2)  # sanity check
    print("Sanity check: 2-NN of first 5 elements in your data")
    print(D)
    print(I)
    print("\n".join(keys[i] for i in I[:, 0]))
    if not should_load:
        faiss.write_index(index, cache_fn)
    return keys, index
```
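One caveat worth noting about the in-memory branch: `IndexFlatIP` ranks by raw inner product, which equals cosine similarity only if the embeddings are L2-normalized. If you want cosine nearest neighbors, normalize before `index.add` and before `index.search`; a numpy sketch of the normalization (faiss also ships an in-place `faiss.normalize_L2`):

```python
import numpy as np


def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each row to unit L2 norm so inner product == cosine similarity."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return x / np.clip(norms, 1e-12, None)  # guard against all-zero rows


emb = np.array([[3.0, 4.0], [1.0, 0.0]], dtype=np.float32)
unit = l2_normalize(emb)
# Every row of `unit` has norm 1, so unit @ unit.T gives cosine similarities.
```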
Say you dump your embeddings in {data_path}/embs.12345.pt and the corresponding sequence labels in {data_path}/seqs.12345.txt (one matching pair per shard, so the glob patterns above pick them up). You can use this function to load them all and combine them into a FAISS index. Check the FAISS documentation for the right value of num_clusters, and pick the PCA dimension based on how much memory you have available. I'm still experimenting with this, so apologies if the code doesn't work perfectly.
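On picking num_clusters: the FAISS guidelines suggest roughly between 4*sqrt(N) and 16*sqrt(N) inverted lists for N vectors, and FAISS warns during training when a centroid has fewer than about 39 training points (which is why the snippet above collects num_clusters * 40 training vectors). A back-of-the-envelope helper; the power-of-two rounding is just a convenient convention, not a FAISS requirement:

```python
import math


def suggest_num_clusters(n_vectors: int, factor: float = 4.0) -> int:
    """Rule-of-thumb IVF list count: factor * sqrt(N), rounded to a power of two."""
    raw = factor * math.sqrt(n_vectors)
    return 2 ** max(1, round(math.log2(raw)))


# e.g. ~1M embeddings -> 4 * sqrt(1e6) = 4000, rounded to 4096 lists.
suggest_num_clusters(1_000_000)
```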
Thanks Zeming! Let me close this now, but happy to help out with any follow-ups!
Thank you so much both!