idrblab / annopro

Feature map and function annotation of Proteins

License: MIT License

Python 99.98% Shell 0.02%
blastp deep-learning feature-map protein-function-prediction

annopro's People

Contributors

gcs-zhn, swallow-design

annopro's Issues

UnboundLocalError: local variable 'data_onehot' referenced before assignment

Hello!

I followed the installation tutorial in the README.md.
I tried both:

  • installing with 'pip install annopro'
  • installing from source.

The commands I used are:

  • python -m annopro -i data.fasta -o output --used_gpu 0,1,2,3
  • python -m annopro -i data.fasta -o output
  • python run.py

After these attempts, I got the same error as indicated in the Logs.
Do you know how to fix this?
Thanks in advance!

Input

The data.fasta file looks like this:

>sp|O14793|GDF8_HUMAN Growth/differentiation factor 8 OS=Homo sapiens OX=9606 GN=MSTN PE=1 SV=1
MQKLQLCVYIYLFMLIVAGPVDLNENSEQKENVEKEGLCNACTWRQNTKSSRIEAIKIQI
LSKLRLETAPNISKDVIRQLLPKAPPLRELIDQYDVQRDDSSDGSLEDDDYHATTETIIT
MPTESDFLMQVDGKPKCCFFKFSSKIQYNKVVKAQLWIYLRPVETPTTVFVQILRLIKPM
KDGTRYTGIRSLKLDMNPGTGIWQSIDVKTVLQNWLKQPESNLGIEIKALDENGHDLAVT
FPGPGEDGLNPFLEVKVTDTPKRSRRDFGLDCDEHSTESRCCRYPLTVDFEAFGWDWIIA
PKRYKANYCSGECEFVFLQKYPHTHLVHQANPRGSAGPCCTPTKMSPINMLYFNGKEQII
YGKIPAMVVDRCGCS

The run.py is:

from annopro import main
main("data.fasta", "output", "0,1,2,3")

or

from annopro import main
main("data.fasta", "output")

Runtime Environment

  • Operating System: Red Hat 8.6
  • Python: Python 3.8
  • Tensorflow: 2.6.5
  • Cuda: 11.6.1
  • CuDNN: 8.4.1.50

Logs

Running "module reset". Resetting modules to system default. The following $MODULEPATH directories have been removed: None
A conda environment has been detected: CONDA_PREFIX=CONDA_PATH/envs/annopro
anaconda3_gpu is loaded. Consider running conda deactivate and reloading it.
diamond v2.1.0.154 (C) Max Planck Society for the Advancement of Science
Documentation, support and updates available at http://www.diamondsearch.org/
Please cite: http://dx.doi.org/10.1038/s41592-021-01101-x Nature Methods (2021)

#CPU threads: 4
Scoring parameters: (Matrix=BLOSUM62 Lambda=0.267 K=0.041 Penalties=11/1)
Temporary directory: output
#Target sequences to report alignments for: 25
Opening the database... [0.044s]
Database: USER_DIR/.annopro/data/cafa4.dmnd (type: Diamond database, sequences: 87514, letters: 44798577)
Block size = 2000000000
Opening the input file... [0.001s]
Opening the output file... [0s]
Loading query sequences... [0s]
Masking queries... [0s]
Algorithm: Double-indexed
Building query histograms... [0s]
Loading reference sequences... [0.042s]
Masking reference... [0.583s]
Initializing temporary storage... [0.007s]
Building reference histograms... [0.402s]
Allocating buffers... [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 1/4.
Building reference seed array... [0.159s]
Building query seed array... [0s]
Computing hash join... [0.003s]
Masking low complexity seeds... [0s]
Searching alignments... [0.001s]
Deallocating memory... [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 2/4.
Building reference seed array... [0.186s]
Building query seed array... [0s]
Computing hash join... [0.003s]
Masking low complexity seeds... [0s]
Searching alignments... [0s]
Deallocating memory... [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 3/4.
Building reference seed array... [0.202s]
Building query seed array... [0s]
Computing hash join... [0.003s]
Masking low complexity seeds... [0s]
Searching alignments... [0s]
Deallocating memory... [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 4/4.
Building reference seed array... [0.16s]
Building query seed array... [0s]
Computing hash join... [0.003s]
Masking low complexity seeds... [0s]
Searching alignments... [0s]
Deallocating memory... [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 1/4.
Building reference seed array... [0.151s]
Building query seed array... [0s]
Computing hash join... [0.003s]
Masking low complexity seeds... [0s]
Searching alignments... [0s]
Deallocating memory... [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 2/4.
Building reference seed array... [0.186s]
Building query seed array... [0s]
Computing hash join... [0.003s]
Masking low complexity seeds... [0s]
Searching alignments... [0s]
Deallocating memory... [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 3/4.
Building reference seed array... [0.201s]
Building query seed array... [0s]
Computing hash join... [0.003s]
Masking low complexity seeds... [0s]
Searching alignments... [0s]
Deallocating memory... [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 4/4.
Building reference seed array... [0.155s]
Building query seed array... [0s]
Computing hash join... [0.004s]
Masking low complexity seeds... [0s]
Searching alignments... [0s]
Deallocating memory... [0s]
Deallocating buffers... [0s]
Clearing query masking... [0s]
Computing alignments... Loading trace points... [0.014s]
Sorting trace points... [0s]
Computing alignments... [0.004s]
Deallocating buffers... [0s]
Loading trace points... [0s]
[0.022s]
Deallocating reference... [0s]
Loading reference sequences... [0s]
Deallocating buffers... [0s]
Deallocating queries... [0s]
Loading query sequences... [0s]
Closing the input file... [0s]
Closing the output file... [0.001s]
Closing the database... [0s]
Cleaning up... [0s]
Total time = 2.668s
Reported 5 pairwise alignments, 5 HSPs.
1 queries aligned.
2023-06-16 13:27:18.651280: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-16 13:27:27.991777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 43511 MB memory: -> device: 0, name: NVIDIA A40, pci bus id: 0000:07:00.0, compute capability: 8.6
2023-06-16 13:27:28.072954: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 43511 MB memory: -> device: 1, name: NVIDIA A40, pci bus id: 0000:46:00.0, compute capability: 8.6
2023-06-16 13:27:28.074780: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 43511 MB memory: -> device: 2, name: NVIDIA A40, pci bus id: 0000:85:00.0, compute capability: 8.6
2023-06-16 13:27:28.076816: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 43511 MB memory: -> device: 3, name: NVIDIA A40, pci bus id: 0000:c7:00.0, compute capability: 8.6
Traceback (most recent call last):
File "CONDA_PATH/envs/annopro/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "CONDA_PATH/envs/annopro/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "PROJECT_PATH/AnnoPRO/annopro/__main__.py", line 4, in <module>
console_main()
File "PROJECT_PATH/AnnoPRO/annopro/__init__.py", line 27, in console_main
main(
File "PROJECT_PATH/AnnoPRO/annopro/__init__.py", line 75, in main
predict(output_dir=output_dir,
File "PROJECT_PATH/AnnoPRO/annopro/prediction.py", line 19, in predict
init_evaluate(term_type=term_type,
File "PROJECT_PATH/AnnoPRO/annopro/prediction.py", line 160, in init_evaluate
preds = model.predict(data_generator, steps=data_steps)
File "CONDA_PATH/envs/annopro/lib/python3.8/site-packages/keras/engine/training.py", line 1720, in predict
data_handler = data_adapter.get_data_handler(
File "CONDA_PATH/envs/annopro/lib/python3.8/site-packages/keras/engine/data_adapter.py", line 1383, in get_data_handler
return DataHandler(*args, **kwargs)
File "CONDA_PATH/envs/annopro/lib/python3.8/site-packages/keras/engine/data_adapter.py", line 1138, in __init__
self._adapter = adapter_cls(
File "CONDA_PATH/envs/annopro/lib/python3.8/site-packages/keras/engine/data_adapter.py", line 917, in __init__
super(KerasSequenceAdapter, self).__init__(
File "CONDA_PATH/envs/annopro/lib/python3.8/site-packages/keras/engine/data_adapter.py", line 794, in __init__
peek, x = self._peek_and_restore(x)
File "CONDA_PATH/envs/annopro/lib/python3.8/site-packages/keras/engine/data_adapter.py", line 928, in _peek_and_restore
return x[0], x
File "PROJECT_PATH/AnnoPRO/annopro/prediction.py", line 50, in __getitem__
return ([data_onehot, data_si])
UnboundLocalError: local variable 'data_onehot' referenced before assignment
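This error pattern usually means that in prediction.py's __getitem__, data_onehot is assigned only inside a conditional branch (presumably one taken when feature data exists for the batch), so an input that skips that branch reaches the return with the name unbound. A minimal sketch of the failure mode, with hypothetical names rather than AnnoPRO's actual code:

```python
def getitem(batch):
    # data_onehot is assigned only when the batch is non-empty ...
    if batch:
        data_onehot = [ord(residue) for residue in batch]
    # ... so an empty batch reaches the return with the name unbound.
    return data_onehot

try:
    getitem("")
except UnboundLocalError as exc:
    print("reproduced:", exc)
```

The usual fix is to initialize data_onehot before the branch, or to raise a descriptive error when an input yields no features.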

Attributes in diamond_scores.txt

Hi,
I have installed annopro and used it to predict the function of an amino acid sequence. It generated many files, including diamond_scores.txt. This file does not seem to have a header, so I was wondering what each column indicates. I understand that the first column is the name of the provided sequence and the second is probably a UniProt ID, but what about the rest?

Please have a look at the first five rows of the diamond_scores.txt file:
Secreted_Seq1 P32261 28.9 360 244 5 62 409 103 462 2.88e-34 134
Secreted_Seq1 Q60854 30.9 363 229 9 63 409 22 378 7.08e-34 131
Secreted_Seq1 P80229 32.3 372 227 10 56 409 14 378 3.59e-33 129
Secreted_Seq1 O08800 30.7 374 240 8 49 409 7 374 6.39e-33 128
Secreted_Seq1 Q9S7T8 32.2 369 215 15 67 409 30 389 8.55e-33 128

What do columns 3 to 12 indicate? What are these numbers, and how should they be interpreted?

Many thanks for your time and I look forward to hearing from you.

Best,
Sourav
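The twelve columns match DIAMOND's default tabular output (the BLAST "outfmt 6" layout), assuming AnnoPRO invokes diamond without a custom --outfmt: query ID, subject ID, percent identity, alignment length, mismatches, gap openings, query start/end, subject start/end, e-value, and bit score. A small sketch that labels the fields:

```python
# Field names of DIAMOND's default tabular output (BLAST outfmt-6 style);
# assumes AnnoPRO runs diamond with the default output format.
FIELDS = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore"]

row = "Secreted_Seq1 P32261 28.9 360 244 5 62 409 103 462 2.88e-34 134"
record = dict(zip(FIELDS, row.split()))
print(record["pident"], record["evalue"], record["bitscore"])
# → 28.9 2.88e-34 134
```

Read this way, the first row above is a hit against P32261 with 28.9% identity over a 360-residue alignment (244 mismatches, 5 gap openings), e-value 2.88e-34, bit score 134.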

Downloading cafa4.dmnd taking forever

Hello,

I've attempted to use your program, but it is practically stuck downloading cafa4.dmnd: the download speed is less than 1 kB/s. Does your server need a restart? In case there is a fault, is there a way to download the file from somewhere else and use it with AnnoPro?

Thanks,
Maxim
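If a copy can be obtained from a mirror, the logs elsewhere in this tracker show that AnnoPRO expects the database at ~/.annopro/data/cafa4.dmnd and validates its md5sum, so checking the checksum of a manually downloaded copy before placing it there seems sensible (the expected digest would have to come from the maintainers). A minimal sketch:

```python
import hashlib
from pathlib import Path

def md5sum(path, chunk_size=1 << 20):
    """Stream a file through MD5 so a large database need not fit in RAM."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        while block := fh.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Hypothetical usage once a mirror copy is in place:
# print(md5sum(Path.home() / ".annopro" / "data" / "cafa4.dmnd"))
```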

ValueError("All arrays must be of the same length")

Bug Description

When I run it on the following FASTA file, it gives me this error:

>seq-2
MKKKKKKKLKKLKKKLKKKLKKKKKLLLLLLLLKKKKKKK
>seq-9
MKKKIKKIKKKIEKKKKKKLKKLKKKKKKKKLLLLLLLLL
>seq-10
MSEKFSEIAEKYDEERILSRSAGELAELTRELGLKPGDRVLDVGCGTGYLTLPLAERVGPEGTVIGIDRSEEMLARARERAAAAGLSNVEFQVADAEALPFPDESFDLVTCRLVLHHLPDPAKALREMRRVLKPGGRFVVSDWDASSMAFPDEEAELAERLRRYAEARAAAGGERDALRRALEAAGFRDVTVRSLTAWRRRAGEAAAAAL
>seq-13
MKKKKKLKKKLKKKKKKKK

Runtime Environment

Fresh install of requirements

Logs

annopro -i test_proteins.fasta -o output-test
Download cafa4.dmnd...
100% [........................................................................] 46988123 / 46988123
Validate md5sum of cafa4.dmnd...
diamond v2.1.0.154 (C) Max Planck Society for the Advancement of Science
Documentation, support and updates available at http://www.diamondsearch.org
Please cite: http://dx.doi.org/10.1038/s41592-021-01101-x Nature Methods (2021)

#CPU threads: 4
Scoring parameters: (Matrix=BLOSUM62 Lambda=0.267 K=0.041 Penalties=11/1)
Temporary directory: output-test
#Target sequences to report alignments for: 25
Opening the database...  [0.042s]
Database: /home/ubuntu/.annopro/data/cafa4.dmnd (type: Diamond database, sequences: 87514, letters: 44798577)
Block size = 2000000000
Opening the input file...  [0s]
Opening the output file...  [0s]
Loading query sequences...  [0s]
Masking queries...  [0.001s]
Algorithm: Double-indexed
Building query histograms...  [0s]
Loading reference sequences...  [0.055s]
Masking reference...  [0.588s]
Initializing temporary storage...  [0s]
Building reference histograms...  [0.493s]
Allocating buffers...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 1/4.
Building reference seed array...  [0.163s]
Building query seed array...  [0s]
Computing hash join...  [0.004s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 2/4.
Building reference seed array...  [0.192s]
Building query seed array...  [0s]
Computing hash join...  [0.002s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 3/4.
Building reference seed array...  [0.213s]
Building query seed array...  [0s]
Computing hash join...  [0.003s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 4/4.
Building reference seed array...  [0.154s]
Building query seed array...  [0s]
Computing hash join...  [0.003s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 1/4.
Building reference seed array...  [0.155s]
Building query seed array...  [0s]
Computing hash join...  [0.003s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 2/4.
Building reference seed array...  [0.19s]
Building query seed array...  [0s]
Computing hash join...  [0.003s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 3/4.
Building reference seed array...  [0.211s]
Building query seed array...  [0s]
Computing hash join...  [0.002s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 4/4.
Building reference seed array...  [0.154s]
Building query seed array...  [0s]
Computing hash join...  [0.004s]
Masking low complexity seeds...  [0s]
Searching alignments...  [0s]
Deallocating memory...  [0s]
Deallocating buffers...  [0.004s]
Clearing query masking...  [0s]
Computing alignments... Loading trace points...  [0.001s]
Sorting trace points...  [0s]
Computing alignments...  [0s]
Deallocating buffers...  [0s]
Loading trace points...  [0s]
 [0.002s]
Deallocating reference...  [0.002s]
Loading reference sequences...  [0s]
Deallocating buffers...  [0s]
Deallocating queries...  [0s]
Loading query sequences...  [0s]
Closing the input file...  [0s]
Closing the output file...  [0s]
Closing the database...  [0.002s]
Cleaning up...  [0s]
Total time = 2.766s
Reported 21 pairwise alignments, 21 HSPs.
1 queries aligned.
Invalid feature 0.6934-309 for seq-13 at line 596
Invalid feature 0.6934-309 for seq-13 at line 596
Invalid feature 0.5127-315 for seq-13 at line 596
Invalid feature 0.5127-315 for seq-13 at line 596
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/annopro/bin/annopro", line 8, in <module>
    sys.exit(console_main())
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/annopro/__init__.py", line 27, in console_main
    main(
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/annopro/__init__.py", line 71, in main
    process(
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/annopro/data_procession/__init__.py", line 8, in process
    data = Data_process(protein_file=profeat_file,
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/annopro/data_procession/data_predict.py", line 36, in __init__
    self.__data__()
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/annopro/data_procession/data_predict.py", line 39, in __data__
    proteins_f = profeat_to_df(self.protein_file)
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/profeat/__init__.py", line 69, in profeat_to_df
    return pd.DataFrame(feature_list).T
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/pandas/core/frame.py", line 636, in __init__
    mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 502, in dict_to_mgr
    return arrays_to_mgr(arrays, columns, index, dtype=dtype, typ=typ, consolidate=copy)
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 120, in arrays_to_mgr
    index = _extract_index(arrays)
  File "/home/ubuntu/anaconda3/envs/annopro/lib/python3.8/site-packages/pandas/core/internals/construction.py", line 674, in _extract_index
    raise ValueError("All arrays must be of the same length")
ValueError: All arrays must be of the same length
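The repeated "Invalid feature ... for seq-13" warnings suggest that profeat could not compute descriptors for the 19-residue seq-13, leaving one feature array shorter than the others and triggering the pandas error. A possible workaround (assumption: the failure is length-related; the 30-residue cutoff below is a guess, not profeat's documented minimum) is to drop very short sequences before running:

```python
# Drop sequences shorter than a threshold before running AnnoPRO
# (the 30-residue cutoff is a guess; profeat's true minimum may differ).
def filter_fasta(lines, min_len=30):
    out, header, seq = [], None, []
    for line in list(lines) + [">"]:          # sentinel flushes the last record
        if line.startswith(">"):
            if header and len("".join(seq)) >= min_len:
                out += [header] + seq
            header, seq = line.rstrip("\n"), []
        else:
            seq.append(line.strip())
    return out

records = [">seq-13", "MKKKKKLKKKLKKKKKKKK", ">seq-10", "M" * 200]
print(filter_fasta(records))   # seq-13 (19 aa) is removed
```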

Some questions about promap

Could you explain in detail the code that generates the promap (i.e., the process of turning a sequence into an image-like representation)? I am extremely interested in it. Also, if I want to use datasets from UniProt, do all of the data files need to be modified accordingly? I would appreciate your guidance. Best wishes.

Allocation of 754647040 exceeds 10% of free system memory.

Hello! @swallow-design @GCS-ZHN
I have tried several times, but each time I get results for only 8001 sequences even though I submit 16000. Is there something wrong with my parameter settings? Here is my command:

python -m annopro -i  lh_over_30bp.fa -o lh-out  --overwrite

Here is my log file:

diamond v2.1.0.154 (C) Max Planck Society for the Advancement of Science
Documentation, support and updates available at http://www.diamondsearch.org
Please cite: http://dx.doi.org/10.1038/s41592-021-01101-x Nature Methods (2021)

#CPU threads: 4
Scoring parameters: (Matrix=BLOSUM62 Lambda=0.267 K=0.041 Penalties=11/1)
Temporary directory: lh-out
#Target sequences to report alignments for: 25
Opening the database...  [0.053s]
Database: /home/liuli/.annopro/data/cafa4.dmnd (type: Diamond database, sequences: 87514, letters: 44798577)
Block size = 2000000000
Opening the input file...  [0.01s]
Opening the output file...  [0s]
Loading query sequences...  [0.035s]
Masking queries...  [0.381s]
Algorithm: Double-indexed
Building query histograms...  [0.261s]
Loading reference sequences...  [0.064s]
Masking reference...  [2.06s]
Initializing temporary storage...  [0.025s]
Building reference histograms...  [1.422s]
Allocating buffers...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 1/4.
Building reference seed array...  [0.503s]
Building query seed array...  [0.096s]
Computing hash join...  [0.068s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.087s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 2/4.
Building reference seed array...  [0.53s]
Building query seed array...  [0.099s]
Computing hash join...  [0.067s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.074s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 3/4.
Building reference seed array...  [0.548s]
Building query seed array...  [0.103s]
Computing hash join...  [0.068s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.069s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 1/2, index chunk 4/4.
Building reference seed array...  [0.485s]
Building query seed array...  [0.091s]
Computing hash join...  [0.068s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.066s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 1/4.
Building reference seed array...  [0.484s]
Building query seed array...  [0.091s]
Computing hash join...  [0.067s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.068s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 2/4.
Building reference seed array...  [0.526s]
Building query seed array...  [0.099s]
Computing hash join...  [0.067s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.061s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 3/4.
Building reference seed array...  [0.544s]
Building query seed array...  [0.102s]
Computing hash join...  [0.067s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.06s]
Deallocating memory...  [0s]
Processing query block 1, reference block 1/1, shape 2/2, index chunk 4/4.
Building reference seed array...  [0.481s]
Building query seed array...  [0.09s]
Computing hash join...  [0.066s]
Masking low complexity seeds...  [0.006s]
Searching alignments...  [0.058s]
Deallocating memory...  [0s]
Deallocating buffers...  [0.009s]
Clearing query masking...  [0s]
Computing alignments... Loading trace points...  [0.035s]
Sorting trace points...  [0.012s]
Computing alignments...  [10.428s]
Deallocating buffers...  [0s]
Loading trace points...  [0s]
 [10.48s]
Deallocating reference...  [0.003s]
Loading reference sequences...  [0s]
Deallocating buffers...  [0s]
Deallocating queries...  [0s]
Loading query sequences...  [0s]
Closing the input file...  [0s]
Closing the output file...  [0.016s]
Closing the database...  [0.003s]
Cleaning up...  [0s]
Total time = 20.936s
Reported 88062 pairwise alignments, 88062 HSPs.
8348 queries aligned.
2024-07-09 10:37:31.644732: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-07-09 10:37:31.654184: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: gpu03
2024-07-09 10:37:31.654200: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: gpu03
2024-07-09 10:37:31.654323: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 550.54.15
2024-07-09 10:37:31.654390: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 550.54.15
2024-07-09 10:37:31.654402: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 550.54.15
2024-07-09 10:37:31.669962: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-09 10:37:31.808310: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 754647040 exceeds 10% of free system memory.
2024-07-09 10:37:32.854598: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 754647040 exceeds 10% of free system memory.
2024-07-09 10:37:33.089177: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 754647040 exceeds 10% of free system memory.
2024-07-09 10:37:34.509640: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 754647040 exceeds 10% of free system memory.
2024-07-09 10:37:36.304317: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2024-07-09 10:48:23.850180: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 754647040 exceeds 10% of free system memory.

Any help would be appreciated!
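One thing worth checking from the log itself: diamond reports that only 8348 of the submitted queries aligned, so sequences with no hit against cafa4.dmnd may be dropped downstream. Comparing the IDs in the input FASTA against those in the results would confirm which proteins were omitted; a sketch of the FASTA side (the result-file format is not shown here, so that half is left as a placeholder):

```python
import io

def fasta_ids(handle):
    """Collect the ID token (text after '>' up to the first space) per record."""
    return {line[1:].split()[0] for line in handle if line.startswith(">")}

# Demo on an in-memory FASTA; with real files something like:
# missing = fasta_ids(open("lh_over_30bp.fa")) - result_ids
fasta = io.StringIO(">seq-2 some description\nMKKK\n>seq-9\nMKKI\n")
print(sorted(fasta_ids(fasta)))
# → ['seq-2', 'seq-9']
```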

Error while "Download cc.h5"

I am running AnnoPro for the first time, but it seems that something went wrong while downloading the related files:

...
Deallocating reference... [0.002s]
Loading reference sequences... [0s]
Deallocating buffers... [0s]
Deallocating queries... [0s]
Loading query sequences... [0s]
Closing the input file... [0s]
Closing the output file... [0s]
Closing the database... [0.003s]
Cleaning up... [0s]
Total time = 7.186s
Reported 0 pairwise alignments, 0 HSPs.
0 queries aligned.
Download cc.h5...
Traceback (most recent call last):
...

File "/home/user/miniforge3/envs/annopro/lib/python3.8/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

How can I deal with this?

Requirement for old cuda drivers

Hello,

I've been unable to run AnnoPRO in GPU mode because it requires old CUDA drivers; this is the error I get:

tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory

Would you be able to update the code to accommodate the latest CUDA drivers?
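Until then, a common workaround (assuming the pinned TensorFlow 2.6 wheel works on CPU) is to hide the GPUs so TensorFlow falls back to CPU. The environment variable must be set before TensorFlow is first imported:

```python
import os

# Hide all CUDA devices so TensorFlow falls back to CPU; this must happen
# before tensorflow (and therefore annopro) is first imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# from annopro import main   # imported only after the variable is set
# main("data.fasta", "output")
```

Equivalently, prefix the shell invocation: CUDA_VISIBLE_DEVICES=-1 python -m annopro -i data.fasta -o output. CPU inference is slower but sidesteps the libcudart.so.11.0 requirement.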
