haddocking / haddock3

Official repo of the modular BioExcel version of HADDOCK

Home Page: https://www.bonvinlab.org/haddock3

License: Apache License 2.0

Python 63.13% Prolog 0.05% Fortran 34.38% NASL 1.49% Lua 0.07% Shell 0.46% Roff 0.36% C 0.06%
utrecht-university python3 workflows modelling docking proteins complexes integrative-modeling bioinformatics

haddock3's Introduction

Welcome to the HADDOCK3-Beta version.

The main branch represents the latest state of HADDOCK v3, currently a stable beta version.



HADDOCK3

1. Installation

To install HADDOCK3 follow the instructions in the INSTALL file.

2. Documentation

HADDOCK3-beta documentation is hosted online at https://www.bonvinlab.org/haddock3/. The documentation is rendered and updated at every commit to the main branch, so you will always find the latest version at the link above.

If you want to compile the HADDOCK3 documentation pages locally (offline), install HADDOCK3 and activate the haddock3 python environment as explained in the installation instructions. Then, in your terminal window, run:

tox -e docs

Ignore any warning messages. Afterwards, use your favorite browser to open the file haddock3-docs/index.html. This will open a local webpage with the complete HADDOCK3 documentation, exactly the same as you will find online. Navigate around, enjoy, and contribute.

3. Examples

3.1. Basic scoring of an ensemble of 5 structures:

In the examples/ folder you will find several examples to test and learn HADDOCK3. Additional information is available in the documentation pages.

cd examples/scoring/
haddock3 emscoring-test.cfg

4. Contribute

If you want to contribute to HADDOCK3's development, read the CONTRIBUTING file for instructions.

5. Keep in contact and support us

HADDOCK3 is an academic project supported by various grants, including the EU BioExcel Center of Excellence for Computational Biomolecular Research. HADDOCK3 is fully open-source and free to download. If you clone this repository and use HADDOCK3 for your research, please support us by signing this Google form if you have not yet done so. This will allow us to contact you when needed for HADDOCK3-related issues, and also provide us with a means to demonstrate impact when reporting on our grants.

haddock3's People

Contributors

alchemistcai, amjjbonvin, annakravchenko, brianjimenez, bvreede, cvnoort, dependabot[bot], douweschulte, joaomcteixeira, mgiulini, rvhonorato, sarahalidoost, sschott, sverhoeven, vgpreys


haddock3's Issues

Clustering failure: KeyError

I am running the alpha1 version of haddock3. After completing it0, it1, and itw (1000, 100, and 100 iterations, respectively), clustering fails with the following:

  • Calculating HADDOCK score for 100 structures
  • Clustering with cutoff: 0.6 and threshold: 4
Traceback (most recent call last):
  File "/home/martinw3/beegfs/haddock3-3.0.alpha1/haddock/run_haddock.py", line 249, in <module>
    run_analysis(water_refinement_complexes)
  File "/home/martinw3/beegfs/haddock3-3.0.alpha1/haddock/run_haddock.py", line 173, in run_analysis
    ana.cluster(cutoff=0.60, threshold=4)
  File "/home/martinw3/beegfs/haddock3-3.0.alpha1/haddock/modules/analysis/ana.py", line 211, in cluster
    self.structure_dic[cluster_center_name][f"cluster-{cutoff}-{threshold}_name"] = c.name
KeyError: 'water_refinement/complex_itw_100.pdb'

The error makes sense; the file doesn't exist. However, I can't work out why it's looking for a 101st file (or why it isn't named complex_itw_000100.pdb). Even more puzzling, this doesn't happen with the sample protein-protein system but does with all of my other runs. Unless I'm missing something, all of the normal output files are generated apart from the cluster_0.6_4.out and scoring.stat files.

`caprieval` fails if interface arrays have different dimensions

If the reference and the model have a different number of atoms, the following is raised:

Exception has occurred: ValueError
shapes (3,400) and (399,3) not aligned: 400 (dim 1) != 399 (dim 0)
  File "/Users/rodrigo/repos/haddock3/src/haddock/modules/analysis/caprieval/__init__.py", line 43, in kabsch
    C = np.dot(np.transpose(P), Q)
  File "/Users/rodrigo/repos/haddock3/src/haddock/modules/analysis/caprieval/__init__.py", line 212, in irmsd
    U = kabsch(P, Q)
  File "/Users/rodrigo/repos/haddock3/src/haddock/modules/analysis/caprieval/__init__.py", line 443, in run
    capri.irmsd(cutoff=self.params['irmsd_cutoff'])
  File "/Users/rodrigo/repos/haddock3/src/haddock/libs/libworkflow.py", line 104, in execute
    module.run(**self.config)
  File "/Users/rodrigo/repos/haddock3/src/haddock/libs/libworkflow.py", line 25, in run
    step.execute()
  File "/Users/rodrigo/repos/haddock3/src/haddock/clis/cli.py", line 143, in main
    workflow.run()
  File "/Users/rodrigo/repos/haddock3/src/haddock/clis/cli.py", line 62, in cli
    main(**vars(cmd))
  File "/Users/rodrigo/repos/haddock3/src/haddock/clis/cli.py", line 67, in maincli
    cli(ap, main)
  File "/Users/rodrigo/repos/haddock3/src/haddock/clis/cli.py", line 154, in <module>
    sys.exit(maincli())

We should handle this error properly and also add an ignore_missing option; this option would ignore atoms in the model that are not present in the reference.
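
A minimal sketch of one way ignore_missing could work, assuming interface atoms can be keyed by (chain, residue number, atom name); the function name and data layout below are illustrative, not the caprieval API:

import numpy as np

def match_interface_atoms(ref_atoms, mod_atoms):
    """Keep only atoms present in both reference and model.

    ref_atoms and mod_atoms are hypothetical mappings of
    (chain, residue number, atom name) -> xyz coordinates.
    """
    common = sorted(set(ref_atoms) & set(mod_atoms))
    P = np.array([ref_atoms[key] for key in common])  # shape (N, 3)
    Q = np.array([mod_atoms[key] for key in common])  # shape (N, 3)
    return P, Q  # equal shapes, safe to pass to kabsch(P, Q)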

Implement working directory as a user parameter

Currently, when haddock3 is executed, the folder structure is generated in the current directory, with each folder corresponding to one step. It would be more organized if a working directory could be passed as an argument in the .toml file, changing the layout from:

scoring/
|_ topology
|_ step_1
|_ step_2

to

scoring/
|__ run<name>
    |_ topology
    |_ step_1
    |_ step_2
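
A minimal sketch of how such a parameter could be consumed, assuming the parsed .toml configuration is available as a dictionary; the run_dir key name and the step list are illustrative:

from pathlib import Path

def prepare_run_dir(config, steps):
    """Create the per-run folder tree under a user-supplied working directory."""
    run_dir = Path(config.get("run_dir", "."))  # hypothetical parameter name
    run_dir.mkdir(parents=True, exist_ok=True)
    for step in ("topology", *steps):
        (run_dir / step).mkdir(exist_ok=True)
    return run_dir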

What to do when config does not match the available options

I think silently excluding a module from the execution when it does not exist is NOT a good approach, because it lets typos and mistakes pass as silent errors. In my opinion, if a module specified by the user is not present, we should abort the execution and report the error to the user.

Originally posted by @joaomcteixeira in #38 (comment)

General discussion: how should we approach these situations? As I said, I vote for aborting the run and informing the user.
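
A minimal sketch of the abort-on-unknown-module behaviour proposed above, assuming the set of available module names can be collected beforehand; the error message wording is illustrative:

def validate_module_names(requested, available):
    """Abort the run if the configuration references a module that does not exist."""
    unknown = [name for name in requested if name not in available]
    if unknown:
        raise SystemExit(
            "ERROR: unknown module(s) in configuration: "
            + ", ".join(unknown)
            + ". Check for typos; available modules are: "
            + ", ".join(sorted(available))
        )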

Definition of input PDB files in the CNS scripts

The current implementation of defining the input PDB files in the CNS script calls the coordinate file directly.

This is problematic when random removal of restraints is turned on (currently not working), since we also need to read a corresponding seed file.
It would be better for all modules to first encode the PDB file into a variable, which can subsequently be used in the CNS script, e.g. something like:

evaluate ($input_pdb_filename="/Users/abonvin/haddock_git/haddock3/examples/docking/run1/01_rigidbody/rigidbody_0.pdb")
cool @@$input_pdb_filename

This would allow the seed file to be defined later in the script as:

evaluate ($fileseed=$input_pdb_filename - ".pdb" + ".seed")

Double logging when running jobs

When running HADDOCK3 jobs on SLURM after #106, the standard output is saved both to the #SBATCH -o file.out and to the log file. What would be your preference?

  1. Keep it this way?
  2. Save only in the log file and ignore #SBATCH -o
  3. Make #SBATCH -o the log file, but that would be a bit against #106 in the first place.

I vote for 2.

Where should the folder for all-purpose CNS scripts be placed?

If we are to store CNS scripts that (can) serve several HADDOCK3 modules in a separate folder, where should that folder be?

possibilities:

  • src/haddock/cns, this would imply removing the py files from here and placing them in libs.libcns.py.
  • src/cns, src/haddock/__init__.py would have a pointer to this folder.
  • others?

CNS engine

The CNS class in the engine.py file needs to work properly with the docker container provided in the configuration file.

Re-implement CNS-dependency finder script

In the alpha version we had code that would scan a .cns script and return its dependency tree up to 5 levels deep. This was used to compose the .inp script executed by CNS, adding the scripts at the relevant tags (RUN, @@, etc.).

The dependency tree can get quite complex since there are many nested calls:

mainscript.cns
  |_ script_A.cns
  |_ script_B.cns
    |_ script_B1.cns
  |_ script_C.cns
    |_ script_C1.cns
      |_ script_C1.1.cns
        |_ script_C1.1.1
      |_ script_C1.2.cns
    |_ script_C2.cns

This part of the code is missing from the main branch; we should re-implement it as a tool to help us remove unnecessary CNS scripts from the core modules.
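
A minimal sketch of such a scanner, assuming dependencies are referenced with an @ or @@ prefix inside the .cns files; the regular expression and the depth limit are assumptions, not the original implementation:

import re
from pathlib import Path

CNS_INCLUDE = re.compile(r"@@?([\w./-]+\.cns)")

def cns_dependencies(script, max_depth=5, _depth=0):
    """Recursively collect the .cns files referenced by `script`."""
    deps = {}
    if _depth >= max_depth:
        return deps
    script = Path(script)
    for match in CNS_INCLUDE.finditer(script.read_text()):
        child = script.parent / match.group(1)
        deps[str(child)] = (
            cns_dependencies(child, max_depth, _depth + 1) if child.exists() else {}
        )
    return deps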

Implement CAPRI metrics calculations

It would be beneficial to have analysis modules capable of calculating the CAPRI metrics in the workflow: I-RMSD, L-RMSD, I-L-RMSD, and fnat.

Thinking about usage: all metrics should be evaluated by the same module, but for scenarios in which the user does not need all of them, there should be an option to select which are evaluated and which are skipped.

How could we name this module? capricalc, caprieval? Open to suggestions.

Improve `examples`

Examples should contain workflow examples as well as system examples, systems being protein-protein, protein-ligand, etc. We should work on porting the HADDOCK2 examples here.

We can have a task list, as each example should likely go in its own PR.

  • to be announced
  • more...

Error while running one simulation on top of a previous one

Hi, I have tested haddock3 installation by:

$ cd examples/protein-protein

# change the number of processors (nproc) according to your system
$ vim run.toml

$ haddock3.py run.toml
##############################################
#                                            #
#                   HADDOCK                  #
#                                            #
#             EXPERIMENTAL BUILD             #
#                                            #
##############################################

Starting HADDOCK 3.0.alpha-2 on 2020-10-22 18:46:00

Python 3.7.6 (default, Jan  8 2020, 19:59:22)
[GCC 7.3.0]

+ WARNING: run1 already present
+ Executing run1

++ Generating topologies
+ Running parallel, 2 cores 2 jobs

++ Running it0
+ Running parallel, 40 cores 1000 jobs

++ Running it1
+ Running parallel, 40 cores 400 jobs

++ Running itw
+ Running parallel, 40 cores 400 jobs

++ Running analysis

+ Calculating HADDOCK score for 400 structures
+ FCC clustering with cutoff: 0.60 and max size: 4

+ FASTCONTACT not correctly configured in haddock3.ini

+ DFIRE not correctly configured in haddock3.ini

+ DockQ not correctly configured in haddock3.ini

+ Saving single-structure analysis

+ Saving cluster analysis

 Finished at 22/10/2020 19:05:11

Adéu-siau!
Ciao!
Dovidenia!

However, after editing run.toml for my proteins and running haddock3.py run.toml again, the following error appeared:

##############################################
#                                            #
#                   HADDOCK                  #
#                                            #
#             EXPERIMENTAL BUILD             #
#                                            #
##############################################

Starting HADDOCK 3.0.alpha-2 on 2020-10-22 19:47:00

Python 3.7.6 (default, Jan  8 2020, 19:59:22)
[GCC 7.3.0]

+ WARNING: run1 already present

+ WARNING: The following residues were removed because they are present in the input but not in the topology file
++ 64 ,65 ,2 ,ICA ,62 ,66
+ Executing run1

++ Generating topologies
+ Running parallel, 2 cores 2 jobs

++ Running it0
+ Running parallel, 8 cores 1000 jobs
+ ERROR: Job rigid_body/complex_it0_000000.inp has failed, please check the output rigid_body/complex_it0_000000.out

May I know whether some difference in the format of the .pdb file could be making the difference?

Thanks!

Re-implement basic features

We dropped the alpha development branch in favor of more robust code and a clearer development direction; however, the new code does not yet have the basic features that were present in the alpha.

This sets us back quite a bit, since we need to re-implement those features in the new code. To get back on track, I've split the basic modules between implemented and tested (covered by unit tests), and the workflows between implemented and benchmarked.


Core Modules Implemented

  • Topology generation
  • Rigid body sampling (rigidbody)
  • Semi-flexible refinement (flexref)
  • Water refinement (mdref)
  • Final EM (emref)
  • Analysis (clustfcc)
  • Analysis (caprieval)
  • Scoring

Core Modules Tested

  • Topology generation
  • Rigid body sampling (rigidbody)
  • Semi-flexible refinement (flexref)
  • Water refinement (mdref)
  • Final EM (emref)
  • Analysis (clustfcc)
  • Analysis (caprieval)
  • EMScoring (emscoring)
  • MD scoring (mdscoring) -> #344

Core Workflows Implemented

  • Docking
  • Scoring

CNS parameter errors

During the isolation of the CNS routines that execute HADDOCK's main stages (it0, it1, itw), not all parameters were accounted for. This generates warnings during the execution and might lead to wrong results.

The first step is to make sure there are no -ERR lines in the output files. This can be done by grepping the errors in the output files to identify which parameters were not assigned, and then editing both the main ins code and the companion .json.
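
A minimal sketch of that first step, scanning the CNS output files for error lines; the -ERR marker comes from the issue above, while the directory layout and file pattern are assumptions:

from pathlib import Path

def find_cns_errors(run_dir):
    """Print every '-ERR' line found in the CNS .out files of a run."""
    for out_file in Path(run_dir).rglob("*.out"):
        for number, line in enumerate(out_file.read_text().splitlines(), start=1):
            if "-ERR" in line:
                print(f"{out_file}:{number}: {line.strip()}")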

Warn the user about missing mandatory parameters

This is actually a good opportunity to explain and enforce mandatory parameters. The problem was that sampling was not being provided, so the code crashed when trying to read that parameter; however, a proper error message was not defined. We should make sure the mandatory parameters for each haddock3 module are identified and reported to the user in case they are missing from the configuration file.

Originally posted by @joaomcteixeira in #60 (comment)

Also, if a mandatory parameter is missing from the configuration file, the default value should NOT be used; the user should always be informed.
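
A minimal sketch of the proposed check, assuming each module can declare a list of mandatory parameter names; the function and error wording are illustrative:

def check_mandatory_params(module_name, params, mandatory):
    """Fail early and explicitly when mandatory parameters are absent."""
    missing = [name for name in mandatory if name not in params]
    if missing:
        raise SystemExit(
            f"ERROR: module '{module_name}' is missing mandatory parameter(s): "
            + ", ".join(missing)
            + ". Defaults are NOT applied to mandatory parameters; "
            "please add them to the configuration file."
        )

For example, checking ["sampling"] for the module in question would abort with a clear message instead of crashing later when the parameter is read.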

Create an independent PDB sanitize step before entering the haddock3 pipeline

Following the discussions in #140 and #142, PR #144 implements what is discussed here. The preprocessing steps happen before the haddock3 pipeline, when the original input data is copied to the run_dir/data folder. The aim is also to have a CLI that the user can run to correct the PDBs (or dry-run) before submitting.

Done:

The list follows the execution order defined in the process_pdbs function; order matters:

  • ANISOU records can be discarded (use pdb_keepcoord)
  • REMARK lines are discarded (use pdb_keepcoord)
  • Select altloc with the highest occupancy (waiting for haddocking/pdb-tools#117 to be merged)
  • sets all occupancy to 1.00 (uses pdb_occ)
  • Rename the MSE residues (selenomethionine) to MET. Also if MSE are defined as HETATM, make them ATOM.
  • Rename HSD, HSE, HIE, HID to HIS and convert them also to ATOM if needed.
  • Corrects charge in ions
  • Supports all residues defined in cns/toppar/*.top files. Automatically retrieves new residues from those files.
  • Convert ATOM to HETATM for residues that are expected to be HETATM. If the user provides .top file also residues defined there get converted to HETATM if needed.
  • Convert HETATM to ATOM for residues that should be ATOM and are defined as HETATM (these are natural and modified amino acids)
  • Remove unsupported HETATM. Accepts user .top file.
  • Remove unsupported ATOM.
  • Insertions (AARG, BARG) (uses pdb_fixinsert)
  • Renumbers atoms (uses pdb_reatom starting at 1)
  • Renumbers residues (uses pdb_reres starting at 1)
  • pdb_tidy (waiting haddocking/pdb-tools#119)
  • address #138
  • If input PDBs have the same chain ID (in different files) these are corrected such that all PDBs have different chains.

Todo

  • Residues in the same chain cannot have repeated numbering
  • If there is a gap in the sequence, the gap must be maintained (from the above list nothing corrects for this, so it should be as is, to be tested)
  • All models in an ensemble (MODEL) should be equal, that is, same labels.
  • Add flag to skip the preprocessing step

It is probably good to check what our current 2.4 server machinery is doing in terms of input PDB validation.
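
Several of the steps above map directly onto pdb-tools command-line utilities; a minimal sketch of chaining a few of them is shown below (the exact selection and order would follow process_pdbs, and the output file name is illustrative):

import subprocess

def sanitize_pdb(pdb_in, pdb_out):
    """Keep coordinates, reset occupancies, fix insertions, renumber, and tidy."""
    pipeline = (
        f"pdb_keepcoord {pdb_in} | pdb_occ -1.00 | pdb_fixinsert "
        f"| pdb_reatom -1 | pdb_reres -1 | pdb_tidy > {pdb_out}"
    )
    subprocess.run(pipeline, shell=True, check=True)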

Reduce the number of files generated

Something for us to keep in mind is the number of files generated by each module; we should aim to minimize and clean them up as much as possible, while keeping relevant information for debugging purposes.

The `--restart` option should also evaluate the state of the module

@amjjbonvin requested:

The --restart option after #88 restarts the run at a given stage. For example, --restart 2 would keep modules 00 and 01 and restart the run at module 02_, meaning it would delete everything previously done in 02_. This raises the problem that if 02_ crashes on the last of 10,000 calculations, we need to repeat all of them again. Hence, --restart should also evaluate the state of the stage to avoid repeating calculations.
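
A minimal sketch of what evaluating the state could mean in practice, assuming a calculation is considered done when its expected output file already exists; the function and file naming are illustrative:

from pathlib import Path

def pending_outputs(step_dir, expected_outputs):
    """Return only the outputs that still need to be computed in a step folder."""
    step_dir = Path(step_dir)
    return [name for name in expected_outputs if not (step_dir / name).exists()]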

Refactor modules to dump their data in human-readable format

Request from @amjjbonvin:

What I am also missing from a user perspective is a simple way to get the ranking of models within one directory - I see the info is in io.json but not human readable - or sortable


This is achievable by making sure each module produces a human-readable file containing all the relevant information it generates.

For example, modules in the docking category should output a list of pdb_filename,score,components (similar to file.list), and the clustering module should output the cluster information (similar to cluster.out). A minimal sketch of the idea follows the task list below.

Below is a task-list of the modules that need to be refactored:

  • caprieval
  • clustfcc
  • seletop
  • seletopclusts
  • emref
  • flexref
  • mdref
  • rigidbody
  • emscoring
  • mdscoring
  • topoaa
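
A minimal sketch of the kind of file a docking module could write, sorted by score; the file name, separator, and model attributes are illustrative, not the final format:

def write_ranking(models, path="ranking.tsv"):
    """Dump a human-readable, sortable ranking of models and their scores."""
    ranked = sorted(models, key=lambda model: model["score"])
    with open(path, "w") as fh:
        fh.write("rank\tpdb_filename\tscore\n")
        for rank, model in enumerate(ranked, start=1):
            fh.write(f"{rank}\t{model['filename']}\t{model['score']:.3f}\n")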

Config generator

When configuring an execution run, users might ask which parameters are available for each module/action/package.

Haddock3 should have a CLI that can pull the parameters from a module and generate a .toml with their default values. In fact, we could have an actual "config generator" CLI able to create config.toml files from user requests. Given a menu with the available modules, the user would select the order of modules to execute. Then, the CLI would create a config file containing the default parameters, pulled directly from the modules' signatures.
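
A minimal sketch of such a generator, assuming the default parameters of each module can be collected into a dictionary keyed by module name; the data access and output format are assumptions:

def generate_config(ordered_modules, module_defaults):
    """Emit a TOML-like configuration skeleton with each module's default parameters."""
    lines = []
    for index, module in enumerate(ordered_modules, start=1):
        lines.append(f"[{module}]  # step {index}")
        for key, value in module_defaults.get(module, {}).items():
            if isinstance(value, bool):
                value = str(value).lower()
            elif isinstance(value, str):
                value = f"'{value}'"
            lines.append(f"{key} = {value}")
        lines.append("")
    return "\n".join(lines)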

Discussing layer architecture and current names

HADDOCK3 hierarchic names and general architecture:

Current configuration

The current configuration at the time of writing is:

  1. A workflow: contains all operations and is somehow managed by the class WorkflowManager
  2. class Workflow is assigned as self.recipe inside class WorkflowManager. So, a recipe is a workflow within the WorkflowManager context.
  3. A stage is an operation done in the workflow. Examples of stages are: topology, scoring, clustering. Stages are controlled by the class Workflow, and stages themselves do nothing; they simply refer to the concept of the operation.
  4. stages have different steps - class Step.
  5. steps are the actual operations defined by the python modules within each stage's folder, like default.py.
  6. steps are also named mode inside the class Step.
  7. And the stage is named module inside the class Step.

Comments

In the configuration file, and in the code, there is a stage layer on top of the haddock module layer. But I don't see more than one stage item in the [stage] layer, do I? Even looking at the original @brianjimenez examples I see no more than one stage item. There are many modules inside the stage layer, but not another stage. So why do we need the concept of a stage?

The class WorkflowManager manages nothing, and it could be refactored into a simpler function named, for example, run_haddock().

My suggestions

So, I suggest refactoring to:

A general Workflow that contains a sequence of modes. Nothing more.

In this way, we refactor from 5 conceptual layers to only 2. The fact that modes are contained within modules does not mean that a module per se is a conceptual or organizational layer; it is just a category, a tag, organizing the modes. Well, it happens to be a folder serving code organization purposes.

If we refactor from 5 layers to 2 layers as I am proposing, we will decouple the stages and the user can freely compose the workflow considering only the modes:

order = [
    "topology",
    "scoring",  # this would give the default
    "clustering.fcc",
    "analysis.whatever",
    "scoring.megascore",
]

Yes, we can currently do:

order = ["topology", "scoring.1", "scoring.2"]

but we have to assign mode="" assuming this is the default, and using integers for ordering is suboptimal because if we want to reorder we need to change every integer in the list. Furthermore, there is this concept of stage in the code and the configuration file for which I don't see a purpose. Note: if there is a reason for having several stages, it is not reflected in the current definition of the order parameter, nor in the code itself.

In

module_name = f"haddock.modules.{self.module}.{self.mode}"

we read self.module to refer to conceptual haddock3 modules and self.mode to refer to the actual operations. I suggest we find another name for "haddock modules", because module usually refers to *.py files in a project, which creates confusion.

If we remove the stage and module layers, we can also rename mode, because mode refers to a mode of a module and we would no longer separate them in the configuration file. Maybe we can call it action?


@amjjbonvin @rvhonorato, what do you think? If you agree, I can work on this refactor.

Wrong syntax for 1-index

Introduced in #129:

>>> [(i, j) for i, j in enumerate(['A','B','C'])]
[(0, 'A'), (1, 'B'), (2, 'C')]
>>> [(i, j) for i, j in enumerate(1, ['A','B','C'])]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'list' object cannot be interpreted as an integer
>>> [(i, j) for i, j in enumerate(['A','B','C'], start=1)]
[(1, 'A'), (2, 'B'), (3, 'C')]

Remove hard dependency on CNS

Is your feature request related to a problem? Please describe.
CNS seems not to be under open active development and its license is not as permissive as the rest of Haddock3.

Describe the solution you'd like
Make CNS an optional dependency and program essential engine functionality directly in Haddock3

Describe alternatives you've considered
Use another engine that is actively and openly developed and has a more permissive license.

All parameters defined in a cfg file should be passed to the CNS scripts

Currently working on the topoaa module.
In the default.cfg there are parameters defined per molecule, e.g.:

[input]
[input.mol1]
prot_segid = 'A'
fix_origin = false
dna = false
shape = false
cg = false
cyclicpept = false

All of those should be passed to the CNS scripts, i.e. written to the generated script.
Currently they are not, which causes errors (not always fatal) in the CNS execution.
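
A minimal sketch of turning those per-molecule parameters into CNS evaluate statements to be written into the generated script; the variable-naming convention is an assumption:

def cns_evaluate_lines(mol_id, params):
    """Render cfg parameters as CNS 'evaluate' statements for one molecule."""
    lines = []
    for key, value in params.items():
        if isinstance(value, bool):
            value = "true" if value else "false"
        elif isinstance(value, str):
            value = f'"{value}"'
        lines.append(f"evaluate (${key}_{mol_id}={value})")
    return "\n".join(lines)

For example, cns_evaluate_lines("mol1", {"prot_segid": "A", "dna": False}) would produce evaluate ($prot_segid_mol1="A") and evaluate ($dna_mol1=false).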

Remove the need for `activate_haddock`

Some of the HADDOCK3 parameters are currently defined as bash global variables. This forces the user to source the activate_haddock file, as well as to handle bash scripting, for example to update the desired number of cores. We need to remove the need for the activate_haddock file and have those parameters defined in the HADDOCK3 configuration file or in other parts of the software.

activate_haddock configures the parameters below. Ticks indicate those that have already been re-implemented.

  • export HADDOCK3_NUM_CORES=4
  • export HADDOCKPATH=pwd
  • export PYTHONPATH=$PYTHONPATH:$HADDOCKPATH
  • export PATH=$PATH:$HADDOCKPATH/bin
  • export HADDOCK3_CNS_EXE="${HADDOCKPATH}/bin/cns"

Weird behaviour when PDB input files have no chainID

In the topoaa module, when an input PDB file has no chainID/segID defined, the module uses $PRO as the segid, which causes problems downstream. No segIDs were defined in the config files, since those are in principle defined in the default.cfg file of the module.

This does not happen when a PDB input has the chainID present.

In default.cfg there are default segIDs defined for each molecule. Those should be used.

error while executing docking.toml

File "/home/eigenket/haddock3/src/haddock/modules/refinement/flexref/__init__.py", line 79, in run
first_model = models_to_refine[0]
IndexError: list index out of range

Implement GitHub actions CI here

For modules that depend on Python (open) dependencies, we can have their own GitHub actions; CNS-based testing goes to the private Jenkins. For example:

  • GitHub actions using pytest for the general HADDOCK3 Python shell
  • GitHub actions specific to each module that uses third-party libraries, for which installation via GitHub actions does not compromise the privacy of the module
  • A private Jenkins server to test CNS-based modules.
  • Other CI-related tasks (version bump, docs, lint, etc.) also go in GitHub actions.

Do you agree?

Originally posted by @joaomcteixeira in #103 (reply in thread)

Discussion about CLIs

Currently HADDOCK3 has a single command-line interface: haddock3, added after #28

However, after #43 I foresee that more CLIs will be needed. Personally, I like the idea of having all functionalities in well-structured CLIs instead of separate scripts inside the repository. Yet, the CLI for #42 and #43 is not a user-related CLI; it is more of a developer-related one.

What about having two main, structured haddock3 CLIs?

  • haddock3
  • haddock3-dev

In the future, CLIs related to structural biology would be placed under haddock3 and tools related to aid development could be placed under haddock3-dev. For example, #43 could be under:

haddock3-dev cnstree

In this way the CLI could easily have integrated docs.

It is a bit wordy, but it helps converge everything to a single point.

I am still brainstorming this idea; comments are welcome.

Where to define the `run` directory?

This is a small question but has profound implications.

Currently the run_dir is defined in the configuration file. However, is the run dir really part of the haddock setup? If we instead define the run_dir in the client interface, say haddock3 CONFIG_PATH --run_dir FOLDER, the run dir becomes execution-specific rather than configuration-specific. It becomes easier to repeat runs in several different folders, or to move configurations around. Having the parameter in the client also helps some internal implementations, for example the logging.
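
A minimal sketch of what the client-side option could look like with argparse; the --run_dir flag matches the proposal above, while the precedence over the configuration file is an assumption:

import argparse

ap = argparse.ArgumentParser(prog="haddock3")
ap.add_argument("config_path", help="path to the workflow configuration file")
ap.add_argument(
    "--run_dir",
    default=None,
    help="run directory; if given, overrides any run_dir set in the configuration",
)
args = ap.parse_args()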

What are your thoughts?

test suite

Which test framework do you prefer to use? I prefer pytest: I am more used to it, and I like the way you can work with tests as plain functions, pytest.mark.parametrize, and pytest fixtures. Is there any compelling reason to use unittest?
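
For reference, the pytest features mentioned look like this (a generic illustration, not haddock3 test code):

import pytest

@pytest.fixture
def sample_scores():
    return [1.5, -2.0, 0.3]

@pytest.mark.parametrize("value,expected", [(2, 4), (3, 9)])
def test_square(value, expected):
    assert value ** 2 == expected

def test_minimum_score(sample_scores):
    assert min(sample_scores) == -2.0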
