
adjtomo / seisflows

An automated workflow tool for full waveform inversion and adjoint tomography

Home Page: http://seisflows.readthedocs.org

License: BSD 2-Clause "Simplified" License

Shell 0.39% Python 99.61%


seisflows's Issues

Add capability to visualize SPECFEM2D models

There are now some instructions for SPECFEM2D visualization included at
http://seisflows.readthedocs.org/en/latest/instructions_remote.html#visualize-inversion-results

The '.bin' files themselves are Fortran binary files, which have 4 bytes of padding at the beginning and end and the data in the middle. Some relevant information is available here:
http://stackoverflow.com/questions/8751185/fortran-unformatted-file-format

In the future, I hope to move to a more standard binary file format for models and kernels. The only reason we haven't done so yet is that it would require fairly significant modifications to SPECFEM2D.
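As a sketch of that layout, a single-record '.bin' file can be read in Python with numpy. This helper is hypothetical (not part of SeisFlows) and assumes the file holds one record of float32 values:

```python
import numpy as np

def read_fortran_binary(filename):
    """Read a single-record Fortran unformatted file, the layout used
    by SPECFEM2D '.bin' model/kernel files: a 4-byte length marker,
    float32 data, and a matching trailing marker."""
    with open(filename, "rb") as f:
        nbytes = np.fromfile(f, dtype=np.int32, count=1)[0]  # leading marker
        data = np.fromfile(f, dtype=np.float32, count=nbytes // 4)
        np.fromfile(f, dtype=np.int32, count=1)  # trailing marker (ignored)
    return data
```

The markers hold the record length in bytes, so nbytes // 4 gives the number of float32 values in the record.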

Code stops with errors

When I run the 'inversion' workflow with the 'slurm_sm' system, the following problems always happen, independent of the clusters and models I use. An inversion with the same models and parameters (except those specific to slurm_sm) works well if I use the 'multithreaded' system, so I think the model parameters for the inversion are correct.

Has anyone come across these problems, or have any ideas on how to solve them? Thank you very much!

WARNING: f0 != PAR.F0
WARNING: f0 != PAR.F0
Traceback (most recent call last):
File "/home1/03419/liren/seisflows-master/seisflows/system/wrappers/run", line 41, in
func(**kwargs)
File "/home1/03419/liren/seisflows-master/seisflows/solver/base.py", line 199, in eval_func
self.export_residuals(path)
File "/home1/03419/liren/seisflows-master/seisflows/solver/base.py", line 420, in export_residuals
unix.mkdir(join(path, 'residuals'))
File "/home1/03419/liren/seisflows-master/seisflows/tools/unix.py", line 81, in mkdir
os.makedirs(dir)
File "/home1/03419/liren/anaconda2/envs/obspy1.0.3/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 17] File exists: '/work/03419/liren/stampede2/tests/marmousi_offshore/scratch/evalfunc/residuals'
srun: error: c495-044: task 11: Exited with exit code 1
Generating data

Starting iteration 1
Generating synthetics
Computing gradient
Computing search direction
Computing step length
trial step 1
trial step 2
trial step 3
trial step 4

Starting iteration 2
Generating synthetics
Computing gradient
Computing search direction
Computing step length
trial step 1

Starting iteration 3
Generating synthetics
Computing gradient
Computing search direction
...
..
..
Starting iteration 21
Generating synthetics
Computing gradient
Computing search direction
Computing step length
trial step 1

Starting iteration 22
Generating synthetics
Computing gradient
Computing search direction
Computing step length
trial step 1
Traceback (most recent call last):
File "/tmp/slurmd/job1005876/slurm_script", line 24, in
workflow.main()
File "/home1/03419/liren/seisflows-master/seisflows/workflow/inversion.py", line 128, in main
self.line_search()
File "/home1/03419/liren/seisflows-master/seisflows/workflow/inversion.py", line 185, in line_search
self.evaluate_function()
File "/home1/03419/liren/seisflows-master/seisflows/workflow/inversion.py", line 212, in evaluate_function
path=PATH.FUNC)
File "/home1/03419/liren/seisflows-master/seisflows/system/slurm_sm.py", line 125, in run
+ '%s ' % PAR.ENVIRONS)
File "/home1/03419/liren/seisflows-master/seisflows/tools/tools.py", line 31, in call
subprocess.check_call(*args, **kwargs)
File "/home1/03419/liren/anaconda2/envs/obspy1.0.3/lib/python2.7/subprocess.py", line 186, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'srun --wait=0 /home1/03419/liren/seisflows-master/seisflows/system/wrappers/run /work/03419/liren/stampede2/tests/marmousi_offshore/output solver eval_func ' returned non-zero exit status 1
1

When this error happens, the residual file for source number 11 (/scratch/evalfunc/residuals/000011) is not built:
OSError: [Errno 17] File exists: '/work/03419/liren/stampede2/tests/marmousi_offshore/scratch/evalfunc/residuals'
srun: error: c495-044: task 11: Exited with exit code 1

When the second error happens, the code exits with:

subprocess.CalledProcessError: Command 'srun --wait=0 /home1/03419/liren/seisflows-master/seisflows/system/wrappers/run /work/03419/liren/stampede2/tests/marmousi_offshore/output solver eval_func ' returned non-zero exit status 1
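The OSError above occurs when several SLURM tasks race to create the same scratch directory. Until the root cause is addressed, one hedged workaround is to make the mkdir helper tolerant of the directory already existing — a sketch of a possible patch to seisflows/tools/unix.py, compatible with Python 2.7:

```python
import os

def mkdir(path):
    """Create a directory, ignoring the race where another MPI/SLURM
    task creates it first (the source of "OSError: [Errno 17] File
    exists" when many tasks call mkdir on a shared scratch path)."""
    try:
        os.makedirs(path)
    except OSError:
        # re-raise anything that is not "directory already exists"
        if not os.path.isdir(path):
            raise
```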

how to incorporate Gian's new mpi system class?

Hi Gian,

I've tested your system.mpi class. It works for me right out of the box, both running on the login node and under SLURM. If I understand correctly, this could be a really significant improvement in terms of reliability and reduced code duplication.

  1. Should we go forward and make pbs_sm, lsf_sm, slurm_sm inherit from mpi? In each case, all that would be required, I think, is overloading the submit method. The current slurm_sm could be renamed slurm_md or something like that. Also, there would no longer be a need I think to distinguish between pbs_sm and pbs_torque_sm.

  2. Since we've both found pbsdsh unreliable, should we remove this way of doing things entirely?

Ryan

split system.run into two methods?

Rationale for splitting system.run into two functions:

  • The hosts parameter prevents use of positional arguments to system.run

  • I expected there'd turn out to be many cases for 'hosts', but in practice we've only ever needed hosts='all' and hosts='head'. It seems safe to split system.run into two cases...

  • Changes to seisflows/system could be automated by a script. Outside of seisflows/system, disruption should be fairly limited
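One possible shape of the split is sketched below; the method names ('run', 'run_single'), the _launch helper, and the calls log are hypothetical, not the existing SeisFlows API:

```python
class System(object):
    """Sketch of splitting the old system.run(hosts=...) into two
    explicit methods."""

    def __init__(self, ntask):
        self.ntask = ntask
        self.calls = []  # records (taskid, method) for illustration

    def _launch(self, taskid, method, *args, **kwargs):
        # stand-in for the srun/pbsdsh dispatch of one task
        self.calls.append((taskid, method))

    def run(self, method, *args, **kwargs):
        """Replaces run(..., hosts='all'): execute the task on every
        host, with positional arguments passing through cleanly."""
        for taskid in range(self.ntask):
            self._launch(taskid, method, *args, **kwargs)

    def run_single(self, method, *args, **kwargs):
        """Replaces run(..., hosts='head'): execute once, on the head
        node only."""
        self._launch(0, method, *args, **kwargs)
```

With the hosts keyword gone, both methods accept ordinary positional arguments, which was the first bullet's motivation.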

how to generate SPECFEM2D model .bin files?

1) Choose desired mesh discretization

  • Choose length of and number of elements along x-axis by modifying nx, xmin, xmax in DATA/Par_file.

  • Choose length of and number of elements along z-axis by modifying DATA/interfaces.dat

2) output SPECFEM2D mesh coordinates

  • Specify a homogeneous or water-layer-over-halfspace model using nbmodels, nbregions and associated parameters in DATA/Par_file

  • Choose the following settings in DATA/Par_file

MODEL = default
SAVE_MODEL = ascii
  • Run the mesher and solver.
bin/xmeshfem2D
bin/xspecfem2D

3) After running the mesher and solver, there should be ASCII file(s) DATA/proc******_rho_vp_vs.dat. The first two columns of each file give the x and z coordinates of the mesh. Interpolate your values for vp, vs, rho onto these coordinates.

4) Write the interpolated values obtained from the previous step in Fortran binary format. To do this you'll need a short Fortran program, such as the one in the following comment.

5) If you want, plot the newly created .bin files to make sure everything looks alright (see issue #34). To use your binary files, don't forget to change back the settings in DATA/Par_file:

MODEL = binary
SAVE_MODEL = default
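Step 4 can also be done from Python instead of a short Fortran program. This sketch (a hypothetical helper, not part of SeisFlows) writes one Fortran unformatted record with the 4-byte length markers SPECFEM2D expects:

```python
import numpy as np

def write_fortran_binary(filename, values):
    """Write a 1-D array as one Fortran unformatted record of float32
    values: a 4-byte length marker, the data, and a matching trailing
    marker -- the layout SPECFEM2D reads when MODEL = binary."""
    arr = np.asarray(values, dtype=np.float32)
    with open(filename, "wb") as f:
        marker = np.int32(arr.nbytes)
        marker.tofile(f)   # leading record-length marker
        arr.tofile(f)      # the interpolated model values
        marker.tofile(f)   # trailing record-length marker
```

One such file would be written per material parameter (vp, vs, rho), in the ordering of the mesh coordinates from step 3.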

release 0.1 planned

This fall or summer, the plan is to release the first numbered version of seisflows. By then (1) we should have a buildbot-like continuous integration system in place, (2) commits to the main repository will more often take the form of pull requests, and (3) I will have finished or be close to finishing my thesis and may be able to devote more time to software development. Until then, changes will probably just continue in the current fashion. Thanks for bearing with me. -Ryan

problems found while running the checkerboard example

  1. First, the checkerboard example is stored on Google Drive, which makes it a little hard for people in mainland China to download all the files. I will try to get those files and upload them to BaiduDisk (Baidu Netdisk) or another network disk.

  2. There are some mistakes in the main program and parameter file:

  • In the folder "checkers" (the checkerboard example folder), in the Par_file under "specfem2d-d745c542", the parameter "f0_attenuation" should be changed to "ATTENUATION_f0_REFERENCE".

  • After changing this parameter, the program still fails. The error information can be found in "checkers/scratch/solver/#NUMBER#/solver.log", where #NUMBER# is the id of the source. SPECFEM2D will tell you which parameters are missing; add them to the Par_file.

  • Then the program will get stuck at "Generating synthetic data", which can be seen in the terminal. This may be caused by a different version of SPECFEM2D. We can modify the SeisFlows code in "seisflows/seisflows/specfem2d.py": searching for "unix.rename('single_p.su', 'single.su', src)" and substituting "unix.rename('single_d.su', 'single.su', src)" fixes the problem. Notice that there are two lines to change.

  • Then the program will get stuck at "Computing gradient" at the end of "task 25 of 25", and the terminal reports KeyError: 'MINUSGRADIENT'. After checking "seisflows/seisflows/workflow/inversion.py", I think the better fix is to change "parameters.py" in the "checkers" folder: add "MINUSGRADIENT=1" to the "#WORKFLOW" section of "parameters.py", and add "MINUSGRADIENT: 1" to the "#WORKFLOW" section of "parameters.yaml".

new github url coming; old url should automatically redirect

Next week, I’ll be transferring ownership of the repository from the university to my personal GitHub domain to avoid loss of access after I graduate.

I don't believe there will be any noticeable differences from a user standpoint. As described here, the old url should automatically redirect to the new one. Still, feel free to respond to this issue if there are any questions or concerns.

PATH variable name changes in progress

User feedback indicates a need for the following name changes

SOLVER_FILES -> SOLVER_DATA
SOLVER_BINARIES -> SOLVER_BIN

Changes will be put in place on the master branch tomorrow afternoon.

Recurring matplotlib warning

/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')

This warning occurs when using wrappers to spawn multiple processes via pbsdsh/SLURM (presumably). Reloading objects that import ObsPy causes the warning to trigger on each import. At the very least it creates spam in the output.log; it's unclear whether there is additional overhead.

matplotlib/matplotlib#5836 - relevant matplotlib discussion.

It would be nice to suppress this, or perhaps to limit importing ObsPy to where it is necessary.

exit gracefully when external solver fails

It can be difficult to detect when SPECFEM or another external solver fails -- correct exit codes may not always be returned. This can lead to misleading tracebacks: the Python code continues to run for some time, even though simulation results are absent or incorrect. It may be possible to fix this through changes to SeisFlows or to the external solvers themselves.
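On the SeisFlows side, one sketch of the idea (check_solver_output is a hypothetical helper, not an existing function) is to verify after each external run that the expected output files exist and are non-empty, instead of trusting the exit code alone:

```python
import os

def check_solver_output(output_dir, expected_files):
    """Fail fast if an external solver run did not produce its
    outputs, rather than letting the workflow continue and crash
    later with a misleading traceback."""
    missing = []
    for name in expected_files:
        path = os.path.join(output_dir, name)
        if not os.path.isfile(path) or os.path.getsize(path) == 0:
            missing.append(name)
    if missing:
        raise RuntimeError(
            "solver failed: missing or empty outputs: " + ", ".join(missing))
```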

3D examples

Hi Ryan,

Thanks for the help!

I used the 2-D example in SeisFlows months ago and it worked well. However, no 3-D example for specfem3d or specfem3d_globe was provided at that time. That is why I had to write scripts for iterative inversion with specfem3d myself.

As you reminded me, 3-D examples have now been launched for the Cartesian and Global versions. Cheers! However, the 3-D examples are only available to local users with an account on tiger, which I am not. I have taken your suggestion to use MODEL=gll for updating models, and it works well. Thanks!

In the meantime, do you have some materials about how to use the src/tomography codes? I would appreciate it if you could provide me some. Also, would it be possible for a non-local user like me to get the two 3-D examples (Cartesian and Global versions) from SeisFlows?

Best regards,
Kai

Complete documentation with example problems, FAQs, etc.

The documentation page (https://seisflows.readthedocs.io/en/latest/index.html) is currently a direct port from the previous SeisFlows3 documentation, which was in a developmental state. Documentation pages which should be added to consider the next docs version "complete" include:

  • Update documentation to remove old references of SeisFlows3 and SF3
  • SPECFEM3D example problem and walkthrough
  • How-To extend SeisFlows yourself (i.e., writing your own super class)
  • Frequently Asked Questions (FAQ)
  • Debugging SeisFlows

Handling simultaneous shots

SPECFEM3D and SPECFEM3D_GLOBE both have a feature that allows launching simultaneous, independent shots in a single MPI job.

A new system class in seisflows/seisflows/system/ is needed.

It needs to have the following features:

  • From the number of shots, an optimal job size, and the number of processes required by a single shot, guess the number of shots to bundle.
  • Since there is no fault tolerance (yet), all jobs in a run either fail or succeed, so it will require some extra work to be sure that every shot has been performed. Automatically relaunching jobs might be problematic: if the error is on the software side, jobs would be relaunched endlessly.
  • The number of shots per run might not be constant. This number depends on a parameter specified in the SPECFEM3D* Par_file. A way to handle this without breaking SeisFlows' solver independence has to be found.
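The arithmetic in the first bullet might be sketched as follows (the function name and the ceiling-division policy are assumptions, not a design decision):

```python
def bundle_shots(nshot, job_size, nproc_per_shot):
    """Guess how many shots to bundle into one MPI job: fit as many
    shots as `job_size` processes allow, then count how many bundled
    jobs are needed to cover all shots."""
    shots_per_job = max(1, job_size // nproc_per_shot)
    njobs = -(-nshot // shots_per_job)  # ceiling division
    return shots_per_job, njobs
```

For example, 25 shots at 16 processes each inside a 64-process job would give 4 shots per job and 7 job launches.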

Answer FIXMEs

There are a few FIXME comments in the code.
They annotate either small errors or ambiguities.

Pre-commit framework, hooks, CI

This repository would benefit from a fully-fledged pre-commit framework and continuous integration. Potential dependencies that would be added include: Flake8, Black, pre-commit, codecoverage, Travis CI.

Here is a task list for including each of these framework tools:

  • Flake8
  • Black
  • pre-commit
  • codecoverage
  • Travis CI

consolidate solver classes?

Currently, there are three main solver classes: specfem2d, specfem3d, and specfem3d_globe. Would it make sense to have all three inherit from a single base class?

Parallelize 'default' preprocessing module

Currently, the internal "default" preprocessing steps (waveform filtering, misfit quantification, adjoint source creation) are run in serial for each event (though events themselves run in parallel). These tasks are embarrassingly parallel and could be sped up significantly using something like concurrent.futures.
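A minimal sketch of that parallelization with concurrent.futures (process_trace is a stand-in for the real filtering/misfit functions; for CPU-bound work a ProcessPoolExecutor drops in the same way, provided the worker function is importable at module level):

```python
from concurrent.futures import ThreadPoolExecutor

def process_trace(pair):
    """Stand-in for the per-trace work (filtering, misfit measurement,
    adjoint-source creation); here just a waveform residual."""
    obs, syn = pair
    return syn - obs

def process_all(pairs, max_workers=4):
    """Run the embarrassingly parallel per-trace steps concurrently."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_trace, pairs))
```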

That being said, using "pyatoa" preprocessing allows for parallel preprocessing, as well as expanded capabilities for misfit quantification including windowing and figure generation. See the Pyaflowa core module in Pyatoa for more information.

Is the preprocess function "apply" no longer needed?

It seems that the function "apply" was removed. For instance, in test_preprocess it was possible to use it in the following manner:
d = preprocess.apply(preprocess.process_traces, [d], [h])

Is there an alternative way to work with multi-component data?

running on Cray cluster (XC30)

Hi Ryan,
I am using SeisFlows for some tests. Thanks a lot for sharing this amazing package; it really makes my life easier. But I have trouble running it on an 'aprun'-based Cray cluster (also with a SLURM batch system); it seems some environment variables are missing. For example, in system/slurm_sm.py:

    def getnode(self):
        """ Gets number of running task """
        gid = os.getenv('SLURM_GTIDS').split(',')
        lid = int(os.getenv('SLURM_LOCALID'))
        return int(gid[lid])

The following changes also do not work:

1. Based on your recommendation:

    def getnode(self):
        """ Gets number of running task """
        from mpi4py import MPI
        return MPI.COMM_WORLD.Get_rank()

2. Found online:

    def getnode(self):
        """ Gets number of running task """
        return int(os.getenv('ALPS_APP_PE'))

The errors seem 'io'-related (two processes trying to make the same directory?). More comments or advice are appreciated.

Thanks in advance,
Fang

shot parallelism discussion

On Mon, Jul 18, 2016 at 3:23 PM, Gian Matharu [email protected] wrote:

Hi Ryan,

So I was wondering if you've run into any issues using the MPI system class. It works fine on my local PC, but when I try to run on HPC systems I begin running into peculiar issues. Occasionally one of the subprocess calls seems to simply hang (e.g. on a forward/adjoint solve). I was curious if you'd experienced anything similar at any point.

Regards,

Gian

New system sub-class that prioritizes long queue times and large jobs

Following discussions with the Princeton group, it would be great to create a system class that prioritizes long queue times and large jobs over arrayed jobs. SeisFlows currently submits N array jobs (where N is the number of events used) on the system, which may take an appreciable amount of time as each job must be scheduled separately. If queue times are long on the system, wait times may be high.

One approach to fix this would be to submit one large job where each of the N tasks is doled out on the compute node itself (as opposed to distributing jobs as arrays from the master job). This could be contained within a separate 'qcluster' (q for queue) system module which has some internal logic to dole out these tasks after job submission, perhaps taking advantage of asyncio or a ThreadPoolExecutor from concurrent.futures.
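The doling-out logic described above might be sketched with concurrent.futures as follows; all names are hypothetical, and a real system module would submit srun/mpiexec commands rather than call Python functions directly:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_inside_allocation(task_func, ntask, slots):
    """Dole out the N event tasks inside one large allocation, keeping
    at most `slots` tasks running at once; results are returned in
    task order."""
    results = {}
    with ThreadPoolExecutor(max_workers=slots) as pool:
        futures = {pool.submit(task_func, t): t for t in range(ntask)}
        for future in as_completed(futures):
            results[futures[future]] = future.result()
    return [results[t] for t in range(ntask)]
```

The same shape works with asyncio; the key point is that only one scheduler job is queued, and the N tasks contend only for slots inside the allocation.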

marmousi example not working

Dear Dr. Ryan,

I am trying to run the 2D examples in SeisFlows and I am not able to.

My tests complete successfully, but I realize it would be impossible to run the examples since the file legacy.py is not in the downloadable package.

I also found another package at https://github.com/DmBorisov/SeisFlows_tunnel, but it also fails to run these examples.

Could you please help me?

I am very interested in using SeisFlows; thank you very much for your time.

salvus integration?

Has anyone tried SALVUS? I'm hopeful this promising new solver can be made to work with SeisFlows, but I'm still not familiar enough with it to say whether that would be easy or hard. Hoping that SALVUS might eventually provide a solver alternative, at least for 2D inversions.

main development has been forked + planned changes

This is a notice that the main development of SeisFlows is currently taking place on a separate fork, which we plan to merge into this repository in the near future.

Working with @rmodrak and others, I've been implementing SeisFlows in an earthquake tomography problem for New Zealand. Along the way I forked at d906298 and did an overhaul of the source code, mostly updating syntax to Python 3 and writing docstrings/comments where necessary. A direct link to this new branch, nicknamed SeisFlows3, can be found here:

https://github.com/bch0w/seisflows/tree/seisflows3

Discussing with Ryan, we see this fork as a candidate for the future development of SeisFlows; however, we both agree that a direct merge would be inefficient due to the level of divergence.

We propose that SeisFlows3 be made a branch in this repo, eventually becoming the master branch. The current master (46a9604) will be left as-is for those who would like to continue using it.

Feedback very welcome in this transition phase. Stay tuned!

math library bug

On Wed, Aug 23, 2017 at 1:11 PM, Gian Matharu [email protected] wrote:

Just a quick bug report. I believe there should be a += on the interior updates for the nabla2 operator in seisflows.tools.math. 

On Wed, Aug 23, 2017 at 3:52 PM, Ryan Modrak [email protected] wrote:

Thanks Gian.  That is a major bug.  It looks like there was a regression on Aug 11, 2016.
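For reference, a 5-point-stencil Laplacian in which the interior updates accumulate with +=, as Gian describes — a sketch only, not the exact seisflows.tools.math code:

```python
import numpy as np

def nabla2(Z, h=1.0):
    """Discrete Laplacian via the 5-point stencil. The interior
    updates must use += so the x and z second differences are
    added together, not overwritten."""
    out = np.zeros_like(Z, dtype=float)
    out[1:-1, :] += Z[2:, :] - 2.0 * Z[1:-1, :] + Z[:-2, :]   # d2/dx2
    out[:, 1:-1] += Z[:, 2:] - 2.0 * Z[:, 1:-1] + Z[:, :-2]   # d2/dz2
    return out / h ** 2
```

With a plain `=` on the second interior update, the x-direction contribution would be silently discarded — the bug reported above.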

rename PATH.GLOBAL -> PATH.SCRATCH

There has been some confusion concerning the name of the "scratch path" used by SeisFlows to write temporary/intermediate files. Currently this path is referred to in the code as PATH.GLOBAL, because it must be globally accessible, i.e. accessible to all compute nodes if you are running SeisFlows on a cluster. Now that it is possible to run SeisFlows not just on a cluster, but also in some cases on a desktop or laptop, a better and more descriptive name is PATH.SCRATCH.

With this change, users who have written their own subclasses will need to replace any occurrences of PATH.GLOBAL with PATH.SCRATCH.

Given that James and Gian have recently been making significant changes to the seisflows.system classes, now may be as good a time as any to roll out such a change. Perhaps in the future, we might put a version number system in place as an additional way of alerting users to these types of changes.

setup.py not working

python setup.py install
works fine in a conda virtual environment set up with:
conda create -n sf_env python=2.7 setuptools matplotlib numpy scipy

However, in a completely blank environment set up with:
conda create -n sf_env python=2.7 setuptools
there are some problems:
* mkl is not found correctly: scipy looks for numpy's mkl bindings, and numpy tries to get the mkl library in $MKLROOT/lib/emt64 instead of $MKLROOT/lib/intel64
* matplotlib complains about __version__ not being found

Announcement: this repo has moved

Ownership of the SeisFlows repository is being transferred to:
https://github.com/adjtomo

Related Issue: bch0w#2

Transfer will take place in one week (planned for May 17, 2022). All new development to the SeisFlows package will be focused on the future page: github.com/adjtomo/seisflows (note: this link will not work until the transfer is completed)

During this transition period, we will be merging and deleting an existing fork, SeisFlows3 (announced in #111). All future development will be focused on SeisFlows. During the transfer we will perform the following tasks:

The version of SeisFlows that currently resides in this repository will be accessible as v1.1 (at commit 46a9604). All changes made after the SeisFlows3 merge will begin versioning at v2.0.0.

Thanks for your patience as we make this transition.

broken PBS system classes

James and I have been working on these classes. At the moment they are not working -- in fact, there are a lot of hardwired variables and debug statements on the master branch. I hope to address this when James is back.

Can we use more than one processor when system='multithreaded'?

Hi, Ryan

I can run the package using system='multithreaded' on my computer; however, only one processor is used for each task, so the runtime is rather long for some examples:
"NPROC=1 # processors per task"
Can we use parallel execution (mpiexec or mpirun) under the multithreaded system? That is, can we set NPROC>1?

Thanks

better comments in parameter file

In a future version, users will be able to supply yaml parameters and paths files. The old format will continue to work just fine. Rest assured, there will be no disruptions!

Here's what the new "checkers example" parameter file might look like.

EDIT: Keep in mind things will look much better in a text editor with syntax coloring.

Is this an improvement? Feedback welcome!


# The first six parameters correspond closely to the structure of the source
# code and determine which modules are actually loaded at runtime. To see 
# available options for the first six parameters, browse the matching code
# directories


# Which workflow will be run?
# e.g. inversion, migration
WORKFLOW: 'inversion'


# What type of system will the workflow run on?
# e.g. serial, multicore, pbs_sm, pbs_lg, slurm_sm, slurm_lg
SYSTEM: 'serial'


# Which computational engine will be used?
# e.g. specfem2d, specfem3d, specfem3d_globe
SOLVER: 'specfem2d'


# Which nonlinear optimization algorithm will be used?
# e.g. NLCG, LBFGS
OPTIMIZE: 'LBFGS'


# What facilities to use for signal processing operations on seismic traces?
PREPROCESS: 'default'


# What facilities to use for image processing operations on models or sensitivity kernels?
POSTPROCESS: 'default'



### data processing

# data misfit function (see plugins/misfit)
MISFIT: 'Waveform'


# data file format (see plugins/readers)
FORMAT: 'su'

# data channels
CHANNELS: 'y'


### solver

# material parameterization type
# e.g. Acoustic, Elastic
MATERIALS: 'Elastic'

# time scheme
NT: 4800
DT: 0.06


### nonlinear optimization

# preconditioning strategy (see plugins/preconds)
PRECOND: None

# maximum number of function evaluations allowed per line search
STEPMAX: 10

# optional step length safeguard
# (smaller values are more conservative)
STEPTHRESH: 0.1



### system

# the workflow involves how many embarrassingly parallel tasks?
# typically equal to the number of seismic sources
NTASK: 25

# how many CPU cores are required per task?
NPROC: 1

convert seismic data reader and writer into standalone tools

Currently, routines for reading and writing seismic traces are tied to individual solver repositories. Converting them to standalone tools would reduce dependencies between classes and make for a much simpler import system. Before proceeding though, we would need to be sure that our ability to (1) parallelize data processing operations and (2) handle multicomponent data would not be adversely affected.

Expand System capabilities to other clusters, workload managers

The current version of SeisFlows contains a SLURM System module, and super modules for two clusters: Maui (New Zealand HPC) and Chinook (University of Alaska Fairbanks).

Other SLURM systems that we plan to create super modules for include: Oakforest-PACS (Japanese HPC), Frontera (Texas Advanced Computing Center), and Mustang (Air Force Research Laboratory).

Other workload managers we should target include PBS and LSF; however, as of this comment, we do not have access to such clusters to test and develop on.

Other non-cluster System capabilities to target include multicore machines and GPU workstations.

Related to previous issue #18

Questions about the code

Hi,

Thanks for sharing this code.

I have two questions.

First, I used specfem3d_cartesian to do waveform modeling and kernel (K) computation. The density (rho) kernels used in this code come from the proc00*rho_kernel.bin files of SPECFEM3D. Based on Eqs. 19 and 20 in Tromp et al. (2005), should it use rhop (rho prime) kernels rather than rho kernels?

Second, in the base.py file of the postprocess folder, the code multiplies the kernels by the model to estimate the gradient (g).

    This is the original code:
    gradient = solver.load(
        path+'/'+'kernels/sum', suffix='_kernel')

    # merge into a single vector
    gradient = solver.merge(gradient)

    # convert to absolute perturbations, log dm --> dm         
    # see Eq.13 Tromp et al 2005         
    gradient *= solver.merge(solver.load(path + '/' + 'model'))

But based on the equation (Eq. 13 in Tromp et al. 2005), if f is the misfit function, df = Integration_over_space[K (dm/m)]. I think the gradient should be g = df/dm = Integration_over_space(K/m), so this code should divide the kernels by the model parameters, not multiply by them, right?

Please correct me if I'm wrong. Thank you very much; I hope to get your answers.
Best regards.
