pyqg / pyqg

Quasigeostrophic model in python

Home Page: http://pyqg.readthedocs.org

License: MIT License

Python 28.56% Jupyter Notebook 67.17% Cython 4.27%

pyqg's Introduction

pyqg: Python Quasigeostrophic Model


pyqg is a Python solver for quasigeostrophic systems. Quasigeostrophic equations are an approximation to the full fluid equations of motion in the limit of strong rotation and stratification and are most applicable to geophysical fluid dynamics problems.

Students and researchers in ocean and atmospheric dynamics are the intended audience of pyqg. The model is simple enough to be used by students new to the field yet powerful enough for research. We strive for clear documentation and thorough testing.

pyqg supports a variety of different configurations using the same computational kernel. The different configurations are evolving and are described in detail in the documentation. The kernel, implemented in Cython, uses a pseudo-spectral method which is heavily dependent on the fast Fourier transform. For this reason, pyqg tries to use pyfftw and the FFTW Fourier transform library (if pyfftw is not available, it falls back on numpy.fft). With pyfftw, the kernel is multi-threaded but does not support MPI. Optimal performance will be achieved on a single system with many cores.
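For orientation, here is a minimal usage sketch (the parameter values below are illustrative only, not recommendations):

    import pyqg

    # set up the default two-layer QGModel on a 64 x 64 grid and run it
    # (all numerical values here are illustrative)
    m = pyqg.QGModel(nx=64, dt=8000.0, tmax=90 * 86400.0)
    m.run()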


pyqg's People

Contributors

anirban89, asross, cesar-rocha, cspencerjones, dante831, dhruvbalwada, francispoulin, hugovk, jamesp, jbusecke, jrbourbeau, mbueti, mengcz13, mfjansen, mjclobo, navidcy, pittwolfe, rabernat, rochanotes, salahkouhen, t-schanz


pyqg's Issues

Bug in the kernel: missing negative sign

Working on the diagnostics for the layered model, I got confused by the negative sign we use in http://pyqg.readthedocs.org/en/stable/examples/two-layer.html#plot-diagnostics

I believe this is a bug in the kernel. A missing negative sign in the calculation of the tendencies:

                for i in range(self.Nk):
                    # overwrite the tendency, since the forcing gets called after
                    self.dqhdt[k,j,i] = ( self._ik[i] * self.uqh[k,j,i] +
                                    self._il[j] * self.vqh[k,j,i] +
                                    self._ikQy[k,i] * self.ph[k,j,i] )

This should be negative since we are solving

q_t = - J(psi,q) 

Note that the bottom drag has the correct sign.
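For reference, a minimal sketch of the proposed fix (same names as the snippet above): negating the advective terms makes the computed tendency consistent with q_t = - J(psi, q).

                for i in range(self.Nk):
                    # proposed fix (sketch): note the overall minus sign
                    self.dqhdt[k,j,i] = -( self._ik[i] * self.uqh[k,j,i] +
                                           self._il[j] * self.vqh[k,j,i] +
                                           self._ikQy[k,i] * self.ph[k,j,i] )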

bottom topography

It should be easy to add bottom topography to the layered models. When there is bottom topography, the lower layer PV gets an extra term f0 h / H where h is the topographic anomaly and H is the lower layer thickness. This can be incorporated into the background PV gradient when the advection tendency is calculated.
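A rough sketch of how the term could enter (htop, f0, and Hi are hypothetical names for the topographic anomaly, Coriolis parameter, and layer thicknesses; they are not existing pyqg attributes):

    # hypothetical sketch: add the topographic contribution f0 * h / H to the
    # lowest-layer potential vorticity before the advective tendency is formed
    q_bottom_total = q[-1] + f0 * htop / Hi[-1]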

Redundant FFT plans?

Do we really need all these FFT plans in the kernel? The idea of FFT plans is to figure out the best algorithm to perform FFTs given the type (e.g. real-to-complex), size, and dimension of the array, and the machine. It seems that these plans are redundant; we only need one plan for the forward transform and one plan for the backward transform. I've recently used this approach in a piece of code and it works. (I've also used a similar approach to calculate FFTs with FFTW3 in Fortran.)

  def _initialize_fft(self):
     """ Initializes FFT plans """
     A = pyfftw.n_byte_align_empty(self.shape_real, pyfftw.simd_alignment,\
                                 dtype=self.dtype_real)
     Ah = pyfftw.n_byte_align_empty(self.shape_cplx, pyfftw.simd_alignment,\
                                 dtype=self.dtype_cplx)

     self.real_to_complex_fft2 = pyfftw.builders.rfft2(A,threads=self.ntd,\
                       planner_effort='FFTW_MEASURE')

     self.complex_to_real_fft2 = pyfftw.builders.irfft2(Ah,threads=self.ntd,\
                       planner_effort='FFTW_MEASURE')

     del A, Ah

The same plan can be used to transform different arrays of the same type/size, e.g.:

self.Xih = self.real_to_complex_fft2(self.Xi)
self.vh = self.real_to_complex_fft2(self.v)

Perhaps the most important question is: would we save memory by eliminating redundant plans? I don't know how much memory each FFT plan takes.

pysw?

This week I want to create the foundations of a new GitHub repository called pysw. Basically, this is a shallow water version of pyqg. Before I do anything, I wanted to ask the pyqg creators a couple of questions.

First, would you mind if I used what we have as pyqg as a skeleton? I plan to copy what we have because I think it's really great and modify the equations. I know that pyqg is open source and that anyone can take anything and use it, within the confines of the MIT license (thanks to @rabernat ). But at the same time I think it would be rude if I didn't ask you all for your permission since you've put so much effort into it already.

One thought I had was to make this a subset of pyqg, but I think that would be too constraining, so a separate repository seemed easier.

Second, who would be interested in helping to develop this? By that I mean, would you like to have administration access to the github repo?

I have the equations coded up in a variety of other codes, so I don't think it will be a lot of work, and I would certainly appreciate your input and contributions.

One of the reasons for doing this sooner rather than later is that Geoff was asking whether this would exist before January, when he teaches a course. I want to have a version in the next month or so that he can start using.

@jamesp , since you are working with Geoff, would you be interested in maybe doing some tests as things develop?

I hope that this will help to accomplish some of what pyqg does: 1) reproduce classical results and 2) let people simulate things on their own, but for problems that QG does not apply to, i.e. with gravity waves and not-so-small Rossby numbers.

As usual, your thoughts/comments are greatly appreciated.

make coverage work

When I try to run the tests with coverage, the test suite just hangs and never starts. The command I'm using is

py.test pyqg --cov=pyqg --cov-config .coveragerc --cov-report term-missing

I suspect this has to do with cython, but I'm not sure about that.

test_twolayer_qg test fails

Despite our best attempt at making a useful test, the default test still fails. Here is the output I get from it.

FAIL: test_twolayer_qg.test_the_model
Make sure the results are correct within relative tolerance rtol.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rpa/pyqg/pyqg/tests/test_twolayer_qg.py", line 42, in test_the_model
    np.testing.assert_allclose(q1norm, 9.561430503712755e-08, rtol)
  File "/usr/local/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 1297, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 665, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Not equal to tolerance rtol=1e-15, atol=0

(mismatch 100.0%)
 x: array(9.723198783759038e-08)
 y: array(9.561430503712755e-08)
-------------------- >> begin captured stdout << ---------------------
t=               0, tc=         0: cfl=0.021228, ke=0.000019292, T_e=527.025961099
t=        12800000, tc=      1000: cfl=0.021195, ke=0.000019292, T_e=527.025961099
t=        25600000, tc=      2000: cfl=0.021309, ke=0.000035737, T_e=387.206968354
t=        38400000, tc=      3000: cfl=0.026747, ke=0.000089140, T_e=245.170789728
t=        51200000, tc=      4000: cfl=0.042333, ke=0.000222572, T_e=155.154023419
t=        64000000, tc=      5000: cfl=0.066459, ke=0.000555989, T_e=98.164418796
t=        76800000, tc=      6000: cfl=0.105021, ke=0.001390120, T_e=62.085201635
t=        89600000, tc=      7000: cfl=0.102573, ke=0.003476802, T_e=39.257825464
time:       9.3312e+07
q1norm:     9.723198783759038e-08

--------------------- >> end captured stdout << ----------------------

----------------------------------------------------------------------
Ran 4 tests in 14.851s

FAILED (failures=1, errors=1)

surface wind stress forcing

In QG, the surface wind stress is felt via Ekman pumping in the upper layer, which introduces a term proportional to curl tau in the upper layer PV equation.

For a steady forcing, this should be easy to add to the PV tendency.
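A hedged sketch of what such a term might look like (curl_tau, rho0, and H1 are hypothetical names for the wind stress curl, reference density, and upper layer thickness):

    # hypothetical sketch: steady Ekman pumping acting on the upper layer,
    # w_ek = curl(tau) / (rho0 * f0), contributes f0 * w_ek / H1 to dq1/dt
    dq1dt_wind = curl_tau / (rho0 * H1)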

FFTW accuracy

Testing the FFTW wrapper on my Mac Pro with Anaconda (64-bit) and numpy 1.9.1. Very similar results with mklfft.


import numpy as np
import pyfftw

def rfft2(A):
    a = pyfftw.n_byte_align_empty(A.shape, 8, 'float64')
    a[:]=np.copy(A)
    fft_object = pyfftw.builders.rfft2(a,threads=1)
    return fft_object()

def irfft2(Ahat):
    ah = pyfftw.n_byte_align_empty(Ahat.shape, 16, 'complex128')
    ah[:]=np.copy(Ahat)
    fft_object = pyfftw.builders.irfft2(ah,threads=1)
    return fft_object() 

A = np.random.randn(64,64)

Now calculate the real fft with numpy and two different calls of fftw


Ahat_np =  np.fft.rfft2(A)
Ahat_fftw = pyfftw.interfaces.numpy_fft.rfft2(A, threads=1)
Ahat_fftw_2 = rfft2(A)

Test if the fftw calls give the same results


In [9]: np.allclose(Ahat_fftw,Ahat_fftw_2,rtol=1.e-16,atol=1.e-16)
Out[9]: True

But there are some small differences from numpy's rfft2:


rtol,atol = 1.e-14,1.e-14
In [47]: np.allclose(Ahat_np,Ahat_fftw,rtol,atol)
Out[47]: False

The test above passes with a tolerance of 1.e-13.

Now invert back to physical space and compare with original array


Anp = np.fft.irfft2(Ahat_np)
Afftw = pyfftw.interfaces.numpy_fft.irfft2(Ahat_fftw)
Afftw_2 = irfft2(Ahat_fftw_2)

In [40]: rtol,atol = 1.e-14, 1.e-14

In [41]: np.allclose(A,Anp,rtol,atol)
Out[41]: True

In [42]: np.allclose(A,Afftw,rtol,atol)
Out[42]: True

In [43]: np.allclose(A,Afftw_2,rtol,atol)
Out[43]: True

It is somewhat upsetting that we can't get the exact same results to 1.e-15... But so far so good. Now repeat the calculations above in a nonsquare domain:

 
A = np.random.randn(64,66)

...


In [61]: rtol,atol = 1.e-13, 1.e-13

In [62]: np.allclose(A,Anp,rtol,atol)
Out[62]: True

In [63]: np.allclose(A,Afftw,rtol,atol)
Out[63]: False

In [64]: np.allclose(A,Afftw_2,rtol,atol)
Out[64]: False

In particular, there are "significant" differences between the original array and the one that goes through the fftw transforms:


In [65]: A-Afftw
Out[65]: 
array([[ -1.27955443e-08,  -4.15157833e-09,   2.97806797e-08, ...,
          2.01382537e-08,   2.04817235e-08,   1.73261541e-08],
       [ -2.57259580e-08,  -1.81484963e-08,   4.83268750e-09, ...,
         -3.66382132e-09,   1.07922297e-08,   2.94663406e-08],
       [ -3.59105948e-09,  -1.16771947e-08,   8.73914346e-09, ...,
          1.38128254e-08,   3.13628652e-08,   3.43982829e-08],
       ..., 
       [ -1.86094601e-08,   4.02695504e-08,  -1.77806448e-08, ...,
          3.73103608e-08,   3.07046049e-08,  -1.44515059e-08],
       [ -7.43273176e-09,  -2.24983171e-08,  -5.21882319e-10, ...,
         -1.52015343e-08,   6.26377332e-08,  -5.06093842e-09],
       [  3.93612150e-08,  -4.75781994e-08,  -3.41397135e-08, ...,
          1.64235306e-08,   3.56944274e-08,  -7.90450114e-08]])

Same thing if we use the call with byte align, etc.

Numpy's fft still OK:


In [66]: A-Anp
Out[66]: 
array([[ -1.94289029e-15,  -1.99840144e-15,   2.44249065e-15, ...,
         -2.22044605e-16,   0.00000000e+00,   2.55351296e-15],
       [  3.33066907e-16,  -4.44089210e-16,   2.66453526e-15, ...,
          7.07767178e-16,   1.83186799e-15,  -5.55111512e-16],
       [ -1.24900090e-16,  -1.66533454e-16,   2.66453526e-15, ...,
          2.10942375e-15,   1.55431223e-15,   2.22044605e-16],
       ..., 
       [ -2.77555756e-15,   2.22044605e-15,  -7.77156117e-16, ...,
          3.55271368e-15,   1.11022302e-15,  -3.44169138e-15],
       [  2.58126853e-15,   2.55351296e-15,  -4.51028104e-17, ...,
         -2.22044605e-15,  -3.10862447e-15,  -7.02216063e-15],
       [  6.66133815e-16,  -2.22044605e-16,  -4.21884749e-15, ...,
          0.00000000e+00,   4.66293670e-15,  -5.77315973e-15]])

Any thoughts on this?

Saving results to disk

Guys, I just implemented a single-layer subclass (I'll merge my branch into develop soon). I've created a simple notebook using the model to reproduce a classical experiment by McWilliams JFM 1984. One thing that came up when I was doing this is that we may want to write the state variables to disk so that we can study the evolution of the system.

Any thoughts on how to do the saving w/o compromising performance?

add additional side boundary conditions

It would be great to support sidewalls on one or both sides of the domain. This requires using a discrete sine transform (DST) instead of the current discrete Fourier transform (DFT) along the dimension with the sidewalls.

Suppose we have a field p(x,y) on the domain [0,Lx] x [0,Ly] which needs to be transformed in x and y. Currently we use a DFT, which implies that p is doubly periodic.

We would like to support "sidewalls" at either / both x=0,Lx and y=0,Ly, on which p = 0. The four cases would then be:

  1. doubly periodic: use r2c DFT in both directions (current behavior)
  2. p(0,y) = 0: use r2r DST in x direction, r2c DFT in y direction
  3. p(x,0) = 0: use r2r DST in y direction, r2c DFT in x direction
  4. p(0,y) = 0 AND p(x,0) = 0: use r2r DST in both directions

Implementing this requires support for real-to-real DST in pyfftw. That is currently being discussed at pyFFTW/pyFFTW#39.
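For illustration only (not using pyfftw, which lacked real-to-real transforms at the time), the transforms involved in case 2 might look like this with scipy:

    import numpy as np
    from scipy.fftpack import dst

    p = np.random.randn(64, 64)                      # p[j, i] ~ p(y, x), walls in x
    ph = np.fft.fft(dst(p, type=1, axis=1), axis=0)  # DST-I in x (p = 0 at the walls), DFT in y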

benchmarks

Has anyone run the benchmarks lately? Yesterday I updated to the latest version and decided for fun that I would run the benchmarks. It has been more than 12 hours and it's now on the 2048 test.

Also, from what I gather from the results, OpenMP is slightly faster, but you need a pretty big grid for it to be worthwhile. If this saves the data to a file, I am happy to share it if/when it finishes.

tavestart

tavestart is supposed to be the time at which the model starts computing averages for the diagnostics, but it is never used in model.py. Here's what _calc_diagnostic does:

if (self.t>=self.dt) and (self.tc%self.taveints==0):
       self._increment_diagnostics()

(...)

is tavestart used elsewhere?

(I think we're missing a self.t>=self.tavestart above).
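A sketch of the suggested fix (assuming the attribute really is self.tavestart):

    if (self.t >= self.tavestart) and (self.tc % self.taveints == 0):
        self._increment_diagnostics()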

test_import.test_answer fails with error

This is a new test added by Cesar

======================================================================
ERROR: test_import.test_answer
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/rpa/pyqg/pyqg/tests/test_import.py", line 3, in test_answer
    import mkl
ImportError: No module named mkl

Shape returned by rfft2 not consistent

This is a really strange error.

In the process of investigating the origin of the test mismatches, I tried to run the current master branch tests on the mac pro that Malte and I were using last summer. The test_twolayer_qg.test_the_model test failed with the following error.

ERROR: test_twolayer_qg.test_the_model
Make sure the results are correct within relative tolerance rtol.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rpa/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/rpa/RND/Public/pyqg/pyqg/tests/test_twolayer_qg.py", line 35, in test_the_model
    m.run()
  File "/Users/rpa/RND/Public/pyqg/pyqg/qg_model.py", line 249, in run
    self._step_forward()
  File "/Users/rpa/RND/Public/pyqg/pyqg/qg_model.py", line 262, in _step_forward
    self.ph1, self.ph2 = self.invph(self.qh1, self.qh2)
  File "/Users/rpa/RND/Public/pyqg/pyqg/qg_model.py", line 230, in invph
    ph1 = self.a11*zh1 + self.a12*zh2
ValueError: operands could not be broadcast together with shapes (32,17) (32,32) 

It turns out that this is due to np.fft.rfft2 returning different shapes on the two different machines. On my laptop (where the test succeeds) I get

>>> np.fft.rfft2(np.zeros((16,16))).shape
(16, 9)

This result makes sense, since the point of rfft is not to compute the redundant Fourier coefficients. On this machine, I have numpy 1.8.1, Canopy 64-bit python 2.7.6.

However, on the mac pro, I get

>>> np.fft.rfft2(np.zeros((16,16))).shape
(16, 16)

which of course breaks the whole model. This machine has numpy 1.8.1, Canopy 64 bit python 2.7.3.
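For reference, a quick sanity check of what a healthy numpy should return: rfft2 keeps all rows but reduces the last axis to M//2 + 1 coefficients for an (N, M) real input.

    >>> A = np.zeros((16, 24))
    >>> np.fft.rfft2(A).shape == (A.shape[0], A.shape[1] // 2 + 1)
    True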

test_fft2.test_parseval fails on macbook

This test failed on my macbook but worked fine on my linux workstation. I think it is a precision issue.

======================================================================
FAIL: test_fft2.test_parseval
Make sure 2D fft from QGModel satisfy Parseval's relation
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rpa/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/rpa/RND/Public/pyqg/pyqg/tests/test_fft2.py", line 28, in test_parseval
    assert error<rtol, " *** QGModel FFT2 does not satisfy Parseval's relation "
AssertionError:  *** QGModel FFT2 does not satisfy Parseval's relation 
-------------------- >> begin captured stdout << ---------------------
Variance from spectrum: 1.0320939902371178
Variance in physical space: 1.0320939902371151
error = 0.0000000000000026

Make tests faster

There are a lot of redundant tests. Plus they are slow. How long do we have to run to be confident the model is "right"?

Adams Bashforth startup is broken

I think there is an error in the way Adams Bashforth time stepping gets started up.

In the original code, the AB timestep coefficients get initialized in the __init__ method with values that make it do forward Euler:

         self.dt0 = self.dt
         self.dt1 = 0.

Then later in step_forward, after the first AB tendency has been calculated, they get updated.

         # add time tendencies (using Adams-Bashforth):
         self.qh1 = self.filtr*(
                     self.qh1 + self.dt0*self.dqh1dt + self.dt1*self.dqh1dt_p)
         self.qh2 = self.filtr*(
                     self.qh2 + self.dt0*self.dqh2dt + self.dt1*self.dqh2dt_p)  

         # remember previous tendencies
         self.dqh1dt_p = self.dqh1dt.copy()
         self.dqh2dt_p = self.dqh2dt.copy()

        # the actual Adams-Bashforth stepping can only be used starting
        # at the second time-step and is thus set here:   
        if self.tc==0:
            self.dt0 = 1.5*self.dt
            self.dt1 = -0.5*self.dt

Starting with this commit (f7f50f8), the code to update dt0 and dt1 was moved before the first tendency calculation. This means that the first step is calculated using forward Euler, but with 1.5*dt instead of dt. Later, when third order was implemented, the bug persisted, meaning that third-order coefficients are used for the second timestep.

I will fix this along with my general rewrite of the AB code.
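For reference, a hedged sketch of the conventional start-up sequence (generic names, not the actual pyqg variables): forward Euler on the first step, second-order Adams-Bashforth on the second, third-order afterwards.

    if tc == 0:
        qh += dt * dqhdt                                # forward Euler
    elif tc == 1:
        qh += dt * (1.5 * dqhdt - 0.5 * dqhdt_p)        # AB2
    else:
        qh += dt * (23.0 * dqhdt - 16.0 * dqhdt_p + 5.0 * dqhdt_pp) / 12.0  # AB3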

Numerical accuracy in two-layer example, and the multi-layer model

I wanted to give a heads up about an accuracy problem. I implemented a multi-layer class, and to test if things were correct I was trying to reproduce some of the results/tests of the two-layer model. I failed.

I found that there's a small (but significant) difference in the inversion matrix. The infinity norm of the difference is O(1.e-5), corresponding to a very small wavenumber (wv2[1,0]), where the matrix we're trying to invert is nearly singular; at very large wavenumbers the differences are O(1e-12). For the general N-layer model I'm relying on np.linalg.inv, whereas in the two-layer model we simply use the matrix inverted by hand. I checked the matrices we are inverting. They're the same:

From the multi_layer model (m):

In [417]: m.S - np.eye(m.nz)*m.wv2[1,0]
Out[417]: 
array([[ -3.59503397e-09,   3.55555556e-09],
       [  8.88888889e-10,  -9.28367306e-10]])

From the two_layer_model (m2):

In [425]: np.array([[-m2.F1-m2.wv2[1,0], m2.F1],[m2.F2,-m2.F2-m2.wv2[1,0]]])
Out[425]: 
array([[ -3.59503397e-09,   3.55555556e-09],
       [  8.88888889e-10,  -9.28367306e-10]])

But the results are different:

In [432]: m.a[:,:,1,0]-m2.a[:,:,1,0]
Out[432]: 
array([[  4.76837158e-06,   1.90734863e-05],
       [  4.76837158e-06,   1.52587891e-05]])

The problem is that the matrix we're inverting is poorly conditioned, particularly at small wavenumbers. (The determinant is O(1.e-19)!) We're losing a lot of accuracy...

I'm not sure how to circumvent this issue; I tried a few things, but with these numbers it is hard to know what is correct or what to expect. Any thoughts?

(Things get better as we increase the number of layers because the matrix becomes better conditioned.)
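One way to quantify the conditioning (illustration only, using the 2x2 matrix quoted above):

    S = np.array([[-3.59503397e-09,  3.55555556e-09],
                  [ 8.88888889e-10, -9.28367306e-10]])
    np.linalg.cond(S)   # how much relative error in S can be amplified by the inversion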

plotting final stopping time

When I modified the sqg script I noticed that if I choose tmax to be 26.0, it would not plot the solution at 26. I changed it to 26.005 and then it plotted. This is not a big deal, but I thought I would mention it.

Is MKL being used?

Landscape complains that mkl is not used, so maybe we do not need to try to import it in model.py and in the model subclasses, e.g. layered_model.py.

Apparently anaconda already builds numpy with mkl. I get the following message whenever importing numpy

>>> import numpy as np

Vendor:  Continuum Analytics, Inc.
Package: mkl
Message: trial mode expires in 21 days
Vendor:  Continuum Analytics, Inc.
Package: mkl
Message: trial mode expires in 21 days
Vendor:  Continuum Analytics, Inc.
Package: mkl
Message: trial mode expires in 21 days

Indeed:

>>> np.__config__.show()

lapack_opt_info:
    libraries = ['mkl_lapack95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'iomp5', 'pthread']
    library_dirs = ['/Users/crocha/anaconda/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/crocha/anaconda/include']
blas_opt_info:
    libraries = ['mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'iomp5', 'pthread']
    library_dirs = ['/Users/crocha/anaconda/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/crocha/anaconda/include']
openblas_lapack_info:
  NOT AVAILABLE
lapack_mkl_info:
    libraries = ['mkl_lapack95_lp64', 'mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'iomp5', 'pthread']
    library_dirs = ['/Users/crocha/anaconda/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/crocha/anaconda/include']
blas_mkl_info:
    libraries = ['mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'iomp5', 'pthread']
    library_dirs = ['/Users/crocha/anaconda/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/crocha/anaconda/include']
mkl_info:
    libraries = ['mkl_intel_lp64', 'mkl_intel_thread', 'mkl_core', 'iomp5', 'pthread']
    library_dirs = ['/Users/crocha/anaconda/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/Users/crocha/anaconda/include']

instability of a QG jet

Inspired by the recent advances in pyqg, I would like to put together another QG example. Basically, I want to reproduce some of Glenn's results from 1987.

http://journals.ametsoc.org/doi/abs/10.1175/1520-0485(1987)017%3C1408:NWACVS%3E2.0.CO;2

He uses the reduced-gravity BT QG model, but one issue is that the streamfunction is not periodic in one direction. One way to handle this is to decompose the flow into a basic state and a perturbation:

psi = Psi + psi'
q = Q + q'
u = U + u'

The fully nonlinear governing equations are then

partial_t q' + J(Psi, q') + J(psi', Q) + J(psi',q') = 0

and we can look for solutions that are doubly periodic.

I know how to solve this in general, but I'm not sure how to fit it into the context of pyqg. I looked in BT_model and the evolution equations are not there.

Ryan: any ideas how best to do this?

Another option is to look at how to use a DCT in one direction. I thought the first option would be easier, but I am very interested to know what you all think.

If you think this is too much of a pain for the moment I don't have to do it now but I thought I'd mention it since it would be a nice example. Doing the linear stability would be nice and easy too.

build issue on os x

On Yosemite 10.10.5 with Xcode 6.4, I get the following error when I try to build with clang

$ python setup.py build_ext --inplace

or gcc

$ CC=/usr/local/gcc-4.8/bin/gcc python setup.py build_ext --inplace --library-dirs=/usr/local/gcc-4.8/lib
ld: library not found for -lgcc_s.10.5

It sounds similar to this issue.

Diagnostics: name/description and normalization

I know this may sound like perfectionism to some, but the current description of the diagnostics is misleading. These terms represent tendencies in the spectral energy density. The energy spectrum has units of L^2 T^-2 per unit wavenumber, so the tendencies have units of L^2 T^-3 per unit wavenumber. For example, if the wavenumber is in cycles per km and the energy in m^2/s^2, then this would be m^2 s^-3 / cpkm.

My main concerns are:

  1. The "flux terms" are actually the divergence of the flux. To get the flux we must integrate in wavenumber.
  2. "APEgen" actually represents something like the spectrum of the rate of potential energy generation.
  3. "Total energy dissipation" should be the spectrum of mechanical energy dissipation through bottom drag.
    etc.

Also, the implementation of the fft computes an unnormalized transform. Note that because we're using pyfftw, this does not matter if we are just transforming back and forth. But to get the correct magnitudes for the diagnostics we need to normalize things by the size of the array. In particular, for quadratic quantities such as the energy spectral density, we need to normalize by M^2, where M = Nx x Ny.
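A minimal sketch of that normalization for a quadratic quantity (generic names; uh and vh stand for unnormalized forward transforms of u and v on an Nx x Ny grid):

    M = Nx * Ny
    KE_density = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2) / M**2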

bug in run_the_model.py and some examples I'd like to do

I finally found the time to look at the BT QG problem and am very impressed with the code. When I try run_the_model.py, I get an error because beta1 is not defined:

Francis-Poulins-MacBook-Pro:examples fpoulin$ /Applications/anaconda/bin/python run_the_model.py
Traceback (most recent call last):
File "run_the_model.py", line 9, in
imshow(m.q[0] + m.beta1 * m.y)
AttributeError: 'QGModel' object has no attribute 'beta1'

If I go into the code and change the two instances of beta1 to beta, then that does the trick.

It runs but it plots the image in such a way that you need to close each frame. I wonder if it might be easier to use plt.draw() so that it runs automatically.

Also, there are a few examples that I thought I might try to put together, hopefully soon.

  1. Take your BT QG model and switch on beta to hopefully generate zonal jets a la Rhines. I have tried doing this before with some success but if anyone has tried this and has nice parameters that would be welcome. Or I could also revisit Peter's original paper.

  2. An SQG model. Mostly everything is the same but I need to change the inversion. If I do that then would it make sense to have something called sqg_model.py?

  3. This is more ambitious, but if I wanted to use this code to study the stability of a Bickley jet, as Glenn did in 87 or so, because the streamfunction of the basic state is not periodic, it would be necessary to decompose the background state and the perturbation. Before I try this do you foresee any problems?

Add stochastic forcing

This would be useful for problems in which we want to force the system without imposing a background velocity.

check_for_openmp does not necessarily use the same compiler as cythonize

Everything works fine on a Linux server with CentOS. But on OS X El Capitan with gcc installed from Mac-hpc, check_for_openmp uses clang whereas cythonize appears to use gcc.

If we simply comment out that test in setup.py,

extra_compile_args = []
extra_link_args = []

use_openmp = True
#if check_for_openmp() and use_openmp:
#    extra_compile_args.append('-fopenmp')
#    extra_link_args.append('-fopenmp')
#else:
#    warnings.warn('Could not link with openmp. Model will be slow.')    

extra_compile_args.append('-fopenmp')
extra_link_args.append('-fopenmp')

then everything works.
