
Comments (15)

bradbell commented on July 3, 2024

I think that I have fixed the bug; see
5cf567d

The script
bug/sparse_list.sh
no longer triggers an assert, and the script
bug/efficient.sh
yields the expected number of non-zeros.

Please run your tests and see if you get any unexpected results. If not, please close this bug.

P.S.
I plan to do some cleanup of the bug directory; e.g., deleting the scripts above, once the bug is closed.

bradbell commented on July 3, 2024

It is possible for two sparsity patterns to both be correct while one is more efficient; i.e., it has fewer possibly non-zero entries. It would help to know which pattern is more efficient. It would also help to know at which commit the change occurred. I sometimes use a binary search (between two limiting commits) to find the commit where the difference first appears.

joaoleal commented on July 3, 2024

I used the 20160000.1 and the 20170000.2 versions.
The 2017 version has more non-zero entries in the pattern (1293 vs 1260).

bradbell commented on July 3, 2024

If you construct a shell script that automatically builds a test and outputs the number of non-zeros, I will do the binary search and determine the cause.

joaoleal commented on July 3, 2024

Here is a script that prints "Error" and exits with code 1 when the number of non-zeros is not 1260.
Please note that the variable "cppad_path" might have to be changed.

#! /bin/bash -e

cppad_path=/usr/include/cppad/

if [ ! -e build ]
then
    mkdir build
fi
cd build

cat << EOF > test_jac_nnz.cpp
#include <cppad/cppad.hpp>
#include <assert.h>
#include <vector>

using namespace CppAD;

std::vector<CppAD::AD<double> > evaluateModel(const std::vector<CppAD::AD<double> >& x,
                                              size_t repeat,
                                              CppAD::atomic_base<double>& atomModel);

std::vector<double> getTypicalValues(std::vector<double> xa,
                                     size_t repeat);

const size_t K_ = 3;
const size_t ns_ = 4;
const size_t nm_ = 2;
const size_t npar_ = 22;
const size_t na_ = ns_ + nm_ + npar_;

int main(int argc, char *argv[]) {
    /**
     * create atomic function
     */
    auto atomicFunction = [](const std::vector <AD<double>>& ind,
                             std::vector <AD<double>>& dxdt) {
        // temporary variables
        std::vector<AD<double> > v(17);

        v[0] = ind[8] * ind[10];
        v[1] = ind[9] * ind[11];
        v[2] = 0.1768325928384763 * ind[0];
        v[3] = 0.0180152833 * 1000 * (-13.851 + 0.64038 * ind[2] + -0.00191 * ind[2] * ind[2] + 1.8211e-06 * ind[2] * ind[2] * ind[2]);
        v[4] = v[2] * v[3];
        v[5] = ((v[4] - 1.0e-15 * ind[1] * v[2]) / 0.0180152833) / v[2];
        v[6] = (v[5] * v[2]) / (ind[1] * v[2] + v[5] * v[2]);
        v[7] = 1 - v[6];
        v[8] = (0.0180152833 * 1000 * (-13.851 + 0.64038 * ind[2] + -0.00191 * ind[2] * ind[2] + 1.8211e-06 * ind[2] * ind[2] * ind[2]) * ind[5]) / (1.0e-15 * v[7] + 0.0180152833 * v[6]);
        v[9] = exp(log(ind[6]) - (8.31447215 * ind[7]) / (8.31447215 * ind[2]));
        v[7] = v[0] + v[1] - v[7] * v[8] + -1 * v[9] * v[2];
        v[10] = 0.0180152833 * 1000 * (-13.851 + 0.64038 * ind[12] + -0.00191 * ind[12] * ind[12] + 1.8211e-06 * ind[12] * ind[12] * ind[12]) * ind[10];
        v[11] = 0.0180152833 * 1000 * (-13.851 + 0.64038 * ind[13] + -0.00191 * ind[13] * ind[13] + 1.8211e-06 * ind[13] * ind[13] * ind[13]) * ind[11];
        v[12] = ind[12] * ind[12];
        v[13] = ind[2] * ind[2];
        v[14] = ind[13] * ind[13];
        v[15] = ind[2] * ind[2];
        v[16] = ind[22] * (0.1768325928384763 + 1.4906857141283567 * ind[0]) * (ind[2] - ind[3]);
        dxdt[2] = (1.0e-15 * v[0] * ind[15] * (ind[12] - ind[2]) + v[10] * (15340.863387921299 * (ind[12] - ind[2]) + -58.009079435348092 * (ind[12] * ind[12] - ind[2] * ind[2]) + 0.1503353174209219 * (ind[12] * ind[12] * ind[12] - ind[2] * ind[2] * ind[2]) + -0.00019588923145049848 * (ind[12] * ind[12] * ind[12] * ind[12] - ind[2] * ind[2] * ind[2] * ind[2]) + 1.0402389841962685e-07 * (v[12] * v[12] * ind[12] - v[13] * v[13] * ind[2])) + 1.0e-15 * v[1] * ind[15] * (ind[13] - ind[2]) + v[11] * (15340.863387921299 * (ind[13] - ind[2]) + -58.009079435348092 * (ind[13] * ind[13] - ind[2] * ind[2]) + 0.1503353174209219 * (ind[13] * ind[13] * ind[13] - ind[2] * ind[2] * ind[2]) + -0.00019588923145049848 * (ind[13] * ind[13] * ind[13] * ind[13] - ind[2] * ind[2] * ind[2] * ind[2]) + 1.0402389841962685e-07 * (v[14] * v[14] * ind[13] - v[15] * v[15] * ind[2])) - ind[16] * v[9] * v[2] + 0 - v[16]) / (v[4] * (1.0e-15 * ind[1] * ind[15] + 0.0180152833 * v[5] * (15340.863387921299 + -116.01815887069618 * ind[2] + 0.45100595226276569 * ind[2] * ind[2] + -0.00078355692580199391 * ind[2] * ind[2] * ind[2] + 5.2011949209813426e-07 * ind[2] * ind[2] * ind[2] * ind[2])) / (1.0e-15 * ind[1] + 0.0180152833 * v[5]) + ind[27]);
        v[11] = (1.0e-15 * v[7] + 0.0180152833 * (v[10] / 0.0180152833 + v[11] / 0.0180152833 - v[6] * v[8]) - v[2] * 0.0180152833 * 1000 * (0.64038 + -0.00382 * ind[2] + 5.4633e-06 * ind[2] * ind[2]) * dxdt[2]) / v[3];
        dxdt[0] = v[11] / 0.1768325928384763;
        dxdt[1] = (v[7] - ind[1] * v[11]) / v[2];
        v[11] = ind[17] * ind[17];
        v[7] = ind[3] * ind[3];
        dxdt[3] = (0.0180152833 * 1000 * (-13.851 + 0.64038 * ind[17] + -0.00191 * ind[17] * ind[17] + 1.8211e-06 * ind[17] * ind[17] * ind[17]) * ind[4] * (15340.863387921299 * (ind[17] - ind[3]) + -58.009079435348092 * (ind[17] * ind[17] - ind[3] * ind[3]) + 0.1503353174209219 * (ind[17] * ind[17] * ind[17] - ind[3] * ind[3] * ind[3]) + -0.00019588923145049848 * (ind[17] * ind[17] * ind[17] * ind[17] - ind[3] * ind[3] * ind[3] * ind[3]) + 1.0402389841962685e-07 * (v[11] * v[11] * ind[17] - v[7] * v[7] * ind[3])) + v[16]) / (0.0180152833 * (0.0180152833 * 1000 * (-13.851 + 0.64038 * ind[3] + -0.00191 * ind[3] * ind[3] + 1.8211e-06 * ind[3] * ind[3] * ind[3])) / 0.0180152833 * ind[26] * (15340.863387921299 + -116.01815887069618 * ind[3] + 0.45100595226276569 * ind[3] * ind[3] + -0.00078355692580199391 * ind[3] * ind[3] * ind[3] + 5.2011949209813426e-07 * ind[3] * ind[3] * ind[3] * ind[3]) + ind[19]);
    };

    std::vector<double> xx(na_);
    xx[0] = 0.3; // h
    xx[1] = 7.82e3; // Ca
    xx[2] = 304.65; // Tr
    xx[3] = 301.15; // Tj
    xx[4] = 2.3333e-04; // u1
    xx[5] = 6.6667e-05; // u2
    xx[6] = 6.2e14; //
    xx[7] = 10080; //
    xx[8] = 2e3; //
    xx[9] = 10e3; //
    xx[10] = 1e-11; //
    xx[11] = 6.6667e-05; //
    xx[12] = 294.15; //
    xx[13] = 294.15; //
    xx[14] = 1000; //
    xx[15] = 4184; //Cp
    xx[16] = -33488; //deltaH
    xx[17] = 299.15; // Tj0
    xx[18] = 302.65; //   Tj2
    xx[19] = 7e5; // cwallj
    xx[20] = 1203; // csteam
    xx[21] = 3.22; //dsteam
    xx[22] = 950.0; //Ug
    xx[23] = 0.48649427192323; //vc6in
    xx[24] = 1000; //rhoj
    xx[25] = 4184; //Cpj
    xx[26] = 0.014; //Vj
    xx[27] = 1e-7; //cwallr

    std::vector<AD<double>> ay(ns_), ax(na_);
    for(size_t i = 0; i < na_; ++i)
        ax[i] = xx[i];

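    // wrap atomicFunction as a CppAD checkpoint (atomic) function;
    // set_sparsity_enum selects set-based internal sparsity patterns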
    checkpoint<double> atomic("cstr",
                              atomicFunction,
                              ax,
                              ay,
                              atomic_base<double>::set_sparsity_enum);

    /**
     * create tape
     */
    size_t repeat = 6;

    std::vector<double> xTypical = getTypicalValues(xx, repeat);
    size_t n = xTypical.size();
    std::vector <AD<double>> u(n);
    for (size_t j = 0; j < n; j++)
        u[j] = xTypical[j];

    CppAD::Independent(u);

    std::vector<AD<double>> v = evaluateModel(u, repeat, atomic);

    ADFun<double> fun;
    fun.Dependent(v);

    /**
     * determine Jacobian sparsity
     */
    size_t m = fun.Range();

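    // seed reverse mode with the m x m identity pattern so that
    // RevSparseJac returns the sparsity pattern of the full Jacobian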
    std::vector<std::set<size_t> > s_s(m);
    for (size_t i = 0; i < m; i++)
        s_s[i].insert(i);

    auto sparsity = fun.RevSparseJac(m, s_s);

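    // count the possibly non-zero entries in the Jacobian sparsity pattern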
    size_t nnz = 0;
    for (const auto& s: sparsity) {
        nnz += s.size();
    }

    if(nnz == 1260)
        return 0;
    else
        return 1;
}

std::vector<double> getTypicalValues(std::vector<double> xa,
                                     size_t repeat) {
    /**
     * collocation model values
     */
    size_t nvarsk = ns_;
    size_t nMstart = npar_ + nvarsk * K_ * repeat + nvarsk;

    std::vector<double> x(nMstart + repeat * nm_, 1.0);
    // parameters
    for (size_t j = 0; j < npar_; j++)
        x[j] = xa[ns_ + nm_ + j];

    size_t s = npar_;

    // i = 0 K = 0
    // states
    for (size_t j = 0; j < ns_; j++) {
        x[s++] = xa[j];
    }

    for (size_t i = 0; i < repeat; i++) {
        // controls
        for (size_t j = 0; j < nm_; j++) {
            x[nMstart + nm_ * i + j] = xa[ns_ + j];
        }

        // K = 1
        // states
        for (size_t j = 0; j < ns_; j++) {
            x[s] = 1.0 + 0.01 * i;
            x[s++] = xa[j];
        }

        // K = 2
        // states
        for (size_t j = 0; j < ns_; j++) {
            x[s++] = xa[j];
        }

        // K = 3
        // states
        for (size_t j = 0; j < ns_; j++) {
            x[s++] = xa[j];
        }
    }

    return x;
}

std::vector<CppAD::AD<double> > evaluateModel(const std::vector<CppAD::AD<double> >& x,
                                              size_t repeat,
                                              CppAD::atomic_base<double>& atomModel) {
    size_t m2 = repeat * K_ * ns_;

    // dependent variable vector
    std::vector<AD<double> > dep(m2);

    std::vector<AD<double> > dxikdt(ns_);
    std::vector<AD<double> > xik(na_);

    // parameters
    for (size_t j = 0; j < npar_; j++)
        xik[ns_ + nm_ + j] = x[j];

    size_t s = npar_;
    size_t nvarsk = ns_;
    size_t nMstart = npar_ + nvarsk * K_ * repeat + nvarsk;
    size_t eq = 0;

    for (size_t i = 0; i < repeat; i++) {
        size_t s0 = s;

        // controls
        for (size_t j = 0; j < nm_; j++) {
            xik[ns_ + j] = x[nMstart + nm_ * i + j];
        }

        // K = 1
        for (size_t j = 0; j < ns_; j++) {
            xik[j] = x[s + j]; // states
        }
        s += nvarsk;
        // xik[ns + nm + npar] = x[s + ns];// time

        atomModel(xik, dxikdt); // ODE
        for (size_t j = 0; j < ns_; j++) {
            dep[eq + j] = dxikdt[j]
                          + 0.13797958971132715 * x[s0 + j]
                          + -0.10749149571305303 * x[s0 + nvarsk + j]
                          + -0.038928002823013501 * x[s0 + 2 * nvarsk + j]
                          + 0.008439908824739363 * x[s0 + 3 * nvarsk + j];
        }
        eq += ns_;

        // K = 2
        for (size_t j = 0; j < ns_; j++) {
            xik[j] = x[s + j]; // states
        }
        s += nvarsk;
        // xik[ns + nm + npar] = x[s + ns];// time

        atomModel(xik, dxikdt); // ODE
        for (size_t j = 0; j < ns_; j++) {
            dep[eq + j] = dxikdt[j]
                          + -0.057979589711327127 * x[s0 + j]
                          + 0.11892800282301351 * x[s0 + nvarsk + j]
                          + -0.025841837620280327 * x[s0 + 2 * nvarsk + j]
                          + -0.035106575491406049 * x[s0 + 3 * nvarsk + j];
        }
        eq += ns_;

        // K = 3
        for (size_t j = 0; j < ns_; j++) {
            xik[j] = x[s + j]; // states
        }
        s += nvarsk;
        // xik[ns + nm + npar] = x[s + ns];// time

        atomModel(xik, dxikdt); // ODE
        for (size_t j = 0; j < ns_; j++) {
            dep[eq + j] = dxikdt[j]
                          + 0.099999999999999978 * x[s0 + j]
                          + -0.18439908824739357 * x[s0 + nvarsk + j]
                          + 0.25106575491406025 * x[s0 + 2 * nvarsk + j]
                          + -0.16666666666666669 * x[s0 + 3 * nvarsk + j];
        }
        eq += ns_;

    }

    return dep;
}
EOF
# -----------------------------------------------------------------------------

g++ -I${cppad_path} --std=c++11 -g test_jac_nnz.cpp -o test_jac_nnz
#
if ./test_jac_nnz
then
    echo "OK"
    exit 0
else
    echo "Error"
    exit 1
fi

bradbell commented on July 3, 2024

I have added this test to the current master branch as
bug/efficient.sh
see 51491ee

bradbell commented on July 3, 2024

I have done the binary search. See comments near the top of the bug/efficient.sh file.

The change occurred between Feb 20 18:34:50 2016 and Feb 22 16:57:46 2016.
There is a commit between these two, on Feb 22 10:10:24 2016, that fails an assert for this problem.
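As an aside, git can automate this kind of search with git bisect run; a rough sketch, assuming test.sh is a copy of the test script kept outside the work tree and adapted to build against the checked-out source (it must exit 0 when the non-zero count is as expected and non-zero otherwise, as bug/efficient.sh does):

git bisect start
git bisect bad  f2f000b    # later commit, where the non-zero count differs
git bisect good 1d6dc96    # earlier commit, where the count is still 1260
git bisect run /tmp/test.sh
git bisect reset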

Here are the corresponding log entries (note that this work was done on a branch called sparse before being merged into master).

commit f2f000b
Author: Brad Bell [email protected]
Date: Mon Feb 22 16:57:46 2016 -0700
sparse branch:
sparse_list.hpp: remove extra checks used for debugging.

commit 16bc02b
Author: Brad Bell [email protected]
Date: Mon Feb 22 10:10:24 2016 -0700
sparse branch:
Add test_more/local for testing features not in user API.
vector_set.cpp: test vector of sets operations.
sparse_list.hpp: use reference counting and list sharing.

commit 1d6dc96
Author: Brad Bell [email protected]
Date: Sat Feb 20 18:34:50 2016 -0700
1. Fix version number on cppad_lib.so.
2. Advance to cppad-20160220.

bradbell commented on July 3, 2024

I can get bug/efficient.sh to fail with version='master' if, in cppad/local/sparse_list.hpp, just before the return in void add_element, I add the code:
# ifndef NDEBUG
check_data_not_used();
# endif
None of the CppAD sparse tests fail with this addition, only the test in bug/efficient.sh. I am making progress with this case, but it may help to have a simpler one.
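As background for why such a check matters, here is a minimal copy-on-write sketch, assuming a reference-counted row of a sparsity pattern; this is only an illustration, not CppAD's actual sparse_list code:

#include <cassert>
#include <cstddef>
#include <memory>
#include <set>

// one row of a sparsity pattern whose element data may be shared
// between several rows through a reference count (a shared_ptr here)
struct shared_row {
    std::shared_ptr< std::set<std::size_t> > data =
        std::make_shared< std::set<std::size_t> >();

    void add_element(std::size_t element)
    {   // copy-on-write: if another row still references this data,
        // clone it first, otherwise the other row would change as well
        if( data.use_count() > 1 )
            data = std::make_shared< std::set<std::size_t> >(*data);
        data->insert(element);
        // debug-only invariant in the spirit of check_data_not_used():
        // after an in-place modification, no other row may share this data
        assert( data.use_count() == 1 );
    }
};

Forgetting the clone step makes an element added to one row silently appear in every row that shares the data, which is the kind of error a debug check like the one above is meant to expose.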

bradbell commented on July 3, 2024

The script bug/sparse_list.sh contains a simplified example that generates an assert on the master branch.

joaoleal commented on July 3, 2024

I was able to reverse the difference in the number of non-zeros (fewer in CppAD 2017 than in 2016).
If you uncomment the term
// + x[s0 + nvarsk + j];
so that the assignment becomes
dep[eq + j] = dxikdt[j] + x[s0 + j] + x[s0 + nvarsk + j];
it will compute more non-zeros in 2017 than in 2016.
If dep is only
dep[eq + j] = dxikdt[j];
then both versions report the same number of non-zeros.

#include <cppad/cppad.hpp>
#include <assert.h>
#include <vector>

using namespace CppAD;

std::vector<CppAD::AD<double> > evaluateModel(const std::vector<CppAD::AD<double> >& x,
                                              CppAD::atomic_base<double>& atomModel);

const size_t K_ = 3;
const size_t ns_ = 4;
const size_t nm_ = 2;
const size_t npar_ = 22;
const size_t na_ = ns_ + nm_ + npar_;

int main(int argc, char* argv[]) {
    /**
     * create atomic function
     */
    auto atomicFunction = [](const std::vector <AD<double>>& ind,
    std::vector <AD<double>>& dxdt) {
        dxdt[0] = ind[0] + ind[1] + ind[2] + ind[3] + ind[5] + ind[6] + ind[7] + ind[8] + ind[9] + ind[10] + ind[11] + ind[12] + ind[13] + ind[15] + ind[16] + ind[22] + ind[27];
        dxdt[1] = ind[0] + ind[1] + ind[2] + ind[3] + ind[5] + ind[6] + ind[7] + ind[8] + ind[9] + ind[10] + ind[11] + ind[12] + ind[13] + ind[15] + ind[16] + ind[22] + ind[27];
        dxdt[2] = ind[0] + ind[1] + ind[2] + ind[3] + ind[6] + ind[7] + ind[8] + ind[9] + ind[10] + ind[11] + ind[12] + ind[13] + ind[15] + ind[16] + ind[22] + ind[27];
        dxdt[3] = ind[0] + ind[2] + ind[3] + ind[4] + ind[17] + ind[19] + ind[22] + ind[26];
    };

    std::vector<double> xx(na_, 1.0);
    std::vector<AD<double>> ay(ns_), ax(na_);
    for (size_t i = 0; i < na_; ++i)
        ax[i] = xx[i];

    checkpoint<double> atomic("cstr",
                              atomicFunction,
                              ax,
                              ay,
                              atomic_base<double>::set_sparsity_enum);

    /**
     * create tape
     */
    size_t nvarsk = ns_;
    size_t nMstart = npar_ + nvarsk * K_ +  nvarsk;
    std::vector<double> xTypical(nMstart + nm_, 1.0);
    size_t n = xTypical.size();
    std::vector <AD<double>> u(n);
    for (size_t j = 0; j < n; j++)
        u[j] = xTypical[j];

    CppAD::Independent(u);

    std::vector<AD<double>> v = evaluateModel(u, atomic);

    ADFun<double> fun;
    fun.Dependent(v);

    /**
     * determine Jacobian sparsity
     */
    size_t m = fun.Range();

    std::vector<std::set<size_t> > s_s(m);
    for (size_t i = 0; i < m; i++)
        s_s[i].insert(i);

    auto sparsity = fun.RevSparseJac(m, s_s);

    size_t nnz = 0;
    for (const auto& s : sparsity) {
        nnz += s.size();
    }

    if(nnz == 58)
        return 0;
    else
        return 1;
}

std::vector<CppAD::AD<double> > evaluateModel(const std::vector<CppAD::AD<double> >& x,
                                              CppAD::atomic_base<double>& atomModel) {
    size_t m2 = K_ * ns_;

    std::vector<AD<double> > dep(m2); // dependent variable vector
    std::vector<AD<double> > dxikdt(ns_);
    std::vector<AD<double> > xik(na_);

    for (size_t j = 0; j < npar_; j++)
        xik[ns_ + nm_ + j] = x[j];

    size_t s = npar_;
    size_t nvarsk = ns_;
    size_t nMstart = npar_ + nvarsk * K_ + nvarsk;
    size_t eq = 0;

    size_t s0 = s;

    for (size_t j = 0; j < nm_; j++)
        xik[ns_ + j] = x[nMstart + j];

    for (size_t j = 0; j < ns_; j++)
        xik[j] = x[s + j];
 
    s += nvarsk;

    atomModel(xik, dxikdt);
    for (size_t j = 0; j < ns_; j++) {
        dep[eq + j] = dxikdt[j] + x[s0 + j]; // + x[s0 + nvarsk + j]; // uncomment this term and the nonzeros will be 64 (in 2017) vs 62 (in 2016)
    }
    eq += ns_;


    return dep;
}

joaoleal commented on July 3, 2024

If the atomic function is smaller, for instance:

    auto atomicFunction = [](const std::vector <AD<double>>& ind,
                             std::vector <AD<double>>& dxdt) {
        dxdt[0] = ind[0] + ind[1];
        dxdt[1] = ind[0] + ind[1];
        dxdt[2] = ind[0] + ind[1];
        dxdt[3] = ind[0] + ind[2];
    };

it is also possible to see differences between the two versions.

joaoleal commented on July 3, 2024

Thank you!
That solved the problem.

bradbell commented on July 3, 2024

This bug fix was included in the stable/20170000 branch with the following commit
6cd9cc5

bradbell commented on July 3, 2024

This bug fix is included in the following release:
https://github.com/coin-or/CppAD/releases/tag/20170000.3
