johnmcfarlane / cnl


A Compositional Numeric Library for C++

License: Boost Software License 1.0

Topics: cmake, cpp, fixed-point, safe, precision, embedded, simulation, deterministic, arithmetic, multiprecision

cnl's Introduction

Compositional Numeric Library


The Compositional Numeric Library (CNL) is a C++ library of fixed-precision numeric classes which enhance integers to deliver safer, simpler, cheaper arithmetic types. Documentation can be found here. You can try out CNL on Compiler Explorer here.

CNL is particularly well-suited to:

  • compute or energy-constrained environments where FPUs are absent or costly;
  • compute or energy-intensive environments where arithmetic is the bottleneck such as simulations, machine learning applications and DSPs; and
  • domains such as finance where precision is essential.

Requirements

The latest version of CNL requires a C++20-compatible tool chain. (Version 1.x supports C++11.) CNL is continually tested on the following systems:

Linux

Tested:

  • GCC 10, 11, 12
  • Clang 10, 11, 12, 13, 14
  • libstdc++ 10, 11, 12
  • libc++ 13, 14
  • CMake 3.23.1
  • Conan 1.48.1

OS X

Tested:

  • GCC 11 / Clang 13
  • CMake 3.23.1
  • Conan 1.48.1

Windows

Tested:

  • Visual Studio 2019 Version 16 (19.29.30145.0)
  • Visual Studio 2022 Version 17 (19.32.31329.0)
  • CMake 3.23.1
  • Conan 1.48.1

Instructions

Build

CMake scripts are provided.

  • To build and install CNL on your system:
mkdir build && cd build
cmake ..
cmake --build . --target install

Note: you may need elevated privileges to install the library system-wide. Alternatively, you can install to a user directory using CMAKE_INSTALL_PREFIX:

mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX:PATH=/home/username/someplace ..
cmake --build . --target install

Alternatively, CNL is a header-only library, so you can simply point your compiler at the include directory:

c++ -isystem /path/to/cnl/include -std=c++20 my_program.cpp

or even include the root header directly in your code:

#include "/path/to/cnl/include/cnl/all.h"

Test

The test suite uses CMake and depends on Google Test and Google Benchmark. Optional integration tests use Boost.Multiprecision.

  1. Conan can be used to pull in essential dependencies. This example assumes GCC but other tool chain files are provided:

    cd build
    conan profile new --detect --force default
    conan profile update settings.compiler.libcxx=libstdc++11 default  # GCC/Clang only
    conan profile update env.CONAN_CMAKE_TOOLCHAIN_FILE=../test/toolchain/gcc.cmake default # GCC only
    conan install --build=missing --options test=unit ..

    ... and then configure, build and run unit tests:

    conan build --configure --build --test ..
  2. To run benchmarks, use --options test=benchmark:

    conan install --build=missing --options test=benchmark ..

    then configure and build

    conan build --configure --build ..

    and finally run the benchmark binary explicitly to see the results:

    ./test/benchmark/test-benchmark

Integration

The API is exposed through headers in the include directory. Add that directory to your system header search path, then include the headers you need, e.g.:

// to use a specific type:
#include <cnl/scaled_integer.h>

// or to include all CNL types:
#include <cnl/all.h>

Example Projects

Examples of projects using CNL:

  • CDSP - Compositional DSP Library for C++;
  • BrewBlox firmware - firmware for a brewery controller
  • cnl_example - minimal CMake-based project which uses CNL as a dependency.
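For CMake consumers, a hedged sketch of how an installed CNL might be pulled in; the package and target names below are assumptions, so check the cnl_example project above for the canonical form:

```cmake
# assumed names -- verify against cnl_example before relying on them
find_package(Cnl REQUIRED)
target_link_libraries(my_program PRIVATE Cnl::Cnl)
```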

Further Reading

Alternatives

  • Much of the simplicity and efficiency in CNL's API design can be traced back to Matheus Izvekov's fp fixed point library.
  • Together with Lawrence Crowl's fixed-point paper, P0106, it lays the groundwork for integer-backed real number approximation that minimizes loss of performance and precision.
  • fpm is a fixed-point math library with a rich set of mathematical functions.
  • Fixed Point Class is a single-header C++98 solution from Peter Schregle.

Contact Information

All feedback greatly appreciated.

cnl's People

Contributors

gitter-badger, hbe72, johnmcfarlane, keryell, selavy, sethml, torfinnberset


cnl's Issues

used_bits implementation for signed_multiprecision

This works:
using q4_20 = cnl::fixed_point<unsigned_multiprecision<24>, -20>;
q4_20 a = 3.051757812500000e-05;
std::cout << "a : " << a << " used_bits : " << used_bits(a) << " leading bits : " << leading_bits(a) << std::endl;

Producing correctly:
a : 3.0517578125e-05 used_bits : 6 leading bits : 18

This fails:
using q4_20 = cnl::fixed_point<signed_multiprecision<24>, -20>;
q4_20 a = 3.051757812500000e-05;
std::cout << "a : " << a << " used_bits : " << used_bits(a) << " leading bits : " << leading_bits(a) << std::endl;

with:
a : 3.0517578125e-05 used_bits : unknown file: Failure
C++ exception with description "Division by zero." thrown in the test body.
Process finished with exit code 1

disparity between int/safe_int shift operator

When compiling fixed_point_throwing_integer.cpp, the following are true:

namespace test_bitshift {
    // dynamic
    static_assert(identical(fixed_point<int, -4>{2}, fixed_point<uint8_t, -4>{1} << 1), "bitshift test failed");
    static_assert(identical(fixed_point<uint8_t, -4>{2}, fixed_point<uint8_t, -4>{1} << 1), "bitshift test failed");

This probably means that safe_integer bit-shift does not observe correct promotion. Fix the problem, then fix these tests. There should be no reference to fundamental types in fixed_point_common.h

certain width trait

Consider adding to _impl:

template<typename T>
struct certain_width : std::false_type {};

template<typename T>
struct certain_width<number_base<T>> : certain_width<T> {};

template<int D, typename N>
struct certain_width<elastic_integer<D, N>> : std::true_type {};

Then operator/(fixed_point lhs, fixed_point rhs) is shmancy iff certain_width_v<rhs>.

Implement P0437

P0437. Start with is_integer and replace numeric_limits<boost::simd::pack<>> and anything for which is_integer is a 'stretch'.

Fix glvalue errors

Look for instances of #if defined(__clang__) || ! defined(__GNUG__) and find out why the code inside them does not compile.

cnl::floor

The floor operation currently lives in cnl::_impl::fp::floor.

Would it make sense to define it as cnl::floor rather than under _impl::fp?
I actually have uses for the floor operation in fixed-point code, and digging it out of cnl::_impl::fp::floor is a bit ugly.

    template<class Rep, int Exponent>
    constexpr inline Rep floor(fixed_point<Rep, Exponent> x) {
        return Rep { (_impl::to_rep(x)) >> -Exponent };
    }

MSVC C++17 features compiling?

Apparently, __cpp_deduction_guides is not defined in MSVC. But that may not mean that the compiler does not support that feature. Investigate.

CNL_INT128 is not a macro

Really, there needs to be a <ccnlint.h> to match with:

namespace cnl {
   int128_t, uint128_t
   intmax_t, uintmax_t
}

sqrt calculates incorrectly with elastic_fixed_point

Simple test to copy paste to test/fixed_point/extras.cpp

using cnl::fixed_point;
using cnl::elastic_fixed_point;
//cnl::sqrt
TEST(utils_tests, sqrt)
{
    EXPECT_FLOAT_EQ(static_cast<float>(sqrt(fixed_point<cnl::int32, -20>(0))), 0.0f);
    EXPECT_FLOAT_EQ(static_cast<float>(sqrt(fixed_point<cnl::int32, -20>(2.0))), 1.414213562373095f);
    EXPECT_FLOAT_EQ(static_cast<float>(sqrt(fixed_point<cnl::int32, -20>(4.0))), 2.0f);
    EXPECT_FLOAT_EQ(static_cast<float>(sqrt(elastic_fixed_point<32, -20>(0))), 0.0f);
    EXPECT_FLOAT_EQ(static_cast<float>(sqrt(elastic_fixed_point<32, -20>(2.0))), 1.414213562373095f);
    EXPECT_FLOAT_EQ(static_cast<float>(sqrt(elastic_fixed_point<32, -20>(4.0))), 2.0f);
}

Fails with

<path>/cnl/cmake-build-debug/src/test/fp_test --gtest_filter=* --gtest_color=no
<path>/cnl/src/test/fixed_point/extras.cpp:55: Failure
Expected equality of these values:
  static_cast<float>(sqrt(elastic_fixed_point<32, -20>(2.0)))
    Which is: 2
  1.414213562373095f
    Which is: 1.4142135

<path>/cnl/src/test/fixed_point/extras.cpp:56: Failure
Expected equality of these values:
  static_cast<float>(sqrt(elastic_fixed_point<32, -20>(4.0)))
    Which is: 2.8284264
  2.0f
    Which is: 2

safe widening function

To be used in divide and sqrt and specialized for elastic_fixed_point so that overloads of divide and sqrt are not necessary.

This is a feature of using 32 bit basic datatype, right?

Is this a consequence of using a 32-bit basic datatype? That is, is the underlying datatype not promoted to 64 bits, causing a loss of precision?

BasicMath.cpp:62: Failure
      Expected: static_cast<float>(d)
      Which is: -1
To be equal to: 3.0f
      Which is: 3

BasicMath.cpp:73: Failure
      Expected: static_cast<float>(d)
      Which is: 0
To be equal to: 3.0f
      Which is: 3

With following test code

#include <gtest/gtest.h>

#include <cnl/bits/elastic_fixed_point.h>
#include <cnl/fixed_point.h>

TEST(BasicMath, fixed_point_mac)
{
    {
        using fxp_fix_t = cnl::fixed_point<int32_t,-13>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fxp_fix_t = cnl::fixed_point<int32_t,-14>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fxp_fix_t = cnl::fixed_point<int32_t,-15>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fxp_fix_t = cnl::fixed_point<int32_t,-16>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
}

Try to be more `explicit`

Consider:

static_assert(identical(
        elastic_fixed_point<2, 2>{1.5} << 1,
        elastic_fixed_point<2, 2>{3}));

Looks OK right? But increase the shiftage:

static_assert(identical(
        elastic_fixed_point<2, 2>{1.5} << 123,
        elastic_fixed_point<2, 2>{3}));

Oh no, it still compiles! Trick! The exponent is 2 so both sides of the equality are zero.

int would never allow you to do that:

static_assert(int(0.5f) == int(.25f), "");
static_assert(int{0.5f} == int{.25f}, "");

This doesn't compile because float to int narrows and curlies don't allow that. We cannot eliminate all gotchas like the elastic_fixed_point one above. But we may avoid lulling the user into a false sense of security by matching what fundamental types do a little more closely.

Are fixed_point arithmetic operators faithful to their `Rep`s?

When this is dropped in, tests fail.

    // subtract
    template<
            class LhsRep, int LhsExponent,
            class RhsRep, int RhsExponent>
    constexpr auto operator-(
            fixed_point<LhsRep, LhsExponent> const& lhs,
            fixed_point<RhsRep, RhsExponent> const& rhs)
    -> fixed_point<decltype(_impl::to_rep(lhs)-_impl::to_rep(rhs)), _impl::min(LhsExponent, RhsExponent)>
    {
        return _impl::from_rep<fixed_point<decltype(_impl::to_rep(lhs)-_impl::to_rep(rhs)), _impl::min(LhsExponent, RhsExponent)>>(
                _impl::to_rep(lhs)-_impl::to_rep(rhs));
    }

Why? Investigate.

Make average work nicely

In C++11:

int average(int a, int b) {
    return (make_elastic_integer(a) + make_elastic_integer(b)) / 2;
}

In C++17:

int average(int a, int b) {
    return (elastic_integer{a} + elastic_integer{b}) / 2;
}

Also, what happens to the most negative number with elastic_integer{INT_MIN}?

elastic_fixed_point fails in multiply-accumulate

Started testing cnl vs sg14/fixed_point and found this:

Failure
      Expected: static_cast<float>(d)
      Which is: 2
To be equal to: 3.0f
      Which is: 3

with the last of the tests failing

#include <gtest/gtest.h>

#include <cnl/bits/elastic_fixed_point.h>
#include <cnl/fixed_point.h>

TEST(BasicMath, elastic_fixed_point_mac)
{
    {
        using fix_t = cnl::elastic_fixed_point<30, -13>;
        fix_t a = 2.0f;
        fix_t b = 1.0f;
        fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fix_t = cnl::elastic_fixed_point<30, -14>;
        fix_t a = 2.0f;
        fix_t b = 1.0f;
        fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fix_t = cnl::elastic_fixed_point<30, -15>;
        fix_t a = 2.0f;
        fix_t b = 1.0f;
        fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fix_t = cnl::elastic_fixed_point<30, -16>;
        fix_t a = 2.0f;
        fix_t b = 1.0f;
        fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
}

Old code which worked:

TEST(BasicMath, elastic_fixed_point_mac)
{
    {
        using fxp_fix_t = sg14::elastic_fixed_point<17,13>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fxp_fix_t = sg14::elastic_fixed_point<16,14>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fxp_fix_t = sg14::elastic_fixed_point<15,15>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
    {
        using fxp_fix_t = sg14::elastic_fixed_point<14,16>;
        fxp_fix_t a = 2.0f;
        fxp_fix_t b = 1.0f;
        fxp_fix_t c = 1.0f;
        EXPECT_EQ(static_cast<float>(a),2.0f);
        EXPECT_EQ(static_cast<float>(b),1.0f);
        EXPECT_EQ(static_cast<float>(c),1.0f);
        fxp_fix_t d = c + a*b;
        EXPECT_EQ(static_cast<float>(d),3.0f);
    }
}

lsb_round

Several DSP architectures provide a round operation that rounds from one fixed-point type to another.

Would it be possible to create something like this:

        using q4_20 = cnl::elastic_fixed_point<24, -20>;
        using q4_2 = cnl::elastic_fixed_point<6, -2>;
        using q4_1 = cnl::elastic_fixed_point<5, -1>;
        q4_20 a(1.750f);
        q4_2 b =  cnl::lsb_round<q4_2>(a); //+1 to bit 17 to Rep of a does not change anything
        q4_1 c = cnl::lsb_round<q4_1>(a);   //+1 to bit 18 to Rep of a causes propagation and rounding up
        q4_1 d = a;   //Just drops the bits
        EXPECT_EQ(static_cast<float>(a), 1.75f);
        EXPECT_EQ(static_cast<float>(b), 1.75f);
        EXPECT_EQ(static_cast<float>(c), 2.0f);
        EXPECT_EQ(static_cast<float>(d), 1.5f);

We would need to call it something other than round, as std::round already exists and does a completely different thing...

Remove leaks in `number_base`

            explicit constexpr number_base(rep const& r)
                : _rep(r) { }

            template<class T>
            CNL_COPY_CONSTEXPR number_base& operator=(T const& r) {
                _rep = r;
                return static_cast<Derived&>(*this);
            }
