rzikm / netqd

.NET implementation of the double-double and quad-double technique for achieving almost 128-bit and 256-bit floating point precision types.

License: MIT License

C# 95.14% Smalltalk 4.86%
quad-double double-double

netqd's Introduction

NetQD

.NET port of the QD library implementing the double-double and quad-double technique for achieving almost 128-bit and 256-bit floating point precision types.

See the original paper by David H. Bailey, Yozo Hida, and Xiaoye S. Li for mathematical details. An unofficial copy of the original C++/Fortran implementation is also available.

Note that the port is in its early stages, so there may be some bugs.
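For background, the technique rests on error-free transformations: the rounded sum of two doubles plus an exactly computed error term represents the true sum with no loss. A minimal self-contained sketch (illustrative only, not NetQD's actual code):

```csharp
using System;

public static class ErrorFreeSum
{
    // Knuth's TwoSum: returns s = fl(a + b) and the exact rounding
    // error e, so that a + b == s + e holds exactly.
    public static (double Sum, double Err) TwoSum(double a, double b)
    {
        double s = a + b;
        double bb = s - a;                       // part of b absorbed into s
        double e = (a - (s - bb)) + (b - bb);    // what rounding discarded
        return (s, e);
    }
}

public class Demo
{
    public static void Main()
    {
        // 1e-20 is far below half an ulp of 1.0, so it is lost in the
        // rounded sum but recovered exactly in the error term.
        var (s, e) = ErrorFreeSum.TwoSum(1.0, 1e-20);
        Console.WriteLine($"{s} {e}"); // s == 1.0, e == 1e-20
    }
}
```

A double-double value is such a (high, low) pair; the arithmetic chains these transformations so the low word keeps the bits the high word cannot hold.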


Installing

dotnet add package NetQD

Roadmap

  • Comprehensive set of unit tests verifying that nothing was broken during porting
  • Add more math functions (Exp, Log, Pow, ...)
  • Integrate dotnet's IFormatProvider into the parsing and printing code

Contributing

Pull Requests are welcome.

netqd's People

Contributors: rzikm

netqd's Issues

Simple test shows precision not better than double

I was checking out this library, but I find that it doesn't produce results that are correct to higher precision than just double precision.

First off, here is an implementation of the conversion from decimal that doesn't truncate the value to double precision:

public static explicit operator DdReal(decimal value) {
	double a = (double)value;
	double b = (double)(value - (decimal)a);
	return new DdReal(a, b);
}

Using that I can successfully convert a decimal value to DdReal and back to decimal without losing any precision.
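The split-and-recombine idea behind that operator can be checked without the library at all. This self-contained sketch (using a hypothetical sample value) shows the two double components together reconstructing a decimal that a single double cannot hold:

```csharp
using System;

public class SplitDemo
{
    public static void Main()
    {
        // 17 significant digits: more than one double can represent exactly.
        decimal value = 1.0000000000000001m;

        // Same split as the operator above: the high part is the nearest
        // double, the low part is the double of the remainder.
        double a = (double)value;
        double b = (double)(value - (decimal)a);

        // Recombining in decimal recovers the original value.
        decimal roundTrip = (decimal)a + (decimal)b;
        Console.WriteLine(roundTrip == value); // True
    }
}
```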

I put together this to test the correctness of the arithmetic:

private static Random _rnd = new Random();

// Generates a random decimal in [0, 1) at full scale 28:
// 542101087 * 2^64 is approximately 10^28, so the 96-bit integer stays
// (almost always) below 10^28, and the loop retries the rare overshoot.
private static decimal DecimalRnd() {
	decimal sample = 1m;
	while (sample >= 1) {
		byte[] buf = new byte[8];
		_rnd.NextBytes(buf);
		int a = BitConverter.ToInt32(buf, 0);
		int b = BitConverter.ToInt32(buf, 4);
		int c = _rnd.Next(542101087);
		sample = new Decimal(a, b, c, false, 28);
	}
	return sample;
}

public static void Test() {
	decimal a = DecimalRnd();
	decimal b = DecimalRnd();
	DdReal aa = (DdReal)a;
	DdReal bb = (DdReal)b;
	Console.WriteLine($"Decimal: {a} + {b} = {(a + b)}");
	Console.WriteLine($"DdReal:  {(decimal)aa} + {(decimal)bb} = {(decimal)(aa + bb)}");
	Console.WriteLine($"Decimal: {a} - {b} = {(a - b)}");
	Console.WriteLine($"DdReal:  {(decimal)aa} - {(decimal)bb} = {(decimal)(aa - bb)}");
}

Example output:

Decimal: 0,1814580729426223105261687684 + 0,8262370913565243172775595613 = 1,0076951642991466278037283297
DdReal:  0,1814580729426223105261687684 + 0,8262370913565243172775595613 = 1,0076951642991501004477916328
Decimal: 0,1814580729426223105261687684 - 0,8262370913565243172775595613 = -0,6447790184139020067513907929
DdReal:  0,1814580729426223105261687684 - 0,8262370913565243172775595613 = -0,6447790184139019789958151773

As you can see, the result is only correct to about 14 significant digits, which is what you get with plain double arithmetic.


Update:

After trying to port several double-double libraries from different languages, I have come to the conclusion that it can't be done reliably in C#. According to the specification, the C# compiler is free to choose between 64-bit and 80-bit precision for intermediate double calculations. Since double-double implementations rely on exact 64-bit rounding, they can't be written in plain C#.

Ref:
https://stackoverflow.com/questions/6683059/is-floating-point-math-consistent-in-c-can-it-be
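(One caveat, sketched here as an illustration rather than as NetQD's code: the C# specification also says an explicit cast to double is guaranteed to round a value to exactly 64-bit precision, so a defensively written port can cast every intermediate, as discussed in the linked Stack Overflow thread.)

```csharp
using System;

public static class StrictTwoSum
{
    // Intermediates may carry extra precision, but each explicit
    // (double) cast forces rounding back to exactly 64 bits.
    public static (double Sum, double Err) TwoSum(double a, double b)
    {
        double s = (double)(a + b);
        double bb = (double)(s - a);
        double e = (double)((double)(a - (double)(s - bb)) + (double)(b - bb));
        return (s, e);
    }

    public static void Main()
    {
        // 0.1 + 0.2 is the classic inexact sum; the error term is nonzero
        // and captures exactly what rounding discarded.
        var (s, e) = TwoSum(0.1, 0.2);
        Console.WriteLine($"{s} {e}");
    }
}
```

Whether the casts are actually needed depends on the JIT and target architecture (modern x64 runtimes use SSE2 and never produce 80-bit intermediates).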

Difficulty understanding the results

I've been trying to implement the David H. Bailey paper on double-double and came upon this library, which does just that. I stopped my own implementation because I thought my results were wrong, but then I discovered that my implementation and yours give the same results, so we must both have implemented double-double correctly as documented in the paper.

However, I fail to understand what I should expect. Roughly speaking, I should get about 31 digits (base 10) of precision. So when I do something like this:

3.0 + 0.000000000000000000567

I expect a result more or less

3.000000000000000000567

which is 21 digits of precision. However, with your library and my unfinished implementation, I get 3.0. It almost looks like the library is just truncating at 16 digits of precision, which is basically what a plain double gives. Since that doesn't make any sense, there must be something I don't understand.

Have you come across something similar? Do you understand how the maths work and why we get such a bizarre result? Or is it just me being plain stupid?
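(Not an authoritative answer, but a self-contained observation, independent of NetQD: a double-double pair can hold the small addend exactly; the apparent truncation shows up only when the result is pushed back through a single double, e.g. for printing.)

```csharp
using System;

public class PairDemo
{
    public static void Main()
    {
        // A double-double pair representing 3.000000000000000000567:
        double hi = 3.0;
        double lo = 5.67e-19;

        // Collapsing the pair into one double loses lo entirely,
        // because 5.67e-19 is far below half an ulp of 3.0 (~2.2e-16):
        double collapsed = hi + lo;
        Console.WriteLine(collapsed == 3.0); // True: lo vanished

        // But the pair itself still carries the low word:
        Console.WriteLine(lo);
    }
}
```

So a result that prints as 3 may still hold the small addend in its low component; what matters is whether the ToString/conversion path uses both words.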
