yellowfeather / dbfdatareader


DbfDataReader is a small fast .Net Core library for reading dBase, xBase, Clipper and FoxPro database files

License: MIT License

Topics: xbase, clipper, foxpro-database-files, dbf, dbase

dbfdatareader's People

Contributors: ali-moussa, andreykorolev, blongen1, chrisrichards, cjjohna, danielkocean, datagrafikk, delog-ru, dependabot-preview[bot], dependabot[bot], dustykline, gerhobbelt, jrybacek, kstrauss, larinlive, marcelroozekrans, ndudas1, phoyd, stapetro, w357, xyrus02


dbfdatareader's Issues

DbfDataReader uses an old DBF specification

Below are some quotes from the Microsoft Visual FoxPro documentation.
A very important byte for correct table encoding is 29 | Code page mark.

Table Header Record Structure
Byte offset | Description

  • 0 | File type:
    0x02 FoxBASE / dBase
    0x03 FoxBASE+ / FoxPro /dBase III PLUS / dBase IV, no memo
    0x30 Visual FoxPro
    0x31 Visual FoxPro, autoincrement enabled
    0x32 Visual FoxPro, Varchar, Varbinary, or Blob-enabled
    0x43 dBASE IV SQL table files, no memo
    0x63 dBASE IV SQL system files, no memo
    0x83 FoxBASE+ / dBASE III PLUS, with memo
    0x8B dBASE IV with memo
    0xCB dBASE IV SQL table files, with memo
    0xF5 FoxPro 2.x (or earlier) with memo
    0xFB FoxBASE (?)
  • 1 - 3 | Last update (YYMMDD)
  • 4 – 7 | Number of records in file
  • 8 – 9 | Position of first data record
  • 10 – 11 | Length of one data record, including delete flag
  • 12 – 27 | Reserved
  • 28 | Table flags:
    0x01   file has a structural .cdx
    0x02   file has a Memo field
    0x04   file is a database (.dbc)
    This byte can contain the sum of any of the above values. For example, the value 0x03 indicates the table has a structural .cdx and a Memo field.
  • 29 | Code page mark
  • 30 – 31 | Reserved, contains 0x00
  • 32 – n | Field subrecords The number of fields determines the number of field subrecords. One field subrecord exists for each field in the table.
  • n+1 | Header record terminator (0x0D)
  • n+2 to n+264 | A 263-byte range that contains the backlink: the relative path of an associated database (.dbc) file. If the first byte is 0x00, the file is not associated with a database. Therefore, database files (.DBC) always contain 0x00.
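The quoted header layout can be read with a few sequential BinaryReader calls. A minimal sketch (type and member names are my own, not DbfDataReader's API; it assumes a little-endian host, which matches the on-disk byte order):

```csharp
using System;
using System.IO;

// Sketch of reading the header fields quoted above; offsets follow the
// Visual FoxPro table. Names are illustrative, not the library's types.
public sealed class DbfHeader
{
    public byte FileType;          // offset 0
    public DateTime LastUpdate;    // offsets 1-3, YYMMDD
    public uint RecordCount;       // offsets 4-7
    public ushort HeaderLength;    // offsets 8-9, position of first data record
    public ushort RecordLength;    // offsets 10-11, includes the delete flag
    public byte TableFlags;        // offset 28
    public byte CodePageMark;      // offset 29

    public static DbfHeader Read(BinaryReader reader)
    {
        var h = new DbfHeader();
        h.FileType = reader.ReadByte();
        int year = reader.ReadByte(), month = reader.ReadByte(), day = reader.ReadByte();
        h.LastUpdate = new DateTime(1900 + year, month, day);
        h.RecordCount = reader.ReadUInt32();
        h.HeaderLength = reader.ReadUInt16();
        h.RecordLength = reader.ReadUInt16();
        reader.ReadBytes(16);                 // offsets 12-27: reserved
        h.TableFlags = reader.ReadByte();     // offset 28
        h.CodePageMark = reader.ReadByte();   // offset 29
        reader.ReadBytes(2);                  // offsets 30-31: reserved
        return h;
    }
}
```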

Field Subrecords Structure

Byte offset | Description
0 – 10 Field name with a maximum of 10 characters. If less than 10, it is padded with null characters (0x00).
11 Field type:
W   –   Blob
C   –   Character
C   –   Character (binary)
Y   –   Currency
B   –   Double
D   –   Date
T   –   DateTime
F   –   Float
G   –   General
I   –   Integer
L   –   Logical
M   –   Memo
M   –   Memo (binary)
N   –   Numeric
P   –   Picture
Q   –   Varbinary
V   –   Varchar
V   –   Varchar (binary)
Note For each Varchar and Varbinary field, one bit, the "varlength" bit, is allocated in the last system field, which is a hidden field that stores the null status for all fields that can be null. If the Varchar or Varbinary field can be null, the null bit follows the "varlength" bit. If the "varlength" bit is set to 1, the actual length of the field value is stored in the last byte of the field. Otherwise, if the bit is set to 0, the length of the value is equal to the field size.
12 – 15 Displacement of field in record
16 Length of field (in bytes)
17 Number of decimal places
18 Field flags:
0x01   System Column (not visible to user)
0x02   Column can store null values
0x04   Binary column (for CHAR and MEMO only)
0x06   (0x02+0x04) When a field is NULL and binary (Integer, Currency, and Character/Memo fields)
0x0C   Column is autoincrementing
19 - 22 Value of autoincrement Next value
23 Value of autoincrement Step value
24 – 31 Reserved
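Each 32-byte field subrecord can likewise be read sequentially. A sketch following the offsets above (names are illustrative, not DbfDataReader's types):

```csharp
using System;
using System.IO;
using System.Text;

// Sketch of one 32-byte field subrecord, per the layout above.
public sealed class DbfFieldDescriptor
{
    public string Name;        // offsets 0-10, null-padded
    public char FieldType;     // offset 11, one of the type letters above
    public int Displacement;   // offsets 12-15
    public byte Length;        // offset 16, field length in bytes
    public byte DecimalCount;  // offset 17
    public byte Flags;         // offset 18

    public static DbfFieldDescriptor Read(BinaryReader reader)
    {
        var d = new DbfFieldDescriptor();
        byte[] nameBytes = reader.ReadBytes(11);
        // Cut at the first null rather than TrimEnd, in case of stray bytes.
        d.Name = Encoding.ASCII.GetString(nameBytes).Split((char)0)[0];
        d.FieldType = (char)reader.ReadByte();
        d.Displacement = reader.ReadInt32();
        d.Length = reader.ReadByte();
        d.DecimalCount = reader.ReadByte();
        d.Flags = reader.ReadByte();
        reader.ReadBytes(13); // offsets 19-31: autoincrement values and reserved
        return d;
    }
}
```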

Currency values are not read consistently

sect_iia.zip
In the example attached, Field45, which has a value of 743280 in the DBF table, is not read. Field7, which has a value of 71475, is read as expected.

Here is the extension method I use to read the data:

        public static T GetValue<T>(this DbfDataReader.DbfDataReader reader, string columnName)
        {
            var ordinal = reader.GetOrdinal(columnName);

            if (reader.IsDBNull(ordinal)) return default(T);

            return (T)Convert.ChangeType(reader.GetValue(ordinal), typeof(T));
        }

The reader returns DBNull for Field45.

Note that I get the same result when I use "straight" DbfDataReader methods such as GetDecimal, GetFloat, etc.

I believe the problem might have something to do with the null binary check ("\0") in line 18 of DbfValueCurrency.cs, but I don't know enough about DBF to say for certain.

When I skip that check for this particular field and allow the remaining code to execute I get the result I expect (743280). The "remaining code" refers to lines 24 and 25 in DbfValueCurrency.cs:

                var value = BitConverter.ToUInt64(bytes, 0);
                Value = value / 10000.0f;

Column of type N with length 10 and value > int.MaxValue

I observed a case where a column has type N but the data is written as 10 digits.
For example:
6166405117 - does not work (more than int.MaxValue; it should be a long)
1006078792 - works
Both are written as type N. For these cases (and in general) I recommend storing the original string value of the cell and having GetValue() return the original value, instead of null, if parsing fails.
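One way to implement this suggestion: try int, then long, then decimal, and keep the raw string if nothing parses. A sketch (the helper is mine, not DbfDataReader's code):

```csharp
using System;
using System.Globalization;

// Sketch of parsing an N-type field without losing values > int.MaxValue.
// Falls back from int to long to decimal, and returns the original string
// (rather than null) if all numeric parses fail.
public static class NumericField
{
    public static object Parse(string raw)
    {
        string s = raw.Trim();
        if (s.Length == 0) return DBNull.Value;
        if (int.TryParse(s, NumberStyles.Integer, CultureInfo.InvariantCulture, out var i)) return i;
        if (long.TryParse(s, NumberStyles.Integer, CultureInfo.InvariantCulture, out var l)) return l;
        if (decimal.TryParse(s, NumberStyles.Number, CultureInfo.InvariantCulture, out var d)) return d;
        return s; // keep the original text instead of returning null
    }
}
```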

Column names are not handled properly

I encountered a problem using DbfDataReader. In the DBFs I want to read, some column names do not end with a '\0' character, which means I cannot use the GetOrdinal method properly.

For example, there is a column in a dbf file like this: "STREET\0\0\0\01". I looked into the code; column names are handled this way:
Name = rawName.TrimEnd((char) 0); which will not work for my example because the name does not end with '\0'.

My proposition is to use Split instead of TrimEnd, like this:
Name = rawName.Split((char) 0)[0];

GetFieldType throws NotImplementedException, so the reader can't be used with Dapper

We were hoping to use Dapper's dataReader.Parse<T>() extension method, but Dapper calls IDataReader.GetFieldType and the current implementation crashes:

public override Type GetFieldType(int ordinal)
{
    throw new NotImplementedException();
}

This is the call stack from Dapper:

at DbfDataReader.DbfDataReader.GetFieldType(Int32 ordinal)
at Dapper.SqlMapper.GetColumnHash(IDataReader reader, Int32 startBound, Int32 length) in C:\projects\dapper\Dapper\SqlMapper.cs:line 41
at Dapper.SqlMapper.TypeDeserializerCache.GetReader(IDataReader reader, Int32 startBound, Int32 length, Boolean returnNullIfFirstMissing) in C:\projects\dapper\Dapper\SqlMapper.TypeDeserializerCache.cs:line 143
at Dapper.SqlMapper.TypeDeserializerCache.GetReader(Type type, IDataReader reader, Int32 startBound, Int32 length, Boolean returnNullIfFirstMissing) in C:\projects\dapper\Dapper\SqlMapper.TypeDeserializerCache.cs:line 50
at Dapper.SqlMapper.GetDeserializer(Type type, IDataReader reader, Int32 startBound, Int32 length, Boolean returnNullIfFirstMissing) in C:\projects\dapper\Dapper\SqlMapper.cs:line 1789
at Dapper.SqlMapper.GetRowParser[T](IDataReader reader, Type concreteType, Int32 startIndex, Int32 length, Boolean returnNullIfFirstMissing) in C:\projects\dapper\Dapper\SqlMapper.IDataReader.cs:line 142

Would you take a pull request to implement GetFieldType?

It looks like the implementation could be a switch on DbfTable.Columns[ordinal].ColumnType, similar to the switch in DbfRecord.CreateDbfValue.
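A sketch of what such a switch might look like, keyed on the standard DBF type letters from the spec quoted in the first issue rather than DbfDataReader's actual ColumnType enum (which I have not checked):

```csharp
using System;

// Illustrative mapping from DBF field type codes to CLR types.
// The real GetFieldType would switch on the library's ColumnType enum.
public static class DbfFieldTypes
{
    public static Type GetFieldType(char dbfType)
    {
        switch (dbfType)
        {
            case 'C': case 'V': case 'M': return typeof(string);
            case 'D': case 'T': return typeof(DateTime);
            case 'N': case 'F': case 'Y': return typeof(decimal); // N may also fit int/long
            case 'B': return typeof(double);  // FoxPro Double
            case 'I': return typeof(int);
            case 'L': return typeof(bool);
            case 'W': case 'P': case 'G': case 'Q': return typeof(byte[]);
            default: throw new ArgumentOutOfRangeException(nameof(dbfType));
        }
    }
}
```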

Issue when reading a column of type "N" length 6

Great work. Thanks

I faced an issue while reading a file that had a column type "N" with length 6.

Unhandled Exception: System.ArgumentException: The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'.
Parameter name: chars
at System.Text.Encoding.ThrowCharsOverflow()
at System.Text.Encoding.ThrowCharsOverflow(DecoderNLS decoder, Boolean nothingDecoded)
at System.Text.UTF8Encoding.GetChars(Byte* bytes, Int32 byteCount, Char* chars, Int32 charCount, DecoderNLS baseDecoder)
at System.Text.DecoderNLS.GetChars(Byte[] bytes, Int32 byteIndex, Int32 byteCount, Char[] chars, Int32 charIndex, Boolean flush)
at System.Text.DecoderNLS.GetChars(Byte[] bytes, Int32 byteIndex, Int32 byteCount, Char[] chars, Int32 charIndex)
at System.IO.BinaryReader.InternalReadOneChar()
at System.IO.BinaryReader.PeekChar()
at DbfDataReader.DbfValueLong.Read(BinaryReader binaryReader)
at DbfDataReader.DbfRecord.Read(BinaryReader binaryReader)
at db.Program.Main(String[] args) in C:....\Program.cs:line 60

Make stream behaviour consistent

Further to #21 make the behaviour around opening and closing streams consistent.

DbfTable and DbfMemo should construct the underlying BinaryReader with the same leaveOpen option.
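BinaryReader has an overload that takes a leaveOpen flag, so both readers could pass the same value through. A sketch of the idea (the wrapper type is illustrative, not the library's API):

```csharp
using System;
using System.IO;
using System.Text;

// Sketch: construct the BinaryReader with an explicit leaveOpen flag so
// DbfTable and DbfMemo dispose (or keep) the underlying stream consistently.
public sealed class DbfStreamReader : IDisposable
{
    public BinaryReader Reader { get; }

    public DbfStreamReader(Stream stream, Encoding encoding, bool leaveOpen)
    {
        Reader = new BinaryReader(stream, encoding, leaveOpen);
    }

    public void Dispose() => Reader.Dispose();
}
```

With leaveOpen: true, disposing the reader leaves the caller's stream usable; with false, the stream is closed along with the reader.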

Empty Date gives Exception

Describe the bug
An error occurs during DbfTable init, in the constructor, when a date field exists and does not contain a date (is empty).
Before adding the date column (which may be empty) to the dbf file, it worked fine.

To Reproduce
Steps to reproduce the behavior:

  1. Create a dbf containing a field of date type and set an empty value for it
  2. Initialize DbfTable
    using (var dbfTable = new DbfTable(dbfPath, Encoding.UTF8)) // it will fail

Expected behavior
Change the mapping to DateTime? (or make whatever type is used in this operation nullable) if possible, or set it to a default date value

FoxPro DBF with FPT memos failing to complete load

Describe the bug
Object reference not set to an instance of an object error

To Reproduce
Run the attached minimum solution with the included dbf/fpt files.

Expected behavior
DBF to be read to the end with the memo files included


Desktop (please complete the following information):

  • OS: Windows 10

Additional context
I also got a copy of your repo and found and locally fixed the issue.

It looks like someone has already submitted a pull request that would fix this issue.
#143

Minimum viable test application with test dbfs
DbfReaderTest.zip

I solved missing datetime values

I found that there was no code to read datetime values, so I solved it for FoxPro files.
You can add just this to DbfValueDateTime.cs:

byte[] b = binaryReader.ReadBytes(8);
int days = BitConverter.ToInt32(b, 0);          // Julian day number
int milliseconds = BitConverter.ToInt32(b, 4);  // milliseconds since midnight
// 1721426 days is the offset between Julian day 0 and 0001-01-01
Value = new DateTime(1, 1, 1).AddDays(days).Subtract(TimeSpan.FromDays(1721426)).AddMilliseconds(milliseconds);

Date parsing issue

Hello, I observe that some dates are not being read by the reader, while older dates are fine. It's probably the same problem as in PR "Fixed DbfValueDateTime null values checking" #159.

You can easily reproduce it with attached dbf file. Last 3 rows won't have any dates but in fox pro I can see those dates.

My code for reading:

private DataTable Read(string path, string tableName)
{
    var options = new DbfDataReader.DbfDataReaderOptions
    {
        //SkipDeletedRecords = true                 
        // Encoding = EncodingProvider.GetEncoding(1252);
    };

    using var dbfDataReader = new DbfDataReader.DbfDataReader($"{path}\\{tableName}.db", options);
    var columns = dbfDataReader.GetColumnSchema();
    var datatable = new DataTable();
    foreach (var column in columns)
    {
        datatable.Columns.Add(column.ColumnName, column.DataType);
    }
    datatable.Columns.Add("Deleted", typeof(int));
    while (dbfDataReader.Read())
    {
        var values = new object[datatable.Columns.Count];
        for (var i = 0; i < columns.Count; i++)
        {
            values[i] = dbfDataReader.GetValue(i);
        }
        values[columns.Count] = dbfDataReader.DbfRecord.IsDeleted ? 1 : 0;
        datatable.LoadDataRow(values, true);
    }
    return datatable;
}

_PRM.zip

UPD:
[debugger screenshot]

I debugged the mentioned date parsing; as the screenshot shows, it is not an empty date, but the first byte is 0. So it is exactly the same issue as in #159.

Start reading dbf row records from the last to the first

Is it possible to start reading records in order from the last row inserted to the first?

That feature could be added in the options, something like this:

var options = new DbfDataReaderOptions
{
    SkipDeletedRecords = true,
    StartReadingFromLastRow = true
};

For example, if file.dbf has 1000 rows, instead of iterating over rows 1, 2, 3, 4, 5 ... up to 1000, with the parameter StartReadingFromLastRow = true it should iterate over rows 1000, 999, 998, 997, 996 ... down to row 1.

I'm not sure if DbfDataReader already has an implementation to do this.

Thank you.
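Because DBF records are fixed length, a reverse iteration would not need new reader machinery: each record can be addressed directly from the header fields. A sketch of the offset arithmetic (helper names are mine; headerLength comes from header bytes 8–9, recordLength from bytes 10–11):

```csharp
// Sketch: compute the byte offset of record `recordIndex` (0-based) so a
// reader could Seek to it directly, enabling last-to-first iteration.
public static class DbfSeek
{
    public static long RecordOffset(int headerLength, int recordLength, long recordIndex)
        => headerLength + (long)recordLength * recordIndex;
}
```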

Issue when reading a column of type "MEMO" with UWP App

Works like a charm for many dbf Foxpro files.
But I am having an issue when reading files that have a column of type "Character" if the length > 255.
dbf file and ScreenShot.zip

Error message :
Exception thrown: 'System.IndexOutOfRangeException' in DbfDataReader.dll
An unhandled exception of type 'System.IndexOutOfRangeException' occurred in DbfDataReader.dll
Index was outside the bounds of the array.

Same issue with : https://github.com/yellowfeather/dbf

Please consider restoring support for .NET Framework by supporting .NET Standard 2.0

We're using this in a .NET Framework project which isn't ready to move to .NET Core because of other third party libraries.

https://docs.microsoft.com/en-us/dotnet/standard/library-guidance/cross-platform-targeting suggests targeting .NET Standard 2.0. For us, targeting net48;netstandard2.1 would also work fine.

You can still use ReadOnlySpan<T> by adding a reference to https://www.nuget.org/packages/System.Memory without dropping support for .NET Framework apps.

Negative Currency Values

Negative currency values are not read correctly. In the attached sample file, all negative values read as 1844674000000000M.

I was able to fix by updating DbfValueCurrency.cs:

            var value = BitConverter.ToUInt64(bytes, 0);

to

            var value = BitConverter.ToInt64(bytes, 0);
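Put together, the signed decode might look like the sketch below (not the library's exact code): the currency field is an 8-byte little-endian signed integer scaled by 10^4, so ToInt64 preserves negative amounts and dividing by 10000m avoids float rounding.

```csharp
using System;

// Sketch of decoding a FoxPro currency field: an 8-byte signed integer
// holding the amount times 10,000. ToInt64 (not ToUInt64) keeps the sign.
public static class CurrencyField
{
    public static decimal Decode(byte[] bytes)
    {
        long scaled = BitConverter.ToInt64(bytes, 0);
        return scaled / 10000m;
    }
}
```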

DbfTable => IsClosed is inverted

Well as the title states, not much more to add.

public bool IsClosed => BinaryReader != null;

should be

public bool IsClosed => BinaryReader == null;

Columns of "Char" type in dbf are trimmed (ltrim) of spaces

Describe the bug
columns of "Char" type in dbf are trimmed (ltrim) of space.

To Reproduce
Steps to reproduce the behavior:
Take a dbf with a "C" data type and add a value in that column such as " Hi there"
Read it using DbfTable dbfrecord
You will get the value of that column as "Hi there".
The string values are TRIMMED.

Solution suggested:
The following is line 26 in DbfValueString.cs:
Value = value.Trim(NullChar, ' ');
It can be changed to:
Value = value.Trim(NullChar, ' ');
if (string.IsNullOrWhiteSpace(Value))
    Value = null; // this is the behaviour when we use ODBC

DbfDataReader Not Reading All Rows If Deleted Rows Exist

Bug Description
DbfDataReader is skipping rows after a deleted row if I set SkipDeletedRecords = true

Steps to reproduce the behavior:

  1. Read a dbf file that contains a deleted record at some index (e.g. 100 of 500)
  2. Try to bulk copy the dbf file to SQL Server
  3. The copied rows will be only the rows before the deleted record, i.e. only 99 rows
  4. See error

Expected behavior
It should read all rows and skip only the deleted rows.


Desktop:

  • OS: Windows 10

System.ArgumentException

Hi Chris Richards,
I got this error when trying to iterate over the rows. I just copied the code listed in the "and to iterate over the rows:" section.
System.ArgumentException: The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'.
Parameter name: chars
at System.Text.Encoding.ThrowCharsOverflow()

Also, how can I convert this to a DataTable? "var dbfTable = new DbfTable(dbfPath)"

Stream Reader cannot access file because it's in use by another process

Describe the bug
Try to read a dbf file with DbfDataReader while Visual FoxPro is using the file and you will get a "Stream Reader cannot access file because it's in use by another process" error. This happens when using the path-and-options constructor for the DbfDataReader.

To Fix
In DbfTable.cs in the path and encoding constructor, replace the FileStream line with

File.SetAttributes(path, FileAttributes.Normal);
var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);

In DbfMemo.cs in the path and encoding constructor, replace the FileStream line with

var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);

Also, in DbfDataReader check that the DbfTable is not null before trying to close it. So for the public override void Close():

DbfTable?.Close();

If updating to .Net 5 or any .Net Core, install the <PackageReference Include="System.Text.Encoding.CodePages" Version="5.0.0" /> NuGet package, and change the DbfDataReaderOptions.cs constructor to:

public DbfDataReaderOptions()
{
    Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
    SkipDeletedRecords = false;
    Encoding = Encoding.GetEncoding(1252); // https://stackoverflow.com/questions/50858209/system-notsupportedexception-no-data-is-available-for-encoding-1252
}

Expected behavior
Have the DbfDataReader read the file while VFP is using it.

See my fork

See this stack overflow

fpt for foxpro is not read well

For my Visual FoxPro files the memo field was giving an error. After debugging I found where the problem is:
it was in converting byte[] to int, so I fixed these two methods in DbfMemoFoxPro.cs:
    private int CalculateBlockSize()
    {
        _binaryReader.BaseStream.Seek(0, SeekOrigin.Begin);

        byte[] b = _binaryReader.ReadBytes(4);
        Array.Reverse(b);
        int nextBlock = BitConverter.ToInt32(b, 0); // next free block

        b = _binaryReader.ReadBytes(2);
        Array.Reverse(b);
        int unused = BitConverter.ToInt16(b, 0); // unused

        b = _binaryReader.ReadBytes(2);
        Array.Reverse(b);
        int blockSize = BitConverter.ToInt16(b, 0);
        return blockSize;
    }

    public override string BuildMemo(long startBlock)
    {
        var offset = Offset(startBlock);
        _binaryReader.BaseStream.Seek(offset, SeekOrigin.Begin);

        byte[] b = _binaryReader.ReadBytes(4);
        Array.Reverse(b);
        var blockType = BitConverter.ToInt32(b, 0);
        b = _binaryReader.ReadBytes(4);
        Array.Reverse(b);
        var memoLength = BitConverter.ToInt32(b, 0);

        if ((blockType != 1) || (memoLength == 0))
        {
            return string.Empty;
        }

        var memo = new string(_binaryReader.ReadChars(memoLength));
        memo = memo.Replace("\0","").Replace("\u0001","").Replace("\b","").TrimEnd('\0').Trim();
        return memo;
    }
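The repeated reverse-then-convert pattern in the fix above could be factored into a small helper. A sketch (helper names are mine): the FPT header and block headers are big-endian, while BitConverter expects host order (usually little-endian), hence the Array.Reverse.

```csharp
using System;
using System.IO;

// Sketch: read big-endian integers from a BinaryReader by reversing the
// bytes before handing them to BitConverter (assumes a little-endian host).
public static class BigEndian
{
    public static int ReadInt32(BinaryReader reader)
    {
        byte[] b = reader.ReadBytes(4);
        Array.Reverse(b);
        return BitConverter.ToInt32(b, 0);
    }

    public static short ReadInt16(BinaryReader reader)
    {
        byte[] b = reader.ReadBytes(2);
        Array.Reverse(b);
        return BitConverter.ToInt16(b, 0);
    }
}
```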

Do not mess with user files

Describe the bug
In the constructor of DbfTable that takes a path to the table, you change the attributes of the file in question before opening it:

public DbfTable(string path, Encoding encoding = null)
{
    if (!File.Exists(path)) throw new FileNotFoundException();

    Path = path;
    CurrentEncoding = encoding;

    // https://stackoverflow.com/questions/23559452/stream-reader-process-cannot-access-file-because-its-in-use-by-another-process
    File.SetAttributes(path, FileAttributes.Normal); // <-- Do not do this!
    Stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);

    Init();

    var memoPath = MemoPath();
    if (!string.IsNullOrEmpty(memoPath)) Memo = CreateMemo(memoPath);
}

This, for example, clears the hidden state of the file. It is generally a bad idea to mess with files you have no control over. If the file cannot be opened because it is used by another process, the code should just fail here. The developer using the library is responsible for taking the right actions if that happens.

Check if the package references for the supported target frameworks are correct

Is your feature request related to a problem? Please describe.
I am not sure if the project references of the NuGet package are set up correctly, because no distinction is made between the net6.0 and netstandard2.1 target frameworks:

<ItemGroup>
  <PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.1.1" PrivateAssets="All" />
  <PackageReference Include="MinVer" Version="4.3.0" PrivateAssets="All" />
  <PackageReference Include="System.Data.Common" Version="4.3.0" />
  <PackageReference Include="System.Memory" Version="4.5.5" />
  <PackageReference Include="System.Text.Encoding.CodePages" Version="7.0.0" />
</ItemGroup>

Are the packages System.Data.Common and System.Memory really needed when net6.0 is the target framework?

Describe the solution you'd like
If they are not needed, I would like to see the project references set up like this:

<ItemGroup>
  <PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.1.1" PrivateAssets="All" />
  <PackageReference Include="MinVer" Version="4.3.0" PrivateAssets="All" />
  <PackageReference Include="System.Text.Encoding.CodePages" Version="7.0.0" />
</ItemGroup>

<ItemGroup Condition="'$(TargetFramework)' == 'netstandard2.1'">
  <PackageReference Include="System.Data.Common" Version="4.3.0" />
  <PackageReference Include="System.Memory" Version="4.5.5" />
</ItemGroup>

Additional context
In a .net7.0 project referencing the DbfDataReader package, this changes the set of transitive packages (before/after screenshots were attached).

Not really an issue, more of a how-to

Hey all,

I am stuck between a rock and a hard place. I am trying to get this POC project to a client, and it is using old FoxPro tables that we have no choice but to integrate with for a period of time until the newer (SQL) project is complete.

Here is the query I have, and all I want is a record set:
SELECT A.PH_EXTRA1,A.PH_EXTRA2,A.PH_oRDID, A.PH_CUSTPO ,A.SHIP_VIA AS CSHIP_VIA, a.shippaytyp, a.shipr_name, a.shipr_add1, a.shipr_add2, a.shipr_city, a.shipr_stat, a.shipr_zip, B.* FROM \ournetwork\LIVE\GDS\DATA\RB\pheader a INNER JOIN \ournetwork\LIVE\GDS\DATA\RB\pcconf b ON b.pc_ordid = a.ph_ordid WHERE b.clientname = 'BTN' AND b.pc_status = '41' ORDER BY pc_ord_no, b.scc18, b.product_cd

there are 3ooK records in the DBF files all I want its a simple results I can't seem to find a way to just get a List of an individual table any help would be greatly appreciated.

Thanks

DbfRecord Read reading EOF in middle of stream.

We are getting cases where it will stop reading a DBF halfway through, so the extracted row count does not equal what the header says.

(Examples, table names removed)
Table1 Difference in row count got 273135(0 marked deleted) expected 770187
Table2 Difference in row count got 792885(0 marked deleted) expected 1047742
Table3 Difference in row count got 86359(0 marked deleted) expected 121140
Table4 Difference in row count got 604003(0 marked deleted) expected 885758
Table5 Difference in row count got 296093(0 marked deleted) expected 937534
Table6 Difference in row count got 31434(240 marked deleted) expected 52718
Table7 Difference in row count got 691370(116440 marked deleted) expected 1017245

To Reproduce
I do not have any reproduction steps; this has only happened on 2 sites and the data itself is sensitive.

Expected behavior
The DBF should detect it has not read all entries and continue reading.

Desktop (please complete the following information):

  • Windows 10 - Net Core 3.1

Additional context
I changed DbfRecord.cs to also check the stream position against the stream length, skipping forward and returning a null record if not.
This seems to read all entries now; however, I am not sure this is the proper way, as I do not know the file spec.

Nuget issues

The package doesn't want to install with NuGet.
It outputs the following error:
Could not install package 'DbfDataReader 0.8.0'. You are trying to install this package into a project that targets '.NETFramework,Version=v4.7.2', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.

I tried setting different .NET Framework targets, but the result is the same.
Any help would be greatly appreciated.

Cannot implement DbfDataReader GetValues method or DataTable Load method passing DbfDataReader as parameter

I cannot fill an object array with the DbfDataReader GetValues method.

I created a DataTable manually to fill with the values of the dbf file, to implement filtering on it, such as filtering by a date column, because DbfDataReader does not have this feature. When I try to run this code I get the error: System.NotImplementedException: 'The method or operation is not implemented.'

[screenshot of the code]

where ird is a DbfDataReader variable.

Desktop (please complete the following information):

  • OS: windows
    visual studio 2019
    C# , console app.
