
Codable CSV

Swift 5.x · macOS 10.10+ · iOS 8+ · tvOS 9+ · watchOS 2+ · Ubuntu 18.04 · MIT License

CodableCSV provides:

  • Imperative CSV reader/writer.
  • Declarative CSV encoder/decoder.
  • Support for multiple inputs/outputs: Strings, Data blobs, URLs, and Streams (commonly used for stdin).
  • Support for numerous string encodings and Byte Order Markers (BOM).
  • Extensive configuration: delimiters, escaping scalar, trim strategy, Codable strategies, presampling, etc.
  • RFC 4180 compliant with the default configuration and CRLF (\r\n) row delimiter.
  • Multiplatform support with no dependencies (the Swift Standard Library and Foundation are implicit dependencies).

Usage

To use this library, you need to:

    Add CodableCSV to your project.

    You can choose to add the library through SPM or CocoaPods:

    • SPM (Swift Package Manager).

      // swift-tools-version:5.1
      import PackageDescription
      
      let package = Package(
          /* Your package name, supported platforms, and generated products go here */
          dependencies: [
              .package(url: "https://github.com/dehesa/CodableCSV.git", from: "0.6.7")
          ],
          targets: [
              .target(name: /* Your target name here */, dependencies: ["CodableCSV"])
          ]
      )
    • CocoaPods.

      pod 'CodableCSV', '~> 0.6.7'
      

    Import CodableCSV in the file that needs it.

    import CodableCSV

There are two ways to use this library:

  1. Imperatively, as a row-by-row and field-by-field reader/writer.
  2. Declaratively, through Swift's Codable interface.

Imperative Reader/Writer

The following types provide imperative control over how CSV data is read and written.

    CSVReader

    A CSVReader parses CSV data from a given input (String, Data, URL, or InputStream) and returns CSV rows as String arrays. CSVReader can be used at a high level, in which case it parses the input completely; or at a low level, in which case each row is decoded when requested.

    • Complete input parsing.

      let data: Data = ...
      let result = try CSVReader.decode(input: data)

      Once the input is completely parsed, you can choose how to access the decoded data:

      let headers: [String] = result.headers
      // Access the CSV rows (i.e. raw [String] values)
      let rows = result.rows
      let row = result[0]
      // Access the CSV record (i.e. convenience structure over a single row)
      let records = result.records
      let record = result[record: 0]
      // Access the CSV columns through indices or header values.
      let columns = result.columns
      let column = result[column: 0]
      let column = result[column: "Name"]
      // Access fields through indices or header values.
      let fieldB: String = result[row: 3, column: 2]
      let fieldA: String? = result[row: 2, column: "Age"]
    • Row-by-row parsing.

      let reader = try CSVReader(input: string) { $0.headerStrategy = .firstLine }
      let rowA = try reader.readRow()

      Parse one row at a time until nil is returned, or exit the scope and the reader will clean up all used memory.

      // Let's assume the input is:
      let string = "numA,numB,numC\n1,2,3\n4,5,6\n7,8,9"
      // The headers property can be accessed at any point after initialization.
      let headers: [String] = reader.headers  // ["numA", "numB", "numC"]
      // Keep querying rows till `nil` is received.
      guard let rowB = try reader.readRow(),  // ["4", "5", "6"]
            let rowC = try reader.readRow()   /* ["7", "8", "9"] */ else { ... }

      Alternatively, you can use the readRecord() function, which also returns the next CSV row but wraps the result in a convenience structure. This structure lets you access each field by header name (as long as the headerStrategy is set to .firstLine).

      let reader = try CSVReader(input: string) { $0.headerStrategy = .firstLine }
      let headers = reader.headers      // ["numA", "numB", "numC"]
      
      let recordA = try reader.readRecord()
      let rowA = recordA.row         // ["1", "2", "3"]
      let fieldA = recordA[0]        // "1"
      let fieldB = recordA["numB"]   // "2"
      
      let recordB = try reader.readRecord()
    • Sequence syntax parsing.

      let reader = try CSVReader(input: URL(...), configuration: ...)
      for row in reader {
          // Do something with the row: [String]
      }

      Please note that the Sequence syntax (i.e. IteratorProtocol) doesn't throw errors; therefore, if the CSV data is invalid, the previous code will crash. If you don't control the CSV data origin, use readRow() instead.

    Reader Configuration

    CSVReader accepts the following configuration properties:

    • encoding (default nil) specifies the CSV file encoding.

      This String.Encoding value specifies how each underlying byte is represented (e.g. .utf8, .utf32LittleEndian, etc.). If it is nil, the library will try to figure out the file encoding through the file's Byte Order Marker. If the file doesn't contain a BOM, .utf8 is presumed.

    • delimiters (default (field: ",", row: "\n")) specifies the field and row delimiters.

      CSV fields are separated within a row by field delimiters (commonly a comma). CSV rows are separated by row delimiters (commonly a line feed). You can specify any Unicode scalar, any String value, or nil for unknown delimiters.

    • escapingStrategy (default ") specifies the Unicode scalar used to escape fields.

      CSV fields can be escaped when they contain privileged characters, such as field/row delimiters. Commonly the escaping character is a double quote (i.e. "); by setting this configuration value you can change it (e.g. to a single quote) or disable the escaping functionality.

    • headerStrategy (default .none) indicates whether the CSV data has a header row or not.

      CSV files may contain an optional header row at the very beginning. This configuration value lets you specify whether the file has a header row or not, or whether you want the library to figure it out.

    • trimStrategy (default empty set) trims the given characters at the beginning and end of each parsed field.

      The trim characters are applied to both escaped and unescaped fields. The set cannot include any of the delimiter characters or the escaping scalar; if it does, an error is thrown during initialization.

    • presample (default false) indicates whether the CSV data should be completely loaded into memory before parsing begins.

      Loading all data into memory may provide faster iteration for small to medium size files, since you get rid of the overhead of managing an InputStream.
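    For example, the header strategy changes what the reader exposes. A minimal sketch, assuming the static decode entry point accepts the same configuration closure as the initializer shown below:

```swift
import CodableCSV

// With .none (the default), the first line is treated as just another row.
let noHeader = try CSVReader.decode(input: "a,b\n1,2") { $0.headerStrategy = .none }
noHeader.headers   // []

// With .firstLine, the first line becomes the headers row.
let withHeader = try CSVReader.decode(input: "a,b\n1,2") { $0.headerStrategy = .firstLine }
withHeader.headers // ["a", "b"]
```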

    The configuration values are set during initialization and can be passed to the CSVReader instance through a structure or with a convenience closure syntax:

    let reader = try CSVReader(input: ...) {
        $0.encoding = .utf8
        $0.delimiters.row = "\r\n"
        $0.headerStrategy = .firstLine
        $0.trimStrategy = .whitespaces
    }

    CSVWriter

    A CSVWriter encodes CSV information into a specified target (i.e. a String, a Data blob, or a file). It can be used at a high level, by encoding a complete, prepared set of information; or at a low level, in which case rows or fields can be written individually.

    • Complete CSV rows encoding.

      let input = [
          ["numA", "numB", "name"        ],
          ["1"   , "2"   , "Marcos"      ],
          ["4"   , "5"   , "Marine-Anaïs"]
      ]
      let data   = try CSVWriter.encode(rows: input)
      let string = try CSVWriter.encode(rows: input, into: String.self)
      try CSVWriter.encode(rows: input, into: URL("~/Desktop/Test.csv")!, append: false)
    • Row-by-row encoding.

      let writer = try CSVWriter(fileURL: URL("~/Desktop/Test.csv")!, append: false)
      for row in input {
          try writer.write(row: row)
      }
      try writer.endEncoding()

      Alternatively, you may write directly to a buffer in memory and access its Data representation.

      let writer = try CSVWriter { $0.headers = input[0] }
      for row in input.dropFirst() {
          try writer.write(row: row)
      }
      try writer.endEncoding()
      let result = try writer.data()
    • Field-by-field encoding.

      let writer = try CSVWriter(fileURL: URL("~/Desktop/Test.csv")!, append: false)
      try writer.write(row: input[0])
      
      try input[1].forEach {
          try writer.write(field: $0)
      }
      try writer.endRow()
      
      try writer.write(fields: input[2])
      try writer.endRow()
      
      try writer.endEncoding()

      CSVWriter has a wealth of low-level imperative APIs, that let you write one field, several fields at a time, end a row, write an empty row, etc.

      Please note that CSV requires all rows to have the same number of fields.

      CSVWriter enforces this by throwing an error when you try to write more than the expected number of fields, or by filling a row with empty fields when you call endRow() before all fields have been written.

    Writer Configuration

    CSVWriter accepts the following configuration properties:

    • delimiters (default (field: ",", row: "\n")) specifies the field and row delimiters.

      CSV fields are separated within a row by field delimiters (commonly a comma). CSV rows are separated by row delimiters (commonly a line feed). You can specify any Unicode scalar, any String value, or nil for unknown delimiters.

    • escapingStrategy (default .doubleQuote) specifies the Unicode scalar used to escape fields.

      CSV fields can be escaped when they contain privileged characters, such as field/row delimiters. Commonly the escaping character is a double quote (i.e. "); by setting this configuration value you can change it (e.g. to a single quote) or disable the escaping functionality.

    • headers (default []) specifies the header row to write at the very beginning of the CSV.

      CSV files may contain an optional header row at the very beginning. If this configuration value is empty, no header row is written.

    • encoding (default nil) specifies the CSV file encoding.

      This String.Encoding value specifies how each underlying byte is represented (e.g. .utf8, .utf32LittleEndian, etc.). If it is nil, the library will try to figure out the file encoding through the file's Byte Order Marker. If the file doesn't contain a BOM, .utf8 is presumed.

    • bomStrategy (default .convention) indicates whether a Byte Order Marker will be included at the beginning of the CSV representation.

      The OS convention is that BOMs are never written, except when the .utf16, .utf32, or .unicode string encodings are specified. You can, however, indicate that you always want the BOM written (.always) or never written (.never).

    The configuration values are set during initialization and can be passed to the CSVWriter instance through a structure or with a convenience closure syntax:

    let writer = try CSVWriter(fileURL: ...) {
        $0.delimiters.row = "\r\n"
        $0.headers = ["Name", "Age", "Pet"]
        $0.encoding = .utf8
        $0.bomStrategy = .never
    }

    CSVError

    Many of CodableCSV's imperative functions may throw errors due to invalid configuration values, invalid CSV input, file stream failures, etc. All these throwing operations exclusively throw CSVErrors, which can be easily caught with a do-catch clause.

    do {
        let writer = try CSVWriter()
        for row in customData {
            try writer.write(row: row)
        }
    } catch let error {
        print(error)
    }

    CSVError adopts Swift Evolution's SE-112 protocols and CustomDebugStringConvertible. The error's properties provide rich commentary explaining what went wrong and indicate how to fix the problem.

    • type: The error group category.
    • failureReason: Explanation of what went wrong.
    • helpAnchor: Advice on how to solve the problem.
    • errorUserInfo: Arguments associated with the operation that threw the error.
    • underlyingError: Optional underlying error that provoked the operation to fail (most of the time it is nil).
    • localizedDescription: Returns a human readable string with all the information contained in the error.


    You can get all the information by simply printing the error or calling the localizedDescription property on a properly cast CSVError<CSVReader> or CSVError<CSVWriter>.
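    For instance, a catch clause can match the concrete generic error type to reach those properties. A sketch, assuming a malformedString input defined elsewhere:

```swift
import CodableCSV

do {
    let reader = try CSVReader(input: malformedString)
    while try reader.readRow() != nil { /* consume rows */ }
} catch let error as CSVError<CSVReader> {
    // Printing the error (or its localizedDescription) yields the full report.
    print(error.localizedDescription)
} catch {
    print("Unexpected error: \(error)")
}
```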

Declarative Decoder/Encoder

The encoders/decoders provided by this library let you use Swift's Codable declarative approach to encode/decode CSV data.

    CSVDecoder

    CSVDecoder transforms CSV data into a Swift type conforming to Decodable. The decoding process is very simple: create a decoding instance and call its decode function, passing the Decodable type and the input data.

    let decoder = CSVDecoder()
    let result = try decoder.decode(CustomType.self, from: data)

    CSVDecoder can decode CSVs represented as a Data blob, a String, an actual file in the file system, or an InputStream (e.g. stdin).

    let decoder = CSVDecoder { $0.bufferingStrategy = .sequential }
    let content = try decoder.decode([Student].self, from: URL("~/Desktop/Student.csv"))

    If you are dealing with a big CSV file, prefer direct file decoding, a .sequential or .unrequested buffering strategy, and presample set to false; this drastically reduces memory usage.
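    Such a low-memory setup can be sketched as follows (Student and fileURL are stand-ins from the surrounding examples):

```swift
import CodableCSV

let decoder = CSVDecoder {
    $0.headerStrategy = .firstLine
    $0.bufferingStrategy = .sequential  // don't retain rows that were already decoded
    $0.presample = false                // stream the file instead of loading it whole
}
let students = try decoder.decode([Student].self, from: fileURL)
```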

    Decoder Configuration

    The decoding process can be tweaked by specifying configuration values at initialization time. CSVDecoder accepts the same configuration values as CSVReader plus the following ones:

    • nilStrategy (default: .empty) indicates how the nil concept (absence of value) is represented on the CSV.

    • boolStrategy (default: .insensitive) defines how strings are decoded to Bool values.

    • nonConformingFloatStrategy (default .throw) specifies how to handle non-numbers (e.g. NaN and infinity).

    • decimalStrategy (default .locale) indicates how strings are decoded to Decimal values.

    • dateStrategy (default .deferredToDate) specifies how strings are decoded to Date values.

    • dataStrategy (default .base64) indicates how strings are decoded to Data values.

    • bufferingStrategy (default .keepAll) controls the behavior of KeyedDecodingContainers.

      Selecting a buffering strategy affects the decoding performance and the amount of memory used during the decoding process. For more information check the README's Tips using Codable section and the Strategy.DecodingBuffer definition.

    The configuration values can be set during CSVDecoder initialization or at any point before the decode function is called.

    let decoder = CSVDecoder {
        $0.encoding = .utf8
        $0.delimiters.field = "\t"
        $0.headerStrategy = .firstLine
        $0.bufferingStrategy = .keepAll
        $0.decimalStrategy = .custom({ (decoder) in
            let value = try Float(from: decoder)
            return Decimal(value)
        })
    }

    CSVDecoder.Lazy

    A CSV input can be decoded on demand (i.e. row-by-row) with the decoder's lazy(from:) function.

    let decoder = CSVDecoder(configuration: config).lazy(from: fileURL)
    let student1 = try decoder.decodeRow(Student.self)
    let student2 = try decoder.decodeRow(Student.self)

    CSVDecoder.Lazy conforms to Swift's Sequence protocol, letting you use functionality such as map(), allSatisfy(), etc. Please note that CSVDecoder.Lazy cannot be used for repeated access; it consumes the input CSV.

    let decoder = CSVDecoder().lazy(from: fileData)
    let students = try decoder.map { try $0.decode(Student.self) }

    A nice benefit of using the lazy operation is that it lets you switch how a row is decoded at any point. For example:

    let decoder = CSVDecoder().lazy(from: fileString)
    // The first 100 rows are students.
    let students = try (0..<100).map { _ in try decoder.decodeRow(Student.self) }
    // The next 10 rows are teachers.
    let teachers = try (100..<110).map { _ in try decoder.decodeRow(Teacher.self) }

    Since CSVDecoder.Lazy exclusively provides sequential access, setting the buffering strategy to .sequential will reduce the decoder's memory usage.

    let decoder = CSVDecoder {
        $0.headerStrategy = .firstLine
        $0.bufferingStrategy = .sequential
    }.lazy(from: fileURL)

    CSVEncoder

    CSVEncoder transforms Swift types conforming to Encodable into CSV data. The encoding process is very simple: create an encoding instance and call its encode function, passing the Encodable value.

    let encoder = CSVEncoder()
    let data = try encoder.encode(value, into: Data.self)

    The Encoder's encode() function creates a CSV file as a Data blob, a String, or an actual file in the file system.

    let encoder = CSVEncoder { $0.headers = ["name", "age", "hasPet"] }
    try encoder.encode(value, into: URL("~/Desktop/Students.csv"))

    If you are dealing with big CSV content, prefer direct file encoding and a .sequential or .assembled buffering strategy; this drastically reduces memory usage.
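    A sketch of that low-memory setup (the headers, file URL, and students value are illustrative):

```swift
import CodableCSV
import Foundation

let encoder = CSVEncoder {
    $0.headers = ["name", "age", "hasPet"]
    $0.bufferingStrategy = .sequential  // flush each row as soon as it is complete
}
try encoder.encode(students, into: URL(fileURLWithPath: "/tmp/Students.csv"))
```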

    Encoder Configuration

    The encoding process can be tweaked by specifying configuration values. CSVEncoder accepts the same configuration values as CSVWriter plus the following ones:

    • nilStrategy (default: .empty) indicates how the nil concept (absence of value) is represented on the CSV.

    • boolStrategy (default: .deferredToString) defines how Boolean values are encoded to String values.

    • nonConformingFloatStrategy (default .throw) specifies how to handle non-numbers (i.e. NaN and infinity).

    • decimalStrategy (default .locale) indicates how decimal numbers are encoded to String values.

    • dateStrategy (default .deferredToDate) specifies how dates are encoded to String values.

    • dataStrategy (default .base64) indicates how data blobs are encoded to String values.

    • bufferingStrategy (default .keepAll) controls the behavior of KeyedEncodingContainers.

      Selecting a buffering strategy directly affects the encoding performance and the amount of memory used during the process. For more information, check this README's Tips using Codable section and the Strategy.EncodingBuffer definition.

    The configuration values can be set during CSVEncoder initialization or at any point before the encode function is called.

    let encoder = CSVEncoder {
        $0.headers = ["name", "age", "hasPet"]
        $0.delimiters = (field: ";", row: "\r\n")
        $0.dateStrategy = .iso8601
        $0.bufferingStrategy = .sequential
        $0.floatStrategy = .convert(positiveInfinity: "", negativeInfinity: "-∞", nan: "")
        $0.dataStrategy = .custom({ (data, encoder) in
            let string = customTransformation(data)
            var container = try encoder.singleValueContainer()
            try container.encode(string)
        })
    }

    The .headers configuration is required if you are using a keyed encoding container.

    CSVEncoder.Lazy

    A series of codable types (representing CSV rows) can be encoded on demand with the encoder's lazy(into:) function.

    let encoder = CSVEncoder().lazy(into: Data.self)
    for student in students {
        try encoder.encodeRow(student)
    }
    let data = try encoder.endEncoding()

    Call endEncoding() once there are no more values to be encoded. The function returns the encoded CSV.

    let encoder = CSVEncoder().lazy(into: String.self)
    try students.forEach {
        try encoder.encodeRow($0)
    }
    let string = try encoder.endEncoding()

    A nice benefit of using the lazy operation is that it lets you switch how a row is encoded at any point. For example:

    let encoder = CSVEncoder(configuration: config).lazy(into: fileURL)
    try students.forEach { try encoder.encodeRow($0) }
    try teachers.forEach { try encoder.encodeRow($0) }
    try encoder.endEncoding()

    Since CSVEncoder.Lazy exclusively provides sequential encoding, setting the buffering strategy to .sequential will reduce the encoder's memory usage.

    let encoder = CSVEncoder {
        $0.bufferingStrategy = .sequential
    }.lazy(into: String.self)

Tips using Codable

Codable is fairly easy to use, and most Swift standard library types already conform to it. However, it is sometimes tricky to get custom types to conform to Codable for specific functionality.

    Basic adoption.

    When a custom type conforms to Codable, the type is stating that it has the ability to decode itself from and encode itself to an external representation. Which representation depends on the chosen decoder or encoder. Foundation provides support for JSON and property lists, and the community provides many other formats, such as YAML, XML, BSON, and CSV (through this library).

    Usually a CSV represents a long list of entities. The following is a simple example representing a list of students.

    let string = """
        name,age,hasPet
        John,22,true
        Marine,23,false
        Alta,24,true
        """

    A student can be represented as a structure:

    struct Student: Codable {
        var name: String
        var age: Int
        var hasPet: Bool
    }

    To decode the list of students, create a decoder and call decode on it passing the CSV sample.

    let decoder = CSVDecoder { $0.headerStrategy = .firstLine }
    let students = try decoder.decode([Student].self, from: string)

    The inverse process (from Swift to CSV) is very similar (and simple).

    let encoder = CSVEncoder { $0.headers = ["name", "age", "hasPet"] }
    let newData = try encoder.encode(students)

    Specific behavior for CSV data.

    When encoding/decoding CSV data, it is important to keep several points in mind:

      Codable's automatic synthesis requires CSV files with a headers row.

      Codable is able to synthesize init(from:) and encode(to:) for your custom types when all their members/properties conform to Codable. This automatic synthesis creates a hidden CodingKeys enumeration containing all your property names.

      During decoding, CSVDecoder tries to match the enumeration string values with field positions within a row. For this to work, the CSV data must contain a headers row with the property names. If your CSV doesn't contain a headers row, you can specify coding keys with integer values representing the field index.

      struct Student: Codable {
          var name: String
          var age: Int
          var hasPet: Bool
      
          private enum CodingKeys: Int, CodingKey {
              case name = 0
              case age = 1
              case hasPet = 2
          }
      }

      Using integer coding keys has the added benefit of better encoder/decoder performance. By explicitly indicating the field index, you let the decoder skip matching coding-key string values to headers.

      A CSV is a long list of rows/records.

      CSV formatted data is commonly used with flat hierarchies (e.g. a list of students, a list of car models, etc.). Nested structures, such as the ones found in JSON files, are not supported by default in CSV implementations (e.g. a list of users, where each user has a list of services she uses, and each service has a list of the user's configuration values).

      You can support complex structures in CSV, but you would have to flatten the hierarchy into a single model or build a custom encoding/decoding process. This process would make sure there is always a maximum of two keyed/unkeyed containers.

      As an example, we can create a nested structure for a school with students who own pets.

      struct School: Codable {
          let students: [Student]
      }
      
      struct Student: Codable {
          var name: String
          var age: Int
          var pet: Pet
      }
      
      struct Pet: Codable {
          var nickname: String
          var gender: Gender
      
          enum Gender: Codable {
              case male, female
          }
      }

      By default the previous example wouldn't work. If you want to keep the nested structure, you need to provide custom init(from:) implementations (to support Decodable).

      extension School {
          init(from decoder: Decoder) throws {
              var container = try decoder.unkeyedContainer()
              var students: [Student] = []
              while !container.isAtEnd {
                  students.append(try container.decode(Student.self))
              }
              self.students = students
          }
      }
      
      extension Student {
          init(from decoder: Decoder) throws {
              let container = try decoder.container(keyedBy: CustomKeys.self)
              self.name = try container.decode(String.self, forKey: .name)
              self.age = try container.decode(Int.self, forKey: .age)
              self.pet = try decoder.singleValueContainer().decode(Pet.self)
          }
      }
      
      extension Pet {
          init(from decoder: Decoder) throws {
              let container = try decoder.container(keyedBy: CustomKeys.self)
              self.nickname = try container.decode(String.self, forKey: .nickname)
              self.gender = try container.decode(Gender.self, forKey: .gender)
          }
      }
      
      extension Pet.Gender {
          init(from decoder: Decoder) throws {
              let container = try decoder.singleValueContainer()
              self = try container.decode(Int.self) == 1 ? .male : .female
          }
      }
      
      private enum CustomKeys: Int, CodingKey {
          case name = 0
          case age = 1
          case nickname = 2
          case gender = 3
      }

      You could have avoided the initializer overhead by defining a flat structure such as:

      struct Student: Codable {
          var name: String
          var age: Int
          var nickname: String
          var gender: Gender
      
          enum Gender: Int, Codable {
              case male = 1
              case female = 2
          }
      }

    Encoding/decoding strategies.

    The SE-0167 proposal introduced the JSON and property-list encoders/decoders to Foundation. That proposal also featured encoding/decoding strategies as a new way to configure the encoding/decoding process. CodableCSV continues this tradition and mirrors those strategies, including some new ones specific to the CSV file format.

    To configure the encoding/decoding process, you need to set the configuration values of the CSVEncoder/CSVDecoder before calling the encode()/decode() functions. There are two ways to set configuration values:

    • At initialization time, passing the Configuration structure to the initializer.

      var config = CSVDecoder.Configuration()
      config.nilStrategy = .empty
      config.decimalStrategy = .locale(.current)
      config.dataStrategy = .base64
      config.bufferingStrategy = .sequential
      config.trimStrategy = .whitespaces
      config.encoding = .utf16
      config.delimiters.row = "\r\n"
      
      let decoder = CSVDecoder(configuration: config)

      Alternatively, there are convenience initializers accepting a closure with an inout Configuration value.

      let decoder = CSVDecoder {
          $0.nilStrategy = .empty
          $0.decimalStrategy = .locale(.current)
          // and so on and so forth
      }
    • CSVEncoder and CSVDecoder implement @dynamicMemberLookup exclusively for their configuration values. Therefore you can set configuration values after initialization or after an encoding/decoding process has been performed.

      let decoder = CSVDecoder()
      decoder.bufferingStrategy = .sequential
      let students = try decoder.decode([Student].self, from: url1)
      
      decoder.bufferingStrategy = .keepAll
      let pets = try decoder.decode([Pet].self, from: url2)

    The strategies labeled .custom let you insert behavior into the encoding/decoding process without forcing you to manually conform to init(from:) and encode(to:). When set, they will reference the targeted type for the whole process. For example, if you want to encode a CSV file where empty fields are marked with the word null (for some reason), you could do the following:

    let encoder = CSVEncoder()
    encoder.nilStrategy = .custom({ (encoder) in
        var container = encoder.singleValueContainer()
        try container.encode("null")
    })

    Type-safe headers row.

    You can generate type-safe header names using Swift's introspection tools (i.e. Mirror) or by explicitly defining a CodingKey enum with String raw values that conforms to CaseIterable.

    struct Student {
        var name: String
        var age: Int
        var hasPet: Bool
    
        enum CodingKeys: String, CodingKey, CaseIterable {
            case name, age, hasPet
        }
    }

    Then configure your encoder with explicit headers.

    let encoder = CSVEncoder {
        $0.headers = Student.CodingKeys.allCases.map { $0.rawValue }
    }

    Performance advice.

    #warning("TODO:")

Roadmap


The library is heavily documented and any contribution is welcome. Check the small How to contribute document or take a look at the GitHub projects for a more in-depth roadmap.

Community

If CodableCSV is not of your liking, the Swift community offers other CSV solutions:

  • CSV.swift contains an imperative CSV reader/writer and a lazy row decoder and adheres to the RFC4180 standard.
  • SwiftCSV is a well-tested parse-only library which loads the whole CSV in memory (not intended for large files).
  • CSwiftV is a parse-only library which loads the CSV in memory and parses it in a single go (no imperative reading).
  • CSVImporter is an asynchronous parse-only library with support for big CSV files (incremental loading).
  • SwiftCSVExport reads/writes CSV imperatively with Objective-C support.
  • swift-csv offers an imperative CSV reader/writer based on Foundation's streams.

There are many good tools outside the Swift community. Since listing them all would be a hard task, I will just point you to the great AwesomeCSV GitHub repo. There are a lot of treasures to be found there.

codablecsv's People

Contributors

davbeck · dechengma · dehesa · josh · kirow · lightsprint09 · rikiya-yamamoto · robo-fish · smpanaro · steveriggins · xsleonard


codablecsv's Issues

Memory leaks

After decoding the CSV file, I noticed that many objects created by the decoder are still in memory. After a quick look, it seems there is a retain cycle here that causes a memory leak:

DecodingRecordOrdered -> decoder -> chain -> state -> DecodingRecordOrdered

I think one of those references needs to be weak to break this cycle.
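The fix suggested above follows the standard Swift pattern for breaking reference cycles: make one of the back-references weak. A generic sketch (the type names are illustrative, not the library's actual declarations):

```swift
final class State {
    // Weak back-reference: breaks the Record -> State -> Record cycle,
    // so both objects deallocate once outside references are gone.
    weak var record: Record?
}

final class Record {
    let state: State
    init(state: State) {
        self.state = state
        state.record = self
    }
}
```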

Encoding a single object produces a confusing error message.

When trying to encode a single object, the following error occurs:

struct Student: Encodable {
    let name: String, age: Int?, country: String?, hasPet: Bool?
}
let student = Student(name: "Marcos", age: 1, country: "Spain", hasPet: true)
let encoder = CSVEncoder { $0.headers = ["name", "age", "country", "hasPet"] }
let result = try encoder.encode(student, into: String.self)
print(result)

[CSVEncoder] Invalid coding path
	Reason: The coding key identifying a CSV row couldn't be transformed into an integer value.
	Help: The provided coding key identifying a CSV row must implement `intValue`.
	User info: Coding path: [CodingKeys(stringValue: "name", intValue: nil)], Key: CodingKeys(stringValue: "name", intValue: nil)

It works fine, however, as long as the object is wrapped in an array (i.e. `try encoder.encode([student], into: String.self)`). Browsing the source for a few minutes didn't make it clear to me why this is the case.

Value of type 'CSVReader' has no member 'columns'

Question

I'm trying to access the `columns` property, but it seems it's not available at all.

Additional Context

let result = try CSVReader(input: url) {
    $0.encoding = .utf8
    $0.delimiters.row = "\r\n"
    $0.headerStrategy = .firstLine
    $0.trimStrategy = .whitespaces
}

let columns = result.columns

System

  • OS: iOS 14.5
  • CodableCSV: [e.g. 0.6.6]

in iOS17 can't read file from CloudDocs


Last column decodes as blank

Question
Hi all. Not sure if this is pilot error or if it's a bug, but it appears that the last column in our CSV consistently decodes to blank. We've got a correct header line and I'm using a .firstLine strategy. I have also confirmed that my data model has the same number of columns as vars. The only fix appears to be adding a dummy column at the end.

System
OS: macOS 12.3.1, Xcode 12.3
CodableCSV 0.6.7

Memory leak

I have a large CSV file (> 400,000 lines) which is too big to decode in one blob, so I loop through each line of the file by calling readLine(), and then for each line:

  1. Convert the line to Data.
  2. let obj = try decoder.decode([Obj].self, from: data).first

It seems that calling decoder.decode repeatedly leaks memory. It looks like the allocation of Buffer() from the initialiser of CSVReader has a retain cycle.
I haven't yet had time to dive into this - am willing to do so if no other advice. See attached memory graph.

[Screenshot: memory graph]

Make it conform to TopLevelDecoder/Encoder to use with Combine

Is your feature request related to a problem?

With the new Combine framework, one could chain CSVDecoder into a publisher pipeline.

Describe the solution you'd like

Conform to the TopLevelDecoder/TopLevelEncoder protocols.

Additional context

See examples from other Codable frameworks such as XMLCoder and Yams.

doesn't handle strings with ","

given a row like:
20 May 2021,"some description","$1,090"

the "," in "$1,090" is treated as a delimiter when it shouldn't be, because it's between the quotes.

To Reproduce

do {
    parsedResults = try CSVReader.decode(input: row)
} catch {
    print("ERROR")
}
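For reference, quote-aware field splitting can be sketched in plain Swift (a hypothetical helper, not the library's parser; it does not handle doubled-quote escapes):

```swift
import Foundation

// Split one CSV row on commas, treating commas inside double quotes as data.
func splitCSVRow(_ row: String) -> [String] {
    var fields: [String] = []
    var current = ""
    var insideQuotes = false
    for char in row {
        switch char {
        case "\"":
            insideQuotes.toggle()   // note: doubled-quote escapes are not handled
        case "," where !insideQuotes:
            fields.append(current)
            current = ""
        default:
            current.append(char)
        }
    }
    fields.append(current)
    return fields
}
```

With the sample row above, this yields the three fields `20 May 2021`, `some description`, and `$1,090`, keeping the comma inside the quoted amount.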

Expected behavior

The quoted field should be parsed as a single value ("$1,090") rather than split at the comma.

System

  • OS: [e.g. macOS 11.2, iOS 14.4, Ubuntu 20.04]
  • CodableCSV: [e.g. 0.6.6]


Option to escape Excel-unfriendly strings

Is your feature request related to a problem?

When a csv file contains a string with a leading -, +, or =, Excel will treat it as a formula field and throw an error.

Describe the solution you'd like

I'd love an option to auto-detect these leading characters and escape them properly with a single leading single-quote (').
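A pre-encoding sanitizer along those lines could be sketched as follows (a hypothetical helper, not an existing CodableCSV API; `@` is included since Excel treats it as a formula trigger too):

```swift
import Foundation

// Hypothetical sanitizer: prefix a single quote to values Excel would
// otherwise interpret as formulas (leading -, +, =, or @).
func excelSafe(_ field: String) -> String {
    guard let first = field.first, "-+=@".contains(first) else { return field }
    return "'" + field
}
```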

Encoding a struct with a nil value results in an infinite loop.

Describe the bug

Encoding a struct with a nil value results in an infinite loop.

To Reproduce

Steps to reproduce the behavior:

import Foundation
import CodableCSV

struct Employee: Encodable {
    let id: Int
    let name: String
    let supervisorId: Int?
    
    enum CodingKeys: String, CodingKey, CaseIterable {
        case id = "Employee ID"
        case name = "Name"
        case supervisorId = "Supervisor ID"
    }
}

let oneEmployee = Employee(id: 1, name: "Roy", supervisorId: nil)

let encoder = CSVEncoder {
    $0.headers = Employee.CodingKeys.allCases.map(\.rawValue)
}
let result = try! encoder.encode([oneEmployee], into: String.self)

print(result)

Expected behavior

I'm actually not sure how a nil value is represented in CSV, but I think you just put nothing between the two delimiters. In any case, this should at least throw an error instead of entering an infinite loop.
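For reference, RFC 4180 has no explicit nil; the conventional representation is indeed an empty field between delimiters, as in this plain-Swift sketch (hypothetical `row` helper, not library code):

```swift
import Foundation

// nil fields serialize as nothing between the delimiters.
func row(_ fields: [String?]) -> String {
    fields.map { $0 ?? "" }.joined(separator: ",")
}
```

So the employee above would serialize as `1,Roy,` with a trailing empty field for the nil supervisor.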

System (please complete the following information)

  • OS: macOS 10.15.4
  • CodableCSV: 0.5.1

Give CSVWriter.Configuration its own delimiter type which has no inference API

Is your feature request related to a problem?

It is currently possible to create an invalid CSVWriter.Configuration by supplying nil as a field- or row-delimiter. nil means "infer the delimiter from the CSV data", which only makes sense for the CSVReader. This error is reported at runtime.

Describe the solution you'd like

I'd suggest having separate Delimiter.Pair types for CSVReader.Configuration and CSVWriter.Configuration so that we can prevent invalid configuration at compile-time. The Delimiter.Pair for the writer's configuration would simply not have an API for specifying inference.

Describe alternatives you've considered

Alternatively we can keep it as it is currently, and raise a run-time error when inference is requested from the CSVWriter. This does spare us from having two very similar Delimiter.Pair types.

Additional context

If we add a more explicit API for delimiter inference, as suggested in #44, I think this might become even more important, as the auto-completion will otherwise include .infer and multiple overloads of .infer(options:) in its suggestions, which would be quite confusing in the context of the CSVWriter.

Lazy decoding a CSV with CRLF line endings fails without an error

Describe the bug

When using CSVDecoder.Lazy on a CSV file with CRLF line endings, and the row delimiter is not configured to be CRLF (the default is only "\n"), decoding rows silently fails.


Expected behavior

Either throw an error when an invalid row delimiter is encountered, or successfully parse both \n and \r\n line endings without additional configuration.

System

  • OS: iOS 14.2
  • CodableCSV: 0.6.5

Additional context

This is the call stack from lazy decoding down to where the internal error is thrown. In between, the error is eaten by a try? and the Lazy iterator terminates without error, despite not processing the CSV.

#0 0x0000000102e0ba48 in CSVReader._parseEscapedField(rowIndex:escaping:) at CodableCSV/sources/imperative/reader/Reader.swift:278
#1 0x0000000102e091a0 in CSVReader._parseLine(rowIndex:) at CodableCSV/sources/imperative/reader/Reader.swift:165
#2 0x0000000102e0a1f8 in CSVReader.readRow() at CodableCSV/sources/imperative/reader/Reader.swift:112
#3 0x0000000102ddb160 in ShadowDecoder.Source.isRowAtEnd(index:) at CodableCSV/sources/declarative/decodable/internal/Source.swift:109
#4 0x0000000102dc1678 in CSVDecoder.Lazy.next() at CodableCSV/sources/declarative/decodable/DecoderLazy.swift:48
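Until the error surfaces properly, one workaround is to normalize the line endings before handing the string to the decoder (plain-Swift sketch, hypothetical helper name):

```swift
import Foundation

// Workaround sketch: normalize CRLF to LF before decoding, so the default
// "\n" row delimiter matches every row.
func normalizingLineEndings(_ csv: String) -> String {
    csv.replacingOccurrences(of: "\r\n", with: "\n")
}
```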

Can't `pod install` for macOS 10.15

It is just that.

CocoaPods says:

[!] The platform of the target `<TARGET_NAME>` (macOS 10.15) is not compatible with `CodableCSV (0.4.0)`, which does not support `macOS`.

Is there any specific reason for that?
Could you perhaps update the podspec?

Skip Column in Encoder (and Decoder)

Question

Hey @dehesa 👋

I am fairly new to this package and I have a question.
I want to skip a column during export and import.

Export: Given a CSVEncoder and struct Pet

struct Pet: Encodable {
  let name: String
  let age: Int
}
let pets = ...
let encoder = CSVEncoder { $0.headers = ["name", "age"] }
let data = try encoder.encode(pets)

Is it possible to skip a particular column, that is, encode only a single column "name" into a csv file?

Import: Given a CSVDecoder,

let decoder = CSVDecoder()
let result = try decoder.decode([Pet].self, from: data)

Can I import data into an array of Pet, if data does not contain an age column (and perhaps give it a default value if the column does not exist)?

Many thanks for your help! 😊
Roman
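For the import half, a custom init(from:) with decodeIfPresent can supply a default when a column is absent. A self-contained sketch follows; JSONDecoder stands in for any Decoder so the snippet runs anywhere, and whether CSVDecoder reports a missing column as an absent key is an assumption:

```swift
import Foundation

struct Pet: Decodable {
    let name: String
    let age: Int

    enum CodingKeys: String, CodingKey { case name, age }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        self.name = try container.decode(String.self, forKey: .name)
        // Missing column -> default value instead of an error.
        self.age = try container.decodeIfPresent(Int.self, forKey: .age) ?? 0
    }
}

let data = Data(#"[{"name": "Rex"}]"#.utf8)
let pets = try JSONDecoder().decode([Pet].self, from: data)
```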

System

  • OS: macOS Monterey
  • CodableCSV: 0.6.7

Cocoapods installation is not possible

Describe the bug

The pod name "CodableCSV" is occupied by a different project with the same name as this one: https://github.com/pauljohanneskraft/CodableCSV.

Following the pod install instructions for this repo will fail.

The error message from pod is:

[!] CocoaPods could not find compatible versions for pod "CodableCSV":
  In Podfile:
    CodableCSV (~> 0.6.1)

None of your spec sources contain a spec satisfying the dependency: `CodableCSV (~> 0.6.1)`.

Performing `pod search CodableCSV`:

-> CodableCSV (0.4.0)
   CodableCSV allows you to encode and decode CSV files using Codable model types.
   pod 'CodableCSV', '~> 0.4.0'
   - Homepage: https://github.com/pauljohanneskraft/CodableCSV
   - Source:   https://github.com/pauljohanneskraft/CodableCSV.git
   - Versions: 0.4.0, 0.2.0, 0.1.1 [trunk repo]

To Reproduce

Steps to reproduce the behavior:

Add pod "CodableCSV", "~> 0.6.1" to Podfile, as per the readme.
Perform:
pod install

Expected behavior

This package will be installed by pod.

Support for iOS 10

I needed to support iOS 10 in my app but the library only supports iOS >= 12. So, I had to replace the library.

But I was curious to know why the library requires iOS 12. So I added the source code directly to a test project that targets iOS 10, and it builds successfully!

It looks like the library actually supports iOS 10 as it is now, with no extra work needed. I suggest lowering the library's requirements to the minimum version of each OS, to allow it to be used in a wider range of projects.

How to omit properties from the CSV file

Question

How do I configure CodableCSV to omit selected properties from the object when creating the CSV file?

Additional Context

Model objects contain fields that are Codable but which I do not wish to have included in the CSV file. I have not located a method in this package that enables me to do this, though perhaps it could simply omit any properties for which there is no matching label in the header row. (That would be a user-friendly and simple way to provide this functionality if it does not exist.)

System

  • OS: macOS 10.15.5, Ubuntu somethingCurrent
  • CodableCSV: 0.6.1

Handling CSV file with empty line after header

Hi! I'm looking for the correct configuration to handle a CSV file that has an empty line after the header, like so:

"Item Code","ItemStatus"

"ABC","In Stock"
"DEF","Unavailable"

The callsite looks like this:

let decoder = CSVDecoder { config in
  config.headerStrategy = .firstLine
}
let items = try decoder.decode([Item].self, from: csvString)
// items == []

I've used various combinations of delimiters.row = "\n", escapingStrategy = "\n", and trimStrategy = .whitespaces, but the decoder either throws an error or returns an empty array. Is there a way to ignore empty lines?
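A pragmatic workaround is to drop blank lines before handing the string to the decoder (plain-Swift sketch, hypothetical helper name):

```swift
import Foundation

// Workaround sketch: filter out blank (or whitespace-only) lines.
func droppingEmptyLines(_ csv: String) -> String {
    csv.split(separator: "\n", omittingEmptySubsequences: false)
        .filter { !$0.trimmingCharacters(in: .whitespaces).isEmpty }
        .joined(separator: "\n")
}
```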

System (please complete the following information)

  • OS: macOS 10.15.4
  • CodableCSV: 0.5.1

question about escaping

Question

With the default config, how can I escape commas and line returns within a field in order to ensure the resulting CSV is readable?

Additional Context

I'm hacking together an app with Swift and Xcode and I'm a complete novice. To provide the app with some basic data I have provided it with csv files, which are parsed with CodableCSV. Many thanks for the package!

Using basic data it's working fine. I have tried not to fiddle with the configuration. Delimiters are commas and end of line is "\r".

However, for one of my tables I now need to expand one of the fields to include sentences or even paragraphs of text, which contain commas and newlines. Initially I understood from the documentation that the way to do this is to enclose the whole field in double quotes ("..."). That crashed the app, and so did escaping the individual offending characters with double quotes ("",) or with a backslash (\,).

Many thanks for any pointers!

Example extract of table:

id,title,introduction
reg,Regular Models,"The good news for learners of Spanish..."
irr_i,Essentials I,
irr_ii,Essentials II,

Error message:

CodableCSV/Reader.swift:75: Fatal error: 'try!' expression unexpectedly raised an error: [CSVReader] Invalid input
	Reason: The targeted field parsed successfully. However, the character right after it was not a field nor row delimiter.
	Help: If your CSV is CRLF, change the row delimiter to "\r\n" or add a trim strategy for "\r".
	User info: Row index: 1, Field: The good news for learners of Spanish...
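For reference, RFC 4180 escaping works as follows: wrap the field in double quotes and double any quote character inside it; commas and newlines are then safe within the field. A plain-Swift sketch (hypothetical helper, not part of CodableCSV):

```swift
import Foundation

// RFC 4180 escaping: quote the field and double internal quote characters.
func escapedField(_ value: String) -> String {
    "\"" + value.replacingOccurrences(of: "\"", with: "\"\"") + "\""
}
```

Note the error's Help text above also points at the likely crash cause here: a CRLF row delimiter that the reader was not configured for.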

System

  • OS: macOS 12.3
  • CodableCSV: 0.6.7
  • Xcode 13

trimStrategy does not trim characters inside of a quoted field

Describe the bug

trimStrategy characters are not trimmed from a quoted string field.

To Reproduce

Use a CSVDecoder with trimStrategy = .whitespaces and a CSV like:

Name,Value
" Foo ","1"

The Name field is parsed as " Foo " - spaces are not trimmed from the string.

Expected behavior

Characters in trimStrategy should be trimmed from the result.
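Until that behavior changes, a workaround is to trim inside a custom init(from:), since trimStrategy leaves the inside of quoted fields untouched. JSONDecoder stands in for any Decoder below so the sketch is self-contained:

```swift
import Foundation

// Workaround sketch: trim whitespace yourself while decoding the field.
struct Record: Decodable {
    let name: String
    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        self.name = try container.decode(String.self)
            .trimmingCharacters(in: .whitespaces)
    }
}

let records = try JSONDecoder().decode([Record].self, from: Data(#"[" Foo "]"#.utf8))
```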

System

  • CodableCSV: 0.6.6

Question: What's more efficient about supplying init(from:)?

From the README:

The previous example will work if the CSV file has a header row and the header titles match exactly the property names (name, age, and hasPet). A more efficient and detailed implementation:

struct Student: Decodable {
   let name: String
   let age: Int
   let hasPet: Bool

   init(from decoder: Decoder) throws {
       var row = try decoder.unkeyedContainer()
       self.name = try row.decode(String.self)
       self.age = try row.decode(Int.self)
       self.hasPet = try row.decode(Bool.self)
   }
}

What makes this implementation more efficient than the implementation listed above it?

struct School: Decodable {
   // The custom CSV file is a list of all the students.
   let people: [Student]
}

struct Student: Decodable {
   let name: String
   let age: Int
   let hasPet: Bool
}

Are there benchmarks to show the implementation with init(from:) is more efficient? Should I implement all of my Codable structures with a manual init(from:)?

Customise cell parsing for declarative decoder

Is your feature request related to a problem?

I am a fan of the concept of Codable declarative CSV parsing, but am running into the edges of it a little with my current use case. I'm parsing a nutrient database (a UK public health source), and in their dataset they either offer a floating point value for a quantity of a nutrient, or special codes representing trace amounts: e.g. they use "N" to represent "significant but unmeasured quantity" or "Tr" to represent "trace amounts".

Here's an example subset of an input:

Water (g),Protein (g),Fat (g),Carbohydrate (g),Energy (kJ) (kJ),Starch (g),Total sugars (g),Glucose (g)
76.7,2.9,15.2,0.8,625,Tr,0.8,0.1
9.7,1.3,1.2,Tr,67,0.0,Tr,0.0
84.2,0.2,0.1,Tr,7,0.0,Tr,0.0
93.4,4.0,0.7,0.4,100,Tr,0.3,0.1
8.5,6.1,8.7,N,N,N,N,N

In my use case, I'd basically like to ignore N or Tr values (defaulting them to 0 in the parsed type, maybe), but the parser throws an exception and exits when it encounters a non-parseable Double value.

Describe the solution you'd like

Similar to the customisation point for the Decimal parser, it'd be great if we could customise the parsing for types such as Double to handle edge cases in our input data. In my case I'd cast to Double any values that aren't "N" or "Tr", and return 0.0 for those edge cases.
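In the meantime, the mapping itself fits in a few lines and can be applied after decoding the field as a String (hypothetical helper named here for illustration):

```swift
import Foundation

// Map the dataset's sentinel codes ("N", "Tr") to 0.0; otherwise parse the
// field as a Double. nil signals a genuinely malformed value.
func nutrientValue(_ field: String) -> Double? {
    switch field {
    case "N", "Tr": return 0.0
    default: return Double(field)
    }
}
```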

Describe alternatives you've considered

I've been able to resolve my issues using the imperative parser, or by pre-processing the CSV whenever I parse it, but it ceases to be a nice declarative interface at that point (and requires loading the whole thing into memory, as my old SwiftCSV implementation did).

The Decimal parser option works, but results in Decimal values - in my case I want simple Doubles.

Is encoding Double or Float supported?

When I change the floatStrategy to .convert, I get a fatal error in this code:

case .throw: throw CSVEncoder.Error._invalidFloatingPoint(value, codingPath: self.codingPath)
case .convert(let positiveInfinity, let negativeInfinity, let nan):
    if value.isNaN {
        return nan
    } else if value.isInfinite {
        return (value < 0) ? negativeInfinity : positiveInfinity
    } else { fatalError() }
}

So with either strategy, attempting to encode a valid Double produces either the thrown error or a fatal error. Is this expected, or is there another configuration I'm missing?

Similarly, when I try to decode a Double, an error is thrown; but when I decode it as a String and convert the string to a Double in my struct's init(from:), the field is processed correctly.

How can I install on Linux?

Question

How can I install on Linux? I can't make it work.


System

  • OS: Ubuntu 20.04
  • CodableCSV: [e.g. 0.6.7]

Define headers, but suppress header output?

Question

In a large scale streaming situation, the csv is being used to 'chunk' rows. I'd like to be able to pass in headers, but not send them to the CSV (since the header is already out there).

Is this possible? I can't use CodingKeys - because they are already being used as 'string' for a JSON decoder.

I've been trying to find a form of 'Lazy' where I could call 'flushEncoding()' to reset the rows and leave a usable lazy encoder.

Keeping a root encoder and making a new lazy() as needed also works great, except that it lacks the ability to suppress the header after the first lazy instance. (Any way I've tried removing it after the fact breaks CodingKeys lookup - as expected.)

Extra comma in header or data line causes failure to parse subsequent lines

Describe the bug

Having an extra comma in a data line (which is usually caused by the CSV creator failing to quote a field) causes that line and all subsequent lines to fail to parse. Having an extra comma at the end of the header line causes all subsequent data lines to fail to parse.

To Reproduce

Please see the attached test file (it is really a .swift file, but I changed the extension to .txt in order to attach it).
DecodingBadInputTests.txt

Expected behavior

Both of these situations (additional commas in either the header or a data line) are forbidden by RFC 4180, so I would expect an exception to be raised.

System

  • OS: macOS 11.2.3
  • CodableCSV: 0.6.2

Additional context

I encountered both of these instances of ill-formed CSV in files I downloaded from my banks. I'm using CodableCSV in a Swift app I've written to take the differently formatted CSV from each bank and create a standard format which I then import into a spreadsheet for further analysis.

Build error in Xcode v16.0 beta 5

Question

Anyone else have issues building in Xcode 16 beta 5?

Additional Context

Beta 4 was OK. With beta 5, projects with CodableCSV package have build errors.

[Screenshot: build errors in Xcode 16 beta 5]

System

  • OS: macOS 14.6.1
  • CodableCSV: 0.6.7

Decodable sequence

Hey @dehesa!

I think I was too slow, it looks like you already implemented the sequential buffering strategy for 0.5.2. I was taking some time to learn about the Decoder protocol internals.

What I learned is that it's possible to decode an UnkeyedDecodingContainer into any sequence without buffering. ShadowDecoder.UnkeyedContainer seems to do a good job of iteratively decoding each item.

The README demos decoding into a preallocated array.

let decoder = CSVDecoder { $0.bufferingStrategy = .sequential }
let content: [Student] = try decoder.decode([Student].self, from: URL(fileURLWithPath: "~/Desktop/Student.csv"))

Instead of an Array, I created a custom sequence wrapper. With the added benefit of customizing how the result is wrapped. I had my 🤞 that AnySequence was Decodable, but it's not.

class DecodableSequence<T: Decodable>: Sequence, IteratorProtocol, Decodable {
    private var container: UnkeyedDecodingContainer

    required init(from decoder: Decoder) throws {
        container = try decoder.unkeyedContainer()
    }

    func next() -> Result<T, Error>? {
        if container.isAtEnd {
            return nil
        }
        // or could use a try! here
        return Result { try container.decode(T.self) }
    }
}

Then:

let decoder = CSVDecoder { $0.bufferingStrategy = .sequential }
let url = URL(fileURLWithPath: "Student.csv")
let results = try decoder.decode(DecodableSequence<Student>.self, from: url)

for result in results {
    print(try result.get())
}

Any thoughts on this technique or alternatives? Would a sequence wrapper like this be useful to include as part of the library?

Thanks!
@josh

Add support for field delimiter detection

Is your feature request related to a problem?

When using CodableCSV to load user-provided CSV files, one currently needs to ask the user which field delimiter is used in their file.

Describe the solution you'd like

It would be nice if CodableCSV had an option to automatically infer the field delimiter from the provided file.

I saw that this feature is on the roadmap, along with row delimiter detection and header detection. There are also already some references to it in the code, with the idea to use auto-detection when the field delimiter is set to nil in the reader's configuration.

I'd be happy to contribute this feature. My idea was to port the dialect detection code from the CleverCSV Python library to Swift.

Describe alternatives you've considered

An alternative would be to use the library directly, however that would introduce a dependency to the project, and, more importantly, I'm not quite sure how good Swift's support is for calling Python code. I guess it wouldn't work on iOS, for example?

@dehesa what do you think?

Decoding CSV file with CRLF line endings fail with error if the last column is quoted

Describe the bug

Decoding a CSV file with CRLF line endings fails with an error, if the last field in a row is quoted.

The error:

Invalid input
	Reason: The last field is escaped (through quotes) and an EOF (End of File) was encountered before the field was properly closed (with a final quote character).
	Help: End the targeted field with a quote.

To Reproduce

Steps to reproduce the behavior:

Using a CSV file with CRLF line endings (url), decode with this:

let decoder = try CSVDecoder(configuration: {
    $0.encoding = .utf8
    $0.bufferingStrategy = .sequential
    $0.headerStrategy = .firstLine
    $0.trimStrategy = .whitespaces
    $0.delimiters.row = "\r\n" // or "\n", also fails
}).lazy(from: url)

Expected behavior

No error

System

  • CodableCSV: 0.6.6

Additional context

This was introduced in v0.6.6

CSVEncoder.lazy<URL> should support an appending strategy

Is your feature request related to a problem?

Sorry if this is available already, but I couldn't find it in the sources (it appears in CSVWriter, but not in CSVEncoder) or in the README.

Describe the solution you'd like

Simply put: for live time-series serialization I'd want to append new data to the URL instead of overwriting it, with a bufferingStrategy of .sequential (but I can also see where users wouldn't want appending, so it should be a separate strategy).
