databrickslabs / delta-sharing-java-connector

A Java connector for Delta Sharing (delta.io/sharing) that allows you to easily ingest data on any JVM.

Home Page: https://databrickslabs.github.io/delta-sharing-java-connector/

License: Apache License 2.0

Language: Java 100.00%
Topics: java, delta-sharing

delta-sharing-java-connector's Introduction

Delta Sharing Java Connector


A Java connector for Delta Sharing that allows you to easily ingest data on any JVM.

Design

Project Description

This project brings Delta Sharing capabilities to Java.

The Java connector follows the Delta Sharing protocol to read shared tables from a Delta Sharing server. To reduce and limit egress costs on the Data Provider side, it maintains a persistent cache that removes any unnecessary reads.

  • The data is served to the connector via a persistent cache to limit egress costs whenever possible.

    • Instead of keeping all table data in memory, we use file stream readers so that larger datasets can be served even when there isn't enough memory available.
    • Each table has a dedicated file stream reader per part file held in the persistent cache. File stream readers allow us to read the data in blocks of records and process it with more flexibility.
    • Data records are provided as a set of Avro GenericRecords, which strike a good balance between flexibility of representation and integration capabilities. GenericRecords can easily be exported to JSON and other formats using Avro's EncoderFactory.
  • Every time data access is requested, the connector checks for metadata updates and refreshes the table data if the metadata has changed.

    • The connector requests the table's metadata from the provider based on its coordinate. The table coordinate is the profile file path followed by # and the fully qualified name of the table (<share-name>.<schema-name>.<table-name>).
    • A table-to-metadata lookup is maintained inside the JVM. The connector compares the received metadata with the last metadata snapshot. If nothing has changed, the existing table data is served from the cache; otherwise, the connector refreshes the table data in the cache.
  • When metadata changes are detected, both the data and the metadata are updated.

    • The connector requests the pre-signed URLs for the table identified by the fully qualified table name. It only downloads the files whose metadata has changed and stores them in the persistent cache location.

In the current implementation, the persistent cache lives in dedicated temporary locations that are destroyed when the JVM shuts down. This is an important consideration, since it avoids leaving orphaned data behind locally.
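The coordinate format and the metadata-comparison step described above can be sketched in plain Java. This is a minimal illustration of the idea only: the class, record, and method names here are hypothetical, not the connector's actual API.

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the table-coordinate parsing and
// metadata-change check described above; names are illustrative.
class MetadataCacheSketch {

    /** A table coordinate: "<profile-path>#<share>.<schema>.<table>". */
    record Coordinate(String profilePath, String share, String schema, String table) {
        static Coordinate parse(String raw) {
            String[] halves = raw.split("#", 2);                 // profile path vs FQN
            String[] fqn = halves[1].split("\\.", 3);            // share.schema.table
            return new Coordinate(halves[0], fqn[0], fqn[1], fqn[2]);
        }
    }

    // Last metadata snapshot seen per table, keyed by fully qualified name.
    private final Map<String, String> metadataSnapshots = new ConcurrentHashMap<>();

    /**
     * Returns true if the freshly fetched metadata differs from the last
     * snapshot, i.e. the cached table data must be refreshed.
     */
    boolean needsRefresh(Coordinate c, String fetchedMetadata) {
        String key = c.share() + "." + c.schema() + "." + c.table();
        String previous = metadataSnapshots.put(key, fetchedMetadata);
        return !Objects.equals(previous, fetchedMetadata);
    }

    public static void main(String[] args) {
        MetadataCacheSketch cache = new MetadataCacheSketch();
        Coordinate c = Coordinate.parse("/tmp/profile.share#share1.default.table1");
        System.out.println(c.table());                   // table1
        System.out.println(cache.needsRefresh(c, "v1")); // true: first access
        System.out.println(cache.needsRefresh(c, "v1")); // false: unchanged, serve cache
        System.out.println(cache.needsRefresh(c, "v2")); // true: metadata changed, refresh
    }
}
```

On a cache hit the connector can serve the already-downloaded part files; on a miss it would re-request pre-signed URLs and refresh only the changed files.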

Project Support

Please note that all projects in the /databrickslabs github account are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for support.

Building the Project

The project is built with Maven. To build it locally:

  • Make sure you are in the root directory of the project
  • Run mvn clean install
  • The jars will be available in the /target directory

Using the Project

To use the connector in your own project, add its Maven coordinates with the desired version.
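For example, a Maven dependency declaration would look like the fragment below. The groupId shown is an assumption based on the repository owner; verify the published coordinates and fill in the version you need.

```xml
<dependency>
  <!-- groupId assumed from the repository owner; verify against the published artifact -->
  <groupId>com.databricks.labs</groupId>
  <artifactId>delta-sharing-java-connector</artifactId>
  <version><!-- desired version --></version>
</dependency>
```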


delta-sharing-java-connector's Issues

Unable to read date format columns (int96 type) from avro-parquet schema

I am facing the following exception when reading a Parquet file that has a date column:

java.lang.IllegalArgumentException: INT96 is deprecated. As interim enable READ_INT96_AS_FIXED flag to read as byte array.

at org.apache.parquet.avro.AvroSchemaConverter$1.convertINT96(AvroSchemaConverter.java:331)
at org.apache.parquet.avro.AvroSchemaConverter$1.convertINT96(AvroSchemaConverter.java:313)
at org.apache.parquet.schema.PrimitiveType$PrimitiveTypeName$7.convert(PrimitiveType.java:341)
at org.apache.parquet.avro.AvroSchemaConverter.convertField(AvroSchemaConverter.java:312)
at org.apache.parquet.avro.AvroSchemaConverter.convertFields(AvroSchemaConverter.java:290)
at org.apache.parquet.avro.AvroSchemaConverter.convert(AvroSchemaConverter.java:279)
at org.apache.parquet.avro.AvroReadSupport.prepareForRead(AvroReadSupport.java:134)
at org.apache.parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:190)
at org.apache.parquet.hadoop.ParquetReader.initReader(ParquetReader.java:166)
at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:135)
at com.databricks.labs.delta.sharing.java.format.parquet.TableReader.read(TableReader.java:57)

Data filtering capabilities

Is it possible to have data filtering capabilities (without a Spark session) on the list of GenericRecords fetched by the Java connector? Or is there any way to get a reader with a filter? The filtering capabilities available with Spark DataFrames are what I am aiming for.
