zhangxiangxiao / glyph
Which Encoding is the Best for Text Classification in Chinese, English, Japanese and Korean?

License: BSD 3-Clause "New" or "Revised" License

Languages: Shell 49.63%, Lua 45.45%, Python 4.93%

glyph's Introduction

Glyph

This repository publishes all the code used for the following article:

Xiang Zhang, Yann LeCun, Which Encoding is the Best for Text Classification in Chinese, English, Japanese and Korean?, arXiv:1708.02657

The code and datasets have been fully released as of January 2018, including all the code for crawling, preprocessing and training on the datasets. The documentation, however, may not be complete yet. That said, readers can refer to the doc directory for an example of reproducing all the results for the Dianping dataset, and extend it to the other datasets in similar ways.
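As a rough orientation, the end-to-end workflow for one dataset might look like the sketch below. All script and file names in it are hypothetical placeholders rather than the repository's actual entry points; the doc directory remains the authoritative reference.

```bash
# Hypothetical outline for reproducing the Dianping results.
# Script and file names are illustrative placeholders; see the doc/
# directory for the real commands.

# 1. Download the released Dianping train/test splits (Google Drive links
#    below) and place them under data/dianping/.

# 2. Decompress the splits (assuming xz-compressed CSV files).
xz -dk data/dianping/train.csv.xz
xz -dk data/dianping/test.csv.xz

# 3. Preprocess the text into the encoding under study (placeholder name).
bash data/dianping/preprocess.sh

# 4. Train and evaluate a model on the preprocessed data (placeholder name).
bash glyphnet/train.sh
```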

Reproducibility Manifesto

If anyone sees a number in our paper, there is a script one can execute to reproduce it. No responsibility should be imposed on the user to figure out any experimental parameter buried in the paper's content.

Datasets

The data directory contains the preprocessing scripts for all the datasets used in the paper. These datasets are released separately from their preprocessing source code. See below for details.

Summary

The following table is a summary of the datasets. Most of them have millions of samples for training.

| Dataset        | Language     | Classes | Train      | Test      |
|----------------|--------------|---------|------------|-----------|
| Dianping       | Chinese      | 2       | 2,000,000  | 500,000   |
| JD full        | Chinese      | 5       | 3,000,000  | 250,000   |
| JD binary      | Chinese      | 2       | 4,000,000  | 360,000   |
| Rakuten full   | Japanese     | 5       | 4,000,000  | 500,000   |
| Rakuten binary | Japanese     | 2       | 3,400,000  | 400,000   |
| 11st full      | Korean       | 5       | 750,000    | 100,000   |
| 11st binary    | Korean       | 2       | 4,000,000  | 400,000   |
| Amazon full    | English      | 5       | 3,000,000  | 650,000   |
| Amazon binary  | English      | 2       | 3,600,000  | 400,000   |
| Ifeng          | Chinese      | 5       | 800,000    | 50,000    |
| Chinanews      | Chinese      | 7       | 1,400,000  | 112,000   |
| NYTimes        | English      | 7       | 1,400,000  | 105,000   |
| Joint full     | Multilingual | 5       | 10,750,000 | 1,500,000 |
| Joint binary   | Multilingual | 2       | 15,000,000 | 1,560,000 |

Download

Datasets are released separately from the source code via links from Google Drive. These datasets should be used for research purposes only. A quick sanity check for a downloaded split is sketched after the table below.

| Dataset        | Train | Test |
|----------------|-------|------|
| Dianping       | Link  | Link |
| JD full        | Link  | Link |
| JD binary      | Link  | Link |
| Rakuten full   | Link  | Link |
| Rakuten binary | Link  | Link |
| 11st full      | Link  | Link |
| 11st binary    | Link  | Link |
| Amazon full    | Link  | Link |
| Amazon binary  | Link  | Link |
| Ifeng          | Link  | Link |
| Chinanews      | Link  | Link |
| NYTimes        | Link  | Link |
| Joint full     | Link  | Link |
| Joint binary   | Link  | Link |
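If the sample counts in a downloaded split differ from the summary table above (see, for example, the issue on dataset size below), a line count of the decompressed file is a quick sanity check. This is a minimal sketch with hypothetical file names; the actual archive names on Google Drive may differ, and it assumes one sample per line.

```bash
# Hypothetical file name; substitute whatever the downloaded archive is called.
xz -dk dianping_train.csv.xz    # decompress, keeping the original .xz archive
wc -l dianping_train.csv        # the line count approximates the number of samples
```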

GNU Unifont

The glyphnet scripts require the GNU Unifont character images to run. The file unifont-8.0.01.t7b.xz can be downloaded via this link.
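The archive is xz-compressed; unpacking it is a one-liner, assuming the standard xz tool is installed:

```bash
# Decompress the Unifont archive; -d decompresses, -k keeps the .xz file around.
xz -dk unifont-8.0.01.t7b.xz
# The resulting unifont-8.0.01.t7b is the file the glyphnet scripts require.
```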

glyph's People

Contributors

zhangxiangxiao


glyph's Issues

About the dataset size

This work is really helpful to me.
According to your paper, the JD_full dataset has 3,000,000 training samples and 250,000 test samples. However, when I downloaded the dataset, I found 3,047,399 training samples and 253,658 test samples. Could you explain the discrepancy? Thanks a lot!

Data format of the Ifeng and Chinanews datasets

Hello, and thank you for this work; I am currently building on top of it. For the Ifeng and Chinanews datasets, your paper states that the first paragraph of each news article is extracted. However, in the actual datasets each sample has three columns. Is the extra column the news title? In your experiments, did you directly concatenate the two text columns into one large text before using it?

Question about text truncation

I see that in the Chinanews data the longest text reaches 6,761 characters. Did you apply any truncation in your experiments?
