bert-textclassification's People

Contributors

songyingxin

bert-textclassification's Issues

GPU memory issue

I wanted to try the classification performance of Bert + CNN on the SST-2 data, but it immediately reports a GPU out-of-memory error. I switched to my company's server and it still overflows, even with 10.75 GB of GPU memory. I'd like to know why: is it genuinely running out of memory, or is there something in the code I need to adjust?
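
For what it's worth, BERT-base commonly exhausts ~11 GB cards when the batch size and maximum sequence length are both large. A quick sanity check plus the usual knobs (a sketch; the remedies in the comments are generic, not tied to this repo's exact flag names):

import torch

# Report how much memory the card actually has (requires a CUDA device).
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.2f} GiB")

# Typical remedies for BERT-base OOM on an ~11 GB card:
#   - lower the batch size (e.g. 32 -> 16 or 8)
#   - lower the maximum sequence length (SST-2 sentences are short,
#     so 128 is usually plenty)
#   - use gradient accumulation to keep the effective batch size large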

Several runtime errors in the code

Hi, I hit several errors when running your code, and fixing them myself is a bit of a struggle. Could you update the repo? Thanks.
1. In Utils/utils.py at line 33, report = metrics.classification_report(labels, preds, labels=labels_list, target_names=label_list, digits=5, output_dict=True) fails because the output_dict=True argument is not recognized and has to be removed.
2. In Utils/train_evaluate.py at line 106, this error occurs: TypeError: string indices must be integers
It would be great if you could fix these.
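
Regarding item 1: output_dict=True was added in scikit-learn 0.20, so the error usually means an older scikit-learn is installed. Rather than deleting the argument, a version-tolerant fallback is possible (a minimal sketch with dummy labels):

from sklearn import metrics

labels = [0, 1, 1, 0]  # dummy gold labels, for illustration only
preds = [0, 1, 0, 0]   # dummy predictions

try:
    # scikit-learn >= 0.20 accepts output_dict=True
    report = metrics.classification_report(labels, preds, digits=5, output_dict=True)
except TypeError:
    # older scikit-learn: fall back to the plain-text report
    report = metrics.classification_report(labels, preds, digits=5)
print(report)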

The handling of hidden in BertLSTM's forward seems wrong

#BertLSTM.py    line 19:
self.rnn = nn.LSTM(config.hidden_size, rnn_hidden_size, num_layers,bidirectional=bidirectional, batch_first=True, dropout=dropout)
.....

#BertLSTM.py    line 31-36:
_, (hidden, cell) = self.rnn(encoded_layers)
# outputs: [batch_size, seq_len, rnn_hidden_size * 2]
hidden = self.dropout(
torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1))  # concatenate the last layer's bidirectional hidden states

logits = self.classifier(hidden)

Since batch_first=True was set earlier, hidden should have shape [batch_size, num_layers*num_directions, rnn_hidden_size], so concatenating the last layer's bidirectional outputs ought to be (hidden[:, -2, :], hidden[:, -1, :]), shouldn't it? Or alternatively, hidden = hidden.permute([1, 0, 2]) first.
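
For reference, the PyTorch docs state that batch_first only affects the input and output tensors, not the returned hidden state: h_n always has shape [num_layers * num_directions, batch, hidden_size]. So the original hidden[-2]/hidden[-1] indexing appears to be correct; a quick standalone check:

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16, num_layers=2,
              bidirectional=True, batch_first=True)
x = torch.randn(3, 10, 8)  # [batch, seq_len, input_size]
out, (h_n, c_n) = rnn(x)

print(out.shape)  # torch.Size([3, 10, 32]) -- batch_first applies to outputs
print(h_n.shape)  # torch.Size([4, 3, 16])  -- but not to the hidden state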

How do I run prediction on a few new texts?

Training finished and the accuracy is high.
But in actual use, what usually arrives is one or a few new texts whose labels need to be predicted.
Could you provide a standalone example of loading the trained model and predicting on new data?
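
A minimal sketch of such an inference script, under some assumptions: BertForSequenceClassification from pytorch_pretrained_bert stands in for the repo's BertOrigin (essentially a BERT encoder plus a classifier head), and the model name, checkpoint path, and num_labels are placeholders:

import torch
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
# placeholder path; assumes the fine-tuned weights were saved via model.state_dict()
model.load_state_dict(torch.load("outputs/pytorch_model.bin", map_location="cpu"))
model.eval()

for text in ["the movie was wonderful", "a complete waste of time"]:
    tokens = ["[CLS]"] + tokenizer.tokenize(text) + ["[SEP]"]
    input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
    with torch.no_grad():
        logits = model(input_ids)  # [1, num_labels]
    print(text, "->", logits.argmax(dim=-1).item())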

A few questions about training on SST-2

Hi, which of dev and test in your dataset is the test set? The code in the Baidu Cloud link you shared does not show where test comes from; if dev is the test set, then what is test? Please clarify. Also, a document on SST-2 training would make the project easier to learn from. Thanks.

Why does BERT followed by a CNN work better?

Why does putting a CNN / LSTM on top of BERT improve results? Is it because BERT's encoding capacity is insufficient (clearly not)? What is the principle behind this? Thanks.

Handling of Chinese vs. English datasets

Hello, I see the datasets include both Chinese and English, but isn't tokenization different between the two? English uses WordPiece while Chinese is split character by character, and I don't see any handling of this in your code. Or have I misread the code?
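
For reference, BertTokenizer itself covers both cases: its basic tokenizer inserts spaces around CJK characters before WordPiece runs, so the behavior depends purely on which vocabulary/model is loaded. A quick demonstration (assuming the standard pytorch_pretrained_bert tokenizer, which this repo appears to use):

from pytorch_pretrained_bert import BertTokenizer

en_tok = BertTokenizer.from_pretrained("bert-base-uncased")
zh_tok = BertTokenizer.from_pretrained("bert-base-chinese")

print(en_tok.tokenize("unbelievably"))  # WordPiece sub-words, e.g. ['un', '##believ', ...]
print(zh_tok.tokenize("今天天气很好"))  # one token per Chinese character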

Where models and logs are saved

Hello, and thanks for sharing.
In the code, models are saved under .output/ and logs under .log/, but after running run_xx.py to completion I find nothing in either directory. What could be going on?
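
One possible cause (an assumption, not verified against this repo): relative paths such as .output/ resolve against the current working directory, so launching the script from another directory writes to an unexpected location. Making the paths explicit rules that out:

import os

# Resolve the relative paths and make sure the directories exist up front.
output_dir = os.path.abspath(".output")
log_dir = os.path.abspath(".log")
os.makedirs(output_dir, exist_ok=True)
os.makedirs(log_dir, exist_ok=True)
print("models ->", output_dir)
print("logs   ->", log_dir)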

Paper behind the code

Hello, could you share how to obtain the paper that the BERTRCNN code corresponds to?

Sentence-pair datasets

If the dataset format is sentence_a sentence_b label, why does the bert_config.json path fail to be found (the path was set up in advance)?

load_state_dict error: problems with parameters

Thanks for sharing, but I ran into some problems with the BertOrigin model (and the others as well).

First, I copied config.json into .sst_output, and then this error appeared. It seems to be a very common problem, because I saw the same error in this repo: naver/sqlova#1. Even after I changed gamma into weight and beta into bias, the model still would not work; the error at that point was NotImplementedError.

RuntimeError: Error(s) in loading state_dict for BertOrigin: Missing key(s) in state_dict: "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.output.LayerNorm.weight", "bert.encoder.layer.7.attention.output.LayerNorm.bias", "bert.encoder.layer.7.output.LayerNorm.weight", "bert.encoder.layer.7.output.LayerNorm.bias", "bert.encoder.layer.8.attention.output.LayerNorm.weight", "bert.encoder.layer.8.attention.output.LayerNorm.bias", "bert.encoder.layer.8.output.LayerNorm.weight", "bert.encoder.layer.8.output.LayerNorm.bias", "bert.encoder.layer.9.attention.output.LayerNorm.weight", "bert.encoder.layer.9.attention.output.LayerNorm.bias", "bert.encoder.layer.9.output.LayerNorm.weight", "bert.encoder.layer.9.output.LayerNorm.bias", "bert.encoder.layer.10.attention.output.LayerNorm.weight", "bert.encoder.layer.10.attention.output.LayerNorm.bias", "bert.encoder.layer.10.output.LayerNorm.weight", "bert.encoder.layer.10.output.LayerNorm.bias", "bert.encoder.layer.11.attention.output.LayerNorm.weight", "bert.encoder.layer.11.attention.output.LayerNorm.bias", "bert.encoder.layer.11.output.LayerNorm.weight", "bert.encoder.layer.11.output.LayerNorm.bias", "classifier.weight", "classifier.bias". 
Unexpected key(s) in state_dict: "cls.predictions.bias", "cls.predictions.transform.dense.weight", "cls.predictions.transform.dense.bias", "cls.predictions.transform.LayerNorm.gamma", "cls.predictions.transform.LayerNorm.beta", "cls.predictions.decoder.weight", "cls.seq_relationship.weight", "cls.seq_relationship.bias", "bert.embeddings.LayerNorm.gamma", "bert.embeddings.LayerNorm.beta", "bert.encoder.layer.0.attention.output.LayerNorm.gamma", "bert.encoder.layer.0.attention.output.LayerNorm.beta", "bert.encoder.layer.0.output.LayerNorm.gamma", "bert.encoder.layer.0.output.LayerNorm.beta", "bert.encoder.layer.1.attention.output.LayerNorm.gamma", "bert.encoder.layer.1.attention.output.LayerNorm.beta", "bert.encoder.layer.1.output.LayerNorm.gamma", "bert.encoder.layer.1.output.LayerNorm.beta", "bert.encoder.layer.2.attention.output.LayerNorm.gamma", "bert.encoder.layer.2.attention.output.LayerNorm.beta", "bert.encoder.layer.2.output.LayerNorm.gamma", "bert.encoder.layer.2.output.LayerNorm.beta", "bert.encoder.layer.3.attention.output.LayerNorm.gamma", "bert.encoder.layer.3.attention.output.LayerNorm.beta", "bert.encoder.layer.3.output.LayerNorm.gamma", "bert.encoder.layer.3.output.LayerNorm.beta", "bert.encoder.layer.4.attention.output.LayerNorm.gamma", "bert.encoder.layer.4.attention.output.LayerNorm.beta", "bert.encoder.layer.4.output.LayerNorm.gamma", "bert.encoder.layer.4.output.LayerNorm.beta", "bert.encoder.layer.5.attention.output.LayerNorm.gamma", "bert.encoder.layer.5.attention.output.LayerNorm.beta", "bert.encoder.layer.5.output.LayerNorm.gamma", "bert.encoder.layer.5.output.LayerNorm.beta", "bert.encoder.layer.6.attention.output.LayerNorm.gamma", "bert.encoder.layer.6.attention.output.LayerNorm.beta", "bert.encoder.layer.6.output.LayerNorm.gamma", "bert.encoder.layer.6.output.LayerNorm.beta", "bert.encoder.layer.7.attention.output.LayerNorm.gamma", "bert.encoder.layer.7.attention.output.LayerNorm.beta", "bert.encoder.layer.7.output.LayerNorm.gamma", "bert.encoder.layer.7.output.LayerNorm.beta", "bert.encoder.layer.8.attention.output.LayerNorm.gamma", "bert.encoder.layer.8.attention.output.LayerNorm.beta", "bert.encoder.layer.8.output.LayerNorm.gamma", "bert.encoder.layer.8.output.LayerNorm.beta", "bert.encoder.layer.9.attention.output.LayerNorm.gamma", "bert.encoder.layer.9.attention.output.LayerNorm.beta", "bert.encoder.layer.9.output.LayerNorm.gamma", "bert.encoder.layer.9.output.LayerNorm.beta", "bert.encoder.layer.10.attention.output.LayerNorm.gamma", "bert.encoder.layer.10.attention.output.LayerNorm.beta", "bert.encoder.layer.10.output.LayerNorm.gamma", "bert.encoder.layer.10.output.LayerNorm.beta", "bert.encoder.layer.11.attention.output.LayerNorm.gamma", "bert.encoder.layer.11.attention.output.LayerNorm.beta", "bert.encoder.layer.11.output.LayerNorm.gamma", "bert.encoder.layer.11.output.LayerNorm.beta".
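
A hedged workaround: old pytorch-pretrained-BERT checkpoints name the LayerNorm parameters gamma/beta while the model code here expects weight/bias, and the cls.* pre-training heads are not used by the classifier at all. Remapping the keys before loading usually resolves both halves of the mismatch; classifier.weight/classifier.bias are freshly initialized during fine-tuning, so loading with strict=False is expected. A sketch:

import torch

def remap_old_bert_keys(state_dict):
    """Rename gamma/beta LayerNorm keys and drop the pre-training heads."""
    renamed = {}
    for key, value in state_dict.items():
        if key.startswith("cls."):
            continue  # masked-LM / next-sentence heads, unused for classification
        key = key.replace("LayerNorm.gamma", "LayerNorm.weight")
        key = key.replace("LayerNorm.beta", "LayerNorm.bias")
        renamed[key] = value
    return renamed

# usage (checkpoint path is a placeholder):
# state = remap_old_bert_keys(torch.load("pytorch_model.bin", map_location="cpu"))
# model.load_state_dict(state, strict=False)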

tf / torch version issue

Hello, I followed the steps in your readme and hit an error while loading the data, probably because of tf/torch version mismatches. Could you list the versions of the libraries you used? Thanks.

Missing key(s) in state_dict: "classifier.weight", "classifier.bias".

File "D:\progrom\python\python\python3\lib\site-packages\torch\nn\modules\module.py", line 719, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertOrigin:
Missing key(s) in state_dict: "classifier.weight", "classifier.bias".
Unexpected key(s) in state_dict: "cls.predictions.bias", "cls.predictions.transform.dense.weight", "cls.predictions.transform.dense.bias", "cls.predictions.transform.LayerNorm.weight", "cls.predictions.transform.LayerNorm.bias", "cls.predictions.decoder.weight", "cls.seq_relationship.weight", "cls.seq_relationship.bias".
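
This looks like the same mismatch minus the gamma/beta renaming: the checkpoint is a raw pre-trained BERT (hence the unexpected cls.* pre-training heads), while the classification model adds a fresh classifier head that no pre-trained checkpoint can contain. A sketch of the non-strict load, with BertForSequenceClassification standing in for BertOrigin (an assumption) and a placeholder checkpoint path:

import torch
from pytorch_pretrained_bert import BertForSequenceClassification

# classifier.weight / classifier.bias are created here, randomly initialized,
# so they are legitimately absent from any pre-trained checkpoint.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

state = torch.load("pytorch_model.bin", map_location="cpu")
state = {k: v for k, v in state.items() if not k.startswith("cls.")}
model.load_state_dict(state, strict=False)  # tolerate the missing classifier keys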

early_stop

Early stopping doesn't seem to work, and after staring at it for a long time I still can't see what the problem is.
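
For comparison, a minimal self-contained patience-based early-stopping loop (dummy metric; this is not the repo's implementation):

import random

def evaluate():
    return random.random()  # dummy dev accuracy, for illustration

best, patience, bad = 0.0, 5, 0
for epoch in range(100):
    dev_acc = evaluate()
    if dev_acc > best:
        best, bad = dev_acc, 0  # improvement: reset the counter
    else:
        bad += 1
        if bad >= patience:  # no improvement for `patience` evaluations
            print(f"early stop at epoch {epoch}, best dev acc {best:.4f}")
            break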

About the HAN model

Could you share the references that the HAN code in this repo is based on?

gradient accumulation

Thanks for sharing the code. Your gradient accumulation implementation helped me a lot on my datasets (roughly a >10% F1 improvement with a very large effective batch size).

Please check line 87 of train_evaluate.py. I think it should be "train_steps" instead of "step".

Thanks
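
For readers unfamiliar with the technique, here is a self-contained gradient-accumulation sketch (toy model, dummy data). The key point, matching the fix above, is that the optimizer should step on the per-loop batch counter, and the global step should advance only once per optimizer update:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
accum = 4  # accumulate gradients over 4 mini-batches

global_step = 0
for step in range(32):
    x = torch.randn(8, 10)  # dummy mini-batch
    y = torch.randint(0, 2, (8,))
    loss = loss_fn(model(x), y) / accum  # scale so accumulated gradients average
    loss.backward()
    if (step + 1) % accum == 0:  # test the batch counter, not global_step
        optimizer.step()
        optimizer.zero_grad()
        global_step += 1  # one global step per parameter update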

A question

FileNotFoundError: [Errno 2] No such file or directory: '.cnews_outputBertOrigin/model_1/config.json'
Hello, I hit this error when running the news classification task; could you take a look?

Error when running run_SST2.py

Exception has occurred: FileNotFoundError
[Errno 2] No such file or directory: 'outputs/outputBertLSTM/BertLSTM/config.json'
File "/data_sas/mz/3_Geobert_tasks/Bert-TextClassification/main.py", line 139, in main
bert_config = BertConfig(output_config_file)
File "/data_sas/mz/3_Geobert_tasks/Bert-TextClassification/run_SST2.py", line 45, in
main(config, config.save_name, label_list)

A small issue in BERTDPCNN.py

At x = x.squeeze() in forward: if the last batch happens to have size 1, the batch dimension also gets squeezed away, so the output no longer matches the label dimensions. I suggest fixing this.
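
A quick demonstration of the pitfall and the usual fix, which is to squeeze explicit dimensions so a batch of size 1 survives:

import torch

x = torch.randn(1, 5, 1, 1)  # e.g. [batch=1, channels, h, w] after pooling

print(x.squeeze().shape)                # torch.Size([5])    -- batch dim lost
print(x.squeeze(-1).squeeze(-1).shape)  # torch.Size([1, 5]) -- batch dim kept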
