
eirazhang / laco

This repository contains the code for our paper "Enhancing Label Correlation Feedback in Multi-Label Text Classification via Multi-Task Learning".


laco's Introduction

# Enhancing Label Correlation Feedback in Multi-Label Text Classification via Multi-Task Learning

This repository contains the code for the ACL 2021 paper "Enhancing Label Correlation Feedback in Multi-Label Text Classification via Multi-Task Learning".

If you use LACO in your work, please cite it as follows:

@article{zhang2021enhancing,
  title={Enhancing Label Correlation Feedback in Multi-Label Text Classification via Multi-Task Learning},
  author={Zhang, Ximing and Zhang, Qian-Wen and Yan, Zhao and Liu, Ruifang and Cao, Yunbo},
  journal={arXiv preprint arXiv:2106.03103},
  year={2021}
}

## Settings

Environment Requirements

  • Python 3.6+

  • TensorFlow 1.12.0+

Environment preparation

  • You can change the experimental settings in LACO/common/global_config.py

  • The initial content under the directory LACO/ie/src/bert is primarily from Google's BERT; citation information is recorded in the corresponding files. You can download the pretrained model and unzip it under LACO/pretrained_model/.

## Datasets

Data Preparation

The sample data are in the directory LACO/log/re_model/input. Note that the "text" field stores the text content and the "spo_list" field stores the relevant labels under its "predicate" entries; the other fields can be ignored.
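The description above can be sketched as follows. This is an illustrative record, assuming one JSON object per line with the labels under "spo_list"[i]["predicate"]; any field layout beyond "text", "spo_list", and "predicate" is a guess based on the description, not the repository's exact schema:

```python
import json

# Hypothetical input record in the assumed format described above:
# "text" holds the document, each label sits under "spo_list"[i]["predicate"].
record = {
    "text": "the alignment of a set of objects by means of transformations ...",
    "spo_list": [{"predicate": "label4"}, {"predicate": "label14"}],
}

line = json.dumps(record)

# Reading it back: collect the multi-label targets from the "predicate" entries.
parsed = json.loads(line)
labels = [spo["predicate"] for spo in parsed["spo_list"]]
print(labels)  # ['label4', 'label14']
```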

## How To Run

  • Train_mltc_with_plcp: python ie/train_main_plcp.py
  • Test_mltc_with_plcp: python ie/test_main_plcp.py
  • Train_mltc_with_clcp: python ie/train_main_clcp.py
  • Test_mltc_with_clcp: python ie/test_main_clcp.py

## Results

The best +PLCP models for the AAPD and RCV1-V2 datasets can be found at https://share.weiyun.com/5EXHqEPE (password: 8yrgji) for your reference.

## Copyright

© Ximing Zhang ([email protected]), Qian-Wen Zhang ([email protected]), Zhao Yan ([email protected]), Ruifang Liu ([email protected]), Yunbo Cao ([email protected]), Tencent Cloud Xiaowei, Beijing, China, and Beijing University of Posts and Telecommunications, Beijing, China.

This code package can be used freely for academic, non-profit purposes. For other usage, please contact us for further information (Ximing Zhang: [email protected]).

laco's People

Contributors: eirazhang

laco's Issues

About LACO model

Hi,
How can I train this model without CLCP/PLCP, i.e., using only the base LACO model?

Could you provide the label2id files?

I have noticed LACO takes the labels as label0#label1#...#label53 for the AAPD dataset and label0#...#label102 for the RCV1-V2 dataset.
I can get close results to the paper on the AAPD dataset.
But on the RCV1-V2 dataset, the performance (mF1: 85.7) of the model trained from scratch with the plcp strategy differs significantly from that in the paper (mF1: 88.4).
And I cannot use the provided checkpoint because there is no file to describe how to convert the original labels to the ids.

So could you provide the label2id files?
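Absent such a file, a minimal sketch of the mapping this issue asks about: building label2id from a '#'-separated schema string like the one described above for AAPD. The assumption that ids follow the labels' order of appearance is exactly what the issue asks the authors to confirm, so treat it as hypothetical:

```python
# Hypothetical label2id construction from a '#'-separated schema string
# (label0#label1#...#label53 for AAPD). The id-assignment order is an
# assumption, not confirmed by the repository.
schema_set = "#".join(f"label{i}" for i in range(54))  # AAPD: 54 labels

labels = schema_set.split("#")
label2id = {label: idx for idx, label in enumerate(labels)}
id2label = {idx: label for label, idx in label2id.items()}

print(label2id["label0"], label2id["label53"])  # 0 53
```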

5.4 Label Correlation Analysis

Hello,
I have computed the conditional probabilities of the ground-truth labels and of the model-predicted labels on the AAPD test set, but some of the predicted conditional probabilities are 0, so the KL divergence cannot be computed directly.
I handled this case with a simple if-else: if p_prediction == 0 or p_ground == 0, then KL_score = 0. However, the KL divergence computed this way is 2.719, which is far from the value reported in the paper.
I would like to know how you handled this case. If convenient, could you release the code for computing the KL divergence?
Thanks, and best wishes!
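For reference, one common workaround for zero entries (not necessarily what the paper did) is additive epsilon smoothing before computing KL divergence, rather than zeroing out those terms:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as probability lists.

    Epsilon smoothing keeps zero entries from producing log(0) or division
    by zero. This is a common workaround, not necessarily the procedure
    used in the paper's Section 5.4.
    """
    # Smooth, then renormalise both distributions so they still sum to 1.
    p = [pi + eps for pi in p]
    q = [qi + eps for qi in q]
    zp, zq = sum(p), sum(q)
    p = [pi / zp for pi in p]
    q = [qi / zq for qi in q]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical distributions give zero divergence, even with a zero entry.
print(kl_divergence([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))  # 0.0
```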

Fine-tune

Is it possible to fine-tune the model on our dataset?

TypeError: '>' not supported between instances of 'NoneType' and 'int'

I am using sample data.

When I set FLAGS.do_train and FLAGS.train_and_evaluate to True at the same time, there is no error in the do_train phase, but an error occurs in the train_and_evaluate phase.

When I set FLAGS.do_train and FLAGS.train_and_evaluate to False and FLAGS.do_eval to True, the same error occurs.
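This class of error typically means a numeric setting was left as None and then compared with an int somewhere in the evaluation path; which flag is unset here is only a guess. A minimal reproduction of the error class (not the repository's code):

```python
# Minimal reproduction (hypothetical, not the repository's code): comparing
# an unset (None) configuration value with an int raises exactly this
# TypeError in Python 3.
eval_steps = None  # e.g., some step-count flag left unset

try:
    if eval_steps > 0:
        pass
except TypeError as exc:
    print(exc)  # '>' not supported between instances of 'NoneType' and 'int'
```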

Log excerpt:

train_data_path ../log/re_model
schema_set label0#label1#label2#...#label53
train_para {'para_batch_size': 2, 'para_epochs': 100, 'para_max_len': 320, 'para_learning_rate': 5e-05}
INFO:tensorflow:Using config: {'_model_dir': '../log/re_model\model/predicate_infer_out_plcp_train/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 1000, ...}
num_actual_eval_examples: 10
eval_file: ../log/re_model\model/predicate_infer_out_plcp_train/eval.tf_record
INFO:tensorflow:Writing example 0 of 10
[... per-example dumps (tokens, input_ids, input_mask, segment_ids, token_label_ids, label_ids, fit_labelspace_positions, fit_docspace_positions) for guids valid-0 through valid-2 omitted for brevity; the pasted log is truncated here ...]
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:token_label_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label_ids: 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:fit_labelspace_positions: 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231
INFO:tensorflow:fit_docspace_positions: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319
INFO:tensorflow:{(3, 5): [0, 1], (5, 32): [1, 0], (3, 34): [1, 0]}
INFO:tensorflow:len of (fit_labelspace_positions): 54
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: valid-3
INFO:tensorflow:tokens: [CLS] since the early 2000s physicist ##s have developed an ing ##eni ##ous but non rigorous formal ##ism called the cavity method to put forward precise conjecture ##s on phase transitions in random problems me ##zard , par ##isi , z ##ec ##china science 2002 the cavity method predict ##s that the sat ##is ##fia ##bility threshold in the random k sat problem is 2 k l ##n ##2 f ##rac ##12 \ ( 1 l ##n 2 \ ) e ##ps ##ilon k , with l ##im k right ##ar ##row in ##fty e ##ps ##ilon k 0 me ##rten ##s , me ##zard , z ##ec ##china random structures and algorithms 2006 this paper contains a proof of that conjecture [SEP] label0 label1 label2 label3 label4 label5 label6 label7 label8 label9 label10 label11 label12 label13 label14 label15 label16 label17 label18 label19 label20 label21 label22 label23 label24 label25 label26 label27 label28 label29 label30 label31 label32 label33 label34 label35 label36 label37 label38 label39 label40 label41 label42 label43 label44 label45 label46 label47 label48 label49 label50 label51 label52 label53 [SEP] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] 
[Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding]
INFO:tensorflow:input_ids: 101 1290 1103 1346 8509 14646 1116 1138 1872 1126 16664 21462 2285 1133 1664 22112 4698 1863 1270 1103 19421 3442 1106 1508 1977 10515 26031 1116 1113 4065 26829 1107 7091 2645 1143 18551 117 14247 26868 117 195 10294 22952 2598 1617 1103 19421 3442 17163 1116 1115 1103 2068 1548 18771 5474 11810 1107 1103 7091 180 2068 2463 1110 123 180 181 1179 1477 175 19366 11964 165 113 122 181 1179 123 165 114 174 3491 27599 180 117 1114 181 4060 180 1268 1813 7596 1107 27944 174 3491 27599 180 121 1143 23619 1116 117 1143 18551 117 195 10294 22952 7091 4413 1105 14975 1386 1142 2526 2515 170 6777 1104 1115 26031 102 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:token_label_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label_ids: 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:fit_labelspace_positions: 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176
INFO:tensorflow:fit_docspace_positions: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319
INFO:tensorflow:{(7, 10): [0, 1], (7, 26): [0, 1], (10, 26): [0, 1], (7, 50): [1, 0], (7, 52): [1, 0], (26, 53): [1, 0], (10, 52): [1, 0], (26, 34): [1, 0], (10, 32): [1, 0]}
INFO:tensorflow:len of (fit_labelspace_positions): 54
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: valid-4
INFO:tensorflow:tokens: [CLS] interference is a main limiting factor of the performance of a wireless ad ho ##c network the temporal and the spatial correlation of the interference makes the out ##ages correlated temporal ##ly \ ( important for re ##tra ##ns ##mission ##s \ ) and spatial ##ly correlated \ ( important for routing \ ) in this letter we q ##uant ##ify the temporal and spatial correlation of the interference in a wireless ad ho ##c network whose nodes are distributed as a p ##ois ##son point process on the plane when al ##oh ##a is used as the multiple access scheme [SEP] label0 label1 label2 label3 label4 label5 label6 label7 label8 label9 label10 label11 label12 label13 label14 label15 label16 label17 label18 label19 label20 label21 label22 label23 label24 label25 label26 label27 label28 label29 label30 label31 label32 label33 label34 label35 label36 label37 label38 label39 label40 label41 label42 label43 label44 label45 label46 label47 label48 label49 label50 label51 label52 label53 [SEP] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] 
[Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding] [Padding]
INFO:tensorflow:input_ids: 101 11364 1110 170 1514 15816 5318 1104 1103 2099 1104 170 12784 8050 16358 1665 2443 1103 18107 1105 1103 15442 18741 1104 1103 11364 2228 1103 1149 12062 27053 18107 1193 165 113 1696 1111 1231 4487 2316 11234 1116 165 114 1105 15442 1193 27053 165 113 1696 1111 19044 165 114 1107 1142 2998 1195 186 27280 6120 1103 18107 1105 15442 18741 1104 1103 11364 1107 170 12784 8050 16358 1665 2443 2133 15029 1132 4901 1112 170 185 8586 2142 1553 1965 1113 1103 4261 1165 2393 10559 1161 1110 1215 1112 1103 2967 2469 5471 102 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:token_label_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label_ids: 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:fit_labelspace_positions: 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156
INFO:tensorflow:fit_docspace_positions: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319
INFO:tensorflow:{(0, 1): [0, 1], (0, 13): [0, 1], (0, 26): [0, 1], (1, 13): [0, 1], (1, 26): [0, 1], (13, 26): [0, 1], (26, 51): [1, 0], (0, 28): [1, 0], (26, 16): [1, 0], (13, 3): [1, 0], (13, 29): [1, 0], (13, 34): [1, 0], (1, 14): [1, 0], (1, 45): [1, 0], (13, 32): [1, 0], (1, 27): [1, 0], (26, 21): [1, 0], (1, 24): [1, 0]}
INFO:tensorflow:len of (fit_labelspace_positions): 54
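The per-example dicts in the logs above (e.g. `{(3, 5): [0, 1], (5, 32): [1, 0], (3, 34): [1, 0]}`) pair two co-occurring labels with target `[0, 1]` and pair a present label with an absent one for target `[1, 0]`. A minimal sketch of such a pair builder follows; the function name, the negative-sampling rate, and the use of `random.sample` are assumptions for illustration, not LACO's actual implementation:

```python
import itertools
import random

def build_label_pairs(label_ids, num_neg_per_pos=1, seed=0):
    """Sketch of the pair construction visible in the log output.

    Every pair of co-occurring labels gets target [0, 1]; for contrast,
    each positive label is paired with randomly drawn absent labels,
    with target [1, 0]. (Hypothetical helper, not the repo's code.)
    """
    rng = random.Random(seed)
    positives = [i for i, v in enumerate(label_ids) if v == 1]
    negatives = [i for i, v in enumerate(label_ids) if v == 0]
    pairs = {}
    # Both labels present together in this example -> [0, 1]
    for a, b in itertools.combinations(positives, 2):
        pairs[(a, b)] = [0, 1]
    # Present label vs. sampled absent label -> [1, 0]
    for p in positives:
        for n in rng.sample(negatives, num_neg_per_pos):
            pairs[(p, n)] = [1, 0]
    return pairs
```

For the first example above (labels 3 and 5 active out of 54), this yields one positive pair `(3, 5) -> [0, 1]` plus sampled negative pairs, matching the shape of the logged dicts.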
INFO:tensorflow:***** Running evaluation *****
INFO:tensorflow: Num examples = 10 (10 actual, 0 padding)
INFO:tensorflow: Batch size = 2

INFO:tensorflow:*** Features ***
is_training: False
INFO:tensorflow: name = fit_docspace_positions, shape = (None, 266)
INFO:tensorflow: name = fit_labelspace_positions, shape = (None, 54)
INFO:tensorflow: name = input_ids, shape = (None, 320)
INFO:tensorflow: name = input_mask, shape = (None, 320)
INFO:tensorflow: name = is_real_example, shape = (None,)
INFO:tensorflow: name = label_ids, shape = (None, 54)
INFO:tensorflow: name = pair, shape = (None, 2)
INFO:tensorflow: name = pair_target, shape = (None, 2)
INFO:tensorflow: name = segment_ids, shape = (None, 320)
INFO:tensorflow: name = token_label_ids, shape = (None, 320)
........

File "E:/dululu/LACO-main-11171651/LACO-main/ie/train_main_plcp.py", line 45, in
ot_clf_process(job_id, schema_set, train_data_path, train_para)
File "E:/dululu/LACO-main-11171651/LACO-main/ie/train_main_plcp.py", line 29, in ot_clf_process
predicate_classification_model_train(job_id, schema_set, train_data_path, train_para)
File "E:\dululu\LACO-main-11171651\LACO-main\ie\src\run_predicate_classification_plcp.py", line 1096, in predicate_classification_model_train
run_pred()
File "E:\dululu\LACO-main-11171651\LACO-main\ie\src\run_predicate_classification_plcp.py", line 1010, in run_pred
result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
File "D:\ProgramData\Anaconda3\envs\py36wd\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 467, in evaluate
name=name)
File "D:\ProgramData\Anaconda3\envs\py36wd\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 510, in _actual_eval
return _evaluate()
File "D:\ProgramData\Anaconda3\envs\py36wd\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 492, in _evaluate
self._evaluate_build_graph(input_fn, hooks, checkpoint_path))
File "D:\ProgramData\Anaconda3\envs\py36wd\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1531, in _evaluate_build_graph
self._call_model_fn_eval(input_fn, self.config))
File "D:\ProgramData\Anaconda3\envs\py36wd\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1567, in _call_model_fn_eval
config)
File "D:\ProgramData\Anaconda3\envs\py36wd\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1163, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "E:\dululu\LACO-main-11171651\LACO-main\ie\src\run_predicate_classification_plcp.py", line 837, in model_fn
eval_metrics = metric_fn(token_label_predictions, num_labels, token_label_ids, fit_labelspace_positions, token_label_ids_labelspace, token_label_per_example_loss, is_real_example)
File "E:\dululu\LACO-main-11171651\LACO-main\ie\src\run_predicate_classification_plcp.py", line 813, in metric_fn
pos_indices_list, average="micro")
File "E:\dululu\LACO-main-11171651\LACO-main\ie\src\bert\tf_metrics.py", line 42, in precision
labels, predictions, num_classes, weights)
File "D:\ProgramData\Anaconda3\envs\py36wd\lib\site-packages\tensorflow\python\ops\metrics_impl.py", line 264, in _streaming_confusion_matrix
if predictions.get_shape().ndims > 1:
TypeError: '>' not supported between instances of 'NoneType' and 'int'

Process finished with exit code -1073741819 (0xC0000005)

This error is reported at the end of the run when I execute "test_main_plcp.py".
I tried several fixes suggested online, but none of them worked.
I haven't found a solution yet.
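The failure can be reduced to one Python-level fact: when a tensor's static shape is unknown, `get_shape().ndims` returns `None`, and in Python 3 comparing `None > 1` raises exactly the TypeError in the traceback. A minimal TensorFlow-free reproduction:

```python
# Minimal reproduction of the failure mode, independent of TensorFlow.
# For a tensor with unknown static shape, predictions.get_shape().ndims
# is None, and in Python 3 `None > 1` raises a TypeError (Python 2
# silently evaluated such comparisons).
def ndims_check(ndims):
    # Mirrors the failing line in _streaming_confusion_matrix.
    return ndims > 1

try:
    ndims_check(None)  # unknown static shape
except TypeError as exc:
    print(exc)  # '>' not supported between instances of 'NoneType' and 'int'

# A tensor with a known rank passes the check:
print(ndims_check(2))  # True
```

A common workaround (an assumption, not verified against this repo) is to give the predictions tensor a static rank before the metric call, e.g. `predictions = tf.reshape(predictions, [-1, 54])`, so that `get_shape().ndims` becomes 2 and the comparison succeeds.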
----------------------------The last part of the run results is as follows----------------------------------------------
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:*** Features ***
INFO:tensorflow: name = fit_docspace_positions, shape = (?, 266)
INFO:tensorflow: name = fit_labelspace_positions, shape = (?, 54)
INFO:tensorflow: name = input_ids, shape = (?, 320)
INFO:tensorflow: name = input_mask, shape = (?, 320)
INFO:tensorflow: name = is_real_example, shape = (?,)
INFO:tensorflow: name = label_ids, shape = (?, 54)
INFO:tensorflow: name = pair, shape = (?, 2)
INFO:tensorflow: name = pair_target, shape = (?, 2)
INFO:tensorflow: name = segment_ids, shape = (?, 320)
INFO:tensorflow: name = token_label_ids, shape = (?, 320)
WARNING:tensorflow:From E:\LACO\ie\src\bert\modeling.py:171: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

WARNING:tensorflow:From E:\LACO\ie\src\bert\modeling.py:172: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.

WARNING:tensorflow:From E:\LACO\ie\src\bert\modeling.py:409: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.

WARNING:tensorflow:From E:\LACO\ie\src\bert\modeling.py:490: The name tf.assert_less_equal is deprecated. Please use tf.compat.v1.assert_less_equal instead.

WARNING:tensorflow:From E:\LACO\ie\src\bert\modeling.py:671: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
WARNING:tensorflow:From D:\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\layers\core.py:187: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use layer.__call__ method instead.
WARNING:tensorflow:From D:\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\ops\nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From D:\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.init (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From E:\LACO\ie\src\run_predicate_classification_plcp_test.py:781: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.

Process finished with exit code -1073741819 (0xC0000005)

ERROR from CLCE model

Hello! I ran the training code as instructed but hit this bug: ERROR from CLCE model. Can you help me with it? Thanks.

**if predictions.get_shape().ndims > 1: TypeError: '>' not supported between instances of 'NoneType' and 'int'**

Traceback (most recent call last):
File "ie/train_main_plcp.py", line 45, in
ot_clf_process(job_id, schema_set, train_data_path, train_para)
File "ie/train_main_plcp.py", line 29, in ot_clf_process
predicate_classification_model_train(job_id, schema_set, train_data_path, train_para)
File "/export1/scratch/qinjie/qinjie/Bert-MLC/LACO/LACO-main/ie/src/run_predicate_classification_plcp.py", line 1087, in predicate_classification_model_train
run_pred()
File "/export1/scratch/qinjie/qinjie/Bert-MLC/LACO/LACO-main/ie/src/run_predicate_classification_plcp.py", line 972, in run_pred
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 473, in train_and_evaluate
return executor.run()
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 613, in run
return self.run_local()
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 714, in run_local
saving_listeners=saving_listeners)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1192, in _train_model_default
saving_listeners)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1484, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 754, in run
run_metadata=run_metadata)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1252, in run
run_metadata=run_metadata)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1353, in run
raise six.reraise(*original_exc_info)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/six.py", line 719, in reraise
raise value
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1338, in run
return self._sess.run(*args, **kwargs)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1419, in run
run_metadata=run_metadata))
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 594, in after_run
if self._save(run_context.session, global_step):
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 619, in _save
if l.after_save(session, step):
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 519, in after_save
self._evaluate(global_step_value) # updates self.eval_result
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 539, in _evaluate
self._evaluator.evaluate_and_export())
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 920, in evaluate_and_export
hooks=self._eval_spec.hooks)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 477, in evaluate
name=name)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 519, in _actual_eval
return _evaluate()
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 501, in _evaluate
self._evaluate_build_graph(input_fn, hooks, checkpoint_path))
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1501, in _evaluate_build_graph
self._call_model_fn_eval(input_fn, self.config))
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1537, in _call_model_fn_eval
features, labels, ModeKeys.EVAL, config)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1146, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/export1/scratch/qinjie/qinjie/Bert-MLC/LACO/LACO-main/ie/src/run_predicate_classification_plcp.py", line 835, in model_fn
eval_metrics = metric_fn(token_label_predictions, num_labels, token_label_ids, fit_labelspace_positions, token_label_ids_labelspace, token_label_per_example_loss, is_real_example)
File "/export1/scratch/qinjie/qinjie/Bert-MLC/LACO/LACO-main/ie/src/run_predicate_classification_plcp.py", line 811, in metric_fn
pos_indices_list, average="micro")
File "/export1/scratch/qinjie/qinjie/Bert-MLC/LACO/LACO-main/ie/src/bert/tf_metrics.py", line 42, in precision
labels, predictions, num_classes, weights)
File "/export/scratch/qinjie/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/metrics_impl.py", line 265, in _streaming_confusion_matrix
**if predictions.get_shape().ndims > 1:
TypeError: '>' not supported between instances of 'NoneType' and 'int'**

The input data

Hello! Is the error above caused by the size of the input data? Where can I change the input data size?
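For context, the failure itself is just Python 3 refusing to compare `None` with an `int`: `metrics_impl.py` evaluates `predictions.get_shape().ndims > 1`, and `ndims` is `None` whenever the tensor's static rank is unknown, which points at the shape of the predictions tensor rather than at the size of the input data. A minimal sketch of the mechanism (plain Python, not the repo's code; the TF fix mentioned in the comments is an assumption about a typical TF 1.x workaround, not something confirmed by this issue):

```python
# Reproduce the comparison that fails inside _streaming_confusion_matrix:
# when a tensor's static rank is unknown, get_shape().ndims is None.
ndims = None  # stands in for predictions.get_shape().ndims

try:
    ndims > 1
except TypeError as e:
    # Same message as in the traceback above.
    print(e)

# A common TF 1.x workaround (hypothetical for this repo) is to give the
# predictions tensor a known static rank before calling the metric, e.g.:
#   predictions = tf.reshape(predictions, [-1])   # rank becomes 1
#   # or: predictions.set_shape([None])           # declare rank without reshaping
```

If that diagnosis holds, resizing the input data would not change anything; only fixing the static shape of `token_label_predictions` before it reaches `tf_metrics.precision` would.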
