Hi,

I followed your approach to train our own model, but after more than five hours of training there is still no result. The corpus size is about 200,000. I'm not sure whether something went wrong somewhere. Also, have you ever compared this statistically trained segmenter against CRF? How do the results compare?

./train_c weibo_data_process model

Everything is set to the defaults.

Output log:
separator: [/]
training file "weibo_data_process" scanned

Then it just stays like this. Is it stuck? Is there supposed to be more log output, or does it always look like this? Does training normally take this long on a corpus of about 200,000?

Thanks.
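One way to tell whether train_c is stuck rather than just slow is to inspect the running process from the outside. A minimal sketch, assuming a Linux host and that train_c is the only matching process (nothing here is specific to this project):

```sh
# If the process is pegging a CPU core, it is most likely still computing;
# near-zero CPU usage suggests a hang or deadlock.
top -p "$(pgrep -f train_c)"

# Dump the current stack trace to see which function it is spending time in.
gdb --batch -ex bt -p "$(pgrep -f train_c)"
```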
One more thing: when I switch to a smaller dataset, training runs, but it aborts with the following error, and I don't know why:

*** Error in `./train_c': free(): invalid next size (fast): 0x0000000003946470 ***
Aborted (core dumped)
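For context, glibc's "free(): invalid next size" abort means the heap metadata next to the freed block was corrupted by an earlier out-of-bounds write, so the actual bug is upstream of the free() call. A hedged sketch of how one might locate it, reusing the invocation above (valgrind and the standard gcc/clang -fsanitize=address flag; no project-specific build flags are implied):

```sh
# valgrind reports the invalid write at its source, not just at the abort site.
valgrind ./train_c weibo_data_process model

# If rebuilding from source is an option, AddressSanitizer is faster and
# prints the exact out-of-bounds access (added to the existing compile flags):
#   g++ -g -fsanitize=address ...
```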