https://github.com/facert/tumblr_spider is another developer's project: you only need to enter a single username and it automatically pulls all of that user's video links. The two projects would work very well together. tumblr-crawler only accepts one username per line and then downloads all of that user's published videos. The idea is to take the video links extracted by tumblr_spider, use JavaScript regex/string matching to strip each link down to just the username ID, and then download with tumblr-crawler. It would be great if the author could add direct support for this kind of link. In the meantime, a friend who knows JavaScript wrote a script for me that extracts the user ID from each line of video links, so the result can be fed to tumblr-crawler for downloading.
var readline = require('readline');
var fs = require('fs');
var os = require('os');

var fReadName = 'sites.txt';    // input: one video link per line
var fWriteName = 'nishuo.txt';  // output: one username per line
var fRead = fs.createReadStream(fReadName);
var fWrite = fs.createWriteStream(fWriteName);

var objReadline = readline.createInterface({
    input: fRead,
    // Another way to copy: set `output: fWrite`, then you don't need to call
    // fWrite.write(line) inside the 'line' handler. Recommended when you only
    // want a plain copy, but the index count runs one extra at end of file (sodino.com).
    // output: fWrite,
    // terminal: true
});

var b = [];     // usernames seen so far, used for de-duplication
var index = 1;  // line counter

objReadline.on('line', (line) => {
    var a = line.split('/');
    var tmp = a[4];  // the username segment of the video link
    if (b.indexOf(tmp) == -1 && tmp != undefined) {
        fWrite.write(tmp + os.EOL);  // write each username on its own line
    }
    b.push(tmp);
    index++;
});

objReadline.on('close', () => {
    console.log('readline close...');
    console.log(b);  // log the collected usernames once reading is finished
});
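For anyone trying to reproduce this, a rough usage sketch follows. The file names sites.txt and nishuo.txt come from the script above; the script file name extract_usernames.js is only an example, and the last step assumes tumblr-crawler reads its list of usernames from sites.txt as described in its README:

node extract_usernames.js   # writes the de-duplicated usernames to nishuo.txt
cp nishuo.txt sites.txt     # give tumblr-crawler the username list it expects
# then run tumblr-crawler as described in its README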
My friend's JS script can convert video URLs into user IDs, which can then be pasted into this project's text file to download the videos, but that is a bit cumbersome. It would be great if the author could add the ability to download from a video URL directly ~ looking forward to it benefiting everyone :)
Does this also download reblogged posts?