unhandled error FILE_ENDED from PullStream.js #48
Thanks - errors in streams are always a bit annoying (since they don't propagate like promises). You need to attach an error handler to prevent the process from terminating. Was this a valid zip file? If so we would need to investigate further - technically the …
I'm doing something similar and see the same error when trying to extract from a ZIP containing large (> 4 GB) files. Testing against smaller files works fine. I'm able to unzip these larger ZIPs with other tools without error, so I don't think it's a corruption issue; perhaps it's something related to ZIP64. Does node-unzipper support ZIP64?
@ZJONSSON I'm getting quite a few of these errors. EDIT: an update in case it helps others... I eventually worked out that the problem is caused by a long-running processing operation on the unzipped file (which takes longer when the Node process is busy with other work) while a data pipe is still open all the way back to the remote source. Perhaps some sort of timeout occurs and breaks the pipe before the data has been fully processed. The solution is to break and restart the pipe after the remote fetch, i.e. pipe the remote data (still zipped) into a local file and then stream from the local file into …
My app has an SFTP service which receives a ZIP file. Upon completion of the upload I fire off a task which unzips the file. Last night the upload was aborted part way through. This resulted in the node process terminating due to an unhandled error.
I am going to look at the SFTP handler to see if I can catch the aborted upload at that point and avoid any attempt at doing the unzip. However, it seems that unzipper should not cause a server crash due to a truncated zip file.
Having had a quick look, it seems to me that PullStream.js is detecting the unexpected end of file and emitting an error which is not handled in the calling code. My error dump is as follows:
```
server stopping { Error: Unhandled "error" event. (FILE_ENDED)
    at PullStream.emit (events.js:185:19)
    at PullStream.onerror (_stream_readable.js:652:12)
    at emitOne (events.js:115:13)
    at PullStream.emit (events.js:210:7)
    at PullStream.pull (/Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/unzipper/lib/PullStream.js:66:14)
    at emitOne (events.js:115:13)
    at PullStream.emit (events.js:210:7)
    at PullStream.<anonymous> (/Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/unzipper/lib/PullStream.js:19:10)
    at emitNone (events.js:110:20)
    at PullStream.emit (events.js:207:7)
    at finishMaybe (_stream_writable.js:587:14)
    at endWritable (_stream_writable.js:595:3)
    at PullStream.Writable.end (_stream_writable.js:546:5)
    at ReadStream.onend (_stream_readable.js:584:10)
    at Object.onceWrapper (events.js:314:30)
    at emitNone (events.js:110:20)
    at ReadStream.emit (events.js:207:7)
  context: 'FILE_ENDED' }
server stopping TypeError: cb is not a function
    at afterWrite (_stream_writable.js:438:3)
    at onwrite (_stream_writable.js:429:7)
    at /Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/unzipper/lib/PullStream.js:59:60
    at afterWrite (_stream_writable.js:438:3)
    at /Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/async-listener/glue.js:188:31
    at _combinedTickCallback (internal/process/next_tick.js:144:20)
    at Immediate._tickCallback (internal/process/next_tick.js:180:9)
    at Immediate._onImmediate (/Users/ianbale/Source Code/noderail-data-feed-manager/node_modules/async-listener/glue.js:188:31)
    at runCallback (timers.js:781:20)
    at tryOnImmediate (timers.js:743:5)
    at processImmediate [as _immediateCallback] (timers.js:714:5)
DataManagerFeedProducer closed...
kafka dataFeedClient closed...
LogsProducer closed...
kafka LogsClient closed...
opsProducer closed...
kafka opsClient closed...
server stopping 1
```