pyppeteer.errors.NetworkError: Protocol error Target.activateTarget: Target closed. #171
Same problem, and following that suggestion I changed my code to this:

```python
async def spider(self, url):
    self.log.detail_info(f"[*] Start crawl {url}")
    self.log.detail_info(f"[*] {url} started at {time.strftime('%X')}")
    # Handle error: pyppeteer.errors.NetworkError: Protocol error Runtime.callFunctionOn: Target closed.
    browser = await launch({'args': ['--no-sandbox']})
    page = await browser.newPage()
    await page.setViewport(self.set_view_port_option)
    await page.goto(url, self.goto_option)
    title = await page.title()
    filename = await self.translate_word(title)
    await page.evaluate(scroll_page_js)
    pdf = await page.pdf(self.pdf_option)
    await browser.close()
    self.log.detail_info(f"[*] {url} finished at {time.strftime('%X')}")
    return filename, pdf
```

But it doesn't work for me. That answer said the error would happen if you call browser.close() while browser.newPage() has yet to resolve. |
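As an aside, the race that answer describes (closing the browser before newPage() has resolved) can be reproduced and avoided without a real browser. FakeBrowser below is a hypothetical stand-in for pyppeteer's Browser, used only so the sketch runs anywhere:

```python
import asyncio

class FakeBrowser:
    """Hypothetical stand-in for pyppeteer's Browser, for illustration only."""
    def __init__(self):
        self.closed = False

    async def newPage(self):
        await asyncio.sleep(0.05)                 # simulate a connection round-trip
        if self.closed:
            raise RuntimeError("Target closed.")  # mimics pyppeteer's NetworkError
        return "page"

    async def close(self):
        self.closed = True

async def racy():
    browser = FakeBrowser()
    task = asyncio.ensure_future(browser.newPage())  # scheduled, but not awaited
    await browser.close()                            # close before newPage resolves
    try:
        await task
        return "ok"
    except RuntimeError:
        return "Target closed."

async def safe():
    browser = FakeBrowser()
    try:
        page = await browser.newPage()  # fully awaited before any close()
        return "ok"
    finally:
        await browser.close()

print(asyncio.run(racy()))  # Target closed.
print(asyncio.run(safe()))  # ok
```

The try/finally in safe() also guarantees the browser is closed even when a later page operation raises.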
Has your problem been solved? |
I have solved my problem by rewriting the JS code in Python like this:

```python
async def scroll_page(page):
    cur_dist = 0
    height = await page.evaluate("() => document.body.scrollHeight")
    while cur_dist < height:
        await page.evaluate("window.scrollBy(0, 500);")
        await asyncio.sleep(0.1)
        cur_dist += 500
```

I guess the problem is that pyppeteer has a default running-timeout when executing JS code, taking a screenshot, or saving a page to PDF. According to the results of my tests, this default timeout is about 20 seconds. So when your work has been running for more than 20 seconds, that is, when it times out, it is automatically closed, independently of the headless timeout settings you set before. |
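For what it's worth, that helper can be exercised without a browser by substituting a stand-in page object. FakePage here is hypothetical, only to make the sketch self-contained:

```python
import asyncio

class FakePage:
    """Hypothetical stand-in for a pyppeteer Page, for offline testing only."""
    def __init__(self, height):
        self.height = height
        self.scrolled = 0

    async def evaluate(self, js):
        if 'scrollHeight' in js:
            return self.height   # () => document.body.scrollHeight
        self.scrolled += 500     # mimic window.scrollBy(0, 500)

async def scroll_page(page):
    # Same loop as above, written as a while-condition instead of if/break.
    cur_dist = 0
    height = await page.evaluate("() => document.body.scrollHeight")
    while cur_dist < height:
        await page.evaluate("window.scrollBy(0, 500);")
        await asyncio.sleep(0.01)  # shortened from 0.1 s to keep the demo fast
        cur_dist += 500

page = FakePage(height=2200)
asyncio.run(scroll_page(page))
print(page.scrolled)  # 2500 -- five 500 px steps cover the 2200 px page
```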
In my case the cause was an incorrect version of the websockets package; the problem went away after replacing websockets 7.0 with websockets 6.0. |
In case someone ends up here after having wasted hours... @dp24678 kind of hinted at it but did not explain properly: the issue is indeed with the websockets package. Downgrading back to 6.0 helped. Run pip3 install websockets==6.0 --force-reinstall and everything should be okay. =) EDIT 2019-12-26: People also recommend |
@boramalper I just ran into this issue, with a similar problem scraping a webpage for longer than 20 seconds. My initial traceback was "pyppeteer.errors.NetworkError: Protocol Error (Runtime.callFunctionOn): Session closed. Most likely the page has been closed.", which led me to #178. I really wasn't ready to modify the codebase of pyppeteer according to the above "solution", which to me is really more of a hack. So I tried some other things and got the traceback which led me here. Downgrading from websockets==7.0 to 6.0 instantly fixed this and should be the solution for anyone running into issue #171 or #178. Thanks!! |
Same here, as described by @GrilledChickenThighs. Should this be fixed somehow in this repo? |
This suggestion saved me after hours of trying other approaches. Thanks! |
@miyakogi would you consider transferring ownership of this repo to someone who's willing to maintain it? pyppeteer is a valuable tool but it's effectively broken while this issue is there. |
in the meantime you could use my fork which has the websockets fix (without downgrading) and supports the latest chrome revisions on windows: https://github.com/Francesco149/pyppeteer
The import pyppdf.patch_pyppeteer fix above worked for me when parsing a website and collecting links in rapid fashion. |
duplicate of #62 |
Same here!!! The waitForSelector function raises pyppeteer.errors.NetworkError: Protocol error Target.activateTarget: Target closed. websockets 6.0 is OK. |
I had the same problem which was resolved by
I resolved the same issue with your solution, downgrading to 6.0. However, for your information, 8.1 still has the same issue: that was the version I had installed when I first faced this problem, so the bug has yet to be resolved by the author. |
@mostafamirzaee It's actually not an issue in the websockets library per se, but rather in Chrome and pyppeteer. Chrome doesn't respond to pings over websockets, making websockets (rightly) believe Chrome is dead, so it closes the connection. Older versions of websockets simply didn't support sending pings to the server, hence why the problem doesn't show itself there. The fix is to tell websockets not to send any pings at all, and this fix has been applied to pyppeteer2, the ongoing update to this library. @boramalper since your comment is so far up, would you mind including this info? |
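For reference, the ping-disabling workaround described above (and what pyppdf.patch_pyppeteer monkeypatches into pyppeteer) amounts to forcing ping_interval=None and ping_timeout=None on websockets' connect call. Below is a minimal sketch of the pattern, not pyppeteer2's exact code: fake_connect is a stand-in for websockets.client.connect so the example runs on its own, whereas the real patch wraps pyppeteer.connection.websockets.client.connect.

```python
def fake_connect(url, **kwargs):
    # Stand-in for websockets.client.connect: just echo the keyword options
    # back instead of opening a real websocket.
    return kwargs

def disable_keepalive(connect):
    """Wrap a websockets-style connect() so it never sends keepalive pings."""
    def patched(url, **kwargs):
        # Chrome never answers pings, so with keepalive enabled (the default
        # since websockets 7.0) the client declares the connection dead and
        # closes it after roughly 20 seconds.
        kwargs['ping_interval'] = None
        kwargs['ping_timeout'] = None
        return connect(url, **kwargs)
    return patched

patched_connect = disable_keepalive(fake_connect)
opts = patched_connect('ws://127.0.0.1:9222/devtools/browser')
print(opts['ping_interval'], opts['ping_timeout'])  # None None
```

Applying the same wrapper to the real connect function (reassigning pyppeteer.connection.websockets.client.connect) is essentially what the patch module does.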
I ran the demo on Windows 7, Python 3.7, and have downloaded Chromium with the command pyppeteer-install. The script is:

```python
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch(headless=False)
    page = await browser.newPage()
    await page.goto('https://www.google.com')
    await page.screenshot({'path': 'example.png'})
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
```

but it raises the error below:
```
Traceback (most recent call last):
  File "F:/run.py", line 28, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "C:\Python37\Lib\asyncio\base_events.py", line 568, in run_until_complete
    return future.result()
  File "F:/run.py", line 25, in main
    await page.screenshot({'path': 'example.png'})
  File "C:\Users\admiunfd\Envs\gigipy3\lib\site-packages\pyppeteer\page.py", line 1227, in screenshot
    return await self._screenshotTask(screenshotType, options)
  File "C:\Users\admiunfd\Envs\gigipy3\lib\site-packages\pyppeteer\page.py", line 1232, in _screenshotTask
    'targetId': self._target._targetId,
pyppeteer.errors.NetworkError: Protocol error Target.activateTarget: Target closed.
```

Process finished with exit code 1