
Merge pull request #308 from Xonshiz/xonshiz-2022-04-16
Fix for #299
Xonshiz authored Apr 16, 2022
2 parents e3218a2 + 7826b1c commit 1929ac8
Showing 36 changed files with 113 additions and 69 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -81,3 +81,4 @@ dist/*
comic_dl/build/*
comic_dl/dist/*
/comic_dl.egg-info
docs/build/*
3 changes: 2 additions & 1 deletion Changelog.md
@@ -125,4 +125,5 @@
- Removed setup2.py file [2021.09.05]
- Checking for existing CBZ/PDF files before downloading them again [Fix for #247] [2021.09.05]
- Fix for chapter download at readmanganato
- Added support for webtoons.com (No audio download yet) [Fix for #284] [2021.09.05.1]
- Added support for webtoons.com (No audio download yet) [Fix for #284] [2021.09.05.1]
- Fix for #299 [2022.04.16]
5 changes: 4 additions & 1 deletion ReadMe.md
@@ -178,12 +178,13 @@ Currently, the script supports these arguments :
-pid, --page-id Takes the Page ID to download a particular "chapter number" of a manga.
--comic Add this after -i if you are inputting a comic id or the EXACT comic name.
[ Ex : -i "Deadpool Classic" --comic ]
-comic-search, --search-comic Searches for a comic through the scraped data from ReadComicOnline.to
-comic-search, --search-comic Searches for a comic through the scraped data from ReadComicOnline.li
[ Ex : -comic-search "Deadpool" ]
-comic-info, --comic-info Lists all the information about the given comic (argument can be either comic id or the exact comic name).
[ Ex : -comic-info "Deadpool Classic" ] or [ Ex : -comic-info 3865 ]
--update Updates the comic database for the given argument.
[ Ex: --update "Deadpool Classic" ] or [ Ex: --update "https://readcomiconline.li/Comic/Deadpool-Classic" ]
-cookie, --cookie Passes a cookie to be used throughout the session.
```

## Language Codes:
@@ -402,6 +403,8 @@ If you're here to make suggestions, please follow the basic syntax to post a req
This should be enough, but it'll be great if you can add more ;)

# Notes
* Readcomiconline.li has been a pain to work with and it might block you out a lot. Now you can use `--cookie` parameter to pass a working cookie. You can retrieve the cookie by checking network tab for `Cookie` value in request headers or by using an external browser plugin. Read more about this on [#299](https://github.com/Xonshiz/comic-dl/issues/299).

* comic.naver.com has korean characters and some OS won't handle those characters. So, instead of naming the file folder with the series name in korean, the script will download and name the folder with the comic's ID instead.

* Bato.to requires you to "log in" to read some chapters. So, to be on a safe side, provide the username/password combination to the script via "-p" and "-u" arguments.
2 changes: 1 addition & 1 deletion comic_dl/__version__.py
@@ -1,4 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

__version__ = "2022.04.09"
__version__ = "2022.04.16"
31 changes: 9 additions & 22 deletions comic_dl/comic_dl.py
@@ -47,6 +47,7 @@ def __init__(self, argv):
help='Tells the script which Quality of image to download (High/Low).', default='True')

parser.add_argument('-i', '--input', nargs=1, help='Inputs the URL to comic.')
parser.add_argument('-cookie', '--cookie', nargs=1, help='Passes cookie (text format) to be used throughout the session.')

# Chr1st-oo, added arguments
parser.add_argument("--comic", action="store_true", help="Add this after -i if you are inputting a comic id or the EXACT comic name.")
@@ -210,6 +211,7 @@ def __init__(self, argv):
conversion = data["conversion"]
keep_files = data["keep"]
image_quality = data["image_quality"]
manual_cookie = data["cookie"]
pbar_comic = tqdm(data["comics"], dynamic_ncols=True, desc="[Comic-dl] Auto processing", leave=True,
unit='comic')
for elKey in pbar_comic:
@@ -227,7 +229,8 @@ def __init__(self, argv):
chapter_range=download_range, conversion=conversion,
keep_files=keep_files, image_quality=image_quality,
username=el["username"], password=el["password"],
comic_language=el["comic_language"])
comic_language=el["comic_language"],
cookie=manual_cookie)
except Exception as ex:
pbar_comic.write('[Comic-dl] Auto processing with error for %s : %s ' % (elKey, ex))
pbar_comic.set_postfix()
@@ -246,6 +249,7 @@ def __init__(self, argv):
print("Run the script with --help to see more information.")
else:
print_index = False
manual_cookie = None
if args.print_index:
print_index = True
if not args.sorting:
@@ -260,6 +264,8 @@
args.keep = ["True"]
if not args.quality or args.quality == "True":
args.quality = ["Best"]
if args.cookie:
manual_cookie = args.cookie[0]

# user_input = unicode(args.input[0], encoding='latin-1')
user_input = args.input[0]
@@ -281,32 +287,13 @@ def __init__(self, argv):
chapter_range=args.range, conversion=args.convert[0],
keep_files=args.keep[0], image_quality=args.quality[0],
username=args.username[0], password=args.password[0],
comic_language=args.manga_language[0], print_index=print_index)
comic_language=args.manga_language[0], print_index=print_index,
cookie=manual_cookie)
end_time = time.time()
total_time = end_time - start_time
print("Total Time Taken To Complete : %s" % total_time)
sys.exit()

# def string_formatter(self, my_string):
# temp = ""
# for char in my_string:
# print("Temp right now : {0}".format(char))
# # temp = temp + str(char).replace(char, self.to_utf_8(char))
# temp = temp + str(char).replace(char, self.to_utf_8(char))
#
# print("Temp is : {0}".format(temp))
#
#
# def to_utf_8(self, char):
# print("Received Key : {0}".format(char))
# char_dict = {
# 'ë': '%C3%AB'
# }
# try:
# return char_dict[char]
# except KeyError:
# return char

@staticmethod
def version():
print(__version__)
3 changes: 2 additions & 1 deletion comic_dl/honcho.py
@@ -74,6 +74,7 @@ def checker(self, comic_url, download_directory, chapter_range, **kwargs):
sorting = kwargs.get("sorting_order")
comic_language = kwargs.get("comic_language")
print_index = kwargs.get("print_index")
manual_cookies = kwargs.get("cookie", None)

if log_flag is True:
logging.basicConfig(format='%(levelname)s: %(message)s', filename="Error Log.log", level=logging.DEBUG)
@@ -99,7 +100,7 @@ def checker(self, comic_url, download_directory, chapter_range, **kwargs):
chapter_range=chapter_range, conversion=kwargs.get("conversion"),
keep_files=kwargs.get("keep_files"),
image_quality=kwargs.get("image_quality"),
print_index=print_index)
print_index=print_index, manual_cookies=manual_cookies)
return 0
elif domain in ["www.comic.naver.com", "comic.naver.com"]:
comicNaver.ComicNaver(manga_url=comic_url, logger=logging, current_directory=current_directory,
68 changes: 58 additions & 10 deletions comic_dl/sites/readcomicOnlineli.py
@@ -1,5 +1,6 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import base64

from comic_dl import globalFunctions
import re
@@ -12,6 +13,7 @@ class ReadComicOnlineLi(object):
def __init__(self, manga_url, download_directory, chapter_range, **kwargs):

current_directory = kwargs.get("current_directory")
self.manual_cookie = kwargs.get("manual_cookies", None)
conversion = kwargs.get("conversion")
keep_files = kwargs.get("keep_files")
self.logging = kwargs.get("log_flag")
@@ -21,6 +23,21 @@ def __init__(self, manga_url, download_directory, chapter_range, **kwargs):
self.print_index = kwargs.get("print_index")

url_split = str(manga_url).split("/")
self.appended_headers = {
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9',
'dnt': '1',
'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="100", "Google Chrome";v="100"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'document',
'sec-fetch-mode': 'navigate',
'sec-fetch-site': 'same-origin',
'sec-fetch-user': '?1',
'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.88 Safari/537.36'
}

if len(url_split) in [5]: # Sometimes, this value came out to be 6, instead of 5. Hmmmmmmmm weird.
# Removing "6" from here, because it caused #47
@@ -39,11 +56,12 @@ def __init__(self, manga_url, download_directory, chapter_range, **kwargs):
def single_chapter(self, comic_url, comic_name, download_directory, conversion, keep_files):
# print("Received Comic Url : {0}".format(comic_url))
print("Fooling CloudFlare...Please Wait...")
appended_headers = {
'referer': comic_url,
'Accept': "*/*",
'Cache-Control': 'no-cache'
}
if not comic_url.endswith("#1"):
comic_url += "#1"

if not self.appended_headers.get('cookie', None) and self.manual_cookie:
self.appended_headers['cookie'] = self.manual_cookie
self.appended_headers['referer'] = comic_url
chapter_number = str(comic_url).split("/")[5].split("?")[0].replace("-", " - ")

file_directory = globalFunctions.GlobalFunctions().create_file_directory(chapter_number, comic_name)
@@ -62,7 +80,7 @@ def single_chapter(self, comic_url, comic_name, download_directory, conversion,
print('Converted File already exists. Skipping.')
return 0

source, cookies = globalFunctions.GlobalFunctions().page_downloader(manga_url=comic_url, scrapper_delay=10, append_headers=appended_headers)
source, cookies = globalFunctions.GlobalFunctions().page_downloader(manga_url=comic_url, scrapper_delay=10, append_headers=self.appended_headers)

img_list = re.findall(r"lstImages.push\(\"(.*?)\"\);", str(source))

@@ -77,14 +95,16 @@ def single_chapter(self, comic_url, comic_name, download_directory, conversion,

links = []
file_names = []
print(img_list)
img_list = self.get_image_links(img_list)
for current_chapter, image_link in enumerate(img_list):
image_link = str(image_link).strip().replace("\\", "")

logging.debug("Image Link : %s" % image_link)
image_link = image_link.replace("=s1600", "=s0").replace("/s1600", "/s0") # Change low quality to best.

if str(self.image_quality).lower().strip() in ["low", "worst", "bad", "cancer", "mobile"]:
image_link = image_link.replace("=s0", "=s1600").replace("/s0", "/s1600")
image_link = image_link.replace("=s1600", "=s0").replace("/s1600", "/s0") # Change low quality to best.

current_chapter += 1
file_name = str(globalFunctions.GlobalFunctions().prepend_zeroes(current_chapter, len(img_list))) + ".jpg"
@@ -109,7 +129,10 @@ def name_cleaner(self, url):

def full_series(self, comic_url, comic_name, sorting, download_directory, chapter_range, conversion, keep_files):
print("Fooling CloudFlare...Please Wait...")
source, cookies = globalFunctions.GlobalFunctions().page_downloader(manga_url=comic_url, scrapper_delay=10)
if not self.appended_headers.get('cookie', None) and self.manual_cookie:
self.appended_headers['cookie'] = self.manual_cookie
self.appended_headers['referer'] = comic_url
source, cookies = globalFunctions.GlobalFunctions().page_downloader(manga_url=comic_url, scrapper_delay=10, append_headers=self.appended_headers)

all_links = []

@@ -157,7 +180,7 @@ def full_series(self, comic_url, comic_name, sorting, download_directory, chapte

if str(sorting).lower() in ['new', 'desc', 'descending', 'latest']:
for chap_link in all_links:
chap_link = "http://readcomiconline.li" + chap_link
chap_link = "https://readcomiconline.li" + chap_link
try:
self.single_chapter(comic_url=chap_link, comic_name=comic_name, download_directory=download_directory,
conversion=conversion, keep_files=keep_files)
@@ -172,7 +195,7 @@ def full_series(self, comic_url, comic_name, sorting, download_directory, chapte

elif str(sorting).lower() in ['old', 'asc', 'ascending', 'oldest', 'a']:
for chap_link in all_links[::-1]:
chap_link = "http://readcomiconline.to" + chap_link
chap_link = "https://readcomiconline.li" + chap_link
try:
self.single_chapter(comic_url=chap_link, comic_name=comic_name, download_directory=download_directory,
conversion=conversion, keep_files=keep_files)
@@ -186,3 +209,28 @@ def full_series(self, comic_url, comic_name, sorting, download_directory, chapte
globalFunctions.GlobalFunctions().addOne(comic_url)

return 0

def get_image_links(self, urls):
# JS logic extracted by : https://github.com/Xonshiz/comic-dl/issues/299#issuecomment-1098189279
temp = []
for url in urls:
print(url + '\n')
quality_ = None
if '=s0' in url:
url = url[:-3]
quality_ = '=s0'
else:
url = url[:-6]
quality_ = '=s1600'
# url = url.slice(4, 22) + url.slice(25);
url = url[4:22] + url[25:]
# url = url.slice(0, -6) + url.slice(-2);
url = url[0:-6] + url[-2:]
url = str(base64.b64decode(url).decode("utf-8"))
# url = url.slice(0, 13) + url.slice(17);
url = url[0:13] + url[17:]
# url = url.slice(0, -2) + (containsS0 ? '=s0' : '=s1600');
url = url[0:-2] + quality_
# return 'https://2.bp.blogspot.com/' + url;
temp.append('https://2.bp.blogspot.com/{0}'.format(url))
return temp
Binary file modified docs/build/doctrees/environment.pickle
Binary file not shown.
Binary file modified docs/build/doctrees/index.doctree
Binary file not shown.
Binary file modified docs/build/doctrees/list_of_arguments.doctree
Binary file not shown.
Binary file modified docs/build/doctrees/notes.doctree
Binary file not shown.
2 changes: 1 addition & 1 deletion docs/build/html/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 9e8fc3beea7f09e3cf4fc33f343b5bb2
config: b45d5957682d5eaec36dc53eff3f7485
tags: 645f666f9bcd5a90fca523b33c5a78b7
2 changes: 0 additions & 2 deletions docs/build/html/_sources/index.rst.txt
@@ -20,8 +20,6 @@ various Manga and Comic sites easily. You can search Manga from this
tool as well. Idea from
`youtube-dl <https://github.com/rg3/youtube-dl>`__.

If you’re looking for an application, or a UI for this, please move to :
`CoManga <https://github.com/Xonshiz/CoManga>`__

Don’t overuse this script. Support the developers of those websites
by disabling your adblock on their site. Advertisments pay for the
2 changes: 2 additions & 0 deletions docs/build/html/_sources/list_of_arguments.rst.txt
@@ -34,3 +34,5 @@ Currently, the script supports these arguments :
[ Ex : -comic-info "Deadpool Classic" ] or [ Ex : -comic-info 3865 ]
--update Updates the comic database for the given argument.
[ Ex: --update "Deadpool Classic" ] or [ Ex: --update "https://readcomiconline.li/Comic/Deadpool-Classic" ]
-cookie, --cookie Passes a cookie to be used throughout the session.

1 change: 1 addition & 0 deletions docs/build/html/_sources/notes.rst.txt
@@ -1,5 +1,6 @@
Notes
=====
- Readcomiconline.li has been a pain to work with and it might block you out a lot. Now you can use `--cookie` parameter to pass a working cookie. You can retrieve the cookie by checking network tab for `Cookie` value in request headers or by using an external browser plugin. Read more about this on `#299 <https://github.com/Xonshiz/comic-dl/issues/299>`__.

- comic.naver.com has korean characters and some OS won’t handle those
characters. So, instead of naming the file folder with the series
2 changes: 1 addition & 1 deletion docs/build/html/_static/documentation_options.js
@@ -1,6 +1,6 @@
var DOCUMENTATION_OPTIONS = {
URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'),
VERSION: '2022.04.09',
VERSION: '2022.04.16',
LANGUAGE: 'None',
COLLAPSE_INDEX: false,
BUILDER: 'html',
Expand Down
4 changes: 2 additions & 2 deletions docs/build/html/auto_download.html
@@ -6,7 +6,7 @@
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.17.1: http://docutils.sourceforge.net/" />

<title>Auto Download &#8212; comic-dl 2022.04.09 documentation</title>
<title>Auto Download &#8212; comic-dl 2022.04.16 documentation</title>
<link rel="stylesheet" type="text/css" href="_static/pygments.css" />
<link rel="stylesheet" type="text/css" href="_static/alabaster.css" />
<script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
@@ -82,13 +82,13 @@ <h1 class="logo"><a href="index.html">comic-dl</a></h1>

<h3>Navigation</h3>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="supported_sites.html">List of Supported Websites</a></li>
<li class="toctree-l1"><a class="reference internal" href="dependencies_installation.html">Dependencies Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="installation.html">Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="python_support.html">Python Support</a></li>
<li class="toctree-l1"><a class="reference internal" href="windows_binary.html">Windows Binary</a></li>
<li class="toctree-l1"><a class="reference internal" href="list_of_arguments.html">List of Arguments</a></li>
<li class="toctree-l1"><a class="reference internal" href="language_codes.html">Language Codes:</a></li>
<li class="toctree-l1"><a class="reference internal" href="language_codes.html#language-code-language">Language Code –&gt; Language</a></li>
<li class="toctree-l1"><a class="reference internal" href="using_the_search.html">Using The Search</a></li>
<li class="toctree-l1"><a class="reference internal" href="usage.html">Usage</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Auto Download</a></li>
2 changes: 1 addition & 1 deletion docs/build/html/dependencies_installation.html
@@ -6,7 +6,7 @@
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.17.1: http://docutils.sourceforge.net/" />

<title>Dependencies Installation &#8212; comic-dl 2022.04.09 documentation</title>
<title>Dependencies Installation &#8212; comic-dl 2022.04.16 documentation</title>
<link rel="stylesheet" type="text/css" href="_static/pygments.css" />
<link rel="stylesheet" type="text/css" href="_static/alabaster.css" />
<script data-url_root="./" id="documentation_options" src="_static/documentation_options.js"></script>
