diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 804f7b83..c54496ce 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -1,106 +1,106 @@
-OSS SDK for Python 版本记录
+OSS SDK for Python Release Notes
===========================
-Python SDK的版本号遵循 `Semantic Versioning `_ 规则。
+The Python SDK version number follows the `Semantic Versioning `_ rules.
Version 2.3.3
-------------
-- 修复:RequestResult.resp没有read,链接无法重用
+- Fix: RequestResult.resp was not read, so the connection could not be reused.
Version 2.3.2
-------------
-- 修复:issue #70
+- Fix: issue #70
Version 2.3.1
-------------
-- 修复:#63 增加 `oss2.defaults.logger` 配置项,用户可以设置该变量,来改变缺省的 `logger` (缺省是 `root` logger)
-- 修复:#66 oss2相关的Adapter中用了__len__()函数会导致requests super_len()函数在32bit Windows上导致不能够上传超过2GB的文件。
+- Fix: #63 Add the `oss2.defaults.logger` configuration item; users can set this variable to change the default `logger` (the `root` logger by default).
+- Fix: #66 The oss2 Adapter classes implemented the `__len__()` function, which made requests' `super_len()` fail to upload files larger than 2GB on 32-bit Windows.
Version 2.3.0
-------------
-- 增加:符号链接接口 `bucket.put_symlink`,`bucket.get_symlink`
+- Add: Symlink APIs `bucket.put_symlink` and `bucket.get_symlink`
Version 2.2.3
-------------
-- 修复:`bucket.resumable_upload` 的返回值从null修正为PutObjectResult
-- 修复:优化 `Response.read` 的字符串拼接方式,提高 `bucket.get_object` 的效率 issue #39
-- 修复:`bucket.copy_object` 对source key进行url编码
+- Fix: The return value of `bucket.resumable_upload` is corrected from null to PutObjectResult.
+- Fix: Optimize string concatenation in `Response.read`, improving the efficiency of `bucket.get_object`. Issue #39.
+- Fix: URL-encode the source key in `bucket.copy_object`.
Version 2.2.2
-------------
-- 修复:upload_part接口加上headers参数
+- Fix: Add the headers parameter to upload_part.
Version 2.2.1
-------------
-- 修复:只有当OSS返回x-oss-hash-crc64ecma头部时,才对上传的文件进行CRC64完整性校验。
+- Fix: Perform the CRC64 integrity check on uploaded files only when OSS returns the x-oss-hash-crc64ecma header.
Version 2.2.0
-------------
-- 依赖:增加新的依赖: `crcmod`
-- 增加:上传、下载增加了CRC64校验,缺省打开
-- 增加:`RTMP` 直播推流相关接口
-- 增加:`bucket.get_object_meta()` 接口,用来更为快速的获取文件基本信息
-- 修复:`bucket.object_exists()` 接口采用 `bucket.get_object_meta()` 来实现,避免因镜像回源造成的 issue #39
+- Dependency: Add a new dependency: `crcmod`
+- Add: CRC64 checks for upload and download, enabled by default.
+- Add: `RTMP` live streaming (stream pushing) APIs
+- Add: `bucket.get_object_meta()` API for quickly getting basic object metadata.
+- Fix: Re-implement `bucket.object_exists()` with `bucket.get_object_meta()` to avoid the impact of back-to-origin mirroring. Issue #39
Version 2.1.1
-------------
-- 修复:issue #28。
-- 修复:正确的设置连接池大小。
+- Fix: issue #28.
+- Fix: Set the connection pool size correctly.
Version 2.1.0
-------------
-- 增加:可以通过 `oss2.defaults.connection_pool_size` 来设置连接池的最大连接数。
-- 增加:可以通过 `oss2.resumable_upload` 函数的 `num_threads` 参数指定并发的线程数,来进行并发上传。
-- 增加:提供断点下载函数 `oss2.resumable_download` 。
-- 修复:保存断点信息的文件名应该由“规则化”的本地文件名生成;当断点信息文件格式不是json时,删除断点信息文件。
-- 修复:修复一些文档的Bug。
+- Add: `oss2.defaults.connection_pool_size` to set the maximum number of connections in the connection pool.
+- Add: The `num_threads` parameter of `oss2.resumable_upload` to specify the number of threads for concurrent upload.
+- Add: The resumable download function `oss2.resumable_download`.
+- Fix: The checkpoint file name should be generated from the normalized local file name; delete the checkpoint file when it is not valid JSON.
+- Fix: Some documentation bugs.
Version 2.0.6
-------------
-- 增加:可以通过新增的 `StsAuth` 类,进行STS临时授权
-- 增加:加入Travis CI的支持
-- 改变:对unit test进行了初步的梳理;
+- Add: The new `StsAuth` class for STS temporary authorization.
+- Add: Travis CI support.
+- Change: Initial cleanup of the unit tests.
Version 2.0.5
-------------
-- 改变:缺省的connect timeout由10秒改为60秒。为了兼容老的requests库(版本低于2.4.0),目前connect timeout和read timeout是同一个值,为了避免
-CopyObject、UploadPartCopy因read timeout超时,故把这个超时时间设长。
-- 增加:把 `security-token` 加入到子资源中,参与签名。
-- 修复:用户可以通过设置oss2.defaults里的变量值,直接修改缺省参数
+- Change: The default connect timeout is changed from 10s to 60s. To stay compatible with older requests libraries (< 2.4.0), the connect timeout and read timeout share the same value.
+  The longer timeout avoids read timeouts in CopyObject and UploadPartCopy.
+- Add: Add `security-token` to the sub-resources so that it participates in signing.
+- Fix: Users can change default parameters directly by setting the variables in oss2.defaults.
Version 2.0.4
-------------
-- 改变:增加了unittest目录,原先的tests作为functional test;Tox默认是跑unittest
-- 修复:按照依赖明确排除requests 2.9.0。因为 `Issue 2844 `_ 导致不能传输UTF-8数据。
-- 修复:Object名以'/'开头时,oss server应该报InvalidObjectName,而不是报SignatureDoesNotMatch。原因是URL中对'/'也要做URL编码。
-- 修复:MANIFEST.in中改正README.rst等
+- Change: Add a unittest folder; the original tests folder now holds functional tests. Tox runs the unit tests by default.
+- Fix: Explicitly exclude requests 2.9.0 from the dependencies, because `Issue 2844 `_ prevents transferring UTF-8 data.
+- Fix: The OSS server should report InvalidObjectName instead of SignatureDoesNotMatch when an object name starts with '/'. The fix is to URL-encode '/' in the URL as well.
+- Fix: Correct README.rst and other entries in MANIFEST.in.
Version 2.0.3
-------------
-- 重新设计Python SDK,不再基于原有的官方0.x.x版本开发。
-- 只支持Python2.6及以上版本,支持Python 3。
-- 基于requests库
+- Redesign the Python SDK; it is no longer based on the original official 0.x.x versions.
+- Supports only Python 2.6 and above, including Python 3.
+- Built on the requests library.
diff --git a/doc/api.rst b/doc/api.rst
index 96144779..913c94d2 100644
--- a/doc/api.rst
+++ b/doc/api.rst
@@ -1,11 +1,11 @@
.. _api:
-API文档
+API Documentation
==========
.. module:: oss2
-基础类
+Base classes
------
.. autoclass:: oss2.Auth
@@ -15,38 +15,38 @@ API文档
.. autoclass:: oss2.Service
..
autoclass:: oss2.Session
-输入、输出和异常说明
+Input, Output and Exceptions
------------------
.. automodule:: oss2.api
-文件(Object)相关操作
+Object operations
--------------------
-上传
+Upload
~~~~
.. automethod:: oss2.Bucket.put_object
.. automethod:: oss2.Bucket.put_object_from_file
.. automethod:: oss2.Bucket.append_object
-下载
+Download
~~~~
.. automethod:: oss2.Bucket.get_object
.. automethod:: oss2.Bucket.get_object_to_file
-删除
+Delete
~~~~
.. automethod:: oss2.Bucket.delete_object
.. automethod:: oss2.Bucket.batch_delete_objects
-罗列
+List
~~~~
.. automethod:: oss2.Bucket.list_objects
-获取、更改文件信息
+Get/Update file information
~~~~~~~~~~~~~~~
.. automethod:: oss2.Bucket.head_object
@@ -57,7 +57,7 @@ API文档
.. automethod:: oss2.Bucket.get_object_meta
-分片上传
+Multipart upload
~~~~~~~~
.. automethod:: oss2.Bucket.init_multipart_upload
@@ -68,57 +68,57 @@ API文档
.. automethod:: oss2.Bucket.list_parts
-符号链接
+Symlink
~~~~~~~~
.. automethod:: oss2.Bucket.put_symlink
.. automethod:: oss2.Bucket.get_symlink
-存储空间(Bucket)相关操作
+Bucket operations
-------------------------
-创建、删除、查询
+Create, Delete, Query
~~~~~~~~~~~~~~
.. automethod:: oss2.Bucket.create_bucket
.. automethod:: oss2.Bucket.delete_bucket
.. automethod:: oss2.Bucket.get_bucket_location
-Bucket权限管理
+Bucket ACL
~~~~~~~~~~~~~~
.. automethod:: oss2.Bucket.put_bucket_acl
.. automethod:: oss2.Bucket.get_bucket_acl
-跨域资源共享(CORS)
+CORS (Cross-Origin Resource Sharing)
~~~~~~~~~~~~~~~~~~~~
.. automethod:: oss2.Bucket.put_bucket_cors
.. automethod:: oss2.Bucket.get_bucket_cors
.. automethod:: oss2.Bucket.delete_bucket_cors
-生命周期管理
+Lifecycle management
~~~~~~~~~~~
.. automethod:: oss2.Bucket.put_bucket_lifecycle
.. automethod:: oss2.Bucket.get_bucket_lifecycle
.. automethod:: oss2.Bucket.delete_bucket_lifecycle
-日志收集
+Logging
~~~~~~~~
.. automethod:: oss2.Bucket.put_bucket_logging
.. automethod:: oss2.Bucket.get_bucket_logging
.. automethod:: oss2.Bucket.delete_bucket_logging
-防盗链
+Referer (anti-leeching)
~~~~~~
..
automethod:: oss2.Bucket.put_bucket_referer
.. automethod:: oss2.Bucket.get_bucket_referer
-静态网站托管
+Static website hosting
~~~~~~~~~~~~
.. automethod:: oss2.Bucket.put_bucket_website
@@ -126,7 +126,7 @@ Bucket权限管理
.. automethod:: oss2.Bucket.delete_bucket_website
-RTPM推流操作
+RTMP live streaming operations
~~~~~~~~~~~~
.. automethod:: oss2.Bucket.create_live_channel
diff --git a/doc/easy.rst b/doc/easy.rst
index 40740af2..156bba9e 100644
--- a/doc/easy.rst
+++ b/doc/easy.rst
@@ -1,12 +1,12 @@
.. _easy:
-易用性接口
+Convenience interfaces
==========
.. module:: oss2
-迭代器
+Iterators
~~~~~~
.. autoclass:: oss2.BucketIterator
@@ -16,13 +16,13 @@
.. autoclass:: oss2.PartIterator
-断点续传(上传、下载)
+Resumable upload and download
~~~~~~~~~~~~~~~~~~~
.. autofunction:: oss2.resumable_upload
.. autofunction:: oss2.resumable_download
-FileObject适配器
+FileObject adapter
~~~~~~~~~~~~~~~~~~
.. autoclass:: oss2.SizedFileAdapter
\ No newline at end of file
diff --git a/doc/index.rst b/doc/index.rst
index dda9b322..661ae506 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -7,7 +7,7 @@ Aliyun OSS SDK for Python
=========================
-开发文档
+Development documentation
--------
.. toctree::
diff --git a/oss2/api.py b/oss2/api.py
old mode 100644
new mode 100755
index 8562bfdf..383f61fe
--- a/oss2/api.py
+++ b/oss2/api.py
@@ -1,113 +1,108 @@
# -*- coding: utf-8 -*-
"""
-文件上传方法中的data参数
+The data parameter in file upload methods
------------------------
-诸如 :func:`put_object ` 这样的上传接口都会有 `data` 参数用于接收用户数据。`data` 可以是下述类型
 - - unicode类型(对于Python3则是str类型):内部会自动转换为UTF-8的bytes
 - - bytes类型:不做任何转换
 - - file-like object:对于可以seek和tell的file object,从当前位置读取直到结束。其他类型,请确保当前位置是文件开始。
 - - 可迭代类型:对于无法探知长度的数据,要求一定是可迭代的。此时会通过Chunked Encoding传输。
+Upload APIs such as :func:`put_object ` accept a `data` parameter for the user's data. It can be any of the following types:
+ - unicode (str for Python 3): internally converted to UTF-8 bytes.
+ - bytes: no conversion is performed.
+ - file-like object: for a seekable and tellable file object, data is read from the current position to the end. For other file objects, make sure the current position is the beginning of the file.
+ - iterable: data whose length cannot be determined must be iterable; it is transferred with chunked encoding.
-Bucket配置修改方法中的input参数
+The input parameter in bucket configuration methods
-----------------------------
-诸如 :func:`put_bucket_cors ` 这样的Bucket配置修改接口都会有 `input` 参数接收用户提供的配置数据。
-`input` 可以是下述类型
 - - Bucket配置信息相关的类,如 `BucketCors`
 - - unicode类型(对于Python3则是str类型)
 - - 经过utf-8编码的bytes类型
+Bucket configuration methods such as :func:`put_bucket_cors ` accept an `input` parameter for the configuration data. It can be any of the following types:
+ - a bucket configuration class such as `BucketCors`
+ - unicode (str for Python 3)
+ - UTF-8 encoded bytes
 - file-like object
 - - 可迭代类型,会通过Chunked Encoding传输
-也就是说 `input` 参数可以比 `data` 参数多接受第一种类型的输入。
+ - iterable, transferred with chunked encoding
+In other words, `input` accepts the same types as `data`, plus the configuration classes.
-返回值
+Return value
------
-:class:`Service` 和 :class:`Bucket` 类的大多数方法都是返回 :class:`RequestResult `
-及其子类。`RequestResult` 包含了HTTP响应的状态码、头部以及OSS Request ID,而它的子类则包含用户真正想要的结果。例如,
-`ListBucketsResult.buckets` 就是返回的Bucket信息列表;`GetObjectResult` 则是一个file-like object,可以调用 `read()` 来获取响应的
-HTTP包体。
+Most methods of the :class:`Service` and :class:`Bucket` classes return :class:`RequestResult ` or one of its subclasses.
+`RequestResult` contains the HTTP status code, response headers and OSS Request ID, while its subclasses contain the results the user actually wants.
+For example,
+`ListBucketsResult.buckets` is the list of bucket information, and `GetObjectResult` is a file-like object whose `read()` returns the HTTP response body.
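The `data` type rules described earlier can be sketched as a small helper. This is a hypothetical illustration (`normalize_data` is not part of the oss2 API); the real conversion happens inside the upload methods:

```python
def normalize_data(data):
    """Sketch of the data-normalization rules described above."""
    # unicode/str is encoded to UTF-8 bytes.
    if isinstance(data, str):
        return data.encode('utf-8')
    # bytes pass through unchanged; file-like objects and iterables are
    # handed to the transport layer as-is (iterables are sent with
    # chunked encoding because their length is unknown).
    return data
```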
-
-异常
+Exceptions
----
-一般来说Python SDK可能会抛出三种类型的异常,这些异常都继承于 :class:`OssError ` :
 - - :class:`ClientError ` :由于用户参数错误而引发的异常;
 - - :class:`ServerError ` 及其子类:OSS服务器返回非成功的状态码,如4xx或5xx;
 - - :class:`RequestError ` :底层requests库抛出的异常,如DNS解析错误,超时等;
-当然,`Bucket.put_object_from_file` 和 `Bucket.get_object_to_file` 这类函数还会抛出文件相关的异常。
+In general the Python SDK may raise three types of exceptions, all of which inherit from :class:`OssError `:
+ - :class:`ClientError `: client-side errors caused by incorrect user parameters;
+ - :class:`ServerError ` and its subclasses: the OSS server returned a non-success status code such as 4xx or 5xx;
+ - :class:`RequestError `: exceptions raised by the underlying requests library, such as DNS resolution errors or timeouts.
+In addition, functions such as `Bucket.put_object_from_file` and `Bucket.get_object_to_file` may raise file-related exceptions.
.. _byte_range:
-指定下载范围
+Download range
------------
-诸如 :func:`get_object ` 以及 :func:`upload_part_copy ` 这样的函数,可以接受
-`byte_range` 参数,表明读取数据的范围。该参数是一个二元tuple:(start, last)。这些接口会把它转换为Range头部的值,如:
 - - byte_range 为 (0, 99) 转换为 'bytes=0-99',表示读取前100个字节
 - - byte_range 为 (None, 99) 转换为 'bytes=-99',表示读取最后99个字节
 - - byte_range 为 (100, None) 转换为 'bytes=100-',表示读取第101个字节到文件结尾的部分(包含第101个字节)
+Functions such as :func:`get_object ` and :func:`upload_part_copy ` accept a `byte_range` parameter that specifies the range of data to read.
+It is a 2-tuple (start, last) that these methods translate internally into the value of the HTTP Range header:
+ - (0, 99) becomes 'bytes=0-99', meaning the first 100 bytes.
+ - (None, 99) becomes 'bytes=-99', meaning the last 99 bytes.
+ - (100, None) becomes 'bytes=100-', meaning everything from the 101st byte (index 100, starting from 0) to the end of the file.
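The three Range translations above can be sketched as follows. This is a hypothetical helper for illustration (`byte_range_to_header` is not an oss2 function; the SDK performs the conversion internally):

```python
def byte_range_to_header(byte_range):
    """Translate a (start, last) tuple into an HTTP Range header value."""
    start, last = byte_range
    if start is None:
        return 'bytes=-{0}'.format(last)     # the last `last` bytes
    if last is None:
        return 'bytes={0}-'.format(start)    # from offset `start` to the end
    return 'bytes={0}-{1}'.format(start, last)  # inclusive byte range
```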
-分页罗列
+Paging
-------
-罗列各种资源的接口,如 :func:`list_buckets ` 、 :func:`list_objects ` 都支持
-分页查询。通过设定分页标记(如:`marker` 、 `key_marker` )的方式可以指定查询某一页。首次调用将分页标记设为空(缺省值,可以不设),
-后续的调用使用返回值中的 `next_marker` 、 `next_key_marker` 等。每次调用后检查返回值中的 `is_truncated` ,其值为 `False` 说明
-已经到了最后一页。
-
+Listing APIs such as :func:`list_buckets ` and :func:`list_objects ` support paging.
+Set a paging marker (e.g. `marker`, `key_marker`) to query a specific page.
+For the first call, leave the marker empty (the default value).
+For subsequent calls, use the `next_marker` or `next_key_marker` value from the previous result. After each call, check `is_truncated` in the result: `False` means the last page has been reached.
.. _progress_callback:
-上传下载进度
+Upload and Download Progress
-----------
-上传下载接口,诸如 `get_object` 、 `put_object` 、`resumable_upload`,都支持进度回调函数,可以用它实现进度条等功能。
+Upload and download APIs such as `get_object`, `put_object` and `resumable_upload` accept a progress callback, which can be used to implement a progress bar or similar features.
-`progress_callback` 的函数原型如下 ::
+The signature of `progress_callback` is ::
    def progress_callback(bytes_consumed, total_bytes):
-        '''进度回调函数。
+        '''Progress callback.
-        :param int bytes_consumed: 已经消费的字节数。对于上传,就是已经上传的量;对于下载,就是已经下载的量。
-        :param int total_bytes: 总长度。
+        :param int bytes_consumed: bytes consumed so far: bytes uploaded for an upload, bytes downloaded for a download.
+        :param int total_bytes: total number of bytes.
        '''
-其中 `total_bytes` 对于上传和下载有不同的含义:
 - - 上传:当输入是bytes或可以seek/tell的文件对象,那么它的值就是总的字节数;否则,其值为None
 - - 下载:当返回的HTTP相应中有Content-Length头部,那么它的值就是Content-Length的值;否则,其值为None
+`total_bytes` has different meanings for upload and download:
+ - Upload: if the input is bytes or a file object that supports seek/tell, it is the total number of bytes; otherwise it is None.
+ - Download: if the HTTP response carries a Content-Length header, it is that value; otherwise it is None.
..
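The paging pattern above can be sketched as a generic loop. Here `list_page` is a hypothetical stand-in for a call such as `bucket.list_objects(marker=...)`, assumed to return a dict with `entries`, `is_truncated` and `next_marker` keys; the real oss2 results expose these as attributes:

```python
def collect_all(list_page):
    """Gather every entry by following the paging markers described above."""
    marker = ''   # first call: empty marker (the default)
    entries = []
    while True:
        page = list_page(marker)
        entries.extend(page['entries'])
        if not page['is_truncated']:      # False: this was the last page
            return entries
        marker = page['next_marker']      # continue from where we stopped
```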
_unix_time:
Unix Time
---------
-OSS Python SDK会把从服务器获得时间戳都转换为自1970年1月1日UTC零点以来的秒数,即Unix Time。
-参见 `Unix Time `_
+The OSS Python SDK converts all timestamps obtained from the server to Unix time, i.e. the number of seconds since 00:00 UTC on January 1, 1970.
+See `Unix Time `_
-OSS中常用的时间格式有
 - - HTTP Date格式,形如 `Sat, 05 Dec 2015 11:04:39 GMT` 这样的GMT时间。
 -   用在If-Modified-Since、Last-Modified这些HTTP请求、响应头里。
 - - ISO8601格式,形如 `2015-12-05T00:00:00.000Z`。
 -   用在生命周期管理配置、列举Bucket结果中的创建时间、列举文件结果中的最后修改时间等处。
+The common time formats in OSS are:
+ - HTTP Date format, e.g. the GMT time `Sat, 05 Dec 2015 11:04:39 GMT`.
+   It is used in HTTP request and response headers such as If-Modified-Since and Last-Modified.
+ - ISO 8601 format, e.g. `2015-12-05T00:00:00.000Z`.
+   It is used in lifecycle configuration, the creation time in bucket listing results, the last-modified time in object listing results, and so on.
-`http_date` 函数把Unix Time转换为HTTP Date;而 `http_to_unixtime` 则做相反的转换。如 ::
+`http_date` converts Unix time to an HTTP Date; `http_to_unixtime` does the opposite. For example ::
    >>> import oss2, time
-    >>> unix_time = int(time.time()) # 当前UNIX Time,设其值为 1449313829
-    >>> date_str = oss2.http_date(unix_time) # 得到 'Sat, 05 Dec 2015 11:10:29 GMT'
-    >>> oss2.http_to_unixtime(date_str) # 得到 1449313829
+    >>> unix_time = int(time.time())         # current Unix time; suppose it is 1449313829
+    >>> date_str = oss2.http_date(unix_time) # gets 'Sat, 05 Dec 2015 11:10:29 GMT'
+    >>> oss2.http_to_unixtime(date_str)      # gets 1449313829
.. note::
-    生成HTTP协议所需的日期(即HTTP Date)时,请使用 `http_date` , 不要使用 `strftime` 这样的函数。因为后者是和locale相关的。
-    比如,`strftime` 结果中可能会出现中文,而这样的格式,OSS服务器是不能识别的。
+    Use `http_date` rather than functions like `strftime` to generate dates for the HTTP protocol (i.e. HTTP Date), because the latter is locale-dependent.
+    For example, the output of `strftime` may contain Chinese, which the OSS server cannot recognize.
-`iso8601_to_unixtime` 把ISO8601格式转换为Unix Time;`date_to_iso8601` 和 `iso8601_to_date` 则在ISO8601格式的字符串和
-datetime.date之间相互转换。如 ::
+`iso8601_to_unixtime` converts an ISO 8601 string to Unix time; `date_to_iso8601` and `iso8601_to_date` convert between ISO 8601 strings and datetime.date.
+For example ::
    >>> import oss2
-    >>> d = oss2.iso8601_to_date('2015-12-05T00:00:00.000Z') # 得到 datetime.date(2015, 12, 5)
-    >>> date_str = oss2.date_to_iso8601(d) # 得到 '2015-12-05T00:00:00.000Z'
-    >>> oss2.iso8601_to_unixtime(date_str) # 得到 1449273600
+    >>> d = oss2.iso8601_to_date('2015-12-05T00:00:00.000Z') # gets datetime.date(2015, 12, 5)
+    >>> date_str = oss2.date_to_iso8601(d)                   # gets '2015-12-05T00:00:00.000Z'
+    >>> oss2.iso8601_to_unixtime(date_str)                   # gets 1449273600
"""
from . import xml_utils
@@ -162,9 +157,9 @@ def _parse_result(self, resp, parse_func, klass):
class Service(_Base):
-    """用于Service操作的类,如罗列用户所有的Bucket。
+    """The class for Service operations, such as listing all of a user's buckets.
-    用法 ::
+    Usage ::
        >>> import oss2
        >>> auth = oss2.Auth('your-access-key-id', 'your-access-key-secret')
@@ -172,17 +167,17 @@ class Service(_Base):
        >>> service.list_buckets()
-    :param auth: 包含了用户认证信息的Auth对象
+    :param auth: the Auth object containing the user's credentials.
    :type auth: oss2.Auth
-    :param str endpoint: 访问域名,如杭州区域的域名为oss-cn-hangzhou.aliyuncs.com
+    :param str endpoint: the endpoint domain, e.g. oss-cn-hangzhou.aliyuncs.com for the Hangzhou region.
-    :param session: 会话。如果是None表示新开会话,非None则复用传入的会话
+    :param session: the session to reuse; if None, a new session is created.
    :type session: oss2.Session
-    :param float connect_timeout: 连接超时时间,以秒为单位。
-    :param str app_name: 应用名。该参数不为空,则在User Agent中加入其值。
-        注意到,最终这个字符串是要作为HTTP Header的值传输的,所以必须要遵循HTTP标准。
+    :param float connect_timeout: connection timeout in seconds.
+    :param str app_name: app name; if not empty, it is appended to the User-Agent header.
+        Note that this string is transmitted as an HTTP header value and therefore must conform to the HTTP standard.
    """
    def __init__(self, auth, endpoint, session=None,
@@ -192,13 +187,13 @@ def __init__(self, auth, endpoint,
                 app_name=app_name)
    def list_buckets(self, prefix='', marker='', max_keys=100):
-        """根据前缀罗列用户的Bucket。
+        """List the user's buckets whose names start with the given prefix.
-        :param str prefix: 只罗列Bucket名为该前缀的Bucket,空串表示罗列所有的Bucket
-        :param str marker: 分页标志。首次调用传空串,后续使用返回值中的next_marker
-        :param int max_keys: 每次调用最多返回的Bucket数目
+        :param str prefix: only list buckets whose names start with this prefix; an empty string lists all buckets.
+        :param str marker: paging marker; empty for the first call, then the next_marker from the previous result.
+        :param int max_keys: maximum number of buckets to return per call.
-        :return: 罗列的结果
+        :return: the listing result
        :rtype: oss2.models.ListBucketsResult
        """
        resp = self._do('GET', '', '',
@@ -209,9 +204,9 @@ def list_buckets(self, prefix='', marker='', max_keys=100):
class Bucket(_Base):
-    """用于Bucket和Object操作的类,诸如创建、删除Bucket,上传、下载Object等。
+    """The class for Bucket and Object operations, such as creating and deleting buckets, and uploading and downloading objects.
-    用法(假设Bucket属于杭州区域) ::
+    Usage (assuming the bucket is in the Hangzhou region) ::
        >>> import oss2
        >>> auth = oss2.Auth('your-access-key-id', 'your-access-key-secret')
@@ -219,20 +214,20 @@ class Bucket(_Base):
        >>> bucket.put_object('readme.txt', 'content of the object')
-    :param auth: 包含了用户认证信息的Auth对象
+    :param auth: the Auth object containing the user's access key ID and access key secret.
    :type auth: oss2.Auth
-    :param str endpoint: 访问域名或者CNAME
-    :param str bucket_name: Bucket名
-    :param bool is_cname: 如果endpoint是CNAME则设为True;反之,则为False。
+    :param str endpoint: the endpoint domain name or CNAME.
+    :param str bucket_name: the bucket name.
+    :param bool is_cname: True if the endpoint is a CNAME; otherwise False.
-    :param session: 会话。如果是None表示新开会话,非None则复用传入的会话
+    :param session: the session to reuse; if None, a new session is created.
    :type session: oss2.Session
-    :param float connect_timeout: 连接超时时间,以秒为单位。
+    :param float connect_timeout: connection timeout in seconds.
-    :param str app_name: 应用名。该参数不为空,则在User Agent中加入其值。
-        注意到,最终这个字符串是要作为HTTP Header的值传输的,所以必须要遵循HTTP标准。
+    :param str app_name: app name; if not empty, it is appended to the User-Agent header.
+        Note that this string is transmitted as an HTTP header value and therefore must conform to the HTTP standard.
    """
    ACL = 'acl'
@@ -260,25 +255,25 @@ def __init__(self, auth, endpoint, bucket_name,
        self.bucket_name = bucket_name.strip()
    def sign_url(self, method, key, expires, headers=None, params=None):
-        """生成签名URL。
+        """Generate a presigned URL.
-        常见的用法是生成加签的URL以供授信用户下载,如为log.jpg生成一个5分钟后过期的下载链接::
+        A common use is to generate a signed URL for a trusted user to download, e.g. a download link for log.jpg that expires in 5 minutes::
        >>> bucket.sign_url('GET', 'log.jpg', 5 * 60)
        'http://your-bucket.oss-cn-hangzhou.aliyuncs.com/logo.jpg?OSSAccessKeyId=YourAccessKeyId\&Expires=1447178011&Signature=UJfeJgvcypWq6Q%2Bm3IJcSHbvSak%3D'
-        :param method: HTTP方法,如'GET'、'PUT'、'DELETE'等
+        :param method: HTTP method such as 'GET', 'PUT', 'DELETE', etc.
        :type method: str
-        :param key: 文件名
-        :param expires: 过期时间(单位:秒),链接在当前时间再过expires秒后过期
+        :param key: the object key.
+        :param expires: expiration time in seconds, relative to the current time.
-        :param headers: 需要签名的HTTP头部,如名称以x-oss-meta-开头的头部(作为用户自定义元数据)、
-            Content-Type头部等。对于下载,不需要填。
-        :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+        :param headers: the HTTP headers to sign, such as headers starting with x-oss-meta- (custom metadata) or Content-Type.
+            Not needed for downloads.
+        :type headers: dict or oss2.CaseInsensitiveDict (recommended)
-        :param params: 需要签名的HTTP查询参数
+        :param params: the HTTP query parameters to sign.
-        :return: 签名URL。
+        :return: the signed URL.
""" key = to_string(key) req = http.Request(method, self._make_url(self.bucket_name, key), @@ -287,15 +282,15 @@ def sign_url(self, method, key, expires, headers=None, params=None): return self.auth._sign_url(req, self.bucket_name, key, expires) def sign_rtmp_url(self, channel_name, playlist_name, expires): - """生成RTMP推流的签名URL。 - 常见的用法是生成加签的URL以供授信用户向OSS推RTMP流。 + """Sign RTMP pushing streaming url. + It's used to push the RTMP streaming to OSS for trusted user who has the url. - :param channel_name: 直播频道的名称 - :param expires: 过期时间(单位:秒),链接在当前时间再过expires秒后过期 - :param playlist_name: 播放列表名称,注意与创建live channel时一致 - :param params: 需要签名的HTTP查询参数 + :param channel_name: channel name + :param expires: Expiration time in seconds.The url is invalid after it's expired. + :param playlist_name: playlist name,it should be the one created in live channel creation time. + :param params: Http query parameters to sign. - :return: 签名URL。 + :return: Signed url. """ url = self._make_url(self.bucket_name, 'live').replace('http://', 'rtmp://').replace('https://', 'rtmp://') + '/' + channel_name params = {} @@ -303,12 +298,12 @@ def sign_rtmp_url(self, channel_name, playlist_name, expires): return self.auth._sign_rtmp_url(url, self.bucket_name, channel_name, playlist_name, expires, params) def list_objects(self, prefix='', delimiter='', marker='', max_keys=100): - """根据前缀罗列Bucket里的文件。 + """List objects by the prefix under a bucket. - :param str prefix: 只罗列文件名为该前缀的文件 - :param str delimiter: 分隔符。可以用来模拟目录 - :param str marker: 分页标志。首次调用传空串,后续使用返回值的next_marker - :param int max_keys: 最多返回文件的个数,文件和目录的和不能超过该值 + :param str prefix: The prefix of the objects to list. + :param str delimiter: The folder separator + :param str marker: Paging marker. It's empty for first page and then use next_marker in the response of the previous page. + :param int max_keys: Max entries to return. 
        :return: :class:`ListObjectsResult `
        """
@@ -323,22 +318,26 @@ def list_objects(self, prefix='', delimiter='', marker='', max_keys=100):
    def put_object(self, key, data,
                   headers=None,
                   progress_callback=None):
-        """上传一个普通文件。
+        """Upload a normal object (as opposed to an appendable one).
-        用法 ::
+        Usage ::
            >>> bucket.put_object('readme.txt', 'content of readme.txt')
            >>> with open(u'local_file.txt', 'rb') as f:
            >>>     bucket.put_object('remote_file.txt', f)
-        :param key: 上传到OSS的文件名
+        Create a folder::
+            >>> bucket.enable_crc = False  # needed because CRC is enabled by default and does not work when creating a folder
+            >>> bucket.put_object('testfolder/', None)
+
+        :param key: the object name in OSS.
-        :param data: 待上传的内容。
-        :type data: bytes,str或file-like object
+        :param data: the content to upload.
+        :type data: bytes, str or file-like object
-        :param headers: 用户指定的HTTP头部。可以指定Content-Type、Content-MD5、x-oss-meta-开头的头部等
-        :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+        :param headers: HTTP headers, such as Content-Type, Content-MD5 or x-oss-meta- prefixed headers.
+        :type headers: dict or oss2.CaseInsensitiveDict (recommended)
-        :param progress_callback: 用户指定的进度回调函数。可以用来实现进度条等功能。参考 :ref:`progress_callback` 。
+        :param progress_callback: a progress callback, e.g. for implementing a progress bar; see :ref:`progress_callback`.
        :return: :class:`PutObjectResult `
        """
@@ -361,15 +360,15 @@ def put_object(self, key, data,
    def put_object_from_file(self, key, filename,
                             headers=None,
                             progress_callback=None):
-        """上传一个本地文件到OSS的普通文件。
+        """Upload a local file to OSS as a normal object.
-        :param str key: 上传到OSS的文件名
-        :param str filename: 本地文件名,需要有可读权限
+        :param str key: the object name in OSS.
+        :param str filename: the local file path; the caller needs read permission on it.
-        :param headers: 用户指定的HTTP头部。可以指定Content-Type、Content-MD5、x-oss-meta-开头的头部等
+        :param headers: HTTP headers, such as Content-Type, Content-MD5 or x-oss-meta- prefixed headers.
+        :type headers: dict or oss2.CaseInsensitiveDict (recommended)
-        :param progress_callback: 用户指定的进度回调函数。参考 :ref:`progress_callback`
+        :param progress_callback: a progress callback, e.g. for implementing a progress bar; see :ref:`progress_callback`.
        :return: :class:`PutObjectResult `
        """
@@ -382,25 +381,25 @@ def append_object(self, key, position, data,
                      headers=None,
                      progress_callback=None,
                      init_crc=None):
-        """追加上传一个文件。
+        """Append data to an existing appendable object, or create a new one if it does not exist.
-        :param str key: 新的文件名,或已经存在的可追加文件名
-        :param int position: 追加上传一个新的文件, `position` 设为0;追加一个已经存在的可追加文件, `position` 设为文件的当前长度。
-            `position` 可以从上次追加的结果 `AppendObjectResult.next_position` 中获得。
+        :param str key: the name of a new object, or of an existing appendable object.
+        :param int position: set to 0 to create a new appendable object, or to the current length of an existing appendable object to append to it.
+            `position` can be taken from `AppendObjectResult.next_position` of the previous append_object call.
-        :param data: 用户数据
-        :type data: str、bytes、file-like object或可迭代对象
+        :param data: the user data.
+        :type data: str, bytes, file-like object or iterable
-        :param headers: 用户指定的HTTP头部。可以指定Content-Type、Content-MD5、x-oss-开头的头部等
-        :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+        :param headers: HTTP headers, such as Content-Type, Content-MD5 or x-oss- prefixed headers.
+        :type headers: dict or oss2.CaseInsensitiveDict (recommended)
-        :param progress_callback: 用户指定的进度回调函数。参考 :ref:`progress_callback`
+        :param progress_callback: a progress callback, e.g. for implementing a progress bar; see :ref:`progress_callback`.
        :return: :class:`AppendObjectResult `
-        :raises: 如果 `position` 和当前文件长度不一致,抛出 :class:`PositionNotEqualToLength ` ;
-            如果当前文件不是可追加类型,抛出 :class:`ObjectNotAppendable ` ;
-            还会抛出其他一些异常
+        :raises: :class:`PositionNotEqualToLength ` if `position` does not equal the current object length;
+            :class:`ObjectNotAppendable ` if the object is not appendable;
+            other exceptions may be raised as well
        """
        headers = utils.set_content_type(http.CaseInsensitiveDict(headers), key)
@@ -426,27 +425,27 @@ def get_object(self, key,
                   headers=None,
                   progress_callback=None,
                   process=None):
-        """下载一个文件。
+        """Download an object.
-        用法 ::
+        Usage ::
            >>> result = bucket.get_object('readme.txt')
            >>> print(result.read())
            'hello world'
-        :param key: 文件名
-        :param byte_range: 指定下载范围。参见 :ref:`byte_range`
+        :param key: the object name in OSS.
+        :param byte_range: the download range; see :ref:`byte_range`.
-        :param headers: HTTP头部
-        :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+        :param headers: HTTP headers
+        :type headers: dict or oss2.CaseInsensitiveDict (recommended)
-        :param progress_callback: 用户指定的进度回调函数。参考 :ref:`progress_callback`
+        :param progress_callback: a progress callback; see :ref:`progress_callback`.
-        :param process: oss文件处理,如图像服务等。指定后process,返回的内容为处理后的文件。
+        :param process: OSS file processing directive, e.g. for the image service; when specified, the processed content is returned.
        :return: file-like object
-        :raises: 如果文件不存在,则抛出 :class:`NoSuchKey ` ;还可能抛出其他异常
+        :raises: :class:`NoSuchKey ` if the object does not exist; other exceptions may be raised as well
        """
        headers = http.CaseInsensitiveDict(headers)
@@ -466,20 +465,20 @@ def get_object_to_file(self, key, filename,
                           headers=None,
                           progress_callback=None,
                           process=None):
-        """下载一个文件到本地文件。
+        """Download an object to a local file.
-        :param key: 文件名
-        :param filename: 本地文件名。要求父目录已经存在,且有写权限。
-        :param byte_range: 指定下载范围。参见 :ref:`byte_range`
+        :param key: the object name in OSS.
+        :param filename: the local file path; its parent directory must already exist and be writable.
+        :param byte_range: the download range; see :ref:`byte_range`.
-        :param headers: HTTP头部
-        :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+        :param headers: HTTP headers
+        :type headers: dict or oss2.CaseInsensitiveDict (recommended)
-        :param progress_callback: 用户指定的进度回调函数。参考 :ref:`progress_callback`
+        :param progress_callback: a progress callback; see :ref:`progress_callback`.
-        :param process: oss文件处理,如图像服务等。指定后process,返回的内容为处理后的文件。
+        :param process: OSS file processing directive, e.g. for the image service; when specified, the processed content is returned.
-        :return: 如果文件不存在,则抛出 :class:`NoSuchKey ` ;还可能抛出其他异常
+        :raises: :class:`NoSuchKey ` if the object does not exist; other exceptions may be raised as well
        """
        with open(to_unicode(filename), 'wb') as f:
            result = self.get_object(key, byte_range=byte_range, headers=headers, progress_callback=progress_callback,
@@ -493,51 +492,53 @@ def get_object_to_file(self, key, filename,
        return result
    def head_object(self, key, headers=None):
-        """获取文件元信息。
+        """Get the object metadata.
-        HTTP响应的头部包含了文件元信息,可以通过 `RequestResult` 的 `headers` 成员获得。
-        用法 ::
+        The object metadata is carried in the HTTP response headers and can be accessed via the `headers` attribute of `RequestResult`.
+        Usage ::
            >>> result = bucket.head_object('readme.txt')
            >>> print(result.content_type)
            text/plain
-        :param key: 文件名
+        :param key: the object name in OSS.
-        :param headers: HTTP头部
-        :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+        :param headers: HTTP headers.
+        :type headers: dict or oss2.CaseInsensitiveDict (recommended)
        :return: :class:`HeadObjectResult `
-        :raises: 如果Bucket不存在或者Object不存在,则抛出 :class:`NotFound `
+        :raises: :class:`NotFound ` if the bucket or the object does not exist
""" resp = self.__do_object('HEAD', key, headers=headers) return HeadObjectResult(resp) def get_object_meta(self, key): - """获取文件基本元信息,包括该Object的ETag、Size(文件大小)、LastModified,并不返回其内容。 + """Gets the object's basic metadata, which includes ETag, Size, LastModified. - HTTP响应的头部包含了文件基本元信息,可以通过 `GetObjectMetaResult` 的 `last_modified`,`content_length`,`etag` 成员获得。 + The metadata is in HTTP response headers, which could be accessed by `GetObjectMetaResult`'s 'last_modified`,`content_length`,`etag` - :param key: 文件名 + :param key: object key in OSS. :return: :class:`GetObjectMetaResult ` - :raises: 如果文件不存在,则抛出 :class:`NoSuchKey ` ;还可能抛出其他异常 + :raises: If file does not exist, :class:`NoSuchKey ` is thrown;Other exception could also happen though. """ resp = self.__do_object('GET', key, params={'objectMeta': ''}) return GetObjectMetaResult(resp) def object_exists(self, key): - """如果文件存在就返回True,否则返回False。如果Bucket不存在,或是发生其他错误,则抛出异常。""" + """If the file exists, return true. Otherwise false. If the bucket does not exist or other errors happen, exceptions will be thrown.""" - # 如果我们用head_object来实现的话,由于HTTP HEAD请求没有响应体,只有响应头部,这样当发生404时, - # 我们无法区分是NoSuchBucket还是NoSuchKey错误。 + # If head_object is used as the implementation, as it only has response header, when 404 is returned, no way to tell if it's a NoSuchBucket or NoSuchKey. # - # 2.2.0之前的实现是通过get_object的if-modified-since头部,把date设为当前时间24小时后,这样如果文件存在,则会返回 - # 304 (NotModified);不存在,则会返回NoSuchKey。get_object会受回源的影响,如果配置会404回源,get_object会判断错误。 + # Before version 2.2.0, it calls get_object with current + 24h as the if-modified-since parameter. + # If file exists, it returns 304 (NotModified); If file does not exists, returns NoSuchkey. + # However get_object would retrieve object in other sites if "Retrieve from source" is set and object is not found in OSS. + # That is the file could be from other sites and thus should have return 404 instead of the object in this case. 
#
- # 目前的实现是通过get_object_meta判断文件是否存在。
+ # The current implementation uses get_object_meta to decide whether the object exists; it is not affected by "retrieve from source",
+ # and it can also distinguish a missing bucket from a missing key.
try:
self.get_object_meta(key)
@@ -549,14 +550,14 @@ def object_exists(self, key):
return True
def copy_object(self, source_bucket_name, source_key, target_key, headers=None):
- """拷贝一个文件到当前Bucket。
+ """Copies a file to the current bucket.
- :param str source_bucket_name: 源Bucket名
- :param str source_key: 源文件名
- :param str target_key: 目标文件名
+ :param str source_bucket_name: source bucket name
+ :param str source_key: source object name
+ :param str target_key: target object name
- :param headers: HTTP头部
- :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+ :param headers: HTTP headers
+ :type headers: dict or oss2.CaseInsensitiveDict (recommended)
:return: :class:`PutObjectResult `
"""
@@ -568,23 +569,23 @@ def copy_object(self, source_bucket_name, source_key, target_key, headers=None):
return PutObjectResult(resp)
def update_object_meta(self, key, headers):
- """更改Object的元数据信息,包括Content-Type这类标准的HTTP头部,以及以x-oss-meta-开头的自定义元数据。
+ """Updates the object's metadata, including standard HTTP headers such as Content-Type as well as custom metadata prefixed with x-oss-meta-.
+ If invalid headers are specified (neither standard headers nor x-oss-meta- headers), the call still succeeds but the server performs no operation.
- 用户可以通过 :func:`head_object` 获得元数据信息。
+ Call :func:`head_object` to retrieve the updated metadata. Note that get_object_meta does not return all metadata, but head_object does.
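The try/except pattern the comment above describes can be sketched in isolation. This is a rough illustration only: `FakeBucket` and `NoSuchKeyError` below are hypothetical stand-ins for a real `Bucket` and `oss2.exceptions.NoSuchKey`, not SDK names.

```python
# Sketch of the existence check: map a NoSuchKey-style error from
# get_object_meta to False, and let every other error propagate.
class NoSuchKeyError(Exception):
    pass

class FakeBucket:
    def __init__(self, objects):
        self._objects = objects

    def get_object_meta(self, key):
        # The real SDK issues GET ?objectMeta; here we just look the key up.
        if key not in self._objects:
            raise NoSuchKeyError(key)
        return {'content_length': len(self._objects[key])}

def object_exists(bucket, key):
    """Mirrors the SDK's strategy: try get_object_meta, treat NoSuchKey as absence."""
    try:
        bucket.get_object_meta(key)
    except NoSuchKeyError:
        return False
    return True

bucket = FakeBucket({'readme.txt': b'hello world'})
print(object_exists(bucket, 'readme.txt'))   # True
print(object_exists(bucket, 'missing.txt'))  # False
```

Because only `NoSuchKeyError` is caught, a missing bucket (a different exception in the real SDK) still surfaces to the caller, matching the docstring's contract.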
- :param str key: 文件名
+ :param str key: object key
- :param headers: HTTP头部,包含了元数据信息
- :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+ :param headers: HTTP headers carrying the metadata; it can be a dict or oss2.CaseInsensitiveDict (recommended)
:return: :class:`RequestResult `
"""
return self.copy_object(self.bucket_name, key, key, headers=headers)
def delete_object(self, key):
- """删除一个文件。
+ """Deletes an object.
- :param str key: 文件名
+ :param str key: object key
:return: :class:`RequestResult `
"""
@@ -592,11 +593,11 @@ def delete_object(self, key):
return RequestResult(resp)
def put_object_acl(self, key, permission):
- """设置文件的ACL。
+ """Sets the object ACL.
- :param str key: 文件名
- :param str permission: 可以是oss2.OBJECT_ACL_DEFAULT、oss2.OBJECT_ACL_PRIVATE、oss2.OBJECT_ACL_PUBLIC_READ或
- oss2.OBJECT_ACL_PUBLIC_READ_WRITE。
+ :param str key: object name
+ :param str permission: one of oss2.OBJECT_ACL_DEFAULT, oss2.OBJECT_ACL_PRIVATE, oss2.OBJECT_ACL_PUBLIC_READ or
+ oss2.OBJECT_ACL_PUBLIC_READ_WRITE.
:return: :class:`RequestResult `
"""
@@ -604,7 +605,7 @@ def put_object_acl(self, key, permission):
return RequestResult(resp)
def get_object_acl(self, key):
- """获取文件的ACL。
+ """Gets the object ACL.
:return: :class:`GetObjectAclResult `
"""
@@ -612,9 +613,9 @@ def get_object_acl(self, key):
return self._parse_result(resp, xml_utils.parse_get_object_acl, GetObjectAclResult)
def batch_delete_objects(self, key_list):
- """批量删除文件。待删除文件列表不能为空。
+ """Deletes the objects in key_list in one batch. The list must not be empty.
- :param key_list: 文件名列表,不能为空。
+ :param key_list: list of object keys; must not be empty.
:type key_list: list of str
:return: :class:`BatchDeleteObjectsResult `
@@ -630,14 +631,14 @@ def batch_delete_objects(self, key_list):
return self._parse_result(resp, xml_utils.parse_batch_delete_objects, BatchDeleteObjectsResult)
def init_multipart_upload(self, key, headers=None):
- """初始化分片上传。
+ """Initializes a multipart upload.
- 返回值中的 `upload_id` 以及Bucket名和Object名三元组唯一对应了此次分片上传事件。
+ The `upload_id` in the returned value, together with the bucket name and the object key, forms a 3-tuple that uniquely identifies this multipart upload.
- :param str key: 待上传的文件名
+ :param str key: object key
- :param headers: HTTP头部
- :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+ :param headers: HTTP headers
+ :type headers: dict or oss2.CaseInsensitiveDict (recommended)
:return: :class:`InitMultipartUploadResult `
"""
@@ -647,16 +648,16 @@ def init_multipart_upload(self, key, headers=None):
return self._parse_result(resp, xml_utils.parse_init_multipart_upload, InitMultipartUploadResult)
def upload_part(self, key, upload_id, part_number, data, progress_callback=None, headers=None):
- """上传一个分片。
+ """Uploads one part.
- :param str key: 待上传文件名,这个文件名要和 :func:`init_multipart_upload` 的文件名一致。
- :param str upload_id: 分片上传ID
- :param int part_number: 分片号,最小值是1.
- :param data: 待上传数据。
- :param progress_callback: 用户指定进度回调函数。可以用来实现进度条等功能。参考 :ref:`progress_callback` 。
+ :param str key: object key, which must be the same as the one passed to :func:`init_multipart_upload`.
+ :param str upload_id: multipart upload ID
+ :param int part_number: part number; the minimum value is 1.
+ :param data: data to upload
+ :param progress_callback: user-specified progress callback, e.g. for implementing a progress bar. See :ref:`progress_callback`.
- :param headers: 用户指定的HTTP头部。可以指定Content-MD5头部等
- :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+ :param headers: user-specified HTTP headers, such as Content-MD5
+ :type headers: dict or oss2.CaseInsensitiveDict (recommended)
:return: :class:`PutObjectResult `
"""
@@ -678,16 +679,16 @@ def upload_part(self, key, upload_id, part_number, data, progress_callback=None,
return result
def complete_multipart_upload(self, key, upload_id, parts, headers=None):
- """完成分片上传,创建文件。
+ """Completes a multipart upload and creates the object from the uploaded parts; the individual parts are no longer accessible afterwards.
- :param str key: 待上传的文件名,这个文件名要和 :func:`init_multipart_upload` 的文件名一致。
- :param str upload_id: 分片上传ID
+ :param str key: object key, which must be the same as the one passed to :func:`init_multipart_upload`.
+ :param str upload_id: multipart upload ID.
- :param parts: PartInfo列表。PartInfo中的part_number和etag是必填项。其中的etag可以从 :func:`upload_part` 的返回值中得到。
+ :param parts: list of PartInfo. The part_number and etag fields of PartInfo are required; the etag is obtained from the result of :func:`upload_part`.
:type parts: list of `PartInfo `
- :param headers: HTTP头部
- :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+ :param headers: HTTP headers
+ :type headers: dict or oss2.CaseInsensitiveDict (recommended)
:return: :class:`PutObjectResult `
"""
@@ -700,10 +701,10 @@ def complete_multipart_upload(self, key, upload_id, parts, headers=None):
return PutObjectResult(resp)
def abort_multipart_upload(self, key, upload_id):
- """取消分片上传。
+ """Aborts a multipart upload.
- :param str key: 待上传的文件名,这个文件名要和 :func:`init_multipart_upload` 的文件名一致。
- :param str upload_id: 分片上传ID
+ :param str key: object key, which must be the same as the one passed to :func:`init_multipart_upload`.
+ :param str upload_id: multipart upload ID.
:return: :class:`RequestResult `
"""
@@ -717,13 +718,13 @@ def list_multipart_uploads(self, key_marker='', upload_id_marker='', max_uploads=1000):
- """罗列正在进行中的分片上传。支持分页。
+ """Lists ongoing multipart uploads, with paging support.
- :param str prefix: 只罗列匹配该前缀的文件的分片上传
- :param str delimiter: 目录分割符
- :param str key_marker: 文件名分页符。第一次调用可以不传,后续设为返回值中的 `next_key_marker`
- :param str upload_id_marker: 分片ID分页符。第一次调用可以不传,后续设为返回值中的 `next_upload_id_marker`
- :param int max_uploads: 一次罗列最多能够返回的条目数
+ :param str prefix: only lists uploads whose object keys match this prefix
+ :param str delimiter: directory delimiter
+ :param str key_marker: paging marker for object keys. Leave it empty for the first call; afterwards set it to `next_key_marker` from the previous response.
+ :param str upload_id_marker: paging marker for upload IDs. Leave it empty for the first call; afterwards set it to `next_upload_id_marker` from the previous response.
+ :param int max_uploads: max entries to return per call.
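As a rough illustration of the part numbering rules documented above (part numbers start at 1, each part covers a contiguous byte range), a caller driving `upload_part` might compute the ranges with a small helper. `split_parts` below is a hypothetical name for this sketch, not part of the SDK:

```python
def split_parts(total_size, part_size):
    """Return (part_number, start, end) tuples covering [0, total_size).

    Part numbers start at 1, matching the minimum part_number accepted by
    upload_part; `end` is exclusive, so each part's payload is data[start:end].
    """
    parts = []
    part_number = 1
    start = 0
    while start < total_size:
        end = min(start + part_size, total_size)
        parts.append((part_number, start, end))
        part_number += 1
        start = end
    return parts

# A 25-byte object with a 10-byte part size yields three parts, the last one short.
print(split_parts(25, 10))  # [(1, 0, 10), (2, 10, 20), (3, 20, 25)]
```

Each tuple would then feed one `upload_part` call, and the collected results one `complete_multipart_upload` call.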
:return: :class:`ListMultipartUploadsResult `
"""
@@ -740,12 +741,12 @@ def list_multipart_uploads,
def upload_part_copy(self, source_bucket_name, source_key, byte_range, target_key, target_upload_id, target_part_number, headers=None):
- """分片拷贝。把一个已有文件的一部分或整体拷贝成目标文件的一个分片。
+ """Part copy: copies a range of (or the whole of) an existing object as one part of the target object.
- :param byte_range: 指定待拷贝内容在源文件里的范围。参见 :ref:`byte_range`
+ :param byte_range: the range within the source object to copy. See :ref:`byte_range`
- :param headers: HTTP头部
- :type headers: 可以是dict,建议是oss2.CaseInsensitiveDict
+ :param headers: HTTP headers
+ :type headers: dict or oss2.CaseInsensitiveDict (recommended)
:return: :class:`PutObjectResult `
"""
@@ -765,12 +766,12 @@ def upload_part_copy(self, source_bucket_name, source_key, byte_range,
def list_parts(self, key, upload_id, marker='', max_parts=1000):
- """列举已经上传的分片。支持分页。
+ """Lists the parts already uploaded, with paging support. (By comparison, list_multipart_uploads lists ongoing multipart uploads.)
- :param str key: 文件名
- :param str upload_id: 分片上传ID
- :param str marker: 分页符
- :param int max_parts: 一次最多罗列多少分片
+ :param str key: object key.
+ :param str upload_id: multipart upload ID.
+ :param str marker: paging marker.
+ :param int max_parts: max parts to return per call.
:return: :class:`ListPartsResult `
"""
@@ -781,10 +782,10 @@ def list_parts(self, key, upload_id,
return self._parse_result(resp, xml_utils.parse_list_parts, ListPartsResult)
def put_symlink(self, target_key, symlink_key, headers=None):
- """创建Symlink。
+ """Creates a symlink.
- :param str target_key: 目标文件,目标文件不能为符号连接
- :param str symlink_key: 符号连接类文件,其实质是一个特殊的文件,数据指向目标文件
+ :param str target_key: target object, which must not itself be a symlink.
+ :param str symlink_key: the symlink key; a symlink is a special object whose data points to the target object.
:return: :class:`RequestResult `
"""
@@ -794,22 +795,23 @@ def put_symlink(self, target_key, symlink_key, headers=None):
return RequestResult(resp)
def get_symlink(self, symlink_key):
- """获取符号连接文件的目标文件。
+ """Gets the target object of a symlink.
- :param str symlink_key: 符号连接类文件
+ :param str symlink_key: the symlink key.
:return: :class:`GetSymlinkResult `
- :raises: 如果文件的符号链接不存在,则抛出 :class:`NoSuchKey ` ;还可能抛出其他异常
+ :raises: If the symlink does not exist, :class:`NoSuchKey ` is raised;
+ if the key is not a symlink, a ServerError with error code NotSymlink is raised. Other exceptions may also be raised.
"""
resp = self.__do_object('GET', symlink_key, params={Bucket.SYMLINK: ''})
return GetSymlinkResult(resp)
def create_bucket(self, permission=None):
- """创建新的Bucket。
+ """Creates a new bucket.
- :param str permission: 指定Bucket的ACL。可以是oss2.BUCKET_ACL_PRIVATE(推荐、缺省)、oss2.BUCKET_ACL_PUBLIC_READ或是
- oss2.BUCKET_ACL_PUBLIC_READ_WRITE。
+ :param str permission: bucket ACL; one of oss2.BUCKET_ACL_PRIVATE (recommended, default), oss2.BUCKET_ACL_PUBLIC_READ or
+ oss2.BUCKET_ACL_PUBLIC_READ_WRITE.
"""
if permission:
headers = {'x-oss-acl': permission}
@@ -819,26 +821,26 @@ def create_bucket(self, permission=None):
return RequestResult(resp)
def delete_bucket(self):
- """删除一个Bucket。只有没有任何文件,也没有任何未完成的分片上传的Bucket才能被删除。
+ """Deletes a bucket. A bucket can be deleted only when it contains no objects and no ongoing multipart uploads.
:return: :class:`RequestResult `
- ":raises: 如果试图删除一个非空Bucket,则抛出 :class:`BucketNotEmpty `
+ :raises: If the bucket is not empty, :class:`BucketNotEmpty ` is raised.
"""
resp = self.__do_bucket('DELETE')
return RequestResult(resp)
def put_bucket_acl(self, permission):
- """设置Bucket的ACL。
+ """Sets the bucket ACL.
- :param str permission: 新的ACL,可以是oss2.BUCKET_ACL_PRIVATE、oss2.BUCKET_ACL_PUBLIC_READ或
+ :param str permission: the new ACL; one of oss2.BUCKET_ACL_PRIVATE, oss2.BUCKET_ACL_PUBLIC_READ or
oss2.BUCKET_ACL_PUBLIC_READ_WRITE
"""
resp = self.__do_bucket('PUT', headers={'x-oss-acl': permission}, params={Bucket.ACL: ''})
return RequestResult(resp)
def get_bucket_acl(self):
- """获取Bucket的ACL。
+ """Gets the bucket ACL.
:return: :class:`GetBucketAclResult `
"""
@@ -846,16 +848,16 @@ def get_bucket_acl(self):
return self._parse_result(resp, xml_utils.parse_get_bucket_acl, GetBucketAclResult)
def put_bucket_cors(self, input):
- """设置Bucket的CORS。
+ """Sets the bucket CORS.
- :param input: :class:`BucketCors ` 对象或其他
+ :param input: :class:`BucketCors ` instance, or any data that xml_utils.to_put_bucket_cors can convert to BucketCors.
"""
data = self.__convert_data(BucketCors, xml_utils.to_put_bucket_cors, input)
resp = self.__do_bucket('PUT', data=data, params={Bucket.CORS: ''})
return RequestResult(resp)
def get_bucket_cors(self):
- """获取Bucket的CORS配置。
+ """Gets the bucket's CORS configuration.
:return: :class:`GetBucketCorsResult `
"""
@@ -863,36 +865,36 @@ def get_bucket_cors(self):
return self._parse_result(resp, xml_utils.parse_get_bucket_cors, GetBucketCorsResult)
def delete_bucket_cors(self):
- """删除Bucket的CORS配置。"""
+ """Deletes the bucket's CORS configuration."""
resp = self.__do_bucket('DELETE', params={Bucket.CORS: ''})
return RequestResult(resp)
def put_bucket_lifecycle(self, input):
- """设置生命周期管理的配置。
+ """Sets the bucket's lifecycle configuration.
- :param input: :class:`BucketLifecycle ` 对象或其他
+ :param input: :class:`BucketLifecycle ` instance, or any data that xml_utils.to_put_bucket_lifecycle can convert to BucketLifecycle.
"""
data = self.__convert_data(BucketLifecycle, xml_utils.to_put_bucket_lifecycle, input)
resp = self.__do_bucket('PUT', data=data, params={Bucket.LIFECYCLE: ''})
return RequestResult(resp)
def get_bucket_lifecycle(self):
- """获取生命周期管理配置。
+ """Gets the bucket's lifecycle configuration.
:return: :class:`GetBucketLifecycleResult `
- :raises: 如果没有设置Lifecycle,则抛出 :class:`NoSuchLifecycle `
+ :raises: If no lifecycle is set on the bucket, :class:`NoSuchLifecycle ` is raised.
"""
resp = self.__do_bucket('GET', params={Bucket.LIFECYCLE: ''})
return self._parse_result(resp, xml_utils.parse_get_bucket_lifecycle, GetBucketLifecycleResult)
def delete_bucket_lifecycle(self):
- """删除生命周期管理配置。如果Lifecycle没有设置,也返回成功。"""
+ """Deletes the bucket's lifecycle configuration. The call succeeds even if no lifecycle is set."""
resp = self.__do_bucket('DELETE', params={Bucket.LIFECYCLE: ''})
return RequestResult(resp)
def get_bucket_location(self):
- """获取Bucket的数据中心。
+ """Gets the bucket location (data center).
:return: :class:`GetBucketLocationResult `
"""
@@ -900,16 +902,16 @@ def get_bucket_location(self):
return self._parse_result(resp, xml_utils.parse_get_bucket_location, GetBucketLocationResult)
def put_bucket_logging(self, input):
- """设置Bucket的访问日志功能。
+ """Sets the bucket's access logging configuration.
- :param input: :class:`BucketLogging ` 对象或其他
+ :param input: :class:`BucketLogging ` instance, or any data that xml_utils.to_put_bucket_logging can convert to BucketLogging.
"""
data = self.__convert_data(BucketLogging, xml_utils.to_put_bucket_logging, input)
resp = self.__do_bucket('PUT', data=data, params={Bucket.LOGGING: ''})
return RequestResult(resp)
def get_bucket_logging(self):
- """获取Bucket的访问日志功能配置。
+ """Gets the bucket's access logging configuration.
:return: :class:`GetBucketLoggingResult `
"""
@@ -917,21 +919,21 @@ def get_bucket_logging(self):
return self._parse_result(resp, xml_utils.parse_get_bucket_logging, GetBucketLoggingResult)
def delete_bucket_logging(self):
- """关闭Bucket的访问日志功能。"""
+ """Disables access logging for the bucket; existing log files are not deleted."""
resp = self.__do_bucket('DELETE', params={Bucket.LOGGING: ''})
return RequestResult(resp)
def put_bucket_referer(self, input):
- """为Bucket设置防盗链。
+ """Sets the bucket's referer configuration (hotlink protection).
- :param input: :class:`BucketReferer ` 对象或其他
+ :param input: :class:`BucketReferer ` instance, or any data that xml_utils.to_put_bucket_referer can convert to BucketReferer.
"""
data = self.__convert_data(BucketReferer, xml_utils.to_put_bucket_referer, input)
resp = self.__do_bucket('PUT', data=data, params={Bucket.REFERER: ''})
return RequestResult(resp)
def get_bucket_referer(self):
- """获取Bucket的防盗链配置。
+ """Gets the bucket's referer configuration (hotlink protection).
:return: :class:`GetBucketRefererResult `
"""
@@ -939,7 +941,7 @@ def get_bucket_referer(self):
return self._parse_result(resp, xml_utils.parse_get_bucket_referer, GetBucketRefererResult)
def put_bucket_website(self, input):
- """为Bucket配置静态网站托管功能。
+ """Sets the bucket's static website hosting configuration.
:param input: :class:`BucketWebsite `
"""
@@ -948,11 +950,11 @@ def put_bucket_website(self, input):
return RequestResult(resp)
def get_bucket_website(self):
- """获取Bucket的静态网站托管配置。
+ """Gets the bucket's static website hosting configuration.
:return: :class:`GetBucketWebsiteResult `
- :raises: 如果没有设置静态网站托管,那么就抛出 :class:`NoSuchWebsite `
+ :raises: If static website hosting is not configured, :class:`NoSuchWebsite ` is raised.
"""
resp = self.__do_bucket('GET', params={Bucket.WEBSITE: ''})
return self._parse_result(resp, xml_utils.parse_get_bucket_websiste, GetBucketWebsiteResult)
@@ -963,10 +965,10 @@ def delete_bucket_website(self):
return RequestResult(resp)
def create_live_channel(self, channel_name, input):
- """创建推流直播频道
+ """Creates a live streaming channel.
- :param str channel_name: 要创建的live channel的名称
- :param input: LiveChannelInfo类型,包含了live channel中的描述信息
+ :param str channel_name: the name of the live channel to create.
+ :param input: LiveChannelInfo instance, which carries the live channel's description.
:return: :class:`CreateLiveChannelResult `
"""
@@ -975,17 +977,17 @@ def create_live_channel(self, channel_name, input):
return self._parse_result(resp, xml_utils.parse_create_live_channel, CreateLiveChannelResult)
def delete_live_channel(self, channel_name):
- """删除推流直播频道
+ """Deletes a live streaming channel.
- :param str channel_name: 要删除的live channel的名称
+ :param str channel_name: the name of the live channel to delete.
"""
resp = self.__do_object('DELETE', channel_name, params={Bucket.LIVE: ''})
return RequestResult(resp)
def get_live_channel(self, channel_name):
- """获取直播频道配置
+ """Gets the live channel configuration.
- :param str channel_name: 要获取的live channel的名称
+ :param str channel_name: live channel name
:return: :class:`GetLiveChannelResult `
"""
@@ -993,11 +995,11 @@ def get_live_channel(self, channel_name):
return self._parse_result(resp, xml_utils.parse_get_live_channel, GetLiveChannelResult)
def list_live_channel(self, prefix='', marker='', max_keys=100):
- """列举出Bucket下所有符合条件的live channel
+ """Lists the live channels under the bucket that match the prefix and marker filters.
- param: str prefix: list时channel_id的公共前缀
- param: str marker: list时指定的起始标记
- param: int max_keys: 本次list返回live channel的最大个数
+ param: str prefix: only channel IDs starting with this prefix are listed.
+ param: str marker: paging marker for the channel ID.
+ param: int max_keys: max number of live channels to return in this call.
return: :class:`ListLiveChannelResult `
"""
@@ -1008,9 +1010,9 @@ def list_live_channel(self, prefix='', marker='', max_keys=100):
return self._parse_result(resp, xml_utils.parse_list_live_channel, ListLiveChannelResult)
def get_live_channel_stat(self, channel_name):
- """获取live channel当前推流的状态
+ """Gets the live channel's current stream pushing status.
- param str channel_name: 要获取推流状态的live channel的名称
+ param str channel_name: the live channel name
return: :class:`GetLiveChannelStatResult `
"""
@@ -1018,18 +1020,19 @@ def get_live_channel_stat(self, channel_name):
return self._parse_result(resp, xml_utils.parse_live_channel_stat, GetLiveChannelStatResult)
def put_live_channel_status(self, channel_name, status):
- """更改live channel的status,仅能在“enabled”和“disabled”两种状态中更改
+ """Updates the live channel's status. Only 'enabled' and 'disabled' are supported.
- param str channel_name: 要更改status的live channel的名称
- param str status: live channel的目标status
+ param str channel_name: live channel name.
+ param str status: the live channel's target status.
"""
resp = self.__do_object('PUT', channel_name, params={Bucket.LIVE: '', Bucket.STATUS: status})
return RequestResult(resp)
def get_live_channel_history(self, channel_name):
- """获取live channel中最近的最多十次的推流记录,记录中包含推流的起止时间和远端的地址
+ """Gets up to the 10 most recent stream pushing records of the live channel. Each record includes the
+ start/end time and the remote address (the source of the pushed stream).
- param str channel_name: 要获取最近推流记录的live channel的名称
+ param str channel_name: live channel name.
return: :class:`GetLiveChannelHistoryResult `
"""
@@ -1037,12 +1040,12 @@ def get_live_channel_history(self, channel_name):
return self._parse_result(resp, xml_utils.parse_live_channel_history, GetLiveChannelHistoryResult)
def post_vod_playlist(self, channel_name, playlist_name, start_time = 0, end_time = 0):
- """根据指定的playlist name以及startTime和endTime生成一个点播的播放列表
+ """Generates a VOD playlist from the given playlist name, start time and end time.
- param str channel_name: 要生成点播列表的live channel的名称
- param str playlist_name: 要生成点播列表m3u8文件的名称
- param int start_time: 点播的起始时间,Unix Time格式,可以使用int(time.time())获取
- param int end_time: 点播的结束时间,Unix Time格式,可以使用int(time.time())获取
+ param str channel_name: live channel name.
+ param str playlist_name: the playlist name (a *.m3u8 file)
+ param int start_time: start time in Unix time, which can be obtained from int(time.time())
+ param int end_time: end time in Unix time, which can be obtained from int(time.time())
"""
key = channel_name + "/" + playlist_name
resp = self.__do_object('POST', key, params={Bucket.VOD: '',
@@ -1051,10 +1054,10 @@ def post_vod_playlist(self, channel_name, playlist_name, start_time = 0, end_tim
return RequestResult(resp)
def _get_bucket_config(self, config):
- """获得Bucket某项配置,具体哪种配置由 `config` 指定。该接口直接返回 `RequestResult` 对象。
- 通过read()接口可以获得XML字符串。不建议使用。
+ """Gets one bucket configuration, selected by `config`. This method returns the raw `RequestResult`;
+ the XML string can be obtained via read(). Not recommended for general use.
- :param str config: 可以是 `Bucket.ACL` 、 `Bucket.LOGGING` 等。
+ :param str config: supported values include `Bucket.ACL`, `Bucket.LOGGING`, etc. (see the beginning of the Bucket class for the complete list).
:return: :class:`RequestResult `
"""
diff --git a/oss2/auth.py b/oss2/auth.py
old mode 100644
new mode 100755
index 10856e53..c2725836
--- a/oss2/auth.py
+++ b/oss2/auth.py
@@ -11,7 +11,7 @@ class Auth(object):
- """用于保存用户AccessKeyId、AccessKeySecret,以及计算签名的对象。"""
+ """Stores the user's AccessKeyId and AccessKeySecret, and computes request signatures."""
_subresource_key_set = frozenset(
['response-content-type', 'response-content-language',
@@ -142,11 +142,11 @@ def _sign_rtmp_url(self, url, bucket_name, channel_name, playlist_name, expires,
class AnonymousAuth(object):
- """用于匿名访问。
+ """For anonymous access.
.. note::
- 匿名用户只能读取public-read的Bucket,或只能读取、写入public-read-write的Bucket。
- 不能进行Service、Bucket相关的操作,也不能罗列文件等。
+ Anonymous users can only read buckets with public-read permission, or read/write buckets with public-read-write permission.
+ They cannot perform service- or bucket-level operations, nor list objects.
""" def _sign_request(self, req, bucket_name, key): pass @@ -159,10 +159,10 @@ def _sign_rtmp_url(self, url, bucket_name, channel_name, playlist_name, expires, class StsAuth(object): - """用于STS临时凭证访问。可以通过官方STS客户端获得临时密钥(AccessKeyId、AccessKeySecret)以及临时安全令牌(SecurityToken)。 + """For STS Auth. User could get the AccessKeyId, AccessKeySecret and SecurityToken from the AliCloud's STS service (https://sts.aliyuncs.com) - 注意到临时凭证会在一段时间后过期,在此之前需要重新获取临时凭证,并更新 :class:`Bucket ` 的 `auth` 成员变量为新 - 的 `StsAuth` 实例。 + Note that the AccessKeyId/Secret and SecurtyToken has the expiration time. Once they're renewed, the STSAuth property in class Bucket instance needs + to be updated with the new credentials. :param str access_key_id: 临时AccessKeyId :param str access_key_secret: 临时AccessKeySecret diff --git a/oss2/compat.py b/oss2/compat.py old mode 100644 new mode 100755 index 708bdda6..97ddefaf --- a/oss2/compat.py +++ b/oss2/compat.py @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- """ -兼容Python版本 +Compatible Python versions """ import sys @@ -22,18 +22,18 @@ def to_bytes(data): - """若输入为unicode, 则转为utf-8编码的bytes;其他则原样返回。""" + """Covert to UTF-8 encoding if the input is unicode; otherwise return the original data.""" if isinstance(data, unicode): return data.encode('utf-8') else: return data def to_string(data): - """把输入转换为str对象""" + """convert to str object""" return to_bytes(data) def to_unicode(data): - """把输入转换为unicode,要求输入是unicode或者utf-8编码的bytes。""" + """Convert the input to unicode if it's utf-8 bytes.""" if isinstance(data, bytes): return data.decode('utf-8') else: @@ -59,21 +59,21 @@ def stringify(input): from urllib.parse import urlparse def to_bytes(data): - """若输入为str(即unicode),则转为utf-8编码的bytes;其他则原样返回""" + """Covert to UTF-8 encoding if the input is unicode; otherwise return the original data.""" if isinstance(data, str): return data.encode(encoding='utf-8') else: return data def to_string(data): - """若输入为bytes,则认为是utf-8编码,并返回str""" + """Convert the input to unicode if it's utf-8 
bytes.""" if isinstance(data, bytes): return data.decode('utf-8') else: return data def to_unicode(data): - """把输入转换为unicode,要求输入是unicode或者utf-8编码的bytes。""" + """Convert the input to unicode if it's utf-8 bytes.""" return to_string(data) def stringify(input): diff --git a/oss2/defaults.py b/oss2/defaults.py old mode 100644 new mode 100755 index 480e6685..0e5af9d2 --- a/oss2/defaults.py +++ b/oss2/defaults.py @@ -4,7 +4,7 @@ oss2.defaults ~~~~~~~~~~~~~ -全局缺省变量。 +Global Default variables. """ @@ -18,36 +18,36 @@ def get(value, default_value): return value -#: 连接超时时间 +#: connection timeout connect_timeout = 60 -#: 缺省重试次数 +#: retry count request_retries = 3 -#: 对于某些接口,上传数据长度大于或等于该值时,就采用分片上传。 +#: The threshold of file size for using multipart upload in some APIs. multipart_threshold = 10 * 1024 * 1024 -#: 分片上传缺省线程数 +#: Default thread count for multipart upload. multipart_num_threads = 1 -#: 缺省分片大小 +#: Default part size. part_size = 10 * 1024 * 1024 -#: 每个Session连接池大小 +#: Connection pool size for each session. connection_pool_size = 10 -#: 对于断点下载,如果OSS文件大小大于该值就进行并行下载(multiget) +#: The threshold of file size for using multipart download (multiget) in some APIs. 
multiget_threshold = 100 * 1024 * 1024
-#: 并行下载(multiget)缺省线程数
+#: default thread count for parallel download (multiget)
multiget_num_threads = 4
-#: 并行下载(multiget)的缺省分片大小
+#: default part size for parallel download (multiget)
multiget_part_size = 10 * 1024 * 1024
-#: 缺省 Logger
+#: default logger
logger = logging.getLogger()
diff --git a/oss2/exceptions.py b/oss2/exceptions.py
old mode 100644
new mode 100755
index ebfb76b4..3611b585
--- a/oss2/exceptions.py
+++ b/oss2/exceptions.py
@@ -4,7 +4,7 @@
oss2.exceptions
~~~~~~~~~~~~~~
-异常类。
+Exception classes.
"""
import re
@@ -26,22 +26,22 @@ class OssError(Exception):
def __init__(self, status, headers, body, details):
- #: HTTP 状态码
+ #: HTTP status code
self.status = status
- #: 请求ID,用于跟踪一个OSS请求。提交工单时,最好能够提供请求ID
+ #: request ID, used to trace an OSS request; include it when submitting a support ticket
self.request_id = headers.get('x-oss-request-id', '')
- #: HTTP响应体(部分)
+ #: HTTP response body (may be partial)
self.body = body
- #: 详细错误信息,是一个string到string的dict
+ #: detailed error information; a dict of string to string
self.details = details
- #: OSS错误码
+ #: OSS error code
self.code = self.details.get('Code', '')
- #: OSS错误信息
+ #: OSS error message
self.message = self.details.get('Message', '')
def __str__(self):
diff --git a/oss2/http.py b/oss2/http.py
old mode 100644
new mode 100755
index 69e0cd48..6e1c8a24
--- a/oss2/http.py
+++ b/oss2/http.py
@@ -3,9 +3,8 @@
"""
oss2.http
~~~~~~~~
-
-这个模块包含了HTTP Adapters。尽管OSS Python SDK内部使用requests库进行HTTP通信,但是对使用者是透明的。
-该模块中的 `Session` 、 `Request` 、`Response` 对requests的对应的类做了简单的封装。
+This module contains the HTTP adapters. The SDK uses the requests library for HTTP internally, but this is transparent to the caller.
+`Session`, `Request` and `Response` are thin wrappers around their counterparts in the requests library.
""" import platform @@ -24,7 +23,7 @@ class Session(object): - """属于同一个Session的请求共享一组连接池,如有可能也会重用HTTP连接。""" + """Requests of the same session share the same connection pool and possiblly same HTTP connectoin.""" def __init__(self): self.session = requests.Session() @@ -96,11 +95,9 @@ def __iter__(self): return self.response.iter_content(_CHUNK_SIZE) -# requests对于具有fileno()方法的file object,会用fileno()的返回值作为Content-Length。 -# 这对于已经读取了部分内容,或执行了seek()的file object是不正确的。 -# -# _convert_request_body()对于支持seek()和tell() file object,确保是从 -# 当前位置读取,且只读取当前位置到文件结束的内容。 +# For data which has the len() method (which means it has the length), returns the whole data as the request's content. +# For data which supports seek() and tell(), but not len(), then returns the remaining data from the current position. +# Note that for file, it does not support len(), but it supports seek() and tell(). def _convert_request_body(data): data = to_bytes(data) diff --git a/oss2/iterators.py b/oss2/iterators.py old mode 100644 new mode 100755 index bb1f1365..bfbfd84b --- a/oss2/iterators.py +++ b/oss2/iterators.py @@ -4,7 +4,7 @@ oss2.iterators ~~~~~~~~~~~~~~ -该模块包含了一些易于使用的迭代器,可以用来遍历Bucket、文件、分片上传等。 +This module contains some easy to use iterators for enumerating bucket, file, parts, etc. """ from .models import MultipartUploadInfo, SimplifiedObjectInfo @@ -57,14 +57,14 @@ def fetch_with_retry(self): class BucketIterator(_BaseIterator): - """遍历用户Bucket的迭代器。 + """Iterator for bucket - 每次迭代返回的是 :class:`SimplifiedBucketInfo ` 对象。 + It returns a :class:`SimplifiedBucketInfo ` instance in each iteration (via next()). - :param service: :class:`Service ` 对象 - :param prefix: 只列举匹配该前缀的Bucket - :param marker: 分页符。只列举Bucket名字典序在此之后的Bucket - :param max_keys: 每次调用 `list_buckets` 时的max_keys参数。注意迭代器返回的数目可能会大于该值。 + :param service: :class:`Service ` instance + :param prefix: Bucket name prefix---only buckets with the prefix are listed. + :param marker: Paging marker. 
Only buckets whose names sort after the marker in lexicographic order are listed.
+ :param max_keys: the max_keys parameter for each `list_buckets` call. Note that the iterator as a whole may return more entries than this value.
"""
def __init__(self, service, prefix='', marker='', max_keys=100, max_retries=None):
super(BucketIterator, self).__init__(marker, max_retries)
@@ -82,16 +82,16 @@ def _fetch(self):
class ObjectIterator(_BaseIterator):
- """遍历Bucket里文件的迭代器。
+ """Iterator over the objects in a bucket.
- 每次迭代返回的是 :class:`SimplifiedObjectInfo ` 对象。
- 当 `SimplifiedObjectInfo.is_prefix()` 返回True时,表明是公共前缀(目录)。
+ Each iteration (via next()) returns a :class:`SimplifiedObjectInfo ` instance.
+ When `SimplifiedObjectInfo.is_prefix()` is True, the entry is a common prefix (a directory); otherwise it is an object.
- :param bucket: :class:`Bucket ` 对象
- :param prefix: 只列举匹配该前缀的文件
- :param delimiter: 目录分隔符
- :param marker: 分页符
- :param max_keys: 每次调用 `list_objects` 时的max_keys参数。注意迭代器返回的数目可能会大于该值。
+ :param bucket: :class:`Bucket ` instance
+ :param prefix: only objects whose keys match this prefix are listed
+ :param delimiter: directory delimiter
+ :param marker: paging marker
+ :param max_keys: the max_keys parameter for each `list_objects` call. Note that the iterator as a whole may return more entries than this value.
"""
def __init__(self, bucket, prefix='', delimiter='', marker='', max_keys=100, max_retries=None):
super(ObjectIterator, self).__init__(marker, max_retries)
@@ -114,17 +114,17 @@ def _fetch(self):
class MultipartUploadIterator(_BaseIterator):
- """遍历Bucket里未完成的分片上传。
+ """Iterator over the ongoing multipart uploads in a bucket.
- 每次返回 :class:`MultipartUploadInfo ` 对象。
- 当 `MultipartUploadInfo.is_prefix()` 返回True时,表明是公共前缀(目录)。
+ Each iteration returns a :class:`MultipartUploadInfo ` instance.
+ When `MultipartUploadInfo.is_prefix()` is True, the entry is a common prefix (a directory); otherwise it is a file.
-    :param bucket: :class:`Bucket ` 对象
-    :param prefix: 仅列举匹配该前缀的文件的分片上传
-    :param delimiter: 目录分隔符
-    :param key_marker: 文件名分页符
-    :param upload_id_marker: 分片上传ID分页符
-    :param max_uploads: 每次调用 `list_multipart_uploads` 时的max_uploads参数。注意迭代器返回的数目可能会大于该值。
+    :param bucket: :class:`Bucket ` instance
+    :param prefix: File key prefix---only multipart uploads of files with the prefix are listed.
+    :param delimiter: Directory delimiter.
+    :param key_marker: Paging marker of file names.
+    :param upload_id_marker: Paging marker of upload IDs.
+    :param max_uploads: The max_uploads parameter for each `list_multipart_uploads` call. Note that the total count the iterator returns could be more than that.
     """
     def __init__(self, bucket, prefix='', delimiter='', key_marker='', upload_id_marker='',
@@ -151,14 +151,14 @@
     def _fetch(self):

 class ObjectUploadIterator(_BaseIterator):
-    """遍历一个Object所有未完成的分片上传。
+    """Iterator over the ongoing multipart uploads of a specific object.

-    每次返回 :class:`MultipartUploadInfo ` 对象。
-    当 `MultipartUploadInfo.is_prefix()` 返回True时,表明是公共前缀(目录)。
+    It returns a :class:`MultipartUploadInfo ` instance for each iteration.
+    When `MultipartUploadInfo.is_prefix()` is True, the entry is a common prefix (folder).

-    :param bucket: :class:`Bucket ` 对象
-    :param key: 文件名
-    :param max_uploads: 每次调用 `list_multipart_uploads` 时的max_uploads参数。注意迭代器返回的数目可能会大于该值。
+    :param bucket: :class:`Bucket ` instance
+    :param key: Object key.
+    :param max_uploads: The max_uploads parameter for each `list_multipart_uploads` call. Note that the total count the iterator returns could be more than that.
     """
     def __init__(self, bucket, key, max_uploads=1000, max_retries=None):
         super(ObjectUploadIterator, self).__init__('', max_retries)
@@ -186,15 +186,15 @@
     def _fetch(self):

 class PartIterator(_BaseIterator):
-    """遍历一个分片上传会话中已经上传的分片。
+    """Iterator over the uploaded parts of a specific multipart upload.

-    每次返回 :class:`PartInfo ` 对象。
+    It returns a :class:`PartInfo ` instance for each iteration. 
-    :param bucket: :class:`Bucket ` 对象
-    :param key: 文件名
-    :param upload_id: 分片上传ID
-    :param marker: 分页符
-    :param max_parts: 每次调用 `list_parts` 时的max_parts参数。注意迭代器返回的数目可能会大于该值。
+    :param bucket: :class:`Bucket ` instance.
+    :param key: Object key.
+    :param upload_id: Upload ID.
+    :param marker: Paging marker.
+    :param max_parts: The max_parts parameter for each `list_parts` call. Note that the total count the iterator returns could be more than that.
     """
     def __init__(self, bucket, key, upload_id, marker='0', max_parts=1000, max_retries=None):
@@ -215,14 +215,14 @@
     def _fetch(self):

 class LiveChannelIterator(_BaseIterator):
-    """遍历Bucket里文件的迭代器。
+    """Iterator over the live channels in a bucket.

-    每次迭代返回的是 :class:`LiveChannelInfo ` 对象。
+    It returns a :class:`LiveChannelInfo ` instance for each iteration.

-    :param bucket: :class:`Bucket ` 对象
-    :param prefix: 只列举匹配该前缀的文件
-    :param marker: 分页符
-    :param max_keys: 每次调用 `list_live_channel` 时的max_keys参数。注意迭代器返回的数目可能会大于该值。
+    :param bucket: :class:`Bucket ` instance.
+    :param prefix: Live channel name prefix---only live channels with the prefix are listed.
+    :param marker: Paging marker.
+    :param max_keys: The max_keys parameter for each `list_live_channel` call. Note that the total count the iterator returns could be more than that.
     """
     def __init__(self, bucket, prefix='', marker='', max_keys=100, max_retries=None):
         super(LiveChannelIterator, self).__init__(marker, max_retries)

diff --git a/oss2/models.py b/oss2/models.py
old mode 100644
new mode 100755
index 3c617ac3..038e51f2
--- a/oss2/models.py
+++ b/oss2/models.py
@@ -4,7 +4,7 @@
 oss2.models
 ~~~~~~~~~~

-该模块包含Python SDK API接口所需要的输入参数以及返回值类型。
+This module contains the classes that define the input parameters and return values of the Python SDK APIs.
 """

 from .utils import http_to_unixtime, make_progress_adapter, make_crc_adapter
@@ -12,15 +12,15 @@
 from .compat import urlunquote

 class PartInfo(object):
-    """表示分片信息的文件。
+    """Information of a part in a multipart upload. 
-    该文件既用于 :func:`list_parts ` 的输出,也用于 :func:`complete_multipart_upload
-    ` 的输入。
+    This class is used both in the output of :func:`list_parts ` and as the input of :func:`complete_multipart_upload
+    `.

-    :param int part_number: 分片号
-    :param str etag: 分片的ETag
-    :param int size: 分片的大小。仅用在 `list_parts` 的结果里。
-    :param int last_modified: 该分片最后修改的时间戳,类型为int。参考 :ref:`unix_time`
+    :param int part_number: Part number (starting from 1).
+    :param str etag: Part ETag.
+    :param int size: Part size in bytes. It's only used in the `list_parts` result.
+    :param int last_modified: The last modified time of the part in UNIX time (an int). Check out :ref:`unix_time` for more information.
     """
     def __init__(self, part_number, etag, size=None, last_modified=None):
         self.part_number = part_number
@@ -42,16 +42,16 @@
 def _get_etag(headers):

 class RequestResult(object):
     def __init__(self, resp):
-        #: HTTP响应
+        #: HTTP response
         self.resp = resp

-        #: HTTP状态码
+        #: HTTP status code (such as 200, 404, etc.)
         self.status = resp.status

-        #: HTTP头
+        #: HTTP headers
         self.headers = resp.headers

-        #: 请求ID,用于跟踪一个OSS请求。提交工单时,最后能够提供请求ID
+        #: Request ID, used for tracking an OSS request. It's very useful when submitting a support ticket.
         self.request_id = resp.headers.get('x-oss-request-id', '')

@@ -59,17 +59,17 @@
 class HeadObjectResult(RequestResult):
     def __init__(self, resp):
         super(HeadObjectResult, self).__init__(resp)

-        #: 文件类型,可以是'Normal'、'Multipart'、'Appendable'等
+        #: File type; it could be 'Normal', 'Multipart', 'Appendable', etc.
         self.object_type = _hget(self.headers, 'x-oss-object-type')

-        #: 文件最后修改时间,类型为int。参考 :ref:`unix_time` 。
+        #: File's last modified time in UNIX time (an int). Check out :ref:`unix_time` for more information.
         self.last_modified = _hget(self.headers, 'last-modified', http_to_unixtime)

-        #: 文件的MIME类型
+        #: File's MIME type. 
self.content_type = _hget(self.headers, 'content-type')

-        #: Content-Length,可能是None。
+        #: Content-Length; it could be None.
         self.content_length = _hget(self.headers, 'content-length', int)

         #: HTTP ETag
@@ -80,10 +80,10 @@
 class GetObjectMetaResult(RequestResult):
     def __init__(self, resp):
         super(GetObjectMetaResult, self).__init__(resp)

-        #: 文件最后修改时间,类型为int。参考 :ref:`unix_time` 。
+        #: Last modified time of the file, in UNIX time. Check out :ref:`unix_time` for more information.
         self.last_modified = _hget(self.headers, 'last-modified', http_to_unixtime)

-        #: Content-Length,文件大小,类型为int。
+        #: Content-Length, the file size in bytes (an int).
         self.content_length = _hget(self.headers, 'content-length', int)

         #: HTTP ETag
@@ -94,7 +94,7 @@
 class GetSymlinkResult(RequestResult):
     def __init__(self, resp):
         super(GetSymlinkResult, self).__init__(resp)

-        #: 符号连接的目标文件
+        #: The target file of the symlink.
         self.target_key = urlunquote(_hget(self.headers, 'x-oss-symlink-target'))

@@ -137,7 +137,7 @@
    def __init__(self, resp):
         #: HTTP ETag
         self.etag = _get_etag(self.headers)

-        #: 文件上传后,OSS上文件的CRC64值
+        #: The CRC64 value of the uploaded file on OSS.
         self.crc = _hget(resp.headers, 'x-oss-hash-crc64ecma', int)

@@ -148,10 +148,10 @@
    def __init__(self, resp):
         #: HTTP ETag
         self.etag = _get_etag(self.headers)

-        #: 本次追加写完成后,OSS上文件的CRC64值
+        #: The CRC64 value of the file on OSS after this append operation.
         self.crc = _hget(resp.headers, 'x-oss-hash-crc64ecma', int)

-        #: 下次追加写的偏移
+        #: The position for the next append operation. 
self.next_position = _hget(resp.headers, 'x-oss-next-append-position', int)

@@ -159,7 +159,7 @@
 class BatchDeleteObjectsResult(RequestResult):
     def __init__(self, resp):
         super(BatchDeleteObjectsResult, self).__init__(resp)

-        #: 已经删除的文件名列表
+        #: The list of deleted file names.
         self.deleted_keys = []

@@ -167,7 +167,7 @@
 class InitMultipartUploadResult(RequestResult):
     def __init__(self, resp):
         super(InitMultipartUploadResult, self).__init__(resp)

-        #: 新生成的Upload ID
+        #: The newly generated upload ID.
         self.upload_id = None

@@ -175,41 +175,41 @@
 class ListObjectsResult(RequestResult):
     def __init__(self, resp):
         super(ListObjectsResult, self).__init__(resp)

-        #: True表示还有更多的文件可以罗列;False表示已经列举完毕。
+        #: True means there are more files to list; False means all files have been listed.
         self.is_truncated = False

-        #: 下一次罗列的分页标记符,即,可以作为 :func:`list_objects ` 的 `marker` 参数。
+        #: Paging marker for the next call, i.e. it can be used as the `marker` parameter of the next :func:`list_objects ` call.
         self.next_marker = ''

-        #: 本次罗列得到的文件列表。其中元素的类型为 :class:`SimplifiedObjectInfo` 。
+        #: The object list. The element type is :class:`SimplifiedObjectInfo`.
         self.object_list = []

-        #: 本次罗列得到的公共前缀列表,类型为str列表。
+        #: The common prefix list. The element type is str.
         self.prefix_list = []

 class SimplifiedObjectInfo(object):
     def __init__(self, key, last_modified, etag, type, size, storage_class):
-        #: 文件名,或公共前缀名。
+        #: The file name or common prefix name (folder name).
         self.key = key

-        #: 文件的最后修改时间
+        #: Last modified time. 
self.last_modified = last_modified

         #: HTTP ETag
         self.etag = etag

-        #: 文件类型
+        #: File type.
         self.type = type

-        #: 文件大小
+        #: File size.
         self.size = size

-        #: 文件的存储类别,是一个字符串。
+        #: Storage class, a string (Standard, IA or Archive).
         self.storage_class = storage_class

     def is_prefix(self):
-        """如果是公共前缀,返回True;是文件,则返回False"""
+        """Returns True if it is a common prefix (folder); returns False if it is a file."""
         return self.last_modified is None

@@ -223,7 +223,7 @@
 class GetObjectAclResult(RequestResult):
     def __init__(self, resp):
         super(GetObjectAclResult, self).__init__(resp)

-        #: 文件的ACL,其值可以是 `OBJECT_ACL_DEFAULT`、`OBJECT_ACL_PRIVATE`、`OBJECT_ACL_PUBLIC_READ`或
+        #: File ACL. The value could be `OBJECT_ACL_DEFAULT`, `OBJECT_ACL_PRIVATE`, `OBJECT_ACL_PUBLIC_READ` or
         #: `OBJECT_ACL_PUBLIC_READ_WRITE`
         self.acl = ''

@@ -231,13 +231,13 @@
 class SimplifiedBucketInfo(object):
     """:func:`list_buckets ` 结果中的单个元素类型。"""
     def __init__(self, name, location, creation_date):
-        #: Bucket名
+        #: Bucket name
         self.name = name

-        #: Bucket的区域
+        #: Bucket location
         self.location = location

-        #: Bucket的创建时间,类型为int。参考 :ref:`unix_time`。
+        #: Bucket creation time in UNIX time (an int). Check out :ref:`unix_time` for more information.
         self.creation_date = creation_date

@@ -245,29 +245,29 @@
 class ListBucketsResult(RequestResult):
     def __init__(self, resp):
         super(ListBucketsResult, self).__init__(resp)

-        #: True表示还有更多的Bucket可以罗列;False表示已经列举完毕。
+        #: True means there are more buckets to list; False means all buckets have been listed.
         self.is_truncated = False

-        #: 下一次罗列的分页标记符,即,可以作为 :func:`list_buckets ` 的 `marker` 参数。
+        #: The next paging marker, i.e. it can be used as the `marker` parameter of the next :func:`list_buckets ` call.
         self.next_marker = ''

-        #: 得到的Bucket列表,类型为 :class:`SimplifiedBucketInfo` 。
+        #: The bucket list. The element type is :class:`SimplifiedBucketInfo`. 
self.buckets = []

 class MultipartUploadInfo(object):
     def __init__(self, key, upload_id, initiation_date):
-        #: 文件名
+        #: File name
         self.key = key

-        #: 分片上传ID
+        #: Upload ID
         self.upload_id = upload_id

-        #: 分片上传初始化的时间,类型为int。参考 :ref:`unix_time`
+        #: The initialization time of the multipart upload, in UNIX time. Please check out :ref:`unix_time`.
         self.initiation_date = initiation_date

     def is_prefix(self):
-        """如果是公共前缀则返回True"""
+        """Returns True if it is a common prefix; otherwise returns False."""
         return self.upload_id is None

@@ -275,19 +275,19 @@
 class ListMultipartUploadsResult(RequestResult):
     def __init__(self, resp):
         super(ListMultipartUploadsResult, self).__init__(resp)

-        #: True表示还有更多的为完成分片上传可以罗列;False表示已经列举完毕。
+        #: True means there are more unfinished multipart uploads to list; False means all of them have been listed.
         self.is_truncated = False

-        #: 文件名分页符
+        #: The paging marker of file names.
         self.next_key_marker = ''

-        #: 分片上传ID分页符
+        #: The paging marker of upload IDs.
         self.next_upload_id_marker = ''

-        #: 分片上传列表。类型为`MultipartUploadInfo`列表。
+        #: The multipart upload list. The element type is `MultipartUploadInfo`.
         self.upload_list = []

-        #: 公共前缀列表。类型为str列表。
+        #: The common prefix list. The element type is str.
         self.prefix_list = []

@@ -295,13 +295,13 @@
 class ListPartsResult(RequestResult):
     def __init__(self, resp):
         super(ListPartsResult, self).__init__(resp)

-        # True表示还有更多的Part可以罗列;False表示已经列举完毕。
+        # True means there are more parts to list; False means all parts have been listed.
         self.is_truncated = False

-        # 下一个分页符
+        # Next paging marker.
         self.next_marker = ''

-        # 罗列出的Part信息,类型为 `PartInfo` 列表。
+        # The part list. The element type is `PartInfo`. 
self.parts = []

@@ -314,7 +314,7 @@
 class GetBucketAclResult(RequestResult):
     def __init__(self, resp):
         super(GetBucketAclResult, self).__init__(resp)

-        #: Bucket的ACL,其值可以是 `BUCKET_ACL_PRIVATE`、`BUCKET_ACL_PUBLIC_READ`或`BUCKET_ACL_PUBLIC_READ_WRITE`。
+        #: Bucket ACL. The value could be `BUCKET_ACL_PRIVATE`, `BUCKET_ACL_PUBLIC_READ` or `BUCKET_ACL_PUBLIC_READ_WRITE`.
         self.acl = ''

@@ -322,15 +322,15 @@
 class GetBucketLocationResult(RequestResult):
     def __init__(self, resp):
         super(GetBucketLocationResult, self).__init__(resp)

-        #: Bucket所在的数据中心
+        #: The datacenter where the bucket is located.
         self.location = ''

 class BucketLogging(object):
-    """Bucket日志配置信息。
+    """Bucket logging configuration.

-    :param str target_bucket: 存储日志到这个Bucket。
-    :param str target_prefix: 生成的日志文件名加上该前缀。
+    :param str target_bucket: The bucket that stores the logging files.
+    :param str target_prefix: The prefix of the generated logging file names.
     """
     def __init__(self, target_bucket, target_prefix):
         self.target_bucket = target_bucket
@@ -344,10 +344,10 @@
    def __init__(self, resp):

 class BucketReferer(object):
-    """Bucket防盗链设置。
+    """Bucket referer settings (hotlink protection).

-    :param bool allow_empty_referer: 是否允许空的Referer。
-    :param referers: Referer列表,每个元素是一个str。
+    :param bool allow_empty_referer: Whether an empty Referer is allowed.
+    :param referers: Referer list. The element type is str.
     """
     def __init__(self, allow_empty_referer, referers):
         self.allow_empty_referer = allow_empty_referer
@@ -361,10 +361,10 @@
    def __init__(self, resp):

 class BucketWebsite(object):
-    """静态网站托管配置。
+    """Static website hosting configuration.

-    :param str index_file: 索引页面文件
-    :param str error_file: 404页面文件
+    :param str index_file: The index (home) page file.
+    :param str error_file: The 404 (not found) page file. 
""" def __init__(self, index_file, error_file): self.index_file = index_file @@ -378,11 +378,11 @@ def __init__(self, resp): class LifecycleExpiration(object): - """过期删除操作。 + """Life cycle expiration。 - :param days: 表示在文件修改后过了这么多天,就会匹配规则,从而被删除 - :param date: 表示在该日期之后,规则就一直生效。即每天都会对符合前缀的文件执行删除操作(如,删除),而不管文件是什么时候生成的。 - *不建议使用* + :param days: The days after last modified to trigger the expiration rule (such as delete files). + :param date: The date threshold to trigger the expiration rule---after this date the expiration rule will be always valid (not recommended). + :type date: `datetime.date` """ def __init__(self, days=None, date=None): @@ -394,13 +394,13 @@ def __init__(self, days=None, date=None): class LifecycleRule(object): - """生命周期规则。 + """Life cycle rule - :param id: 规则名 - :param prefix: 只有文件名匹配该前缀的文件才适用本规则 - :param expiration: 过期删除操作。 + :param id: Rule name + :param prefix: File prefix to match the rule + :param expiration: Expiration time :type expiration: :class:`LifecycleExpiration` - :param status: 启用还是禁止该规则。可选值为 `LifecycleRule.ENABLED` 或 `LifecycleRule.DISABLED` + :param status: Enable or disable the rule. The value is either `LifecycleRule.ENABLED` or `LifecycleRule.DISABLED` """ ENABLED = 'Enabled' @@ -415,9 +415,9 @@ def __init__(self, id, prefix, class BucketLifecycle(object): - """Bucket的生命周期配置。 + """Bucket's life cycle configuration。 - :param rules: 规则列表, + :param rules: Life cycle rule list, :type rules: list of :class:`LifecycleRule` """ def __init__(self, rules=None): @@ -431,15 +431,15 @@ def __init__(self, resp): class CorsRule(object): - """CORS(跨域资源共享)规则。 + """CORS (cross origin resource sharing) rules - :param allowed_origins: 允许跨域访问的域。 + :param allowed_origins: Allow origins to access the bucket :type allowed_origins: list of str - :param allowed_methods: 允许跨域访问的HTTP方法,如'GET'等。 + :param allowed_methods: Allowed HTTP methods for CORS. 
:type allowed_methods: list of str

-    :param allowed_headers: 允许跨域访问的HTTP头部。
+    :param allowed_headers: The HTTP headers allowed for cross-origin access.
     :type allowed_headers: list of str

@@ -469,15 +469,15 @@
    def __init__(self, resp):

 class LiveChannelInfoTarget(object):
-    """Live channel中的Target节点,包含目标协议的一些参数。
+    """The Target node of a live channel, which includes the parameters of the target protocol.

-    :param type: 协议,目前仅支持HLS。
+    :param type: Protocol; only HLS is supported for now.
     :type type: str

-    :param frag_duration: HLS协议下生成的ts文件的期望时长,单位为秒。
+    :param frag_duration: The expected duration in seconds of each TS file generated under the HLS protocol.
     :type frag_duration: int

-    :param frag_count: HLS协议下m3u8文件里ts文件的数量。
+    :param frag_count: The number of TS files in the m3u8 file under the HLS protocol.
     :type frag_count: int"""

     def __init__(self,
@@ -492,27 +492,27 @@
    def __init__(self,

 class LiveChannelInfo(object):
-    """Live channel(直播频道)配置。
+    """Live channel configuration.

-    :param status: 直播频道的状态,合法的值为"enabled"和"disabled"。
+    :param status: The live channel status; the value is either "enabled" or "disabled".
     :type status: str

-    :param description: 直播频道的描述信息,最长为128字节。
+    :param description: The live channel's description; the max length is 128 bytes.
     :type description: str

-    :param target: 直播频道的推流目标节点,包含目标协议相关的参数。
+    :param target: The streaming push target of the live channel, including parameters about the target protocol.
     :type class:`LiveChannelInfoTarget `

-    :param last_modified: 直播频道的最后修改时间,这个字段仅在`ListLiveChannel`时使用。
-    :type last_modified: int, 参考 :ref:`unix_time`。
+    :param last_modified: The last modified time of the live channel. It's only used in `ListLiveChannel`.
+    :type last_modified: int (UNIX time); check out :ref:`unix_time` for more information.

-    :param name: 直播频道的名称。
+    :param name: The live channel name.
     :type name: str

-    :param play_url: 播放地址。
+    :param play_url: Playback URL. 
:type play_url: str

-    :param publish_url: 推流地址。
+    :param publish_url: Publish (streaming push) URL.
     :type publish_url: str"""

     def __init__(self,
@@ -533,25 +533,25 @@
    def __init__(self,

 class LiveChannelList(object):
-    """List直播频道的结果。
+    """The result of a live channel list operation.

-    :param prefix: List直播频道使用的前缀。
+    :param prefix: The live channel name prefix used for listing.
     :type prefix: str

-    :param marker: List直播频道使用的marker。
+    :param marker: The paging marker used in the live channel list operation.
     :type marker: str

-    :param max_keys: List时返回的最多的直播频道的条数。
+    :param max_keys: The max number of live channels to return.
     :type max_keys: int

-    :param is_truncated: 本次List是否列举完所有的直播频道
+    :param is_truncated: Whether there are more live channels to list.
     :type is_truncated: bool

-    :param next_marker: 下一次List直播频道使用的marker。
+    :param next_marker: The paging marker for the next list operation.
     :type marker: str

-    :param channels: List返回的直播频道列表
-    :type channels: list,类型为 :class:`LiveChannelInfo`"""
+    :param channels: The live channel list returned.
+    :type channels: list; the element type is :class:`LiveChannelInfo`"""

     def __init__(self,
                  prefix = '',
@@ -568,21 +568,21 @@
    def __init__(self,

 class LiveChannelVideoStat(object):
-    """LiveStat中的Video节点。
+    """The Video node in LiveStat.

-    :param width: 视频的宽度。
+    :param width: Video width.
     :type width: int

-    :param height: 视频的高度。
+    :param height: Video height.
     :type height: int

-    :param frame_rate: 帧率。
+    :param frame_rate: Frame rate.
     :type frame_rate: int

-    :param codec: 编码方式。
+    :param codec: Codec.
     :type codec: str

-    :param bandwidth: 码率。
+    :param bandwidth: Bandwidth of the video.
     :type bandwidth: int"""

     def __init__(self,
@@ -599,15 +599,15 @@
    def __init__(self,

 class LiveChannelAudioStat(object):
-    """LiveStat中的Audio节点。
+    """The Audio node in LiveStat.

-    :param codec: 编码方式。
+    :param codec: Audio codec. 
:type codec: str

-    :param sample_rate: 采样率。
+    :param sample_rate: Sample rate.
     :type sample_rate: int

-    :param bandwidth: 码率。
+    :param bandwidth: Bandwidth.
     :type bandwidth: int"""

     def __init__(self,
@@ -620,21 +620,21 @@
    def __init__(self,

 class LiveChannelStat(object):
-    """LiveStat结果。
+    """The LiveStat result.

-    :param status: 直播状态。
+    :param status: Live channel status.
     :type codec: str

-    :param remote_addr: 客户端的地址。
+    :param remote_addr: The client's remote address.
     :type remote_addr: str

-    :param connected_time: 本次推流开始时间。
+    :param connected_time: The start time of the current streaming push.
     :type connected_time: int, unix time

-    :param video: 视频描述信息。
+    :param video: Video description information.
     :type video: class:`LiveChannelVideoStat `

-    :param audio: 音频描述信息。
+    :param audio: Audio description information.
     :type audio: class:`LiveChannelAudioStat `"""

     def __init__(self,
@@ -651,15 +651,15 @@
    def __init__(self,

 class LiveRecord(object):
-    """直播频道中的推流记录信息
+    """A streaming push record of a live channel.

-    :param start_time: 本次推流开始时间。
-    :type start_time: int,参考 :ref:`unix_time`。
+    :param start_time: The start time of the streaming push.
+    :type start_time: int; check out :ref:`unix_time`.

-    :param end_time: 本次推流结束时间。
-    :type end_time: int, 参考 :ref:`unix_time`。
+    :param end_time: The end time of the streaming push.
+    :type end_time: int; check out :ref:`unix_time` for more information.

-    :param remote_addr: 推流时客户端的地址。
+    :param remote_addr: The client's remote address during the streaming push.
     :type remote_addr: str"""

     def __init__(self,
@@ -672,7 +672,7 @@
    def __init__(self,

 class LiveChannelHistory(object):
-    """直播频道下的推流记录。"""
+    """Streaming push records of the live channel."""

     def __init__(self):
         self.records = []

diff --git a/oss2/resumable.py b/oss2/resumable.py
old mode 100644
new mode 100755
index 96718c0c..b9129dc2
--- a/oss2/resumable.py
+++ b/oss2/resumable.py
@@ -4,7 +4,7 @@
 oss2.resumable
 ~~~~~~~~~~~~~~

-该模块包含了断点续传相关的函数和类。
+This module contains the functions and classes for resumable upload and download. 
""" import os @@ -37,24 +37,21 @@ def resumable_upload(bucket, key, filename, part_size=None, progress_callback=None, num_threads=None): - """断点上传本地文件。 - - 实现中采用分片上传方式上传本地文件,缺省的并发数是 `oss2.defaults.multipart_num_threads` ,并且在 - 本地磁盘保存已经上传的分片信息。如果因为某种原因上传被中断,下次上传同样的文件,即源文件和目标文件路径都 - 一样,就只会上传缺失的分片。 - - 缺省条件下,该函数会在用户 `HOME` 目录下保存断点续传的信息。当待上传的本地文件没有发生变化, - 且目标文件名没有变化时,会根据本地保存的信息,从断点开始上传。 - - :param bucket: :class:`Bucket ` 对象 - :param key: 上传到用户空间的文件名 - :param filename: 待上传本地文件名 - :param store: 用来保存断点信息的持久存储,参见 :class:`ResumableStore` 的接口。如不指定,则使用 `ResumableStore` 。 - :param headers: 传给 `put_object` 或 `init_multipart_upload` 的HTTP头部 - :param multipart_threshold: 文件长度大于该值时,则用分片上传。 - :param part_size: 指定分片上传的每个分片的大小。如不指定,则自动计算。 - :param progress_callback: 上传进度回调函数。参见 :ref:`progress_callback` 。 - :param num_threads: 并发上传的线程数,如不指定则使用 `oss2.defaults.multipart_num_threads` 。 + """resumable upload from local file. + + It uses multiparts upload with `oss2.defaults.multipart_num_threads` as the default thread number. + It saves the checkpoint file in local disk (by default in home folder) which could be used for next resumable upload in case this upload is interupted. + The resumable upload only uploads the remaininig parts according to the checkpoint file, as long as the local files and uploaded parts are not updated since the last upload. + + :param bucket: :class:`Bucket ` instance. + :param key: object key in OSS + :param filename: local file name + :param store: Store for the checkpoint information. If not specified, use `ResumableStore`. + :param headers: Http headers for `put_object` or `init_multipart_upload`. + :param multipart_threshold: The threshold of the file size to use multipart upload + :param part_size: Part size. If not specified, the value will be calculated automatically. + :param progress_callback: The progress callback. Check out ref:`progress_callback` for more information. + :param num_threads: The parallel thread count for upload. 
If not specified, `oss2.defaults.multipart_num_threads` will be used.
     """
     size = os.path.getsize(filename)
     multipart_threshold = defaults.get(multipart_threshold, defaults.multipart_threshold)

@@ -80,36 +77,37 @@
 def resumable_download(bucket, key, filename, progress_callback=None, num_threads=None, store=None):
-    """断点下载。
+    """Resumable download.

-    实现的方法是:
-    #. 在本地创建一个临时文件,文件名由原始文件名加上一个随机的后缀组成;
-    #. 通过指定请求的 `Range` 头按照范围并发读取OSS文件,并写入到临时文件里对应的位置;
-    #. 全部完成之后,把临时文件重命名为目标文件 (即 `filename` )
+    The implementation:
+    #. Creates a temp file whose name is the original file name plus a random suffix.
+    #. Downloads ranges of the OSS file in parallel, using the `Range` request header, and writes them to the corresponding positions in the temp file.
+    #. Once finished, renames the temp file to the target file name (i.e. `filename`).

-    在上述过程中,断点信息,即已经完成的范围,会保存在磁盘上。因为某种原因下载中断,后续如果下载
-    同样的文件,也就是源文件和目标文件一样,就会先读取断点信息,然后只下载缺失的部分。
-
-    缺省设置下,断点信息保存在 `HOME` 目录的一个子目录下。可以通过 `store` 参数更改保存位置。
+    During the download, the checkpoint information (the finished ranges) is stored on disk in a checkpoint file.
+    If the download is somehow interrupted, a later download of the same source to the same target file resumes from the checkpoint, and only the missing parts are downloaded.
+
+    By default, the checkpoint file is stored in a subfolder of the `HOME` directory; the location can be changed with the `store` parameter.

-    使用该函数应注意如下细节:
-    #. 对同样的源文件、目标文件,避免多个程序(线程)同时调用该函数。因为断点信息会在磁盘上互相覆盖,或临时文件名会冲突。
-    #. 避免使用太小的范围(分片),即 `part_size` 不宜过小,建议大于或等于 `oss2.defaults.multiget_part_size` 。
-    #. 如果目标文件已经存在,那么该函数会覆盖此文件。
+    Notes:
+    #. For the same source and target file, there should be only one running call of this function at any given time. Otherwise concurrent calls could overwrite each other's checkpoint information, or the temp file names could conflict.
+    #. Don't use a part size that is too small. The suggested value is no less than `oss2.defaults.multiget_part_size`.
+    #. This function will overwrite the target file if it already exists. 
-    :param bucket: :class:`Bucket ` 对象。
-    :param str key: 待下载的远程文件名。
-    :param str filename: 本地的目标文件名。
-    :param int multiget_threshold: 文件长度大于该值时,则使用断点下载。
-    :param int part_size: 指定期望的分片大小,即每个请求获得的字节数,实际的分片大小可能有所不同。
-    :param progress_callback: 下载进度回调函数。参见 :ref:`progress_callback` 。
-    :param num_threads: 并发下载的线程数,如不指定则使用 `oss2.defaults.multiget_num_threads` 。
+    :param bucket: :class:`Bucket ` instance.
+    :param str key: The OSS object key to download.
+    :param str filename: The local target file name.
+    :param int multiget_threshold: Files larger than this threshold are downloaded with resumable download.
+    :param int part_size: The preferred part size. The actual part size might be slightly different, as computed by determine_part_size().
+    :param progress_callback: Progress callback. Check out :ref:`progress_callback` for more information.
+    :param num_threads: The number of parallel download threads. If not specified, `oss2.defaults.multiget_num_threads` is used.

-    :param store: 用来保存断点信息的持久存储,可以指定断点信息所在的目录。
+    :param store: The persistent storage for the checkpoint information, e.g. the folder where the checkpoint file is kept.
     :type store: `ResumableDownloadStore`

-    :raises: 如果OSS文件不存在,则抛出 :class:`NotFound ` ;也有可能抛出其他因下载文件而产生的异常。
+    :raises: If the source OSS file does not exist, :class:`NotFound ` is raised; other exceptions may be raised for other download errors.
     """
     multiget_threshold = defaults.get(multiget_threshold, defaults.multiget_threshold)

@@ -132,12 +130,12 @@
 def resumable_download(bucket, key, filename,

 def determine_part_size(total_size, preferred_size=None):
-    """确定分片上传是分片的大小。
+    """Determines the part size for a multipart upload.

-    :param int total_size: 总共需要上传的长度
-    :param int preferred_size: 用户期望的分片大小。如果不指定则采用defaults.part_size
+    :param int total_size: The total size to upload.
+    :param int preferred_size: The user's preferred part size. By default it's defaults.part_size. 
-    :return: 分片大小
+    :return: The part size.
     """
     if not preferred_size:
         preferred_size = defaults.part_size

@@ -371,17 +369,17 @@
    def __gen_tmp_suffix(self):

 class _ResumableUploader(_ResumableOperation):
-    """以断点续传方式上传文件。
-
-    :param bucket: :class:`Bucket ` 对象
-    :param key: 文件名
-    :param filename: 待上传的文件名
-    :param size: 文件总长度
-    :param store: 用来保存进度的持久化存储
-    :param headers: 传给 `init_multipart_upload` 的HTTP头部
-    :param part_size: 分片大小。优先使用用户提供的值。如果用户没有指定,那么对于新上传,计算出一个合理值;对于老的上传,采用第一个
-    分片的大小。
-    :param progress_callback: 上传进度回调函数。参见 :ref:`progress_callback` 。
+    """Uploads a file with resumable upload.
+
+    :param bucket: :class:`Bucket ` instance.
+    :param key: The OSS object key.
+    :param filename: The file name to upload.
+    :param size: The total file size.
+    :param store: The store for persisting checkpoint information.
+    :param headers: The HTTP headers for `init_multipart_upload`.
+    :param part_size: Part size. A user-specified value takes priority. If not specified, a reasonable value is calculated for a new upload; for a resumed upload, the first part's size is used.
+
+    :param progress_callback: Progress callback. Check out :ref:`progress_callback` for more information.
     """
     def __init__(self, bucket, key, filename, size, store=None,
@@ -550,8 +548,7 @@
    def get(self, key):
         if not os.path.exists(pathname):
             return None

-        # json.load()返回的总是unicode,对于Python2,我们将其转换
-        # 为str。
+        # json.load() always returns unicode. For Python 2, we convert it to str.

         try:
             with open(to_unicode(pathname), 'r') as f:
@@ -585,12 +582,12 @@
    def _normalize_path(path):

 class ResumableStore(_ResumableStoreBase):
-    """保存断点上传断点信息的类。
+    """The class for persisting upload checkpoint information.

-    每次上传的信息会保存在 `root/dir/` 下面的某个文件里。
+    The checkpoint information of each upload is saved in a file under `root/dir/`.

-    :param str root: 父目录,缺省为HOME
-    :param str dir: 子目录,缺省为 `_UPLOAD_TEMP_DIR`
+    :param str root: Root folder; the default is `HOME`. 
+    :param str dir: Subfolder; the default is `_UPLOAD_TEMP_DIR`.
     """
     def __init__(self, root=None, dir=None):
         super(ResumableStore, self).__init__(root or os.path.expanduser('~'), dir or _UPLOAD_TEMP_DIR)
@@ -604,12 +601,12 @@
    def make_store_key(bucket_name, key, filename):

 class ResumableDownloadStore(_ResumableStoreBase):
-    """保存断点下载断点信息的类。
+    """The class for persisting download checkpoint information.

-    每次下载的断点信息会保存在 `root/dir/` 下面的某个文件里。
+    The checkpoint information of each download is saved in a file under `root/dir/`.

-    :param str root: 父目录,缺省为HOME
-    :param str dir: 子目录,缺省为 `_DOWNLOAD_TEMP_DIR`
+    :param str root: Root folder; the default is `HOME`.
+    :param str dir: Subfolder; the default is `_DOWNLOAD_TEMP_DIR`.
     """
     def __init__(self, root=None, dir=None):
         super(ResumableDownloadStore, self).__init__(root or os.path.expanduser('~'), dir or _DOWNLOAD_TEMP_DIR)

diff --git a/oss2/utils.py b/oss2/utils.py
old mode 100644
new mode 100755
index 645ea73c..79d1ea0d
--- a/oss2/utils.py
+++ b/oss2/utils.py
@@ -4,7 +4,7 @@
 oss2.utils
 ----------

-工具函数模块。
+Utility functions module.
 """

 from email.utils import formatdate

@@ -46,21 +46,21 @@
 def b64encode_as_string(data):

 def content_md5(data):
-    """计算data的MD5值,经过Base64编码并返回str类型。
+    """Calculates the MD5 of the data; returns a Base64-encoded str.

-    返回值可以直接作为HTTP Content-Type头部的值
+    The return value can be used directly as the value of the HTTP Content-MD5 header. 
""" m = hashlib.md5(to_bytes(data)) return b64encode_as_string(m.digest()) def md5_string(data): - """返回 `data` 的MD5值,以十六进制可读字符串(32个小写字符)的方式。""" + """Returns MD5 value of `data` in hex string (hexdigest())""" return hashlib.md5(to_bytes(data)).hexdigest() def content_type_by_name(name): - """根据文件名,返回Content-Type。""" + """Return the Content-Type by file name.""" ext = os.path.splitext(name)[1].lower() if ext in _EXTRA_TYPES_MAP: return _EXTRA_TYPES_MAP[ext] @@ -69,7 +69,7 @@ def content_type_by_name(name): def set_content_type(headers, name): - """根据文件名在headers里设置Content-Type。如果headers中已经存在Content-Type,则直接返回。""" + """Sets the content type by the name. If the content-type has been set, no-op and return.""" headers = headers or {} if 'Content-Type' in headers: @@ -102,7 +102,7 @@ def is_ip_or_localhost(netloc): def is_valid_bucket_name(name): - """判断是否为合法的Bucket名""" + """Checks if the bucket name is valid.""" if len(name) < 3 or len(name) > 63: return False @@ -116,7 +116,7 @@ def is_valid_bucket_name(name): class SizedFileAdapter(object): - """通过这个适配器(Adapter),可以把原先的 `file_object` 的长度限制到等于 `size`。""" + """It guarantees only read up to the specified 'size' data, even if the original 'file_object' size (Adapter)is bigger.""" def __init__(self, file_object, size): self.file_object = file_object self.size = size @@ -174,14 +174,14 @@ def _get_data_size(data): def make_progress_adapter(data, progress_callback, size=None): - """返回一个适配器,从而在读取 `data` ,即调用read或者对其进行迭代的时候,能够 - 调用进度回调函数。当 `size` 没有指定,且无法确定时,上传回调函数返回的总字节数为None。 + """Returns an adapter instance so that the progress callback is called when reading the data. + When parameter `size` is not specified and cannot be dertermined, the total size in the callback is None. - :param data: 可以是bytes、file object或iterable - :param progress_callback: 进度回调函数,参见 :ref:`progress_callback` - :param size: 指定 `data` 的大小,可选 + :param data: It could be bytes、file object or iterable + :param progress_callback: Progress callback. 
Check out :ref:`progress_callback` for more information.
+    :param size: The size of `data`; optional.

-    :return: 能够调用进度回调函数的适配器
+    :return: An adapter that calls the progress callback during reads.
     """
     data = to_bytes(data)

@@ -200,12 +200,12 @@
 def make_crc_adapter(data, init_crc=0):
-    """返回一个适配器,从而在读取 `data` ,即调用read或者对其进行迭代的时候,能够计算CRC。
+    """Returns an adapter so that the CRC is calculated while `data` is read (via read()) or iterated.

-    :param data: 可以是bytes、file object或iterable
-    :param init_crc: 初始CRC值,可选
+    :param data: It could be bytes, a file object or an iterable.
+    :param init_crc: Initial CRC value; optional.

-    :return: 能够调用计算CRC函数的适配器
+    :return: An adapter that calculates the CRC during reads.
     """
     data = to_bytes(data)

@@ -269,10 +269,10 @@
    def crc(self):

 class _FileLikeAdapter(object):
-    """通过这个适配器,可以给无法确定内容长度的 `fileobj` 加上进度监控。
+    """An adapter that adds progress monitoring to a `fileobj` whose content length cannot be determined.

-    :param fileobj: file-like object,只要支持read即可
-    :param progress_callback: 进度回调函数
+    :param fileobj: A file-like object; it only needs to support read().
+    :param progress_callback: Progress callback.
     """
     def __init__(self, fileobj, progress_callback=None, crc_callback=None):
         self.fileobj = fileobj
@@ -314,12 +314,12 @@
    def crc(self):

 class _BytesAndFileAdapter(object):
-    """通过这个适配器,可以给 `data` 加上进度监控。
+    """An adapter that adds progress monitoring to `data`.

-    :param data: 可以是unicode字符串(内部会转换为UTF-8编码的bytes)、bytes或file object
-    :param progress_callback: 用户提供的进度报告回调,形如 callback(bytes_read, total_bytes)。
-    其中bytes_read是已经读取的字节数;total_bytes是总的字节数。
-    :param int size: `data` 包含的字节数。
+    :param data: It could be a unicode string (internally converted to UTF-8 bytes), bytes or a file object.
+    :param progress_callback: User-provided progress callback with the signature callback(bytes_read, total_bytes),
+    where `bytes_read` is the number of bytes read so far and `total_bytes` is the total number of bytes.
+    :param int size: The number of bytes `data` contains. 
""" def __init__(self, data, progress_callback=None, size=None, crc_callback=None): self.data = to_bytes(data) @@ -411,22 +411,22 @@ def to_unixtime(time_string, format_string): def http_date(timeval=None): - """返回符合HTTP标准的GMT时间字符串,用strftime的格式表示就是"%a, %d %b %Y %H:%M:%S GMT"。 - 但不能使用strftime,因为strftime的结果是和locale相关的。 + """Returns the HTTP standard GMT time string. If using strftime format, it would be "%a, %d %b %Y %H:%M:%S GMT". + But strftime() cannot be used as it's locale dependent. """ return formatdate(timeval, usegmt=True) def http_to_unixtime(time_string): - """把HTTP Date格式的字符串转换为UNIX时间(自1970年1月1日UTC零点的秒数)。 + """Converts the Http date to unix time(total seconds since 1970 Jan First, 00:00). - HTTP Date形如 `Sat, 05 Dec 2015 11:10:29 GMT` 。 + HTTP Date such as `Sat, 05 Dec 2015 11:10:29 GMT` 。 """ return to_unixtime(time_string, _GMT_FORMAT) def iso8601_to_unixtime(time_string): - """把ISO8601时间字符串(形如,2012-02-24T06:07:48.000Z)转换为UNIX时间,精确到秒。""" + """Coverts the ISO8601 time string (e.g. 2012-02-24T06:07:48.000Z)to unix time in seconds""" return to_unixtime(time_string, _ISO8601_FORMAT) @@ -448,7 +448,7 @@ def makedir_p(dirpath): def silently_remove(filename): - """删除文件,如果文件不存在也不报错。""" + """Silently remove the file. If the file does not exist, no op and return without error.""" try: os.remove(filename) except OSError as e: diff --git a/oss2/xml_utils.py b/oss2/xml_utils.py old mode 100644 new mode 100755 index 54b0f184..c84fedb5 --- a/oss2/xml_utils.py +++ b/oss2/xml_utils.py @@ -4,11 +4,11 @@ oss2.xml_utils ~~~~~~~~~~~~~~ -XML处理相关。 +Utility class for XML processing. 
-主要包括两类接口: - - parse_开头的函数:用来解析服务器端返回的XML - - to_开头的函数:用来生成发往服务器端的XML +It includes two kinds of APIs: + - Functions starting with parse_: parse the XML returned by the OSS server + - Functions starting with to_: generate the XML to send to the OSS server """ diff --git a/tests/test_api_base.py b/tests/test_api_base.py index fd4f36fb..d7c9c512 100644 --- a/tests/test_api_base.py +++ b/tests/test_api_base.py @@ -23,7 +23,7 @@ def test_https(self): bucket = oss2.Bucket(oss2.AnonymousAuth(), OSS_ENDPOINT.replace('http://', 'https://'), bucket_name) self.assertRaises(oss2.exceptions.NoSuchBucket, bucket.get_object, 'hello.txt') - # 只是为了测试,请不要用IP访问OSS,除非你是在VPC环境下。 + # For testing only. Do not use an IP to access OSS unless you are in a VPC environment. def test_ip(self): bucket_name = random_string(63) ip = socket.gethostbyname(OSS_ENDPOINT.replace('https://', '').replace('http://', '')) @@ -61,14 +61,14 @@ def do_request(session_self, req, timeout): from unittest.mock import patch with patch.object(oss2.Session, 'do_request', side_effect=do_request, autospec=True): - # 不加 app_name + # without app_name assert_found = False self.assertRaises(oss2.exceptions.ClientError, self.bucket.get_bucket_acl) service = oss2.Service(oss2.Auth(OSS_ID, OSS_SECRET), OSS_ENDPOINT) self.assertRaises(oss2.exceptions.ClientError, service.list_buckets) - # 加app_name + # with app_name assert_found = True bucket = oss2.Bucket(oss2.Auth(OSS_ID, OSS_SECRET), OSS_ENDPOINT, OSS_BUCKET, app_name=app) diff --git a/tests/test_bucket.py b/tests/test_bucket.py index bf7c1410..ab778b84 100644 --- a/tests/test_bucket.py +++ b/tests/test_bucket.py @@ -77,28 +77,28 @@ def test_website(self): self.bucket.put_object('index.html', content) - # 设置index页面和error页面 + # Set the index page and error page self.bucket.put_bucket_website(oss2.models.BucketWebsite('index.html', 'error.html')) time.sleep(5) def same_website(website, index, error): return website.index_file == index and website.error_file == error - # 验证index页面和error页面 + # Verify the index 
page and error page self.retry_assert(lambda: same_website(self.bucket.get_bucket_website(), 'index.html', 'error.html')) - # 验证读取目录会重定向到index页面 + # Verify that reading the directory redirects to the index page result = self.bucket.get_object(key) self.assertEqual(result.read(), content) self.bucket.delete_object('index.html') - # 中文 + # Chinese characters for index, error in [('index+中文.html', 'error.中文'), (u'index+中文.html', u'error.中文')]: self.bucket.put_bucket_website(oss2.models.BucketWebsite(index, error)) self.retry_assert(lambda: same_website(self.bucket.get_bucket_website(), to_string(index), to_string(error))) - # 关闭静态网站托管模式 + # Disable static website hosting for the bucket self.bucket.delete_bucket_website() self.bucket.delete_bucket_website() diff --git a/tests/test_download.py b/tests/test_download.py index 2b15de47..c97257f3 100644 --- a/tests/test_download.py +++ b/tests/test_download.py @@ -76,7 +76,7 @@ def test_large_single_threaded(self): self.__test_normal(2 * 1024 * 1024 + 1) def test_large_multi_threaded(self): - """多线程,线程数少于分片数""" + """Multi-threaded; the thread count is smaller than the part count""" oss2.defaults.multiget_threshold = 1024 * 1024 oss2.defaults.multiget_part_size = 100 * 1024 @@ -85,7 +85,7 @@ def test_large_multi_threaded(self): self.__test_normal(2 * 1024 * 1024) def test_large_many_threads(self): - """线程数多余分片数""" + """The thread count is greater than the part count""" oss2.defaults.multiget_threshold = 1024 * 1024 oss2.defaults.multiget_part_size = 100 * 1024 @@ -130,7 +130,7 @@ def mock_download_part(self, part, failed_parts=None): self.assertFileContent(filename, content) def test_resume_hole_start(self): - """第一个part失败""" + """The first part fails""" oss2.defaults.multiget_threshold = 1 oss2.defaults.multiget_part_size = 500 @@ -139,7 +139,7 @@ def test_resume_hole_start(self): self.__test_resume(500 * 10 + 16, [1]) def test_resume_hole_end(self): - """最后一个part失败""" + """The last part fails""" oss2.defaults.multiget_threshold = 1 
oss2.defaults.multiget_part_size = 500 @@ -148,7 +148,7 @@ def test_resume_hole_end(self): self.__test_resume(500 * 10 + 16, [11]) def test_resume_hole_mid(self): - """中间part失败""" + """A middle part fails""" oss2.defaults.multiget_threshold = 1 oss2.defaults.multiget_part_size = 500 @@ -296,7 +296,7 @@ def corrupt_record(store, store_key, r): self.__test_insane_record(400, corrupt_record) def test_remote_changed_before_start(self): - """在开始下载之前,OSS上的文件就已经被修改了""" + """The file on OSS was modified before the download started""" oss2.defaults.multiget_threshold = 1 # reuse __test_insane_record to simulate @@ -350,7 +350,7 @@ def mock_rename(src, dst): self.assertTrue(new_context['etag'] != old_context['etag']) def test_two_downloaders(self): - """两个downloader同时跑,但是store的目录不一样。""" + """Two downloaders run concurrently, but with different store directories.""" oss2.defaults.multiget_threshold = 1 oss2.defaults.multiget_part_size = 100 diff --git a/tests/test_image.py b/tests/test_image.py index 56e44e3d..33472e9c 100644 --- a/tests/test_image.py +++ b/tests/test_image.py @@ -37,49 +37,49 @@ def __check(self, image_key, image_height, image_width, image_size, image_format self.assertEqual(decoded_json['Format']['value'], image_format) def test_resize(self): - style = "image/resize,m_fixed,w_100,h_100" # 缩放 + style = "image/resize,m_fixed,w_100,h_100" # resize original_image, new_image = self.__prepare() self.__test(original_image, new_image, style) self.__check(new_image, 100, 100, 3267, 'jpg') def test_crop(self): - style = "image/crop,w_100,h_100,x_100,y_100,r_1" # 裁剪 + style = "image/crop,w_100,h_100,x_100,y_100,r_1" # crop original_image, new_image = self.__prepare() self.__test(original_image, new_image, style) self.__check(new_image, 100, 100, 1969, 'jpg') def test_rotate(self): - style = "image/rotate,90" # 旋转 + style = "image/rotate,90" # rotate original_image, new_image = self.__prepare() self.__test(original_image, new_image, style) self.__check(new_image, 400, 267, 
20998, 'jpg') def test_sharpen(self): - style = "image/sharpen,100" # 锐化 + style = "image/sharpen,100" # sharpen original_image, new_image = self.__prepare() self.__test(original_image, new_image, style) self.__check(new_image, 267, 400, 23015, 'jpg') def test_watermark(self): - style = "image/watermark,text_SGVsbG8g5Zu-54mH5pyN5YqhIQ" # 文字水印 + style = "image/watermark,text_SGVsbG8g5Zu-54mH5pyN5YqhIQ" # text watermark original_image, new_image = self.__prepare() self.__test(original_image, new_image, style) self.__check(new_image, 267, 400, 26369, 'jpg') def test_format(self): - style = "image/format,png" # 图像格式转换 + style = "image/format,png" # format conversion original_image, new_image = self.__prepare() self.__test(original_image, new_image, style) self.__check(new_image, 267, 400, 160733, 'png') def test_resize_to_file(self): - style = "image/resize,m_fixed,w_100,h_100" # 缩放 + style = "image/resize,m_fixed,w_100,h_100" # resize original_image, new_image = self.__prepare() self.__test_to_file(original_image, new_image, style) diff --git a/tests/test_iterator.py b/tests/test_iterator.py index 40b80c17..3bad246b 100644 --- a/tests/test_iterator.py +++ b/tests/test_iterator.py @@ -20,17 +20,17 @@ def test_object_iterator(self): object_list = [] dir_list = [] - # 准备文件 + # Prepare the files for i in range(20): object_list.append(prefix + random_string(16)) self.bucket.put_object(object_list[-1], random_bytes(10)) - # 准备目录 + # Prepare the folders for i in range(5): dir_list.append(prefix + random_string(5) + '/') self.bucket.put_object(dir_list[-1] + random_string(5), random_bytes(3)) - # 验证 + # Verification objects_got = [] dirs_got = [] for info in oss2.ObjectIterator(self.bucket, prefix, delimiter='/', max_keys=4): @@ -60,16 +60,16 @@ def test_upload_iterator(self): upload_list = [] dir_list = [] - # 准备分片上传 + # Prepare the multipart uploads for i in range(10): upload_list.append(self.bucket.init_multipart_upload(key).upload_id) - # 准备碎片目录 + # Prepare the folders for the uploads for i in 
range(4): dir_list.append(prefix + random_string(5) + '/') self.bucket.init_multipart_upload(dir_list[-1] + random_string(5)) - # 验证 + # Verification uploads_got = [] dirs_got = [] for u in oss2.MultipartUploadIterator(self.bucket, prefix=prefix, delimiter='/', max_uploads=2): @@ -97,27 +97,27 @@ def test_upload_iterator_chinese(self): self.assertEqual(sorted(upload_list), sorted(uploads_got)) def test_object_upload_iterator(self): - # target_object是想要列举的文件,而intact_object则不是。 - # 这里intact_object故意以target_object为前缀 + # target_object is the file to list, while intact_object is not. + # intact_object deliberately uses target_object as its prefix. target_object = self.random_key() intact_object = self.random_key() target_list = [] intact_list = [] - # 准备分片 + # Prepare the multipart uploads for i in range(10): target_list.append(self.bucket.init_multipart_upload(target_object).upload_id) intact_list.append(self.bucket.init_multipart_upload(intact_object).upload_id) - # 验证:max_uploads能被分片数整除 + # Verify: the upload count is an exact multiple of max_uploads uploads_got = [] for u in oss2.ObjectUploadIterator(self.bucket, target_object, max_uploads=5): uploads_got.append(u.upload_id) self.assertEqual(sorted(target_list), uploads_got) - # 验证:max_uploads不能被分片数整除 + # Verify: the upload count is not an exact multiple of max_uploads uploads_got = [] for u in oss2.ObjectUploadIterator(self.bucket, target_object, max_uploads=3): uploads_got.append(u.upload_id) @@ -125,7 +125,7 @@ def test_object_upload_iterator(self): self.assertEqual(sorted(target_list), uploads_got) - # 清理 + # Clean up for upload_id in target_list: self.bucket.abort_multipart_upload(target_object, upload_id) @@ -136,7 +136,7 @@ def test_part_iterator(self): for key in [random_string(16), '中文+_)(*&^%$#@!前缀', u'中文+_)(*&^%$#@!前缀']: upload_id = self.bucket.init_multipart_upload(key).upload_id - # 准备分片 + # Prepare the parts part_list = [] for part_number in [1, 3, 6, 7, 9, 10]: content = random_string(128 * 1024) @@ -145,7 +145,7 @@ def 
test_part_iterator(self): self.bucket.upload_part(key, upload_id, part_number, content) - # 验证 + # Verification parts_got = [] for part_info in oss2.PartIterator(self.bucket, key, upload_id): parts_got.append(part_info) @@ -165,12 +165,12 @@ def test_live_channel_iterator(self): channel_target = oss2.models.LiveChannelInfoTarget(playlist_name = 'test.m3u8') channel_info = oss2.models.LiveChannelInfo(target = channel_target) - # 准备频道 + # Prepare the live channels for i in range(20): channel_name_list.append(prefix + random_string(16)) self.bucket.create_live_channel(channel_name_list[-1], channel_info) - # 验证 + # Verify live_channel_got = [] for info in oss2.LiveChannelIterator(self.bucket, prefix, max_keys=4): live_channel_got.append(info.name) diff --git a/tests/test_multipart.py b/tests/test_multipart.py index b7c1ac00..734a6cb1 100644 --- a/tests/test_multipart.py +++ b/tests/test_multipart.py @@ -70,10 +70,10 @@ def test_upload_part_copy(self): content = random_bytes(200 * 1024) - # 上传源文件 + # Upload the source file self.bucket.put_object(src_object, content) - # part copy到目标文件 + # Part-copy to the target file 
parts = [] upload_id = self.bucket.init_multipart_upload(dst_object).upload_id @@ -87,7 +87,7 @@ def test_upload_part_copy(self): self.bucket.complete_multipart_upload(dst_object, upload_id, parts) - # 验证 + # Verify content_got = self.bucket.get_object(dst_object).read() self.assertEqual(len(content_got), len(content)) self.assertEqual(content_got, content) diff --git a/tests/test_object.py b/tests/test_object.py index 22cf4d95..8112e264 100644 --- a/tests/test_object.py +++ b/tests/test_object.py @@ -62,31 +62,31 @@ def test_file(self): with open(filename, 'wb') as f: f.write(content) - # 上传本地文件到OSS + # Upload the local file to OSS self.bucket.put_object_from_file(key, filename) - # 检查Content-Type应该是javascript + # Check that the Content-Type is javascript result = self.bucket.head_object(key) self.assertEqual(result.headers['content-type'], 'application/javascript') - # 下载到本地文件 + # Download it to a local file self.bucket.get_object_to_file(key, filename2) self.assertTrue(filecmp.cmp(filename, filename2)) - # 上传本地文件的一部分到OSS + # Upload part of the local file to OSS key_partial = self.random_key('-partial.txt') offset = 100 with open(filename, 'rb') as f: f.seek(offset, os.SEEK_SET) self.bucket.put_object(key_partial, f) - # 检查上传后的文件 + # Verify the uploaded file result = self.bucket.get_object(key_partial) self.assertEqual(result.content_length, len(content) - offset) self.assertEqual(result.read(), content[offset:]) - # 清理 + # Clean up os.remove(filename) os.remove(filename2) @@ -98,7 +98,7 @@ def test_streaming(self): self.bucket.put_object(src_key, content) - # 获取OSS上的文件,一边读取一边写入到另外一个OSS文件 + # Get the OSS file and write it to another OSS file while reading src = self.bucket.get_object(src_key) result = self.bucket.put_object(dst_key, src) @@ -183,7 +183,7 @@ def test_anonymous(self): key = self.random_key() content = random_bytes(512) - # 设置bucket为public-read,并确认可以上传和下载 + # Set the bucket to public-read and verify that upload and download work 
self.bucket.put_bucket_acl('public-read-write') time.sleep(2) @@ -192,12 +192,12 @@ def test_anonymous(self): result = b.get_object(key) self.assertEqual(result.read(), content) - # 测试sign_url + # Test sign_url url = b.sign_url('GET', key, 100) resp = requests.get(url) self.assertEqual(content, resp.content) - # 设置bucket为private,并确认上传和下载都会失败 + # Set the bucket to private and verify that upload and download fail self.bucket.put_bucket_acl('private') time.sleep(1) @@ -309,7 +309,7 @@ def test_update_object_meta(self): self.bucket.put_object(key, content) - # 更改Content-Type,增加用户自定义元数据 + # Update the Content-Type and add user-defined metadata self.bucket.update_object_meta(key, {'Content-Type': 'whatever', 'x-oss-meta-category': 'novel'}) @@ -387,28 +387,28 @@ def progress_callback(bytes_consumed, total_bytes): key = self.random_key() content = random_bytes(2 * 1024 * 1024) - # 上传内存中的内容 + # Upload the in-memory content stats = {'previous': -1} self.bucket.put_object(key, content, progress_callback=progress_callback) self.assertEqual(stats['previous'], len(content)) - # 追加内容 + # Append the content stats = {'previous': -1} self.bucket.append_object(self.random_key(), 0, content, progress_callback=progress_callback) self.assertEqual(stats['previous'], len(content)) - # 下载到文件 + # Download to a local file stats = {'previous': -1} filename = random_string(12) + '.txt' self.bucket.get_object_to_file(key, filename, progress_callback=progress_callback) self.assertEqual(stats['previous'], len(content)) - # 上传本地文件 + # Upload the local file stats = {'previous': -1} self.bucket.put_object_from_file(key, filename, progress_callback=progress_callback) self.assertEqual(stats['previous'], len(content)) - # 下载到本地,采用iterator语法 + # Download to memory using iterator syntax stats = {'previous': -1} result = self.bucket.get_object(key, progress_callback=progress_callback) content_got = b''
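The docstrings and tests translated above all rely on the same progress-callback contract: the callback receives `(bytes_read, total_bytes)`, and `total_bytes` is `None` when the data size cannot be determined. The following is a minimal standalone sketch of that contract, not the actual oss2 implementation; the class and function names here are illustrative only.

```python
import io


def make_progress_adapter(fileobj, progress_callback, size=None):
    """Wrap `fileobj` so every read() reports progress via the callback."""
    return _ProgressAdapter(fileobj, progress_callback, size)


class _ProgressAdapter(object):
    def __init__(self, fileobj, progress_callback, size):
        self.fileobj = fileobj
        self.progress_callback = progress_callback
        self.size = size      # may be None: total size is then unknown
        self.offset = 0       # bytes read so far

    def read(self, amt=-1):
        data = self.fileobj.read(amt)
        self.offset += len(data)
        # Contract: callback(bytes_read, total_bytes), total may be None
        self.progress_callback(self.offset, self.size)
        return data


# Usage: consume 1 KB in 256-byte chunks and record each progress report.
progress = []
adapter = make_progress_adapter(
    io.BytesIO(b'x' * 1024),
    lambda done, total: progress.append((done, total)),
    size=1024)
while adapter.read(256):
    pass
print(progress[-1])  # (1024, 1024)
```

This mirrors why the tests above assert `stats['previous'] == len(content)` after each upload or download: the last callback always reports the full byte count.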