forked from delta-io/delta
fork update #10
Merged
Noticed while preparing some Delta community work around Data and AI Summit that this project doesn't actually have a defined code of conduct. This adopts the one we're using for delta-rs, with just the Enforcement/reporting section changed.
This PR is mainly a refactoring and passes existing tests. Author: Ali Afroozeh <ali.afroozeh@databricks.com> GitOrigin-RevId: b5d7f16d250349d1480d56ab46ed16cf91d896bd
Author: Meng Tong <meng.tong@databricks.com> GitOrigin-RevId: c18fd13bd6417c4c933f8da3304ec3b7cd3c7c5c
Author: Wenchen Fan <wenchen@databricks.com> GitOrigin-RevId: 759f76862fd4ff38c674dc56330cdbb9daad68bf
Author: Shixiong Zhu <zsxwing@gmail.com> GitOrigin-RevId: 4cfe588476481e497364147c1f0429a5e5121728
Author: Meng Tong <meng.tong@databricks.com> GitOrigin-RevId: 991b1d5537abe52eca2739a86746e8b2e75a0a17
…empty path When two concurrent writers write to an empty path for the first time, the second writer would fail with a ProtocolChangedException. In particular, the first writer would have updated the reader and writer versions of the protocol, and the second writer (even though it could be writing the same reader/writer versions) would consider this a transaction conflict and throw a ProtocolChangedException. This change adds information to the error message explaining that the failure was due to concurrent transactions writing to the same empty path. Author: Vijayan Prabhakaran <vijayan.prabhakaran@databricks.com> GitOrigin-RevId: 96e85c3363fd04181f7880e5907457b14b35b197
Author: Yijia Cui <yijia.cui@databricks.com> GitOrigin-RevId: 459ce29075cddf7436ba4f67a783d2314a0445bd
This PR adds the new LogStore API designed to be a public API. The existing LogStore API is not changed. The new API is based on the existing API but we cleaned up a few places to make it simpler and easier to work with. As the new API and old API are not binary compatible, this PR also adds `LogStoreAdaptor` which adapts all implementations of new API to existing API. All implementations of existing LogStore API will continue to work as they are today without any changes. Author: Meng Tong <meng.tong@databricks.com> GitOrigin-RevId: b389717c28fed9ea4ffbfdd999dc4265615270af
Currently, when a source column of a generated column is updated in an UPDATE command, we don't update the generated column accordingly. For example, given a table that has a generated column `g` defined as `c1 + 10`, for the following update command: ``` UPDATE target SET c1 = 100 ``` we will copy the old value of `g`. This is not correct, and it will fail because of the constraints we apply to generated columns. The correct behavior is to update `g` to `110` according to the new value of `c1`. This PR updates `PreprocessTableUpdate` to generate update expressions for generated columns automatically when the user doesn't provide them. Tested with new unit tests. Author: Shixiong Zhu <zsxwing@gmail.com> GitOrigin-RevId: 5b6d3c5d37439d18b158269c2165e1b12c068209
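The recompute rule above can be sketched as a small Python model (illustrative only — not the actual `PreprocessTableUpdate` implementation; the function and variable names are hypothetical): user-supplied SET clauses are applied first, then any generated column the user did not set explicitly is recomputed from its generation expression against the new values.

```python
def resolve_update(row, set_clauses, generation_exprs):
    """Apply user SET clauses, then recompute any generated column the
    user did not set, using its generation expression on the new values."""
    updated = dict(row)
    updated.update(set_clauses)
    for col, expr in generation_exprs.items():
        if col not in set_clauses:
            # Recompute instead of copying the stale value.
            updated[col] = expr(updated)
    return updated

# g is generated as c1 + 10; UPDATE sets c1 = 100.
row = {"c1": 1, "g": 11}
new_row = resolve_update(row, {"c1": 100}, {"g": lambda r: r["c1"] + 10})
# new_row["g"] is recomputed as 110 rather than copied as 11
```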
Author: Meng Tong <meng.tong@databricks.com> GitOrigin-RevId: 329ef04133858b76c512dbdd372dad2cee5fc6e5
New tags and copy method Author: sabir-akhadov <sabir.akhadov@databricks.com> GitOrigin-RevId: 2490fcef902cdedcb8ec59c4f1210c9fe39f3cdb
Author: Shixiong Zhu <zsxwing@gmail.com> GitOrigin-RevId: b063a0d34b833df4f5fb999d0a49719890cd3b3d
Related to #282, contains code from #353. This PR contains two changes: - Creates a PyPI release - A Python-only function, `delta.configure_spark_with_delta_pip()` that gets the currently installed version, to be used in an IDE when initializing a spark session, like: `spark_session = delta.configure_spark_with_delta_pip(spark_session_builder).getOrCreate()`. The idea here is to allow the package to be self-referential, avoiding version mismatch problems which can be tough to debug. A few items need to be addressed: - Get a username and password for PyPI. The username will be added in config.yml (<username>), whereas the password will be stored in $PYPI_PASSWORD env variable in circleci. Other notes: - This change only targets IDE workflows. Command-line programs like `pyspark` and `spark-submit` will still need to specify the delta.io package on launch, see [quickstart docs](https://docs.delta.io/latest/quick-start.html#pyspark). Closes #659 Co-authored-by: Nikolaos Tsipas <nicktgr15@gmail.com> Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> Author: Tathagata Das <tathagata.das1565@gmail.com> Author: Tathagata Das <tdas@databricks.com> Author: Christopher Grant <chrisgrant@lavabit.com> #21456 is resolved by tdas/htj80y82. GitOrigin-RevId: 138065a1d1659bd3bc5c303868c15d1b2747655b
…orage This PR adds support for IBM Cloud Object Storage (IBM COS) by creating `COSLogStore`, which extends `HadoopFileSystemLogStore` and relies on IBM COS's ability to handle [atomic writes using Etags](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-upload#upload-conditional). The support for IBM COS relies on the following properties: 1. Write on COS is all-or-nothing, whether overwrite or not. 2. List-after-write is consistent. 3. Write is atomic when using the [Stocator - Storage Connector for Apache Spark](https://github.com/CODAIT/stocator) (v1.1.1+) and setting the configuration `fs.cos.atomic.write` to `true`. In addition, I propose the following [documentation](https://docs.google.com/document/d/1ued0rajmIZPZXJZ65uvvUTb088rcsxLl3zct1Y5A4p8/edit) to be added to the [Storage Configuration](https://docs.delta.io/latest/delta-storage.html) page. Closes #302 Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> Author: Guy Khazma <33684427+guykhazma@users.noreply.github.com> Author: Tathagata Das <tathagata.das1565@gmail.com> #21738 is resolved by tdas/7pvlaz2d. GitOrigin-RevId: 00d961bad7e2e15521ac51f06a4a101cd5bd925f
Create CODE_OF_CONDUCT.md
Speed up the vacuum suite by lowering the parallelPartitionDiscovery parallelism. Tested locally: runtime went from ~500s to ~300s. Author: Rahul Mahadev <rahul.mahadev@databricks.com> GitOrigin-RevId: 0e70eee2f4beeba93218c00a4fc7a228e81f1da6
Now that we can ensure a checkpoint operation doesn't overwrite others, we can remove the lock during a Delta commit to speed up concurrent commits. Author: Shixiong Zhu <zsxwing@gmail.com> GitOrigin-RevId: 482edc3da0511c659baeae3fa3d44357b3277152
While digging into the DeltaLog code, I realized that some code can be refactored, and that's the main goal of this pull request: 1. removed some unused imports 2. removed an unused method 3. removed an unnecessary variable 4. converted some code to single abstract methods. Closes #651 Signed-off-by: Shixiong Zhu <zsxwing@gmail.com> Author: Shixiong Zhu <zsxwing@gmail.com> Author: mahmoud mahdi <mahmoudmahdi24@gmail.com> #21874 is resolved by zsxwing/qum30qw7. GitOrigin-RevId: dcadfb75bbdcdc56ed681038892ac396e4ba2862
…mposite ids of two deltaLogs The main goal of this pull request is to use the `isSameLogAs` function when possible. Closes #650 Signed-off-by: Shixiong Zhu <zsxwing@gmail.com> Author: mahmoud mahdi <mahmoudmahdi24@gmail.com> #21878 is resolved by zsxwing/95ddzfsx. GitOrigin-RevId: 19023f7b0291c368bbaf234c4abfaf228833521e
…ge is null while merging. Hi, when we interrupt a stream that updates a Delta table using _merge_ mode, the JVM runtime throws a plain `InterruptedException` with no description, but `AnalysisHelper` expects that all exceptions have a message, which causes a `NullPointerException` in such cases. This PR adds a null check to avoid that, and adds a unit test for this scenario. Thank you, Bruno Closes #648 Signed-off-by: Shixiong Zhu <zsxwing@gmail.com> Author: Bruno Palos <brunopalos@gmail.com> Author: Shixiong Zhu <zsxwing@gmail.com> #21879 is resolved by zsxwing/pouscxop. GitOrigin-RevId: 6be7561f24dd6b26a8f8732e82e25896234071a8
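The shape of the fix can be illustrated with a minimal sketch (Python standing in for the Scala code; the function name is hypothetical, and `None` models the JVM's null `getMessage`): guard against a missing message before substring-matching on it.

```python
def message_contains(exc, marker):
    """Return True only if the exception carries a message containing
    marker; a message-less exception (getMessage == null in the JVM,
    modeled as no args here) must not raise, just return False."""
    msg = exc.args[0] if exc.args else None
    return msg is not None and marker in msg

class InterruptedLike(Exception):
    """Stands in for a JVM InterruptedException with no description."""

# A normal exception with a message matches as before; a message-less
# one is handled safely instead of blowing up on the null message.
has_marker = message_contains(ValueError("cannot resolve column"), "resolve")
no_marker = message_contains(InterruptedLike(), "resolve")
```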
…OCI) Object Store as Delta Storage Adding support for Oracle Cloud Infrastructure (OCI) Object Store as Delta Storage by introducing OCILogStore. For the [Storage configuration](https://docs.delta.io/latest/delta-storage.html) page in the Delta documentation, I request the following changes, mentioned [here](https://docs.google.com/document/d/1DJvRAuUWUov5kepAQb176uSUsdlgU2MxaGBDBYiRGsg/edit?usp=sharing). Closes #468 Co-authored-by: Vivek Bhaskar <vivek.bhaskar@oracle.com> Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> Author: Tathagata Das <tathagata.das1565@gmail.com> Author: Vivek Bhaskar <vibhaska@gmail.com> #21793 is resolved by tdas/5i600laq. GitOrigin-RevId: 9b0460f06e2cd9ab7fbf6bbce688d92ab10975f0
## What changes were proposed in this pull request? Spark 3.1 supports unlimited merge clauses in SQL. So we enable the existing tests. ## How was this patch tested? newly enabled unit tests Author: Tathagata Das <tathagata.das1565@gmail.com> #21787 is resolved by tdas/SC-77172. GitOrigin-RevId: d5760ea79e7a8173953abe53e9705b1b5beebd3a
As explained in the title Author: Jose Torres <joseph.torres@databricks.com> GitOrigin-RevId: f55a27328b6e0c97ac9ce57f126ebbae968bd06e
## What changes were proposed in this pull request? This PR adds `DelegatingLogStore` for OSS Delta with the capability of resolving the LogStore implementation based on the scheme of a path. The decision logic is as follows: 1. Check `spark.delta.logStore.class`. If it is set, use the value. If not, go to next step. 2. `DelegatingLogStore` will be used, which will 2.1 Check `spark.delta.logStore.scheme.impl`. If it is set, use the value. If not, go to next step. 2.2 Check if we have default implementation for `scheme`. If we do, use the corresponding default value. If not, go to next step. 2.3 Use `HDFSLogStore`. ## How was this patch tested? Added new unit test for the LogStore resolution logic. ## This PR introduces the following *user-facing* changes AFFECTED VERSIONS: OSS Delta only. PROBLEM DESCRIPTION: OSS Delta users will now be able to specify log store implementation for different schemes using `spark.delta.logStore.scheme.impl`. Author: Meng Tong <meng.tong@databricks.com> #20836 is resolved by mengtong-db/logstore-public-api-oss-delegate. GitOrigin-RevId: 97bbd862405a2370a3d6b1fa46445297f60b172a
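The resolution order above can be sketched in a few lines of Python (an illustrative model only, not the actual `DelegatingLogStore` code; the per-scheme default table here is a hypothetical stand-in — consult the Delta storage docs for the real defaults):

```python
# Hypothetical per-scheme defaults, standing in for DelegatingLogStore's
# built-in table (step 2.2).
SCHEME_DEFAULTS = {
    "s3": "S3SingleDriverLogStore",
    "azure": "AzureLogStore",
}

def resolve_log_store(conf, scheme):
    """Resolve a LogStore class name from a flat config dict, following
    the decision logic described in the PR."""
    # 1. A global override wins outright.
    if "spark.delta.logStore.class" in conf:
        return conf["spark.delta.logStore.class"]
    # 2.1 A per-scheme override comes next.
    per_scheme_key = f"spark.delta.logStore.{scheme}.impl"
    if per_scheme_key in conf:
        return conf[per_scheme_key]
    # 2.2 Built-in default for the scheme, else 2.3 fall back to HDFS.
    return SCHEME_DEFAULTS.get(scheme, "HDFSLogStore")
```

For example, with no configuration at all, every scheme falls through to `HDFSLogStore`, while setting `spark.delta.logStore.s3.impl` redirects only `s3://` paths.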
…or Generated Columns. ## What changes were proposed in this pull request? Add Create Delta Table APIs as DeltaTableBuilder in scala with support for Generated Columns. See https://groups.google.com/a/databricks.com/g/spark-api/c/5vkssqZUmP0 for more details. ## How was this patch tested? unit test. ## This PR introduces the following *user-facing* changes AFFECTED VERSIONS: OSS Delta only. PROBLEM DESCRIPTION: OSS Delta users will now be able to create / replace DeltaTable with GeneratedColumn supported. Author: Yijia Cui <yijia.cui@databricks.com> #21310 is resolved by yijiacui-db/SC-69796. GitOrigin-RevId: f12cb173d9ebd20535801a316a665d62a7695ab4
Made the code cleaner. Verified with existing tests. Author: Meng Tong <meng.tong@databricks.com> GitOrigin-RevId: 15ee044259329088b32a8fe9662709b2af205588
…aMergeBuilder ## What changes were proposed in this pull request? Remove Evolving Annotation From DeltaTable and DeltaMergeBuilder ## How was this patch tested? No need to test - comment change only. Author: Yijia Cui <yijia.cui@databricks.com> #22038 is resolved by yijiacui-db/SC-77334. GitOrigin-RevId: 1d7559dad455ace3734caa32e04eabb72f52a608
Currently there are no usage logs for the commitLarge code flow, which is used by CONVERT commands. This PR adds them under a new tag "delta.commitLarge.stats". Added UTs. Author: Prakhar Jain <prakhar.jain@databricks.com> GitOrigin-RevId: 6b4d2466aa41370cc95605edef282bd2759c9e8d
## What changes were proposed in this pull request? As title says. ## How was this patch tested? Tool fix. No need to test. Author: Yijia Cui <yijia.cui@databricks.com> #22131 is resolved by yijiacui-db/SC-77585. GitOrigin-RevId: 737e76d164058c77d4dfae68871146eb377a16ae
…ted Columns ## What changes were proposed in this pull request? Add DeltaTableBuilder Python API. ## How was this patch tested? Unit tests. ## This PR introduces the following *user-facing* changes AFFECTED VERSIONS: OSS Delta 1.0 and DBR 8.3. PROBLEM DESCRIPTION: OSS Delta users and DBR customers will now be able to create / replace DeltaTable using Python APIs. Lazy consensus on https://groups.google.com/a/databricks.com/g/eng-streamteam/c/THVJ4DvrQGM/m/Sg6WcpcAAgAJ Author: Yijia Cui <yijia.cui@databricks.com> #21551 is resolved by yijiacui-db/SC-69796-python. GitOrigin-RevId: 4a8d321194a2902883126b981c5066c176b148fb
Adding support for Google Cloud Storage (GCS) as Delta Storage by introducing GcsLogStore. This PR addresses issue #294. File creation is all-or-nothing, to achieve atomicity, and uses GCS [preconditions](https://cloud.google.com/storage/docs/generations-preconditions#_Preconditions) to avoid race conditions among multiple writers/drivers. This implementation relies on gcs-connector to provide the necessary `FileSystem` implementations. This has been tested on a Google Dataproc cluster. #### GcsLogStore requirements 1. spark.delta.logStore.class=org.apache.spark.sql.delta.storage.GcsLogStore 2. Include gcs-connector in the classpath. The Cloud Storage connector is automatically installed on Dataproc clusters. #### Usage ``` TABLE_LOCATION = 'gs://ranuvikram-test/test/delta-table' # Write data to table. data = spark.range(5, 10) data.write.format("delta").mode("append").save(TABLE_LOCATION) # Read data from table. df = spark.read.format("delta").load(TABLE_LOCATION) df.show() ``` Closes #560 Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> Author: Tathagata Das <tathagata.das1565@gmail.com> Author: Ranu Vikram <ranu010101@users.noreply.github.com> #22070 is resolved by tdas/o9ixtoaw. GitOrigin-RevId: 0a1ce1d4407637d7697b93a25d8fd6be3efe2f6d
## What changes were proposed in this pull request? As titled. Also moved the classes to the correct package locations ## How was this patch tested? Doc/comment change. Existing tests are sufficient. Author: Meng Tong <meng.tong@databricks.com> #22045 is resolved by mengtong-db/annotatiion. GitOrigin-RevId: 11e3d22206362a9de4d869812dd6b58074212cb8
- Add Writer Version 4 requirements. - Add Generated Columns. - Make the `Writer Version Requirements` table use GitHub markdown format. - Update the table of contents. Closes #671 Signed-off-by: Shixiong Zhu <zsxwing@gmail.com> Author: Shixiong Zhu <zsxwing@gmail.com> #22252 is resolved by zsxwing/bgtk1zni. GitOrigin-RevId: 6a3c66920e67829814088060522dd19eb4a0393b
- show all classes in python docs - fix developer API tag in scala/java docs - add annotations to the new exceptions and logstores - refactored DeltaTableBuilder's options to hide them from docs Closes #672 Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> Author: Tathagata Das <tathagata.das1565@gmail.com> #22282 is resolved by tdas/ok3cd8hv. GitOrigin-RevId: fba84e46059f667a9aabf6387efa65533c1e660e
it's shorter! Closes #674 Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> Author: Tathagata Das <tathagata.das1565@gmail.com> #22309 is resolved by tdas/gvazkgm1. GitOrigin-RevId: 6cf1e197e0ac0ad7f6330af4733a40416f185a02
As the title says Author: Rahul Mahadev <rahul.mahadev@databricks.com> GitOrigin-RevId: a42e1ad347d3f0665070d6ddb43ef311e058b576
## What changes were proposed in this pull request? As the title says. ## How was this patch tested? Existing checks Author: Tathagata Das <tathagata.das1565@gmail.com> #22485 is resolved by tdas/SC-77952. GitOrigin-RevId: 85a17d069efe112989ac883c1395e2bbeebd9ae4
…ite in OSS ## What changes were proposed in this pull request? Fix JavaDeltaTableBuilderSuite and JavaDeltaTableSuite in OSS ## How was this patch tested? Unit test only Author: Yijia Cui <yijia.cui@databricks.com> #22428 is resolved by yijiacui-db/SC-69796-java-suite. GitOrigin-RevId: 26053a0d4e517e7d3f472175b25910fa03b04948
…on Schema Add a new column to the checkpoint schema to test that actions with unknown columns don't fail; the checkpoint schema now has a new column "unknown". Also write a JSON log file with a new column for the same purpose; the JSON schema now includes `{"some_new_feature":{"a":1}}`. Test-only PR. Author: Yijia Cui <yijia.cui@databricks.com> GitOrigin-RevId: 5ede8bc24bfc78dd80468bd3bf7bde0cd2cef057
…elta table This PR fixes a bug when adding `userMetadata` during table creation (specifically `saveAsTable` API). The following example would yield `null` for `userMetadata`: ``` spark.range(10).write.format("delta") .option("userMetadata", "someMeta").mode("overwrite").saveAsTable("user_meta") ``` This was due to `userMetadata` only being included with `DeltaOperations.Write`, not `CreateTable` or `ReplaceTable`. This PR adds support to pass `userMetadata` through from `saveAsTable()`/`createOrReplace()`, fixing the above behavior. Additional unit test explicitly using `saveAsTable` and `createOrReplace` APIs. Author: Zach Schuermann <zach.schuermann@databricks.com> GitOrigin-RevId: b149bebb7fc27446ace70714452599aa51c54b8e
Skip paths that the file system fails to qualify. Author: yaohua <yaohua.zhao@databricks.com> GitOrigin-RevId: 1340e76c5d1c4d23e593a2694437b6123e4c2131
Minor refactor of GeneratedColumn. Author: Meng Tong <meng.tong@databricks.com> GitOrigin-RevId: 60dbb5d9a0e15d9b74c901beb94d70a196cfef0b
Made following updates to the integration test - Added test for pypi package, both main pypi and testpypi - Added support for providing staging maven repo for testing staged maven release artifacts Closes #675 Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com> Author: Tathagata Das <tdas@databricks.com> Author: Tathagata Das <tathagata.das1565@gmail.com> #22601 is resolved by tdas/lpmdfft7. GitOrigin-RevId: 679b0b4df9e57a0a106c9258d40381f5b0c37f00
ActiveOptimisticTransactionRule is not idempotent, but V2 write plans will be planned again based on the optimized version of the original plan through the V1 fallback paths. So we have to skip these rules for V2 write plans, or the file indices will be pre-pinned and thus not invoke the proper logic to signal their scans to the transaction. new unit test Author: Jose Torres <joseph.torres@databricks.com> GitOrigin-RevId: 51fc053d022ebcf6b45bf4441eed799d7d68d969
Minor refactor of WriteIntoDelta. Author: Vijayan Prabhakaran <vijayan.prabhakaran@databricks.com> GitOrigin-RevId: 5c2fe8b996085a1f447035e5aa79e865c76bf14a
Fix partitionedBy in DeltaTableBuilder Python API Added more usages in the unit tests. Author: Yijia Cui <yijia.cui@databricks.com> GitOrigin-RevId: 99e2e89242c8fa41c960df25dcc382faae439ab1
Upgrade Antlr4 to 4.8 to fix Antlr4 incompatible warning. Before: ``` yumwang@LM-SHC-16508156 spark-3.1.1-bin-hadoop2.7 % bin/spark-sql --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog ... spark-sql> create table test_delta using delta as select id from range(10); ANTLR Tool version 4.7 used for code generation does not match the current runtime version 4.8ANTLR Tool version 4.7 used for code generation does not match the current runtime version 4.8 21/05/21 21:14:53 WARN HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider delta. Persisting data source table `default`.`test_delta` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive. Time taken: 9.841 seconds ``` After: ``` yumwang@LM-SHC-16508156 spark-3.1.1-bin-hadoop2.7 % bin/spark-sql --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog ... spark-sql> create table test_delta using delta as select id from range(10); 21/05/21 21:10:27 WARN HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider delta. Persisting data source table `default`.`test_delta` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive. Time taken: 5.949 seconds ``` Closes #676 Signed-off-by: Shixiong Zhu <zsxwing@gmail.com> Author: Yuming Wang <wgyumg@gmail.com> #22678 is resolved by zsxwing/0vx467nv. GitOrigin-RevId: 7627cfbe7ed84448f5b37a44cc538d328a472585
…Logic We use the schema in the catalog in `updateMetadata`. Once the schema is fixed, downstream logic can use that. Author: Meng Tong <meng.tong@databricks.com> GitOrigin-RevId: cf3ae4feec1ecaa8a6e420432069aa90fe7a29d7
record tahoeEvent in product logs api: recordProductEvent Author: Shuting Zhang <shuting.zhang@databricks.com> GitOrigin-RevId: 4d7df82f27e9baae971b183f429bb8fcded740fc
…vior Added logic and unit tests to block queries like ``CREATE TABLE delta.`/foo` USING delta LOCATION "/bar"`` where ambiguous paths are supplied. Users can allow such queries by setting the `DELTA_LEGACY_ALLOW_AMBIGUOUS_PATHS` flag to true, in which case the `USING delta LOCATION "/bar"` statement will be ignored. Closes #688 Signed-off-by: FX196 <yuhong.chen@databricks.com> **PROBLEM DESCRIPTION:** - Queries like ``CREATE TABLE delta.`/foo` USING delta LOCATION "/bar"`` will be blocked because there are two different paths in the query. - We still allow queries that use the same location such as ``CREATE TABLE delta.`/foo` USING delta LOCATION "/foo"``. - We also add a legacy flag `spark.databricks.delta.legacy.allowAmbiguousPathsInCreateTable` to allow such ambiguous queries since this is a behavior change. Author: FX196 <yuhong.chen@databricks.com> Author: Yuhong Chen <mikechen212@gmail.com> GitOrigin-RevId: 10513d45b134706ebb760ef719376cbbc3e9420b
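The blocking rule above can be sketched as follows (a simplified, hypothetical Python model — the real Scala code qualifies and normalizes both paths before comparing, which this sketch omits):

```python
def check_create_table_paths(identifier_path, location, allow_ambiguous=False):
    """Block CREATE TABLE delta.`<identifier_path>` ... LOCATION <location>
    when the two paths differ, unless the legacy flag allows it (in which
    case the LOCATION clause is simply ignored, matching old behavior)."""
    if location is None or location == identifier_path:
        return identifier_path  # unambiguous: zero or one distinct path
    if not allow_ambiguous:
        raise ValueError(
            f"CREATE TABLE contains two different locations: "
            f"{identifier_path} and {location}")
    return identifier_path  # legacy mode: LOCATION clause is ignored

# Same path in both places is still allowed.
ok = check_create_table_paths("/foo", "/foo")
```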
GitOrigin-RevId: df9c3b2ffcbe4dd07fecf929278f4469d42f1dc7 modified: core/src/main/scala/org/apache/spark/sql/delta/Snapshot.scala
…t Delta file version - 1 (#23115) GitOrigin-RevId: 95b7d8f509cc0a8b11346966cf9f68e708320ae5
… smallest Delta file version - 1" Reverts databricks/runtime#23115 Author: lizhangdatabricks <85116904+lizhangdatabricks@users.noreply.github.com> GitOrigin-RevId: 9a525374b7ceeb723a92a51771802f94af9d32cf
… the existing action. Improve Evolvability test by adding a new column in the existing action. test only PR. Author: Yijia Cui <yijia.cui@databricks.com> GitOrigin-RevId: d0dc71791c2e56cb67386a5b0dc6e601aaa418c6