Test failure: test_runtime_backend_errors_handled[\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors import NotFound\nbackend = RuntimeBackend()\ntry:\n query_response = backend.fetch("SELECT * FROM default.__RANDOM__")\n return "FAILED"\nexcept NotFound as e:\n return "PASSED"\n] #326

Closed
github-actions bot opened this issue Nov 15, 2024 · 0 comments · Fixed by #328
Labels
bug Something isn't working

Comments

❌ test_runtime_backend_errors_handled[\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors import NotFound\nbackend = RuntimeBackend()\ntry:\n query_response = backend.fetch("SELECT * FROM default.__RANDOM__")\n return "FAILED"\nexcept NotFound as e:\n return "PASSED"\n]: assert '{"ts": "2024...]}}\n"PASSED"' == 'PASSED' (23.919s)
assert '{"ts": "2024...]}}\n"PASSED"' == 'PASSED'
  
  + {"ts": "2024-11-15 13:55:06,601", "level": "ERROR", "logger": "SQLQueryContextLogger", "msg": "[TABLE_OR_VIEW_NOT_FOUND] The table or view `TEST_SCHEMA`.`__RANDOM__` cannot be found. Verify the spelling and correctness of the schema and catalog.\nIf you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.\nTo tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01", "context": {"error_class": "TABLE_OR_VIEW_NOT_FOUND"}, "exception": {"class": "Py4JJavaError", "msg": "An error occurred while calling o389.sql.\n: org.apache.spark.sql.catalyst.ExtendedAnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `TEST_SCHEMA`.`__RANDOM__` cannot be found. Verify the spelling and correctness of the schema and catalog.\nIf you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.\nTo tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01; line 1 pos 14;\n'Project [*]\n+- 'UnresolvedRelation [TEST_SCHEMA, __RANDOM__], [], false\n\n\tat org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.tableNotFound(package.scala:90)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:258)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:231)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:287)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:286)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:286)\n\tat scala.collection.Iterator.foreach(Iterator.scala:943)\n\tat scala.collection.Iterator.foreach$(Iterator.scala:943)\n\tat scala.collection.AbstractIterator.foreach(Iterator.scala:1431)\n\tat scala.collection.IterableLike.foreach(IterableLike.scala:74)\n\tat scala.collection.IterableLike.foreach$(IterableLike.scala:73)\n\tat scala.collection.AbstractIterable.foreach(Iterable.scala:56)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:286)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:231)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:213)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:388)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:198)\n\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)\n\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:185)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:185)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:388)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$2(Analyzer.scala:443)\n\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)\n\tat org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:193)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:443)\n\tat 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:443)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:440)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:264)\n\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)\n\tat org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:472)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:562)\n\tat org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:144)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:562)\n\tat org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1125)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:561)\n\tat com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:557)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)\n\tat org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:557)\n\tat org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:258)\n\tat org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:257)\n\tat org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:239)\n\tat org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:131)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)\n\tat org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1280)\n\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)\n\tat org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1280)\n\tat org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:123)\n\tat org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:969)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)\n\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:933)\n\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:992)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:568)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)\n\tat py4j.Gateway.invoke(Gateway.java:306)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199)\n\tat py4j.ClientServerConnection.run(ClientServerConnection.java:119)\n\tat java.base/java.lang.Thread.run(Thread.java:840)\n", "stacktrace": ["Traceback (most recent call last):", "  File \"/databricks/spark/python/pyspark/errors/exceptions/captured.py\", line 263, in deco", "    return f(*a, **kw)", "           ^^^^^^^^^^^", "  File 
\"/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py\", line 326, in get_return_value", "    raise Py4JJavaError(", "py4j.protocol.Py4JJavaError: An error occurred while calling o389.sql.", ": org.apache.spark.sql.catalyst.ExtendedAnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `TEST_SCHEMA`.`__RANDOM__` cannot be found. Verify the spelling and correctness of the schema and catalog.", "If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.", "To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01; line 1 pos 14;", "'Project [*]", "+- 'UnresolvedRelation [TEST_SCHEMA, __RANDOM__], [], false", "", "\tat org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.tableNotFound(package.scala:90)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:258)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:231)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:287)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:286)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:286)", "\tat scala.collection.Iterator.foreach(Iterator.scala:943)", "\tat scala.collection.Iterator.foreach$(Iterator.scala:943)", "\tat scala.collection.AbstractIterator.foreach(Iterator.scala:1431)", "\tat scala.collection.IterableLike.foreach(IterableLike.scala:74)", "\tat scala.collection.IterableLike.foreach$(IterableLike.scala:73)", "\tat scala.collection.AbstractIterable.foreach(Iterable.scala:56)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:286)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:231)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:213)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:388)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:198)", "\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)", "\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:185)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:185)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:388)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$2(Analyzer.scala:443)", "\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)", "\tat org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:193)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:443)", "\tat org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:443)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:440)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:264)", "\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)", "\tat 
org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:472)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:562)", "\tat org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:144)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:562)", "\tat org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1125)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:561)", "\tat com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:557)", "\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)", "\tat org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:557)", "\tat org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:258)", "\tat org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:257)", "\tat org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:239)", "\tat org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:131)", "\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)", "\tat org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1280)", "\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)", "\tat org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1280)", "\tat org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:123)", "\tat org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:969)", "\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)", "\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:933)", "\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:992)", "\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)", "\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)", "\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)", "\tat java.base/java.lang.reflect.Method.invoke(Method.java:568)", "\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)", "\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)", "\tat py4j.Gateway.invoke(Gateway.java:306)", "\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)", "\tat py4j.commands.CallCommand.execute(CallCommand.java:79)", "\tat py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199)", "\tat py4j.ClientServerConnection.run(ClientServerConnection.java:119)", "\tat java.base/java.lang.Thread.run(Thread.java:840)"]}}
  - PASSED
  + "PASSED"
  ? +      +
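For readability, the parametrized command embedded in the test ID corresponds to the snippet below. The harness wraps the snippet in a function body before shipping it to the cluster, which is why the raw snippet can use `return` (the `probe` wrapper here is illustrative):

```python
from databricks.labs.lsql.backends import RuntimeBackend
from databricks.sdk.errors import NotFound

def probe() -> str:  # illustrative wrapper; the test harness adds its own
    backend = RuntimeBackend()
    try:
        query_response = backend.fetch("SELECT * FROM default.__RANDOM__")
        return "FAILED"
    except NotFound as e:
        return "PASSED"
```

The remote run did raise `NotFound` and returned `"PASSED"`, but as the diff above shows, the returned text also carried a JSON-formatted ERROR record from `SQLQueryContextLogger`, so the equality check against the bare string `PASSED` failed.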
13:54 DEBUG [databricks.sdk] Loaded from environment
13:54 DEBUG [databricks.sdk] Ignoring pat auth, because metadata-service is preferred
13:54 DEBUG [databricks.sdk] Ignoring basic auth, because metadata-service is preferred
13:54 DEBUG [databricks.sdk] Attempting to configure auth: metadata-service
13:54 INFO [databricks.sdk] Using Databricks Metadata Service authentication
[gw3] linux -- Python 3.10.15 /home/runner/work/lsql/lsql/.venv/bin/python
13:54 DEBUG [databricks.sdk] Loaded from environment
13:54 DEBUG [databricks.sdk] Ignoring pat auth, because metadata-service is preferred
13:54 DEBUG [databricks.sdk] Ignoring basic auth, because metadata-service is preferred
13:54 DEBUG [databricks.sdk] Attempting to configure auth: metadata-service
13:54 INFO [databricks.sdk] Using Databricks Metadata Service authentication
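These lines show the SDK's credential resolution order: when a metadata service is configured, it is preferred over `pat` and `basic` auth. A minimal sketch of pinning that choice explicitly, assuming `DATABRICKS_METADATA_SERVICE_URL` is set in the environment (the host value is a placeholder):

```python
from databricks.sdk import WorkspaceClient

# Forces the metadata-service credential provider instead of letting the SDK
# auto-detect one; expects DATABRICKS_METADATA_SERVICE_URL in the environment.
w = WorkspaceClient(
    host="https://example.cloud.databricks.com",  # placeholder
    auth_type="metadata-service",
)
print(w.current_user.me().user_name)  # same call as the GET /api/2.0/preview/scim/v2/Me below
```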
13:54 DEBUG [databricks.sdk] GET /api/2.0/preview/scim/v2/Me
< 200 OK
< {
<   "active": true,
<   "displayName": "labs-runtime-identity",
<   "emails": [
<     {
<       "primary": true,
<       "type": "work",
<       "value": "**REDACTED**"
<     }
<   ],
<   "entitlements": [
<     {
<       "value": "**REDACTED**"
<     },
<     "... (1 additional elements)"
<   ],
<   "externalId": "d0f9bd2c-5651-45fd-b648-12a3fc6375c4",
<   "groups": [
<     {
<       "$ref": "Groups/300667344111082",
<       "display": "labs.scope.runtime",
<       "type": "direct",
<       "value": "**REDACTED**"
<     }
<   ],
<   "id": "4643477475987733",
<   "name": {
<     "givenName": "labs-runtime-identity"
<   },
<   "schemas": [
<     "urn:ietf:params:scim:schemas:core:2.0:User",
<     "... (1 additional elements)"
<   ],
<   "userName": "4106dc97-a963-48f0-a079-a578238959a6"
< }
13:54 DEBUG [databricks.labs.blueprint.wheels] Building wheel for /tmp/tmpl9o_yf9t/working-copy in /tmp/tmpl9o_yf9t
13:54 DEBUG [databricks.labs.blueprint.installation] Uploading: /Users/4106dc97-a963-48f0-a079-a578238959a6/.12Jx/wheels/databricks_labs_lsql-0.13.1+320241115135446-py3-none-any.whl
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 404 Not Found
< {
<   "error_code": "RESOURCE_DOES_NOT_EXIST",
<   "message": "The parent folder (/Users/4106dc97-a963-48f0-a079-a578238959a6/.12Jx/wheels) does not exist."
< }
13:54 DEBUG [databricks.labs.blueprint.installation] Creating missing folders: /Users/4106dc97-a963-48f0-a079-a578238959a6/.12Jx/wheels
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/mkdirs
> {
>   "path": "/Users/4106dc97-a963-48f0-a079-a578238959a6/.12Jx/wheels"
> }
< 200 OK
< {}
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 200 OK
< {
<   "object_id": 804190547935362
< }
13:54 DEBUG [databricks.labs.blueprint.installation] Converting Version into JSON format
13:54 DEBUG [databricks.labs.blueprint.installation] Uploading: /Users/4106dc97-a963-48f0-a079-a578238959a6/.12Jx/version.json
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 200 OK
< {
<   "object_id": 804190547935369
< }
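The sequence above (404 on import, mkdirs, retry) is the upload pattern the blueprint installation helper follows: attempt the workspace import, create the missing parent folders when the API returns `RESOURCE_DOES_NOT_EXIST`, then retry. A sketch of that flow with the plain SDK, not the actual `databricks.labs.blueprint` source:

```python
import io

from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import NotFound

def upload_with_parents(w: WorkspaceClient, path: str, content: bytes) -> None:
    """Upload a workspace file, creating missing parent folders on demand."""
    try:
        w.workspace.upload(path, io.BytesIO(content), overwrite=True)
    except NotFound:  # 404 RESOURCE_DOES_NOT_EXIST: the parent folder is missing
        w.workspace.mkdirs(path.rsplit("/", 1)[0])  # POST /api/2.0/workspace/mkdirs
        w.workspace.upload(path, io.BytesIO(content), overwrite=True)
```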
13:54 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< {
<   "autotermination_minutes": 60,
<   "CLOUD_ENV_attributes": {
<     "availability": "SPOT_WITH_FALLBACK_AZURE",
<     "first_on_demand": 2147483647,
<     "spot_bid_max_price": -1.0
<   },
<   "cluster_cores": 8.0,
<   "cluster_id": "DATABRICKS_CLUSTER_ID",
<   "cluster_memory_mb": 32768,
<   "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<   "cluster_source": "UI",
<   "creator_user_name": "serge.smertin@databricks.com",
<   "custom_tags": {
<     "ResourceClass": "SingleNode"
<   },
<   "data_security_mode": "SINGLE_USER",
<   "TEST_SCHEMA_tags": {
<     "Budget": "opex.sales.labs",
<     "ClusterId": "DATABRICKS_CLUSTER_ID",
<     "ClusterName": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "Creator": "serge.smertin@databricks.com",
<     "DatabricksInstanceGroupId": "-8854613105865987054",
<     "DatabricksInstancePoolCreatorId": "4183391249163402",
<     "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID",
<     "Owner": "labs-oss@databricks.com",
<     "Vendor": "Databricks"
<   },
<   "disk_spec": {},
<   "driver": {
<     "host_private_ip": "10.179.8.14",
<     "instance_id": "f335b24df03e466b8efb19a708cf7d9c",
<     "node_attributes": {
<       "is_spot": false
<     },
<     "node_id": "b993377df44e4a408921281be8db0393",
<     "private_ip": "10.179.10.14",
<     "public_dns": "",
<     "start_timestamp": 1731678282507
<   },
<   "driver_healthy": true,
<   "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "driver_instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "driver_node_type_id": "Standard_D8as_v4",
<   "effective_spark_version": "16.0.x-scala2.12",
<   "enable_elastic_disk": true,
<   "enable_local_disk_encryption": false,
<   "init_scripts_safe_mode": false,
<   "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "jdbc_port": 10000,
<   "last_activity_time": 1731678347945,
<   "last_restarted_time": 1731678323374,
<   "last_state_loss_time": 1731678323349,
<   "node_type_id": "Standard_D8as_v4",
<   "num_workers": 0,
<   "pinned_by_user_name": "4183391249163402",
<   "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<   "spark_conf": {
<     "spark.databricks.cluster.profile": "singleNode",
<     "spark.master": "local[*]"
<   },
<   "spark_context_id": 7133597207159756379,
<   "spark_version": "16.0.x-scala2.12",
<   "spec": {
<     "autotermination_minutes": 60,
<     "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "custom_tags": {
<       "ResourceClass": "SingleNode"
<     },
<     "data_security_mode": "SINGLE_USER",
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<     "num_workers": 0,
<     "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<     "spark_conf": {
<       "spark.databricks.cluster.profile": "singleNode",
<       "spark.master": "local[*]"
<     },
<     "spark_version": "16.0.x-scala2.12"
<   },
<   "start_time": 1731598210709,
<   "state": "RUNNING",
<   "state_message": ""
< }
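For reference, the `spec` block in the response is the part a client submits; a single-node cluster like this one could be requested roughly as follows (a sketch: `create_and_wait` and the enum come from `databricks.sdk`, and the pool ID and user name are the redacted values from the log):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import DataSecurityMode

w = WorkspaceClient()
cluster = w.clusters.create_and_wait(
    cluster_name="Scoped MSI Cluster: runtime (Single Node, Single User)",
    spark_version="16.0.x-scala2.12",
    num_workers=0,  # single node: the driver does all the work
    data_security_mode=DataSecurityMode.SINGLE_USER,
    single_user_name="4106dc97-a963-48f0-a079-a578238959a6",
    instance_pool_id="TEST_INSTANCE_POOL_ID",  # redacted placeholder from the log
    spark_conf={
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "local[*]",
    },
    custom_tags={"ResourceClass": "SingleNode"},
    autotermination_minutes=60,
)
```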
13:54 DEBUG [databricks.sdk] POST /api/1.2/contexts/create
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "4443440695219401141"
< }
13:54 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=4443440695219401141
< 200 OK
< {
<   "id": "4443440695219401141",
<   "status": "Pending"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=4443440695219401141: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~1s)
13:54 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=4443440695219401141
< 200 OK
< {
<   "id": "4443440695219401141",
<   "status": "Pending"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=4443440695219401141: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~2s)
13:54 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=4443440695219401141
< 200 OK
< {
<   "id": "4443440695219401141",
<   "status": "Running"
< }
13:54 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "get_ipython().run_line_magic('pip', 'install /Workspace/Users/4106dc97-a963-48f0-a079-a578238959... (110 more bytes)",
>   "contextId": "4443440695219401141",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "f4d426bb506b432f95219310a72d8446"
< }
13:54 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=f4d426bb506b432f95219310a72d8446&contextId=4443440695219401141
< 200 OK
< {
<   "id": "f4d426bb506b432f95219310a72d8446",
<   "results": null,
<   "status": "Running"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=f4d426bb506b432f95219310a72d8446, context_id=4443440695219401141: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:54 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=f4d426bb506b432f95219310a72d8446&contextId=4443440695219401141
< 200 OK
< {
<   "id": "f4d426bb506b432f95219310a72d8446",
<   "results": null,
<   "status": "Running"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=f4d426bb506b432f95219310a72d8446, context_id=4443440695219401141: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:55 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=f4d426bb506b432f95219310a72d8446&contextId=4443440695219401141
< 200 OK
< {
<   "id": "f4d426bb506b432f95219310a72d8446",
<   "results": null,
<   "status": "Running"
< }
13:55 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=f4d426bb506b432f95219310a72d8446, context_id=4443440695219401141: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~3s)
13:55 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=f4d426bb506b432f95219310a72d8446&contextId=4443440695219401141
< 200 OK
< {
<   "id": "f4d426bb506b432f95219310a72d8446",
<   "results": {
<     "data": "Processing /Workspace/Users/4106dc97-a963-48f0-a079-a578238959a6/.12Jx/wheels/databricks_labs_ls... (3270 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
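The requests above are the Command Execution API (1.2) flow that the SDK drives for remote command execution: create an execution context, poll its status until `Running`, post a command, then poll the command status until `Finished`. A condensed sketch using the SDK's waiter helpers (the cluster ID is the redacted placeholder from the log):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import Language

w = WorkspaceClient()
# POST /api/1.2/contexts/create, then poll GET /api/1.2/contexts/status
ctx = w.command_execution.create_and_wait(
    cluster_id="DATABRICKS_CLUSTER_ID",  # redacted placeholder
    language=Language.PYTHON,
)
# POST /api/1.2/commands/execute, then poll GET /api/1.2/commands/status
cmd = w.command_execution.execute_and_wait(
    cluster_id="DATABRICKS_CLUSTER_ID",
    context_id=ctx.id,
    language=Language.PYTHON,
    command="print('hello from the cluster')",
)
print(cmd.results.data if cmd.results else None)
```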
13:55 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "import json\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors ... (204 more bytes)",
>   "contextId": "4443440695219401141",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "3d0c9ea09b0b4649a3ec1b0dbac54df6"
< }
13:55 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=3d0c9ea09b0b4649a3ec1b0dbac54df6&contextId=4443440695219401141
< 200 OK
< {
<   "id": "3d0c9ea09b0b4649a3ec1b0dbac54df6",
<   "results": null,
<   "status": "Running"
< }
13:55 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=3d0c9ea09b0b4649a3ec1b0dbac54df6, context_id=4443440695219401141: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:55 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=3d0c9ea09b0b4649a3ec1b0dbac54df6&contextId=4443440695219401141
< 200 OK
< {
<   "id": "3d0c9ea09b0b4649a3ec1b0dbac54df6",
<   "results": {
<     "data": "{\"ts\": \"2024-11-15 13:55:06,601\", \"level\": \"ERROR\", \"logger\": \"SQLQueryContextLogger\", \"msg\": \"[... (13306 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:55 WARNING [databricks.sdk] cannot parse converted return statement. Just returning text
Traceback (most recent call last):
  File "/home/runner/work/lsql/lsql/.venv/lib/python3.10/site-packages/databricks/labs/blueprint/commands.py", line 123, in run
    return json.loads(results.data)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 13394)
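The root cause is visible in this traceback: `commands.run` expects the command's text result to be a single JSON document (the JSON-encoded return value of the snippet), but here the result also carried the JSON-formatted ERROR record emitted by `SQLQueryContextLogger` on its first line, so the decoder consumed the log record and raised `Extra data` at the start of the second line. A minimal reproduction with the standard library:

```python
import json

# Shape of results.data in this run: a JSON log record on line 1,
# followed by the JSON-encoded return value of the snippet on line 2.
data = '{"ts": "2024-11-15 13:55:06,601", "level": "ERROR"}\n"PASSED"'
try:
    json.loads(data)
except json.JSONDecodeError as e:
    print(e)  # Extra data: line 2 column 1 (...)
```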

Running from nightly #1

github-actions bot added the bug label Nov 15, 2024
nfx closed this as completed in #328 Nov 15, 2024
nfx closed this as completed in 7ba1ca0 Nov 15, 2024
nfx added a commit that referenced this issue Nov 15, 2024
* Added nightly tests run at 4:45am UTC ([#318](#318)). A new nightly workflow has been added to the codebase, designed to automate a series of jobs every day at 4:45am UTC on the `larger` environment. The workflow includes permissions for writing id-tokens, accessing issues, reading contents and pull-requests. It checks out the code with a full fetch-depth, installs Python 3.10, and uses hatch 1.9.4. The key step in this workflow is the execution of nightly tests using the databrickslabs/sandbox/acceptance action, which creates issues if necessary. The workflow utilizes several secrets, including VAULT_URI, GITHUB_TOKEN, ARM_CLIENT_ID, and ARM_TENANT_ID, and sets the TEST_NIGHTLY environment variable to true. Additionally, the workflow is part of a concurrency group called "single-acceptance-job-per-repo", ensuring that only one acceptance job runs at a time per repository.
* Bump codecov/codecov-action from 4 to 5 ([#319](#319)). In this version update, the Codecov GitHub Action has been upgraded from 4 to 5, bringing improved functionality and new features. This new version utilizes the Codecov Wrapper to encapsulate the CLI, enabling faster updates. Additionally, an opt-out feature has been introduced for tokens in public repositories, allowing contributors and other members to upload coverage reports without requiring access to the Codecov token. The upgrade also includes changes to the arguments: `file` is now deprecated and replaced with `files`, and `plugin` is deprecated and replaced with `plugins`. New arguments have been added, including `binary`, `gcov_args`, `gcov_executable`, `gcov_ignore`, `gcov_include`, `report_type`, `skip_validation`, and `swift_project`. Comprehensive documentation on these changes can be found in the release notes and changelog.
* Fixed `RuntimeBackend` exception handling ([#328](#328)). In this release, we have made significant improvements to the exception handling in the `RuntimeBackend` component, addressing issues reported in tickets [#328](#328), [#327](#327), [#326](#326), and [#325](#325). We have updated the `execute` and `fetch` methods to handle exceptions more gracefully and changed exception handling from catching `Exception` to catching `BaseException` for more comprehensive error handling. Additionally, we have updated the `pyproject.toml` file to use a newer version of the `databricks-labs-pytester` package (0.2.1 to 0.5.0) which may have contributed to the resolution of these issues. Furthermore, the `test_backends.py` file has been updated to improve the readability and user-friendliness of the test output for the functions testing if a `NotFound`, `BadRequest`, or `Unknown` exception is raised when executing and fetching statements. The `test_runtime_backend_use_statements` function has also been updated to print `PASSED` or `FAILED` instead of returning those values. These changes enhance the robustness of the exception handling mechanism in the `RuntimeBackend` class and update related unit tests.
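The fix in [#328](#328) follows the pattern of catching `BaseException` at the `execute`/`fetch` boundary and re-raising a typed SDK error. A hedged sketch of that idea, not the actual lsql source (`_map_spark_error` and the matched substrings are illustrative):

```python
from databricks.sdk.errors import BadRequest, NotFound, Unknown

def _map_spark_error(e: BaseException) -> Exception:
    # Illustrative mapping: pick a typed SDK error from the Spark error class
    # embedded in the message, falling back to Unknown.
    msg = str(e)
    if "TABLE_OR_VIEW_NOT_FOUND" in msg:
        return NotFound(msg)
    if "PARSE_SYNTAX_ERROR" in msg:
        return BadRequest(msg)
    return Unknown(msg)

def fetch(spark, sql: str):
    try:
        return spark.sql(sql).collect()
    except BaseException as e:  # per #328: broader than Exception for comprehensive handling
        raise _map_spark_error(e) from e
```

With a mapping like this in place, the snippet in this issue can rely on `except NotFound` firing for the `TABLE_OR_VIEW_NOT_FOUND` failure above.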

Dependency updates:

 * Bump codecov/codecov-action from 4 to 5 ([#319](#319)).
nfx mentioned this issue Nov 15, 2024
nfx added a commit that referenced this issue Nov 15, 2024