spark.sql("""SELECT * FROM tpc_ds_1gb_qbeast_store_sales"").show()
Throws the following error:
org.apache.spark.sql.AnalysisException: Unable to resolve ss_sold_date_sk given []
at org.apache.spark.sql.errors.QueryCompilationErrors$.cannotResolveAttributeError(QueryCompilationErrors.scala:1020)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.$anonfun$resolve$3(LogicalPlan.scala:91)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.$anonfun$resolve$1(LogicalPlan.scala:90)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
at scala.collection.Iterator.foreach(Iterator.scala:943)
What went wrong?
When creating a table that already exists using the qbeast format, the schema is not saved properly in the Glue Catalog. Querying the table afterwards fails with the AnalysisException shown above.
How to reproduce?
1. Code that triggered the bug, or steps to reproduce:
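The exact creation code is not reproduced here; as a minimal hypothetical sketch (the source path, the indexed columns, and the use of saveAsTable are assumptions), the table might be created twice along these lines:

// Hypothetical sketch of the reproduction: write a DataFrame as a qbeast
// table, then create it again over the existing table name.
// The source path and the columns passed to "columnsToIndex" (the standard
// qbeast-spark write option) are assumptions for illustration.
val df = spark.read.parquet("s3://my-bucket/tpc-ds-1gb/store_sales")

df.write
  .format("qbeast")
  .option("columnsToIndex", "ss_sold_date_sk,ss_item_sk")
  .saveAsTable("tpc_ds_1gb_qbeast_store_sales")

// Creating the already existing table a second time triggers the bug:
df.write
  .format("qbeast")
  .option("columnsToIndex", "ss_sold_date_sk,ss_item_sk")
  .mode("overwrite")
  .saveAsTable("tpc_ds_1gb_qbeast_store_sales")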
And then execute:
spark.sql("""SELECT * FROM tpc_ds_1gb_qbeast_store_sales"").show()
This throws the AnalysisException shown above.
And when describing the table:
spark.sql("DESCRIBE EXTENDED tpc_ds_1gb_qbeast_store_sales").show()
Only the table properties appear in the output; the column schema is missing.
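One way to confirm that the catalog handed back an empty schema (an assumption based on the empty list in "given []" in the error) is to load the table and print its schema directly:

// Hypothetical check: if the Glue Catalog lost the schema, the loaded
// DataFrame should carry an empty struct as its schema.
val schema = spark.table("tpc_ds_1gb_qbeast_store_sales").schema
println(schema.treeString) // expected (assumption): a "root" with no fields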
2. Branch and commit id:
Main at 49163e9
3. Spark version:
In the Spark shell, run:
spark.version
3.2.2
4. Hadoop version:
In the Spark shell, run:
org.apache.hadoop.util.VersionInfo.getVersion()
3.3.1
5. How are you running Spark?
Are you running Spark inside a container? Are you launching the app on a remote K8s cluster? Or are you just running the tests in a local computer?
EMR cluster
6. Stack trace:
See the AnalysisException stack trace at the top of this report.