Describe the bug
When `resultDecimalType` was introduced in Spark 3.4.0+ and Databricks 330+, the plugin started producing the correctly rounded result when dividing DecimalTypes, while Spark still gives the wrong result described in the original bug.
Steps/Code to reproduce bug
scala> val df = Seq(("-0.172787979", "533704665545018957788294905796.5"), ("1", "2")).toDF("_1", "_2")
scala> df.repartition(3).selectExpr("cast(_1 as decimal(9,9)) / cast(_2 as decimal(31,1))").show(false)
+------------------------------------------------------+
|(CAST(_1 AS DECIMAL(9,9)) / CAST(_2 AS DECIMAL(31,1)))|
+------------------------------------------------------+
|-0.0000000000000000000000000000003237520 |
|NULL |
+------------------------------------------------------+
scala> spark.conf.set("spark.rapids.sql.enabled", false)
scala> df.repartition(3).selectExpr("cast(_1 as decimal(9,9)) / cast(_2 as decimal(31,1))").show(false)
+------------------------------------------------------+
|(CAST(_1 AS DECIMAL(9,9)) / CAST(_2 AS DECIMAL(31,1)))|
+------------------------------------------------------+
|-0.0000000000000000000000000000003237521 |
|NULL |
+------------------------------------------------------+
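For context on why the quotient above carries 37 fractional digits, here is a rough sketch of how Spark 3.4+ derives the result type for decimal division (the rules behind `Divide.resultDecimalType` and the precision-loss adjustment). This is my own simplification for illustration, not the plugin's code, and it assumes `spark.sql.decimalOperations.allowPrecisionLoss` is left at its default (enabled).

```scala
// Simplified sketch of Spark 3.4+ decimal division result-type rules;
// the object/method names and the adjustment details are an approximation.
object DecimalDivideResultTypeSketch {
  val MaxPrecision = 38
  val MinAdjustedScale = 6

  // For decimal(p1, s1) / decimal(p2, s2):
  //   scale     = max(6, s1 + p2 + 1)
  //   precision = (p1 - s1 + s2) + scale
  // then reduce the scale (keeping at least 6 digits) if precision exceeds 38.
  def resultType(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
    val intDigits = p1 - s1 + s2
    val scale = math.max(MinAdjustedScale, s1 + p2 + 1)
    val precision = intDigits + scale
    if (precision <= MaxPrecision) {
      (precision, scale)
    } else {
      val minScale = math.min(scale, MinAdjustedScale)
      val adjustedScale = math.max(MaxPrecision - intDigits, minScale)
      (MaxPrecision, adjustedScale)
    }
  }

  def main(args: Array[String]): Unit = {
    // decimal(9,9) / decimal(31,1) from the repro above:
    // intDigits = 1, scale = 41, precision = 42 > 38 -> decimal(38, 37),
    // which is why both engines print 37 fractional digits and differ only
    // in the last one (...3237520 vs ...3237521).
    println(resultType(9, 9, 31, 1)) // (38, 37)
  }
}
```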
Expected behavior
We should match Spark bug-for-bug on Databricks 330+ and Spark 3.4.0+.
Additional context
The original bug was created as part of the audit process. While resolving it as part of the spark-rapids-jni PR, it was discovered that decimal division broke on the Spark versions listed above after the release of Spark 330db.