[DNM] Bump Spark to 3.5.4 #8597

Draft · wants to merge 1 commit (05b828b) into base: main

GitHub Actions / Report test results: failed Jan 23, 2025 in 0s

33263 tests run, 4815 skipped, 3 failed.

Annotations

Check failure on line 83 in org/apache/gluten/execution/VeloxTPCHIcebergSuite

github-actions / Report test results

VeloxTPCHIcebergSuite.iceberg transformer exists

Multiple failures in stage materialization.
Raw output
org.apache.spark.SparkException: Multiple failures in stage materialization.
      at org.apache.spark.sql.errors.QueryExecutionErrors$.multiFailuresInStageMaterializationError(QueryExecutionErrors.scala:2076)
      at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.cleanUpAndThrowException(AdaptiveSparkPlanExec.scala:821)
      at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$getFinalPhysicalPlan$1(AdaptiveSparkPlanExec.scala:335)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
      at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.getFinalPhysicalPlan(AdaptiveSparkPlanExec.scala:272)
      at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.withFinalPlanUpdate(AdaptiveSparkPlanExec.scala:419)
      at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.executeCollect(AdaptiveSparkPlanExec.scala:392)
      at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:4333)
      at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:3575)
      at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4323)
      at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:546)
      at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:4321)
      at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
      at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
      at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
      at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
      at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4321)
      at org.apache.spark.sql.Dataset.collect(Dataset.scala:3575)
      at org.apache.gluten.execution.WholeStageTransformerSuite.$anonfun$compareDfResultsAgainstVanillaSpark$1(WholeStageTransformerSuite.scala:206)
      at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
      at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
      at org.apache.gluten.execution.WholeStageTransformerSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(WholeStageTransformerSuite.scala:37)
      at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:247)
      at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:245)
      at org.apache.gluten.execution.WholeStageTransformerSuite.withSQLConf(WholeStageTransformerSuite.scala:37)
      at org.apache.gluten.execution.WholeStageTransformerSuite.compareDfResultsAgainstVanillaSpark(WholeStageTransformerSuite.scala:204)
      at org.apache.gluten.execution.WholeStageTransformerSuite.runQueryAndCompare(WholeStageTransformerSuite.scala:261)
      at org.apache.gluten.execution.VeloxTPCHIcebergSuite.$anonfun$new$1(VeloxTPCHIcebergSuite.scala:83)
      at org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
      at org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
      at org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
      at org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
      at org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
      at org.apache.spark.SparkFunSuite.$anonfun$test$2(SparkFunSuite.scala:155)
      at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
      at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
      at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
      at org.scalatest.Transformer.apply(Transformer.scala:22)
      at org.scalatest.Transformer.apply(Transformer.scala:20)
      at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:226)
      at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:227)
      at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:224)
      at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:236)
      at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:236)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:218)
      at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:69)
      at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
      at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
      at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:69)
      at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269)
      at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
      at scala.collection.immutable.List.foreach(List.scala:431)
      at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
      at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
      at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268)
      at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1564)
      at org.scalatest.Suite.run(Suite.scala:1114)
      at org.scalatest.Suite.run$(Suite.scala:1096)
      at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1564)
      at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273)
      at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
      at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:273)
      at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:272)
      at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:69)
      at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
      at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
      at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
      at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:69)
      at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1178)
      at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1225)
      at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
      at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
      at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
      at org.scalatest.Suite.runNestedSuites(Suite.scala:1223)
      at org.scalatest.Suite.runNestedSuites$(Suite.scala:1156)
      at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
      at org.scalatest.Suite.run(Suite.scala:1111)
      at org.scalatest.Suite.run$(Suite.scala:1096)
      at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
      at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:47)
      at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1321)
      at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1315)
      at scala.collection.immutable.List.foreach(List.scala:431)
      at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1315)
      at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:992)
      at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:970)
      at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1481)
      at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:970)
      at org.scalatest.tools.Runner$.main(Runner.scala:775)
      at org.scalatest.tools.Runner.main(Runner.scala)
      Cause: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 711.0 failed 1 times, most recent failure: Lost task 0.0 in stage 711.0 (TID 192) (e823dc437b60 executor driver): java.lang.IndexOutOfBoundsException: index: 0, length: 1 (expected: range(0, 0))
	at org.apache.arrow.memory.ArrowBuf.checkIndexD(ArrowBuf.java:319)
	at org.apache.arrow.memory.ArrowBuf.chk(ArrowBuf.java:306)
	at org.apache.arrow.memory.ArrowBuf.getByte(ArrowBuf.java:508)
	at org.apache.arrow.vector.BitVectorHelper.setBit(BitVectorHelper.java:82)
	at org.apache.arrow.vector.IntVector.set(IntVector.java:160)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedDictionaryEncodedParquetValuesReader$DictionaryIdReader.nextVal(VectorizedDictionaryEncodedParquetValuesReader.java:90)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedDictionaryEncodedParquetValuesReader$BaseDictEncodedReader.nextBatch(VectorizedDictionaryEncodedParquetValuesReader.java:69)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader$DictionaryIdReader.nextDictEncodedVal(VectorizedParquetDefinitionLevelReader.java:760)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader$BaseReader.nextDictEncodedBatch(VectorizedParquetDefinitionLevelReader.java:358)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedPageIterator.nextBatchDictionaryIds(VectorizedPageIterator.java:160)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$DictionaryBatchReader.nextBatchOf(VectorizedColumnIterator.java:114)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$BatchReader.nextBatch(VectorizedColumnIterator.java:77)
	at org.apache.iceberg.arrow.vectorized.VectorizedArrowReader.read(VectorizedArrowReader.java:150)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.readDataToColumnVectors(ColumnarBatchReader.java:123)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.loadDataToColumnBatch(ColumnarBatchReader.java:98)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:72)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:44)
	at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.next(VectorizedParquetReader.java:147)
	at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:138)
	at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:120)
	at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:158)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63)
	at scala.Option.exists(Option.scala:376)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:168)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
      at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2856)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2792)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2791)
      at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
      at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
      at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2791)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1247)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1247)
      at scala.Option.foreach(Option.scala:407)
      at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1247)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3060)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2994)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2983)
      at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
      Cause: java.lang.IndexOutOfBoundsException: index: 0, length: 1 (expected: range(0, 0))
      at org.apache.arrow.memory.ArrowBuf.checkIndexD(ArrowBuf.java:319)
      at org.apache.arrow.memory.ArrowBuf.chk(ArrowBuf.java:306)
      at org.apache.arrow.memory.ArrowBuf.getByte(ArrowBuf.java:508)
      at org.apache.arrow.vector.BitVectorHelper.setBit(BitVectorHelper.java:82)
      at org.apache.arrow.vector.IntVector.set(IntVector.java:160)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedDictionaryEncodedParquetValuesReader$DictionaryIdReader.nextVal(VectorizedDictionaryEncodedParquetValuesReader.java:90)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedDictionaryEncodedParquetValuesReader$BaseDictEncodedReader.nextBatch(VectorizedDictionaryEncodedParquetValuesReader.java:69)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader$DictionaryIdReader.nextDictEncodedVal(VectorizedParquetDefinitionLevelReader.java:760)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader$BaseReader.nextDictEncodedBatch(VectorizedParquetDefinitionLevelReader.java:358)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedPageIterator.nextBatchDictionaryIds(VectorizedPageIterator.java:160)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$DictionaryBatchReader.nextBatchOf(VectorizedColumnIterator.java:114)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$BatchReader.nextBatch(VectorizedColumnIterator.java:77)
      at org.apache.iceberg.arrow.vectorized.VectorizedArrowReader.read(VectorizedArrowReader.java:150)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.readDataToColumnVectors(ColumnarBatchReader.java:123)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.loadDataToColumnBatch(ColumnarBatchReader.java:98)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:72)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:44)
      at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.next(VectorizedParquetReader.java:147)
      at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:138)
      at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:120)
      at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:158)
      at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63)
      at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63)
      at scala.Option.exists(Option.scala:376)
      at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
      at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
      at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
      at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.columnartorow_nextBatch_0$(Unknown Source)
      at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
      at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
      at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
      at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:168)
      at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)
      at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
      at org.apache.spark.scheduler.Task.run(Task.scala:141)
      at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
      at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
      at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
      at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
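
Note: this failure bottoms out in an Arrow precondition, not in the query itself: a write into an ArrowBuf whose capacity is 0, meaning the vector's buffers were never allocated (or were already released) when Iceberg's vectorized Parquet reader started filling the batch. Whether the root cause is an Arrow/Iceberg version skew pulled in by the Spark 3.5.4 bump is a guess; the trace only proves the buffer was empty. The exact message can be reproduced with a minimal sketch (illustrative only, not the Gluten/Iceberg code path): writing to an IntVector before allocating it trips the same bounds check.

```java
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.IntVector;

public class ArrowBoundsDemo {
  public static void main(String[] args) {
    try (BufferAllocator allocator = new RootAllocator();
         IntVector vector = new IntVector("demo", allocator)) {
      // vector.allocateNew(1) is deliberately missing, so the validity and
      // data buffers still have capacity 0. The first set() goes through
      // BitVectorHelper.setBit -> ArrowBuf.getByte, which fails the default
      // bounds check with the same message as the CI trace:
      //   java.lang.IndexOutOfBoundsException: index: 0, length: 1 (expected: range(0, 0))
      vector.set(0, 42);
    }
  }
}
```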

Check failure on line 1 in org/apache/gluten/execution/VeloxTPCHIcebergSuite

github-actions / Report test results

VeloxTPCHIcebergSuite.iceberg transformer exists

Job aborted due to stage failure: Task 0 in stage 710.0 failed 1 times, most recent failure: Lost task 0.0 in stage 710.0 (TID 191) (05e73e3d8203 executor driver): java.lang.IndexOutOfBoundsException: index: 0, length: 40000 (expected: range(0, 0))
Raw output
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 710.0 failed 1 times, most recent failure: Lost task 0.0 in stage 710.0 (TID 191) (05e73e3d8203 executor driver): java.lang.IndexOutOfBoundsException: index: 0, length: 40000 (expected: range(0, 0))
	at org.apache.arrow.memory.ArrowBuf.checkIndex(ArrowBuf.java:701)
	at org.apache.arrow.memory.ArrowBuf.setBytes(ArrowBuf.java:829)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader.setNextNValuesInVector(VectorizedParquetDefinitionLevelReader.java:795)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader.access$000(VectorizedParquetDefinitionLevelReader.java:37)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader$NumericBaseReader.nextBatch(VectorizedParquetDefinitionLevelReader.java:67)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedPageIterator$LongPageReader.nextVal(VectorizedPageIterator.java:235)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedPageIterator$BasePageReader.nextBatch(VectorizedPageIterator.java:187)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$LongBatchReader.nextBatchOf(VectorizedColumnIterator.java:129)
	at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$BatchReader.nextBatch(VectorizedColumnIterator.java:77)
	at org.apache.iceberg.arrow.vectorized.VectorizedArrowReader.read(VectorizedArrowReader.java:170)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.readDataToColumnVectors(ColumnarBatchReader.java:123)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.loadDataToColumnBatch(ColumnarBatchReader.java:98)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:72)
	at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:44)
	at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.next(VectorizedParquetReader.java:147)
	at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:138)
	at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:120)
	at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:158)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63)
	at scala.Option.exists(Option.scala:406)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:576)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
	at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:576)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:168)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
      at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2856)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2792)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2791)
      at scala.collection.immutable.List.foreach(List.scala:333)
      at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2791)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1247)
      at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1247)
      at scala.Option.foreach(Option.scala:437)
      at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1247)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3060)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2994)
      at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2983)
      at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
      Cause: java.lang.IndexOutOfBoundsException: index: 0, length: 40000 (expected: range(0, 0))
      at org.apache.arrow.memory.ArrowBuf.checkIndex(ArrowBuf.java:701)
      at org.apache.arrow.memory.ArrowBuf.setBytes(ArrowBuf.java:829)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader.setNextNValuesInVector(VectorizedParquetDefinitionLevelReader.java:795)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader.access$000(VectorizedParquetDefinitionLevelReader.java:37)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedParquetDefinitionLevelReader$NumericBaseReader.nextBatch(VectorizedParquetDefinitionLevelReader.java:67)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedPageIterator$LongPageReader.nextVal(VectorizedPageIterator.java:235)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedPageIterator$BasePageReader.nextBatch(VectorizedPageIterator.java:187)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$LongBatchReader.nextBatchOf(VectorizedColumnIterator.java:129)
      at org.apache.iceberg.arrow.vectorized.parquet.VectorizedColumnIterator$BatchReader.nextBatch(VectorizedColumnIterator.java:77)
      at org.apache.iceberg.arrow.vectorized.VectorizedArrowReader.read(VectorizedArrowReader.java:170)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.readDataToColumnVectors(ColumnarBatchReader.java:123)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader$ColumnBatchLoader.loadDataToColumnBatch(ColumnarBatchReader.java:98)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:72)
      at org.apache.iceberg.spark.data.vectorized.ColumnarBatchReader.read(ColumnarBatchReader.java:44)
      at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.next(VectorizedParquetReader.java:147)
      at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:138)
      at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:120)
      at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:158)
      at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63)
      at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63)
      at scala.Option.exists(Option.scala:406)
      at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
      at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
      at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:576)
      at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
      at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
      at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
      at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:576)
      at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:168)
      at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104)
      at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)
      at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
      at org.apache.spark.scheduler.Task.run(Task.scala:141)
      at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
      at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
      at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
      at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
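
This second failure is the same zero-capacity symptom on the bulk path: setNextNValuesInVector copies 40000 definition-level bytes into a buffer that reports capacity 0. A tiny sketch (again illustrative, with the 40000 figure taken from the trace) hits the identical check directly on ArrowBuf:

```java
import org.apache.arrow.memory.ArrowBuf;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;

public class ArrowBulkWriteDemo {
  public static void main(String[] args) {
    try (BufferAllocator allocator = new RootAllocator();
         // A zero-length buffer stands in for a vector whose buffers were
         // never (re)allocated for the batch being read.
         ArrowBuf target = allocator.buffer(0)) {
      byte[] values = new byte[40000];
      // ArrowBuf.setBytes -> ArrowBuf.checkIndex rejects the copy:
      //   java.lang.IndexOutOfBoundsException: index: 0, length: 40000 (expected: range(0, 0))
      target.setBytes(0, values, 0, values.length);
    }
  }
}
```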

Check failure on line 346 in org/apache/spark/sql/GlutenSQLQueryTestSuite

github-actions / Report test results

GlutenSQLQueryTestSuite.show-create-table.sql

show-create-table.sql
Expected "...uet
LOCATION 'file:/[//]path/to/table'", but got "...uet
LOCATION 'file:/[]path/to/table'" Result did not match for query #7
SHOW CREATE TABLE tbl
Raw output
org.scalatest.exceptions.TestFailedException: show-create-table.sql
Expected "...uet
LOCATION 'file:/[//]path/to/table'", but got "...uet
LOCATION 'file:/[]path/to/table'" Result did not match for query #7
SHOW CREATE TABLE tbl
      at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
      at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
      at org.scalatest.funsuite.AnyFunSuite.newAssertionFailedException(AnyFunSuite.scala:1564)
      at org.scalatest.Assertions.assertResult(Assertions.scala:847)
      at org.scalatest.Assertions.assertResult$(Assertions.scala:842)
      at org.scalatest.funsuite.AnyFunSuite.assertResult(AnyFunSuite.scala:1564)
      at org.apache.spark.sql.GlutenSQLQueryTestSuite.$anonfun$runQueries$11(GlutenSQLQueryTestSuite.scala:592)
      at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
      at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
      at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
      at org.apache.spark.sql.GlutenSQLQueryTestSuite.$anonfun$runQueries$9(GlutenSQLQueryTestSuite.scala:579)
      at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
      at org.scalatest.Assertions.withClue(Assertions.scala:1065)
      at org.scalatest.Assertions.withClue$(Assertions.scala:1052)
      at org.scalatest.funsuite.AnyFunSuite.withClue(AnyFunSuite.scala:1564)
      at org.apache.spark.sql.GlutenSQLQueryTestSuite.runQueries(GlutenSQLQueryTestSuite.scala:553)
      at org.apache.spark.sql.GlutenSQLQueryTestSuite.$anonfun$runTest$34(GlutenSQLQueryTestSuite.scala:459)
      at org.apache.spark.sql.GlutenSQLQueryTestSuite.$anonfun$runTest$34$adapted(GlutenSQLQueryTestSuite.scala:457)
      at scala.collection.immutable.List.foreach(List.scala:431)
      at org.apache.spark.sql.GlutenSQLQueryTestSuite.runTest(GlutenSQLQueryTestSuite.scala:457)
      at org.apache.spark.sql.GlutenSQLQueryTestSuite.$anonfun$createScalaTestCase$6(GlutenSQLQueryTestSuite.scala:346)
      at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
      at org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127)
      at org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282)
      at org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231)
      at org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230)
      at org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69)
      at org.apache.spark.SparkFunSuite.$anonfun$test$2(SparkFunSuite.scala:155)
      at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
      at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
      at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
      at org.scalatest.Transformer.apply(Transformer.scala:22)
      at org.scalatest.Transformer.apply(Transformer.scala:20)
      at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:226)
      at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:227)
      at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:224)
      at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:236)
      at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:236)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:218)
      at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:69)
      at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
      at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
      at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:69)
      at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269)
      at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
      at scala.collection.immutable.List.foreach(List.scala:431)
      at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
      at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
      at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269)
      at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268)
      at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1564)
      at org.scalatest.Suite.run(Suite.scala:1114)
      at org.scalatest.Suite.run$(Suite.scala:1096)
      at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1564)
      at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273)
      at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
      at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:273)
      at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:272)
      at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:69)
      at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
      at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
      at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
      at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:69)
      at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1178)
      at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1225)
      at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
      at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
      at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
      at org.scalatest.Suite.runNestedSuites(Suite.scala:1223)
      at org.scalatest.Suite.runNestedSuites$(Suite.scala:1156)
      at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
      at org.scalatest.Suite.run(Suite.scala:1111)
      at org.scalatest.Suite.run$(Suite.scala:1096)
      at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
      at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:47)
      at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1321)
      at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1315)
      at scala.collection.immutable.List.foreach(List.scala:431)
      at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1315)
      at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:992)
      at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:970)
      at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1481)
      at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:970)
      at org.scalatest.tools.Runner$.main(Runner.scala:775)
      at org.scalatest.tools.Runner.main(Runner.scala)
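
Unlike the two Iceberg failures, the GlutenSQLQueryTestSuite failure looks cosmetic. ScalaTest's bracket diff marks only the differing span: the golden file expects the table LOCATION as file:///path/to/table (the "[//]" span), while the run under Spark 3.5.4 printed file:/path/to/table (the empty "[]" span). Both are valid file URIs for the same local path, so a golden-file update or URI normalization in the harness may be the fix rather than an engine change; that reading is a hypothesis. A quick sketch of the equivalence:

```java
import java.net.URI;

public class FileUriFormsDemo {
  public static void main(String[] args) {
    // The two textual forms from the ScalaTest diff.
    URI withAuthority = URI.create("file:///path/to/table");  // expected by the golden file
    URI withoutAuthority = URI.create("file:/path/to/table"); // produced by the run
    // Both resolve to the same local filesystem path; only the
    // empty-authority "//" differs in the rendered string.
    System.out.println(withAuthority.getPath());    // /path/to/table
    System.out.println(withoutAuthority.getPath()); // /path/to/table
  }
}
```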