A Strange Spark Error
While running a Spark job I hit a strange error. The log said:
18/11/20 16:44:44 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:160)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:140)
    at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:655)
    at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:208)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:113)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:278)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:962)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:485)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:399)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:371)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
The error on the Spark web UI was even more baffling:
Job aborted due to stage failure: Task 194 in stage 31.0 failed 4 times, most recent failure: Lost task 194.3 in stage 31.0 (TID 24324, 10.56.83.212, executor 73): UnknownReason
I re-read the code and found nothing wrong, so I fell back on binary search, adding logs along the way, and also searched the web. I assumed an error this vague would turn up nothing, but there was actually a hit. This answer, https://stackoverflow.com/ques… , says:
Yeah now I know the meaning of that cryptic exception, the executor got killed because it exceeds the container threshold. There are couple of reasons that could happen but the first culprit is to check your job or try adding more nodes/executors to your cluster.
This post, https://blog.csdn.net/u0137092… , also says:
Analysis and possible fixes:
1. This may be a resource problem: allocate more cores and executors to the job, give it more memory, and split the RDD into more partitions.
2. Adding this to your configuration may also solve the problem: --conf spark.dynamicAllocation.enabled=false
Author: JeemyJohn
Source: CSDN
Original: https://blog.csdn.net/u013709270/article/details/78879869
Copyright notice: this is the blogger's original work; please include a link to the post when reposting.
OK, let's first try bumping up the executors and memory. Actually the job already had plenty of executors, 200 of them, but each had only 2 cores and 2 GB of memory, so this time I raised both to 4.
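For reference, that kind of resource bump, together with the suggested dynamic-allocation flag from the quote above, would look something like this on the spark-submit command line (a sketch; the jar name is a placeholder and the exact flags depend on your deploy mode):

```shell
# Hypothetical invocation: 200 executors, bumped from 2 to 4 cores
# and from 2g to 4g per executor, with dynamic allocation disabled
# so the executor count stays fixed.
spark-submit \
  --num-executors 200 \
  --executor-cores 4 \
  --executor-memory 4g \
  --conf spark.dynamicAllocation.enabled=false \
  your-job.jar
```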
...
Still no luck.
With the extra logging in place, the first half of the job ran to completion, so the problem had to be in the second half. The change there was actually quite small: just a new join against another table's RDD.
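For context, a pair-RDD `join` in Spark is an inner join by key: only keys present on both sides survive. A minimal plain-Java sketch of that semantics (names are hypothetical; no Spark dependency, just to show what the new step computes):

```java
import java.util.AbstractMap;
import java.util.HashMap;
import java.util.Map;

public class JoinSketch {
    // Inner join of two keyed datasets, mirroring the semantics of a
    // Spark pair-RDD join: keep only keys present on both sides.
    static <K, V, W> Map<K, Map.Entry<V, W>> innerJoin(Map<K, V> left, Map<K, W> right) {
        Map<K, Map.Entry<V, W>> out = new HashMap<>();
        for (Map.Entry<K, V> e : left.entrySet()) {
            W w = right.get(e.getKey());
            if (w != null) { // key missing on the right side is dropped
                out.put(e.getKey(), new AbstractMap.SimpleEntry<>(e.getValue(), w));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> orders = Map.of("u1", 3, "u2", 5);
        Map<String, String> users  = Map.of("u1", "alice", "u3", "carol");
        // Only "u1" appears on both sides, so only it survives the join.
        System.out.println(innerJoin(orders, users));
    }
}
```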
Just as I was running out of ideas, I went back and combed carefully through the thrown stack traces, and there, plain as day, sat a NullPointerException. The problem wasn't introduced by this change at all: an earlier change had simply skipped a null check. Such is life...
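An unguarded dereference like that inside a task blows up on the executor, the task is retried and fails again, and the job eventually aborts with the vague messages above. A minimal Java sketch of the kind of guard that was missing (the record and field names are hypothetical, not from the actual job):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class NullGuardSketch {
    // Hypothetical row type: "city" may be null for some rows.
    static class Record {
        final String id;
        final String city; // may be null in the source data
        Record(String id, String city) { this.id = id; this.city = city; }
    }

    // Unguarded version would be r.city.toLowerCase(), which throws
    // NullPointerException inside the task whenever city is null.
    // Guarded version: substitute a default before dereferencing.
    static String cityKey(Record r) {
        return r.city == null ? "unknown" : r.city.toLowerCase();
    }

    public static void main(String[] args) {
        List<Record> rows = Arrays.asList(
            new Record("a", "Beijing"),
            new Record("b", null)); // this kind of row crashed the task
        List<String> keys = rows.stream()
                                .map(NullGuardSketch::cityKey)
                                .collect(Collectors.toList());
        System.out.println(keys); // [beijing, unknown]
    }
}
```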