Handling the "480000 millis timeout while waiting for channel to be ready for write" exception
Published: 2019-06-22

The DataNode log on 10.130.136.136 kept reporting the following SocketTimeoutException while serving blocks to a local HBase RegionServer client:

 2014-08-25 15:35:05,691 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:34264]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:37121, dest: /10.130.136.136:50010, bytes: 67108864, op: HDFS_WRITE, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 0, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_-3628597342762703578_40720686, duration: 6339411379
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block blk_-3628597342762703578_40720686 terminating
2014-08-25 15:35:06,465 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-7509787569548089877_40720689 src: /10.130.136.136:37142 dest: /10.130.136.136:50010
2014-08-25 15:35:06,724 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:50010, dest: /10.130.136.136:33647, bytes: 5921280, op: HDFS_READ, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 388096, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_2616588945174162483_33797955, duration: 547889496646
2014-08-25 15:35:06,725 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):Got exception while serving blk_2616588945174162483_33797955 to /10.130.136.136:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
2014-08-25 15:35:06,725 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
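For context, 480000 ms is 8 * 60 * 1000, i.e. the stock DataNode write timeout. Below is a minimal sketch (not from the original post) for printing the values currently in effect on a node; it assumes the standard org.apache.hadoop.conf.Configuration API with hdfs-site.xml on the classpath, and the fallback defaults (8 minutes write, 60 seconds read) are my assumption about what this Hadoop generation ships with.

import org.apache.hadoop.conf.Configuration;

public class ShowHdfsTimeouts {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Pull in hdfs-site.xml explicitly so any site overrides are visible.
        conf.addResource("hdfs-site.xml");

        // Write timeout: assumed default 8 * 60 * 1000 ms = 480000 ms, the value in the error message.
        long writeTimeout = conf.getLong("dfs.datanode.socket.write.timeout", 8 * 60 * 1000L);
        // Read timeout: assumed default 60 * 1000 ms.
        long readTimeout = conf.getLong("dfs.socket.timeout", 60 * 1000L);

        System.out.println("dfs.datanode.socket.write.timeout = " + writeTimeout + " ms");
        System.out.println("dfs.socket.timeout                = " + readTimeout + " ms");
    }
}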

The 480000 ms in the message is the DataNode's write timeout, whose default is 8 * 60 * 1000 ms (dfs.datanode.socket.write.timeout). When the reading client, here the HBase RegionServer on the same host, cannot drain the block data within that window, the DataXceiver thread gives up with the SocketTimeoutException above.

Solution: raise the timeouts and handler counts in hdfs-site.xml and restart HDFS for the new values to take effect:
  <property>
    <name>dfs.socket.timeout</name>
    <value>900000</value>
  </property>
  <property>
    <name>dfs.datanode.handler.count</name>
    <value>20</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>30</value>
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>10800000</value>
    <description>Raised to 3 hours; the default is 8 * 60 * 1000 ms (480000 ms), which is exactly the "480000 millis timeout while waiting for channel to be ready for write" reported in the log.</description>
  </property>
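As far as I know, the older DFSClient reads the same two timeout keys, so a client application can be kept consistent with the server-side change programmatically. A hedged sketch; hdfs://namenode:8020 is a placeholder URI, not from the original post.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RaiseClientTimeouts {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same keys and values as in hdfs-site.xml above, in milliseconds.
        conf.setInt("dfs.socket.timeout", 900000);                   // 15 minutes
        conf.setInt("dfs.datanode.socket.write.timeout", 10800000);  // 3 hours

        // Placeholder NameNode address; replace with the real one.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        System.out.println("Connected to " + fs.getUri());
        fs.close();
    }
}

The error in the log above is raised on the DataNode side (BlockSender.sendChunks), so the server-side hdfs-site.xml change is what actually resolves it; the client-side override just keeps both ends aligned.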

Reprinted from: http://dlkia.baihongyu.com/
