What to do when MongoDB replica set secondary nodes report error 10061 on the console

Contributed by a reader · 2024-01-02


This article walks through what to do when the secondary nodes of a MongoDB replica set report error 10061 on the console. The walkthrough is detailed and should serve as a useful reference; read it through to the end.


--------------------------------------------------------------------------------------------------------------------------------------------

Start by reviewing the console logs on the cluster's three nodes.

1. Console logs from the three cluster servers

192.168.72.33

2018-01-05T09:46:24.281+0800 I STORAGE [initandlisten] Placing a marker at optime Jan 05 05:16:28:3e9

2018-01-05T09:46:24.432+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker

2018-01-05T09:46:24.432+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory d:/mongodata/rs0-2/diagnostic.data

2018-01-05T09:46:24.443+0800 I NETWORK [initandlisten] waiting for connections on port 27013

2018-01-05T09:46:25.485+0800 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.72.31:27011, reason: errno:10061 No connection could be made because the target machine actively refused it.

2018-01-05T09:46:25.533+0800 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 8, protocolVersion: 1, members: [ { _id: 0, host: "mongodb-rs0-0:27011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 100.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongodb-rs0-1:27012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongodb-rs0-2:27013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId(59365592734d0747ee26e2a6) } }

2018-01-05T09:46:25.534+0800 I REPL [ReplicationExecutor] This node is mongodb-rs0-2:27013 in the config

192.168.72.32

2018-01-05T09:46:17.064+0800 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker

2018-01-05T09:46:17.064+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory d:/mongodata/rs0-1/diagnostic.data

2018-01-05T09:46:17.076+0800 I NETWORK [initandlisten] waiting for connections on port 27012

2018-01-05T09:46:18.102+0800 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.72.31:27011, reason: errno:10061 No connection could be made because the target machine actively refused it.

2018-01-05T09:46:19.149+0800 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.72.33:27013, reason: errno:10061 No connection could be made because the target machine actively refused it.

2018-01-05T09:46:19.150+0800 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 8, protocolVersion: 1, members: [ { _id: 0, host: "mongodb-rs0-0:27011", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 100.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongodb-rs0-1:27012", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongodb-rs0-2:27013", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId(59365592734d0747ee26e2a6) } }

2018-01-05T09:46:19.150+0800 I REPL     [ReplicationExecutor] This node is mongodb-rs0-1:27012 in the config

192.168.72.31

2018-01-05T15:56:42.999+0800 I STORAGE  [initandlisten] Placing a marker at optime Jan 05 05:12:59:b4a

2018-01-05T15:56:43.000+0800 I STORAGE  [initandlisten] Placing a marker at optime Jan 05 05:13:08:8df

2018-01-05T15:56:43.000+0800 I STORAGE  [initandlisten] Placing a marker at optime Jan 05 05:14:05:329

2018-01-05T15:56:43.001+0800 I STORAGE  [initandlisten] Placing a marker at optime Jan 05 05:15:30:25f

2018-01-05T15:56:43.002+0800 I STORAGE  [initandlisten] Placing a marker at optime Jan 05 05:15:39:4b1
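Warnings like the ones above can be pulled out of a mongod log file programmatically rather than by eye. A minimal sketch, assuming the plain-text 3.x log format shown in these excerpts (the regex and function are illustrative, not part of any MongoDB tooling):

```python
import re

# Matches mongod 3.x plain-text log lines reporting a refused connection, e.g.
#   2018-01-05T09:46:25.485+0800 W NETWORK  [ReplicationExecutor] Failed to
#   connect to 192.168.72.31:27011, reason: errno:10061 ...
FAILED_CONNECT = re.compile(
    r"^(?P<ts>\S+) W NETWORK\s+\[\w+\] Failed to connect to "
    r"(?P<host>[\d.]+):(?P<port>\d+), reason: errno:(?P<errno>\d+)"
)

def failed_connects(lines):
    """Yield (timestamp, host, port, errno) for every refused-connection warning."""
    for line in lines:
        m = FAILED_CONNECT.match(line)
        if m:
            yield m["ts"], m["host"], int(m["port"]), int(m["errno"])
```

Run over the logs above, this would surface both secondaries failing against 192.168.72.31:27011 with errno 10061, which is the pattern to look for.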

From the log output above we inferred that the primary node 192.168.72.31 was stuck in storage-related wait events, which left it refusing the TCP connections from the two secondaries 192.168.72.32/33 (errno 10061, "the target machine actively refused" the connection).
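errno 10061 is WSAECONNREFUSED, the Windows counterpart of POSIX ECONNREFUSED: the TCP packet reached the target host, but no process was accepting on that port. A quick way to check this from any client machine, sketched with only the standard library (host and port values are whatever you are diagnosing; on Linux the same condition surfaces as errno 111):

```python
import errno
import socket

def probe(host, port, timeout=3.0):
    """Return 0 if a TCP connection succeeds, else the OS errno.

    errno.ECONNREFUSED (10061 on Windows, 111 on Linux) means the host is
    reachable but nothing is listening on the port -- exactly the state a
    mongod that failed to come up leaves behind.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port))
```

Here, probe("192.168.72.31", 27011) returning errno.ECONNREFUSED would have confirmed directly that the primary's mongod was not listening, matching what the secondaries logged.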

2. Following the lead from step 1, we checked the operating-system-level logs for the mongod service: the OS had been warning since 2018-1-5 04:59:25 that drive D: was full.

3. Checking storage on 192.168.72.31 confirmed the OS log warning: drive D: had only 58 MB of free space left.
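Free-space checks like this are worth automating so the condition is caught long before mongod hits it. A minimal sketch using only the standard library (the path and the 1 GB floor are illustrative assumptions, not values from the incident tooling):

```python
import shutil

def free_space_mb(path):
    """Free space on the filesystem holding `path`, in megabytes."""
    return shutil.disk_usage(path).free / (1024 * 1024)

def check_dbpath(path, min_free_mb=1024):
    """Return (ok, free_mb); ok is False once free space drops below the floor."""
    free = free_space_mb(path)
    return free >= min_free_mb, free
```

On the incident node this would have been check_dbpath("d:/mongodata"), which at 58 MB free would have returned ok=False well before the outage.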

4. From the above we could conclude: the primary node 192.168.72.31 ran out of storage space, so its mongod process could no longer complete writes and refused the two secondaries' connections, bringing down the whole MongoDB cluster. On follow-up we learned that the local operations team had taken a backup of the primary's data on 192.168.72.31 without paying attention to the D: drive's remaining space.

Afterwards, the team immediately deleted the redundant data backup on 192.168.72.31 to free space on drive D:. Because the scheduler program was in a hung state, they decided to restart all three cluster servers, 192.168.72.31/32/33.

5. After the restart, the MongoDB cluster returned to normal, and the console on the primary 192.168.72.31 showed the scheduler program bmi being accepted and connecting to the cluster's admin database.

That is everything in "What to do when MongoDB replica set secondary nodes report error 10061 on the console". Thanks for reading, and I hope it proves helpful; for more on related topics, follow the industry news channel!

