Hand-Rolling Golang, Learning etcd, Writing the Raft Protocol by Hand, Part 12: Unit Tests

ioly      2022-02-12     773

Keywords:

hand-rolled golang, learning etcd, hand-written raft protocol, part 12, unit tests

Background

I have recently been reading [云原生分布式存储基石:etcd深入解析] (Du Jun, 2019.1).
This series of notes practices the ideas in Golang.

The Raft distributed consensus algorithm

Distributed storage systems usually tolerate faults by maintaining multiple replicas,
which improves the system's availability.
This raises the core question of distributed storage: how do we keep the replicas consistent?

Raft decomposes the problem into four sub-problems:
1. leader election
2. log replication
3. safety
4. membership changes

Source code on Gitee:
https://gitee.com/ioly/learning.gooop

Original article:
https://my.oschina.net/ioly/blog/5011356

Goal

  • Implement a highly available, strongly consistent distributed KV store based on the Raft protocol

Sub-goals (Day 12)

  • Time for "ignition" at last; getting here was not easy

    • Add extensive diagnostic logging
    • Fix several detail issues
  • Write the unit test code:

    • Start multiple raft nodes
    • Verify that leader election succeeds
    • Write some data to node 1
    • Write some data to node 2
    • Read the data back from node 3
    • Kill the current leader node and verify that re-election succeeds

Unit Test

tRaftKVServer_test.go starts four raft nodes locally and runs a functional test against them

package server

import (
    "learning/gooop/etcd/raft/debug"
    "learning/gooop/etcd/raft/logger"
    "learning/gooop/etcd/raft/rpc"
    "testing"
    "time"
    nrpc "net/rpc"
)

func Test_RaftKVServer(t *testing.T) {
    fnAssertTrue := func(b bool, msg string) {
        if !b {
            t.Fatal(msg)
        }
    }

    logger.Exclude("RaftRPCServer.Ping")
    logger.Exclude("RaftRPCServer.Heartbeat")
    logger.Exclude("feLeaderHeartbeat")
    logger.Exclude(").Heartbeat")

    // start nodes 1 to 4
    _ = new(tRaftKVServer).BeginServeTCP("./node-01")
    _ = new(tRaftKVServer).BeginServeTCP("./node-02")
    _ = new(tRaftKVServer).BeginServeTCP("./node-03")
    _ = new(tRaftKVServer).BeginServeTCP("./node-04")

    // wait for startup and leader election
    time.Sleep(1 * time.Second)
    // state=3 is the leader state; exactly one node should have reached it
    fnAssertTrue(logger.Count("HandleStateChanged, state=3") == 1, "expecting leader node")
    t.Logf("passing electing, leader=%v", debug.LeaderNodeID)

    // put into node-1
    c1, err := nrpc.Dial("tcp", "localhost:3331")
    fnAssertTrue(err == nil, "expecting dial ok on node-01")
    defer c1.Close()
    kcmd := new(rpc.KVCmd)
    kcmd.OPCode = rpc.KVPut
    kcmd.Key = []byte("key-01")
    kcmd.Content = []byte("content 01")
    kret := new(rpc.KVRet)
    err = c1.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
    fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
    t.Log("passing put into node-01")

    // put into node-2
    c2,_ := nrpc.Dial("tcp", "localhost:3332")
    defer c2.Close()
    kcmd.Key = []byte("key-02")
    kcmd.Content = []byte("content 02")
    err = c2.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
    fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
    t.Log("passing put into node-02")

    // get from node-3
    c3, err := nrpc.Dial("tcp", "localhost:3333")
    fnAssertTrue(err == nil, "expecting dial ok on node-03")
    defer c3.Close()
    kcmd.OPCode = rpc.KVGet
    kcmd.Key = []byte("key-02")
    kcmd.Content = nil
    kret.Content = nil
    kret.Key = nil
    err = c3.Call("KVStoreRPCServer.ExecuteKVCmd", kcmd, kret)
    fnAssertTrue(err == nil && kret.Code == rpc.KVOk, "expecting KVOk")
    fnAssertTrue(kret.Content != nil && string(kret.Content) == "content 02", "expecting content 02")
    t.Log("passing get from node-03")

    // kill leader node
    debug.KilledNodeID = debug.LeaderNodeID
    time.Sleep(2 * time.Second)
    fnAssertTrue(logger.Count("HandleStateChanged, state=3") == 2, "expecting reelecting leader node")
    t.Logf("passing reelecting, leader=%v", debug.LeaderNodeID)

    time.Sleep(2 * time.Second)
}

Test Output

Five "passing" messages can be observed, so the test is OK; the re-election latency is also within the expected range, at roughly 700ms.

API server listening at: [::]:46709
=== RUN   Test_RaftKVServer
16:51:09.329792609 tRaftKVServer.BeginServeTCP, starting node-01, port=3331
16:51:09.329864584 tBrokenState(from=node-01, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.329888978 tBrokenState(from=node-01, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.329903778 tBrokenState(from=node-01, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.329912231 tBrokenState(from=node-01, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.329920585 tFollowerState(node-01).init
16:51:09.329926372 tFollowerState(node-01).initEventHandlers
16:51:09.329941794 tFollowerState(node-01).Start
16:51:09.330218761 tRaftKVServer.BeginServeTCP, service ready at port=3331
16:51:09.330549519 tFollowerState(node-01).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.333852427 tRaftKVServer.BeginServeTCP, starting node-02, port=3332
16:51:09.333893483 tBrokenState(from=node-02, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.333925018 tBrokenState(from=node-02, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.333955573 tBrokenState(from=node-02, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.33397762 tBrokenState(from=node-02, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.333990318 tFollowerState(node-02).init
16:51:09.333997643 tFollowerState(node-02).initEventHandlers
16:51:09.334015293 tFollowerState(node-02).Start
16:51:09.334089713 tRaftKVServer.BeginServeTCP, service ready at port=3332
16:51:09.334290701 tFollowerState(node-02).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.337803901 tRaftKVServer.BeginServeTCP, starting node-03, port=3333
16:51:09.337842816 tBrokenState(from=node-03, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.337866444 tBrokenState(from=node-03, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.337880481 tBrokenState(from=node-03, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.337893773 tBrokenState(from=node-03, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.337905184 tFollowerState(node-03).init
16:51:09.337912795 tFollowerState(node-03).initEventHandlers
16:51:09.337945677 tFollowerState(node-03).Start
16:51:09.338027861 tRaftKVServer.BeginServeTCP, service ready at port=3333
16:51:09.338089164 tFollowerState(node-03).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.341594205 tRaftKVServer.BeginServeTCP, starting node-04, port=3334
16:51:09.34163547 tBrokenState(from=node-04, to=node-01@localhost:3331).whenStartThenBeginDial
16:51:09.341679869 tBrokenState(from=node-04, to=node-02@localhost:3332).whenStartThenBeginDial
16:51:09.341694419 tBrokenState(from=node-04, to=node-03@localhost:3333).whenStartThenBeginDial
16:51:09.3417269 tBrokenState(from=node-04, to=node-04@localhost:3334).whenStartThenBeginDial
16:51:09.341741739 tFollowerState(node-04).init
16:51:09.341770267 tFollowerState(node-04).initEventHandlers
16:51:09.341793763 tFollowerState(node-04).Start
16:51:09.34213956 tRaftKVServer.BeginServeTCP, service ready at port=3334
16:51:09.342361058 tFollowerState(node-04).whenStartThenBeginWatchLeaderTimeout, begin
16:51:09.481747744 tBrokenState(from=node-01, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.481770012 tBrokenState(from=node-01, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.481771692 tBrokenState(from=node-01, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.481791046 tBrokenState(from=node-01, to=node-04@localhost:3334).beDisposing
16:51:09.481781787 tBrokenState(from=node-01, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.481807689 tBrokenState(from=node-01, to=node-01@localhost:3331).beDisposing
16:51:09.481747893 tBrokenState(from=node-01, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.481933708 tBrokenState(from=node-01, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.481955515 tBrokenState(from=node-01, to=node-02@localhost:3332).beDisposing
16:51:09.481747742 tBrokenState(from=node-01, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.481973577 tBrokenState(from=node-01, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.481980127 tBrokenState(from=node-01, to=node-03@localhost:3333).beDisposing
16:51:09.485403927 tBrokenState(from=node-02, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.485692968 tBrokenState(from=node-02, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.485707781 tBrokenState(from=node-02, to=node-01@localhost:3331).beDisposing
16:51:09.485462572 tBrokenState(from=node-02, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.485520127 tBrokenState(from=node-02, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.485723854 tBrokenState(from=node-02, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.485733962 tBrokenState(from=node-02, to=node-02@localhost:3332).beDisposing
16:51:09.485733667 tBrokenState(from=node-02, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.485749968 tBrokenState(from=node-02, to=node-03@localhost:3333).beDisposing
16:51:09.485474638 tBrokenState(from=node-02, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.485780798 tBrokenState(from=node-02, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.485787997 tBrokenState(from=node-02, to=node-04@localhost:3334).beDisposing
16:51:09.489019463 tBrokenState(from=node-03, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.489141518 tBrokenState(from=node-03, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.489165663 tBrokenState(from=node-03, to=node-02@localhost:3332).beDisposing
16:51:09.489021724 tBrokenState(from=node-03, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.489191277 tBrokenState(from=node-03, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.489199495 tBrokenState(from=node-03, to=node-03@localhost:3333).beDisposing
16:51:09.489021727 tBrokenState(from=node-03, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.489019621 tBrokenState(from=node-03, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.489217044 tBrokenState(from=node-03, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.489222223 tBrokenState(from=node-03, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.489234054 tBrokenState(from=node-03, to=node-01@localhost:3331).beDisposing
16:51:09.489225544 tBrokenState(from=node-03, to=node-04@localhost:3334).beDisposing
16:51:09.492701804 tBrokenState(from=node-04, to=node-01@localhost:3331).whenDialOKThenSetConn
16:51:09.492720605 tBrokenState(from=node-04, to=node-01@localhost:3331).whenDialOKThenSwitchToConnectedState
16:51:09.492728029 tBrokenState(from=node-04, to=node-01@localhost:3331).beDisposing
16:51:09.492702391 tBrokenState(from=node-04, to=node-02@localhost:3332).whenDialOKThenSetConn
16:51:09.492764 tBrokenState(from=node-04, to=node-02@localhost:3332).whenDialOKThenSwitchToConnectedState
16:51:09.492771402 tBrokenState(from=node-04, to=node-02@localhost:3332).beDisposing
16:51:09.492778635 tBrokenState(from=node-04, to=node-04@localhost:3334).whenDialOKThenSetConn
16:51:09.492791174 tBrokenState(from=node-04, to=node-04@localhost:3334).whenDialOKThenSwitchToConnectedState
16:51:09.492799699 tBrokenState(from=node-04, to=node-04@localhost:3334).beDisposing
16:51:09.492844734 tBrokenState(from=node-04, to=node-03@localhost:3333).whenDialOKThenSetConn
16:51:09.492855638 tBrokenState(from=node-04, to=node-03@localhost:3333).whenDialOKThenSwitchToConnectedState
16:51:09.492863777 tBrokenState(from=node-04, to=node-03@localhost:3333).beDisposing
16:51:10.238765817 tFollowerState(node-01).whenLeaderHeartbeatTimeoutThenSwitchToCandidateState, term=0
16:51:10.238808459 tFollowerState(node-01).feDisposing, disposed=true
16:51:10.238885964 tRaftLSMImplement(node-01,1).HandleStateChanged, state=2
16:51:10.238892892 tRaftLSMImplement(node-01,1).meStateChanged, 2
16:51:10.238897706 tCandidateState(node-01).whenStartThenAskForVote
16:51:10.238902038 tCandidateState(node-01).ceAskingForVote, term=1
16:51:10.238907133 tCandidateState(node-01).ceAskingForVote, vote to myself
16:51:10.2389139 tCandidateState(node-01).ceAskingForVote, ticketCount=1
16:51:10.238920737 tCandidateState(node-01).whenAskingForVoteThenWatchElectionTimeout
16:51:10.239208777 tFollowerState(node-04).feCandidateRequestVote, reset last vote
16:51:10.239233375 tFollowerState(node-04).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239261011 tFollowerState(node-02).feCandidateRequestVote, reset last vote
16:51:10.239273156 tFollowerState(node-02).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239288823 tRaftLSMImplement(node-04,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=<nil>
16:51:10.239303552 RaftRPCServer.RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, e=<nil>
16:51:10.239343533 tRaftLSMImplement(node-02,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=<nil>
16:51:10.239390716 tFollowerState(node-03).feCandidateRequestVote, reset last vote
16:51:10.239431327 tFollowerState(node-03).feVoteToCandidate, candidate=node-01, term=1
16:51:10.239442927 tCandidateState(node-01).handleRequestVoteOK, peer=node-04, term=1
16:51:10.239455262 tCandidateState(node-01).ceReceiveTicket, mTicketCount=2
16:51:10.239463079 tCandidateState(node-01).whenReceiveTicketThenCheckTicketCount
16:51:10.239473836 tRaftLSMImplement(node-03,1).RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, err=<nil>
16:51:10.239488078 RaftRPCServer.RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, e=<nil>
16:51:10.239412948 RaftRPCServer.RequestVote, cmd=&{node-01 1 0 0}, ret=&{0 1}, e=<nil>
16:51:10.239578689 tCandidateState(node-01).handleRequestVoteOK, peer=node-03, term=1
16:51:10.239593183 tCandidateState(node-01).ceReceiveTicket, mTicketCount=3
16:51:10.239601334 tCandidateState(node-01).whenReceiveTicketThenCheckTicketCount
16:51:10.239629478 tCandidateState(node-01).whenWinningTheVoteThenSwitchToLeader
16:51:10.239639823 tCandidateState(node-01).ceDisposing, mTicketCount=0
16:51:10.239696198 tCandidateState(node-01).ceDisposing, mDisposedFlag=true
16:51:10.239752502 tRaftLSMImplement(node-01,2).HandleStateChanged, state=3
16:51:10.239764172 tRaftLSMImplement(node-01,2).meStateChanged, 3
    tRaftKVServer_test.go:34: passing electing, leader=node-01
16:51:10.366875446 tRaftLSMImplement(node-02,1).AppendLog, cmd=&{node-01 1 0xc0004961c0}, ret=&{0 1 0 0}, err=<nil>
16:51:10.366931566 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc0004961c0}, ret=&{0 1 0 0}, e=<nil>
16:51:10.370788589 tRaftLSMImplement(node-03,1).AppendLog, cmd=&{node-01 1 0xc00043c5c0}, ret=&{0 1 0 0}, err=<nil>
16:51:10.370829944 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc00043c5c0}, ret=&{0 1 0 0}, e=<nil>
16:51:10.374865684 tRaftLSMImplement(node-04,1).AppendLog, cmd=&{node-01 1 0xc000496580}, ret=&{0 1 0 0}, err=<nil>
16:51:10.374904568 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496580}, ret=&{0 1 0 0}, e=<nil>
16:51:10.375163435 tRaftLSMImplement(node-02,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.375176692 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.375444843 tRaftLSMImplement(node-03,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.375512284 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.375797446 tRaftLSMImplement(node-04,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.375859612 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.379551174 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 49] [99 111 110 116 101 110 116 32 48 49]}, ret=&{0 [] []}, err=<nil>
16:51:10.379577233 KVStoreRPCServer.ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 49] [99 111 110 116 101 110 116 32 48 49]}, ret=&{0 [] []}, e=<nil>
    tRaftKVServer_test.go:46: passing put into node-01
16:51:10.387761245 tRaftLSMImplement(node-02,1).AppendLog, cmd=&{node-01 1 0xc000496d80}, ret=&{0 1 0 0}, err=<nil>
16:51:10.387777654 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496d80}, ret=&{0 1 0 0}, e=<nil>
16:51:10.391348874 tRaftLSMImplement(node-03,1).AppendLog, cmd=&{node-01 1 0xc000496e40}, ret=&{0 1 0 0}, err=<nil>
16:51:10.391387707 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496e40}, ret=&{0 1 0 0}, e=<nil>
16:51:10.395137344 tRaftLSMImplement(node-04,1).AppendLog, cmd=&{node-01 1 0xc000496f00}, ret=&{0 1 0 0}, err=<nil>
16:51:10.395155304 RaftRPCServer.AppendLog, cmd=&{node-01 1 0xc000496f00}, ret=&{0 1 0 0}, e=<nil>
16:51:10.395343688 tRaftLSMImplement(node-02,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.395357145 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.395495604 tRaftLSMImplement(node-03,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.3955081 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.395667457 tRaftLSMImplement(node-04,1).CommitLog, cmd=&{node-01 1 1}, ret=&{1}, err=<nil>
16:51:10.395688067 RaftRPCServer.CommitLog, cmd=&{node-01 1 1}, ret=&{1}, e=<nil>
16:51:10.399174064 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, err=<nil>
16:51:10.399217896 KVStoreRPCServer.ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, e=<nil>
16:51:10.399373787 tRaftLSMImplement(node-02,1).ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, err=<nil>
16:51:10.399397275 KVStoreRPCServer.ExecuteKVCmd, cmd=&{1 [107 101 121 45 48 50] [99 111 110 116 101 110 116 32 48 50]}, ret=&{0 [] []}, e=<nil>
    tRaftKVServer_test.go:55: passing put into node-02
16:51:10.400256236 tRaftLSMImplement(node-01,3).ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, err=<nil>
16:51:10.400298117 KVStoreRPCServer.ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, e=<nil>
16:51:10.400639059 tRaftLSMImplement(node-03,1).ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, err=<nil>
16:51:10.400663438 KVStoreRPCServer.ExecuteKVCmd, cmd=&{0 [107 101 121 45 48 50] []}, ret=&{0 [] [99 111 110 116 101 110 116 32 48 50]}, e=<nil>
    tRaftKVServer_test.go:68: passing get from node-04
16:51:10.431051964 tRaftKVServer.whenStartThenWatchDebugKill, killing node-01
2021/04/07 16:51:10 rpc.Serve: accept:accept tcp [::]:3331: use of closed network connection
16:51:11.19072568 tFollowerState(node-02).whenLeaderHeartbeatTimeoutThenSwitchToCandidateState, term=1
16:51:11.190755031 tFollowerState(node-02).feDisposing, disposed=true
16:51:11.190856259 tRaftLSMImplement(node-02,1).HandleStateChanged, state=2
16:51:11.190885201 tRaftLSMImplement(node-02,1).meStateChanged, 2
16:51:11.190898966 tCandidateState(node-02).whenStartThenAskForVote
16:51:11.190908485 tCandidateState(node-02).ceAskingForVote, term=2
16:51:11.1909172 tCandidateState(node-02).ceAskingForVote, vote to myself
16:51:11.19093098 tCandidateState(node-02).ceAskingForVote, ticketCount=1
16:51:11.190944035 tCandidateState(node-02).whenAskingForVoteThenWatchElectionTimeout
16:51:11.191694746 tFollowerState(node-03).feVoteToCandidate, candidate=node-02, term=2
16:51:11.191724769 tRaftLSMImplement(node-01,3).RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 0}, err=<nil>
16:51:11.192305012 RaftRPCServer.RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 0}, e=<nil>
16:51:11.192223342 tFollowerState(node-04).feCandidateRequestVote, reset last vote
16:51:11.192464666 tFollowerState(node-04).feVoteToCandidate, candidate=node-02, term=2
16:51:11.19253627 tRaftLSMImplement(node-04,1).RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, err=<nil>
16:51:11.192208542 tRaftLSMImplement(node-03,1).RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, err=<nil>
16:51:11.192613581 tCandidateState(node-02).handleRequestVoteOK, peer=node-01, term=2
16:51:11.192627483 tCandidateState(node-02).ceReceiveTicket, mTicketCount=2
16:51:11.192634994 tCandidateState(node-02).whenReceiveTicketThenCheckTicketCount
16:51:11.19260158 RaftRPCServer.RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, e=<nil>
16:51:11.192764937 tCandidateState(node-02).handleRequestVoteOK, peer=node-03, term=2
16:51:11.192778197 tCandidateState(node-02).ceReceiveTicket, mTicketCount=3
16:51:11.192784986 tCandidateState(node-02).whenReceiveTicketThenCheckTicketCount
16:51:11.192806525 tCandidateState(node-02).whenWinningTheVoteThenSwitchToLeader
16:51:11.192815315 tCandidateState(node-02).ceDisposing, mTicketCount=0
16:51:11.192836837 tCandidateState(node-02).ceDisposing, mDisposedFlag=true
16:51:11.192853274 tRaftLSMImplement(node-02,2).HandleStateChanged, state=3
16:51:11.192863098 tRaftLSMImplement(node-02,2).meStateChanged, 3
16:51:11.193007386 tFollowerState(node-01).init
16:51:11.193017792 tFollowerState(node-01).initEventHandlers
16:51:11.193037127 tRaftLSMImplement(node-01,3).HandleStateChanged, state=1
16:51:11.193046674 tRaftLSMImplement(node-01,3).meStateChanged, 1
16:51:11.193053504 tFollowerState(node-01).Start
16:51:11.19313721 tFollowerState(node-01).whenStartThenBeginWatchLeaderTimeout, begin
16:51:11.192549822 RaftRPCServer.RequestVote, cmd=&{node-02 2 0 0}, ret=&{0 2}, e=<nil>
    tRaftKVServer_test.go:74: passing reelecting, leader=node-02
--- PASS: Test_RaftKVServer (5.09s)
PASS

Debugger finished with exit code 0

debug.go

Context variables that support the unit test

package debug

// KilledNodeID is used to signal that a node should stop working; written by the unit test code
var KilledNodeID = ""

// LeaderNodeID records the current leader node's ID; written by lsm/tLeaderState
var LeaderNodeID = ""

(To be continued)
