vendor/github.com/waku-org/go-discover/COPYING.LESSER (generated, vendored, new file, 165 lines)
@@ -0,0 +1,165 @@
                   GNU LESSER GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.


  This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.

  0. Additional Definitions.

  As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.

  "The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.

  An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.

  A "Combined Work" is a work produced by combining or linking an
Application with the Library.  The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".

  The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.

  The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.

  1. Exception to Section 3 of the GNU GPL.

  You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.

  2. Conveying Modified Versions.

  If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:

   a) under this License, provided that you make a good faith effort to
   ensure that, in the event an Application does not supply the
   function or data, the facility still operates, and performs
   whatever part of its purpose remains meaningful, or

   b) under the GNU GPL, with none of the additional permissions of
   this License applicable to that copy.

  3. Object Code Incorporating Material from Library Header Files.

  The object code form of an Application may incorporate material from
a header file that is part of the Library.  You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:

   a) Give prominent notice with each copy of the object code that the
   Library is used in it and that the Library and its use are
   covered by this License.

   b) Accompany the object code with a copy of the GNU GPL and this license
   document.

  4. Combined Works.

  You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:

   a) Give prominent notice with each copy of the Combined Work that
   the Library is used in it and that the Library and its use are
   covered by this License.

   b) Accompany the Combined Work with a copy of the GNU GPL and this license
   document.

   c) For a Combined Work that displays copyright notices during
   execution, include the copyright notice for the Library among
   these notices, as well as a reference directing the user to the
   copies of the GNU GPL and this license document.

   d) Do one of the following:

       0) Convey the Minimal Corresponding Source under the terms of this
       License, and the Corresponding Application Code in a form
       suitable for, and under terms that permit, the user to
       recombine or relink the Application with a modified version of
       the Linked Version to produce a modified Combined Work, in the
       manner specified by section 6 of the GNU GPL for conveying
       Corresponding Source.

       1) Use a suitable shared library mechanism for linking with the
       Library.  A suitable mechanism is one that (a) uses at run time
       a copy of the Library already present on the user's computer
       system, and (b) will operate properly with a modified version
       of the Library that is interface-compatible with the Linked
       Version.

   e) Provide Installation Information, but only if you would otherwise
   be required to provide such information under section 6 of the
   GNU GPL, and only to the extent that such information is
   necessary to install and execute a modified version of the
   Combined Work produced by recombining or relinking the
   Application with a modified version of the Linked Version.  (If
   you use option 4d0, the Installation Information must accompany
   the Minimal Corresponding Source and Corresponding Application
   Code.  If you use option 4d1, you must provide the Installation
   Information in the manner specified by section 6 of the GNU GPL
   for conveying Corresponding Source.)

  5. Combined Libraries.

  You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:

   a) Accompany the combined library with a copy of the same work based
   on the Library, uncombined with any other library facilities,
   conveyed under the terms of this License.

   b) Give prominent notice with the combined library that part of it
   is a work based on the Library, and explaining where to find the
   accompanying uncombined form of the same work.

  6. Revised Versions of the GNU Lesser General Public License.

  The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.

  Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.

  If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
vendor/github.com/waku-org/go-discover/discover/common.go (generated, vendored, new file, 92 lines)
@@ -0,0 +1,92 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package discover

import (
	"crypto/ecdsa"
	"net"

	"github.com/ethereum/go-ethereum/common/mclock"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/ethereum/go-ethereum/p2p/netutil"
	"github.com/waku-org/go-discover/discover/v5wire"
)

// UDPConn is a network connection on which discovery can operate.
type UDPConn interface {
	ReadFromUDP(b []byte) (n int, addr *net.UDPAddr, err error)
	WriteToUDP(b []byte, addr *net.UDPAddr) (n int, err error)
	Close() error
	LocalAddr() net.Addr
}

type V5Config struct {
	ProtocolID *[6]byte
}

// Config holds settings for the discovery listener.
type Config struct {
	// These settings are required and configure the UDP listener:
	PrivateKey *ecdsa.PrivateKey

	// These settings are optional:
	NetRestrict  *netutil.Netlist      // list of allowed IP networks
	Bootnodes    []*enode.Node         // list of bootstrap nodes
	Unhandled    chan<- ReadPacket     // unhandled packets are sent on this channel
	Log          log.Logger            // if set, log messages go here
	ValidSchemes enr.IdentityScheme    // allowed identity schemes
	V5Config     V5Config              // DiscV5 settings
	ValidNodeFn  func(enode.Node) bool // function to validate a node before it's added to routing tables
	Clock        mclock.Clock
}

func (cfg Config) withDefaults() Config {
	if cfg.Log == nil {
		cfg.Log = log.Root()
	}
	if cfg.ValidSchemes == nil {
		cfg.ValidSchemes = enode.ValidSchemes
	}
	if cfg.Clock == nil {
		cfg.Clock = mclock.System{}
	}
	if cfg.V5Config.ProtocolID == nil {
		cfg.V5Config.ProtocolID = &v5wire.DefaultProtocolID
	}
	return cfg
}

// ListenUDP starts listening for discovery packets on the given UDP socket.
func ListenUDP(c UDPConn, ln *enode.LocalNode, cfg Config) (*UDPv4, error) {
	return ListenV4(c, ln, cfg)
}

// ReadPacket is a packet that couldn't be handled. Those packets are sent to the unhandled
// channel if configured.
type ReadPacket struct {
	Data []byte
	Addr *net.UDPAddr
}

func min(x, y int) int {
	if x > y {
		return y
	}
	return x
}
vendor/github.com/waku-org/go-discover/discover/lookup.go (generated, vendored, new file, 227 lines)
@@ -0,0 +1,227 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package discover

import (
	"context"
	"errors"
	"time"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// lookup performs a network search for nodes close to the given target. It approaches the
// target by querying nodes that are closer to it on each iteration. The given target does
// not need to be an actual node identifier.
type lookup struct {
	tab         *Table
	queryfunc   func(*node) ([]*node, error)
	replyCh     chan []*node
	cancelCh    <-chan struct{}
	asked, seen map[enode.ID]bool
	result      nodesByDistance
	replyBuffer []*node
	queries     int
}

type queryFunc func(*node) ([]*node, error)

func newLookup(ctx context.Context, tab *Table, target enode.ID, q queryFunc) *lookup {
	it := &lookup{
		tab:       tab,
		queryfunc: q,
		asked:     make(map[enode.ID]bool),
		seen:      make(map[enode.ID]bool),
		result:    nodesByDistance{target: target},
		replyCh:   make(chan []*node, alpha),
		cancelCh:  ctx.Done(),
		queries:   -1,
	}
	// Don't query further if we hit ourself.
	// Unlikely to happen often in practice.
	it.asked[tab.self().ID()] = true
	return it
}

// run runs the lookup to completion and returns the closest nodes found.
func (it *lookup) run() []*enode.Node {
	for it.advance() {
	}
	return unwrapNodes(it.result.entries)
}

// advance advances the lookup until any new nodes have been found.
// It returns false when the lookup has ended.
func (it *lookup) advance() bool {
	for it.startQueries() {
		select {
		case nodes := <-it.replyCh:
			it.replyBuffer = it.replyBuffer[:0]
			for _, n := range nodes {
				if n != nil && !it.seen[n.ID()] {
					it.seen[n.ID()] = true
					it.result.push(n, bucketSize)
					it.replyBuffer = append(it.replyBuffer, n)
				}
			}
			it.queries--
			if len(it.replyBuffer) > 0 {
				return true
			}
		case <-it.cancelCh:
			it.shutdown()
		}
	}
	return false
}

func (it *lookup) shutdown() {
	for it.queries > 0 {
		<-it.replyCh
		it.queries--
	}
	it.queryfunc = nil
	it.replyBuffer = nil
}

func (it *lookup) startQueries() bool {
	if it.queryfunc == nil {
		return false
	}

	// The first query returns nodes from the local table.
	if it.queries == -1 {
		closest := it.tab.findnodeByID(it.result.target, bucketSize, false)
		// Avoid finishing the lookup too quickly if table is empty. It'd be better to wait
		// for the table to fill in this case, but there is no good mechanism for that
		// yet.
		if len(closest.entries) == 0 {
			it.slowdown()
		}
		it.queries = 1
		it.replyCh <- closest.entries
		return true
	}

	// Ask the closest nodes that we haven't asked yet.
	for i := 0; i < len(it.result.entries) && it.queries < alpha; i++ {
		n := it.result.entries[i]
		if !it.asked[n.ID()] {
			it.asked[n.ID()] = true
			it.queries++
			go it.query(n, it.replyCh)
		}
	}
	// The lookup ends when no more nodes can be asked.
	return it.queries > 0
}

func (it *lookup) slowdown() {
	sleep := time.NewTimer(1 * time.Second)
	defer sleep.Stop()
	select {
	case <-sleep.C:
	case <-it.tab.closeReq:
	}
}

func (it *lookup) query(n *node, reply chan<- []*node) {
	fails := it.tab.db.FindFails(n.ID(), n.IP())
	r, err := it.queryfunc(n)
	if errors.Is(err, errClosed) {
		// Avoid recording failures on shutdown.
		reply <- nil
		return
	} else if len(r) == 0 {
		fails++
		it.tab.db.UpdateFindFails(n.ID(), n.IP(), fails)
		// Remove the node from the local table if it fails to return anything useful too
		// many times, but only if there are enough other nodes in the bucket.
		dropped := false
		if fails >= maxFindnodeFailures && it.tab.bucketLen(n.ID()) >= bucketSize/2 {
			dropped = true
			it.tab.delete(n)
		}
		it.tab.log.Trace("FINDNODE failed", "id", n.ID(), "failcount", fails, "dropped", dropped, "err", err)
	} else if fails > 0 {
		// Reset failure counter because it counts _consecutive_ failures.
		it.tab.db.UpdateFindFails(n.ID(), n.IP(), 0)
	}

	// Grab as many nodes as possible. Some of them might not be alive anymore, but we'll
	// just remove those again during revalidation.
	for _, n := range r {
		it.tab.addSeenNode(n)
	}
	reply <- r
}

// lookupIterator performs lookup operations and iterates over all seen nodes.
// When a lookup finishes, a new one is created through nextLookup.
type lookupIterator struct {
	buffer     []*node
	nextLookup lookupFunc
	ctx        context.Context
	cancel     func()
	lookup     *lookup
}

type lookupFunc func(ctx context.Context) *lookup

func newLookupIterator(ctx context.Context, next lookupFunc) *lookupIterator {
	ctx, cancel := context.WithCancel(ctx)
	return &lookupIterator{ctx: ctx, cancel: cancel, nextLookup: next}
}

// Node returns the current node.
func (it *lookupIterator) Node() *enode.Node {
	if len(it.buffer) == 0 {
		return nil
	}
	return unwrapNode(it.buffer[0])
}

// Next moves to the next node.
func (it *lookupIterator) Next() bool {
	// Consume next node in buffer.
	if len(it.buffer) > 0 {
		it.buffer = it.buffer[1:]
	}
	// Advance the lookup to refill the buffer.
	for len(it.buffer) == 0 {
		if it.ctx.Err() != nil {
			it.lookup = nil
			it.buffer = nil
			return false
		}
		if it.lookup == nil {
			it.lookup = it.nextLookup(it.ctx)
			continue
		}
		if !it.lookup.advance() {
			it.lookup = nil
			continue
		}
		it.buffer = it.lookup.replyBuffer
	}
	return true
}

// Close ends the iterator.
func (it *lookupIterator) Close() {
	it.cancel()
}
vendor/github.com/waku-org/go-discover/discover/node.go (generated, vendored, new file, 97 lines)
@@ -0,0 +1,97 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package discover

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"errors"
	"math/big"
	"net"
	"time"

	"github.com/ethereum/go-ethereum/common/math"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

// node represents a host on the network.
// The fields of Node may not be modified.
type node struct {
	enode.Node
	addedAt        time.Time // time when the node was added to the table
	livenessChecks uint      // how often liveness was checked
}

type encPubkey [64]byte

func encodePubkey(key *ecdsa.PublicKey) encPubkey {
	var e encPubkey
	math.ReadBits(key.X, e[:len(e)/2])
	math.ReadBits(key.Y, e[len(e)/2:])
	return e
}

func decodePubkey(curve elliptic.Curve, e []byte) (*ecdsa.PublicKey, error) {
	if len(e) != len(encPubkey{}) {
		return nil, errors.New("wrong size public key data")
	}
	p := &ecdsa.PublicKey{Curve: curve, X: new(big.Int), Y: new(big.Int)}
	half := len(e) / 2
	p.X.SetBytes(e[:half])
	p.Y.SetBytes(e[half:])
	if !p.Curve.IsOnCurve(p.X, p.Y) {
		return nil, errors.New("invalid curve point")
	}
	return p, nil
}

func (e encPubkey) id() enode.ID {
	return enode.ID(crypto.Keccak256Hash(e[:]))
}

func wrapNode(n *enode.Node) *node {
	return &node{Node: *n}
}

func wrapNodes(ns []*enode.Node) []*node {
	result := make([]*node, len(ns))
	for i, n := range ns {
		result[i] = wrapNode(n)
	}
	return result
}

func unwrapNode(n *node) *enode.Node {
	return &n.Node
}

func unwrapNodes(ns []*node) []*enode.Node {
	result := make([]*enode.Node, len(ns))
	for i, n := range ns {
		result[i] = unwrapNode(n)
	}
	return result
}

func (n *node) addr() *net.UDPAddr {
	return &net.UDPAddr{IP: n.IP(), Port: n.UDP()}
}

func (n *node) String() string {
	return n.Node.String()
}
vendor/github.com/waku-org/go-discover/discover/ntp.go (generated, vendored, new file, 119 lines)
@@ -0,0 +1,119 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

// Contains the NTP time drift detection via the SNTP protocol:
//   https://tools.ietf.org/html/rfc4330

package discover

import (
	"fmt"
	"net"
	"sort"
	"time"

	"github.com/ethereum/go-ethereum/log"
)

const (
	ntpPool   = "pool.ntp.org" // ntpPool is the NTP server to query for the current time
	ntpChecks = 3              // Number of measurements to do against the NTP server
)

// durationSlice attaches the methods of sort.Interface to []time.Duration,
// sorting in increasing order.
type durationSlice []time.Duration

func (s durationSlice) Len() int           { return len(s) }
func (s durationSlice) Less(i, j int) bool { return s[i] < s[j] }
func (s durationSlice) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

// checkClockDrift queries an NTP server for clock drifts and warns the user if
// one large enough is detected.
func checkClockDrift() {
	drift, err := sntpDrift(ntpChecks)
	if err != nil {
		return
	}
	if drift < -driftThreshold || drift > driftThreshold {
		log.Warn(fmt.Sprintf("System clock seems off by %v, which can prevent network connectivity", drift))
		log.Warn("Please enable network time synchronisation in system settings.")
	} else {
		log.Debug("NTP sanity check done", "drift", drift)
	}
}

// sntpDrift does a naive time resolution against an NTP server and returns the
// measured drift. This method uses the simple version of NTP. It's not precise
// but should be fine for these purposes.
//
// Note, it executes two extra measurements compared to the number of requested
// ones to be able to discard the two extremes as outliers.
func sntpDrift(measurements int) (time.Duration, error) {
	// Resolve the address of the NTP server
	addr, err := net.ResolveUDPAddr("udp", ntpPool+":123")
	if err != nil {
		return 0, err
	}
	// Construct the time request (empty package with only 2 fields set):
	//   Bits 3-5: Protocol version, 3
	//   Bits 6-8: Mode of operation, client, 3
	request := make([]byte, 48)
	request[0] = 3<<3 | 3

	// Execute each of the measurements
	drifts := []time.Duration{}
	for i := 0; i < measurements+2; i++ {
		// Dial the NTP server and send the time retrieval request
		conn, err := net.DialUDP("udp", nil, addr)
		if err != nil {
			return 0, err
		}
		defer conn.Close()

		sent := time.Now()
		if _, err = conn.Write(request); err != nil {
			return 0, err
		}
		// Retrieve the reply and calculate the elapsed time
		conn.SetDeadline(time.Now().Add(5 * time.Second))

		reply := make([]byte, 48)
		if _, err = conn.Read(reply); err != nil {
			return 0, err
		}
		elapsed := time.Since(sent)

		// Reconstruct the time from the reply data
		sec := uint64(reply[43]) | uint64(reply[42])<<8 | uint64(reply[41])<<16 | uint64(reply[40])<<24
		frac := uint64(reply[47]) | uint64(reply[46])<<8 | uint64(reply[45])<<16 | uint64(reply[44])<<24

		nanosec := sec*1e9 + (frac*1e9)>>32

		t := time.Date(1900, 1, 1, 0, 0, 0, 0, time.UTC).Add(time.Duration(nanosec)).Local()

		// Calculate the drift based on an assumed answer time of RRT/2
		drifts = append(drifts, sent.Sub(t)+elapsed/2)
	}
	// Calculate average drift (drop two extremities to avoid outliers)
	sort.Sort(durationSlice(drifts))

	drift := time.Duration(0)
	for i := 1; i < len(drifts)-1; i++ {
		drift += drifts[i]
	}
	return drift / time.Duration(measurements), nil
}
vendor/github.com/waku-org/go-discover/discover/table.go (generated, vendored, new file, 698 lines)
@@ -0,0 +1,698 @@
|
||||
// Copyright 2015 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// Package discover implements the Node Discovery Protocol.
|
||||
//
|
||||
// The Node Discovery protocol provides a way to find RLPx nodes that
|
||||
// can be connected to. It uses a Kademlia-like protocol to maintain a
|
||||
// distributed database of the IDs and endpoints of all listening
|
||||
// nodes.
|
||||
package discover
|
||||
|
||||
import (
|
||||
crand "crypto/rand"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
mrand "math/rand"
|
||||
"net"
|
||||
"sort"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/p2p/enode"
|
||||
"github.com/ethereum/go-ethereum/p2p/netutil"
|
||||
)
|
||||
|
||||
const (
|
||||
alpha = 3 // Kademlia concurrency factor
|
||||
bucketSize = 16 // Kademlia bucket size
|
||||
maxReplacements = 10 // Size of per-bucket replacement list
|
||||
|
||||
// We keep buckets for the upper 1/15 of distances because
|
||||
// it's very unlikely we'll ever encounter a node that's closer.
|
||||
hashBits = len(common.Hash{}) * 8
|
||||
nBuckets = hashBits / 15 // Number of buckets
|
||||
bucketMinDistance = hashBits - nBuckets // Log distance of closest bucket
|
||||
|
||||
// IP address limits.
|
||||
bucketIPLimit, bucketSubnet = 2, 24 // at most 2 addresses from the same /24
|
||||
tableIPLimit, tableSubnet = 10, 24
|
||||
|
||||
refreshInterval = 30 * time.Minute
|
||||
revalidateInterval = 10 * time.Second
|
||||
copyNodesInterval = 30 * time.Second
|
||||
seedMinTableTime = 5 * time.Minute
|
||||
seedCount = 30
|
||||
seedMaxAge = 5 * 24 * time.Hour
|
||||
)
|
||||
|
||||
// Table is the 'node table', a Kademlia-like index of neighbor nodes. The table keeps
// itself up-to-date by verifying the liveness of neighbors and requesting their node
// records when announcements of a new record version are received.
type Table struct {
	mutex   sync.Mutex        // protects buckets, bucket content, nursery, rand
	buckets [nBuckets]*bucket // index of known nodes by distance
	nursery []*node           // bootstrap nodes
	rand    *mrand.Rand       // source of randomness, periodically reseeded
	ips     netutil.DistinctNetSet

	log        log.Logger
	db         *enode.DB // database of known nodes
	net        transport
	refreshReq chan chan struct{}
	initDone   chan struct{}
	closeReq   chan struct{}
	closed     chan struct{}

	nodeIsValidFn func(enode.Node) bool

	nodeAddedHook func(*node) // for testing
}

// transport is implemented by the UDP transports.
type transport interface {
	Self() *enode.Node
	RequestENR(*enode.Node) (*enode.Node, error)
	lookupRandom() []*enode.Node
	lookupSelf() []*enode.Node
	ping(*enode.Node) (seq uint64, err error)
}
// bucket contains nodes, ordered by their last activity. the entry
// that was most recently active is the first element in entries.
type bucket struct {
	entries      []*node // live entries, sorted by time of last contact
	replacements []*node // recently seen nodes to be used if revalidation fails
	ips          netutil.DistinctNetSet
}

func newTable(t transport, db *enode.DB, bootnodes []*enode.Node, nodeIsValidFn func(enode.Node) bool, log log.Logger) (*Table, error) {
	tab := &Table{
		net:           t,
		db:            db,
		refreshReq:    make(chan chan struct{}),
		initDone:      make(chan struct{}),
		closeReq:      make(chan struct{}),
		closed:        make(chan struct{}),
		rand:          mrand.New(mrand.NewSource(0)),
		ips:           netutil.DistinctNetSet{Subnet: tableSubnet, Limit: tableIPLimit},
		nodeIsValidFn: nodeIsValidFn,
		log:           log,
	}
	if err := tab.setFallbackNodes(bootnodes); err != nil {
		return nil, err
	}
	for i := range tab.buckets {
		tab.buckets[i] = &bucket{
			ips: netutil.DistinctNetSet{Subnet: bucketSubnet, Limit: bucketIPLimit},
		}
	}
	tab.seedRand()
	tab.loadSeedNodes()

	return tab, nil
}

func (tab *Table) self() *enode.Node {
	return tab.net.Self()
}
func (tab *Table) seedRand() {
	var b [8]byte
	crand.Read(b[:])

	tab.mutex.Lock()
	tab.rand.Seed(int64(binary.BigEndian.Uint64(b[:])))
	tab.mutex.Unlock()
}

// ReadRandomNodes fills the given slice with random nodes from the table. The results
// are guaranteed to be unique for a single invocation, no node will appear twice.
func (tab *Table) ReadRandomNodes(buf []*enode.Node) (n int) {
	if !tab.isInitDone() {
		return 0
	}
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	var nodes []*enode.Node
	for _, b := range &tab.buckets {
		for _, n := range b.entries {
			nodes = append(nodes, unwrapNode(n))
		}
	}
	// Shuffle.
	for i := 0; i < len(nodes); i++ {
		j := tab.rand.Intn(len(nodes))
		nodes[i], nodes[j] = nodes[j], nodes[i]
	}
	return copy(buf, nodes)
}

// getNode returns the node with the given ID or nil if it isn't in the table.
func (tab *Table) getNode(id enode.ID) *enode.Node {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	b := tab.bucket(id)
	for _, e := range b.entries {
		if e.ID() == id {
			return unwrapNode(e)
		}
	}
	return nil
}

// close terminates the network listener and flushes the node database.
func (tab *Table) close() {
	close(tab.closeReq)
	<-tab.closed
}
// setFallbackNodes sets the initial points of contact. These nodes
// are used to connect to the network if the table is empty and there
// are no known nodes in the database.
func (tab *Table) setFallbackNodes(nodes []*enode.Node) error {
	for _, n := range nodes {
		if err := n.ValidateComplete(); err != nil {
			return fmt.Errorf("bad bootstrap node %q: %v", n, err)
		}
	}
	tab.nursery = wrapNodes(nodes)
	return nil
}

// isInitDone returns whether the table's initial seeding procedure has completed.
func (tab *Table) isInitDone() bool {
	select {
	case <-tab.initDone:
		return true
	default:
		return false
	}
}

func (tab *Table) refresh() <-chan struct{} {
	done := make(chan struct{})
	select {
	case tab.refreshReq <- done:
	case <-tab.closeReq:
		close(done)
	}
	return done
}
// loop schedules runs of doRefresh, doRevalidate and copyLiveNodes.
func (tab *Table) loop() {
	var (
		revalidate     = time.NewTimer(tab.nextRevalidateTime())
		refresh        = time.NewTicker(refreshInterval)
		copyNodes      = time.NewTicker(copyNodesInterval)
		refreshDone    = make(chan struct{})           // where doRefresh reports completion
		revalidateDone chan struct{}                   // where doRevalidate reports completion
		waiting        = []chan struct{}{tab.initDone} // holds waiting callers while doRefresh runs
	)
	defer refresh.Stop()
	defer revalidate.Stop()
	defer copyNodes.Stop()

	// Start initial refresh.
	go tab.doRefresh(refreshDone)

loop:
	for {
		select {
		case <-refresh.C:
			tab.seedRand()
			if refreshDone == nil {
				refreshDone = make(chan struct{})
				go tab.doRefresh(refreshDone)
			}
		case req := <-tab.refreshReq:
			waiting = append(waiting, req)
			if refreshDone == nil {
				refreshDone = make(chan struct{})
				go tab.doRefresh(refreshDone)
			}
		case <-refreshDone:
			for _, ch := range waiting {
				close(ch)
			}
			waiting, refreshDone = nil, nil
		case <-revalidate.C:
			revalidateDone = make(chan struct{})
			go tab.doRevalidate(revalidateDone)
		case <-revalidateDone:
			revalidate.Reset(tab.nextRevalidateTime())
			revalidateDone = nil
		case <-copyNodes.C:
			go tab.copyLiveNodes()
		case <-tab.closeReq:
			break loop
		}
	}

	if refreshDone != nil {
		<-refreshDone
	}
	for _, ch := range waiting {
		close(ch)
	}
	if revalidateDone != nil {
		<-revalidateDone
	}
	close(tab.closed)
}
// doRefresh performs a lookup for a random target to keep buckets full. seed nodes are
// inserted if the table is empty (initial bootstrap or discarded faulty peers).
func (tab *Table) doRefresh(done chan struct{}) {
	defer close(done)

	// Load nodes from the database and insert
	// them. This should yield a few previously seen nodes that are
	// (hopefully) still alive.
	tab.loadSeedNodes()

	// Run self lookup to discover new neighbor nodes.
	tab.net.lookupSelf()

	// The Kademlia paper specifies that the bucket refresh should
	// perform a lookup in the least recently used bucket. We cannot
	// adhere to this because the findnode target is a 512bit value
	// (not hash-sized) and it is not easily possible to generate a
	// sha3 preimage that falls into a chosen bucket.
	// We perform a few lookups with a random target instead.
	for i := 0; i < 3; i++ {
		tab.net.lookupRandom()
	}
}

func (tab *Table) loadSeedNodes() {
	seeds := wrapNodes(tab.db.QuerySeeds(seedCount, seedMaxAge))
	seeds = append(seeds, tab.nursery...)
	for i := range seeds {
		seed := seeds[i]
		age := log.Lazy{Fn: func() interface{} { return time.Since(tab.db.LastPongReceived(seed.ID(), seed.IP())) }}
		tab.log.Trace("Found seed node in database", "id", seed.ID(), "addr", seed.addr(), "age", age)
		tab.addSeenNode(seed)
	}
}
// doRevalidate checks that the last node in a random bucket is still live and replaces or
// deletes the node if it isn't.
func (tab *Table) doRevalidate(done chan<- struct{}) {
	defer func() { done <- struct{}{} }()

	last, bi := tab.nodeToRevalidate()
	if last == nil {
		// No non-empty bucket found.
		return
	}

	// Ping the selected node and wait for a pong.
	remoteSeq, err := tab.net.ping(unwrapNode(last))

	// Also fetch record if the node replied and returned a higher sequence number.
	if last.Seq() < remoteSeq {
		n, err := tab.net.RequestENR(unwrapNode(last))
		if err != nil {
			tab.log.Debug("ENR request failed", "id", last.ID(), "addr", last.addr(), "err", err)
		} else {
			last = &node{Node: *n, addedAt: last.addedAt, livenessChecks: last.livenessChecks}
		}
	}

	tab.mutex.Lock()
	defer tab.mutex.Unlock()
	b := tab.buckets[bi]
	if err == nil {
		// The node responded, move it to the front.
		last.livenessChecks++
		tab.log.Debug("Revalidated node", "b", bi, "id", last.ID(), "checks", last.livenessChecks)
		tab.bumpInBucket(b, last)
		return
	}
	// No reply received, pick a replacement or delete the node if there aren't
	// any replacements.
	if r := tab.replace(b, last); r != nil {
		tab.log.Debug("Replaced dead node", "b", bi, "id", last.ID(), "ip", last.IP(), "checks", last.livenessChecks, "r", r.ID(), "rip", r.IP())
	} else {
		tab.log.Debug("Removed dead node", "b", bi, "id", last.ID(), "ip", last.IP(), "checks", last.livenessChecks)
	}
}
// nodeToRevalidate returns the last node in a random, non-empty bucket.
func (tab *Table) nodeToRevalidate() (n *node, bi int) {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	for _, bi = range tab.rand.Perm(len(tab.buckets)) {
		b := tab.buckets[bi]
		if len(b.entries) > 0 {
			last := b.entries[len(b.entries)-1]
			return last, bi
		}
	}
	return nil, 0
}

func (tab *Table) nextRevalidateTime() time.Duration {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	return time.Duration(tab.rand.Int63n(int64(revalidateInterval)))
}

// copyLiveNodes adds nodes from the table to the database if they have been in the table
// longer than seedMinTableTime.
func (tab *Table) copyLiveNodes() {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	now := time.Now()
	for _, b := range &tab.buckets {
		for _, n := range b.entries {
			if n.livenessChecks > 0 && now.Sub(n.addedAt) >= seedMinTableTime {
				tab.db.UpdateNode(unwrapNode(n))
			}
		}
	}
}
// findnodeByID returns the n nodes in the table that are closest to the given id.
// This is used by the FINDNODE/v4 handler.
//
// The preferLive parameter says whether the caller wants liveness-checked results. If
// preferLive is true and the table contains any verified nodes, the result will not
// contain unverified nodes. However, if there are no verified nodes at all, the result
// will contain unverified nodes.
func (tab *Table) findnodeByID(target enode.ID, nresults int, preferLive bool) *nodesByDistance {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	// Scan all buckets. There might be a better way to do this, but there aren't that many
	// buckets, so this solution should be fine. The worst-case complexity of this loop
	// is O(tab.len() * nresults).
	nodes := &nodesByDistance{target: target}
	liveNodes := &nodesByDistance{target: target}
	for _, b := range &tab.buckets {
		for _, n := range b.entries {
			nodes.push(n, nresults)
			if preferLive && n.livenessChecks > 0 {
				liveNodes.push(n, nresults)
			}
		}
	}

	if preferLive && len(liveNodes.entries) > 0 {
		return liveNodes
	}
	return nodes
}

// len returns the number of nodes in the table.
func (tab *Table) len() (n int) {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	for _, b := range &tab.buckets {
		n += len(b.entries)
	}
	return n
}

// bucketLen returns the number of nodes in the bucket for the given ID.
func (tab *Table) bucketLen(id enode.ID) int {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	return len(tab.bucket(id).entries)
}
// bucket returns the bucket for the given node ID hash.
func (tab *Table) bucket(id enode.ID) *bucket {
	d := enode.LogDist(tab.self().ID(), id)
	return tab.bucketAtDistance(d)
}

func (tab *Table) bucketAtDistance(d int) *bucket {
	if d <= bucketMinDistance {
		return tab.buckets[0]
	}
	return tab.buckets[d-bucketMinDistance-1]
}
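For orientation, the distance-to-bucket mapping above can be checked in isolation. A minimal sketch, assuming the constant values that follow from `len(common.Hash{}) == 32` (so `hashBits = 256`, `nBuckets = 17`, `bucketMinDistance = 239`); `bucketIndex` is an illustrative helper, not a function from the source:

```go
package main

import "fmt"

const (
	hashBits          = 32 * 8              // 256, as len(common.Hash{}) * 8
	nBuckets          = hashBits / 15       // 17
	bucketMinDistance = hashBits - nBuckets // 239
)

// bucketIndex mirrors Table.bucketAtDistance, returning the bucket
// index for a given log distance instead of the bucket itself.
func bucketIndex(d int) int {
	if d <= bucketMinDistance {
		return 0
	}
	return d - bucketMinDistance - 1
}

func main() {
	// All distances up to bucketMinDistance collapse into bucket 0;
	// the remaining high distances map onto indices 0..nBuckets-1.
	fmt.Println(bucketIndex(1), bucketIndex(239), bucketIndex(240), bucketIndex(256))
}
```

Note that distance 240 also lands in bucket 0 (240 - 239 - 1 = 0), so the closest bucket actually covers every distance up to `bucketMinDistance + 1`, matching the comment that closer nodes are too unlikely to deserve separate buckets.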
// addSeenNode adds a node which may or may not be live to the end of a bucket. If the
// bucket has space available, adding the node succeeds immediately. Otherwise, the node is
// added to the replacements list.
//
// The caller must not hold tab.mutex.
func (tab *Table) addSeenNode(n *node) {
	if n.ID() == tab.self().ID() {
		return
	}

	if tab.nodeIsValidFn != nil && !tab.nodeIsValidFn(n.Node) {
		return
	}

	tab.mutex.Lock()
	defer tab.mutex.Unlock()
	b := tab.bucket(n.ID())
	if contains(b.entries, n.ID()) {
		// Already in bucket, don't add.
		return
	}
	if len(b.entries) >= bucketSize {
		// Bucket full, maybe add as replacement.
		tab.addReplacement(b, n)
		return
	}
	if !tab.addIP(b, n.IP()) {
		// Can't add: IP limit reached.
		return
	}
	// Add to end of bucket:
	b.entries = append(b.entries, n)
	b.replacements = deleteNode(b.replacements, n)
	n.addedAt = time.Now()
	if tab.nodeAddedHook != nil {
		tab.nodeAddedHook(n)
	}
}
// addVerifiedNode adds a node whose existence has been verified recently to the front of a
// bucket. If the node is already in the bucket, it is moved to the front. If the bucket
// has no space, the node is added to the replacements list.
//
// There is an additional safety measure: if the table is still initializing the node
// is not added. This prevents an attack where the table could be filled by just sending
// ping repeatedly.
//
// The caller must not hold tab.mutex.
func (tab *Table) addVerifiedNode(n *node) {
	if !tab.isInitDone() {
		return
	}
	if n.ID() == tab.self().ID() {
		return
	}

	if tab.nodeIsValidFn != nil && !tab.nodeIsValidFn(n.Node) {
		return
	}

	tab.mutex.Lock()
	defer tab.mutex.Unlock()
	b := tab.bucket(n.ID())
	if tab.bumpInBucket(b, n) {
		// Already in bucket, moved to front.
		return
	}
	if len(b.entries) >= bucketSize {
		// Bucket full, maybe add as replacement.
		tab.addReplacement(b, n)
		return
	}
	if !tab.addIP(b, n.IP()) {
		// Can't add: IP limit reached.
		return
	}
	// Add to front of bucket.
	b.entries, _ = pushNode(b.entries, n, bucketSize)
	b.replacements = deleteNode(b.replacements, n)
	n.addedAt = time.Now()
	if tab.nodeAddedHook != nil {
		tab.nodeAddedHook(n)
	}
}
// delete removes an entry from the node table. It is used to evacuate dead nodes.
func (tab *Table) delete(node *node) {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	tab.deleteInBucket(tab.bucket(node.ID()), node)
}

func (tab *Table) addIP(b *bucket, ip net.IP) bool {
	if len(ip) == 0 {
		return false // Nodes without IP cannot be added.
	}
	if netutil.IsLAN(ip) {
		return true
	}
	if !tab.ips.Add(ip) {
		tab.log.Debug("IP exceeds table limit", "ip", ip)
		return false
	}
	if !b.ips.Add(ip) {
		tab.log.Debug("IP exceeds bucket limit", "ip", ip)
		tab.ips.Remove(ip)
		return false
	}
	return true
}

func (tab *Table) removeIP(b *bucket, ip net.IP) {
	if netutil.IsLAN(ip) {
		return
	}
	tab.ips.Remove(ip)
	b.ips.Remove(ip)
}
func (tab *Table) addReplacement(b *bucket, n *node) {
	for _, e := range b.replacements {
		if e.ID() == n.ID() {
			return // already in list
		}
	}
	if !tab.addIP(b, n.IP()) {
		return
	}
	var removed *node
	b.replacements, removed = pushNode(b.replacements, n, maxReplacements)
	if removed != nil {
		tab.removeIP(b, removed.IP())
	}
}

// replace removes n from the replacement list and replaces 'last' with it if it is the
// last entry in the bucket. If 'last' isn't the last entry, it has either been replaced
// with someone else or became active.
func (tab *Table) replace(b *bucket, last *node) *node {
	if len(b.entries) == 0 || b.entries[len(b.entries)-1].ID() != last.ID() {
		// Entry has moved, don't replace it.
		return nil
	}
	// Still the last entry.
	if len(b.replacements) == 0 {
		tab.deleteInBucket(b, last)
		return nil
	}
	r := b.replacements[tab.rand.Intn(len(b.replacements))]
	b.replacements = deleteNode(b.replacements, r)
	b.entries[len(b.entries)-1] = r
	tab.removeIP(b, last.IP())
	return r
}
// bumpInBucket moves the given node to the front of the bucket entry list
// if it is contained in that list.
func (tab *Table) bumpInBucket(b *bucket, n *node) bool {
	for i := range b.entries {
		if b.entries[i].ID() == n.ID() {
			if !n.IP().Equal(b.entries[i].IP()) {
				// Endpoint has changed, ensure that the new IP fits into table limits.
				tab.removeIP(b, b.entries[i].IP())
				if !tab.addIP(b, n.IP()) {
					// It doesn't, put the previous one back.
					tab.addIP(b, b.entries[i].IP())
					return false
				}
			}
			// Move it to the front.
			copy(b.entries[1:], b.entries[:i])
			b.entries[0] = n
			return true
		}
	}
	return false
}

func (tab *Table) deleteInBucket(b *bucket, n *node) {
	b.entries = deleteNode(b.entries, n)
	tab.removeIP(b, n.IP())
}
func contains(ns []*node, id enode.ID) bool {
	for _, n := range ns {
		if n.ID() == id {
			return true
		}
	}
	return false
}

// pushNode adds n to the front of list, keeping at most max items.
func pushNode(list []*node, n *node, max int) ([]*node, *node) {
	if len(list) < max {
		list = append(list, nil)
	}
	removed := list[len(list)-1]
	copy(list[1:], list)
	list[0] = n
	return list, removed
}

// deleteNode removes n from list.
func deleteNode(list []*node, n *node) []*node {
	for i := range list {
		if list[i].ID() == n.ID() {
			return append(list[:i], list[i+1:]...)
		}
	}
	return list
}
// nodesByDistance is a list of nodes, ordered by distance to target.
type nodesByDistance struct {
	entries []*node
	target  enode.ID
}

// push adds the given node to the list, keeping the total size below maxElems.
func (h *nodesByDistance) push(n *node, maxElems int) {
	ix := sort.Search(len(h.entries), func(i int) bool {
		return enode.DistCmp(h.target, h.entries[i].ID(), n.ID()) > 0
	})
	if len(h.entries) < maxElems {
		h.entries = append(h.entries, n)
	}
	if ix == len(h.entries) {
		// farther away than all nodes we already have.
		// if there was room for it, the node is now the last element.
	} else {
		// slide existing entries down to make room
		// this will overwrite the entry we just appended.
		copy(h.entries[ix+1:], h.entries[ix:])
		h.entries[ix] = n
	}
}
787
vendor/github.com/waku-org/go-discover/discover/v4_udp.go
generated
vendored
Normal file
@@ -0,0 +1,787 @@
// Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package discover

import (
	"bytes"
	"container/list"
	"context"
	"crypto/ecdsa"
	crand "crypto/rand"
	"errors"
	"fmt"
	"io"
	"net"
	"sync"
	"time"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/netutil"
	"github.com/waku-org/go-discover/discover/v4wire"
)
// Errors
var (
	errExpired          = errors.New("expired")
	errUnsolicitedReply = errors.New("unsolicited reply")
	errUnknownNode      = errors.New("unknown node")
	errTimeout          = errors.New("RPC timeout")
	errClockWarp        = errors.New("reply deadline too far in the future")
	errClosed           = errors.New("socket closed")
	errLowPort          = errors.New("low port")
)

const (
	respTimeout    = 500 * time.Millisecond
	expiration     = 20 * time.Second
	bondExpiration = 24 * time.Hour

	maxFindnodeFailures = 5                // nodes exceeding this limit are dropped
	ntpFailureThreshold = 32               // Continuous timeouts after which to check NTP
	ntpWarningCooldown  = 10 * time.Minute // Minimum amount of time to pass before repeating NTP warning
	driftThreshold      = 10 * time.Second // Allowed clock drift before warning user

	// Discovery packets are defined to be no larger than 1280 bytes.
	// Packets larger than this size will be cut at the end and treated
	// as invalid because their hash won't match.
	maxPacketSize = 1280
)
// UDPv4 implements the v4 wire protocol.
type UDPv4 struct {
	conn        UDPConn
	log         log.Logger
	netrestrict *netutil.Netlist
	priv        *ecdsa.PrivateKey
	localNode   *enode.LocalNode
	db          *enode.DB
	tab         *Table
	closeOnce   sync.Once
	wg          sync.WaitGroup

	addReplyMatcher chan *replyMatcher
	gotreply        chan reply
	closeCtx        context.Context
	cancelCloseCtx  context.CancelFunc
}
// replyMatcher represents a pending reply.
//
// Some implementations of the protocol wish to send more than one
// reply packet to findnode. In general, any neighbors packet cannot
// be matched up with a specific findnode packet.
//
// Our implementation handles this by storing a callback function for
// each pending reply. Incoming packets from a node are dispatched
// to all callback functions for that node.
type replyMatcher struct {
	// these fields must match in the reply.
	from  enode.ID
	ip    net.IP
	ptype byte

	// time when the request must complete
	deadline time.Time

	// callback is called when a matching reply arrives. If it returns matched == true, the
	// reply was acceptable. The second return value indicates whether the callback should
	// be removed from the pending reply queue. If it returns false, the reply is considered
	// incomplete and the callback will be invoked again for the next matching reply.
	callback replyMatchFunc

	// errc receives nil when the callback indicates completion or an
	// error if no further reply is received within the timeout.
	errc chan error

	// reply contains the most recent reply. This field is safe for reading after errc has
	// received a value.
	reply v4wire.Packet
}

type replyMatchFunc func(v4wire.Packet) (matched bool, requestDone bool)

// reply is a reply packet from a certain node.
type reply struct {
	from enode.ID
	ip   net.IP
	data v4wire.Packet
	// loop indicates whether there was
	// a matching request by sending on this channel.
	matched chan<- bool
}
func ListenV4(c UDPConn, ln *enode.LocalNode, cfg Config) (*UDPv4, error) {
	cfg = cfg.withDefaults()
	closeCtx, cancel := context.WithCancel(context.Background())
	t := &UDPv4{
		conn:            c,
		priv:            cfg.PrivateKey,
		netrestrict:     cfg.NetRestrict,
		localNode:       ln,
		db:              ln.Database(),
		gotreply:        make(chan reply),
		addReplyMatcher: make(chan *replyMatcher),
		closeCtx:        closeCtx,
		cancelCloseCtx:  cancel,
		log:             cfg.Log,
	}

	tab, err := newTable(t, ln.Database(), cfg.Bootnodes, cfg.ValidNodeFn, t.log)
	if err != nil {
		return nil, err
	}
	t.tab = tab
	go tab.loop()

	t.wg.Add(2)
	go t.loop()
	go t.readLoop(cfg.Unhandled)
	return t, nil
}

// Self returns the local node.
func (t *UDPv4) Self() *enode.Node {
	return t.localNode.Node()
}

// Close shuts down the socket and aborts any running queries.
func (t *UDPv4) Close() {
	t.closeOnce.Do(func() {
		t.cancelCloseCtx()
		t.conn.Close()
		t.wg.Wait()
		t.tab.close()
	})
}
// Resolve searches for a specific node with the given ID and tries to get the most recent
// version of the node record for it. It returns n if the node could not be resolved.
func (t *UDPv4) Resolve(n *enode.Node) *enode.Node {
	// Try asking directly. This works if the node is still responding on the endpoint we have.
	if rn, err := t.RequestENR(n); err == nil {
		return rn
	}
	// Check table for the ID, we might have a newer version there.
	if intable := t.tab.getNode(n.ID()); intable != nil && intable.Seq() > n.Seq() {
		n = intable
		if rn, err := t.RequestENR(n); err == nil {
			return rn
		}
	}
	// Otherwise perform a network lookup.
	var key enode.Secp256k1
	if n.Load(&key) != nil {
		return n // no secp256k1 key
	}
	result := t.LookupPubkey((*ecdsa.PublicKey)(&key))
	for _, rn := range result {
		if rn.ID() == n.ID() {
			if rn, err := t.RequestENR(rn); err == nil {
				return rn
			}
		}
	}
	return n
}

func (t *UDPv4) ourEndpoint() v4wire.Endpoint {
	n := t.Self()
	a := &net.UDPAddr{IP: n.IP(), Port: n.UDP()}
	return v4wire.NewEndpoint(a, uint16(n.TCP()))
}
// Ping sends a ping message to the given node.
func (t *UDPv4) Ping(n *enode.Node) error {
	_, err := t.ping(n)
	return err
}

// ping sends a ping message to the given node and waits for a reply.
func (t *UDPv4) ping(n *enode.Node) (seq uint64, err error) {
	rm := t.sendPing(n.ID(), &net.UDPAddr{IP: n.IP(), Port: n.UDP()}, nil)
	if err = <-rm.errc; err == nil {
		seq = rm.reply.(*v4wire.Pong).ENRSeq
	}
	return seq, err
}

// sendPing sends a ping message to the given node and invokes the callback
// when the reply arrives.
func (t *UDPv4) sendPing(toid enode.ID, toaddr *net.UDPAddr, callback func()) *replyMatcher {
	req := t.makePing(toaddr)
	packet, hash, err := v4wire.Encode(t.priv, req)
	if err != nil {
		errc := make(chan error, 1)
		errc <- err
		return &replyMatcher{errc: errc}
	}
	// Add a matcher for the reply to the pending reply queue. Pongs are matched if they
	// reference the ping we're about to send.
	rm := t.pending(toid, toaddr.IP, v4wire.PongPacket, func(p v4wire.Packet) (matched bool, requestDone bool) {
		matched = bytes.Equal(p.(*v4wire.Pong).ReplyTok, hash)
		if matched && callback != nil {
			callback()
		}
		return matched, matched
	})
	// Send the packet.
	t.localNode.UDPContact(toaddr)
	t.write(toaddr, toid, req.Name(), packet)
	return rm
}

func (t *UDPv4) makePing(toaddr *net.UDPAddr) *v4wire.Ping {
	return &v4wire.Ping{
		Version:    4,
		From:       t.ourEndpoint(),
		To:         v4wire.NewEndpoint(toaddr, 0),
		Expiration: uint64(time.Now().Add(expiration).Unix()),
		ENRSeq:     t.localNode.Node().Seq(),
	}
}
// LookupPubkey finds the closest nodes to the given public key.
func (t *UDPv4) LookupPubkey(key *ecdsa.PublicKey) []*enode.Node {
	if t.tab.len() == 0 {
		// All nodes were dropped, refresh. The very first query will hit this
		// case and run the bootstrapping logic.
		<-t.tab.refresh()
	}
	return t.newLookup(t.closeCtx, encodePubkey(key)).run()
}

// RandomNodes is an iterator yielding nodes from a random walk of the DHT.
func (t *UDPv4) RandomNodes() enode.Iterator {
	return newLookupIterator(t.closeCtx, t.newRandomLookup)
}

// lookupRandom implements transport.
func (t *UDPv4) lookupRandom() []*enode.Node {
	return t.newRandomLookup(t.closeCtx).run()
}

// lookupSelf implements transport.
func (t *UDPv4) lookupSelf() []*enode.Node {
	return t.newLookup(t.closeCtx, encodePubkey(&t.priv.PublicKey)).run()
}

func (t *UDPv4) newRandomLookup(ctx context.Context) *lookup {
	var target encPubkey
	crand.Read(target[:])
	return t.newLookup(ctx, target)
}

func (t *UDPv4) newLookup(ctx context.Context, targetKey encPubkey) *lookup {
	target := enode.ID(crypto.Keccak256Hash(targetKey[:]))
	ekey := v4wire.Pubkey(targetKey)
	it := newLookup(ctx, t.tab, target, func(n *node) ([]*node, error) {
		return t.findnode(n.ID(), n.addr(), ekey)
	})
	return it
}
// findnode sends a findnode request to the given node and waits until
// the node has sent up to k neighbors.
func (t *UDPv4) findnode(toid enode.ID, toaddr *net.UDPAddr, target v4wire.Pubkey) ([]*node, error) {
	t.ensureBond(toid, toaddr)

	// Add a matcher for 'neighbours' replies to the pending reply queue. The matcher is
	// active until enough nodes have been received.
	nodes := make([]*node, 0, bucketSize)
	nreceived := 0
	rm := t.pending(toid, toaddr.IP, v4wire.NeighborsPacket, func(r v4wire.Packet) (matched bool, requestDone bool) {
		reply := r.(*v4wire.Neighbors)
		for _, rn := range reply.Nodes {
			nreceived++
			n, err := t.nodeFromRPC(toaddr, rn)
			if err != nil {
				t.log.Trace("Invalid neighbor node received", "ip", rn.IP, "addr", toaddr, "err", err)
				continue
			}
			nodes = append(nodes, n)
		}
		return true, nreceived >= bucketSize
	})
	t.send(toaddr, toid, &v4wire.Findnode{
		Target:     target,
		Expiration: uint64(time.Now().Add(expiration).Unix()),
	})
	// Ensure that callers don't see a timeout if the node actually responded. Since
	// findnode can receive more than one neighbors response, the reply matcher will be
	// active until the remote node sends enough nodes. If the remote end doesn't have
	// enough nodes the reply matcher will time out waiting for the second reply, but
	// there's no need for an error in that case.
	err := <-rm.errc
	if errors.Is(err, errTimeout) && rm.reply != nil {
		err = nil
	}
	return nodes, err
}
// RequestENR sends ENRRequest to the given node and waits for a response.
|
||||
func (t *UDPv4) RequestENR(n *enode.Node) (*enode.Node, error) {
|
||||
addr := &net.UDPAddr{IP: n.IP(), Port: n.UDP()}
|
||||
t.ensureBond(n.ID(), addr)
|
||||
|
||||
req := &v4wire.ENRRequest{
|
||||
Expiration: uint64(time.Now().Add(expiration).Unix()),
|
||||
}
|
||||
packet, hash, err := v4wire.Encode(t.priv, req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Add a matcher for the reply to the pending reply queue. Responses are matched if
|
||||
// they reference the request we're about to send.
|
||||
rm := t.pending(n.ID(), addr.IP, v4wire.ENRResponsePacket, func(r v4wire.Packet) (matched bool, requestDone bool) {
|
||||
matched = bytes.Equal(r.(*v4wire.ENRResponse).ReplyTok, hash)
|
||||
return matched, matched
|
||||
})
|
||||
// Send the packet and wait for the reply.
|
||||
t.write(addr, n.ID(), req.Name(), packet)
|
||||
if err := <-rm.errc; err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Verify the response record.
|
||||
respN, err := enode.New(enode.ValidSchemes, &rm.reply.(*v4wire.ENRResponse).Record)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if respN.ID() != n.ID() {
|
||||
return nil, fmt.Errorf("invalid ID in response record")
|
||||
}
|
||||
if respN.Seq() < n.Seq() {
|
||||
return n, nil // response record is older
|
||||
}
|
||||
if err := netutil.CheckRelayIP(addr.IP, respN.IP()); err != nil {
|
||||
return nil, fmt.Errorf("invalid IP in response record: %v", err)
|
||||
}
|
||||
return respN, nil
|
||||
}
|
||||
|
||||
// pending adds a reply matcher to the pending reply queue.
|
||||
// see the documentation of type replyMatcher for a detailed explanation.
|
||||
func (t *UDPv4) pending(id enode.ID, ip net.IP, ptype byte, callback replyMatchFunc) *replyMatcher {
|
||||
ch := make(chan error, 1)
|
||||
p := &replyMatcher{from: id, ip: ip, ptype: ptype, callback: callback, errc: ch}
|
||||
select {
|
||||
case t.addReplyMatcher <- p:
|
||||
// loop will handle it
|
||||
case <-t.closeCtx.Done():
|
||||
ch <- errClosed
|
||||
}
|
||||
return p
|
||||
}
|
||||
|
||||
// handleReply dispatches a reply packet, invoking reply matchers. It returns
|
||||
// whether any matcher considered the packet acceptable.
|
||||
func (t *UDPv4) handleReply(from enode.ID, fromIP net.IP, req v4wire.Packet) bool {
|
||||
matched := make(chan bool, 1)
|
||||
select {
|
||||
case t.gotreply <- reply{from, fromIP, req, matched}:
|
||||
// loop will handle it
|
||||
return <-matched
|
||||
case <-t.closeCtx.Done():
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// loop runs in its own goroutine. it keeps track of
|
||||
// the refresh timer and the pending reply queue.
|
||||
func (t *UDPv4) loop() {
|
||||
defer t.wg.Done()
|
||||
|
||||
var (
|
||||
plist = list.New()
|
||||
timeout = time.NewTimer(0)
|
||||
nextTimeout *replyMatcher // head of plist when timeout was last reset
|
||||
contTimeouts = 0 // number of continuous timeouts to do NTP checks
|
||||
ntpWarnTime = time.Unix(0, 0)
|
||||
)
|
||||
<-timeout.C // ignore first timeout
|
||||
defer timeout.Stop()
|
||||
|
||||
resetTimeout := func() {
|
||||
if plist.Front() == nil || nextTimeout == plist.Front().Value {
|
||||
return
|
||||
}
|
||||
// Start the timer so it fires when the next pending reply has expired.
|
||||
now := time.Now()
|
||||
for el := plist.Front(); el != nil; el = el.Next() {
|
||||
nextTimeout = el.Value.(*replyMatcher)
|
||||
if dist := nextTimeout.deadline.Sub(now); dist < 2*respTimeout {
|
||||
timeout.Reset(dist)
|
||||
return
|
||||
}
|
||||
// Remove pending replies whose deadline is too far in the
|
||||
// future. These can occur if the system clock jumped
|
||||
// backwards after the deadline was assigned.
|
||||
nextTimeout.errc <- errClockWarp
|
||||
plist.Remove(el)
|
||||
}
|
||||
nextTimeout = nil
|
||||
timeout.Stop()
|
||||
}
|
||||
|
||||
for {
|
||||
resetTimeout()
|
||||
|
||||
select {
|
||||
case <-t.closeCtx.Done():
|
||||
for el := plist.Front(); el != nil; el = el.Next() {
|
||||
el.Value.(*replyMatcher).errc <- errClosed
|
||||
}
|
||||
return
|
||||
|
||||
case p := <-t.addReplyMatcher:
|
||||
p.deadline = time.Now().Add(respTimeout)
|
||||
plist.PushBack(p)
|
||||
|
||||
case r := <-t.gotreply:
|
||||
var matched bool // whether any replyMatcher considered the reply acceptable.
|
||||
for el := plist.Front(); el != nil; el = el.Next() {
|
||||
p := el.Value.(*replyMatcher)
|
||||
if p.from == r.from && p.ptype == r.data.Kind() && p.ip.Equal(r.ip) {
|
||||
ok, requestDone := p.callback(r.data)
|
||||
matched = matched || ok
|
||||
p.reply = r.data
|
||||
// Remove the matcher if callback indicates that all replies have been received.
|
||||
if requestDone {
|
||||
p.errc <- nil
|
||||
plist.Remove(el)
|
||||
}
|
||||
// Reset the continuous timeout counter (time drift detection)
|
||||
contTimeouts = 0
|
||||
}
|
||||
}
|
||||
r.matched <- matched
|
||||
|
||||
case now := <-timeout.C:
|
||||
nextTimeout = nil
|
||||
|
||||
// Notify and remove callbacks whose deadline is in the past.
|
||||
for el := plist.Front(); el != nil; el = el.Next() {
|
||||
p := el.Value.(*replyMatcher)
|
||||
if now.After(p.deadline) || now.Equal(p.deadline) {
|
||||
p.errc <- errTimeout
|
||||
plist.Remove(el)
|
||||
contTimeouts++
|
||||
}
|
||||
}
|
||||
// If we've accumulated too many timeouts, do an NTP time sync check
|
||||
if contTimeouts > ntpFailureThreshold {
|
||||
if time.Since(ntpWarnTime) >= ntpWarningCooldown {
|
||||
ntpWarnTime = time.Now()
|
||||
go checkClockDrift()
|
||||
}
|
||||
contTimeouts = 0
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (t *UDPv4) send(toaddr *net.UDPAddr, toid enode.ID, req v4wire.Packet) ([]byte, error) {
|
||||
packet, hash, err := v4wire.Encode(t.priv, req)
|
||||
if err != nil {
|
||||
return hash, err
|
||||
}
|
||||
return hash, t.write(toaddr, toid, req.Name(), packet)
|
||||
}
|
||||
|
||||
func (t *UDPv4) write(toaddr *net.UDPAddr, toid enode.ID, what string, packet []byte) error {
|
||||
_, err := t.conn.WriteToUDP(packet, toaddr)
|
||||
t.log.Trace(">> "+what, "id", toid, "addr", toaddr, "err", err)
|
||||
return err
|
||||
}
|
||||
|
||||
// readLoop runs in its own goroutine. it handles incoming UDP packets.
|
||||
func (t *UDPv4) readLoop(unhandled chan<- ReadPacket) {
|
||||
defer t.wg.Done()
|
||||
if unhandled != nil {
|
||||
defer close(unhandled)
|
||||
}
|
||||
|
||||
buf := make([]byte, maxPacketSize)
|
||||
for {
|
||||
nbytes, from, err := t.conn.ReadFromUDP(buf)
|
||||
if netutil.IsTemporaryError(err) {
|
||||
// Ignore temporary read errors.
|
||||
t.log.Debug("Temporary UDP read error", "err", err)
|
||||
continue
|
||||
} else if err != nil {
|
||||
// Shut down the loop for permanent errors.
|
||||
if !errors.Is(err, io.EOF) {
|
||||
t.log.Debug("UDP read error", "err", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
if t.handlePacket(from, buf[:nbytes]) != nil && unhandled != nil {
|
||||
select {
|
||||
case unhandled <- ReadPacket{buf[:nbytes], from}:
|
||||
default:
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (t *UDPv4) handlePacket(from *net.UDPAddr, buf []byte) error {
|
||||
rawpacket, fromKey, hash, err := v4wire.Decode(buf)
|
||||
if err != nil {
|
||||
t.log.Debug("Bad discv4 packet", "addr", from, "err", err)
|
||||
return err
|
||||
}
|
||||
packet := t.wrapPacket(rawpacket)
|
||||
fromID := fromKey.ID()
|
||||
if err == nil && packet.preverify != nil {
|
||||
err = packet.preverify(packet, from, fromID, fromKey)
|
||||
}
|
||||
t.log.Trace("<< "+packet.Name(), "id", fromID, "addr", from, "err", err)
|
||||
if err == nil && packet.handle != nil {
|
||||
packet.handle(packet, from, fromID, hash)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// checkBond checks if the given node has a recent enough endpoint proof.
|
||||
func (t *UDPv4) checkBond(id enode.ID, ip net.IP) bool {
|
||||
return time.Since(t.db.LastPongReceived(id, ip)) < bondExpiration
|
||||
}
|
||||
|
||||
// ensureBond solicits a ping from a node if we haven't seen a ping from it for a while.
|
||||
// This ensures there is a valid endpoint proof on the remote end.
|
||||
func (t *UDPv4) ensureBond(toid enode.ID, toaddr *net.UDPAddr) {
|
||||
tooOld := time.Since(t.db.LastPingReceived(toid, toaddr.IP)) > bondExpiration
|
||||
if tooOld || t.db.FindFails(toid, toaddr.IP) > maxFindnodeFailures {
|
||||
rm := t.sendPing(toid, toaddr, nil)
|
||||
<-rm.errc
|
||||
// Wait for them to ping back and process our pong.
|
||||
time.Sleep(respTimeout)
|
||||
}
|
||||
}
|
||||
|
||||
func (t *UDPv4) nodeFromRPC(sender *net.UDPAddr, rn v4wire.Node) (*node, error) {
|
||||
if rn.UDP <= 1024 {
|
||||
return nil, errLowPort
|
||||
}
|
||||
if err := netutil.CheckRelayIP(sender.IP, rn.IP); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if t.netrestrict != nil && !t.netrestrict.Contains(rn.IP) {
|
||||
return nil, errors.New("not contained in netrestrict list")
|
||||
}
|
||||
key, err := v4wire.DecodePubkey(crypto.S256(), rn.ID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n := wrapNode(enode.NewV4(key, rn.IP, int(rn.TCP), int(rn.UDP)))
|
||||
err = n.ValidateComplete()
|
||||
return n, err
|
||||
}
|
||||
|
||||
func nodeToRPC(n *node) v4wire.Node {
|
||||
var key ecdsa.PublicKey
|
||||
var ekey v4wire.Pubkey
|
||||
if err := n.Load((*enode.Secp256k1)(&key)); err == nil {
|
||||
ekey = v4wire.EncodePubkey(&key)
|
||||
}
|
||||
return v4wire.Node{ID: ekey, IP: n.IP(), UDP: uint16(n.UDP()), TCP: uint16(n.TCP())}
|
||||
}
|
||||
|
||||
// wrapPacket returns the handler functions applicable to a packet.
|
||||
func (t *UDPv4) wrapPacket(p v4wire.Packet) *packetHandlerV4 {
|
||||
var h packetHandlerV4
|
||||
h.Packet = p
|
||||
switch p.(type) {
|
||||
case *v4wire.Ping:
|
||||
h.preverify = t.verifyPing
|
||||
h.handle = t.handlePing
|
||||
case *v4wire.Pong:
|
||||
h.preverify = t.verifyPong
|
||||
case *v4wire.Findnode:
|
||||
h.preverify = t.verifyFindnode
|
||||
h.handle = t.handleFindnode
|
||||
case *v4wire.Neighbors:
|
||||
h.preverify = t.verifyNeighbors
|
||||
case *v4wire.ENRRequest:
|
||||
h.preverify = t.verifyENRRequest
|
||||
h.handle = t.handleENRRequest
|
||||
case *v4wire.ENRResponse:
|
||||
h.preverify = t.verifyENRResponse
|
||||
}
|
||||
return &h
|
||||
}
|
||||
|
||||
// packetHandlerV4 wraps a packet with handler functions.
|
||||
type packetHandlerV4 struct {
|
||||
v4wire.Packet
|
||||
senderKey *ecdsa.PublicKey // used for ping
|
||||
|
||||
// preverify checks whether the packet is valid and should be handled at all.
|
||||
preverify func(p *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, fromKey v4wire.Pubkey) error
|
||||
// handle handles the packet.
|
||||
handle func(req *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, mac []byte)
|
||||
}
|
||||
|
||||
// PING/v4
|
||||
|
||||
func (t *UDPv4) verifyPing(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, fromKey v4wire.Pubkey) error {
|
||||
req := h.Packet.(*v4wire.Ping)
|
||||
|
||||
senderKey, err := v4wire.DecodePubkey(crypto.S256(), fromKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if v4wire.Expired(req.Expiration) {
|
||||
return errExpired
|
||||
}
|
||||
h.senderKey = senderKey
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *UDPv4) handlePing(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, mac []byte) {
|
||||
req := h.Packet.(*v4wire.Ping)
|
||||
|
||||
// Reply.
|
||||
t.send(from, fromID, &v4wire.Pong{
|
||||
To: v4wire.NewEndpoint(from, req.From.TCP),
|
||||
ReplyTok: mac,
|
||||
Expiration: uint64(time.Now().Add(expiration).Unix()),
|
||||
ENRSeq: t.localNode.Node().Seq(),
|
||||
})
|
||||
|
||||
// Ping back if our last pong on file is too far in the past.
|
||||
n := wrapNode(enode.NewV4(h.senderKey, from.IP, int(req.From.TCP), from.Port))
|
||||
if time.Since(t.db.LastPongReceived(n.ID(), from.IP)) > bondExpiration {
|
||||
t.sendPing(fromID, from, func() {
|
||||
t.tab.addVerifiedNode(n)
|
||||
})
|
||||
} else {
|
||||
t.tab.addVerifiedNode(n)
|
||||
}
|
||||
|
||||
// Update node database and endpoint predictor.
|
||||
t.db.UpdateLastPingReceived(n.ID(), from.IP, time.Now())
|
||||
t.localNode.UDPEndpointStatement(from, &net.UDPAddr{IP: req.To.IP, Port: int(req.To.UDP)})
|
||||
}
|
||||
|
||||
// PONG/v4
|
||||
|
||||
func (t *UDPv4) verifyPong(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, fromKey v4wire.Pubkey) error {
|
||||
req := h.Packet.(*v4wire.Pong)
|
||||
|
||||
if v4wire.Expired(req.Expiration) {
|
||||
return errExpired
|
||||
}
|
||||
if !t.handleReply(fromID, from.IP, req) {
|
||||
return errUnsolicitedReply
|
||||
}
|
||||
t.localNode.UDPEndpointStatement(from, &net.UDPAddr{IP: req.To.IP, Port: int(req.To.UDP)})
|
||||
t.db.UpdateLastPongReceived(fromID, from.IP, time.Now())
|
||||
return nil
|
||||
}
|
||||
|
||||
// FINDNODE/v4
|
||||
|
||||
func (t *UDPv4) verifyFindnode(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, fromKey v4wire.Pubkey) error {
|
||||
req := h.Packet.(*v4wire.Findnode)
|
||||
|
||||
if v4wire.Expired(req.Expiration) {
|
||||
return errExpired
|
||||
}
|
||||
if !t.checkBond(fromID, from.IP) {
|
||||
// No endpoint proof pong exists, we don't process the packet. This prevents an
|
||||
// attack vector where the discovery protocol could be used to amplify traffic in a
|
||||
// DDOS attack. A malicious actor would send a findnode request with the IP address
|
||||
// and UDP port of the target as the source address. The recipient of the findnode
|
||||
// packet would then send a neighbors packet (which is a much bigger packet than
|
||||
// findnode) to the victim.
|
||||
return errUnknownNode
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *UDPv4) handleFindnode(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, mac []byte) {
|
||||
req := h.Packet.(*v4wire.Findnode)
|
||||
|
||||
// Determine closest nodes.
|
||||
target := enode.ID(crypto.Keccak256Hash(req.Target[:]))
|
||||
closest := t.tab.findnodeByID(target, bucketSize, true).entries
|
||||
|
||||
// Send neighbors in chunks with at most maxNeighbors per packet
|
||||
// to stay below the packet size limit.
|
||||
p := v4wire.Neighbors{Expiration: uint64(time.Now().Add(expiration).Unix())}
|
||||
var sent bool
|
||||
for _, n := range closest {
|
||||
if netutil.CheckRelayIP(from.IP, n.IP()) == nil {
|
||||
p.Nodes = append(p.Nodes, nodeToRPC(n))
|
||||
}
|
||||
if len(p.Nodes) == v4wire.MaxNeighbors {
|
||||
t.send(from, fromID, &p)
|
||||
p.Nodes = p.Nodes[:0]
|
||||
sent = true
|
||||
}
|
||||
}
|
||||
if len(p.Nodes) > 0 || !sent {
|
||||
t.send(from, fromID, &p)
|
||||
}
|
||||
}
|
||||
|
||||
// NEIGHBORS/v4
|
||||
|
||||
func (t *UDPv4) verifyNeighbors(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, fromKey v4wire.Pubkey) error {
|
||||
req := h.Packet.(*v4wire.Neighbors)
|
||||
|
||||
if v4wire.Expired(req.Expiration) {
|
||||
return errExpired
|
||||
}
|
||||
if !t.handleReply(fromID, from.IP, h.Packet) {
|
||||
return errUnsolicitedReply
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ENRREQUEST/v4
|
||||
|
||||
func (t *UDPv4) verifyENRRequest(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, fromKey v4wire.Pubkey) error {
|
||||
req := h.Packet.(*v4wire.ENRRequest)
|
||||
|
||||
if v4wire.Expired(req.Expiration) {
|
||||
return errExpired
|
||||
}
|
||||
if !t.checkBond(fromID, from.IP) {
|
||||
return errUnknownNode
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *UDPv4) handleENRRequest(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, mac []byte) {
|
||||
t.send(from, fromID, &v4wire.ENRResponse{
|
||||
ReplyTok: mac,
|
||||
Record: *t.localNode.Node().Record(),
|
||||
})
|
||||
}
|
||||
|
||||
// ENRRESPONSE/v4
|
||||
|
||||
func (t *UDPv4) verifyENRResponse(h *packetHandlerV4, from *net.UDPAddr, fromID enode.ID, fromKey v4wire.Pubkey) error {
|
||||
if !t.handleReply(fromID, from.IP, h.Packet) {
|
||||
return errUnsolicitedReply
|
||||
}
|
||||
return nil
|
||||
}
|
||||
294
vendor/github.com/waku-org/go-discover/discover/v4wire/v4wire.go
generated
vendored
Normal file
@@ -0,0 +1,294 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

// Package v4wire implements the Discovery v4 Wire Protocol.
package v4wire

import (
	"bytes"
	"crypto/ecdsa"
	"crypto/elliptic"
	"errors"
	"fmt"
	"math/big"
	"net"
	"time"

	"github.com/ethereum/go-ethereum/common/math"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/ethereum/go-ethereum/rlp"
)

// RPC packet types
const (
	PingPacket = iota + 1 // zero is 'reserved'
	PongPacket
	FindnodePacket
	NeighborsPacket
	ENRRequestPacket
	ENRResponsePacket
)

// RPC request structures
type (
	Ping struct {
		Version    uint
		From, To   Endpoint
		Expiration uint64
		ENRSeq     uint64 `rlp:"optional"` // Sequence number of local record, added by EIP-868.

		// Ignore additional fields (for forward compatibility).
		Rest []rlp.RawValue `rlp:"tail"`
	}

	// Pong is the reply to ping.
	Pong struct {
		// This field should mirror the UDP envelope address
		// of the ping packet, which provides a way to discover the
		// external address (after NAT).
		To         Endpoint
		ReplyTok   []byte // This contains the hash of the ping packet.
		Expiration uint64 // Absolute timestamp at which the packet becomes invalid.
		ENRSeq     uint64 `rlp:"optional"` // Sequence number of local record, added by EIP-868.

		// Ignore additional fields (for forward compatibility).
		Rest []rlp.RawValue `rlp:"tail"`
	}

	// Findnode is a query for nodes close to the given target.
	Findnode struct {
		Target     Pubkey
		Expiration uint64
		// Ignore additional fields (for forward compatibility).
		Rest []rlp.RawValue `rlp:"tail"`
	}

	// Neighbors is the reply to findnode.
	Neighbors struct {
		Nodes      []Node
		Expiration uint64
		// Ignore additional fields (for forward compatibility).
		Rest []rlp.RawValue `rlp:"tail"`
	}

	// ENRRequest queries for the remote node's record.
	ENRRequest struct {
		Expiration uint64
		// Ignore additional fields (for forward compatibility).
		Rest []rlp.RawValue `rlp:"tail"`
	}

	// ENRResponse is the reply to ENRRequest.
	ENRResponse struct {
		ReplyTok []byte // Hash of the ENRRequest packet.
		Record   enr.Record
		// Ignore additional fields (for forward compatibility).
		Rest []rlp.RawValue `rlp:"tail"`
	}
)

// MaxNeighbors is the maximum number of neighbor nodes in a Neighbors packet.
const MaxNeighbors = 12

// This code computes the MaxNeighbors constant value.

// func init() {
// 	var maxNeighbors int
// 	p := Neighbors{Expiration: ^uint64(0)}
// 	maxSizeNode := Node{IP: make(net.IP, 16), UDP: ^uint16(0), TCP: ^uint16(0)}
// 	for n := 0; ; n++ {
// 		p.Nodes = append(p.Nodes, maxSizeNode)
// 		size, _, err := rlp.EncodeToReader(p)
// 		if err != nil {
// 			// If this ever happens, it will be caught by the unit tests.
// 			panic("cannot encode: " + err.Error())
// 		}
// 		if headSize+size+1 >= 1280 {
// 			maxNeighbors = n
// 			break
// 		}
// 	}
// 	fmt.Println("maxNeighbors", maxNeighbors)
// }

// Pubkey represents an encoded 64-byte secp256k1 public key.
type Pubkey [64]byte

// ID returns the node ID corresponding to the public key.
func (e Pubkey) ID() enode.ID {
	return enode.ID(crypto.Keccak256Hash(e[:]))
}

// Node represents information about a node.
type Node struct {
	IP  net.IP // len 4 for IPv4 or 16 for IPv6
	UDP uint16 // for discovery protocol
	TCP uint16 // for RLPx protocol
	ID  Pubkey
}

// Endpoint represents a network endpoint.
type Endpoint struct {
	IP  net.IP // len 4 for IPv4 or 16 for IPv6
	UDP uint16 // for discovery protocol
	TCP uint16 // for RLPx protocol
}

// NewEndpoint creates an endpoint.
func NewEndpoint(addr *net.UDPAddr, tcpPort uint16) Endpoint {
	ip := net.IP{}
	if ip4 := addr.IP.To4(); ip4 != nil {
		ip = ip4
	} else if ip6 := addr.IP.To16(); ip6 != nil {
		ip = ip6
	}
	return Endpoint{IP: ip, UDP: uint16(addr.Port), TCP: tcpPort}
}

type Packet interface {
	// Name is the name of the packet, for logging purposes.
	Name() string
	// Kind is the packet type, for logging purposes.
	Kind() byte
}

func (req *Ping) Name() string { return "PING/v4" }
func (req *Ping) Kind() byte   { return PingPacket }

func (req *Pong) Name() string { return "PONG/v4" }
func (req *Pong) Kind() byte   { return PongPacket }

func (req *Findnode) Name() string { return "FINDNODE/v4" }
func (req *Findnode) Kind() byte   { return FindnodePacket }

func (req *Neighbors) Name() string { return "NEIGHBORS/v4" }
func (req *Neighbors) Kind() byte   { return NeighborsPacket }

func (req *ENRRequest) Name() string { return "ENRREQUEST/v4" }
func (req *ENRRequest) Kind() byte   { return ENRRequestPacket }

func (req *ENRResponse) Name() string { return "ENRRESPONSE/v4" }
func (req *ENRResponse) Kind() byte   { return ENRResponsePacket }

// Expired checks whether the given UNIX time stamp is in the past.
func Expired(ts uint64) bool {
	return time.Unix(int64(ts), 0).Before(time.Now())
}

// Encoder/decoder.

const (
	macSize  = 32
	sigSize  = crypto.SignatureLength
	headSize = macSize + sigSize // space of packet frame data
)

var (
	ErrPacketTooSmall = errors.New("too small")
	ErrBadHash        = errors.New("bad hash")
	ErrBadPoint       = errors.New("invalid curve point")
)

var headSpace = make([]byte, headSize)

// Decode reads a discovery v4 packet.
func Decode(input []byte) (Packet, Pubkey, []byte, error) {
	if len(input) < headSize+1 {
		return nil, Pubkey{}, nil, ErrPacketTooSmall
	}
	hash, sig, sigdata := input[:macSize], input[macSize:headSize], input[headSize:]
	shouldhash := crypto.Keccak256(input[macSize:])
	if !bytes.Equal(hash, shouldhash) {
		return nil, Pubkey{}, nil, ErrBadHash
	}
	fromKey, err := recoverNodeKey(crypto.Keccak256(input[headSize:]), sig)
	if err != nil {
		return nil, fromKey, hash, err
	}

	var req Packet
	switch ptype := sigdata[0]; ptype {
	case PingPacket:
		req = new(Ping)
	case PongPacket:
		req = new(Pong)
	case FindnodePacket:
		req = new(Findnode)
	case NeighborsPacket:
		req = new(Neighbors)
	case ENRRequestPacket:
		req = new(ENRRequest)
	case ENRResponsePacket:
		req = new(ENRResponse)
	default:
		return nil, fromKey, hash, fmt.Errorf("unknown type: %d", ptype)
	}
	s := rlp.NewStream(bytes.NewReader(sigdata[1:]), 0)
	err = s.Decode(req)
	return req, fromKey, hash, err
}

// Encode encodes a discovery packet.
func Encode(priv *ecdsa.PrivateKey, req Packet) (packet, hash []byte, err error) {
	b := new(bytes.Buffer)
	b.Write(headSpace)
	b.WriteByte(req.Kind())
	if err := rlp.Encode(b, req); err != nil {
		return nil, nil, err
	}
	packet = b.Bytes()
	sig, err := crypto.Sign(crypto.Keccak256(packet[headSize:]), priv)
	if err != nil {
		return nil, nil, err
	}
	copy(packet[macSize:], sig)
	// Add the hash to the front. Note: this doesn't protect the packet in any way.
	hash = crypto.Keccak256(packet[macSize:])
	copy(packet, hash)
	return packet, hash, nil
}

// recoverNodeKey computes the public key used to sign the given hash from the signature.
func recoverNodeKey(hash, sig []byte) (key Pubkey, err error) {
	pubkey, err := crypto.Ecrecover(hash, sig)
	if err != nil {
		return key, err
	}
	copy(key[:], pubkey[1:])
	return key, nil
}

// EncodePubkey encodes a secp256k1 public key.
func EncodePubkey(key *ecdsa.PublicKey) Pubkey {
	var e Pubkey
	math.ReadBits(key.X, e[:len(e)/2])
	math.ReadBits(key.Y, e[len(e)/2:])
	return e
}

// DecodePubkey reads an encoded secp256k1 public key.
func DecodePubkey(curve elliptic.Curve, e Pubkey) (*ecdsa.PublicKey, error) {
	p := &ecdsa.PublicKey{Curve: curve, X: new(big.Int), Y: new(big.Int)}
	half := len(e) / 2
	p.X.SetBytes(e[:half])
	p.Y.SetBytes(e[half:])
	if !p.Curve.IsOnCurve(p.X, p.Y) {
		return nil, ErrBadPoint
	}
	return p, nil
}
873
vendor/github.com/waku-org/go-discover/discover/v5_udp.go
generated
vendored
Normal file
@@ -0,0 +1,873 @@
|
||||
// Copyright 2020 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package discover
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/ecdsa"
|
||||
crand "crypto/rand"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
"net"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/waku-org/go-discover/discover/v5wire"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/mclock"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/p2p/enode"
|
||||
"github.com/ethereum/go-ethereum/p2p/enr"
|
||||
"github.com/ethereum/go-ethereum/p2p/netutil"
|
||||
)
|
||||
|
||||
const (
|
||||
lookupRequestLimit = 3 // max requests against a single node during lookup
|
||||
findnodeResultLimit = 16 // applies in FINDNODE handler
|
||||
totalNodesResponseLimit = 5 // applies in waitForNodes
|
||||
nodesResponseItemLimit = 3 // applies in sendNodes
|
||||
|
||||
respTimeoutV5 = 700 * time.Millisecond
|
||||
)
|
||||
|
||||
// codecV5 is implemented by v5wire.Codec (and testCodec).
|
||||
//
|
||||
// The UDPv5 transport is split into two objects: the codec object deals with
|
||||
// encoding/decoding and with the handshake; the UDPv5 object handles higher-level concerns.
|
||||
type codecV5 interface {
|
||||
// Encode encodes a packet.
|
||||
Encode(enode.ID, string, v5wire.Packet, *v5wire.Whoareyou) ([]byte, v5wire.Nonce, error)
|
||||
|
||||
// Decode decodes a packet. It returns a *v5wire.Unknown packet if decryption fails.
|
||||
// The *enode.Node return value is non-nil when the input contains a handshake response.
|
||||
Decode([]byte, string) (enode.ID, *enode.Node, v5wire.Packet, error)
|
||||
}
|
||||
|
||||
// UDPv5 is the implementation of protocol version 5.
|
||||
type UDPv5 struct {
|
||||
// static fields
|
||||
conn UDPConn
|
||||
tab *Table
|
||||
netrestrict *netutil.Netlist
|
||||
priv *ecdsa.PrivateKey
|
||||
localNode *enode.LocalNode
|
||||
db *enode.DB
|
||||
log log.Logger
|
||||
clock mclock.Clock
|
||||
validSchemes enr.IdentityScheme
|
||||
|
||||
// talkreq handler registry
|
||||
trlock sync.Mutex
|
||||
trhandlers map[string]TalkRequestHandler
|
||||
|
||||
// channels into dispatch
|
||||
packetInCh chan ReadPacket
|
||||
readNextCh chan struct{}
|
||||
callCh chan *callV5
|
||||
callDoneCh chan *callV5
|
||||
respTimeoutCh chan *callTimeout
|
||||
|
||||
// state of dispatch
|
||||
codec codecV5
|
||||
activeCallByNode map[enode.ID]*callV5
|
||||
activeCallByAuth map[v5wire.Nonce]*callV5
|
||||
callQueue map[enode.ID][]*callV5
|
||||
|
||||
// shutdown stuff
|
||||
closeOnce sync.Once
|
||||
closeCtx context.Context
|
||||
cancelCloseCtx context.CancelFunc
|
||||
wg sync.WaitGroup
|
||||
}
|
||||
|
||||
// TalkRequestHandler callback processes a talk request and optionally returns a reply
|
||||
type TalkRequestHandler func(enode.ID, *net.UDPAddr, []byte) []byte
|
||||
|
||||
// callV5 represents a remote procedure call against another node.
|
||||
type callV5 struct {
|
||||
node *enode.Node
|
||||
packet v5wire.Packet
|
||||
responseType byte // expected packet type of response
|
||||
reqid []byte
|
||||
ch chan v5wire.Packet // responses sent here
|
||||
err chan error // errors sent here
|
||||
|
||||
// Valid for active calls only:
|
||||
nonce v5wire.Nonce // nonce of request packet
|
||||
handshakeCount int // # times we attempted handshake for this call
|
||||
challenge *v5wire.Whoareyou // last sent handshake challenge
|
||||
timeout mclock.Timer
|
||||
}
|
||||
|
||||
// callTimeout is the response timeout event of a call.
|
||||
type callTimeout struct {
|
||||
c *callV5
|
||||
timer mclock.Timer
|
||||
}
|
||||
|
||||
// ListenV5 listens on the given connection.
|
||||
func ListenV5(ctx context.Context, conn UDPConn, ln *enode.LocalNode, cfg Config) (*UDPv5, error) {
|
||||
t, err := newUDPv5(ctx, conn, ln, cfg)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
go t.tab.loop()
|
||||
t.wg.Add(2)
|
||||
go t.readLoop()
|
||||
go t.dispatch()
|
||||
return t, nil
|
||||
}
|
||||
|
||||

// newUDPv5 creates a UDPv5 transport, but doesn't start any goroutines.
func newUDPv5(ctx context.Context, conn UDPConn, ln *enode.LocalNode, cfg Config) (*UDPv5, error) {
	closeCtx, cancelCloseCtx := context.WithCancel(ctx)
	cfg = cfg.withDefaults()
	t := &UDPv5{
		// static fields
		conn:         conn,
		localNode:    ln,
		db:           ln.Database(),
		netrestrict:  cfg.NetRestrict,
		priv:         cfg.PrivateKey,
		log:          cfg.Log,
		validSchemes: cfg.ValidSchemes,
		clock:        cfg.Clock,
		trhandlers:   make(map[string]TalkRequestHandler),
		// channels into dispatch
		packetInCh:    make(chan ReadPacket, 1),
		readNextCh:    make(chan struct{}, 1),
		callCh:        make(chan *callV5),
		callDoneCh:    make(chan *callV5),
		respTimeoutCh: make(chan *callTimeout),
		// state of dispatch
		codec:            v5wire.NewCodec(ln, cfg.PrivateKey, cfg.Clock, *cfg.V5Config.ProtocolID),
		activeCallByNode: make(map[enode.ID]*callV5),
		activeCallByAuth: make(map[v5wire.Nonce]*callV5),
		callQueue:        make(map[enode.ID][]*callV5),
		// shutdown
		closeCtx:       closeCtx,
		cancelCloseCtx: cancelCloseCtx,
	}
	tab, err := newTable(t, t.db, cfg.Bootnodes, cfg.ValidNodeFn, cfg.Log)
	if err != nil {
		return nil, err
	}
	t.tab = tab
	return t, nil
}

// Self returns the local node record.
func (t *UDPv5) Self() *enode.Node {
	return t.localNode.Node()
}

// Close shuts down packet processing.
func (t *UDPv5) Close() {
	t.closeOnce.Do(func() {
		t.cancelCloseCtx()
		t.conn.Close()
		t.wg.Wait()
		t.tab.close()
	})
}

// Ping sends a ping message to the given node.
func (t *UDPv5) Ping(n *enode.Node) error {
	_, err := t.ping(n)
	return err
}

// Resolve searches for a specific node with the given ID and tries to get the most recent
// version of the node record for it. It returns n if the node could not be resolved.
func (t *UDPv5) Resolve(n *enode.Node) *enode.Node {
	if intable := t.tab.getNode(n.ID()); intable != nil && intable.Seq() > n.Seq() {
		n = intable
	}
	// Try asking directly. This works if the node is still responding on the endpoint we have.
	if resp, err := t.RequestENR(n); err == nil {
		return resp
	}
	// Otherwise do a network lookup.
	result := t.Lookup(n.ID())
	for _, rn := range result {
		if rn.ID() == n.ID() && rn.Seq() > n.Seq() {
			return rn
		}
	}
	return n
}

// AllNodes returns all the nodes stored in the local table.
func (t *UDPv5) AllNodes() []*enode.Node {
	t.tab.mutex.Lock()
	defer t.tab.mutex.Unlock()
	nodes := make([]*enode.Node, 0)

	for _, b := range &t.tab.buckets {
		for _, n := range b.entries {
			nodes = append(nodes, unwrapNode(n))
		}
	}
	return nodes
}

// LocalNode returns the current local node running the
// protocol.
func (t *UDPv5) LocalNode() *enode.LocalNode {
	return t.localNode
}

// RegisterTalkHandler adds a handler for 'talk requests'. The handler function is called
// whenever a request for the given protocol is received and should return the response
// data or nil.
func (t *UDPv5) RegisterTalkHandler(protocol string, handler TalkRequestHandler) {
	t.trlock.Lock()
	defer t.trlock.Unlock()
	t.trhandlers[protocol] = handler
}

// TalkRequest sends a talk request to n and waits for a response.
func (t *UDPv5) TalkRequest(n *enode.Node, protocol string, request []byte) ([]byte, error) {
	req := &v5wire.TalkRequest{Protocol: protocol, Message: request}
	resp := t.call(n, v5wire.TalkResponseMsg, req)
	defer t.callDone(resp)
	select {
	case respMsg := <-resp.ch:
		return respMsg.(*v5wire.TalkResponse).Message, nil
	case err := <-resp.err:
		return nil, err
	}
}

// RandomNodes returns an iterator that finds random nodes in the DHT.
func (t *UDPv5) RandomNodes() enode.Iterator {
	if t.tab.len() == 0 {
		// All nodes were dropped, refresh. The very first query will hit this
		// case and run the bootstrapping logic.
		<-t.tab.refresh()
	}

	return newLookupIterator(t.closeCtx, t.newRandomLookup)
}

// Lookup performs a recursive lookup for the given target.
// It returns the closest nodes to target.
func (t *UDPv5) Lookup(target enode.ID) []*enode.Node {
	return t.newLookup(t.closeCtx, target).run()
}

// lookupRandom looks up a random target.
// This is needed to satisfy the transport interface.
func (t *UDPv5) lookupRandom() []*enode.Node {
	return t.newRandomLookup(t.closeCtx).run()
}

// lookupSelf looks up our own node ID.
// This is needed to satisfy the transport interface.
func (t *UDPv5) lookupSelf() []*enode.Node {
	return t.newLookup(t.closeCtx, t.Self().ID()).run()
}

func (t *UDPv5) newRandomLookup(ctx context.Context) *lookup {
	var target enode.ID
	crand.Read(target[:])
	return t.newLookup(ctx, target)
}

func (t *UDPv5) newLookup(ctx context.Context, target enode.ID) *lookup {
	return newLookup(ctx, t.tab, target, func(n *node) ([]*node, error) {
		return t.lookupWorker(n, target)
	})
}

// lookupWorker performs FINDNODE calls against a single node during lookup.
func (t *UDPv5) lookupWorker(destNode *node, target enode.ID) ([]*node, error) {
	var (
		dists = lookupDistances(target, destNode.ID())
		nodes = nodesByDistance{target: target}
		err   error
	)
	var r []*enode.Node
	r, err = t.findnode(unwrapNode(destNode), dists)
	if errors.Is(err, errClosed) {
		return nil, err
	}
	for _, n := range r {
		if n.ID() != t.Self().ID() {
			nodes.push(wrapNode(n), findnodeResultLimit)
		}
	}
	return nodes.entries, err
}

// lookupDistances computes the distance parameter for FINDNODE calls to dest.
// It chooses distances adjacent to logdist(target, dest), e.g. for a target
// with logdist(target, dest) = 255 the result is [255, 256, 254].
func lookupDistances(target, dest enode.ID) (dists []uint) {
	td := enode.LogDist(target, dest)
	dists = append(dists, uint(td))
	for i := 1; len(dists) < lookupRequestLimit; i++ {
		if td+i <= 256 {
			dists = append(dists, uint(td+i))
		}
		if td-i > 0 {
			dists = append(dists, uint(td-i))
		}
	}
	return dists
}

// ping calls PING on a node and waits for a PONG response.
func (t *UDPv5) ping(n *enode.Node) (uint64, error) {
	req := &v5wire.Ping{ENRSeq: t.localNode.Node().Seq()}
	resp := t.call(n, v5wire.PongMsg, req)
	defer t.callDone(resp)

	select {
	case pong := <-resp.ch:
		return pong.(*v5wire.Pong).ENRSeq, nil
	case err := <-resp.err:
		return 0, err
	}
}

// RequestENR requests n's record.
func (t *UDPv5) RequestENR(n *enode.Node) (*enode.Node, error) {
	nodes, err := t.findnode(n, []uint{0})
	if err != nil {
		return nil, err
	}
	if len(nodes) != 1 {
		return nil, fmt.Errorf("%d nodes in response for distance zero", len(nodes))
	}
	return nodes[0], nil
}

// findnode calls FINDNODE on a node and waits for responses.
func (t *UDPv5) findnode(n *enode.Node, distances []uint) ([]*enode.Node, error) {
	resp := t.call(n, v5wire.NodesMsg, &v5wire.Findnode{Distances: distances})
	return t.waitForNodes(resp, distances)
}

// waitForNodes waits for NODES responses to the given call.
func (t *UDPv5) waitForNodes(c *callV5, distances []uint) ([]*enode.Node, error) {
	defer t.callDone(c)

	var (
		nodes           []*enode.Node
		seen            = make(map[enode.ID]struct{})
		received, total = 0, -1
	)
	for {
		select {
		case responseP := <-c.ch:
			response := responseP.(*v5wire.Nodes)
			for _, record := range response.Nodes {
				node, err := t.verifyResponseNode(c, record, distances, seen)
				if err != nil {
					t.log.Debug("Invalid record in "+response.Name(), "id", c.node.ID(), "err", err)
					continue
				}
				nodes = append(nodes, node)
			}
			if total == -1 {
				total = min(int(response.Total), totalNodesResponseLimit)
			}
			if received++; received == total {
				return nodes, nil
			}
		case err := <-c.err:
			return nodes, err
		}
	}
}

// verifyResponseNode checks validity of a record in a NODES response.
func (t *UDPv5) verifyResponseNode(c *callV5, r *enr.Record, distances []uint, seen map[enode.ID]struct{}) (*enode.Node, error) {
	node, err := enode.New(t.validSchemes, r)
	if err != nil {
		return nil, err
	}
	if err := netutil.CheckRelayIP(c.node.IP(), node.IP()); err != nil {
		return nil, err
	}
	if t.netrestrict != nil && !t.netrestrict.Contains(node.IP()) {
		return nil, errors.New("not contained in netrestrict list")
	}
	if c.node.UDP() <= 1024 {
		return nil, errLowPort
	}
	if distances != nil {
		nd := enode.LogDist(c.node.ID(), node.ID())
		if !containsUint(uint(nd), distances) {
			return nil, errors.New("does not match any requested distance")
		}
	}
	if _, ok := seen[node.ID()]; ok {
		return nil, fmt.Errorf("duplicate record")
	}
	seen[node.ID()] = struct{}{}
	return node, nil
}

func containsUint(x uint, xs []uint) bool {
	for _, v := range xs {
		if x == v {
			return true
		}
	}
	return false
}

// call sends the given call and sets up a handler for response packets (of message type
// responseType). Responses are dispatched to the call's response channel.
func (t *UDPv5) call(node *enode.Node, responseType byte, packet v5wire.Packet) *callV5 {
	c := &callV5{
		node:         node,
		packet:       packet,
		responseType: responseType,
		reqid:        make([]byte, 8),
		ch:           make(chan v5wire.Packet, 1),
		err:          make(chan error, 1),
	}
	// Assign request ID.
	crand.Read(c.reqid)
	packet.SetRequestID(c.reqid)
	// Send call to dispatch.
	select {
	case t.callCh <- c:
	case <-t.closeCtx.Done():
		c.err <- errClosed
	}
	return c
}

// callDone tells dispatch that the active call is done.
func (t *UDPv5) callDone(c *callV5) {
	// This needs a loop because further responses may be incoming until the
	// send to callDoneCh has completed. Such responses need to be discarded
	// in order to avoid blocking the dispatch loop.
	for {
		select {
		case <-c.ch:
			// late response, discard.
		case <-c.err:
			// late error, discard.
		case t.callDoneCh <- c:
			return
		case <-t.closeCtx.Done():
			return
		}
	}
}

// dispatch runs in its own goroutine, handles incoming packets and deals with calls.
//
// For any destination node there is at most one 'active call', stored in the t.activeCall*
// maps. A call is made active when it is sent. The active call can be answered by a
// matching response, in which case c.ch receives the response; or by timing out, in which case
// c.err receives the error. When the function that created the call signals the active
// call is done through callDone, the next call from the call queue is started.
//
// Calls may also be answered by a WHOAREYOU packet referencing the call packet's authTag.
// When that happens the call is simply re-sent to complete the handshake. We allow one
// handshake attempt per call.
func (t *UDPv5) dispatch() {
	defer t.wg.Done()

	// Arm first read.
	t.readNextCh <- struct{}{}

	for {
		select {
		case c := <-t.callCh:
			id := c.node.ID()
			t.callQueue[id] = append(t.callQueue[id], c)
			t.sendNextCall(id)

		case ct := <-t.respTimeoutCh:
			active := t.activeCallByNode[ct.c.node.ID()]
			if ct.c == active && ct.timer == active.timeout {
				ct.c.err <- errTimeout
			}

		case c := <-t.callDoneCh:
			id := c.node.ID()
			active := t.activeCallByNode[id]
			if active != c {
				panic("BUG: callDone for inactive call")
			}
			c.timeout.Stop()
			delete(t.activeCallByAuth, c.nonce)
			delete(t.activeCallByNode, id)
			t.sendNextCall(id)

		case p := <-t.packetInCh:
			t.handlePacket(p.Data, p.Addr)
			// Arm next read.
			t.readNextCh <- struct{}{}

		case <-t.closeCtx.Done():
			close(t.readNextCh)
			for id, queue := range t.callQueue {
				for _, c := range queue {
					c.err <- errClosed
				}
				delete(t.callQueue, id)
			}
			for id, c := range t.activeCallByNode {
				c.err <- errClosed
				delete(t.activeCallByNode, id)
				delete(t.activeCallByAuth, c.nonce)
			}
			return
		}
	}
}

// startResponseTimeout sets the response timer for a call.
func (t *UDPv5) startResponseTimeout(c *callV5) {
	if c.timeout != nil {
		c.timeout.Stop()
	}
	var (
		timer mclock.Timer
		done  = make(chan struct{})
	)
	timer = t.clock.AfterFunc(respTimeoutV5, func() {
		<-done
		select {
		case t.respTimeoutCh <- &callTimeout{c, timer}:
		case <-t.closeCtx.Done():
		}
	})
	c.timeout = timer
	close(done)
}

// sendNextCall sends the next call in the call queue if there is no active call.
func (t *UDPv5) sendNextCall(id enode.ID) {
	queue := t.callQueue[id]
	if len(queue) == 0 || t.activeCallByNode[id] != nil {
		return
	}
	t.activeCallByNode[id] = queue[0]
	t.sendCall(t.activeCallByNode[id])
	if len(queue) == 1 {
		delete(t.callQueue, id)
	} else {
		copy(queue, queue[1:])
		t.callQueue[id] = queue[:len(queue)-1]
	}
}

// sendCall encodes and sends a request packet to the call's recipient node.
// This performs a handshake if needed.
func (t *UDPv5) sendCall(c *callV5) {
	// The call might have a nonce from a previous handshake attempt. Remove the entry for
	// the old nonce because we're about to generate a new nonce for this call.
	if c.nonce != (v5wire.Nonce{}) {
		delete(t.activeCallByAuth, c.nonce)
	}

	addr := &net.UDPAddr{IP: c.node.IP(), Port: c.node.UDP()}
	newNonce, _ := t.send(c.node.ID(), addr, c.packet, c.challenge)
	c.nonce = newNonce
	t.activeCallByAuth[newNonce] = c
	t.startResponseTimeout(c)
}

// sendResponse sends a response packet to the given node.
// This doesn't trigger a handshake even if no keys are available.
func (t *UDPv5) sendResponse(toID enode.ID, toAddr *net.UDPAddr, packet v5wire.Packet) error {
	_, err := t.send(toID, toAddr, packet, nil)
	return err
}

// send sends a packet to the given node.
func (t *UDPv5) send(toID enode.ID, toAddr *net.UDPAddr, packet v5wire.Packet, c *v5wire.Whoareyou) (v5wire.Nonce, error) {
	addr := toAddr.String()
	enc, nonce, err := t.codec.Encode(toID, addr, packet, c)
	if err != nil {
		t.log.Warn(">> "+packet.Name(), "id", toID, "addr", addr, "err", err)
		return nonce, err
	}
	_, err = t.conn.WriteToUDP(enc, toAddr)
	t.log.Trace(">> "+packet.Name(), "id", toID, "addr", addr)
	return nonce, err
}

// readLoop runs in its own goroutine and reads packets from the network.
func (t *UDPv5) readLoop() {
	defer t.wg.Done()

	buf := make([]byte, maxPacketSize)
	for range t.readNextCh {
		nbytes, from, err := t.conn.ReadFromUDP(buf)
		if netutil.IsTemporaryError(err) {
			// Ignore temporary read errors.
			t.log.Debug("Temporary UDP read error", "err", err)
			continue
		} else if err != nil {
			// Shut down the loop for permanent errors.
			if !errors.Is(err, io.EOF) {
				t.log.Debug("UDP read error", "err", err)
			}
			return
		}
		t.dispatchReadPacket(from, buf[:nbytes])
	}
}

// dispatchReadPacket sends a packet into the dispatch loop.
func (t *UDPv5) dispatchReadPacket(from *net.UDPAddr, content []byte) bool {
	select {
	case t.packetInCh <- ReadPacket{content, from}:
		return true
	case <-t.closeCtx.Done():
		return false
	}
}

// handlePacket decodes and processes an incoming packet from the network.
func (t *UDPv5) handlePacket(rawpacket []byte, fromAddr *net.UDPAddr) error {
	addr := fromAddr.String()
	fromID, fromNode, packet, err := t.codec.Decode(rawpacket, addr)
	if err != nil {
		t.log.Debug("Bad discv5 packet", "id", fromID, "addr", addr, "err", err)
		return err
	}
	if fromNode != nil {
		// Handshake succeeded, add to table.
		t.tab.addSeenNode(wrapNode(fromNode))
	}
	if packet.Kind() != v5wire.WhoareyouPacket {
		// WHOAREYOU logged separately to report errors.
		t.log.Trace("<< "+packet.Name(), "id", fromID, "addr", addr)
	}
	t.handle(packet, fromID, fromAddr)
	return nil
}

// handleCallResponse dispatches a response packet to the call waiting for it.
func (t *UDPv5) handleCallResponse(fromID enode.ID, fromAddr *net.UDPAddr, p v5wire.Packet) bool {
	ac := t.activeCallByNode[fromID]
	if ac == nil || !bytes.Equal(p.RequestID(), ac.reqid) {
		t.log.Debug(fmt.Sprintf("Unsolicited/late %s response", p.Name()), "id", fromID, "addr", fromAddr)
		return false
	}
	if !fromAddr.IP.Equal(ac.node.IP()) || fromAddr.Port != ac.node.UDP() {
		t.log.Debug(fmt.Sprintf("%s from wrong endpoint", p.Name()), "id", fromID, "addr", fromAddr)
		return false
	}
	if p.Kind() != ac.responseType {
		t.log.Debug(fmt.Sprintf("Wrong discv5 response type %s", p.Name()), "id", fromID, "addr", fromAddr)
		return false
	}
	t.startResponseTimeout(ac)
	ac.ch <- p
	return true
}

// getNode looks for a node record in table and database.
func (t *UDPv5) getNode(id enode.ID) *enode.Node {
	if n := t.tab.getNode(id); n != nil {
		return n
	}
	if n := t.localNode.Database().Node(id); n != nil {
		return n
	}
	return nil
}

// handle processes incoming packets according to their message type.
func (t *UDPv5) handle(p v5wire.Packet, fromID enode.ID, fromAddr *net.UDPAddr) {
	switch p := p.(type) {
	case *v5wire.Unknown:
		t.handleUnknown(p, fromID, fromAddr)
	case *v5wire.Whoareyou:
		t.handleWhoareyou(p, fromID, fromAddr)
	case *v5wire.Ping:
		t.handlePing(p, fromID, fromAddr)
	case *v5wire.Pong:
		if t.handleCallResponse(fromID, fromAddr, p) {
			t.localNode.UDPEndpointStatement(fromAddr, &net.UDPAddr{IP: p.ToIP, Port: int(p.ToPort)})
		}
	case *v5wire.Findnode:
		t.handleFindnode(p, fromID, fromAddr)
	case *v5wire.Nodes:
		t.handleCallResponse(fromID, fromAddr, p)
	case *v5wire.TalkRequest:
		t.handleTalkRequest(p, fromID, fromAddr)
	case *v5wire.TalkResponse:
		t.handleCallResponse(fromID, fromAddr, p)
	}
}

// handleUnknown initiates a handshake by responding with WHOAREYOU.
func (t *UDPv5) handleUnknown(p *v5wire.Unknown, fromID enode.ID, fromAddr *net.UDPAddr) {
	challenge := &v5wire.Whoareyou{Nonce: p.Nonce}
	crand.Read(challenge.IDNonce[:])
	if n := t.getNode(fromID); n != nil {
		challenge.Node = n
		challenge.RecordSeq = n.Seq()
	}
	t.sendResponse(fromID, fromAddr, challenge)
}

var (
	errChallengeNoCall = errors.New("no matching call")
	errChallengeTwice  = errors.New("second handshake")
)

// handleWhoareyou resends the active call as a handshake packet.
func (t *UDPv5) handleWhoareyou(p *v5wire.Whoareyou, fromID enode.ID, fromAddr *net.UDPAddr) {
	c, err := t.matchWithCall(fromID, p.Nonce)
	if err != nil {
		t.log.Debug("Invalid "+p.Name(), "addr", fromAddr, "err", err)
		return
	}

	// Resend the call that was answered by WHOAREYOU.
	t.log.Trace("<< "+p.Name(), "id", c.node.ID(), "addr", fromAddr)
	c.handshakeCount++
	c.challenge = p
	p.Node = c.node
	t.sendCall(c)
}

// matchWithCall checks whether a handshake attempt matches the active call.
func (t *UDPv5) matchWithCall(fromID enode.ID, nonce v5wire.Nonce) (*callV5, error) {
	c := t.activeCallByAuth[nonce]
	if c == nil {
		return nil, errChallengeNoCall
	}
	if c.handshakeCount > 0 {
		return nil, errChallengeTwice
	}
	return c, nil
}

// handlePing sends a PONG response.
func (t *UDPv5) handlePing(p *v5wire.Ping, fromID enode.ID, fromAddr *net.UDPAddr) {
	remoteIP := fromAddr.IP
	// Handle IPv4-mapped IPv6 addresses in case the
	// local node is bound to an IPv6 interface.
	if remoteIP.To4() != nil {
		remoteIP = remoteIP.To4()
	}
	t.sendResponse(fromID, fromAddr, &v5wire.Pong{
		ReqID:  p.ReqID,
		ToIP:   remoteIP,
		ToPort: uint16(fromAddr.Port),
		ENRSeq: t.localNode.Node().Seq(),
	})
}

// handleFindnode returns nodes to the requester.
func (t *UDPv5) handleFindnode(p *v5wire.Findnode, fromID enode.ID, fromAddr *net.UDPAddr) {
	nodes := t.collectTableNodes(fromAddr.IP, p.Distances, findnodeResultLimit)
	for _, resp := range packNodes(p.ReqID, nodes) {
		t.sendResponse(fromID, fromAddr, resp)
	}
}

// collectTableNodes creates a FINDNODE result set for the given distances.
func (t *UDPv5) collectTableNodes(rip net.IP, distances []uint, limit int) []*enode.Node {
	var nodes []*enode.Node
	var processed = make(map[uint]struct{})
	for _, dist := range distances {
		// Reject duplicate / invalid distances.
		_, seen := processed[dist]
		if seen || dist > 256 {
			continue
		}

		// Get the nodes.
		var bn []*enode.Node
		if dist == 0 {
			bn = []*enode.Node{t.Self()}
		} else if dist <= 256 {
			t.tab.mutex.Lock()
			bn = unwrapNodes(t.tab.bucketAtDistance(int(dist)).entries)
			t.tab.mutex.Unlock()
		}
		processed[dist] = struct{}{}

		// Apply some pre-checks to avoid sending invalid nodes.
		for _, n := range bn {
			// TODO livenessChecks > 1
			if netutil.CheckRelayIP(rip, n.IP()) != nil {
				continue
			}
			nodes = append(nodes, n)
			if len(nodes) >= limit {
				return nodes
			}
		}
	}
	return nodes
}

// packNodes creates NODES response packets for the given node list.
func packNodes(reqid []byte, nodes []*enode.Node) []*v5wire.Nodes {
	if len(nodes) == 0 {
		return []*v5wire.Nodes{{ReqID: reqid, Total: 1}}
	}

	total := uint8(math.Ceil(float64(len(nodes)) / 3))
	var resp []*v5wire.Nodes
	for len(nodes) > 0 {
		p := &v5wire.Nodes{ReqID: reqid, Total: total}
		items := min(nodesResponseItemLimit, len(nodes))
		for i := 0; i < items; i++ {
			p.Nodes = append(p.Nodes, nodes[i].Record())
		}
		nodes = nodes[items:]
		resp = append(resp, p)
	}
	return resp
}

// SetFallbackNodes sets the initial bootstrap nodes and waits for a table refresh.
func (t *UDPv5) SetFallbackNodes(nodes []*enode.Node) error {
	err := t.tab.setFallbackNodes(nodes)
	if err != nil {
		return err
	}
	refreshDone := make(chan struct{})
	t.tab.doRefresh(refreshDone)
	<-refreshDone
	return nil
}

// handleTalkRequest runs the talk request handler of the requested protocol.
func (t *UDPv5) handleTalkRequest(p *v5wire.TalkRequest, fromID enode.ID, fromAddr *net.UDPAddr) {
	t.trlock.Lock()
	handler := t.trhandlers[p.Protocol]
	t.trlock.Unlock()

	var response []byte
	if handler != nil {
		response = handler(fromID, fromAddr, p.Message)
	}
	resp := &v5wire.TalkResponse{ReqID: p.ReqID, Message: response}
	t.sendResponse(fromID, fromAddr, resp)
}
180
vendor/github.com/waku-org/go-discover/discover/v5wire/crypto.go
generated
vendored
Normal file
@@ -0,0 +1,180 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package v5wire

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdsa"
	"crypto/elliptic"
	"errors"
	"fmt"
	"hash"

	"github.com/ethereum/go-ethereum/common/math"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"golang.org/x/crypto/hkdf"
)

const (
	// Encryption/authentication parameters.
	aesKeySize   = 16
	gcmNonceSize = 12
)

// Nonce represents a nonce used for AES/GCM.
type Nonce [gcmNonceSize]byte

// EncodePubkey encodes a public key.
func EncodePubkey(key *ecdsa.PublicKey) []byte {
	switch key.Curve {
	case crypto.S256():
		return crypto.CompressPubkey(key)
	default:
		panic("unsupported curve " + key.Curve.Params().Name + " in EncodePubkey")
	}
}

// DecodePubkey decodes a public key in compressed format.
func DecodePubkey(curve elliptic.Curve, e []byte) (*ecdsa.PublicKey, error) {
	switch curve {
	case crypto.S256():
		if len(e) != 33 {
			return nil, errors.New("wrong size public key data")
		}
		return crypto.DecompressPubkey(e)
	default:
		return nil, fmt.Errorf("unsupported curve %s in DecodePubkey", curve.Params().Name)
	}
}

// idNonceHash computes the ID signature hash used in the handshake.
func idNonceHash(h hash.Hash, challenge, ephkey []byte, destID enode.ID) []byte {
	h.Reset()
	h.Write([]byte("discovery v5 identity proof"))
	h.Write(challenge)
	h.Write(ephkey)
	h.Write(destID[:])
	return h.Sum(nil)
}

// makeIDSignature creates the ID nonce signature.
func makeIDSignature(hash hash.Hash, key *ecdsa.PrivateKey, challenge, ephkey []byte, destID enode.ID) ([]byte, error) {
	input := idNonceHash(hash, challenge, ephkey, destID)
	switch key.Curve {
	case crypto.S256():
		idsig, err := crypto.Sign(input, key)
		if err != nil {
			return nil, err
		}
		return idsig[:len(idsig)-1], nil // remove recovery ID
	default:
		return nil, fmt.Errorf("unsupported curve %s", key.Curve.Params().Name)
	}
}

// s256raw is an unparsed secp256k1 public key ENR entry.
type s256raw []byte

func (s256raw) ENRKey() string { return "secp256k1" }

// verifyIDSignature checks that signature over idnonce was made by the given node.
func verifyIDSignature(hash hash.Hash, sig []byte, n *enode.Node, challenge, ephkey []byte, destID enode.ID) error {
	switch idscheme := n.Record().IdentityScheme(); idscheme {
	case "v4":
		var pubkey s256raw
		if n.Load(&pubkey) != nil {
			return errors.New("no secp256k1 public key in record")
		}
		input := idNonceHash(hash, challenge, ephkey, destID)
		if !crypto.VerifySignature(pubkey, input, sig) {
			return errInvalidNonceSig
		}
		return nil
	default:
		return fmt.Errorf("can't verify ID nonce signature against scheme %q", idscheme)
	}
}

type hashFn func() hash.Hash

// deriveKeys creates the session keys.
func deriveKeys(hash hashFn, priv *ecdsa.PrivateKey, pub *ecdsa.PublicKey, n1, n2 enode.ID, challenge []byte) *session {
	const text = "discovery v5 key agreement"
	var info = make([]byte, 0, len(text)+len(n1)+len(n2))
	info = append(info, text...)
	info = append(info, n1[:]...)
	info = append(info, n2[:]...)

	eph := ecdh(priv, pub)
	if eph == nil {
		return nil
	}
	kdf := hkdf.New(hash, eph, challenge, info)
	sec := session{writeKey: make([]byte, aesKeySize), readKey: make([]byte, aesKeySize)}
	kdf.Read(sec.writeKey)
	kdf.Read(sec.readKey)
	for i := range eph {
		eph[i] = 0
	}
	return &sec
}

// ecdh creates a shared secret.
func ecdh(privkey *ecdsa.PrivateKey, pubkey *ecdsa.PublicKey) []byte {
	secX, secY := pubkey.ScalarMult(pubkey.X, pubkey.Y, privkey.D.Bytes())
	if secX == nil {
		return nil
	}
	sec := make([]byte, 33)
	sec[0] = 0x02 | byte(secY.Bit(0))
	math.ReadBits(secX, sec[1:])
	return sec
}

// encryptGCM encrypts pt using AES-GCM with the given key and nonce. The ciphertext is
// appended to dest, which must not overlap with plaintext. The resulting ciphertext is 16
// bytes longer than plaintext because it contains an authentication tag.
func encryptGCM(dest, key, nonce, plaintext, authData []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(fmt.Errorf("can't create block cipher: %v", err))
	}
	aesgcm, err := cipher.NewGCMWithNonceSize(block, gcmNonceSize)
	if err != nil {
		panic(fmt.Errorf("can't create GCM: %v", err))
	}
	return aesgcm.Seal(dest, nonce, plaintext, authData), nil
}

// decryptGCM decrypts ct using AES-GCM with the given key and nonce.
func decryptGCM(key, nonce, ct, authData []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, fmt.Errorf("can't create block cipher: %v", err)
	}
	if len(nonce) != gcmNonceSize {
		return nil, fmt.Errorf("invalid GCM nonce size: %d", len(nonce))
	}
	aesgcm, err := cipher.NewGCMWithNonceSize(block, gcmNonceSize)
	if err != nil {
		return nil, fmt.Errorf("can't create GCM: %v", err)
	}
	pt := make([]byte, 0, len(ct))
	return aesgcm.Open(pt, nonce, ct, authData)
}
|
||||
655
vendor/github.com/waku-org/go-discover/discover/v5wire/encoding.go
generated
vendored
Normal file
@@ -0,0 +1,655 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package v5wire

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdsa"
	crand "crypto/rand"
	"crypto/sha256"
	"encoding/binary"
	"errors"
	"fmt"
	"hash"

	"github.com/ethereum/go-ethereum/common/mclock"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/ethereum/go-ethereum/rlp"
)

// TODO concurrent WHOAREYOU tie-breaker
// TODO rehandshake after X packets

// Header represents a packet header.
type Header struct {
	IV [sizeofMaskingIV]byte
	StaticHeader
	AuthData []byte

	src enode.ID // used by decoder
}

// StaticHeader contains the static fields of a packet header.
type StaticHeader struct {
	ProtocolID [6]byte
	Version    uint16
	Flag       byte
	Nonce      Nonce
	AuthSize   uint16
}

// Authdata layouts.
type (
	whoareyouAuthData struct {
		IDNonce   [16]byte // ID proof data
		RecordSeq uint64   // highest known ENR sequence of requester
	}

	handshakeAuthData struct {
		h struct {
			SrcID      enode.ID
			SigSize    byte // size of signature
			PubkeySize byte // size of pubkey
		}
		// Trailing variable-size data.
		signature, pubkey, record []byte
	}

	messageAuthData struct {
		SrcID enode.ID
	}
)

// Packet header flag values.
const (
	flagMessage = iota
	flagWhoareyou
	flagHandshake
)

// Protocol constants.
const (
	version         = 1
	minVersion      = 1
	sizeofMaskingIV = 16

	// minPacketSize is the minimum size of any Discovery v5 packet (63 bytes);
	// smaller packets are rejected.
	minPacketSize = 63

	minMessageSize      = 48 // this refers to data after static headers
	randomPacketMsgSize = 20
)

var DefaultProtocolID = [6]byte{'d', 'i', 's', 'c', 'v', '5'}

// Errors.
var (
	errTooShort            = errors.New("packet too short")
	errInvalidHeader       = errors.New("invalid packet header")
	errInvalidFlag         = errors.New("invalid flag value in header")
	errMinVersion          = errors.New("version of packet header below minimum")
	errMsgTooShort         = errors.New("message/handshake packet below minimum size")
	errAuthSize            = errors.New("declared auth size is beyond packet length")
	errUnexpectedHandshake = errors.New("unexpected auth response, not in handshake")
	errInvalidAuthKey      = errors.New("invalid ephemeral pubkey")
	errNoRecord            = errors.New("expected ENR in handshake but none sent")
	errInvalidNonceSig     = errors.New("invalid ID nonce signature")
	errMessageTooShort     = errors.New("message contains no data")
	errMessageDecrypt      = errors.New("cannot decrypt message")
)

// Public errors.
var (
	// ErrInvalidReqID is returned when a request ID is invalid.
	ErrInvalidReqID = errors.New("request ID larger than 8 bytes")
)

// Packet sizes.
var (
	sizeofStaticHeader      = binary.Size(StaticHeader{})
	sizeofWhoareyouAuthData = binary.Size(whoareyouAuthData{})
	sizeofHandshakeAuthData = binary.Size(handshakeAuthData{}.h)
	sizeofMessageAuthData   = binary.Size(messageAuthData{})
	sizeofStaticPacketData  = sizeofMaskingIV + sizeofStaticHeader
)

// Codec encodes and decodes Discovery v5 packets.
// This type is not safe for concurrent use.
type Codec struct {
	sha256     hash.Hash
	localnode  *enode.LocalNode
	privkey    *ecdsa.PrivateKey
	sc         *SessionCache
	protocolID [6]byte

	// encoder buffers
	buf      bytes.Buffer // whole packet
	headbuf  bytes.Buffer // packet header
	msgbuf   bytes.Buffer // message RLP plaintext
	msgctbuf []byte       // message data ciphertext

	// decoder buffer
	reader bytes.Reader
}

// NewCodec creates a wire codec.
func NewCodec(ln *enode.LocalNode, key *ecdsa.PrivateKey, clock mclock.Clock, protocolID [6]byte) *Codec {
	c := &Codec{
		sha256:     sha256.New(),
		localnode:  ln,
		privkey:    key,
		sc:         NewSessionCache(1024, clock),
		protocolID: protocolID,
	}
	return c
}

// Encode encodes a packet to a node. 'id' and 'addr' specify the destination node. The
// 'challenge' parameter should be the most recently received WHOAREYOU packet from that
// node.
func (c *Codec) Encode(id enode.ID, addr string, packet Packet, challenge *Whoareyou) ([]byte, Nonce, error) {
	// Create the packet header.
	var (
		head    Header
		session *session
		msgData []byte
		err     error
	)
	switch {
	case packet.Kind() == WhoareyouPacket:
		head, err = c.encodeWhoareyou(id, packet.(*Whoareyou))
	case challenge != nil:
		// We have an unanswered challenge, send handshake.
		head, session, err = c.encodeHandshakeHeader(id, addr, challenge)
	default:
		session = c.sc.session(id, addr)
		if session != nil {
			// There is a session, use it.
			head, err = c.encodeMessageHeader(id, session)
		} else {
			// No keys, send random data to kick off the handshake.
			head, msgData, err = c.encodeRandom(id)
		}
	}
	if err != nil {
		return nil, Nonce{}, err
	}

	// Generate masking IV.
	if err := c.sc.maskingIVGen(head.IV[:]); err != nil {
		return nil, Nonce{}, fmt.Errorf("can't generate masking IV: %v", err)
	}

	// Encode header data.
	c.writeHeaders(&head)

	// Store sent WHOAREYOU challenges.
	if challenge, ok := packet.(*Whoareyou); ok {
		challenge.ChallengeData = bytesCopy(&c.buf)
		c.sc.storeSentHandshake(id, addr, challenge)
	} else if msgData == nil {
		headerData := c.buf.Bytes()
		msgData, err = c.encryptMessage(session, packet, &head, headerData)
		if err != nil {
			return nil, Nonce{}, err
		}
	}

	enc, err := c.EncodeRaw(id, head, msgData)
	return enc, head.Nonce, err
}

// EncodeRaw encodes a packet with the given header.
func (c *Codec) EncodeRaw(id enode.ID, head Header, msgdata []byte) ([]byte, error) {
	c.writeHeaders(&head)

	// Apply masking.
	masked := c.buf.Bytes()[sizeofMaskingIV:]
	mask := head.mask(id)
	mask.XORKeyStream(masked[:], masked[:])

	// Write message data.
	c.buf.Write(msgdata)
	return c.buf.Bytes(), nil
}

func (c *Codec) writeHeaders(head *Header) {
	c.buf.Reset()
	c.buf.Write(head.IV[:])
	binary.Write(&c.buf, binary.BigEndian, &head.StaticHeader)
	c.buf.Write(head.AuthData)
}

// makeHeader creates a packet header.
func (c *Codec) makeHeader(toID enode.ID, flag byte, authsizeExtra int) Header {
	var authsize int
	switch flag {
	case flagMessage:
		authsize = sizeofMessageAuthData
	case flagWhoareyou:
		authsize = sizeofWhoareyouAuthData
	case flagHandshake:
		authsize = sizeofHandshakeAuthData
	default:
		panic(fmt.Errorf("BUG: invalid packet header flag %x", flag))
	}
	authsize += authsizeExtra
	if authsize > int(^uint16(0)) {
		panic(fmt.Errorf("BUG: auth size %d overflows uint16", authsize))
	}
	return Header{
		StaticHeader: StaticHeader{
			ProtocolID: c.protocolID,
			Version:    version,
			Flag:       flag,
			AuthSize:   uint16(authsize),
		},
	}
}

// encodeRandom encodes a packet with random content.
func (c *Codec) encodeRandom(toID enode.ID) (Header, []byte, error) {
	head := c.makeHeader(toID, flagMessage, 0)

	// Encode auth data.
	auth := messageAuthData{SrcID: c.localnode.ID()}
	if _, err := crand.Read(head.Nonce[:]); err != nil {
		return head, nil, fmt.Errorf("can't get random data: %v", err)
	}
	c.headbuf.Reset()
	binary.Write(&c.headbuf, binary.BigEndian, auth)
	head.AuthData = c.headbuf.Bytes()

	// Fill message ciphertext buffer with random bytes.
	c.msgctbuf = append(c.msgctbuf[:0], make([]byte, randomPacketMsgSize)...)
	crand.Read(c.msgctbuf)
	return head, c.msgctbuf, nil
}

// encodeWhoareyou encodes a WHOAREYOU packet.
func (c *Codec) encodeWhoareyou(toID enode.ID, packet *Whoareyou) (Header, error) {
	// Sanity check node field to catch misbehaving callers.
	if packet.RecordSeq > 0 && packet.Node == nil {
		panic("BUG: missing node in whoareyou with non-zero seq")
	}

	// Create header.
	head := c.makeHeader(toID, flagWhoareyou, 0)
	head.AuthData = bytesCopy(&c.buf)
	head.Nonce = packet.Nonce

	// Encode auth data.
	auth := &whoareyouAuthData{
		IDNonce:   packet.IDNonce,
		RecordSeq: packet.RecordSeq,
	}
	c.headbuf.Reset()
	binary.Write(&c.headbuf, binary.BigEndian, auth)
	head.AuthData = c.headbuf.Bytes()
	return head, nil
}

// encodeHandshakeHeader encodes the handshake message packet header.
func (c *Codec) encodeHandshakeHeader(toID enode.ID, addr string, challenge *Whoareyou) (Header, *session, error) {
	// Ensure calling code sets challenge.Node.
	if challenge.Node == nil {
		panic("BUG: missing challenge.Node in encode")
	}

	// Generate new secrets.
	auth, session, err := c.makeHandshakeAuth(toID, addr, challenge)
	if err != nil {
		return Header{}, nil, err
	}

	// Generate nonce for message.
	nonce, err := c.sc.nextNonce(session)
	if err != nil {
		return Header{}, nil, fmt.Errorf("can't generate nonce: %v", err)
	}

	// TODO: this should happen when the first authenticated message is received
	c.sc.storeNewSession(toID, addr, session)

	// Encode the auth header.
	var (
		authsizeExtra = len(auth.pubkey) + len(auth.signature) + len(auth.record)
		head          = c.makeHeader(toID, flagHandshake, authsizeExtra)
	)
	c.headbuf.Reset()
	binary.Write(&c.headbuf, binary.BigEndian, &auth.h)
	c.headbuf.Write(auth.signature)
	c.headbuf.Write(auth.pubkey)
	c.headbuf.Write(auth.record)
	head.AuthData = c.headbuf.Bytes()
	head.Nonce = nonce
	return head, session, err
}

// makeHandshakeAuth creates the auth header on a request packet following WHOAREYOU.
func (c *Codec) makeHandshakeAuth(toID enode.ID, addr string, challenge *Whoareyou) (*handshakeAuthData, *session, error) {
	auth := new(handshakeAuthData)
	auth.h.SrcID = c.localnode.ID()

	// Create the ephemeral key. This needs to be first because the
	// key is part of the ID nonce signature.
	var remotePubkey = new(ecdsa.PublicKey)
	if err := challenge.Node.Load((*enode.Secp256k1)(remotePubkey)); err != nil {
		return nil, nil, fmt.Errorf("can't find secp256k1 key for recipient")
	}
	ephkey, err := c.sc.ephemeralKeyGen()
	if err != nil {
		return nil, nil, fmt.Errorf("can't generate ephemeral key")
	}
	ephpubkey := EncodePubkey(&ephkey.PublicKey)
	auth.pubkey = ephpubkey[:]
	auth.h.PubkeySize = byte(len(auth.pubkey))

	// Add ID nonce signature to response.
	cdata := challenge.ChallengeData
	idsig, err := makeIDSignature(c.sha256, c.privkey, cdata, ephpubkey[:], toID)
	if err != nil {
		return nil, nil, fmt.Errorf("can't sign: %v", err)
	}
	auth.signature = idsig
	auth.h.SigSize = byte(len(auth.signature))

	// Add our record to response if it's newer than what remote side has.
	ln := c.localnode.Node()
	if challenge.RecordSeq < ln.Seq() {
		auth.record, _ = rlp.EncodeToBytes(ln.Record())
	}

	// Create session keys.
	sec := deriveKeys(sha256.New, ephkey, remotePubkey, c.localnode.ID(), challenge.Node.ID(), cdata)
	if sec == nil {
		return nil, nil, fmt.Errorf("key derivation failed")
	}
	return auth, sec, err
}

// encodeMessageHeader encodes an encrypted message packet.
func (c *Codec) encodeMessageHeader(toID enode.ID, s *session) (Header, error) {
	head := c.makeHeader(toID, flagMessage, 0)

	// Create the header.
	nonce, err := c.sc.nextNonce(s)
	if err != nil {
		return Header{}, fmt.Errorf("can't generate nonce: %v", err)
	}
	auth := messageAuthData{SrcID: c.localnode.ID()}
	c.buf.Reset()
	binary.Write(&c.buf, binary.BigEndian, &auth)
	head.AuthData = bytesCopy(&c.buf)
	head.Nonce = nonce
	return head, err
}

func (c *Codec) encryptMessage(s *session, p Packet, head *Header, headerData []byte) ([]byte, error) {
	// Encode message plaintext.
	c.msgbuf.Reset()
	c.msgbuf.WriteByte(p.Kind())
	if err := rlp.Encode(&c.msgbuf, p); err != nil {
		return nil, err
	}
	messagePT := c.msgbuf.Bytes()

	// Encrypt into message ciphertext buffer.
	messageCT, err := encryptGCM(c.msgctbuf[:0], s.writeKey, head.Nonce[:], messagePT, headerData)
	if err == nil {
		c.msgctbuf = messageCT
	}
	return messageCT, err
}

// Decode decodes a discovery packet.
func (c *Codec) Decode(input []byte, addr string) (src enode.ID, n *enode.Node, p Packet, err error) {
	if len(input) < minPacketSize {
		return enode.ID{}, nil, nil, errTooShort
	}
	// Unmask the static header.
	var head Header
	copy(head.IV[:], input[:sizeofMaskingIV])
	mask := head.mask(c.localnode.ID())
	staticHeader := input[sizeofMaskingIV:sizeofStaticPacketData]
	mask.XORKeyStream(staticHeader, staticHeader)

	// Decode and verify the static header.
	c.reader.Reset(staticHeader)
	binary.Read(&c.reader, binary.BigEndian, &head.StaticHeader)
	remainingInput := len(input) - sizeofStaticPacketData
	if err := head.checkValid(remainingInput, c.protocolID); err != nil {
		return enode.ID{}, nil, nil, err
	}

	// Unmask auth data.
	authDataEnd := sizeofStaticPacketData + int(head.AuthSize)
	authData := input[sizeofStaticPacketData:authDataEnd]
	mask.XORKeyStream(authData, authData)
	head.AuthData = authData

	// Delete timed-out handshakes. This must happen before decoding to avoid
	// processing the same handshake twice.
	c.sc.handshakeGC()

	// Decode auth part and message.
	headerData := input[:authDataEnd]
	msgData := input[authDataEnd:]
	switch head.Flag {
	case flagWhoareyou:
		p, err = c.decodeWhoareyou(&head, headerData)
	case flagHandshake:
		n, p, err = c.decodeHandshakeMessage(addr, &head, headerData, msgData)
	case flagMessage:
		p, err = c.decodeMessage(addr, &head, headerData, msgData)
	default:
		err = errInvalidFlag
	}
	return head.src, n, p, err
}

// decodeWhoareyou reads packet data after the header as a WHOAREYOU packet.
func (c *Codec) decodeWhoareyou(head *Header, headerData []byte) (Packet, error) {
	if len(head.AuthData) != sizeofWhoareyouAuthData {
		return nil, fmt.Errorf("invalid auth size %d for WHOAREYOU", len(head.AuthData))
	}
	var auth whoareyouAuthData
	c.reader.Reset(head.AuthData)
	binary.Read(&c.reader, binary.BigEndian, &auth)
	p := &Whoareyou{
		Nonce:         head.Nonce,
		IDNonce:       auth.IDNonce,
		RecordSeq:     auth.RecordSeq,
		ChallengeData: make([]byte, len(headerData)),
	}
	copy(p.ChallengeData, headerData)
	return p, nil
}

func (c *Codec) decodeHandshakeMessage(fromAddr string, head *Header, headerData, msgData []byte) (n *enode.Node, p Packet, err error) {
	node, auth, session, err := c.decodeHandshake(fromAddr, head)
	if err != nil {
		c.sc.deleteHandshake(auth.h.SrcID, fromAddr)
		return nil, nil, err
	}

	// Decrypt the message using the new session keys.
	msg, err := c.decryptMessage(msgData, head.Nonce[:], headerData, session.readKey)
	if err != nil {
		c.sc.deleteHandshake(auth.h.SrcID, fromAddr)
		return node, msg, err
	}

	// Handshake OK, drop the challenge and store the new session keys.
	c.sc.storeNewSession(auth.h.SrcID, fromAddr, session)
	c.sc.deleteHandshake(auth.h.SrcID, fromAddr)
	return node, msg, nil
}

func (c *Codec) decodeHandshake(fromAddr string, head *Header) (n *enode.Node, auth handshakeAuthData, s *session, err error) {
	if auth, err = c.decodeHandshakeAuthData(head); err != nil {
		return nil, auth, nil, err
	}

	// Verify against our last WHOAREYOU.
	challenge := c.sc.getHandshake(auth.h.SrcID, fromAddr)
	if challenge == nil {
		return nil, auth, nil, errUnexpectedHandshake
	}
	// Get node record.
	n, err = c.decodeHandshakeRecord(challenge.Node, auth.h.SrcID, auth.record)
	if err != nil {
		return nil, auth, nil, err
	}
	// Verify ID nonce signature.
	sig := auth.signature
	cdata := challenge.ChallengeData
	err = verifyIDSignature(c.sha256, sig, n, cdata, auth.pubkey, c.localnode.ID())
	if err != nil {
		return nil, auth, nil, err
	}
	// Verify ephemeral key is on curve.
	ephkey, err := DecodePubkey(c.privkey.Curve, auth.pubkey)
	if err != nil {
		return nil, auth, nil, errInvalidAuthKey
	}
	// Derive session keys.
	session := deriveKeys(sha256.New, c.privkey, ephkey, auth.h.SrcID, c.localnode.ID(), cdata)
	session = session.keysFlipped()
	return n, auth, session, nil
}

// decodeHandshakeAuthData reads the authdata section of a handshake packet.
func (c *Codec) decodeHandshakeAuthData(head *Header) (auth handshakeAuthData, err error) {
	// Decode fixed-size part.
	if len(head.AuthData) < sizeofHandshakeAuthData {
		return auth, fmt.Errorf("header authsize %d too low for handshake", head.AuthSize)
	}
	c.reader.Reset(head.AuthData)
	binary.Read(&c.reader, binary.BigEndian, &auth.h)
	head.src = auth.h.SrcID

	// Decode variable-size part.
	var (
		vardata       = head.AuthData[sizeofHandshakeAuthData:]
		sigAndKeySize = int(auth.h.SigSize) + int(auth.h.PubkeySize)
		keyOffset     = int(auth.h.SigSize)
		recOffset     = keyOffset + int(auth.h.PubkeySize)
	)
	if len(vardata) < sigAndKeySize {
		return auth, errTooShort
	}
	auth.signature = vardata[:keyOffset]
	auth.pubkey = vardata[keyOffset:recOffset]
	auth.record = vardata[recOffset:]
	return auth, nil
}

// decodeHandshakeRecord verifies the node record contained in a handshake packet. The
// remote node should include the record if we don't have one or if ours is older than the
// latest sequence number.
func (c *Codec) decodeHandshakeRecord(local *enode.Node, wantID enode.ID, remote []byte) (*enode.Node, error) {
	node := local
	if len(remote) > 0 {
		var record enr.Record
		if err := rlp.DecodeBytes(remote, &record); err != nil {
			return nil, err
		}
		if local == nil || local.Seq() < record.Seq() {
			n, err := enode.New(enode.ValidSchemes, &record)
			if err != nil {
				return nil, fmt.Errorf("invalid node record: %v", err)
			}
			if n.ID() != wantID {
				return nil, fmt.Errorf("record in handshake has wrong ID: %v", n.ID())
			}
			node = n
		}
	}
	if node == nil {
		return nil, errNoRecord
	}
	return node, nil
}

// decodeMessage reads packet data following the header as an ordinary message packet.
func (c *Codec) decodeMessage(fromAddr string, head *Header, headerData, msgData []byte) (Packet, error) {
	if len(head.AuthData) != sizeofMessageAuthData {
		return nil, fmt.Errorf("invalid auth size %d for message packet", len(head.AuthData))
	}
	var auth messageAuthData
	c.reader.Reset(head.AuthData)
	binary.Read(&c.reader, binary.BigEndian, &auth)
	head.src = auth.SrcID

	// Try decrypting the message.
	key := c.sc.readKey(auth.SrcID, fromAddr)
	msg, err := c.decryptMessage(msgData, head.Nonce[:], headerData, key)
	if errors.Is(err, errMessageDecrypt) {
		// It didn't work. Start the handshake since this is an ordinary message packet.
		return &Unknown{Nonce: head.Nonce}, nil
	}
	return msg, err
}

func (c *Codec) decryptMessage(input, nonce, headerData, readKey []byte) (Packet, error) {
	msgdata, err := decryptGCM(readKey, nonce, input, headerData)
	if err != nil {
		return nil, errMessageDecrypt
	}
	if len(msgdata) == 0 {
		return nil, errMessageTooShort
	}
	return DecodeMessage(msgdata[0], msgdata[1:])
}

// checkValid performs some basic validity checks on the header.
// The packetLen here is the length remaining after the static header.
func (h *StaticHeader) checkValid(packetLen int, protocolID [6]byte) error {
	if h.ProtocolID != protocolID {
		return errInvalidHeader
	}
	if h.Version < minVersion {
		return errMinVersion
	}
	if h.Flag != flagWhoareyou && packetLen < minMessageSize {
		return errMsgTooShort
	}
	if int(h.AuthSize) > packetLen {
		return errAuthSize
	}
	return nil
}

// mask returns a cipher for 'masking' / 'unmasking' packet headers.
func (h *Header) mask(destID enode.ID) cipher.Stream {
	block, err := aes.NewCipher(destID[:16])
	if err != nil {
		panic("can't create cipher")
	}
	return cipher.NewCTR(block, h.IV[:])
}

func bytesCopy(r *bytes.Buffer) []byte {
	b := make([]byte, r.Len())
	copy(b, r.Bytes())
	return b
}
249
vendor/github.com/waku-org/go-discover/discover/v5wire/msg.go
generated
vendored
Normal file
@@ -0,0 +1,249 @@
// Copyright 2020 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package v5wire
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/mclock"
|
||||
"github.com/ethereum/go-ethereum/p2p/enode"
|
||||
"github.com/ethereum/go-ethereum/p2p/enr"
|
||||
"github.com/ethereum/go-ethereum/rlp"
|
||||
)
|
||||
|
||||
// Packet is implemented by all message types.
|
||||
type Packet interface {
|
||||
Name() string // Name returns a string corresponding to the message type.
|
||||
Kind() byte // Kind returns the message type.
|
||||
RequestID() []byte // Returns the request ID.
|
||||
SetRequestID([]byte) // Sets the request ID.
|
||||
}
|
||||
|
||||
// Message types.
|
||||
const (
|
||||
PingMsg byte = iota + 1
|
||||
PongMsg
|
||||
FindnodeMsg
|
||||
NodesMsg
|
||||
TalkRequestMsg
|
||||
TalkResponseMsg
|
||||
RequestTicketMsg
|
||||
TicketMsg
|
||||
RegtopicMsg
|
||||
RegconfirmationMsg
|
||||
TopicQueryMsg
|
||||
|
||||
UnknownPacket = byte(255) // any non-decryptable packet
|
||||
WhoareyouPacket = byte(254) // the WHOAREYOU packet
|
||||
)
|
||||
|
||||
// Protocol messages.
|
||||
type (
|
||||
// Unknown represents any packet that can't be decrypted.
|
||||
Unknown struct {
|
||||
Nonce Nonce
|
||||
}
|
||||
|
||||
// Whoareyou contains the handshake challenge.
|
||||
Whoareyou struct {
|
||||
ChallengeData []byte // Encoded challenge
|
||||
Nonce Nonce // Nonce of request packet
|
||||
IDNonce [16]byte // Identity proof data
|
||||
RecordSeq uint64 // ENR sequence number of recipient
|
||||
|
||||
// Node is the locally known node record of recipient.
|
||||
// This must be set by the caller of Encode.
|
||||
Node *enode.Node
|
||||
|
||||
sent mclock.AbsTime // for handshake GC.
|
||||
}
|
||||
|
||||
// Ping is sent during liveness checks.
|
||||
Ping struct {
|
||||
ReqID []byte
|
||||
ENRSeq uint64
|
||||
}
|
||||
|
||||
// Pong is the reply to Ping.
|
||||
Pong struct {
|
||||
ReqID []byte
|
||||
ENRSeq uint64
|
||||
ToIP net.IP // These fields should mirror the UDP envelope address of the ping
|
||||
ToPort uint16 // packet, which provides a way to discover the external address (after NAT).
|
||||
}
|
||||
|
||||
// Findnode is a query for nodes in the given bucket.
|
||||
Findnode struct {
|
||||
ReqID []byte
|
||||
Distances []uint
|
||||
}
|
||||
|
||||
// Nodes is the reply to Findnode and Topicquery.
|
||||
Nodes struct {
|
||||
ReqID []byte
|
||||
Total uint8
|
||||
Nodes []*enr.Record
|
||||
}
|
||||
|
||||
// TalkRequest is an application-level request.
|
||||
TalkRequest struct {
|
||||
ReqID []byte
|
||||
Protocol string
|
||||
Message []byte
|
||||
}
|
||||
|
||||
// TalkResponse is the reply to TalkRequest.
|
||||
TalkResponse struct {
|
||||
ReqID []byte
|
||||
Message []byte
|
||||
}
|
||||
|
||||
// RequestTicket requests a ticket for a topic queue.
|
||||
RequestTicket struct {
|
||||
ReqID []byte
|
||||
Topic []byte
|
||||
}
|
||||
|
||||
// Ticket is the response to RequestTicket.
|
||||
Ticket struct {
|
||||
ReqID []byte
|
||||
Ticket []byte
|
||||
}
|
||||
|
||||
// Regtopic registers the sender in a topic queue using a ticket.
|
||||
Regtopic struct {
|
||||
ReqID []byte
|
||||
Ticket []byte
|
||||
ENR *enr.Record
|
||||
}
|
||||
|
||||
// Regconfirmation is the reply to Regtopic.
|
||||
Regconfirmation struct {
|
||||
ReqID []byte
|
||||
Registered bool
|
||||
}
|
||||
|
||||
// TopicQuery asks for nodes with the given topic.
|
||||
TopicQuery struct {
|
||||
ReqID []byte
|
||||
Topic []byte
|
||||
}
|
||||
)
|
||||
|
||||
// DecodeMessage decodes the message body of a packet.
|
||||
func DecodeMessage(ptype byte, body []byte) (Packet, error) {
|
||||
var dec Packet
|
||||
switch ptype {
|
||||
case PingMsg:
|
||||
dec = new(Ping)
|
||||
case PongMsg:
|
||||
dec = new(Pong)
|
||||
case FindnodeMsg:
|
||||
dec = new(Findnode)
|
||||
case NodesMsg:
|
||||
dec = new(Nodes)
|
||||
case TalkRequestMsg:
|
||||
dec = new(TalkRequest)
|
||||
case TalkResponseMsg:
|
||||
dec = new(TalkResponse)
|
||||
case RequestTicketMsg:
|
||||
		dec = new(RequestTicket)
	case TicketMsg:
		dec = new(Ticket)
	case RegtopicMsg:
		dec = new(Regtopic)
	case RegconfirmationMsg:
		dec = new(Regconfirmation)
	case TopicQueryMsg:
		dec = new(TopicQuery)
	default:
		return nil, fmt.Errorf("unknown packet type %d", ptype)
	}
	if err := rlp.DecodeBytes(body, dec); err != nil {
		return nil, err
	}
	if dec.RequestID() != nil && len(dec.RequestID()) > 8 {
		return nil, ErrInvalidReqID
	}
	return dec, nil
}

func (*Whoareyou) Name() string        { return "WHOAREYOU/v5" }
func (*Whoareyou) Kind() byte          { return WhoareyouPacket }
func (*Whoareyou) RequestID() []byte   { return nil }
func (*Whoareyou) SetRequestID([]byte) {}

func (*Unknown) Name() string        { return "UNKNOWN/v5" }
func (*Unknown) Kind() byte          { return UnknownPacket }
func (*Unknown) RequestID() []byte   { return nil }
func (*Unknown) SetRequestID([]byte) {}

func (*Ping) Name() string             { return "PING/v5" }
func (*Ping) Kind() byte               { return PingMsg }
func (p *Ping) RequestID() []byte      { return p.ReqID }
func (p *Ping) SetRequestID(id []byte) { p.ReqID = id }

func (*Pong) Name() string             { return "PONG/v5" }
func (*Pong) Kind() byte               { return PongMsg }
func (p *Pong) RequestID() []byte      { return p.ReqID }
func (p *Pong) SetRequestID(id []byte) { p.ReqID = id }

func (*Findnode) Name() string             { return "FINDNODE/v5" }
func (*Findnode) Kind() byte               { return FindnodeMsg }
func (p *Findnode) RequestID() []byte      { return p.ReqID }
func (p *Findnode) SetRequestID(id []byte) { p.ReqID = id }

func (*Nodes) Name() string             { return "NODES/v5" }
func (*Nodes) Kind() byte               { return NodesMsg }
func (p *Nodes) RequestID() []byte      { return p.ReqID }
func (p *Nodes) SetRequestID(id []byte) { p.ReqID = id }

func (*TalkRequest) Name() string             { return "TALKREQ/v5" }
func (*TalkRequest) Kind() byte               { return TalkRequestMsg }
func (p *TalkRequest) RequestID() []byte      { return p.ReqID }
func (p *TalkRequest) SetRequestID(id []byte) { p.ReqID = id }

func (*TalkResponse) Name() string             { return "TALKRESP/v5" }
func (*TalkResponse) Kind() byte               { return TalkResponseMsg }
func (p *TalkResponse) RequestID() []byte      { return p.ReqID }
func (p *TalkResponse) SetRequestID(id []byte) { p.ReqID = id }

func (*RequestTicket) Name() string             { return "REQTICKET/v5" }
func (*RequestTicket) Kind() byte               { return RequestTicketMsg }
func (p *RequestTicket) RequestID() []byte      { return p.ReqID }
func (p *RequestTicket) SetRequestID(id []byte) { p.ReqID = id }

func (*Regtopic) Name() string             { return "REGTOPIC/v5" }
func (*Regtopic) Kind() byte               { return RegtopicMsg }
func (p *Regtopic) RequestID() []byte      { return p.ReqID }
func (p *Regtopic) SetRequestID(id []byte) { p.ReqID = id }

func (*Ticket) Name() string             { return "TICKET/v5" }
func (*Ticket) Kind() byte               { return TicketMsg }
func (p *Ticket) RequestID() []byte      { return p.ReqID }
func (p *Ticket) SetRequestID(id []byte) { p.ReqID = id }

func (*Regconfirmation) Name() string             { return "REGCONFIRMATION/v5" }
func (*Regconfirmation) Kind() byte               { return RegconfirmationMsg }
func (p *Regconfirmation) RequestID() []byte      { return p.ReqID }
func (p *Regconfirmation) SetRequestID(id []byte) { p.ReqID = id }

func (*TopicQuery) Name() string             { return "TOPICQUERY/v5" }
func (*TopicQuery) Kind() byte               { return TopicQueryMsg }
func (p *TopicQuery) RequestID() []byte      { return p.ReqID }
func (p *TopicQuery) SetRequestID(id []byte) { p.ReqID = id }
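The message types above all satisfy a small packet-style interface (Name/Kind/RequestID/SetRequestID) so the decoder can treat them uniformly, and the decoder rejects request IDs longer than 8 bytes. A minimal, self-contained sketch of that pattern, using hypothetical names rather than the real go-discover types:

```go
package main

import "fmt"

// packet mirrors the uniform accessor interface implemented by every
// v5wire message type above (names here are illustrative, not the real ones).
type packet interface {
	Name() string
	RequestID() []byte
	SetRequestID([]byte)
}

type ping struct{ reqID []byte }

func (*ping) Name() string             { return "PING/v5" }
func (p *ping) RequestID() []byte      { return p.reqID }
func (p *ping) SetRequestID(id []byte) { p.reqID = id }

// checkRequestID enforces the same 8-byte cap the decoder above applies
// after RLP-decoding a message body.
func checkRequestID(p packet) error {
	if p.RequestID() != nil && len(p.RequestID()) > 8 {
		return fmt.Errorf("invalid request ID")
	}
	return nil
}

func main() {
	p := &ping{}
	p.SetRequestID([]byte{1, 2, 3})
	fmt.Println(p.Name(), checkRequestID(p) == nil)
}
```

Because every message carries its own name and request-ID accessors, the codec never needs a type switch outside the single decode dispatch shown above.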
142
vendor/github.com/waku-org/go-discover/discover/v5wire/session.go
generated
vendored
Normal file
@@ -0,0 +1,142 @@
// Copyright 2020 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package v5wire

import (
	"crypto/ecdsa"
	crand "crypto/rand"
	"encoding/binary"
	"time"

	"github.com/ethereum/go-ethereum/common/mclock"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/hashicorp/golang-lru/simplelru"
)

const handshakeTimeout = time.Second

// The SessionCache keeps negotiated encryption keys and
// state for in-progress handshakes in the Discovery v5 wire protocol.
type SessionCache struct {
	sessions   *simplelru.LRU
	handshakes map[sessionID]*Whoareyou
	clock      mclock.Clock

	// hooks for overriding randomness.
	nonceGen        func(uint32) (Nonce, error)
	maskingIVGen    func([]byte) error
	ephemeralKeyGen func() (*ecdsa.PrivateKey, error)
}

// sessionID identifies a session or handshake.
type sessionID struct {
	id   enode.ID
	addr string
}

// session contains session information
type session struct {
	writeKey     []byte
	readKey      []byte
	nonceCounter uint32
}

// keysFlipped returns a copy of s with the read and write keys flipped.
func (s *session) keysFlipped() *session {
	return &session{s.readKey, s.writeKey, s.nonceCounter}
}

func NewSessionCache(maxItems int, clock mclock.Clock) *SessionCache {
	cache, err := simplelru.NewLRU(maxItems, nil)
	if err != nil {
		panic("can't create session cache")
	}
	return &SessionCache{
		sessions:        cache,
		handshakes:      make(map[sessionID]*Whoareyou),
		clock:           clock,
		nonceGen:        generateNonce,
		maskingIVGen:    generateMaskingIV,
		ephemeralKeyGen: crypto.GenerateKey,
	}
}

func generateNonce(counter uint32) (n Nonce, err error) {
	binary.BigEndian.PutUint32(n[:4], counter)
	_, err = crand.Read(n[4:])
	return n, err
}

func generateMaskingIV(buf []byte) error {
	_, err := crand.Read(buf)
	return err
}

// nextNonce creates a nonce for encrypting a message to the given session.
func (sc *SessionCache) nextNonce(s *session) (Nonce, error) {
	s.nonceCounter++
	return sc.nonceGen(s.nonceCounter)
}

// session returns the current session for the given node, if any.
func (sc *SessionCache) session(id enode.ID, addr string) *session {
	item, ok := sc.sessions.Get(sessionID{id, addr})
	if !ok {
		return nil
	}
	return item.(*session)
}

// readKey returns the current read key for the given node.
func (sc *SessionCache) readKey(id enode.ID, addr string) []byte {
	if s := sc.session(id, addr); s != nil {
		return s.readKey
	}
	return nil
}

// storeNewSession stores new encryption keys in the cache.
func (sc *SessionCache) storeNewSession(id enode.ID, addr string, s *session) {
	sc.sessions.Add(sessionID{id, addr}, s)
}

// getHandshake gets the handshake challenge we previously sent to the given remote node.
func (sc *SessionCache) getHandshake(id enode.ID, addr string) *Whoareyou {
	return sc.handshakes[sessionID{id, addr}]
}

// storeSentHandshake stores the handshake challenge sent to the given remote node.
func (sc *SessionCache) storeSentHandshake(id enode.ID, addr string, challenge *Whoareyou) {
	challenge.sent = sc.clock.Now()
	sc.handshakes[sessionID{id, addr}] = challenge
}

// deleteHandshake deletes handshake data for the given node.
func (sc *SessionCache) deleteHandshake(id enode.ID, addr string) {
	delete(sc.handshakes, sessionID{id, addr})
}

// handshakeGC deletes timed-out handshakes.
func (sc *SessionCache) handshakeGC() {
	deadline := sc.clock.Now().Add(-handshakeTimeout)
	for key, challenge := range sc.handshakes {
		if challenge.sent < deadline {
			delete(sc.handshakes, key)
		}
	}
}
2
vendor/github.com/waku-org/go-libp2p-rendezvous/.gitattributes
generated
vendored
Normal file
@@ -0,0 +1,2 @@
*.pb.go linguist-generated merge=ours -diff
go.sum linguist-generated text
14
vendor/github.com/waku-org/go-libp2p-rendezvous/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,14 @@
# Binaries for programs and plugins
*.exe
*.dll
*.so
*.dylib

# Test binary, build with `go test -c`
*.test

# Output of the go coverage tool, specifically when used with LiteIDE
*.out

# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/
32
vendor/github.com/waku-org/go-libp2p-rendezvous/.golangci.yml
generated
vendored
Normal file
@@ -0,0 +1,32 @@
run:
  deadline: 1m
  tests: false
  skip-files:
    - "test/.*"
    - "test/.*/.*"

linters-settings:
  golint:
    min-confidence: 0
  maligned:
    suggest-new: true
  goconst:
    min-len: 5
    min-occurrences: 4
  misspell:
    locale: US

linters:
  disable-all: true
  enable:
    - goconst
    - misspell
    - unused
    - staticcheck
    - unconvert
    - gofmt
    - goimports
    # @TODO(gfanton): disable revive for now as it generates too many errors;
    # it should be enabled in a dedicated PR
    # - revive
    - ineffassign
10
vendor/github.com/waku-org/go-libp2p-rendezvous/.releaserc
generated
vendored
Normal file
@@ -0,0 +1,10 @@
{
  "release": {
    "branches": ["master"]
  },
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/github"
  ]
}
21
vendor/github.com/waku-org/go-libp2p-rendezvous/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2018 libp2p

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
330
vendor/github.com/waku-org/go-libp2p-rendezvous/client.go
generated
vendored
Normal file
@@ -0,0 +1,330 @@
package rendezvous

import (
	"context"
	"fmt"
	"math/rand"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	inet "github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-msgio/pbio"

	pb "github.com/waku-org/go-libp2p-rendezvous/pb"
)

var (
	DiscoverAsyncInterval = 2 * time.Minute
)

type RendezvousPoint interface {
	Register(ctx context.Context, ns string, ttl int) (time.Duration, error)
	Unregister(ctx context.Context, ns string) error
	Discover(ctx context.Context, ns string, limit int, cookie []byte) ([]Registration, []byte, error)
	DiscoverAsync(ctx context.Context, ns string) (<-chan Registration, error)
}

type Registration struct {
	Peer peer.AddrInfo
	Ns   string
	Ttl  int
}

type RendezvousClient interface {
	Register(ctx context.Context, ns string, ttl int) (time.Duration, error)
	Unregister(ctx context.Context, ns string) error
	Discover(ctx context.Context, ns string, limit int, cookie []byte) ([]peer.AddrInfo, []byte, error)
	DiscoverAsync(ctx context.Context, ns string) (<-chan peer.AddrInfo, error)
}

func NewRendezvousPoint(host host.Host, p peer.ID, opts ...RendezvousPointOption) RendezvousPoint {
	cfg := defaultRendezvousPointConfig
	cfg.apply(opts...)
	return &rendezvousPoint{
		addrFactory: cfg.AddrsFactory,
		host:        host,
		p:           p,
	}
}

type rendezvousPoint struct {
	addrFactory AddrsFactory
	host        host.Host
	p           peer.ID
}

func NewRendezvousClient(host host.Host, rp peer.ID) RendezvousClient {
	return NewRendezvousClientWithPoint(NewRendezvousPoint(host, rp))
}

func NewRendezvousClientWithPoint(rp RendezvousPoint) RendezvousClient {
	return &rendezvousClient{rp: rp}
}

type rendezvousClient struct {
	rp RendezvousPoint
}

func (rp *rendezvousPoint) Register(ctx context.Context, ns string, ttl int) (time.Duration, error) {
	s, err := rp.host.NewStream(ctx, rp.p, RendezvousProto)
	if err != nil {
		return 0, err
	}
	defer s.Reset()

	r := pbio.NewDelimitedReader(s, inet.MessageSizeMax)
	w := pbio.NewDelimitedWriter(s)

	addrs := rp.addrFactory(rp.host.Addrs())
	if len(addrs) == 0 {
		return 0, fmt.Errorf("no addrs available to advertise: %s", ns)
	}

	log.Debugf("advertising on `%s` with: %v", ns, addrs)

	privKey := rp.host.Peerstore().PrivKey(rp.host.ID())
	req, err := newRegisterMessage(privKey, ns, peer.AddrInfo{ID: rp.host.ID(), Addrs: addrs}, ttl)
	if err != nil {
		return 0, err
	}

	err = w.WriteMsg(req)
	if err != nil {
		return 0, err
	}

	var res pb.Message
	err = r.ReadMsg(&res)
	if err != nil {
		return 0, err
	}

	if res.GetType() != pb.Message_REGISTER_RESPONSE {
		return 0, fmt.Errorf("unexpected response: %s", res.GetType().String())
	}

	response := res.GetRegisterResponse()
	status := response.GetStatus()
	if status != pb.Message_OK {
		return 0, RendezvousError{Status: status, Text: res.GetRegisterResponse().GetStatusText()}
	}

	responseTTL := int64(0)
	if response.Ttl != nil {
		responseTTL = int64(*response.Ttl)
	}

	return time.Duration(responseTTL) * time.Second, nil
}

func (rc *rendezvousClient) Register(ctx context.Context, ns string, ttl int) (time.Duration, error) {
	if ttl < 120 {
		return 0, fmt.Errorf("registration TTL is too short")
	}

	returnedTTL, err := rc.rp.Register(ctx, ns, ttl)
	if err != nil {
		return 0, err
	}

	go registerRefresh(ctx, rc.rp, ns, ttl)
	return returnedTTL, nil
}

func registerRefresh(ctx context.Context, rz RendezvousPoint, ns string, ttl int) {
	var refresh time.Duration
	errcount := 0

	for {
		if errcount > 0 {
			// do randomized exponential backoff, up to ~4 hours
			if errcount > 7 {
				errcount = 7
			}
			backoff := 2 << uint(errcount)
			refresh = 5*time.Minute + time.Duration(rand.Intn(backoff*60000))*time.Millisecond
		} else {
			refresh = time.Duration(ttl-30) * time.Second
		}

		select {
		case <-time.After(refresh):
		case <-ctx.Done():
			return
		}

		_, err := rz.Register(ctx, ns, ttl)
		if err != nil {
			log.Errorf("Error registering [%s]: %s", ns, err.Error())
			errcount++
		} else {
			errcount = 0
		}
	}
}

func (rp *rendezvousPoint) Unregister(ctx context.Context, ns string) error {
	s, err := rp.host.NewStream(ctx, rp.p, RendezvousProto)
	if err != nil {
		return err
	}
	defer s.Close()

	w := pbio.NewDelimitedWriter(s)
	req := newUnregisterMessage(ns, rp.host.ID())
	return w.WriteMsg(req)
}

func (rc *rendezvousClient) Unregister(ctx context.Context, ns string) error {
	return rc.rp.Unregister(ctx, ns)
}

func (rp *rendezvousPoint) Discover(ctx context.Context, ns string, limit int, cookie []byte) ([]Registration, []byte, error) {
	s, err := rp.host.NewStream(ctx, rp.p, RendezvousProto)
	if err != nil {
		return nil, nil, err
	}
	defer s.Reset()

	r := pbio.NewDelimitedReader(s, inet.MessageSizeMax)
	w := pbio.NewDelimitedWriter(s)

	return discoverQuery(ns, limit, cookie, r, w)
}

func discoverQuery(ns string, limit int, cookie []byte, r pbio.Reader, w pbio.Writer) ([]Registration, []byte, error) {
	req := newDiscoverMessage(ns, limit, cookie)
	err := w.WriteMsg(req)
	if err != nil {
		return nil, nil, err
	}

	var res pb.Message
	err = r.ReadMsg(&res)
	if err != nil {
		return nil, nil, err
	}

	if res.GetType() != pb.Message_DISCOVER_RESPONSE {
		return nil, nil, fmt.Errorf("unexpected response: %s", res.GetType().String())
	}

	status := res.GetDiscoverResponse().GetStatus()
	if status != pb.Message_OK {
		return nil, nil, RendezvousError{Status: status, Text: res.GetDiscoverResponse().GetStatusText()}
	}

	regs := res.GetDiscoverResponse().GetRegistrations()
	result := make([]Registration, 0, len(regs))
	for _, reg := range regs {
		pi, err := pbToPeerRecord(reg.SignedPeerRecord)
		if err != nil {
			log.Errorf("Invalid peer info: %s", err.Error())
			continue
		}
		result = append(result, Registration{Peer: pi, Ns: reg.GetNs(), Ttl: int(reg.GetTtl())})
	}

	return result, res.GetDiscoverResponse().GetCookie(), nil
}

func (rp *rendezvousPoint) DiscoverAsync(ctx context.Context, ns string) (<-chan Registration, error) {
	s, err := rp.host.NewStream(ctx, rp.p, RendezvousProto)
	if err != nil {
		return nil, err
	}

	ch := make(chan Registration)
	go discoverAsync(ctx, ns, s, ch)
	return ch, nil
}

func discoverAsync(ctx context.Context, ns string, s inet.Stream, ch chan Registration) {
	defer s.Reset()
	defer close(ch)

	r := pbio.NewDelimitedReader(s, inet.MessageSizeMax)
	w := pbio.NewDelimitedWriter(s)

	const batch = 200

	var (
		cookie []byte
		regs   []Registration
		err    error
	)

	for {
		regs, cookie, err = discoverQuery(ns, batch, cookie, r, w)
		if err != nil {
			// TODO robust error recovery
			// - handle closed streams with backoff + new stream, preserving the cookie
			// - handle E_INVALID_COOKIE errors in that case to restart the discovery
			log.Errorf("Error in discovery [%s]: %s", ns, err.Error())
			return
		}

		for _, reg := range regs {
			select {
			case ch <- reg:
			case <-ctx.Done():
				return
			}
		}

		if len(regs) < batch {
			// TODO adaptive backoff for heavily loaded rendezvous points
			select {
			case <-time.After(DiscoverAsyncInterval):
			case <-ctx.Done():
				return
			}
		}
	}
}

func (rc *rendezvousClient) Discover(ctx context.Context, ns string, limit int, cookie []byte) ([]peer.AddrInfo, []byte, error) {
	regs, cookie, err := rc.rp.Discover(ctx, ns, limit, cookie)
	if err != nil {
		return nil, nil, err
	}

	pinfos := make([]peer.AddrInfo, len(regs))
	for i, reg := range regs {
		pinfos[i] = reg.Peer
	}

	return pinfos, cookie, nil
}

func (rc *rendezvousClient) DiscoverAsync(ctx context.Context, ns string) (<-chan peer.AddrInfo, error) {
	rch, err := rc.rp.DiscoverAsync(ctx, ns)
	if err != nil {
		return nil, err
	}

	ch := make(chan peer.AddrInfo)
	go discoverPeersAsync(ctx, rch, ch)
	return ch, nil
}

func discoverPeersAsync(ctx context.Context, rch <-chan Registration, ch chan peer.AddrInfo) {
	defer close(ch)
	for {
		select {
		case reg, ok := <-rch:
			if !ok {
				return
			}

			select {
			case ch <- reg.Peer:
			case <-ctx.Done():
				return
			}
		case <-ctx.Done():
			return
		}
	}
}
21
vendor/github.com/waku-org/go-libp2p-rendezvous/db/dbi.go
generated
vendored
Normal file
@@ -0,0 +1,21 @@
package dbi

import (
	"github.com/libp2p/go-libp2p/core/peer"
)

type RegistrationRecord struct {
	Id               peer.ID
	SignedPeerRecord []byte
	Ns               string
	Ttl              int
}

type DB interface {
	Close() error
	Register(p peer.ID, ns string, signedPeerRecord []byte, ttl int) (uint64, error)
	Unregister(p peer.ID, ns string) error
	CountRegistrations(p peer.ID) (int, error)
	Discover(ns string, cookie []byte, limit int) ([]RegistrationRecord, []byte, error)
	ValidCookie(ns string, cookie []byte) bool
}
156
vendor/github.com/waku-org/go-libp2p-rendezvous/discovery.go
generated
vendored
Normal file
@@ -0,0 +1,156 @@
package rendezvous

import (
	"context"
	"math"
	"math/rand"
	"sync"
	"time"

	"github.com/libp2p/go-libp2p/core/discovery"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

type rendezvousDiscovery struct {
	rp           RendezvousPoint
	peerCache    map[string]*discoveryCache
	peerCacheMux sync.RWMutex
	rng          *rand.Rand
	rngMux       sync.Mutex
}

type discoveryCache struct {
	recs   map[peer.ID]*peerRecord
	cookie []byte
	mux    sync.Mutex
}

type peerRecord struct {
	peer   peer.AddrInfo
	expire int64
}

func NewRendezvousDiscovery(host host.Host, rendezvousPeer peer.ID) discovery.Discovery {
	rp := NewRendezvousPoint(host, rendezvousPeer)
	return &rendezvousDiscovery{rp: rp, peerCache: make(map[string]*discoveryCache), rng: rand.New(rand.NewSource(rand.Int63()))}
}

func (c *rendezvousDiscovery) Advertise(ctx context.Context, ns string, opts ...discovery.Option) (time.Duration, error) {
	// Get options
	var options discovery.Options
	err := options.Apply(opts...)
	if err != nil {
		return 0, err
	}

	ttl := options.Ttl
	var ttlSeconds int

	if ttl == 0 {
		ttlSeconds = 7200
	} else {
		ttlSeconds = int(math.Round(ttl.Seconds()))
	}

	if rttl, err := c.rp.Register(ctx, ns, ttlSeconds); err != nil {
		return 0, err
	} else {
		return rttl, nil
	}
}

func (c *rendezvousDiscovery) FindPeers(ctx context.Context, ns string, opts ...discovery.Option) (<-chan peer.AddrInfo, error) {
	// Get options
	var options discovery.Options
	err := options.Apply(opts...)
	if err != nil {
		return nil, err
	}

	const maxLimit = 1000
	limit := options.Limit
	if limit == 0 || limit > maxLimit {
		limit = maxLimit
	}

	// Get cached peers
	var cache *discoveryCache

	c.peerCacheMux.RLock()
	cache, ok := c.peerCache[ns]
	c.peerCacheMux.RUnlock()
	if !ok {
		c.peerCacheMux.Lock()
		cache, ok = c.peerCache[ns]
		if !ok {
			cache = &discoveryCache{recs: make(map[peer.ID]*peerRecord)}
			c.peerCache[ns] = cache
		}
		c.peerCacheMux.Unlock()
	}

	cache.mux.Lock()
	defer cache.mux.Unlock()

	// Remove all expired entries from cache
	currentTime := time.Now().Unix()
	newCacheSize := len(cache.recs)

	for p := range cache.recs {
		rec := cache.recs[p]
		if rec.expire < currentTime {
			newCacheSize--
			delete(cache.recs, p)
		}
	}

	cookie := cache.cookie

	// Discover new records if we don't have enough
	if newCacheSize < limit {
		// TODO: Should we return error even if we have valid cached results?
		var regs []Registration
		var newCookie []byte
		if regs, newCookie, err = c.rp.Discover(ctx, ns, limit, cookie); err == nil {
			for _, reg := range regs {
				rec := &peerRecord{peer: reg.Peer, expire: int64(reg.Ttl) + currentTime}
				cache.recs[rec.peer.ID] = rec
			}
			cache.cookie = newCookie
		}
	}

	// Randomize and fill channel with available records
	count := len(cache.recs)
	if limit < count {
		count = limit
	}

	chPeer := make(chan peer.AddrInfo, count)

	c.rngMux.Lock()
	perm := c.rng.Perm(len(cache.recs))[0:count]
	c.rngMux.Unlock()

	permSet := make(map[int]int)
	for i, v := range perm {
		permSet[v] = i
	}

	sendLst := make([]*peer.AddrInfo, count)
	iter := 0
	for k := range cache.recs {
		if sendIndex, ok := permSet[iter]; ok {
			sendLst[sendIndex] = &cache.recs[k].peer
		}
		iter++
	}

	for _, send := range sendLst {
		chPeer <- *send
	}

	close(chPeer)
	return chPeer, err
}
32
vendor/github.com/waku-org/go-libp2p-rendezvous/options.go
generated
vendored
Normal file
@@ -0,0 +1,32 @@
package rendezvous

import (
	ma "github.com/multiformats/go-multiaddr"
)

type RendezvousPointOption func(cfg *rendezvousPointConfig)

type AddrsFactory func(addrs []ma.Multiaddr) []ma.Multiaddr

var DefaultAddrFactory = func(addrs []ma.Multiaddr) []ma.Multiaddr { return addrs }

var defaultRendezvousPointConfig = rendezvousPointConfig{
	AddrsFactory: DefaultAddrFactory,
}

type rendezvousPointConfig struct {
	AddrsFactory AddrsFactory
}

func (cfg *rendezvousPointConfig) apply(opts ...RendezvousPointOption) {
	for _, opt := range opts {
		opt(cfg)
	}
}

// ClientWithAddrsFactory configures the rendezvous point to use the given address factory.
func ClientWithAddrsFactory(factory AddrsFactory) RendezvousPointOption {
	return func(cfg *rendezvousPointConfig) {
		cfg.AddrsFactory = factory
	}
}
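The options file above uses Go's functional-options pattern: `NewRendezvousPoint` starts from `defaultRendezvousPointConfig` and applies each `RendezvousPointOption` closure in order. A minimal standalone sketch of the same idiom, with hypothetical names (`config`, `withPrefix`) rather than the real ones:

```go
package main

import "fmt"

// config stands in for rendezvousPointConfig above.
type config struct{ prefix string }

// option mirrors RendezvousPointOption: a closure that mutates the config.
type option func(*config)

// withPrefix plays the role of ClientWithAddrsFactory: it returns an
// option that overrides one field of the default config.
func withPrefix(p string) option {
	return func(c *config) { c.prefix = p }
}

// apply matches the apply method above: options run in order, so later
// options win when they touch the same field.
func (c *config) apply(opts ...option) {
	for _, opt := range opts {
		opt(c)
	}
}

func main() {
	cfg := config{prefix: "default"}
	cfg.apply(withPrefix("custom"))
	fmt.Println(cfg.prefix)
}
```

The pattern keeps the constructor signature stable as new knobs are added: callers who want defaults pass no options at all.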
3
vendor/github.com/waku-org/go-libp2p-rendezvous/pb/generate.go
generated
vendored
Normal file
@@ -0,0 +1,3 @@
package rendezvous_pb

//go:generate protoc -I. --proto_path=./ --go_opt=paths=source_relative --go_opt=Mrendezvous.proto=github.com/waku-org/go-libp2p-rendezvous/rendezvous_pb --go_out=. ./rendezvous.proto
782
vendor/github.com/waku-org/go-libp2p-rendezvous/pb/rendezvous.pb.go
generated
vendored
Normal file
@@ -0,0 +1,782 @@
|
||||
// Code generated by protoc-gen-go. DO NOT EDIT.
|
||||
// versions:
|
||||
// protoc-gen-go v1.26.0
|
||||
// protoc v3.21.12
|
||||
// source: rendezvous.proto
|
||||
|
||||
package rendezvous_pb
|
||||
|
||||
import (
|
||||
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
|
||||
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
|
||||
reflect "reflect"
|
||||
sync "sync"
|
||||
)
|
||||
|
||||
const (
|
||||
// Verify that this generated code is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
|
||||
// Verify that runtime/protoimpl is sufficiently up-to-date.
|
||||
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
|
||||
)
|
||||
|
||||
type Message_MessageType int32
|
||||
|
||||
const (
|
||||
Message_REGISTER Message_MessageType = 0
|
||||
Message_REGISTER_RESPONSE Message_MessageType = 1
|
||||
Message_UNREGISTER Message_MessageType = 2
|
||||
Message_DISCOVER Message_MessageType = 3
|
||||
Message_DISCOVER_RESPONSE Message_MessageType = 4
|
||||
)
|
||||
|
||||
// Enum value maps for Message_MessageType.
|
||||
var (
|
||||
Message_MessageType_name = map[int32]string{
|
||||
0: "REGISTER",
|
||||
1: "REGISTER_RESPONSE",
|
||||
2: "UNREGISTER",
|
||||
3: "DISCOVER",
|
||||
4: "DISCOVER_RESPONSE",
|
||||
}
|
||||
Message_MessageType_value = map[string]int32{
|
||||
"REGISTER": 0,
|
||||
"REGISTER_RESPONSE": 1,
|
||||
"UNREGISTER": 2,
|
||||
"DISCOVER": 3,
|
||||
"DISCOVER_RESPONSE": 4,
|
||||
}
|
||||
)
|
||||
|
||||
func (x Message_MessageType) Enum() *Message_MessageType {
|
||||
p := new(Message_MessageType)
|
||||
*p = x
|
||||
return p
|
||||
}
|
||||
|
||||
func (x Message_MessageType) String() string {
|
||||
	return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}

func (Message_MessageType) Descriptor() protoreflect.EnumDescriptor {
	return file_rendezvous_proto_enumTypes[0].Descriptor()
}

func (Message_MessageType) Type() protoreflect.EnumType {
	return &file_rendezvous_proto_enumTypes[0]
}

func (x Message_MessageType) Number() protoreflect.EnumNumber {
	return protoreflect.EnumNumber(x)
}

// Deprecated: Do not use.
func (x *Message_MessageType) UnmarshalJSON(b []byte) error {
	num, err := protoimpl.X.UnmarshalJSONEnum(x.Descriptor(), b)
	if err != nil {
		return err
	}
	*x = Message_MessageType(num)
	return nil
}

// Deprecated: Use Message_MessageType.Descriptor instead.
func (Message_MessageType) EnumDescriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0, 0}
}

type Message_ResponseStatus int32

const (
	Message_OK                           Message_ResponseStatus = 0
	Message_E_INVALID_NAMESPACE          Message_ResponseStatus = 100
	Message_E_INVALID_SIGNED_PEER_RECORD Message_ResponseStatus = 101
	Message_E_INVALID_TTL                Message_ResponseStatus = 102
	Message_E_INVALID_COOKIE             Message_ResponseStatus = 103
	Message_E_NOT_AUTHORIZED             Message_ResponseStatus = 200
	Message_E_INTERNAL_ERROR             Message_ResponseStatus = 300
	Message_E_UNAVAILABLE                Message_ResponseStatus = 400
)

// Enum value maps for Message_ResponseStatus.
var (
	Message_ResponseStatus_name = map[int32]string{
		0:   "OK",
		100: "E_INVALID_NAMESPACE",
		101: "E_INVALID_SIGNED_PEER_RECORD",
		102: "E_INVALID_TTL",
		103: "E_INVALID_COOKIE",
		200: "E_NOT_AUTHORIZED",
		300: "E_INTERNAL_ERROR",
		400: "E_UNAVAILABLE",
	}
	Message_ResponseStatus_value = map[string]int32{
		"OK":                           0,
		"E_INVALID_NAMESPACE":          100,
		"E_INVALID_SIGNED_PEER_RECORD": 101,
		"E_INVALID_TTL":                102,
		"E_INVALID_COOKIE":             103,
		"E_NOT_AUTHORIZED":             200,
		"E_INTERNAL_ERROR":             300,
		"E_UNAVAILABLE":                400,
	}
)

func (x Message_ResponseStatus) Enum() *Message_ResponseStatus {
	p := new(Message_ResponseStatus)
	*p = x
	return p
}

func (x Message_ResponseStatus) String() string {
	return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}

func (Message_ResponseStatus) Descriptor() protoreflect.EnumDescriptor {
	return file_rendezvous_proto_enumTypes[1].Descriptor()
}

func (Message_ResponseStatus) Type() protoreflect.EnumType {
	return &file_rendezvous_proto_enumTypes[1]
}

func (x Message_ResponseStatus) Number() protoreflect.EnumNumber {
	return protoreflect.EnumNumber(x)
}

// Deprecated: Do not use.
func (x *Message_ResponseStatus) UnmarshalJSON(b []byte) error {
	num, err := protoimpl.X.UnmarshalJSONEnum(x.Descriptor(), b)
	if err != nil {
		return err
	}
	*x = Message_ResponseStatus(num)
	return nil
}

// Deprecated: Use Message_ResponseStatus.Descriptor instead.
func (Message_ResponseStatus) EnumDescriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0, 1}
}
type Message struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Type             *Message_MessageType      `protobuf:"varint,1,opt,name=type,enum=rendezvous.pb.Message_MessageType" json:"type,omitempty"`
	Register         *Message_Register         `protobuf:"bytes,2,opt,name=register" json:"register,omitempty"`
	RegisterResponse *Message_RegisterResponse `protobuf:"bytes,3,opt,name=registerResponse" json:"registerResponse,omitempty"`
	Unregister       *Message_Unregister       `protobuf:"bytes,4,opt,name=unregister" json:"unregister,omitempty"`
	Discover         *Message_Discover         `protobuf:"bytes,5,opt,name=discover" json:"discover,omitempty"`
	DiscoverResponse *Message_DiscoverResponse `protobuf:"bytes,6,opt,name=discoverResponse" json:"discoverResponse,omitempty"`
}

func (x *Message) Reset() {
	*x = Message{}
	if protoimpl.UnsafeEnabled {
		mi := &file_rendezvous_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *Message) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*Message) ProtoMessage() {}

func (x *Message) ProtoReflect() protoreflect.Message {
	mi := &file_rendezvous_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use Message.ProtoReflect.Descriptor instead.
func (*Message) Descriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0}
}

func (x *Message) GetType() Message_MessageType {
	if x != nil && x.Type != nil {
		return *x.Type
	}
	return Message_REGISTER
}

func (x *Message) GetRegister() *Message_Register {
	if x != nil {
		return x.Register
	}
	return nil
}

func (x *Message) GetRegisterResponse() *Message_RegisterResponse {
	if x != nil {
		return x.RegisterResponse
	}
	return nil
}

func (x *Message) GetUnregister() *Message_Unregister {
	if x != nil {
		return x.Unregister
	}
	return nil
}

func (x *Message) GetDiscover() *Message_Discover {
	if x != nil {
		return x.Discover
	}
	return nil
}

func (x *Message) GetDiscoverResponse() *Message_DiscoverResponse {
	if x != nil {
		return x.DiscoverResponse
	}
	return nil
}

type Message_Register struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Ns               *string `protobuf:"bytes,1,opt,name=ns" json:"ns,omitempty"`
	SignedPeerRecord []byte  `protobuf:"bytes,2,opt,name=signedPeerRecord" json:"signedPeerRecord,omitempty"`
	Ttl              *uint64 `protobuf:"varint,3,opt,name=ttl" json:"ttl,omitempty"` // in seconds
}

func (x *Message_Register) Reset() {
	*x = Message_Register{}
	if protoimpl.UnsafeEnabled {
		mi := &file_rendezvous_proto_msgTypes[1]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *Message_Register) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*Message_Register) ProtoMessage() {}

func (x *Message_Register) ProtoReflect() protoreflect.Message {
	mi := &file_rendezvous_proto_msgTypes[1]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use Message_Register.ProtoReflect.Descriptor instead.
func (*Message_Register) Descriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0, 0}
}

func (x *Message_Register) GetNs() string {
	if x != nil && x.Ns != nil {
		return *x.Ns
	}
	return ""
}

func (x *Message_Register) GetSignedPeerRecord() []byte {
	if x != nil {
		return x.SignedPeerRecord
	}
	return nil
}

func (x *Message_Register) GetTtl() uint64 {
	if x != nil && x.Ttl != nil {
		return *x.Ttl
	}
	return 0
}

type Message_RegisterResponse struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Status     *Message_ResponseStatus `protobuf:"varint,1,opt,name=status,enum=rendezvous.pb.Message_ResponseStatus" json:"status,omitempty"`
	StatusText *string                 `protobuf:"bytes,2,opt,name=statusText" json:"statusText,omitempty"`
	Ttl        *uint64                 `protobuf:"varint,3,opt,name=ttl" json:"ttl,omitempty"` // in seconds
}

func (x *Message_RegisterResponse) Reset() {
	*x = Message_RegisterResponse{}
	if protoimpl.UnsafeEnabled {
		mi := &file_rendezvous_proto_msgTypes[2]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *Message_RegisterResponse) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*Message_RegisterResponse) ProtoMessage() {}

func (x *Message_RegisterResponse) ProtoReflect() protoreflect.Message {
	mi := &file_rendezvous_proto_msgTypes[2]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use Message_RegisterResponse.ProtoReflect.Descriptor instead.
func (*Message_RegisterResponse) Descriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0, 1}
}

func (x *Message_RegisterResponse) GetStatus() Message_ResponseStatus {
	if x != nil && x.Status != nil {
		return *x.Status
	}
	return Message_OK
}

func (x *Message_RegisterResponse) GetStatusText() string {
	if x != nil && x.StatusText != nil {
		return *x.StatusText
	}
	return ""
}

func (x *Message_RegisterResponse) GetTtl() uint64 {
	if x != nil && x.Ttl != nil {
		return *x.Ttl
	}
	return 0
}

type Message_Unregister struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Ns *string `protobuf:"bytes,1,opt,name=ns" json:"ns,omitempty"` // optional bytes id = 2; deprecated as per https://github.com/libp2p/specs/issues/335
}

func (x *Message_Unregister) Reset() {
	*x = Message_Unregister{}
	if protoimpl.UnsafeEnabled {
		mi := &file_rendezvous_proto_msgTypes[3]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *Message_Unregister) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*Message_Unregister) ProtoMessage() {}

func (x *Message_Unregister) ProtoReflect() protoreflect.Message {
	mi := &file_rendezvous_proto_msgTypes[3]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use Message_Unregister.ProtoReflect.Descriptor instead.
func (*Message_Unregister) Descriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0, 2}
}

func (x *Message_Unregister) GetNs() string {
	if x != nil && x.Ns != nil {
		return *x.Ns
	}
	return ""
}

type Message_Discover struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Ns     *string `protobuf:"bytes,1,opt,name=ns" json:"ns,omitempty"`
	Limit  *uint64 `protobuf:"varint,2,opt,name=limit" json:"limit,omitempty"`
	Cookie []byte  `protobuf:"bytes,3,opt,name=cookie" json:"cookie,omitempty"`
}

func (x *Message_Discover) Reset() {
	*x = Message_Discover{}
	if protoimpl.UnsafeEnabled {
		mi := &file_rendezvous_proto_msgTypes[4]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *Message_Discover) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*Message_Discover) ProtoMessage() {}

func (x *Message_Discover) ProtoReflect() protoreflect.Message {
	mi := &file_rendezvous_proto_msgTypes[4]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use Message_Discover.ProtoReflect.Descriptor instead.
func (*Message_Discover) Descriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0, 3}
}

func (x *Message_Discover) GetNs() string {
	if x != nil && x.Ns != nil {
		return *x.Ns
	}
	return ""
}

func (x *Message_Discover) GetLimit() uint64 {
	if x != nil && x.Limit != nil {
		return *x.Limit
	}
	return 0
}

func (x *Message_Discover) GetCookie() []byte {
	if x != nil {
		return x.Cookie
	}
	return nil
}

type Message_DiscoverResponse struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Registrations []*Message_Register     `protobuf:"bytes,1,rep,name=registrations" json:"registrations,omitempty"`
	Cookie        []byte                  `protobuf:"bytes,2,opt,name=cookie" json:"cookie,omitempty"`
	Status        *Message_ResponseStatus `protobuf:"varint,3,opt,name=status,enum=rendezvous.pb.Message_ResponseStatus" json:"status,omitempty"`
	StatusText    *string                 `protobuf:"bytes,4,opt,name=statusText" json:"statusText,omitempty"`
}

func (x *Message_DiscoverResponse) Reset() {
	*x = Message_DiscoverResponse{}
	if protoimpl.UnsafeEnabled {
		mi := &file_rendezvous_proto_msgTypes[5]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *Message_DiscoverResponse) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*Message_DiscoverResponse) ProtoMessage() {}

func (x *Message_DiscoverResponse) ProtoReflect() protoreflect.Message {
	mi := &file_rendezvous_proto_msgTypes[5]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use Message_DiscoverResponse.ProtoReflect.Descriptor instead.
func (*Message_DiscoverResponse) Descriptor() ([]byte, []int) {
	return file_rendezvous_proto_rawDescGZIP(), []int{0, 4}
}

func (x *Message_DiscoverResponse) GetRegistrations() []*Message_Register {
	if x != nil {
		return x.Registrations
	}
	return nil
}

func (x *Message_DiscoverResponse) GetCookie() []byte {
	if x != nil {
		return x.Cookie
	}
	return nil
}

func (x *Message_DiscoverResponse) GetStatus() Message_ResponseStatus {
	if x != nil && x.Status != nil {
		return *x.Status
	}
	return Message_OK
}

func (x *Message_DiscoverResponse) GetStatusText() string {
	if x != nil && x.StatusText != nil {
		return *x.StatusText
	}
	return ""
}

var File_rendezvous_proto protoreflect.FileDescriptor
var file_rendezvous_proto_rawDesc = []byte{
	0x0a, 0x10, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x72, 0x6f,
	0x74, 0x6f, 0x12, 0x0d, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70,
	0x62, 0x22, 0xed, 0x09, 0x0a, 0x07, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x36, 0x0a,
	0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x22, 0x2e, 0x72, 0x65,
	0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62, 0x2e, 0x4d, 0x65, 0x73, 0x73,
	0x61, 0x67, 0x65, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x54, 0x79, 0x70, 0x65, 0x52,
	0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x3b, 0x0a, 0x08, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65,
	0x72, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x7a,
	0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e,
	0x52, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x52, 0x08, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74,
	0x65, 0x72, 0x12, 0x53, 0x0a, 0x10, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x52, 0x65,
	0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x27, 0x2e, 0x72,
	0x65, 0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62, 0x2e, 0x4d, 0x65, 0x73,
	0x73, 0x61, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x52, 0x65, 0x73,
	0x70, 0x6f, 0x6e, 0x73, 0x65, 0x52, 0x10, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x52,
	0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x41, 0x0a, 0x0a, 0x75, 0x6e, 0x72, 0x65, 0x67,
	0x69, 0x73, 0x74, 0x65, 0x72, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x72, 0x65,
	0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62, 0x2e, 0x4d, 0x65, 0x73, 0x73,
	0x61, 0x67, 0x65, 0x2e, 0x55, 0x6e, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x52, 0x0a,
	0x75, 0x6e, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x12, 0x3b, 0x0a, 0x08, 0x64, 0x69,
	0x73, 0x63, 0x6f, 0x76, 0x65, 0x72, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x72,
	0x65, 0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62, 0x2e, 0x4d, 0x65, 0x73,
	0x73, 0x61, 0x67, 0x65, 0x2e, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x76, 0x65, 0x72, 0x52, 0x08, 0x64,
	0x69, 0x73, 0x63, 0x6f, 0x76, 0x65, 0x72, 0x12, 0x53, 0x0a, 0x10, 0x64, 0x69, 0x73, 0x63, 0x6f,
	0x76, 0x65, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28,
	0x0b, 0x32, 0x27, 0x2e, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70,
	0x62, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x76,
	0x65, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x52, 0x10, 0x64, 0x69, 0x73, 0x63,
	0x6f, 0x76, 0x65, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x1a, 0x58, 0x0a, 0x08,
	0x52, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x12, 0x0e, 0x0a, 0x02, 0x6e, 0x73, 0x18, 0x01,
	0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x6e, 0x73, 0x12, 0x2a, 0x0a, 0x10, 0x73, 0x69, 0x67, 0x6e,
	0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x52, 0x65, 0x63, 0x6f, 0x72, 0x64, 0x18, 0x02, 0x20, 0x01,
	0x28, 0x0c, 0x52, 0x10, 0x73, 0x69, 0x67, 0x6e, 0x65, 0x64, 0x50, 0x65, 0x65, 0x72, 0x52, 0x65,
	0x63, 0x6f, 0x72, 0x64, 0x12, 0x10, 0x0a, 0x03, 0x74, 0x74, 0x6c, 0x18, 0x03, 0x20, 0x01, 0x28,
	0x04, 0x52, 0x03, 0x74, 0x74, 0x6c, 0x1a, 0x83, 0x01, 0x0a, 0x10, 0x52, 0x65, 0x67, 0x69, 0x73,
	0x74, 0x65, 0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x3d, 0x0a, 0x06, 0x73,
	0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x25, 0x2e, 0x72, 0x65,
	0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62, 0x2e, 0x4d, 0x65, 0x73, 0x73,
	0x61, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x53, 0x74, 0x61, 0x74,
	0x75, 0x73, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x1e, 0x0a, 0x0a, 0x73, 0x74,
	0x61, 0x74, 0x75, 0x73, 0x54, 0x65, 0x78, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a,
	0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x54, 0x65, 0x78, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x74, 0x74,
	0x6c, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x74, 0x74, 0x6c, 0x1a, 0x1c, 0x0a, 0x0a,
	0x55, 0x6e, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x12, 0x0e, 0x0a, 0x02, 0x6e, 0x73,
	0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x6e, 0x73, 0x1a, 0x48, 0x0a, 0x08, 0x44, 0x69,
	0x73, 0x63, 0x6f, 0x76, 0x65, 0x72, 0x12, 0x0e, 0x0a, 0x02, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x01,
	0x28, 0x09, 0x52, 0x02, 0x6e, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18,
	0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x16, 0x0a, 0x06,
	0x63, 0x6f, 0x6f, 0x6b, 0x69, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x63, 0x6f,
	0x6f, 0x6b, 0x69, 0x65, 0x1a, 0xd0, 0x01, 0x0a, 0x10, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x76, 0x65,
	0x72, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x45, 0x0a, 0x0d, 0x72, 0x65, 0x67,
	0x69, 0x73, 0x74, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b,
	0x32, 0x1f, 0x2e, 0x72, 0x65, 0x6e, 0x64, 0x65, 0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62,
	0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65,
	0x72, 0x52, 0x0d, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73,
	0x12, 0x16, 0x0a, 0x06, 0x63, 0x6f, 0x6f, 0x6b, 0x69, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c,
	0x52, 0x06, 0x63, 0x6f, 0x6f, 0x6b, 0x69, 0x65, 0x12, 0x3d, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74,
	0x75, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x25, 0x2e, 0x72, 0x65, 0x6e, 0x64, 0x65,
	0x7a, 0x76, 0x6f, 0x75, 0x73, 0x2e, 0x70, 0x62, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
	0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52,
	0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x1e, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x74, 0x75,
	0x73, 0x54, 0x65, 0x78, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x73, 0x74, 0x61,
	0x74, 0x75, 0x73, 0x54, 0x65, 0x78, 0x74, 0x22, 0x67, 0x0a, 0x0b, 0x4d, 0x65, 0x73, 0x73, 0x61,
	0x67, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0c, 0x0a, 0x08, 0x52, 0x45, 0x47, 0x49, 0x53, 0x54,
	0x45, 0x52, 0x10, 0x00, 0x12, 0x15, 0x0a, 0x11, 0x52, 0x45, 0x47, 0x49, 0x53, 0x54, 0x45, 0x52,
	0x5f, 0x52, 0x45, 0x53, 0x50, 0x4f, 0x4e, 0x53, 0x45, 0x10, 0x01, 0x12, 0x0e, 0x0a, 0x0a, 0x55,
	0x4e, 0x52, 0x45, 0x47, 0x49, 0x53, 0x54, 0x45, 0x52, 0x10, 0x02, 0x12, 0x0c, 0x0a, 0x08, 0x44,
	0x49, 0x53, 0x43, 0x4f, 0x56, 0x45, 0x52, 0x10, 0x03, 0x12, 0x15, 0x0a, 0x11, 0x44, 0x49, 0x53,
	0x43, 0x4f, 0x56, 0x45, 0x52, 0x5f, 0x52, 0x45, 0x53, 0x50, 0x4f, 0x4e, 0x53, 0x45, 0x10, 0x04,
	0x22, 0xbe, 0x01, 0x0a, 0x0e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x53, 0x74, 0x61,
	0x74, 0x75, 0x73, 0x12, 0x06, 0x0a, 0x02, 0x4f, 0x4b, 0x10, 0x00, 0x12, 0x17, 0x0a, 0x13, 0x45,
	0x5f, 0x49, 0x4e, 0x56, 0x41, 0x4c, 0x49, 0x44, 0x5f, 0x4e, 0x41, 0x4d, 0x45, 0x53, 0x50, 0x41,
	0x43, 0x45, 0x10, 0x64, 0x12, 0x20, 0x0a, 0x1c, 0x45, 0x5f, 0x49, 0x4e, 0x56, 0x41, 0x4c, 0x49,
	0x44, 0x5f, 0x53, 0x49, 0x47, 0x4e, 0x45, 0x44, 0x5f, 0x50, 0x45, 0x45, 0x52, 0x5f, 0x52, 0x45,
	0x43, 0x4f, 0x52, 0x44, 0x10, 0x65, 0x12, 0x11, 0x0a, 0x0d, 0x45, 0x5f, 0x49, 0x4e, 0x56, 0x41,
	0x4c, 0x49, 0x44, 0x5f, 0x54, 0x54, 0x4c, 0x10, 0x66, 0x12, 0x14, 0x0a, 0x10, 0x45, 0x5f, 0x49,
	0x4e, 0x56, 0x41, 0x4c, 0x49, 0x44, 0x5f, 0x43, 0x4f, 0x4f, 0x4b, 0x49, 0x45, 0x10, 0x67, 0x12,
	0x15, 0x0a, 0x10, 0x45, 0x5f, 0x4e, 0x4f, 0x54, 0x5f, 0x41, 0x55, 0x54, 0x48, 0x4f, 0x52, 0x49,
	0x5a, 0x45, 0x44, 0x10, 0xc8, 0x01, 0x12, 0x15, 0x0a, 0x10, 0x45, 0x5f, 0x49, 0x4e, 0x54, 0x45,
	0x52, 0x4e, 0x41, 0x4c, 0x5f, 0x45, 0x52, 0x52, 0x4f, 0x52, 0x10, 0xac, 0x02, 0x12, 0x12, 0x0a,
	0x0d, 0x45, 0x5f, 0x55, 0x4e, 0x41, 0x56, 0x41, 0x49, 0x4c, 0x41, 0x42, 0x4c, 0x45, 0x10, 0x90,
	0x03,
}
var (
	file_rendezvous_proto_rawDescOnce sync.Once
	file_rendezvous_proto_rawDescData = file_rendezvous_proto_rawDesc
)

func file_rendezvous_proto_rawDescGZIP() []byte {
	file_rendezvous_proto_rawDescOnce.Do(func() {
		file_rendezvous_proto_rawDescData = protoimpl.X.CompressGZIP(file_rendezvous_proto_rawDescData)
	})
	return file_rendezvous_proto_rawDescData
}

var file_rendezvous_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
var file_rendezvous_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
var file_rendezvous_proto_goTypes = []interface{}{
	(Message_MessageType)(0),         // 0: rendezvous.pb.Message.MessageType
	(Message_ResponseStatus)(0),      // 1: rendezvous.pb.Message.ResponseStatus
	(*Message)(nil),                  // 2: rendezvous.pb.Message
	(*Message_Register)(nil),         // 3: rendezvous.pb.Message.Register
	(*Message_RegisterResponse)(nil), // 4: rendezvous.pb.Message.RegisterResponse
	(*Message_Unregister)(nil),       // 5: rendezvous.pb.Message.Unregister
	(*Message_Discover)(nil),         // 6: rendezvous.pb.Message.Discover
	(*Message_DiscoverResponse)(nil), // 7: rendezvous.pb.Message.DiscoverResponse
}
var file_rendezvous_proto_depIdxs = []int32{
	0, // 0: rendezvous.pb.Message.type:type_name -> rendezvous.pb.Message.MessageType
	3, // 1: rendezvous.pb.Message.register:type_name -> rendezvous.pb.Message.Register
	4, // 2: rendezvous.pb.Message.registerResponse:type_name -> rendezvous.pb.Message.RegisterResponse
	5, // 3: rendezvous.pb.Message.unregister:type_name -> rendezvous.pb.Message.Unregister
	6, // 4: rendezvous.pb.Message.discover:type_name -> rendezvous.pb.Message.Discover
	7, // 5: rendezvous.pb.Message.discoverResponse:type_name -> rendezvous.pb.Message.DiscoverResponse
	1, // 6: rendezvous.pb.Message.RegisterResponse.status:type_name -> rendezvous.pb.Message.ResponseStatus
	3, // 7: rendezvous.pb.Message.DiscoverResponse.registrations:type_name -> rendezvous.pb.Message.Register
	1, // 8: rendezvous.pb.Message.DiscoverResponse.status:type_name -> rendezvous.pb.Message.ResponseStatus
	9, // [9:9] is the sub-list for method output_type
	9, // [9:9] is the sub-list for method input_type
	9, // [9:9] is the sub-list for extension type_name
	9, // [9:9] is the sub-list for extension extendee
	0, // [0:9] is the sub-list for field type_name
}

func init() { file_rendezvous_proto_init() }
func file_rendezvous_proto_init() {
	if File_rendezvous_proto != nil {
		return
	}
	if !protoimpl.UnsafeEnabled {
		file_rendezvous_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Message); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_rendezvous_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Message_Register); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_rendezvous_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Message_RegisterResponse); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_rendezvous_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Message_Unregister); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_rendezvous_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Message_Discover); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_rendezvous_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*Message_DiscoverResponse); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
	}
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: file_rendezvous_proto_rawDesc,
			NumEnums:      2,
			NumMessages:   6,
			NumExtensions: 0,
			NumServices:   0,
		},
		GoTypes:           file_rendezvous_proto_goTypes,
		DependencyIndexes: file_rendezvous_proto_depIdxs,
		EnumInfos:         file_rendezvous_proto_enumTypes,
		MessageInfos:      file_rendezvous_proto_msgTypes,
	}.Build()
	File_rendezvous_proto = out.File
	file_rendezvous_proto_rawDesc = nil
	file_rendezvous_proto_goTypes = nil
	file_rendezvous_proto_depIdxs = nil
}
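The generated enum value maps above (`Message_ResponseStatus_name` / `Message_ResponseStatus_value`) are plain Go maps, so status codes received off the wire can be rendered as names without any reflection. The sketch below illustrates that lookup pattern with a local copy of the name map (the real package's map is used identically); `statusText` is a hypothetical helper, not part of the generated API.

```go
package main

import "fmt"

// Local copy of the generated Message_ResponseStatus_name map,
// reproduced here only to illustrate the lookup pattern.
var responseStatusName = map[int32]string{
	0:   "OK",
	100: "E_INVALID_NAMESPACE",
	101: "E_INVALID_SIGNED_PEER_RECORD",
	102: "E_INVALID_TTL",
	103: "E_INVALID_COOKIE",
	200: "E_NOT_AUTHORIZED",
	300: "E_INTERNAL_ERROR",
	400: "E_UNAVAILABLE",
}

// statusText maps a wire status code to a readable name,
// falling back to the raw number for unknown codes.
func statusText(code int32) string {
	if name, ok := responseStatusName[code]; ok {
		return name
	}
	return fmt.Sprintf("UNKNOWN(%d)", code)
}

func main() {
	fmt.Println(statusText(100)) // E_INVALID_NAMESPACE
	fmt.Println(statusText(300)) // E_INTERNAL_ERROR
	fmt.Println(statusText(42))  // UNKNOWN(42)
}
```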
61
vendor/github.com/waku-org/go-libp2p-rendezvous/pb/rendezvous.proto
generated
vendored
Normal file
@@ -0,0 +1,61 @@
syntax = "proto2";

package rendezvous.pb;

message Message {
	enum MessageType {
		REGISTER = 0;
		REGISTER_RESPONSE = 1;
		UNREGISTER = 2;
		DISCOVER = 3;
		DISCOVER_RESPONSE = 4;
	}

	enum ResponseStatus {
		OK = 0;
		E_INVALID_NAMESPACE = 100;
		E_INVALID_SIGNED_PEER_RECORD = 101;
		E_INVALID_TTL = 102;
		E_INVALID_COOKIE = 103;
		E_NOT_AUTHORIZED = 200;
		E_INTERNAL_ERROR = 300;
		E_UNAVAILABLE = 400;
	}

	message Register {
		optional string ns = 1;
		optional bytes signedPeerRecord = 2;
		optional uint64 ttl = 3; // in seconds
	}

	message RegisterResponse {
		optional ResponseStatus status = 1;
		optional string statusText = 2;
		optional uint64 ttl = 3; // in seconds
	}

	message Unregister {
		optional string ns = 1;
		// optional bytes id = 2; deprecated as per https://github.com/libp2p/specs/issues/335
	}

	message Discover {
		optional string ns = 1;
		optional uint64 limit = 2;
		optional bytes cookie = 3;
	}

	message DiscoverResponse {
		repeated Register registrations = 1;
		optional bytes cookie = 2;
		optional ResponseStatus status = 3;
		optional string statusText = 4;
	}

	optional MessageType type = 1;
	optional Register register = 2;
	optional RegisterResponse registerResponse = 3;
	optional Unregister unregister = 4;
	optional Discover discover = 5;
	optional DiscoverResponse discoverResponse = 6;
}
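On the wire, each field declared in the .proto above is preceded by a key byte computed as `(field_number << 3) | wire_type`, per the standard protobuf encoding. The sketch below computes a couple of those keys for fields of this schema; `tagByte` and the wire-type constants are illustrative names, not part of this package.

```go
package main

import "fmt"

// Wire types from the protobuf encoding spec.
const (
	wireVarint = 0 // uint64 ttl, uint64 limit, enum fields
	wireBytes  = 2 // string ns, bytes cookie, embedded messages
)

// tagByte computes the key preceding a field on the wire:
// (field_number << 3) | wire_type. Valid for field numbers < 16,
// which covers every field in this schema.
func tagByte(fieldNumber, wireType int) byte {
	return byte(fieldNumber<<3 | wireType)
}

func main() {
	fmt.Printf("0x%02x\n", tagByte(1, wireBytes))  // `ns`, field 1, length-delimited
	fmt.Printf("0x%02x\n", tagByte(3, wireVarint)) // `ttl`, field 3, varint
}
```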
160
vendor/github.com/waku-org/go-libp2p-rendezvous/proto.go
generated
vendored
Normal file
@@ -0,0 +1,160 @@
package rendezvous

import (
	"errors"
	"fmt"
	"time"

	db "github.com/waku-org/go-libp2p-rendezvous/db"
	pb "github.com/waku-org/go-libp2p-rendezvous/pb"

	logging "github.com/ipfs/go-log/v2"
	crypto "github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
	"github.com/libp2p/go-libp2p/core/record"
)

var log = logging.Logger("rendezvous")

const (
	RendezvousProto = protocol.ID("/rendezvous/1.0.0")

	DefaultTTL = 2 * 3600 // 2hr
)

type RendezvousError struct {
	Status pb.Message_ResponseStatus
	Text   string
}

func (e RendezvousError) Error() string {
	return fmt.Sprintf("Rendezvous error: %s (%s)", e.Text, e.Status.String())
}

func NewRegisterMessage(privKey crypto.PrivKey, ns string, pi peer.AddrInfo, ttl int) (*pb.Message, error) {
	return newRegisterMessage(privKey, ns, pi, ttl)
}

func newRegisterMessage(privKey crypto.PrivKey, ns string, pi peer.AddrInfo, ttl int) (*pb.Message, error) {
	msg := new(pb.Message)
	msg.Type = pb.Message_REGISTER.Enum()
	msg.Register = new(pb.Message_Register)
	if ns != "" {
		msg.Register.Ns = &ns
	}
	if ttl > 0 {
		ttlu64 := uint64(ttl)
		msg.Register.Ttl = &ttlu64
	}

	peerInfo := &peer.PeerRecord{
		PeerID: pi.ID,
		Addrs:  pi.Addrs,
		Seq:    uint64(time.Now().Unix()),
	}

	envelope, err := record.Seal(peerInfo, privKey)
	if err != nil {
		return nil, err
	}

	envPayload, err := envelope.Marshal()
	if err != nil {
		return nil, err
	}

	msg.Register.SignedPeerRecord = envPayload

	return msg, nil
}

func newUnregisterMessage(ns string, pid peer.ID) *pb.Message {
	msg := new(pb.Message)
	msg.Type = pb.Message_UNREGISTER.Enum()
	msg.Unregister = new(pb.Message_Unregister)
	if ns != "" {
		msg.Unregister.Ns = &ns
	}
	return msg
}

func NewDiscoverMessage(ns string, limit int, cookie []byte) *pb.Message {
	return newDiscoverMessage(ns, limit, cookie)
}

func newDiscoverMessage(ns string, limit int, cookie []byte) *pb.Message {
	msg := new(pb.Message)
	msg.Type = pb.Message_DISCOVER.Enum()
	msg.Discover = new(pb.Message_Discover)
	if ns != "" {
		msg.Discover.Ns = &ns
	}
	if limit > 0 {
		limitu64 := uint64(limit)
		msg.Discover.Limit = &limitu64
	}
	if cookie != nil {
		msg.Discover.Cookie = cookie
	}
	return msg
}

func pbToPeerRecord(envelopeBytes []byte) (peer.AddrInfo, error) {
	envelope, rec, err := record.ConsumeEnvelope(envelopeBytes, peer.PeerRecordEnvelopeDomain)
	if err != nil {
		return peer.AddrInfo{}, err
	}

	peerRec, ok := rec.(*peer.PeerRecord)
	if !ok {
		return peer.AddrInfo{}, errors.New("invalid peer record")
	}

	if !peerRec.PeerID.MatchesPublicKey(envelope.PublicKey) {
		return peer.AddrInfo{}, errors.New("signing key does not match peer record")
	}

	return peer.AddrInfo{ID: peerRec.PeerID, Addrs: peerRec.Addrs}, nil
}

func newRegisterResponse(ttl int) *pb.Message_RegisterResponse {
	ttlu64 := uint64(ttl)
	r := new(pb.Message_RegisterResponse)
	r.Status = pb.Message_OK.Enum()
	r.Ttl = &ttlu64
	return r
}

func newRegisterResponseError(status pb.Message_ResponseStatus, text string) *pb.Message_RegisterResponse {
	r := new(pb.Message_RegisterResponse)
	r.Status = status.Enum()
	r.StatusText = &text
	return r
}

func newDiscoverResponse(regs []db.RegistrationRecord, cookie []byte) *pb.Message_DiscoverResponse {
	r := new(pb.Message_DiscoverResponse)
	r.Status = pb.Message_OK.Enum()

	rregs := make([]*pb.Message_Register, len(regs))
	for i, reg := range regs {
		reg := reg // copy: &reg.Ns below must not alias the loop variable (pre-Go 1.22 semantics)
		rreg := new(pb.Message_Register)
		rreg.Ns = &reg.Ns
		rreg.SignedPeerRecord = reg.SignedPeerRecord
		rttl := uint64(reg.Ttl)
		rreg.Ttl = &rttl
		rregs[i] = rreg
	}

	r.Registrations = rregs
	r.Cookie = cookie

	return r
}

func newDiscoverResponseError(status pb.Message_ResponseStatus, text string) *pb.Message_DiscoverResponse {
	r := new(pb.Message_DiscoverResponse)
	r.Status = status.Enum()
	r.StatusText = &text
	return r
}
200
vendor/github.com/waku-org/go-libp2p-rendezvous/svc.go
generated
vendored
Normal file
@@ -0,0 +1,200 @@
package rendezvous

import (
	"github.com/libp2p/go-libp2p/core/host"
	inet "github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-msgio/pbio"

	db "github.com/waku-org/go-libp2p-rendezvous/db"
	pb "github.com/waku-org/go-libp2p-rendezvous/pb"
)

const (
	MaxTTL               = 72 * 3600 // 72hr
	MaxNamespaceLength   = 256
	MaxPeerAddressLength = 2048
	MaxRegistrations     = 1000
	MaxDiscoverLimit     = 1000
)

type RendezvousService struct {
	DB db.DB
}

func NewRendezvousService(host host.Host, db db.DB) *RendezvousService {
	rz := &RendezvousService{DB: db}
	host.SetStreamHandler(RendezvousProto, rz.handleStream)
	return rz
}

func (rz *RendezvousService) handleStream(s inet.Stream) {
	defer s.Reset()

	pid := s.Conn().RemotePeer()
	log.Debugf("New stream from %s", pid.Pretty())

	r := pbio.NewDelimitedReader(s, inet.MessageSizeMax)
	w := pbio.NewDelimitedWriter(s)

	for {
		var req pb.Message
		var res pb.Message

		err := r.ReadMsg(&req)
		if err != nil {
			return
		}

		t := req.GetType()
		switch t {
		case pb.Message_REGISTER:
			r := rz.handleRegister(pid, req.GetRegister())
			res.Type = pb.Message_REGISTER_RESPONSE.Enum()
			res.RegisterResponse = r
			err = w.WriteMsg(&res)
			if err != nil {
				log.Debugf("Error writing response: %s", err.Error())
				return
			}

		case pb.Message_UNREGISTER:
			err := rz.handleUnregister(pid, req.GetUnregister())
			if err != nil {
				log.Debugf("Error unregistering peer: %s", err.Error())
			}

		case pb.Message_DISCOVER:
			r := rz.handleDiscover(pid, req.GetDiscover())
			res.Type = pb.Message_DISCOVER_RESPONSE.Enum()
			res.DiscoverResponse = r
			err = w.WriteMsg(&res)
			if err != nil {
				log.Debugf("Error writing response: %s", err.Error())
				return
			}

		default:
			log.Debugf("Unexpected message: %s", t.String())
			return
		}
	}
}

func (rz *RendezvousService) handleRegister(p peer.ID, m *pb.Message_Register) *pb.Message_RegisterResponse {
	ns := m.GetNs()
	if ns == "" {
		return newRegisterResponseError(pb.Message_E_INVALID_NAMESPACE, "unspecified namespace")
	}

	if len(ns) > MaxNamespaceLength {
		return newRegisterResponseError(pb.Message_E_INVALID_NAMESPACE, "namespace too long")
	}

	signedPeerRecord := m.GetSignedPeerRecord()
	if signedPeerRecord == nil {
		return newRegisterResponseError(pb.Message_E_INVALID_SIGNED_PEER_RECORD, "missing signed peer record")
	}

	peerRecord, err := pbToPeerRecord(signedPeerRecord)
	if err != nil {
		return newRegisterResponseError(pb.Message_E_INVALID_SIGNED_PEER_RECORD, "invalid peer record")
	}

	if peerRecord.ID != p {
		return newRegisterResponseError(pb.Message_E_INVALID_SIGNED_PEER_RECORD, "peer id mismatch")
	}

	if len(peerRecord.Addrs) == 0 {
		return newRegisterResponseError(pb.Message_E_INVALID_SIGNED_PEER_RECORD, "missing peer addresses")
	}

	mlen := 0
	for _, maddr := range peerRecord.Addrs {
		mlen += len(maddr.Bytes())
	}
	if mlen > MaxPeerAddressLength {
		return newRegisterResponseError(pb.Message_E_INVALID_SIGNED_PEER_RECORD, "peer info too long")
	}

	// Note:
	// We don't validate the addresses, because they could include protocols we don't understand.
	// Perhaps we should though.

	mttl := m.GetTtl()
	if mttl > MaxTTL {
		return newRegisterResponseError(pb.Message_E_INVALID_TTL, "bad ttl")
	}

	ttl := DefaultTTL
	if mttl > 0 {
		ttl = int(mttl)
	}

	// now check how many registrations we have for this peer -- simple limit to defend
	// against trivial DoS attacks (eg a peer connects and keeps registering until it
	// fills our db)
	rcount, err := rz.DB.CountRegistrations(p)
	if err != nil {
		log.Errorf("Error counting registrations: %s", err.Error())
		return newRegisterResponseError(pb.Message_E_INTERNAL_ERROR, "database error")
	}

	if rcount > MaxRegistrations {
		log.Warningf("Too many registrations for %s", p)
		return newRegisterResponseError(pb.Message_E_NOT_AUTHORIZED, "too many registrations")
	}

	// ok, seems like we can register
	_, err = rz.DB.Register(p, ns, signedPeerRecord, ttl)
	if err != nil {
		log.Errorf("Error registering: %s", err.Error())
		return newRegisterResponseError(pb.Message_E_INTERNAL_ERROR, "database error")
	}

	log.Infof("registered peer %s %s (%d)", p, ns, ttl)

	return newRegisterResponse(ttl)
}

func (rz *RendezvousService) handleUnregister(p peer.ID, m *pb.Message_Unregister) error {
	ns := m.GetNs()

	err := rz.DB.Unregister(p, ns)
	if err != nil {
		return err
	}

	log.Infof("unregistered peer %s %s", p, ns)

	return nil
}

func (rz *RendezvousService) handleDiscover(p peer.ID, m *pb.Message_Discover) *pb.Message_DiscoverResponse {
	ns := m.GetNs()

	if len(ns) > MaxNamespaceLength {
		return newDiscoverResponseError(pb.Message_E_INVALID_NAMESPACE, "namespace too long")
	}

	limit := MaxDiscoverLimit
	mlimit := m.GetLimit()
	if mlimit > 0 && mlimit < uint64(limit) {
		limit = int(mlimit)
	}

	cookie := m.GetCookie()
	if cookie != nil && !rz.DB.ValidCookie(ns, cookie) {
		return newDiscoverResponseError(pb.Message_E_INVALID_COOKIE, "bad cookie")
	}

	regs, cookie, err := rz.DB.Discover(ns, cookie, limit)
	if err != nil {
		log.Errorf("Error in query: %s", err.Error())
		return newDiscoverResponseError(pb.Message_E_INTERNAL_ERROR, "database error")
	}

	log.Debugf("discover query: %s %s -> %d", p, ns, len(regs))

	return newDiscoverResponse(regs, cookie)
}
205
vendor/github.com/waku-org/go-waku/LICENSE-APACHEv2
generated
vendored
Normal file
@@ -0,0 +1,205 @@
go-waku is licensed under the Apache License version 2
Copyright (c) 2018 Status Research & Development GmbH
-----------------------------------------------------

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2018 Status Research & Development GmbH

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
25
vendor/github.com/waku-org/go-waku/LICENSE-MIT
generated
vendored
Normal file
@@ -0,0 +1,25 @@
go-waku is licensed under the MIT License
Copyright (c) 2018 Status Research & Development GmbH
-----------------------------------------------------

The MIT License (MIT)

Copyright (c) 2018 Status Research & Development GmbH

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
27
vendor/github.com/waku-org/go-waku/logging/README.md
generated
vendored
Normal file
@@ -0,0 +1,27 @@
# Logging Style Guide

The goal of the style described here is to yield logs that are amenable to searching and aggregating. Structured logging is the best foundation for that. The log entries should be consistent and predictable to support search efficiency and high fidelity of search results. This style puts forward guidelines that promote this outcome.

## Log messages

* Messages should be fixed strings; never interpolate values into the messages. Use log entry fields for values.

* Message strings should identify consistently what action/event was/is happening. Consistent messages make searching the logs and aggregating correlated events easier.

* Error messages should look like any other log messages. There is no need to say "x failed"; the log level and error field are sufficient indication of failure.

## Log entry fields

* Adding fields to log entries is not free, but the fields are the discriminators that allow distinguishing similar log entries from each other. Insufficient field structure makes it more difficult to find the entries you are looking for.

* Create/use field helpers for commonly used field value types (see logging.go). This promotes consistent formatting and allows changing it easily in one place.

## Log entry field helpers

* Make the field creation do as little as possible, i.e. just capture the existing value/object. Postpone any transformation to log emission time by employing the generic zap.Stringer, zap.Array, and zap.Object fields (see logging.go). This avoids unnecessary transformation for entries that may not even be emitted in the end.

## Logger management

* Adorn the logger with fields and reuse the adorned logger rather than repeatedly creating fields with each log entry.

* Prefer passing the adorned logger down the call chain using Context. This promotes consistent log entry structure, i.e. fields will exist consistently in related entries.
22
vendor/github.com/waku-org/go-waku/logging/context.go
generated
vendored
Normal file
@@ -0,0 +1,22 @@
package logging

import (
	"context"

	"go.uber.org/zap"
)

var logKey = &struct{}{}

// From allows retrieving the Logger from a Context.
// Returns nil if Context does not have one.
func From(ctx context.Context) *zap.Logger {
	logger, _ := ctx.Value(logKey).(*zap.Logger)
	return logger
}

// With associates a Logger with a Context to allow passing
// a logger down the call chain.
func With(ctx context.Context, log *zap.Logger) context.Context {
	return context.WithValue(ctx, logKey, log)
}
144
vendor/github.com/waku-org/go-waku/logging/logging.go
generated
vendored
Normal file
@@ -0,0 +1,144 @@
// Package logging implements custom logging field types for commonly
// logged values like host ID or wallet address.
//
// The implementation purposely does as little as possible at field creation time,
// and postpones any transformation to output time by relying on the generic
// zap types like zap.Stringer, zap.Array and zap.Object.
package logging

import (
	"encoding/hex"
	"net"
	"time"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/waku/v2/protocol/store/pb"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// List of []byte
type byteArr [][]byte

// HexArray creates a field with an array of bytes that will be shown as hexadecimal strings in logs
func HexArray(key string, byteVal byteArr) zapcore.Field {
	return zap.Array(key, byteVal)
}

func (bArr byteArr) MarshalLogArray(encoder zapcore.ArrayEncoder) error {
	for _, b := range bArr {
		encoder.AppendString("0x" + hex.EncodeToString(b))
	}
	return nil
}

// List of multiaddrs
type multiaddrs []multiaddr.Multiaddr

// MultiAddrs creates a field with an array of multiaddrs
func MultiAddrs(key string, addrs ...multiaddr.Multiaddr) zapcore.Field {
	return zap.Array(key, multiaddrs(addrs))
}

func (addrs multiaddrs) MarshalLogArray(encoder zapcore.ArrayEncoder) error {
	for _, addr := range addrs {
		encoder.AppendString(addr.String())
	}
	return nil
}

// Host ID/Peer ID
type hostID peer.ID

// HostID creates a field for a peer.ID
func HostID(key string, id peer.ID) zapcore.Field {
	return zap.Stringer(key, hostID(id))
}

func (id hostID) String() string { return peer.ID(id).String() }

// Time - Waku uses Nanosecond Unix Time
type timestamp int64

// Time creates a field for a Waku time value
func Time(key string, time int64) zapcore.Field {
	return zap.Stringer(key, timestamp(time))
}

func (t timestamp) String() string {
	return time.Unix(0, int64(t)).Format(time.RFC3339)
}

// History Query Filters
type historyFilters []*pb.ContentFilter

// Filters creates a field with an array of history query filters.
// The assumption is that log entries won't have more than one of these,
// so the field key/name is hardcoded to be "filters" to promote consistency.
func Filters(filters []*pb.ContentFilter) zapcore.Field {
	return zap.Array("filters", historyFilters(filters))
}

func (filters historyFilters) MarshalLogArray(encoder zapcore.ArrayEncoder) error {
	for _, filter := range filters {
		encoder.AppendString(filter.ContentTopic)
	}
	return nil
}

// History Paging Info
// Probably too detailed for normal log levels, but useful for debugging.
// Also a good example of a nested object value.
type pagingInfo pb.PagingInfo
type index pb.Index

// PagingInfo creates a field with history query paging info.
func PagingInfo(pi *pb.PagingInfo) zapcore.Field {
	return zap.Object("paging_info", (*pagingInfo)(pi))
}

func (pi *pagingInfo) MarshalLogObject(encoder zapcore.ObjectEncoder) error {
	encoder.AddUint64("page_size", pi.PageSize)
	encoder.AddString("direction", pi.Direction.String())
	if pi.Cursor != nil {
		return encoder.AddObject("cursor", (*index)(pi.Cursor))
	}
	return nil
}

func (i *index) MarshalLogObject(encoder zapcore.ObjectEncoder) error {
	encoder.AddBinary("digest", i.Digest)
	encoder.AddTime("sent", time.Unix(0, i.SenderTime))
	encoder.AddTime("received", time.Unix(0, i.ReceiverTime))
	return nil
}

// Hex encoded bytes
type hexBytes []byte

// HexBytes creates a field for a byte slice that should be emitted as a hex encoded string.
func HexBytes(key string, bytes []byte) zap.Field {
	return zap.Stringer(key, hexBytes(bytes))
}

func (bytes hexBytes) String() string {
	return hexutil.Encode(bytes)
}

// ENode creates a field for an ENR node.
func ENode(key string, node *enode.Node) zap.Field {
	return zap.Stringer(key, node)
}

// TCPAddr creates a field for a TCP v4/v6 address and port
func TCPAddr(key string, ip net.IP, port int) zap.Field {
	return zap.Stringer(key, &net.TCPAddr{IP: ip, Port: port})
}

// UDPAddr creates a field for a UDP v4/v6 address and port
func UDPAddr(key string, ip net.IP, port int) zap.Field {
	return zap.Stringer(key, &net.UDPAddr{IP: ip, Port: port})
}
```
22
vendor/github.com/waku-org/go-waku/waku/persistence/driver_type.go
generated
vendored
Normal file
@@ -0,0 +1,22 @@
package persistence

import (
	"database/sql"
	"reflect"
)

const (
	UndefinedDriver = iota
	PostgresDriver
	SQLiteDriver
)

// GetDriverType returns the driver type of the given *sql.DB, based on the
// reflected type name of its registered driver.
func GetDriverType(db *sql.DB) int {
	switch reflect.TypeOf(db.Driver()).String() {
	case "*sqlite3.SQLiteDriver":
		return SQLiteDriver
	case "*stdlib.Driver":
		return PostgresDriver
	}
	return UndefinedDriver
}
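GetDriverType keys off the reflected type name of the registered driver. A minimal sketch of that mechanic with a stand-in driver type (`fakeDriver` and `driverTypeName` are illustrative, not the real sqlite3/pgx types):

```go
package main

import (
	"fmt"
	"reflect"
)

// fakeDriver stands in for a concrete database/sql driver implementation.
type fakeDriver struct{}

// driverTypeName returns the reflected type name that GetDriverType switches on.
func driverTypeName(d interface{}) string {
	return reflect.TypeOf(d).String()
}

func main() {
	fmt.Println(driverTypeName(&fakeDriver{})) // prints "*main.fakeDriver"
}
```

Matching on type names like this is brittle across driver versions, but avoids importing every supported driver package just to compare types.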
87
vendor/github.com/waku-org/go-waku/waku/persistence/metrics.go
generated
vendored
Normal file
@@ -0,0 +1,87 @@
package persistence

import (
	"time"

	"github.com/libp2p/go-libp2p/p2p/metricshelper"
	"github.com/prometheus/client_golang/prometheus"
)

var archiveMessages = prometheus.NewCounter(
	prometheus.CounterOpts{
		Name: "waku_archive_messages",
		Help: "The number of messages stored via archive protocol",
	})

var archiveErrors = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "waku_archive_errors",
		Help: "The distribution of the archive protocol errors",
	},
	[]string{"error_type"},
)

var archiveInsertDurationSeconds = prometheus.NewHistogram(
	prometheus.HistogramOpts{
		Name: "waku_archive_insert_duration_seconds",
		Help: "Message insertion duration",
	})

var archiveQueryDurationSeconds = prometheus.NewHistogram(
	prometheus.HistogramOpts{
		Name: "waku_archive_query_duration_seconds",
		Help: "History query duration",
	})

var collectors = []prometheus.Collector{
	archiveMessages,
	archiveErrors,
	archiveInsertDurationSeconds,
	archiveQueryDurationSeconds,
}

// Metrics exposes the functions required to update prometheus metrics for the archive protocol
type Metrics interface {
	RecordMessage(num int)
	RecordError(err metricsErrCategory)
	RecordInsertDuration(duration time.Duration)
	RecordQueryDuration(duration time.Duration)
}

type metricsImpl struct {
	reg prometheus.Registerer
}

func newMetrics(reg prometheus.Registerer) Metrics {
	metricshelper.RegisterCollectors(reg, collectors...)
	return &metricsImpl{
		reg: reg,
	}
}

// RecordMessage increases the counter for the number of messages stored in the archive
func (m *metricsImpl) RecordMessage(num int) {
	archiveMessages.Add(float64(num))
}

type metricsErrCategory string

var (
	retPolicyFailure metricsErrCategory = "retpolicy_failure"
	insertFailure    metricsErrCategory = "insert_failure"
)

// RecordError increases the counter for different error types
func (m *metricsImpl) RecordError(err metricsErrCategory) {
	archiveErrors.WithLabelValues(string(err)).Inc()
}

// RecordInsertDuration tracks the duration for inserting a record in the archive database
func (m *metricsImpl) RecordInsertDuration(duration time.Duration) {
	archiveInsertDurationSeconds.Observe(duration.Seconds())
}

// RecordQueryDuration tracks the duration for executing a query in the archive database
func (m *metricsImpl) RecordQueryDuration(duration time.Duration) {
	archiveQueryDurationSeconds.Observe(duration.Seconds())
}
79
vendor/github.com/waku-org/go-waku/waku/persistence/sql_queries.go
generated
vendored
Normal file
@@ -0,0 +1,79 @@
package persistence

import (
	"fmt"
)

// Queries are the SQL queries for a given table.
type Queries struct {
	deleteQuery  string
	existsQuery  string
	getQuery     string
	putQuery     string
	queryQuery   string
	prefixQuery  string
	limitQuery   string
	offsetQuery  string
	getSizeQuery string
}

// CreateQueries creates a set of queries for an SQL table.
// Note: Do not use this function to create queries for a table; rather, use <rdb>.NewQueries to create the table as well as the queries.
func CreateQueries(tbl string) *Queries {
	return &Queries{
		deleteQuery:  fmt.Sprintf("DELETE FROM %s WHERE key = $1", tbl),
		existsQuery:  fmt.Sprintf("SELECT exists(SELECT 1 FROM %s WHERE key=$1)", tbl),
		getQuery:     fmt.Sprintf("SELECT data FROM %s WHERE key = $1", tbl),
		putQuery:     fmt.Sprintf("INSERT INTO %s (key, data) VALUES ($1, $2) ON CONFLICT (key) DO UPDATE SET data = $2", tbl),
		queryQuery:   fmt.Sprintf("SELECT key, data FROM %s", tbl),
		prefixQuery:  ` WHERE key LIKE '%s%%' ORDER BY key`,
		limitQuery:   ` LIMIT %d`,
		offsetQuery:  ` OFFSET %d`,
		getSizeQuery: fmt.Sprintf("SELECT length(data) FROM %s WHERE key = $1", tbl),
	}
}
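The prefix/limit/offset entries are `fmt` templates meant to be filled in and concatenated onto the base query. A minimal sketch of how such fragments compose (`buildPagedQuery` and the table/prefix values are illustrative, not part of this package):

```go
package main

import "fmt"

// buildPagedQuery composes a base SELECT with prefix, limit, and offset
// fragments, mirroring how the Queries templates are meant to be assembled.
func buildPagedQuery(tbl, prefix string, limit, offset int) string {
	q := fmt.Sprintf("SELECT key, data FROM %s", tbl)
	q += fmt.Sprintf(` WHERE key LIKE '%s%%' ORDER BY key`, prefix)
	q += fmt.Sprintf(` LIMIT %d`, limit)
	q += fmt.Sprintf(` OFFSET %d`, offset)
	return q
}

func main() {
	fmt.Println(buildPagedQuery("store", "msg", 10, 20))
	// prints "SELECT key, data FROM store WHERE key LIKE 'msg%' ORDER BY key LIMIT 10 OFFSET 20"
}
```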

// Delete returns the query for deleting a row.
func (q Queries) Delete() string {
	return q.deleteQuery
}

// Exists returns the query for determining if a row exists.
func (q Queries) Exists() string {
	return q.existsQuery
}

// Get returns the query for getting a row.
func (q Queries) Get() string {
	return q.getQuery
}

// Put returns the query for putting a row.
func (q Queries) Put() string {
	return q.putQuery
}

// Query returns the query for getting multiple rows.
func (q Queries) Query() string {
	return q.queryQuery
}

// Prefix returns the query fragment for getting rows with a key prefix.
func (q Queries) Prefix() string {
	return q.prefixQuery
}

// Limit returns the query fragment for limiting results.
func (q Queries) Limit() string {
	return q.limitQuery
}

// Offset returns the query fragment for returning rows from a given offset.
func (q Queries) Offset() string {
	return q.offsetQuery
}

// GetSize returns the query for determining the size of a value.
func (q Queries) GetSize() string {
	return q.getSizeQuery
}
591
vendor/github.com/waku-org/go-waku/waku/persistence/store.go
generated
vendored
Normal file
@@ -0,0 +1,591 @@
package persistence

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"sync"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	wpb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
	"github.com/waku-org/go-waku/waku/v2/protocol/store/pb"
	"github.com/waku-org/go-waku/waku/v2/timesource"
	"go.uber.org/zap"
	"google.golang.org/protobuf/proto"
)

// MessageProvider is an interface that provides access to store/retrieve messages from a persistence store.
type MessageProvider interface {
	GetAll() ([]StoredMessage, error)
	Validate(env *protocol.Envelope) error
	Put(env *protocol.Envelope) error
	Query(query *pb.HistoryQuery) ([]StoredMessage, error)
	MostRecentTimestamp() (int64, error)
	Start(ctx context.Context, timesource timesource.Timesource) error
	Stop()
}

// ErrInvalidCursor indicates that an invalid cursor has been passed to access the store
var ErrInvalidCursor = errors.New("invalid cursor")

// ErrFutureMessage indicates that a message with a timestamp in the future was requested to be stored
var ErrFutureMessage = errors.New("message timestamp in the future")

// ErrMessageTooOld indicates that a message that was too old was requested to be stored.
var ErrMessageTooOld = errors.New("message too old")

// WALMode for sqlite.
const WALMode = "wal"

// MaxTimeVariance is the maximum duration in the future allowed for a message timestamp
const MaxTimeVariance = time.Duration(20) * time.Second

// DBStore is a MessageProvider that has a *sql.DB connection
type DBStore struct {
	MessageProvider

	db          *sql.DB
	migrationFn func(db *sql.DB, logger *zap.Logger) error

	metrics    Metrics
	timesource timesource.Timesource
	log        *zap.Logger

	maxMessages int
	maxDuration time.Duration

	enableMigrations bool

	wg     sync.WaitGroup
	cancel context.CancelFunc
}

// StoredMessage is the format of the message stored in the persistence store
type StoredMessage struct {
	ID           []byte
	PubsubTopic  string
	ReceiverTime int64
	Message      *wpb.WakuMessage
}

// DBOption is an optional setting that can be used to configure the DBStore
type DBOption func(*DBStore) error

// WithDB is a DBOption that lets you use any custom *sql.DB with a DBStore.
func WithDB(db *sql.DB) DBOption {
	return func(d *DBStore) error {
		d.db = db
		return nil
	}
}

// ConnectionPoolOptions is the options to be used for DB connection pooling
type ConnectionPoolOptions struct {
	MaxOpenConnections    int
	MaxIdleConnections    int
	ConnectionMaxLifetime time.Duration
	ConnectionMaxIdleTime time.Duration
}

// WithDriver is a DBOption that will open a *sql.DB connection
func WithDriver(driverName string, datasourceName string, connectionPoolOptions ...ConnectionPoolOptions) DBOption {
	return func(d *DBStore) error {
		db, err := sql.Open(driverName, datasourceName)
		if err != nil {
			return err
		}

		if len(connectionPoolOptions) != 0 {
			db.SetConnMaxIdleTime(connectionPoolOptions[0].ConnectionMaxIdleTime)
			db.SetConnMaxLifetime(connectionPoolOptions[0].ConnectionMaxLifetime)
			db.SetMaxIdleConns(connectionPoolOptions[0].MaxIdleConnections)
			db.SetMaxOpenConns(connectionPoolOptions[0].MaxOpenConnections)
		}

		d.db = db
		return nil
	}
}

// WithRetentionPolicy is a DBOption that specifies the max number of messages
// to be stored and the duration before they're removed from the message store
func WithRetentionPolicy(maxMessages int, maxDuration time.Duration) DBOption {
	return func(d *DBStore) error {
		d.maxDuration = maxDuration
		d.maxMessages = maxMessages
		return nil
	}
}

type MigrationFn func(db *sql.DB, logger *zap.Logger) error

// WithMigrations is a DBOption used to determine if migrations should
// be executed, and what driver to use
func WithMigrations(migrationFn MigrationFn) DBOption {
	return func(d *DBStore) error {
		d.enableMigrations = true
		d.migrationFn = migrationFn
		return nil
	}
}

// DefaultOptions returns the default DBOptions to be used.
func DefaultOptions() []DBOption {
	return []DBOption{}
}

// NewDBStore creates a new DB store using the db specified via options.
// It will create a messages table if it does not exist and
// clean up records according to the retention policy used
func NewDBStore(reg prometheus.Registerer, log *zap.Logger, options ...DBOption) (*DBStore, error) {
	result := new(DBStore)
	result.log = log.Named("dbstore")
	result.metrics = newMetrics(reg)

	optList := DefaultOptions()
	optList = append(optList, options...)

	for _, opt := range optList {
		err := opt(result)
		if err != nil {
			return nil, err
		}
	}

	if result.enableMigrations {
		err := result.migrationFn(result.db, log)
		if err != nil {
			return nil, err
		}
	}

	return result, nil
}

// Start starts the store server functionality
func (d *DBStore) Start(ctx context.Context, timesource timesource.Timesource) error {
	ctx, cancel := context.WithCancel(ctx)

	d.cancel = cancel
	d.timesource = timesource

	err := d.cleanOlderRecords(ctx)
	if err != nil {
		return err
	}

	d.wg.Add(2)
	go d.checkForOlderRecords(ctx, 60*time.Second)
	go d.updateMetrics(ctx)

	return nil
}

func (d *DBStore) updateMetrics(ctx context.Context) {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	defer d.wg.Done()

	for {
		select {
		case <-ticker.C:
			msgCount, err := d.Count()
			if err != nil {
				d.log.Error("updating store metrics", zap.Error(err))
			} else {
				d.metrics.RecordMessage(msgCount)
			}
		case <-ctx.Done():
			return
		}
	}
}

func (d *DBStore) cleanOlderRecords(ctx context.Context) error {
	d.log.Info("Cleaning older records...")

	// Delete older messages
	if d.maxDuration > 0 {
		start := time.Now()
		sqlStmt := `DELETE FROM message WHERE storedAt < $1`
		_, err := d.db.Exec(sqlStmt, d.timesource.Now().Add(-d.maxDuration).UnixNano())
		if err != nil {
			d.metrics.RecordError(retPolicyFailure)
			return err
		}
		elapsed := time.Since(start)
		d.log.Debug("deleting older records from the DB", zap.Duration("duration", elapsed))
	}

	// Limit the number of records to a max N
	if d.maxMessages > 0 {
		start := time.Now()

		_, err := d.db.Exec(d.getDeleteOldRowsQuery(), d.maxMessages)
		if err != nil {
			d.metrics.RecordError(retPolicyFailure)
			return err
		}
		elapsed := time.Since(start)
		d.log.Debug("deleting excess records from the DB", zap.Duration("duration", elapsed))
	}

	d.log.Info("Older records removed")

	return nil
}

func (d *DBStore) getDeleteOldRowsQuery() string {
	sqlStmt := `DELETE FROM message WHERE id IN (SELECT id FROM message ORDER BY storedAt DESC %s OFFSET $1)`
	switch GetDriverType(d.db) {
	case SQLiteDriver:
		sqlStmt = fmt.Sprintf(sqlStmt, "LIMIT -1")
	case PostgresDriver:
		sqlStmt = fmt.Sprintf(sqlStmt, "")
	}
	return sqlStmt
}
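The per-driver substitution above exists because SQLite requires an explicit `LIMIT` (here `LIMIT -1`, meaning unbounded) before it will accept an `OFFSET` clause, while Postgres accepts `OFFSET` on its own. The substitution in isolation (`deleteOldRowsQuery` is an illustrative stand-in for the method above):

```go
package main

import "fmt"

// deleteOldRowsQuery reproduces the per-driver LIMIT substitution:
// SQLite needs "LIMIT -1" before OFFSET, Postgres needs nothing.
func deleteOldRowsQuery(sqlite bool) string {
	stmt := `DELETE FROM message WHERE id IN (SELECT id FROM message ORDER BY storedAt DESC %s OFFSET $1)`
	if sqlite {
		return fmt.Sprintf(stmt, "LIMIT -1")
	}
	return fmt.Sprintf(stmt, "")
}

func main() {
	fmt.Println(deleteOldRowsQuery(true))
}
```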

func (d *DBStore) checkForOlderRecords(ctx context.Context, t time.Duration) {
	defer d.wg.Done()

	ticker := time.NewTicker(t)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			err := d.cleanOlderRecords(ctx)
			if err != nil {
				d.log.Error("cleaning older records", zap.Error(err))
			}
		}
	}
}

// Stop closes a DB connection
func (d *DBStore) Stop() {
	if d.cancel == nil {
		return
	}

	d.cancel()
	d.wg.Wait()
	d.db.Close()
}

// Validate validates the message to be stored against possible fraudulent conditions.
func (d *DBStore) Validate(env *protocol.Envelope) error {
	timestamp := env.Message().GetTimestamp()
	if timestamp == 0 {
		return nil
	}

	n := time.Unix(0, env.Index().ReceiverTime)
	upperBound := n.Add(MaxTimeVariance)
	lowerBound := n.Add(-MaxTimeVariance)

	// Ensure that messages don't "jump" to the front of the queue with future timestamps
	if timestamp > upperBound.UnixNano() {
		return ErrFutureMessage
	}

	if timestamp < lowerBound.UnixNano() {
		return ErrMessageTooOld
	}

	return nil
}

// Put inserts a WakuMessage into the DB
func (d *DBStore) Put(env *protocol.Envelope) error {
	stmt, err := d.db.Prepare("INSERT INTO message (id, messageHash, storedAt, timestamp, contentTopic, pubsubTopic, payload, version) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)")
	if err != nil {
		d.metrics.RecordError(insertFailure)
		return err
	}

	storedAt := env.Message().GetTimestamp()
	if storedAt == 0 {
		storedAt = env.Index().ReceiverTime
	}

	start := time.Now()
	_, err = stmt.Exec(env.Index().Digest, env.Hash(), storedAt, env.Message().GetTimestamp(), env.Message().ContentTopic, env.PubsubTopic(), env.Message().Payload, env.Message().GetVersion())
	if err != nil {
		return err
	}

	d.metrics.RecordInsertDuration(time.Since(start))

	err = stmt.Close()
	if err != nil {
		return err
	}

	return nil
}

func (d *DBStore) handleQueryCursor(query *pb.HistoryQuery, paramCnt *int, conditions []string, parameters []interface{}) ([]string, []interface{}, error) {
	usesCursor := false
	if query.PagingInfo.Cursor != nil {
		usesCursor = true

		var exists bool
		err := d.db.QueryRow("SELECT EXISTS(SELECT 1 FROM message WHERE storedAt = $1 AND id = $2)",
			query.PagingInfo.Cursor.ReceiverTime, query.PagingInfo.Cursor.Digest,
		).Scan(&exists)
		if err != nil {
			return nil, nil, err
		}

		if !exists {
			return nil, nil, ErrInvalidCursor
		}

		eqOp := ">"
		if query.PagingInfo.Direction == pb.PagingInfo_BACKWARD {
			eqOp = "<"
		}
		conditions = append(conditions, fmt.Sprintf("(storedAt, id) %s ($%d, $%d)", eqOp, *paramCnt+1, *paramCnt+2))
		*paramCnt += 2

		parameters = append(parameters, query.PagingInfo.Cursor.ReceiverTime, query.PagingInfo.Cursor.Digest)
	}

	handleTimeParam := func(time int64, op string) {
		*paramCnt++
		conditions = append(conditions, fmt.Sprintf("storedAt %s $%d", op, *paramCnt))
		parameters = append(parameters, time)
	}

	startTime := query.GetStartTime()
	if startTime != 0 {
		if !usesCursor || query.PagingInfo.Direction == pb.PagingInfo_BACKWARD {
			handleTimeParam(startTime, ">=")
		}
	}

	endTime := query.GetEndTime()
	if endTime != 0 {
		if !usesCursor || query.PagingInfo.Direction == pb.PagingInfo_FORWARD {
			handleTimeParam(endTime+1, "<")
		}
	}
	return conditions, parameters, nil
}

func (d *DBStore) prepareQuerySQL(query *pb.HistoryQuery) (string, []interface{}, error) {
	sqlQuery := `SELECT id, storedAt, timestamp, contentTopic, pubsubTopic, payload, version
	FROM message
	%s
	ORDER BY timestamp %s, id %s, pubsubTopic %s, storedAt %s `

	var conditions []string
	parameters := make([]interface{}, 0) // allocated as a slice so that references get passed rather than values
	paramCnt := 0

	if query.PubsubTopic != "" {
		paramCnt++
		conditions = append(conditions, fmt.Sprintf("pubsubTopic = $%d", paramCnt))
		parameters = append(parameters, query.PubsubTopic)
	}

	if len(query.ContentFilters) != 0 {
		var ctPlaceHolder []string
		for _, ct := range query.ContentFilters {
			if ct.ContentTopic != "" {
				paramCnt++
				ctPlaceHolder = append(ctPlaceHolder, fmt.Sprintf("$%d", paramCnt))
				parameters = append(parameters, ct.ContentTopic)
			}
		}
		conditions = append(conditions, "contentTopic IN ("+strings.Join(ctPlaceHolder, ", ")+")")
	}

	conditions, parameters, err := d.handleQueryCursor(query, &paramCnt, conditions, parameters)
	if err != nil {
		return "", nil, err
	}
	conditionStr := ""
	if len(conditions) != 0 {
		conditionStr = "WHERE " + strings.Join(conditions, " AND ")
	}

	orderDirection := "ASC"
	if query.PagingInfo.Direction == pb.PagingInfo_BACKWARD {
		orderDirection = "DESC"
	}

	paramCnt++

	sqlQuery += fmt.Sprintf("LIMIT $%d", paramCnt)
	// Always search for _max page size_ + 1. If the extra row does not exist, do not return pagination info.
	pageSize := query.PagingInfo.PageSize + 1
	parameters = append(parameters, pageSize)

	sqlQuery = fmt.Sprintf(sqlQuery, conditionStr, orderDirection, orderDirection, orderDirection, orderDirection)
	d.log.Debug(fmt.Sprintf("sqlQuery: %s", sqlQuery))

	return sqlQuery, parameters, nil
}

// Query retrieves messages from the DB
func (d *DBStore) Query(query *pb.HistoryQuery) (*pb.Index, []StoredMessage, error) {
	start := time.Now()
	defer func() {
		elapsed := time.Since(start)
		d.log.Info(fmt.Sprintf("Loading records from the DB took %s", elapsed))
	}()

	sqlQuery, parameters, err := d.prepareQuerySQL(query)
	if err != nil {
		return nil, nil, err
	}
	stmt, err := d.db.Prepare(sqlQuery)
	if err != nil {
		return nil, nil, err
	}
	defer stmt.Close()

	measurementStart := time.Now()
	rows, err := stmt.Query(parameters...)
	if err != nil {
		return nil, nil, err
	}
	defer rows.Close()

	d.metrics.RecordQueryDuration(time.Since(measurementStart))

	var result []StoredMessage
	for rows.Next() {
		record, err := d.GetStoredMessage(rows)
		if err != nil {
			return nil, nil, err
		}
		result = append(result, record)
	}

	var cursor *pb.Index
	if len(result) != 0 {
		// since there are more rows than pagingInfo.PageSize, we need to return a cursor for pagination
		if len(result) > int(query.PagingInfo.PageSize) {
			result = result[0:query.PagingInfo.PageSize]
			lastMsgIdx := len(result) - 1
			cursor = protocol.NewEnvelope(result[lastMsgIdx].Message, result[lastMsgIdx].ReceiverTime, result[lastMsgIdx].PubsubTopic).Index()
		}
	}

	// The retrieved messages list should always be in chronological order
	if query.PagingInfo.Direction == pb.PagingInfo_BACKWARD {
		for i, j := 0, len(result)-1; i < j; i, j = i+1, j-1 {
			result[i], result[j] = result[j], result[i]
		}
	}

	return cursor, result, nil
}
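For backward queries the rows come back newest-first, so the result slice is reversed in place before returning. The same swap loop in isolation (`reverseInPlace` is an illustrative helper, not part of this package):

```go
package main

import "fmt"

// reverseInPlace mirrors the swap loop used to restore chronological order
// after a BACKWARD (newest-first) query.
func reverseInPlace(xs []int) {
	for i, j := 0, len(xs)-1; i < j; i, j = i+1, j-1 {
		xs[i], xs[j] = xs[j], xs[i]
	}
}

func main() {
	xs := []int{3, 2, 1}
	reverseInPlace(xs)
	fmt.Println(xs) // prints "[1 2 3]"
}
```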

// MostRecentTimestamp returns a Unix timestamp with the most recent timestamp
// in the message table
func (d *DBStore) MostRecentTimestamp() (int64, error) {
	result := sql.NullInt64{}

	err := d.db.QueryRow(`SELECT max(timestamp) FROM message`).Scan(&result)
	if err != nil && err != sql.ErrNoRows {
		return 0, err
	}
	return result.Int64, nil
}

// Count returns the number of rows in the message table
func (d *DBStore) Count() (int, error) {
	var result int
	err := d.db.QueryRow(`SELECT COUNT(*) FROM message`).Scan(&result)
	if err != nil && err != sql.ErrNoRows {
		return 0, err
	}
	return result, nil
}

// GetAll returns all the stored WakuMessages
func (d *DBStore) GetAll() ([]StoredMessage, error) {
	start := time.Now()
	defer func() {
		elapsed := time.Since(start)
		d.log.Info("loading records from the DB", zap.Duration("duration", elapsed))
	}()

	rows, err := d.db.Query("SELECT id, storedAt, timestamp, contentTopic, pubsubTopic, payload, version FROM message ORDER BY timestamp ASC")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var result []StoredMessage

	for rows.Next() {
		record, err := d.GetStoredMessage(rows)
		if err != nil {
			return nil, err
		}
		result = append(result, record)
	}

	d.log.Info("DB returned records", zap.Int("count", len(result)))

	err = rows.Err()
	if err != nil {
		return nil, err
	}

	return result, nil
}

// GetStoredMessage is a helper function used to convert a `*sql.Rows` into a `StoredMessage`
func (d *DBStore) GetStoredMessage(row *sql.Rows) (StoredMessage, error) {
	var id []byte
	var storedAt int64
	var timestamp int64
	var contentTopic string
	var payload []byte
	var version uint32
	var pubsubTopic string

	err := row.Scan(&id, &storedAt, &timestamp, &contentTopic, &pubsubTopic, &payload, &version)
	if err != nil {
		d.log.Error("scanning messages from db", zap.Error(err))
		return StoredMessage{}, err
	}

	msg := new(wpb.WakuMessage)
	msg.ContentTopic = contentTopic
	msg.Payload = payload

	if timestamp != 0 {
		msg.Timestamp = proto.Int64(timestamp)
	}

	if version > 0 {
		msg.Version = proto.Uint32(version)
	}

	record := StoredMessage{
		ID:           id,
		PubsubTopic:  pubsubTopic,
		ReceiverTime: storedAt,
		Message:      msg,
	}

	return record, nil
}
488
vendor/github.com/waku-org/go-waku/waku/v2/discv5/discover.go
generated
vendored
Normal file
@@ -0,0 +1,488 @@
package discv5

import (
	"context"
	"crypto/ecdsa"
	"errors"
	"fmt"
	"net"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/waku-org/go-discover/discover"
	"github.com/waku-org/go-waku/logging"
	"github.com/waku-org/go-waku/waku/v2/peerstore"
	wenr "github.com/waku-org/go-waku/waku/v2/protocol/enr"
	"github.com/waku-org/go-waku/waku/v2/service"
	"github.com/waku-org/go-waku/waku/v2/utils"
	"go.uber.org/zap"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/ethereum/go-ethereum/p2p/nat"
)

var ErrNoDiscV5Listener = errors.New("no discv5 listener")

// PeerConnector will subscribe to a channel containing the information for all peers found by this discovery protocol
type PeerConnector interface {
	Subscribe(context.Context, <-chan service.PeerData)
}

type DiscoveryV5 struct {
	params    *discV5Parameters
	host      host.Host
	config    discover.Config
	udpAddr   *net.UDPAddr
	listener  *discover.UDPv5
	localnode *enode.LocalNode
	metrics   Metrics

	peerConnector PeerConnector
	NAT           nat.Interface

	log *zap.Logger

	*service.CommonDiscoveryService
}

type discV5Parameters struct {
	autoUpdate    bool
	autoFindPeers bool
	bootnodes     map[enode.ID]*enode.Node
	udpPort       uint
	advertiseAddr []multiaddr.Multiaddr
	loopPredicate func(*enode.Node) bool
}

type DiscoveryV5Option func(*discV5Parameters)

var protocolID = [6]byte{'d', '5', 'w', 'a', 'k', 'u'}

const peerDelay = 100 * time.Millisecond
const bucketSize = 16
const delayBetweenDiscoveredPeerCnt = 5 * time.Second

func WithAutoUpdate(autoUpdate bool) DiscoveryV5Option {
	return func(params *discV5Parameters) {
		params.autoUpdate = autoUpdate
	}
}

// WithBootnodes is an option used to specify the bootstrap nodes to use with DiscV5
func WithBootnodes(bootnodes []*enode.Node) DiscoveryV5Option {
	return func(params *discV5Parameters) {
		params.bootnodes = make(map[enode.ID]*enode.Node)
		for _, b := range bootnodes {
			params.bootnodes[b.ID()] = b
		}
	}
}

func WithAdvertiseAddr(addr []multiaddr.Multiaddr) DiscoveryV5Option {
	return func(params *discV5Parameters) {
		params.advertiseAddr = addr
	}
}

func WithUDPPort(port uint) DiscoveryV5Option {
	return func(params *discV5Parameters) {
		params.udpPort = port
	}
}

func WithPredicate(predicate func(*enode.Node) bool) DiscoveryV5Option {
	return func(params *discV5Parameters) {
		params.loopPredicate = predicate
	}
}

func WithAutoFindPeers(find bool) DiscoveryV5Option {
	return func(params *discV5Parameters) {
		params.autoFindPeers = find
	}
}

// DefaultOptions contains the default list of options used when setting up DiscoveryV5
func DefaultOptions() []DiscoveryV5Option {
	return []DiscoveryV5Option{
		WithUDPPort(9000),
		WithAutoFindPeers(true),
	}
}
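The discovery options follow the functional-options pattern: the defaults are applied first, then caller-supplied options override them in order. A minimal standalone sketch of the same pattern (all names here are illustrative, not this package's API):

```go
package main

import "fmt"

type params struct {
	udpPort   uint
	findPeers bool
}

type option func(*params)

func withUDPPort(p uint) option       { return func(ps *params) { ps.udpPort = p } }
func withAutoFindPeers(b bool) option { return func(ps *params) { ps.findPeers = b } }

// defaults mirrors DefaultOptions: applied first, so callers can override.
func defaults() []option { return []option{withUDPPort(9000), withAutoFindPeers(true)} }

// apply runs defaults then caller options over a zero-valued params struct.
func apply(opts ...option) params {
	var ps params
	for _, o := range append(defaults(), opts...) {
		o(&ps)
	}
	return ps
}

func main() {
	fmt.Println(apply(withUDPPort(9999)).udpPort) // prints "9999"
}
```

Because later options win, the caller's `withUDPPort(9999)` replaces the default 9000 while the untouched `findPeers` default survives.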

// NewDiscoveryV5 returns a new instance of a DiscoveryV5 struct
func NewDiscoveryV5(priv *ecdsa.PrivateKey, localnode *enode.LocalNode, peerConnector PeerConnector, reg prometheus.Registerer, log *zap.Logger, opts ...DiscoveryV5Option) (*DiscoveryV5, error) {
	params := new(discV5Parameters)
	optList := DefaultOptions()
	optList = append(optList, opts...)
	for _, opt := range optList {
		opt(params)
	}

	logger := log.Named("discv5")

	var NAT nat.Interface
	if params.advertiseAddr == nil {
		NAT = nat.Any()
	}

	var bootnodes []*enode.Node
	for _, bootnode := range params.bootnodes {
		bootnodes = append(bootnodes, bootnode)
	}

	return &DiscoveryV5{
		params:                 params,
		peerConnector:          peerConnector,
		NAT:                    NAT,
		CommonDiscoveryService: service.NewCommonDiscoveryService(),
		localnode:              localnode,
		metrics:                newMetrics(reg),
		config: discover.Config{
			PrivateKey: priv,
			Bootnodes:  bootnodes,
			V5Config: discover.V5Config{
				ProtocolID: &protocolID,
			},
		},
		udpAddr: &net.UDPAddr{
			IP:   net.IPv4zero,
			Port: int(params.udpPort),
		},
		log: logger,
	}, nil
}

func (d *DiscoveryV5) Node() *enode.Node {
	return d.localnode.Node()
}

func (d *DiscoveryV5) listen(ctx context.Context) error {
	conn, err := net.ListenUDP("udp", d.udpAddr)
	if err != nil {
		return err
	}

	d.udpAddr = conn.LocalAddr().(*net.UDPAddr)

	if d.NAT != nil && !d.udpAddr.IP.IsLoopback() {
		d.WaitGroup().Add(1)
		go func() {
			defer d.WaitGroup().Done()
			nat.Map(d.NAT, ctx.Done(), "udp", d.udpAddr.Port, d.udpAddr.Port, "go-waku discv5 discovery")
		}()
	}

	d.localnode.SetFallbackUDP(d.udpAddr.Port)

	listener, err := discover.ListenV5(ctx, conn, d.localnode, d.config)
	if err != nil {
		return err
	}

	d.listener = listener

	d.log.Info("started Discovery V5",
		zap.Stringer("listening", d.udpAddr),
		logging.TCPAddr("advertising", d.localnode.Node().IP(), d.localnode.Node().TCP()))
	d.log.Info("Discovery V5: discoverable ENR", logging.ENode("enr", d.localnode.Node()))

	return nil
}

// SetHost sets the host to be able to mount or consume a protocol
func (d *DiscoveryV5) SetHost(h host.Host) {
	d.host = h
}

// Start starts the DiscoveryV5 service.
// It only works if discovery v5 hasn't been started yet.
func (d *DiscoveryV5) Start(ctx context.Context) error {
	return d.CommonDiscoveryService.Start(ctx, d.start)
}

func (d *DiscoveryV5) start() error {
	d.peerConnector.Subscribe(d.Context(), d.GetListeningChan())

	err := d.listen(d.Context())
	if err != nil {
		return err
	}

	if d.params.autoFindPeers {
		d.WaitGroup().Add(1)
		go func() {
			defer d.WaitGroup().Done()
			d.runDiscoveryV5Loop(d.Context())
		}()
	}

	return nil
}

// SetBootnodes is used to set up the bootstrap nodes to use for discovering new peers
func (d *DiscoveryV5) SetBootnodes(nodes []*enode.Node) error {
	if d.listener == nil {
		return ErrNoDiscV5Listener
	}

	return d.listener.SetFallbackNodes(nodes)
}

// Stop stops the execution of DiscV5.
// It only works if discovery v5 is in a running state,
// so we can assume that the cancel method is set
func (d *DiscoveryV5) Stop() {
	defer func() {
		if r := recover(); r != nil {
			d.log.Info("recovering from panic and quitting")
		}
	}()
	d.CommonDiscoveryService.Stop(func() {
		if d.listener != nil {
			d.listener.Close()
			d.listener = nil
			d.log.Info("stopped Discovery V5")
		}
	})
}

func isWakuNode(node *enode.Node) bool {
|
||||
enrField := new(wenr.WakuEnrBitfield)
|
||||
if err := node.Record().Load(enr.WithEntry(wenr.WakuENRField, &enrField)); err != nil {
|
||||
if !enr.IsNotFound(err) {
|
||||
utils.Logger().Named("discv5").Error("could not retrieve waku2 ENR field for enr ", zap.Any("node", node))
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
if enrField != nil {
|
||||
return *enrField != uint8(0) // #RFC 31 requirement
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
func (d *DiscoveryV5) evaluateNode() func(node *enode.Node) bool {
|
||||
return func(node *enode.Node) bool {
|
||||
if node == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
// node filtering based on ENR; we do not filter based on ENR in the first waku discv5 beta stage
|
||||
if !isWakuNode(node) {
|
||||
d.log.Debug("peer is not waku node", logging.ENode("enr", node))
|
||||
return false
|
||||
}
|
||||
|
||||
_, err := wenr.EnodeToPeerInfo(node)
|
||||
if err != nil {
|
||||
d.metrics.RecordError(peerInfoFailure)
|
||||
d.log.Error("obtaining peer info from enode", logging.ENode("enr", node), zap.Error(err))
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
// Predicate is a function that is applied to an iterator to filter the nodes to be retrieved according to some logic
|
||||
type Predicate func(enode.Iterator) enode.Iterator
|
||||
|
||||
// PeerIterator gets random nodes from DHT via discv5 listener.
|
||||
// Used for caching enr address in peerExchange
|
||||
// Used for connecting to peers in discovery_connector
|
||||
func (d *DiscoveryV5) PeerIterator(predicate ...Predicate) (enode.Iterator, error) {
|
||||
if d.listener == nil {
|
||||
return nil, ErrNoDiscV5Listener
|
||||
}
|
||||
|
||||
iterator := enode.Filter(d.listener.RandomNodes(), d.evaluateNode())
|
||||
if d.params.loopPredicate != nil {
|
||||
iterator = enode.Filter(iterator, d.params.loopPredicate)
|
||||
}
|
||||
|
||||
for _, p := range predicate {
|
||||
iterator = p(iterator)
|
||||
}
|
||||
|
||||
return iterator, nil
|
||||
}
|
||||
|
||||
func (d *DiscoveryV5) Iterate(ctx context.Context, iterator enode.Iterator, onNode func(*enode.Node, peer.AddrInfo) error) {
|
||||
defer iterator.Close()
|
||||
|
||||
peerCnt := 0
|
||||
for DelayedHasNext(ctx, iterator, &peerCnt) {
|
||||
_, addresses, err := wenr.Multiaddress(iterator.Node())
|
||||
if err != nil {
|
||||
d.metrics.RecordError(peerInfoFailure)
|
||||
d.log.Error("extracting multiaddrs from enr", zap.Error(err))
|
||||
continue
|
||||
}
|
||||
|
||||
peerAddrs, err := peer.AddrInfosFromP2pAddrs(addresses...)
|
||||
if err != nil {
|
||||
d.metrics.RecordError(peerInfoFailure)
|
||||
d.log.Error("converting multiaddrs to addrinfos", zap.Error(err))
|
||||
continue
|
||||
}
|
||||
|
||||
if len(peerAddrs) != 0 {
|
||||
err := onNode(iterator.Node(), peerAddrs[0])
|
||||
if err != nil {
|
||||
d.log.Error("processing node", zap.Error(err))
|
||||
}
|
||||
}
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
default:
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func DelayedHasNext(ctx context.Context, iterator enode.Iterator, peerCnt *int) bool {
|
||||
// Delay if .Next() is too fast
|
||||
start := time.Now()
|
||||
hasNext := iterator.Next()
|
||||
if !hasNext {
|
||||
return false
|
||||
}
|
||||
|
||||
elapsed := time.Since(start)
|
||||
if elapsed < peerDelay {
|
||||
t := time.NewTimer(peerDelay - elapsed)
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return false
|
||||
case <-t.C:
|
||||
t.Stop()
|
||||
}
|
||||
}
|
||||
|
||||
*peerCnt++
|
||||
if *peerCnt == bucketSize { // Delay every bucketSize peers discovered
|
||||
*peerCnt = 0
|
||||
t := time.NewTimer(delayBetweenDiscoveredPeerCnt)
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return false
|
||||
case <-t.C:
|
||||
t.Stop()
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// DefaultPredicate contains the conditions to be applied when filtering peers discovered via discv5
|
||||
func (d *DiscoveryV5) DefaultPredicate() Predicate {
|
||||
return FilterPredicate(func(n *enode.Node) bool {
|
||||
localRS, err := wenr.RelaySharding(d.localnode.Node().Record())
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
if localRS == nil { // No shard registered, so no need to check for shards
|
||||
return true
|
||||
}
|
||||
|
||||
if _, ok := d.params.bootnodes[n.ID()]; ok {
|
||||
return true // The record is a bootnode. Assume it's valid and dont filter it out
|
||||
}
|
||||
|
||||
nodeRS, err := wenr.RelaySharding(n.Record())
|
||||
if err != nil {
|
||||
d.log.Debug("failed to get relay shards from node record", logging.ENode("node", n), zap.Error(err))
|
||||
return false
|
||||
}
|
||||
|
||||
if nodeRS == nil {
|
||||
// Node has no shards registered.
|
||||
return false
|
||||
}
|
||||
|
||||
if nodeRS.ClusterID != localRS.ClusterID {
|
||||
return false
|
||||
}
|
||||
|
||||
// Contains any
|
||||
for _, idx := range localRS.ShardIDs {
|
||||
if nodeRS.Contains(localRS.ClusterID, idx) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
})
|
||||
}
|
||||
|
||||
// Iterates over the nodes found via discv5 belonging to the node's current shard, and sends them to peerConnector
|
||||
func (d *DiscoveryV5) peerLoop(ctx context.Context) error {
|
||||
iterator, err := d.PeerIterator(d.DefaultPredicate())
|
||||
if err != nil {
|
||||
d.metrics.RecordError(iteratorFailure)
|
||||
return fmt.Errorf("obtaining iterator: %w", err)
|
||||
}
|
||||
|
||||
defer iterator.Close()
|
||||
|
||||
d.Iterate(ctx, iterator, func(n *enode.Node, p peer.AddrInfo) error {
|
||||
peer := service.PeerData{
|
||||
Origin: peerstore.Discv5,
|
||||
AddrInfo: p,
|
||||
ENR: n,
|
||||
}
|
||||
|
||||
if d.PushToChan(peer) {
|
||||
d.log.Debug("published peer into peer channel", logging.HostID("peerID", peer.AddrInfo.ID))
|
||||
} else {
|
||||
d.log.Debug("could not publish peer into peer channel", logging.HostID("peerID", peer.AddrInfo.ID))
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *DiscoveryV5) runDiscoveryV5Loop(ctx context.Context) {
|
||||
if len(d.config.Bootnodes) > 0 {
|
||||
localRS, err := wenr.RelaySharding(d.localnode.Node().Record())
|
||||
if err == nil && localRS != nil {
|
||||
iterator := d.DefaultPredicate()(enode.IterNodes(d.config.Bootnodes))
|
||||
validBootCount := 0
|
||||
for iterator.Next() {
|
||||
validBootCount++
|
||||
}
|
||||
|
||||
if validBootCount == 0 {
|
||||
d.log.Warn("no discv5 bootstrap nodes share this node configured shards")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
restartLoop:
|
||||
for {
|
||||
err := d.peerLoop(ctx)
|
||||
if err != nil {
|
||||
d.log.Debug("iterating discv5", zap.Error(err))
|
||||
}
|
||||
|
||||
t := time.NewTimer(5 * time.Second)
|
||||
select {
|
||||
case <-t.C:
|
||||
t.Stop()
|
||||
case <-ctx.Done():
|
||||
t.Stop()
|
||||
break restartLoop
|
||||
}
|
||||
}
|
||||
d.log.Warn("Discv5 loop stopped")
|
||||
}
|
||||
52
vendor/github.com/waku-org/go-waku/waku/v2/discv5/filters.go
generated
vendored
Normal file
@@ -0,0 +1,52 @@
package discv5

import (
	wenr "github.com/waku-org/go-waku/waku/v2/protocol/enr"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
)

// FilterPredicate creates a Predicate from a custom function
func FilterPredicate(predicate func(*enode.Node) bool) Predicate {
	return func(iterator enode.Iterator) enode.Iterator {
		if predicate != nil {
			iterator = enode.Filter(iterator, predicate)
		}

		return iterator
	}
}

// FilterShard creates a Predicate that filters nodes that belong to a specific shard
func FilterShard(cluster, index uint16) Predicate {
	return func(iterator enode.Iterator) enode.Iterator {
		predicate := func(node *enode.Node) bool {
			rs, err := wenr.RelaySharding(node.Record())
			if err != nil || rs == nil {
				return false
			}
			return rs.Contains(cluster, index)
		}
		return enode.Filter(iterator, predicate)
	}
}

// FilterCapabilities creates a Predicate to filter nodes that support specific protocols
func FilterCapabilities(flags wenr.WakuEnrBitfield) Predicate {
	return func(iterator enode.Iterator) enode.Iterator {
		predicate := func(node *enode.Node) bool {
			enrField := new(wenr.WakuEnrBitfield)
			if err := node.Record().Load(enr.WithEntry(wenr.WakuENRField, &enrField)); err != nil {
				return false
			}

			if enrField == nil {
				return false
			}

			return *enrField&flags == flags
		}
		return enode.Filter(iterator, predicate)
	}
}
46
vendor/github.com/waku-org/go-waku/waku/v2/discv5/metrics.go
generated
vendored
Normal file
@@ -0,0 +1,46 @@
package discv5

import (
	"github.com/libp2p/go-libp2p/p2p/metricshelper"
	"github.com/prometheus/client_golang/prometheus"
)

var discV5Errors = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "waku_discv5_errors",
		Help: "The distribution of the discv5 protocol errors",
	},
	[]string{"error_type"},
)

var collectors = []prometheus.Collector{
	discV5Errors,
}

// Metrics exposes the functions required to update prometheus metrics for the discv5 protocol
type Metrics interface {
	RecordError(err metricsErrCategory)
}

type metricsImpl struct {
	reg prometheus.Registerer
}

func newMetrics(reg prometheus.Registerer) Metrics {
	metricshelper.RegisterCollectors(reg, collectors...)
	return &metricsImpl{
		reg: reg,
	}
}

type metricsErrCategory string

var (
	peerInfoFailure metricsErrCategory = "peer_info_failure"
	iteratorFailure metricsErrCategory = "iterator_failure"
)

// RecordError increases the counter for different error types
func (m *metricsImpl) RecordError(err metricsErrCategory) {
	discV5Errors.WithLabelValues(string(err)).Inc()
}
62
vendor/github.com/waku-org/go-waku/waku/v2/discv5/mock_peer_discoverer.go
generated
vendored
Normal file
@@ -0,0 +1,62 @@
package discv5

import (
	"context"
	"sync"

	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/waku-org/go-waku/waku/v2/service"
)

// TestPeerDiscoverer is a mock peer discoverer for testing
type TestPeerDiscoverer struct {
	sync.RWMutex
	peerMap map[peer.ID]struct{}
}

// NewTestPeerDiscoverer is a constructor for TestPeerDiscoverer
func NewTestPeerDiscoverer() *TestPeerDiscoverer {
	result := &TestPeerDiscoverer{
		peerMap: make(map[peer.ID]struct{}),
	}

	return result
}

// Subscribe records the peers received on the channel until ctx is done
func (t *TestPeerDiscoverer) Subscribe(ctx context.Context, ch <-chan service.PeerData) {
	go func() {
		for {
			select {
			case <-ctx.Done():
				return
			case p := <-ch:
				t.Lock()
				t.peerMap[p.AddrInfo.ID] = struct{}{}
				t.Unlock()
			}
		}
	}()
}

// HasPeer checks whether a peer is present in the peer discoverer
func (t *TestPeerDiscoverer) HasPeer(p peer.ID) bool {
	t.RLock()
	defer t.RUnlock()
	_, ok := t.peerMap[p]
	return ok
}

// PeerCount returns the number of peers in the peer discoverer
func (t *TestPeerDiscoverer) PeerCount() int {
	t.RLock()
	defer t.RUnlock()
	return len(t.peerMap)
}

// Clear removes all peers from the peer discoverer
func (t *TestPeerDiscoverer) Clear() {
	t.Lock()
	defer t.Unlock()
	t.peerMap = make(map[peer.ID]struct{})
}
133
vendor/github.com/waku-org/go-waku/waku/v2/dnsdisc/enr.go
generated
vendored
Normal file
@@ -0,0 +1,133 @@
package dnsdisc

import (
	"context"
	"errors"

	"github.com/ethereum/go-ethereum/p2p/dnsdisc"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/prometheus/client_golang/prometheus"
	wenr "github.com/waku-org/go-waku/waku/v2/protocol/enr"
	"github.com/waku-org/go-waku/waku/v2/utils"
	"go.uber.org/zap"
)

type dnsDiscoveryParameters struct {
	nameserver string
	resolver   dnsdisc.Resolver
}

type DNSDiscoveryOption func(*dnsDiscoveryParameters) error

var ErrExclusiveOpts = errors.New("cannot set both nameserver and resolver")

// WithNameserver is a DNSDiscoveryOption that configures the nameserver to use
func WithNameserver(nameserver string) DNSDiscoveryOption {
	return func(params *dnsDiscoveryParameters) error {
		if params.resolver != nil {
			return ErrExclusiveOpts
		}
		params.nameserver = nameserver
		return nil
	}
}

// WithResolver is a DNSDiscoveryOption that configures a custom dnsdisc.Resolver to use
func WithResolver(resolver dnsdisc.Resolver) DNSDiscoveryOption {
	return func(params *dnsDiscoveryParameters) error {
		if params.nameserver != "" {
			return ErrExclusiveOpts
		}
		params.resolver = resolver
		return nil
	}
}

type DiscoveredNode struct {
	PeerID   peer.ID
	PeerInfo peer.AddrInfo
	ENR      *enode.Node
}

var metrics Metrics

// SetPrometheusRegisterer is used to set up a custom prometheus registerer for metrics
func SetPrometheusRegisterer(reg prometheus.Registerer, logger *zap.Logger) {
	metrics = newMetrics(reg)
}

func init() {
	SetPrometheusRegisterer(prometheus.DefaultRegisterer, utils.Logger())
}

// RetrieveNodes returns a list of discovered nodes given a URL to a DNS-discoverable ENR tree
func RetrieveNodes(ctx context.Context, url string, opts ...DNSDiscoveryOption) ([]DiscoveredNode, error) {
	var discoveredNodes []DiscoveredNode

	params := new(dnsDiscoveryParameters)
	for _, opt := range opts {
		err := opt(params)
		if err != nil {
			return nil, err
		}
	}

	if params.resolver == nil {
		params.resolver = GetResolver(ctx, params.nameserver)
	}

	client := dnsdisc.NewClient(dnsdisc.Config{
		Resolver: params.resolver,
	})

	tree, err := client.SyncTree(url)
	if err != nil {
		metrics.RecordError(treeSyncFailure)
		return nil, err
	}

	for _, node := range tree.Nodes() {
		peerID, m, err := wenr.Multiaddress(node)
		if err != nil {
			metrics.RecordError(peerInfoFailure)
			return nil, err
		}

		infoAddr, err := peer.AddrInfosFromP2pAddrs(m...)
		if err != nil {
			return nil, err
		}

		var info peer.AddrInfo
		for _, i := range infoAddr {
			if i.ID == peerID {
				info = i
				break
			}
		}

		d := DiscoveredNode{
			PeerID:   peerID,
			PeerInfo: info,
		}

		if hasUDP(node) {
			d.ENR = node
		}

		discoveredNodes = append(discoveredNodes, d)
	}

	metrics.RecordDiscoveredNodes(len(discoveredNodes))

	return discoveredNodes, nil
}

func hasUDP(node *enode.Node) bool {
	enrUDP := new(enr.UDP)
	if err := node.Record().Load(enr.WithEntry(enrUDP.ENRKey(), enrUDP)); err != nil {
		return false
	}
	return true
}
59
vendor/github.com/waku-org/go-waku/waku/v2/dnsdisc/metrics.go
generated
vendored
Normal file
@@ -0,0 +1,59 @@
package dnsdisc

import (
	"github.com/libp2p/go-libp2p/p2p/metricshelper"
	"github.com/prometheus/client_golang/prometheus"
)

var dnsDiscoveredNodes = prometheus.NewCounter(
	prometheus.CounterOpts{
		Name: "waku_dnsdisc_discovered",
		Help: "The number of nodes discovered via DNS discovery",
	},
)

var dnsDiscoveryErrors = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "waku_dnsdisc_errors",
		Help: "The distribution of the dns discovery protocol errors",
	},
	[]string{"error_type"},
)

var collectors = []prometheus.Collector{
	dnsDiscoveredNodes,
	dnsDiscoveryErrors,
}

// Metrics exposes the functions required to update prometheus metrics for the dnsdisc protocol
type Metrics interface {
	RecordDiscoveredNodes(numNodes int)
	RecordError(err metricsErrCategory)
}

type metricsImpl struct {
	reg prometheus.Registerer
}

func newMetrics(reg prometheus.Registerer) Metrics {
	metricshelper.RegisterCollectors(reg, collectors...)
	return &metricsImpl{
		reg: reg,
	}
}

type metricsErrCategory string

var (
	treeSyncFailure metricsErrCategory = "tree_sync_failure"
	peerInfoFailure metricsErrCategory = "peer_info_failure"
)

// RecordError increases the counter for different error types
func (m *metricsImpl) RecordError(err metricsErrCategory) {
	dnsDiscoveryErrors.WithLabelValues(string(err)).Inc()
}

// RecordDiscoveredNodes increases the counter of nodes discovered via DNS discovery
func (m *metricsImpl) RecordDiscoveredNodes(numNodes int) {
	dnsDiscoveredNodes.Add(float64(numNodes))
}
22
vendor/github.com/waku-org/go-waku/waku/v2/dnsdisc/resolver.go
generated
vendored
Normal file
@@ -0,0 +1,22 @@
package dnsdisc

import (
	"context"
	"net"
)

// GetResolver returns a *net.Resolver object using a custom nameserver, or
// the default system resolver if no nameserver is specified
func GetResolver(ctx context.Context, nameserver string) *net.Resolver {
	if nameserver == "" {
		return net.DefaultResolver
	}

	return &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{}
			return d.DialContext(ctx, network, net.JoinHostPort(nameserver, "53"))
		},
	}
}
25
vendor/github.com/waku-org/go-waku/waku/v2/hash/hash.go
generated
vendored
Normal file
@@ -0,0 +1,25 @@
package hash

import (
	"crypto/sha256"
	"hash"
	"sync"
)

var sha256Pool = sync.Pool{New: func() interface{} {
	return sha256.New()
}}

// SHA256 generates the SHA256 hash from the input data
func SHA256(data ...[]byte) []byte {
	h, ok := sha256Pool.Get().(hash.Hash)
	if !ok {
		h = sha256.New()
	}
	defer sha256Pool.Put(h)
	h.Reset()
	for i := range data {
		h.Write(data[i])
	}
	return h.Sum(nil)
}
180
vendor/github.com/waku-org/go-waku/waku/v2/node/connectedness.go
generated
vendored
Normal file
@@ -0,0 +1,180 @@
package node

import (
	"context"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
	"github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/logging"
	"github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter"
	"github.com/waku-org/go-waku/waku/v2/protocol/lightpush"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"github.com/waku-org/go-waku/waku/v2/protocol/store"
	"go.uber.org/zap"

	wps "github.com/waku-org/go-waku/waku/v2/peerstore"
)

// PeerStats is a map of peer IDs to supported protocols
type PeerStats map[peer.ID][]protocol.ID

// ConnStatus is used to indicate whether the node is online, has access to history,
// and the list of peers the node is aware of
type ConnStatus struct {
	IsOnline   bool
	HasHistory bool
	Peers      PeerStats
}

type PeerConnection struct {
	PeerID    peer.ID
	Connected bool
}

// ConnectionNotifier is a custom Notifier used to signal when a peer
// connects to or disconnects from the node
type ConnectionNotifier struct {
	h              host.Host
	ctx            context.Context
	log            *zap.Logger
	metrics        Metrics
	connNotifCh    chan<- PeerConnection
	DisconnectChan chan peer.ID
}

// NewConnectionNotifier creates an instance of ConnectionNotifier to react to peer connection changes
func NewConnectionNotifier(ctx context.Context, h host.Host, connNotifCh chan<- PeerConnection, metrics Metrics, log *zap.Logger) ConnectionNotifier {
	return ConnectionNotifier{
		h:              h,
		ctx:            ctx,
		DisconnectChan: make(chan peer.ID, 100),
		connNotifCh:    connNotifCh,
		metrics:        metrics,
		log:            log.Named("connection-notifier"),
	}
}

// Listen is called when the network starts listening on an address
func (c ConnectionNotifier) Listen(n network.Network, m multiaddr.Multiaddr) {
}

// ListenClose is called when the network stops listening on an address
func (c ConnectionNotifier) ListenClose(n network.Network, m multiaddr.Multiaddr) {
}

// Connected is called when a connection is opened
func (c ConnectionNotifier) Connected(n network.Network, cc network.Conn) {
	c.log.Info("peer connected", logging.HostID("peer", cc.RemotePeer()), zap.String("direction", cc.Stat().Direction.String()))
	if c.connNotifCh != nil {
		select {
		case c.connNotifCh <- PeerConnection{cc.RemotePeer(), true}:
		default:
			c.log.Warn("subscriber is too slow")
		}
	}
	// TODO: Move this to be stored in Waku's own peerStore
	err := c.h.Peerstore().(wps.WakuPeerstore).SetDirection(cc.RemotePeer(), cc.Stat().Direction)
	if err != nil {
		c.log.Error("Failed to set peer direction for an outgoing connection", zap.Error(err))
	}

	c.metrics.RecordPeerConnected()
	c.metrics.SetPeerStoreSize(c.h.Peerstore().Peers().Len())
}

// Disconnected is called when a connection is closed
func (c ConnectionNotifier) Disconnected(n network.Network, cc network.Conn) {
	c.log.Info("peer disconnected", logging.HostID("peer", cc.RemotePeer()))
	c.metrics.RecordPeerDisconnected()
	c.DisconnectChan <- cc.RemotePeer()
	if c.connNotifCh != nil {
		select {
		case c.connNotifCh <- PeerConnection{cc.RemotePeer(), false}:
		default:
			c.log.Warn("subscriber is too slow")
		}
	}
	c.metrics.SetPeerStoreSize(c.h.Peerstore().Peers().Len())
}

// OpenedStream is called when a stream is opened
func (c ConnectionNotifier) OpenedStream(n network.Network, s network.Stream) {
}

// ClosedStream is called when a stream is closed
func (c ConnectionNotifier) ClosedStream(n network.Network, s network.Stream) {
}

// Close quits the ConnectionNotifier
func (c ConnectionNotifier) Close() {
}

func (w *WakuNode) sendConnStatus() {
	isOnline, hasHistory := w.Status()
	if w.connStatusChan != nil {
		connStatus := ConnStatus{IsOnline: isOnline, HasHistory: hasHistory, Peers: w.PeerStats()}
		w.connStatusChan <- connStatus
	}
}

func (w *WakuNode) connectednessListener(ctx context.Context) {
	defer w.wg.Done()

	for {
		select {
		case <-ctx.Done():
			return
		case <-w.protocolEventSub.Out():
		case <-w.identificationEventSub.Out():
		case <-w.connectionNotif.DisconnectChan:
		}
		w.sendConnStatus()
	}
}

// Status returns the current status of the node (online or not)
// and whether the node has access to history nodes or not
func (w *WakuNode) Status() (isOnline bool, hasHistory bool) {
	hasRelay := false
	hasLightPush := false
	hasStore := false
	hasFilter := false

	for _, peer := range w.host.Network().Peers() {
		protocols, err := w.host.Peerstore().GetProtocols(peer)
		if err != nil {
			w.log.Warn("reading peer protocols", logging.HostID("peer", peer), zap.Error(err))
		}

		for _, protocol := range protocols {
			if !hasRelay && protocol == relay.WakuRelayID_v200 {
				hasRelay = true
			}
			if !hasLightPush && protocol == lightpush.LightPushID_v20beta1 {
				hasLightPush = true
			}
			if !hasStore && protocol == store.StoreID_v20beta4 {
				hasStore = true
			}
			if !hasFilter && protocol == legacy_filter.FilterID_v20beta1 {
				hasFilter = true
			}
		}
	}

	if hasStore {
		hasHistory = true
	}

	if w.opts.enableFilterLightNode && !w.opts.enableRelay {
		isOnline = hasLightPush && hasFilter
	} else {
		isOnline = hasRelay || hasLightPush && (hasStore || hasFilter)
	}

	return
}
104
vendor/github.com/waku-org/go-waku/waku/v2/node/keepalive.go
generated
vendored
Normal file
@@ -0,0 +1,104 @@
|
||||
package node
|
||||
|
||||
import (
|
||||
"context"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/libp2p/go-libp2p/core/network"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
"github.com/libp2p/go-libp2p/p2p/protocol/ping"
|
||||
"github.com/waku-org/go-waku/logging"
|
||||
"go.uber.org/zap"
|
||||
)
|
||||
|
||||
const maxAllowedPingFailures = 2
|
||||
|
||||
// If the difference between the last time the keep alive code was executed and now is greater
|
||||
// than sleepDectectionIntervalFactor * keepAlivePeriod, force the ping verification to disconnect
|
||||
// the peers if they don't reply back
|
||||
const sleepDetectionIntervalFactor = 3
|
||||
|
||||
// startKeepAlive creates a go routine that periodically pings connected peers.
|
||||
// This is necessary because TCP connections are automatically closed due to inactivity,
|
||||
// and doing a ping will avoid this (with a small bandwidth cost)
|
||||
func (w *WakuNode) startKeepAlive(ctx context.Context, t time.Duration) {
|
||||
defer w.wg.Done()
|
||||
w.log.Info("setting up ping protocol", zap.Duration("duration", t))
|
||||
ticker := time.NewTicker(t)
|
||||
	defer ticker.Stop()

	lastTimeExecuted := w.timesource.Now()

	sleepDetectionInterval := int64(t) * sleepDetectionIntervalFactor

	for {
		select {
		case <-ticker.C:
			difference := w.timesource.Now().UnixNano() - lastTimeExecuted.UnixNano()
			forceDisconnectOnPingFailure := false
			if difference > sleepDetectionInterval {
				forceDisconnectOnPingFailure = true
				lastTimeExecuted = w.timesource.Now()
				w.log.Warn("keep alive hasn't been executed recently. Killing connections to peers if ping fails")
				continue
			}

			// Network's peers collection
			// contains only currently active peers
			pingWg := sync.WaitGroup{}
			peersToPing := w.host.Network().Peers()
			pingWg.Add(len(peersToPing))
			for _, p := range peersToPing {
				if p != w.host.ID() {
					go w.pingPeer(ctx, &pingWg, p, forceDisconnectOnPingFailure)
				}
			}
			pingWg.Wait()

			lastTimeExecuted = w.timesource.Now()
		case <-ctx.Done():
			w.log.Info("stopping ping protocol")
			return
		}
	}
}

func (w *WakuNode) pingPeer(ctx context.Context, wg *sync.WaitGroup, peerID peer.ID, forceDisconnectOnFail bool) {
	defer wg.Done()

	ctx, cancel := context.WithTimeout(ctx, 7*time.Second)
	defer cancel()

	logger := w.log.With(logging.HostID("peer", peerID))
	logger.Debug("pinging")
	pr := ping.Ping(ctx, w.host, peerID)
	select {
	case res := <-pr:
		if res.Error != nil {
			w.keepAliveMutex.Lock()
			w.keepAliveFails[peerID]++
			w.keepAliveMutex.Unlock()
			logger.Debug("could not ping", zap.Error(res.Error))
		} else {
			w.keepAliveMutex.Lock()
			delete(w.keepAliveFails, peerID)
			w.keepAliveMutex.Unlock()
		}
	case <-ctx.Done():
		w.keepAliveMutex.Lock()
		w.keepAliveFails[peerID]++
		w.keepAliveMutex.Unlock()
		logger.Debug("could not ping (context done)", zap.Error(ctx.Err()))
	}

	w.keepAliveMutex.Lock()
	if (forceDisconnectOnFail || w.keepAliveFails[peerID] > maxAllowedPingFailures) && w.host.Network().Connectedness(peerID) == network.Connected {
		logger.Info("disconnecting peer")
		if err := w.host.Network().ClosePeer(peerID); err != nil {
			logger.Debug("closing conn to peer", zap.Error(err))
		}
		w.keepAliveFails[peerID] = 0
	}
	w.keepAliveMutex.Unlock()
}
363
vendor/github.com/waku-org/go-waku/waku/v2/node/localnode.go
generated
vendored
Normal file
@@ -0,0 +1,363 @@
package node

import (
	"context"
	"errors"
	"net"
	"strconv"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/libp2p/go-libp2p/core/event"
	ma "github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	wenr "github.com/waku-org/go-waku/waku/v2/protocol/enr"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"go.uber.org/zap"
)

func (w *WakuNode) updateLocalNode(localnode *enode.LocalNode, multiaddrs []ma.Multiaddr, ipAddr *net.TCPAddr, udpPort uint, wakuFlags wenr.WakuEnrBitfield, advertiseAddr []ma.Multiaddr, shouldAutoUpdate bool) error {
	var options []wenr.ENROption
	options = append(options, wenr.WithUDPPort(udpPort))
	options = append(options, wenr.WithWakuBitfield(wakuFlags))
	options = append(options, wenr.WithMultiaddress(multiaddrs...))

	if advertiseAddr != nil {
		// An advertised address disables libp2p address updates
		// and discv5 predictions
		ipAddr, err := selectMostExternalAddress(advertiseAddr)
		if err != nil {
			return err
		}

		options = append(options, wenr.WithIP(ipAddr))
	} else if !shouldAutoUpdate {
		// We received a libp2p address update. Autoupdate is disabled.
		// Using a static IP will disable endpoint prediction.
		options = append(options, wenr.WithIP(ipAddr))
	} else {
		// We received a libp2p address update, but we should still
		// allow discv5 to update the enr record. We set the localnode
		// keys manually. It's possible that the ENR record might get
		// updated automatically
		ip4 := ipAddr.IP.To4()
		ip6 := ipAddr.IP.To16()
		if ip4 != nil && !ip4.IsUnspecified() {
			localnode.SetFallbackIP(ip4)
			localnode.Set(enr.IPv4(ip4))
			localnode.Set(enr.TCP(uint16(ipAddr.Port)))
		} else {
			localnode.Delete(enr.IPv4{})
			localnode.Delete(enr.TCP(0))
			localnode.SetFallbackIP(net.IP{127, 0, 0, 1})
		}

		if ip4 == nil && ip6 != nil && !ip6.IsUnspecified() {
			localnode.Set(enr.IPv6(ip6))
			localnode.Set(enr.TCP6(ipAddr.Port))
		} else {
			localnode.Delete(enr.IPv6{})
			localnode.Delete(enr.TCP6(0))
		}
	}

	return wenr.Update(localnode, options...)
}

func isPrivate(addr *net.TCPAddr) bool {
	return addr.IP.IsPrivate()
}

func isExternal(addr *net.TCPAddr) bool {
	return !isPrivate(addr) && !addr.IP.IsLoopback() && !addr.IP.IsUnspecified()
}

func isLoopback(addr *net.TCPAddr) bool {
	return addr.IP.IsLoopback()
}

func filterIP(ss []*net.TCPAddr, fn func(*net.TCPAddr) bool) (ret []*net.TCPAddr) {
	for _, s := range ss {
		if fn(s) {
			ret = append(ret, s)
		}
	}
	return
}

func extractIPAddressForENR(addr ma.Multiaddr) (*net.TCPAddr, error) {
	// It's a p2p-circuit address. We shouldn't use these
	// for building the ENR record default keys
	_, err := addr.ValueForProtocol(ma.P_CIRCUIT)
	if err == nil {
		return nil, errors.New("can't use IP address from a p2p-circuit address")
	}

	// ws and wss addresses are handled by the multiaddr key;
	// they shouldn't be used for building the ENR record default keys
	_, err = addr.ValueForProtocol(ma.P_WS)
	if err == nil {
		return nil, errors.New("can't use IP address from a ws address")
	}
	_, err = addr.ValueForProtocol(ma.P_WSS)
	if err == nil {
		return nil, errors.New("can't use IP address from a wss address")
	}

	var ipStr string
	dns4, err := addr.ValueForProtocol(ma.P_DNS4)
	if err != nil {
		ipStr, err = addr.ValueForProtocol(ma.P_IP4)
		if err != nil {
			return nil, err
		}
	} else {
		netIP, err := net.ResolveIPAddr("ip4", dns4)
		if err != nil {
			return nil, err
		}
		ipStr = netIP.String()
	}

	portStr, err := addr.ValueForProtocol(ma.P_TCP)
	if err != nil {
		return nil, err
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		return nil, err
	}
	return &net.TCPAddr{
		IP:   net.ParseIP(ipStr),
		Port: port,
	}, nil
}

func selectMostExternalAddress(addresses []ma.Multiaddr) (*net.TCPAddr, error) {
	var ipAddrs []*net.TCPAddr
	for _, addr := range addresses {
		ipAddr, err := extractIPAddressForENR(addr)
		if err != nil {
			continue
		}
		ipAddrs = append(ipAddrs, ipAddr)
	}

	externalIPs := filterIP(ipAddrs, isExternal)
	if len(externalIPs) > 0 {
		return externalIPs[0], nil
	}

	privateIPs := filterIP(ipAddrs, isPrivate)
	if len(privateIPs) > 0 {
		return privateIPs[0], nil
	}

	loopback := filterIP(ipAddrs, isLoopback)
	if len(loopback) > 0 {
		return loopback[0], nil
	}

	return nil, errors.New("could not obtain ip address")
}

func decapsulateP2P(addr ma.Multiaddr) (ma.Multiaddr, error) {
	p2p, err := addr.ValueForProtocol(ma.P_P2P)
	if err != nil {
		return nil, err
	}

	p2pAddr, err := ma.NewMultiaddr("/p2p/" + p2p)
	if err != nil {
		return nil, err
	}

	addr = addr.Decapsulate(p2pAddr)

	return addr, nil
}

func decapsulateCircuitRelayAddr(addr ma.Multiaddr) (ma.Multiaddr, error) {
	_, err := addr.ValueForProtocol(ma.P_CIRCUIT)
	if err != nil {
		return nil, errors.New("not a circuit relay address")
	}

	// We remove the node's multiaddress from the addr
	addr, _ = ma.SplitFunc(addr, func(c ma.Component) bool {
		return c.Protocol().Code == ma.P_CIRCUIT
	})

	return addr, nil
}

func selectWSListenAddresses(addresses []ma.Multiaddr) ([]ma.Multiaddr, error) {
	var result []ma.Multiaddr
	for _, addr := range addresses {
		// It's a p2p-circuit address. We don't use these at this stage yet
		_, err := addr.ValueForProtocol(ma.P_CIRCUIT)
		if err == nil {
			continue
		}

		_, noWSS := addr.ValueForProtocol(ma.P_WSS)
		_, noWS := addr.ValueForProtocol(ma.P_WS)
		if noWS != nil && noWSS != nil { // Neither WS nor WSS found
			continue
		}

		addr, err = decapsulateP2P(addr)
		if err == nil {
			result = append(result, addr)
		}
	}

	return result, nil
}

func selectCircuitRelayListenAddresses(addresses []ma.Multiaddr) ([]ma.Multiaddr, error) {
	var result []ma.Multiaddr
	for _, addr := range addresses {
		addr, err := decapsulateCircuitRelayAddr(addr)
		if err != nil {
			continue
		}
		result = append(result, addr)
	}

	return result, nil
}

func (w *WakuNode) getENRAddresses(addrs []ma.Multiaddr) (extAddr *net.TCPAddr, multiaddr []ma.Multiaddr, err error) {
	extAddr, err = selectMostExternalAddress(addrs)
	if err != nil {
		return nil, nil, err
	}

	wssAddrs, err := selectWSListenAddresses(addrs)
	if err != nil {
		return nil, nil, err
	}

	circuitAddrs, err := selectCircuitRelayListenAddresses(addrs)
	if err != nil {
		return nil, nil, err
	}

	if len(circuitAddrs) != 0 {
		// Node is unreachable, hence why we have circuit relay multiaddrs.
		// We prefer these instead of any ws/wss address
		multiaddr = append(multiaddr, circuitAddrs...)
	} else {
		multiaddr = append(multiaddr, wssAddrs...)
	}

	return
}

func (w *WakuNode) setupENR(ctx context.Context, addrs []ma.Multiaddr) error {
	ipAddr, multiaddresses, err := w.getENRAddresses(addrs)
	if err != nil {
		w.log.Error("obtaining external address", zap.Error(err))
		return err
	}

	err = w.updateLocalNode(w.localNode, multiaddresses, ipAddr, w.opts.udpPort, w.wakuFlag, w.opts.advertiseAddrs, w.opts.discV5autoUpdate)
	if err != nil {
		w.log.Error("updating localnode ENR record", zap.Error(err))
		return err
	}

	if w.Relay() != nil {
		err = w.watchTopicShards(ctx)
		if err != nil {
			return err
		}
	}

	w.enrChangeCh <- struct{}{}

	return nil
}

func (w *WakuNode) watchTopicShards(ctx context.Context) error {
	evtRelaySubscribed, err := w.Relay().Events().Subscribe(new(relay.EvtRelaySubscribed))
	if err != nil {
		return err
	}

	evtRelayUnsubscribed, err := w.Relay().Events().Subscribe(new(relay.EvtRelayUnsubscribed))
	if err != nil {
		return err
	}

	go func() {
		defer evtRelaySubscribed.Close()
		defer evtRelayUnsubscribed.Close()

		for {
			select {
			case <-ctx.Done():
				return
			case <-evtRelayUnsubscribed.Out():
			case <-evtRelaySubscribed.Out():
				topics := w.Relay().Topics()
				rs, err := protocol.TopicsToRelayShards(topics...)
				if err != nil {
					w.log.Warn("could not set ENR shard info", zap.Error(err))
					continue
				}

				if len(rs) > 1 {
					w.log.Warn("could not set ENR shard info", zap.String("error", "multiple clusters found, use sharded topics within the same cluster"))
					continue
				}

				if len(rs) == 1 {
					w.log.Info("updating advertised relay shards in ENR")
					if len(rs[0].ShardIDs) != len(topics) {
						w.log.Warn("A mix of named and static shards found. ENR shard will contain only the following shards", zap.Any("shards", rs[0]))
					}

					err = wenr.Update(w.localNode, wenr.WithWakuRelaySharding(rs[0]))
					if err != nil {
						w.log.Warn("could not set ENR shard info", zap.Error(err))
						continue
					}

					w.enrChangeCh <- struct{}{}
				}
			}
		}
	}()

	return nil
}

func (w *WakuNode) registerAndMonitorReachability(ctx context.Context) {
	var myEventSub event.Subscription
	var err error
	if myEventSub, err = w.host.EventBus().Subscribe(new(event.EvtLocalReachabilityChanged)); err != nil {
		w.log.Error("failed to register with libp2p for reachability status", zap.Error(err))
		return
	}
	w.wg.Add(1)
	go func() {
		defer myEventSub.Close()
		defer w.wg.Done()

		for {
			select {
			case evt := <-myEventSub.Out():
				reachability := evt.(event.EvtLocalReachabilityChanged).Reachability
				w.log.Info("Node reachability changed", zap.Stringer("newReachability", reachability))
			case <-ctx.Done():
				return
			}
		}
	}()
}
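The address-selection cascade in `selectMostExternalAddress` rests on the `isExternal`/`isPrivate`/`isLoopback` predicates, which are thin wrappers over stdlib `net.IP` methods. A minimal stdlib-only sketch of the same classification (the `classify` helper name is hypothetical, not part of go-waku):

```go
package main

import (
	"fmt"
	"net"
)

// classify mirrors the isLoopback/isPrivate/isExternal predicates above:
// loopback and private ranges are checked first, and anything routable
// and specified is treated as external.
func classify(ip string) string {
	addr := &net.TCPAddr{IP: net.ParseIP(ip)}
	switch {
	case addr.IP.IsLoopback():
		return "loopback"
	case addr.IP.IsPrivate():
		return "private"
	case addr.IP.IsUnspecified():
		return "unspecified"
	default:
		return "external"
	}
}

func main() {
	fmt.Println(classify("127.0.0.1"))    // loopback
	fmt.Println(classify("192.168.1.10")) // private (RFC 1918)
	fmt.Println(classify("8.8.8.8"))      // external
}
```

The cascade then prefers external addresses, falls back to private ones, and uses loopback only as a last resort, matching the order of the `filterIP` calls.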
86
vendor/github.com/waku-org/go-waku/waku/v2/node/metrics.go
generated
vendored
Normal file
@@ -0,0 +1,86 @@
package node

import (
	"fmt"

	"github.com/libp2p/go-libp2p/p2p/metricshelper"
	"github.com/prometheus/client_golang/prometheus"
)

var gitVersion = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "waku_version",
		Help: "The go-waku version",
	},
	[]string{"git_version"},
)

var peerDials = prometheus.NewCounter(
	prometheus.CounterOpts{
		Name: "waku_peers_dials",
		Help: "Number of peer dials",
	})

var connectedPeers = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Name: "waku_connected_peers",
		Help: "Number of connected peers",
	})

var peerStoreSize = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Name: "waku_peer_store_size",
		Help: "Size of Peer Store",
	})

var collectors = []prometheus.Collector{
	gitVersion,
	peerDials,
	connectedPeers,
	peerStoreSize,
}

// Metrics exposes the functions required to update prometheus metrics for the waku node
type Metrics interface {
	RecordVersion(version string, commit string)
	RecordDial()
	RecordPeerConnected()
	RecordPeerDisconnected()
	SetPeerStoreSize(int)
}

type metricsImpl struct {
	reg prometheus.Registerer
}

func newMetrics(reg prometheus.Registerer) Metrics {
	metricshelper.RegisterCollectors(reg, collectors...)
	return &metricsImpl{
		reg: reg,
	}
}

// RecordVersion registers a metric with the current version and commit of go-waku
func (m *metricsImpl) RecordVersion(version string, commit string) {
	v := fmt.Sprintf("%s-%s", version, commit)
	gitVersion.WithLabelValues(v).Inc()
}

// RecordDial increases the counter for the number of dials
func (m *metricsImpl) RecordDial() {
	peerDials.Inc()
}

// RecordPeerConnected increases the metrics for the number of connected peers
func (m *metricsImpl) RecordPeerConnected() {
	connectedPeers.Inc()
}

// RecordPeerDisconnected decreases the metrics for the number of connected peers
func (m *metricsImpl) RecordPeerDisconnected() {
	connectedPeers.Dec()
}

// SetPeerStoreSize sets the gauge tracking the current size of the peer store
func (m *metricsImpl) SetPeerStoreSize(size int) {
	peerStoreSize.Set(float64(size))
}
20
vendor/github.com/waku-org/go-waku/waku/v2/node/service.go
generated
vendored
Normal file
@@ -0,0 +1,20 @@
package node

import (
	"context"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
)

type Service interface {
	SetHost(h host.Host)
	Start(context.Context) error
	Stop()
}

type ReceptorService interface {
	SetHost(h host.Host)
	Stop()
	Start(context.Context, *relay.Subscription) error
}
32
vendor/github.com/waku-org/go-waku/waku/v2/node/version.go
generated
vendored
Normal file
@@ -0,0 +1,32 @@
package node

import (
	"fmt"
	"runtime"
)

// GitCommit is a commit hash.
var GitCommit string

// Version is the version of go-waku at the time of compilation
var Version string

// VersionInfo contains the version, commit and runtime information of go-waku
type VersionInfo struct {
	Version string
	Commit  string
	System  string
	Golang  string
}

// GetVersionInfo returns the version information for this build of go-waku
func GetVersionInfo() VersionInfo {
	return VersionInfo{
		Version: Version,
		Commit:  GitCommit,
		System:  runtime.GOARCH + "/" + runtime.GOOS,
		Golang:  runtime.Version(),
	}
}

func (v VersionInfo) String() string {
	return fmt.Sprintf("%s-%s", v.Version, v.Commit)
}
984
vendor/github.com/waku-org/go-waku/waku/v2/node/wakunode2.go
generated
vendored
Normal file
@@ -0,0 +1,984 @@
package node
|
||||
|
||||
import (
|
||||
"context"
|
||||
"math/rand"
|
||||
"net"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
backoffv4 "github.com/cenkalti/backoff/v4"
|
||||
golog "github.com/ipfs/go-log/v2"
|
||||
"github.com/libp2p/go-libp2p"
|
||||
"go.uber.org/zap"
|
||||
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
"github.com/ethereum/go-ethereum/p2p/enode"
|
||||
|
||||
"github.com/libp2p/go-libp2p/core/event"
|
||||
"github.com/libp2p/go-libp2p/core/host"
|
||||
"github.com/libp2p/go-libp2p/core/network"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
"github.com/libp2p/go-libp2p/core/peerstore"
|
||||
"github.com/libp2p/go-libp2p/core/protocol"
|
||||
"github.com/libp2p/go-libp2p/p2p/host/autorelay"
|
||||
"github.com/libp2p/go-libp2p/p2p/host/peerstore/pstoremem"
|
||||
"github.com/libp2p/go-libp2p/p2p/protocol/circuitv2/proto"
|
||||
ws "github.com/libp2p/go-libp2p/p2p/transport/websocket"
|
||||
ma "github.com/multiformats/go-multiaddr"
|
||||
|
||||
"github.com/waku-org/go-waku/logging"
|
||||
"github.com/waku-org/go-waku/waku/v2/discv5"
|
||||
"github.com/waku-org/go-waku/waku/v2/dnsdisc"
|
||||
"github.com/waku-org/go-waku/waku/v2/peermanager"
|
||||
wps "github.com/waku-org/go-waku/waku/v2/peerstore"
|
||||
wakuprotocol "github.com/waku-org/go-waku/waku/v2/protocol"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/enr"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/filter"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/lightpush"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/metadata"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/pb"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/peer_exchange"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/relay"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/store"
|
||||
"github.com/waku-org/go-waku/waku/v2/rendezvous"
|
||||
"github.com/waku-org/go-waku/waku/v2/service"
|
||||
"github.com/waku-org/go-waku/waku/v2/timesource"
|
||||
|
||||
"github.com/waku-org/go-waku/waku/v2/utils"
|
||||
)
|
||||
|
||||
const discoveryConnectTimeout = 20 * time.Second
|
||||
|
||||
type Peer struct {
|
||||
ID peer.ID `json:"peerID"`
|
||||
Protocols []protocol.ID `json:"protocols"`
|
||||
Addrs []ma.Multiaddr `json:"addrs"`
|
||||
Connected bool `json:"connected"`
|
||||
PubsubTopics []string `json:"pubsubTopics"`
|
||||
}
|
||||
|
||||
type storeFactory func(w *WakuNode) store.Store
|
||||
|
||||
type byte32 = [32]byte
|
||||
|
||||
type IdentityCredential = struct {
|
||||
IDTrapdoor byte32 `json:"idTrapdoor"`
|
||||
IDNullifier byte32 `json:"idNullifier"`
|
||||
IDSecretHash byte32 `json:"idSecretHash"`
|
||||
IDCommitment byte32 `json:"idCommitment"`
|
||||
}
|
||||
|
||||
type SpamHandler = func(message *pb.WakuMessage, topic string) error
|
||||
|
||||
type RLNRelay interface {
|
||||
IdentityCredential() (IdentityCredential, error)
|
||||
MembershipIndex() uint
|
||||
AppendRLNProof(msg *pb.WakuMessage, senderEpochTime time.Time) error
|
||||
Validator(spamHandler SpamHandler) func(ctx context.Context, message *pb.WakuMessage, topic string) bool
|
||||
Start(ctx context.Context) error
|
||||
Stop() error
|
||||
IsReady(ctx context.Context) (bool, error)
|
||||
}
|
||||
|
||||
type WakuNode struct {
|
||||
host host.Host
|
||||
opts *WakuNodeParameters
|
||||
log *zap.Logger
|
||||
timesource timesource.Timesource
|
||||
metrics Metrics
|
||||
|
||||
peerstore peerstore.Peerstore
|
||||
peerConnector *peermanager.PeerConnectionStrategy
|
||||
|
||||
relay Service
|
||||
lightPush Service
|
||||
discoveryV5 Service
|
||||
peerExchange Service
|
||||
rendezvous Service
|
||||
metadata Service
|
||||
legacyFilter ReceptorService
|
||||
filterFullNode ReceptorService
|
||||
filterLightNode Service
|
||||
store ReceptorService
|
||||
rlnRelay RLNRelay
|
||||
|
||||
wakuFlag enr.WakuEnrBitfield
|
||||
circuitRelayNodes chan peer.AddrInfo
|
||||
|
||||
localNode *enode.LocalNode
|
||||
|
||||
bcaster relay.Broadcaster
|
||||
|
||||
connectionNotif ConnectionNotifier
|
||||
protocolEventSub event.Subscription
|
||||
identificationEventSub event.Subscription
|
||||
addressChangesSub event.Subscription
|
||||
enrChangeCh chan struct{}
|
||||
|
||||
keepAliveMutex sync.Mutex
|
||||
keepAliveFails map[peer.ID]int
|
||||
|
||||
cancel context.CancelFunc
|
||||
wg *sync.WaitGroup
|
||||
|
||||
// Channel passed to WakuNode constructor
|
||||
// receiving connection status notifications
|
||||
connStatusChan chan<- ConnStatus
|
||||
|
||||
storeFactory storeFactory
|
||||
|
||||
peermanager *peermanager.PeerManager
|
||||
}
|
||||
|
||||
func defaultStoreFactory(w *WakuNode) store.Store {
|
||||
return store.NewWakuStore(w.opts.messageProvider, w.peermanager, w.timesource, w.opts.prometheusReg, w.log)
|
||||
}
|
||||
|
||||
// New is used to instantiate a WakuNode using a set of WakuNodeOptions
|
||||
func New(opts ...WakuNodeOption) (*WakuNode, error) {
|
||||
var err error
|
||||
params := new(WakuNodeParameters)
|
||||
params.libP2POpts = DefaultLibP2POptions
|
||||
|
||||
opts = append(DefaultWakuNodeOptions, opts...)
|
||||
for _, opt := range opts {
|
||||
err := opt(params)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
if params.logger == nil {
|
||||
params.logger = utils.Logger()
|
||||
//golog.SetPrimaryCore(params.logger.Core())
|
||||
golog.SetAllLoggers(params.logLevel)
|
||||
}
|
||||
|
||||
if params.privKey == nil {
|
||||
prvKey, err := crypto.GenerateKey()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
params.privKey = prvKey
|
||||
}
|
||||
|
||||
if params.enableWSS {
|
||||
params.libP2POpts = append(params.libP2POpts, libp2p.Transport(ws.New, ws.WithTLSConfig(params.tlsConfig)))
|
||||
} else {
|
||||
// Enable WS transport by default
|
||||
params.libP2POpts = append(params.libP2POpts, libp2p.Transport(ws.New))
|
||||
}
|
||||
|
||||
// Setting default host address if none was provided
|
||||
if params.hostAddr == nil {
|
||||
params.hostAddr, err = net.ResolveTCPAddr("tcp", "0.0.0.0:0")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = WithHostAddress(params.hostAddr)(params)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
if len(params.multiAddr) > 0 {
|
||||
params.libP2POpts = append(params.libP2POpts, libp2p.ListenAddrs(params.multiAddr...))
|
||||
}
|
||||
|
||||
params.libP2POpts = append(params.libP2POpts, params.Identity())
|
||||
|
||||
if params.addressFactory != nil {
|
||||
params.libP2POpts = append(params.libP2POpts, libp2p.AddrsFactory(params.addressFactory))
|
||||
}
|
||||
|
||||
w := new(WakuNode)
|
||||
w.bcaster = relay.NewBroadcaster(1024)
|
||||
w.opts = params
|
||||
w.log = params.logger.Named("node2")
|
||||
w.wg = &sync.WaitGroup{}
|
||||
w.keepAliveFails = make(map[peer.ID]int)
|
||||
w.wakuFlag = enr.NewWakuEnrBitfield(w.opts.enableLightPush, w.opts.enableLegacyFilter, w.opts.enableStore, w.opts.enableRelay)
|
||||
w.circuitRelayNodes = make(chan peer.AddrInfo)
|
||||
w.metrics = newMetrics(params.prometheusReg)
|
||||
|
||||
w.metrics.RecordVersion(Version, GitCommit)
|
||||
|
||||
// Setup peerstore wrapper
|
||||
if params.peerstore != nil {
|
||||
w.peerstore = wps.NewWakuPeerstore(params.peerstore)
|
||||
params.libP2POpts = append(params.libP2POpts, libp2p.Peerstore(w.peerstore))
|
||||
} else {
|
||||
ps, err := pstoremem.NewPeerstore()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
w.peerstore = wps.NewWakuPeerstore(ps)
|
||||
params.libP2POpts = append(params.libP2POpts, libp2p.Peerstore(w.peerstore))
|
||||
}
|
||||
|
||||
// Use circuit relay with nodes received on circuitRelayNodes channel
|
||||
params.libP2POpts = append(params.libP2POpts, libp2p.EnableAutoRelayWithPeerSource(
|
||||
func(ctx context.Context, numPeers int) <-chan peer.AddrInfo {
|
||||
r := make(chan peer.AddrInfo)
|
||||
go func() {
|
||||
defer close(r)
|
||||
for ; numPeers != 0; numPeers-- {
|
||||
select {
|
||||
case v, ok := <-w.circuitRelayNodes:
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
select {
|
||||
case r <- v:
|
||||
case <-ctx.Done():
|
||||
return
|
||||
}
|
||||
case <-ctx.Done():
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
return r
|
||||
},
|
||||
autorelay.WithMinInterval(params.circuitRelayMinInterval),
|
||||
autorelay.WithBootDelay(params.circuitRelayBootDelay),
|
||||
))
|
||||
|
||||
if params.enableNTP {
|
||||
w.timesource = timesource.NewNTPTimesource(w.opts.ntpURLs, w.log)
|
||||
} else {
|
||||
w.timesource = timesource.NewDefaultClock()
|
||||
}
|
||||
|
||||
w.localNode, err = enr.NewLocalnode(w.opts.privKey)
|
||||
if err != nil {
|
||||
w.log.Error("creating localnode", zap.Error(err))
|
||||
}
|
||||
|
||||
w.metadata = metadata.NewWakuMetadata(w.opts.clusterID, w.localNode, w.log)
|
||||
|
||||
//Initialize peer manager.
|
||||
w.peermanager = peermanager.NewPeerManager(w.opts.maxPeerConnections, w.opts.peerStoreCapacity, w.log)
|
||||
|
||||
w.peerConnector, err = peermanager.NewPeerConnectionStrategy(w.peermanager, discoveryConnectTimeout, w.log)
|
||||
if err != nil {
|
||||
w.log.Error("creating peer connection strategy", zap.Error(err))
|
||||
}
|
||||
|
||||
if w.opts.enableDiscV5 {
|
||||
err := w.mountDiscV5()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
w.peerExchange, err = peer_exchange.NewWakuPeerExchange(w.DiscV5(), w.peerConnector, w.peermanager, w.opts.prometheusReg, w.log)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
w.rendezvous = rendezvous.NewRendezvous(w.opts.rendezvousDB, w.peerConnector, w.log)
|
||||
w.relay = relay.NewWakuRelay(w.bcaster, w.opts.minRelayPeersToPublish, w.timesource, w.opts.prometheusReg, w.log,
|
||||
relay.WithPubSubOptions(w.opts.pubsubOpts),
|
||||
relay.WithMaxMsgSize(w.opts.maxMsgSizeBytes))
|
||||
|
||||
if w.opts.enableRelay {
|
||||
err = w.setupRLNRelay()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
w.opts.legacyFilterOpts = append(w.opts.legacyFilterOpts, legacy_filter.WithPeerManager(w.peermanager))
|
||||
w.opts.filterOpts = append(w.opts.filterOpts, filter.WithPeerManager(w.peermanager))
|
||||
|
||||
w.legacyFilter = legacy_filter.NewWakuFilter(w.bcaster, w.opts.isLegacyFilterFullNode, w.timesource, w.opts.prometheusReg, w.log, w.opts.legacyFilterOpts...)
|
||||
w.filterFullNode = filter.NewWakuFilterFullNode(w.timesource, w.opts.prometheusReg, w.log, w.opts.filterOpts...)
|
||||
w.filterLightNode = filter.NewWakuFilterLightNode(w.bcaster, w.peermanager, w.timesource, w.opts.prometheusReg, w.log)
|
||||
w.lightPush = lightpush.NewWakuLightPush(w.Relay(), w.peermanager, w.opts.prometheusReg, w.log)
|
||||
|
||||
if params.storeFactory != nil {
|
||||
w.storeFactory = params.storeFactory
|
||||
} else {
|
||||
w.storeFactory = defaultStoreFactory
|
||||
}
|
||||
|
||||
if params.connStatusC != nil {
|
||||
w.connStatusChan = params.connStatusC
|
||||
}
|
||||
|
||||
return w, nil
|
||||
}
|
||||
|
||||
func (w *WakuNode) watchMultiaddressChanges(ctx context.Context) {
|
||||
defer w.wg.Done()
|
||||
|
||||
addrsSet := utils.MultiAddrSet(w.ListenAddresses()...)
|
||||
|
||||
first := make(chan struct{}, 1)
|
||||
first <- struct{}{}
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case <-first:
|
||||
addr := utils.MultiAddrFromSet(addrsSet)
|
||||
w.log.Info("listening", logging.MultiAddrs("multiaddr", addr...))
|
||||
case <-w.addressChangesSub.Out():
|
||||
newAddrs := utils.MultiAddrSet(w.ListenAddresses()...)
|
||||
if !utils.MultiAddrSetEquals(addrsSet, newAddrs) {
|
||||
addrsSet = newAddrs
|
||||
addrs := utils.MultiAddrFromSet(addrsSet)
|
||||
w.log.Info("listening addresses update received", logging.MultiAddrs("multiaddr", addrs...))
|
||||
err := w.setupENR(ctx, addrs)
|
||||
if err != nil {
|
||||
w.log.Warn("could not update ENR", zap.Error(err))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Start initializes all the protocols that were setup in the WakuNode
|
||||
func (w *WakuNode) Start(ctx context.Context) error {
|
||||
connGater := peermanager.NewConnectionGater(w.log)
|
||||
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
w.cancel = cancel
|
||||
|
||||
libP2POpts := append(w.opts.libP2POpts, libp2p.ConnectionGater(connGater))
|
||||
|
||||
host, err := libp2p.New(libP2POpts...)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
host.Network().Notify(&network.NotifyBundle{
|
||||
DisconnectedF: func(net network.Network, conn network.Conn) {
|
||||
go connGater.NotifyDisconnect(conn.RemoteMultiaddr())
|
||||
},
|
||||
})
|
||||
|
||||
w.host = host
|
||||
|
||||
if w.protocolEventSub, err = host.EventBus().Subscribe(new(event.EvtPeerProtocolsUpdated)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if w.identificationEventSub, err = host.EventBus().Subscribe(new(event.EvtPeerIdentificationCompleted)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if w.addressChangesSub, err = host.EventBus().Subscribe(new(event.EvtLocalAddressesUpdated)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
w.connectionNotif = NewConnectionNotifier(ctx, w.host, w.opts.connNotifCh, w.metrics, w.log)
|
||||
	w.host.Network().Notify(w.connectionNotif)

	w.enrChangeCh = make(chan struct{}, 10)

	w.wg.Add(4)
	go w.connectednessListener(ctx)
	go w.watchMultiaddressChanges(ctx)
	go w.watchENRChanges(ctx)
	go w.findRelayNodes(ctx)

	err = w.bcaster.Start(ctx)
	if err != nil {
		return err
	}

	if w.opts.keepAliveInterval > time.Duration(0) {
		w.wg.Add(1)
		go w.startKeepAlive(ctx, w.opts.keepAliveInterval)
	}

	w.metadata.SetHost(host)
	err = w.metadata.Start(ctx)
	if err != nil {
		return err
	}

	w.peerConnector.SetHost(host)
	w.peermanager.SetHost(host)
	err = w.peerConnector.Start(ctx)
	if err != nil {
		return err
	}

	if w.opts.enableNTP {
		err := w.timesource.Start(ctx)
		if err != nil {
			return err
		}
	}

	if w.opts.enableRLN {
		err = w.startRlnRelay(ctx)
		if err != nil {
			return err
		}
	}

	w.relay.SetHost(host)

	if w.opts.enableRelay {
		err := w.relay.Start(ctx)
		if err != nil {
			return err
		}
		err = w.peermanager.SubscribeToRelayEvtBus(w.relay.(*relay.WakuRelay).Events())
		if err != nil {
			return err
		}
		w.peermanager.Start(ctx)
		w.registerAndMonitorReachability(ctx)
	}

	w.store = w.storeFactory(w)
	w.store.SetHost(host)
	if w.opts.enableStore {
		sub := w.bcaster.RegisterForAll()
		err := w.startStore(ctx, sub)
		if err != nil {
			return err
		}
		w.log.Info("Subscribing store to broadcaster")
	}

	w.lightPush.SetHost(host)
	if w.opts.enableLightPush {
		if err := w.lightPush.Start(ctx); err != nil {
			return err
		}
	}

	w.legacyFilter.SetHost(host)
	if w.opts.enableLegacyFilter {
		sub := w.bcaster.RegisterForAll()
		err := w.legacyFilter.Start(ctx, sub)
		if err != nil {
			return err
		}
		w.log.Info("Subscribing filter to broadcaster")
	}

	w.filterFullNode.SetHost(host)
	if w.opts.enableFilterFullNode {
		sub := w.bcaster.RegisterForAll()
		err := w.filterFullNode.Start(ctx, sub)
		if err != nil {
			return err
		}
		w.log.Info("Subscribing filterV2 to broadcaster")
	}

	w.filterLightNode.SetHost(host)
	if w.opts.enableFilterLightNode {
		err := w.filterLightNode.Start(ctx)
		if err != nil {
			return err
		}
	}

	err = w.setupENR(ctx, w.ListenAddresses())
	if err != nil {
		return err
	}

	w.peerExchange.SetHost(host)
	if w.opts.enablePeerExchange {
		err := w.peerExchange.Start(ctx)
		if err != nil {
			return err
		}
	}

	w.rendezvous.SetHost(host)
	if w.opts.enableRendezvousPoint {
		err := w.rendezvous.Start(ctx)
		if err != nil {
			return err
		}
	}

	return nil
}

// Stop stops the WakuNode and closes all connections to the host
func (w *WakuNode) Stop() {
	if w.cancel == nil {
		return
	}

	w.bcaster.Stop()

	defer w.connectionNotif.Close()
	defer w.protocolEventSub.Close()
	defer w.identificationEventSub.Close()
	defer w.addressChangesSub.Close()

	w.host.Network().StopNotify(w.connectionNotif)

	w.relay.Stop()
	w.lightPush.Stop()
	w.store.Stop()
	w.legacyFilter.Stop()
	w.filterFullNode.Stop()
	w.filterLightNode.Stop()

	if w.opts.enableDiscV5 {
		w.discoveryV5.Stop()
	}
	w.peerExchange.Stop()
	w.rendezvous.Stop()

	w.peerConnector.Stop()

	_ = w.stopRlnRelay()

	w.timesource.Stop()

	w.host.Close()

	w.cancel()

	w.wg.Wait()

	close(w.enrChangeCh)

	w.cancel = nil
}

// Host returns the libp2p Host used by the WakuNode
func (w *WakuNode) Host() host.Host {
	return w.host
}

// ID returns the base58 encoded ID from the host
func (w *WakuNode) ID() string {
	return w.host.ID().Pretty()
}

func (w *WakuNode) watchENRChanges(ctx context.Context) {
	defer w.wg.Done()

	var prevNodeVal string
	for {
		select {
		case <-ctx.Done():
			return
		case <-w.enrChangeCh:
			if w.localNode != nil {
				currNodeVal := w.localNode.Node().String()
				if prevNodeVal != currNodeVal {
					if prevNodeVal == "" {
						w.log.Info("enr record", logging.ENode("enr", w.localNode.Node()))
					} else {
						w.log.Info("new enr record", logging.ENode("enr", w.localNode.Node()))
					}
					prevNodeVal = currNodeVal
				}
			}
		}
	}
}

// ListenAddresses returns all the multiaddresses used by the host
func (w *WakuNode) ListenAddresses() []ma.Multiaddr {
	return utils.EncapsulatePeerID(w.host.ID(), w.host.Addrs()...)
}

// ENR returns the ENR address of the node
func (w *WakuNode) ENR() *enode.Node {
	return w.localNode.Node()
}

// Timesource returns the timesource used by this node to obtain the current wall time.
// Depending on the configuration it will be the local time or an ntp-synced time
func (w *WakuNode) Timesource() timesource.Timesource {
	return w.timesource
}

// Relay is used to access any operation related to Waku Relay protocol
func (w *WakuNode) Relay() *relay.WakuRelay {
	if result, ok := w.relay.(*relay.WakuRelay); ok {
		return result
	}
	return nil
}

// Store is used to access any operation related to Waku Store protocol
func (w *WakuNode) Store() store.Store {
	return w.store.(store.Store)
}

// LegacyFilter is used to access any operation related to Waku LegacyFilter protocol
func (w *WakuNode) LegacyFilter() *legacy_filter.WakuFilter {
	if result, ok := w.legacyFilter.(*legacy_filter.WakuFilter); ok {
		return result
	}
	return nil
}

// FilterFullNode is used to access any operation related to Waku Filter protocol full node feature
func (w *WakuNode) FilterFullNode() *filter.WakuFilterFullNode {
	if result, ok := w.filterFullNode.(*filter.WakuFilterFullNode); ok {
		return result
	}
	return nil
}

// FilterLightnode is used to access any operation related to Waku Filter protocol light node feature
func (w *WakuNode) FilterLightnode() *filter.WakuFilterLightNode {
	if result, ok := w.filterLightNode.(*filter.WakuFilterLightNode); ok {
		return result
	}
	return nil
}

// PeerManager returns the peer manager used by this node
func (w *WakuNode) PeerManager() *peermanager.PeerManager {
	return w.peermanager
}

// Lightpush is used to access any operation related to Waku Lightpush protocol
func (w *WakuNode) Lightpush() *lightpush.WakuLightPush {
	if result, ok := w.lightPush.(*lightpush.WakuLightPush); ok {
		return result
	}
	return nil
}

// DiscV5 is used to access any operation related to DiscoveryV5
func (w *WakuNode) DiscV5() *discv5.DiscoveryV5 {
	if result, ok := w.discoveryV5.(*discv5.DiscoveryV5); ok {
		return result
	}
	return nil
}

// PeerExchange is used to access any operation related to Peer Exchange
func (w *WakuNode) PeerExchange() *peer_exchange.WakuPeerExchange {
	if result, ok := w.peerExchange.(*peer_exchange.WakuPeerExchange); ok {
		return result
	}
	return nil
}

// Rendezvous is used to access any operation related to Rendezvous
func (w *WakuNode) Rendezvous() *rendezvous.Rendezvous {
	if result, ok := w.rendezvous.(*rendezvous.Rendezvous); ok {
		return result
	}
	return nil
}

// Broadcaster is used to access the message broadcaster that is used to push
// messages to different protocols
func (w *WakuNode) Broadcaster() relay.Broadcaster {
	return w.bcaster
}

func (w *WakuNode) mountDiscV5() error {
	discV5Options := []discv5.DiscoveryV5Option{
		discv5.WithBootnodes(w.opts.discV5bootnodes),
		discv5.WithUDPPort(w.opts.udpPort),
		discv5.WithAutoUpdate(w.opts.discV5autoUpdate),
	}

	if w.opts.advertiseAddrs != nil {
		discV5Options = append(discV5Options, discv5.WithAdvertiseAddr(w.opts.advertiseAddrs))
	}

	discv5Inst, err := discv5.NewDiscoveryV5(w.opts.privKey, w.localNode, w.peerConnector, w.opts.prometheusReg, w.log, discV5Options...)
	w.discoveryV5 = discv5Inst
	w.peermanager.SetDiscv5(discv5Inst)

	return err
}

func (w *WakuNode) startStore(ctx context.Context, sub *relay.Subscription) error {
	err := w.store.Start(ctx, sub)
	if err != nil {
		w.log.Error("starting store", zap.Error(err))
		return err
	}

	return nil
}

// AddPeer is used to add a peer and the protocols it supports to the node peerstore
// TODO: Need to update this for autosharding, to only take contentTopics and optional pubSubTopics or provide an alternate API only for contentTopics.
func (w *WakuNode) AddPeer(address ma.Multiaddr, origin wps.Origin, pubSubTopics []string, protocols ...protocol.ID) (peer.ID, error) {
	pData, err := w.peermanager.AddPeer(address, origin, pubSubTopics, protocols...)
	if err != nil {
		return "", err
	}
	return pData.AddrInfo.ID, nil
}

// AddDiscoveredPeer adds a discovered peer to the node peerstore
func (w *WakuNode) AddDiscoveredPeer(ID peer.ID, addrs []ma.Multiaddr, origin wps.Origin, pubsubTopics []string, connectNow bool) {
	p := service.PeerData{
		Origin: origin,
		AddrInfo: peer.AddrInfo{
			ID:    ID,
			Addrs: addrs,
		},
		PubsubTopics: pubsubTopics,
	}
	w.peermanager.AddDiscoveredPeer(p, connectNow)
}

// DialPeerWithMultiAddress is used to connect to a peer using a multiaddress
func (w *WakuNode) DialPeerWithMultiAddress(ctx context.Context, address ma.Multiaddr) error {
	info, err := peer.AddrInfoFromP2pAddr(address)
	if err != nil {
		return err
	}

	return w.connect(ctx, *info)
}

// DialPeer is used to connect to a peer using a string containing a multiaddress
func (w *WakuNode) DialPeer(ctx context.Context, address string) error {
	p, err := ma.NewMultiaddr(address)
	if err != nil {
		return err
	}

	info, err := peer.AddrInfoFromP2pAddr(p)
	if err != nil {
		return err
	}

	return w.connect(ctx, *info)
}

// DialPeerWithInfo is used to connect to a peer using its address information
func (w *WakuNode) DialPeerWithInfo(ctx context.Context, peerInfo peer.AddrInfo) error {
	return w.connect(ctx, peerInfo)
}

func (w *WakuNode) connect(ctx context.Context, info peer.AddrInfo) error {
	err := w.host.Connect(ctx, info)
	if err != nil {
		w.host.Peerstore().(wps.WakuPeerstore).AddConnFailure(info)
		return err
	}

	for _, addr := range info.Addrs {
		// TODO: this is a temporary fix
		// host.Connect adds the addresses with a TempAddressTTL
		// however, identify will filter out all non IP addresses
		// and expire all temporary addrs. So in the meantime, let's
		// store dns4 addresses with a RecentlyConnectedAddrTTL, otherwise
		// it will have trouble with the status fleet circuit relay addresses
		// See https://github.com/libp2p/go-libp2p/issues/2550
		_, err := addr.ValueForProtocol(ma.P_DNS4)
		if err == nil {
			w.host.Peerstore().AddAddrs(info.ID, info.Addrs, peerstore.RecentlyConnectedAddrTTL)
		}
	}

	w.host.Peerstore().(wps.WakuPeerstore).ResetConnFailures(info)

	w.metrics.RecordDial()

	return nil
}

// DialPeerByID is used to connect to an already known peer
func (w *WakuNode) DialPeerByID(ctx context.Context, peerID peer.ID) error {
	info := w.host.Peerstore().PeerInfo(peerID)
	return w.connect(ctx, info)
}

// ClosePeerByAddress is used to disconnect from a peer using its multiaddress
func (w *WakuNode) ClosePeerByAddress(address string) error {
	p, err := ma.NewMultiaddr(address)
	if err != nil {
		return err
	}

	// Extract the peer ID from the multiaddr.
	info, err := peer.AddrInfoFromP2pAddr(p)
	if err != nil {
		return err
	}

	return w.ClosePeerById(info.ID)
}

// ClosePeerById is used to close a connection to a peer
func (w *WakuNode) ClosePeerById(id peer.ID) error {
	err := w.host.Network().ClosePeer(id)
	if err != nil {
		return err
	}
	return nil
}

// PeerCount returns the number of connected peers
func (w *WakuNode) PeerCount() int {
	return len(w.host.Network().Peers())
}

// PeerStats returns a list of peers and the protocols supported by them
func (w *WakuNode) PeerStats() PeerStats {
	p := make(PeerStats)
	for _, peerID := range w.host.Network().Peers() {
		protocols, err := w.host.Peerstore().GetProtocols(peerID)
		if err != nil {
			continue
		}
		p[peerID] = protocols
	}
	return p
}

// SetDiscV5Bootnodes sets the bootnodes on discv5
func (w *WakuNode) SetDiscV5Bootnodes(nodes []*enode.Node) error {
	w.opts.discV5bootnodes = nodes
	return w.DiscV5().SetBootnodes(nodes)
}

// Peers returns the list of peers, their addresses, the protocols they support and their connection status
func (w *WakuNode) Peers() ([]*Peer, error) {
	var peers []*Peer
	for _, peerId := range w.host.Peerstore().Peers() {
		connected := w.host.Network().Connectedness(peerId) == network.Connected
		protocols, err := w.host.Peerstore().GetProtocols(peerId)
		if err != nil {
			return nil, err
		}

		addrs := utils.EncapsulatePeerID(peerId, w.host.Peerstore().Addrs(peerId)...)
		topics, err := w.host.Peerstore().(*wps.WakuPeerstoreImpl).PubSubTopics(peerId)
		if err != nil {
			return nil, err
		}
		peers = append(peers, &Peer{
			ID:           peerId,
			Protocols:    protocols,
			Connected:    connected,
			Addrs:        addrs,
			PubsubTopics: topics,
		})
	}
	return peers, nil
}

// PeersByStaticShard filters peers based on shard information following static sharding
func (w *WakuNode) PeersByStaticShard(cluster uint16, shard uint16) peer.IDSlice {
	pTopic := wakuprotocol.NewStaticShardingPubsubTopic(cluster, shard).String()
	return w.peerstore.(wps.WakuPeerstore).PeersByPubSubTopic(pTopic)
}

// PeersByContentTopic filters peers based on a content topic
func (w *WakuNode) PeersByContentTopic(contentTopic string) peer.IDSlice {
	pTopic, err := wakuprotocol.GetPubSubTopicFromContentTopic(contentTopic)
	if err != nil {
		return nil
	}
	return w.peerstore.(wps.WakuPeerstore).PeersByPubSubTopic(pTopic)
}

func (w *WakuNode) findRelayNodes(ctx context.Context) {
	defer w.wg.Done()

	// Feed peers more often right after the bootstrap, then backoff
	bo := backoffv4.NewExponentialBackOff()
	bo.InitialInterval = 15 * time.Second
	bo.Multiplier = 3
	bo.MaxInterval = 1 * time.Hour
	bo.MaxElapsedTime = 0 // never stop
	t := backoffv4.NewTicker(bo)
	defer t.Stop()
	for {
		select {
		case <-t.C:
		case <-ctx.Done():
			return
		}

		peers, err := w.Peers()
		if err != nil {
			w.log.Error("failed to fetch peers", zap.Error(err))
			continue
		}

		// Shuffle peers
		rand.Shuffle(len(peers), func(i, j int) { peers[i], peers[j] = peers[j], peers[i] })

		for _, p := range peers {
			info := w.Host().Peerstore().PeerInfo(p.ID)
			supportedProtocols, err := w.Host().Peerstore().SupportsProtocols(p.ID, proto.ProtoIDv2Hop)
			if err != nil {
				w.log.Error("could not check supported protocols", zap.Error(err))
				continue
			}

			if len(supportedProtocols) == 0 {
				continue
			}

			select {
			case <-ctx.Done():
				w.log.Debug("context done, auto-relay has enough peers")
				return
			case w.circuitRelayNodes <- info:
				w.log.Debug("published auto-relay peer info", zap.Any("peer-id", p.ID))
			}
		}
	}
}

func GetNodesFromDNSDiscovery(logger *zap.Logger, ctx context.Context, nameServer string, discoveryURLs []string) []dnsdisc.DiscoveredNode {
	var discoveredNodes []dnsdisc.DiscoveredNode
	for _, url := range discoveryURLs {
		logger.Info("attempting DNS discovery with", zap.String("URL", url))
		nodes, err := dnsdisc.RetrieveNodes(ctx, url, dnsdisc.WithNameserver(nameServer))
		if err != nil {
			logger.Warn("dns discovery error", zap.Error(err))
		} else {
			var discPeerInfo []peer.AddrInfo
			for _, n := range nodes {
				discPeerInfo = append(discPeerInfo, n.PeerInfo)
			}
			logger.Info("found dns entries", zap.Any("nodes", discPeerInfo))
			discoveredNodes = append(discoveredNodes, nodes...)
		}
	}
	return discoveredNodes
}

func GetDiscv5Option(dnsDiscoveredNodes []dnsdisc.DiscoveredNode, discv5Nodes []string, port uint, autoUpdate bool) (WakuNodeOption, error) {
	var bootnodes []*enode.Node
	for _, addr := range discv5Nodes {
		bootnode, err := enode.Parse(enode.ValidSchemes, addr)
		if err != nil {
			return nil, err
		}
		bootnodes = append(bootnodes, bootnode)
	}

	for _, n := range dnsDiscoveredNodes {
		if n.ENR != nil {
			bootnodes = append(bootnodes, n.ENR)
		}
	}

	return WithDiscoveryV5(port, bootnodes, autoUpdate), nil
}

func (w *WakuNode) ClusterID() uint16 {
	return w.opts.clusterID
}

23
vendor/github.com/waku-org/go-waku/waku/v2/node/wakunode2_no_rln.go
generated
vendored
Normal file
@@ -0,0 +1,23 @@
//go:build gowaku_no_rln
// +build gowaku_no_rln

package node

import "context"

// RLNRelay is used to access any operation related to Waku RLN protocol
func (w *WakuNode) RLNRelay() RLNRelay {
	return nil
}

func (w *WakuNode) setupRLNRelay() error {
	return nil
}

func (w *WakuNode) startRlnRelay(ctx context.Context) error {
	return nil
}

func (w *WakuNode) stopRlnRelay() error {
	return nil
}
135
vendor/github.com/waku-org/go-waku/waku/v2/node/wakunode2_rln.go
generated
vendored
Normal file
@@ -0,0 +1,135 @@
//go:build !gowaku_no_rln
// +build !gowaku_no_rln

package node

import (
	"bytes"
	"context"
	"errors"

	"github.com/waku-org/go-waku/waku/v2/protocol/rln"
	"github.com/waku-org/go-waku/waku/v2/protocol/rln/group_manager"
	"github.com/waku-org/go-waku/waku/v2/protocol/rln/group_manager/dynamic"
	"github.com/waku-org/go-waku/waku/v2/protocol/rln/group_manager/static"
	"github.com/waku-org/go-waku/waku/v2/protocol/rln/keystore"
	r "github.com/waku-org/go-zerokit-rln/rln"
)

// RLNRelay is used to access any operation related to Waku RLN protocol
func (w *WakuNode) RLNRelay() RLNRelay {
	return w.rlnRelay
}

func (w *WakuNode) setupRLNRelay() error {
	var err error

	if !w.opts.enableRLN {
		return nil
	}

	if !w.opts.enableRelay {
		return errors.New("rln requires relay")
	}

	var groupManager group_manager.GroupManager

	rlnInstance, rootTracker, err := rln.GetRLNInstanceAndRootTracker(w.opts.rlnTreePath)
	if err != nil {
		return err
	}
	if !w.opts.rlnRelayDynamic {
		w.log.Info("setting up waku-rln-relay in off-chain mode")

		index := uint(0)
		if w.opts.rlnRelayMemIndex != nil {
			index = *w.opts.rlnRelayMemIndex
		}

		// set up rln relay inputs
		groupKeys, idCredential, err := static.Setup(index)
		if err != nil {
			return err
		}

		groupManager, err = static.NewStaticGroupManager(groupKeys, idCredential, index, rlnInstance, rootTracker, w.log)
		if err != nil {
			return err
		}
	} else {
		w.log.Info("setting up waku-rln-relay in on-chain mode")

		var appKeystore *keystore.AppKeystore
		if w.opts.keystorePath != "" {
			appKeystore, err = keystore.New(w.opts.keystorePath, dynamic.RLNAppInfo, w.log)
			if err != nil {
				return err
			}
		}

		groupManager, err = dynamic.NewDynamicGroupManager(
			w.opts.rlnETHClientAddress,
			w.opts.rlnMembershipContractAddress,
			w.opts.rlnRelayMemIndex,
			appKeystore,
			w.opts.keystorePassword,
			w.opts.prometheusReg,
			rlnInstance,
			rootTracker,
			w.log,
		)
		if err != nil {
			return err
		}
	}

	rlnRelay := rln.New(group_manager.Details{
		GroupManager: groupManager,
		RootTracker:  rootTracker,
		RLN:          rlnInstance,
	}, w.timesource, w.opts.prometheusReg, w.log)

	w.rlnRelay = rlnRelay

	w.Relay().RegisterDefaultValidator(w.rlnRelay.Validator(w.opts.rlnSpamHandler))

	return nil
}

func (w *WakuNode) startRlnRelay(ctx context.Context) error {
	rlnRelay := w.rlnRelay.(*rln.WakuRLNRelay)

	err := rlnRelay.Start(ctx)
	if err != nil {
		return err
	}

	if !w.opts.rlnRelayDynamic {
		// check the correct construction of the tree by comparing the calculated root against the expected root
		// no error should happen as it is already captured in the unit tests
		root, err := rlnRelay.RLN.GetMerkleRoot()
		if err != nil {
			return err
		}

		expectedRoot, err := r.ToBytes32LE(r.STATIC_GROUP_MERKLE_ROOT)
		if err != nil {
			return err
		}

		if !bytes.Equal(expectedRoot[:], root[:]) {
			return errors.New("root mismatch: something went wrong in Merkle tree construction")
		}
	}

	w.log.Info("mounted waku RLN relay")

	return nil
}

func (w *WakuNode) stopRlnRelay() error {
	if w.rlnRelay != nil {
		return w.rlnRelay.Stop()
	}
	return nil
}
579
vendor/github.com/waku-org/go-waku/waku/v2/node/wakuoptions.go
generated
vendored
Normal file
@@ -0,0 +1,579 @@
package node

import (
	"crypto/ecdsa"
	"crypto/tls"
	"errors"
	"fmt"
	"net"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/p2p/enode"
	logging "github.com/ipfs/go-log/v2"
	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/config"
	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peerstore"
	basichost "github.com/libp2p/go-libp2p/p2p/host/basic"
	"github.com/libp2p/go-libp2p/p2p/muxer/mplex"
	"github.com/libp2p/go-libp2p/p2p/muxer/yamux"
	"github.com/libp2p/go-libp2p/p2p/net/connmgr"
	quic "github.com/libp2p/go-libp2p/p2p/transport/quic"
	"github.com/libp2p/go-libp2p/p2p/transport/tcp"
	libp2pwebtransport "github.com/libp2p/go-libp2p/p2p/transport/webtransport"
	"github.com/multiformats/go-multiaddr"
	manet "github.com/multiformats/go-multiaddr/net"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/waku-org/go-waku/waku/v2/protocol/filter"
	"github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter"
	"github.com/waku-org/go-waku/waku/v2/protocol/pb"
	"github.com/waku-org/go-waku/waku/v2/protocol/store"
	"github.com/waku-org/go-waku/waku/v2/rendezvous"
	"github.com/waku-org/go-waku/waku/v2/timesource"
	"github.com/waku-org/go-waku/waku/v2/utils"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// Default userAgent
const userAgent string = "go-waku"

// Default minRelayPeersToPublish
const defaultMinRelayPeersToPublish = 0

type WakuNodeParameters struct {
	hostAddr       *net.TCPAddr
	clusterID      uint16
	dns4Domain     string
	advertiseAddrs []multiaddr.Multiaddr
	multiAddr      []multiaddr.Multiaddr
	addressFactory basichost.AddrsFactory
	privKey        *ecdsa.PrivateKey
	libP2POpts     []libp2p.Option
	peerstore      peerstore.Peerstore
	prometheusReg  prometheus.Registerer

	circuitRelayMinInterval time.Duration
	circuitRelayBootDelay   time.Duration

	enableNTP bool
	ntpURLs   []string

	enableWS  bool
	wsPort    int
	enableWSS bool
	wssPort   int
	tlsConfig *tls.Config

	logger   *zap.Logger
	logLevel logging.LogLevel

	enableRelay            bool
	enableLegacyFilter     bool
	isLegacyFilterFullNode bool
	enableFilterLightNode  bool
	enableFilterFullNode   bool
	legacyFilterOpts       []legacy_filter.Option
	filterOpts             []filter.Option
	pubsubOpts             []pubsub.Option

	minRelayPeersToPublish int
	maxMsgSizeBytes        int

	enableStore     bool
	messageProvider store.MessageProvider

	enableRendezvousPoint bool
	rendezvousDB          *rendezvous.DB

	maxPeerConnections int
	peerStoreCapacity  int

	enableDiscV5     bool
	udpPort          uint
	discV5bootnodes  []*enode.Node
	discV5autoUpdate bool

	enablePeerExchange bool

	enableRLN                    bool
	rlnRelayMemIndex             *uint
	rlnRelayDynamic              bool
	rlnSpamHandler               func(message *pb.WakuMessage, topic string) error
	rlnETHClientAddress          string
	keystorePath                 string
	keystorePassword             string
	rlnTreePath                  string
	rlnMembershipContractAddress common.Address

	keepAliveInterval time.Duration

	enableLightPush bool

	connStatusC chan<- ConnStatus
	connNotifCh chan<- PeerConnection

	storeFactory storeFactory
}

type WakuNodeOption func(*WakuNodeParameters) error

// Default options used in the libp2p node
var DefaultWakuNodeOptions = []WakuNodeOption{
	WithPrometheusRegisterer(prometheus.NewRegistry()),
	WithMaxPeerConnections(50),
	WithCircuitRelayParams(2*time.Second, 3*time.Minute),
}

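`WakuNodeOption` follows Go's functional-options pattern: each option is a closure that mutates a shared parameters struct and may fail validation, and defaults such as `DefaultWakuNodeOptions` are applied before caller-supplied options so that later options win. A minimal stdlib-only sketch of the pattern, with illustrative names rather than the go-waku API:

```go
package main

import (
	"errors"
	"fmt"
)

// params is a trimmed-down stand-in for WakuNodeParameters.
type params struct {
	clusterID uint16
	maxPeers  int
}

type option func(*params) error

func withClusterID(id uint16) option {
	return func(p *params) error { p.clusterID = id; return nil }
}

func withMaxPeers(n int) option {
	return func(p *params) error {
		if n <= 0 {
			return errors.New("maxPeers must be positive")
		}
		p.maxPeers = n
		return nil
	}
}

// newParams applies defaults first, then user options, so later options override.
func newParams(opts ...option) (*params, error) {
	defaults := []option{withMaxPeers(50)}
	p := &params{}
	for _, opt := range append(defaults, opts...) {
		if err := opt(p); err != nil {
			return nil, err
		}
	}
	return p, nil
}

func main() {
	p, err := newParams(withClusterID(16), withMaxPeers(100))
	fmt.Println(p.clusterID, p.maxPeers, err)
}
```

Returning an error from each option lets construction fail fast on invalid configuration, exactly as `WithPrometheusRegisterer` does for a nil registerer below.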
// MultiAddresses returns the list of multiaddresses configured in the node
func (w WakuNodeParameters) MultiAddresses() []multiaddr.Multiaddr {
	return w.multiAddr
}

// Identity returns a libp2p option containing the identity used by the node
func (w WakuNodeParameters) Identity() config.Option {
	return libp2p.Identity(*w.GetPrivKey())
}

// TLSConfig returns the TLS config used for setting up secure websockets
func (w WakuNodeParameters) TLSConfig() *tls.Config {
	return w.tlsConfig
}

// AddressFactory returns the address factory used by the node's host
func (w WakuNodeParameters) AddressFactory() basichost.AddrsFactory {
	return w.addressFactory
}

// WithLogger is a WakuNodeOption that adds a custom logger
func WithLogger(l *zap.Logger) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.logger = l
		logging.SetPrimaryCore(l.Core())
		return nil
	}
}

// WithLogLevel is a WakuNodeOption that sets the log level for go-waku
func WithLogLevel(lvl zapcore.Level) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.logLevel = logging.LogLevel(lvl)
		logging.SetAllLoggers(params.logLevel)
		return nil
	}
}

// WithPrometheusRegisterer configures go-waku to use reg as the Registerer for all metrics subsystems
func WithPrometheusRegisterer(reg prometheus.Registerer) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		if reg == nil {
			return errors.New("registerer cannot be nil")
		}

		params.prometheusReg = reg
		return nil
	}
}

// WithDNS4Domain is a WakuNodeOption that adds a custom dns4 domain name to the node's advertised addresses
func WithDNS4Domain(dns4Domain string) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.dns4Domain = dns4Domain
		previousAddrFactory := params.addressFactory
		params.addressFactory = func(inputAddr []multiaddr.Multiaddr) (addresses []multiaddr.Multiaddr) {
			addresses = append(addresses, inputAddr...)

			hostAddrMA, err := multiaddr.NewMultiaddr("/dns4/" + params.dns4Domain)
			if err != nil {
				panic(fmt.Sprintf("invalid dns4 address: %s", err.Error()))
			}

			tcp, _ := multiaddr.NewMultiaddr(fmt.Sprintf("/tcp/%d", params.hostAddr.Port))

			addresses = append(addresses, hostAddrMA.Encapsulate(tcp))

			if params.enableWS || params.enableWSS {
				if params.enableWSS {
					// WSS is deprecated in https://github.com/multiformats/multiaddr/pull/109
					wss, _ := multiaddr.NewMultiaddr(fmt.Sprintf("/tcp/%d/wss", params.wssPort))
					addresses = append(addresses, hostAddrMA.Encapsulate(wss))
					tlsws, _ := multiaddr.NewMultiaddr(fmt.Sprintf("/tcp/%d/tls/ws", params.wssPort))
					addresses = append(addresses, hostAddrMA.Encapsulate(tlsws))
				} else {
					ws, _ := multiaddr.NewMultiaddr(fmt.Sprintf("/tcp/%d/ws", params.wsPort))
					addresses = append(addresses, hostAddrMA.Encapsulate(ws))
				}
			}

			if previousAddrFactory != nil {
				return previousAddrFactory(addresses)
			}

			return addresses
		}

		return nil
	}
}

// WithHostAddress is a WakuNodeOption that configures libp2p to listen on a specific address
func WithHostAddress(hostAddr *net.TCPAddr) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.hostAddr = hostAddr
		hostAddrMA, err := manet.FromNetAddr(hostAddr)
		if err != nil {
			return err
		}
		params.multiAddr = append(params.multiAddr, hostAddrMA)

		return nil
	}
}

// WithAdvertiseAddresses is a WakuNodeOption that allows overriding the addresses used in the waku node with custom values
func WithAdvertiseAddresses(advertiseAddrs ...multiaddr.Multiaddr) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.advertiseAddrs = advertiseAddrs
		return WithMultiaddress(advertiseAddrs...)(params)
	}
}

// WithExternalIP is a WakuNodeOption that allows overriding the advertised external IP used in the waku node with a custom value
func WithExternalIP(ip net.IP) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		oldAddrFactory := params.addressFactory
		params.addressFactory = func(inputAddr []multiaddr.Multiaddr) (addresses []multiaddr.Multiaddr) {
			addresses = append(addresses, inputAddr...)

			ipType := "/ip4/"
			if utils.IsIPv6(ip.String()) {
				ipType = "/ip6/"
			}

			hostAddrMA, err := multiaddr.NewMultiaddr(ipType + ip.String())
			if err != nil {
				panic("Could not build external IP")
			}

			addrSet := make(map[string]multiaddr.Multiaddr)
			for _, addr := range inputAddr {
				_, rest := multiaddr.SplitFirst(addr)

				addr := hostAddrMA.Encapsulate(rest)

				addrSet[addr.String()] = addr
			}

			for _, addr := range addrSet {
				addresses = append(addresses, addr)
			}

			if oldAddrFactory != nil {
				return oldAddrFactory(addresses)
			} else {
				return addresses
			}
		}
		return nil
	}
}

// WithMultiaddress is a WakuNodeOption that configures libp2p to listen on a list of multiaddresses
func WithMultiaddress(addresses ...multiaddr.Multiaddr) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.multiAddr = append(params.multiAddr, addresses...)
		return nil
	}
}

// WithPrivateKey is used to set an ECDSA private key in a libp2p node
func WithPrivateKey(privKey *ecdsa.PrivateKey) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.privKey = privKey
		return nil
	}
}

// WithClusterID is used to set the node's ClusterID
func WithClusterID(clusterID uint16) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.clusterID = clusterID
		return nil
	}
}

// WithNTP is used to use ntp for any operation that requires obtaining time.
// A list of ntp servers can be passed, but if none is specified some defaults
// will be used
func WithNTP(ntpURLs ...string) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		if len(ntpURLs) == 0 {
			ntpURLs = timesource.DefaultServers
		}

		params.enableNTP = true
		params.ntpURLs = ntpURLs
		return nil
	}
}

// GetPrivKey returns the private key used in the node
func (w *WakuNodeParameters) GetPrivKey() *crypto.PrivKey {
	privKey := crypto.PrivKey(utils.EcdsaPrivKeyToSecp256k1PrivKey(w.privKey))
	return &privKey
}

// WithLibP2POptions is a WakuNodeOption used to configure the libp2p node.
// This can potentially override any libp2p config that was set with other
// WakuNodeOption
func WithLibP2POptions(opts ...libp2p.Option) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.libP2POpts = opts
		return nil
	}
}

func WithPeerStore(ps peerstore.Peerstore) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
|
||||
params.peerstore = ps
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithWakuRelay enables the Waku V2 Relay protocol. This WakuNodeOption
|
||||
// accepts a list of WakuRelay gossipsub option to setup the protocol
|
||||
func WithWakuRelay(opts ...pubsub.Option) WakuNodeOption {
|
||||
return WithWakuRelayAndMinPeers(defaultMinRelayPeersToPublish, opts...)
|
||||
}
|
||||
|
||||
// WithWakuRelayAndMinPeers enables the Waku V2 Relay protocol. This WakuNodeOption
|
||||
// accepts a min peers require to publish and a list of WakuRelay gossipsub option to setup the protocol
|
||||
func WithWakuRelayAndMinPeers(minRelayPeersToPublish int, opts ...pubsub.Option) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableRelay = true
|
||||
params.pubsubOpts = opts
|
||||
params.minRelayPeersToPublish = minRelayPeersToPublish
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func WithMaxMsgSize(maxMsgSizeBytes int) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.maxMsgSizeBytes = maxMsgSizeBytes
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func WithMaxPeerConnections(maxPeers int) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.maxPeerConnections = maxPeers
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func WithPeerStoreCapacity(capacity int) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.peerStoreCapacity = capacity
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithDiscoveryV5 is a WakuOption used to enable DiscV5 peer discovery
|
||||
func WithDiscoveryV5(udpPort uint, bootnodes []*enode.Node, autoUpdate bool) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableDiscV5 = true
|
||||
params.udpPort = udpPort
|
||||
params.discV5bootnodes = bootnodes
|
||||
params.discV5autoUpdate = autoUpdate
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithPeerExchange is a WakuOption used to enable Peer Exchange
|
||||
func WithPeerExchange() WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enablePeerExchange = true
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithLegacyWakuFilter enables the legacy Waku Filter protocol. This WakuNodeOption
|
||||
// accepts a list of WakuFilter gossipsub options to setup the protocol
|
||||
func WithLegacyWakuFilter(fullnode bool, filterOpts ...legacy_filter.Option) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableLegacyFilter = true
|
||||
params.isLegacyFilterFullNode = fullnode
|
||||
params.legacyFilterOpts = filterOpts
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithWakuFilter enables the Waku Filter V2 protocol for lightnode functionality
|
||||
func WithWakuFilterLightNode() WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableFilterLightNode = true
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithWakuFilterFullNode enables the Waku Filter V2 protocol full node functionality.
|
||||
// This WakuNodeOption accepts a list of WakuFilter options to setup the protocol
|
||||
func WithWakuFilterFullNode(filterOpts ...filter.Option) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableFilterFullNode = true
|
||||
params.filterOpts = filterOpts
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithWakuStore enables the Waku V2 Store protocol and if the messages should
|
||||
// be stored or not in a message provider.
|
||||
func WithWakuStore() WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableStore = true
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithWakuStoreFactory is used to replace the default WakuStore with a custom
|
||||
// implementation that implements the store.Store interface
|
||||
func WithWakuStoreFactory(factory storeFactory) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.storeFactory = factory
|
||||
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithMessageProvider is a WakuNodeOption that sets the MessageProvider
|
||||
// used to store and retrieve persisted messages
|
||||
func WithMessageProvider(s store.MessageProvider) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
if s == nil {
|
||||
return errors.New("message provider can't be nil")
|
||||
}
|
||||
params.messageProvider = s
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithLightPush is a WakuNodeOption that enables the lightpush protocol
|
||||
func WithLightPush() WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableLightPush = true
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithKeepAlive is a WakuNodeOption used to set the interval of time when
|
||||
// each peer will be ping to keep the TCP connection alive
|
||||
func WithKeepAlive(t time.Duration) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.keepAliveInterval = t
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithConnectionStatusChannel is a WakuNodeOption used to set a channel where the
|
||||
// connection status changes will be pushed to. It's useful to identify when peer
|
||||
// connections and disconnections occur
|
||||
func WithConnectionStatusChannel(connStatus chan ConnStatus) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.connStatusC = connStatus
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func WithConnectionNotification(ch chan<- PeerConnection) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.connNotifCh = ch
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithWebsockets is a WakuNodeOption used to enable websockets support
|
||||
func WithWebsockets(address string, port int) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableWS = true
|
||||
params.wsPort = port
|
||||
|
||||
wsMa, err := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d/%s", address, port, "ws"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
params.multiAddr = append(params.multiAddr, wsMa)
|
||||
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithRendezvous is a WakuOption used to set the node as a rendezvous
|
||||
// point, using an specific storage for the peer information
|
||||
func WithRendezvous(db *rendezvous.DB) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableRendezvousPoint = true
|
||||
params.rendezvousDB = db
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// WithSecureWebsockets is a WakuNodeOption used to enable secure websockets support
|
||||
func WithSecureWebsockets(address string, port int, certPath string, keyPath string) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.enableWSS = true
|
||||
params.wssPort = port
|
||||
|
||||
wsMa, err := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d/%s", address, port, "wss"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
params.multiAddr = append(params.multiAddr, wsMa)
|
||||
|
||||
certificate, err := tls.LoadX509KeyPair(certPath, keyPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
params.tlsConfig = &tls.Config{
|
||||
Certificates: []tls.Certificate{certificate},
|
||||
MinVersion: tls.VersionTLS12,
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func WithCircuitRelayParams(minInterval time.Duration, bootDelay time.Duration) WakuNodeOption {
|
||||
return func(params *WakuNodeParameters) error {
|
||||
params.circuitRelayBootDelay = bootDelay
|
||||
params.circuitRelayMinInterval = minInterval
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Default options used in the libp2p node
|
||||
var DefaultLibP2POptions = []libp2p.Option{
|
||||
libp2p.ChainOptions(
|
||||
libp2p.Transport(tcp.NewTCPTransport),
|
||||
libp2p.Transport(quic.NewTransport),
|
||||
libp2p.Transport(libp2pwebtransport.New),
|
||||
),
|
||||
libp2p.UserAgent(userAgent),
|
||||
libp2p.ChainOptions(
|
||||
libp2p.Muxer("/yamux/1.0.0", yamux.DefaultTransport),
|
||||
libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport),
|
||||
),
|
||||
libp2p.EnableNATService(),
|
||||
libp2p.ConnectionManager(newConnManager(200, 300, connmgr.WithGracePeriod(0))),
|
||||
libp2p.EnableHolePunching(),
|
||||
}
|
||||
|
||||
func newConnManager(lo int, hi int, opts ...connmgr.Option) *connmgr.BasicConnMgr {
|
||||
mgr, err := connmgr.NewConnManager(lo, hi, opts...)
|
||||
if err != nil {
|
||||
panic("could not create ConnManager: " + err.Error())
|
||||
}
|
||||
return mgr
|
||||
}
|
||||
37
vendor/github.com/waku-org/go-waku/waku/v2/node/wakuoptions_rln.go
generated
vendored
Normal file
@@ -0,0 +1,37 @@
//go:build !gowaku_no_rln
// +build !gowaku_no_rln

package node

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/waku-org/go-waku/waku/v2/protocol/rln"
	r "github.com/waku-org/go-zerokit-rln/rln"
)

// WithStaticRLNRelay enables the Waku V2 RLN protocol in offchain mode
func WithStaticRLNRelay(memberIndex *r.MembershipIndex, spamHandler rln.SpamHandler) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.enableRLN = true
		params.rlnRelayDynamic = false
		params.rlnRelayMemIndex = memberIndex
		params.rlnSpamHandler = spamHandler
		return nil
	}
}

// WithDynamicRLNRelay enables the Waku V2 RLN protocol in onchain mode.
func WithDynamicRLNRelay(keystorePath string, keystorePassword string, treePath string, membershipContract common.Address, membershipIndex *uint, spamHandler rln.SpamHandler, ethClientAddress string) WakuNodeOption {
	return func(params *WakuNodeParameters) error {
		params.enableRLN = true
		params.rlnRelayDynamic = true
		params.keystorePassword = keystorePassword
		params.keystorePath = keystorePath
		params.rlnSpamHandler = spamHandler
		params.rlnETHClientAddress = ethClientAddress
		params.rlnMembershipContractAddress = membershipContract
		params.rlnRelayMemIndex = membershipIndex
		params.rlnTreePath = treePath
		return nil
	}
}
452
vendor/github.com/waku-org/go-waku/waku/v2/payload/waku_payload.go
generated
vendored
Normal file
@@ -0,0 +1,452 @@
package payload

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdsa"
	crand "crypto/rand"
	"encoding/binary"
	"errors"
	"fmt"
	"strconv"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/crypto/ecies"
	"github.com/waku-org/go-waku/waku/v2/protocol/pb"
)

// KeyKind indicates the type of encryption to apply
type KeyKind string

const (
	Symmetric  KeyKind = "Symmetric"
	Asymmetric KeyKind = "Asymmetric"
	None       KeyKind = "None"
)

const Unencrypted = 0
const V1Encryption = 1

// Payload contains the data of the message to encode
type Payload struct {
	Data    []byte   // Raw message payload
	Padding []byte   // Used to align data size, since data size alone might reveal important metainformation.
	Key     *KeyInfo // Contains the type of encryption to apply and the private key to use for signing the message
}

// DecodedPayload contains the data of the received message after decrypting it
type DecodedPayload struct {
	Data      []byte           // Decoded message payload
	Padding   []byte           // Used to align data size, since data size alone might reveal important metainformation.
	PubKey    *ecdsa.PublicKey // The public key that signed the payload
	Signature []byte
}

type KeyInfo struct {
	Kind    KeyKind           // Indicates the type of encryption to use
	SymKey  []byte            // If the encryption is Symmetric, a symmetric key must be specified
	PubKey  ecdsa.PublicKey   // If the encryption is Asymmetric, the public key of the message receptor must be specified
	PrivKey *ecdsa.PrivateKey // Set a privkey if the message requires a signature
}

// Encode encodes a payload depending on the version parameter.
// 0 for raw unencrypted data, and 1 for using WakuV1 encoding.
func (payload Payload) Encode(version uint32) ([]byte, error) {
	switch version {
	case 0:
		return payload.Data, nil
	case 1:
		data, err := payload.v1Data()
		if err != nil {
			return nil, err
		}

		if payload.Key.PrivKey != nil {
			data, err = sign(data, *payload.Key.PrivKey)
			if err != nil {
				return nil, err
			}
		}

		switch payload.Key.Kind {
		case Symmetric:
			encoded, err := encryptSymmetric(data, payload.Key.SymKey)
			if err != nil {
				return nil, fmt.Errorf("couldn't encrypt using symmetric key: %w", err)
			}

			return encoded, nil
		case Asymmetric:
			encoded, err := encryptAsymmetric(data, &payload.Key.PubKey)
			if err != nil {
				return nil, fmt.Errorf("couldn't encrypt using asymmetric key: %w", err)
			}
			return encoded, nil
		case None:
			return nil, errors.New("unsupported KeyKind")
		}
	}
	return nil, errors.New("unsupported wakumessage version")
}

func EncodeWakuMessage(message *pb.WakuMessage, keyInfo *KeyInfo) error {
	msgPayload := message.Payload
	payload := Payload{
		Data: msgPayload,
		Key:  keyInfo,
	}

	encodedBytes, err := payload.Encode(message.GetVersion())
	if err != nil {
		return err
	}

	message.Payload = encodedBytes
	return nil
}

// DecodePayload decodes a WakuMessage depending on the version parameter.
// 0 for raw unencrypted data, and 1 for using WakuV1 decoding
func DecodePayload(message *pb.WakuMessage, keyInfo *KeyInfo) (*DecodedPayload, error) {
	switch message.GetVersion() {
	case uint32(0):
		return &DecodedPayload{Data: message.Payload}, nil
	case uint32(1):
		switch keyInfo.Kind {
		case Symmetric:
			if keyInfo.SymKey == nil {
				return nil, errors.New("symmetric key is required")
			}

			decodedData, err := decryptSymmetric(message.Payload, keyInfo.SymKey)
			if err != nil {
				return nil, fmt.Errorf("couldn't decrypt using symmetric key: %w", err)
			}

			decodedPayload, err := validateAndParse(decodedData)
			if err != nil {
				return nil, err
			}

			return decodedPayload, nil
		case Asymmetric:
			if keyInfo.PrivKey == nil {
				return nil, errors.New("private key is required")
			}

			decodedData, err := decryptAsymmetric(message.Payload, keyInfo.PrivKey)
			if err != nil {
				return nil, fmt.Errorf("couldn't decrypt using asymmetric key: %w", err)
			}

			decodedPayload, err := validateAndParse(decodedData)
			if err != nil {
				return nil, err
			}

			return decodedPayload, nil
		case None:
			return nil, errors.New("unsupported KeyKind")
		}
	}
	return nil, errors.New("unsupported wakumessage version")
}

func DecodeWakuMessage(message *pb.WakuMessage, keyInfo *KeyInfo) error {
	decodedPayload, err := DecodePayload(message, keyInfo)
	if err != nil {
		return err
	}

	message.Payload = decodedPayload.Data
	return nil
}

const aesNonceLength = 12
const aesKeyLength = 32
const signatureFlag = byte(4)
const flagsLength = 1
const padSizeLimit = 256 // just an arbitrary number, could be changed without breaking the protocol
const signatureLength = 65
const sizeMask = byte(3)

// Decrypts a message with a topic key, using AES-GCM-256.
// nonce size should be 12 bytes (see cipher.gcmStandardNonceSize).
func decryptSymmetric(payload []byte, key []byte) ([]byte, error) {
	// symmetric messages are expected to contain the 12-byte nonce at the end of the payload
	if len(payload) < aesNonceLength {
		return nil, errors.New("missing salt or invalid payload in symmetric message")
	}

	salt := payload[len(payload)-aesNonceLength:]

	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aesgcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	decrypted, err := aesgcm.Open(nil, salt, payload[:len(payload)-aesNonceLength], nil)
	if err != nil {
		return nil, err
	}

	return decrypted, nil
}

// Decrypts an encrypted payload with a private key.
func decryptAsymmetric(payload []byte, key *ecdsa.PrivateKey) ([]byte, error) {
	decrypted, err := ecies.ImportECDSA(key).Decrypt(payload, nil, nil)
	if err != nil {
		return nil, err
	}
	return decrypted, err
}

// validatePublicKey checks the format of the given public key.
func validatePublicKey(k *ecdsa.PublicKey) bool {
	return k != nil && k.X != nil && k.Y != nil && k.X.Sign() != 0 && k.Y.Sign() != 0
}

// Encrypts and returns with a public key.
func encryptAsymmetric(rawPayload []byte, key *ecdsa.PublicKey) ([]byte, error) {
	if !validatePublicKey(key) {
		return nil, errors.New("invalid public key provided for asymmetric encryption")
	}

	encrypted, err := ecies.Encrypt(crand.Reader, ecies.ImportECDSAPublic(key), rawPayload, nil, nil)
	if err == nil {
		return encrypted, nil
	}
	return nil, err
}

// Encrypts a payload with a topic key, using AES-GCM-256.
// nonce size should be 12 bytes (see cipher.gcmStandardNonceSize).
func encryptSymmetric(rawPayload []byte, key []byte) ([]byte, error) {
	if !validateDataIntegrity(key, aesKeyLength) {
		return nil, errors.New("invalid key provided for symmetric encryption, size: " + strconv.Itoa(len(key)))
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aesgcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	salt, err := generateSecureRandomData(aesNonceLength) // never use more than 2^32 random nonces with a given key
	if err != nil {
		return nil, err
	}
	encrypted := aesgcm.Seal(nil, salt, rawPayload, nil)
	return append(encrypted, salt...), nil
}

// validateDataIntegrity returns false if the data has the wrong size or contains all zeros,
// which is the simplest and the most common bug.
func validateDataIntegrity(k []byte, expectedSize int) bool {
	if len(k) != expectedSize {
		return false
	}
	if expectedSize > 3 && containsOnlyZeros(k) {
		return false
	}
	return true
}

// containsOnlyZeros checks if the data contains only zeros.
func containsOnlyZeros(data []byte) bool {
	for _, b := range data {
		if b != 0 {
			return false
		}
	}
	return true
}

// generateSecureRandomData generates random data where extra security is required.
// The purpose of this function is to prevent some bugs in software or in hardware
// from delivering not-very-random data. This is especially useful for AES nonces,
// where true randomness does not really matter, but it is very important to have
// a unique nonce for every message.
func generateSecureRandomData(length int) ([]byte, error) {
	x := make([]byte, length)
	y := make([]byte, length)
	res := make([]byte, length)

	_, err := crand.Read(x)
	if err != nil {
		return nil, err
	} else if !validateDataIntegrity(x, length) {
		return nil, errors.New("crypto/rand failed to generate secure random data")
	}
	_, err = crand.Read(y)
	if err != nil {
		return nil, err
	} else if !validateDataIntegrity(y, length) {
		return nil, errors.New("crypto/rand failed to generate secure random data")
	}
	for i := 0; i < length; i++ {
		res[i] = x[i] ^ y[i]
	}
	if !validateDataIntegrity(res, length) {
		return nil, errors.New("failed to generate secure random data")
	}
	return res, nil
}

func isMessageSigned(flags byte) bool {
	return (flags & signatureFlag) != 0
}

// sign calculates the cryptographic signature for the message,
// also setting the sign flag.
func sign(data []byte, privKey ecdsa.PrivateKey) ([]byte, error) {
	result := make([]byte, len(data))
	copy(result, data)

	if isMessageSigned(result[0]) {
		// this should not happen, but no reason to panic
		return result, nil
	}

	result[0] |= signatureFlag // it is important to set this flag before signing
	hash := crypto.Keccak256(result)
	signature, err := crypto.Sign(hash, &privKey)

	if err != nil {
		result[0] &= (0xFF ^ signatureFlag) // clear the flag
		return nil, err
	}
	result = append(result, signature...)

	return result, nil
}

func (payload Payload) v1Data() ([]byte, error) {
	const payloadSizeFieldMaxSize = 4
	result := make([]byte, 1, flagsLength+payloadSizeFieldMaxSize+len(payload.Data)+len(payload.Padding)+signatureLength+padSizeLimit)
	result[0] = 0 // set all the flags to zero
	result = payload.addPayloadSizeField(result)
	result = append(result, payload.Data...)
	result, err := payload.appendPadding(result)
	return result, err
}

// addPayloadSizeField appends the auxiliary field containing the size of payload
func (payload Payload) addPayloadSizeField(input []byte) []byte {
	fieldSize := getSizeOfPayloadSizeField(payload.Data)
	field := make([]byte, 4)
	binary.LittleEndian.PutUint32(field, uint32(len(payload.Data)))
	field = field[:fieldSize]
	result := append(input, field...)
	result[0] |= byte(fieldSize)
	return result
}

// getSizeOfPayloadSizeField returns the number of bytes necessary to encode the size of payload
func getSizeOfPayloadSizeField(payload []byte) int {
	s := 1
	for i := len(payload); i >= 256; i /= 256 {
		s++
	}
	return s
}

// appendPadding appends the padding specified in params.
// If no padding is provided in params, then random padding is generated.
func (payload Payload) appendPadding(input []byte) ([]byte, error) {
	if len(payload.Padding) != 0 {
		// padding data was provided by the Dapp, just use it as is
		result := append(input, payload.Padding...)
		return result, nil
	}

	rawSize := flagsLength + getSizeOfPayloadSizeField(payload.Data) + len(payload.Data)
	if payload.Key.PrivKey != nil {
		rawSize += signatureLength
	}
	odd := rawSize % padSizeLimit
	paddingSize := padSizeLimit - odd
	pad := make([]byte, paddingSize)
	_, err := crand.Read(pad)
	if err != nil {
		return nil, err
	}
	if !validateDataIntegrity(pad, paddingSize) {
		return nil, errors.New("failed to generate random padding of size " + strconv.Itoa(paddingSize))
	}
	result := append(input, pad...)
	return result, nil
}

func validateAndParse(input []byte) (*DecodedPayload, error) {
	end := len(input)
	if end < 1 {
		return nil, errors.New("invalid message length")
	}

	msg := new(DecodedPayload)

	if isMessageSigned(input[0]) {
		end -= signatureLength
		if end <= 1 {
			return nil, errors.New("invalid message length")
		}
		msg.Signature = input[end : end+signatureLength]

		var err error
		msg.PubKey, err = msg.sigToPubKey(input)
		if err != nil {
			return nil, err
		}
	}

	beg := 1
	payloadSize := 0
	sizeOfPayloadSizeField := int(input[0] & sizeMask) // number of bytes indicating the size of payload

	if sizeOfPayloadSizeField != 0 {
		if end < beg+sizeOfPayloadSizeField {
			return nil, errors.New("invalid message length")
		}
		payloadSize = int(bytesToUintLittleEndian(input[beg : beg+sizeOfPayloadSizeField]))
		beg += sizeOfPayloadSizeField
		if beg+payloadSize > end {
			return nil, errors.New("invalid message length")
		}
		msg.Data = input[beg : beg+payloadSize]
	}

	beg += payloadSize
	msg.Padding = input[beg:end]

	return msg, nil
}

// sigToPubKey returns the public key associated with the message's
// signature.
func (p *DecodedPayload) sigToPubKey(input []byte) (*ecdsa.PublicKey, error) {
	defer func() { _ = recover() }() // in case of invalid signature
	hash := crypto.Keccak256(input[0 : len(input)-signatureLength])
	pub, err := crypto.SigToPub(hash, p.Signature)
	if err != nil {
		return nil, err
	}

	return pub, nil
}

// bytesToUintLittleEndian converts the slice to a 64-bit unsigned integer.
func bytesToUintLittleEndian(b []byte) (res uint64) {
	mul := uint64(1)
	for i := 0; i < len(b); i++ {
		res += uint64(b[i]) * mul
		mul *= 256
	}
	return res
}
112
vendor/github.com/waku-org/go-waku/waku/v2/peermanager/connection_gater.go
generated
vendored
Normal file
@@ -0,0 +1,112 @@
package peermanager
|
||||
|
||||
import (
|
||||
"runtime"
|
||||
"sync"
|
||||
|
||||
"github.com/libp2p/go-libp2p/core/control"
|
||||
"github.com/libp2p/go-libp2p/core/network"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
"github.com/multiformats/go-multiaddr"
|
||||
manet "github.com/multiformats/go-multiaddr/net"
|
||||
"go.uber.org/zap"
|
||||
)
|
||||
|
||||
// ConnectionGater is the implementation of the connection gater used to limit
|
||||
// the number of connections per IP address
|
||||
type ConnectionGater struct {
|
||||
sync.Mutex
|
||||
logger *zap.Logger
|
||||
limiter map[string]int
|
||||
}
|
||||
|
||||
const maxConnsPerIP = 10
|
||||
|
||||
// NewConnectionGater creates a new instance of ConnectionGater
|
||||
func NewConnectionGater(logger *zap.Logger) *ConnectionGater {
|
||||
c := &ConnectionGater{
|
||||
logger: logger.Named("connection-gater"),
|
||||
limiter: make(map[string]int),
|
||||
}
|
||||
|
||||
return c
|
||||
}
|
||||
|
||||
// InterceptPeerDial is called on an imminent outbound peer dial request, prior
|
||||
// to the addresses of that peer being available/resolved. Blocking connections
|
||||
// at this stage is typical for blacklisting scenarios.
|
||||
func (c *ConnectionGater) InterceptPeerDial(_ peer.ID) (allow bool) {
|
||||
	return true
}

// InterceptAddrDial is called on an imminent outbound dial to a peer on a
// particular address. Blocking connections at this stage is typical for
// address filtering.
func (c *ConnectionGater) InterceptAddrDial(pid peer.ID, m multiaddr.Multiaddr) (allow bool) {
	return true
}

// InterceptAccept is called as soon as a transport listener receives an
// inbound connection request, before any upgrade takes place. Transports that
// accept already secure and/or multiplexed connections (e.g. possibly QUIC)
// MUST call this method regardless, for correctness/consistency.
func (c *ConnectionGater) InterceptAccept(n network.ConnMultiaddrs) (allow bool) {
	if !c.validateInboundConn(n.RemoteMultiaddr()) {
		runtime.Gosched() // allow other goroutines to run before rejecting
		c.logger.Info("exceeds allowed inbound connections from this ip", zap.String("multiaddr", n.RemoteMultiaddr().String()))
		return false
	}

	return true
}

// InterceptSecured is called for both inbound and outbound connections,
// after a security handshake has taken place and we've authenticated the peer.
func (c *ConnectionGater) InterceptSecured(_ network.Direction, _ peer.ID, _ network.ConnMultiaddrs) (allow bool) {
	return true
}

// InterceptUpgraded is called for inbound and outbound connections, after
// libp2p has finished upgrading the connection entirely to a secure,
// multiplexed channel.
func (c *ConnectionGater) InterceptUpgraded(_ network.Conn) (allow bool, reason control.DisconnectReason) {
	return true, 0
}

// NotifyDisconnect is called when a connection disconnects.
func (c *ConnectionGater) NotifyDisconnect(addr multiaddr.Multiaddr) {
	ip, err := manet.ToIP(addr)
	if err != nil {
		return
	}

	c.Lock()
	defer c.Unlock()

	currConnections, ok := c.limiter[ip.String()]
	if ok {
		currConnections--
		if currConnections <= 0 {
			delete(c.limiter, ip.String())
		} else {
			c.limiter[ip.String()] = currConnections
		}
	}
}

func (c *ConnectionGater) validateInboundConn(addr multiaddr.Multiaddr) bool {
	ip, err := manet.ToIP(addr)
	if err != nil {
		return false
	}

	c.Lock()
	defer c.Unlock()

	if currConnections := c.limiter[ip.String()]; currConnections+1 > maxConnsPerIP {
		return false
	}

	c.limiter[ip.String()]++
	return true
}
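The gater above enforces a per-IP cap on inbound connections with a simple reference-counting map: `validateInboundConn` increments on accept, `NotifyDisconnect` decrements and drops the entry at zero. A minimal standalone sketch of the same counting logic (the `maxConnsPerIP` value and the `ipLimiter` type here are illustrative, not go-waku's API; real code would guard the map with a mutex as the gater does):

```go
package main

import "fmt"

const maxConnsPerIP = 3 // hypothetical cap; go-waku defines its own value

// ipLimiter counts live inbound connections per remote IP.
type ipLimiter map[string]int

// tryAccept admits a connection only while the IP is under the cap.
func (l ipLimiter) tryAccept(ip string) bool {
	if l[ip]+1 > maxConnsPerIP {
		return false
	}
	l[ip]++
	return true
}

// release decrements the count on disconnect, dropping the entry at zero.
func (l ipLimiter) release(ip string) {
	if n, ok := l[ip]; ok {
		if n--; n <= 0 {
			delete(l, ip)
		} else {
			l[ip] = n
		}
	}
}

func main() {
	l := ipLimiter{}
	for i := 0; i < 4; i++ {
		fmt.Println(l.tryAccept("1.2.3.4")) // true, true, true, false
	}
	l.release("1.2.3.4")
	fmt.Println(l.tryAccept("1.2.3.4")) // true: a slot was freed
}
```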
257
vendor/github.com/waku-org/go-waku/waku/v2/peermanager/peer_connector.go
generated
vendored
Normal file
@@ -0,0 +1,257 @@
package peermanager

// Adapted from github.com/libp2p/go-libp2p@v0.23.2/p2p/discovery/backoff/backoffconnector.go

import (
	"context"
	"errors"
	"math/rand"
	"sync"
	"sync/atomic"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"

	"github.com/libp2p/go-libp2p/p2p/discovery/backoff"
	"github.com/waku-org/go-waku/logging"
	wps "github.com/waku-org/go-waku/waku/v2/peerstore"
	waku_proto "github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/service"

	"go.uber.org/zap"

	lru "github.com/hashicorp/golang-lru"
)

// PeerConnectionStrategy is a utility to connect to peers,
// but only if we have not recently tried connecting to them already.
type PeerConnectionStrategy struct {
	mux   sync.Mutex
	cache *lru.TwoQueueCache
	host  host.Host
	pm    *PeerManager

	paused      atomic.Bool
	dialTimeout time.Duration
	*service.CommonDiscoveryService
	subscriptions []subscription

	backoff backoff.BackoffFactory
	logger  *zap.Logger
}

type subscription struct {
	ctx context.Context
	ch  <-chan service.PeerData
}

// getBackOff returns the strategy used to decide how long to back off after
// previously attempting to connect to a peer.
func getBackOff() backoff.BackoffFactory {
	rngSrc := rand.NewSource(rand.Int63())
	minBackoff, maxBackoff := time.Minute, time.Hour
	bkf := backoff.NewExponentialBackoff(minBackoff, maxBackoff, backoff.FullJitter, time.Second, 5.0, 0, rand.New(rngSrc))
	return bkf
}
// NewPeerConnectionStrategy creates a utility to connect to peers,
// but only if we have not recently tried connecting to them already.
//
// dialTimeout is how long we attempt to connect to a peer before giving up.
func NewPeerConnectionStrategy(pm *PeerManager,
	dialTimeout time.Duration, logger *zap.Logger) (*PeerConnectionStrategy, error) {
	// cacheSize is the size of the TwoQueueCache
	cacheSize := 600
	cache, err := lru.New2Q(cacheSize)
	if err != nil {
		return nil, err
	}

	pc := &PeerConnectionStrategy{
		cache:                  cache,
		dialTimeout:            dialTimeout,
		CommonDiscoveryService: service.NewCommonDiscoveryService(),
		pm:                     pm,
		backoff:                getBackOff(),
		logger:                 logger.Named("discovery-connector"),
	}
	pm.SetPeerConnector(pc)
	return pc, nil
}

type connCacheData struct {
	nextTry time.Time
	strat   backoff.BackoffStrategy
}

// Subscribe receives a channel on which discovered peers are pushed.
func (c *PeerConnectionStrategy) Subscribe(ctx context.Context, ch <-chan service.PeerData) {
	// if not running yet, store the subscription and return
	if err := c.ErrOnNotRunning(); err != nil {
		c.mux.Lock()
		c.subscriptions = append(c.subscriptions, subscription{ctx, ch})
		c.mux.Unlock()
		return
	}
	// if running, start a goroutine to consume the subscription
	c.WaitGroup().Add(1)
	go func() {
		defer c.WaitGroup().Done()
		c.consumeSubscription(subscription{ctx, ch})
	}()
}

func (c *PeerConnectionStrategy) consumeSubscription(s subscription) {
	for {
		// return from the loop when either context is cancelled.
		select {
		case <-c.Context().Done():
			return
		case <-s.ctx.Done():
			return
		default:
		}
		if !c.isPaused() {
			select {
			case <-c.Context().Done():
				return
			case <-s.ctx.Done():
				return
			case p, ok := <-s.ch:
				if !ok {
					return
				}
				triggerImmediateConnection := false
				// Do not connect to a peer as soon as it is discovered;
				// rather, expect PeerManager to trigger connections based on need.
				if len(c.host.Network().Peers()) < waku_proto.GossipSubOptimalFullMeshSize {
					triggerImmediateConnection = true
				}
				c.logger.Debug("adding discovered peer", logging.HostID("peer", p.AddrInfo.ID))
				c.pm.AddDiscoveredPeer(p, triggerImmediateConnection)

			case <-time.After(1 * time.Second):
				// wake up periodically so the paused flag is re-checked
			}
		} else {
			time.Sleep(1 * time.Second) // sleep while the peerConnector is paused.
		}
	}
}

// SetHost sets the host to be able to mount or consume a protocol.
func (c *PeerConnectionStrategy) SetHost(h host.Host) {
	c.host = h
}

// Start attempts to connect to the peers passed in by peerCh.
// Will not connect to peers if they are within the backoff period.
func (c *PeerConnectionStrategy) Start(ctx context.Context) error {
	return c.CommonDiscoveryService.Start(ctx, c.start)
}

func (c *PeerConnectionStrategy) start() error {
	c.WaitGroup().Add(1)

	go c.dialPeers()

	c.consumeSubscriptions()

	return nil
}

// Stop terminates the peer-connector.
func (c *PeerConnectionStrategy) Stop() {
	c.CommonDiscoveryService.Stop(func() {})
}

func (c *PeerConnectionStrategy) isPaused() bool {
	return c.paused.Load()
}

// Subscribe may be called before the peerConnector has started; such
// subscriptions are stored in the subscriptions array and consumed here
// once the service is running.
func (c *PeerConnectionStrategy) consumeSubscriptions() {
	for _, subs := range c.subscriptions {
		c.WaitGroup().Add(1)
		go func(s subscription) {
			defer c.WaitGroup().Done()
			c.consumeSubscription(s)
		}(subs)
	}
	c.subscriptions = nil
}

const maxActiveDials = 5

// c.cache is thread-safe; the mutex is only needed in case canDialPeer
// is queried concurrently for the same peer.
func (c *PeerConnectionStrategy) canDialPeer(pi peer.AddrInfo) bool {
	c.mux.Lock()
	defer c.mux.Unlock()
	val, ok := c.cache.Get(pi.ID)
	var cachedPeer *connCacheData
	if ok {
		tv := val.(*connCacheData)
		now := time.Now()
		if now.Before(tv.nextTry) {
			return false
		}

		tv.nextTry = now.Add(tv.strat.Delay())
	} else {
		cachedPeer = &connCacheData{strat: c.backoff()}
		cachedPeer.nextTry = time.Now().Add(cachedPeer.strat.Delay())
		c.cache.Add(pi.ID, cachedPeer)
	}
	return true
}

func (c *PeerConnectionStrategy) dialPeers() {
	defer c.WaitGroup().Done()

	maxGoRoutines := c.pm.OutRelayPeersTarget
	if maxGoRoutines > maxActiveDials {
		maxGoRoutines = maxActiveDials
	}

	sem := make(chan struct{}, maxGoRoutines)

	for {
		select {
		case pd, ok := <-c.GetListeningChan():
			if !ok {
				return
			}
			addrInfo := pd.AddrInfo

			if addrInfo.ID == c.host.ID() || addrInfo.ID == "" ||
				c.host.Network().Connectedness(addrInfo.ID) == network.Connected {
				continue
			}

			if c.canDialPeer(addrInfo) {
				sem <- struct{}{}
				c.WaitGroup().Add(1)
				go c.dialPeer(addrInfo, sem)
			}
		case <-c.Context().Done():
			return
		}
	}
}

func (c *PeerConnectionStrategy) dialPeer(pi peer.AddrInfo, sem chan struct{}) {
	defer c.WaitGroup().Done()
	ctx, cancel := context.WithTimeout(c.Context(), c.dialTimeout)
	defer cancel()
	err := c.host.Connect(ctx, pi)
	if err != nil && !errors.Is(err, context.Canceled) {
		c.host.Peerstore().(wps.WakuPeerstore).AddConnFailure(pi)
		c.logger.Warn("connecting to peer", logging.HostID("peerID", pi.ID), zap.Error(err))
	}
	<-sem
}
121
vendor/github.com/waku-org/go-waku/waku/v2/peermanager/peer_discovery.go
generated
vendored
Normal file
@@ -0,0 +1,121 @@
package peermanager

import (
	"context"
	"errors"

	"github.com/libp2p/go-libp2p/core/protocol"
	"github.com/waku-org/go-waku/waku/v2/discv5"
	wps "github.com/waku-org/go-waku/waku/v2/peerstore"
	waku_proto "github.com/waku-org/go-waku/waku/v2/protocol"
	wenr "github.com/waku-org/go-waku/waku/v2/protocol/enr"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"github.com/waku-org/go-waku/waku/v2/service"
	"go.uber.org/zap"
)

// DiscoverAndConnectToPeers discovers peers using discv5 and connects to them.
// It discovers peers until maxCount peers are found for the cluster, shard and
// protocol, or until the context passed expires.
func (pm *PeerManager) DiscoverAndConnectToPeers(ctx context.Context, cluster uint16,
	shard uint16, serviceProtocol protocol.ID, maxCount int) error {
	if pm.discoveryService == nil {
		return nil
	}
	peers, err := pm.discoverOnDemand(cluster, shard, serviceProtocol, ctx, maxCount)
	if err != nil {
		return err
	}

	pm.logger.Debug("discovered peers on demand", zap.Int("noOfPeers", len(peers)))
	connectNow := false
	// Add discovered peers to the peerStore and connect to them
	for idx, p := range peers {
		if serviceProtocol != relay.WakuRelayID_v200 && idx <= maxCount {
			// How many connections to initiate? Maybe this could be a config exposed via the client API.
			// For now, just go ahead and initiate connections in the case of non-relay service peers.
			// In the case of relay, let it go through the connectivityLoop.
			connectNow = true
		}
		pm.AddDiscoveredPeer(p, connectNow)
	}
	return nil
}

// RegisterWakuProtocol is to be used by Waku protocols that could be used for peer discovery,
// which means the protocol should be as defined in the waku2 ENR key in https://rfc.vac.dev/spec/31/.
func (pm *PeerManager) RegisterWakuProtocol(proto protocol.ID, bitField uint8) {
	pm.wakuprotoToENRFieldMap[proto] = WakuProtoInfo{waku2ENRBitField: bitField}
}

// discoverOnDemand initiates an on-demand peer discovery and
// filters peers based on the cluster, shard and any Waku service protocol specified.
func (pm *PeerManager) discoverOnDemand(cluster uint16,
	shard uint16, wakuProtocol protocol.ID, ctx context.Context, maxCount int) ([]service.PeerData, error) {
	var peers []service.PeerData

	wakuProtoInfo, ok := pm.wakuprotoToENRFieldMap[wakuProtocol]
	if !ok {
		pm.logger.Error("cannot do on demand discovery for non-waku protocol", zap.String("protocol", string(wakuProtocol)))
		return nil, errors.New("cannot do on demand discovery for non-waku protocol")
	}
	iterator, err := pm.discoveryService.PeerIterator(
		discv5.FilterShard(cluster, shard),
		discv5.FilterCapabilities(wakuProtoInfo.waku2ENRBitField),
	)
	if err != nil {
		pm.logger.Error("failed to find peers for shard and services", zap.Uint16("cluster", cluster),
			zap.Uint16("shard", shard), zap.String("service", string(wakuProtocol)), zap.Error(err))
		return peers, err
	}

	// Iterate and fill peers.
	defer iterator.Close()

	for iterator.Next() {
		pInfo, err := wenr.EnodeToPeerInfo(iterator.Node())
		if err != nil {
			continue
		}
		pData := service.PeerData{
			Origin:   wps.Discv5,
			ENR:      iterator.Node(),
			AddrInfo: *pInfo,
		}
		peers = append(peers, pData)

		if len(peers) >= maxCount {
			pm.logger.Debug("found required number of nodes, stopping on demand discovery", zap.Uint16("cluster", cluster),
				zap.Uint16("shard", shard), zap.Int("required-nodes", maxCount))
			break
		}

		select {
		case <-ctx.Done():
			pm.logger.Error("failed to find peers for shard and services", zap.Uint16("cluster", cluster),
				zap.Uint16("shard", shard), zap.String("service", string(wakuProtocol)), zap.Error(ctx.Err()))
			return nil, ctx.Err()
		default:
		}
	}
	return peers, nil
}

func (pm *PeerManager) discoverPeersByPubsubTopics(pubsubTopics []string, proto protocol.ID, ctx context.Context, maxCount int) {
	shardsInfo, err := waku_proto.TopicsToRelayShards(pubsubTopics...)
	if err != nil {
		pm.logger.Error("failed to convert pubsub topic to shard", zap.Strings("topics", pubsubTopics), zap.Error(err))
		return
	}
	if len(shardsInfo) > 0 {
		for _, shardInfo := range shardsInfo {
			err = pm.DiscoverAndConnectToPeers(ctx, shardInfo.ClusterID, shardInfo.ShardIDs[0], proto, maxCount)
			if err != nil {
				pm.logger.Error("failed to discover and connect to peers", zap.Error(err))
			}
		}
	} else {
		pm.logger.Debug("failed to convert pubsub topics to shards as one of the topics is a named pubsubTopic", zap.Strings("topics", pubsubTopics))
	}
}
507
vendor/github.com/waku-org/go-waku/waku/v2/peermanager/peer_manager.go
generated
vendored
Normal file
@@ -0,0 +1,507 @@
package peermanager

import (
	"context"
	"errors"
	"sync"
	"time"

	"github.com/ethereum/go-ethereum/p2p/enr"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/event"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/peerstore"
	"github.com/libp2p/go-libp2p/core/protocol"
	ma "github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/logging"
	"github.com/waku-org/go-waku/waku/v2/discv5"
	wps "github.com/waku-org/go-waku/waku/v2/peerstore"
	waku_proto "github.com/waku-org/go-waku/waku/v2/protocol"
	wenr "github.com/waku-org/go-waku/waku/v2/protocol/enr"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"github.com/waku-org/go-waku/waku/v2/service"

	"go.uber.org/zap"
)

// NodeTopicDetails stores pubSubTopic related data like topicHandle for the node.
type NodeTopicDetails struct {
	topic *pubsub.Topic
}

// WakuProtoInfo holds protocol specific info.
// To be used at a later stage to set various config such as criteria for peer management specific to each Waku protocol.
// This should make the peer-manager agnostic to protocol.
type WakuProtoInfo struct {
	waku2ENRBitField uint8
}

// PeerManager applies various controls and manages connections towards peers.
type PeerManager struct {
	peerConnector          *PeerConnectionStrategy
	maxPeers               int
	maxRelayPeers          int
	logger                 *zap.Logger
	InRelayPeersTarget     int
	OutRelayPeersTarget    int
	host                   host.Host
	serviceSlots           *ServiceSlots
	ctx                    context.Context
	sub                    event.Subscription
	topicMutex             sync.RWMutex
	subRelayTopics         map[string]*NodeTopicDetails
	discoveryService       *discv5.DiscoveryV5
	wakuprotoToENRFieldMap map[protocol.ID]WakuProtoInfo
}

// PeerSelection provides various options based on which a peer is selected from a list of peers.
type PeerSelection int

const (
	Automatic PeerSelection = iota
	LowestRTT
)

// ErrNoPeersAvailable is emitted when no suitable peers are found for
// some protocol
var ErrNoPeersAvailable = errors.New("no suitable peers found")

const peerConnectivityLoopSecs = 15
const maxConnsToPeerRatio = 5

// 80% relay peers, 20% service peers
func relayAndServicePeers(maxConnections int) (int, int) {
	return maxConnections - maxConnections/5, maxConnections / 5
}

// 66% inRelayPeers, 33% outRelayPeers
func inAndOutRelayPeers(relayPeers int) (int, int) {
	outRelayPeers := relayPeers / 3
	const minOutRelayConns = 10
	if outRelayPeers < minOutRelayConns {
		outRelayPeers = minOutRelayConns
	}
	return relayPeers - outRelayPeers, outRelayPeers
}
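The two helpers above derive the connection budget: roughly 80/20 between relay and service peers, then roughly 2/1 between inbound and outbound relay peers with a floor of 10 outbound slots. A standalone sketch of the same arithmetic, copied out so it can be run in isolation:

```go
package main

import "fmt"

// relayAndServicePeers splits the connection budget ~80% relay / ~20% service.
func relayAndServicePeers(maxConnections int) (int, int) {
	return maxConnections - maxConnections/5, maxConnections / 5
}

// inAndOutRelayPeers splits relay connections ~2/3 inbound, ~1/3 outbound,
// keeping at least 10 outbound slots.
func inAndOutRelayPeers(relayPeers int) (int, int) {
	outRelayPeers := relayPeers / 3
	const minOutRelayConns = 10
	if outRelayPeers < minOutRelayConns {
		outRelayPeers = minOutRelayConns
	}
	return relayPeers - outRelayPeers, outRelayPeers
}

func main() {
	relay, service := relayAndServicePeers(50) // 50 conns → 40 relay, 10 service
	in, out := inAndOutRelayPeers(relay)       // 40 relay → 27 in, 13 out
	fmt.Println(relay, service, in, out)       // prints: 40 10 27 13
}
```

Note the outbound floor dominates for small budgets: with only 12 relay slots, `inAndOutRelayPeers` still reserves 10 for outbound, leaving 2 inbound.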
// NewPeerManager creates a new peerManager instance.
func NewPeerManager(maxConnections int, maxPeers int, logger *zap.Logger) *PeerManager {

	maxRelayPeers, _ := relayAndServicePeers(maxConnections)
	inRelayPeersTarget, outRelayPeersTarget := inAndOutRelayPeers(maxRelayPeers)

	if maxPeers == 0 || maxConnections > maxPeers {
		maxPeers = maxConnsToPeerRatio * maxConnections
	}

	pm := &PeerManager{
		logger:                 logger.Named("peer-manager"),
		maxRelayPeers:          maxRelayPeers,
		InRelayPeersTarget:     inRelayPeersTarget,
		OutRelayPeersTarget:    outRelayPeersTarget,
		serviceSlots:           NewServiceSlot(),
		subRelayTopics:         make(map[string]*NodeTopicDetails),
		maxPeers:               maxPeers,
		wakuprotoToENRFieldMap: map[protocol.ID]WakuProtoInfo{},
	}
	logger.Info("PeerManager init values", zap.Int("maxConnections", maxConnections),
		zap.Int("maxRelayPeers", maxRelayPeers),
		zap.Int("outRelayPeersTarget", outRelayPeersTarget),
		zap.Int("inRelayPeersTarget", pm.InRelayPeersTarget),
		zap.Int("maxPeers", maxPeers))

	return pm
}

// SetDiscv5 sets the discv5 service to be used for peer discovery.
func (pm *PeerManager) SetDiscv5(discv5 *discv5.DiscoveryV5) {
	pm.discoveryService = discv5
}

// SetHost sets the host to be used in order to access the peerStore.
func (pm *PeerManager) SetHost(host host.Host) {
	pm.host = host
}

// SetPeerConnector sets the peer connector to be used for establishing relay connections.
func (pm *PeerManager) SetPeerConnector(pc *PeerConnectionStrategy) {
	pm.peerConnector = pc
}

// Start starts the processing to be done by peer manager.
func (pm *PeerManager) Start(ctx context.Context) {

	pm.RegisterWakuProtocol(relay.WakuRelayID_v200, relay.WakuRelayENRField)

	pm.ctx = ctx
	if pm.sub != nil {
		go pm.peerEventLoop(ctx)
	}
	go pm.connectivityLoop(ctx)
}

// This is a connectivity loop, which currently checks and prunes inbound connections.
func (pm *PeerManager) connectivityLoop(ctx context.Context) {
	pm.connectToRelayPeers()
	t := time.NewTicker(peerConnectivityLoopSecs * time.Second)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			pm.connectToRelayPeers()
		}
	}
}

// GroupPeersByDirection returns all the connected peers in the peer store grouped by inbound or outbound direction.
func (pm *PeerManager) GroupPeersByDirection(specificPeers ...peer.ID) (inPeers peer.IDSlice, outPeers peer.IDSlice, err error) {
	if len(specificPeers) == 0 {
		specificPeers = pm.host.Network().Peers()
	}

	for _, p := range specificPeers {
		direction, err := pm.host.Peerstore().(wps.WakuPeerstore).Direction(p)
		if err == nil {
			if direction == network.DirInbound {
				inPeers = append(inPeers, p)
			} else if direction == network.DirOutbound {
				outPeers = append(outPeers, p)
			}
		} else {
			pm.logger.Error("failed to retrieve peer direction",
				logging.HostID("peerID", p), zap.Error(err))
		}
	}
	return inPeers, outPeers, nil
}

// getRelayPeers returns the list of in and out peers supporting the WakuRelay protocol within specificPeers.
// If specificPeers is empty, it checks within all peers in the peerStore.
func (pm *PeerManager) getRelayPeers(specificPeers ...peer.ID) (inRelayPeers peer.IDSlice, outRelayPeers peer.IDSlice) {
	// Group peers by their connected direction, inbound or outbound.
	inPeers, outPeers, err := pm.GroupPeersByDirection(specificPeers...)
	if err != nil {
		return
	}
	pm.logger.Debug("number of peers connected", zap.Int("inPeers", inPeers.Len()),
		zap.Int("outPeers", outPeers.Len()))

	// Need to filter peers to check if they support relay
	if inPeers.Len() != 0 {
		inRelayPeers, _ = pm.FilterPeersByProto(inPeers, relay.WakuRelayID_v200)
	}
	if outPeers.Len() != 0 {
		outRelayPeers, _ = pm.FilterPeersByProto(outPeers, relay.WakuRelayID_v200)
	}
	return
}

// ensureMinRelayConnsPerTopic makes sure there are a minimum of D connections per pubsubTopic.
// If not, it will look into the peerStore to initiate more connections.
// If the peerStore doesn't have enough peers, it will wait for discv5 to find more and try in the next cycle.
func (pm *PeerManager) ensureMinRelayConnsPerTopic() {
	pm.topicMutex.RLock()
	defer pm.topicMutex.RUnlock()
	for topicStr, topicInst := range pm.subRelayTopics {

		// @cammellos reported that ListPeers returned an invalid number of
		// peers. This will ensure that the peers returned by this function
		// match those peers that are currently connected
		curPeerLen := 0
		for _, p := range topicInst.topic.ListPeers() {
			if pm.host.Network().Connectedness(p) == network.Connected {
				curPeerLen++
			}
		}
		if curPeerLen < waku_proto.GossipSubOptimalFullMeshSize {
			pm.logger.Debug("subscribed topic is unhealthy, initiating more connections to maintain health",
				zap.String("pubSubTopic", topicStr), zap.Int("connectedPeerCount", curPeerLen),
				zap.Int("optimumPeers", waku_proto.GossipSubOptimalFullMeshSize))
			// Find not connected peers.
			notConnectedPeers := pm.getNotConnectedPers(topicStr)
			if notConnectedPeers.Len() == 0 {
				pm.logger.Debug("could not find any peers in peerstore to connect to, discovering more", zap.String("pubSubTopic", topicStr))
				pm.discoverPeersByPubsubTopics([]string{topicStr}, relay.WakuRelayID_v200, pm.ctx, 2)
				continue
			}
			pm.logger.Debug("connecting to eligible peers in peerstore", zap.String("pubSubTopic", topicStr))
			// Connect to eligible peers.
			numPeersToConnect := waku_proto.GossipSubOptimalFullMeshSize - curPeerLen

			if numPeersToConnect > notConnectedPeers.Len() {
				numPeersToConnect = notConnectedPeers.Len()
			}
			pm.connectToPeers(notConnectedPeers[0:numPeersToConnect])
		}
	}
}

// connectToRelayPeers ensures a minimum of D connections are there for each pubSubTopic.
// If not, it initiates connections to additional peers.
// It also checks for incoming relay connections and prunes them once they cross InRelayPeersTarget.
func (pm *PeerManager) connectToRelayPeers() {
	// Check for out peer connections and connect to more peers.
	pm.ensureMinRelayConnsPerTopic()

	inRelayPeers, outRelayPeers := pm.getRelayPeers()
	pm.logger.Debug("number of relay peers connected",
		zap.Int("in", inRelayPeers.Len()),
		zap.Int("out", outRelayPeers.Len()))
	if inRelayPeers.Len() > 0 &&
		inRelayPeers.Len() > pm.InRelayPeersTarget {
		pm.pruneInRelayConns(inRelayPeers)
	}
}

// connectToPeers connects to the peers provided in the list if their addresses have not expired.
func (pm *PeerManager) connectToPeers(peers peer.IDSlice) {
	for _, peerID := range peers {
		peerData := AddrInfoToPeerData(wps.PeerManager, peerID, pm.host)
		if peerData == nil {
			continue
		}
		pm.peerConnector.PushToChan(*peerData)
	}
}

// getNotConnectedPers returns peers for a pubSubTopic that are not connected.
func (pm *PeerManager) getNotConnectedPers(pubsubTopic string) (notConnectedPeers peer.IDSlice) {
	var peerList peer.IDSlice
	if pubsubTopic == "" {
		peerList = pm.host.Peerstore().Peers()
	} else {
		peerList = pm.host.Peerstore().(*wps.WakuPeerstoreImpl).PeersByPubSubTopic(pubsubTopic)
	}
	for _, peerID := range peerList {
		if pm.host.Network().Connectedness(peerID) != network.Connected {
			notConnectedPeers = append(notConnectedPeers, peerID)
		}
	}
	return
}

// pruneInRelayConns prunes any incoming relay connections crossing the derived InRelayPeersTarget.
func (pm *PeerManager) pruneInRelayConns(inRelayPeers peer.IDSlice) {

	// Start disconnecting peers, based on what?
	// For now no preference is used.
	// TODO: Need to have a more intelligent way of doing this, maybe peer scores.
	// TODO: Keep optimalPeersRequired for a pubSubTopic in mind while pruning connections to peers.
	pm.logger.Info("peer connections exceed target relay peers, hence pruning",
		zap.Int("cnt", inRelayPeers.Len()), zap.Int("target", pm.InRelayPeersTarget))
	for pruningStartIndex := pm.InRelayPeersTarget; pruningStartIndex < inRelayPeers.Len(); pruningStartIndex++ {
		p := inRelayPeers[pruningStartIndex]
		err := pm.host.Network().ClosePeer(p)
		if err != nil {
			pm.logger.Warn("failed to disconnect connection towards peer",
				logging.HostID("peerID", p))
			continue
		}
		pm.logger.Debug("successfully disconnected connection towards peer",
			logging.HostID("peerID", p))
	}
}

func (pm *PeerManager) processPeerENR(p *service.PeerData) []protocol.ID {
	shards, err := wenr.RelaySharding(p.ENR.Record())
	if err != nil {
		pm.logger.Error("could not derive relayShards from ENR", zap.Error(err),
			logging.HostID("peer", p.AddrInfo.ID), zap.String("enr", p.ENR.String()))
	} else {
		if shards != nil {
			p.PubsubTopics = make([]string, 0)
			topics := shards.Topics()
			for _, topic := range topics {
				topicStr := topic.String()
				p.PubsubTopics = append(p.PubsubTopics, topicStr)
			}
		} else {
			pm.logger.Debug("ENR doesn't have relay shards", logging.HostID("peer", p.AddrInfo.ID))
		}
	}
	supportedProtos := []protocol.ID{}
	// Identify and record the protocols supported by the peer based on the discovered peer's ENR
	var enrField wenr.WakuEnrBitfield
	if err := p.ENR.Record().Load(enr.WithEntry(wenr.WakuENRField, &enrField)); err == nil {
		for proto, protoENR := range pm.wakuprotoToENRFieldMap {
			protoENRField := protoENR.waku2ENRBitField
			if protoENRField&enrField != 0 {
				supportedProtos = append(supportedProtos, proto)
				// Add service peers to serviceSlots.
				pm.addPeerToServiceSlot(proto, p.AddrInfo.ID)
			}
		}
	}
	return supportedProtos
}

// AddDiscoveredPeer adds dynamically discovered peers.
// Note that these peers will not be set in service-slots.
func (pm *PeerManager) AddDiscoveredPeer(p service.PeerData, connectNow bool) {
	// This check is repeated inside addPeer, in order to avoid the additional complexity of rolling back other changes.
	if pm.maxPeers <= pm.host.Peerstore().Peers().Len() {
		return
	}
	// Check if the peer is already present; if so, skip adding
	_, err := pm.host.Peerstore().(wps.WakuPeerstore).Origin(p.AddrInfo.ID)
	if err == nil {
		enr, err := pm.host.Peerstore().(wps.WakuPeerstore).ENR(p.AddrInfo.ID)
		// Verify whether the stored ENR record is at least as recent (discv5 and peer exchange can return peers already seen)
		if err == nil && enr.Record().Seq() >= p.ENR.Seq() {
			return
		}
		if err != nil {
			// Peer is already in the peer-store but doesn't have an ENR, while the discovered peer has one
			pm.logger.Info("peer already found in peerstore, but doesn't have an ENR record, re-adding",
				logging.HostID("peer", p.AddrInfo.ID), zap.Uint64("newENRSeq", p.ENR.Seq()))
		} else {
			// Peer is already in the peer-store but the stored ENR is older than the discovered one
			pm.logger.Info("peer already found in peerstore, but re-adding it as ENR sequence is higher than locally stored",
				logging.HostID("peer", p.AddrInfo.ID), zap.Uint64("newENRSeq", p.ENR.Seq()), zap.Uint64("storedENRSeq", enr.Record().Seq()))
		}
	}

	supportedProtos := []protocol.ID{}
	if len(p.PubsubTopics) == 0 && p.ENR != nil {
		// Try to fetch shard info and supported protocols from the ENR to arrive at pubSub topics.
		supportedProtos = pm.processPeerENR(&p)
	}

	_ = pm.addPeer(p.AddrInfo.ID, p.AddrInfo.Addrs, p.Origin, p.PubsubTopics, supportedProtos...)

	if p.ENR != nil {
		err := pm.host.Peerstore().(wps.WakuPeerstore).SetENR(p.AddrInfo.ID, p.ENR)
		if err != nil {
			pm.logger.Error("could not store enr", zap.Error(err),
				logging.HostID("peer", p.AddrInfo.ID), zap.String("enr", p.ENR.String()))
		}
	}
	if connectNow {
		pm.logger.Debug("connecting now to discovered peer", logging.HostID("peer", p.AddrInfo.ID))
		go pm.peerConnector.PushToChan(p)
	}
}

// addPeer adds a peer to only the peerStore.
// It also sets additional metadata such as origin, ENR and supported protocols.
func (pm *PeerManager) addPeer(ID peer.ID, addrs []ma.Multiaddr, origin wps.Origin, pubSubTopics []string, protocols ...protocol.ID) error {
	if pm.maxPeers <= pm.host.Peerstore().Peers().Len() {
		pm.logger.Error("could not add peer as peer store capacity is reached", logging.HostID("peer", ID), zap.Int("capacity", pm.maxPeers))
		return errors.New("peer store capacity reached")
	}
	pm.logger.Info("adding peer to peerstore", logging.HostID("peer", ID))
	if origin == wps.Static {
		pm.host.Peerstore().AddAddrs(ID, addrs, peerstore.PermanentAddrTTL)
	} else {
		// Need to re-evaluate the address expiry.
		// For now, expire them with the default AddressTTL, which is an hour.
		pm.host.Peerstore().AddAddrs(ID, addrs, peerstore.AddressTTL)
	}
	err := pm.host.Peerstore().(wps.WakuPeerstore).SetOrigin(ID, origin)
	if err != nil {
		pm.logger.Error("could not set origin", zap.Error(err), logging.HostID("peer", ID))
		return err
	}

	if len(protocols) > 0 {
		err = pm.host.Peerstore().AddProtocols(ID, protocols...)
		if err != nil {
			pm.logger.Error("could not set protocols", zap.Error(err), logging.HostID("peer", ID))
			return err
		}
	}
	if len(pubSubTopics) == 0 {
		// Probably the peer was discovered via DNS discovery (for which we don't have pubSubTopic info).
		// If pubSubTopic and ENR are empty, or there is no shard info in the ENR, then set the default pubSubTopic.
		pubSubTopics = []string{relay.DefaultWakuTopic}
	}
	err = pm.host.Peerstore().(wps.WakuPeerstore).SetPubSubTopics(ID, pubSubTopics)
	if err != nil {
		pm.logger.Error("could not store pubSubTopic", zap.Error(err),
			logging.HostID("peer", ID), zap.Strings("topics", pubSubTopics))
	}
	return nil
}

func AddrInfoToPeerData(origin wps.Origin, peerID peer.ID, host host.Host, pubsubTopics ...string) *service.PeerData {
	addrs := host.Peerstore().Addrs(peerID)
	if len(addrs) == 0 {
		// Addresses expired; remove peer from peerStore
		host.Peerstore().RemovePeer(peerID)
		return nil
	}
	return &service.PeerData{
		Origin: origin,
		AddrInfo: peer.AddrInfo{
			ID:    peerID,
			Addrs: addrs,
		},
		PubsubTopics: pubsubTopics,
	}
}

// AddPeer adds a peer to the peerStore and also to service slots.
func (pm *PeerManager) AddPeer(address ma.Multiaddr, origin wps.Origin, pubsubTopics []string, protocols ...protocol.ID) (*service.PeerData, error) {
	// Assuming all addresses have a peerId
|
||||
info, err := peer.AddrInfoFromP2pAddr(address)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
//Add Service peers to serviceSlots.
|
||||
for _, proto := range protocols {
|
||||
pm.addPeerToServiceSlot(proto, info.ID)
|
||||
}
|
||||
|
||||
//Add to the peer-store
|
||||
err = pm.addPeer(info.ID, info.Addrs, origin, pubsubTopics, protocols...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
pData := &service.PeerData{
|
||||
Origin: origin,
|
||||
AddrInfo: peer.AddrInfo{
|
||||
ID: info.ID,
|
||||
Addrs: info.Addrs,
|
||||
},
|
||||
PubsubTopics: pubsubTopics,
|
||||
}
|
||||
|
||||
return pData, nil
|
||||
}
|
||||
|
||||
// Connect establishes a connection to a
|
||||
func (pm *PeerManager) Connect(pData *service.PeerData) {
|
||||
go pm.peerConnector.PushToChan(*pData)
|
||||
}
|
||||
|
||||
// RemovePeer deletes peer from the peerStore after disconnecting it.
|
||||
// It also removes the peer from serviceSlot.
|
||||
func (pm *PeerManager) RemovePeer(peerID peer.ID) {
|
||||
pm.host.Peerstore().RemovePeer(peerID)
|
||||
//Search if this peer is in serviceSlot and if so, remove it from there
|
||||
// TODO:Add another peer which is statically configured to the serviceSlot.
|
||||
pm.serviceSlots.removePeer(peerID)
|
||||
}
|
||||
|
||||
// addPeerToServiceSlot adds a peerID to serviceSlot.
|
||||
// Adding to peerStore is expected to be already done by caller.
|
||||
// If relay proto is passed, it is not added to serviceSlot.
|
||||
func (pm *PeerManager) addPeerToServiceSlot(proto protocol.ID, peerID peer.ID) {
|
||||
if proto == relay.WakuRelayID_v200 {
|
||||
pm.logger.Debug("cannot add Relay peer to service peer slots")
|
||||
return
|
||||
}
|
||||
|
||||
//For now adding the peer to serviceSlot which means the latest added peer would be given priority.
|
||||
//TODO: Ideally we should sort the peers per service and return best peer based on peer score or RTT etc.
|
||||
pm.logger.Info("adding peer to service slots", logging.HostID("peer", peerID),
|
||||
zap.String("service", string(proto)))
|
||||
// getPeers returns nil for WakuRelayIDv200 protocol, but we don't run this ServiceSlot code for WakuRelayIDv200 protocol
|
||||
pm.serviceSlots.getPeers(proto).add(peerID)
|
||||
}
|
||||
232
vendor/github.com/waku-org/go-waku/waku/v2/peermanager/peer_selection.go
generated
vendored
Normal file
@@ -0,0 +1,232 @@
package peermanager

import (
	"context"
	"errors"
	"math/rand"
	"sync"
	"time"

	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
	"github.com/libp2p/go-libp2p/p2p/protocol/ping"
	"github.com/waku-org/go-waku/logging"
	wps "github.com/waku-org/go-waku/waku/v2/peerstore"
	waku_proto "github.com/waku-org/go-waku/waku/v2/protocol"
	"go.uber.org/zap"
)

// SelectPeerByContentTopics returns a random peer that supports a given protocol for the given contentTopics.
// If a list of specific peers is passed, the peer will be chosen from that list, assuming
// it supports the chosen protocol and contentTopics; otherwise a peer is chosen from the service slot.
// If a peer cannot be found in the service slot, a peer is selected from the node's peerstore.
func (pm *PeerManager) SelectPeerByContentTopics(proto protocol.ID, contentTopics []string, specificPeers ...peer.ID) (peer.ID, error) {
	pubsubTopics := []string{}
	for _, cTopic := range contentTopics {
		pubsubTopic, err := waku_proto.GetPubSubTopicFromContentTopic(cTopic)
		if err != nil {
			pm.logger.Debug("selectPeer: failed to get pubsubTopic from contentTopic", zap.String("contentTopic", cTopic))
			return "", err
		}
		pubsubTopics = append(pubsubTopics, pubsubTopic)
	}
	return pm.SelectPeer(PeerSelectionCriteria{PubsubTopics: pubsubTopics, Proto: proto, SpecificPeers: specificPeers})
}

// SelectRandomPeer returns a random peer that supports a given protocol.
// If a list of specific peers is passed, the peer will be chosen from that list, assuming
// it supports the chosen protocol; otherwise a peer is chosen from the service slot.
// If a peer cannot be found in the service slot, a peer is selected from the node's peerstore.
// If a pubsubTopic is specified, the peer is selected from the list that supports that topic.
func (pm *PeerManager) SelectRandomPeer(criteria PeerSelectionCriteria) (peer.ID, error) {
	// @TODO We need to be more strategic about which peers we dial. Right now we just set one on the service.
	// Ideally depending on the query and our set of peers we take a subset of ideal peers.
	// This will require us to check for various factors such as:
	//  - which topics they track
	//  - latency?

	peerID, err := pm.selectServicePeer(criteria.Proto, criteria.PubsubTopics, criteria.Ctx, criteria.SpecificPeers...)
	if err == nil {
		return peerID, nil
	} else if !errors.Is(err, ErrNoPeersAvailable) {
		pm.logger.Debug("could not retrieve random peer from slot", zap.String("protocol", string(criteria.Proto)),
			zap.Strings("pubsubTopics", criteria.PubsubTopics), zap.Error(err))
		return "", err
	}

	// Not found in serviceSlots, or proto == WakuRelayID_v200.
	filteredPeers, err := pm.FilterPeersByProto(criteria.SpecificPeers, criteria.Proto)
	if err != nil {
		return "", err
	}
	if len(criteria.PubsubTopics) > 0 {
		filteredPeers = pm.host.Peerstore().(wps.WakuPeerstore).PeersByPubSubTopics(criteria.PubsubTopics, filteredPeers...)
	}
	return selectRandomPeer(filteredPeers, pm.logger)
}

func (pm *PeerManager) selectServicePeer(proto protocol.ID, pubsubTopics []string, ctx context.Context, specificPeers ...peer.ID) (peer.ID, error) {
	var peerID peer.ID
	var err error
	for retryCnt := 0; retryCnt < 1; retryCnt++ {
		// Try to fetch from serviceSlot.
		if slot := pm.serviceSlots.getPeers(proto); slot != nil {
			if len(pubsubTopics) == 0 || (len(pubsubTopics) == 1 && pubsubTopics[0] == "") {
				return slot.getRandom()
			}
			// PubsubTopic-based selection.
			keys := make([]peer.ID, 0, len(slot.m))
			for i := range slot.m {
				keys = append(keys, i)
			}
			selectedPeers := pm.host.Peerstore().(wps.WakuPeerstore).PeersByPubSubTopics(pubsubTopics, keys...)
			peerID, err = selectRandomPeer(selectedPeers, pm.logger)
			if err == nil {
				return peerID, nil
			}
			pm.logger.Debug("discovering peers by pubsubTopic", zap.Strings("pubsubTopics", pubsubTopics))
			// Trigger on-demand discovery for this topic and connect to the peer immediately.
			// For now discover at least 1 peer for the criteria.
			pm.discoverPeersByPubsubTopics(pubsubTopics, proto, ctx, 1)
			// Try to fetch peers again.
			continue
		}
	}
	if peerID == "" {
		pm.logger.Debug("could not retrieve random peer from slot", zap.Error(err))
	}
	return "", ErrNoPeersAvailable
}

// PeerSelectionCriteria is the selection criteria used by PeerManager to select peers.
type PeerSelectionCriteria struct {
	SelectionType PeerSelection
	Proto         protocol.ID
	PubsubTopics  []string
	SpecificPeers peer.IDSlice
	Ctx           context.Context
}

// SelectPeer selects a peer based on the selectionType specified.
// Context is required only when selectionType is set to LowestRTT.
func (pm *PeerManager) SelectPeer(criteria PeerSelectionCriteria) (peer.ID, error) {
	switch criteria.SelectionType {
	case Automatic:
		return pm.SelectRandomPeer(criteria)
	case LowestRTT:
		return pm.SelectPeerWithLowestRTT(criteria)
	default:
		return "", errors.New("unknown peer selection type specified")
	}
}

type pingResult struct {
	p   peer.ID
	rtt time.Duration
}

// SelectPeerWithLowestRTT selects a peer that supports a specific protocol with the lowest reply time.
// If a list of specific peers is passed, the peer will be chosen from that list, assuming
// it supports the chosen protocol; otherwise a peer is chosen from the node's peerstore.
// TO OPTIMIZE: As of now the peer with the lowest RTT is identified when select is called; this should be optimized
// to maintain the RTT as part of peer-scoring and just select based on that.
func (pm *PeerManager) SelectPeerWithLowestRTT(criteria PeerSelectionCriteria) (peer.ID, error) {
	var peers peer.IDSlice
	var err error
	if criteria.Ctx == nil {
		pm.logger.Warn("context is not passed for peerSelectionWithRTT, using background context")
		criteria.Ctx = context.Background()
	}

	if len(criteria.PubsubTopics) == 0 || (len(criteria.PubsubTopics) == 1 && criteria.PubsubTopics[0] == "") {
		peers = pm.host.Peerstore().(wps.WakuPeerstore).PeersByPubSubTopics(criteria.PubsubTopics, criteria.SpecificPeers...)
	}

	peers, err = pm.FilterPeersByProto(peers, criteria.Proto)
	if err != nil {
		return "", err
	}
	wg := sync.WaitGroup{}
	waitCh := make(chan struct{})
	pingCh := make(chan pingResult, 1000)

	wg.Add(len(peers))

	go func() {
		for _, p := range peers {
			go func(p peer.ID) {
				defer wg.Done()
				ctx, cancel := context.WithTimeout(criteria.Ctx, 3*time.Second)
				defer cancel()
				result := <-ping.Ping(ctx, pm.host, p)
				if result.Error == nil {
					pingCh <- pingResult{
						p:   p,
						rtt: result.RTT,
					}
				} else {
					pm.logger.Debug("could not ping", logging.HostID("peer", p), zap.Error(result.Error))
				}
			}(p)
		}
		wg.Wait()
		close(waitCh)
		close(pingCh)
	}()

	select {
	case <-waitCh:
		var min *pingResult
		for p := range pingCh {
			p := p // copy before taking the address, so min does not alias the reused loop variable
			if min == nil || p.rtt < min.rtt {
				min = &p
			}
		}
		if min == nil {
			return "", ErrNoPeersAvailable
		}

		return min.p, nil
	case <-criteria.Ctx.Done():
		return "", ErrNoPeersAvailable
	}
}

// selectRandomPeer randomly selects a peer from the list of peers passed.
func selectRandomPeer(peers peer.IDSlice, log *zap.Logger) (peer.ID, error) {
	if len(peers) >= 1 {
		peerID := peers[rand.Intn(len(peers))]
		// TODO: proper heuristic here that compares peer scores and selects the "best" one. For now a random peer for the given protocol is returned.
		return peerID, nil // nolint: gosec
	}

	return "", ErrNoPeersAvailable
}

// FilterPeersByProto filters the list of peers that support the specified protocols.
// If specificPeers is nil, all peers in the host's peerStore are considered for filtering.
func (pm *PeerManager) FilterPeersByProto(specificPeers peer.IDSlice, proto ...protocol.ID) (peer.IDSlice, error) {
	peerSet := specificPeers
	if len(peerSet) == 0 {
		peerSet = pm.host.Peerstore().Peers()
	}

	var peers peer.IDSlice
	for _, peer := range peerSet {
		protocols, err := pm.host.Peerstore().SupportsProtocols(peer, proto...)
		if err != nil {
			return nil, err
		}

		if len(protocols) > 0 {
			peers = append(peers, peer)
		}
	}
	return peers, nil
}
78
vendor/github.com/waku-org/go-waku/waku/v2/peermanager/service_slot.go
generated
vendored
Normal file
@@ -0,0 +1,78 @@
package peermanager

import (
	"sync"

	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
)

type peerMap struct {
	mu sync.RWMutex
	m  map[peer.ID]struct{}
}

func newPeerMap() *peerMap {
	return &peerMap{
		m: map[peer.ID]struct{}{},
	}
}

func (pm *peerMap) getRandom() (peer.ID, error) {
	pm.mu.RLock()
	defer pm.mu.RUnlock()
	for pID := range pm.m {
		return pID, nil
	}
	return "", ErrNoPeersAvailable
}

func (pm *peerMap) remove(pID peer.ID) {
	pm.mu.Lock()
	defer pm.mu.Unlock()

	delete(pm.m, pID)
}

func (pm *peerMap) add(pID peer.ID) {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	pm.m[pID] = struct{}{}
}

// ServiceSlots stores service slots for a given protocol topic.
type ServiceSlots struct {
	mu sync.Mutex
	m  map[protocol.ID]*peerMap
}

// NewServiceSlot is a constructor for ServiceSlots.
func NewServiceSlot() *ServiceSlots {
	return &ServiceSlots{
		m: map[protocol.ID]*peerMap{},
	}
}

// getPeers returns the peerMap holding all the peers for a given protocol.
// Since peerMap is only used in the peermanager package, it is unexported.
func (slots *ServiceSlots) getPeers(proto protocol.ID) *peerMap {
	if proto == relay.WakuRelayID_v200 {
		return nil
	}
	slots.mu.Lock()
	defer slots.mu.Unlock()
	if slots.m[proto] == nil {
		slots.m[proto] = newPeerMap()
	}
	return slots.m[proto]
}

// removePeer removes a peer ID from the slots of all protocols.
func (slots *ServiceSlots) removePeer(peerID peer.ID) {
	slots.mu.Lock()
	defer slots.mu.Unlock()
	for _, m := range slots.m {
		m.remove(peerID)
	}
}
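The `peerMap`/`ServiceSlots` pattern above (a mutex-guarded set per protocol, with "random" selection via Go's randomized map iteration order) can be sketched standalone. This is a minimal sketch, with plain strings substituting for `peer.ID` and a local `errNoPeers` in place of `ErrNoPeersAvailable`.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNoPeers = errors.New("no peers available")

// slot is a mutex-guarded set of peer IDs, like peerMap above.
type slot struct {
	mu sync.RWMutex
	m  map[string]struct{}
}

func (s *slot) add(p string)    { s.mu.Lock(); defer s.mu.Unlock(); s.m[p] = struct{}{} }
func (s *slot) remove(p string) { s.mu.Lock(); defer s.mu.Unlock(); delete(s.m, p) }

// getRandom relies on Go's randomized map iteration order: returning the
// first key of a range loop yields an arbitrary element, exactly as
// peerMap.getRandom does above.
func (s *slot) getRandom() (string, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	for p := range s.m {
		return p, nil
	}
	return "", errNoPeers
}

func main() {
	s := &slot{m: map[string]struct{}{}}
	s.add("peerA")
	p, err := s.getRandom()
	fmt.Println(p, err)
}
```

Map iteration order in Go is deliberately randomized between runs, which is why this trick gives a cheap uniform-ish pick without tracking an index.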
166
vendor/github.com/waku-org/go-waku/waku/v2/peermanager/topic_event_handler.go
generated
vendored
Normal file
@@ -0,0 +1,166 @@
package peermanager

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/event"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/waku-org/go-waku/logging"
	wps "github.com/waku-org/go-waku/waku/v2/peerstore"
	waku_proto "github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"go.uber.org/zap"
)

func (pm *PeerManager) SubscribeToRelayEvtBus(bus event.Bus) error {
	var err error
	pm.sub, err = bus.Subscribe([]interface{}{new(relay.EvtPeerTopic), new(relay.EvtRelaySubscribed), new(relay.EvtRelayUnsubscribed)})
	return err
}

func (pm *PeerManager) handleNewRelayTopicSubscription(pubsubTopic string, topicInst *pubsub.Topic) {
	pm.logger.Info("handleNewRelayTopicSubscription", zap.String("pubSubTopic", pubsubTopic))
	pm.topicMutex.Lock()
	defer pm.topicMutex.Unlock()

	_, ok := pm.subRelayTopics[pubsubTopic]
	if ok {
		// Nothing to be done, as we are already subscribed to this topic.
		return
	}
	pm.subRelayTopics[pubsubTopic] = &NodeTopicDetails{topicInst}
	// Check how many relay peers we are connected to that subscribe to this topic; if fewer than D, find peers in the peerstore and connect.
	// If there are no peers in the peerStore, trigger discovery for this topic?
	relevantPeersForPubSubTopic := pm.host.Peerstore().(*wps.WakuPeerstoreImpl).PeersByPubSubTopic(pubsubTopic)
	var notConnectedPeers peer.IDSlice
	connectedPeers := 0
	for _, peer := range relevantPeersForPubSubTopic {
		if pm.host.Network().Connectedness(peer) == network.Connected {
			connectedPeers++
		} else {
			notConnectedPeers = append(notConnectedPeers, peer)
		}
	}

	if connectedPeers >= waku_proto.GossipSubOptimalFullMeshSize { // TODO: Use a config rather than hard-coding.
		// Should we use the optimal number or define some sort of config for the node to choose from?
		// A desktop node may choose this to be 4-6, whereas a service node may choose this to be 8-12 based on the resources
		// or bandwidth it can support.
		// Should we link this to bandwidth management somehow, or just depend on some sort of config profile?
		pm.logger.Info("optimal required relay peers for new pubSubTopic are already connected", zap.String("pubSubTopic", pubsubTopic),
			zap.Int("connectedPeerCount", connectedPeers))
		return
	}
	triggerDiscovery := false
	if notConnectedPeers.Len() > 0 {
		numPeersToConnect := notConnectedPeers.Len() - connectedPeers
		if numPeersToConnect < 0 {
			numPeersToConnect = notConnectedPeers.Len()
		} else if numPeersToConnect-connectedPeers > waku_proto.GossipSubOptimalFullMeshSize {
			numPeersToConnect = waku_proto.GossipSubOptimalFullMeshSize - connectedPeers
		}
		if numPeersToConnect+connectedPeers < waku_proto.GossipSubOptimalFullMeshSize {
			triggerDiscovery = true
		}
		// For now all peers are given the same priority.
		// Later we may want to choose peers that have more shards in common over others.
		pm.connectToPeers(notConnectedPeers[0:numPeersToConnect])
	} else {
		triggerDiscovery = true
	}

	if triggerDiscovery {
		// TODO: Initiate on-demand discovery for this pubSubTopic.
		// Use peer-exchange and rendezvous?
		// Should we query the discovery cache to find out if there are any more peers before triggering discovery?
		return
	}
}

func (pm *PeerManager) handleNewRelayTopicUnSubscription(pubsubTopic string) {
	pm.logger.Info("handleNewRelayTopicUnSubscription", zap.String("pubSubTopic", pubsubTopic))
	pm.topicMutex.Lock()
	defer pm.topicMutex.Unlock()
	_, ok := pm.subRelayTopics[pubsubTopic]
	if !ok {
		// Nothing to be done, as we are already unsubscribed from this topic.
		return
	}
	delete(pm.subRelayTopics, pubsubTopic)

	// If there are peers subscribed only to this topic, disconnect them.
	relevantPeersForPubSubTopic := pm.host.Peerstore().(*wps.WakuPeerstoreImpl).PeersByPubSubTopic(pubsubTopic)
	for _, peer := range relevantPeersForPubSubTopic {
		if pm.host.Network().Connectedness(peer) == network.Connected {
			peerTopics, err := pm.host.Peerstore().(*wps.WakuPeerstoreImpl).PubSubTopics(peer)
			if err != nil {
				pm.logger.Error("could not retrieve pubsub topics for peer", zap.Error(err),
					logging.HostID("peerID", peer))
				continue
			}
			if len(peerTopics) == 1 && peerTopics[0] == pubsubTopic {
				err := pm.host.Network().ClosePeer(peer)
				if err != nil {
					pm.logger.Warn("failed to disconnect connection towards peer",
						logging.HostID("peerID", peer))
					continue
				}
				pm.logger.Debug("successfully disconnected connection towards peer",
					logging.HostID("peerID", peer))
			}
		}
	}
}

func (pm *PeerManager) handlerPeerTopicEvent(peerEvt relay.EvtPeerTopic) {
	wps := pm.host.Peerstore().(*wps.WakuPeerstoreImpl)
	peerID := peerEvt.PeerID
	if peerEvt.State == relay.PEER_JOINED {
		err := wps.AddPubSubTopic(peerID, peerEvt.PubsubTopic)
		if err != nil {
			pm.logger.Error("failed to add pubSubTopic for peer",
				logging.HostID("peerID", peerID), zap.String("topic", peerEvt.PubsubTopic), zap.Error(err))
		}
	} else if peerEvt.State == relay.PEER_LEFT {
		err := wps.RemovePubSubTopic(peerID, peerEvt.PubsubTopic)
		if err != nil {
			pm.logger.Error("failed to remove pubSubTopic for peer",
				logging.HostID("peerID", peerID), zap.Error(err))
		}
	} else {
		pm.logger.Error("unknown peer event received", zap.Int("eventState", int(peerEvt.State)))
	}
}

func (pm *PeerManager) peerEventLoop(ctx context.Context) {
	defer pm.sub.Close()
	for {
		select {
		case e := <-pm.sub.Out():
			switch e := e.(type) {
			case relay.EvtPeerTopic:
				pm.handlerPeerTopicEvent(e)
			case relay.EvtRelaySubscribed:
				pm.handleNewRelayTopicSubscription(e.Topic, e.TopicInst)
			case relay.EvtRelayUnsubscribed:
				pm.handleNewRelayTopicUnSubscription(e.Topic)
			default:
				pm.logger.Error("unsupported event type", zap.Any("eventType", e))
			}

		case <-ctx.Done():
			return
		}
	}
}
139
vendor/github.com/waku-org/go-waku/waku/v2/peerstore/inherited.go
generated
vendored
Normal file
@@ -0,0 +1,139 @@
package peerstore

import (
	"context"
	"time"

	ic "github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/peerstore"
	"github.com/libp2p/go-libp2p/core/protocol"
	"github.com/libp2p/go-libp2p/core/record"
	ma "github.com/multiformats/go-multiaddr"
)

// Contains all interface methods inherited from a libp2p peerstore.

func (ps *WakuPeerstoreImpl) AddAddr(p peer.ID, addr ma.Multiaddr, ttl time.Duration) {
	ps.peerStore.AddAddr(p, addr, ttl)
}

func (ps *WakuPeerstoreImpl) AddAddrs(p peer.ID, addrs []ma.Multiaddr, ttl time.Duration) {
	ps.peerStore.AddAddrs(p, addrs, ttl)
}

func (ps *WakuPeerstoreImpl) SetAddr(p peer.ID, addr ma.Multiaddr, ttl time.Duration) {
	ps.peerStore.SetAddr(p, addr, ttl)
}

func (ps *WakuPeerstoreImpl) SetAddrs(p peer.ID, addrs []ma.Multiaddr, ttl time.Duration) {
	ps.peerStore.SetAddrs(p, addrs, ttl)
}

func (ps *WakuPeerstoreImpl) UpdateAddrs(p peer.ID, oldTTL time.Duration, newTTL time.Duration) {
	ps.peerStore.UpdateAddrs(p, oldTTL, newTTL)
}

func (ps *WakuPeerstoreImpl) Addrs(p peer.ID) []ma.Multiaddr {
	return ps.peerStore.Addrs(p)
}

func (ps *WakuPeerstoreImpl) AddrStream(ctx context.Context, p peer.ID) <-chan ma.Multiaddr {
	return ps.peerStore.AddrStream(ctx, p)
}

func (ps *WakuPeerstoreImpl) ClearAddrs(p peer.ID) {
	ps.peerStore.ClearAddrs(p)
}

func (ps *WakuPeerstoreImpl) PeersWithAddrs() peer.IDSlice {
	return ps.peerStore.PeersWithAddrs()
}

func (ps *WakuPeerstoreImpl) PeerInfo(peerID peer.ID) peer.AddrInfo {
	return ps.peerStore.PeerInfo(peerID)
}

func (ps *WakuPeerstoreImpl) Peers() peer.IDSlice {
	return ps.peerStore.Peers()
}

func (ps *WakuPeerstoreImpl) Close() error {
	return ps.peerStore.Close()
}

func (ps *WakuPeerstoreImpl) PubKey(p peer.ID) ic.PubKey {
	return ps.peerStore.PubKey(p)
}

func (ps *WakuPeerstoreImpl) AddPubKey(p peer.ID, pubk ic.PubKey) error {
	return ps.peerStore.AddPubKey(p, pubk)
}

func (ps *WakuPeerstoreImpl) PrivKey(p peer.ID) ic.PrivKey {
	return ps.peerStore.PrivKey(p)
}

func (ps *WakuPeerstoreImpl) AddPrivKey(p peer.ID, privk ic.PrivKey) error {
	return ps.peerStore.AddPrivKey(p, privk)
}

func (ps *WakuPeerstoreImpl) PeersWithKeys() peer.IDSlice {
	return ps.peerStore.PeersWithKeys()
}

func (ps *WakuPeerstoreImpl) RemovePeer(p peer.ID) {
	ps.peerStore.RemovePeer(p)
}

func (ps *WakuPeerstoreImpl) Get(p peer.ID, key string) (interface{}, error) {
	return ps.peerStore.Get(p, key)
}

func (ps *WakuPeerstoreImpl) Put(p peer.ID, key string, val interface{}) error {
	return ps.peerStore.Put(p, key, val)
}

func (ps *WakuPeerstoreImpl) RecordLatency(p peer.ID, t time.Duration) {
	ps.peerStore.RecordLatency(p, t)
}

func (ps *WakuPeerstoreImpl) LatencyEWMA(p peer.ID) time.Duration {
	return ps.peerStore.LatencyEWMA(p)
}

func (ps *WakuPeerstoreImpl) GetProtocols(p peer.ID) ([]protocol.ID, error) {
	return ps.peerStore.GetProtocols(p)
}

func (ps *WakuPeerstoreImpl) AddProtocols(p peer.ID, proto ...protocol.ID) error {
	return ps.peerStore.AddProtocols(p, proto...)
}

func (ps *WakuPeerstoreImpl) SetProtocols(p peer.ID, proto ...protocol.ID) error {
	return ps.peerStore.SetProtocols(p, proto...)
}

func (ps *WakuPeerstoreImpl) RemoveProtocols(p peer.ID, proto ...protocol.ID) error {
	return ps.peerStore.RemoveProtocols(p, proto...)
}

func (ps *WakuPeerstoreImpl) SupportsProtocols(p peer.ID, proto ...protocol.ID) ([]protocol.ID, error) {
	return ps.peerStore.SupportsProtocols(p, proto...)
}

func (ps *WakuPeerstoreImpl) FirstSupportedProtocol(p peer.ID, proto ...protocol.ID) (protocol.ID, error) {
	return ps.peerStore.FirstSupportedProtocol(p, proto...)
}

func (ps *WakuPeerstoreImpl) ConsumePeerRecord(s *record.Envelope, ttl time.Duration) (accepted bool, err error) {
	return ps.peerStore.(peerstore.CertifiedAddrBook).ConsumePeerRecord(s, ttl)
}

// GetPeerRecord returns an Envelope containing a PeerRecord for the
// given peer id, if one exists.
// Returns nil if no signed PeerRecord exists for the peer.
func (ps *WakuPeerstoreImpl) GetPeerRecord(p peer.ID) *record.Envelope {
	return ps.peerStore.(peerstore.CertifiedAddrBook).GetPeerRecord(p)
}
260
vendor/github.com/waku-org/go-waku/waku/v2/peerstore/waku_peer_store.go
generated
vendored
Normal file
@@ -0,0 +1,260 @@
package peerstore

import (
	"errors"
	"sync"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/peerstore"
)

// Origin is used to determine how the peer was identified:
// either it was statically added or discovered via one of the discovery protocols.
type Origin int64

const (
	Unknown Origin = iota
	Discv5
	Static
	PeerExchange
	DNSDiscovery
	Rendezvous
	PeerManager
)

const peerOrigin = "origin"
const peerENR = "enr"
const peerDirection = "direction"
const peerPubSubTopics = "pubSubTopics"

// ConnectionFailures contains connection failure information towards all peers.
type ConnectionFailures struct {
	sync.RWMutex
	failures map[peer.ID]int
}

// WakuPeerstoreImpl is an implementation of WakuPeerstore.
type WakuPeerstoreImpl struct {
	peerStore    peerstore.Peerstore
	connFailures ConnectionFailures
}

// WakuPeerstore is the interface implemented by WakuPeerstoreImpl.
type WakuPeerstore interface {
	SetOrigin(p peer.ID, origin Origin) error
	Origin(p peer.ID) (Origin, error)
	PeersByOrigin(origin Origin) peer.IDSlice
	SetENR(p peer.ID, enr *enode.Node) error
	ENR(p peer.ID) (*enode.Node, error)
	AddConnFailure(p peer.AddrInfo)
	ResetConnFailures(p peer.AddrInfo)
	ConnFailures(p peer.AddrInfo) int

	SetDirection(p peer.ID, direction network.Direction) error
	Direction(p peer.ID) (network.Direction, error)

	AddPubSubTopic(p peer.ID, topic string) error
	RemovePubSubTopic(p peer.ID, topic string) error
	PubSubTopics(p peer.ID) ([]string, error)
	SetPubSubTopics(p peer.ID, topics []string) error
	PeersByPubSubTopics(pubSubTopics []string, specificPeers ...peer.ID) peer.IDSlice
	PeersByPubSubTopic(pubSubTopic string, specificPeers ...peer.ID) peer.IDSlice
}

// NewWakuPeerstore creates a new WakuPeerstore object.
func NewWakuPeerstore(p peerstore.Peerstore) peerstore.Peerstore {
	return &WakuPeerstoreImpl{
		peerStore: p,
		connFailures: ConnectionFailures{
			failures: make(map[peer.ID]int),
		},
	}
}

// SetOrigin sets the origin for a specific peer.
func (ps *WakuPeerstoreImpl) SetOrigin(p peer.ID, origin Origin) error {
	return ps.peerStore.Put(p, peerOrigin, origin)
}

// Origin fetches the origin for a specific peer.
func (ps *WakuPeerstoreImpl) Origin(p peer.ID) (Origin, error) {
	result, err := ps.peerStore.Get(p, peerOrigin)
	if err != nil {
		return Unknown, err
	}

	return result.(Origin), nil
}

// PeersByOrigin returns the list of peers for a specific origin.
func (ps *WakuPeerstoreImpl) PeersByOrigin(expectedOrigin Origin) peer.IDSlice {
	var result peer.IDSlice
	for _, p := range ps.Peers() {
		actualOrigin, err := ps.Origin(p)
		if err == nil && actualOrigin == expectedOrigin {
			result = append(result, p)
		}
	}
	return result
}

// SetENR sets the ENR record for a peer.
func (ps *WakuPeerstoreImpl) SetENR(p peer.ID, enr *enode.Node) error {
	return ps.peerStore.Put(p, peerENR, enr)
}

// ENR fetches the ENR record for a peer.
func (ps *WakuPeerstoreImpl) ENR(p peer.ID) (*enode.Node, error) {
	result, err := ps.peerStore.Get(p, peerENR)
	if err != nil {
		return nil, err
	}
	return result.(*enode.Node), nil
}

// AddConnFailure increments the connection-failure count for a peer.
func (ps *WakuPeerstoreImpl) AddConnFailure(p peer.AddrInfo) {
	ps.connFailures.Lock()
	defer ps.connFailures.Unlock()
	ps.connFailures.failures[p.ID]++
}

// ResetConnFailures resets the connection-failure count for a peer to 0.
func (ps *WakuPeerstoreImpl) ResetConnFailures(p peer.AddrInfo) {
	ps.connFailures.Lock()
	defer ps.connFailures.Unlock()
	ps.connFailures.failures[p.ID] = 0
}

// ConnFailures fetches the connection-failure count for a peer.
func (ps *WakuPeerstoreImpl) ConnFailures(p peer.AddrInfo) int {
	ps.connFailures.RLock()
	defer ps.connFailures.RUnlock()
	return ps.connFailures.failures[p.ID]
}

// SetDirection sets the connection direction for a specific peer.
func (ps *WakuPeerstoreImpl) SetDirection(p peer.ID, direction network.Direction) error {
	return ps.peerStore.Put(p, peerDirection, direction)
}

// Direction fetches the connection direction (Inbound or Outbound) for a specific peer
|
||||
func (ps *WakuPeerstoreImpl) Direction(p peer.ID) (network.Direction, error) {
|
||||
result, err := ps.peerStore.Get(p, peerDirection)
|
||||
if err != nil {
|
||||
return network.DirUnknown, err
|
||||
}
|
||||
|
||||
return result.(network.Direction), nil
|
||||
}
|
||||
|
||||
// AddPubSubTopic adds a new pubSubTopic for a peer
|
||||
func (ps *WakuPeerstoreImpl) AddPubSubTopic(p peer.ID, topic string) error {
|
||||
existingTopics, err := ps.PubSubTopics(p)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, t := range existingTopics {
|
||||
if t == topic {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
existingTopics = append(existingTopics, topic)
|
||||
return ps.peerStore.Put(p, peerPubSubTopics, existingTopics)
|
||||
}
|
||||
|
||||
// RemovePubSubTopic removes a pubSubTopic from the peer
|
||||
func (ps *WakuPeerstoreImpl) RemovePubSubTopic(p peer.ID, topic string) error {
|
||||
existingTopics, err := ps.PubSubTopics(p)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if len(existingTopics) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
for i := range existingTopics {
|
||||
if existingTopics[i] == topic {
|
||||
existingTopics = append(existingTopics[:i], existingTopics[i+1:]...)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
err = ps.SetPubSubTopics(p, existingTopics)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetPubSubTopics sets pubSubTopics for a peer, it also overrides existing ones that were set previously..
|
||||
func (ps *WakuPeerstoreImpl) SetPubSubTopics(p peer.ID, topics []string) error {
|
||||
return ps.peerStore.Put(p, peerPubSubTopics, topics)
|
||||
}
|
||||
|
||||
// PubSubTopics fetches list of pubSubTopics for a peer
|
||||
func (ps *WakuPeerstoreImpl) PubSubTopics(p peer.ID) ([]string, error) {
|
||||
result, err := ps.peerStore.Get(p, peerPubSubTopics)
|
||||
if err != nil {
|
||||
if errors.Is(err, peerstore.ErrNotFound) {
|
||||
return nil, nil
|
||||
} else {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return result.([]string), nil
|
||||
}
|
||||
|
||||
// PeersByPubSubTopic Returns list of peers that support list of pubSubTopics
|
||||
// If specifiPeers are listed, filtering is done from them otherwise from all peers in peerstore
|
||||
func (ps *WakuPeerstoreImpl) PeersByPubSubTopics(pubSubTopics []string, specificPeers ...peer.ID) peer.IDSlice {
|
||||
if specificPeers == nil {
|
||||
specificPeers = ps.Peers()
|
||||
}
|
||||
var result peer.IDSlice
|
||||
for _, p := range specificPeers {
|
||||
topics, err := ps.PubSubTopics(p)
|
||||
if err == nil {
|
||||
//Convoluted and crazy logic to find subset of topics
|
||||
// Could not find a better way to do it?
|
||||
peerTopicMap := make(map[string]struct{})
|
||||
match := true
|
||||
for _, topic := range topics {
|
||||
peerTopicMap[topic] = struct{}{}
|
||||
}
|
||||
for _, topic := range pubSubTopics {
|
||||
if _, ok := peerTopicMap[topic]; !ok {
|
||||
match = false
|
||||
break
|
||||
}
|
||||
}
|
||||
if match {
|
||||
result = append(result, p)
|
||||
}
|
||||
} //Note: skipping a peer in case of an error as there would be others available.
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// PeersByPubSubTopic Returns list of peers that support a single pubSubTopic
|
||||
// If specifiPeers are listed, filtering is done from them otherwise from all peers in peerstore
|
||||
func (ps *WakuPeerstoreImpl) PeersByPubSubTopic(pubSubTopic string, specificPeers ...peer.ID) peer.IDSlice {
|
||||
if specificPeers == nil {
|
||||
specificPeers = ps.Peers()
|
||||
}
|
||||
var result peer.IDSlice
|
||||
for _, p := range specificPeers {
|
||||
topics, err := ps.PubSubTopics(p)
|
||||
if err == nil {
|
||||
for _, topic := range topics {
|
||||
if topic == pubSubTopic {
|
||||
result = append(result, p)
|
||||
}
|
||||
}
|
||||
} //Note: skipping a peer in case of an error as there would be others available.
|
||||
}
|
||||
return result
|
||||
}
|
||||
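The subset test inside `PeersByPubSubTopics` above indexes a peer's topics in a map and then requires every requested topic to be present. A minimal standalone sketch of that check (the helper name `hasAllTopics` and the sample topic strings are illustrative, not from the library):

```go
package main

import "fmt"

// hasAllTopics mirrors the subset logic in PeersByPubSubTopics: build a set
// from the peer's topics, then require every wanted topic to be a member.
func hasAllTopics(peerTopics, wanted []string) bool {
	m := make(map[string]struct{}, len(peerTopics))
	for _, t := range peerTopics {
		m[t] = struct{}{}
	}
	for _, t := range wanted {
		if _, ok := m[t]; !ok {
			return false
		}
	}
	return true
}

func main() {
	peerTopics := []string{"/waku/2/rs/1/0", "/waku/2/rs/1/1", "/waku/2/rs/1/2"}
	fmt.Println(hasAllTopics(peerTopics, []string{"/waku/2/rs/1/0", "/waku/2/rs/1/2"})) // true
	fmt.Println(hasAllTopics(peerTopics, []string{"/waku/2/rs/1/7"}))                   // false
}
```

Building the map once makes the check O(len(peerTopics) + len(wanted)) per peer instead of a nested scan.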
3
vendor/github.com/waku-org/go-waku/waku/v2/protocol/README.md
generated
vendored
Normal file
@@ -0,0 +1,3 @@
# Waku v2 protocol

This folder contains implementations of [Waku v2 protocols](https://specs.vac.dev/specs/waku/v2/waku-v2.html).
56
vendor/github.com/waku-org/go-waku/waku/v2/protocol/content_filter.go
generated
vendored
Normal file
@@ -0,0 +1,56 @@
package protocol

import "golang.org/x/exp/maps"

type PubsubTopicStr = string
type ContentTopicStr = string

type ContentTopicSet map[string]struct{}

func NewContentTopicSet(contentTopics ...string) ContentTopicSet {
	s := make(ContentTopicSet, len(contentTopics))
	for _, ct := range contentTopics {
		s[ct] = struct{}{}
	}
	return s
}

func (cf ContentTopicSet) ToList() []string {
	return maps.Keys(cf)
}

// ContentFilter is used to specify the filter to be applied for a FilterNode.
// PubsubTopic is optional when the contentTopics follow autosharding, and mandatory for named or static sharding.
// ContentTopics specifies the list of content topics to be filtered under a pubSubTopic (for named and static sharding), or a list of contentTopics (in the case of autosharding).
// If the pubSub topic is not specified, the content topics are used to derive the shard and the corresponding pubSubTopic using the autosharding algorithm.
type ContentFilter struct {
	PubsubTopic   string          `json:"pubsubTopic"`
	ContentTopics ContentTopicSet `json:"contentTopics"`
}

func (cf ContentFilter) ContentTopicsList() []string {
	return cf.ContentTopics.ToList()
}

func NewContentFilter(pubsubTopic string, contentTopics ...string) ContentFilter {
	return ContentFilter{pubsubTopic, NewContentTopicSet(contentTopics...)}
}

func (cf ContentFilter) Equals(cf1 ContentFilter) bool {
	if cf.PubsubTopic != cf1.PubsubTopic ||
		len(cf.ContentTopics) != len(cf1.ContentTopics) {
		return false
	}
	for topic := range cf.ContentTopics {
		_, ok := cf1.ContentTopics[topic]
		if !ok {
			return false
		}
	}
	return true
}

// ContentFilterToPubSubTopicMap converts a contentFilter into a map of pubSubTopics and their corresponding contentTopics
func ContentFilterToPubSubTopicMap(contentFilter ContentFilter) (map[PubsubTopicStr][]ContentTopicStr, error) {
	return GeneratePubsubToContentTopicMap(contentFilter.PubsubTopic, contentFilter.ContentTopicsList())
}
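Because `ContentTopicSet` is a map with empty-struct values, constructing it from a variadic list deduplicates topics, and `Equals` is order-insensitive. A standalone sketch of that behavior (the lowercase `topicSet` type is illustrative; `toList` is a stdlib stand-in for `maps.Keys`):

```go
package main

import "fmt"

// topicSet mirrors ContentTopicSet: a string set with empty-struct values.
type topicSet map[string]struct{}

func newTopicSet(topics ...string) topicSet {
	s := make(topicSet, len(topics))
	for _, t := range topics {
		s[t] = struct{}{} // duplicate keys collapse here
	}
	return s
}

// toList enumerates the set, as ToList does via golang.org/x/exp/maps.Keys.
func (s topicSet) toList() []string {
	out := make([]string, 0, len(s))
	for t := range s {
		out = append(out, t)
	}
	return out
}

func main() {
	s := newTopicSet("/app/1/chat/proto", "/app/1/chat/proto", "/app/1/ping/proto")
	fmt.Println(len(s.toList())) // 2: the duplicate topic collapses in the set
}
```

This is why two `ContentFilter`s built from the same topics in different orders compare as equal.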
124
vendor/github.com/waku-org/go-waku/waku/v2/protocol/content_topic.go
generated
vendored
Normal file
@@ -0,0 +1,124 @@
package protocol

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

var ErrInvalidFormat = errors.New("invalid content topic format")
var ErrMissingGeneration = errors.New("missing part: generation")
var ErrInvalidGeneration = errors.New("generation should be a number")

// ContentTopic is used for content-based filtering.
type ContentTopic struct {
	ContentTopicParams
	ApplicationName    string
	ApplicationVersion string
	ContentTopicName   string
	Encoding           string
}

// ContentTopicParams contains all the optional params for a content topic
type ContentTopicParams struct {
	Generation int
}

// Equal is used to compare 2 contentTopicParams
func (ctp ContentTopicParams) Equal(ctp2 ContentTopicParams) bool {
	return ctp.Generation == ctp2.Generation
}

// ContentTopicOption follows the options pattern to define optional params
type ContentTopicOption func(*ContentTopicParams)

// String formats a content topic in string format as per RFC 23.
func (ct ContentTopic) String() string {
	return fmt.Sprintf("/%s/%s/%s/%s", ct.ApplicationName, ct.ApplicationVersion, ct.ContentTopicName, ct.Encoding)
}

// NewContentTopic creates a new content topic based on the params specified.
// Returns ErrInvalidGeneration if an unsupported generation is specified.
// Note that this is recommended to be used for autosharding, where the contentTopic format is enforced as per https://rfc.vac.dev/spec/51/#content-topics-format-for-autosharding
func NewContentTopic(applicationName string, applicationVersion string,
	contentTopicName string, encoding string, opts ...ContentTopicOption) (ContentTopic, error) {

	params := new(ContentTopicParams)
	optList := DefaultOptions()
	optList = append(optList, opts...)
	for _, opt := range optList {
		opt(params)
	}
	if params.Generation > 0 {
		return ContentTopic{}, ErrInvalidGeneration
	}
	return ContentTopic{
		ContentTopicParams: *params,
		ApplicationName:    applicationName,
		ApplicationVersion: applicationVersion,
		ContentTopicName:   contentTopicName,
		Encoding:           encoding,
	}, nil
}

// WithGeneration option can be used to explicitly specify a generation for a contentTopic
func WithGeneration(generation int) ContentTopicOption {
	return func(params *ContentTopicParams) {
		params.Generation = generation
	}
}

// DefaultOptions sets default values for the contentTopic optional params.
func DefaultOptions() []ContentTopicOption {
	return []ContentTopicOption{
		WithGeneration(0),
	}
}

// Equal is used to compare 2 content topics.
func (ct ContentTopic) Equal(ct2 ContentTopic) bool {
	return ct.ApplicationName == ct2.ApplicationName && ct.ApplicationVersion == ct2.ApplicationVersion &&
		ct.ContentTopicName == ct2.ContentTopicName && ct.Encoding == ct2.Encoding &&
		ct.ContentTopicParams.Equal(ct2.ContentTopicParams)
}

// StringToContentTopic can be used to create a ContentTopic object from a string.
// Note that this should only be used when following the RFC format of a contentTopic, which is currently validated only for autosharding.
// For static and named sharding, a contentTopic can be of any format, and hence it is not recommended to use this function.
// This can be updated if required to handle such a case.
func StringToContentTopic(s string) (ContentTopic, error) {
	p := strings.Split(s, "/")
	switch len(p) {
	case 5:
		if len(p[1]) == 0 || len(p[2]) == 0 || len(p[3]) == 0 || len(p[4]) == 0 {
			return ContentTopic{}, ErrInvalidFormat
		}
		return ContentTopic{
			ApplicationName:    p[1],
			ApplicationVersion: p[2],
			ContentTopicName:   p[3],
			Encoding:           p[4],
		}, nil
	case 6:
		if len(p[1]) == 0 {
			return ContentTopic{}, ErrMissingGeneration
		}
		generation, err := strconv.Atoi(p[1])
		if err != nil || generation > 0 {
			return ContentTopic{}, ErrInvalidGeneration
		}
		if len(p[2]) == 0 || len(p[3]) == 0 || len(p[4]) == 0 || len(p[5]) == 0 {
			return ContentTopic{}, ErrInvalidFormat
		}
		return ContentTopic{
			ContentTopicParams: ContentTopicParams{Generation: generation},
			ApplicationName:    p[2],
			ApplicationVersion: p[3],
			ContentTopicName:   p[4],
			Encoding:           p[5],
		}, nil
	default:
		return ContentTopic{}, ErrInvalidFormat
	}
}
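The 5-vs-6 switch in `StringToContentTopic` above falls out of the string layout: splitting a topic on `/` produces a leading empty element, so the plain RFC 23 form has 5 parts and the generation-prefixed form has 6. A small standalone sketch (the helper names `formatContentTopic` and `partCount`, and the sample topic, are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// formatContentTopic builds the RFC 23 layout used by ContentTopic.String:
// /{application-name}/{version}/{content-topic-name}/{encoding}
func formatContentTopic(app, version, name, encoding string) string {
	return fmt.Sprintf("/%s/%s/%s/%s", app, version, name, encoding)
}

// partCount shows why the parser switches on 5 vs 6: splitting on "/" yields a
// leading empty element, and an optional generation prefix adds one more part.
func partCount(topic string) int {
	return len(strings.Split(topic, "/"))
}

func main() {
	ct := formatContentTopic("my-app", "1", "chat", "proto")
	fmt.Println(ct, partCount(ct)) // /my-app/1/chat/proto 5
	fmt.Println(partCount("/0" + ct)) // 6, with generation "0" prepended
}
```

The empty leading element is also why every index check in the parser starts at `p[1]`, never `p[0]`.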
153
vendor/github.com/waku-org/go-waku/waku/v2/protocol/enr/enr.go
generated
vendored
Normal file
@@ -0,0 +1,153 @@
package enr

import (
	"encoding/binary"
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/waku/v2/utils"
)

// WakuENRField is the name of the ENR field that contains information about which protocols are supported by the node
const WakuENRField = "waku2"

// MultiaddrENRField is the name of the ENR field that will contain multiaddresses that cannot be described using the
// already available ENR fields (i.e. in the case of websocket connections)
const MultiaddrENRField = "multiaddrs"

const ShardingIndicesListEnrField = "rs"

const ShardingBitVectorEnrField = "rsv"

// WakuEnrBitfield is an 8-bit flag field to indicate Waku capabilities. Only the 4 LSBs are currently defined according to RFC 31 (https://rfc.vac.dev/spec/31/).
type WakuEnrBitfield = uint8

// NewWakuEnrBitfield creates a WakuEnrBitfield whose value depends on which protocols are enabled in the node
func NewWakuEnrBitfield(lightpush, filter, store, relay bool) WakuEnrBitfield {
	var v uint8

	if lightpush {
		v |= (1 << 3)
	}

	if filter {
		v |= (1 << 2)
	}

	if store {
		v |= (1 << 1)
	}

	if relay {
		v |= (1 << 0)
	}

	return v
}

// enodeToMultiAddr converts an enode into a multiaddress
func enodeToMultiAddr(node *enode.Node) (multiaddr.Multiaddr, error) {
	pubKey := utils.EcdsaPubKeyToSecp256k1PublicKey(node.Pubkey())
	peerID, err := peer.IDFromPublicKey(pubKey)
	if err != nil {
		return nil, err
	}

	ipType := "ip4"
	portNumber := node.TCP()
	if utils.IsIPv6(node.IP().String()) {
		ipType = "ip6"
		var port enr.TCP6
		if err := node.Record().Load(&port); err != nil {
			return nil, err
		}
		portNumber = int(port)
	}

	return multiaddr.NewMultiaddr(fmt.Sprintf("/%s/%s/tcp/%d/p2p/%s", ipType, node.IP(), portNumber, peerID))
}

// Multiaddress is used to extract all the multiaddresses that are part of an ENR record
func Multiaddress(node *enode.Node) (peer.ID, []multiaddr.Multiaddr, error) {
	pubKey := utils.EcdsaPubKeyToSecp256k1PublicKey(node.Pubkey())
	peerID, err := peer.IDFromPublicKey(pubKey)
	if err != nil {
		return "", nil, err
	}

	var result []multiaddr.Multiaddr

	addr, err := enodeToMultiAddr(node)
	if err != nil {
		return "", nil, err
	}
	result = append(result, addr)

	var multiaddrRaw []byte
	if err := node.Record().Load(enr.WithEntry(MultiaddrENRField, &multiaddrRaw)); err != nil {
		if !enr.IsNotFound(err) {
			return "", nil, err
		}
		// No multiaddr entry on the ENR
		return peerID, result, nil
	}

	if len(multiaddrRaw) < 2 {
		// There was no error loading the multiaddr field, but its length is incorrect
		return peerID, result, nil
	}

	offset := 0
	for {
		maSize := binary.BigEndian.Uint16(multiaddrRaw[offset : offset+2])
		if len(multiaddrRaw) < offset+2+int(maSize) {
			return "", nil, errors.New("invalid multiaddress field length")
		}
		maRaw := multiaddrRaw[offset+2 : offset+2+int(maSize)]
		addr, err := multiaddr.NewMultiaddrBytes(maRaw)
		if err != nil {
			return "", nil, fmt.Errorf("invalid multiaddress field length")
		}

		hostInfoStr := fmt.Sprintf("/p2p/%s", peerID.Pretty())
		_, pID := peer.SplitAddr(addr)
		if pID != "" && pID != peerID {
			// Addresses in the ENR that contain a p2p component are circuit relay addresses
			hostInfoStr = "/p2p-circuit" + hostInfoStr
		}

		hostInfo, err := multiaddr.NewMultiaddr(hostInfoStr)
		if err != nil {
			return "", nil, err
		}
		result = append(result, addr.Encapsulate(hostInfo))

		offset += 2 + int(maSize)
		if offset >= len(multiaddrRaw) {
			break
		}
	}

	return peerID, result, nil
}

// EnodeToPeerInfo extracts the peer ID and multiaddresses defined in an ENR
func EnodeToPeerInfo(node *enode.Node) (*peer.AddrInfo, error) {
	_, addresses, err := Multiaddress(node)
	if err != nil {
		return nil, err
	}

	res, err := peer.AddrInfosFromP2pAddrs(addresses...)
	if err != nil {
		return nil, err
	}
	if len(res) == 0 {
		return nil, errors.New("could not retrieve peer addresses from enr")
	}
	return &res[0], nil
}
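The `waku2` capability bitfield above packs four booleans into the 4 LSBs of a byte. A standalone sketch of the same bit layout (the helper name `bitfield` is illustrative; the bit positions match `NewWakuEnrBitfield`):

```go
package main

import "fmt"

// bitfield mirrors NewWakuEnrBitfield: lightpush=bit 3, filter=bit 2,
// store=bit 1, relay=bit 0; the upper 4 bits stay zero.
func bitfield(lightpush, filter, store, relay bool) uint8 {
	var v uint8
	if lightpush {
		v |= 1 << 3
	}
	if filter {
		v |= 1 << 2
	}
	if store {
		v |= 1 << 1
	}
	if relay {
		v |= 1 << 0
	}
	return v
}

func main() {
	// A node running store and relay only: 0b0000_0010 | 0b0000_0001 = 3.
	fmt.Println(bitfield(false, false, true, true)) // 3
	// All four capabilities set the 4 LSBs: 0b0000_1111 = 15.
	fmt.Println(bitfield(true, true, true, true)) // 15
}
```

A peer's capabilities can then be tested with a mask, e.g. `v&(1<<1) != 0` for store support.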
127
vendor/github.com/waku-org/go-waku/waku/v2/protocol/enr/localnode.go
generated
vendored
Normal file
@@ -0,0 +1,127 @@
package enr

import (
	"crypto/ecdsa"
	"encoding/binary"
	"errors"
	"math"
	"math/rand"
	"net"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/multiformats/go-multiaddr"
)

func NewLocalnode(priv *ecdsa.PrivateKey) (*enode.LocalNode, error) {
	db, err := enode.OpenDB("")
	if err != nil {
		return nil, err
	}
	return enode.NewLocalNode(db, priv), nil
}

type ENROption func(*enode.LocalNode) error

func WithMultiaddress(multiaddrs ...multiaddr.Multiaddr) ENROption {
	return func(localnode *enode.LocalNode) (err error) {
		// Randomly shuffle multiaddresses
		rand.Shuffle(len(multiaddrs), func(i, j int) { multiaddrs[i], multiaddrs[j] = multiaddrs[j], multiaddrs[i] })

		// Add extra multiaddresses. The total should not exceed the ENR max size of 300 bytes,
		// so drop trailing addresses until the record can be written.
		failedOnceWritingENR := false
		couldWriteENRatLeastOnce := false
		successIdx := -1
		for i := len(multiaddrs); i > 0; i-- {
			err = writeMultiaddressField(localnode, multiaddrs[0:i])
			if err == nil {
				couldWriteENRatLeastOnce = true
				successIdx = i
				break
			}
			failedOnceWritingENR = true
		}

		if failedOnceWritingENR && couldWriteENRatLeastOnce {
			// Could write a subset of the multiaddresses, but not all of them
			err = writeMultiaddressField(localnode, multiaddrs[0:successIdx])
			if err != nil {
				return errors.New("could not write new ENR")
			}
		}

		return nil
	}
}

func WithCapabilities(lightpush, filter, store, relay bool) ENROption {
	return func(localnode *enode.LocalNode) (err error) {
		wakuflags := NewWakuEnrBitfield(lightpush, filter, store, relay)
		return WithWakuBitfield(wakuflags)(localnode)
	}
}

func WithWakuBitfield(flags WakuEnrBitfield) ENROption {
	return func(localnode *enode.LocalNode) (err error) {
		localnode.Set(enr.WithEntry(WakuENRField, flags))
		return nil
	}
}

func WithIP(ipAddr *net.TCPAddr) ENROption {
	return func(localnode *enode.LocalNode) (err error) {
		localnode.SetStaticIP(ipAddr.IP)
		localnode.Set(enr.TCP(uint16(ipAddr.Port))) // TODO: ipv6?
		return nil
	}
}

func WithUDPPort(udpPort uint) ENROption {
	return func(localnode *enode.LocalNode) (err error) {
		if udpPort > math.MaxUint16 {
			return errors.New("invalid udp port number")
		}
		localnode.SetFallbackUDP(int(udpPort))
		return nil
	}
}

func Update(localnode *enode.LocalNode, enrOptions ...ENROption) error {
	for _, opt := range enrOptions {
		err := opt(localnode)
		if err != nil {
			return err
		}
	}
	return nil
}

func writeMultiaddressField(localnode *enode.LocalNode, addrAggr []multiaddr.Multiaddr) (err error) {
	defer func() {
		if e := recover(); e != nil {
			// Delete the multiaddr entry, as it could not be written successfully
			localnode.Delete(enr.WithEntry(MultiaddrENRField, struct{}{}))
			err = errors.New("could not write enr record")
		}
	}()

	var fieldRaw []byte
	for _, addr := range addrAggr {
		maRaw := addr.Bytes()
		maSize := make([]byte, 2)
		binary.BigEndian.PutUint16(maSize, uint16(len(maRaw)))

		fieldRaw = append(fieldRaw, maSize...)
		fieldRaw = append(fieldRaw, maRaw...)
	}

	if len(fieldRaw) != 0 && len(fieldRaw) <= 100 { // Max length for the multiaddr field before triggering the 300-byte limit
		localnode.Set(enr.WithEntry(MultiaddrENRField, fieldRaw))
	}

	// This is to trigger the record-signing error in case the 300-byte limit is exceeded
	_ = localnode.Node()

	return nil
}
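`writeMultiaddressField` above serializes each multiaddress as a 2-byte big-endian length prefix followed by its raw bytes, and the decode loop in `Multiaddress` walks the field back the same way. A standalone round-trip sketch of that wire format (the helper names `encode`/`decode` and the sample byte strings are illustrative):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encode packs each raw multiaddr as a 2-byte big-endian length prefix
// followed by the bytes, as writeMultiaddressField does.
func encode(addrs [][]byte) []byte {
	var field []byte
	for _, raw := range addrs {
		size := make([]byte, 2)
		binary.BigEndian.PutUint16(size, uint16(len(raw)))
		field = append(field, size...)
		field = append(field, raw...)
	}
	return field
}

// decode walks the field the way Multiaddress does: read the length,
// slice out that many bytes, advance past prefix plus payload.
func decode(field []byte) [][]byte {
	var out [][]byte
	for offset := 0; offset+2 <= len(field); {
		size := int(binary.BigEndian.Uint16(field[offset : offset+2]))
		if offset+2+size > len(field) {
			break // truncated entry; the real code returns an error here
		}
		out = append(out, field[offset+2:offset+2+size])
		offset += 2 + size
	}
	return out
}

func main() {
	addrs := [][]byte{[]byte("addr-one"), []byte("addr-two-longer")}
	round := decode(encode(addrs))
	fmt.Println(len(round), string(round[0]), string(round[1])) // 2 addr-one addr-two-longer
}
```

The length prefix is what lets multiple variable-length addresses share one opaque ENR value.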
143
vendor/github.com/waku-org/go-waku/waku/v2/protocol/enr/shards.go
generated
vendored
Normal file
@@ -0,0 +1,143 @@
package enr

import (
	"errors"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/waku-org/go-waku/waku/v2/protocol"
)

func deleteShardingENREntries(localnode *enode.LocalNode) {
	localnode.Delete(enr.WithEntry(ShardingBitVectorEnrField, struct{}{}))
	localnode.Delete(enr.WithEntry(ShardingIndicesListEnrField, struct{}{}))
}

func WithWakuRelayShardList(rs protocol.RelayShards) ENROption {
	return func(localnode *enode.LocalNode) error {
		value, err := rs.ShardList()
		if err != nil {
			return err
		}
		deleteShardingENREntries(localnode)
		localnode.Set(enr.WithEntry(ShardingIndicesListEnrField, value))
		return nil
	}
}

func WithWakuRelayShardingBitVector(rs protocol.RelayShards) ENROption {
	return func(localnode *enode.LocalNode) error {
		deleteShardingENREntries(localnode)
		localnode.Set(enr.WithEntry(ShardingBitVectorEnrField, rs.BitVector()))
		return nil
	}
}

func WithWakuRelaySharding(rs protocol.RelayShards) ENROption {
	return func(localnode *enode.LocalNode) error {
		if len(rs.ShardIDs) >= 64 {
			return WithWakuRelayShardingBitVector(rs)(localnode)
		}

		return WithWakuRelayShardList(rs)(localnode)
	}
}

func WithWakuRelayShardingTopics(topics ...string) ENROption {
	return func(localnode *enode.LocalNode) error {
		rs, err := protocol.TopicsToRelayShards(topics...)
		if err != nil {
			return err
		}

		if len(rs) != 1 {
			return errors.New("expected a single RelayShards")
		}

		return WithWakuRelaySharding(rs[0])(localnode)
	}
}

// ENR record accessors

func RelayShardList(record *enr.Record) (*protocol.RelayShards, error) {
	var field []byte
	if err := record.Load(enr.WithEntry(ShardingIndicesListEnrField, &field)); err != nil {
		if enr.IsNotFound(err) {
			return nil, nil
		}
		return nil, err
	}

	res, err := protocol.FromShardList(field)
	if err != nil {
		return nil, err
	}

	return &res, nil
}

func RelayShardingBitVector(record *enr.Record) (*protocol.RelayShards, error) {
	var field []byte
	if err := record.Load(enr.WithEntry(ShardingBitVectorEnrField, &field)); err != nil {
		if enr.IsNotFound(err) {
			return nil, nil
		}
		return nil, err
	}

	res, err := protocol.FromBitVector(field)
	if err != nil {
		return nil, err
	}

	return &res, nil
}

func RelaySharding(record *enr.Record) (*protocol.RelayShards, error) {
	res, err := RelayShardList(record)
	if err != nil {
		return nil, err
	}

	if res != nil {
		return res, nil
	}

	return RelayShardingBitVector(record)
}

// Utils

func ContainsShard(record *enr.Record, cluster uint16, index uint16) bool {
	if index > protocol.MaxShardIndex {
		return false
	}

	rs, err := RelaySharding(record)
	if err != nil || rs == nil { // rs is nil when the record carries no sharding entry
		return false
	}

	return rs.Contains(cluster, index)
}

func ContainsShardWithWakuTopic(record *enr.Record, topic protocol.WakuPubSubTopic) bool {
	shardTopic, err := protocol.ToShardPubsubTopic(topic)
	if err != nil {
		return false
	}
	return ContainsShard(record, shardTopic.Cluster(), shardTopic.Shard())
}

func ContainsRelayShard(record *enr.Record, topic protocol.StaticShardingPubsubTopic) bool {
	return ContainsShardWithWakuTopic(record, topic)
}

func ContainsShardTopic(record *enr.Record, topic string) bool {
	shardTopic, err := protocol.ToWakuPubsubTopic(topic)
	if err != nil {
		return false
	}
	return ContainsShardWithWakuTopic(record, shardTopic)
}
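`WithWakuRelaySharding` above switches from the `rs` index list to the `rsv` bit vector at 64 shards. A sketch of why 64 is the crossover, assuming the RFC 51 encodings (2-byte cluster + 1-byte count + 2 bytes per index for the list; 2-byte cluster + 128-byte vector covering 1024 shards for the bit vector) — the sizes here are my reading of the spec, not taken from this code:

```go
package main

import "fmt"

// shardListSize assumes the RFC 51 "rs" layout:
// 2-byte cluster ID + 1-byte shard count + 2 bytes per shard index.
func shardListSize(n int) int { return 2 + 1 + 2*n }

// bitVectorSize assumes the RFC 51 "rsv" layout:
// 2-byte cluster ID + a fixed 128-byte vector (one bit per possible shard).
func bitVectorSize() int { return 2 + 128 }

func main() {
	// Below 64 shards the variable-length list is smaller; at 64 and above
	// the fixed-size vector wins, matching len(rs.ShardIDs) >= 64 above.
	fmt.Println(shardListSize(63), bitVectorSize()) // 129 130
	fmt.Println(shardListSize(64), bitVectorSize()) // 131 130
}
```

Keeping the encoded entry small matters because the whole ENR is capped at 300 bytes.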
53
vendor/github.com/waku-org/go-waku/waku/v2/protocol/envelope.go
generated
vendored
Normal file
@@ -0,0 +1,53 @@
package protocol

import (
	"github.com/waku-org/go-waku/waku/v2/hash"
	wpb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
	"github.com/waku-org/go-waku/waku/v2/protocol/store/pb"
)

// Envelope contains information about the pubsub topic of a WakuMessage
// and a hash used to identify a message based on the bytes of a WakuMessage
// protobuffer
type Envelope struct {
	msg   *wpb.WakuMessage
	hash  []byte
	index *pb.Index
}

// NewEnvelope creates a new Envelope that contains a WakuMessage.
// It is used as a way to know to which pubsub topic a WakuMessage belongs,
// as well as for generating a hash based on the bytes that compose the message
func NewEnvelope(msg *wpb.WakuMessage, receiverTime int64, pubSubTopic string) *Envelope {
	messageHash := msg.Hash(pubSubTopic)
	digest := hash.SHA256([]byte(msg.ContentTopic), msg.Payload)
	return &Envelope{
		msg:  msg,
		hash: messageHash,
		index: &pb.Index{
			Digest:       digest[:],
			ReceiverTime: receiverTime,
			SenderTime:   msg.GetTimestamp(),
			PubsubTopic:  pubSubTopic,
		},
	}
}

// Message returns the WakuMessage associated to an Envelope
func (e *Envelope) Message() *wpb.WakuMessage {
	return e.msg
}

// PubsubTopic returns the topic on which a WakuMessage was received
func (e *Envelope) PubsubTopic() string {
	return e.index.PubsubTopic
}

// Hash returns a 32-byte hash calculated from the WakuMessage bytes
func (e *Envelope) Hash() []byte {
	return e.hash
}

func (e *Envelope) Index() *pb.Index {
	return e.index
}
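The store `Index.Digest` above is computed as `hash.SHA256([]byte(msg.ContentTopic), msg.Payload)`. Assuming `hash.SHA256` hashes its arguments as one concatenated stream (my reading of the helper, not shown in this excerpt), the equivalent stdlib sketch is:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// digest approximates hash.SHA256(contentTopic, payload): a single SHA-256
// over the concatenated byte slices, assumed to match go-waku's helper.
func digest(contentTopic string, payload []byte) []byte {
	h := sha256.New()
	h.Write([]byte(contentTopic)) // hash.Hash writes never return an error
	h.Write(payload)
	return h.Sum(nil)
}

func main() {
	d := digest("/my-app/1/chat/proto", []byte("hello"))
	fmt.Println(len(d)) // 32: SHA-256 always yields a 32-byte digest
}
```

Because the digest depends only on content topic and payload, the same message indexed by two store nodes yields the same `Digest`, while `ReceiverTime` differs per node.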
668
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/client.go
generated
vendored
Normal file
668
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/client.go
generated
vendored
Normal file
@@ -0,0 +1,668 @@
|
||||
package filter
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"math"
|
||||
"net/http"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/libp2p/go-libp2p/core/host"
|
||||
"github.com/libp2p/go-libp2p/core/network"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
libp2pProtocol "github.com/libp2p/go-libp2p/core/protocol"
|
||||
"github.com/libp2p/go-msgio/pbio"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/waku-org/go-waku/logging"
|
||||
"github.com/waku-org/go-waku/waku/v2/peermanager"
|
||||
"github.com/waku-org/go-waku/waku/v2/peerstore"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/filter/pb"
|
||||
wpb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/relay"
|
||||
"github.com/waku-org/go-waku/waku/v2/protocol/subscription"
|
||||
"github.com/waku-org/go-waku/waku/v2/service"
|
||||
"github.com/waku-org/go-waku/waku/v2/timesource"
|
||||
"go.uber.org/zap"
|
||||
"golang.org/x/exp/maps"
|
||||
"golang.org/x/exp/slices"
|
||||
)
|
||||
|
||||
// FilterPushID_v20beta1 is the current Waku Filter protocol identifier used to allow
|
||||
// filter service nodes to push messages matching registered subscriptions to this client.
|
||||
const FilterPushID_v20beta1 = libp2pProtocol.ID("/vac/waku/filter-push/2.0.0-beta1")
|
||||
|
||||
var (
|
||||
ErrNoPeersAvailable = errors.New("no suitable remote peers")
|
||||
ErrSubscriptionNotFound = errors.New("subscription not found")
|
||||
)
type WakuFilterLightNode struct {
	*service.CommonService
	h             host.Host
	broadcaster   relay.Broadcaster // TODO: move the broadcast functionality outside of the relay client to a higher SDK layer
	timesource    timesource.Timesource
	metrics       Metrics
	log           *zap.Logger
	subscriptions *subscription.SubscriptionsMap
	pm            *peermanager.PeerManager
}

type WakuFilterPushError struct {
	Err    error
	PeerID peer.ID
}

type WakuFilterPushResult struct {
	errs []WakuFilterPushError
	sync.RWMutex
}

func (arr *WakuFilterPushResult) Add(err WakuFilterPushError) {
	arr.Lock()
	defer arr.Unlock()
	arr.errs = append(arr.errs, err)
}

func (arr *WakuFilterPushResult) Errors() []WakuFilterPushError {
	arr.RLock()
	defer arr.RUnlock()
	return arr.errs
}
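`WakuFilterPushResult` is a small concurrency utility: a slice of per-peer errors guarded by an `RWMutex`, so the unsubscribe goroutines spawned later in this file can append their results safely. A minimal, self-contained sketch of the same pattern (the `pushError`/`pushResult` names are illustrative stand-ins, not go-waku types):

```go
package main

import (
	"fmt"
	"sync"
)

// pushError mirrors the shape of WakuFilterPushError: an error paired with a peer identifier.
type pushError struct {
	peer string
	err  error
}

// pushResult collects errors from concurrent workers, like WakuFilterPushResult.
type pushResult struct {
	mu   sync.RWMutex
	errs []pushError
}

// Add appends a result under the write lock so concurrent goroutines cannot race.
func (r *pushResult) Add(e pushError) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.errs = append(r.errs, e)
}

// Errors returns the collected results under the read lock.
func (r *pushResult) Errors() []pushError {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.errs
}

func main() {
	var wg sync.WaitGroup
	res := &pushResult{}
	for _, p := range []string{"peerA", "peerB", "peerC"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			// A nil error records a successful operation for this peer.
			res.Add(pushError{peer: p, err: nil})
		}(p)
	}
	wg.Wait()
	fmt.Println(len(res.Errors())) // prints 3
}
```

The `WaitGroup` plus mutex-guarded slice mirrors how `Unsubscribe` and `unsubscribeAll` below fan out one request per peer and then wait for all results.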
// NewWakuFilterLightNode returns a new instance of the Waku Filter light node, set up according to the chosen parameters and options.
// Note that broadcaster is optional.
// Takes an optional peermanager if WakuFilterLightNode is being created along with WakuNode.
// If using a bare libp2p host, pass peermanager as nil.
func NewWakuFilterLightNode(broadcaster relay.Broadcaster, pm *peermanager.PeerManager,
	timesource timesource.Timesource, reg prometheus.Registerer, log *zap.Logger) *WakuFilterLightNode {
	wf := new(WakuFilterLightNode)
	wf.log = log.Named("filterv2-lightnode")
	wf.broadcaster = broadcaster
	wf.timesource = timesource
	wf.pm = pm
	wf.CommonService = service.NewCommonService()
	wf.metrics = newMetrics(reg)

	return wf
}

// SetHost sets the host so the node is able to mount or consume a protocol.
func (wf *WakuFilterLightNode) SetHost(h host.Host) {
	wf.h = h
}

func (wf *WakuFilterLightNode) Start(ctx context.Context) error {
	return wf.CommonService.Start(ctx, wf.start)
}

func (wf *WakuFilterLightNode) start() error {
	wf.subscriptions = subscription.NewSubscriptionMap(wf.log)
	wf.h.SetStreamHandlerMatch(FilterPushID_v20beta1, protocol.PrefixTextMatch(string(FilterPushID_v20beta1)), wf.onRequest(wf.Context()))

	wf.log.Info("filter-push protocol started")
	return nil
}

// Stop unmounts the filter protocol
func (wf *WakuFilterLightNode) Stop() {
	wf.CommonService.Stop(func() {
		wf.h.RemoveStreamHandler(FilterPushID_v20beta1)
		if wf.subscriptions.Count() > 0 {
			go func() {
				defer func() {
					_ = recover()
				}()
				res, err := wf.unsubscribeAll(wf.Context())
				if err != nil {
					wf.log.Warn("unsubscribing from full nodes", zap.Error(err))
				}

				for _, r := range res.Errors() {
					if r.Err != nil {
						wf.log.Warn("unsubscribing from full nodes", zap.Error(r.Err), logging.HostID("peerID", r.PeerID))
					}
				}
				wf.subscriptions.Clear()
			}()
		}
	})
}
func (wf *WakuFilterLightNode) onRequest(ctx context.Context) func(network.Stream) {
	return func(stream network.Stream) {
		peerID := stream.Conn().RemotePeer()

		logger := wf.log.With(logging.HostID("peerID", peerID))

		if !wf.subscriptions.IsSubscribedTo(peerID) {
			logger.Warn("received message push from unknown peer", logging.HostID("peerID", peerID))
			wf.metrics.RecordError(unknownPeerMessagePush)
			if err := stream.Reset(); err != nil {
				wf.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

		messagePush := &pb.MessagePush{}
		err := reader.ReadMsg(messagePush)
		if err != nil {
			logger.Error("reading message push", zap.Error(err))
			wf.metrics.RecordError(decodeRPCFailure)
			if err := stream.Reset(); err != nil {
				wf.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		stream.Close()

		if err = messagePush.Validate(); err != nil {
			logger.Warn("received invalid message push")
			return
		}

		pubSubTopic := ""
		// If no pubsub topic was provided, try to derive it from the content topic and fail otherwise;
		// this will get addressed with the autosharding changes for filter.
		if messagePush.PubsubTopic == nil {
			pubSubTopic, err = protocol.GetPubSubTopicFromContentTopic(messagePush.WakuMessage.ContentTopic)
			if err != nil {
				logger.Error("could not derive pubSubTopic from contentTopic", zap.Error(err))
				wf.metrics.RecordError(decodeRPCFailure)
				if err := stream.Reset(); err != nil {
					wf.log.Error("resetting connection", zap.Error(err))
				}
				return
			}
		} else {
			pubSubTopic = *messagePush.PubsubTopic
		}

		logger = messagePush.WakuMessage.Logger(logger, pubSubTopic)

		if !wf.subscriptions.Has(peerID, protocol.NewContentFilter(pubSubTopic, messagePush.WakuMessage.ContentTopic)) {
			logger.Warn("received message push with invalid subscription parameters")
			wf.metrics.RecordError(invalidSubscriptionMessage)
			return
		}

		wf.metrics.RecordMessage()

		wf.notify(peerID, pubSubTopic, messagePush.WakuMessage)

		logger.Info("received message push")
	}
}

func (wf *WakuFilterLightNode) notify(remotePeerID peer.ID, pubsubTopic string, msg *wpb.WakuMessage) {
	envelope := protocol.NewEnvelope(msg, wf.timesource.Now().UnixNano(), pubsubTopic)

	if wf.broadcaster != nil {
		// Broadcast the message so it is stored.
		wf.broadcaster.Submit(envelope)
	}
	// Notify filter subscribers
	wf.subscriptions.Notify(remotePeerID, envelope)
}
func (wf *WakuFilterLightNode) request(ctx context.Context, params *FilterSubscribeParameters,
	reqType pb.FilterSubscribeRequest_FilterSubscribeType, contentFilter protocol.ContentFilter) error {
	request := &pb.FilterSubscribeRequest{
		RequestId:           hex.EncodeToString(params.requestID),
		FilterSubscribeType: reqType,
		PubsubTopic:         &contentFilter.PubsubTopic,
		ContentTopics:       contentFilter.ContentTopicsList(),
	}

	err := request.Validate()
	if err != nil {
		return err
	}

	logger := wf.log.With(logging.HostID("peerID", params.selectedPeer))

	stream, err := wf.h.NewStream(ctx, params.selectedPeer, FilterSubscribeID_v20beta1)
	if err != nil {
		wf.metrics.RecordError(dialFailure)
		return err
	}

	writer := pbio.NewDelimitedWriter(stream)
	reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

	logger.Debug("sending FilterSubscribeRequest", zap.Stringer("request", request))
	err = writer.WriteMsg(request)
	if err != nil {
		wf.metrics.RecordError(writeRequestFailure)
		logger.Error("sending FilterSubscribeRequest", zap.Error(err))
		if err := stream.Reset(); err != nil {
			logger.Error("resetting connection", zap.Error(err))
		}
		return err
	}

	filterSubscribeResponse := &pb.FilterSubscribeResponse{}
	err = reader.ReadMsg(filterSubscribeResponse)
	if err != nil {
		logger.Error("receiving FilterSubscribeResponse", zap.Error(err))
		wf.metrics.RecordError(decodeRPCFailure)
		if err := stream.Reset(); err != nil {
			logger.Error("resetting connection", zap.Error(err))
		}
		return err
	}

	stream.Close()

	if err = filterSubscribeResponse.Validate(); err != nil {
		wf.metrics.RecordError(decodeRPCFailure)
		logger.Error("validating response", zap.Error(err))
		return err
	}

	if filterSubscribeResponse.RequestId != request.RequestId {
		wf.log.Error("requestID mismatch", zap.String("expected", request.RequestId), zap.String("received", filterSubscribeResponse.RequestId))
		wf.metrics.RecordError(requestIDMismatch)
		err := NewFilterError(300, "request_id_mismatch")
		return &err
	}

	if filterSubscribeResponse.StatusCode != http.StatusOK {
		wf.metrics.RecordError(errorResponse)
		errMessage := ""
		if filterSubscribeResponse.StatusDesc != nil {
			errMessage = *filterSubscribeResponse.StatusDesc
		}
		err := NewFilterError(int(filterSubscribeResponse.StatusCode), errMessage)
		return &err
	}

	return nil
}
func (wf *WakuFilterLightNode) handleFilterSubscribeOptions(ctx context.Context, contentFilter protocol.ContentFilter, opts []FilterSubscribeOption) (*FilterSubscribeParameters, map[string][]string, error) {
	params := new(FilterSubscribeParameters)
	params.log = wf.log
	params.host = wf.h
	params.pm = wf.pm

	optList := DefaultSubscriptionOptions()
	optList = append(optList, opts...)
	for _, opt := range optList {
		err := opt(params)
		if err != nil {
			return nil, nil, err
		}
	}

	pubSubTopicMap, err := protocol.ContentFilterToPubSubTopicMap(contentFilter)
	if err != nil {
		return nil, nil, err
	}

	// Add peer to peerstore.
	if params.pm != nil && params.peerAddr != nil {
		pData, err := wf.pm.AddPeer(params.peerAddr, peerstore.Static, maps.Keys(pubSubTopicMap), FilterSubscribeID_v20beta1)
		if err != nil {
			return nil, nil, err
		}
		wf.pm.Connect(pData)
		params.selectedPeer = pData.AddrInfo.ID
	}
	if params.pm != nil && params.selectedPeer == "" {
		params.selectedPeer, err = wf.pm.SelectPeer(
			peermanager.PeerSelectionCriteria{
				SelectionType: params.peerSelectionType,
				Proto:         FilterSubscribeID_v20beta1,
				PubsubTopics:  maps.Keys(pubSubTopicMap),
				SpecificPeers: params.preferredPeers,
				Ctx:           ctx,
			},
		)
		if err != nil {
			return nil, nil, err
		}
	}
	return params, pubSubTopicMap, nil
}
// Subscribe sets up a subscription to receive messages that match a specific content filter.
// If the contentTopics passed result in different pubsub topics (due to auto/static sharding), then multiple subscription requests are sent to the peer.
// This may change if the Filter v2 protocol is updated to handle such a scenario in a single request.
// Note: in case of partial failure, results are returned for the successful subscriptions along with an error indicating the failed contentTopics.
func (wf *WakuFilterLightNode) Subscribe(ctx context.Context, contentFilter protocol.ContentFilter, opts ...FilterSubscribeOption) ([]*subscription.SubscriptionDetails, error) {
	wf.RLock()
	defer wf.RUnlock()
	if err := wf.ErrOnNotRunning(); err != nil {
		return nil, err
	}

	params, pubSubTopicMap, err := wf.handleFilterSubscribeOptions(ctx, contentFilter, opts)
	if err != nil {
		return nil, err
	}

	failedContentTopics := []string{}
	subscriptions := make([]*subscription.SubscriptionDetails, 0)
	for pubSubTopic, cTopics := range pubSubTopicMap {
		var selectedPeer peer.ID
		if params.pm != nil && params.selectedPeer == "" {
			selectedPeer, err = wf.pm.SelectPeer(
				peermanager.PeerSelectionCriteria{
					SelectionType: params.peerSelectionType,
					Proto:         FilterSubscribeID_v20beta1,
					PubsubTopics:  []string{pubSubTopic},
					SpecificPeers: params.preferredPeers,
					Ctx:           ctx,
				},
			)
		} else {
			selectedPeer = params.selectedPeer
		}
		if selectedPeer == "" {
			wf.metrics.RecordError(peerNotFoundFailure)
			wf.log.Error("selecting peer", zap.String("pubSubTopic", pubSubTopic), zap.Strings("contentTopics", cTopics),
				zap.Error(err))
			failedContentTopics = append(failedContentTopics, cTopics...)
			continue
		}

		var cFilter protocol.ContentFilter
		cFilter.PubsubTopic = pubSubTopic
		cFilter.ContentTopics = protocol.NewContentTopicSet(cTopics...)

		paramsCopy := params.Copy()
		paramsCopy.selectedPeer = selectedPeer
		err := wf.request(
			ctx,
			paramsCopy,
			pb.FilterSubscribeRequest_SUBSCRIBE,
			cFilter,
		)
		if err != nil {
			wf.log.Error("Failed to subscribe", zap.String("pubSubTopic", pubSubTopic), zap.Strings("contentTopics", cTopics),
				zap.Error(err))
			failedContentTopics = append(failedContentTopics, cTopics...)
			continue
		}
		subscriptions = append(subscriptions, wf.subscriptions.NewSubscription(selectedPeer, cFilter))
	}

	if len(failedContentTopics) > 0 {
		return subscriptions, fmt.Errorf("subscriptions failed for contentTopics: %s", strings.Join(failedContentTopics, ","))
	}
	return subscriptions, nil
}
// FilterSubscription returns an object through which messages received via the filter protocol can be consumed.
func (wf *WakuFilterLightNode) FilterSubscription(peerID peer.ID, contentFilter protocol.ContentFilter) (*subscription.SubscriptionDetails, error) {
	wf.RLock()
	defer wf.RUnlock()
	if err := wf.ErrOnNotRunning(); err != nil {
		return nil, err
	}

	if !wf.subscriptions.Has(peerID, contentFilter) {
		return nil, errors.New("subscription does not exist")
	}

	return wf.subscriptions.NewSubscription(peerID, contentFilter), nil
}

func (wf *WakuFilterLightNode) getUnsubscribeParameters(opts ...FilterSubscribeOption) (*FilterSubscribeParameters, error) {
	params := new(FilterSubscribeParameters)
	params.log = wf.log
	opts = append(DefaultUnsubscribeOptions(), opts...)
	for _, opt := range opts {
		err := opt(params)
		if err != nil {
			return nil, err
		}
	}

	return params, nil
}

func (wf *WakuFilterLightNode) Ping(ctx context.Context, peerID peer.ID, opts ...FilterPingOption) error {
	wf.RLock()
	defer wf.RUnlock()
	if err := wf.ErrOnNotRunning(); err != nil {
		return err
	}

	params := &FilterPingParameters{}
	for _, opt := range opts {
		opt(params)
	}
	if len(params.requestID) == 0 {
		params.requestID = protocol.GenerateRequestID()
	}

	return wf.request(
		ctx,
		&FilterSubscribeParameters{selectedPeer: peerID, requestID: params.requestID},
		pb.FilterSubscribeRequest_SUBSCRIBER_PING,
		protocol.ContentFilter{})
}

func (wf *WakuFilterLightNode) IsSubscriptionAlive(ctx context.Context, subscription *subscription.SubscriptionDetails) error {
	wf.RLock()
	defer wf.RUnlock()
	if err := wf.ErrOnNotRunning(); err != nil {
		return err
	}

	return wf.Ping(ctx, subscription.PeerID)
}
// Unsubscribe is used to stop receiving messages from a peer that match a content filter.
func (wf *WakuFilterLightNode) Unsubscribe(ctx context.Context, contentFilter protocol.ContentFilter, opts ...FilterSubscribeOption) (*WakuFilterPushResult, error) {
	wf.RLock()
	defer wf.RUnlock()
	if err := wf.ErrOnNotRunning(); err != nil {
		return nil, err
	}

	if len(contentFilter.ContentTopics) == 0 {
		return nil, errors.New("at least one content topic is required")
	}

	if slices.Contains[string](contentFilter.ContentTopicsList(), "") {
		return nil, errors.New("one or more content topics specified is empty")
	}

	if len(contentFilter.ContentTopics) > MaxContentTopicsPerRequest {
		return nil, fmt.Errorf("exceeds maximum content topics: %d", MaxContentTopicsPerRequest)
	}

	params, err := wf.getUnsubscribeParameters(opts...)
	if err != nil {
		return nil, err
	}

	pubSubTopicMap, err := protocol.ContentFilterToPubSubTopicMap(contentFilter)
	if err != nil {
		return nil, err
	}
	result := &WakuFilterPushResult{}
	for pTopic, cTopics := range pubSubTopicMap {
		cFilter := protocol.NewContentFilter(pTopic, cTopics...)

		peers := make(map[peer.ID]struct{})
		subs := wf.subscriptions.GetSubscription(params.selectedPeer, cFilter)
		if len(subs) == 0 {
			result.Add(WakuFilterPushError{
				Err:    ErrSubscriptionNotFound,
				PeerID: params.selectedPeer,
			})
			continue
		}
		for _, sub := range subs {
			sub.Remove(cTopics...)
			peers[sub.PeerID] = struct{}{}
		}
		if params.wg != nil {
			params.wg.Add(len(peers))
		}
		// Send the unsubscribe request to all the peers.
		for peerID := range peers {
			go func(peerID peer.ID) {
				defer func() {
					if params.wg != nil {
						params.wg.Done()
					}
				}()
				err := wf.unsubscribeFromServer(ctx, &FilterSubscribeParameters{selectedPeer: peerID, requestID: params.requestID}, cFilter)

				if params.wg != nil {
					result.Add(WakuFilterPushError{
						Err:    err,
						PeerID: peerID,
					})
				}
			}(peerID)
		}
	}
	if params.wg != nil {
		params.wg.Wait()
	}

	return result, nil
}
func (wf *WakuFilterLightNode) Subscriptions() []*subscription.SubscriptionDetails {
	subs := wf.subscriptions.GetSubscription("", protocol.ContentFilter{})
	return subs
}

func (wf *WakuFilterLightNode) IsListening(pubsubTopic, contentTopic string) bool {
	return wf.subscriptions.IsListening(pubsubTopic, contentTopic)
}

// UnsubscribeWithSubscription is used to close a particular subscription.
// If there are no more subscriptions matching the passed [peer, contentFilter] pair,
// a server unsubscribe is also performed.
func (wf *WakuFilterLightNode) UnsubscribeWithSubscription(ctx context.Context, sub *subscription.SubscriptionDetails,
	opts ...FilterSubscribeOption) (*WakuFilterPushResult, error) {
	wf.RLock()
	defer wf.RUnlock()
	if err := wf.ErrOnNotRunning(); err != nil {
		return nil, err
	}

	params, err := wf.getUnsubscribeParameters(opts...)
	if err != nil {
		return nil, err
	}

	// Close this subscription.
	sub.Close()

	result := &WakuFilterPushResult{}

	if !wf.subscriptions.Has(sub.PeerID, sub.ContentFilter) {
		// Last subscription for this [peer, contentFilter] pair.
		params.selectedPeer = sub.PeerID
		err = wf.unsubscribeFromServer(ctx, params, sub.ContentFilter)
		result.Add(WakuFilterPushError{
			Err:    err,
			PeerID: sub.PeerID,
		})
	}
	return result, err
}

func (wf *WakuFilterLightNode) unsubscribeFromServer(ctx context.Context, params *FilterSubscribeParameters, cFilter protocol.ContentFilter) error {
	err := wf.request(ctx, params, pb.FilterSubscribeRequest_UNSUBSCRIBE, cFilter)
	if err != nil {
		ferr, ok := err.(*FilterError)
		if ok && ferr.Code == http.StatusNotFound {
			wf.log.Warn("peer does not have a subscription", logging.HostID("peerID", params.selectedPeer), zap.Error(err))
		} else {
			wf.log.Error("could not unsubscribe from peer", logging.HostID("peerID", params.selectedPeer), zap.Error(err))
		}
	}

	return err
}
// unsubscribeAll closes all subscriptions for selectedPeer (or, if selectedPeer == "", for all peers)
// and sends an unsubscribe-all request to those peers.
func (wf *WakuFilterLightNode) unsubscribeAll(ctx context.Context, opts ...FilterSubscribeOption) (*WakuFilterPushResult, error) {
	params, err := wf.getUnsubscribeParameters(opts...)
	if err != nil {
		return nil, err
	}
	result := &WakuFilterPushResult{}

	peers := make(map[peer.ID]struct{})
	subs := wf.subscriptions.GetSubscription(params.selectedPeer, protocol.ContentFilter{})
	if len(subs) == 0 && params.selectedPeer != "" {
		result.Add(WakuFilterPushError{
			Err:    err,
			PeerID: params.selectedPeer,
		})
		return result, ErrSubscriptionNotFound
	}
	for _, sub := range subs {
		sub.Close()
		peers[sub.PeerID] = struct{}{}
	}
	if params.wg != nil {
		params.wg.Add(len(peers))
	}
	for peerId := range peers {
		go func(peerID peer.ID) {
			defer func() {
				if params.wg != nil {
					params.wg.Done()
				}
				_ = recover()
			}()

			paramsCopy := params.Copy()
			paramsCopy.selectedPeer = peerID
			err := wf.request(
				ctx,
				paramsCopy,
				pb.FilterSubscribeRequest_UNSUBSCRIBE_ALL,
				protocol.ContentFilter{})
			if err != nil {
				wf.log.Error("could not unsubscribe from peer", logging.HostID("peerID", peerID), zap.Error(err))
			}
			if params.wg != nil {
				result.Add(WakuFilterPushError{
					Err:    err,
					PeerID: peerID,
				})
			}
		}(peerId)
	}

	if params.wg != nil {
		params.wg.Wait()
	}

	return result, nil
}

// UnsubscribeAll is used to stop receiving messages from peer(s). It does not close subscriptions.
func (wf *WakuFilterLightNode) UnsubscribeAll(ctx context.Context, opts ...FilterSubscribeOption) (*WakuFilterPushResult, error) {
	wf.RLock()
	defer wf.RUnlock()
	if err := wf.ErrOnNotRunning(); err != nil {
		return nil, err
	}

	return wf.unsubscribeAll(ctx, opts...)
}
39
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/common.go
generated
vendored
Normal file
@@ -0,0 +1,39 @@
package filter

import (
	"fmt"
	"time"
)

const DefaultMaxSubscriptions = 1000
const MaxCriteriaPerSubscription = 1000
const MaxContentTopicsPerRequest = 30
const MessagePushTimeout = 20 * time.Second

type FilterError struct {
	Code    int
	Message string
}

func NewFilterError(code int, message string) FilterError {
	return FilterError{
		Code:    code,
		Message: message,
	}
}

const errorStringFmt = "%d - %s"

func (e *FilterError) Error() string {
	return fmt.Sprintf(errorStringFmt, e.Code, e.Message)
}

func ExtractCodeFromFilterError(fErr string) int {
	code := 0
	var message string
	_, err := fmt.Sscanf(fErr, errorStringFmt, &code, &message)
	if err != nil {
		return -1
	}
	return code
}
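`FilterError` carries its status code through the `"%d - %s"` string form, and `ExtractCodeFromFilterError` recovers the code with `fmt.Sscanf`, returning -1 when the string does not match. A self-contained sketch of that round trip (the lowercase `filterError`/`extractCode` names are re-declared here for illustration only):

```go
package main

import "fmt"

const errFmt = "%d - %s"

type filterError struct {
	code    int
	message string
}

// Error formats the code and message the same way FilterError.Error does.
func (e *filterError) Error() string {
	return fmt.Sprintf(errFmt, e.code, e.message)
}

// extractCode parses the numeric status code back out of the formatted error
// string, returning -1 when the string does not have the expected shape.
func extractCode(s string) int {
	code := 0
	var message string
	if _, err := fmt.Sscanf(s, errFmt, &code, &message); err != nil {
		return -1
	}
	return code
}

func main() {
	e := &filterError{code: 404, message: "peer_has_no_subscription"}
	fmt.Println(e.Error())              // prints 404 - peer_has_no_subscription
	fmt.Println(extractCode(e.Error())) // prints 404
	fmt.Println(extractCode("garbage")) // prints -1
}
```

Note that `%s` in `Sscanf` only consumes a single whitespace-delimited token, which is fine here because the code is parsed before the message and the underscore-style messages contain no spaces.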
120
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/metrics.go
generated
vendored
Normal file
@@ -0,0 +1,120 @@
package filter

import (
	"time"

	"github.com/libp2p/go-libp2p/p2p/metricshelper"
	"github.com/prometheus/client_golang/prometheus"
)

var filterMessages = prometheus.NewCounter(
	prometheus.CounterOpts{
		Name: "waku_filter_messages",
		Help: "The number of messages received via filter protocol",
	})

var filterErrors = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "waku_filter_errors",
		Help: "The distribution of the filter protocol errors",
	},
	[]string{"error_type"},
)

var filterRequests = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "waku_filter_requests",
		Help: "The distribution of filter requests",
	},
	[]string{"request_type"},
)

var filterRequestDurationSeconds = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "waku_filter_request_duration_seconds",
		Help: "Duration of Filter Subscribe Requests",
	},
	[]string{"request_type"},
)

var filterHandleMessageDurationSeconds = prometheus.NewHistogram(
	prometheus.HistogramOpts{
		Name: "waku_filter_handle_message_duration_seconds",
		Help: "Duration to Push Message to Filter Subscribers",
	})

var filterSubscriptions = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Name: "waku_filter_subscriptions",
		Help: "The number of filter subscriptions",
	})

var collectors = []prometheus.Collector{
	filterMessages,
	filterErrors,
	filterRequests,
	filterSubscriptions,
	filterRequestDurationSeconds,
	filterHandleMessageDurationSeconds,
}

// Metrics exposes the functions required to update prometheus metrics for filter protocol
type Metrics interface {
	RecordMessage()
	RecordRequest(requestType string, duration time.Duration)
	RecordPushDuration(duration time.Duration)
	RecordSubscriptions(num int)
	RecordError(err metricsErrCategory)
}

type metricsImpl struct {
	reg prometheus.Registerer
}

func newMetrics(reg prometheus.Registerer) Metrics {
	metricshelper.RegisterCollectors(reg, collectors...)
	return &metricsImpl{
		reg: reg,
	}
}

// RecordMessage increases the counter for the number of messages received via waku filter.
func (m *metricsImpl) RecordMessage() {
	filterMessages.Inc()
}

type metricsErrCategory string

var (
	unknownPeerMessagePush     metricsErrCategory = "unknown_peer_messagepush"
	decodeRPCFailure           metricsErrCategory = "decode_rpc_failure"
	invalidSubscriptionMessage metricsErrCategory = "invalid_subscription_message"
	dialFailure                metricsErrCategory = "dial_failure"
	writeRequestFailure        metricsErrCategory = "write_request_failure"
	requestIDMismatch          metricsErrCategory = "request_id_mismatch"
	errorResponse              metricsErrCategory = "error_response"
	peerNotFoundFailure        metricsErrCategory = "peer_not_found_failure"
	writeResponseFailure       metricsErrCategory = "write_response_failure"
	pushTimeoutFailure         metricsErrCategory = "push_timeout_failure"
)

// RecordError increases the counter for different error types.
func (m *metricsImpl) RecordError(err metricsErrCategory) {
	filterErrors.WithLabelValues(string(err)).Inc()
}

// RecordRequest tracks the duration of each type of filter request received.
func (m *metricsImpl) RecordRequest(requestType string, duration time.Duration) {
	filterRequests.WithLabelValues(requestType).Inc()
	filterRequestDurationSeconds.WithLabelValues(requestType).Observe(duration.Seconds())
}

// RecordPushDuration tracks the duration of pushing a message to a filter subscriber.
func (m *metricsImpl) RecordPushDuration(duration time.Duration) {
	filterHandleMessageDurationSeconds.Observe(duration.Seconds())
}

// RecordSubscriptions tracks the current number of filter subscriptions.
func (m *metricsImpl) RecordSubscriptions(num int) {
	filterSubscriptions.Set(float64(num))
}
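The metrics code registers a fixed set of collectors once and then updates them through a narrow `Metrics` interface, keeping instrumentation calls out of the protocol logic. The labeled counters used for `RecordError` (`prometheus.CounterVec`) are conceptually one counter per label value; a dependency-free sketch of that idea (a toy stand-in, not the real prometheus client):

```go
package main

import (
	"fmt"
	"sync"
)

// counterVec is a toy stand-in for prometheus.CounterVec: one counter per label value.
type counterVec struct {
	mu     sync.Mutex
	counts map[string]int
}

func newCounterVec() *counterVec {
	return &counterVec{counts: make(map[string]int)}
}

// Inc increments the counter associated with the given label value.
func (c *counterVec) Inc(label string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counts[label]++
}

// Get returns the current count for a label value (0 if never incremented).
func (c *counterVec) Get(label string) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.counts[label]
}

func main() {
	filterErrors := newCounterVec()
	// RecordError(decodeRPCFailure) corresponds to an Inc on the error_type label.
	filterErrors.Inc("decode_rpc_failure")
	filterErrors.Inc("decode_rpc_failure")
	filterErrors.Inc("dial_failure")
	fmt.Println(filterErrors.Get("decode_rpc_failure")) // prints 2
}
```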
192
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/options.go
generated
vendored
Normal file
@@ -0,0 +1,192 @@
package filter

import (
	"errors"
	"sync"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/waku/v2/peermanager"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	"go.uber.org/zap"
)

func (old *FilterSubscribeParameters) Copy() *FilterSubscribeParameters {
	return &FilterSubscribeParameters{
		selectedPeer: old.selectedPeer,
		requestID:    old.requestID,
	}
}

type (
	FilterPingParameters struct {
		requestID []byte
	}
	FilterPingOption func(*FilterPingParameters)
)

func WithPingRequestId(requestId []byte) FilterPingOption {
	return func(params *FilterPingParameters) {
		params.requestID = requestId
	}
}

type (
	FilterSubscribeParameters struct {
		selectedPeer      peer.ID
		peerAddr          multiaddr.Multiaddr
		peerSelectionType peermanager.PeerSelection
		preferredPeers    peer.IDSlice
		requestID         []byte
		log               *zap.Logger

		// Subscribe-specific
		host host.Host
		pm   *peermanager.PeerManager

		// Unsubscribe-specific
		unsubscribeAll bool
		wg             *sync.WaitGroup
	}

	FilterParameters struct {
		Timeout        time.Duration
		MaxSubscribers int
		pm             *peermanager.PeerManager
	}

	Option func(*FilterParameters)

	FilterSubscribeOption func(*FilterSubscribeParameters) error
)

func WithTimeout(timeout time.Duration) Option {
	return func(params *FilterParameters) {
		params.Timeout = timeout
	}
}

// WithPeer is an option used to specify the peerID to send the filter subscription requests to.
// Note that this option is mutually exclusive with WithPeerAddr; only one of them can be used.
func WithPeer(p peer.ID) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.selectedPeer = p
		if params.peerAddr != nil {
			return errors.New("peerAddr and peerId options are mutually exclusive")
		}
		return nil
	}
}

// WithPeerAddr is an option used to specify a peer address.
// This new peer will be added to the peerstore.
// Note that this option is mutually exclusive with WithPeer; only one of them can be used.
func WithPeerAddr(pAddr multiaddr.Multiaddr) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.peerAddr = pAddr
		if params.selectedPeer != "" {
			return errors.New("peerAddr and peerId options are mutually exclusive")
		}
		return nil
	}
}

// WithAutomaticPeerSelection is an option used to randomly select a peer from the peer store.
// If a list of specific peers is passed, the peer will be chosen from that list assuming it
// supports the chosen protocol, otherwise it will choose a peer from the node peerstore.
func WithAutomaticPeerSelection(fromThesePeers ...peer.ID) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.peerSelectionType = peermanager.Automatic
		params.preferredPeers = fromThesePeers
		return nil
	}
}

// WithFastestPeerSelection is an option used to select a peer from the peer store
// with the lowest ping. If a list of specific peers is passed, the peer will be chosen
// from that list assuming it supports the chosen protocol, otherwise it will choose a
// peer from the node peerstore.
func WithFastestPeerSelection(fromThesePeers ...peer.ID) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.peerSelectionType = peermanager.LowestRTT
		params.preferredPeers = fromThesePeers
		return nil
	}
}

// WithRequestID is an option to set a specific request ID to be used when
// creating/removing a filter subscription.
func WithRequestID(requestID []byte) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.requestID = requestID
		return nil
	}
}

// WithAutomaticRequestID is an option to automatically generate a request ID
// when creating a filter subscription.
func WithAutomaticRequestID() FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.requestID = protocol.GenerateRequestID()
		return nil
	}
}

func DefaultSubscriptionOptions() []FilterSubscribeOption {
	return []FilterSubscribeOption{
		WithAutomaticPeerSelection(),
		WithAutomaticRequestID(),
	}
}

func UnsubscribeAll() FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.unsubscribeAll = true
		return nil
	}
}

// WithWaitGroup allows specifying a waitgroup to wait until all
// unsubscribe requests are complete before the function returns.
func WithWaitGroup(wg *sync.WaitGroup) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.wg = wg
		return nil
	}
}

// DontWait is used to fire and forget an unsubscription, without
// caring about its results.
func DontWait() FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) error {
		params.wg = nil
		return nil
	}
}

func DefaultUnsubscribeOptions() []FilterSubscribeOption {
|
||||
return []FilterSubscribeOption{
|
||||
WithAutomaticRequestID(),
|
||||
WithWaitGroup(&sync.WaitGroup{}),
|
||||
}
|
||||
}
|
||||
|
||||
func WithMaxSubscribers(maxSubscribers int) Option {
|
||||
return func(params *FilterParameters) {
|
||||
params.MaxSubscribers = maxSubscribers
|
||||
}
|
||||
}
|
||||
|
||||
func WithPeerManager(pm *peermanager.PeerManager) Option {
|
||||
return func(params *FilterParameters) {
|
||||
params.pm = pm
|
||||
}
|
||||
}
|
||||
|
||||
func DefaultOptions() []Option {
|
||||
return []Option{
|
||||
WithTimeout(24 * time.Hour),
|
||||
WithMaxSubscribers(DefaultMaxSubscriptions),
|
||||
}
|
||||
}
|
||||
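The options above follow Go's functional-options pattern, where each option validates its own preconditions against the accumulating parameter struct. A minimal self-contained sketch of the WithPeer / WithPeerAddr mutual-exclusion check — the types here are simplified stand-ins, not the real go-waku definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for FilterSubscribeParameters / FilterSubscribeOption.
type subscribeParams struct {
	selectedPeer string
	peerAddr     string
}

type subscribeOption func(*subscribeParams) error

// withPeer mirrors WithPeer: set the peer ID, then reject if an address was already set.
func withPeer(id string) subscribeOption {
	return func(p *subscribeParams) error {
		p.selectedPeer = id
		if p.peerAddr != "" {
			return errors.New("peerAddr and peerId options are mutually exclusive")
		}
		return nil
	}
}

// withPeerAddr mirrors WithPeerAddr: set the address, then reject if a peer ID was already set.
func withPeerAddr(addr string) subscribeOption {
	return func(p *subscribeParams) error {
		p.peerAddr = addr
		if p.selectedPeer != "" {
			return errors.New("peerAddr and peerId options are mutually exclusive")
		}
		return nil
	}
}

// apply folds the options over a fresh parameter struct, stopping at the first error.
func apply(opts ...subscribeOption) (*subscribeParams, error) {
	p := &subscribeParams{}
	for _, opt := range opts {
		if err := opt(p); err != nil {
			return nil, err
		}
	}
	return p, nil
}

func main() {
	if _, err := apply(withPeer("peerA"), withPeerAddr("/ip4/127.0.0.1/tcp/60000")); err != nil {
		fmt.Println("error:", err) // the two options cannot be combined
	}
	p, _ := apply(withPeer("peerA"))
	fmt.Println(p.selectedPeer)
}
```

Because every option returns an error, combinations that make no sense are rejected at option-application time rather than deep inside the protocol code.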
416
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/pb/filter.pb.go
generated
vendored
Normal file
@@ -0,0 +1,416 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.31.0
// 	protoc        v4.24.4
// source: filter.proto

// 12/WAKU2-FILTER rfc: https://rfc.vac.dev/spec/12/

package pb

import (
	pb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

type FilterSubscribeRequest_FilterSubscribeType int32

const (
	FilterSubscribeRequest_SUBSCRIBER_PING FilterSubscribeRequest_FilterSubscribeType = 0
	FilterSubscribeRequest_SUBSCRIBE       FilterSubscribeRequest_FilterSubscribeType = 1
	FilterSubscribeRequest_UNSUBSCRIBE     FilterSubscribeRequest_FilterSubscribeType = 2
	FilterSubscribeRequest_UNSUBSCRIBE_ALL FilterSubscribeRequest_FilterSubscribeType = 3
)

// Enum value maps for FilterSubscribeRequest_FilterSubscribeType.
var (
	FilterSubscribeRequest_FilterSubscribeType_name = map[int32]string{
		0: "SUBSCRIBER_PING",
		1: "SUBSCRIBE",
		2: "UNSUBSCRIBE",
		3: "UNSUBSCRIBE_ALL",
	}
	FilterSubscribeRequest_FilterSubscribeType_value = map[string]int32{
		"SUBSCRIBER_PING": 0,
		"SUBSCRIBE":       1,
		"UNSUBSCRIBE":     2,
		"UNSUBSCRIBE_ALL": 3,
	}
)

func (x FilterSubscribeRequest_FilterSubscribeType) Enum() *FilterSubscribeRequest_FilterSubscribeType {
	p := new(FilterSubscribeRequest_FilterSubscribeType)
	*p = x
	return p
}

func (x FilterSubscribeRequest_FilterSubscribeType) String() string {
	return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}

func (FilterSubscribeRequest_FilterSubscribeType) Descriptor() protoreflect.EnumDescriptor {
	return file_filter_proto_enumTypes[0].Descriptor()
}

func (FilterSubscribeRequest_FilterSubscribeType) Type() protoreflect.EnumType {
	return &file_filter_proto_enumTypes[0]
}

func (x FilterSubscribeRequest_FilterSubscribeType) Number() protoreflect.EnumNumber {
	return protoreflect.EnumNumber(x)
}

// Deprecated: Use FilterSubscribeRequest_FilterSubscribeType.Descriptor instead.
func (FilterSubscribeRequest_FilterSubscribeType) EnumDescriptor() ([]byte, []int) {
	return file_filter_proto_rawDescGZIP(), []int{0, 0}
}

// Protocol identifier: /vac/waku/filter-subscribe/2.0.0-beta1
type FilterSubscribeRequest struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	RequestId           string                                     `protobuf:"bytes,1,opt,name=request_id,json=requestId,proto3" json:"request_id,omitempty"`
	FilterSubscribeType FilterSubscribeRequest_FilterSubscribeType `protobuf:"varint,2,opt,name=filter_subscribe_type,json=filterSubscribeType,proto3,enum=waku.filter.v2.FilterSubscribeRequest_FilterSubscribeType" json:"filter_subscribe_type,omitempty"`
	// Filter criteria
	PubsubTopic   *string  `protobuf:"bytes,10,opt,name=pubsub_topic,json=pubsubTopic,proto3,oneof" json:"pubsub_topic,omitempty"`
	ContentTopics []string `protobuf:"bytes,11,rep,name=content_topics,json=contentTopics,proto3" json:"content_topics,omitempty"`
}

func (x *FilterSubscribeRequest) Reset() {
	*x = FilterSubscribeRequest{}
	if protoimpl.UnsafeEnabled {
		mi := &file_filter_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *FilterSubscribeRequest) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*FilterSubscribeRequest) ProtoMessage() {}

func (x *FilterSubscribeRequest) ProtoReflect() protoreflect.Message {
	mi := &file_filter_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use FilterSubscribeRequest.ProtoReflect.Descriptor instead.
func (*FilterSubscribeRequest) Descriptor() ([]byte, []int) {
	return file_filter_proto_rawDescGZIP(), []int{0}
}

func (x *FilterSubscribeRequest) GetRequestId() string {
	if x != nil {
		return x.RequestId
	}
	return ""
}

func (x *FilterSubscribeRequest) GetFilterSubscribeType() FilterSubscribeRequest_FilterSubscribeType {
	if x != nil {
		return x.FilterSubscribeType
	}
	return FilterSubscribeRequest_SUBSCRIBER_PING
}

func (x *FilterSubscribeRequest) GetPubsubTopic() string {
	if x != nil && x.PubsubTopic != nil {
		return *x.PubsubTopic
	}
	return ""
}

func (x *FilterSubscribeRequest) GetContentTopics() []string {
	if x != nil {
		return x.ContentTopics
	}
	return nil
}

type FilterSubscribeResponse struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	RequestId  string  `protobuf:"bytes,1,opt,name=request_id,json=requestId,proto3" json:"request_id,omitempty"`
	StatusCode uint32  `protobuf:"varint,10,opt,name=status_code,json=statusCode,proto3" json:"status_code,omitempty"`
	StatusDesc *string `protobuf:"bytes,11,opt,name=status_desc,json=statusDesc,proto3,oneof" json:"status_desc,omitempty"`
}

func (x *FilterSubscribeResponse) Reset() {
	*x = FilterSubscribeResponse{}
	if protoimpl.UnsafeEnabled {
		mi := &file_filter_proto_msgTypes[1]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *FilterSubscribeResponse) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*FilterSubscribeResponse) ProtoMessage() {}

func (x *FilterSubscribeResponse) ProtoReflect() protoreflect.Message {
	mi := &file_filter_proto_msgTypes[1]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use FilterSubscribeResponse.ProtoReflect.Descriptor instead.
func (*FilterSubscribeResponse) Descriptor() ([]byte, []int) {
	return file_filter_proto_rawDescGZIP(), []int{1}
}

func (x *FilterSubscribeResponse) GetRequestId() string {
	if x != nil {
		return x.RequestId
	}
	return ""
}

func (x *FilterSubscribeResponse) GetStatusCode() uint32 {
	if x != nil {
		return x.StatusCode
	}
	return 0
}

func (x *FilterSubscribeResponse) GetStatusDesc() string {
	if x != nil && x.StatusDesc != nil {
		return *x.StatusDesc
	}
	return ""
}

// Protocol identifier: /vac/waku/filter-push/2.0.0-beta1
type MessagePush struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	WakuMessage *pb.WakuMessage `protobuf:"bytes,1,opt,name=waku_message,json=wakuMessage,proto3" json:"waku_message,omitempty"`
	PubsubTopic *string         `protobuf:"bytes,2,opt,name=pubsub_topic,json=pubsubTopic,proto3,oneof" json:"pubsub_topic,omitempty"`
}

func (x *MessagePush) Reset() {
	*x = MessagePush{}
	if protoimpl.UnsafeEnabled {
		mi := &file_filter_proto_msgTypes[2]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *MessagePush) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*MessagePush) ProtoMessage() {}

func (x *MessagePush) ProtoReflect() protoreflect.Message {
	mi := &file_filter_proto_msgTypes[2]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use MessagePush.ProtoReflect.Descriptor instead.
func (*MessagePush) Descriptor() ([]byte, []int) {
	return file_filter_proto_rawDescGZIP(), []int{2}
}

func (x *MessagePush) GetWakuMessage() *pb.WakuMessage {
	if x != nil {
		return x.WakuMessage
	}
	return nil
}

func (x *MessagePush) GetPubsubTopic() string {
	if x != nil && x.PubsubTopic != nil {
		return *x.PubsubTopic
	}
	return ""
}

var File_filter_proto protoreflect.FileDescriptor

var file_filter_proto_rawDesc = []byte{
	0x0a, 0x0c, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0e,
	0x77, 0x61, 0x6b, 0x75, 0x2e, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x1a, 0x1d,
	0x77, 0x61, 0x6b, 0x75, 0x2f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2f, 0x76, 0x31, 0x2f,
	0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xe8, 0x02,
	0x0a, 0x16, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x53, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62,
	0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x72, 0x65, 0x71, 0x75,
	0x65, 0x73, 0x74, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65,
	0x71, 0x75, 0x65, 0x73, 0x74, 0x49, 0x64, 0x12, 0x6e, 0x0a, 0x15, 0x66, 0x69, 0x6c, 0x74, 0x65,
	0x72, 0x5f, 0x73, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65,
	0x18, 0x02, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x3a, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x66, 0x69,
	0x6c, 0x74, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x2e, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x53, 0x75,
	0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x2e, 0x46,
	0x69, 0x6c, 0x74, 0x65, 0x72, 0x53, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x54, 0x79,
	0x70, 0x65, 0x52, 0x13, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x53, 0x75, 0x62, 0x73, 0x63, 0x72,
	0x69, 0x62, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x26, 0x0a, 0x0c, 0x70, 0x75, 0x62, 0x73, 0x75,
	0x62, 0x5f, 0x74, 0x6f, 0x70, 0x69, 0x63, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52,
	0x0b, 0x70, 0x75, 0x62, 0x73, 0x75, 0x62, 0x54, 0x6f, 0x70, 0x69, 0x63, 0x88, 0x01, 0x01, 0x12,
	0x25, 0x0a, 0x0e, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x5f, 0x74, 0x6f, 0x70, 0x69, 0x63,
	0x73, 0x18, 0x0b, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0d, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74,
	0x54, 0x6f, 0x70, 0x69, 0x63, 0x73, 0x22, 0x5f, 0x0a, 0x13, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72,
	0x53, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x13, 0x0a,
	0x0f, 0x53, 0x55, 0x42, 0x53, 0x43, 0x52, 0x49, 0x42, 0x45, 0x52, 0x5f, 0x50, 0x49, 0x4e, 0x47,
	0x10, 0x00, 0x12, 0x0d, 0x0a, 0x09, 0x53, 0x55, 0x42, 0x53, 0x43, 0x52, 0x49, 0x42, 0x45, 0x10,
	0x01, 0x12, 0x0f, 0x0a, 0x0b, 0x55, 0x4e, 0x53, 0x55, 0x42, 0x53, 0x43, 0x52, 0x49, 0x42, 0x45,
	0x10, 0x02, 0x12, 0x13, 0x0a, 0x0f, 0x55, 0x4e, 0x53, 0x55, 0x42, 0x53, 0x43, 0x52, 0x49, 0x42,
	0x45, 0x5f, 0x41, 0x4c, 0x4c, 0x10, 0x03, 0x42, 0x0f, 0x0a, 0x0d, 0x5f, 0x70, 0x75, 0x62, 0x73,
	0x75, 0x62, 0x5f, 0x74, 0x6f, 0x70, 0x69, 0x63, 0x22, 0x8f, 0x01, 0x0a, 0x17, 0x46, 0x69, 0x6c,
	0x74, 0x65, 0x72, 0x53, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x52, 0x65, 0x73, 0x70,
	0x6f, 0x6e, 0x73, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x5f,
	0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73,
	0x74, 0x49, 0x64, 0x12, 0x1f, 0x0a, 0x0b, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x63, 0x6f,
	0x64, 0x65, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73,
	0x43, 0x6f, 0x64, 0x65, 0x12, 0x24, 0x0a, 0x0b, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x64,
	0x65, 0x73, 0x63, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0a, 0x73, 0x74, 0x61,
	0x74, 0x75, 0x73, 0x44, 0x65, 0x73, 0x63, 0x88, 0x01, 0x01, 0x42, 0x0e, 0x0a, 0x0c, 0x5f, 0x73,
	0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x64, 0x65, 0x73, 0x63, 0x22, 0x87, 0x01, 0x0a, 0x0b, 0x4d,
	0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x50, 0x75, 0x73, 0x68, 0x12, 0x3f, 0x0a, 0x0c, 0x77, 0x61,
	0x6b, 0x75, 0x5f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b,
	0x32, 0x1c, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e,
	0x76, 0x31, 0x2e, 0x57, 0x61, 0x6b, 0x75, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x52, 0x0b,
	0x77, 0x61, 0x6b, 0x75, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x26, 0x0a, 0x0c, 0x70,
	0x75, 0x62, 0x73, 0x75, 0x62, 0x5f, 0x74, 0x6f, 0x70, 0x69, 0x63, 0x18, 0x02, 0x20, 0x01, 0x28,
	0x09, 0x48, 0x00, 0x52, 0x0b, 0x70, 0x75, 0x62, 0x73, 0x75, 0x62, 0x54, 0x6f, 0x70, 0x69, 0x63,
	0x88, 0x01, 0x01, 0x42, 0x0f, 0x0a, 0x0d, 0x5f, 0x70, 0x75, 0x62, 0x73, 0x75, 0x62, 0x5f, 0x74,
	0x6f, 0x70, 0x69, 0x63, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}

var (
	file_filter_proto_rawDescOnce sync.Once
	file_filter_proto_rawDescData = file_filter_proto_rawDesc
)

func file_filter_proto_rawDescGZIP() []byte {
	file_filter_proto_rawDescOnce.Do(func() {
		file_filter_proto_rawDescData = protoimpl.X.CompressGZIP(file_filter_proto_rawDescData)
	})
	return file_filter_proto_rawDescData
}

var file_filter_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_filter_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
var file_filter_proto_goTypes = []interface{}{
	(FilterSubscribeRequest_FilterSubscribeType)(0), // 0: waku.filter.v2.FilterSubscribeRequest.FilterSubscribeType
	(*FilterSubscribeRequest)(nil),                  // 1: waku.filter.v2.FilterSubscribeRequest
	(*FilterSubscribeResponse)(nil),                 // 2: waku.filter.v2.FilterSubscribeResponse
	(*MessagePush)(nil),                             // 3: waku.filter.v2.MessagePush
	(*pb.WakuMessage)(nil),                          // 4: waku.message.v1.WakuMessage
}
var file_filter_proto_depIdxs = []int32{
	0, // 0: waku.filter.v2.FilterSubscribeRequest.filter_subscribe_type:type_name -> waku.filter.v2.FilterSubscribeRequest.FilterSubscribeType
	4, // 1: waku.filter.v2.MessagePush.waku_message:type_name -> waku.message.v1.WakuMessage
	2, // [2:2] is the sub-list for method output_type
	2, // [2:2] is the sub-list for method input_type
	2, // [2:2] is the sub-list for extension type_name
	2, // [2:2] is the sub-list for extension extendee
	0, // [0:2] is the sub-list for field type_name
}

func init() { file_filter_proto_init() }
func file_filter_proto_init() {
	if File_filter_proto != nil {
		return
	}
	if !protoimpl.UnsafeEnabled {
		file_filter_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*FilterSubscribeRequest); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_filter_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*FilterSubscribeResponse); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
		file_filter_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*MessagePush); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
	}
	file_filter_proto_msgTypes[0].OneofWrappers = []interface{}{}
	file_filter_proto_msgTypes[1].OneofWrappers = []interface{}{}
	file_filter_proto_msgTypes[2].OneofWrappers = []interface{}{}
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: file_filter_proto_rawDesc,
			NumEnums:      1,
			NumMessages:   3,
			NumExtensions: 0,
			NumServices:   0,
		},
		GoTypes:           file_filter_proto_goTypes,
		DependencyIndexes: file_filter_proto_depIdxs,
		EnumInfos:         file_filter_proto_enumTypes,
		MessageInfos:      file_filter_proto_msgTypes,
	}.Build()
	File_filter_proto = out.File
	file_filter_proto_rawDesc = nil
	file_filter_proto_goTypes = nil
	file_filter_proto_depIdxs = nil
}
3
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/pb/generate.go
generated
vendored
Normal file
@@ -0,0 +1,3 @@
package pb

//go:generate protoc -I./../../waku-proto/waku/filter/v2/. -I./../../waku-proto/ --go_opt=paths=source_relative --go_opt=Mfilter.proto=github.com/waku-org/go-waku/waku/v2/protocol/filter/pb --go_opt=Mwaku/message/v1/message.proto=github.com/waku-org/go-waku/waku/v2/protocol/pb --go_out=. ./../../waku-proto/waku/filter/v2/filter.proto
60
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/pb/validation.go
generated
vendored
Normal file
@@ -0,0 +1,60 @@
package pb

import (
	"errors"
	"fmt"

	"golang.org/x/exp/slices"
)

const MaxContentTopicsPerRequest = 30

var (
	errMissingRequestID   = errors.New("missing RequestId field")
	errMissingPubsubTopic = errors.New("missing PubsubTopic field")
	errNoContentTopics    = errors.New("at least one contenttopic should be specified")
	errMaxContentTopics   = fmt.Errorf("exceeds maximum content topics: %d", MaxContentTopicsPerRequest)
	errEmptyContentTopics = errors.New("one or more content topics specified is empty")
	errMissingMessage     = errors.New("missing WakuMessage field")
)

func (x *FilterSubscribeRequest) Validate() error {
	if x.RequestId == "" {
		return errMissingRequestID
	}

	if x.FilterSubscribeType == FilterSubscribeRequest_SUBSCRIBE || x.FilterSubscribeType == FilterSubscribeRequest_UNSUBSCRIBE {
		if x.PubsubTopic == nil || *x.PubsubTopic == "" {
			return errMissingPubsubTopic
		}

		if len(x.ContentTopics) == 0 {
			return errNoContentTopics
		}

		if slices.Contains[string](x.ContentTopics, "") {
			return errEmptyContentTopics
		}

		if len(x.ContentTopics) > MaxContentTopicsPerRequest {
			return errMaxContentTopics
		}
	}

	return nil
}

func (x *FilterSubscribeResponse) Validate() error {
	if x.RequestId == "" {
		return errMissingRequestID
	}

	return nil
}

func (x *MessagePush) Validate() error {
	if x.WakuMessage == nil {
		return errMissingMessage
	}
	return x.WakuMessage.Validate()
}
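The request-validation rules above — a request always needs a request ID, and SUBSCRIBE/UNSUBSCRIBE additionally need a pubsub topic plus between 1 and 30 non-empty content topics — can be sketched stand-alone. The types here are simplified stand-ins, not the generated protobuf ones:

```go
package main

import (
	"errors"
	"fmt"
)

const maxContentTopicsPerRequest = 30

// subscribeRequest is a simplified stand-in for pb.FilterSubscribeRequest.
type subscribeRequest struct {
	requestID     string
	isSubscribe   bool // true for SUBSCRIBE or UNSUBSCRIBE requests
	pubsubTopic   string
	contentTopics []string
}

// validate mirrors the checks in FilterSubscribeRequest.Validate, in the same order.
func validate(r subscribeRequest) error {
	if r.requestID == "" {
		return errors.New("missing RequestId field")
	}
	if r.isSubscribe {
		if r.pubsubTopic == "" {
			return errors.New("missing PubsubTopic field")
		}
		if len(r.contentTopics) == 0 {
			return errors.New("at least one content topic should be specified")
		}
		for _, ct := range r.contentTopics {
			if ct == "" {
				return errors.New("one or more content topics specified is empty")
			}
		}
		if len(r.contentTopics) > maxContentTopicsPerRequest {
			return fmt.Errorf("exceeds maximum content topics: %d", maxContentTopicsPerRequest)
		}
	}
	return nil
}

func main() {
	ok := subscribeRequest{requestID: "1", isSubscribe: true,
		pubsubTopic: "/waku/2/rs/0/1", contentTopics: []string{"/app/1/demo/proto"}}
	fmt.Println(validate(ok)) // <nil>

	bad := subscribeRequest{requestID: "2", isSubscribe: true, pubsubTopic: "/waku/2/rs/0/1"}
	fmt.Println(validate(bad))
}
```

Note that ping and unsubscribe-all requests only require the request ID; the topic checks apply solely to the subscribe/unsubscribe variants.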
309
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/server.go
generated
vendored
Normal file
@@ -0,0 +1,309 @@
package filter

import (
	"context"
	"errors"
	"math"
	"net/http"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	libp2pProtocol "github.com/libp2p/go-libp2p/core/protocol"
	"github.com/libp2p/go-msgio/pbio"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/waku-org/go-waku/logging"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/filter/pb"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"github.com/waku-org/go-waku/waku/v2/service"
	"github.com/waku-org/go-waku/waku/v2/timesource"
	"github.com/waku-org/go-waku/waku/v2/utils"
	"go.uber.org/zap"
)

// FilterSubscribeID_v20beta1 is the current Waku Filter protocol identifier for servers to
// allow filter clients to subscribe, modify, refresh and unsubscribe a desired set of filter criteria.
const FilterSubscribeID_v20beta1 = libp2pProtocol.ID("/vac/waku/filter-subscribe/2.0.0-beta1")
const FilterSubscribeENRField = uint8(1 << 2)
const peerHasNoSubscription = "peer has no subscriptions"

type (
	WakuFilterFullNode struct {
		h       host.Host
		msgSub  *relay.Subscription
		metrics Metrics
		log     *zap.Logger
		*service.CommonService
		subscriptions *SubscribersMap

		maxSubscriptions int
	}
)

// NewWakuFilterFullNode returns a new instance of the Waku Filter full node, set up according to the chosen parameters and options.
func NewWakuFilterFullNode(timesource timesource.Timesource, reg prometheus.Registerer, log *zap.Logger, opts ...Option) *WakuFilterFullNode {
	wf := new(WakuFilterFullNode)
	wf.log = log.Named("filterv2-fullnode")

	params := new(FilterParameters)
	optList := DefaultOptions()
	optList = append(optList, opts...)
	for _, opt := range optList {
		opt(params)
	}

	wf.CommonService = service.NewCommonService()
	wf.metrics = newMetrics(reg)
	wf.subscriptions = NewSubscribersMap(params.Timeout)
	wf.maxSubscriptions = params.MaxSubscribers
	if params.pm != nil {
		params.pm.RegisterWakuProtocol(FilterSubscribeID_v20beta1, FilterSubscribeENRField)
	}
	return wf
}

// SetHost sets the host, enabling the node to mount or consume a protocol.
func (wf *WakuFilterFullNode) SetHost(h host.Host) {
	wf.h = h
}

func (wf *WakuFilterFullNode) Start(ctx context.Context, sub *relay.Subscription) error {
	return wf.CommonService.Start(ctx, func() error {
		return wf.start(sub)
	})
}

func (wf *WakuFilterFullNode) start(sub *relay.Subscription) error {
	wf.h.SetStreamHandlerMatch(FilterSubscribeID_v20beta1, protocol.PrefixTextMatch(string(FilterSubscribeID_v20beta1)), wf.onRequest(wf.Context()))

	wf.msgSub = sub
	wf.WaitGroup().Add(1)
	go wf.filterListener(wf.Context())

	wf.log.Info("filter-subscriber protocol started")
	return nil
}

func (wf *WakuFilterFullNode) onRequest(ctx context.Context) func(network.Stream) {
	return func(stream network.Stream) {
		logger := wf.log.With(logging.HostID("peer", stream.Conn().RemotePeer()))

		reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

		subscribeRequest := &pb.FilterSubscribeRequest{}
		err := reader.ReadMsg(subscribeRequest)
		if err != nil {
			wf.metrics.RecordError(decodeRPCFailure)
			logger.Error("reading request", zap.Error(err))
			if err := stream.Reset(); err != nil {
				wf.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		logger = logger.With(zap.String("requestID", subscribeRequest.RequestId))

		start := time.Now()

		if err := subscribeRequest.Validate(); err != nil {
			wf.reply(ctx, stream, subscribeRequest, http.StatusBadRequest, err.Error())
		} else {
			switch subscribeRequest.FilterSubscribeType {
			case pb.FilterSubscribeRequest_SUBSCRIBE:
				wf.subscribe(ctx, stream, subscribeRequest)
			case pb.FilterSubscribeRequest_SUBSCRIBER_PING:
				wf.ping(ctx, stream, subscribeRequest)
			case pb.FilterSubscribeRequest_UNSUBSCRIBE:
				wf.unsubscribe(ctx, stream, subscribeRequest)
			case pb.FilterSubscribeRequest_UNSUBSCRIBE_ALL:
				wf.unsubscribeAll(ctx, stream, subscribeRequest)
			}
		}

		stream.Close()

		wf.metrics.RecordRequest(subscribeRequest.FilterSubscribeType.String(), time.Since(start))

		logger.Info("received request", zap.String("requestType", subscribeRequest.FilterSubscribeType.String()))
	}
}

func (wf *WakuFilterFullNode) reply(ctx context.Context, stream network.Stream, request *pb.FilterSubscribeRequest, statusCode int, description ...string) {
	response := &pb.FilterSubscribeResponse{
		RequestId:  request.RequestId,
		StatusCode: uint32(statusCode),
	}

	if len(description) != 0 {
		response.StatusDesc = &description[0]
	} else {
		desc := http.StatusText(statusCode)
		response.StatusDesc = &desc
	}

	writer := pbio.NewDelimitedWriter(stream)
	err := writer.WriteMsg(response)
	if err != nil {
		wf.metrics.RecordError(writeResponseFailure)
		wf.log.Error("sending response", zap.Error(err))
		if err := stream.Reset(); err != nil {
			wf.log.Error("resetting connection", zap.Error(err))
		}
	}
}

func (wf *WakuFilterFullNode) ping(ctx context.Context, stream network.Stream, request *pb.FilterSubscribeRequest) {
	exists := wf.subscriptions.Has(stream.Conn().RemotePeer())

	if exists {
		wf.reply(ctx, stream, request, http.StatusOK)
	} else {
		wf.reply(ctx, stream, request, http.StatusNotFound, peerHasNoSubscription)
	}
}

func (wf *WakuFilterFullNode) subscribe(ctx context.Context, stream network.Stream, request *pb.FilterSubscribeRequest) {
	if wf.subscriptions.Count() >= wf.maxSubscriptions {
		wf.reply(ctx, stream, request, http.StatusServiceUnavailable, "node has reached maximum number of subscriptions")
		return
	}

	peerID := stream.Conn().RemotePeer()

	if totalSubs, exists := wf.subscriptions.Get(peerID); exists {
		ctTotal := 0
		for _, contentTopicSet := range totalSubs {
			ctTotal += len(contentTopicSet)
		}

		if ctTotal+len(request.ContentTopics) > MaxCriteriaPerSubscription {
			wf.reply(ctx, stream, request, http.StatusServiceUnavailable, "peer has reached maximum number of filter criteria")
			return
		}
	}

	wf.subscriptions.Set(peerID, *request.PubsubTopic, request.ContentTopics)

	wf.metrics.RecordSubscriptions(wf.subscriptions.Count())
	wf.reply(ctx, stream, request, http.StatusOK)
}

func (wf *WakuFilterFullNode) unsubscribe(ctx context.Context, stream network.Stream, request *pb.FilterSubscribeRequest) {
	err := wf.subscriptions.Delete(stream.Conn().RemotePeer(), *request.PubsubTopic, request.ContentTopics)
	if err != nil {
		wf.reply(ctx, stream, request, http.StatusNotFound, peerHasNoSubscription)
	} else {
		wf.metrics.RecordSubscriptions(wf.subscriptions.Count())
		wf.reply(ctx, stream, request, http.StatusOK)
	}
}

func (wf *WakuFilterFullNode) unsubscribeAll(ctx context.Context, stream network.Stream, request *pb.FilterSubscribeRequest) {
	err := wf.subscriptions.DeleteAll(stream.Conn().RemotePeer())
	if err != nil {
		wf.reply(ctx, stream, request, http.StatusNotFound, peerHasNoSubscription)
	} else {
		wf.metrics.RecordSubscriptions(wf.subscriptions.Count())
		wf.reply(ctx, stream, request, http.StatusOK)
	}
}

func (wf *WakuFilterFullNode) filterListener(ctx context.Context) {
	defer wf.WaitGroup().Done()

	// This function is invoked for each message received
	// by the full node in the context of Waku2-Filter.
	handle := func(envelope *protocol.Envelope) error {
		msg := envelope.Message()
		pubsubTopic := envelope.PubsubTopic()
		logger := utils.MessagesLogger("filter").With(logging.HexBytes("hash", envelope.Hash()),
			zap.String("pubsubTopic", envelope.PubsubTopic()),
			zap.String("contentTopic", envelope.Message().ContentTopic),
		)
		logger.Debug("push message to filter subscribers")

		// Each subscriber is a light node that previously invoked
		// a FilterRequest on this node.
		for subscriber := range wf.subscriptions.Items(pubsubTopic, msg.ContentTopic) {
			logger := logger.With(logging.HostID("peer", subscriber))
			// Push the message to the light node.
			logger.Debug("pushing message to light node")
			wf.WaitGroup().Add(1)
			go func(subscriber peer.ID) {
				defer wf.WaitGroup().Done()
				start := time.Now()
				err := wf.pushMessage(ctx, logger, subscriber, envelope)
				if err != nil {
					logger.Error("pushing message", zap.Error(err))
					return
				}
				wf.metrics.RecordPushDuration(time.Since(start))
			}(subscriber)
		}

		return nil
	}

	for m := range wf.msgSub.Ch {
		if err := handle(m); err != nil {
			wf.log.Error("handling message", zap.Error(err))
		}
	}
}

func (wf *WakuFilterFullNode) pushMessage(ctx context.Context, logger *zap.Logger, peerID peer.ID, env *protocol.Envelope) error {
	pubSubTopic := env.PubsubTopic()
	messagePush := &pb.MessagePush{
		PubsubTopic: &pubSubTopic,
		WakuMessage: env.Message(),
	}

	ctx, cancel := context.WithTimeout(ctx, MessagePushTimeout)
	defer cancel()
stream, err := wf.h.NewStream(ctx, peerID, FilterPushID_v20beta1)
|
||||
if err != nil {
|
||||
wf.subscriptions.FlagAsFailure(peerID)
|
||||
if errors.Is(context.DeadlineExceeded, err) {
|
||||
wf.metrics.RecordError(pushTimeoutFailure)
|
||||
} else {
|
||||
wf.metrics.RecordError(dialFailure)
|
||||
}
|
||||
logger.Error("opening peer stream", zap.Error(err))
|
||||
return err
|
||||
}
|
||||
|
||||
writer := pbio.NewDelimitedWriter(stream)
|
||||
err = writer.WriteMsg(messagePush)
|
||||
if err != nil {
|
||||
if errors.Is(context.DeadlineExceeded, err) {
|
||||
wf.metrics.RecordError(pushTimeoutFailure)
|
||||
} else {
|
||||
wf.metrics.RecordError(writeResponseFailure)
|
||||
}
|
||||
logger.Error("pushing messages to peer", zap.Error(err))
|
||||
wf.subscriptions.FlagAsFailure(peerID)
|
||||
if err := stream.Reset(); err != nil {
|
||||
wf.log.Error("resetting connection", zap.Error(err))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
stream.Close()
|
||||
|
||||
wf.subscriptions.FlagAsSuccess(peerID)
|
||||
|
||||
logger.Debug("message pushed succesfully")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Stop unmounts the filter protocol
|
||||
func (wf *WakuFilterFullNode) Stop() {
|
||||
wf.CommonService.Stop(func() {
|
||||
wf.h.RemoveStreamHandler(FilterSubscribeID_v20beta1)
|
||||
wf.msgSub.Unsubscribe()
|
||||
})
|
||||
}
|
||||
246
vendor/github.com/waku-org/go-waku/waku/v2/protocol/filter/subscribers_map.go
generated
vendored
Normal file
@@ -0,0 +1,246 @@
package filter

import (
	"encoding/hex"
	"errors"
	"sync"
	"time"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/waku-org/go-waku/waku/v2/protocol"
)

type PeerSet map[peer.ID]struct{}

type PubsubTopics map[string]protocol.ContentTopicSet // pubsubTopic => contentTopics

var errNotFound = errors.New("not found")

type SubscribersMap struct {
	sync.RWMutex

	items       map[peer.ID]PubsubTopics
	interestMap map[string]PeerSet // key: keccak256(pubsubTopic-contentTopic) => peers

	timeout     time.Duration
	failedPeers map[peer.ID]time.Time
}

func NewSubscribersMap(timeout time.Duration) *SubscribersMap {
	return &SubscribersMap{
		items:       make(map[peer.ID]PubsubTopics),
		interestMap: make(map[string]PeerSet),
		timeout:     timeout,
		failedPeers: make(map[peer.ID]time.Time),
	}
}

func (sub *SubscribersMap) Clear() {
	sub.Lock()
	defer sub.Unlock()

	sub.items = make(map[peer.ID]PubsubTopics)
	sub.interestMap = make(map[string]PeerSet)
	sub.failedPeers = make(map[peer.ID]time.Time)
}

func (sub *SubscribersMap) Set(peerID peer.ID, pubsubTopic string, contentTopics []string) {
	sub.Lock()
	defer sub.Unlock()

	pubsubTopicMap, ok := sub.items[peerID]
	if !ok {
		pubsubTopicMap = make(PubsubTopics)
	}

	contentTopicsMap, ok := pubsubTopicMap[pubsubTopic]
	if !ok {
		contentTopicsMap = make(protocol.ContentTopicSet)
	}

	for _, c := range contentTopics {
		contentTopicsMap[c] = struct{}{}
	}

	pubsubTopicMap[pubsubTopic] = contentTopicsMap

	sub.items[peerID] = pubsubTopicMap

	for _, c := range contentTopics {
		c := c
		sub.addToInterestMap(peerID, pubsubTopic, c)
	}
}

func (sub *SubscribersMap) Get(peerID peer.ID) (PubsubTopics, bool) {
	sub.RLock()
	defer sub.RUnlock()

	value, ok := sub.items[peerID]

	return value, ok
}

func (sub *SubscribersMap) Has(peerID peer.ID) bool {
	sub.RLock()
	defer sub.RUnlock()

	_, ok := sub.items[peerID]

	return ok
}

func (sub *SubscribersMap) Delete(peerID peer.ID, pubsubTopic string, contentTopics []string) error {
	sub.Lock()
	defer sub.Unlock()

	pubsubTopicMap, ok := sub.items[peerID]
	if !ok {
		return errNotFound
	}

	contentTopicsMap, ok := pubsubTopicMap[pubsubTopic]
	if !ok {
		return errNotFound
	}

	// Removing content topics individually
	for _, c := range contentTopics {
		c := c
		delete(contentTopicsMap, c)
		sub.removeFromInterestMap(peerID, pubsubTopic, c)
	}

	pubsubTopicMap[pubsubTopic] = contentTopicsMap

	// No more content topics available. Removing the pubsub topic completely
	if len(contentTopicsMap) == 0 {
		delete(pubsubTopicMap, pubsubTopic)
	}

	sub.items[peerID] = pubsubTopicMap

	if len(sub.items[peerID]) == 0 {
		delete(sub.items, peerID)
	}

	return nil
}

func (sub *SubscribersMap) deleteAll(peerID peer.ID) error {
	pubsubTopicMap, ok := sub.items[peerID]
	if !ok {
		return errNotFound
	}

	for pubsubTopic, contentTopicsMap := range pubsubTopicMap {
		// Remove all content topics related to this pubsub topic
		for c := range contentTopicsMap {
			sub.removeFromInterestMap(peerID, pubsubTopic, c)
		}
	}

	delete(sub.items, peerID)

	return nil
}

func (sub *SubscribersMap) DeleteAll(peerID peer.ID) error {
	sub.Lock()
	defer sub.Unlock()

	return sub.deleteAll(peerID)
}

func (sub *SubscribersMap) RemoveAll() {
	sub.Lock()
	defer sub.Unlock()

	sub.items = make(map[peer.ID]PubsubTopics)
}

func (sub *SubscribersMap) Count() int {
	sub.RLock()
	defer sub.RUnlock()

	return len(sub.items)
}

func (sub *SubscribersMap) Items(pubsubTopic string, contentTopic string) <-chan peer.ID {
	c := make(chan peer.ID)

	key := getKey(pubsubTopic, contentTopic)

	f := func() {
		sub.RLock()
		defer sub.RUnlock()

		if peers, ok := sub.interestMap[key]; ok {
			for p := range peers {
				c <- p
			}
		}
		close(c)
	}
	go f()

	return c
}

func (sub *SubscribersMap) addToInterestMap(peerID peer.ID, pubsubTopic string, contentTopic string) {
	key := getKey(pubsubTopic, contentTopic)
	peerSet, ok := sub.interestMap[key]
	if !ok {
		peerSet = make(PeerSet)
	}
	peerSet[peerID] = struct{}{}
	sub.interestMap[key] = peerSet
}

func (sub *SubscribersMap) removeFromInterestMap(peerID peer.ID, pubsubTopic string, contentTopic string) {
	key := getKey(pubsubTopic, contentTopic)
	_, exists := sub.interestMap[key]
	if exists {
		delete(sub.interestMap[key], peerID)
	}
}

func getKey(pubsubTopic string, contentTopic string) string {
	pubsubTopicBytes := []byte(pubsubTopic)
	key := append(pubsubTopicBytes, []byte(contentTopic)...)
	return hex.EncodeToString(crypto.Keccak256(key))
}

func (sub *SubscribersMap) IsFailedPeer(peerID peer.ID) bool {
	sub.RLock()
	defer sub.RUnlock()
	_, ok := sub.failedPeers[peerID]
	return ok
}

func (sub *SubscribersMap) FlagAsSuccess(peerID peer.ID) {
	sub.Lock()
	defer sub.Unlock()

	_, ok := sub.failedPeers[peerID]
	if ok {
		delete(sub.failedPeers, peerID)
	}
}

func (sub *SubscribersMap) FlagAsFailure(peerID peer.ID) {
	sub.Lock()
	defer sub.Unlock()

	lastFailure, ok := sub.failedPeers[peerID]
	if ok {
		elapsedTime := time.Since(lastFailure)
		if elapsedTime < sub.timeout {
			_ = sub.deleteAll(peerID)
		}
	} else {
		sub.failedPeers[peerID] = time.Now()
	}
}
115
vendor/github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/filter_map.go
generated
vendored
Normal file
@@ -0,0 +1,115 @@
package legacy_filter

import (
	"sync"

	"github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/pb"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"github.com/waku-org/go-waku/waku/v2/timesource"
)

type FilterMap struct {
	sync.RWMutex
	timesource  timesource.Timesource
	items       map[string]Filter
	broadcaster relay.Broadcaster
}

type FilterMapItem struct {
	Key   string
	Value Filter
}

func NewFilterMap(broadcaster relay.Broadcaster, timesource timesource.Timesource) *FilterMap {
	return &FilterMap{
		timesource:  timesource,
		items:       make(map[string]Filter),
		broadcaster: broadcaster,
	}
}

func (fm *FilterMap) Set(key string, value Filter) {
	fm.Lock()
	defer fm.Unlock()

	fm.items[key] = value
}

func (fm *FilterMap) Get(key string) (Filter, bool) {
	fm.Lock()
	defer fm.Unlock()

	value, ok := fm.items[key]

	return value, ok
}

func (fm *FilterMap) Delete(key string) {
	fm.Lock()
	defer fm.Unlock()

	_, ok := fm.items[key]
	if !ok {
		return
	}

	close(fm.items[key].Chan)
	delete(fm.items, key)
}

func (fm *FilterMap) RemoveAll() {
	fm.Lock()
	defer fm.Unlock()

	for k, v := range fm.items {
		close(v.Chan)
		delete(fm.items, k)
	}
}

func (fm *FilterMap) Items() <-chan FilterMapItem {
	c := make(chan FilterMapItem)

	f := func() {
		fm.RLock()
		defer fm.RUnlock()

		for k, v := range fm.items {
			c <- FilterMapItem{k, v}
		}
		close(c)
	}
	go f()

	return c
}

// Notify is used to push a received message from a filter subscription to
// any content filter registered on this node and to the broadcast subscribers
func (fm *FilterMap) Notify(msg *pb.WakuMessage, requestID string) {
	fm.RLock()
	defer fm.RUnlock()

	filter, ok := fm.items[requestID]
	if !ok {
		// We do this because the key for the filter is set to the requestID received from the filter protocol.
		// This means we do not need to check the content filter explicitly, as every MessagePush already contains
		// the requestID of the corresponding filter.
		return
	}

	envelope := protocol.NewEnvelope(msg, fm.timesource.Now().UnixNano(), filter.Topic)

	// Broadcasting message so it's stored
	fm.broadcaster.Submit(envelope)

	// TODO: In case of no topics we should either trigger here for all messages,
	// or we should not allow such a filter to exist in the first place.
	for _, contentTopic := range filter.ContentFilters {
		if msg.ContentTopic == contentTopic {
			filter.Chan <- envelope
			break
		}
	}
}
167
vendor/github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/filter_subscribers.go
generated
vendored
Normal file
@@ -0,0 +1,167 @@
package legacy_filter

import (
	"sync"
	"time"

	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/pb"
)

type Subscriber struct {
	peer      peer.ID
	requestID string
	filter    *pb.FilterRequest // @TODO MAKE THIS A SEQUENCE AGAIN?
}

func (sub Subscriber) HasContentTopic(topic string) bool {
	if len(sub.filter.ContentFilters) == 0 {
		return true // When the subscriber has no specific ContentTopic filter
	}

	for _, filter := range sub.filter.ContentFilters {
		if filter.ContentTopic == topic {
			return true
		}
	}
	return false
}

type Subscribers struct {
	sync.RWMutex
	subscribers []Subscriber
	timeout     time.Duration
	failedPeers map[peer.ID]time.Time
}

func NewSubscribers(timeout time.Duration) *Subscribers {
	return &Subscribers{
		timeout:     timeout,
		failedPeers: make(map[peer.ID]time.Time),
	}
}

func (sub *Subscribers) Clear() {
	sub.Lock()
	defer sub.Unlock()

	sub.subscribers = nil
	sub.failedPeers = make(map[peer.ID]time.Time)
}

func (sub *Subscribers) Append(s Subscriber) int {
	sub.Lock()
	defer sub.Unlock()

	sub.subscribers = append(sub.subscribers, s)
	return len(sub.subscribers)
}

func (sub *Subscribers) Items(contentTopic *string) <-chan Subscriber {
	c := make(chan Subscriber)

	f := func() {
		sub.RLock()
		defer sub.RUnlock()
		for _, s := range sub.subscribers {
			if contentTopic == nil || s.HasContentTopic(*contentTopic) {
				c <- s
			}
		}
		close(c)
	}
	go f()

	return c
}

func (sub *Subscribers) Length() int {
	sub.RLock()
	defer sub.RUnlock()

	return len(sub.subscribers)
}

func (sub *Subscribers) IsFailedPeer(peerID peer.ID) bool {
	sub.RLock()
	defer sub.RUnlock()
	_, ok := sub.failedPeers[peerID]
	return ok
}

func (sub *Subscribers) FlagAsSuccess(peerID peer.ID) {
	sub.Lock()
	defer sub.Unlock()

	_, ok := sub.failedPeers[peerID]
	if ok {
		delete(sub.failedPeers, peerID)
	}
}

func (sub *Subscribers) FlagAsFailure(peerID peer.ID) {
	sub.Lock()
	defer sub.Unlock()

	lastFailure, ok := sub.failedPeers[peerID]
	if ok {
		elapsedTime := time.Since(lastFailure)
		if elapsedTime > sub.timeout {
			var tmpSubs []Subscriber
			for _, s := range sub.subscribers {
				if s.peer != peerID {
					tmpSubs = append(tmpSubs, s)
				}
			}
			sub.subscribers = tmpSubs

			delete(sub.failedPeers, peerID)
		}
	} else {
		sub.failedPeers[peerID] = time.Now()
	}
}

// RemoveContentFilters removes a set of content filters registered for a specific peer
func (sub *Subscribers) RemoveContentFilters(peerID peer.ID, requestID string, contentFilters []*pb.FilterRequest_ContentFilter) {
	sub.Lock()
	defer sub.Unlock()

	var peerIdsToRemove []peer.ID

	for subIndex, subscriber := range sub.subscribers {
		if subscriber.peer != peerID || subscriber.requestID != requestID {
			continue
		}

		// make sure we delete the content filter
		// if no more topics are left
		for _, contentFilter := range contentFilters {
			subCfs := subscriber.filter.ContentFilters
			for i, cf := range subCfs {
				if cf.ContentTopic == contentFilter.ContentTopic {
					l := len(subCfs) - 1
					subCfs[i] = subCfs[l]
					subscriber.filter.ContentFilters = subCfs[:l]
				}
			}
			sub.subscribers[subIndex] = subscriber
		}

		if len(subscriber.filter.ContentFilters) == 0 {
			peerIdsToRemove = append(peerIdsToRemove, subscriber.peer)
		}
	}

	// make sure we delete the subscriber
	// if no more content filters are left
	for _, peerID := range peerIdsToRemove {
		for i, s := range sub.subscribers {
			if s.peer == peerID && s.requestID == requestID {
				l := len(sub.subscribers) - 1
				sub.subscribers[i] = sub.subscribers[l]
				sub.subscribers = sub.subscribers[:l]
			}
		}
	}
}
75
vendor/github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/metrics.go
generated
vendored
Normal file
@@ -0,0 +1,75 @@
package legacy_filter

import (
	"github.com/libp2p/go-libp2p/p2p/metricshelper"
	"github.com/prometheus/client_golang/prometheus"
)

var filterMessages = prometheus.NewCounter(
	prometheus.CounterOpts{
		Name: "legacy_filter_messages",
		Help: "The number of messages received via the legacy filter protocol",
	})

var filterErrors = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "legacy_filter_errors",
		Help: "The distribution of the legacy filter protocol errors",
	},
	[]string{"error_type"},
)

var filterSubscribers = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Name: "legacy_filter_subscriptions",
		Help: "The number of legacy filter subscribers",
	})

var collectors = []prometheus.Collector{
	filterMessages,
	filterErrors,
	filterSubscribers,
}

// Metrics exposes the functions required to update prometheus metrics for the legacy filter protocol
type Metrics interface {
	RecordMessages(num int)
	RecordSubscribers(num int)
	RecordError(err metricsErrCategory)
}

type metricsImpl struct {
	reg prometheus.Registerer
}

func newMetrics(reg prometheus.Registerer) Metrics {
	metricshelper.RegisterCollectors(reg, collectors...)
	return &metricsImpl{
		reg: reg,
	}
}

// RecordMessages increases the counter for the number of messages received via waku filter
func (m *metricsImpl) RecordMessages(num int) {
	filterMessages.Add(float64(num))
}

type metricsErrCategory string

var (
	decodeRPCFailure    metricsErrCategory = "decode_rpc_failure"
	dialFailure         metricsErrCategory = "dial_failure"
	pushWriteError      metricsErrCategory = "push_write_error"
	peerNotFoundFailure metricsErrCategory = "peer_not_found_failure"
	writeRequestFailure metricsErrCategory = "write_request_failure"
)

// RecordError increases the counter for different error types
func (m *metricsImpl) RecordError(err metricsErrCategory) {
	filterErrors.WithLabelValues(string(err)).Inc()
}

// RecordSubscribers tracks the current number of filter subscribers
func (m *metricsImpl) RecordSubscribers(num int) {
	filterSubscribers.Set(float64(num))
}
5
vendor/github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/pb/generate.go
generated
vendored
Normal file
@@ -0,0 +1,5 @@
package pb

//go:generate mv ./../../waku-proto/waku/filter/v2beta1/filter.proto ./../../waku-proto/waku/filter/v2beta1/legacy_filter.proto
//go:generate protoc -I./../../waku-proto/waku/filter/v2beta1/. -I./../../waku-proto/ --go_opt=paths=source_relative --go_opt=Mlegacy_filter.proto=github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/pb --go_opt=Mwaku/message/v1/message.proto=github.com/waku-org/go-waku/waku/v2/protocol/pb --go_out=. ./../../waku-proto/waku/filter/v2beta1/legacy_filter.proto
//go:generate mv ./../../waku-proto/waku/filter/v2beta1/legacy_filter.proto ./../../waku-proto/waku/filter/v2beta1/filter.proto
394
vendor/github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/pb/legacy_filter.pb.go
generated
vendored
Normal file
@@ -0,0 +1,394 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.31.0
// 	protoc        v4.24.4
// source: legacy_filter.proto

// 12/WAKU2-FILTER rfc: https://rfc.vac.dev/spec/12/
// Protocol identifier: /vac/waku/filter/2.0.0-beta1

package pb

import (
	pb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

type FilterRequest struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Subscribe      bool                           `protobuf:"varint,1,opt,name=subscribe,proto3" json:"subscribe,omitempty"`
	Topic          string                         `protobuf:"bytes,2,opt,name=topic,proto3" json:"topic,omitempty"`
	ContentFilters []*FilterRequest_ContentFilter `protobuf:"bytes,3,rep,name=content_filters,json=contentFilters,proto3" json:"content_filters,omitempty"`
}

func (x *FilterRequest) Reset() {
	*x = FilterRequest{}
	if protoimpl.UnsafeEnabled {
		mi := &file_legacy_filter_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *FilterRequest) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*FilterRequest) ProtoMessage() {}

func (x *FilterRequest) ProtoReflect() protoreflect.Message {
	mi := &file_legacy_filter_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use FilterRequest.ProtoReflect.Descriptor instead.
func (*FilterRequest) Descriptor() ([]byte, []int) {
	return file_legacy_filter_proto_rawDescGZIP(), []int{0}
}

func (x *FilterRequest) GetSubscribe() bool {
	if x != nil {
		return x.Subscribe
	}
	return false
}

func (x *FilterRequest) GetTopic() string {
	if x != nil {
		return x.Topic
	}
	return ""
}

func (x *FilterRequest) GetContentFilters() []*FilterRequest_ContentFilter {
	if x != nil {
		return x.ContentFilters
	}
	return nil
}

type MessagePush struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Messages []*pb.WakuMessage `protobuf:"bytes,1,rep,name=messages,proto3" json:"messages,omitempty"`
}

func (x *MessagePush) Reset() {
	*x = MessagePush{}
	if protoimpl.UnsafeEnabled {
		mi := &file_legacy_filter_proto_msgTypes[1]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *MessagePush) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*MessagePush) ProtoMessage() {}

func (x *MessagePush) ProtoReflect() protoreflect.Message {
	mi := &file_legacy_filter_proto_msgTypes[1]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use MessagePush.ProtoReflect.Descriptor instead.
func (*MessagePush) Descriptor() ([]byte, []int) {
	return file_legacy_filter_proto_rawDescGZIP(), []int{1}
}

func (x *MessagePush) GetMessages() []*pb.WakuMessage {
	if x != nil {
		return x.Messages
	}
	return nil
}

type FilterRpc struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	RequestId string         `protobuf:"bytes,1,opt,name=request_id,json=requestId,proto3" json:"request_id,omitempty"`
	Request   *FilterRequest `protobuf:"bytes,2,opt,name=request,proto3,oneof" json:"request,omitempty"`
	Push      *MessagePush   `protobuf:"bytes,3,opt,name=push,proto3,oneof" json:"push,omitempty"`
}

func (x *FilterRpc) Reset() {
	*x = FilterRpc{}
	if protoimpl.UnsafeEnabled {
		mi := &file_legacy_filter_proto_msgTypes[2]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *FilterRpc) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*FilterRpc) ProtoMessage() {}

func (x *FilterRpc) ProtoReflect() protoreflect.Message {
	mi := &file_legacy_filter_proto_msgTypes[2]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use FilterRpc.ProtoReflect.Descriptor instead.
func (*FilterRpc) Descriptor() ([]byte, []int) {
	return file_legacy_filter_proto_rawDescGZIP(), []int{2}
}

func (x *FilterRpc) GetRequestId() string {
	if x != nil {
		return x.RequestId
	}
	return ""
}

func (x *FilterRpc) GetRequest() *FilterRequest {
	if x != nil {
		return x.Request
	}
	return nil
}

func (x *FilterRpc) GetPush() *MessagePush {
	if x != nil {
		return x.Push
	}
	return nil
}

type FilterRequest_ContentFilter struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	ContentTopic string `protobuf:"bytes,1,opt,name=content_topic,json=contentTopic,proto3" json:"content_topic,omitempty"`
}

func (x *FilterRequest_ContentFilter) Reset() {
	*x = FilterRequest_ContentFilter{}
	if protoimpl.UnsafeEnabled {
		mi := &file_legacy_filter_proto_msgTypes[3]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *FilterRequest_ContentFilter) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*FilterRequest_ContentFilter) ProtoMessage() {}

func (x *FilterRequest_ContentFilter) ProtoReflect() protoreflect.Message {
	mi := &file_legacy_filter_proto_msgTypes[3]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use FilterRequest_ContentFilter.ProtoReflect.Descriptor instead.
func (*FilterRequest_ContentFilter) Descriptor() ([]byte, []int) {
	return file_legacy_filter_proto_rawDescGZIP(), []int{0, 0}
}

func (x *FilterRequest_ContentFilter) GetContentTopic() string {
	if x != nil {
		return x.ContentTopic
	}
	return ""
}

var File_legacy_filter_proto protoreflect.FileDescriptor

var file_legacy_filter_proto_rawDesc = []byte{
	0x0a, 0x13, 0x6c, 0x65, 0x67, 0x61, 0x63, 0x79, 0x5f, 0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e,
	0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x13, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x66, 0x69, 0x6c, 0x74,
	0x65, 0x72, 0x2e, 0x76, 0x32, 0x62, 0x65, 0x74, 0x61, 0x31, 0x1a, 0x1d, 0x77, 0x61, 0x6b, 0x75,
	0x2f, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x65, 0x73, 0x73,
	0x61, 0x67, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xd4, 0x01, 0x0a, 0x0d, 0x46, 0x69,
	0x6c, 0x74, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1c, 0x0a, 0x09, 0x73,
	0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09,
	0x73, 0x75, 0x62, 0x73, 0x63, 0x72, 0x69, 0x62, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x74, 0x6f, 0x70,
	0x69, 0x63, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x74, 0x6f, 0x70, 0x69, 0x63, 0x12,
	0x59, 0x0a, 0x0f, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x5f, 0x66, 0x69, 0x6c, 0x74, 0x65,
	0x72, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x30, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e,
	0x66, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x62, 0x65, 0x74, 0x61, 0x31, 0x2e, 0x46,
	0x69, 0x6c, 0x74, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x2e, 0x43, 0x6f, 0x6e,
	0x74, 0x65, 0x6e, 0x74, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x74,
	0x65, 0x6e, 0x74, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x73, 0x1a, 0x34, 0x0a, 0x0d, 0x43, 0x6f,
	0x6e, 0x74, 0x65, 0x6e, 0x74, 0x46, 0x69, 0x6c, 0x74, 0x65, 0x72, 0x12, 0x23, 0x0a, 0x0d, 0x63,
	0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x5f, 0x74, 0x6f, 0x70, 0x69, 0x63, 0x18, 0x01, 0x20, 0x01,
	0x28, 0x09, 0x52, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x54, 0x6f, 0x70, 0x69, 0x63,
	0x22, 0x47, 0x0a, 0x0b, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x50, 0x75, 0x73, 0x68, 0x12,
	0x38, 0x0a, 0x08, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28,
	0x0b, 0x32, 0x1c, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
	0x2e, 0x76, 0x31, 0x2e, 0x57, 0x61, 0x6b, 0x75, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x52,
	0x08, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x22, 0xbd, 0x01, 0x0a, 0x09, 0x46, 0x69,
	0x6c, 0x74, 0x65, 0x72, 0x52, 0x70, 0x63, 0x12, 0x1d, 0x0a, 0x0a, 0x72, 0x65, 0x71, 0x75, 0x65,
	0x73, 0x74, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x71,
	0x75, 0x65, 0x73, 0x74, 0x49, 0x64, 0x12, 0x41, 0x0a, 0x07, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73,
	0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x66,
	0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x62, 0x65, 0x74, 0x61, 0x31, 0x2e, 0x46, 0x69,
	0x6c, 0x74, 0x65, 0x72, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x48, 0x00, 0x52, 0x07, 0x72,
	0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x88, 0x01, 0x01, 0x12, 0x39, 0x0a, 0x04, 0x70, 0x75, 0x73,
	0x68, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x20, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x66,
	0x69, 0x6c, 0x74, 0x65, 0x72, 0x2e, 0x76, 0x32, 0x62, 0x65, 0x74, 0x61, 0x31, 0x2e, 0x4d, 0x65,
	0x73, 0x73, 0x61, 0x67, 0x65, 0x50, 0x75, 0x73, 0x68, 0x48, 0x01, 0x52, 0x04, 0x70, 0x75, 0x73,
	0x68, 0x88, 0x01, 0x01, 0x42, 0x0a, 0x0a, 0x08, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
	0x42, 0x07, 0x0a, 0x05, 0x5f, 0x70, 0x75, 0x73, 0x68, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
file_legacy_filter_proto_rawDescOnce sync.Once
|
||||
file_legacy_filter_proto_rawDescData = file_legacy_filter_proto_rawDesc
|
||||
)
|
||||
|
||||
func file_legacy_filter_proto_rawDescGZIP() []byte {
|
||||
file_legacy_filter_proto_rawDescOnce.Do(func() {
|
||||
file_legacy_filter_proto_rawDescData = protoimpl.X.CompressGZIP(file_legacy_filter_proto_rawDescData)
|
||||
})
|
||||
return file_legacy_filter_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_legacy_filter_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
|
||||
var file_legacy_filter_proto_goTypes = []interface{}{
|
||||
(*FilterRequest)(nil), // 0: waku.filter.v2beta1.FilterRequest
|
||||
(*MessagePush)(nil), // 1: waku.filter.v2beta1.MessagePush
|
||||
(*FilterRpc)(nil), // 2: waku.filter.v2beta1.FilterRpc
|
||||
(*FilterRequest_ContentFilter)(nil), // 3: waku.filter.v2beta1.FilterRequest.ContentFilter
|
||||
(*pb.WakuMessage)(nil), // 4: waku.message.v1.WakuMessage
|
||||
}
|
||||
var file_legacy_filter_proto_depIdxs = []int32{
|
||||
3, // 0: waku.filter.v2beta1.FilterRequest.content_filters:type_name -> waku.filter.v2beta1.FilterRequest.ContentFilter
|
||||
4, // 1: waku.filter.v2beta1.MessagePush.messages:type_name -> waku.message.v1.WakuMessage
|
||||
0, // 2: waku.filter.v2beta1.FilterRpc.request:type_name -> waku.filter.v2beta1.FilterRequest
|
||||
1, // 3: waku.filter.v2beta1.FilterRpc.push:type_name -> waku.filter.v2beta1.MessagePush
|
||||
4, // [4:4] is the sub-list for method output_type
|
||||
4, // [4:4] is the sub-list for method input_type
|
||||
4, // [4:4] is the sub-list for extension type_name
|
||||
4, // [4:4] is the sub-list for extension extendee
|
||||
0, // [0:4] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_legacy_filter_proto_init() }
|
||||
func file_legacy_filter_proto_init() {
|
||||
if File_legacy_filter_proto != nil {
|
||||
return
|
||||
}
|
||||
if !protoimpl.UnsafeEnabled {
|
||||
file_legacy_filter_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*FilterRequest); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_legacy_filter_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*MessagePush); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_legacy_filter_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*FilterRpc); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_legacy_filter_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*FilterRequest_ContentFilter); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
file_legacy_filter_proto_msgTypes[2].OneofWrappers = []interface{}{}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_legacy_filter_proto_rawDesc,
|
||||
NumEnums: 0,
|
||||
NumMessages: 4,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
GoTypes: file_legacy_filter_proto_goTypes,
|
||||
DependencyIndexes: file_legacy_filter_proto_depIdxs,
|
||||
MessageInfos: file_legacy_filter_proto_msgTypes,
|
||||
}.Build()
|
||||
File_legacy_filter_proto = out.File
|
||||
file_legacy_filter_proto_rawDesc = nil
|
||||
file_legacy_filter_proto_goTypes = nil
|
||||
file_legacy_filter_proto_depIdxs = nil
|
||||
}
|
||||
468
vendor/github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/waku_filter.go
generated
vendored
Normal file
@@ -0,0 +1,468 @@
package legacy_filter

import (
	"context"
	"encoding/hex"
	"errors"
	"math"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	libp2pProtocol "github.com/libp2p/go-libp2p/core/protocol"
	"github.com/libp2p/go-msgio/pbio"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/waku-org/go-waku/logging"
	"github.com/waku-org/go-waku/waku/v2/peermanager"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/pb"
	wpb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"github.com/waku-org/go-waku/waku/v2/service"
	"github.com/waku-org/go-waku/waku/v2/timesource"
	"go.uber.org/zap"
	"golang.org/x/sync/errgroup"
)

var (
	ErrNoPeersAvailable = errors.New("no suitable remote peers")
)

type (
	Filter struct {
		filterID       string
		PeerID         peer.ID
		Topic          string
		ContentFilters []string
		Chan           chan *protocol.Envelope
	}

	ContentFilter struct {
		Topic         string
		ContentTopics []string
	}

	FilterSubscription struct {
		RequestID string
		Peer      peer.ID
	}

	WakuFilter struct {
		*service.CommonService
		h          host.Host
		pm         *peermanager.PeerManager
		isFullNode bool
		msgSub     *relay.Subscription
		metrics    Metrics
		log        *zap.Logger

		filters     *FilterMap
		subscribers *Subscribers
	}
)

// FilterID_v20beta1 is the current Waku Filter protocol identifier
const FilterID_v20beta1 = libp2pProtocol.ID("/vac/waku/filter/2.0.0-beta1")

// NewWakuFilter returns a new instance of the WakuFilter struct, set up according to the chosen parameters and options
func NewWakuFilter(broadcaster relay.Broadcaster, isFullNode bool, timesource timesource.Timesource, reg prometheus.Registerer, log *zap.Logger, opts ...Option) *WakuFilter {
	wf := new(WakuFilter)
	wf.log = log.Named("filter").With(zap.Bool("fullNode", isFullNode))

	params := new(FilterParameters)
	optList := DefaultOptions()
	optList = append(optList, opts...)
	for _, opt := range optList {
		opt(params)
	}

	wf.isFullNode = isFullNode
	wf.CommonService = service.NewCommonService()
	wf.filters = NewFilterMap(broadcaster, timesource)
	wf.subscribers = NewSubscribers(params.Timeout)
	wf.metrics = newMetrics(reg)

	return wf
}

// SetHost sets the host, making it possible to mount or consume a protocol
func (wf *WakuFilter) SetHost(h host.Host) {
	wf.h = h
}

func (wf *WakuFilter) Start(ctx context.Context, sub *relay.Subscription) error {
	return wf.CommonService.Start(ctx, func() error {
		return wf.start(sub)
	})
}

func (wf *WakuFilter) start(sub *relay.Subscription) error {
	wf.h.SetStreamHandlerMatch(FilterID_v20beta1, protocol.PrefixTextMatch(string(FilterID_v20beta1)), wf.onRequest(wf.Context()))
	wf.msgSub = sub
	wf.WaitGroup().Add(1)
	go wf.filterListener(wf.Context())
	wf.log.Info("filter protocol started")
	return nil
}

func (wf *WakuFilter) onRequest(ctx context.Context) func(network.Stream) {
	return func(stream network.Stream) {
		peerID := stream.Conn().RemotePeer()
		logger := wf.log.With(logging.HostID("peer", peerID))

		filterRPCRequest := &pb.FilterRpc{}

		reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

		err := reader.ReadMsg(filterRPCRequest)
		if err != nil {
			wf.metrics.RecordError(decodeRPCFailure)
			logger.Error("reading request", zap.Error(err))
			if err := stream.Reset(); err != nil {
				wf.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		logger.Info("received request")

		if filterRPCRequest.Push != nil && len(filterRPCRequest.Push.Messages) > 0 {
			// We're on a light node.
			// This is a message push coming from a full node.
			for _, message := range filterRPCRequest.Push.Messages {
				wf.filters.Notify(message, filterRPCRequest.RequestId) // Trigger filter handlers on a light node
			}

			logger.Info("received a message push", zap.Int("messages", len(filterRPCRequest.Push.Messages)))
			wf.metrics.RecordMessages(len(filterRPCRequest.Push.Messages))
		} else if filterRPCRequest.Request != nil && wf.isFullNode {
			// We're on a full node.
			// This is a filter request coming from a light node.
			if filterRPCRequest.Request.Subscribe {
				subscriber := Subscriber{peer: stream.Conn().RemotePeer(), requestID: filterRPCRequest.RequestId, filter: filterRPCRequest.Request}
				if subscriber.filter.Topic == "" { // @TODO: review if empty topic is possible
					subscriber.filter.Topic = relay.DefaultWakuTopic
				}

				subscribersLen := wf.subscribers.Append(subscriber)

				logger.Info("adding subscriber")
				wf.metrics.RecordSubscribers(subscribersLen)
			} else {
				wf.subscribers.RemoveContentFilters(peerID, filterRPCRequest.RequestId, filterRPCRequest.Request.ContentFilters)

				logger.Info("removing subscriber")
				wf.metrics.RecordSubscribers(wf.subscribers.Length())
			}
		} else {
			logger.Error("can't serve request")
			if err := stream.Reset(); err != nil {
				wf.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		stream.Close()
	}
}

func (wf *WakuFilter) pushMessage(ctx context.Context, subscriber Subscriber, msg *wpb.WakuMessage) error {
	pushRPC := &pb.FilterRpc{RequestId: subscriber.requestID, Push: &pb.MessagePush{Messages: []*wpb.WakuMessage{msg}}}
	logger := wf.log.With(logging.HostID("peer", subscriber.peer))

	stream, err := wf.h.NewStream(ctx, subscriber.peer, FilterID_v20beta1)
	if err != nil {
		wf.subscribers.FlagAsFailure(subscriber.peer)
		logger.Error("opening peer stream", zap.Error(err))
		wf.metrics.RecordError(dialFailure)
		return err
	}

	writer := pbio.NewDelimitedWriter(stream)
	err = writer.WriteMsg(pushRPC)
	if err != nil {
		logger.Error("pushing messages to peer", zap.Error(err))
		wf.subscribers.FlagAsFailure(subscriber.peer)
		wf.metrics.RecordError(pushWriteError)
		if err := stream.Reset(); err != nil {
			wf.log.Error("resetting connection", zap.Error(err))
		}
		return nil
	}

	stream.Close()

	wf.subscribers.FlagAsSuccess(subscriber.peer)
	return nil
}

func (wf *WakuFilter) filterListener(ctx context.Context) {
	defer wf.WaitGroup().Done()

	// This function is invoked for each message received
	// on the full node in context of Waku2-Filter
	handle := func(envelope *protocol.Envelope) error { // async
		msg := envelope.Message()
		pubsubTopic := envelope.PubsubTopic()
		logger := wf.log.With(zap.Stringer("message", msg))
		g := new(errgroup.Group)
		// Each subscriber is a light node that earlier on invoked
		// a FilterRequest on this node
		for subscriber := range wf.subscribers.Items(&(msg.ContentTopic)) {
			logger := logger.With(logging.HostID("subscriber", subscriber.peer))
			subscriber := subscriber // https://golang.org/doc/faq#closures_and_goroutines
			if subscriber.filter.Topic != pubsubTopic {
				logger.Info("pubsub topic mismatch",
					zap.String("subscriberTopic", subscriber.filter.Topic),
					zap.String("messageTopic", pubsubTopic))
				continue
			}

			// Do a message push to light node
			logger.Info("pushing message to light node", zap.String("contentTopic", msg.ContentTopic))
			g.Go(func() (err error) {
				err = wf.pushMessage(ctx, subscriber, msg)
				if err != nil {
					logger.Error("pushing message", zap.Error(err))
				}
				return err
			})
		}

		return g.Wait()
	}

	for m := range wf.msgSub.Ch {
		if err := handle(m); err != nil {
			wf.log.Error("handling message", zap.Error(err))
		}
	}
}

// requestSubscription takes a ContentFilter,
// selects a peer with filter support, dials it,
// and submits the FilterRequest wrapped in a FilterRPC
func (wf *WakuFilter) requestSubscription(ctx context.Context, filter ContentFilter, opts ...FilterSubscribeOption) (subscription *FilterSubscription, err error) {
	params := new(FilterSubscribeParameters)
	params.log = wf.log
	params.host = wf.h

	optList := DefaultSubscribtionOptions()
	optList = append(optList, opts...)
	for _, opt := range optList {
		opt(params)
	}
	if wf.pm != nil && params.selectedPeer == "" {
		params.selectedPeer, _ = wf.pm.SelectPeer(
			peermanager.PeerSelectionCriteria{
				SelectionType: params.peerSelectionType,
				Proto:         FilterID_v20beta1,
				PubsubTopics:  []string{filter.Topic},
				SpecificPeers: params.preferredPeers,
				Ctx:           ctx,
			},
		)
	}
	if params.selectedPeer == "" {
		wf.metrics.RecordError(peerNotFoundFailure)
		return nil, ErrNoPeersAvailable
	}

	var contentFilters []*pb.FilterRequest_ContentFilter
	for _, ct := range filter.ContentTopics {
		contentFilters = append(contentFilters, &pb.FilterRequest_ContentFilter{ContentTopic: ct})
	}

	request := &pb.FilterRequest{
		Subscribe:      true,
		Topic:          filter.Topic,
		ContentFilters: contentFilters,
	}

	stream, err := wf.h.NewStream(ctx, params.selectedPeer, FilterID_v20beta1)
	if err != nil {
		wf.metrics.RecordError(dialFailure)
		return
	}

	// This is the only successful path to subscription
	requestID := hex.EncodeToString(protocol.GenerateRequestID())

	writer := pbio.NewDelimitedWriter(stream)
	filterRPC := &pb.FilterRpc{RequestId: requestID, Request: request}
	wf.log.Debug("sending filterRPC", zap.Stringer("rpc", filterRPC))
	err = writer.WriteMsg(filterRPC)
	if err != nil {
		wf.metrics.RecordError(writeRequestFailure)
		wf.log.Error("sending filterRPC", zap.Error(err))
		if err := stream.Reset(); err != nil {
			wf.log.Error("resetting connection", zap.Error(err))
		}
		return
	}

	stream.Close()

	subscription = new(FilterSubscription)
	subscription.Peer = params.selectedPeer
	subscription.RequestID = requestID

	return
}

// Unsubscribe is used to stop receiving messages from a peer that match a content filter
func (wf *WakuFilter) Unsubscribe(ctx context.Context, contentFilter ContentFilter, peer peer.ID) error {
	stream, err := wf.h.NewStream(ctx, peer, FilterID_v20beta1)
	if err != nil {
		wf.metrics.RecordError(dialFailure)
		return err
	}

	// This is the only successful path to unsubscription
	id := protocol.GenerateRequestID()

	var contentFilters []*pb.FilterRequest_ContentFilter
	for _, ct := range contentFilter.ContentTopics {
		contentFilters = append(contentFilters, &pb.FilterRequest_ContentFilter{ContentTopic: ct})
	}

	request := &pb.FilterRequest{
		Subscribe:      false,
		Topic:          contentFilter.Topic,
		ContentFilters: contentFilters,
	}

	writer := pbio.NewDelimitedWriter(stream)
	filterRPC := &pb.FilterRpc{RequestId: hex.EncodeToString(id), Request: request}
	err = writer.WriteMsg(filterRPC)
	if err != nil {
		wf.metrics.RecordError(writeRequestFailure)
		if err := stream.Reset(); err != nil {
			wf.log.Error("resetting connection", zap.Error(err))
		}
		return err
	}

	stream.Close()

	return nil
}

// Stop unmounts the filter protocol
func (wf *WakuFilter) Stop() {
	wf.CommonService.Stop(func() {
		wf.msgSub.Unsubscribe()

		wf.h.RemoveStreamHandler(FilterID_v20beta1)
		wf.filters.RemoveAll()
		wf.subscribers.Clear()
	})
}

// Subscribe sets up a subscription to receive messages that match a specific content filter
func (wf *WakuFilter) Subscribe(ctx context.Context, f ContentFilter, opts ...FilterSubscribeOption) (filterID string, theFilter Filter, err error) {
	// TODO: check if there's an existing pubsub topic that uses the same peer. If so, reuse filter, and return same channel and filterID

	// Registers for messages that match a specific filter. Triggers the handler whenever a message is received.
	// ContentFilterChan takes MessagePush structs
	remoteSubs, err := wf.requestSubscription(ctx, f, opts...)
	if err != nil || remoteSubs.RequestID == "" {
		// Failed to subscribe
		wf.log.Error("requesting subscription", zap.Error(err))
		return
	}

	// Register handler for filter, whether remote subscription succeeded or not

	filterID = remoteSubs.RequestID
	theFilter = Filter{
		filterID:       filterID,
		PeerID:         remoteSubs.Peer,
		Topic:          f.Topic,
		ContentFilters: f.ContentTopics,
		Chan:           make(chan *protocol.Envelope, 1024), // To avoid blocking
	}
	wf.filters.Set(filterID, theFilter)

	return
}

// UnsubscribeByFilter removes a subscription to a filter node completely
// using a filter. It also closes the filter channel
func (wf *WakuFilter) UnsubscribeByFilter(ctx context.Context, filter Filter) error {
	err := wf.UnsubscribeFilterByID(ctx, filter.filterID)
	if err != nil {
		close(filter.Chan)
	}
	return err
}

// UnsubscribeFilterByID removes a subscription to a filter node completely
// using the filterID returned when the subscription was created
func (wf *WakuFilter) UnsubscribeFilterByID(ctx context.Context, filterID string) error {
	var f Filter
	var ok bool

	if f, ok = wf.filters.Get(filterID); !ok {
		return errors.New("filter not found")
	}

	cf := ContentFilter{
		Topic:         f.Topic,
		ContentTopics: f.ContentFilters,
	}

	err := wf.Unsubscribe(ctx, cf, f.PeerID)
	if err != nil {
		return err
	}

	wf.filters.Delete(filterID)

	return nil
}

// UnsubscribeFilter removes content topics from a filter subscription. If all
// the contentTopics are removed the subscription is dropped completely
func (wf *WakuFilter) UnsubscribeFilter(ctx context.Context, cf ContentFilter) error {
	// Remove local filter
	idsToRemove := make(map[string]struct{})
	for filterMapItem := range wf.filters.Items() {
		f := filterMapItem.Value
		id := filterMapItem.Key

		if f.Topic != cf.Topic {
			continue
		}

		// Send message to full node in order to unsubscribe
		err := wf.Unsubscribe(ctx, cf, f.PeerID)
		if err != nil {
			return err
		}

		// Iterate filter entries to remove matching content topics;
		// make sure we delete the content filter
		// if no more topics are left
		for _, cfToDelete := range cf.ContentTopics {
			for i, cf := range f.ContentFilters {
				if cf == cfToDelete {
					l := len(f.ContentFilters) - 1
					f.ContentFilters[l], f.ContentFilters[i] = f.ContentFilters[i], f.ContentFilters[l]
					f.ContentFilters = f.ContentFilters[:l]
					break
				}
			}
			if len(f.ContentFilters) == 0 {
				idsToRemove[id] = struct{}{}
			}
		}
	}

	for rID := range idsToRemove {
		wf.filters.Delete(rID)
	}

	return nil
}
79
vendor/github.com/waku-org/go-waku/waku/v2/protocol/legacy_filter/waku_filter_option.go
generated
vendored
Normal file
@@ -0,0 +1,79 @@
package legacy_filter

import (
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/waku-org/go-waku/waku/v2/peermanager"
	"go.uber.org/zap"
)

type (
	FilterSubscribeParameters struct {
		host              host.Host
		selectedPeer      peer.ID
		peerSelectionType peermanager.PeerSelection
		preferredPeers    peer.IDSlice
		log               *zap.Logger
	}

	FilterSubscribeOption func(*FilterSubscribeParameters)

	FilterParameters struct {
		Timeout time.Duration
		pm      *peermanager.PeerManager
	}

	Option func(*FilterParameters)
)

func WithTimeout(timeout time.Duration) Option {
	return func(params *FilterParameters) {
		params.Timeout = timeout
	}
}

func WithPeerManager(pm *peermanager.PeerManager) Option {
	return func(params *FilterParameters) {
		params.pm = pm
	}
}

func WithPeer(p peer.ID) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) {
		params.selectedPeer = p
	}
}

// WithAutomaticPeerSelection is an option used to randomly select a peer from the peer store.
// If a list of specific peers is passed, the peer will be chosen from that list assuming it
// supports the chosen protocol; otherwise it will choose a peer from the node peerstore
func WithAutomaticPeerSelection(fromThesePeers ...peer.ID) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) {
		params.peerSelectionType = peermanager.Automatic
		params.preferredPeers = fromThesePeers
	}
}

// WithFastestPeerSelection is an option used to select a peer from the peer store
// with the lowest ping. If a list of specific peers is passed, the peer will be chosen
// from that list assuming it supports the chosen protocol; otherwise it will choose a
// peer from the node peerstore
func WithFastestPeerSelection(fromThesePeers ...peer.ID) FilterSubscribeOption {
	return func(params *FilterSubscribeParameters) {
		params.peerSelectionType = peermanager.LowestRTT
	}
}

func DefaultOptions() []Option {
	return []Option{
		WithTimeout(24 * time.Hour),
	}
}

func DefaultSubscribtionOptions() []FilterSubscribeOption {
	return []FilterSubscribeOption{
		WithAutomaticPeerSelection(),
	}
}
65
vendor/github.com/waku-org/go-waku/waku/v2/protocol/lightpush/metrics.go
generated
vendored
Normal file
@@ -0,0 +1,65 @@
package lightpush

import (
	"github.com/libp2p/go-libp2p/p2p/metricshelper"
	"github.com/prometheus/client_golang/prometheus"
)

var lightpushMessages = prometheus.NewCounter(
	prometheus.CounterOpts{
		Name: "waku_lightpush_messages",
		Help: "The number of messages sent via lightpush protocol",
	})

var lightpushErrors = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "waku_lightpush_errors",
		Help: "The distribution of the lightpush protocol errors",
	},
	[]string{"error_type"},
)

var collectors = []prometheus.Collector{
	lightpushMessages,
	lightpushErrors,
}

// Metrics exposes the functions required to update prometheus metrics for lightpush protocol
type Metrics interface {
	RecordMessage()
	RecordError(err metricsErrCategory)
}

type metricsImpl struct {
	reg prometheus.Registerer
}

func newMetrics(reg prometheus.Registerer) Metrics {
	metricshelper.RegisterCollectors(reg, collectors...)
	return &metricsImpl{
		reg: reg,
	}
}

// RecordMessage is used to increase the counter for the number of messages received via waku lightpush
func (m *metricsImpl) RecordMessage() {
	lightpushMessages.Inc()
}

type metricsErrCategory string

var (
	decodeRPCFailure     metricsErrCategory = "decode_rpc_failure"
	writeRequestFailure  metricsErrCategory = "write_request_failure"
	writeResponseFailure metricsErrCategory = "write_response_failure"
	dialFailure          metricsErrCategory = "dial_failure"
	messagePushFailure   metricsErrCategory = "message_push_failure"
	requestBodyFailure   metricsErrCategory = "request_failure"
	responseBodyFailure  metricsErrCategory = "response_body_failure"
	peerNotFoundFailure  metricsErrCategory = "peer_not_found_failure"
)

// RecordError increases the counter for different error types
func (m *metricsImpl) RecordError(err metricsErrCategory) {
	lightpushErrors.WithLabelValues(string(err)).Inc()
}
3
vendor/github.com/waku-org/go-waku/waku/v2/protocol/lightpush/pb/generate.go
generated
vendored
Normal file
@@ -0,0 +1,3 @@
package pb

//go:generate protoc -I./../../waku-proto/waku/lightpush/v2beta1/. -I./../../waku-proto/ --go_opt=paths=source_relative --go_opt=Mlightpush.proto=github.com/waku-org/go-waku/waku/v2/protocol/lightpush/pb --go_opt=Mwaku/message/v1/message.proto=github.com/waku-org/go-waku/waku/v2/protocol/pb --go_out=. ./../../waku-proto/waku/lightpush/v2beta1/lightpush.proto
328
vendor/github.com/waku-org/go-waku/waku/v2/protocol/lightpush/pb/lightpush.pb.go
generated
vendored
Normal file
@@ -0,0 +1,328 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.31.0
// 	protoc        v4.24.4
// source: lightpush.proto

// 19/WAKU2-LIGHTPUSH rfc: https://rfc.vac.dev/spec/19/
// Protocol identifier: /vac/waku/lightpush/2.0.0-beta1

package pb

import (
	pb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

type PushRequest struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	PubsubTopic string          `protobuf:"bytes,1,opt,name=pubsub_topic,json=pubsubTopic,proto3" json:"pubsub_topic,omitempty"`
	Message     *pb.WakuMessage `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
}

func (x *PushRequest) Reset() {
	*x = PushRequest{}
	if protoimpl.UnsafeEnabled {
		mi := &file_lightpush_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *PushRequest) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*PushRequest) ProtoMessage() {}

func (x *PushRequest) ProtoReflect() protoreflect.Message {
	mi := &file_lightpush_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use PushRequest.ProtoReflect.Descriptor instead.
func (*PushRequest) Descriptor() ([]byte, []int) {
	return file_lightpush_proto_rawDescGZIP(), []int{0}
}

func (x *PushRequest) GetPubsubTopic() string {
	if x != nil {
		return x.PubsubTopic
	}
	return ""
}

func (x *PushRequest) GetMessage() *pb.WakuMessage {
	if x != nil {
		return x.Message
	}
	return nil
}

type PushResponse struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	IsSuccess bool    `protobuf:"varint,1,opt,name=is_success,json=isSuccess,proto3" json:"is_success,omitempty"`
	Info      *string `protobuf:"bytes,2,opt,name=info,proto3,oneof" json:"info,omitempty"`
}

func (x *PushResponse) Reset() {
	*x = PushResponse{}
	if protoimpl.UnsafeEnabled {
		mi := &file_lightpush_proto_msgTypes[1]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *PushResponse) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*PushResponse) ProtoMessage() {}

func (x *PushResponse) ProtoReflect() protoreflect.Message {
	mi := &file_lightpush_proto_msgTypes[1]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use PushResponse.ProtoReflect.Descriptor instead.
func (*PushResponse) Descriptor() ([]byte, []int) {
	return file_lightpush_proto_rawDescGZIP(), []int{1}
}

func (x *PushResponse) GetIsSuccess() bool {
	if x != nil {
		return x.IsSuccess
	}
	return false
}

func (x *PushResponse) GetInfo() string {
	if x != nil && x.Info != nil {
		return *x.Info
	}
	return ""
}

type PushRpc struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	RequestId string        `protobuf:"bytes,1,opt,name=request_id,json=requestId,proto3" json:"request_id,omitempty"`
	Request   *PushRequest  `protobuf:"bytes,2,opt,name=request,proto3,oneof" json:"request,omitempty"`
	Response  *PushResponse `protobuf:"bytes,3,opt,name=response,proto3,oneof" json:"response,omitempty"`
}

func (x *PushRpc) Reset() {
	*x = PushRpc{}
	if protoimpl.UnsafeEnabled {
|
||||
mi := &file_lightpush_proto_msgTypes[2]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *PushRpc) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*PushRpc) ProtoMessage() {}
|
||||
|
||||
func (x *PushRpc) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_lightpush_proto_msgTypes[2]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use PushRpc.ProtoReflect.Descriptor instead.
|
||||
func (*PushRpc) Descriptor() ([]byte, []int) {
|
||||
return file_lightpush_proto_rawDescGZIP(), []int{2}
|
||||
}
|
||||
|
||||
func (x *PushRpc) GetRequestId() string {
|
||||
if x != nil {
|
||||
return x.RequestId
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *PushRpc) GetRequest() *PushRequest {
|
||||
if x != nil {
|
||||
return x.Request
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *PushRpc) GetResponse() *PushResponse {
|
||||
if x != nil {
|
||||
return x.Response
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
var File_lightpush_proto protoreflect.FileDescriptor
|
||||
|
||||
var file_lightpush_proto_rawDesc = []byte{
|
||||
0x0a, 0x0f, 0x6c, 0x69, 0x67, 0x68, 0x74, 0x70, 0x75, 0x73, 0x68, 0x2e, 0x70, 0x72, 0x6f, 0x74,
|
||||
0x6f, 0x12, 0x16, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x6c, 0x69, 0x67, 0x68, 0x74, 0x70, 0x75, 0x73,
|
||||
0x68, 0x2e, 0x76, 0x32, 0x62, 0x65, 0x74, 0x61, 0x31, 0x1a, 0x1d, 0x77, 0x61, 0x6b, 0x75, 0x2f,
|
||||
0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x65, 0x73, 0x73, 0x61,
|
||||
0x67, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x68, 0x0a, 0x0b, 0x50, 0x75, 0x73, 0x68,
|
||||
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x21, 0x0a, 0x0c, 0x70, 0x75, 0x62, 0x73, 0x75,
|
||||
0x62, 0x5f, 0x74, 0x6f, 0x70, 0x69, 0x63, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x70,
|
||||
0x75, 0x62, 0x73, 0x75, 0x62, 0x54, 0x6f, 0x70, 0x69, 0x63, 0x12, 0x36, 0x0a, 0x07, 0x6d, 0x65,
|
||||
0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x77, 0x61,
|
||||
0x6b, 0x75, 0x2e, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x57, 0x61,
|
||||
0x6b, 0x75, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61,
|
||||
0x67, 0x65, 0x22, 0x4f, 0x0a, 0x0c, 0x50, 0x75, 0x73, 0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x69, 0x73, 0x5f, 0x73, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73,
|
||||
0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, 0x69, 0x73, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73,
|
||||
0x73, 0x12, 0x17, 0x0a, 0x04, 0x69, 0x6e, 0x66, 0x6f, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x48,
|
||||
0x00, 0x52, 0x04, 0x69, 0x6e, 0x66, 0x6f, 0x88, 0x01, 0x01, 0x42, 0x07, 0x0a, 0x05, 0x5f, 0x69,
|
||||
0x6e, 0x66, 0x6f, 0x22, 0xcc, 0x01, 0x0a, 0x07, 0x50, 0x75, 0x73, 0x68, 0x52, 0x70, 0x63, 0x12,
|
||||
0x1d, 0x0a, 0x0a, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20,
|
||||
0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x49, 0x64, 0x12, 0x42,
|
||||
0x0a, 0x07, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32,
|
||||
0x23, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x6c, 0x69, 0x67, 0x68, 0x74, 0x70, 0x75, 0x73, 0x68,
|
||||
0x2e, 0x76, 0x32, 0x62, 0x65, 0x74, 0x61, 0x31, 0x2e, 0x50, 0x75, 0x73, 0x68, 0x52, 0x65, 0x71,
|
||||
0x75, 0x65, 0x73, 0x74, 0x48, 0x00, 0x52, 0x07, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x88,
|
||||
0x01, 0x01, 0x12, 0x45, 0x0a, 0x08, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x03,
|
||||
0x20, 0x01, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x6c, 0x69, 0x67, 0x68,
|
||||
0x74, 0x70, 0x75, 0x73, 0x68, 0x2e, 0x76, 0x32, 0x62, 0x65, 0x74, 0x61, 0x31, 0x2e, 0x50, 0x75,
|
||||
0x73, 0x68, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x48, 0x01, 0x52, 0x08, 0x72, 0x65,
|
||||
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x88, 0x01, 0x01, 0x42, 0x0a, 0x0a, 0x08, 0x5f, 0x72, 0x65,
|
||||
0x71, 0x75, 0x65, 0x73, 0x74, 0x42, 0x0b, 0x0a, 0x09, 0x5f, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e,
|
||||
0x73, 0x65, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
file_lightpush_proto_rawDescOnce sync.Once
|
||||
file_lightpush_proto_rawDescData = file_lightpush_proto_rawDesc
|
||||
)
|
||||
|
||||
func file_lightpush_proto_rawDescGZIP() []byte {
|
||||
file_lightpush_proto_rawDescOnce.Do(func() {
|
||||
file_lightpush_proto_rawDescData = protoimpl.X.CompressGZIP(file_lightpush_proto_rawDescData)
|
||||
})
|
||||
return file_lightpush_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_lightpush_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
|
||||
var file_lightpush_proto_goTypes = []interface{}{
|
||||
(*PushRequest)(nil), // 0: waku.lightpush.v2beta1.PushRequest
|
||||
(*PushResponse)(nil), // 1: waku.lightpush.v2beta1.PushResponse
|
||||
(*PushRpc)(nil), // 2: waku.lightpush.v2beta1.PushRpc
|
||||
(*pb.WakuMessage)(nil), // 3: waku.message.v1.WakuMessage
|
||||
}
|
||||
var file_lightpush_proto_depIdxs = []int32{
|
||||
3, // 0: waku.lightpush.v2beta1.PushRequest.message:type_name -> waku.message.v1.WakuMessage
|
||||
0, // 1: waku.lightpush.v2beta1.PushRpc.request:type_name -> waku.lightpush.v2beta1.PushRequest
|
||||
1, // 2: waku.lightpush.v2beta1.PushRpc.response:type_name -> waku.lightpush.v2beta1.PushResponse
|
||||
3, // [3:3] is the sub-list for method output_type
|
||||
3, // [3:3] is the sub-list for method input_type
|
||||
3, // [3:3] is the sub-list for extension type_name
|
||||
3, // [3:3] is the sub-list for extension extendee
|
||||
0, // [0:3] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_lightpush_proto_init() }
|
||||
func file_lightpush_proto_init() {
|
||||
if File_lightpush_proto != nil {
|
||||
return
|
||||
}
|
||||
if !protoimpl.UnsafeEnabled {
|
||||
file_lightpush_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*PushRequest); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_lightpush_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*PushResponse); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_lightpush_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*PushRpc); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
file_lightpush_proto_msgTypes[1].OneofWrappers = []interface{}{}
|
||||
file_lightpush_proto_msgTypes[2].OneofWrappers = []interface{}{}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_lightpush_proto_rawDesc,
|
||||
NumEnums: 0,
|
||||
NumMessages: 3,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
GoTypes: file_lightpush_proto_goTypes,
|
||||
DependencyIndexes: file_lightpush_proto_depIdxs,
|
||||
MessageInfos: file_lightpush_proto_msgTypes,
|
||||
}.Build()
|
||||
File_lightpush_proto = out.File
|
||||
file_lightpush_proto_rawDesc = nil
|
||||
file_lightpush_proto_goTypes = nil
|
||||
file_lightpush_proto_depIdxs = nil
|
||||
}
|
||||
48
vendor/github.com/waku-org/go-waku/waku/v2/protocol/lightpush/pb/validation.go
generated
vendored
Normal file
@@ -0,0 +1,48 @@
package pb

import "errors"

var (
	errMissingRequestID   = errors.New("missing RequestId field")
	errMissingQuery       = errors.New("missing Query field")
	errMissingMessage     = errors.New("missing Message field")
	errMissingPubsubTopic = errors.New("missing PubsubTopic field")
	errRequestIDMismatch  = errors.New("requestID in response does not match request")
	errMissingResponse    = errors.New("missing Response field")
)

func (x *PushRpc) ValidateRequest() error {
	if x.RequestId == "" {
		return errMissingRequestID
	}

	if x.Request == nil {
		return errMissingQuery
	}

	if x.Request.PubsubTopic == "" {
		return errMissingPubsubTopic
	}

	if x.Request.Message == nil {
		return errMissingMessage
	}

	return x.Request.Message.Validate()
}

func (x *PushRpc) ValidateResponse(requestID string) error {
	if x.RequestId == "" {
		return errMissingRequestID
	}

	if x.RequestId != requestID {
		return errRequestIDMismatch
	}

	if x.Response == nil {
		return errMissingResponse
	}

	return nil
}
325
vendor/github.com/waku-org/go-waku/waku/v2/protocol/lightpush/waku_lightpush.go
generated
vendored
Normal file
@@ -0,0 +1,325 @@
package lightpush

import (
	"context"
	"encoding/hex"
	"errors"
	"fmt"
	"math"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	libp2pProtocol "github.com/libp2p/go-libp2p/core/protocol"
	"github.com/libp2p/go-msgio/pbio"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/waku-org/go-waku/logging"
	"github.com/waku-org/go-waku/waku/v2/peermanager"
	"github.com/waku-org/go-waku/waku/v2/peerstore"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/lightpush/pb"
	wpb "github.com/waku-org/go-waku/waku/v2/protocol/pb"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"github.com/waku-org/go-waku/waku/v2/utils"
	"go.uber.org/zap"
)

// LightPushID_v20beta1 is the current Waku LightPush protocol identifier
const LightPushID_v20beta1 = libp2pProtocol.ID("/vac/waku/lightpush/2.0.0-beta1")
const LightPushENRField = uint8(1 << 3)

var (
	ErrNoPeersAvailable = errors.New("no suitable remote peers")
	ErrInvalidID        = errors.New("invalid request id")
)

// WakuLightPush is the implementation of the Waku LightPush protocol
type WakuLightPush struct {
	h       host.Host
	relay   *relay.WakuRelay
	cancel  context.CancelFunc
	pm      *peermanager.PeerManager
	metrics Metrics

	log *zap.Logger
}

// NewWakuLightPush returns a new instance of the WakuLightPush struct.
// It takes an optional peermanager when WakuLightPush is created along with a WakuNode;
// when using a plain libp2p host, pass nil for the peermanager.
func NewWakuLightPush(relay *relay.WakuRelay, pm *peermanager.PeerManager, reg prometheus.Registerer, log *zap.Logger) *WakuLightPush {
	wakuLP := new(WakuLightPush)
	wakuLP.relay = relay
	wakuLP.log = log.Named("lightpush")
	wakuLP.pm = pm
	wakuLP.metrics = newMetrics(reg)

	if pm != nil {
		wakuLP.pm.RegisterWakuProtocol(LightPushID_v20beta1, LightPushENRField)
	}

	return wakuLP
}

// SetHost sets the host so the node is able to mount or consume the protocol
func (wakuLP *WakuLightPush) SetHost(h host.Host) {
	wakuLP.h = h
}

// Start initializes the lightpush protocol
func (wakuLP *WakuLightPush) Start(ctx context.Context) error {
	if wakuLP.relayIsNotAvailable() {
		return errors.New("relay is required; without it this node is only a client and cannot be started")
	}

	ctx, cancel := context.WithCancel(ctx)

	wakuLP.cancel = cancel
	wakuLP.h.SetStreamHandlerMatch(LightPushID_v20beta1, protocol.PrefixTextMatch(string(LightPushID_v20beta1)), wakuLP.onRequest(ctx))
	wakuLP.log.Info("Light Push protocol started")

	return nil
}

// relayIsNotAvailable determines if this node supports relaying messages for other lightpush clients
func (wakuLP *WakuLightPush) relayIsNotAvailable() bool {
	return wakuLP.relay == nil
}

func (wakuLP *WakuLightPush) onRequest(ctx context.Context) func(network.Stream) {
	return func(stream network.Stream) {
		logger := wakuLP.log.With(logging.HostID("peer", stream.Conn().RemotePeer()))
		requestPushRPC := &pb.PushRpc{}

		reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

		err := reader.ReadMsg(requestPushRPC)
		if err != nil {
			logger.Error("reading request", zap.Error(err))
			wakuLP.metrics.RecordError(decodeRPCFailure)
			if err := stream.Reset(); err != nil {
				wakuLP.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		responsePushRPC := &pb.PushRpc{
			RequestId: requestPushRPC.RequestId,
			Response:  &pb.PushResponse{},
		}

		if err := requestPushRPC.ValidateRequest(); err != nil {
			responseMsg := err.Error()
			responsePushRPC.Response.Info = &responseMsg
			wakuLP.metrics.RecordError(requestBodyFailure)
			wakuLP.reply(stream, responsePushRPC, logger)
			return
		}

		logger = logger.With(zap.String("requestID", requestPushRPC.RequestId))

		logger.Info("push request")

		pubSubTopic := requestPushRPC.Request.PubsubTopic
		message := requestPushRPC.Request.Message

		wakuLP.metrics.RecordMessage()

		// TODO: Assumes success, should probably be extended to check for network, peers, etc
		// It might make sense to use WithReadiness option here?

		_, err = wakuLP.relay.Publish(ctx, message, relay.WithPubSubTopic(pubSubTopic))
		if err != nil {
			// Do not return early: fall through so the failure response is sent to the client.
			logger.Error("publishing message", zap.Error(err))
			wakuLP.metrics.RecordError(messagePushFailure)
			responseMsg := fmt.Sprintf("Could not publish message: %s", err.Error())
			responsePushRPC.Response.Info = &responseMsg
		} else {
			responsePushRPC.Response.IsSuccess = true
			responseMsg := "OK"
			responsePushRPC.Response.Info = &responseMsg
		}

		wakuLP.reply(stream, responsePushRPC, logger)

		logger.Info("response sent")

		stream.Close()

		if responsePushRPC.Response.IsSuccess {
			logger.Info("request success")
		} else {
			logger.Info("request failure", zap.String("info", responsePushRPC.GetResponse().GetInfo()))
		}
	}
}

func (wakuLP *WakuLightPush) reply(stream network.Stream, responsePushRPC *pb.PushRpc, logger *zap.Logger) {
	writer := pbio.NewDelimitedWriter(stream)
	err := writer.WriteMsg(responsePushRPC)
	if err != nil {
		wakuLP.metrics.RecordError(writeResponseFailure)
		logger.Error("writing response", zap.Error(err))
		if err := stream.Reset(); err != nil {
			wakuLP.log.Error("resetting connection", zap.Error(err))
		}
		return
	}
	stream.Close()
}

// request sends a message via the lightpush protocol to either a specified peer or a selected peer.
func (wakuLP *WakuLightPush) request(ctx context.Context, req *pb.PushRequest, params *lightPushParameters) (*pb.PushResponse, error) {
	if params == nil {
		return nil, errors.New("lightpush params are mandatory")
	}

	if len(params.requestID) == 0 {
		return nil, ErrInvalidID
	}

	logger := wakuLP.log.With(logging.HostID("peer", params.selectedPeer))

	stream, err := wakuLP.h.NewStream(ctx, params.selectedPeer, LightPushID_v20beta1)
	if err != nil {
		logger.Error("creating stream to peer", zap.Error(err))
		wakuLP.metrics.RecordError(dialFailure)
		return nil, err
	}
	pushRequestRPC := &pb.PushRpc{RequestId: hex.EncodeToString(params.requestID), Request: req}

	writer := pbio.NewDelimitedWriter(stream)
	reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

	err = writer.WriteMsg(pushRequestRPC)
	if err != nil {
		wakuLP.metrics.RecordError(writeRequestFailure)
		logger.Error("writing request", zap.Error(err))
		if err := stream.Reset(); err != nil {
			wakuLP.log.Error("resetting connection", zap.Error(err))
		}
		return nil, err
	}

	pushResponseRPC := &pb.PushRpc{}
	err = reader.ReadMsg(pushResponseRPC)
	if err != nil {
		logger.Error("reading response", zap.Error(err))
		wakuLP.metrics.RecordError(decodeRPCFailure)
		if err := stream.Reset(); err != nil {
			wakuLP.log.Error("resetting connection", zap.Error(err))
		}
		return nil, err
	}

	stream.Close()

	if err = pushResponseRPC.ValidateResponse(pushRequestRPC.RequestId); err != nil {
		wakuLP.metrics.RecordError(responseBodyFailure)
		return nil, err
	}

	return pushResponseRPC.Response, nil
}

// Stop unmounts the lightpush protocol
func (wakuLP *WakuLightPush) Stop() {
	if wakuLP.cancel == nil {
		return
	}

	wakuLP.cancel()
	wakuLP.h.RemoveStreamHandler(LightPushID_v20beta1)
}

func (wakuLP *WakuLightPush) handleOpts(ctx context.Context, message *wpb.WakuMessage, opts ...Option) (*lightPushParameters, error) {
	params := new(lightPushParameters)
	params.host = wakuLP.h
	params.log = wakuLP.log
	params.pm = wakuLP.pm
	var err error

	optList := append(DefaultOptions(wakuLP.h), opts...)
	for _, opt := range optList {
		err := opt(params)
		if err != nil {
			return nil, err
		}
	}

	if params.pubsubTopic == "" {
		params.pubsubTopic, err = protocol.GetPubSubTopicFromContentTopic(message.ContentTopic)
		if err != nil {
			return nil, err
		}
	}

	if params.pm != nil && params.peerAddr != nil {
		pData, err := wakuLP.pm.AddPeer(params.peerAddr, peerstore.Static, []string{params.pubsubTopic}, LightPushID_v20beta1)
		if err != nil {
			return nil, err
		}
		wakuLP.pm.Connect(pData)
		params.selectedPeer = pData.AddrInfo.ID
	}

	if params.pm != nil && params.selectedPeer == "" {
		params.selectedPeer, err = wakuLP.pm.SelectPeer(
			peermanager.PeerSelectionCriteria{
				SelectionType: params.peerSelectionType,
				Proto:         LightPushID_v20beta1,
				PubsubTopics:  []string{params.pubsubTopic},
				SpecificPeers: params.preferredPeers,
				Ctx:           ctx,
			},
		)
	}
	if params.selectedPeer == "" {
		if err != nil {
			params.log.Error("selecting peer", zap.Error(err))
			wakuLP.metrics.RecordError(peerNotFoundFailure)
			return nil, ErrNoPeersAvailable
		}
	}
	return params, nil
}

// Publish is used to broadcast a WakuMessage to the pubsub topic (which is derived from the
// content topic) via the lightpush protocol. If auto-sharding is not to be used, then the
// `WithPubSubTopic` option should be provided to publish the message to a specific pubsub topic.
func (wakuLP *WakuLightPush) Publish(ctx context.Context, message *wpb.WakuMessage, opts ...Option) ([]byte, error) {
	if message == nil {
		return nil, errors.New("message can't be null")
	}

	params, err := wakuLP.handleOpts(ctx, message, opts...)
	if err != nil {
		return nil, err
	}
	req := new(pb.PushRequest)
	req.Message = message
	req.PubsubTopic = params.pubsubTopic

	logger := message.Logger(wakuLP.log, params.pubsubTopic).With(logging.HostID("peerID", params.selectedPeer))

	logger.Debug("publishing message")

	response, err := wakuLP.request(ctx, req, params)
	if err != nil {
		logger.Error("could not publish message", zap.Error(err))
		return nil, err
	}

	if response.IsSuccess {
		hash := message.Hash(params.pubsubTopic)
		utils.MessagesLogger("lightpush").Debug("waku.lightpush published", logging.HexBytes("hash", hash))
		return hash, nil
	}

	errMsg := "lightpush error"
	if response.Info != nil {
		errMsg = *response.Info
	}

	return nil, errors.New(errMsg)
}
117
vendor/github.com/waku-org/go-waku/waku/v2/protocol/lightpush/waku_lightpush_option.go
generated
vendored
Normal file
@@ -0,0 +1,117 @@
package lightpush

import (
	"errors"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/waku/v2/peermanager"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/relay"
	"go.uber.org/zap"
)

type lightPushParameters struct {
	host              host.Host
	peerAddr          multiaddr.Multiaddr
	selectedPeer      peer.ID
	peerSelectionType peermanager.PeerSelection
	preferredPeers    peer.IDSlice
	requestID         []byte
	pm                *peermanager.PeerManager
	log               *zap.Logger
	pubsubTopic       string
}

// Option is the type of options accepted when performing LightPush protocol requests
type Option func(*lightPushParameters) error

// WithPeer is an option used to specify the peerID to push a waku message to
func WithPeer(p peer.ID) Option {
	return func(params *lightPushParameters) error {
		params.selectedPeer = p
		if params.peerAddr != nil {
			return errors.New("peerAddr and peerId options are mutually exclusive")
		}
		return nil
	}
}

// WithPeerAddr is an option used to specify a peer address to push a waku message to.
// This new peer will be added to the peer store.
// Note that this option is mutually exclusive with WithPeer; only one of them can be used.
func WithPeerAddr(pAddr multiaddr.Multiaddr) Option {
	return func(params *lightPushParameters) error {
		params.peerAddr = pAddr
		if params.selectedPeer != "" {
			return errors.New("peerAddr and peerId options are mutually exclusive")
		}
		return nil
	}
}

// WithAutomaticPeerSelection is an option used to randomly select a peer from the peer store
// to push a waku message to. If a list of specific peers is passed, the peer will be chosen
// from that list assuming it supports the chosen protocol, otherwise it will choose a peer
// from the node peerstore
func WithAutomaticPeerSelection(fromThesePeers ...peer.ID) Option {
	return func(params *lightPushParameters) error {
		params.peerSelectionType = peermanager.Automatic
		params.preferredPeers = fromThesePeers
		return nil
	}
}

// WithFastestPeerSelection is an option used to select a peer from the peer store
// with the lowest ping. If a list of specific peers is passed, the peer will be chosen
// from that list assuming it supports the chosen protocol, otherwise it will choose a peer
// from the node peerstore
func WithFastestPeerSelection(fromThesePeers ...peer.ID) Option {
	return func(params *lightPushParameters) error {
		params.peerSelectionType = peermanager.LowestRTT
		// Record the preferred peers so the selection honors the doc comment above.
		params.preferredPeers = fromThesePeers
		return nil
	}
}

// WithPubSubTopic is used to specify the pubsub topic on which a WakuMessage will be broadcasted
func WithPubSubTopic(pubsubTopic string) Option {
	return func(params *lightPushParameters) error {
		params.pubsubTopic = pubsubTopic
		return nil
	}
}

// WithDefaultPubsubTopic is used to indicate that the message should be broadcasted in the default pubsub topic
func WithDefaultPubsubTopic() Option {
	return func(params *lightPushParameters) error {
		params.pubsubTopic = relay.DefaultWakuTopic
		return nil
	}
}

// WithRequestID is an option to set a specific request ID to be used when
// publishing a message
func WithRequestID(requestID []byte) Option {
	return func(params *lightPushParameters) error {
		params.requestID = requestID
		return nil
	}
}

// WithAutomaticRequestID is an option to automatically generate a request ID
// when publishing a message
func WithAutomaticRequestID() Option {
	return func(params *lightPushParameters) error {
		params.requestID = protocol.GenerateRequestID()
		return nil
	}
}

// DefaultOptions are the default options to be used when using the lightpush protocol
func DefaultOptions(host host.Host) []Option {
	return []Option{
		WithAutomaticRequestID(),
		WithAutomaticPeerSelection(),
	}
}
3
vendor/github.com/waku-org/go-waku/waku/v2/protocol/metadata/pb/generate.go
generated
vendored
Normal file
@@ -0,0 +1,3 @@
package pb

//go:generate protoc -I./../../waku-proto/waku/metadata/v1/. -I./../../waku-proto/ --go_opt=paths=source_relative --go_opt=Mwaku_metadata.proto=github.com/waku-org/go-waku/waku/v2/protocol/metadata/pb --go_out=. ./../../waku-proto/waku/metadata/v1/waku_metadata.proto
232
vendor/github.com/waku-org/go-waku/waku/v2/protocol/metadata/pb/waku_metadata.pb.go
generated
vendored
Normal file
@@ -0,0 +1,232 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.31.0
// 	protoc        v4.24.4
// source: waku_metadata.proto

// rfc: https://rfc.vac.dev/spec/66/

package pb

import (
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

type WakuMetadataRequest struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	ClusterId *uint32  `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,oneof" json:"cluster_id,omitempty"`
	Shards    []uint32 `protobuf:"varint,2,rep,packed,name=shards,proto3" json:"shards,omitempty"`
}

func (x *WakuMetadataRequest) Reset() {
	*x = WakuMetadataRequest{}
	if protoimpl.UnsafeEnabled {
		mi := &file_waku_metadata_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *WakuMetadataRequest) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*WakuMetadataRequest) ProtoMessage() {}

func (x *WakuMetadataRequest) ProtoReflect() protoreflect.Message {
	mi := &file_waku_metadata_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use WakuMetadataRequest.ProtoReflect.Descriptor instead.
func (*WakuMetadataRequest) Descriptor() ([]byte, []int) {
	return file_waku_metadata_proto_rawDescGZIP(), []int{0}
}

func (x *WakuMetadataRequest) GetClusterId() uint32 {
	if x != nil && x.ClusterId != nil {
		return *x.ClusterId
	}
	return 0
}

func (x *WakuMetadataRequest) GetShards() []uint32 {
	if x != nil {
		return x.Shards
	}
	return nil
}

type WakuMetadataResponse struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	ClusterId *uint32  `protobuf:"varint,1,opt,name=cluster_id,json=clusterId,proto3,oneof" json:"cluster_id,omitempty"`
	Shards    []uint32 `protobuf:"varint,2,rep,packed,name=shards,proto3" json:"shards,omitempty"`
}

func (x *WakuMetadataResponse) Reset() {
	*x = WakuMetadataResponse{}
	if protoimpl.UnsafeEnabled {
		mi := &file_waku_metadata_proto_msgTypes[1]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *WakuMetadataResponse) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*WakuMetadataResponse) ProtoMessage() {}

func (x *WakuMetadataResponse) ProtoReflect() protoreflect.Message {
	mi := &file_waku_metadata_proto_msgTypes[1]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use WakuMetadataResponse.ProtoReflect.Descriptor instead.
func (*WakuMetadataResponse) Descriptor() ([]byte, []int) {
	return file_waku_metadata_proto_rawDescGZIP(), []int{1}
}

func (x *WakuMetadataResponse) GetClusterId() uint32 {
	if x != nil && x.ClusterId != nil {
		return *x.ClusterId
	}
	return 0
}

func (x *WakuMetadataResponse) GetShards() []uint32 {
	if x != nil {
		return x.Shards
	}
	return nil
}

var File_waku_metadata_proto protoreflect.FileDescriptor

var file_waku_metadata_proto_rawDesc = []byte{
	0x0a, 0x13, 0x77, 0x61, 0x6b, 0x75, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x2e,
	0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x10, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x6d, 0x65, 0x74, 0x61,
	0x64, 0x61, 0x74, 0x61, 0x2e, 0x76, 0x31, 0x22, 0x60, 0x0a, 0x13, 0x57, 0x61, 0x6b, 0x75, 0x4d,
	0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x22,
	0x0a, 0x0a, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01,
	0x28, 0x0d, 0x48, 0x00, 0x52, 0x09, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x49, 0x64, 0x88,
	0x01, 0x01, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x68, 0x61, 0x72, 0x64, 0x73, 0x18, 0x02, 0x20, 0x03,
|
||||
0x28, 0x0d, 0x52, 0x06, 0x73, 0x68, 0x61, 0x72, 0x64, 0x73, 0x42, 0x0d, 0x0a, 0x0b, 0x5f, 0x63,
|
||||
0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x22, 0x61, 0x0a, 0x14, 0x57, 0x61, 0x6b,
|
||||
0x75, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
|
||||
0x65, 0x12, 0x22, 0x0a, 0x0a, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18,
|
||||
0x01, 0x20, 0x01, 0x28, 0x0d, 0x48, 0x00, 0x52, 0x09, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72,
|
||||
0x49, 0x64, 0x88, 0x01, 0x01, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x68, 0x61, 0x72, 0x64, 0x73, 0x18,
|
||||
0x02, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x06, 0x73, 0x68, 0x61, 0x72, 0x64, 0x73, 0x42, 0x0d, 0x0a,
|
||||
0x0b, 0x5f, 0x63, 0x6c, 0x75, 0x73, 0x74, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x62, 0x06, 0x70, 0x72,
|
||||
0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
file_waku_metadata_proto_rawDescOnce sync.Once
|
||||
file_waku_metadata_proto_rawDescData = file_waku_metadata_proto_rawDesc
|
||||
)
|
||||
|
||||
func file_waku_metadata_proto_rawDescGZIP() []byte {
|
||||
file_waku_metadata_proto_rawDescOnce.Do(func() {
|
||||
file_waku_metadata_proto_rawDescData = protoimpl.X.CompressGZIP(file_waku_metadata_proto_rawDescData)
|
||||
})
|
||||
return file_waku_metadata_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_waku_metadata_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
|
||||
var file_waku_metadata_proto_goTypes = []interface{}{
|
||||
(*WakuMetadataRequest)(nil), // 0: waku.metadata.v1.WakuMetadataRequest
|
||||
(*WakuMetadataResponse)(nil), // 1: waku.metadata.v1.WakuMetadataResponse
|
||||
}
|
||||
var file_waku_metadata_proto_depIdxs = []int32{
|
||||
0, // [0:0] is the sub-list for method output_type
|
||||
0, // [0:0] is the sub-list for method input_type
|
||||
0, // [0:0] is the sub-list for extension type_name
|
||||
0, // [0:0] is the sub-list for extension extendee
|
||||
0, // [0:0] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_waku_metadata_proto_init() }
|
||||
func file_waku_metadata_proto_init() {
|
||||
if File_waku_metadata_proto != nil {
|
||||
return
|
||||
}
|
||||
if !protoimpl.UnsafeEnabled {
|
||||
file_waku_metadata_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*WakuMetadataRequest); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_waku_metadata_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*WakuMetadataResponse); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
file_waku_metadata_proto_msgTypes[0].OneofWrappers = []interface{}{}
|
||||
file_waku_metadata_proto_msgTypes[1].OneofWrappers = []interface{}{}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_waku_metadata_proto_rawDesc,
|
||||
NumEnums: 0,
|
||||
NumMessages: 2,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
GoTypes: file_waku_metadata_proto_goTypes,
|
||||
DependencyIndexes: file_waku_metadata_proto_depIdxs,
|
||||
MessageInfos: file_waku_metadata_proto_msgTypes,
|
||||
}.Build()
|
||||
File_waku_metadata_proto = out.File
|
||||
file_waku_metadata_proto_rawDesc = nil
|
||||
file_waku_metadata_proto_goTypes = nil
|
||||
file_waku_metadata_proto_depIdxs = nil
|
||||
}
|
||||
251
vendor/github.com/waku-org/go-waku/waku/v2/protocol/metadata/waku_metadata.go
generated
vendored
Normal file
@@ -0,0 +1,251 @@
package metadata

import (
	"context"
	"errors"
	"math"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	libp2pProtocol "github.com/libp2p/go-libp2p/core/protocol"
	"github.com/libp2p/go-msgio/pbio"
	"github.com/multiformats/go-multiaddr"
	"github.com/waku-org/go-waku/logging"
	"github.com/waku-org/go-waku/waku/v2/protocol"
	"github.com/waku-org/go-waku/waku/v2/protocol/enr"
	"github.com/waku-org/go-waku/waku/v2/protocol/metadata/pb"
	"go.uber.org/zap"
)

// MetadataID_v1 is the current Waku Metadata protocol identifier
const MetadataID_v1 = libp2pProtocol.ID("/vac/waku/metadata/1.0.0")

// WakuMetadata is the implementation of the Waku Metadata protocol
type WakuMetadata struct {
	network.Notifiee

	h         host.Host
	ctx       context.Context
	cancel    context.CancelFunc
	clusterID uint16
	localnode *enode.LocalNode

	log *zap.Logger
}

// NewWakuMetadata returns a new instance of the Waku Metadata struct
func NewWakuMetadata(clusterID uint16, localnode *enode.LocalNode, log *zap.Logger) *WakuMetadata {
	m := new(WakuMetadata)
	m.log = log.Named("metadata")
	m.clusterID = clusterID
	m.localnode = localnode

	return m
}

// SetHost sets the host to be able to mount or consume a protocol
func (wakuM *WakuMetadata) SetHost(h host.Host) {
	wakuM.h = h
}

// Start initializes the metadata protocol
func (wakuM *WakuMetadata) Start(ctx context.Context) error {
	if wakuM.clusterID == 0 {
		wakuM.log.Warn("no clusterID is specified. Protocol will not be initialized")
		return nil
	}

	ctx, cancel := context.WithCancel(ctx)

	wakuM.ctx = ctx
	wakuM.cancel = cancel

	wakuM.h.Network().Notify(wakuM)

	wakuM.h.SetStreamHandlerMatch(MetadataID_v1, protocol.PrefixTextMatch(string(MetadataID_v1)), wakuM.onRequest(ctx))
	wakuM.log.Info("metadata protocol started")
	return nil
}

func (wakuM *WakuMetadata) getClusterAndShards() (*uint32, []uint32, error) {
	shard, err := enr.RelaySharding(wakuM.localnode.Node().Record())
	if err != nil {
		return nil, nil, err
	}

	var shards []uint32
	if shard != nil && shard.ClusterID == uint16(wakuM.clusterID) {
		for _, idx := range shard.ShardIDs {
			shards = append(shards, uint32(idx))
		}
	}

	u32ClusterID := uint32(wakuM.clusterID)

	return &u32ClusterID, shards, nil
}

func (wakuM *WakuMetadata) Request(ctx context.Context, peerID peer.ID) (*protocol.RelayShards, error) {
	logger := wakuM.log.With(logging.HostID("peer", peerID))

	stream, err := wakuM.h.NewStream(ctx, peerID, MetadataID_v1)
	if err != nil {
		logger.Error("creating stream to peer", zap.Error(err))
		return nil, err
	}

	clusterID, shards, err := wakuM.getClusterAndShards()
	if err != nil {
		if err := stream.Reset(); err != nil {
			wakuM.log.Error("resetting connection", zap.Error(err))
		}
		return nil, err
	}

	request := &pb.WakuMetadataRequest{}
	request.ClusterId = clusterID
	request.Shards = shards

	writer := pbio.NewDelimitedWriter(stream)
	reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

	err = writer.WriteMsg(request)
	if err != nil {
		logger.Error("writing request", zap.Error(err))
		if err := stream.Reset(); err != nil {
			wakuM.log.Error("resetting connection", zap.Error(err))
		}
		return nil, err
	}

	response := &pb.WakuMetadataResponse{}
	err = reader.ReadMsg(response)
	if err != nil {
		logger.Error("reading response", zap.Error(err))
		if err := stream.Reset(); err != nil {
			wakuM.log.Error("resetting connection", zap.Error(err))
		}
		return nil, err
	}

	stream.Close()

	if response.ClusterId == nil {
		return nil, errors.New("node did not provide a waku clusterid")
	}

	rClusterID := uint16(*response.ClusterId)
	var rShardIDs []uint16
	for _, i := range response.Shards {
		rShardIDs = append(rShardIDs, uint16(i))
	}

	rs, err := protocol.NewRelayShards(rClusterID, rShardIDs...)
	if err != nil {
		return nil, err
	}

	return &rs, nil
}

func (wakuM *WakuMetadata) onRequest(ctx context.Context) func(network.Stream) {
	return func(stream network.Stream) {
		logger := wakuM.log.With(logging.HostID("peer", stream.Conn().RemotePeer()))
		request := &pb.WakuMetadataRequest{}

		writer := pbio.NewDelimitedWriter(stream)
		reader := pbio.NewDelimitedReader(stream, math.MaxInt32)

		err := reader.ReadMsg(request)
		if err != nil {
			logger.Error("reading request", zap.Error(err))
			if err := stream.Reset(); err != nil {
				wakuM.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		response := new(pb.WakuMetadataResponse)

		clusterID, shards, err := wakuM.getClusterAndShards()
		if err != nil {
			logger.Error("obtaining shard info", zap.Error(err))
		} else {
			response.ClusterId = clusterID
			response.Shards = shards
		}

		err = writer.WriteMsg(response)
		if err != nil {
			logger.Error("writing response", zap.Error(err))
			if err := stream.Reset(); err != nil {
				wakuM.log.Error("resetting connection", zap.Error(err))
			}
			return
		}

		stream.Close()
	}
}

// Stop unmounts the metadata protocol
func (wakuM *WakuMetadata) Stop() {
	if wakuM.cancel == nil {
		return
	}

	wakuM.h.Network().StopNotify(wakuM)
	wakuM.cancel()
	wakuM.h.RemoveStreamHandler(MetadataID_v1)
}

// Listen is called when the network starts listening on an address
func (wakuM *WakuMetadata) Listen(n network.Network, m multiaddr.Multiaddr) {
	// Do nothing
}

// ListenClose is called when the network stops listening on an address
func (wakuM *WakuMetadata) ListenClose(n network.Network, m multiaddr.Multiaddr) {
	// Do nothing
}

func (wakuM *WakuMetadata) disconnectPeer(peerID peer.ID, reason error) {
	logger := wakuM.log.With(logging.HostID("peerID", peerID))
	logger.Error("disconnecting from peer", zap.Error(reason))
	wakuM.h.Peerstore().RemovePeer(peerID)
	if err := wakuM.h.Network().ClosePeer(peerID); err != nil {
		logger.Error("could not disconnect from peer", zap.Error(err))
	}
}

// Connected is called when a connection is opened
func (wakuM *WakuMetadata) Connected(n network.Network, cc network.Conn) {
	go func() {
		// Metadata verification is done only if a clusterID is specified
		if wakuM.clusterID == 0 {
			return
		}

		peerID := cc.RemotePeer()

		shard, err := wakuM.Request(wakuM.ctx, peerID)
		if err != nil {
			wakuM.disconnectPeer(peerID, err)
			return
		}

		if shard.ClusterID != wakuM.clusterID {
			wakuM.disconnectPeer(peerID, errors.New("different clusterID reported"))
		}
	}()
}

// Disconnected is called when a connection is closed
func (wakuM *WakuMetadata) Disconnected(n network.Network, cc network.Conn) {
	// Do nothing
}
29
vendor/github.com/waku-org/go-waku/waku/v2/protocol/pb/codec.go
generated
vendored
Normal file
@@ -0,0 +1,29 @@
package pb

import (
	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"
)

func (m *WakuMessage) MarshalJSON() ([]byte, error) {
	return (protojson.MarshalOptions{}).Marshal(m)
}

func Unmarshal(data []byte) (*WakuMessage, error) {
	msg := &WakuMessage{}
	err := proto.Unmarshal(data, msg)
	if err != nil {
		return nil, err
	}

	err = msg.Validate()
	if err != nil {
		return nil, err
	}

	return msg, nil
}

func (m *WakuMessage) UnmarshalJSON(data []byte) error {
	return (protojson.UnmarshalOptions{}).Unmarshal(data, m)
}
3
vendor/github.com/waku-org/go-waku/waku/v2/protocol/pb/generate.go
generated
vendored
Normal file
@@ -0,0 +1,3 @@
package pb

//go:generate protoc -I./../waku-proto/waku/message/v1/. -I./../waku-proto/ --go_opt=paths=source_relative --go_opt=Mmessage.proto=github.com/waku-org/go-waku/waku/v2/pb --go_out=. ./../waku-proto/waku/message/v1/message.proto
210
vendor/github.com/waku-org/go-waku/waku/v2/protocol/pb/message.pb.go
generated
vendored
Normal file
@@ -0,0 +1,210 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.31.0
// 	protoc        v4.24.4
// source: message.proto

// 14/WAKU2-MESSAGE rfc: https://rfc.vac.dev/spec/14/

package pb

import (
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

type WakuMessage struct {
	state         protoimpl.MessageState
	sizeCache     protoimpl.SizeCache
	unknownFields protoimpl.UnknownFields

	Payload        []byte  `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
	ContentTopic   string  `protobuf:"bytes,2,opt,name=content_topic,json=contentTopic,proto3" json:"content_topic,omitempty"`
	Version        *uint32 `protobuf:"varint,3,opt,name=version,proto3,oneof" json:"version,omitempty"`
	Timestamp      *int64  `protobuf:"zigzag64,10,opt,name=timestamp,proto3,oneof" json:"timestamp,omitempty"`
	Meta           []byte  `protobuf:"bytes,11,opt,name=meta,proto3,oneof" json:"meta,omitempty"`
	Ephemeral      *bool   `protobuf:"varint,31,opt,name=ephemeral,proto3,oneof" json:"ephemeral,omitempty"`
	RateLimitProof []byte  `protobuf:"bytes,21,opt,name=rate_limit_proof,json=rateLimitProof,proto3,oneof" json:"rate_limit_proof,omitempty"`
}

func (x *WakuMessage) Reset() {
	*x = WakuMessage{}
	if protoimpl.UnsafeEnabled {
		mi := &file_message_proto_msgTypes[0]
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		ms.StoreMessageInfo(mi)
	}
}

func (x *WakuMessage) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*WakuMessage) ProtoMessage() {}

func (x *WakuMessage) ProtoReflect() protoreflect.Message {
	mi := &file_message_proto_msgTypes[0]
	if protoimpl.UnsafeEnabled && x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use WakuMessage.ProtoReflect.Descriptor instead.
func (*WakuMessage) Descriptor() ([]byte, []int) {
	return file_message_proto_rawDescGZIP(), []int{0}
}

func (x *WakuMessage) GetPayload() []byte {
	if x != nil {
		return x.Payload
	}
	return nil
}

func (x *WakuMessage) GetContentTopic() string {
	if x != nil {
		return x.ContentTopic
	}
	return ""
}

func (x *WakuMessage) GetVersion() uint32 {
	if x != nil && x.Version != nil {
		return *x.Version
	}
	return 0
}

func (x *WakuMessage) GetTimestamp() int64 {
	if x != nil && x.Timestamp != nil {
		return *x.Timestamp
	}
	return 0
}

func (x *WakuMessage) GetMeta() []byte {
	if x != nil {
		return x.Meta
	}
	return nil
}

func (x *WakuMessage) GetEphemeral() bool {
	if x != nil && x.Ephemeral != nil {
		return *x.Ephemeral
	}
	return false
}

func (x *WakuMessage) GetRateLimitProof() []byte {
	if x != nil {
		return x.RateLimitProof
	}
	return nil
}

var File_message_proto protoreflect.FileDescriptor

var file_message_proto_rawDesc = []byte{
	0x0a, 0x0d, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12,
	0x0f, 0x77, 0x61, 0x6b, 0x75, 0x2e, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x2e, 0x76, 0x31,
	0x22, 0xbf, 0x02, 0x0a, 0x0b, 0x57, 0x61, 0x6b, 0x75, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
	0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
	0x0c, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x23, 0x0a, 0x0d, 0x63, 0x6f,
	0x6e, 0x74, 0x65, 0x6e, 0x74, 0x5f, 0x74, 0x6f, 0x70, 0x69, 0x63, 0x18, 0x02, 0x20, 0x01, 0x28,
	0x09, 0x52, 0x0c, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x54, 0x6f, 0x70, 0x69, 0x63, 0x12,
	0x1d, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d,
	0x48, 0x00, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x88, 0x01, 0x01, 0x12, 0x21,
	0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x0a, 0x20, 0x01, 0x28,
	0x12, 0x48, 0x01, 0x52, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x88, 0x01,
	0x01, 0x12, 0x17, 0x0a, 0x04, 0x6d, 0x65, 0x74, 0x61, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x0c, 0x48,
	0x02, 0x52, 0x04, 0x6d, 0x65, 0x74, 0x61, 0x88, 0x01, 0x01, 0x12, 0x21, 0x0a, 0x09, 0x65, 0x70,
	0x68, 0x65, 0x6d, 0x65, 0x72, 0x61, 0x6c, 0x18, 0x1f, 0x20, 0x01, 0x28, 0x08, 0x48, 0x03, 0x52,
	0x09, 0x65, 0x70, 0x68, 0x65, 0x6d, 0x65, 0x72, 0x61, 0x6c, 0x88, 0x01, 0x01, 0x12, 0x2d, 0x0a,
	0x10, 0x72, 0x61, 0x74, 0x65, 0x5f, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x5f, 0x70, 0x72, 0x6f, 0x6f,
	0x66, 0x18, 0x15, 0x20, 0x01, 0x28, 0x0c, 0x48, 0x04, 0x52, 0x0e, 0x72, 0x61, 0x74, 0x65, 0x4c,
	0x69, 0x6d, 0x69, 0x74, 0x50, 0x72, 0x6f, 0x6f, 0x66, 0x88, 0x01, 0x01, 0x42, 0x0a, 0x0a, 0x08,
	0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x42, 0x0c, 0x0a, 0x0a, 0x5f, 0x74, 0x69, 0x6d,
	0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x42, 0x07, 0x0a, 0x05, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x42,
	0x0c, 0x0a, 0x0a, 0x5f, 0x65, 0x70, 0x68, 0x65, 0x6d, 0x65, 0x72, 0x61, 0x6c, 0x42, 0x13, 0x0a,
	0x11, 0x5f, 0x72, 0x61, 0x74, 0x65, 0x5f, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x5f, 0x70, 0x72, 0x6f,
	0x6f, 0x66, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}

var (
	file_message_proto_rawDescOnce sync.Once
	file_message_proto_rawDescData = file_message_proto_rawDesc
)

func file_message_proto_rawDescGZIP() []byte {
	file_message_proto_rawDescOnce.Do(func() {
		file_message_proto_rawDescData = protoimpl.X.CompressGZIP(file_message_proto_rawDescData)
	})
	return file_message_proto_rawDescData
}

var file_message_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_message_proto_goTypes = []interface{}{
	(*WakuMessage)(nil), // 0: waku.message.v1.WakuMessage
}
var file_message_proto_depIdxs = []int32{
	0, // [0:0] is the sub-list for method output_type
	0, // [0:0] is the sub-list for method input_type
	0, // [0:0] is the sub-list for extension type_name
	0, // [0:0] is the sub-list for extension extendee
	0, // [0:0] is the sub-list for field type_name
}

func init() { file_message_proto_init() }
func file_message_proto_init() {
	if File_message_proto != nil {
		return
	}
	if !protoimpl.UnsafeEnabled {
		file_message_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
			switch v := v.(*WakuMessage); i {
			case 0:
				return &v.state
			case 1:
				return &v.sizeCache
			case 2:
				return &v.unknownFields
			default:
				return nil
			}
		}
	}
	file_message_proto_msgTypes[0].OneofWrappers = []interface{}{}
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: file_message_proto_rawDesc,
			NumEnums:      0,
			NumMessages:   1,
			NumExtensions: 0,
			NumServices:   0,
		},
		GoTypes:           file_message_proto_goTypes,
		DependencyIndexes: file_message_proto_depIdxs,
		MessageInfos:      file_message_proto_msgTypes,
	}.Build()
	File_message_proto = out.File
	file_message_proto_rawDesc = nil
	file_message_proto_goTypes = nil
	file_message_proto_depIdxs = nil
}