Tuesday, September 12, 2017

Introduction to ETCDv3

Introduction to ETCD (Hands on)


ETCD is an open-source key-value store that is used for configuration, transactional locks, elections, service discovery, distributed queues, and more.  It has been referred to as "the heart of cloud native" as well.  It is normally deployed in a distributed manner, with multiple nodes in a cluster.  Other open-source tools that do similar things are Consul and Apache ZooKeeper.

This blog will not go into the architecture and deployment of a production system; for that, please refer to Kelsey Hightower's video here.  This blog will focus on interacting with and leveraging ETCD once it is up and running.

At a high level, ETCD can be used for any micro-service and is often used for container clusters.  ETCD v2 is now the older release (but still heavily used) and v3 is the new release.  The command-line tool for ETCD is called etcdctl, and by default it calls the version 2 API.  Version 3 adds new features and changes the syntax of quite a few commands.  To use v3, set the environment variable as shown in figure 1.

 $ export ETCDCTL_API=3  
Figure 1.

Figures 2-7 highlight some of the differences in commands between v2 and v3.

Version 2 write to keystore:
 $ etcdctl set /fname John  
 John  
Figure 2.

Version 3 write to keystore:
 $ ETCDCTL_API=3 etcdctl put fname John  
 OK    
Figure 3.

Version 2 Get a record:
 $ etcdctl get /fname  
 John  
Figure 4.

Version 3 Get a record:
 $ ETCDCTL_API=3 etcdctl get fname  
 fname  
 John  
Figure 5.

Version 2 Delete a record:
 $ etcdctl rm /fname  
 PrevNode.Value: John  
Figure 6.

Version 3 Delete a record:
 $ ETCDCTL_API=3 etcdctl del fname  
 1  
Figure 7.

NOTE: Version 2 stored data is kept separate from version 3 stored data, as shown in figure 8.

 $ etcdctl ls / --recursive  
 /name  
 $ ETCDCTL_API=3 etcdctl get --prefix ""  
 id  
 342323  
Figure 8.

As you can see, both the commands and their output have changed.  From here on, we'll focus mainly on version 3 commands.

New Features in V3

NOTE: The following is not an exhaustive list as each release in V3 adds more capability and/or scale.

Many scale enhancements have been put into version 3, allowing for faster concurrent processing.  For instance, the ETCDv2 API was based on REST/JSON and could only handle so many clients and keys.  In ETCDv3 the REST/JSON API is still there (more on that later), but gRPC was implemented for faster processing of records.  If both APIs are used in version 3, records won't be shared between the two, as the APIs are isolated from each other (related to the etcdctl behavior shown above).

Another cool feature addition is transactions (aka txn).  Transactions allow you to make atomic updates to sets of keys.  A transaction is composed of an if(cond1, cond2), then(op1, op2), else(op1, op2) type structure, much like in many programming languages.

As an example, suppose you want to check whether a key exists before you insert, or you may want to create a lock variable and check whether it is locked before trying to alter the record.  This can be done with transactions, as shown in figures 9 and 10.

The first example checks whether a lock is present.  If not, it sets the lock to true and writes the data, committing in one transaction, as shown in figure 9.

 $ ETCDCTL_API=3 etcdctl txn -i  
 compares:  
 mod("PL32343/lock") = "0"  
 success requests (get, put, del):  
 put PL32343/lock 1  
 put PL32343/type "test type"  
 failure requests (get, put, del):  
 get PL32343/type   
 SUCCESS  
 OK  
 OK  
 $ ETCDCTL_API=3 etcdctl get --prefix ""  
 PL32343/lock  
 1  
 PL32343/type  
 test type  
Figure 9. Interactive Mode

Figure 10 is done in non-interactive mode; it fails the condition and takes the failure action of getting the record.

 ETCDCTL_API=3 etcdctl txn <<<'mod("PL32343/lock") = "0"  
 put PL32343/lock 1  
 put PL32343/type "test type"  
 get PL32343/type   
 '  
 FAILURE  
 PL32343/type  
 test type  
Figure 10. Non-Interactive Mode

Disclaimer: I don't actually unlock it again; this was for demo purposes only.

NOTE: When using the etcdctl tool in these modes, it is important to know that spacing is strict in rows as well as columns for separation.
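To make the if/then/else structure concrete, here is a toy Python model of the figure 9 transaction run against a plain in-memory dict.  This is purely illustrative and is not the etcd API: all compares must pass, then either the success or the failure operations run as a unit.

```python
def txn(store, compare, success, failure):
    """Toy model of an etcd v3 transaction: if every compare passes,
    run the success ops; otherwise run the failure ops."""
    ops = success if all(c(store) for c in compare) else failure
    return [op(store) for op in ops]

store = {}  # stand-in for the key-value store; an absent key has never been modified

results = txn(
    store,
    # compare: the lock key has never been written (mod revision 0 in etcd terms)
    compare=[lambda s: "PL32343/lock" not in s],
    # success: take the lock and write the record
    success=[lambda s: s.update({"PL32343/lock": "1"}),
             lambda s: s.update({"PL32343/type": "test type"})],
    # failure: just read the existing record back
    failure=[lambda s: s.get("PL32343/type")],
)
print(store)  # {'PL32343/lock': '1', 'PL32343/type': 'test type'}
```

Running the same transaction a second time would fail the compare (the lock key now exists) and take the failure branch, mirroring figure 10.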

ETCDv3 API

The ETCDv3 REST API is very different from the v2 REST API.  This is because the v3 REST API is generated from the protocol buffers file via grpc-gateway.  In ETCDv2 you use standard REST methods like GET, PUT, DELETE, etc. to interact.  Another difference is that a v2 key is treated as a hierarchy versus v3's flat string key.  For example, the key "PL99337/type" will return the following:

 $ etcdctl ls / --recursive  
 /PL99337  
 /PL99337/type  
Figure 11.

If you check the key "PL99337", it returns a pointer to another key, "/PL99337/type", that has a value, as shown in figure 12.

 $ curl -X GET http://127.0.0.1:2379/v2/keys/PL99337 | jq  
 {  
  "action": "get",  
  "node": {  
   "key": "/PL99337",  
   "dir": true,  
   "nodes": [  
    {  
     "key": "/PL99337/type",  
     "value": "test 3",  
     "modifiedIndex": 19,  
     "createdIndex": 19  
    }  
   ],  
   "modifiedIndex": 19,  
   "createdIndex": 19  
  }  
 }  
Figure 12.

Also, notice that all keys and values returned are readable plain-text strings.  This is an important distinction from v3, as we will see coming up.

REST

The REST API was derived from grpc-gateway and follows the protocol buffers file very closely.  To build sample curl commands you need to read the protocol buffers file to derive your endpoints.  The following is an excerpt from the protobuf file, located at github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto:

1:   service KV {  
2:    // Range gets the keys in the range from the key-value store.  
3:    rpc Range(RangeRequest) returns (RangeResponse) {  
4:      option (google.api.http) = {  
5:       post: "/v3alpha/kv/range"  
6:       body: "*"  
7:     };  
8:    }  
9:    // Put puts the given key into the key-value store.  
10:    // A put request increments the revision of the key-value store  
11:    // and generates one event in the event history.  
12:    rpc Put(PutRequest) returns (PutResponse) {  
13:      option (google.api.http) = {  
14:       post: "/v3alpha/kv/put"  
15:       body: "*"  
16:     };  
17:    }  
18:   }  
19:   message PutRequest {  
20:   // key is the key, in bytes, to put into the key-value store.  
21:   bytes key = 1;  
22:   // value is the value, in bytes, to associate with the key in the key-value store.  
23:   bytes value = 2;  
24:   // lease is the lease ID to associate with the key in the key-value store. A lease  
25:   // value of 0 indicates no lease.  
26:   int64 lease = 3;  
27:   // If prev_kv is set, etcd gets the previous key-value pair before changing it.  
28:   // The previous key-value pair will be returned in the put response.  
29:   bool prev_kv = 4;  
30:   // If ignore_value is set, etcd updates the key using its current value.  
31:   // Returns an error if the key does not exist.  
32:   bool ignore_value = 5;  
33:   // If ignore_lease is set, etcd updates the key using its current lease.  
34:   // Returns an error if the key does not exist.  
35:   bool ignore_lease = 6;  
36:  }  
Figure 13.

In figure 13, we see that the service KV (line 1) has an rpc called Put (line 12) that sets a value.  The options are in the file, but in the example in figure 14 we will just set a key and a value (no lease).  The key will be "PL32343/type" and the value will be "test type", but we have to base64 encode them first.  You can base64 encode and decode online here.

  $ curl -X POST -d '{"key": "UEwzMjM0My90eXBl", "value": "dGVzdCB0eXBl"}' http://127.0.0.1:2379/v3alpha/kv/put | jq  
 {  
  "header": {  
   "cluster_id": "14841639068965178418",  
   "member_id": "10276657743932975437",  
   "revision": "121",  
   "raft_term": "8"  
  }  
 }  
Figure 14.
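The encoded key and value used in figure 14 can be reproduced with a couple of lines of Python:

```python
import base64

# etcd v3's grpc-gateway expects keys and values as base64-encoded bytes
key = base64.b64encode(b"PL32343/type").decode()
value = base64.b64encode(b"test type").decode()
print(key)    # UEwzMjM0My90eXBl
print(value)  # dGVzdCB0eXBl
```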

To get the value of a key (in this case we'll use the same base64-encoded key), we look at the Range rpc under the KV service (please refer to the message RangeRequest in the protobuf file, as it is very long).  You'll see you have some options: you can express a range of keys or just a single key, and if a range is selected you can sort the results in various orders.  The following is the result of a query for the value of the "PL32343/type" key.

 $ curl -X POST -d '{"key": "UEwzMjM0My90eXBl"}' http://localhost:2379/v3alpha/kv/range | jq  
 {  
  "header": {  
   "cluster_id": "14841639068965178418",  
   "member_id": "10276657743932975437",  
   "revision": "120",  
   "raft_term": "8"  
  },  
  "kvs": [  
   {  
    "key": "UEwzMjM0My90eXBl",  
    "create_revision": "115",  
    "mod_revision": "115",  
    "version": "1",  
    "value": "dGVzdCB0eXBl"  
   }  
  ],  
  "count": "1"  
 }  
Figure 15.
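Note that the kvs entries in figure 15 come back base64-encoded as well, so a client has to decode them.  For example, decoding a trimmed copy of that response body:

```python
import base64
import json

# Trimmed copy of the figure 15 response body
response = json.loads(
    '{"kvs": [{"key": "UEwzMjM0My90eXBl", "value": "dGVzdCB0eXBl"}], "count": "1"}'
)

for kv in response["kvs"]:
    k = base64.b64decode(kv["key"]).decode()
    v = base64.b64decode(kv["value"]).decode()
    print(f"{k} = {v}")  # PL32343/type = test type
```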

There are many other operations on the KV service defined in the protocol buffers file, like DeleteRange, Txn, and Compact (refer to the protocol buffers file to see how to assemble the required arguments, as done above).
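As a sketch, the figure 9 transaction could be expressed as a JSON body for the txn endpoint.  The field names below (compare, success, failure, request_put, request_range, target, mod_revision) are taken from the TxnRequest, Compare, and RequestOp messages in the protobuf file, and the /v3alpha path matches the excerpts above; verify both against your etcd version before relying on them:

```python
import base64
import json

def b64(s: str) -> str:
    """etcd v3's grpc-gateway wants bytes fields base64-encoded."""
    return base64.b64encode(s.encode()).decode()

# Mirrors the figure 9 transaction: if PL32343/lock has mod revision 0
# (never written), take the lock and write the record; else read it back.
body = {
    "compare": [
        {"key": b64("PL32343/lock"), "target": "MOD",
         "result": "EQUAL", "mod_revision": "0"}
    ],
    "success": [
        {"request_put": {"key": b64("PL32343/lock"), "value": b64("1")}},
        {"request_put": {"key": b64("PL32343/type"), "value": b64("test type")}},
    ],
    "failure": [
        {"request_range": {"key": b64("PL32343/type")}}
    ],
}
print(json.dumps(body))
# POST this body to http://127.0.0.1:2379/v3alpha/kv/txn the same way as the put example
```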

GRPC


See the protocol buffers file and your language-specific drivers.  A future blog on this subject may be forthcoming.

Other Operations


Watch

Another interesting feature of etcd is the ability to watch for activity on a given key.  In the following example we watch for changes to the PL32343/lock key on system 1 while changing the key on system 2.

System1:
 $ ETCDCTL_API=3 etcdctl watch PL32343/lock  
 PUT  
 PL32343/lock  
 0  
Figure 16.

System2:
 $ ETCDCTL_API=3 etcdctl put PL32343/lock 0  
 OK  
Figure 17.

We see that system 2 set the key to zero; system 1 sees the key with its new value and then resumes watching.

Lease

Another difference between v2 and v3 is that TTL was changed to Lease.  The idea is to put a timer on a value and have it expire after some period of time.  V3 changes how you accomplish this: you create the lease first, then assign it to a new key/value pair.  Figure 18 shows creating a lease and assigning it to a key.


 $ ETCDCTL_API=3 etcdctl lease grant 60  
 lease 694d5e63073a7d9e granted with TTL(60s)  
 $ ETCDCTL_API=3 etcdctl put lname doe --lease=694d5e63073a7d9e  
 OK  
 $ ETCDCTL_API=3 etcdctl get lname  
 lname  
 doe  
Figure 18.

Figure 19 shows that after 60 seconds the value has expired.

 $ ETCDCTL_API=3 etcdctl get lname  
Figure 19.

I had the thought of doing the lease creation and the assignment in one transaction using the etcdctl tool, but transactions are atomic, so you wouldn't be able to derive the lease ID in order to assign it within the transaction.  But let's face it, you're not going to implement a micro-service that leverages ETCDv3 using the etcdctl tool.  With a programming language and the proper libraries you can create the lease and run the transaction fast enough.
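In code, the two steps collapse into a small helper.  The sketch below assumes a client object with lease() and put() methods in the style of the python-etcd3 library; the method names and the lease keyword are assumptions, so check your client's documentation:

```python
def put_with_lease(client, key, value, ttl):
    """Grant a lease, then attach it to the key in a second call.
    The key expires when the lease does.  `client` is assumed to
    expose lease(ttl) and put(key, value, lease=...) methods."""
    lease = client.lease(ttl)            # assumed API: returns a lease object
    client.put(key, value, lease=lease)  # assumed API: lease keyword argument
    return lease
```

With a live python-etcd3-style client this would be something like put_with_lease(etcd3.client(), "lname", "doe", 60), reproducing figure 18 in two in-process calls.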

I hope this gives you a better understanding of the differences between ETCD v2 and v3, as well as a way of cracking the code that is v3.  Hopefully you can see the power that these capabilities, combined, could provide your micro-service.  In a future post I'll show some of those combinations and how to use them together.

Thursday, January 19, 2017

GoBGP for the Python Developer Tutorial Part 1

Published by Tim Epkes
 
GoBGP is an open source initiative providing a feature rich and scalable BGP solution to the open source community.  GoBGP can be found on github at the following URL https://github.com/osrg/gobgp.

If you have not read my Introduction to GoBGP part 1 / part 2 and are not familiar with GoBGP, you may want to review them first.  This tutorial also assumes familiarity with BGP.

If you're like me, you like the ability to extract the data you deem important from your protocols.  Generally speaking, getting data from protocols has always been an exercise of SNMP calls or screen scrapes of CLI commands.  In this tutorial, we are going to discuss how to interface with the GoBGP daemon (gobgpd) more directly using Python and gRPC.  We'll begin by reviewing the protocol buffers file definitions, then compile that file for Python, and then write some easy-to-follow programs that illustrate how to make use of it.
GoBGP makes information available via gRPC and protocol buffers files.  This makes it very easy to interface using multiple languages, as gRPC and protocol buffers support several language drivers, such as (but not limited to):
  • C++
  • Java
  • Python
  • Go
  • Ruby
  • C#
  • Node.js
  • Android Java
  • Objective-C
  • PHP
In this tutorial it is assumed you have a running GoBGP network (refer to Introduction to GoBGP part 1 or part 2 to accomplish this), and you will choose one of the routers in your setup to query against.  Also ensure port 50051 is listening on the interface or interfaces you will query, by executing the command shown in figure 1.

root@R1:~# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.11:40351        0.0.0.0:*               LISTEN      -               
tcp        0      0 127.0.0.1:2601          0.0.0.0:*               LISTEN      20/zebra        
tcp        0      0 127.0.0.1:6060          0.0.0.0:*               LISTEN      26/gobgpd       
tcp        0      0 0.0.0.0:179             0.0.0.0:*               LISTEN      26/gobgpd       
tcp6       0      0 :::50051                :::*                    LISTEN      26/gobgpd       
tcp6       0      0 :::179                  :::*                    LISTEN      26/gobgpd 
 
Figure 1.

The Protocol Buffers for GoBGP is published on the github site and you can obtain the pb (protocol buffer) file as shown in figure 2.

$ wget https://raw.githubusercontent.com/osrg/gobgp/master/api/gobgp.proto
Figure 2.

This should pull the pb file down to your local desktop.  Next, you have to install the Python gRPC libraries; the gRPC documentation shows how to accomplish this.

Once you have the Python gRPC libraries installed, we need to compile the pb file into Python modules.  There is an outdated GoBGP document on the Internet that should no longer be used; figure 3 shows the correct way to do the conversion.

$ python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. gobgp.proto
Figure 3.

This should have created the following two files:
  • gobgp_pb2_grpc.py
  • gobgp_pb2.py
gRPC creates stub objects from the services defined in the pb file.  In this case there is only one umbrella service, called GobgpApi, which you can see in the pb file excerpt in figure 4:

service GobgpApi {
Figure 4.

Therefore, the stub object is called GobgpApiStub.  This will be used in setting up communications to GoBGP.  If we run Python and execute the code shown in figure 5, our connection to gobgpd will be established.

>>> import grpc
>>> import gobgp_pb2_grpc
>>> import gobgp_pb2

>>> channel = grpc.insecure_channel('10.8.1.2:50051')
>>> stub = gobgp_pb2_grpc.GobgpApiStub(channel)

Figure 5.

The IP address depends on which GoBGP router you're communicating with; mine is 10.8.1.2, as shown in figure 5.  We will pick two services from the pb file's service listing, shown in figure 6.

service GobgpApi {
  rpc StartServer(StartServerRequest) returns (StartServerResponse) {}
  rpc StopServer(StopServerRequest) returns (StopServerResponse) {}
  rpc GetServer(GetServerRequest) returns (GetServerResponse) {}
  rpc AddNeighbor(AddNeighborRequest) returns (AddNeighborResponse) {}
  rpc DeleteNeighbor(DeleteNeighborRequest) returns (DeleteNeighborResponse) {}
  rpc GetNeighbor(GetNeighborRequest) returns (GetNeighborResponse) {}
  rpc ResetNeighbor(ResetNeighborRequest) returns (ResetNeighborResponse) {}
  rpc SoftResetNeighbor(SoftResetNeighborRequest) returns (SoftResetNeighborResponse) {}
  rpc ShutdownNeighbor(ShutdownNeighborRequest) returns (ShutdownNeighborResponse) {}
  rpc EnableNeighbor(EnableNeighborRequest) returns (EnableNeighborResponse) {}
  rpc DisableNeighbor(DisableNeighborRequest) returns (DisableNeighborResponse) {}
  rpc GetRib(GetRibRequest) returns (GetRibResponse) {}
  .
  .
  .
}
Figure 6.

For the first example I am going to choose the GetRib service.  Looking at GetRib, we see it takes an argument of GetRibRequest and returns a GetRibResponse, both of which can be seen in the pb file.  Figure 7 shows these two structures, plus the other message types they reference.

message GetRibRequest {
  Table table = 1;
}

message GetRibResponse {
  Table table = 1;
}

message Table {
  Resource type = 1;
  string name = 2;
  uint32 family = 3;
  repeated Destination destinations = 4;
  bool post_policy = 5;
}

enum Resource {
  GLOBAL = 0;
  LOCAL = 1;
  ADJ_IN = 2;
  ADJ_OUT = 3;
  VRF = 4;
}

message Destination {
  string prefix = 1;
  repeated Path paths = 2;
  bool longer_prefixes = 3;
  bool shorter_prefixes = 4;
}

message Path {
  bytes nlri = 1;
  repeated bytes pattrs = 2;
  int64 age = 3;
  bool best = 4;
  bool is_withdraw = 5;
  int32 validation = 6;
  bool no_implicit_withdraw = 7;
  uint32 family = 8;
  uint32 source_asn = 9;
  string source_id = 10;
  bool filtered = 11;
  bool stale = 12;
  bool is_from_external = 13;
  string neighbor_ip = 14;
}

Figure 7.

Using the information referenced in figure 7, we see that the GetRib service takes GetRibRequest as an argument.  That helps, but there are a few other things required to make the call succeed that are not fully documented yet; figure 8 shows a working example.

1:  import gobgp_pb2 as gobgp
2:  import gobgp_pb2_grpc as gobgp_grpc
3:  import grpc
4: 
5:  AFI_IP4 = 1
6:  SAFI_UNICAST = 1
7:  FAMILY = AFI_IP4 << 16 | SAFI_UNICAST
8:
9:  ch = grpc.insecure_channel('10.8.1.2:50051')
10: stub = gobgp_grpc.GobgpApiStub(ch)
11: 
12: req = gobgp.GetRibRequest()
13: t = gobgp.Table(family=FAMILY)
14: req.table.MergeFrom(t)
15: routes = stub.GetRib(req)
16: for n,route in enumerate(routes.table.destinations):
17:     print "Prefix %d -> %s" % (n,route.prefix)
18:     path_count = 1
19:     for path in route.paths:
20:         print "  path %d --> %s %s" % (path_count,path.source_asn,path.neighbor_ip)
21:         path_count += 1
Figure 8.

Output from code in figure 8 can be seen in Figure 9.

Prefix 0 -> 10.8.3.0/24
  path 1 --> 65001 172.40.2.3
Prefix 1 -> 10.8.1.0/24
  path 1 --> 65001 <nil>
Prefix 2 -> 172.40.1.0/29
  path 1 --> 65001 <nil>
  path 2 --> 65001 172.40.1.3
Prefix 3 -> 172.40.2.0/29
  path 1 --> 65001 <nil>
  path 2 --> 65001 172.40.2.3
Prefix 4 -> 192.65.1.0/29
  path 1 --> 65001 <nil>
  path 2 --> 65002 192.65.1.3
Prefix 5 -> 10.8.2.0/24
  path 1 --> 65001 172.40.1.3
Prefix 6 -> 10.20.0.0/24
  path 1 --> 65002 192.65.1.3
Figure 9.

Let's go over some of the highlighted items in figure 8.  Lines 1-3 import grpc and the generated protocol buffer modules (leaving off the .py extension).  Line 7 bit-shifts the AFI and ORs in the SAFI to create the variable FAMILY, which equals 65537.  Lines 9-10 set up the connection to the gobgp daemon running on the device with IP 10.8.1.2 on port 50051.  Lines 12-15 create the req variable from the GetRibRequest structure, create a Table variable t with its address family set to FAMILY (65537, or IPv4 unicast), merge t into the request, and call GetRib, whose result is stored in routes.  Lines 16-21 loop through the routes, format, and print; the inner loop starting at line 19 formats and outputs the path or paths a prefix can take.
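The FAMILY value from line 7 is just the AFI packed into the high 16 bits with the SAFI in the low bits, so you can sanity-check it (and derive other families, such as IPv6 unicast) in a Python shell:

```python
# AFI/SAFI numbers as used in the GoBGP API: AFI in the high 16 bits, SAFI in the low bits
AFI_IP4 = 1
AFI_IP6 = 2
SAFI_UNICAST = 1

family_v4 = AFI_IP4 << 16 | SAFI_UNICAST
family_v6 = AFI_IP6 << 16 | SAFI_UNICAST
print(family_v4)  # 65537 (IPv4 unicast, as used above)
print(family_v6)  # 131073 (IPv6 unicast)
```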

For the second example we will use the GetNeighbor service, but won't go into as much detail.  This code, shown in figure 10, lists the neighbors that the gobgpd instance has.

1:   import grpc
2:   import gobgp_pb2_grpc
3:   import gobgp_pb2
4:  
5:   _TIMEOUT_SECONDS = 10
6:  
7:   channel = grpc.insecure_channel('10.8.1.2:50051')
8:   stub = gobgp_pb2_grpc.GobgpApiStub(channel)
9:  
10:  peers = stub.GetNeighbor(gobgp_pb2.GetNeighborRequest(),_TIMEOUT_SECONDS)
11:  for peer in peers.peers:
12:       print "%s %s %s" % (peer.conf.neighbor_address,peer.conf.peer_as,peer.info.bgp_state) 
Figure 10.

Figure 11 shows the output from the program in figure 10.

172.40.1.3 65001 BGP_FSM_ACTIVE
172.40.2.3 65001 BGP_FSM_ESTABLISHED
192.65.1.3 65002 BGP_FSM_ACTIVE
Figure 11. 

The code in figure 10 starts with the same imports as before in lines 1-3.  Line 5 establishes a time-out variable to be used later.  Lines 7-8 set up the connection as before.  Line 10 calls the GetNeighbor function with two arguments: a GetNeighborRequest (as seen in the protobuf file) and the optional timeout value (set to 10 seconds).  Lines 11-12 loop over the neighbors and print formatted information: each neighbor's IP, AS, and current BGP state.

If we refer to the protobuf file (see figure 12), we see that the GetNeighbor function returns a GetNeighborResponse, which has a repeated field called peers referring to the structure Peer.  Wow, that was a mouthful.

Within Peer are the two subfields I used in my output (peer.conf... and peer.info...).  Peer's conf and info refer to yet other structures, PeerConf and PeerState.  In PeerConf you can see neighbor_address and peer_as; likewise, in PeerState you can see where I got the bgp_state field.  This structure gives you an idea of other fields you can use in your output and becomes a very helpful guide.

message GetNeighborResponse {
  repeated Peer peers = 1;
}

message Peer {
  repeated uint32 families = 1;
  ApplyPolicy apply_policy = 2;
  PeerConf conf = 3;
  EbgpMultihop ebgp_multihop = 4;
  RouteReflector route_reflector = 5;
  PeerState info = 6;
  Timers timers = 7;
  Transport transport = 8;
  RouteServer route_server = 9;
}

message PeerConf {
  string auth_password = 1;
  string description = 2;
  uint32 local_as = 3;
  string neighbor_address = 4;
  uint32 peer_as = 5;
  string peer_group = 6;
  uint32 peer_type = 7;
  uint32 remove_private_as = 8;
  bool route_flap_damping = 9;
  uint32 send_community = 10;
  repeated bytes remote_cap = 11;
  repeated bytes local_cap = 12;
  string id = 13;
  repeated PrefixLimit prefix_limits = 14;
  string local_address = 15;
  string neighbor_interface = 16;
  string vrf = 17;
}

message PeerState {
  string auth_password = 1;
  string description = 2;
  uint32 local_as = 3;
  Messages messages = 4;
  string neighbor_address = 5;
  uint32 peer_as = 6;
  string peer_group = 7;
  uint32 peer_type = 8;
  Queues queues = 9;
  uint32 remove_private_as = 10;
  bool route_flap_damping = 11;
  uint32 send_community = 12;
  uint32 session_state = 13;
  repeated string supported_capabilities = 14;
  string bgp_state = 15;
  enum AdminState {
      UP = 0;
      DOWN = 1;
      PFX_CT = 2; // prefix counter over limit
  }
  AdminState admin_state = 16;
  uint32 received = 17;
  uint32 accepted = 18;
  uint32 advertised = 19;
  uint32 out_q = 20;
  uint32 flops = 21;
}

Figure 12. 

These are just two simple examples of how to get information from GoBGP.  In a later post I will show additional capabilities.  I hope you found this helpful.

Wednesday, January 11, 2017

Introduction to GoBGP Part 2

Published by Tim Epkes
 
This is part 2 of a multi-part tutorial series on GoBGP.  Part 1 can be viewed at http://netreflection.blogspot.com/2017/01/introduction-to-gobgp.html and focused more on getting started.  In part 2 we will explore using multiple neighbors, with various policies per neighbor.  This tutorial assumes either that part 1 was reviewed or base familiarity with GoBGP.  Multiple ASes will be utilized, as shown in figure 1.
Figure 1.

The configurations of each router are as follows:

R1 Base Configuration

[global.config]
  as = 65001
  router-id = "172.40.1.2"

[zebra]
  [zebra.config]
    enabled = true
    url = "unix:/var/run/quagga/zserv.api"
    redistribute-route-type-list = ["connect"]

[[neighbors]]
  [neighbors.config]
    neighbor-address = "172.40.1.3"
    peer-as = 65001
  [neighbors.route-reflector.config]
    route-reflector-client = true
    route-reflector-cluster-id = "172.40.1.2"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"

[[neighbors]]
  [neighbors.config]
    neighbor-address = "172.40.2.3"
    peer-as = 65001
  [neighbors.route-reflector.config]
    route-reflector-client = true
    route-reflector-cluster-id = "172.40.1.2" 
 [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"

[[neighbors]]
  [neighbors.config]
    neighbor-address = "192.65.1.3"
    peer-as = 65002
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
Figure 2.

One highlight of R1's configuration: since R2 and R3 are in the same AS as R1, we have to make R1 a route reflector to reflect routes from R2 to R3 and vice versa.  Those configurations are the route-reflector sections in figure 2.

R2 Base Configuration

[global.config]
  as = 65001
  router-id = "172.40.1.3"

[zebra]
  [zebra.config]
    enabled = true
    url = "unix:/var/run/quagga/zserv.api"
    redistribute-route-type-list = ["connect"]

[[neighbors]]
  [neighbors.config]
    neighbor-address = "172.40.1.2"
    peer-as = 65001
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"

Figure 3.

R3 Base Configuration

[global.config]
  as = 65001
  router-id = "172.40.2.3"

[zebra]
  [zebra.config]
    enabled = true
    url = "unix:/var/run/quagga/zserv.api"
    redistribute-route-type-list = ["connect"]

[[neighbors]]
  [neighbors.config]
    neighbor-address = "172.40.2.2"
    peer-as = 65001
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"
Figure 4.

R4 Base Configuration

[global.config]
  as = 65002
  router-id = "192.65.1.3"

[zebra]
  [zebra.config]
    enabled = true
    url = "unix:/var/run/quagga/zserv.api"
    redistribute-route-type-list = ["connect"]

[[neighbors]]
  [neighbors.config]
    neighbor-address = "192.65.1.2"
    peer-as = 65001
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
Figure 5.

To quickly show that all neighbors are established and exchanging routes, figure 6 is the output from R1's neighbors.

R1:~# gobgp neighbor
Peer          AS  Up/Down State       |#Advertised Received Accepted
172.40.1.3 65001 00:00:17 Establ      |          6        2        2
172.40.2.3 65001 00:00:15 Establ      |          6        2        2
192.65.1.3 65002 00:00:18 Establ      |          6        2        2
Figure 6.

Now R1 is going to generate routes.  R4 will not get any routes with the community 100:50; R2 and R3 will not get any routes with the community 200:50.  But first we need to add those routes; figure 7 has the routes to be added.

Apply to R1
gobgp global rib add 10.50.1.0/24 origin igp nexthop 10.8.1.1 community 100:50 -a ipv4
gobgp global rib add 10.50.2.0/24 origin igp nexthop 10.8.1.1 community 100:50 -a ipv4
gobgp global rib add 10.50.3.0/24 origin igp nexthop 10.8.1.1 community 100:50 -a ipv4
gobgp global rib add 192.168.50.0/24 origin igp nexthop 10.8.1.1 community 200:50 -a ipv4
gobgp global rib add 192.201.0.0/16 origin igp nexthop 10.8.1.1 community 200:50 -a ipv4
Figure 7.

After applying the routes, each neighbor should be receiving 11 routes as shown in figure 8.

R1:~# gobgp neighbor    
Peer          AS  Up/Down State       |#Advertised Received Accepted
172.40.1.3 65001 00:17:59 Establ      |         11        2        2
172.40.2.3 65001 00:17:57 Establ      |         11        2        2
192.65.1.3 65002 00:18:00 Establ      |         11        2        2
Figure 8.

Now we need to create the following items:
  • Define neighbor sets
  • Define community sets
  • Define multiple policies
  • Attach the policies globally
First we start by creating our neighbor sets.  One will be for R4 in an external AS and the other will be for R2 and R3.

Add the configuration from figure 9 to R1
[[defined-sets.neighbor-sets]]
  neighbor-set-name = "ns1"
  neighbor-info-list = ["172.40.1.3","172.40.2.3"]

[[defined-sets.neighbor-sets]]
  neighbor-set-name = "extns1"
  neighbor-info-list = ["192.65.1.3"]
Figure 9.

Reload and verify the neighbor-sets are defined.
R1# gobgp policy neighbor
NAME    ADDRESS   
extns1  192.65.1.3
ns1     172.40.1.3
        172.40.2.3
Figure 10.

Now let's create the community-sets to be used in the policies.  Figure 11 has the community-sets to be added to R1.

[[defined-sets.bgp-defined-sets.community-sets]]
  community-set-name = "cs0"
  community-list = ["100:50"]

[[defined-sets.bgp-defined-sets.community-sets]]
  community-set-name = "cs1"
  community-list = ["200:50"]
Figure 11.

This time, we will hold off on applying until the remainder of the configuration is complete.  Now we need to create two policies: one will restrict all internal routers from getting routes with community 200:50, and the other will prevent all routes with the community 100:50 from leaking outside AS 65001 (i.e., to R4, the external site).  Figure 12 has the policies to be applied to R1.

[[policy-definitions]]
    name = "policy1"
    [[policy-definitions.statements]]
        name = "Drop community 200:50 to internal routers"
        [policy-definitions.statements.conditions.match-neighbor-set]
          neighbor-set = "ns1"
          match-set-options = "any"
        [policy-definitions.statements.conditions.bgp-conditions.match-community-set]
            community-set = "cs1"
            match-set-options = "any"
        [policy-definitions.statements.actions.route-disposition]
            accept-route = false

[[policy-definitions]]
    name = "policy2"
    [[policy-definitions.statements]]
        name = "Drop community 100:50 to external routers"
        [policy-definitions.statements.conditions.match-neighbor-set]
          neighbor-set = "extns1"
          match-set-options = "any"
        [policy-definitions.statements.conditions.bgp-conditions.match-community-set]
            community-set = "cs0"
            match-set-options = "any"
        [policy-definitions.statements.actions.route-disposition]
            accept-route = false
Figure 12.

After this we need to apply the policy globally.  We do this as depicted in figure 13.

[global.config]
  as = 65001
  router-id = "172.40.1.2"
[global.apply-policy.config]
  export-policy-list = ["policy1","policy2"]
Figure 13.

Next let's verify the policies are attached and properly defined.  Figure 14 shows the running policies.

R1:~# gobgp policy                                                                                              
Name policy1:
    StatementName Drop community 200:50 to internal routers:
      Conditions:
        NeighborSet: ANY ns1
        CommunitySet: ANY cs1
      Actions:
        REJECT
Name policy2:
    StatementName Drop community 100:50 to external routers:
      Conditions:
        NeighborSet: ANY extns1
        CommunitySet: ANY cs0
      Actions:
        REJECT
Figure 14.

Figure 15 shows a script that lists what R1 advertises to each peer.  Notice that, in accordance with the policies, the 200:50 routes are withheld from the internal peers and the 100:50 routes are withheld from the external peer.

R1:~# for i in `gobgp neighbor |egrep -v "Peer" | awk '{print $1}'`
 do
   echo $i        
   gobgp neighbor $i adj-out
 done
 
OUTPUT: 
172.40.1.3
    Network             Next Hop             AS_PATH              Attrs
    10.8.1.0/24         172.40.1.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    10.8.3.0/24         172.40.2.3                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 172.40.2.3} {ClusterList: [172.40.1.2]}]
    10.20.0.0/24        192.65.1.3           65002                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 192.65.1.3} {ClusterList: [172.40.1.2]}]
    10.50.1.0/24        10.8.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    10.50.2.0/24        10.8.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    10.50.3.0/24        10.8.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    172.40.1.0/29       172.40.1.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    172.40.2.0/29       172.40.1.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    192.65.1.0/29       172.40.1.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
172.40.2.3
    Network             Next Hop             AS_PATH              Attrs
    10.8.1.0/24         172.40.2.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    10.8.2.0/24         172.40.1.3                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 172.40.1.3} {ClusterList: [172.40.1.2]}]
    10.20.0.0/24        192.65.1.3           65002                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 192.65.1.3} {ClusterList: [172.40.1.2]}]
    10.50.1.0/24        10.8.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    10.50.2.0/24        10.8.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    10.50.3.0/24        10.8.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    172.40.1.0/29       172.40.2.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    172.40.2.0/29       172.40.2.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
    192.65.1.0/29       172.40.2.2                                [{Origin: i} {Med: 0} {LocalPref: 100} {Originator: 0.0.0.0} {ClusterList: [172.40.1.2]}]
192.65.1.3
    Network             Next Hop             AS_PATH              Attrs
    10.8.1.0/24         192.65.1.2           65001                [{Origin: i} {Med: 0}]
    10.8.2.0/24         192.65.1.2           65001                [{Origin: i}]
    10.8.3.0/24         192.65.1.2           65001                [{Origin: i}]
    172.40.1.0/29       192.65.1.2           65001                [{Origin: i} {Med: 0}]
    172.40.2.0/29       192.65.1.2           65001                [{Origin: i} {Med: 0}]
    192.65.1.0/29       192.65.1.2           65001                [{Origin: i} {Med: 0}]
    192.168.50.0/24     10.8.1.1             65001                [{Origin: i} {Communities: 200:50}]
    192.201.0.0/16      10.8.1.1             65001                [{Origin: i} {Communities: 200:50}]

Figure 15.
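To make the filtering in figure 15 concrete, here is a toy model of the two export policies (plain Python; the function and variable names are illustrative and do not come from GoBGP).  A route is withheld from a peer when the peer matches a policy's neighbor-set and the route carries a community in that policy's community-set:

```python
# Toy model of the two export policies; names are illustrative only.
NEIGHBOR_SETS = {"ns1": {"172.40.1.3", "172.40.2.3"},   # internal peers
                 "extns1": {"192.65.1.3"}}              # external peer
COMMUNITY_SETS = {"cs0": {"100:50"}, "cs1": {"200:50"}}

# (neighbor-set, community-set) -> reject the route when both match
POLICIES = [("ns1", "cs1"),      # policy1
            ("extns1", "cs0")]   # policy2

def exported(peer, communities):
    """Return False when any policy rejects the route for this peer."""
    for ns, cs in POLICIES:
        if peer in NEIGHBOR_SETS[ns] and set(communities) & COMMUNITY_SETS[cs]:
            return False
    return True

print(exported("172.40.1.3", ["200:50"]))  # internal peer, 200:50 -> False
print(exported("172.40.1.3", ["100:50"]))  # internal peer, 100:50 -> True
print(exported("192.65.1.3", ["100:50"]))  # external peer, 100:50 -> False
print(exported("192.65.1.3", ["200:50"]))  # external peer, 200:50 -> True
```

The four calls reproduce what the adj-out tables show: 200:50 never reaches ns1, and 100:50 never reaches extns1.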

Finally, as I always like to see the whole config after all the changes, here is the final configuration for R1:

[global.config]
  as = 65001
  router-id = "172.40.1.2"
[global.apply-policy.config]
  export-policy-list = ["policy1","policy2"]

[zebra]
  [zebra.config]
    enabled = true
    url = "unix:/var/run/quagga/zserv.api"
    redistribute-route-type-list = ["connect"]

[[neighbors]]
  [neighbors.config]
    neighbor-address = "172.40.1.3"
    peer-as = 65001
  [neighbors.route-reflector.config]
    route-reflector-client = true
    route-reflector-cluster-id = "172.40.1.2"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"

[[neighbors]]
  [neighbors.config]
    neighbor-address = "172.40.2.3"
    peer-as = 65001
  [neighbors.route-reflector.config]
    route-reflector-client = true
    route-reflector-cluster-id = "172.40.1.2"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"

[[neighbors]]
  [neighbors.config]
    neighbor-address = "192.65.1.3"
    peer-as = 65002
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"


[[defined-sets.neighbor-sets]]
  neighbor-set-name = "ns1"
  neighbor-info-list = ["172.40.1.3","172.40.2.3"]

[[defined-sets.neighbor-sets]]
  neighbor-set-name = "extns1"
  neighbor-info-list = ["192.65.1.3"]

[[defined-sets.bgp-defined-sets.community-sets]]
  community-set-name = "cs0"
  community-list = ["100:50"]

[[defined-sets.bgp-defined-sets.community-sets]]
  community-set-name = "cs1"
  community-list = ["200:50"]

[[policy-definitions]]
    name = "policy1"
    [[policy-definitions.statements]]
        name = "Drop community 200:50 to internal routers"
        [policy-definitions.statements.conditions.match-neighbor-set]
          neighbor-set = "ns1"
          match-set-options = "any"
        [policy-definitions.statements.conditions.bgp-conditions.match-community-set]
            community-set = "cs1"
            match-set-options = "any"
        [policy-definitions.statements.actions.route-disposition]
            accept-route = false

[[policy-definitions]]
    name = "policy2"
    [[policy-definitions.statements]]
        name = "Drop community 100:50 to external routers"
        [policy-definitions.statements.conditions.match-neighbor-set]
          neighbor-set = "extns1"
          match-set-options = "any"
        [policy-definitions.statements.conditions.bgp-conditions.match-community-set]
            community-set = "cs0"
            match-set-options = "any"
        [policy-definitions.statements.actions.route-disposition]
            accept-route = false

Hopefully you found this helpful.  In part 3, which will come at a later time, we will explore a more complex setup and policy.

Tuesday, January 10, 2017

Introduction to GoBGP Part 1

Published by Tim Epkes
 
GoBGP is an open-source project providing a feature-rich, scalable BGP implementation to the open-source community.  GoBGP can be found on GitHub at https://github.com/osrg/gobgp.

In this post I will show how to get started with GoBGP using two routers.  Future posts will include more complicated setups.


Both routers are linked via the 192.168.0.0/28 network; R1 is 192.168.0.2 and R2 is 192.168.0.3.  Each router has its own subnet (10.1.1.0/24 and 10.2.2.0/24 respectively).  The following are the configurations for R1 and R2:

R1 Configuration (in TOML)
[global.config]
  as = 65001
  router-id = "192.168.0.2"
 
[zebra]
  [zebra.config]
    enabled = true
    url = "unix:/var/run/quagga/zserv.api"
    redistribute-route-type-list = ["connect"]
 
[[neighbors]]
  [neighbors.config]
    neighbor-address = "192.168.0.3"
    peer-as = 65001
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"

R2 Configuration (in TOML)
[global.config]
  as = 65001
  router-id = "192.168.0.3"
 
[zebra]
  [zebra.config]
    enabled = true
    url = "unix:/var/run/quagga/zserv.api"
    redistribute-route-type-list = ["connect"]
 
[[neighbors]]
  [neighbors.config]
    neighbor-address = "192.168.0.2"
    peer-as = 65001
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv6-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "ipv4-labelled-unicast"
  [[neighbors.afi-safis]]
    [neighbors.afi-safis.config]
      afi-safi-name = "l3vpn-ipv4-unicast"
Note: The configuration file can be written in TOML, YAML, or JSON (I like the flexibility).

Once the two routers are configured, start gobgpd on each as follows:

R1# gobgpd -f /etc/gobgpd/gobgpd.conf                                                  

R2# gobgpd -f /etc/gobgpd/gobgpd.conf                                                  

Note: To reload the configuration, simply send a SIGHUP (kill -HUP) to the gobgpd process.

To verify R1 is neighbored to R2, type the following:

R1# gobgp neighbor
Peer           AS  Up/Down State       |#Advertised Received Accepted                  
192.168.0.3 65001 00:29:12 Establ      |          2        2        2
Figure 1.

Figure 1 shows that R1 is successfully neighbored to R2 and is advertising and receiving routes.  To obtain detailed information about your neighbor, just specify the neighbor IP address in your gobgp neighbor command:

R1# gobgp neighbor 192.168.0.3
BGP neighbor is 192.168.0.3, remote AS 65001
  BGP version 4, remote router ID 192.168.0.3
  BGP state = BGP_FSM_ESTABLISHED, up for 00:36:03
  BGP OutQ = 0, Flops = 0
  Hold time is 90, keepalive interval is 30 seconds
  Configured hold time is 90, keepalive interval is 30 seconds                         
  Neighbor capabilities:
    multiprotocol:
        ipv4-unicast: advertised and received
        ipv6-unicast: advertised and received
        ipv4-labelled-unicast: advertised and received
        l3vpn-ipv4-unicast: advertised and received
    route-refresh: advertised and received
    4-octet-as: advertised and received
  Message statistics:
                         Sent       Rcvd
    Opens:                  1          1
    Notifications:          0          0
    Updates:                1          1
    Keepalives:            73         73
    Route Refesh:           0          0
    Discarded:              0          0
    Total:                 75         75
  Route statistics:
    Advertised:             2
    Received:               2
    Accepted:               2
Figure 2.

Figure 2 shows more detailed information about the neighbor: that R1 is established with its peer, the address families advertised and received, route counts (advertised / received / accepted), and other useful information.

From here we want to see the routes being exchanged.  To do this, take a look at the global rib with the following command:

R1# gobgp global rib
    Network             Next Hop             AS_PATH              Age        Attrs
*>  10.1.1.0/24         0.0.0.0                                   00:01:43   [{Origin: i} {Med: 0}]
*>  10.2.2.0/24         192.168.0.3                               00:01:30   [{Origin: i} {Med: 0} {LocalPref: 100}]
*>  192.168.0.0/28      0.0.0.0                                   00:01:43   [{Origin: i} {Med: 0}]
*   192.168.0.0/28      192.168.0.3                               00:01:30   [{Origin: i} {Med: 0} {LocalPref: 100}]
Figure 3.

Figure 3 shows our local routes and the routes learned from R2.  Now we are going to add some routes and attach community values.

R1# gobgp global rib add 10.50.1.0/24 origin igp nexthop 10.1.1.1 community 100:50 -a ipv4
R1# gobgp global rib add 10.50.2.0/24 origin igp nexthop 10.1.1.1 community 100:50 -a ipv4
R1# gobgp global rib add 10.50.3.0/24 origin igp nexthop 10.1.1.1 community 100:50 -a ipv4
R1# gobgp global rib add 192.168.50.0/24 origin igp nexthop 10.1.1.1 community 200:50 -a ipv4
Figure 4.
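As an aside, standard communities like 100:50 travel in BGP as single 32-bit values (RFC 1997): the AS number in the high 16 bits and the local value in the low 16 bits.  A small sketch of the encoding:

```python
def encode_community(text):
    """Pack an AA:NN community string into its 32-bit wire value."""
    asn, value = (int(part) for part in text.split(":"))
    return (asn << 16) | value

def decode_community(raw):
    """Unpack a 32-bit community back into AA:NN form."""
    return f"{raw >> 16}:{raw & 0xFFFF}"

print(encode_community("100:50"))   # 6553650
print(decode_community(6553650))    # 100:50
```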

After adding the additional routes, verify the routes in the rib on the R2 side.

# gobgp global rib              
    Network             Next Hop             AS_PATH              Age        Attrs
*>  10.1.1.0/24         192.168.0.2                               00:31:01   [{Origin: i} {Med: 0} {LocalPref: 100}]
*>  10.2.2.0/24         0.0.0.0                                   4d 06:29:53 [{Origin: i} {Med: 0}]
*>  10.50.1.0/24        10.1.1.1                                  00:06:32   [{Origin: i} {LocalPref: 100} {Communities: 100:50}]
*>  10.50.2.0/24        10.1.1.1                                  00:06:32   [{Origin: i} {LocalPref: 100} {Communities: 100:50}]
*>  10.50.3.0/24        10.1.1.1                                  00:06:32   [{Origin: i} {LocalPref: 100} {Communities: 100:50}]
*>  192.168.0.0/28      0.0.0.0                                   4d 06:29:53 [{Origin: i} {Med: 0}]
*   192.168.0.0/28      192.168.0.2                               00:31:01   [{Origin: i} {Med: 0} {LocalPref: 100}]
*>  192.168.50.0/24     10.1.1.1                                  00:00:05   [{Origin: i} {LocalPref: 100} {Communities: 200:50}]
Figure 5.

Figure 5 shows the additional routes and the communities that were set.  To get summaries of routes GoBGP processed, execute the following:

R1# gobgp global rib summary        # ipv4 is the default address family
Table ipv4-unicast
Destination: 7, Path: 8

R1# gobgp global rib summary -a ipv4-labelled
Table ipv4-labelled-unicast                                                             
Destination: 2, Path: 2

R1# gobgp global rib summary -a ipv6   
Table ipv6-unicast
Destination: 0, Path: 0
Figure 6.

Next, we want to create a policy that does not allow the community 200:50 prefixes out of R1.  To do this we need to accomplish the following:
  • Define a community set cs0
  • Create a policy leveraging the community set cs0
  • Attach the policy to the global configuration
First we create our community set, as follows:

[[defined-sets.bgp-defined-sets.community-sets]]                                        
  community-set-name = "cs0"
  community-list = ["200:50"]
Figure 7.

Figure 7 shows the community set definition.  First you provide your community set with a name and then a list of communities.

Next we create our policy, as follows:
[[policy-definitions]]
    name = "policy1"
    [[policy-definitions.statements]]
        name = "Drop community 200:50"
        [policy-definitions.statements.conditions.bgp-conditions.match-community-set]   
            community-set = "cs0"
            match-set-options = "any"
        [policy-definitions.statements.actions.route-disposition]
            accept-route = false
Figure 8.

Figure 8 shows the policy definition.  We give our policy a name, in this case "policy1" (for lack of a better name).  Under policy-definitions.statements, we give the statement a brief description by assigning it to name.  A policy has to have the following:
  • Name
  • Condition
  • Action
The condition has various options; in this case the condition just includes a community-set.  We assign the community-set to "cs0" and set match-set-options = "any", which means match any community in the list.  The 3 options for match-set-options under match-community-set are:
  • Any - Match any one of the communities (default)
  • All - Match all of the communities in the defined community set
  • Invert - Match anything but the communities in the defined community set
Next, the action needs to be defined.  Here we specify the route-disposition: if the conditions are true, disallow the route.
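The three match modes can be sketched as set operations (plain Python for illustration; this models the semantics described above, not GoBGP's actual code):

```python
# Sketch of the three match-set-options, modeled as set operations.
# route_comms: communities on the route; community_set: the defined set.
def matches(route_comms, community_set, option="any"):
    route, cset = set(route_comms), set(community_set)
    if option == "any":      # at least one community from the set (default)
        return bool(route & cset)
    if option == "all":      # every community in the set is on the route
        return cset <= route
    if option == "invert":   # no community from the set is on the route
        return not (route & cset)
    raise ValueError(option)

print(matches(["200:50", "300:1"], ["200:50"]))          # True
print(matches(["200:50"], ["200:50", "100:50"], "all"))  # False
print(matches(["300:1"], ["200:50"], "invert"))          # True
```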


Finally we attach the policy, as follows:

[global.config]                                                                             
  as = 65001
  router-id = "192.168.0.2"
[global.apply-policy.config]
  export-policy-list = ["policy1"]
Figure 9.

Figure 9 shows the policy attached to the global configuration.  In this case I want to restrict community 200:50 routes from going to R2, so I put policy1 in the export-policy-list.

You might ask: why apply the policy globally when you want it to apply to a certain neighbor?  In this case I have only one neighbor, but even if I had more than one, I would still apply it globally.  To restrict which peers it applies to, you would add a neighbor-set so the policy matches only certain neighbors (a tutorial for another day).

Reload your gobgpd process and verify the policy is applied as shown in figure 10.

R1# gobgp policy
Name policy1:
    StatementName Drop community 200:50:                                               
      Conditions:
        CommunitySet: ANY cs0
      Actions:
        REJECT
Figure 10.

To see what routes R1 has advertised to a specific neighbor, use the following command:

# gobgp neighbor 192.168.0.3 adj-out
    Network             Next Hop             AS_PATH              Attrs
    10.1.1.0/24         192.168.0.2                               [{Origin: i} {Med: 0} {LocalPref: 100}]
    10.50.1.0/24        10.1.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50}]
    10.50.2.0/24        10.1.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50}]
    10.50.3.0/24        10.1.1.1                                  [{Origin: i} {LocalPref: 100} {Communities: 100:50}]
    192.168.0.0/28      192.168.0.2                               [{Origin: i} {Med: 0} {LocalPref: 100}]
Figure 11.

I'll build on this tutorial in a part 2 at another time.  I hope you find this helpful.