IP PIM Sparse Mode with Auto-RP

Task 7.1:
Configuring Auto-RP involves configuring candidate RPs to announce their availability. They do this by multicasting the information to the 224.0.1.39 group. There can be more than one candidate RP.

This configuration is performed using the ip pim send-rp-announce command. You have to specify the interface whose IP address will be used as the RP address. You should always use a Loopback interface. This interface must be reachable in the IGP, and it must have the ip pim sparse-mode or ip pim sparse-dense-mode command configured.
Next, you configure Auto-RP mapping agents:

1. They listen for Auto-RP announcements on 224.0.1.39.
2. They elect one RP for each group range. They elect the candidate with the highest IP address. The only way you can control the priority is to configure appropriate IP addresses on the Loopback interfaces: the higher the IP address, the more preferred the RP.
3. They multicast the elected RP information to the 224.0.1.40 group. This is called the "discovery" message.

There can be more than one mapping agent. This configuration is performed using the ip pim send-rp-discovery command.
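The election rule in step 2 is simply "highest candidate IP address wins." As a minimal sketch (Python, with hypothetical data structures; this models the logic only, not IOS internals), here is what a mapping agent does with the announcements it hears on 224.0.1.39:

```python
import ipaddress

def elect_rps(announcements):
    """For each announced group range, keep the candidate RP with the
    highest IP address -- the rule an Auto-RP mapping agent applies.
    `announcements` is a list of (candidate_rp_ip, group_range) tuples,
    as heard on 224.0.1.39."""
    mapping = {}
    for rp_ip, group_range in announcements:
        current = mapping.get(group_range)
        if current is None or ipaddress.ip_address(rp_ip) > ipaddress.ip_address(current):
            mapping[group_range] = rp_ip
    # The winning mappings are then advertised to 224.0.1.40 ("discovery").
    return mapping

# Three candidates announcing the same range, using the Loopback
# addresses from this lab:
anns = [("10.1.1.1", "235.0.0.0/8"),
        ("10.1.1.2", "235.0.0.0/8"),
        ("10.1.1.3", "235.0.0.0/8")]
print(elect_rps(anns))  # {'235.0.0.0/8': '10.1.1.3'}
```

This is why the only election knob is the Loopback addressing itself: the mapping agent compares nothing but the candidate IPs.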
In this task you are asked to configure the first part of Auto-RP, the announcing RP. First, turn on debug ip pim auto-rp on PE2 and one of the neighboring routers, let's say PE1. Next, configure the following on PE2:
PE1-RACK1#debug ip pim auto-rp
PIM Auto-RP debugging is on

PE2-RACK1#debug ip pim auto-rp
PIM Auto-RP debugging is on
PE2-RACK1(config)#ip access-list standard PE2-Groups
PE2-RACK1(config-std-nacl)#permit 225.8.8.8
PE2-RACK1(config-std-nacl)#permit 229.0.0.1
PE2-RACK1(config-std-nacl)#permit 229.0.0.2
PE2-RACK1(config)#interface Loopback0
PE2-RACK1(config-if)#ip pim sparse-dense-mode
PE2-RACK1(config)#ip pim send-rp-announce Loopback 0 scope 16 group-list PE2-Groups
The following debug output is seen on PE2:
02:01:24: Auto-RP(0): Build RP-Announce for 10.1.1.2, PIMv2/v1, ttl 16, ht 181
02:01:24: Auto-RP(0): Build announce entry for (229.0.0.1/32)
02:01:24: Auto-RP(0): Build announce entry for (229.0.0.2/32)
02:01:24: Auto-RP(0): Build announce entry for (225.8.8.8/32)
02:01:24: Auto-RP(0): Send RP-Announce packet on Ethernet0/0.82
02:01:24: Auto-RP(0): Send RP-Announce packet on Ethernet0/0.21
02:01:24: Auto-RP(0): Send RP-Announce packet on Ethernet0/0.123
02:01:24: Auto-RP(0): Send RP-Announce packet on Loopback0(*)
So far PE1 has no debug output. The reason is that it doesn't care about the received Auto-RP information: it's not an Auto-RP mapping agent. In fact, we don't have any routers configured as Auto-RP mapping agents yet, so Auto-RP is not yet fully configured.
PE2-RACK1#sh ip pim rp
Group: 235.235.235.235, RP: 10.1.1.1, v2, uptime 00:46:27, expires never
Group: 235.5.5.5, RP: 10.1.1.1, v2, uptime 00:46:28, expires never
Group: 239.255.255.255, RP: 10.1.1.1, v2, uptime 00:46:28, expires never
Group: 224.2.127.254, RP: 10.1.1.1, v2, uptime 00:46:27, expires never
Group: 224.8.8.8, RP: 10.1.1.2, next RP-reachable in 00:00:49
Group: 225.8.8.8, RP: 10.1.1.1, v2, uptime 00:46:28, expires never
Group: 229.0.0.1, RP: 10.1.1.1, v2, uptime 00:46:28, expires never
Group: 229.0.0.2, RP: 10.1.1.1, v2, uptime 00:46:28, expires never
Groups 225.8.8.8, 229.0.0.1, and 229.0.0.2 still have 10.1.1.1 as their static RP. Auto-RP is not working yet.
Task 7.2:
Similarly to the previous lab, configure the following on PE3:
PE3-RACK1(config)#ip access-list standard PE3-Groups
PE3-RACK1(config-std-nacl)#permit 225.2.2.2
PE3-RACK1(config)#int Loopback0
PE3-RACK1(config-if)#ip pim sparse-dense-mode
PE3-RACK1(config)#ip pim send-rp-announce Loopback 0 scope 16 group-list PE3-Groups
Task 7.3:
Similarly to the previous task, configure Auto-RP announcements not just on one router, but on three routers. Later, the mapping agent will elect one of them to be the RP for this group range. It will use the highest IP address, so it will elect 10.1.1.3 (PE3) to be the RP.

When configuring this, don't create a new access list and a new ip pim send-rp-announce statement where they already exist. You have to add the 235.0.0.0/8 group range to the existing access list if there is one.
PE2-RACK1(config)#ip access-list standard PE2-Groups
PE2-RACK1(config-std-nacl)#permit 235.0.0.0 0.255.255.255

PE1-RACK1(config)#ip access-list standard PE1-Groups
PE1-RACK1(config-std-nacl)#permit 235.0.0.0 0.255.255.255
PE1-RACK1(config)#int Loopback0
PE1-RACK1(config-if)#ip pim sparse-dense-mode
PE1-RACK1(config)#ip pim send-rp-announce Loopback 0 scope 16 group-list PE1-Groups

PE3-RACK1(config)#ip access-list standard PE3-Groups
PE3-RACK1(config-std-nacl)#permit 235.0.0.0 0.255.255.255
You should now see PE1, PE2, and PE3 announcing themselves as RPs to the 224.0.1.39 multicast group. But since there are no mapping agents configured, Auto-RP is still not fully configured.
Task 7.4:
Now you are asked to configure PE1 as an RP mapping agent.
PE1-RACK1(config)#ip pim send-rp-discovery scope 15
Now watch the debug ip pim auto-rp output on PE2. It should now be receiving Auto-RP discovery messages from PE1:
PE2-RACK1#
02:23:06: Auto-RP(0): Received RP-discovery, from 172.16.12.1, RP_cnt 1, ht 180
02:23:06: Auto-RP(0): Added with (235.0.0.0/8, RP:10.1.1.2), PIMv2 v1
02:23:06: Auto-RP(0): Received RP-discovery, from 172.16.13.1, RP_cnt 1, ht 180
02:23:06: Auto-RP(0): Update (235.0.0.0/8, RP:10.1.1.2), PIMv2 v1
PE1 has two IP PIM interfaces; it sent out Auto-RP discovery messages on both interfaces.

Now you can see the results of the Auto-RP configuration. Look at PE3:
PE3-RACK1#sh ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)

Group(s) 225.2.2.2/32
  RP 10.1.1.3 (?), v2v1
    Info source: 172.16.12.1 (?), elected via Auto-RP
         Uptime: 00:00:59, expires: 00:02:52
Group(s) 225.8.8.8/32
  RP 10.1.1.2 (?), v2v1
    Info source: 172.16.12.1 (?), elected via Auto-RP
         Uptime: 00:02:52, expires: 00:02:50
Group(s) 229.0.0.1/32
  RP 10.1.1.2 (?), v2v1
    Info source: 172.16.12.1 (?), elected via Auto-RP
         Uptime: 00:02:52, expires: 00:02:51
Group(s) 229.0.0.2/32
  RP 10.1.1.2 (?), v2v1
    Info source: 172.16.12.1 (?), elected via Auto-RP
         Uptime: 00:02:52, expires: 00:02:52
Group(s) 235.0.0.0/8
  RP 10.1.1.3 (?), v2v1
    Info source: 172.16.12.1 (?), elected via Auto-RP
         Uptime: 00:18:36, expires: 00:02:28
Acl: RP-10.1.1.2-Groups, Static RP: 10.1.1.2 (?)
Acl: RP-Sink-Groups, Static RP: 10.1.1.1 (?)
Acl: RP-10.1.1.3-Groups, Static-Override RP: 10.1.1.3 (?)
The above five mappings are from Auto-RP; the last three are static. Notice that PE3 thinks there's only one RP for 235.0.0.0/8. This is the result of the mapping-agent election on PE1. Let's look at PE1:
PE1-RACK1#sh ip pim rp mapping 235.0.0.0
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
This system is an RP-mapping agent

Group(s) 235.0.0.0/8
  RP 10.1.1.3 (?), v2v1
    Info source: 10.1.1.3 (?), elected via Auto-RP
         Uptime: 00:03:31, expires: 00:02:28
  RP 10.1.1.2 (?), v2v1
    Info source: 10.1.1.2 (?), via Auto-RP
         Uptime: 00:03:01, expires: 00:02:21
  RP 10.1.1.1 (?), v2v1
    Info source: 10.1.1.1 (?), via Auto-RP
         Uptime: 00:03:17, expires: 00:02:38
PE1 knows of three RP candidates for the 235.0.0.0/8 multicast group range. The PE1 mapping agent elected the one with the highest IP address (10.1.1.3) and advertised it in a discovery message to 224.0.1.40. Let's look at PE2:
PE2-RACK1#sh ip pim rp mapping 235.0.0.0
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)

Group(s) 235.0.0.0/8
  RP 10.1.1.3 (?), v2v1
    Info source: 172.16.13.1 (?), elected via Auto-RP
         Uptime: 00:04:47, expires: 00:02:36
PE2 also thinks PE3 is the RP for this multicast group range. The behavior is as expected.
PE3-RACK1#sh ip pim rp
Group: 235.235.235.235, RP: 10.1.1.3, v2, v1, next RP-reachable in 00:00:38
Group: 224.2.2.2, RP: 10.1.1.3, next RP-reachable in 00:00:53
Group: 225.2.2.2, RP: 10.1.1.3, v2, v1, next RP-reachable in 00:00:38
Group: 225.1.1.1, RP: 10.1.1.1, v2, uptime 00:02:01, expires never
Group: 224.1.1.1, RP: 10.1.1.1, v2, uptime 00:02:01, expires never
PE1-RACK1#sh ip pim rp
Group: 235.235.235.235, RP: 10.1.1.3, v2, v1, uptime 00:01:13, expires 00:02:45
Group: 239.255.255.255, RP: 10.1.1.1, next RP-reachable in 00:01:09
Group: 224.2.127.254, RP: 10.1.1.1, next RP-reachable in 00:01:09
Group: 225.8.8.8, RP: 10.1.1.2, v2, v1, uptime 00:00:56, expires 00:02:03
Group: 229.0.0.1, RP: 10.1.1.2, v2, v1, uptime 00:00:56, expires 00:02:01
Group: 229.0.0.2, RP: 10.1.1.2, v2, v1, uptime 00:00:56, expires 00:02:01
Group: 225.1.1.1, RP: 10.1.1.1, next RP-reachable in 00:01:09
Group: 224.1.1.1, RP: 10.1.1.1, next RP-reachable in 00:01:09
PE2-RACK1#sh ip pim rp
Group: 235.235.235.235, RP: 10.1.1.3, v2, v1, uptime 00:01:49, expires 00:02:25
Group: 239.255.255.255, RP: 10.1.1.1, v2, uptime 00:02:49, expires never
Group: 224.2.127.254, RP: 10.1.1.1, v2, uptime 00:02:49, expires never
Group: 224.8.8.8, RP: 10.1.1.2, next RP-reachable in 00:00:50
Group: 225.8.8.8, RP: 10.1.1.2, v2, v1, next RP-reachable in 00:00:12
Group: 229.0.0.1, RP: 10.1.1.2, v2, v1, next RP-reachable in 00:00:12
Group: 229.0.0.2, RP: 10.1.1.2, v2, v1, next RP-reachable in 00:00:12
Group: 225.1.1.1, RP: 10.1.1.1, v2, uptime 00:02:49, expires never
Group: 224.1.1.1, RP: 10.1.1.1, v2, uptime 00:02:49, expires never
In the above three show ip pim rp outputs:

1. Entries that say "v2, v1" were learned via Auto-RP, and the router is not the RP.
2. Entries that say only "v2" are static, and the router is not the RP.
3. For entries that say "next RP-reachable," the router itself is the RP, but you can't tell whether the mapping is static or from Auto-RP unless you run show ip pim rp mapping.
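The three rules above can be captured in a small sketch (Python, illustrative only; this is a simplified string check, not a full parser of IOS output):

```python
def classify_rp_entry(line):
    """Classify one 'show ip pim rp' line per the three rules above."""
    if "next RP-reachable" in line:
        # This router is the RP; static vs. Auto-RP can't be told apart here.
        return "this router is the RP (source unknown)"
    if "v2, v1" in line:
        return "learned via Auto-RP, RP is elsewhere"
    if "v2" in line:
        return "static mapping, RP is elsewhere"
    return "unknown"

# Sample lines from the outputs above:
print(classify_rp_entry(
    "Group: 225.8.8.8, RP: 10.1.1.2, v2, v1, uptime 00:00:56, expires 00:02:03"))
print(classify_rp_entry(
    "Group: 224.1.1.1, RP: 10.1.1.1, v2, uptime 00:02:49, expires never"))
print(classify_rp_entry(
    "Group: 224.8.8.8, RP: 10.1.1.2, next RP-reachable in 00:00:50"))
```

Note the order of the checks matters: a "next RP-reachable" line may also contain "v2, v1", so it must be tested first.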
Task 7.5:
You are being asked to configure a multicast stub network. First, let's block CE8's PIM neighbor messages coming to PE2:
PE2-RACK1(config)#ip access-list standard Block-CE8-PIM
PE2-RACK1(config-std-nacl)#deny 10.82.1.1
PE2-RACK1(config-std-nacl)#permit any   <- this command isn't necessary
PE2-RACK1(config)#int ethernet 0/0.82
PE2-RACK1(config-if)#ip pim neighbor-filter Block-CE8-PIM
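The effect of the neighbor filter can be sketched as follows (Python, a hypothetical model of the check, not IOS internals): every PIM hello arriving on the interface has its source checked against the standard ACL, and denied sources are never installed as neighbors.

```python
def acl_permits(acl, source_ip):
    """Evaluate a standard ACL: first matching entry wins, with the
    implicit 'deny any' at the end (standard IOS semantics)."""
    for action, match in acl:
        if match == "any" or match == source_ip:
            return action == "permit"
    return False  # implicit deny

# Block-CE8-PIM from the configuration above:
block_ce8_pim = [("deny", "10.82.1.1"), ("permit", "any")]

# Hello sources seen on the interface; only permitted ones become neighbors.
hellos = ["10.82.1.1", "172.16.123.3"]
neighbors = [ip for ip in hellos if acl_permits(block_ce8_pim, ip)]
print(neighbors)  # ['172.16.123.3']
```

This also shows why the "permit any" line is optional here: on Ethernet0/0.82 the only possible hello source is CE8 itself, so the implicit deny would have blocked it anyway.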
PE2-RACK1#sh ip pim neighbor
PIM Neighbor Table
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Priority/Mode
172.16.123.3      Ethernet0/0.123          02:35:21/00:01:43 v2    1 / DR P
10.82.1.1         Ethernet0/0.82           02:35:28/00:00:52 v2    1 / P
172.16.12.1       Ethernet0/0.21           02:35:21/00:01:34 v2    1 / P
CE8 (10.82.1.1) hasn't expired yet. Wait another minute.
PE2-RACK1#sh ip pim neighbor
PIM Neighbor Table
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Priority/Mode
172.16.123.3      Ethernet0/0.123          02:47:37/00:01:16 v2    1 / DR P
172.16.12.1       Ethernet0/0.21           02:47:37/00:01:36 v2    1 / P
No more CE8, and no more multicast information coming from CE8. Verify:
PE2-RACK1#sh ip pim rp
<- NO OUTPUT
To allow multicast traffic to flow down to CE8, you have to configure an IGMP helper address on CE8 pointing to PE2. Configure it on the same interface as the IGMP join groups:
CE8-RACK1(config)#int fastethernet0/1
CE8-RACK1(config-if)#ip igmp helper-address 10.82.1.2
Now all IGMP groups that exist on CE8 should also appear on PE2:
PE2-RACK1#sh ip igmp groups
IGMP Connected Group Membership
Group Address    Interface        Uptime    Expires   Last Reporter
235.235.235.235  Ethernet0/0.82   00:00:38  00:02:52  10.82.1.1
239.255.255.255  Ethernet0/0.82   00:00:38  00:02:52  10.82.1.1
224.2.127.254    Ethernet0/0.82   00:00:38  00:02:52  10.82.1.1
224.8.8.8        Ethernet0/0.82   00:00:07  00:02:52  10.82.1.1
225.8.8.8        Ethernet0/0.82   00:00:07  00:02:52  10.82.1.1
229.0.0.1        Ethernet0/0.82   00:00:07  00:02:52  10.82.1.1
229.0.0.2        Ethernet0/0.82   00:00:07  00:02:52  10.82.1.1
224.0.1.39       Ethernet0/0.21   00:31:17  00:02:28  172.16.12.1
224.0.1.40       Ethernet0/0.82   02:52:53  00:02:31  10.82.1.1
224.0.1.40       Ethernet0/0.123  02:53:55  00:02:29  172.16.123.2
Notice that the reporter for CE8's groups is CE8's own IP address. All PIM activity will now originate from PE2, but all traffic that arrives at PE2 for these multicast groups will be forwarded to CE8 (10.82.1.1).
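The stub behavior can be sketched as follows (Python, a simplified model with hypothetical data structures): CE8 re-reports its locally joined groups toward the helper address, so PE2 learns them as IGMP memberships on its CE8-facing interface, with CE8's address as the Last Reporter.

```python
def relay_igmp_reports(local_groups, reporter_ip, helper_ip):
    """Model of 'ip igmp helper-address': each locally joined group is
    re-sent as a plain IGMP report to the helper (the upstream PIM router),
    so the stub router never needs to run PIM itself."""
    return [{"group": g, "reporter": reporter_ip, "sent_to": helper_ip}
            for g in local_groups]

# CE8's joined groups appearing on PE2 (10.82.1.2), reported by CE8 (10.82.1.1):
reports = relay_igmp_reports(["224.8.8.8", "225.8.8.8"], "10.82.1.1", "10.82.1.2")
for r in reports:
    print(r["group"], "reported by", r["reporter"], "to", r["sent_to"])
```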
Verify on PE1:
PE1-RACK1#ping 224.8.8.8
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.8.8.8, timeout is 2 seconds:

Reply to request 0 from 10.82.1.1, 40 ms
Reply to request 0 from 10.82.1.1, 80 ms
Reply to request 0 from 10.82.1.1, 60 ms
Notice something? Before, you used to get two ICMP responses when pinging from PE1; now you get three. You have created a new Loopback 0 interface with PIM enabled, so IOS now generates three copies of the ICMP echo when pinging multicast addresses.
Task 7.6:
This is achieved by configuring a scope on the RP announcement messages on PE2. You want to prevent these messages from ever reaching CE2, which is two hops away; you just want them to get up to PE1, which is one hop away.
PE2-RACK1(config)#ip pim send-rp-announce Loopback0 scope 1 group-list PE2-Groups
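The scope value is simply the IP TTL placed on the announcement packets (compare the "ttl 16" in the earlier debug output), so "scope 1" means the packet dies after one router hop. A quick sketch of the reachability rule (Python, illustrative only):

```python
def reachable_with_scope(scope_ttl, hops_to_destination):
    """An RP-announce sent with 'scope N' carries IP TTL N; each router
    hop decrements the TTL, and the packet is not forwarded beyond the
    hop where the TTL runs out."""
    return scope_ttl >= hops_to_destination

# scope 1 on PE2: PE1 is one hop away, CE2 is two hops away.
print(reachable_with_scope(1, 1))  # True  -> PE1 still hears the announcements
print(reachable_with_scope(1, 2))  # False -> CE2 never sees them
```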
Task 7.7:
The default behavior of PIM-SM is to switch to the Shortest-Path Tree (also known as the Source Tree) and bypass the RP as soon as a new source is detected. This means that in most cases, multicast traffic does not flow through the RP; therefore, the RP does not become a point of congestion. The default behavior can be overridden on Cisco routers by setting the SPT threshold to "infinity." This prevents the router from joining the SPT and keeps all group traffic flowing down the shared tree. In that case, the RP could become a bottleneck.
SPT thresholds must be configured on each individual router in the network. It will not have the desired effect if it is only configured on the RP, because the RP does not communicate this value to the other routers in the network.
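The decision each last-hop router makes can be sketched as follows (Python, simplified; real IOS compares the group's measured traffic rate in kbps against the configured threshold, and the default threshold is 0 kbps):

```python
def join_spt(traffic_rate_kbps, spt_threshold):
    """'ip pim spt-threshold' logic: join the shortest-path tree once the
    group's traffic rate reaches the threshold. A threshold of 'infinity'
    means never join, so traffic stays on the shared tree through the RP."""
    if spt_threshold == "infinity":
        return False
    return traffic_rate_kbps >= spt_threshold

print(join_spt(10, 0))           # True  -> default threshold 0: switch immediately
print(join_spt(10, "infinity"))  # False -> stay on the shared tree
```

Since the check runs locally on every last-hop router, configuring "infinity" only on the RP changes nothing for the other routers, which is exactly the point made above.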
225.1.1.1 is a group that has a receiver on the CE1 router. PE1 (10.1.1.1) is currently the RP for this group. Let's ping this group from PE2. Initially, traffic will flow up to PE1 and then down the shared tree: PE1 -> PE3 -> CE1. Immediately, PE2 will try to switch to the SPT, and the traffic will start flowing down the SPT: PE2 -> PE3 -> CE1.
PE2-RACK1#ping 225.1.1.1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 225.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.13.1.1, 12 ms
Reply to request 0 from 10.13.1.1, 20 ms
Reply to request 0 from 10.13.1.1, 20 ms
Reply to request 0 from 10.13.1.1, 20 ms
Let’s look at the Multicast routing table on PE2 for this 225.1.1.1 group We should expect to see both PE2->PE1 and PE2->PE3 as outgoing interfaces, depending on the source:
PE2-RACK1#sh ip mroute 225.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 225.1.1.1), 00:04:54/stopped, RP 10.1.1.1, flags: SPF
  Incoming interface: Ethernet0/0.21, RPF nbr 172.16.12.1
  Outgoing interface list: Null

(10.1.1.2, 225.1.1.1), 00:04:54/00:02:50, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0.123, Forward/Sparse-Dense, 00:03:53/00:02:38

(10.82.1.2, 225.1.1.1), 00:04:54/00:03:02, flags: FT
  Incoming interface: Ethernet0/0.82, RPF nbr 0.0.0.0, Registering
  Outgoing interface list:
    Ethernet0/0.123, Forward/Sparse-Dense, 00:03:53/00:02:38
    Ethernet0/0.21, Forward/Sparse-Dense, 00:04:54/00:02:56

(172.16.12.2, 225.1.1.1), 00:04:54/00:03:08, flags: FT
  Incoming interface: Ethernet0/0.21, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0.123, Forward/Sparse-Dense, 00:03:53/00:02:38
Let’s look at the Multicast routing table on PE3 for this 225.1.1.1 group We should expect to see both PE2->PE3 and PE1->PE3 as incoming interfaces, depending on the source:
PE3-RACK1#sh ip mroute 225.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 225.1.1.1), 02:29:11/stopped, RP 10.1.1.1, flags: SJCF
  Incoming interface: Ethernet0/0.31, RPF nbr 172.16.13.1
  Outgoing interface list:
    Ethernet0/0.13, Forward/Sparse-Dense, 02:29:11/00:02:23

(10.1.1.2, 225.1.1.1), 00:05:51/00:02:26, flags: JT
  Incoming interface: Ethernet0/0.123, RPF nbr 172.16.123.2
  Outgoing interface list:
    Ethernet0/0.13, Forward/Sparse-Dense, 00:05:51/00:02:23

(10.82.1.2, 225.1.1.1), 00:00:45/00:02:22, flags: J
  Incoming interface: Ethernet0/0.123, RPF nbr 172.16.123.2
  Outgoing interface list:
    Ethernet0/0.13, Forward/Sparse-Dense, 00:00:45/00:02:23

(172.16.12.2, 225.1.1.1), 00:05:51/00:01:01, flags: JT
  Incoming interface: Ethernet0/0.123, RPF nbr 172.16.123.2
  Outgoing interface list:
    Ethernet0/0.13, Forward/Sparse-Dense, 00:05:51/00:02:23

(172.16.123.2, 225.1.1.1), 00:06:54/00:02:24, flags: FT
  Incoming interface: Ethernet0/0.123, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0.13, Forward/Sparse-Dense, 00:06:54/00:02:21