MULTICAST QUICKSTART CONFIGURATION GUIDE
 
 
 
 

Dense Mode
Sparse Mode with one RP
Sparse Mode with multiple RPs
Auto-RP with single RP
Auto-RP with multiple RPs
DVMRP
MBGP
MSDP
Stub Multicast Routing
UDLR
Bootstrap Router (BSR)
CGMP
IGMP Snooping
PGM
MRM
 
 
 

DENSE MODE

Cisco recommends using PIM sparse mode, particularly Auto-RP, wherever possible, especially for new deployments. However, if dense mode is desired, configure the global command 'ip multicast-routing' and the interface command 'ip pim sparse-dense-mode' on each interface that needs to process multicast traffic. The common requirement for all configurations in this document is to enable multicast routing globally and then PIM on the interfaces. The interface commands 'ip pim dense-mode' and 'ip pim sparse-mode' can now be configured as 'ip pim sparse-dense-mode'. In this mode, the interface is treated as dense mode if the group is in dense mode. If the group is in sparse mode, i.e. a Rendezvous Point is known, the interface is treated as sparse mode.

S=Source of multicast traffic
R=Receiver of multicast traffic

     +-------+         +-------+
S--e0|ROUTERA|s0-----s0|ROUTERB|e0--R
     +-------+         +-------+
 

RouterA configuration:

ip multicast-routing

interface ethernet0
ip address <address> <mask>
ip pim sparse-dense-mode

interface serial0
ip address <address> <mask>
ip pim sparse-dense-mode

RouterB configuration:

ip multicast-routing

interface serial0
ip address <address> <mask>
ip pim sparse-dense-mode

interface ethernet0
ip address <address> <mask>
ip pim sparse-dense-mode
 
 

SPARSE MODE
(with one RP)

ROUTERA is the Rendezvous Point (RP), which should typically be the router closest to the source. The RP discovers that it is the RP because all the other routers point to it as their RP and subsequently send Registers to it. You can configure multiple RPs, but only one RP per specific group.

     +-------+             +-------+
S--e0|ROUTERA|s0---------s0|ROUTERB|e0--R
     +-------+ 1.1.1.1     +-------+

RouterA configuration:

ip multicast-routing

interface ethernet0
ip address <address> <mask>
ip pim sparse-dense-mode

interface serial0
ip address 1.1.1.1 255.255.255.0
ip pim sparse-dense-mode

RouterB configuration:

ip multicast-routing
ip pim rp-address 1.1.1.1

interface serial0
ip address <address> <mask>
ip pim sparse-dense-mode

interface ethernet0
ip address <address> <mask>
ip pim sparse-dense-mode
 

SPARSE MODE
(with multiple RPs)

         Sa(224.1.1.1)      Sb(224.2.2.2)
           (224.1.1.2)        (224.2.2.3)
           (224.1.1.3)        (224.2.2.4)
               |                   |
           +-------+           +-------+
           |  RP1  |           |  RP2  |
           +-------+           +-------+
            1.1.1.1             2.2.2.2
               |                   |
               |                   |
           +-------+           +-------+
           |Router3|           |Router4|
           +-------+           +-------+
               |                   |
             -------------------------
                        |
                    receivers

Sa(Source) is sending to 224.1.1.1, 224.1.1.2, and 224.1.1.3. Sb is sending to 224.2.2.2, 224.2.2.3, and 224.2.2.4. You could have one router, either RP1 or RP2, be the RP for all groups. But if you want different RPs handling different groups, you need to configure all routers to include which groups the RPs will serve. With static RP configuration like this, all routers in the PIM domain must have the same 'ip pim rp-address <address> <acl>' commands configured. You could also use Auto-RP, described later, which is simpler to configure.
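The static group-to-RP lookup each router performs with these commands can be sketched as follows (illustrative Python, not IOS code; the addresses are taken from the diagram above):

```python
# Sketch of the static group-to-RP lookup described above. Each
# 'ip pim rp-address <addr> <acl>' entry maps the groups permitted
# by its access list to that RP.
RP_MAPPINGS = [
    # (RP address, groups its access list permits)
    ("1.1.1.1", {"224.1.1.1", "224.1.1.2", "224.1.1.3"}),  # access-list 2
    ("2.2.2.2", {"224.2.2.2", "224.2.2.3", "224.2.2.4"}),  # access-list 3
]

def rp_for_group(group):
    """Return the RP serving this group, or None if no entry matches."""
    for rp, groups in RP_MAPPINGS:
        if group in groups:
            return rp
    return None

print(rp_for_group("224.1.1.2"))  # -> 1.1.1.1
print(rp_for_group("224.2.2.4"))  # -> 2.2.2.2
```

Because every router in the domain carries the same mapping, all of them resolve a given group to the same RP.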

RP1 configuration:

ip multicast-routing
ip pim rp-address 2.2.2.2 3

access-list 3 permit 224.2.2.2
access-list 3 permit 224.2.2.3
access-list 3 permit 224.2.2.4

RP2 configuration:

ip multicast-routing
ip pim rp-address 1.1.1.1 2

access-list 2 permit 224.1.1.1
access-list 2 permit 224.1.1.2
access-list 2 permit 224.1.1.3

Router 3 and 4 configurations:

ip multicast-routing
ip pim rp-address 1.1.1.1 2
ip pim rp-address 2.2.2.2 3

access-list 2 permit 224.1.1.1
access-list 2 permit 224.1.1.2
access-list 2 permit 224.1.1.3
access-list 3 permit 224.2.2.2
access-list 3 permit 224.2.2.3
access-list 3 permit 224.2.2.4
 

AUTO-RP
(with one RP)

With Auto-RP, you configure the RPs themselves to announce their availability as RPs and mapping agents. The RPs send their announcements to 224.0.1.39. The RP mapping agent listens to the announce packets from the RPs and then sends RP-to-group mappings in a discovery message addressed to 224.0.1.40. These discovery messages are what the rest of the routers use for their RP-to-group map. In Cisco's network, we use one RP that also serves as the mapping agent. You can configure multiple RPs and multiple mapping agents for redundancy.

     +-------+         +-------+
S--e0|ROUTERA|s0-----s0|ROUTERB|e0--R
     +-------+         +-------+

RouterA configuration:

ip multicast-routing
ip pim send-rp-announce ethernet0 scope 16
ip pim send-rp-discovery scope 16

interface ethernet0
ip address <address> <mask>
ip pim sparse-dense-mode

interface serial0
ip address <address> <mask>
ip pim sparse-dense-mode

RouterB configuration:

ip multicast-routing

interface ethernet0
ip address <address> <mask>
ip pim sparse-dense-mode

interface serial0
ip address <address> <mask>
ip pim sparse-dense-mode
 

AUTO-RP
(with multiple RPs)

        Sa(239.0.0.0/8)     Sb(224.0.0.0/4)
               |                   |
           +-------+           +-------+
           |  RP1  |           |  RP2  |
           +-------+           +-------+
               |                   |
               |                   |
           +-------+           +-------+
           |Router3|           |Router4|
           +-------+           +-------+
               |                   |
             -------------------------
                        |
                    receivers

Router RP1 configuration:

ip multicast-routing
ip pim send-rp-announce ethernet0 scope 16 group-list 1
ip pim send-rp-discovery scope 16

access-list 1 permit 239.0.0.0 0.255.255.255

Router RP2 configuration:

ip multicast-routing
ip pim send-rp-announce ethernet0 scope 16 group-list 1
ip pim send-rp-discovery scope 16

access-list 1 deny 239.0.0.0 0.255.255.255
access-list 1 permit 224.0.0.0 15.255.255.255
 

The access lists allow the RPs to be an RP only for the groups you want. If no access list is configured, the RPs will be available as an RP for all groups. If two RPs are announcing their availability for the same group(s), the mapping agent(s) resolve these conflicts using the highest-IP-address-wins rule. If you want to influence which router is the RP for a particular group when two RPs are announcing for that group, you can configure each router with a loopback address. Place the higher IP address on the preferred RP and then use the loopback interface as the source of the announce packets, i.e. 'ip pim send-rp-announce loopback0'. When multiple mapping agents are used, they listen to each other's discovery packets and the mapping agent with the highest IP address wins, becoming the only forwarder of 224.0.1.40.
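The highest-IP-address-wins rule the mapping agents apply can be sketched as follows (illustrative Python, not IOS code; the addresses are made up for the example):

```python
# Sketch of the mapping agent tie-break described above: when several
# candidate RPs announce the same group, the one with the numerically
# highest IP address wins.
import ipaddress

def elect_rp(candidates):
    """candidates: RP addresses announcing for the same group."""
    return max(candidates, key=lambda a: int(ipaddress.IPv4Address(a)))

# Placing the higher loopback address on the preferred RP makes it win:
print(elect_rp(["10.1.1.1", "10.2.2.2"]))  # -> 10.2.2.2
```

Note the comparison is numeric, not lexicographic, which is why the addresses are converted to integers first.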

Further details on Auto-RP can be found here:
ftp://ftpeng.cisco.com/ipmulticast/autorp.html
 

DVMRP

Your provider may tell you to create a DVMRP tunnel to them to gain access to the multicast backbone in the Internet (MBONE). The minimum commands to configure a tunnel are:

interface tunnel0
ip unnumbered <any PIM interface>
tunnel source <address of source>
tunnel destination <address of ISP's mrouted box>
tunnel mode dvmrp
ip pim sparse-dense-mode

Typically the ISP will have you tunnel to a UNIX box running mrouted (DVMRP). If they instead have you tunnel to another Cisco box, then use the default GRE tunnel mode.

If, instead of just receiving multicast packets, you want to generate multicast packets for others on the MBONE to see, you need to advertise the source's subnet. If your multicast source host address is 131.108.1.1, then you need to advertise the existence of that subnet to the MBONE. By default, directly connected networks are advertised with metric 1. If your source is not directly connected to the router with the DVMRP tunnel, configure the following command under 'interface tunnel0' (the access list itself is global configuration):

ip dvmrp metric 1 list 3
access-list 3 permit 131.108.1.0 0.0.0.255

It is critical to include an access-list with this command to prevent advertising your entire unicast routing table into the mbone.

If your setup is like the following, and you want to propagate DVMRP routes through your domain, then you need to configure 'ip dvmrp unicast-routing' on s0 of Routers A and B. This forwards DVMRP routes to PIM neighbors, which will then have a DVMRP routing table used for RPF. DVMRP-learned routes take RPF precedence over all other protocols except directly connected routes.

     +-------+         +-------+                  +-------+
S--e0|ROUTERA|s0-----s0|ROUTERB|---DVMRP TUNNEL---|mrouted|
     +-------+         +-------+                  +-------+
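The RPF precedence just described can be sketched as a simple ordering (illustrative Python, not IOS code; the interface names are assumptions for the example):

```python
# Sketch of the RPF precedence described above: when choosing the RPF
# interface toward a source, directly connected routes win, then
# DVMRP-learned routes, then other unicast routes.
PRECEDENCE = {"connected": 0, "dvmrp": 1, "unicast": 2}

def best_rpf(routes):
    """routes: list of (route_origin, interface) tuples for one source."""
    return min(routes, key=lambda r: PRECEDENCE[r[0]])[1]

# A DVMRP route learned over the tunnel beats an ordinary unicast route:
print(best_rpf([("unicast", "s0"), ("dvmrp", "tunnel0")]))  # -> tunnel0
```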
 

MBGP

MBGP is a simple way to carry two sets of routes: one set for unicast routing and one set for multicast routing. MBGP provides the control necessary to decide where multicast packets are allowed to flow. The routes associated with multicast routing are used by PIM to build data distribution trees. MBGP provides the RPF path, not the creation of multicast state; PIM is still needed to forward the multicast packets.

                      unicast
       +-------+s0--192.168.100.0---s0+-------+
AS123  |ROUTERA|                      |ROUTERB|  AS321
       +-------+s1--192.168.200.0---s1+-------+
 loopback0           multicast              loopback0
192.168.2.2                                192.168.1.1

RouterA configuration:

ip multicast-routing

interface loopback0
ip pim sparse-dense-mode
ip address 192.168.2.2 255.255.255.0

interface serial0
ip address 192.168.100.1 255.255.255.0

interface serial1
ip pim sparse-dense-mode
ip address 192.168.200.1 255.255.255.0

router bgp 123
network 192.168.100.0 nlri unicast
network 192.168.200.0 nlri multicast
neighbor 192.168.1.1 remote-as 321 nlri unicast multicast
neighbor 192.168.1.1 ebgp-multihop 255
neighbor 192.168.1.1 update-source loopback0
neighbor 192.168.1.1 route-map setNH out

route-map setNH permit 10
match nlri multicast
set ip next-hop 192.168.200.2

route-map setNH permit 20
 

RouterB configuration:

ip multicast-routing

interface loopback0
ip pim sparse-dense-mode
ip address 192.168.1.1 255.255.255.0

interface serial0
ip address 192.168.100.2 255.255.255.0

interface serial1
ip pim sparse-dense-mode
ip address 192.168.200.2 255.255.255.0

router bgp 321
network 192.168.100.0 nlri unicast
network 192.168.200.0 nlri multicast
neighbor 192.168.2.2 remote-as 123 nlri unicast multicast
neighbor 192.168.2.2 ebgp-multihop 255
neighbor 192.168.2.2 update-source loopback0
neighbor 192.168.2.2 route-map setNH out

route-map setNH permit 10
match nlri multicast
set ip next-hop 192.168.200.1

route-map setNH permit 20
 

If your unicast and multicast topologies are congruent, i.e. going over the same link, then the primary difference in the configuration will be with the 'nlri unicast multicast' command:

network 192.168.100.0 nlri unicast multicast

The benefit of running MBGP with congruent topologies is that even though the traffic traverses the same paths, different policies can be applied to unicast BGP versus multicast BGP.

Further details on MBGP can be found here:
ftp://ftpeng.cisco.com/ipmulticast/mbgp.html
 

MSDP

MSDP connects multiple PIM-SM domains together. Each PIM-SM domain uses its own independent RP(s) and does not have to depend on RPs in other domains. MSDP allows domains to discover multicast sources in other domains. If you are also BGP peering with the MSDP peer, you should use the same IP address for MSDP as you do for BGP. When MSDP does peer-RPF checks, it expects the MSDP peer address to be the same address that BGP/MBGP gives it when it performs a route table lookup on the RP in the SA message. You are not required, however, to run BGP/MBGP with the MSDP peer, as long as there is a BGP/MBGP path between the MSDP peers. If there is no BGP/MBGP path, and there is more than one MSDP peer, you are required to use the 'ip msdp default-peer' command. Below, RP-A is the RP for its domain and RP-B is the RP for its domain.

       +-------+                     +-------+
AS123  | RP-A  |s0---192.168.100---s0| RP-B  |   AS321
       +-------+ .1               .2 +-------+

RP-A configuration:

ip multicast-routing

ip pim send-rp-announce ethernet0 scope 16
ip pim send-rp-discovery scope 16

ip msdp peer 192.168.100.2
ip msdp sa-request 192.168.100.2

interface serial0
ip address 192.168.100.1 255.255.255.0
ip pim sparse-dense-mode
 

RP-B configuration:

ip multicast-routing

ip pim send-rp-announce ethernet0 scope 16 group-list 1
ip pim send-rp-discovery scope 16

ip msdp peer 192.168.100.1
ip msdp sa-request 192.168.100.1

interface serial0
ip address 192.168.100.2 255.255.255.0
ip pim sparse-dense-mode
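The peer-RPF check described above can be sketched as follows (illustrative Python, not IOS code; the next-hop table is a stand-in assumption for a real BGP/MBGP lookup):

```python
# Rough sketch of the MSDP peer-RPF check: an SA message is accepted
# only if it arrived from the peer that BGP/MBGP says is on the path
# back toward the originating RP.
BGP_NEXT_HOP = {
    # originating RP address -> BGP/MBGP peer toward that RP (assumed)
    "192.168.100.2": "192.168.100.2",
}

def accept_sa(originating_rp, received_from_peer):
    """Peer-RPF: accept the SA only from the expected peer."""
    return BGP_NEXT_HOP.get(originating_rp) == received_from_peer

print(accept_sa("192.168.100.2", "192.168.100.2"))  # -> True
print(accept_sa("192.168.100.2", "10.9.9.9"))       # -> False
```

This is why the text recommends using the same address for MSDP and BGP: a mismatch makes the lookup fail and the SA is dropped.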
 

STUB MULTICAST ROUTING

This feature allows you to configure your remote/stub routers as IGMP proxy agents. Instead of fully participating in PIM, these stub routers simply forward IGMP messages from the host(s) to the upstream multicast router.
 

                            |
                        +---+---+
                        |  RTR1 | Access router
                        +---+---+
                            | s0 140.1.1.2
                            |
                            | s0 140.1.1.1
                        +---+---+
                        |  RTR2 | Stub router
                        +---+---+
                            |e0
                            |
                       -----+----- stub lan

RTR1 Configuration:

int s0
ip pim sparse-dense-mode
ip pim neighbor-filter 1

access-list 1 deny 140.1.1.1

"ip pim neighbor-filter" is needed so that RTR1 doesn't recognize RTR2 as a PIM neighbor. If you configure RT1 in sparse mode, then the neighbor filter is unnecessary. RTR2 must not run in sparse mode. When in dense mode, the stub multicast sources will be able to flood to the backbone routers.
 

RTR2 Configuration:

ip multicast-routing
int e0
ip pim sparse-dense-mode
ip igmp helper-address 140.1.1.2

int s0
ip pim sparse-dense-mode
 
 

IGMP UNIDIRECTIONAL LINK ROUTING (UDLR) FOR SATELLITE LINKS

Unidirectional Link Routing provides a method for forwarding multicast packets over a unidirectional satellite link to stub networks that have a back channel. This is similar to stub multicast routing. Without this feature, the uplink router would not be able to dynamically learn which IP multicast group addresses to forward over the unidirectional link, because the downlink router cannot send anything back.

            source (12.0.0.12)
               |
       -------------
              |
              | 12.0.0.1
        +------------+ 11.0.0.1
        | uplink-rtr |--------------+               NOTE: The back channel
        +------------+              |               is any return route and
     10.0.0.1 v                     |               any number of routers.
              v                     |
              v                     |
              v  UDL                | Back channel
              v                     |
              v                     |
     10.0.0.2 v                     |
       +--------------+ 13.0.0.2    |
       | downlink-rtr |-------------+
       +--------------+
             | 14.0.0.2
             |
        -------------
                 |
              receiver (14.0.0.14)
 

UPLINK-RTR Configuration:

ip multicast-routing

interface Ethernet0
description Typical IP multicast enabled interface
ip address 12.0.0.1 255.0.0.0
ip pim sparse-dense-mode

interface Ethernet1
description Back channel which has connectivity to downlink-rtr
ip address 11.0.0.1 255.0.0.0
ip pim sparse-dense-mode

interface Serial0
description Unidirectional to downlink-rtr
ip address 10.0.0.1 255.0.0.0
ip pim sparse-dense-mode
ip igmp unidirectional-link
no keepalive
 

DOWNLINK-RTR Configuration:

ip multicast-routing

interface Ethernet0
description Typical IP multicast enabled interface
ip address 14.0.0.2 255.0.0.0
ip pim sparse-dense-mode
ip igmp helper-address udl serial0

interface Ethernet1
description Back channel which has connectivity to uplink-rtr
ip address 13.0.0.2 255.0.0.0
ip pim sparse-dense-mode

interface Serial0
description Unidirectional to uplink-rtr
ip address 10.0.0.2 255.0.0.0
ip pim sparse-dense-mode
ip igmp unidirectional-link
no keepalive
 

PIMv2 BOOTSTRAP ROUTER (BSR)

If all routers in the network are running PIMv2, then you can configure a BSR instead of Auto-RP; the two are very similar. With a BSR configuration, you configure BSR candidates (similar to RP-Discovery in Auto-RP) and RP candidates (similar to RP-Announce in Auto-RP). We have more experience with Auto-RP, which works well, doesn't require PIMv2, and provides better scoping. But if you want to configure a BSR, this is how to do it:

1) On the candidate bootstrap routers configure:

"ip pim bsr-candidate <interface> <hash-mask-len> <pref>"

Where <interface> holds the candidate BSR's IP address. It is recommended (but not required) that <hash-mask-len> be the same across all candidate BSRs. The candidate BSR with the highest <pref> value is elected as the BSR for this domain.

Here's an example:

"ip pim bsr-candidate ethernet0 30 4"

The PIMv2 bootstrap router (BSR) is used to collect candidate RP information and to disseminate the RP-set information associated with each group prefix. To avoid a single point of failure, more than one router in a domain can be configured as a candidate BSR.

A BSR is elected among the candidate BSRs automatically, based on the preference values configured. The routers serving as candidate BSRs should be well connected and in the backbone portion of the network, as opposed to the dial-up portion of the network.
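The election among candidate BSRs can be sketched as follows (illustrative Python, not IOS code; the highest-IP-address tie-break is an assumption drawn from the PIMv2 specification, not stated above):

```python
# Sketch of the BSR election described above: the candidate with the
# highest preference wins; ties are assumed to be broken by the
# highest IP address.
import ipaddress

def elect_bsr(candidates):
    """candidates: list of (ip_address, preference) tuples."""
    return max(candidates,
               key=lambda c: (c[1], int(ipaddress.IPv4Address(c[0]))))[0]

# 'ip pim bsr-candidate ethernet0 30 4' gives this router preference 4:
print(elect_bsr([("10.1.1.1", 4), ("10.2.2.2", 2)]))  # -> 10.1.1.1
```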

2) Configure candidate RP routers. The following example shows a candidate RP, on the interface ethernet0, for the entire admin-scope address range:

        "access-list 11 permit 239.0.0.0 0.255.255.255"
        "ip pim rp-candidate ethernet0 group-list 11"
 

CGMP

On the router interface facing the switch:

ip pim sparse-dense-mode
ip cgmp

On the switch:

set cgmp enable
 

IGMP SNOOPING

IGMP snooping is available with release 4.1 of the Catalyst 5000 and requires a Supervisor III card. No configuration, other than PIM, is necessary on the router. A router is still necessary with IGMP snooping, however, to provide the IGMP querying.

This example shows how to enable IGMP snooping on the switch:

Console> (enable) set igmp enable
IGMP Snooping is enabled.
CGMP is disabled.

This example shows what happens if you try to enable IGMP if CGMP is already enabled:

Console> (enable) set igmp enable
Disable CGMP to enable IGMP Snooping feature.
 
 

PGM

Pragmatic General Multicast (PGM) is a reliable multicast transport protocol for applications that require ordered, duplicate-free, multicast data delivery from multiple sources to multiple receivers. PGM guarantees that a receiver in the group either receives all data packets from transmissions and retransmissions, or is able to detect unrecoverable data packet loss.

There are no PGM global commands. PGM is configured per interface:

'ip pgm'

Configures PGM on a given interface of the router. Multicast routing has to be enabled on the router along with PIM on the interface.
 

MRM

Multicast Route Monitor (MRM) facilitates automated fault detection in a large multicast routing infrastructure. It is designed to alert a network administrator to multicast routing problems in close to real time.

MRM has two types of components: the MRM tester and the MRM manager. An MRM tester is a sender and/or a receiver.

MRM is available in IOS 12.0(5)T onwards. Only the MRM testers and managers need to run an MRM-capable IOS version.
 

                  +------------------------------+
     +--------+e0 (                              ) e0+----------+
     | Sender +---( Multicast Forwarding Network )---+ Receiver |
     +--------+   (                              )   +----------+
          10.1.1.2+---------------+--------------+10.1.4.2
                                  |
                                e0|
                             +----+-----+
                             |  Manager |
                             +----------+

Make sure the "Multicast Forwarding Network" has no access-lists and boundaries that denies MRM data/control traffic. MRM test data is UDP/RTP packets addressed to configured group address. MRM control traffic between sender, receiver and manager, is addressed to 224.0.1.111 group which is joined by all three.

Test Sender:

interface Ethernet0
  ip mrm test-sender

Test Receiver:

interface Ethernet0
  ip mrm test-receiver

Test Manager:

ip mrm manager test1
 manager e0 group 239.1.1.1
 senders 1
 receivers 2 sender-list 1

 access-list 1 permit 10.1.1.2
 access-list 2 permit 10.1.4.2

Router# show ip mrm manager
Manager:test1/10.1.2.2 is not running
  Beacon interval/holdtime/ttl:60/86400/32
  Group:239.1.1.1, UDP port test-packet/status-report:16384/65535
  Test sender:
    10.1.1.2
  Test receiver:
    10.1.4.2

Start the test. The manager sends control messages to the test sender and test receiver as configured in the test parameters. The test receiver joins the group and monitors test packets sent from the test sender.

Router# mrm start test1
*Feb  4 10:29:51.798: IP MRM test 'test1' starts ......
Router#

Display Status report on manager:

Router# show ip mrm status

IP MRM status report cache:
Timestamp        Manager         Test Receiver   Pkt Loss/Dup (%)      Ehsr
*Feb  4 14:12:46 10.1.2.2        10.1.4.2        1            (4%)     29
*Feb  4 18:29:54 10.1.2.2        10.1.4.2        1            (4%)     15
Router#

The display above shows that the receiver sent two status reports (one line each) at the given timestamps. Each report contains one packet loss during the interval window (default one second). The Ehsr value shows the estimated next sequence number from the test sender. If the receiver had seen duplicate packets, it would show a negative number in the "Pkt Loss/Dup" column.
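The arithmetic behind the "Pkt Loss/Dup" column can be sketched as follows (illustrative Python; the expected count of 25 packets per window is an assumption chosen to reproduce the 1-packet/4% line above):

```python
# Sketch of the receiver's loss/duplicate accounting: it compares
# packets expected against packets received in a window. A positive
# result is loss; a negative result means duplicates were seen.
def loss_or_dup(expected, received):
    """Positive -> packets lost; negative -> duplicates received."""
    return expected - received

def loss_percent(expected, received):
    """Loss as a whole-number percentage of the expected count."""
    return round(100 * max(loss_or_dup(expected, received), 0) / expected)

print(loss_or_dup(25, 24), loss_percent(25, 24))  # -> 1 4
print(loss_or_dup(25, 27))                        # -> -2 (duplicates)
```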

Stop the test.

Router# mrm stop test1
*Feb  4 10:30:12.018: IP MRM test 'test1' stops
Router#

While the test runs, the MRM sender sends RTP packets to the configured group address at the default interval of 200 ms. The receiver monitors (expects) the same packets at the same default interval. If the receiver detects packet loss within the default window interval of 5 seconds, it sends a report to the MRM manager. The status reports from the receiver can be viewed with the "show ip mrm status" command on the manager.
 

For more in depth configuration coverage, on any of the above features, go to ftp://ftpeng.cisco.com/ipmulticast/
 
 

Last Update 3/16/00
mmcbride