Oct 14, 2024

How a Leading Israeli Financial Institution Achieved EVPN-VXLAN Success

A leading Israeli financial institution has launched a transformative project to upgrade and modernize its data center infrastructure, including replacing the existing network hardware.

The legacy DC network architecture was based on technologies such as switch stacking and Spanning Tree, with low-bandwidth links between the Top of Rack (ToR) switches, the main DC, and the DR sites.
While these setups served their purpose for many years, they came with inherent limitations.

The primary objective of the project was to implement a new DC environment using a network fabric based on EVPN-VXLAN technology over a spine-and-leaf physical topology, one that not only addressed the shortcomings of the existing infrastructure but also surpassed it.

Challenge:

The company employed multicast communication to synchronize data in real time between its servers.
One of the key objectives of the project was to extend support for Layer 2 (L2) and Layer 3 (L3) multicast over the newly implemented EVPN-VXLAN topology.

This posed a complex set of challenges that required careful planning and a deep understanding of multicast protocols to achieve seamless multicast communication within the data center environment and between the main and DR data centers over the DCI (Data Center Interconnect).

 

Legacy multicast implementation:

Multicast is traditionally deployed with an Ethernet and IP data plane on the network devices, with PIM and MSDP state acting as the control plane.

Each multicast flow in the network builds a distribution tree through an RP (Rendezvous Point) router, which is responsible for forwarding multicast content from senders to receivers.
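For readers less familiar with this model, a minimal, generic PIM-SM sketch on an Arista switch could look roughly like the following (the RP address and interface are placeholders, not taken from the customer's legacy configuration):

router pim sparse-mode
   ipv4
      rp address {RP_ADDR}
!
interface Vlan{VLAN_ID}
   pim ipv4 sparse-mode

With this in place, first-hop and last-hop routers register senders and join receivers towards the RP, which stitches the two sides of the tree together; MSDP is added only when RPs in separate domains need to exchange active-source information.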

 

Multicast implementation in an EVPN-VXLAN environment:

Compared to the legacy multicast implementation, in the new design EVPN acts as the control plane. It is responsible for sharing receiver and sender state by converting IGMP and PIM tables into BGP EVPN route types, which are distributed to all of the DC and DR leafs.
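This control-plane state can be inspected directly on the leafs. The exact route-type keywords vary by EOS release, so the following is only a sketch of where to look rather than verified output from this environment:

show bgp evpn route-type imet
show bgp evpn route-type smet

The IMET (Type-3) routes list the VTEPs participating in each VNI's flood domain, while the SMET (Type-6) routes carry the IGMP-derived group membership that allows leafs to forward multicast only towards VTEPs with interested receivers.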

 

Requirements:

For multicast to function, an additional dedicated underlay and overlay network is required, separate from the default underlay and overlay networks used to build the EVPN-VXLAN topology.

The underlay network runs a PIM ASM configuration between the leafs and spines.

Each leaf advertises a unique underlay multicast group for every VLAN/VRF that has multicast running on it.

The overlay network is responsible for advertising the actual multicast groups generated by the servers and hosts, ensuring that multicast traffic is propagated only to the intended recipients.
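The leaf-side pieces of these requirements appear in the Solution section below. The spine-side underlay configuration is not part of the original excerpts, but assuming the spines host the PIM ASM Rendezvous Point for the underlay groups (a common design choice; all names and addresses here are placeholders), a minimal sketch could look like:

router pim sparse-mode
   ipv4
      rp address {UNDERLAY_RP_ADDR}
!
interface Ethernet1/1
   description P2P_LINK_TO_LEAF-1_Ethernet55/1
   mtu 9214
   no switchport
   ip address {IP_ADDR}/31
   pim ipv4 sparse-mode

In practice the RP function would normally be made redundant, for example by sharing an anycast RP address between the spines, so that the loss of a single spine does not tear down the underlay multicast trees.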

 

Solution:

This section outlines the configurations used to integrate multicast into the fabric while maintaining performance, reliability, and scalability within the data center environment.

 

Arista Configuration to support Multicast L2:

ip igmp snooping vlan {VLAN_ID} querier
ip igmp snooping vlan {VLAN_ID} querier address {VTEP_ADDRESS}
!
interface Vxlan1
   vxlan vlan {VLAN_ID} multicast group {MC_GROUP_ADDR_1}
!
router bgp {AS_NUMBER}
   vlan-aware-bundle {VLAN_AWARE_BUNDLE_NAME}
      rd {RD}
      route-target both {RT}
      redistribute learned
      redistribute igmp
      vlan {ALL_VLAN_IDs}
!
interface Ethernet55/1
   description P2P_LINK_TO_SPINE-1_Ethernet1/1
   mtu 9214
   no switchport
   ip address {IP_ADDR}/31
   pim ipv4 sparse-mode
!
platform trident forwarding-table partition flexible exact-match 16000 l2-shared 128000 l3-shared 96000
!
router multicast
   ipv4
      routing
      software-forwarding sfe
!

 

Configuration explanation:

The two ip igmp snooping commands make the leaf the IGMP querier for the VLAN, sourcing its queries from the VTEP address, so receiver membership is learned locally on every leaf. The vxlan vlan ... multicast group statement maps the VLAN to its dedicated underlay multicast group, which carries that VLAN's multicast traffic between VTEPs, while redistribute igmp under the vlan-aware-bundle exports the locally learned IGMP state into EVPN so that traffic is delivered only to leafs with interested receivers. pim ipv4 sparse-mode on the point-to-point uplinks enables PIM ASM in the underlay towards the spines. Finally, the platform trident forwarding-table partition command re-partitions the hardware forwarding table to provide the scale required by the design, and router multicast enables IPv4 multicast routing, with software-forwarding sfe selecting the software forwarding engine for packets that must be processed in software.
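A few show commands are useful for validating this part of the design; the following is a generic sketch with placeholder values rather than output from the production fabric:

show ip igmp snooping vlan {VLAN_ID}
show ip igmp snooping groups vlan {VLAN_ID}
show vxlan vni
show ip pim neighbor

The first two confirm that the querier is active and that receiver memberships are being learned, show vxlan vni confirms the VLAN-to-VNI mapping, and show ip pim neighbor verifies that PIM adjacencies have formed on the underlay uplinks.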

 

Configuration to support Multicast L3:

interface {Loopback_X}
   description {LOOPBACK_FOR_PIM}
   vrf {VRF_NAME}
   ip address {IP_ADDR}/32
!
interface Vlan{VLAN_ID}
   no shutdown
   mtu 9214
   vrf {VRF_NAME}
   ip igmp
   pim ipv4 local-interface {Loopback_X}
   ip address virtual {ANYCAST_GW_ADDR}/24
!
interface Vxlan1
   vxlan vrf {VRF_NAME} multicast group {MC_GROUP_ADDR_2}
!
router bgp {AS_NUMBER}
   vrf {VRF_NAME}
      rd {RD}
      evpn multicast
      route-target import evpn {RT}
      route-target export evpn {RT}
      router-id {ROUTER_ID}
      redistribute connected
!
router multicast
   vrf {VRF_NAME}
      ipv4
         routing
!

 

Configuration explanation:

A dedicated loopback is created inside the tenant VRF and referenced as the PIM local interface, giving each leaf a stable source address for PIM signaling in the overlay. On the anycast-gateway SVI, ip igmp enables IGMP membership tracking for the subnet, while ip address virtual keeps the same distributed gateway address on every leaf. The vxlan vrf ... multicast group statement assigns the VRF its own dedicated underlay multicast group for routed multicast traffic, and evpn multicast under the BGP VRF enables EVPN multicast signaling for the VRF so that sender and receiver state is shared between all leafs. Finally, router multicast with ipv4 routing under the VRF enables multicast routing for the tenant.
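As with the L2 configuration, a short verification sketch (generic commands, not output from the production fabric) helps confirm the routed-multicast pieces:

show ip pim neighbor
show ip igmp groups
show ip mroute

Each of these should be scoped to the tenant VRF (the exact placement of the vrf keyword varies by EOS release); together they confirm that PIM state exists inside the VRF, that IGMP memberships are learned on the anycast-gateway SVIs, and that (S,G) and (*,G) entries are built for the groups generated by the servers.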

 

Results:

Multicast is up and running over the EVPN-VXLAN fabric as expected, with fast reroute convergence on link failures thanks to BFD signaling.
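The BFD portion of the configuration is not shown in the excerpts above, but a typical way to achieve this on Arista EOS is to enable BFD on the underlay BGP peerings; the peer-group name below is a placeholder assumption:

router bgp {AS_NUMBER}
   neighbor {UNDERLAY_PEER_GROUP} bfd

With BFD tied to the underlay sessions, a failed leaf-spine link is detected within the configured BFD timers and both unicast and multicast traffic converge onto the remaining uplinks.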

 

Summary:

The implementation of multicast over the fabric has improved network performance by reducing the amount of BUM traffic flooded towards leafs with no interested receivers.

Thorough testing and validation have confirmed the successful integration of multicast capabilities without compromising the stability of the existing EVPN-VXLAN topology.

 

Nir Gal & Dor Ben Hamo
Net&Sec Solution Architects

 

 

 

 

 

 

 
