Internet Engineering Task Force (IETF)                   A. Sajassi, Ed.
Request for Comments: 8365                                         Cisco
Category: Standards Track                                  J. Drake, Ed.
ISSN: 2070-1721                                                  Juniper
                                                                N. Bitar
                                                                   Nokia
                                                              R. Shekhar
                                                                 Juniper
                                                               J. Uttaro
                                                                    AT&T
                                                           W. Henderickx
                                                                   Nokia
                                                              March 2018
A Network Virtualization Overlay Solution Using Ethernet VPN (EVPN)
Abstract
This document specifies how Ethernet VPN (EVPN) can be used as a
Network Virtualization Overlay (NVO) solution and explores the
various tunnel encapsulation options over IP and their impact on the
EVPN control plane and procedures. In particular, the following
encapsulation options are analyzed: Virtual Extensible LAN (VXLAN),
Network Virtualization using Generic Routing Encapsulation (NVGRE),
and MPLS over GRE. This specification is also applicable to Generic
Network Virtualization Encapsulation (GENEVE); however, some
incremental work is required, which will be covered in a separate
document. This document also specifies new multihoming procedures
for split-horizon filtering and mass withdrawal. It also specifies
EVPN route constructions for VXLAN/NVGRE encapsulations and
Autonomous System Border Router (ASBR) procedures for multihoming of
Network Virtualization Edge (NVE) devices.
Status of This Memo
This is an Internet Standards Track document.
This document is a product of the Internet Engineering Task Force
(IETF). It represents the consensus of the IETF community. It has
received public review and has been approved for publication by the
Internet Engineering Steering Group (IESG). Further information on
Internet Standards is available in Section 2 of RFC 7841.
Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
https://www.rfc-editor.org/info/rfc8365.
Copyright Notice
Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction
2. Requirements Notation and Conventions
3. Terminology
4. EVPN Features
5. Encapsulation Options for EVPN Overlays
   5.1. VXLAN/NVGRE Encapsulation
        5.1.1. Virtual Identifiers Scope
        5.1.2. Virtual Identifiers to EVI Mapping
        5.1.3. Constructing EVPN BGP Routes
   5.2. MPLS over GRE
6. EVPN with Multiple Data-Plane Encapsulations
7. Single-Homing NVEs - NVE Residing in Hypervisor
   7.1. Impact on EVPN BGP Routes & Attributes for VXLAN/NVGRE
   7.2. Impact on EVPN Procedures for VXLAN/NVGRE Encapsulations
8. Multihoming NVEs - NVE Residing in ToR Switch
   8.1. EVPN Multihoming Features
        8.1.1. Multihomed ES Auto-Discovery
        8.1.2. Fast Convergence and Mass Withdrawal
        8.1.3. Split-Horizon
        8.1.4. Aliasing and Backup Path
        8.1.5. DF Election
   8.2. Impact on EVPN BGP Routes and Attributes
   8.3. Impact on EVPN Procedures
        8.3.1. Split Horizon
        8.3.2. Aliasing and Backup Path
        8.3.3. Unknown Unicast Traffic Designation
9. Support for Multicast
10. Data-Center Interconnections (DCIs)
    10.1. DCI Using GWs
    10.2. DCI Using ASBRs
          10.2.1. ASBR Functionality with Single-Homing NVEs
          10.2.2. ASBR Functionality with Multihoming NVEs
11. Security Considerations
12. IANA Considerations
13. References
    13.1. Normative References
    13.2. Informative References
Acknowledgements
Contributors
Authors' Addresses
1. Introduction
This document specifies how Ethernet VPN (EVPN) [RFC7432] can be used
as a Network Virtualization Overlay (NVO) solution and explores the
various tunnel encapsulation options over IP and their impact on the
EVPN control plane and procedures. In particular, the following
encapsulation options are analyzed: Virtual Extensible LAN (VXLAN)
[RFC7348], Network Virtualization using Generic Routing Encapsulation
(NVGRE) [RFC7637], and MPLS over Generic Routing Encapsulation (GRE)
[RFC4023]. This specification is also applicable to Generic Network
Virtualization Encapsulation (GENEVE) [GENEVE]; however, some
incremental work is required, which will be covered in a separate
document [EVPN-GENEVE]. This document also specifies new multihoming
procedures for split-horizon filtering and mass withdrawal. It also
specifies EVPN route constructions for VXLAN/NVGRE encapsulations and
Autonomous System Border Router (ASBR) procedures for multihoming of
Network Virtualization Edge (NVE) devices.
In the context of this document, an NVO is a solution to address the
requirements of a multi-tenant data center, especially one with
virtualized hosts, e.g., Virtual Machines (VMs) or virtual workloads.
The key requirements of such a solution, as described in [RFC7364],
are the following:
- Isolation of network traffic per tenant
- Support for a large number of tenants (tens or hundreds of
thousands)
- Extension of Layer 2 (L2) connectivity among different VMs
belonging to a given tenant segment (subnet) across different
Points of Delivery (PoDs) within a data center or between
different data centers
- Allowing a given VM to move between different physical points of
attachment within a given L2 segment
The underlay network for NVO solutions is assumed to provide IP
connectivity between NVO endpoints.
This document describes how EVPN can be used as an NVO solution and
explores applicability of EVPN functions and procedures. In
particular, it describes the various tunnel encapsulation options for
EVPN over IP and their impact on the EVPN control plane as well as
procedures for two main scenarios:
(a) single-homing NVEs - when an NVE resides in the hypervisor, and
(b) multihoming NVEs - when an NVE resides in a Top-of-Rack (ToR)
device.
The possible encapsulation options for EVPN overlays that are
analyzed in this document are:
- VXLAN and NVGRE
- MPLS over GRE
Before getting into the description of the different encapsulation
options for EVPN over IP, it is important to highlight the EVPN
solution's main features, how those features are currently supported,
and any impact that the encapsulation has on those features.
2. Requirements Notation and Conventions
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.
3. Terminology
Most of the terminology used in this document comes from [RFC7432]
and [RFC7365].
VXLAN: Virtual Extensible LAN
GRE: Generic Routing Encapsulation
NVGRE: Network Virtualization using Generic Routing Encapsulation
GENEVE: Generic Network Virtualization Encapsulation
PoD: Point of Delivery
NV: Network Virtualization
NVO: Network Virtualization Overlay
NVE: Network Virtualization Edge
VNI: VXLAN Network Identifier
VSID: Virtual Subnet Identifier (for NVGRE)
I-SID: Service Instance Identifier
EVPN: Ethernet VPN
EVI: EVPN Instance. An EVPN instance spanning the Provider Edge
(PE) devices participating in that EVPN
MAC-VRF: A Virtual Routing and Forwarding table for Media Access
Control (MAC) addresses on a PE
IP-VRF: A Virtual Routing and Forwarding table for Internet Protocol
(IP) addresses on a PE
ES: Ethernet Segment. When a customer site (device or network) is
connected to one or more PEs via a set of Ethernet links, then
that set of links is referred to as an 'Ethernet segment'.
Ethernet Segment Identifier (ESI): A unique non-zero identifier that
identifies an Ethernet segment is called an 'Ethernet Segment
Identifier'.
Ethernet Tag: An Ethernet tag identifies a particular broadcast
domain, e.g., a VLAN. An EVPN instance consists of one or more
broadcast domains.
PE: Provider Edge
Single-Active Redundancy Mode: When only a single PE, among all the
PEs attached to an ES, is allowed to forward traffic to/from that
ES for a given VLAN, then the Ethernet segment is defined to be
operating in Single-Active redundancy mode.
All-Active Redundancy Mode: When all PEs attached to an Ethernet
segment are allowed to forward known unicast traffic to/from that
ES for a given VLAN, then the ES is defined to be operating in
All-Active redundancy mode.
PIM-SM: Protocol Independent Multicast - Sparse-Mode
PIM-SSM: Protocol Independent Multicast - Source-Specific Multicast
BIDIR-PIM: Bidirectional PIM
4. EVPN Features
EVPN [RFC7432] was originally designed to support the requirements
detailed in [RFC7209] and therefore has the following attributes
which directly address control-plane scaling and ease of deployment
issues.
1. Control-plane information is distributed with BGP, and broadcast
and multicast traffic is sent using a shared multicast tree or
with ingress replication.
2. Control-plane learning is used for MAC (and IP) addresses
instead of data-plane learning. The latter requires the
flooding of unknown unicast and Address Resolution Protocol
(ARP) frames, whereas the former does not require any flooding.
3. Route Reflector (RR) is used to reduce a full mesh of BGP
sessions among PE devices to a single BGP session between a PE
and the RR. Furthermore, RR hierarchy can be leveraged to scale
the number of BGP routes on the RR.
4. Auto-discovery via BGP is used to discover PE devices
participating in a given VPN, PE devices participating in a
given redundancy group, tunnel encapsulation types, multicast
tunnel types, multicast members, etc.
5. All-Active multihoming is used. This allows a given Customer
Edge (CE) device to have multiple links to multiple PEs, and
traffic to/from that CE fully utilizes all of these links.
6. When a link between a CE and a PE fails, the PEs for that EVI
are notified of the failure via the withdrawal of a single EVPN
route. This allows those PEs to remove the withdrawing PE as a
next hop for every MAC address associated with the failed link.
This is termed "mass withdrawal" (an informal sketch of this
behavior follows this list).
7. BGP route filtering and constrained route distribution are
leveraged to ensure that the control-plane traffic for a given
EVI is only distributed to the PEs in that EVI.
8. When an IEEE 802.1Q [IEEE.802.1Q] interface is used between a CE
and a PE, each of the VLAN IDs (VIDs) on that interface can be
mapped onto a bridge table (for up to 4094 such bridge tables).
All these bridge tables may be mapped onto a single MAC-VRF (in
case of VLAN-aware bundle service).
9. VM Mobility mechanisms ensure that all PEs in a given EVI know
the ES with which a given VM, as identified by its MAC and IP
addresses, is currently associated.
10. Route Targets (RTs) are used to allow the operator (or customer) to define a
spectrum of logical network topologies including mesh, hub and
spoke, and extranets (e.g., a VPN whose sites are owned by
different enterprises), without the need for proprietary
software or the aid of other virtual or physical devices.
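The following non-normative Python sketch illustrates the mass-withdrawal
behavior referenced in item 6 above; the PE name, ESI value, and MAC
addresses are purely illustrative, and the BGP route encoding itself is
omitted. Remote MAC entries are keyed by the advertising PE and Ethernet
segment, so withdrawing a single per-ES route invalidates all of the
associated MAC entries at once.

   from collections import defaultdict

   # (pe, esi) -> set of MAC addresses learned against that PE and ES.
   mac_table = defaultdict(set)
   mac_table[("PE2", "ESI-1")].update({"00:aa:00:00:00:01",
                                       "00:aa:00:00:00:02",
                                       "00:aa:00:00:00:03"})

   def on_per_es_route_withdrawal(pe, esi):
       """Remove the withdrawing PE as next hop for every MAC on that ES."""
       return mac_table.pop((pe, esi), set())

   # One route withdrawal removes all three MACs learned from PE2 on ESI-1.
   assert len(on_per_es_route_withdrawal("PE2", "ESI-1")) == 3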
Because the design goal for NVO is millions of instances per common
physical infrastructure, the scaling properties of the control plane
for NVO are extremely important. EVPN and the extensions described
herein are designed with this level of scalability in mind.
5. Encapsulation Options for EVPN Overlays
5.1. VXLAN/NVGRE Encapsulation
Both VXLAN and NVGRE are examples of technologies that provide a
data-plane encapsulation used to transport a packet over the common
physical IP infrastructure between Network Virtualization Edges
(NVEs), e.g., VXLAN Tunnel End Points (VTEPs) in a VXLAN network.
Both of these technologies include the identifier of the specific NVO
instance, a VNI in VXLAN and a VSID in NVGRE, in each packet. In the
remainder of this document, we use VNI to represent the NVO instance
identifier, with the understanding that VSID can equally be used if
the encapsulation is NVGRE, unless stated otherwise.
Note that a PE is equivalent to an NVE/VTEP.
VXLAN encapsulation is based on UDP, with an 8-byte header following
the UDP header. VXLAN provides a 24-bit VNI, which typically
provides a one-to-one mapping to the tenant VID, as described in
[RFC7348]. In this scenario, the ingress VTEP does not include an
inner VLAN tag on the encapsulated frame, and the egress VTEP
discards the frames with an inner VLAN tag. This mode of operation
in [RFC7348] maps to VLAN-Based Service in [RFC7432], where a tenant
VID gets mapped to an EVI.
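As an informal, non-normative illustration (not taken from [RFC7348] or
from this specification), the following Python sketch composes the
8-byte VXLAN header described above: a flags octet with the I bit set
to indicate a valid VNI, 24 reserved bits, the 24-bit VNI, and a final
reserved octet. The helper name and the example VNI value are
assumptions for the example only.

   import struct

   def vxlan_header(vni):
       """Sketch of the 8-byte VXLAN header from RFC 7348.

       Flags (8 bits, I bit = 0x08), Reserved (24 bits),
       VNI (24 bits), Reserved (8 bits).
       """
       if not 0 <= vni < 2**24:
           raise ValueError("VNI must fit in 24 bits")
       # Pack as two 32-bit words: flags/reserved, then VNI/reserved.
       return struct.pack("!II", 0x08 << 24, vni << 8)

   # Example: VNI 5000 yields an 8-byte header placed after the outer
   # UDP header (destination port 4789 per RFC 7348).
   assert len(vxlan_header(5000)) == 8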
VXLAN also provides an option of including an inner VLAN tag in the
encapsulated frame, if explicitly configured at the VTEP. This mode
of operation can map to VLAN Bundle Service in [RFC7432] because all
the tenant's tagged frames map to a single bridge table / MAC-VRF,
and the inner VLAN tag is not used for lookup by the disposition PE
when performing VXLAN decapsulation as described in Section 6 of
[RFC7348].
NVGRE [RFC7637] is based on GRE encapsulation, and it mandates the
inclusion of the optional GRE Key field, which carries the VSID.
There is a one-to-one mapping between the VSID and the tenant VID, as
described in [RFC7637]. The inclusion of an inner VLAN tag is
prohibited. This mode of operation in [RFC7637] maps to the
VLAN-Based Service in [RFC7432].
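A similar non-normative sketch (again, not taken from [RFC7637])
composes the 8-byte NVGRE header: the GRE flags word with only the Key
Present bit set, the protocol type 0x6558 (Transparent Ethernet
Bridging), and the Key field carrying the 24-bit VSID together with an
8-bit FlowID. The helper name and example VSID are illustrative.

   import struct

   def nvgre_header(vsid, flow_id=0):
       """Sketch of the NVGRE (GRE) header described in RFC 7637."""
       if not 0 <= vsid < 2**24:
           raise ValueError("VSID must fit in 24 bits")
       flags = 0x2000                 # only the Key Present (K) bit set
       proto = 0x6558                 # Transparent Ethernet Bridging
       key = (vsid << 8) | (flow_id & 0xFF)
       return struct.pack("!HHI", flags, proto, key)

   assert len(nvgre_header(0x1234)) == 8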
As described in the next section, there is no change to the encoding
of EVPN routes to support VXLAN or NVGRE encapsulation, except for
the use of the BGP Encapsulation Extended Community to indicate the
encapsulation type (e.g., VXLAN or NVGRE). However, there is
potential impact on the EVPN procedures, depending on where the NVE is
located (i.e., in the hypervisor or the ToR) and whether multihoming
capabilities are required.
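For readers unfamiliar with the BGP Encapsulation Extended Community
defined in [RFC5512], the following non-normative sketch builds its
8-octet value: type 0x03 (transitive opaque), sub-type 0x0c, four
reserved octets, and a 2-octet tunnel type from the IANA "BGP Tunnel
Encapsulation Attribute Tunnel Types" registry (VXLAN = 8, NVGRE = 9,
MPLS-in-GRE = 11). The function name is an assumption for this sketch.

   import struct

   # Tunnel type code points from the IANA "BGP Tunnel Encapsulation
   # Attribute Tunnel Types" registry (subset relevant to this document).
   TUNNEL_TYPE = {"VXLAN": 8, "NVGRE": 9, "MPLS-in-GRE": 11}

   def encapsulation_ext_community(tunnel):
       """Sketch of the 8-octet BGP Encapsulation Extended Community:
       type 0x03, sub-type 0x0c, 4 reserved octets, 2-octet tunnel type."""
       return struct.pack("!BB4xH", 0x03, 0x0C, TUNNEL_TYPE[tunnel])

   # An NVE advertising EVPN routes over VXLAN would attach:
   assert encapsulation_ext_community("VXLAN").hex() == "030c000000000008"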
5.1.1. Virtual Identifiers Scope
Although VNIs are defined as 24-bit globally unique values, there are
scenarios in which it is desirable to use a locally significant value
for the VNI, especially in the context of a data-center interconnect.
5.1.1.1. Data-Center Interconnect with Gateway
In the case where NVEs in different data centers need to be
interconnected and the NVEs need to use VNIs as globally unique
identifiers within a data center, a Gateway (GW) needs to be
employed at the edge of the data-center network (DCN). This is
because the Gateway will provide the functionality of translating the
VNI when crossing network boundaries, which may align with operator
span-of-control boundaries. As an example, consider the network of
Figure 1. Assume there are three network operators: one for each of
the DC1, DC2, and WAN networks. The Gateways at the edge of the data
centers are responsible for translating the VNIs between the values
used in each of the DCNs and the values used in the WAN.
                         +--------------+
                         |              |
        +---------+      |     WAN      |      +---------+
+----+  |       +---+  +----+        +----+  +---+       |  +----+
|NVE1|--|       |   |  |WAN |        |WAN |  |   |       |--|NVE3|
+----+  |IP     |GW |--|Edge|        |Edge|--|GW |    IP |  +----+
+----+  |Fabric +---+  +----+        +----+  +---+ Fabric|  +----+
|NVE2|--|         |      |              |      |         |--|NVE4|
+----+  +---------+      +--------------+      +---------+  +----+
        |<------ DC 1 ------>         <------ DC2 ------>|

            Figure 1: Data-Center Interconnect with Gateway
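To make the Gateway's translation step concrete, the following
non-normative sketch shows a GW rewriting a DC-local VNI to the value
used in the WAN and back again; the table contents and VNI values are
invented for illustration.

   # Hypothetical per-gateway translation tables; the VNI values are
   # purely illustrative.
   DC_TO_WAN_VNI = {10100: 5100, 10200: 5200}
   WAN_TO_DC_VNI = {wan: dc for dc, wan in DC_TO_WAN_VNI.items()}

   def translate_vni(vni, towards_wan):
       """Rewrite the VNI carried in the overlay header at the GW."""
       table = DC_TO_WAN_VNI if towards_wan else WAN_TO_DC_VNI
       try:
           return table[vni]
       except KeyError:
           raise ValueError("no translation provisioned for VNI %d" % vni)

   assert translate_vni(10100, towards_wan=True) == 5100
   assert translate_vni(5100, towards_wan=False) == 10100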
5.1.1.2. Data-Center Interconnect without Gateway
In the case where NVEs in different data centers need to be
interconnected, and the NVEs need to use locally assigned VNIs (e.g.,
similar to MPLS labels), there may be no need to employ Gateways at
the edge of the DCN. More specifically, the VNI value that is used
by the transmitting NVE is allocated by the NVE that is receiving the
traffic (in other words, this is similar to a "downstream-assigned"
MPLS label). This allows the VNI space to be decoupled between
different DCNs without the need for a dedicated Gateway at the edge
of the data centers. This topic is covered in Section 10.2.
                         +--------------+
                         |              |
        +---------+      |     WAN      |      +---------+
+----+  |         |    +----+        +----+    |         |  +----+
|NVE1|--|         |    |ASBR|        |ASBR|    |         |--|NVE3|
+----+  |IP Fabric|----|    |        |    |----|IP Fabric|  +----+
+----+  |         |    +----+        +----+    |         |  +----+
|NVE2|--|         |      |              |      |         |--|NVE4|
+----+  +---------+      +--------------+      +---------+  +----+
        |<------ DC 1 ------>         <------ DC2 ------>|

              Figure 2: Data-Center Interconnect with ASBR
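The following non-normative sketch models the locally assigned
(downstream-assigned) VNI behavior described in this section: each
receiving NVE picks its own VNI for a MAC-VRF and advertises it, and
the transmitting NVE simply uses whatever value the egress NVE
advertised. Class, attribute, and tenant names are hypothetical.

   import itertools

   class Nve:
       """Toy model of an NVE using locally assigned VNIs."""

       def __init__(self, name):
           self.name = name
           self._vni_pool = itertools.count(1)   # local, per-NVE number space
           self.local_vni = {}                   # MAC-VRF -> locally chosen VNI

       def advertise(self, mac_vrf):
           # The receiving (egress) NVE picks the VNI, much like a
           # downstream-assigned MPLS label, and carries it in its EVPN
           # route advertisements.
           vni = self.local_vni.setdefault(mac_vrf, next(self._vni_pool))
           return {"nve": self.name, "mac_vrf": mac_vrf, "vni": vni}

   nve3 = Nve("NVE3")
   route = nve3.advertise("tenant-red")
   # The ingress NVE (e.g., NVE1) encapsulates traffic towards NVE3 using
   # the VNI that NVE3 advertised; the VNI spaces of DC1 and DC2 stay
   # decoupled without a translating gateway.
   vni_towards_nve3 = route["vni"]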
5.1.2. Virtual Identifiers to EVI Mapping
Just like in [RFC7432], where two options existed for mapping
broadcast domains (represented by VLAN IDs) to an EVI, when the EVPN
control plane is used in conjunction with VXLAN (or NVGRE
encapsulation), there are also two options for mapping broadcast
domains represented by VXLAN VNIs (or NVGRE VSIDs) to an EVI:
Option 1: A Single Broadcast Domain per EVI
In this option, a single Ethernet broadcast domain (e.g., subnet)
represented by a VNI is mapped to a unique EVI. This corresponds to
the VLAN-Based Service in [RFC7432], where a tenant-facing interface,
logical interface (e.g., represented by a VID), or physical interface
gets mapped to an EVI. As such, a BGP Route Distinguisher (RD) and
Route Target (RT) are needed per VNI on every NVE. The advantage of
this model is that it allows the BGP RT constraint mechanisms to be
used in order to limit the propagation and import of routes to only
the NVEs that are interested in a given VNI. The disadvantage of
this model may be the provisioning overhead if the RD and RT are not
derived automatically from the VNI.
In this option, the MAC-VRF table is identified by the RT in the
control plane and by the VNI in the data plane. Here, the specific
MAC-VRF table corresponds to only a single bridge table.
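As an illustration of why auto-derivation removes the per-VNI
provisioning burden noted above, the sketch below derives an RT for a
VNI with no configuration beyond the NVE's 2-octet AS number. This
deliberately simplified scheme (RT = ASN:VNI) only conveys the idea and
is not a normative auto-derivation procedure.

   def auto_derive_rt(asn, vni):
       """Simplified, non-normative RT derivation: <2-octet ASN>:<VNI>.

       A real deployment would follow the auto-derivation rules of the
       EVPN specifications; this sketch only shows that every NVE can
       compute the same value from the VNI alone.
       """
       if not (0 <= asn < 2**16 and 0 <= vni < 2**24):
           raise ValueError("ASN must be 2 octets and VNI 24 bits")
       return "%d:%d" % (asn, vni)

   # All NVEs interested in VNI 10100 import/export RT 65001:10100, so
   # RT constraint limits route propagation to exactly those NVEs.
   assert auto_derive_rt(65001, 10100) == "65001:10100"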
Option 2: Multiple Broadcast Domains per EVI
In this option, multiple subnets, each represented by a unique VNI,
are mapped to a single EVI. For example, if a tenant has multiple
segments/subnets, each represented by a VNI, then all the VNIs for
that tenant are mapped to a single EVI; that is, the EVI in this
case represents the tenant and not a subnet. This corresponds to the
VLAN-aware bundle service in [RFC7432]. The advantage of this model
is that it doesn't require the provisioning of an RD/RT per VNI.
However, this is a moot point when compared to Option 1 where auto-
derivation is used. The disadvantage of this model is that routes
would be imported by NVEs that may not be interested in a given VNI.
In this option, the MAC-VRF table is identified by the RT in the
control plane; a specific bridge table for that MAC-VRF is identified
by the