Whether it’s due to IPv4 address exhaustion, compliance mandates, or the need to connect to IPv6-only clients on the internet, IPv6 adoption in the public cloud is growing rapidly. Meanwhile, enterprises also want to connect cloud-based applications back to applications running on-premises over IPv6. Today, we are announcing a significant expansion to our IPv6 Hybrid Connectivity portfolio, building on the existing Dedicated Interconnect and HA-VPN hybrid connectivity options.
 New in the IPv6 Hybrid Connectivity portfolio are:
IPv6 BGP sessions
Partner Interconnect IPv6
IPv6-only HA-VPN
Prior to the availability of these new features, you could route IPv6 traffic over the Dedicated Interconnect and HA-VPN hybrid connectivity options only by using an underlying IPv4 BGP session.
The new additions to our IPv6 Hybrid Connectivity portfolio address these needs as follows:
IPv6 BGP Sessions will allow customers to exchange IPv6 prefixes over underlying IPv6 BGP sessions, thus eliminating the dependence on an IPv4 BGP peering device to advertise and receive IPv6 prefixes between Cloud Router and the remote BGP peering device.
Partner Interconnect IPv6 will allow customers to establish connectivity between their on-premises and Google Cloud VPC networks through a service provider using Layer 2 or Layer 3 Partner Interconnect attachments.
IPv6-only HA-VPN will allow customers to use IPv6 addressing for both the inner and outer IP addresses of the IPsec VPN tunnels between their Google Cloud VPN gateways and peer VPN gateways.
These new capabilities are critical for enterprises deploying cloud-native, serverless and container-based services, 5G applications, and AI/ML applications that require IP addressing scale along with the ability to connect these workloads between on-premises and public cloud environments.
Now, let’s discuss how you can use these solutions to connect your on-premises IPv6 workloads to IPv6 workloads in your Google Cloud VPC networks.
IPv6 BGP Sessions
Until now, IPv6 prefixes were exchanged over IPv4 MP-BGP sessions, with the IPv6 address set as the next hop.
The process to enable IPv6 over an existing IPv4 BGP session involved re-negotiating the BGP session with the extra protocol (IPv6). This required resetting the BGP session, potentially impacting existing IPv4 traffic over the underlying interconnect attachment or VPN tunnel. Moreover, extra configuration was needed on the on-premises router to override the next-hop field for the exported IPv6 prefixes.
With the launch of IPv6 BGP sessions, a parallel BGP session is established over the same Interconnect VLAN attachment or VPN tunnel. The new BGP session auto-allocates the IPv6 next hop, eliminating the need to add IPv6 as a second protocol to the existing IPv4 BGP session. This avoids both resetting the BGP session and managing route-maps on the on-premises router to override next hops when exporting IPv6 routes. IPv6 BGP sessions can be used with Dedicated Interconnect, Partner Interconnect, and HA-VPN.
To migrate an existing interconnect VLAN attachment to IPv6 BGP, the attachment first has to be updated to enable dual-stack IPv6; this can be done simply by issuing the following gcloud command:
gcloud compute interconnects attachments dedicated update my-test-attachment \
    --region=my-test-region \
    --stack-type=IPV4_IPV6
Going from an IPv4-only attachment to a dual-stack IPv6 attachment type does not impact existing traffic. See Modify Stack Type for more details on the impact of modifying the IP stack type of VLAN attachments. This step also auto-allocates an IPv6 /125 prefix for the attachment.
Then, add a new cloud router interface using the following gcloud command:
gcloud compute routers add-interface my-cloud-router \
    --region my-test-region \
    --interface-name interface-name-v6 \
    --interconnect-attachment my-test-attachment \
    --ip-version=IPV6
Subsequently, you can create an IPv6 BGP session on the IPv6 interface using this command:
gcloud compute routers add-bgp-peer my-cloud-router \
    --interface interface-name-v6 \
    --region my-test-region \
    --peer-name test-peer-name-v6 \
    --peer-asn test-peer-asn
As you can see above, you only need to perform three steps to migrate an attachment to IPv6 BGP.
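Optionally, you can confirm that the new IPv6 session comes up by checking the Cloud Router status. This verification step is not part of the migration procedure itself, but gcloud compute routers get-status is a standard way to inspect BGP peer state:

gcloud compute routers get-status my-cloud-router \
    --region=my-test-region

Look for the peer created above (test-peer-name-v6 in this example) reaching an Established state in the BGP peer status output.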
Partner Interconnect IPv6
Partner Interconnect now supports IPv6 for both Layer 2 and Layer 3 Partner Interconnect attachments. Setting up dual-stack Partner Interconnect attachments automatically provisions separate IPv4 and IPv6 BGP sessions over the underlying Partner Interconnect attachment. IPv4 prefixes are exchanged over the IPv4 BGP session, whereas IPv6 prefixes are exchanged over the IPv6 BGP session. Google Cloud automatically allocates a Google-owned /125 address range to the Partner Interconnect attachment during this process.
You can either create a new dual-stack IPv6 Partner Interconnect attachment or migrate an existing Partner Interconnect attachment to a dual-stack IPv6 attachment.
To create a new dual-stack IPv6 Partner Interconnect attachment, the flag --stack-type=IPV4_IPV6 must be set, as shown in the following gcloud command:
gcloud compute interconnects attachments partner create my-test-attachment \
    --region=my-test-region \
    --router=cloud-router-name \
    --stack-type=IPV4_IPV6 \
    --edge-availability-domain availability-domain-1/availability-domain-2
This automatically creates two BGP sessions (one per IP version) in the associated Cloud Router. For Layer 2 providers, each session must be configured separately with the peer’s ASN, as shown in the sketch below. Layer 3 attachments are configured with a peer ASN by the partner, so no additional intervention is required.
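For Layer 2 attachments, one way to set the peer ASN on each auto-created session is gcloud compute routers update-bgp-peer. This is a minimal sketch: the peer names below are hypothetical placeholders for the session names that Cloud Router generated, and on-prem-asn stands in for your actual ASN.

gcloud compute routers update-bgp-peer cloud-router-name \
    --region=my-test-region \
    --peer-name=auto-created-v4-peer-name \
    --peer-asn=on-prem-asn

gcloud compute routers update-bgp-peer cloud-router-name \
    --region=my-test-region \
    --peer-name=auto-created-v6-peer-name \
    --peer-asn=on-prem-asn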
An existing Partner Interconnect attachment can also be migrated to dual-stack IPv6 by issuing the command:
gcloud compute interconnects attachments partner update my-test-attachment \
    --region=my-test-region \
    --stack-type=IPV4_IPV6
This works for both Layer 2 and Layer 3 providers. In both instances, a /125 address range is automatically allocated and added to the attachment.
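In either case, you can verify the new stack type and the allocated IPv6 addressing by describing the attachment. This is just a sanity check; the relevant output fields (such as stackType) may vary slightly across gcloud versions:

gcloud compute interconnects attachments describe my-test-attachment \
    --region=my-test-region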
Note: Partner Interconnect supports IPv6 prefix exchange only over dedicated IPv6 BGP sessions; MP-BGP exchange of IPv6 prefixes over an IPv4 BGP session is not supported.
IPv6-only HA-VPN
Until now, HA-VPN supported passing IPv6 traffic over IPsec tunnels that were negotiated and terminated using internet-routable IPv4 addresses (also known as outer IP addresses).
With the launch of IPv6-only HA-VPN, we now support the use of IPv6 addressing for both the inner and outer IP addresses of the IPsec tunnel between Google Cloud and peer VPN gateways using Google Cloud HA-VPN. This support also extends to connecting two Google Cloud VPCs using IPv6 HA-VPN.
Follow the IPv6 HA-VPN migration guide for a full procedure to migrate an IPv4 HA-VPN gateway to an IPv6 HA-VPN gateway.
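As an illustrative sketch of the gateway creation step in that guide, creating a new HA-VPN gateway with IPv6 outer and inner addressing might look like the following. The --gateway-ip-version=IPV6 and --stack-type=IPV6_ONLY values are assumptions based on the HA-VPN documentation, and the gateway and network names are placeholders:

gcloud compute vpn-gateways create my-v6-vpn-gateway \
    --network=my-vpc-network \
    --region=my-test-region \
    --gateway-ip-version=IPV6 \
    --stack-type=IPV6_ONLY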
Considerations
The new IPv6 hybrid offerings introduce new pathways for extending IPv6 connectivity to your on-premises networks. Consider the following general recommendations when adopting these new capabilities:
Whenever possible, choose IPv6 BGP sessions instead of MP-BGP ones. This simplifies route management and avoids the need to reset IPv4 BGP sessions when IPv6 prefix exchange is enabled on these sessions.
Keep in mind that Cloud Router only advertises internal IPv6 (--ipv6-access-type=INTERNAL) subnets in default advertisement mode. Custom advertisements can still be used to advertise any IPv6 prefix.
Peering subnet IPv6 ranges are similarly not advertised by Cloud Router in default mode (as is also the case for IPv4 subnet ranges). Use custom advertisements for any peered IPv6 subnet ranges, regardless of ipv6-access-type (see the example after this list).
When using HA-VPN, leverage IPv6-only HA-VPN (both outer and inner IPv6 addresses) to maximize compatibility with your on-premises networking equipment.
Firewall rules work the same way for IPv4 and IPv6. Carefully audit your existing VPC firewall rules and firewall policies to ensure they match any new IPv6 ranges advertised from your on-premises networks, as applicable (see the firewall example after this list).
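For illustration, here is a minimal sketch of a custom advertisement for an IPv6 prefix, applied to the IPv6 BGP peer created earlier in this post. The prefix 2001:db8:1::/64 is a placeholder documentation range, and the other names reuse this post’s example values:

gcloud compute routers update-bgp-peer my-cloud-router \
    --region=my-test-region \
    --peer-name=test-peer-name-v6 \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=2001:db8:1::/64

And a similarly hedged sketch of an ingress firewall rule that admits HTTPS traffic from an on-premises IPv6 range; the rule name, network, and source range are all placeholders:

gcloud compute firewall-rules create allow-onprem-ipv6-https \
    --network=my-vpc-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=2001:db8:2::/64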
In this post, we covered the different hybrid connectivity options that you can use to connect your on-premises IPv6 workloads to Google Cloud IPv6 workloads, which include the previously launched Dedicated Interconnect IPv6 as well as the newly available IPv6 BGP sessions, Partner Interconnect IPv6, and IPv6-only HA-VPN. To learn more about these options, refer to the documentation for each of these features.
We can’t wait to see how you utilize these different solutions to connect your on-premises IPv6 workloads to Google Cloud IPv6 workloads and fast-forward your transition to IPv6.