Virtual Device Context #7854
Picking a concise, descriptive, and vendor-neutral name for this may be tricky. I think I like "device context" best because it clearly references a device, though this would probably get confused with config contexts. At any rate, this would probably entail adding a new model (we'll use DeviceContext for the purpose of discussion) which can be assigned to a device. Interfaces (and other components?) belonging to the device can optionally be assigned to a DeviceContext to indicate their membership. Deleting the DeviceContext instance would also remove these relationships, but the interfaces would remain on the parent device as before. I'm not clear what other data we would need to track for the DeviceContext itself: Presumably a name, but what else goes into its configuration?
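To make the proposed relationships concrete, here is a minimal plain-Python sketch (hypothetical names for discussion only, not NetBox code) of the delete behaviour described above: removing a context clears membership but leaves the interfaces on the parent device.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Device:
    name: str

@dataclass
class DeviceContext:
    # Hypothetical model for discussion: a named context
    # scoped to exactly one parent device.
    name: str
    device: Device

@dataclass
class Interface:
    name: str
    device: Device
    # Membership is optional; the interface always belongs to its device.
    context: Optional[DeviceContext] = None

def delete_context(ctx: DeviceContext, interfaces: List[Interface]) -> None:
    # Deleting a context only clears the membership (SET_NULL semantics);
    # the interfaces themselves remain on the parent device.
    for iface in interfaces:
        if iface.context is ctx:
            iface.context = None
```

In Django terms this would correspond to a nullable ForeignKey from Interface to DeviceContext with `on_delete=SET_NULL`, but that is an assumption about the eventual implementation.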
A couple of thoughts on data to track:
I am sure there are more.
These types of "device contexts" are normally hosted on a cluster based on either physical hardware or VMs.
Which product hosts the device context on a cluster? Granted, my experience is with Nexus, but the relationship is one-to-many: chassis-to-VDC.
Most firewall vendors, such as those listed above. On Cisco, you can run the ASA in context mode. Check Point uses a product called VSX. Running on these, you can have virtual routers, virtual systems, or virtual switches. Palo Alto and Juniper do it in a similar way.
So, for ASA, that is not how contexts work at all. I think you are confusing it with how HA works. With an ASA, if you are using clustering with contexts, each ASA will have its own context, not one context shared between ASAs. For Active/Active HA or Active/Standby HA it is the same: if you have a "users" context, you will have a users context on both ASAs. I am sure many of the other vendors are the same. You want a way to track the HA relationship, and that is a separate FR (although virtual chassis works well in most instances for that).
I think there are two distinct layers of abstraction here:
This FR addresses the latter. The former is probably best conveyed using NetBox's virtual chassis model.
Sure, I understand that we want to have multiple virtual things running on the same device. In Cisco ASA, yes, you can enable context mode on a single device or in a cluster, and yes, for Cisco it's HA. Check Point uses VSX with VSLS, meaning one context is active on one node within the cluster at a time: "Each Virtual System works as a Security Gateway, typically protecting a specified network. When packets arrive at the VSX Gateway, it sends traffic to the Virtual System protecting the destination network. The Virtual System inspects all traffic and allows or rejects it according to rules defined in the security policy. In order to better understand how virtual networks work, it is important to compare physical network environments with their virtual (VSX) counterparts. While physical networks consist of many hardware components, VSX virtual networks reside on a single configurable VSX Gateway or cluster that defines and protects multiple independent networks, together with their virtual components." Regarding virtual chassis, if I am not mistaken, this is for things like Cisco VSS.
snowie-swe, with ASA, when you have contexts enabled in a cluster or HA, you will have n contexts, one for each device. There is no need to track the "cluster", except in the case where you want to track the HA state, which, as I said, is a separate FR. Check Point VSX is the same way: https://sc1.checkpoint.com/documents/R80.10/WebAdminGuides/EN/CP_R80.10_VSX_AdminGuide/html_frameset.htm?topic=documents/R80.10/WebAdminGuides/EN/CP_R80.10_VSX_AdminGuide/161797 If you look at the image, you will see there is a virtual "context" associated with each physical device. There is no need to track this like the virtual machine model, where you have a VM that can run on multiple separate devices. It is different: each device will hold a context.
Actually, it's not.
Just to clarify, this FR is about wanting to have virtualization similar to VM but for network equipment, correct? "Emulating multiple virtual environments within a single device"
Eh, kinda? In my experience (which is far from authoritative), a device context is more like a semi-isolated slice of a device to which physical interfaces are allocated. An example would be splitting a single physical router into two contexts that sit in front of and behind a firewall, effecting two entirely isolated forwarding planes. In my mind I see this as distinct from "pure" virtual networking, where virtual routers need not be associated with any physical interfaces. Others might have a different take.
The use cases listed above are virtual firewalls, at least for Fortinet and Palo Alto: Fortinet Virtual Domains (vDOMs), and I added Check Point Virtual Systems. In the firewall cases, the virtual devices use the physical hardware for processing power.
The image you shared exactly proves my point: there is at least one context on each physical device. It doesn't follow the "VMware model", where there is only a single virtual machine and it can start on any device. With contexts, there will always be at least one context per physical device, and you will never have a context that runs on two different devices (however, you may have a context that is part of a cluster that can be active on any one device). Look at it this way: you do not have to have a cluster to use a virtual context. You can run a virtual context completely independently of whether you have a cluster, in almost all cases. If you want to track cluster members for HA purposes, that is a separate FR.
Just to clarify, I have no need to keep track of where the context is active.
Also consider F5 vCMP.
As you have been told, you need to open a separate FR for this, as this is an HA model which is not specific to contexts (yes, it can be applied to contexts, but many vendors also let you run HA on the bare metal).
It was said before, but the Fortinet VDC equivalent (vDOMs) is built on a cluster if configured in a cluster, or on a single device if not.
My main experience is with Fortinet. I would tweak the above to be "semi-isolated slice of a device [virtual chassis or physical] to which physical interfaces are allocated. An example would be splitting a single physical router into two contexts that sit in front of and behind a firewall, effecting two entirely isolated forwarding planes." It is fair to say that this cluster/virtual-cluster association should be its own FR, but in my mind the FR should address the common functionality across routers and firewalls alike. A good conversation so far.
as part of this.
That one-time migration would likely be good for https://github.com/netbox-community/migration-scripts
We discussed this FR in today's maintainers' meeting, and recognized that a more detailed implementation plan is needed. One of the open questions is whether a particular interface (or other component) should be permitted to belong to multiple contexts. IMO this should not be allowed, as this is typically not permitted in my experience working with device contexts. Separately, this raises the question of whether we should invest more thought into the modeling of virtual switches before undertaking this implementation, as there are numerous parallels.
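If single-context membership were disallowed, the rule could be as simple as rejecting a second assignment. A minimal sketch (hypothetical helper, assuming membership is tracked as a plain mapping rather than real model instances):

```python
def assign_context(memberships: dict, interface: str, context: str) -> None:
    # Enforce the "at most one context per interface" rule discussed above:
    # re-assigning the same context is a no-op, a different one is rejected.
    existing = memberships.get(interface)
    if existing is not None and existing != context:
        raise ValueError(f"{interface} already belongs to context {existing}")
    memberships[interface] = context
```

Modeling the relationship as a single-valued foreign key rather than a many-to-many would give the same guarantee structurally, without needing a validator at all.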
Cisco ASA, for management purposes, allows using the same physical interface in untagged mode in multiple device contexts. It is true that in other cases you normally use subinterfaces and associate them with a specific context in a 1:1 relation. As a suggestion, perhaps we need to merge all capabilities into a single model and create vendor-based rules on it, in order to guarantee correct filling. Also, we could consider that the Cisco Nexus VDC feature (which Cisco is decommissioning) could be treated as a virtual switch and managed with a specific model, because while it is true that this feature is being abandoned, it is also true that there are many switches in the world that use this feature.
I still don't see the overlap here. One is aggregating ports from a number of physical devices, and one is disaggregating ports from one device into a number of virtual devices. Using multiple models also simplifies some of the frontend/backend logic, as we don't have to account for the separate behaviours present in each of these. VDCs, VDOMs, contexts, and instances can all be lumped together because for the most part their functionality is the same (apart from sharing interfaces and the like). Virtual chassis would be a better fit to combine with some cluster/HA model instead.
So, again, this comes down to NetBox's design. NetBox is designed to model the desired state of the network. For example, in all of the virtual chassis that I have deployed, we always configure the stacks to have a primary master, a standby, and members (on Catalyst, for example, we do it by setting priorities appropriately). Our desired state normally matches the actual state, because we try to ensure that the one we want to be master stays master. Now, we do sometimes have times where these fall out of sync; however, that is fine, it will fix itself on the next reload, and the desired state never changes (we want switch x to be master). I think it could be beneficial, when we look at vPCs, to perhaps look at adapting the virtual chassis model; however, a vPC is not like a virtual chassis. I do think there are some tweaks to be made to the VC implementation myself. For example, I would prefer to have all the instances show under Virtual Chassis and instead only show the device's interfaces under the device itself; however, this is personal preference.
I have started working on this, using the planned models. |
Looking at the code, I would also add the type F5 vCMP (Virtual Clustered Multiprocessing). @DanSheps
We actually have decided not to worry about types for now. It didn't make much sense; take the F5, for example. No matter what "Device Type" (SKU) you have, it is always going to be a vCMP. Same with Cisco Nexus and ASA: if you have a Nexus 7706, it is always going to be a Nexus VDC; if you have an ASA 5515, it is always going to be an ASA security context. The only corner case I can see is a Cisco Firepower, which can run ASA bare metal that can then be divided up into ASA security contexts, or it will run Firepower instances. That should be easy enough to extract based on the "platform", however. Each device type/platform has an implied type, and there isn't normally any deviation from that.
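The "implied type" argument can be sketched as a simple platform lookup. The platform slugs and type labels below are illustrative assumptions, not actual NetBox values:

```python
# Hypothetical platform-to-context-flavor mapping; keys and values
# are illustrative only.
IMPLIED_CONTEXT_TYPE = {
    "cisco-nxos": "nexus-vdc",
    "cisco-asa": "asa-security-context",
    "f5-tmos": "vcmp-guest",
    "fortinet-fortios": "vdom",
}

def implied_context_type(platform: str, default: str = "generic") -> str:
    # The context type follows from the platform, so it need not be
    # stored as a separate field on the VDC model.
    return IMPLIED_CONTEXT_TYPE.get(platform, default)
```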
* Work on #7854
* Move to new URL scheme.
* Fix PEP8 errors
* Fix PEP8 errors
* Add GraphQL and fix primary_ip missing
* Fix PEP8 on GQL Type
* Fix missing NestedSerializer.
* Fix missing NestedSerializer & rename VDC to VDCs
* Fix migration
* Change Validation for identifier
* Fix missing migration
* Rebase to feature
* Post-review changes
* Remove VDC Type
* Remove M2M Enforcement logic
* Interface related changes
* Add filter fields to filterset for Interface filter
* Add form field to filterset form for Interface filter
* Add VDC display to interface detail template
* Remove VirtualDeviceContextTypeChoices
* Accommodate recent changes in feature branch
* Add tests; add missing search()
* Update tests, and fix model form
* Update test_api
* Update test_api.InterfaceTest create_data
* Fix issue with tests
* Update interface serializer
* Update serializer and tests
* Update status to be required
* Remove error message for constraint
* Remove extraneous import
* Re-ordered devices menu to place VDC below virtual chassis
* Add helptext for `identifier` field
* Fix breadcrumb link
* Remove add interface link
* Add missing tenant and status fields
* Changes to tests as per Jeremy
* Change for #9623
* Update filterset form for status field
* Remove Rename View
* Change tabs to spaces
* Update netbox/dcim/tables/devices.py
* Update netbox/dcim/tables/devices.py
* Fix tenant in bulk_edit
* Apply suggestions from code review
* Add status field to table.
* Re-order table fields.

Co-authored-by: Jeremy Stretch <jstretch@ns1.com>
In fact, no. Depending on the box, it may or may not support vCMP. But I get your point from a generic standpoint. I will provide deeper feedback once I have played with this modeling. FYI, right now all of this (VDC, vCMP, vsys, vdom, etc.) is modeled with Virtualization/Cluster in our environment. One role is the "host" part, and the other ones are the children. The host and the children are then present in the same cluster. And we can combine two children in a virtual-chassis way to show HA if we need to.
I'm looking at the VDC and Interface UI on https://beta-demo.netbox.dev/dcim/devices/88/interfaces/ and didn't see the option to show the VDC as a column of the interface table, for when I want to look at the hardware device and see which interfaces are in which VDC(s). Am I missing something, or is this a good idea to add?
Currently the virtual device context is linked to the device for all aspects. I would have thought that a virtual device context would have its own role, config context, interfaces, etc.
I agree with this. Although one possible problem with config context is: how should it be applied for a device with or without VDCs? On Fortinet firewalls, without vdoms "enabled" (implicitly used), the vdom is called "root". Once vdoms are used, the context is then divided into "global" and then per-vdom. So in the case of config context, should it only apply to the device if no VDC is defined, and only apply to the VDCs if they are defined? Without differentiating the two, you may encounter problems with config context. Perhaps a new issue should be created to track this? @jmanteau
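One way to express the "root vs. global + per-vdom" split is in pseudo-logic. This is a sketch under the assumption that rendered context data is a plain dict; none of these function or key names exist in NetBox:

```python
def applicable_config_contexts(device_data: dict, vdc_data: dict,
                               vdcs_defined: bool) -> dict:
    # Without VDCs, the device-level config context applies directly
    # (the implicit "root" vdom case on Fortinet).
    if not vdcs_defined:
        return {"device": device_data}
    # With VDCs, device-level data acts as "global", and each VDC
    # carries its own slice.
    result = {"global": device_data}
    result.update(vdc_data)
    return result
```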
With config context eligibility, it may not be possible or desirable to model every different way that config inheritance may work on different platforms, but I agree that the VDC model should also have a foreign key to Device Role (it already has Tenant and Tags; Platform is superfluous), and it should probably be able to have its own virtual interfaces attached to the VDC, while physical interfaces are shared from the chassis. I would hope that between Device Roles, Tenants, and Tags you'd be able to get the right config context data rendered for the device. What is the config element you'd like to render that this doesn't work for?
For example, on https://beta-demo.netbox.dev/dcim/devices/1/interfaces/ I tried creating two Vlan100 interfaces attached to two different VDCs, but it refuses to create the second Vlan100 interface. In many ways VDCs are like VMs (e.g., Cisco Nexus VDC is implemented using LXC; ASA contexts may be too), except that they don't get to run different software than the main unit (AFAIK), don't get to migrate, have a cluster size of one, have all resources pinned with no overcommit or sharing, and have physical interfaces mapped directly to the VMs using hardware I/O ACLs for performance.
—
Mark Tinberg
Division of Information Technology-Network Services
University of Wisconsin-Madison
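The Vlan100 collision described above comes down to the uniqueness scope on interface names. A sketch of the difference (hypothetical check, with existing names tracked as tuples; not the actual NetBox constraint):

```python
def can_create_interface(existing: set, device: str, vdc, name: str) -> bool:
    # Scoping uniqueness to (device, name) forbids a second Vlan100
    # anywhere on the device; scoping it to (device, vdc, name), as
    # below, allows the same name once per VDC.
    return (device, vdc, name) not in existing
```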
You're right about config context... I do not see a real use case (though I could be wrong about other people's potential instances) where, on a single device, it would be different at the VDC (vdom) level. As for interfaces with the same VLAN (but different IP addresses) on the same physical device: we do have multiple VDCs (vdoms) on the same VLAN, e.g. using one VDC (vdom) as a router, instead of a physically separate router, virtually connecting (using "VDOM links") to each of the other VDCs (vdoms). So without allowing multiple VLAN interfaces on the same device but in different VDCs, this limitation would fail to represent a valid real-world configuration.
Although... I am unsure why we are using a tagged VLAN on a point-to-point virtual link.
@DanSheps @jeremystretch: It seems the current modeling choices have raised a lot of questions. Should we open a new issue to track all of this?
The tagged VLAN on the point-to-point virtual link is very similar to needing to tag "virtual" interfaces on devices to show what VLAN they are in. I think the whole idea of deciding what VLAN an interface is in needs to be re-engineered. We shouldn't need to mark a virtual interface as 802.1Q access/tagged, for example. We should be able to set the VLAN the interface exists in without any tagged/untagged config.
OK, I found out why: for non-hardware-accelerated virtual links, the VLAN ID is not required information, as the link is explicitly defined. In this case, I think we can get away with just defining two unique virtual interface names and using a connection between the two, without needing to use VLAN IDs explicitly.
We shouldn't need to mark a virtual interface as 802.1Q access/tagged, for example. We should be able to set the VLAN the interface exists in without any tagged/untagged config.
I'm not super worried about this particular modeling minutia. Config templating can be smart enough not to add " switchport mode access\n switchport access vlan ${vlan_vid}\n" for virtual links, and otherwise the data model is accurate and consistent: the interface does get successfully associated with the VLAN record. Maybe additional verification code could prevent adding trunks to virtual interfaces, if this is something that can _never_ happen on _any_ platform someone might want to model in NetBox, but that's probably not true, and the flexibility in the model is useful to someone.
NetBox version
v3.0.10
Feature type
New functionality
Proposed functionality
For lack of a better term, it would be ideal to support the common model of a virtual device context. This should NOT model a specific vendor, but the common functionality of virtual device contexts: interfaces can be assigned to them (in addition to VRFs, if #7852 is implemented).
Use case
Platform-specific implementations to look at to understand the commonly shared functionality:
Database changes
No response
External dependencies
No response