Conversation
Very nice patch!
A few questions. Only #1 needs an answer before LGTM from me.
(1) Did you test whether it works after a vsanDatastore rename? I think it should, as the URL/UUID won't change, but it would be good to double-check.
(2) What happens if there are 2 vsanDatastores? In 6.5 this was added for VMC (vSphere on AWS) and is a hidden feature. It could be a separate issue/PR, but this PR seems like a good place to at least take a look.
(3) Do we know what takes the rest of the time?
(4) Why did you decide to put it in is_vsan(), and not in get_vsan_datastore()?
@msterin Thanks for the review! To answer your questions:
BTW, the CI failure is due to issue #1864.
Does this issue get repro'ed on VMFS also? Earlier, I'd tested this config and it used to take a long time on VMFS.
    return False
global _vsan_ds
if not _vsan_ds:
    _vsan_ds = get_vsan_datastore()
When the service starts it calls init_datastoreCache(); can't the datastore type be cached in that map as well, in addition to the name, URL, and dockvols path? This code path in is_vsan_datastore() seems redundant when we already have a DS cache.
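For illustration, a rough sketch of that idea might look like the following; the cache layout and the `_fetch_datastores_from_host()` stub are assumptions made up for this example, not the actual vmdk_utils API:

```python
# Sketch only: the cache layout and the stubbed helper below are assumptions
# for illustration, not the real implementation.
datastores = None  # cached list of (name, url, dockvols_path, ds_type) tuples

def _fetch_datastores_from_host():
    """Stand-in for the pyVmomi call that lists the host's datastores."""
    return []

def init_datastoreCache():
    """Populate the cache once, now including each datastore's type."""
    global datastores
    if datastores is not None:
        return
    datastores = [(ds.name, ds.info.url, "dockvols", ds.summary.type)
                  for ds in _fetch_datastores_from_host()]

def is_vsan_datastore(ds_name):
    """Answer from the cache instead of querying the host again."""
    init_datastoreCache()
    return any(name == ds_name and ds_type.lower() == "vsan"
               for name, _, _, ds_type in datastores)
```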
I agree it's possible to improve and reuse the existing datastore cache. Some minor concerns:
- Adding a new entry to the existing datastore cache might break existing code paths, so we'd need thorough regression testing.
- The existing datastore cache is not very well encapsulated; init_datastoreCache() is called from multiple places, which ideally should be avoided.
To reuse this cache, I would need to refactor the existing implementation to improve the encapsulation and make it more robust. Otherwise I would have to call init_datastoreCache() in one more code path, which is something I really want to avoid.
Neither is a critical concern, but it does mean much more work. Considering that vsan_info.py is a module specifically for vSAN, I feel it's acceptable to maintain this small cache in that module, so personally I don't have a strong motivation to switch to the global datastore cache right now. Let me know what you think.
@govint Thanks for the review! Yes, this issue gets reproduced on VMFS as well. It actually doesn't matter whether the volumes are created on a VMFS or vSAN datastore: the get_policy/is_on_vsan/get_vsan_datastore logic is always invoked as part of the generate_ls_rows workflow.
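For context, the hot path being described looks roughly like this (a simplified, hypothetical sketch; the real generate_ls_rows/get_policy in the admin CLI carry more columns and logic than shown):

```python
# Simplified, hypothetical sketch of the per-volume path; bodies are stand-ins.
def get_policy(vmdk_path):
    """Stand-in: the real helper consults is_on_vsan()/get_vsan_datastore()
    to decide whether the volume has a vSAN storage policy to report."""
    return None

def generate_ls_rows(vmdk_paths):
    # One row per volume. get_policy() runs for every volume, so before the
    # fix the expensive vSAN datastore lookup happened once per volume, even
    # for volumes that live on VMFS.
    return [(path, get_policy(path)) for path in vmdk_paths]
```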
@msterin Regarding your 2nd question: what I can do is add a new vSAN cluster to get a 2nd vSAN datastore. However, I can't add the same host to two different vSAN clusters, so the docker hosts (VMs) on one ESX host can only create volumes on the attached vSAN datastore. The other vSAN datastore, which belongs to a different cluster, is not visible/accessible to the docker hosts (VMs) in the current vSAN cluster.
Fixes issue #1858: optimize the vmdkops_admin.get_policy() function. The fix is to cache the vSAN datastore managed object. The vSAN datastore should be preconfigured, so it doesn't make sense to fetch it every time, which is extremely time-consuming. Please refer to the last 2 comments in issue #1083 for a more detailed analysis.
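The idea is essentially a lazily initialized module-level cache, along the lines of the snippet discussed in the review above. Here is a minimal sketch; the pyVmomi lookup inside get_vsan_datastore() is stubbed out and the equality check is illustrative only:

```python
# Minimal sketch of the lazy module-level cache; the real get_vsan_datastore()
# walks the host's datastores via pyVmomi, which is stubbed out here.
_vsan_ds = None  # cached vSAN datastore managed object

def get_vsan_datastore():
    """Stand-in for the expensive host query that the cache avoids repeating."""
    return None

def is_vsan(datastore):
    """Check against the cached vSAN datastore instead of re-fetching it."""
    global _vsan_ds
    if not _vsan_ds:
        _vsan_ds = get_vsan_datastore()  # fetched once, reused afterwards
    if not _vsan_ds:
        return False  # no vSAN datastore configured on this host
    return datastore == _vsan_ds
```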
Here's a comparison before and after the fix is applied (time in seconds):
As we can see, the performance improved significantly. For very large numbers of volumes (>1000) it's still a bit slow, but it's now reasonably acceptable when the volume count is below 500.