[Azure.Core] Add Public Model Serialization #35742
Conversation
API change check: APIView has identified API-level changes in this PR and created the following API reviews.
/// <summary>
/// TODO
/// </summary>
public interface IJsonSerializable
We should consider if there's any value to splitting this up into IJsonSerializable and IJsonDeserializable. Key Vault does this and it's easy to detect when one or both directions are implemented; though, in practice, I don't know if that's really necessary.
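For reference, a minimal sketch of what the split could look like, assuming the same Stream-based Try methods proposed in this PR (the names and exact Key Vault shape are illustrative, not taken from the PR):

using System.IO;

// Hypothetical split: one interface per direction.
public interface IJsonSerializable
{
    bool TrySerialize(Stream stream, out long bytesWritten, SerializableOptions? options = default);
}

public interface IJsonDeserializable
{
    bool TryDeserialize(Stream stream, out long bytesConsumed, SerializableOptions? options = default);
}

// A request/response model implements both; a response-only model could
// implement just IJsonDeserializable, and callers can detect direction
// support with a type check: if (model is IJsonSerializable s) { ... }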
/// <param name="bytesWritten"></param>
/// <param name="options"></param>
/// <returns></returns>
bool TrySerialize(Stream stream, out long bytesWritten, SerializableOptions? options = default);
I appreciate we don't want to use exceptions for code flow, but is there any expected scenario where we wouldn't serialize when we're told (via call) that we should serialize? If not, this just creates undue burden and an exception is warranted.
Is the suggestion here to have a Serialize and Deserialize that throw an exception alongside the Try variants?
If an error is truly exceptional, yes. If you're just returning false when an error occurs and expect the caller to throw their own error - leading to inconsistencies - just throw. Are there any error cases that would be expected, e.g. a completely empty body - no {} or [], just empty?
As long as it's valid JSON there shouldn't be an error.
Do we have an example usage scenario that would help us understand the value of using TryXx() vs. non-Try APIs?
+1 for the "no try, just do"
As Anne mentioned, I'm having trouble figuring out a scenario where I would try. Are you thinking of something like "I don't know what type this is, so I'll try A and if it doesn't work, then B, and if that doesn't work, it must be C?"
As Anne mentioned, I'm having trouble figuring out a scenario where I would try.
Ah, for clarification, I was asking my question quite honestly - I believe such a scenario may exist, I would just like to understand it more concretely, and I think it would be useful for all of us to look at for the purposes of evaluation. I was not trying to imply with my question that such a scenario does not exist.
In general, TryXx() APIs tend to be provided to make it possible to call APIs without invoking the exception handling machinery, which is bad for performance. If Try APIs exist, it's easy to implement non-Try APIs around them, but not vice versa. So in principle I have no problem with using Try APIs (I tend to like them), but it does feel like a non-standard choice for an interface. And also, as @JoshLove-msft mentioned elsewhere, they don't work with async methods because of out parameters, so if that's a need here, we may not have that option.
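To make the "easy to implement non-Try APIs around Try APIs" point concrete, a hedged sketch (the extension method and exception choice are illustrative, not part of the PR):

using System;
using System.IO;

public static class JsonSerializableExtensions
{
    // Throwing wrapper layered on top of the proposed Try API. The reverse
    // direction (building TrySerialize on a throwing Serialize) would have to
    // catch exceptions, which is exactly what Try APIs are meant to avoid.
    public static long Serialize(this IJsonSerializable model, Stream stream, SerializableOptions? options = default)
    {
        if (!model.TrySerialize(stream, out long bytesWritten, options))
        {
            throw new InvalidOperationException("Serialization failed.");
        }
        return bytesWritten;
    }
}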
Apologies if it came across otherwise; it was not my intent to imply that you didn't think there were scenarios. My assumption was that you were also having difficulty coming up with a real-world example.
The example usage is in the test case in this PR. For easy reference https://github.com/Azure/azure-sdk-for-net/pull/35742/files#diff-46013d0356c5137faeddc1f0b2c084532794fcea22ff8574fa4a46b824640855R46-R47
I see it's used, but I still don't understand why. If there is any exception writing object data to a stream of bytes, it seems exceptional and should throw an exception. Is there a case where any underlying exception - or even a higher-level condition - should simply return false? And if there is a case where the underlying exception is thrown - like failure to write to a stream - shouldn't the caller know the details instead of having to get back false and then throw some exception themselves in a myriad of ways sans details?
public bool IncludeReadOnlyProperties { get; set; } = true;

/// <summary>
/// TODO
/// </summary>
public bool HandleAdditionalProperties { get; set; } = true;
Nit: .NET guidelines are that properties should typically default to false (or default, in general). Perhaps these should be renamed ExcludeReadOnlyProperties and ExcludeAdditionalProperties. This is the reason we have Exclude* in Azure.Identity options.
Double negatives are always confusing; I'd prefer we keep these as positive statements if at all possible.
It could be something like IgnoreAdditionalProperties and IgnoreReadOnlyProperties, but we definitely have some other public types where we use default of true for bool, e.g. https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/src/DiagnosticsOptions.cs#L81
Do we have a crisp definition of "AdditionalProperties"? We might want to tune this naming for clarity. Is it a "Missing Member", per this definition?
+1 "IgnoreReadOnlyProperties", to be consistent with BCL naming
@JoshLove-msft I like that suggestion. @nisha-bhatia let's add a task to rename both to Ignore and default to false.
@annelo-msft I think a missing member is the other way around: the class expected it but the payload didn't include it. AdditionalProperties are extra things that the class doesn't know about.
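To illustrate the distinction with a made-up model and payloads (the exact semantics of HandleAdditionalProperties are not spelled out in this PR, so the comments below are an assumption):

// Hypothetical model used only for this example:
public class Animal
{
    public string Name { get; set; }
    public bool IsHungry { get; set; }
}

// Additional property: the payload carries "weight", which Animal does not
// define; presumably HandleAdditionalProperties governs whether that data is
// carried through (e.g. via RawData) or dropped on a round trip:
//   { "name": "Rex", "isHungry": true, "weight": 42 }
//
// Missing member: the payload omits "isHungry", which Animal does define;
// that is the opposite situation:
//   { "name": "Rex" }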
/// <summary>
/// TODO
/// </summary>
public bool PrettyPrint { get; set; } = false;
This is rather specific. Instead, should we let users pass a System.Text.Json.JsonSerializerOptions? For example, what if they want 2 spaces to indent instead of 4? Or tabs instead of spaces? If we're going to make pretty printing possible for serialization, we should consider being flexible now instead of doing it later and having this then-confusing property.
Alternatively, what about serializing to a Utf8JsonWriter instead of a Stream, in which case they can provide formatting options completely outside our API with whatever options they want?
One of the goals here is to not pin us to a specific underlying serialization technology and to give us the freedom to swap this out as needed under the covers, so we wouldn't want to take these types on the public signature.
Then at the very least we should probably replicate more of STJ's options. What does "pretty print" even mean? See my examples in my comment for ways this could vary.
We definitely don't want to duplicate STJ's options for the reasons listed here #35742 (comment)
JsonWriterOptions jsonWriterOptions = new JsonWriterOptions();
if (options.PrettyPrint)
{
    jsonWriterOptions.Indented = true;
}
Utf8JsonWriter writer = new Utf8JsonWriter(stream, jsonWriterOptions);
Per my comment above, if we just write to a Utf8JsonWriter, callers can supply their own formatting options with whatever tweaks they want for indentation, etc.
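A minimal sketch of that alternative shape, assuming a writer-based method instead of (or alongside) the Stream-based one; the interface name and member are illustrative only:

using System.Text.Json;

// Hypothetical writer-based surface: the caller owns the Utf8JsonWriter and
// therefore owns JsonWriterOptions (Indented, encoder, etc.), so no
// PrettyPrint knob is needed on our options type.
public interface IUtf8JsonWritable
{
    void Write(Utf8JsonWriter writer);
}

// Usage: the caller picks the formatting.
//   using var writer = new Utf8JsonWriter(stream, new JsonWriterOptions { Indented = true });
//   model.Write(writer);
//   writer.Flush();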
Would it be possible to add a section to the description talking about how these types will be used? For example, will all generated models in our libraries implement these interfaces? Will we move Azure.Core types like ETag to implement them? How will it work with IUtf8JsonSerializable? What other types/components will take IJsonSerializable that the interface should work well with? Will there be a reason for customers to implement the interface? What are key usage scenarios?
Do we have a description of the feature set we're looking to support with this feature? For example, are we concerned with backward compatibility and versioning? E.g. if a service API follows the Azure API guidelines, does this interface guarantee it can always be deserialized from any library? If so, should these methods take api-version?
Do we have plans to support serialization of non-JSON formats, e.g. XML, or other? Should the interfaces support that, or would we add separate interfaces for those formats?
namespace Azure.Core.Tests.ModelSerializationTests
{
    internal interface IUtf8JsonSerializable
Curious: why does this live in the test project? Would we make these changes to the shared source IUtf8JsonSerializable type?
IUtf8JsonSerializable would need to be changed to include the SerializableOptions param; for now we added a copy of the file.
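For context, a sketch of what that copy presumably looks like: the existing shared-source shape plus the options parameter (the actual test-project code may differ).

using System.Text.Json;

// Shared-source Azure.Core shape today (for reference):
//   internal interface IUtf8JsonSerializable
//   {
//       void Write(Utf8JsonWriter writer);
//   }

// Assumed test-project copy with the extra parameter:
internal interface IUtf8JsonSerializable
{
    void Write(Utf8JsonWriter writer, SerializableOptions options);
}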
/// <param name="bytesWritten"></param>
/// <param name="options"></param>
/// <returns></returns>
bool TrySerialize(Stream stream, out long bytesWritten, SerializableOptions? options = default);
Did we consider also taking BinaryData?
Could that handle a stream where position != 0 though? A Utf8JsonWriter could, and supersedes a simple PrettyPrint in terms of flexibility.
See #35742 (comment)
In a similar vein - DynamicJson /cc: @annelo-msft
@pallavit - yes, it would be good to look at the interactions between DynamicData and the serialization interfaces and see if there's a good way to make it easy to go back and forth between them. @m-nash and I have discussed this a tiny bit - it potentially lights up scenarios around going back and forth between protocol and convenience methods, and could also have perf implications where model types are large and callers only want to access a few properties on them, when it doesn't make sense to deserialize the entire model. For perf, I would expect us to want to use MutableJsonDocument rather than DynamicData, until there is a lighter-weight dynamic story.
In the meantime, DynamicData/MutableJsonDocument can serialize to a Stream, so this interface doesn't prevent that interaction, just adds a hop.
Does the motivation for taking Stream come from the fact that it is the fundamental vehicle for content (e.g. on Response.ContentStream)?
+1 on asking why the stream needs to be seekable.
If the stream is not at position 0, wouldn't we just start writing at whatever the current position is?
In the serialization scenario the customer would want to read from the beginning after we have serialized into the stream; if it's not seekable, how would they do this?
The stream might be a non-seekable FileStream that writes to disk.
An underlying stream may be seekable when another stream wrapping some segment of bytes might not be, like a buffered stream over a network stream.
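A small sketch of the "write at the current position" behavior under discussion, assuming the implementation just hands the stream to Utf8JsonWriter as the test code does; seeking back to read the result is then the caller's concern and only works on seekable streams:

using System;
using System.IO;
using System.Text.Json;

var stream = new MemoryStream();
stream.Write(new byte[] { 0x01, 0x02 }, 0, 2); // stream is not at position 0

using (var writer = new Utf8JsonWriter(stream))
{
    writer.WriteStartObject();
    writer.WriteString("name", "Rex");
    writer.WriteEndObject();
} // the writer appends at the current position; no Seek is needed to write

stream.Position = 2; // reading back requires a seekable stream
Console.WriteLine(new StreamReader(stream).ReadToEnd()); // {"name":"Rex"}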
On the DynamicData pivot, is there a goal to keep the serialized-out format close enough to wire format that DynamicData could feasibly implement this interface? Which is related to the question - should protocol method-only APIs have serializable "models" as well?
/// <summary>
/// TODO
/// </summary>
public bool PrettyPrint { get; set; } = false;
Did we consider mirroring STJ with WriteIndented?
It's interesting that in the description of what you wrote they say "true if JSON is pretty printed on serialization", indicating that PrettyPrint is a pretty well understood concept.
But they chose not to use that term. WriteIndented seems more in line with what it's actually doing.
+1 that this property would be more helpful and consistent with other .NET APIs if it described its intent/specifically what it does.
We can discuss the naming here for sure, but PrettyPrint = Indent + Newlines + spaces after :, etc.
What scenario motivates the need for this property?
It's a very common scenario to generate a user-friendly, readable format for logs vs. the smallest possible format on the wire for perf reasons.
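A quick illustration of the two shapes with System.Text.Json, independent of whatever option name is chosen (the anonymous payload is made up for this example):

using System;
using System.Text.Json;

var value = new { name = "Rex", isHungry = true };

// Compact form for the wire - smallest payload:
Console.WriteLine(JsonSerializer.Serialize(value));
// {"name":"Rex","isHungry":true}

// Indented ("pretty printed") form for logs and diagnostics:
Console.WriteLine(JsonSerializer.Serialize(value, new JsonSerializerOptions { WriteIndented = true }));
// {
//   "name": "Rex",
//   "isHungry": true
// }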
this.IsHungry = model.IsHungry;
this.Name = model.Name;
this.RawData = model.RawData;
bytesConsumed = stream.Length;
Will bytesConsumed always be the same as stream.Length, or would stream.Position be more accurate?
If it is always stream.Length, do we add any value by returning this considering the caller already knows the length of the stream?
same comment on Serialize
For deserialize I think you are right, position is probably more accurate; on serialize, length and position should always be the same?
But in both cases, are we adding any info that can't be gained by inspecting the Stream? What is the expected use case for bytesWritten and bytesConsumed?
That's a good question; we should consider this when we finalize the signature sets. This was part of the original suggested strawman from @KrzysztofCwalina.
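One way to make the reported count independent of the stream's starting state, sketched as a hypothetical helper (illustrative only; whether the out param survives at all is still open per the comments above):

using System;
using System.IO;
using System.Text.Json;

internal static class SerializationHelpers
{
    // Report bytes written via Utf8JsonWriter.BytesCommitted rather than
    // stream.Length, so the count stays correct even when the stream did
    // not start at position 0.
    public static long WriteAndCount(Stream stream, Action<Utf8JsonWriter> write)
    {
        using var writer = new Utf8JsonWriter(stream);
        write(writer);
        writer.Flush();
        return writer.BytesCommitted;
    }
}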
namespace Azure.Core.Tests.ModelSerializationTests
{
    public class Animal : IJsonSerializable, IUtf8JsonSerializable
Is the intended goal that Azure SDK public models will implement both of these interfaces? Will IUtf8JsonSerializable remain internal? Can you help us understand the thinking around this?
I believe that the intended goal is to expose TrySerialize and TryDeserialize in IJsonSerializable to all users. I think the goal is to generate these two methods for all public models, as currently users cannot serialize/deserialize on their own since these methods are internal. IUtf8JsonSerializable would remain internal and needs to be updated - we are just using a copy for now.
We didn't want to gut the underlying serialization implementation just yet. Our first step here is to create this public interface into what is already happening, which gives us the flexibility to adjust the internal implementation whenever we want as long as it performs the same functionally.
I see. For building out our conceptual models, does this mean we should think of IJsonSerializable as an eventual replacement for IUtf8JsonSerializable where it appears in Core APIs (although the practical implementation will be staged in steps for convenience)?
This is a great question. Having 2 interfaces gives us the flexibility but it also increases the burden of keeping them consistent.
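A sketch of how the two could coexist during the transition, assuming the public Try method simply delegates to the internal writer-based path the way the test code does (names and error handling are illustrative):

using System.IO;
using System.Text.Json;

public partial class Animal : IJsonSerializable, IUtf8JsonSerializable
{
    // The internal, writer-based path that generated code already uses.
    void IUtf8JsonSerializable.Write(Utf8JsonWriter writer, SerializableOptions options)
        => SerializeCore(writer, options);

    // The new public surface delegates to the same path, so the two
    // interfaces stay consistent by construction while the internals
    // remain swappable later.
    public bool TrySerialize(Stream stream, out long bytesWritten, SerializableOptions? options = default)
    {
        options ??= new SerializableOptions();
        using var writer = new Utf8JsonWriter(stream, new JsonWriterOptions { Indented = options.PrettyPrint });
        SerializeCore(writer, options);
        writer.Flush();
        bytesWritten = writer.BytesCommitted;
        return true;
    }

    private void SerializeCore(Utf8JsonWriter writer, SerializableOptions options)
    {
        // Generated serialization (property name mapping, etc.) goes here.
        writer.WriteStartObject();
        writer.WriteEndObject();
    }
}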
try
{
    JsonDocument jsonDocument = JsonDocument.Parse(stream);
    var model = DeserializeAnimal(jsonDocument.RootElement, options ?? new SerializableOptions());
Curious: what is the thinking behind creating a separate instance of the type in DeserializeAnimal() and then using it to populate the values in this method? It feels like you could optimize performance a bit by not creating the type instance?
@nisha-bhatia should have a task to measure the perf diff for both serialize and deserialize. Based on these results we may or may not pull that optimization into v1. The goal at a high level was to not change the internal implementation on v1.
    bytesConsumed = stream.Length;
    return true;
}
catch
Should we be specific about exceptions that should cause deserialization to fail?
@nisha-bhatia can you create an issue for this? We should be purposeful here.
/// <summary>
/// TODO
/// </summary>
public class SerializableOptions
It might be helpful to look at the System.Text.Json serialization-related options types, and understand if there are options we would want to support from those and/or if we would want to design the API to get those options from related types like @heaths suggested here: https://github.com/Azure/azure-sdk-for-net/pull/35742/files#r1176840779
Our options are purposefully more restrictive here since the idea is we already have all the renaming / restructuring logic written internally and we simply want to expose this to the customers publicly. We don't really want to expose the ability for them to further override these settings since then they aren't getting a universally transferable format. It would even be difficult to serialize / deserialize within the same library if they could override these types of things.
If there are any specific options you think should be included on top of what we have please let us know.
One extra thought I'm stashing here quickly - we're working on DynamicDataOptions, which may have serialization implications - wondering if there's any overlap or whether they should work together.
What do you mean differently-formatted JSON is not "universally transferable"? JSON parsers are supposed to ignore insignificant whitespace by design. I've never seen one that doesn't, across a multitude of languages and class libraries. It wouldn't be a true JSON parser otherwise.
Our definition of "pretty print" will vary from person to person and use to use. We should make it at least as robust as STJ or consider not supporting it until there's a need.
I was referring to the other options which allow you to define name changes etc. through custom JSON converters. We want to avoid exposing these since we are pinning ourselves to the Azure Wire Format, where we can control the translation layer so that other languages can deserialize the objects we serialize.
We have played with IAzureJsonSerializable and AzureJsonSerializableOptions to make this point clear but avoided that in this first draft.
PrettyPrint - I agree this is a universally accepted concept.
I was referring to preferences like how many spaces to indent, or even to use tabs instead of spaces. These are things that STJ and even Newtonsoft.Json expose. PrettyPrint is doing a lot of heavy lifting - forcing a lot of our preferences onto customers.
I believe I addressed that this interface will be on all of our models, and that the implementation using the internal IUtf8JsonSerializable will not change in v1, but it can change after that since we are already looking at using Utf8JsonReader for perf reasons. In the Bring-Your-Own-Model scenario customers can implement this same interface. We decided to not take on a serialization option for api-version. No plans to support non-JSON formats; if we did this it would be the same as the version option - we would need to update our serialization writers in the generator first, then introduce the capability in the public layer.
    bytesWritten = stream.Length;
    return true;
}
catch
It is not typical to just catch any exception and return false, even in TryX methods. Typically there is a well-defined set of scenarios where we would return false, and other exceptions should be propagated.
I question whether we should return false at all. See my comments above. If being unable to read or write data is exceptional, we should throw exceptions. Returning true or false just pushes the problem upstream. If the underlying APIs throw, we shouldn't catch that (exceptions as code flow) and turn it into a bool.
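A sketch of the "well-defined failure set" variant, catching only the exception type that signals a malformed payload and letting everything else propagate (whether Try stays at all is still an open question per the comments above):

using System.IO;
using System.Text.Json;

public partial class Animal
{
    public bool TryDeserialize(Stream stream, out long bytesConsumed, SerializableOptions? options = default)
    {
        bytesConsumed = 0;
        try
        {
            using JsonDocument document = JsonDocument.Parse(stream);
            // ... populate this instance from document.RootElement ...
            bytesConsumed = stream.CanSeek ? stream.Position : 0;
            return true;
        }
        catch (JsonException)
        {
            // Malformed JSON is the expected "return false" case;
            // IOException, ObjectDisposedException, etc. stay exceptional
            // and intentionally propagate.
            return false;
        }
    }
}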
/// <param name="bytesWritten"></param>
/// <param name="options"></param>
/// <returns></returns>
bool TrySerialize(Stream stream, out long bytesWritten, SerializableOptions? options = default);
The options appearing after the out params look weird to me. I wonder if we should consider pinning the out params at the end and requiring options - or overloads.
I think because options are optional they would need to go at the end.
Yup, that's why I'm proposing that we make them required and if we really need to make them optional, we overload.
Oh sorry, I missed your suggestion at the end. I want to settle on whether the out param exists at all, and on Try vs. Do, before making adjustments here, but will keep this in mind once we settle on that part.
The goal of this PR is to provide a public interface that exposes the currently internal serialization code so customers can access it without needing to use reflection or write their own translation. This interface would be applied to all models in all of our libraries today. The implementation simply uses the existing internal serialization logic since that code already knows all of the property name changes / structure changes that exist. We can at a future date change the underlying implementation as long as we maintain functional compatibility.
The SerializableOptions class introduces a couple of concepts that are common across the other languages' serialization implementations:
- IncludeReadOnlyProperties
- HandleAdditionalProperties
We should consider what defaults we want for the above properties. I would imagine most customers would want true for both in most cases, and we can simply pass false for both in our client invocations.
Currently the interface only takes in Stream as the input and output object; we should consider other convenience overloads such as string. We should also consider explicit / implicit cast operators Model -> RequestContent and Response -> Model.
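A hedged sketch of what those cast operators could look like on a model (RequestContent.Create and Response.ContentStream are existing Azure.Core APIs; the operators and the model members they call are only the proposal above, not shipped code):

using System.IO;
using Azure;
using Azure.Core;

public partial class Animal
{
    // Model -> RequestContent: serialize into a buffer and hand it to the pipeline.
    public static implicit operator RequestContent(Animal animal)
    {
        var stream = new MemoryStream();
        animal.TrySerialize(stream, out _);
        stream.Position = 0;
        return RequestContent.Create(stream);
    }

    // Response -> Model: deserialize the response payload into a model instance.
    public static explicit operator Animal(Response response)
    {
        var animal = new Animal();
        animal.TryDeserialize(response.ContentStream, out _);
        return animal;
    }
}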