
[core-http] Clean up event listener when streaming is done #12038

Merged

Conversation

jeremymeng
Member

Currently our cleanup code removes the abort event listener when the response is
returned. In the streaming case, even though the response has been returned, the work
is not done yet; with the abort listener removed, we lose the ability to cancel the
streaming.

This change fixes the issue by unregistering the abort listener for
streaming when the stream ends or an error occurs.

This fixes #12029
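In other words, the cleanup has to be tied to the stream's own lifecycle rather than to the response being returned. A minimal sketch of that pattern, assuming a hypothetical `removeAbortListener` callback (illustrative only, not the actual core-http code):

```ts
import { Readable } from "stream";

// Illustrative only: keep the "abort" listener alive until the stream itself
// finishes or errors, then run the cleanup exactly once.
function removeAbortListenerWhenStreamIsDone(
  stream: Readable,
  removeAbortListener: () => void // hypothetical cleanup callback
): void {
  const cleanupCallback = (): void => {
    stream.removeListener("end", cleanupCallback);
    stream.removeListener("error", cleanupCallback);
    removeAbortListener();
  };
  stream.once("end", cleanupCallback);
  stream.once("error", cleanupCallback);
}
```

This mirrors the `stream.once("error", cleanupCallback)` hook visible in the diff below.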

Member
@xirzec xirzec left a comment

Thanks Jeremy for taking on this work! I have a couple small bits of feedback for your consideration, but I'm eager to find a solution for this problem so that we can also solve it in core-https.

stream.once("error", cleanupCallback);
};

// it's super rare if not impossible that both download streaming and upload streaming happen
Member

This seems a little implausible, but given that anything in NodeJS can come in as a stream (even a very simple request body JSON payload), I don't think it's impossible. Is there a reason we can't handle both?

Member Author

By this rare case I meant an HTTP request that is uploading a stream body while, at the same time, streaming down the response body. I chatted with @bterlson and he doesn't think we need to worry about this case.

Member Author

Oh, I think I shouldn't have an if-else here. Both the upload stream and the download stream should have the cleanup callback hooked up.

Member Author

But I need to make sure the last one to finish does the cleanup.

Member Author

I remember my intuition was that, for a particular request, if there's a download stream it would probably end after the upload stream (if there is one), so clean up at the end of the download; otherwise, clean up at the end of the upload stream, if any. Failing both, clean up when the response returns.

Member

It's not technically impossible for a response stream to start before a request stream finishes from a conceptual standpoint, but everywhere up and down the stack needs to work in that case and, ultimately, boil up to an application that is comfortable writing a response before a request has even finished. This all seems very rare, and googling seems to confirm.

That said, if it's not hard to support, it might not be a bad idea to support it? I imagine a bug here would be very hard to track down...

Member Author
@jeremymeng jeremymeng Nov 5, 2020

Because we use the same abort signal for the upload stream and the download stream, we want to keep the listener around until both streams are done. There are some possible fixes, I think:

  1. use separate abort signals for upload and download and clean up each one independently, or
  2. if we could somehow figure out which stream ends last, do the cleanup on that one's end/error event (see the sketch below).
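A rough sketch of option 2, assuming both streams are registered up front; `removeAbortListener` is a stand-in for the real cleanup, and `finished` from Node's `stream` module is used so each stream reports completion exactly once:

```ts
import { Readable, finished } from "stream";

// Illustrative sketch of option 2: whichever registered stream finishes last
// performs the shared cleanup (e.g. removing the "abort" event listener).
function cleanUpAfterLastStream(
  streams: Readable[],
  removeAbortListener: () => void // stand-in for the real cleanup
): void {
  let remaining = streams.length;
  for (const stream of streams) {
    // `finished` invokes its callback exactly once, on "end", "error", or premature close.
    finished(stream, () => {
      remaining -= 1;
      if (remaining === 0) {
        removeAbortListener();
      }
    });
  }
}
```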

Member Author

Since we don't have to worry about multi-threaded race conditions, would simple checks like in c2a8a8a work? I am not sure how to test this edge case with a real request/response, though.

Member Author

I think the current version should work. @xirzec please have another look.

stream.removeListener("end", cleanupCallback);
stream.removeListener("error", cleanupCallback);
if (abortListener) {
httpRequest.abortSignal?.removeEventListener("abort", abortListener);
Member

So this cleans up the listener - but will cancelation work as expected? Does node-fetch actually stop upload / download streams when the signal is given?

I believe that it could, I just want to make sure we've actually tested it.

Member Author

Yes, I tested it using the storage repo code. node-fetch will destroy the request body and emit an error on the response body when handling the abort event: https://github.com/node-fetch/node-fetch/blob/v2.6.1/src/index.js#L60
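For reference, a minimal standalone sketch of that behavior with node-fetch v2 and an AbortSignal (this is not the storage repro; the URL and timing are placeholders):

```ts
import fetch from "node-fetch";
import { AbortController } from "abort-controller";

async function main(): Promise<void> {
  const controller = new AbortController();
  // Placeholder URL; any endpoint that streams a reasonably large body will do.
  const response = await fetch("https://example.com/large-file", { signal: controller.signal });

  response.body.on("error", (err: Error) => {
    // On abort, node-fetch destroys the request body and emits an AbortError
    // on the response body stream.
    console.log("download stream errored:", err.name, err.message);
  });

  // Abort while the body is still streaming.
  setTimeout(() => controller.abort(), 100);
}

main().catch(console.error);
```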

@jeremymeng jeremymeng force-pushed the corehttp-unregister-when-streaming-done branch 2 times, most recently from 4c0dd26 to 8126d61 on October 26, 2020 17:19
@jeremymeng jeremymeng force-pushed the corehttp-unregister-when-streaming-done branch 2 times, most recently from bd03bd1 to edfa4d2 on November 13, 2020 19:30
@jeremymeng jeremymeng added this to the MQ-2020 milestone Nov 17, 2020
Member
@xirzec xirzec left a comment

Thinking about this more, could we have a solution that does something slightly different, by having the helper return a promise that resolves when the stream is completed?

e.g.

let uploadStreamDone = Promise.resolve();
if (httpRequest.abortSignal && isReadableStream(body)) {
   uploadStreamDone = streamIsComplete(body);
}
let downloadStreamDone = Promise.resolve();
if (httpRequest.abortSignal && isReadableStream(operationResponse.readableStreamBody)) {
   downloadStreamDone = streamIsComplete(operationResponse.readableStreamBody);
}

Promise.all([uploadStreamDone, downloadStreamDone]).then(() => {
   httpRequest.abortSignal?.removeEventListener("abort", abortListener);
});
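The snippet above assumes a `streamIsComplete` helper; here is a sketch of what such a helper could look like, resolving on completion, error, or close so that the `Promise.all` above always settles:

```ts
import { Readable } from "stream";

// Sketch of the helper assumed above: resolves once the stream has ended,
// errored, or been closed, and never rejects, so the Promise.all always settles.
function streamIsComplete(stream: Readable): Promise<void> {
  return new Promise<void>((resolve) => {
    stream.once("end", () => resolve());
    stream.once("error", () => resolve());
    stream.once("close", () => resolve());
  });
}
```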

registerCleanUpListenerOnStream = (stream: NodeJS.ReadableStream, isDownload: boolean) => {
const cleanupCallback = () => {
if (isDownload) {
streamingDownload = false;
Member

I don't like how this callback manipulates variables in the outer scope, especially because they are declared below this function definition.

@@ -66,6 +69,28 @@ export abstract class FetchHttpClient implements HttpClient {
}
};
httpRequest.abortSignal.addEventListener("abort", abortListener);

registerCleanUpListenerOnStream = (stream: NodeJS.ReadableStream, isDownload: boolean) => {
Member

instead of using a boolean isDownload, what about something like this:

Suggested change
registerCleanUpListenerOnStream = (stream: NodeJS.ReadableStream, isDownload: boolean) => {
registerCleanUpListenerOnStream = (stream: NodeJS.ReadableStream, streamType: "download" | "upload") => {

It seems like it would be more readable, since you wouldn't be passing true/false to the method when calling it

@@ -55,6 +55,9 @@ export abstract class FetchHttpClient implements HttpClient {

const abortController = new AbortController();
let abortListener: ((event: any) => void) | undefined;
let registerCleanUpListenerOnStream:
Member

instead of conditionally defining the method and checking its existence later, can't we just check if (httpRequest.abortSignal) instead?

@jeremymeng jeremymeng force-pushed the corehttp-unregister-when-streaming-done branch from 9b63e8d to fc93036 on January 6, 2021 23:47
@jeremymeng
Member Author

Yes it looks much cleaner with promises. I will see whether it works as expected.

@jeremymeng
Member Author

Tested with the repro code and things worked as expected. @xirzec please have another look.

Member
@xirzec xirzec left a comment

This looks a lot cleaner! Thanks for updating.

I left one more suggestion that I believe is able to simplify the code a little, but double check that I'm not missing something. 😄

Comment on lines 213 to 221
Promise.all([uploadStreamDone, downloadStreamDone])
.then(() => {
httpRequest.abortSignal?.removeEventListener("abort", abortListener!);
return;
})
.catch((e) => {
throw e;
});
Member

I think this is identical, but simpler?

Suggested change
Promise.all([uploadStreamDone, downloadStreamDone])
.then(() => {
httpRequest.abortSignal?.removeEventListener("abort", abortListener!);
return;
})
.catch((e) => {
throw e;
});
await Promise.all([uploadStreamDone, downloadStreamDone]);
httpRequest.abortSignal?.removeEventListener("abort", abortListener!);

Member Author

This is exactly what I had at first, but for some mysterious reason it didn't work and the repro program exits silently. Let me try again.

Member Author

Right, I had await in fc93036 and my test program exits silently. Using a callback works. @bterlson, any ideas on the subtle differences between the two that might explain this?

Member

Hmm, well, with the callback method you're no longer participating in the async method that wraps this call, which means you have an unhandled promise rejection (which in some NodeJS versions will crash your entire app).

You don't actually want to do that, I think, since this method is already returning a promise. However, I'm not sure how await works inside finally... does it override the return/reject state of the promise?

Member

Looks like await inside finally should do the right thing (throw if there is an error)
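As a small illustration of those semantics (my own example, not code from the PR): an `await` inside `finally` only delays settlement, and the original return value or error is preserved unless the awaited promise itself rejects, in which case that rejection wins.

```ts
// Hypothetical cleanup used only for this illustration.
function cleanupDone(): Promise<void> {
  return Promise.resolve();
}

async function withCleanup(): Promise<string> {
  try {
    return "result";
  } finally {
    // Settlement of withCleanup() is delayed until this await completes. If
    // cleanupDone() rejected instead, withCleanup() would reject with that
    // error, overriding the "result" return value.
    await cleanupDone();
  }
}

withCleanup().then((value) => console.log(value)); // logs "result"
```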

Member Author

I did some more experiments. This silent-exiting behavior seems to be related to how the helper is implemented. When await Promise.all() is used, if I don't have the empty "data" event handler, the test program exits silently.

 function isStreamComplete(stream: Readable): Promise<void> {
   return new Promise((resolve) => {
     stream.on("data", () => {
       return;
     })
     stream.on("close", () => { console.log("### isStreamComplete on close"); resolve(); });
     stream.on("error", () => { console.log("### isStreamComplete on error"); });
     stream.on("end",   () => { console.log("### isStreamComplete on end");   });
   });
 }

Also, I found that with await Promise.all() in our finally block, we are blocking the test program's execution until the streaming is finished. User code that calls stream.destroy() on the download stream after the blobClient.download() call, for example, won't get executed until the download stream is complete, which is wrong.

testing 0
### isStreamComplete on end
### isStreamComplete on close
download call done for 0
destroying stream 0...

Compare that to the result of using the Promise.all().then() callback: the stream can be destroyed, and then the promise resolves.

testing 0
download call done for 0
destroying stream 0...
### isStreamComplete on end
stream 0 on end
### isStreamComplete on close
stream 0 closed

I still don't understand why an empty "data" event handler matters. Regardless, we shouldn't block on stream completion.
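A simplified, self-contained sketch of the two variants being compared (the promises and cleanup function are stand-ins for the real core-http objects):

```ts
// The promises and cleanup function here are stand-ins for the real core-http objects.
const uploadStreamDone: Promise<void> = Promise.resolve();
const downloadStreamDone: Promise<void> = Promise.resolve();
function removeAbortListener(): void {
  console.log("abort listener removed");
}

// Variant 1 (blocks the caller): an async function awaiting here does not settle
// until both streams are done, so returning the response is held up until
// streaming finishes.
async function cleanupBlocking(): Promise<void> {
  await Promise.all([uploadStreamDone, downloadStreamDone]);
  removeAbortListener();
}

// Variant 2 (non-blocking): the response can be returned right away; the listener
// is removed in the background once both streams settle, and the catch prevents
// an unhandled promise rejection.
function cleanupInBackground(): void {
  Promise.all([uploadStreamDone, downloadStreamDone])
    .then(removeAbortListener)
    .catch(() => {
      /* ignore errors */
    });
}

void cleanupBlocking();
cleanupInBackground();
```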

Member

Ah yes, you are absolutely right that we shouldn't block on completion. I think the only dilemma we have here is what to do when the new promise rejects. We can't throw/leave it rejected since the consumer has no way to handle it.

Should we log or ignore it?

Member Author

Would it be fine if our promise helper always resolves and never rejects? The doc says

A Readable stream will always emit the 'close' event if it is created with the emitClose option.

Regarding the test program exiting silently when await Promise.all() is used: @bterlson helped investigate it and found out that if we await on a promise that never resolves or rejects, NodeJS exits the process with code 13:

  • 13 Unfinished Top-Level Await: await was used outside of a function in the top-level code, but the passed Promise never resolved.

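A tiny repro of that Node behavior (my own example, saved as an ES module and run with plain `node`): awaiting a promise that never settles at the top level makes the process exit with code 13 once the event loop has nothing else to do.

```ts
// Top-level await (ES module) on a promise that never settles. Once the event
// loop has drained, Node exits with status code 13 ("Unfinished Top-Level Await")
// instead of hanging forever.
await new Promise(() => {
  /* never resolves or rejects */
});
console.log("never reached");
```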
Member

I'm fine with doing Promise.all().then(cleanup).catch(() => { /* ignore errors */ })

Member
@xirzec xirzec left a comment

I'm flipping my vote back until we figure out the unhandled rejection issue, since that's caused distress to customers before.

@jeremymeng jeremymeng requested a review from xirzec January 9, 2021 00:13
@jeremymeng
Member Author

/azp run js - storage-blob - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@Azure Azure deleted a comment from azure-pipelines bot Jan 11, 2021
@Azure Azure deleted a comment from azure-pipelines bot Jan 11, 2021
@Azure Azure deleted a comment from azure-pipelines bot Jan 11, 2021
@jeremymeng jeremymeng self-assigned this Jan 12, 2021
@jeremymeng
Member Author

> I'm flipping my vote back until we figure out the unhandled rejection issue, since that's caused distress to customers before.

@xirzec please have another look. I ported the fix to core-https.

Member
@xirzec xirzec left a comment

I'm really happy with where this change landed. We should make sure to ship core-http again in February so this goes out.

@@ -734,7 +734,6 @@ describe("FileClient", () => {
// tslint:disable-next-line:no-empty
} catch (err) {
assert.equal(err.name, "AbortError");
assert.equal(err.message, "The operation was aborted.", "Unexpected error caught: " + err);
Member

is this unrelated?

Member Author

It's related. node-fetch seems to choose a different abort error message than the browser... see my commit message from f99cefc. Previously a different AbortError was caught.

Now that aborting streaming is working, we are getting an AbortError from
node-fetch, which uses a different message ("The user aborted a request.")
than the browser fetch API does, so stop verifying error.message.

Member

thanks for explaining

return;
})
.catch((e) => {
logger.warning("Error when cleaning up abortListener", e);
Member

Maybe a teensy bit more detail for folks who have to ponder the log message?

Suggested change
logger.warning("Error when cleaning up abortListener", e);
logger.warning("Error when cleaning up abortListener on httpRequest", e);

Member Author

Certainly, updated.

@jeremymeng
Member Author

@ljian3377 please have a look at the storage test updates.

@jeremymeng jeremymeng added the Client (This issue points to a problem in the data-plane of the library.) and Storage (Storage Service: Queues, Blobs, Files) labels Jan 16, 2021
Member
@ljian3377 ljian3377 left a comment

Storage side change looks good to me.

jeremymeng and others added 14 commits January 20, 2021 21:53

- Currently our cleanup code removes the abort event listener when the response is returned. In the streaming case, even though the response is returned, the work is not done yet; with the abort listener removed, we lose the ability to cancel the streaming. This change fixes the issue by unregistering the abort listener for streaming when the stream ends or an error occurs.
- Technically there's a possibility that upload streaming and download streaming are happening concurrently. We only want to remove the abort controller at the end of the last active stream.
- For some reason the process exits without this.
- With .catch on Promise.all(), stream.on("data") is no longer needed. Still mysterious, but...
- According to the docs, not all Readable streams emit "close" in Node v8.x. Log a warning for errors.
- Now that aborting streaming is working, we are getting an AbortError from node-fetch, which uses a different message ("The user aborted a request.") than the browser fetch API does, so stop verifying error.message.
@jeremymeng jeremymeng force-pushed the corehttp-unregister-when-streaming-done branch from 06732f1 to dc20ce5 on January 20, 2021 21:57
@jeremymeng jeremymeng merged commit f8e023f into Azure:master Jan 20, 2021
@jeremymeng jeremymeng deleted the corehttp-unregister-when-streaming-done branch January 20, 2021 23:15
ljian3377 pushed a commit to ljian3377/azure-sdk-for-js that referenced this pull request Jan 22, 2021
Currently our cleanup code removes the abort event listener when the response is
returned. In the streaming case, even though the response is returned, the work is not done
yet; with the abort listener removed, we lose the ability to cancel the
streaming. This change fixes the issue by unregistering the abort listener for
streaming when the stream ends.

Now that aborting streaming is working, we are getting an AbortError from
node-fetch, which uses a different message ("The user aborted a request.")
than the browser fetch API does, so stop verifying error.message.

Port fix to core-https
LuChen-Microsoft added a commit that referenced this pull request Jan 25, 2021
* [Synapse] Land the initial generated code (#12713)

* WIP

* WIP

* WIP

* WIP

* Add last two to workspace

* Add basic READMEs and test builds

* Add basic READMEs and test builds

* Fix package versions

* Skip tests

* Address feedback

* Address feedback

* Rerun swagger codegen

* Rerun swagger codegen

* Add rolled up type files

* WIP

* WIP

* New regeneration with tracing

* New regeneration with tracing

* Remove prepack

* Add sdk-type

* Add lint

* Skip lint for now

* Add copyright and improve generation

* Add copyright and improve generation

* Fix keyvault lint

* Export fixes

* Export fixes

* Fix release dates

* Fix synapse versions (#12840)

* Increment version for storage releases (#12846)

* Increment package version after release of azure-storage-blob

* Increment package version after release of azure-storage-file-datalake

* Increment package version after release of azure-storage-file-share

* Increment package version after release of azure-storage-queue

* [test-utils-recorder] Refactor to separate node/browser codepaths (#12796)

* [test-utils-recorder] Refactored recorder to separate node.js dependencies

* Repaired metricsadvisor build

* Repaired textanalytics build

* Repaired formrecognizer build

* Repaired template build

* Fixed base rollup config

* Updated CHANGELOG for recorder

* Removed some no-longer-used dependencies and options.

* Added `chai` to imports

(How was this not a type error??)

* Apply suggestions from code review

Co-authored-by: Harsha Nalluru <sanallur@microsoft.com>

* Removed unused 'env' import

Co-authored-by: Harsha Nalluru <sanallur@microsoft.com>

* [Cosmos] Fix a strictness issues in utils/encode (#12765)

Fix a strictness issue in `utils/encode.ts` in #12745.

* [Synapse] Add back missing Spark example (#12842)

* Create PR to target against master (#12852)

Co-authored-by: Sima Zhu <sizhu@microsoft.com>

* Rush update (#12860)

* [monitor] Change package namespace back to @azure (#12843)

Change namespace back to @azure

* [communication-administration] fix option passing for LROs and slow playback tests (#12863)

* Add PR CI step to build samples (#12715)

* Add PR CI step to build samples

* Made a change to rush-runner

* Fixed an issue with rush-runner patch

* Repaired metricsadvisor and anomalydetector samples builds

* Repaired storage samples build scripts, added skip to storage-internal-avro

* Added skips to core packages without samples

* Resolved conflicting options in identity samples

* Skipped broken test scripts

* pnpm-lock

* [eventhubs-checkpointstore-blob] fix build:samples script (#3)

* [event-processor-host] fix build:samples script

* Revert sorting of package.json script entries

* Fixed merge artifact in dev-tool package.json

* pnpm-lock

* Revert formatting changes to anomalydetector

* Run build phase with transitive dependencies.

* Add stub build:samples commands for synapse

* Added samples build setup for tables

* Repaired two broken links in EPH

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>
Co-authored-by: chradek <chradek@microsoft.com>

* update invalid sample products (#12849)

Co-authored-by: Shuang Jiang <shujia@microsoft.com>

* [MQ: build reliability] Export the SERVICE_BUS_ENDPOINT output value for usingAadAuth.js (#12818)

There were several issues causing the smoke tests to not be a useful signal - this PR improves on the situation and gets us back to green. There are still some issues that require followup which I'll be filing soon.

Fixes:
* ServiceBus was only whitelisting a single sample (usingAadAuth.js) which _used_ to pass but only because it wasn't doing any real work. When we changed it awhile back to actually attempt to use it's connection it failed. This still needs some investigation but in the meantime I've swapped it out and brought in some more useful samples like sendMessages.js, browseMessages.js and sessions.js, which should give us some coverage. This also required altering the test-resources.json so it properly created the sample queues as well as outputting them so they'd get set in the environment.
* FormRecognizer had some failing samples. After talking with @willmtemple I ignored one of them that will require some actual test data to be bootstrapped reasonably (so not necessarily a good candidate for this). I fixed another one that appeared to just not handle some data being empty (seems legitimate, but perhaps it's a bug).d up in the sample's environment.

Fixes #12803

* Sync eng/common directory with azure-sdk-tools for PR 1270 (#12867)

* resolve git longpath error when applying documentation updates

Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com>

* Sync eng/common directory with azure-sdk-tools for PR 1274 (#12866)

* Added a step of skipping package json update for spring boot packages

* Added skip package json config

* Update the indentation

Co-authored-by: Sima Zhu <sizhu@microsoft.com>

* Sync eng/common directory with azure-sdk-tools for PR 1273 (#12871)

* Support alpha and azure pipeline build version formats for SemVer parsing and sorting

* Minor build number fixes

* Support zero-padding of build versions more generically in SemVer script

* Fix pre-release label conditional for HasValidPrereleaseLabel in semver script

* Set default convention for build number separator in semver script

Co-authored-by: Ben Broderick Phillips <bebroder@microsoft.com>

* [synapse] Added 'clean' commands. (#12870)

Being able to use `rush clean` is blocked on these commands being defined.

* [service-bus] Addressing some incorrect samples and missing migrationguide.md section (#12879)

Fixes #12861

* Upgrade TypeScript compiler version to 4.1.2 (#12770)

* upgrade TS version and fix compilation issues

* upgrade the linting parser version and fix new linting issues

* fix cosmos sample

* address feedback

* fix linting issues in formrecognizer tests

* use unknown instead of any across our code

* address more issues

* cleanup package.json in core-http

* revert noisy linting changes caused by vanilla eslint rules not TS aware

* allow the poller to have results of type void

* fixing samples

* fix keyvault-certificates' sample

* [communication] update samples (#12810)

* Upgrade ESLint version to 7.15.0 (#12894)

All the heavy lifting for upgrading our linting packages has been already done here https://github.com/Azure/azure-sdk-for-js/pull/12770. This PR upgrades ESLint only and fixes https://github.com/Azure/azure-sdk-for-js/issues/9050.

* OpenTelemetry Exporter using Resources API to get service properties (#12893)

* OT Exporter use Resources API to get service properties

* Test

* Addung pnpm-lock

* Addressing comments

* Updating pnpm

* [communication-sms] enable live tests using pipeline variables (#12920)

* Fixed the bug of replacing img src with href text (#12921)

Co-authored-by: Sima Zhu <sizhu@microsoft.com>

* When we go down the 'drain' code path for batching receivers we need to keep the event listeners in place until we actually resolve the promise. (#12908)

When we went down the 'drain' path with BatchingReceiver we'd actually remove the event listeners. 

With the old code this could happen:

1. message received, debounce timer set
2. debounce timer fires
3. drain is requested (also removed the event listeners!)
4. message arrives (ignored)
5. receiver_drain arrives and resolve's the promise but the message in step 4 is lost.

The simplest way to reproduce this was to just connect over a slower link (created an SB in AU, which is slower than connecting over to something in the US) and send some about 100 messages (I used 2k-ish of text for each one) - this made it more likely that you'd end up with only a partial transfer of messages and have a drain. This became even more obvious once we moved to scheduling the promise resolution with setTimeout() as it widened the gap (this bug was less easy to see before!).

Many thanks to @enoeden for finding this bug!

Fixes #12711

* [Event Hubs] Remove @constructor tags (#12927)

* Standardization of our documentation comments (#12912)

# Status quo
Some of our documentation comments are [TypeDoc](http://typedoc.org/guides/doccomments/) and some of them are JSDoc with type description in the comments even though it is for typed TS code.

# Standardization
I decided the best way to go about this is to migrate to [TSDoc](https://github.com/Microsoft/tsdoc) and enforcing it using the [tsdoc eslint plugin](https://www.npmjs.com/package/eslint-plugin-tsdoc).

Pros:
- TSDoc is a proposal to standardize the doc comments used in TypeScript code, so that different tools can extract content without getting confused by each other’s markup.
- It is being developed at Microsoft, with the TypeScript team.
- It has an ESLint plugin that enforces it and will error when it sees unsupported tags (e.g. `@memberof`, `@class`, `@constructor`, `@type`, etc).

Cons:
- It is still in early stages (adoption is ongoing though, e.g. it is being used by the API extractor tool).
- TSDoc != TypeDoc (the tool we currently use for generating our documentation). However, TypeDoc plans to officially support TSDoc in v1.1 (see https://github.com/TypeStrong/typedoc/issues/1266).

# Notable tag changes
- `@ignore` is a JSDoc tag and was used in conjunction with `@internal`. These tags were needed because [TypeDoc does not yet support documenting only definitions exported by the entry point](https://github.com/TypeStrong/typedoc/pull/1184#issuecomment-650809143) and still documents everything exported from all files. I removed `@ignore` because [`@internal`](https://tsdoc.org/pages/tags/internal) only should suffice
- `@ignore` when used alone is replaced with TypeDoc's [`hidden`](http://typedoc.org/guides/doccomments/#hidden-and-ignore). EDIT: I replaced `@ignore` with [`@hidden`](https://github.com/TypeStrong/typedoc/releases/tag/v0.12.0) because the TypeDoc version used for `docs.microsoft.com` is v0.15.0 which does not support `--stripInternal`. After, they upgrade, I will remove all `@hidden` tags. 
- `@summary` is gone because it is not part of TSDoc or TypeDoc

This PR applies the changes to packages that respect our linting rules. Ones that do not yet will be migrated later when we start fixing their linting issues.

Here are vanilla examples of TypeDoc 0.18.0 (version used by our EngSys) after the changes here as a sanity check:
- random method:
![typedoc](https://user-images.githubusercontent.com/6074665/102302881-f6186380-3f27-11eb-8cc6-93e4c8f7d42d.PNG)
- a class constructor that used to have type information in the documentation comments:
![constructor](https://user-images.githubusercontent.com/6074665/102357078-f8a4a880-3f7b-11eb-92d1-c086ecc39c0b.PNG)

# `@hidden` works the same way as `@ignore`
Here are the list of documented functions generated by `TypeDoc v0.15.0` for the text analytics package and there is no function that was marked `@hidden`, e.g. `combineSuccessfulAndErroneousDocumentsWithStatisticsAndModelVersion`
![image](https://user-images.githubusercontent.com/6074665/102426196-e018aa80-3fdc-11eb-8b69-1ac265391fad.png)

# Things to consider
- Our documentation must be generated using the TypeDoc flag [`--stripInternal`](http://typedoc.org/guides/options/#stripinternal)
- Should we add a `docs` npm script to our `package.json`s (similar to [Cosmos's](https://github.com/Azure/azure-sdk-for-js/blob/2424b74f029273677f62433f28dd1390806f682c/sdk/cosmosdb/cosmos/package.json#L60)) so that we can see how our docs are generated as we write our comments?

Fixes https://github.com/Azure/azure-sdk-for-js/issues/3027.

* Fix code snippet in README (#12930)

* [cosmos] modify package.json with the test scripts (#12872)

* add express sample for event hub (#12246)

* add express sample

* update README.md

* update doc

* update doc

* add apiref

* update curl sample

* [event-hubs] express sample with sendBatch

* remove duplicated eventProducer

* update readme fix broken link

* Update sdk/eventhub/event-hubs/samples/expressSample/src/index.ts

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/eventhub/event-hubs/samples/expressSample/src/README.md

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/eventhub/event-hubs/samples/expressSample/src/README.md

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/eventhub/event-hubs/samples/expressSample/src/README.md

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/eventhub/event-hubs/samples/expressSample/src/README.md

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/eventhub/event-hubs/samples/expressSample/src/README.md

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/eventhub/event-hubs/samples/expressSample/src/asyncBatchingProducer.ts

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/eventhub/event-hubs/samples/expressSample/src/index.ts

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* sample code batchSendSize=20

* renaming

* remove metadata from readme

Co-authored-by: chradek <chradek@microsoft.com>
Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* [Synapse] Remove duplicate clean command (#12940)

* Add dataplane pckgs (#12942)

* add a bunch of packages to dataplane workspace

* add storage-internal-avro

* Sync eng/common directory with azure-sdk-tools for PR 1287 (#12946)

* Move common code to create API review into eng common in tools

* [Docs] Adding docs npm script (#12941)

This newly added command `docs` can help increase the quality of our documentation comments. It enables us to have a tight feedback loop on what is being generated as a documentation of our packages. I am pinning `typedoc` to v0.15.0 for now because this is the version being used for generating docs at `docs.microsoft.com`. This version should be updated when that team updates theirs.

Fixes https://github.com/Azure/azure-sdk-for-js/issues/12928

* [Identity] Manual test fix (#12858)

* manual fix

* ci doesnt like http on localhost links. making it a code segment

* DeletedKey -> KeyVaultKey

* feedback by Jonathan

* Remove @param and @returns tags if they have no description (#12943)

Address feedback for https://github.com/Azure/azure-sdk-for-js/pull/12912.

* [Identity] More standard samples (#12800)

* [Identity] More standard samples

* missing files

* Feedback and other cleanups

* removing device code and interactive samples because of user input

* more info on how to set up the keyvault

* apiref fix

* using-azure-identity.md fix

* no more tsconfig.samples.json

* fixed version

* wip

* now it should work

* formatting

* Skipped the build samples step for now. Will log an issue.

Skipped the build samples step for now. Will log an issue.

* pnpm-lock after merge master

* removing kv keys from package.json

* [core-amqp][event-hubs] support specifying a custom endpoint (#12909)

Fixes #12901 

### Purpose of change

Some users run their applications in a private network that lacks access to public endpoints, such as <namespace>.servicebus.windows.net. Due to these restrictions, these applications are unable to create direct connections to the Azure Event Hubs service using the standard endpoint address and require the ability to specify a custom host name to ensure they route through the proper intermediary for the connection to be made.

### Testing

I've added tests that verify the connection has been configured with the custom endpoint correctly.

I also manually ran these changes by setting up an Application Gateway in Azure with:
- backend pool with a backend target that points to the FQDN of my Event Hubs Namespace (`<namespace>.servicebus.windows.net`)
- A front-end listener using a HTTPS and a nonstandard port (200).
- HTTP settings with the following:
   - backend port: 443
   - Override with new host name: set to 'Yes'
- A rule that maps the front-end listener to the back-end target and uses the HTTP settings above.

I then had a test script that sent events, received events, and called operations on the $management link setting the `customEndpointAddress` to the public IP address from my application gateway.

Here's a simplified version of that script:
```js
const { EventHubProducerClient } = require('@azure/event-hubs');
const ws = require('ws');
const connectionString = `Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=sakn;SharedAccessKey=key`;
const eventHubName = `my-hub`;

async function run() {
  const client = new EventHubProducerClient(connectionString, eventHubName, {
    customEndpointAddress: `https://my.ip.address:200`,
    webSocketOptions: {
      webSocket: ws
    }
  });

  const props = await client.getEventHubProperties();
  console.log(props);
  return client.close();
}
run().catch(console.error);
```

Note that in my application gateway I also used self-signed certs so needed to do some additional work so node.js would recognize my certificate, but that's separate from the customEndpointAddress work and doesn't require changes from our SDK.

* Create API review from scheduled CI (#12965)

* Create API review from scheduled CI

* [keyvault] MQ docs: minor fixes to the readmes for samples (#12977)

* deploy buttons were referring to 'tests-resources.json' rather than 'test-resources.json' (ie, no 's')
* prettier auto-lowercased the purgeAllKeys ref in the md. Cosmetic and harmless.

* [Key Vault Keys] Migration guide (#12978)

* [Key Vault Secrets] Migration guide (#12980)

* [Key Vault Certificates] Migration guide (#12981)

* release-for-mgmt-network (#12889)

* Sdk auto/@azure arm recoveryservices (#12782)

* CodeGen from PR 11739 in Azure/azure-rest-api-specs
Changes to avoid Breaking changes for dotnet (#11739)

* update

Co-authored-by: SDKAuto <sdkautomation@microsoft.com>

* [@azure/service-bus] Fix invalid check for Node (#12985)

* [Service Bus] Update changelog for 7.0.1 (#12989)

This PR adds changelog entry for https://github.com/Azure/azure-sdk-for-js/issues/12983

* React sample integrating with Azure SDKs (#12924)

## What
Adds a React sample that integrates with multiple Azure services

## Why
As part of our investigation into how well our SDKs integrate into existing
frameworks we created a few sample apps that showcase how to use the
various SDKs. Adding a sample here would ensure that we have something
to reference and provide to folks who are interested in using our SDKs in React.

* Use npm link instead of github for readme (#12982)

This PR updates the migration guides for Key Vault to use npm link instead of github as the latter can point to previews if we were working on previews in the master branch

* arm-synapse-release (#12913)

* arm-synapse-release

* update

* [Code Coverage] Node Tests - Generate reports for live tests (#12968)

Resolves part of the issue - https://github.com/Azure/azure-sdk-for-js/issues/12935

To the reviewers, go to the links corresponding to your library to see the code coverage report. (Example here https://github.com/Azure/azure-sdk-for-js/pull/12968#issuecomment-749320030)
### Enables code coverage for the following packages
- [x] `/sdk/storage/storage-file-datalake` - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=660382&view=codecoverage-tab
- [x] `/sdk/storage/storage-blob` - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=655864&view=codecoverage-tab (Moved from https://github.com/Azure/azure-sdk-for-js/pull/12903)
- [x] `/sdk/search/search-documents` - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=663568&view=results
- [x] `/sdk/textanalytics/` - Live tests are failing hence cannot see CC - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=658221&view=logs&j=bbd7d436-e05d-51a8-99a8-0701dc9dffeb&t=c89e2f7b-6fd1-516c-25fe-9220efbf116c
- [x]  `/sdk/keyvault/keyvault-keys` https://dev.azure.com/azure-sdk/internal/_build/results?buildId=663575&view=results
- [x] `/sdk/keyvault/keyvault-secrets` https://dev.azure.com/azure-sdk/internal/_build/results?buildId=663579&view=results
- [x] `/sdk/keyvault/keyvault-certificates` https://dev.azure.com/azure-sdk/internal/_build/results?buildId=663580&view=results
- [x] `/sdk/template/`
- [x] `/sdk/appconfiguration/` - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=660325&view=codecoverage-tab
- [x] `communication-sms` - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=660326&view=codecoverage-tab
- [x] `communication-chat` - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=660327&view=codecoverage-tab
- [x] `communication-administration` - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=660328&view=codecoverage-tab

#### Others
- [ ] `/sdk/tables/`  - The live test pipeline was failing due to compile issues - moved it to a different PR - https://github.com/Azure/azure-sdk-for-js/pull/12976

* [Communication] - Common - Renaming CommunicationUserCredential to CommunicationTokenCredential (#12992)

* [Text Analytics] Tests read env vars only after they were read by the recorder (#12991)

Use the API Key from the corresponding environment var only after it was read by the recorder. 
Also does the following:
- split `createRecordedClient` into `createRecorder` and `createClient` with better arguments.
- get rid of `testEnv`.
- use `createClient` everywhere.

Fixes #12990 and part of https://github.com/Azure/azure-sdk-for-js/issues/12819.

* [Service Bus] Update migration guide to follow template (#12995)

This PR updates the Service Bus migration guide to follow the [template](https://github.com/Azure/azure-sdk/pull/2132) to fix #12366

* [Communication] - Common - Rename RefreshOptions to CommunicationTokenRefreshOptions  (#13004)

* Rename RefreshOptions

* Update changelog

* Update changelog

Co-authored-by: Minnie Liu <peiliu@microsoft.com>

* Communication common add Microsoft Teams user identifier, rename types with Identifier suffix (#13002)

* [event-hubs] Update migration guide to follow template (#13003)

This PR updates the Event Hubs migration guide to follow the [template](https://github.com/Azure/azure-sdk/pull/2132/files) to fix #12364.

* [Text Analytics] Fix timeout issues (#13006)

* [Text Analytics] Fix timeout issues

* add recordings

* [Service Bus] Delete v1 stress tests and move track 2 tests to "stress" folder (#13008)

This PR does nothing more than cleaning up.

* arm-hdinsight-release (#12971)

* Arm netapp release (#12984)

* arm-netapp-release

* arm-netapp-release

* [Synapse] Remove old synapse (#13011)

* [Service Bus] Perf tests for track 2 and update track 1 ones to latest (#11854)

* perf tests for track 2 and update track 1 ones to latest

* Add memory consumption too

* batch API call update

* Update _messages increment in send track 1 test

* \t in logged output

* Add peeklock variation for streaming test

* Batch api perf test

* fix converting to bool

* Move to GA v7

* "private": true,

* Import v7

* fix instructions

* omit paths

* Update sdk/servicebus/service-bus/test/perf/service-bus-v1/receiveBatch.ts

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/servicebus/service-bus/test/perf/service-bus-v7/receive.ts

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* Update sdk/servicebus/service-bus/test/perf/service-bus-v7/receiveBatch.ts

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

Co-authored-by: chradek <51000525+chradek@users.noreply.github.com>

* [Perf Tests] Helper methods from perf framework (#13012)

* [Docs] Add --excludeExternals to rushx docs command (#12987)

* Add a note for EPH on EH service level readme (#13017)

* Add note on admin client features in SB service level readme (#13018)

* Electron sample (#12960)

## What

Add an Electron sample that integrates with multiple Azure services.

## Why

As part of our investigation into how well our SDKs integrate into existing
frameworks we created a few sample apps that showcase how to use the
various SDKs. Adding a sample here would ensure that we have something
to reference and provide to folks who are interested in using our SDKs in Electron.

* [event-hubs] add checkkpointing to migration guide sample  (#13016)

* Callout that the admin package is still in preview (#13020)

The service level readme is meant to show all packages available for customers to use for the said service. In case of Key Vault, we do mention the admin package, but do not call out that it is in preview. This PR adds that information

* [Perf Tests] streamToBuffer helper method (#13014)

This address the comment at https://github.com/Azure/azure-sdk-for-js/pull/12746#discussion_r546922303 by adding a new `readStream` method in perf framework so that it could be reused.

Inspired from https://github.com/Azure/azure-sdk-for-js/blob/ee733377bd63a7aba6e80b5cc0d895a251cb70e1/sdk/storage/storage-blob/src/utils/utils.node.ts#L118

This new method is meant to be used by the perf tests in various storage packages(in various different versions).
https://github.com/Azure/azure-sdk-for-js/pull/12737 https://github.com/Azure/azure-sdk-for-js/pull/12746 https://github.com/Azure/azure-sdk-for-js/pull/12806

* [tables] Code coverage tables (#12976)



Co-authored-by: HarshaNalluru <sanallur@microsoft.com>

* [Text Analytics] Diagnose issue with test resources (#13015)

* [Text Analytics] Diagnose issue with test resources

* cancel only if not completed yet

* arm-appplatform-release (#13025)

* add the environment variable (#12934)

* Add richardpark-msft as CODEOWNER for monitor and core-tracing (#13045)

Just discussed this with @xirzec. Should give me better visibility since I'll be working on those projects.

* [core-http(s)] Reuse DefaultHttpClient between ServiceClient instances (#12966)

Today `ServiceClient` ends up creating a new instance of `DefaultHttpClient` each time one is not passed in via `ServiceClientOptions`. Since it is unusual for a custom `HttpClient` to be passed in, this means we recreate the `HttpClient` each time a someone creates a convenience client that wraps a generated client.

Normally, this isn't a big deal since clients are heavy enough to be cached/re-used by customer code. However, a KeyVault customer ran into a scenario where they had to recreate `CryptographyClient` many times (i.e. 5 times per second) and they noticed some pretty bad performance characteristics (out of memory, out of sockets, etc.)

While there is still more we can do with improving `CryptographyClient` to perhaps not need to be constructed so often (or making it cache its own copy of `KeyVaultClient` internally), this PR aims to solve the biggest issue with the above scenario: many copies of `DefaultHttpClient` each allocating NodeJS `http.Agent`s that then are tied to real OS resources that take a while to cleanup.

Another related note is that the Storage packages already do this today in their custom Pipeline implementation: https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob/src/utils/cache.ts

* [schema registry] Facilitate live test runs on arbitrary endpoint (#13030)

* Parameterize schema registry group. It was hard-coded, which
  bumped up against 1 group per resource limit in cheapest tier.

* Remove test of mutating schema and re-registering. When the
  schema changes back and forth between runs like that, it causes
  the service to keep bumping the schema version, which
  eventually hits a limit on total number of schemas.

* [app-configuration] Fixing some lint issues (#13055)

- Make eslint run clean for typedoc (still has other warnings/errors that I need to get back to)
- Ran eslint auto fix, which got some simple stuff like being able to use 'const' and the copyright header.

Fixes #12948

* replace @ignore with @hidden (#12963)

`@ignore` is a JSDoc construct which we no longer use. This PR replaces it with `TypeDoc`'s `@hidden` which [exhibits the same behavior](https://github.com/TypeStrong/typedoc/releases/tag/v0.12.0).

* Update migrationguide.md (#13067)

* Add support to test multi service API version (#12719)

* [WIP] prototype: multi-service-version test support

* Move multi-version test support into its own package

* Add some unit tests

* Add README

* Fix missing rollup error in CI build

* Disable browser test bundling

* Remove option to specify custom string comparison function

Now it's required to have a sorted list of API version strings
supported by the service from the oldest to the latest. This list is
used to compare version strings.

Add missing `skip` method to the chain object.

Add unit tests

* Fix README analysis issue

* Move `forEach()` and `onVersions()` into `versionsToTest()`

* Ignore README verification errors

* Fix rollup error

* Undo file moves

* Minor README tweak

* Fix build error in pipeline

* Add `DISABLE_MULTI_VERSION_TESTING` env var

* [Cosmos] Return undefined for sum aggregate when one partition is undefined (#12988)

* Remove commit revert for sum aggregation PR

* sum progress

* format

* WIP

* Adds correct logic for bailing early on undefined group by result

* Formats

* Adds changelog

* Adds tests for other falsy values

* formatting

* Update to Update-Changelog.ps1 (#13058)

Co-authored-by: Chidozie Ononiwu <chononiw@microsoft.com>

* [EventGrid] Make `build:samples" work (#13057)

- Update to the GA version of the Service Bus package in our samples and
  deal with the breaking changes.

- Add `@azure/service-bus` as a dev-dependency since the samples use it
  and we need it around when we do `build:samples`

- Have `build:samples` use `dev-tool` to prepare the samples and then build
  them, like our other packages do

Fixes #12730

* Add Devops Release Item scripts (#13074)

- Add set of helpers to work with devops work items
- Add script to create devops release package items
- Update SemVer to support version type

Co-authored-by: Wes Haggard <Wes.Haggard@microsoft.com>

* [core-auth] remove core-tracing dependency (#13075)

* [core-auth] remove core-tracing dependency

* add license header

* add note to keep tracing.ts in sync with core-tracing

* Fix datalake Readme issue (#12269)

Fix https://github.com/Azure/azure-sdk-for-js/issues/11495
@jongio for notification.

* arm-healthbot-release (#13059)

* arm-iotcentral-release (#13084)

* Storage/sas improvement (#12850)

* fix a bug: need expiresOn AND permission

* parse sas permission from raw object

* use vanilla instead of map for SASPermissionsLike

* separate the SASPermissionsLike

* me

* Sync eng/common directory with azure-sdk-tools for PR 1298 (#13071)

* Update latest folder with one index.html includes the redirect links

* Update the redirect links and remove everything in latest

* Address feedback

* Have the entire copy for latest GA.

Co-authored-by: Sima Zhu <sizhu@microsoft.com>

* [storage-blob] allow empty blobName to support root directory in datalake (#12890)

* allow empty blobName to support root directory in datalake

* remove .only

* Merge branch 'master' into storage/issues/12813

* changelog

* [Tables] Fix typos in samples (#13072)

Fixes #12914 . Also adding a sample on how to query topN items

* [Identity] upgrade axios version (#13090)

* Update update changelog (#13079)

* Add function for retrieving existing versions of packages

* Add SetPackageVersion function

* Update versioning tools

* Add GetPackageInstallNotes function

* Remove changes not related to changelog

* [service-bus] Making BatchingReceiver properly cleanup after an abort (#13073)

Some more fixes related to the prior PRs around making BatchingReceiver behavior consistent:

* When we are closed (without error) or if we're in receiveAndDelete mode, we use the setTimeout() to ensure that we run after all pending message callbacks have fired
* We always cleanup our handlers before resolve/reject and it's made more clear if we're resolving immediately or resolving after pending message callbacks.

I also did some small refactors to make unit testing a bit easier, which should help make it simpler to validate this code path in the future.

Fixes #12922

* [Service Bus] Bug Fix: Correlation Rule Filter with the "label" doesn't filter the messages (#13069)

### BUG
https://github.com/Azure/azure-sdk-for-js/issues/13063
The correlation filter with the "subject" set using the `createRule` method doesn't filter the messages to the subscription. However, when we create the rule with the same correlation filter from the service bus explorer, messages get filtered as expected.

### CAUSE
The key-value pairs having undefined/null as the values can be present in the request object at any level. They get serialized as empty tags in the XML request for the ATOM management operations.

If such empty tags are present at the top level, the service throws a "Bad Request" error. This was handled by the SDK to make sure that the top-level of the request body doesn't have any empty tags.

If such empty tags are present at the inner levels, the service assigns the empty strings as values to the empty tags instead of throwing an error. This wasn't handled and the empty tags at the rule filter created an unexpected filter for the subscription. This PR attempts to fix this issue.

### FIX
The key-value pairs having undefined/null as the values are removed recursively from the object that is being serialized.

* arm-containerservice-release (#13026)

* arm-containerservice-release

* arm-containersevice-release

* Sync eng/common directory with azure-sdk-tools for PR 1302 (#13098)

* Make the user agent configurable.

* Address comments

* Default to current Chrome version

* Revert the yaml file changes

Co-authored-by: Sima Zhu <sizhu@microsoft.com>

* [storage-file-datalake] Allow DataLakePathClient.move() use sas to authenticate source and destination (#12769)

* Make storage-file-datalake move() support adding both sasToken to x-ms-rename-source and the destination url

* Format

* Add comment to util function

* Fallback to use the source's SAS if both
- source authenticates with SAS
- destination isn't provided a SAS

* Reformat

* Do not encode sas query parameter values in x-ms-rename-source header

* Don't set query parameter for snapshot or versionid at the destination when fallback to use sas token in the source url

* Reformat

* Resolve merge conflict

* Remove fallback logic
Throw an error when the source path contains multiple ? query strings, causing ambiguity

* Encode source path segment by segment to avoid encoding "/" characters

* Add unit tests for move operation with SAS credentials

* Add move related test recordings

* Add a bullet for the fix in the change log

* Take PR suggestion

Co-authored-by: Lin Jian <lijian2@microsoft.com>

* [core-amqp] Revert 2.1.0 changes and prep for 2.0.1 release (#13096)

* Revert "[core-amqp][event-hubs] support specifying a custom endpoint (#12909)"

This reverts commit 336c6cfba1fa354b464a6aa2a2f6646b5ff54f1d.

* [core-amqp] prep for 2.0.1 release

* Fix package.json linting errors (#13070)

## What

Audit Track 2 libraries and address any linting errors in package.json
Fixed anything that seemed safe, and anything that seemed like a 
potential breaking change was confirmed with the owners or reduced
to a warning.

## Why

Another step towards being able to run the linter in our builds.

fixes #10597

* Remove old Update-Change-Log.ps1 (#13097)

Co-authored-by: Chidozie Ononiwu <chononiw@microsoft.com>

* OT Exporter retry when there are network issues (#12905)

* OT Exporter retry when there are network issues

* Adding network error check

* Fixing issue with tests

* reverting unnecessary change

* [eph] Skip EPH samples from smoke-test runs (#13106)

* [core] prepare for January 2021 release (#13104)

Same as #12797 except in January!

Note that all of the core packages are just being released so that the published version pulls in tslib 2.x instead of tslib 1.x. This fixes #12631, where it was discovered that using a bundler (e.g. webpack) to generate an application bundle that includes our SDK also pulled in multiple copies of tslib 1.x.

Multiple copies of tslib were pulled in because of the way node_modules were resolved: tslib 2.x would be at the root of a package's node_modules directory, and every package that used tslib 1.x had its own copy of tslib that would then be included in the bundle.

* Key vault keys migration (#13092)

* Readme Configuration Changes

* Generated Code Changes

* Custom Code Changes

* Other Changes

* Fix Linter errors

* Minor Version Fix

* Additional Changes

* Formatting Changes

* Updated Version Number in Recordings

* More Recording Modifications

* Updating Test Cases

* Increment version for core releases (#13110)

* Increment package version after release of azure-logger

* Increment package version after release of azure-core-auth

* Increment package version after release of azure-abort-controller

* Increment package version after release of azure-core-lro

* Increment package version after release of azure-core-amqp

* Increment package version after release of azure-core-http

* Changelog entry for Identity 1.2.1 (#13112)

Please review.

* [Search] Fix the Linter Errors (#13060)

* Fix the Linter Errors

* Further Modifications

* Formatting Changes

* Response to PR Comments

* Treat any new lint regression as hard failure in CI (#13114)

* Remove outdated list of packages that support identity package (#13113)

It is not practical to maintain a complete list of packages that take TokenCredential, so the corresponding section is removed from the readme for `@azure/identity`.

* arm-machinelearningcompute (#13116)

* arm-machinelearningcompute

* version update

* version update

* version update

* Storage/stg75 GA API fix (#12885)

* make tagValue required

* hide allowPermanentDelete

* api

* fix ShareProperties

* fix CI

* duplicate ServiceFindBlobsByTagsSegmentResponse instead of using Omit

* duplicate generated interfaces instead of using Omit for ServiceListSharesSegmentResponse and ShareGetPropertiesResponse; use extends instead of duplicating, since enabledProtocols is renamed to protocols

* allow configuration of a suffix for each config item. (#13127)

Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com>

* Update CODEOWNERS with my new GitHub account (#13066)

I'm switching to using this account for all work-related GitHub activity.

* Adding the autogenerated attestation package (#13120)

This PR adds the autogenerated attestation package along with a test suite.

* [Service Bus] Remove message count check - flaky rule filter test (#13131)

https://github.com/Azure/azure-sdk-for-js/pull/13105 investigates the flakiness.

The reason the message count in the test can sometimes (5/1000 times) be zero is that the sent messages might not have been filtered into the subscription by the time we call the runtime properties method.

If we call receiveMessages directly, its inherent delay helps in receiving the messages and should be enough for the test.

* Revert "Remove old Update-Change-Log.ps1 (#1301)" (#13130)

This reverts commit b967cb62bd8db451a2f02d903f3658cf21f92db1.

Co-authored-by: Chidozie Ononiwu <31145988+chidozieononiwu@users.noreply.github.com>

* [Keys] Revert change that introduced enum for API version that had only the latest API version value (#13115)

* Remove version value back to previous method

* Modify Latest API Version

* Getting the smoke testing pipeline green again: (#13111)

* Ignore a sample in form-recognizer that's clearly intended to be run only manually and with caution
* Create the first ignore for the communication-administration package, just to be helpful (hopefully)

* Fix condition for test generation in prepare-pipelines (#13140)

Co-authored-by: Wes Haggard <Wes.Haggard@microsoft.com>

* [Service Bus] 7.0.1 release prep  (#13144)

* Increase timeout (#13108)

* [eventhub] Remove old uglify dependency (#13148)

Uglify was used to ship minified browser builds, but we don't actually ship browser bundles.

* [core-auth] Add AzureSASCredential (#13091)

This change adds an interface and supporting class for use by services
that support authentication using a shared access signature.

Fixes #13053
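A usage sketch of the new credential; consult the published `@azure/core-auth` API reference for the exact surface:

```ts
import { AzureSASCredential } from "@azure/core-auth";

// Create a credential from a shared access signature string.
const credential = new AzureSASCredential("<shared-access-signature>");

// Client libraries read the current signature when signing requests.
console.log(credential.signature);

// The signature can be rotated in place without recreating clients.
credential.update("<renewed-shared-access-signature>");
```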

* Increment package version after release of azure-service-bus (#13154)

* Enable 1ES pools in JS repository (#13064)

* Initial pass of enabling 1ES pools in JS repo.

* Fix links to remove en-us (#13146)

* Sync eng/common directory with azure-sdk-tools for PR 1304 (#13145)

* Add quotes around the parameter

* Add variable group for test options.

Co-authored-by: Wes Haggard <Wes.Haggard@microsoft.com>
Co-authored-by: Wes Haggard <weshaggard@users.noreply.github.com>

* Update rush and npm (#13153)

* [storage] doc fix (#13087)

* use uploadData instead of uploadBrowserData in samples: Fixes #12939

* fix client constructor doc: fixes https://github.com/MicrosoftDocs/azure-docs-sdk-node/issues/1003

* Fix doc for #7097

* Mention InteractiveBrowserCredential in aad sample

* Update sdk/storage/storage-blob/samples/javascript/advanced.js

Co-authored-by: Jeremy Meng <yumeng@microsoft.com>

Co-authored-by: Jeremy Meng <yumeng@microsoft.com>

* [KeyVault][Keys] Fix Live Test Failures (#13152)

* Remove references to 7.2-preview

* Updated Recordings

* Reordering & Updated Recordings in Challenge File

* [Cosmos] Update README to use non deprecated method (#13151)

This commit makes a minor update to the Cosmos README to use
keys list instead of list-keys since the latter is deprecated and
shows a warning on the CLI when run.

* [storage] stg75/ga/release prepare (#13160)

* bump up version

* fix re-use generator bug in BlobQuickQueryStream

* [Communication] - ALL - Adding PowerShell Docs to README (#13157)

* Adding PowerShell Docs to README

* Update wording

Co-authored-by: Minnie Liu <peiliu@microsoft.com>

* [monitor] Fix monitor CI script (#13171)

Forgot to update this path when we renamed the package

* [Cosmos] Defaults partitionKeyValue to '[{}]' when missing in Read/Delete bulk operations (#13143)

* Default to the '[{}]' partition for Read/Delete when partitionKey is not present in args

* format

* Update changelog

* Update sdk/cosmosdb/cosmos/CHANGELOG.md

Co-authored-by: Steve Faulkner <southpolesteve@gmail.com>

Co-authored-by: Steve Faulkner <southpolesteve@gmail.com>
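A rough illustration of the defaulting described in this entry; the operation shape below is hypothetical, not the SDK's exact types:

```ts
// Hypothetical input shape for a bulk Read/Delete operation.
interface BulkOperationInput {
  id: string;
  partitionKey?: string;
}

// Fall back to the "[{}]" sentinel (an empty partition key) when the caller
// did not provide one, instead of failing the operation.
function resolvePartitionKey(operation: BulkOperationInput): string {
  return operation.partitionKey ?? "[{}]";
}
```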

* [monitor] Add rollup build (#13182)

#13070 broke the monitor package for non-module imports, since the "main" file it references doesn't exist.

This PR brings the package into our normal build process, exposing real module exports and also bundling a cjs version.

* [Identity] Bump msal-node dep (#13179)

* Bump msal-node dependency and update package version

* Bump msal-node dependency and update package version

* Address feedback

* Address feedback

* Address feedback

* Address feedback

* Update pipeline generation tool to support test variant pipelines (#13177)

Co-authored-by: Wes Haggard <Wes.Haggard@microsoft.com>

* [core-client] Switch _response property for callback. (#13132)

* Switch _response property for callback.

* Adding hidden tag to generated clients (#13183)

* Adding hidden tag to generated clients

* Add Missing Comments

* [event-hubs] Update browser sample with AAD troubleshooting tip (#13180)

* [event-hubs] Update browser sample with AAD troubleshooting tip

* call out setting redirect uri to localhost

* [Tables] Support Emulator and Azurite (#13165)

* Support Emulator and Azurite

* Support UseDevelopmentStorage connection string

* make usedevelopmentstorage case-insensitive

* Use toLowerCase
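A simplified sketch of the case-insensitive development-storage shortcut described in this entry; the expanded connection string targets Azurite's well-known local table endpoint, and the helper itself is illustrative rather than the library's exact code:

```ts
// Illustrative only: expand the development-storage shortcut (any casing) into
// an explicit connection string pointing at Azurite's local table endpoint.
function resolveTablesConnectionString(connectionString: string): string {
  if (connectionString.trim().toLowerCase().startsWith("usedevelopmentstorage=true")) {
    return (
      "DefaultEndpointsProtocol=http;" +
      "AccountName=devstoreaccount1;" +
      "AccountKey=<well-known-emulator-account-key>;" +
      "TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;"
    );
  }
  return connectionString;
}
```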

* Key vault secrets migration  (#12821)

* Updated Readme File

* Regenerated Code Changes

* Custom Layer Changes I

* Formatting Changes

* Custom & Regen Changes II

* Add Hidden to generated Layer

* Change similar to one in keyvault-keys

* DigitalTwins: Add E2E tests (#13135)

* feat(azure-digitaltwins): update swagger

* feat(azure-digitaltwins): update client to swagger

* chore(azure-digitaltwins-core): rename package

* chore(azure-digitaltwins-core): rename package

* feat(azure-digital-twins-core): add span

* feat(azure-digital-twins-core): update for GA

* chore(azure-digital-twins-core): fix readme

* Update Readme.md

* chore(azure-digital-twins-core): fix exports

* chore(azure-digital-twins-core): fix broken links

* chore(azure-digital-twins-core): update changelog

* chore(azure-digital-twins-core): comment changelog

* chore(azure-digital-twins-core): more comment

* Update CHANGELOG.md

* fix(azure-digital-twins-core): address feedbacks

* fix(azure-digital-twins-core): add ifmatch

* fix(azure-digital-twins-core): add ifMatch

* fix(digitaltwins): hide ifmatch to options

* chore(digitaltwin): autofix

* fix(digitaltwins): fix export warnings

* fix(digitaltwins): fix diagnostic namespace

* fix(digitaltwins): improve tracing

* chore(digitaltwins): autofix

* fix(digitaltwins): fix tracing

* chore(digitaltwins): autofix

* fix(digitaltwins): delete parameter grouping

* chore(digitaltwins): autofix

* chore(digitaltwins): clean up

* Add E2E tests

* Add E2E tests

* Update recordings

* Remove comments

* --harmony flag for async iterators

* Update CHANGELOG.md

* Update version in package.json

Co-authored-by: Jeff Fisher <jeffish@microsoft.com>
Co-authored-by: HarshaNalluru <sanallur@microsoft.com>

* [communication] update release phone numbers sample (#13147)

* Removed step to build samples temporarily (#13197)

* [Service Bus] Bug Fix: Sessions - Receive messages beyond 2048 in "receiveAndDelete" mode (#13178)

### Problem
Send 10K messages... 10K messages are received just fine for non-sessionful queues. For sessionful queues, receiving stops after 2048 messages with the `subscribe` method and leaves the receiver hanging. (Same with the `receiveMessages` API if a large number of messages is requested or it is called in a loop.)

### Reason
The difference between messages from sessionful and non-sessionful queues is the "settled" flag - it is "true" for non-sessions but "false" for sessions, which makes the circular buffer fill up. ("rhea" checks the "settled" flag to pop deliveries from the circular buffer.)

The "settled" flag is updated in rhea mainly by a couple of methods, auto_settle and auto_accept. The "auto_accept" method is invoked when the "autoaccept" option is set to true in the receiver options while creating the receiver in receiveAndDelete mode; at the Service Bus SDK level this option was being set for non-sessions, but not for sessions.

### Fix
Set the "autoaccept" option to true in the receiver options for the receiveAndDelete mode for sessions so that the buffer in "rhea" gets cleared.

### Related issues
https://github.com/Azure/azure-sdk-for-js/issues/13109 https://github.com/Azure/azure-sdk-for-js/issues/11633 https://github.com/Azure/azure-sdk-for-js/issues/8875

### TODO
- [x] There is common code in `messageReceiver.ts` and `messageSession.ts` that can be refactored.
- [x] Test that this PR fixes the receiveBatch scenarios too
- [x] Changelog

* [ESLint Plugin] Fix failing tests (#13189)

* Updating OpenTelemetry version for Azure Exporter (#13193)

* Updating Open Telemetry to latest version

* Updating to latest OR

* Addressing comments

* Addressing comments

* skip purchase phone number sample in smoke testing (#13198)

* Increment package version after release of azure-data-tables (#13195)

* [Service Bus] Release prep - 7.0.2 (#13203)

* [Communication] - Identity - Auth with Token Credential (#12906)

* [communicationIdentityClient] adding new constructor using token credential

* [communicationIdentityClient] new implementation to take pass in token credential to constructor

* Revert changes to this file

* [communicationIdentityClient] adding linted files

* [communicationIdentityClient] remove unused import

* [communicationIdentityClient] fixing the comments as per PR feedback

* [communicationIdentityClient] import for bearerTokenAuthenticationPolicy

* [Token] adding in live tests for constructor with token credential

* [karma] added the environment vars to karma for browser

* [recordedClient] adding in a mock token credential for playback mode

* [Recordings] removing the port from endpoint url

* [recordedClient] using ClientSecretCredential instead of DefaultAzureCredential

* [communicationIdentityClientWithToken] for browser we stick to the old way of authentication for the time being

* [createRecordedCommunicationIdentityClient] formatting fixes

* [README] update the read me samples and the changelog

* [communicationIdentityClientWithToken] updating test to skip if the token is not created properly

* [communicationAuthPolicy] Created new auth policy so the client has a cleaner implementation

* [recordedClient] remove the sinon lib

* Update sdk/communication/communication-administration/src/communicationIdentity/communicationIdentityClient.ts

Co-authored-by: Dominik <domessin@microsoft.com>

* OpenTelemetry exporter use correct error code when there is a network issue (#13205)

* Adding correct error code when there is an issue in core-http

* Initial commit

* [dev-tool] Adds unit-test:node and fix unit test (#13204)

* [dev-tool] Adds unit-test:node and fix unit test

* Update common/tools/dev-tool/package.json

Co-authored-by: Harsha Nalluru <sanallur@microsoft.com>

Co-authored-by: Harsha Nalluru <sanallur@microsoft.com>

* [monitor] Set changelog release date (#13208)

Forgot to set this for the release. Oops!

* Increment package version after release of azure-service-bus (#13206)

* [Attestation] Adds missing npm scripts and adds tsdoc.json (#13162)

The attestation pipeline failed with various errors (listed below), mainly because some crucial `npm` scripts were missing. This PR adds the standard ones and fixes linting issues in all but `src`.

## Failures 
Link: https://dev.azure.com/azure-sdk/internal/_build/results?buildId=682555&view=logs&j=23e2e6de-b7c3-5918-7121-f16b46172e49&t=7bd26578-88b4-5baa-f548-7813777930ca

- `ERROR: The project [@azure/dev-tool] does not define a 'unit-test:node' command`
- `ERROR: The project [@azure/attestation] does not define a 'pack' command`
- failures in the eslint plugin; opened https://github.com/Azure/azure-sdk-for-js/pull/13189 to fix them and rebased
- failures in dev-tool because of an outdated unit test. After fixing this issue, I now get https://github.com/Azure/azure-sdk-for-js/issues/13202 instead:
```
  1 failing
npm ERR! code ELIFECYCLE

npm ERR! errno 1
  1) Project Resolution
npm ERR! @azure/dev-tool@1.0.0 unit-test: `mocha --require ts-node/register test/**/*.spec.ts`
       resolution finds dev-tool package:
npm ERR! Exit status 1
     AssertionError: expected 'D:\\vss-agent-2.179.0\\_work\\1\\s\\common\\tools\\dev-tool' to match /.*\common\tools\dev-tool/
npm ERR! 
      at D:\vss-agent-2.179.0\_work\1\s\common\tools\dev-tool\test\resolveProject.spec.ts:25:12
npm ERR! Failed at the @azure/dev-tool@1.0.0 unit-test script.
      at Generator.next (<anonymous>)
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
      at fulfilled (test\resolveProject.spec.ts:7:58)
```
EDIT: I moved `dev-tool` changes to https://github.com/Azure/azure-sdk-for-js/pull/13204.

* Add condition for making TestPipeline true. (#13175)

* [Cosmos] Throws better continuation token error when it's malformed (#13163)

* Throw a better continuation token error when malformed

* format

* Check for query info, otherwise error

* format

* Format and change error check

* adds another string message includes

* Format

* throw reused error

* format

* [Attestation] Adds sdk-type to package.json (#13213)

This is required for CI pipeline to behave correctly.

* [ESLint Plugin] ts-doc-internal checks for hidden instead of ignore (#13038)

* [ESLint Plugin] ts-doc-internal checks for hidden instead of ignore

* fix a bug

* update docs

* Increment package version after release of azure-digital-twins-core (#13201)

* [Attestation] Add linting and coverage (#13221)

* [Digital Twins] Fix types shipping (#13222)

* App configuration migration (#13095)

* Generated Code changes

* Custom Code Changes

* Formatting Changes

* Configuration Readme Changes

* Add Transformations

* Add Hidden to Client Classes & Update Version Tool

* [Digital Twins] Remove @hidden in favor of @internal (#13226)

* Update CHANGELOG for digital-twins-core (#13225)

Add an entry listing a recent bug fix where we did not
ship the renamed type definition file.

* rush update --full (#13188)

This should shake out some recently removed dependencies.

* Increment package version after release of azure-identity (#13190)

* arm-confluent-release (#13121)

* arm-confluent-release

* package update

* package update

* Removing npm clean step from build samples and run step before pack (#13199)

* [Digital Twins] Include the correct type declaration file (#13249)

* [Digital Twins] bump version (#13252)

to v1.0.3 in `package.json` and in `src`. This is the version of the release prepared here: https://github.com/Azure/azure-sdk-for-js/pull/13249.

* Key vault certificates migration (#12820)

* Update Swagger for v6 migration

* Regenerated Code Changes

* Custom Code Changes - Round I

* Format Changes

* Custom & Regen Changes II

* Some more changes

* Fix Recordings

* Additional Changes

* Additional Change

* Formatting changes

* Fix ESLInt Errors

* Adding Documentation

* Further Modifications

* Formatting Changes

* Correct the Version

* Modified Recordings

* Revert "Fix Recordings"

This reverts commit 98018d0bab183b46d22396ae74f15a980f809e76.

* Revert All Recordings

* Remove 7.2-preview

* Add Hidden and Regenerate SDK

* Update required property in spec mapper

* More refactorings

* Updated Recordings

* Tweak client behavior to not set a default content-type when there is no request body. (#13233)
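A sketch of the adjusted behavior (names are hypothetical): a default Content-Type header is applied only when the request actually carries a body.

```ts
// Hypothetical sketch of the behavior change: only apply a default
// Content-Type when the request has a body and no explicit header.
function applyDefaultContentType(request: { body?: unknown; headers: Map<string, string> }): void {
  if (request.body !== undefined && !request.headers.has("Content-Type")) {
    request.headers.set("Content-Type", "application/json; charset=utf-8");
  }
}
```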

* [Attestation] Set release date (#13256)

We are releasing today!

* [Docs] Upgrade typedoc to v0.15.2 to match that used by docs team (#13250)

* [Docs] Upgrade typedoc to v0.15.2 to match that used by docs team

* standardize the docs script for cosmos, kv-admin, comm-common, and storage-internal-avro

* Increment package version after release of azure-attestation (#13259)

* 3.9.5 release (#13262)

* [core] Add documentation for core-https, core-client, and an overview README (#13089)

This PR adds some key concepts to both `core-https` and `core-client`, as well as creating a new overview README for the core namespace. This is intended to be a start of more detailed documentation that we can link to when ramping up new library authors.

Suggestion comments are greatly appreciated! 😄

* Mark @azure/event-processor-host as deprecated (#13261)

* arm-network release (#13240)

* Fix release date (#13269)

* [communication] switch to bearerTokenAuthenticationPolicy (#13220)

* [KeyVault] - Enable node unit tests for KeyVault Administrator (#13230)

## What

- Enable Node unit tests for KV administration package
- Align our KV test-resources ARM template with dotnet's
- Add Managed HSM to ARM template
- Add script to enable Managed HSM from dotnet's script

## Why

We have lots of changes we want to make, but the tests weren't runnable.
Making the entire suite runnable seemed like something we'd want to do early,
before making behavior changes. Unfortunately there are some service integration
issues we want to address before we can enable _all_ the tests, so we decided to pend
the ones that need some extra TLC to get the suite running.

Aligning with dotnet on the ARM template allows us to easily add the Managed HSM 
bits from them and helps standardize what our template looks like across repos.

Finally, Managed HSM support is coming, so we might as well get the deployment correct
now.

* [communication-common] Add serializer for communication identifiers (#13248)

* Regenerate and Release Attestation SDK (#13279)

* Regenerate and Release Attestation SDK

* Update Version

* [Communication] - Authentication - SmsClient auth using token credential (#13260)

* [smsClient] introduce new constructor
* [smsClientWithToken] flesh out the unit test for sms client
* [smsClientWithToken.spec.ts] adding new replaceable variables for token auth
* [smsClientWithToken.spec] Adding new sms recordings and skipping tests if credentials cannot be created. Mainly for browser mode
* [recording] adding browser recordings
* [recordedClient] introducing new utils for sms test

* Add LanguageDisplayName variable to Language-Settings file (#13282)

* [event-hubs] re-add support for custom endpoint address (#13287)

We had reverted the custom endpoint address changes in #13096 so that we could do a core-amqp release.

This PR reverts that revert and removes the 'beta' moniker from the event hubs version number.

Original PR: #12909

* [Service Bus] Bug Fix: getSubscriptionRuntimeProperties returns activeMessageCount as zero  (#13229)

### User issue
A user reported that the active message count returned in the subscription runtime properties is zero when there are three messages. #13219
The reason for the failure is that the manually written parser in the service-bus JS SDK looks for the "d2p1" xmlns prefix while the response had "d3p1" as the xmlns prefix.
### More Background
`MessageCountDetails` from the list-runtime method has the "d2p1" prefix, whereas the runtime-properties method for a single subscription has the "d3p1" prefix.
The swagger mentions only the "d2p1" prefix: https://github.com/Azure/azure-rest-api-specs/blob/1f4095f20a2b89c056c40e85b17af7a534bffc4d/specification/servicebus/data-plane/servicebus-swagger.json#L367
### Fix
Since the JS SDK has a manual parser and does not depend on codegen, this PR reads the xmlns prefix from the response to be able to obtain the details.
### Followup
The .NET, Java, and Python SDKs do not fail in the same way because they don't depend on the prefix mentioned in the swagger to parse the details.
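A rough sketch of reading the namespace prefix from the response instead of hard-coding "d2p1" (names and shapes are illustrative, not the SDK's parser):

```ts
// Illustrative: find whichever prefix the service used for the
// MessageCountDetails element instead of assuming "d2p1".
function getCountDetailsPrefix(countDetails: Record<string, unknown>): string {
  // Keys look like "d2p1:ActiveMessageCount" or "d3p1:ActiveMessageCount".
  const key = Object.keys(countDetails).find((k) => k.endsWith(":ActiveMessageCount"));
  return key ? key.split(":")[0] : "d2p1";
}

function getActiveMessageCount(countDetails: Record<string, unknown>): number {
  const prefix = getCountDetailsPrefix(countDetails);
  return Number(countDetails[`${prefix}:ActiveMessageCount`] ?? 0);
}
```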

* [core-https][core-client] Core v2 cleanup work (#13266)

* Fix an issue where policies could make phases execute out of order.
* Remove keepAlivePolicy
* Let clients add ndJsonPolicy manually
* Disable redirects by removing the policy instead of an option
* Invert response decompression policy
* Remove request cloning, to optimize pipeline allocations and replace additionalInfo with WeakMaps.

* Update error message in tools repo (#13286)

Co-authored-by: Chidozie Ononiwu <chononiw@microsoft.com>

* Increment package version after release of azure-attestation (#13283)

* Enable 1ES pools for live tests. (#13216)

* Increment package version after release of azure-digital-twins-core (#13253)

* [app-configuration] Fix selecting ConfigurationSetting fields when they contain an underscore (#13258)

Fixing problem where some fields could not be properly `select`'d for ConfigurationSettings because they had an underscore.

This revealed some other underlying issues (like inconsistency in how the `select` field was populated in GetConfigurationSetting vs the List*Setting operations). Tests have been added, code refactored and hopefully things look better after this.

Fixes #13126
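The gist of the fix, roughly: translate SDK-level field names into the service's wire names (which use underscores) before sending them in the request. The mapping below is partial and illustrative, not the package's exact table:

```ts
// Partial, illustrative mapping from SDK field names to wire names.
const fieldWireNames: Record<string, string> = {
  lastModified: "last_modified",
  contentType: "content_type",
  isReadOnly: "locked"
};

// Translate user-supplied fields before they become the select/fields query value.
function formatFieldsForSelect(fields: string[] | undefined): string[] | undefined {
  return fields?.map((field) => fieldWireNames[field] ?? field);
}
```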

* [Monitor] Ship type declarations (#13299)

* [Monitor] Ship type declarations file

* prepare release

* [Monitor] Update code owners (#13300)

* add chradek to API codeowners list (#13303)

* Added credscan steps for JS (#13200)

* [core-http] Clean up event listener when streaming is done (#12038)

Currently our cleanup code removes the abort event listener when the response is
returned. In the streaming case, even after the response is returned, the work is
not done yet; with the abort listener removed, we lose the ability to cancel the
streaming. This change fixes the issue by unregistering the abort listener for
streaming when the stream ends.

Now that aborting streaming works, we get an AbortError from node-fetch, which
uses a different message ("The user aborted a request.") than the browser fetch
API does, so stop verifying error.message.

Port fix to core-https
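In essence, the listener removal is deferred until the stream settles, along the lines of this simplified sketch (see the PR diff for the real wiring):

```ts
import { Readable } from "stream";

// Minimal shape of the abort signal this sketch cares about.
interface AbortSignalLike {
  addEventListener(type: "abort", listener: () => void): void;
  removeEventListener(type: "abort", listener: () => void): void;
}

// Simplified sketch: instead of removing the abort listener as soon as the
// response is returned, defer the removal until the stream ends or errors.
function removeAbortListenerWhenStreamIsDone(
  stream: Readable,
  abortSignal: AbortSignalLike,
  abortListener: () => void
): void {
  const cleanup = (): void => {
    abortSignal.removeEventListener("abort", abortListener);
  };
  stream.once("end", cleanup);
  stream.once("error", cleanup);
}
```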

* [event-hubs] fixes flaky service communication error tests (#13302)

We're seeing a recurrence of the EAI_AGAIN error in some of our tests that check behavior against non-existent Event Hubs namespaces.

Since what we _really_ want to ensure is that the node.js error is surfaced to the user, I loosened the check on `err.code` so that we just make sure `err.code` is defined, and that the message on the error isn't the one we'd expect to see if the test failed.

* [communication] add COMMUNICATION_ENDPOINT_STRING to test-resources.json (#13284)

* Sync eng/common directory with azure-sdk-tools for PR 1317 (#13305)

* retarget the tool clone to a release tag

Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com>

* Sync eng/common directory with azure-sdk-tools repository (#13304)

* [Communication] - Auth - SMS live test authentication (#13288)

* [smsClientWithToken.spec] remove COMMUNICATION_ENDPOINT and replace by parsing connection string
* [recording] adding new recordings node and browser
* [smsClientWithToken] using parseConnectionString instead of own function
* [connectionString] adding missing documentation

* [Service Bus] Minor updates to stress tests (#12561)

This PR adds 
- a new option `sendAllMessagesBeforeReceiveStarts` which would be useful if we want to receive a ton of messages in the queue.
- tracking more info related to the messages
- other minor fixes

* [Service Bus] Flaky Test - Report a meaningful error (#13311)

* Ensure bypass conditional template gets run for 1es ubuntu image (#13312)

Co-authored-by: Ben Broderick Phillips <bebroder@microsoft.com>

* arm-cosmosdb-release (#13241)

* [storage] add shim for UserDelegationKeyCredential in browser (#13275)

* add shim for UserDelegationKeyCredential in browser

* datalake & changelog

* [Perf Tests] Storage blob perf tests - track 1 and track 2 (#12737)

### Depends on https://github.com/Azure/azure-sdk-for-js/pull/12662

### Changes in the PR
- Track 2 tests are part of the test folder, which would be compiled along with the regular tests and would require changes if the API is updated
- Track 1 tests - a separate npm project that takes a dependency on the perf package and doesn't get compiled along with the regular tests

### To run track 2 perf tests
1. Build the storage-blob package `rush build -t storage-blob`.
2. Navigate to `storage-blob` folder `cd sdk\storage\storage-blob\`.
3. Create a storage account and populate the .env file at `storage\storage-blob` folder with `ACCOUNT_NAME` and `ACCOUNT_KEY` variables.
4. Run the tests as follows
   - download
     - `npm run perfstress-test:node -- StorageBlobDownloadTest --warmup 2 --duration 7 --iterations 2 --parallel 2`
   - upload
     - `npm run perfstress-test:node -- StorageBlobUploadTest --warmup 2 --duration 7 --iterations 2 --parallel 2`
   - upload file
     - `npm run perfstress-test:node -- StorageBlobUploadFileTest --warmup 2 --duration 7 --iterations 2 --parallel 2`
   - list blobs
     - `npm run perfstress-test:node -- StorageBlobListTest --warmup 2 --duration 7 --iterations 2…
jeremymeng added a commit to jeremymeng/ms-rest-js that referenced this pull request Feb 9, 2021
The issue is when the same abort signal is passed to multiple requests we keep
adding handlers without releasing them.

Porting fixes from

Azure/azure-sdk-for-js#5649
Azure/azure-sdk-for-js#12038