Open Changes and quickly clicking on Stage Changes on a large json file blocks / hangs for 10 seconds #40681
How fast is Also can you run everything again, but this time:
Which extension took up most of the CPU?
@joshunger I assume the operation is really fast if you don't click the file first? Can you share the file with us? @jrieken @alexandrudima I suspect this 10s delay occurs due to sending the 3.3 MB text model over to the extension host. Any idea how I can confirm my suspicions?
@joaomoreno I can only imagine being able to reproduce out of source and adding log statements at the right places. You never know, but I don't suspect sending a 3.3 MB string between processes takes 10s.
@joaomoreno you got it. It takes < 1 second if I don't click the file first. This is an export of a Google Spreadsheet. Yes, I can share it, but not on this issue. Can I email you the file or share via OneDrive? If so, what email? Or, do you have any large Google Spreadsheet JSON exports?
@joaomoreno @alexandrudima I sent you an email with the file
@joshunger Thanks for the file, got it. But I can't reproduce it over here. Are you sure this issue reproduces with
@joaomoreno did you check in the file, make tweaks to it, and then check in the changes? GIF of repro -
@joshunger This is a great bug! I was able to reproduce it and did a very deep investigation with @alexandrudima. We still don't have a fix, but we understand why this happens. There is a lot of IPC communication back and forth between our renderer process and the extension host process. This is implemented over UNIX domain sockets (macOS, Linux) or named pipes (Windows). It seems that, for some reason, on macOS, large data sent via this mechanism gets split into 8192-byte chunks. When sending this large JSON file back and forth, that ends up taking a LOT of chunks. Between chunks, the reading process yields to give other tasks a chance to run. All of this would be fine if it weren't for... tokenization. This file takes quite a while to tokenize. That should still be fine, because our tokenization methods don't run in a single JavaScript frame... we yield often so that other tasks get to run. This leads us to your bug. Tokenization yields often, but not often enough to account for all those 8192-byte chunks that need to be read, since there are so many of them. Note that this bug doesn't reproduce on Linux or Windows. It also doesn't reproduce if we increase the timeout with which we yield from tokenization to ~50ms. Again, great bug! We'll get back on it with a fix...
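The yield-on-a-time-budget idea described above can be sketched as follows. This is a hypothetical illustration, not VS Code's actual tokenizer; `tokenizeInChunks` and its parameters are invented names, and the ~50 ms default is the budget value mentioned in the comment:

```javascript
// Time-sliced tokenization: process lines until a time budget is spent,
// then yield to the event loop with setImmediate so that queued work
// (e.g. pending IPC chunk reads) gets a chance to run.
function tokenizeInChunks(lines, tokenizeLine, budgetMs = 50) {
  return new Promise(resolve => {
    let i = 0;

    function work() {
      const deadline = Date.now() + budgetMs;

      // Tokenize as many lines as fit in the budget...
      while (i < lines.length && Date.now() < deadline) {
        tokenizeLine(lines[i++]);
      }

      if (i < lines.length) {
        setImmediate(work); // ...then yield so other tasks can run
      } else {
        resolve();
      }
    }

    work();
  });
}
```

With a larger budget, fewer yields happen per file, which is why raising the timeout masks the bug: tokenization finishes before the backlog of small socket reads starves it.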
Thanks! You should send out free stickers to top bugs; I need a VS Code sticker for my laptop 🤔
You're welcome! Haha. Well, the great part about VS Code is that it is really performant compared to Atom!
This reproduces in Node too. I was able to debug Node 7.9.0 itself and trace it down to this line in libuv, in which the syscall is made. It also reproduces on Node 8. Here's a sample which shows the behaviour:

```js
const net = require('net');
const path = require('path');
const fs = require('fs');
const os = require('os');

const socketPath = path.join(os.tmpdir(), 'bugsocket');

try {
  fs.unlinkSync(socketPath);
} catch (err) {
  // noop
}

const buffer = new Buffer(1024 * 1024);

const server = net.createServer(socket => {
  socket.write(buffer);
  socket.end();
  server.close();
});

server.listen(socketPath);

const socket = net.createConnection(socketPath);
const map = new Map();

socket.on('data', data => {
  map.set(data.byteLength, (map.get(data.byteLength) || 0) + 1);
});

socket.on('close', () => {
  for (const [size, count] of map) {
    console.log(`got ${count} blocks of ${size} bytes`);
  }
});
```

On macOS I get:
While on Linux I get:
I found another poor soul having this issue, with no answers: https://stackoverflow.com/questions/44026984/macos-sierra-increase-named-pipe-capacity I can't find any information whatsoever on how to change this buffer size.
I can reproduce it in C++ too:

```cpp
#include <iostream>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <thread>

#define BUFFER_SIZE 1024 * 1024
#define FIFO_PATH "/Users/joao/Desktop/fifo"

void readFifo() {
  auto fd = open(FIFO_PATH, O_RDONLY);

  if (fd < 0) {
    std::cerr << "Failed to open fifo for reading" << std::endl;
    return;
  }

  std::cout << "Ready to read!" << std::endl;

  char buffer[BUFFER_SIZE];
  auto total_bytes_read = 0;

  while (total_bytes_read < BUFFER_SIZE) {
    auto bytes_read = read(fd, buffer, BUFFER_SIZE);
    std::cout << "Read " << bytes_read << " bytes" << std::endl;
    total_bytes_read += bytes_read;
  }

  close(fd);
}

int main(int argc, const char * argv[]) {
  unlink(FIFO_PATH);

  if (mkfifo(FIFO_PATH, 0666) != 0) {
    std::cerr << "Failed to create fifo" << std::endl;
    return 1;
  }

  std::thread reader(readFifo);

  auto fd = open(FIFO_PATH, O_WRONLY);

  if (fd < 0) {
    std::cerr << "Failed to open fifo for writing" << std::endl;
    return 1;
  }

  char buffer[BUFFER_SIZE];
  for (int i = 0; i < BUFFER_SIZE; i++) {
    buffer[i] = 'a';
  }

  auto bytes_written = write(fd, buffer, BUFFER_SIZE);
  reader.join();

  std::cout << "Wrote " << bytes_written << " bytes" << std::endl;

  unlink(FIFO_PATH);
  return 0;
}
```

Interesting that here the writer reports that it managed to write everything at once, though the reader was only getting 8192 bytes at a time.
@joaomoreno nodejs/node#12921 related? |
I've contacted the Darwin-kernel mailing list https://lists.apple.com/mailman/listinfo/darwin-kernel |
@joshunger Yeah I was there already, I don't think it's related. |
@joaomoreno I'm very happy you're investigating this. Thank you for your time! |
Unfortunately I got no reply from the Darwin-kernel mailing list... Trying Darwin-userlevel and Filesystem-dev... |
One month later and no answer from the mailing lists. Went the Stack Overflow way: https://stackoverflow.com/questions/48945547/change-named-pipe-buffer-size-in-macos
Thanks a lot, @joaomoreno. So, as far as I understand, you found a way to fix it? EDIT: oh, my bad, you mean you created a topic on Stack Overflow to get help, since Darwin is not answering your emails...
I'm on macOS and everything seems fine with Git except with
@joaomoreno In https://stackoverflow.com/questions/48945547/change-named-pipe-buffer-size-in-macos I read the answer was saying
Yup, it's a macOS issue. There appears to be no solution for this. |
But what about Git clients like GitKraken and Sourcetree, or even IDEs like WebStorm? They don't seem to have this issue; maybe it's possible to fix?
They do not do things the same way as we do... read this for an analysis: #40681 (comment)
@joaomoreno thank you very much for explaining. I really appreciate your work. Sad; maybe we can hope for a fix in a new macOS release...
Hi, #57697 was closed as a dupe of this. I'm not 100% sure it's the same, but when I accidentally click I observe this on both Mac and Windows. VSCode is not frozen, it is still responsive, but diffing does nothing. Does the fact that it's reproducible on Windows as well suggest anything?
This will not get fixed upstream anytime soon, so I have pushed the following mitigations on our side:
Open Changes and quickly clicking on Stage Changes on a large json file is blocked / hangs for 10 seconds
Steps to Reproduce:
EXPECTING: added in under 1 second
ACTUAL: takes ~10 seconds
If I repeat it a second time it adds in under 1 second.
Reproduces without extensions: Yes