JsonSerializer.Deserialize is intolerably slow in Blazor WebAssembly, but very fast in .NET Core integration test #40386
See also comments at https://stackoverflow.com/questions/63254162
I couldn't figure out the best area label to add to this issue. If you have write-permissions please help me learn by adding exactly one area label.
Thanks for contacting us.
@steveharter FYI
Thanks. The performance is so poor that I am still skeptical that this is just a slow area--I still suspect that something is wrong with the way I am doing it. Would a deserialization of a few megabytes take 10-30 s?
needs tag area-System.Text.Json
Which version of Blazor are you using? You'll want to avoid creating a string from the content and use a Stream instead. If you are using net5.0, you should look at the System.Net.Http.Json extensions.
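A rough sketch of the stream-based approach (the types and the stand-in stream below are illustrative; in the real app the stream would come from `response.Content.ReadAsStreamAsync()` after a `GetAsync` call with `HttpCompletionOption.ResponseHeadersRead`):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class Foo
{
    public int? FooId { get; set; }
    public string? Comment { get; set; }
}

public static class Program
{
    // DeserializeAsync consumes the stream incrementally, so parsing can
    // begin before the whole payload is downloaded, and no intermediate
    // string is allocated for the full body.
    public static async Task<List<Foo>?> ReadFoosAsync(Stream json) =>
        await JsonSerializer.DeserializeAsync<List<Foo>>(json);

    public static async Task Main()
    {
        // Stand-in for an HTTP response body.
        var bytes = Encoding.UTF8.GetBytes("[{\"FooId\":1,\"Comment\":\"a\"}]");
        using var stream = new MemoryStream(bytes);
        List<Foo>? foos = await ReadFoosAsync(stream);
        Console.WriteLine(foos![0].FooId); // prints 1
    }
}
```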
I'm using Blazor 3.2.0 with System.Text.Json 5.0.0-preview.7. Yes, I used the extensions, but when I saw they were slow, I refactored to the code above so I could narrow the issue down to serialization. Here's the code before my performance refactoring.

private async Task GetData()
{
    IsLoading = true;
    string url = $".../api/v1/Foo";
    Results = await clientFactory.CreateClient("MyNamedClient").GetFromJsonAsync<List<Foo>>(url);
    IsLoading = false;
}
Yes, this will allow the deserializer to start before all of the data is read from the Stream, and it prevents the string alloc. System.Text.Json should be ~2x faster for deserialization than Newtonsoft, so it would be good to see your object model to check whether you hit an area that is slow on STJ. In either case, since both Newtonsoft and STJ are slow, there is likely something else going on. Some thoughts:
I posted an MCVE as an answer on StackOverflow, based on the WeatherForecast page.
Yes, an Intel Core i5 8350-U with 16 GB RAM. The test is running on my laptop. Foo is actually as follows, with minimal name changes only to protect proprietary information. Nothing significant on the CPU; this is my only focus when I am doing this. I did start with a stream, per the code just above--that was how I found this issue. I refactored to the code in the issue post just to narrow it down to slowness in deserialization.

public class Foo
{
    public int? FooId { get; set; }
    public int? FiscalYear { get; set; }
    public string? FundTypeCode { get; set; }
    public string? CategoryDescription { get; set; }
    public string? CustomerLevelCode { get; set; }
    public string? CustomerCode { get; set; }
    public string? CustomerDescription { get; set; }
    public string? ChangedFieldName { get; set; }
    public decimal OriginalAmount { get; set; }
    public decimal NewAmount { get; set; }
    public string? Comment { get; set; }
    public string? CreateUser { get; set; }
    public DateTime CreateDate { get; set; }
    public bool AdjustedBySystem { get; set; }
    public Guid? ChangeBatchNumber { get; set; }
    public string? FundDescription { get; set; }
    public string? ParentCustomerCode { get; set; }
    public string? ParentCustomerLevel { get; set; }
    public string? ParentCustomerDescription { get; set; }
    public decimal ChangedAmount { get { return NewAmount - OriginalAmount; } }
    public bool? CreatedFromNFile { get; set; }
    public string? Reason => FundTypeCode == "N" ? (CreatedFromNFile == true ? "From N File" : "N Manual") : "";
}
That model is simple and should be fast. I suggest running the perf test that @HenkHolterman suggested on StackOverflow to compare against the baseline.
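A baseline comparison of that kind can be sketched as follows (an illustrative harness, not the exact StackOverflow test: it builds 17,000 items in memory, serializes once, and times only the deserialization with Stopwatch):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text.Json;

public class WeatherForecast
{
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
    public string? Summary { get; set; }
}

public static class Benchmark
{
    public static void Main()
    {
        // Build a payload comparable in size to the one in the issue.
        List<WeatherForecast> items = Enumerable.Range(0, 17_000)
            .Select(i => new WeatherForecast
            {
                Date = DateTime.Today.AddDays(i % 365),
                TemperatureC = i % 40,
                Summary = "Sample summary text for item " + i
            })
            .ToList();

        string json = JsonSerializer.Serialize(items);

        // Time deserialization only, excluding any download or render cost.
        var sw = Stopwatch.StartNew();
        var result = JsonSerializer.Deserialize<List<WeatherForecast>>(json);
        sw.Stop();

        Console.WriteLine($"{result!.Count} items in {sw.ElapsedMilliseconds} ms");
    }
}
```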
@steveharter, I tried it just as suggested. Using the default code, it indeed takes 7-12 seconds to return 17000 items (about 1.6 MB) of WeatherForecast. (Download time on localhost is about 20 ms.) Also, I tried increasing the payload to 5 MB and that took 23-27 seconds.
Blazor in net5 should be considerably faster. Are you running this test from inside VS or from a published build? |
Tagging subscribers to this area: @CoffeeFlux
Running it from Visual Studio, "run without debugging" in Release configuration. @lewing Do you mean just System.Text.Json should be faster? If so, I already have the latest preview. Or are you suggesting I move the app to Blazor 5.0.0 latest preview? |
@szalapski could you please try your timings with a published app outside of VS? It looks like there is an issue where the runtime is always initialized in debug mode when run from inside VS. Additionally, if you update the app from 3.2 to 5.0, there are several interpreter optimizations and library improvements; some are in preview 7, some in later builds.
Just tried outside of VS -- using Also tried running the .exe after running |
@lewing what are the next steps here? |
I assume no attempt to run on Blazor 5.0 yet? If so, I think that should be next. Both Newtonsoft and STJ are slow, which indicates a likely environmental or systemic issue, not a (de)serialization issue. The StackOverflow test runs in <4 seconds for @HenkHolterman and 7-12 seconds for @szalapski. Different hardware and/or different Blazor versions could account for that 2x-3x slowness; we would need a standard CPU benchmark and the same Blazor version to actually compare apples-to-apples. Also @szalapski, on download perf you originally said:
but with your latest test from StackOverflow you said:
So download time went from 1-6 seconds for 2-6MB to 20ms for 1.6MB -- any thoughts on why that's the case?
The 1-6 seconds was over the internet, whereas the 20ms was running against a local web service. I just did that comparison to ensure that the download speed is not relevant--regardless of whether the download is 20 ms or 20,000 ms, the deserialization is quite slow. I will try it on Blazor 5 preview 8 soon.
Why shouldn't it be on the order of tens of milliseconds? Are the optimizations we see in .NET Core just not possible in WebAssembly?
Wait, I thought we all agreed that the slowness is in the deserialization code, not a problem with my system or environment. Are you saying that I have a problem that is not inherent to deserializing in WebAssembly? How can I diagnose that?
@rajeshaz09 I assume you've measured against .NET 5.0, since there have been gains. I see that MessagePack claims to be ~13x faster at deserializing than Json.NET (no benchmark for STJ) for the case of a "large array of simple objects". So if STJ is 2x as fast as Json.NET here, the 7 seconds for STJ vs. 2 seconds for MessagePack seems consistent, although note that the benchmark is for standard .NET Core, not under Blazor.
@steveharter I have a powerful dev machine. I didn't see much difference (hardly 1 second) between STJ and MessagePack with the High Performance power setting, but I can see a significant gap if I use the Balanced/Power Saver setting. MessagePack is a temporary solution; once we are satisfied with .NET 6, we will move back to JSON.
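For anyone trying the same comparison, a minimal MessagePack-CSharp round-trip looks roughly like this (a sketch assuming the MessagePack NuGet package; the contractless resolver is used here so the plain POCO needs no attributes):

```csharp
using System;
using MessagePack;
using MessagePack.Resolvers;

public class Foo
{
    public int? FooId { get; set; }
    public string? Comment { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // ContractlessStandardResolver serializes plain POCOs without
        // [MessagePackObject]/[Key] annotations on the model.
        var options = MessagePackSerializerOptions.Standard
            .WithResolver(ContractlessStandardResolver.Instance);

        var original = new Foo { FooId = 1, Comment = "hello" };

        // Binary round-trip; payloads are typically smaller than JSON.
        byte[] bytes = MessagePackSerializer.Serialize(original, options);
        Foo copy = MessagePackSerializer.Deserialize<Foo>(bytes, options);

        Console.WriteLine(copy.Comment); // prints "hello"
    }
}
```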
I am not 100% sure, but it seems very likely that this is related: I just tried to deserialize a 2.6MB json containing 10,000 simple POCOs. FYI: I am using .NET 6 Preview 3 and System.Text.Json.
I have had a similar journey recently, moving through different serialisers and finally arriving at MessagePack, which has been good enough in interpreted WASM for current users. Performance vs System.Text.Json is impressive.

However, the scope of our WASM app is definitely expanding, and we have users looking to handle hundreds of thousands of objects to perform data manipulation/analysis in the browser, like Excel would chomp through on a normal desktop. Wait times for data loads of this size (they really aren't massive payloads delivered from the API) are at the point where it is difficult to satisfy users, and Server Side Blazor is becoming the only option.

The Blazor/WASM community has generally always expressed that code runs at native speeds (until you learn that everything outside of the .NET libraries is interpreted), and I had hoped AOT would make an enormous difference here, allowing the MessagePack serialiser to run at native speed. Our initial benchmarks of rc1 are showing it to be slower in this area than interpreted mode. Maybe it's my misunderstanding of how serialisation works -- is it object construction in .NET itself that is slow here, such that I shouldn't see any difference between AOT and interpreted builds?

Either way, serialisation is painfully slow for what is really not that much data.
First, and most importantly, thanks to the team working on Blazor and web assembly. We think this technology has a really bright future! I'll add my support for @szalapski here. We have a .NET open source library that is used heavily in back-end services run on AWS Lambda. We were excited by the possibility of running some of our code in our web application. Our initial attempts to compile and run web assembly from our library in .NET 6 preview 7 have been met with massive performance degradation. I established a small benchmark that creates 1000 cubes using the library (the library is for creating 3d stuff). Running the Blazor code compiled using
We found that AOT compilation (which takes nearly 15 minutes) increases the performance by 2x. The benchmark containing the same code, run on the desktop, shows the following for writing to gltf:
It takes nearly 67x as long to run in web assembly. We have a similar performance degradation for serializing and deserializing JSON. Some considerations as to what might be slow:
You can find our example Blazor project, which has no UI but runs the wasm and reports to the console, here: https://github.com/hypar-io/Elements/tree/wasm-perf/Elements.Wasm. We're really excited for the effort to bring C# to web assembly and are happy to provide any further information necessary. It would be fantastic for these development efforts if there was a way to run a dotnet benchmark across the core CLR and web assembly to make an apples-to-apples comparison. For now we've had to build our own.

One more thing: this performance degradation is not everywhere. We can call methods in our library that do some pretty complicated geometry stuff, and they run at near native speed. We have a couple of demos of interactive 3d geometry editing and display using Blazor wasm. It's just serialization and reading/writing bytes that seem to be a big issue. Also looping in @Gytaco, who is doing some amazing work using C# on web assembly for geometry stuff.
@SamMonoRT could we add it to interpreter benchmarks? |
If you face issues with JSON serialization performance, before trying to solve them by refactoring your code, please check performance in another browser. Blazor works really fast on Edge, Opera, and Chrome, but performance in Firefox is really weak--it slows down serialization more than 10 times.
Serialisation is slow across all browsers for Mono .NET. If the performance of Blazor is slow in a particular browser, that's more likely a wasm implementation issue for the team that maintains that browser, as opposed to a Blazor/Mono .NET issue.
I see this is being targeted for .NET 7. Blazor WASM has been great for the most part, but this performance issue is making it really difficult to view Blazor as a viable option for some of the more data-intensive projects I have coming up. I'll give MessagePack a try, since it seems people have had some success with that.
Any news or suggestions (@szalapski)? We have the exact same problem, so we cannot build our application with Blazor.
I just tried it with a 10MB json file and it's unusably slow. 10MB isn't that much; it's tiny. It's taking over 2 minutes to load the initial page, which doesn't make sense IMO. I'm using the best performance tricks too:
It's so slow and takes so long that I can't even run a performance profiler; the profiler just bombs out and gets stuck. I'm having other problems too, where external .NET 7 DLLs take forever to load. There needs to be a way to quickly and efficiently load datasets into Blazor WASM. This is on .NET 7, by the way.
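One general System.Text.Json pitfall worth ruling out in any of these reports (a generic sketch, not specific to this app): `JsonSerializerOptions` caches reflection metadata per instance, so constructing a new options object per call throws that cache away on every deserialization. Reusing a single instance avoids this:

```csharp
using System;
using System.Text.Json;

public class Item
{
    public int Id { get; set; }
    public string? Name { get; set; }
}

public static class JsonHelper
{
    // Created once; every call reuses the cached type metadata instead of
    // rebuilding it (which is especially costly under the wasm interpreter).
    private static readonly JsonSerializerOptions Options = new()
    {
        PropertyNameCaseInsensitive = true
    };

    public static Item? Parse(string json) =>
        JsonSerializer.Deserialize<Item>(json, Options);
}

public static class Program
{
    public static void Main()
    {
        var item = JsonHelper.Parse("{\"id\":42,\"name\":\"widget\"}");
        Console.WriteLine(item!.Name); // prints "widget"
    }
}
```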
Meanwhile react with an embedded file of the same size takes like 10 ms. |
/cc @lewing |
I have the exact same problem with an even smaller (about 1.5MB uncompressed) json. I also tried: https://github.com/salarcode/Bois

Edit: System.Text.Json: 237ms (size: 1278kb)

For context, if I recreated the tested model from the in-memory original object (custom clone), it takes about 5ms. These numbers are from the published version of the app and measure just deserialization. (I do not have the number for gRPC from the deployed app.) But in debug mode on localhost it was about the same with Bois/custom (70ms), so I expect in the deployed app it will be about 40ms on my PC. I don't have an exact number from "normal" dotnet, but it would be a few ms.
Same issue with a 10MB GeoJSON file in .NET 7. This issue makes it difficult to develop effective, WFS-oriented GIS solutions with .NET Wasm.
Any progress on this? Running into same issue. |
Apparently there are lots of performance improvements (as an indirect result of the new WASM JIT) in .NET 8 preview 2, so it might be worth a look. I'd be trying it, but still no VS for Mac support....
@Webreaper Thanks, looking forward to .NET 8. Also I just compiled with AOT turned on in .NET 7 and the slowness disappeared so suggest trying that @pragmaeuge |
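For reference, AOT compilation for Blazor WebAssembly is enabled in the project file (a config sketch; it also requires the `wasm-tools` workload to be installed, and it significantly increases publish time):

```xml
<PropertyGroup>
  <RunAOTCompilation>true</RunAOTCompilation>
</PropertyGroup>
```

The AOT step only applies to published output (`dotnet publish -c Release`), not to normal debug runs.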
The only thing you need to be careful of with AOT is this issue: #62149 |
Hi! I will take a look at it soon, thanks a lot. |
I deployed with .NET 8 preview 2 today and deserialization is much faster. Jiterpreter helps a lot, from about 4-5 seconds with .NET 7 to 1-2 seconds. The rest of the application works too which is nice :) |
This issue is over three years old, and while we will keep working on improving JSON performance, given the increases in execution speed we have measured from when this was opened to now, I consider this issue closed. If you would like to report JSON performance issues with .NET 8 preview 7 or later, please open a new issue.
In my Blazor app, I have a component that has a method like this. (I've replaced a call to GetFromJsonAsync with code from inside it, to narrow down the slow part.)

My download of 2-6 MB takes 1-6 seconds, but the rest of the operation (during which the UI is blocked) takes 10-30 seconds. Is this just slow deserialization in ReadFromJsonAsync (which calls System.Text.Json.JsonSerializer.Deserialize internally), or is there something else going on here? How can I improve the efficiency of getting this large set of data (though it isn't all that big, I think!)?

I have commented out anything bound to Results to simplify, and instead I just have an indicator bound to IsLoading. This tells me there's no slowness in updating the DOM or rendering.

When I attempt the same set of code in an automated integration test, it only takes 3 seconds or so (the download time). Is WebAssembly really that slow at deserializing? If so, is the only solution to retrieve very small data sets everywhere on my site? This doesn't seem right to me. Can this slowness be fixed?

Here's the resulting browser console log from running the above code:

Using Newtonsoft.Json (as in the commented-out line) instead of System.Text.Json gives very similar results.

For what it's worth, here's the Chrome performance graph. The green is the download and the orange is "perform microtasks", which I assume means WebAssembly work.