Test 4: Server-side templates and collections #134
For frameworks that are mainly REST-oriented or otherwise operate at a lower level, should they use an external templating framework, or is it acceptable to simply build the HTML by string concatenation? I think this "string templating" is not how those frameworks would normally be used, and it defeats the purpose of the test.
Hi @trautonen. Thanks for raising that question. It's relevant beyond the particular context of server-side templates. As seen in the tests we've done to date, we don't have all tests represented across all platforms and frameworks. Moreover, as the community increases the number of tests and frameworks, there are going to be even more combinations that simply aren't a "good fit." For any given framework+test combination, given the constraints of the framework's design goals and the simple matter of time, I think either of the following is fair: omit the combination entirely, or adopt an external, best-of-breed library to fill the gap.
I would prefer not to include implementations that bypass the workload that the test is intended to exercise. So in the case of server-side templates and REST-oriented frameworks, I'd say no to just concatenating together a hard-coded string. For those cases, we should either omit the framework+test combination in question or select a best-of-breed server-side template language (e.g., Mustache). I think either option better represents what actual users of the framework would do in practice: either not use it for server-side templates, or select a template library to add to the mix. Your thoughts?
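Whatever template library is chosen (Mustache or otherwise), the core behavior this test exercises is HTML-escaping of untrusted values. Here is a minimal, library-free sketch in Python; the data and function names are illustrative, not taken from any benchmark implementation:

```python
import html

# Hypothetical fortune rows; in the real test these come from the database.
fortunes = [
    {"id": 1, "message": "A computer scientist fixes things that aren't broken."},
    {"id": 2, "message": '<script>alert("unsafe");</script>'},
]

def render_row(fortune):
    # Escape the message so markup embedded in the data is displayed as
    # text rather than executed by the browser.
    return "<tr><td>{}</td><td>{}</td></tr>".format(
        fortune["id"], html.escape(fortune["message"]))

table = "<table>" + "".join(render_row(f) for f in fortunes) + "</table>"
```

A proper template library does this escaping by default; the point of requiring one is that hand-rolled string concatenation tends to forget it.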
I think @bhauer's proposed solution is good enough. But what about the libraries and other components used when the framework under test has nothing built in? The first thing I've noticed among the JVM-based frameworks is that the database connection pools differ widely. The Unfiltered tests use BoneCP, I used c3p0 for the local datasource in the Spark tests, and then there's the dummy datasource for Resin JNDI, which will probably be a lot slower than the other two if a lot of concurrency is introduced in the tests. Should there be strict rules about these, or even common implementations that can be adopted across the tests?
In the first-round tests, we attempted to identify and use libraries (including but not limited to connection pools) that were either recommended by the framework in question or used as examples in the framework's documentation. Where there was no recommendation, we selected a best-of-breed library for the platform. For example, we used Jackson for JSON serialization in several of the JVM frameworks. Some connection pools were configured unevenly, but that was not intended; the intent was to use a production-ready configuration of each framework and its recommended add-on libraries. Since then, we have normalized the JDBC connection string used for the JVM tests, and contributors have submitted pull requests that may have normalized the libraries to a degree.

Going forward, I would prefer that each framework's tests continue using whatever libraries that framework recommends for production use. (This is a subject for a longer discussion, but we want to plead with framework authors to be as clear as possible about their recommendations for production deployments; we found that many don't write much documentation about production.) If, for example, c3p0 is recommended for Spark production deployments, that is what I'd like to see in the Spark tests.

We've received pull requests that fine-tune a framework's tests to the specifics of these benchmarks. We've accepted these and, for the time being, identified them with a "stripped" name suffix (e.g., rails-ruby-stripped). If we received a test that used an alternate connection pool for this particular set of benchmarks--a connection pool that differs from the framework's standard recommendation for production deployments--that would be a similar situation. I'm not sure how rigid we will end up being about this distinction (probably not very rigid) or whether we'll retain the "stripped" suffix.
I bring it up just to highlight the conflicting ideals: exercising a framework configured as recommended for production deployments versus exercising a framework that has been tuned to be as performant as possible on these particular tests. I prefer the former, but both are valid. (Note that the Servlet tests use the standard connection pool provided with the MySQL JDBC driver.)
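As background on why the choice of pool matters under concurrency: a connection pool creates connections once and hands them out to concurrent requests, so its behavior under contention (how requests queue for a free connection) dominates throughput. A toy illustration in Python follows; it is not BoneCP, c3p0, or any library mentioned above, and the names are invented for the sketch:

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: connections are created once up front and
    reused, so concurrent requests don't pay per-request connection setup."""

    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks when all connections are checked out. This queuing
        # behavior is exactly where real pool implementations differ
        # under heavy load.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Usage with a stand-in "connection" factory:
pool = ConnectionPool(factory=lambda: object(), size=4)
conn = pool.acquire()
pool.release(conn)
```

A "dummy" datasource that opens a fresh connection per request skips all of this reuse, which is why it would fall behind BoneCP or c3p0 once concurrency rises.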
We now have a few implementations of this test (Rails, Servlet, and Gemini). We hope to add at least 3 or 4 more so that these results can be included in Round 4.
While we need to implement this test in more frameworks, I think 17 is a great start. Thanks especially to @Skamander for the fast contributions here. I'm going to close this issue.
The final requirements for this test type (Fortunes) are now available at the results web site: http://www.techempower.com/benchmarks/#section=code |
See issue #133 concerning additional test types.
We are working on a fourth test to exercise the frameworks' server-side templates and some small work with collections. Here is the tentative specification:
- An `ORDER BY` clause is permitted (not that it would be of much value, since a newly instantiated Fortune is added to the list prior to sorting).
- The response content type must be `text/html`, and either the `Content-Length` or `Transfer-Encoding` response header must be provided. Compression (e.g., gzip) should not be enabled.
- At least one fortune message includes a `<script>` tag. The server-side template must assume the Fortune messages cannot be trusted and escape the message text properly.

This is tentative. I'd like to introduce this test in Round 4 if possible, implemented in at least a few of the frameworks.
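Putting the tentative requirements together, the request flow might be sketched as follows. This is a hedged illustration in Python with stand-in data instead of a real database query; the function name, row values, and the added fortune's message text are assumptions for the sketch, not part of the specification:

```python
import html

# Stand-in for rows fetched from the Fortune database table: (id, message).
db_rows = [
    (3, "Zebra fortune"),
    (1, '<script>alert("xss");</script>'),  # untrusted markup in the data
]

def fortunes_response(rows):
    fortunes = [{"id": i, "message": m} for i, m in rows]
    # A new Fortune is instantiated at request time and added to the list
    # prior to sorting. (The exact message text here is illustrative.)
    fortunes.append({"id": 0, "message": "Additional fortune added at request time."})
    fortunes.sort(key=lambda f: f["message"])

    # The template must escape the untrusted message text.
    rows_html = "".join(
        "<tr><td>{}</td><td>{}</td></tr>".format(f["id"], html.escape(f["message"]))
        for f in fortunes)
    body = "<!DOCTYPE html><html><body><table>" + rows_html + "</table></body></html>"

    # text/html content type, explicit Content-Length, and no compression.
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        "Content-Length": str(len(body.encode("utf-8"))),
    }
    return headers, body
```

A real implementation would render the table through the framework's template engine rather than string formatting; the sketch only shows the required ordering, escaping, and header behavior in one place.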
If anyone has any thoughts, please join in!