
Test 4: Server-side templates and collections #134

Closed
bhauer opened this issue Apr 11, 2013 · 8 comments

@bhauer
Contributor

bhauer commented Apr 11, 2013

See issue #133 concerning additional test types.

We are working on a fourth test to exercise the frameworks' server-side templates and some small work with collections. Here is the tentative specification:

  • A new database table contains a dozen Unix-style "fortunes." The schema is just id (int) and message (varchar).
  • Using the ORM, the test fetches all Fortune objects from the fortunes table, and places them into a list data structure. The data structure must be a dynamic-size data structure or equivalent and should not be dimensioned using foreknowledge of the row-count of the database table.
  • Within the scope of the request, a new Fortune is constructed and added to the list. This confirms that the data structure is dynamic-sized. The new fortune is not persisted to the database; it is ephemeral for the scope of the request.
  • The fortunes are then sorted in the test code by the fortune's message field. No ORDER BY clause is permitted (not that it would be of much value since a newly instantiated Fortune is added to the list prior to sorting).
  • The resulting sorted list is then provided to a server-side template and rendered to simple HTML. The resulting table displays each Fortune's id number and message. This test does not include external assets (CSS, JavaScript); a later test will include assets.
  • That HTML is sent as a response. The response content-type must be specified as text/html and either the Content-Length or Transfer-Encoding response header must be provided. Compression (e.g., gzip) should not be enabled.
  • The Fortunes' messages are stored as UTF-8 and one of the Fortunes is in Japanese. The resulting HTML must be delivered as UTF-8 and the Japanese fortune should be displayed correctly.
  • One Fortune's message includes a <script> tag. The server-side template must assume the Fortune messages cannot be trusted and escape the message text properly.
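The steps in the specification above can be sketched as follows. This is a minimal illustration in Python, not a reference implementation: the row tuples stand in for ORM-fetched Fortune objects, and the render helper stands in for a real server-side template engine.

```python
import html

def render_fortunes(rows):
    """rows: list of (id, message) tuples fetched via the ORM.
    Returns the HTML body; the ephemeral fortune is added here and
    never persisted."""
    fortunes = list(rows)  # dynamic-size structure, not pre-dimensioned
    fortunes.append((0, "Additional fortune added at request time."))
    fortunes.sort(key=lambda f: f[1])  # sort by message; no ORDER BY
    out = ["<table>", "<tr><th>id</th><th>message</th></tr>"]
    for fid, message in fortunes:
        # Messages are untrusted: escape them (handles the <script> case)
        out.append(f"<tr><td>{fid}</td><td>{html.escape(message)}</td></tr>")
    out.append("</table>")
    # The response would be sent with:
    #   Content-Type: text/html; charset=utf-8
    # plus either Content-Length or Transfer-Encoding, compression disabled.
    return "\n".join(out)
```

A real submission would of course fetch the rows through the framework's ORM and render through its template engine; the ordering, the ephemeral append, and the escaping are the parts the test is designed to exercise.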

This is tentative. I'd like to introduce this test in Round 4 if possible, implemented in at least a few of the frameworks.

If anyone has any thoughts, please join in!

@bhauer
Contributor Author

bhauer commented Apr 11, 2013

Here is a visual representation of the tentative response (source view on right).

[image: tentative-response]

@trautonen
Contributor

For frameworks that are mainly for REST or otherwise operate at a lower level, should they use an external templating framework, or is it allowed to simply build the HTML by concatenating strings? I think this 'string templating' is not how those frameworks would normally be used, and it would defeat the purpose of these tests.

@bhauer
Contributor Author

bhauer commented Apr 14, 2013

Hi @trautonen. Thanks for raising that question. It's relevant to more than just the particular context of server-side templates. As seen in the tests we've done to date, we don't have all tests represented across all platforms and frameworks. Moreover, as the community increases the number of tests and frameworks, there are going to be even more combinations that simply aren't a "good fit."

For any given framework+test combination, given the constraints of the framework's design goals and the simple matter of time, I think that either of the following are fair:

  • Simply don't do the test (yet). If a framework is not generally acknowledged as well-suited to the test or the community simply has not yet found time to implement the test, I think it's completely reasonable to just omit that combination. For example, at the time of this writing, we do not yet have a Netty database test. It could be done, and it would be nice to have it done for completeness' sake. But I doubt many readers are surprised or concerned that it doesn't exist.
  • Select a "best of breed" implementation of the missing piece(s) available for the platform in order to fulfill the test requirements. For example, if there is no ORM for a Ruby framework, use ActiveRecord. Or, if there is no JSON serializer in a Java framework, use Jackson.

I would prefer not to include implementations that bypass the workload that the test is intended to exercise. So in the case of server-side templates and REST-oriented frameworks, I'd say no to just concatenating together a hard-coded string. For those cases, we should either omit the framework+test combination in question or select a best-of-breed server-side template language (e.g., Mustache). I think either option better represents what actual users of the framework would do in practice: either not use it for server-side templates or select a template library to add to the mix.
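As an illustration of the best-of-breed option, a Mustache template for the fortunes table might look like the following (the `fortunes` section name and keys are illustrative, not prescribed; note that in Mustache, `{{message}}` is HTML-escaped by default, which also satisfies the escaping requirement):

```mustache
<table>
<tr><th>id</th><th>message</th></tr>
{{#fortunes}}
<tr><td>{{id}}</td><td>{{message}}</td></tr>
{{/fortunes}}
</table>
```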

Your thoughts?

@trautonen
Contributor

I think @bhauer's proposed solution is good enough. But a question about the libraries and other frameworks used when there is nothing built into the framework under test: the first thing I've noticed among the JVM-based frameworks is that the database connection pools differ widely. The Unfiltered tests use BoneCP, I used c3p0 for the local datasource in the Spark tests, and then there's the dummy datasource for Resin JNDI, which will probably be a lot slower than the other two once the tests introduce a lot of concurrency. Should there be strict rules about these, or even common implementations that the tests can adopt?

@bhauer
Contributor Author

bhauer commented Apr 15, 2013

In the first-round tests, we attempted to identify and use libraries (including but not limited to connection pools) that were either recommended by the framework in question or used as examples in the documentation for the framework. Where there was no recommendation, we selected a best-of-breed for the platform. For example, we used Jackson for JSON serialization on several of the JVM frameworks.

Some connection pools were configured unevenly, but that was not intended. The intent was to use a production-ready configuration of the framework and its recommended add-on libraries.

Since then, we have normalized the JDBC connection string used for JVM tests, and contributors have submitted pull requests that may have normalized the libraries to a degree. Going forward, I would prefer that each framework's tests continue using whatever libraries that framework recommends for production use. (This is a subject for a longer discussion, but we want to plead with framework authors to be as clear as possible about their recommendations for production deployments; we found that many don't write much documentation about production.)

If, for example, C3P0 is recommended for Spark production deployments, that is what I'd like to see in the Spark tests.

We've received pull requests that fine-tune a framework's tests to the specifics of these benchmarks. We've accepted these and, for the time being, identified them with a "stripped" name suffix (e.g., rails-ruby-stripped). If we received a test that used an alternate connection pool for this particular set of benchmarks--a connection pool that differs from the framework's standard recommendation for production deployments--that would be a similar situation. I'm not sure how rigid we will end up being with this distinction (probably not very rigid), or whether we'll retain the stripped suffix. I bring it up just to highlight the conflicting ideals: exercise a framework configured as recommended for production deployments versus exercise a framework that has been tuned to be as performant as possible on these particular tests. I prefer the former, but both are valid.

(Note that Servlets are using the standard connection pool that is provided with the MySQL JDBC driver, MysqlConnectionPoolDataSource.)
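To illustrate what the pools discussed above (BoneCP, c3p0, MysqlConnectionPoolDataSource) provide on the JVM, here is a deliberately simplified pool sketch in Python: a fixed set of connections opened up front and reused across requests instead of opened per request. The class and parameter names are invented for this sketch.

```python
import queue

class SimplePool:
    """Toy fixed-size connection pool. Real pools add health checks,
    timeouts, and dynamic sizing; this only shows the core reuse idea."""

    def __init__(self, connect, size=8):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())  # open all connections up front

    def acquire(self, timeout=5.0):
        # Blocks when all connections are checked out (contention)
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for reuse rather than closing it
        self._pool.put(conn)
```

Usage is the familiar acquire/release cycle: `conn = pool.acquire()`, run the query, then `pool.release(conn)` in a finally block so the connection is always returned.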

@pfalls1
Contributor

pfalls1 commented Apr 19, 2013

We now have a few implementations of this test (Rails, Servlet and Gemini). We hope to add at least 3 or 4 more so that these results can be included in round 4.

@bhauer
Contributor Author

bhauer commented May 2, 2013

While we need to implement this test in more frameworks, I think 17 is a great start. Thanks especially to @Skamander for the fast contributions here. I'm going to close this issue.

@bhauer
Contributor Author

bhauer commented May 9, 2013

The final requirements for this test type (Fortunes) are now available at the results web site: http://www.techempower.com/benchmarks/#section=code
