diff --git a/docs/api/javascript.md b/docs/api/javascript.md
index 0235d68ac..30ba2dde8 100644
--- a/docs/api/javascript.md
+++ b/docs/api/javascript.md
@@ -24,9 +24,40 @@ Load the [pagy.js](https://github.com/ddnexus/pagy/blob/master/lib/javascripts/p
 ### CAVEATS
 
-If you override any `*_js` helper, ensure to override/enforce the relative javascript function, even with a simple copy and paste. If the relation between the helper and the function changes in a next release (e.g. arguments, naming, etc.), your app will still work with your own overriding even without the need to update it.
+#### Functions
 
-See also [Preventing crawlers to follow look-alike links](../how-to.md#preventing-crawlers-to-follow-look-alike-links)
+If you override any `*_js` helper, be sure to also override/enforce the javascript functions that it uses. If the relation between the helper and the functions changes in a future release (e.g. arguments, naming, etc.), your app will keep working with your own override without the need to update it.
+
+#### HTML fallback
+
+Notice that if the client browser doesn't support Javascript, or if it is disabled, certain helpers will serve nothing useful to the user. If your app does not require Javascript support but you still want to use the javascript helpers, you should consider implementing your own HTML fallback. For example:
+
+```erb
+<noscript><%== pagy_nav(@pagy) %></noscript>
+```
+
+#### Preventing crawlers from following look-alike links
+
+The `*_js` helpers come with a `data-pagy-json` attribute that includes an HTML-encoded string that looks like an `a` link tag. It's just a placeholder string used by `pagy.js` in order to create the actual DOM link elements, but some crawlers reportedly follow it even though it is not a DOM element. That causes server-side errors to be reported in your log.
+
+You may want to prevent that by adding the following lines to your `robots.txt` file:
+
+```
+User-agent: *
+Disallow: *__pagy_page__
+```
+
+**Caveat**: already indexed links may take a while to get purged by some search engines (i.e. you may still get some hits for a while even after you disallow them).
+
+A more drastic alternative to the `robots.txt` approach would be adding the following block to `config/initializers/rack_attack.rb` (if you use the [Rack Attack Middleware](https://github.com/kickstarter/rack-attack)):
+
+```ruby
+Rack::Attack.blocklist("block crawlers following pagy look-alike links") do |request|
+  request.query_string.match(/__pagy_page__/)
+end
+```
+
+but it would be overkill if you plan to install it only for this purpose.
 
 ### Add the oj gem
diff --git a/docs/how-to.md b/docs/how-to.md
index e1f6c720b..a3f736c17 100644
--- a/docs/how-to.md
+++ b/docs/how-to.md
@@ -616,7 +616,9 @@ When the count caching is not an option, you may want to use the [countless extr
 
 ## Using AJAX
 
-See [Using AJAX](api/javascript.md#using-ajax)
+You can trigger an AJAX render in Rails by [customizing the link attributes](#customizing-the-link-attributes).
+
+See also [Using AJAX](api/javascript.md#using-ajax).
 
 ## Paginate for API clients
 
@@ -648,29 +650,6 @@ If you don't have strict requirements but still need to give the user total feed
 If your requirements allow to use the `countless` extra (minimal or automatic UI) you can save one query per page, and drastically boost the efficiency eliminating the nav info and almost all the UI. Take a look at the examples in the [support extra](extras/support.md).
 
-## Preventing crawlers to follow look-alike links
-
-The `*_js` helpers come with a `data-pagy-json` attribute that includes an HTML encoded string that looks like an `a` link tag. It's just a placeholder string used by `pagy.js` in order to create actual DOM elements links, but some crawlers are reportedly following it even if it is not a DOM element. That causes server side errors reported in your log.
-
-You may want to prevent that by simply adding the following lines to your `robots.txt` file:
-
-```
-User-agent: *
-Disallow: *__pagy_page__
-```
-
-**Caveats**: already indexed links may take a while to get purged by some search engine (i.e. you may still get some hits for a while even after you disallow them)
-
-A quite drastic alternative to the `robot.txt` would be adding the following block to the `config/initializers/rack_attack.rb` (if you use the [Rack Attack Middlewhare](https://github.com/kickstarter/rack-attack)):
-
-```ruby
-Rack::Attack.blocklist("block crawlers to follow pagy look-alike links") do |request|
-  request.query_string.match /__pagy_page__/
-end
-```
-
-but it would be quite an overkill if you plan to install it only for this purpose.
-
 ## Ignoring Brakeman UnescapedOutputs false positives warnings
 
 Pagy output html safe HTML, however, being an agnostic pagination gem it does not use the specific `html_safe` rails helper on its output. That is noted by the [Brakeman](https://github.com/presidentbeef/brakeman) gem, that will raise a warning.