Javascript doc improvements
ddnexus committed Aug 6, 2021
1 parent 24b7e78 commit 7187101
Showing 2 changed files with 36 additions and 26 deletions.
35 changes: 33 additions & 2 deletions docs/api/javascript.md
@@ -24,9 +24,40 @@
Load the [pagy.js](https://github.com/ddnexus/pagy/blob/master/lib/javascripts/p

### CAVEATS

#### Functions

If you override any `*_js` helper, be sure to also override/enforce the javascript functions that it uses, even with a simple copy and paste. If the relation between the helper and the functions changes in a future release (e.g. arguments, naming, etc.), your app will keep working with your own overrides, without the need to update them.
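
For example, a minimal sketch of a copy-and-paste override (the module name is hypothetical, and `super` stands in for the pasted helper body):

```ruby
# app/helpers/pagy_frontend_overrides.rb (hypothetical)
# Freeze your own copy of a *_js helper: paste the gem's current helper
# source in place of `super`, and vendor the matching pagy.js function,
# so a future release changing their contract cannot break your app.
module PagyFrontendOverrides
  def pagy_nav_js(*args, **kwargs)
    super # replace with a pasted copy of the current helper source
  end
end
```

Since Ruby looks up methods in the most recently included module first, including this module after `Pagy::Frontend` makes the override take precedence.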

#### HTML fallback

Notice that if the client browser doesn't support Javascript, or if it is disabled, the `*_js` helpers will render nothing useful for the user. If your app doesn't require Javascript support and you still want to use the javascript helpers, you should consider implementing your own HTML fallback. For example:

```erb
<noscript><%== pagy_nav(@pagy) %></noscript>
```

#### Preventing crawlers from following look-alike links

The `*_js` helpers come with a `data-pagy-json` attribute that includes an HTML-encoded string that looks like an `a` link tag. It's just a placeholder string used by `pagy.js` to create the actual DOM link elements, but some crawlers reportedly follow it even though it is not a DOM element, causing server-side errors to show up in your log.

You may want to prevent that by simply adding the following lines to your `robots.txt` file:

```
User-agent: *
Disallow: *__pagy_page__
```

**Caveat**: already-indexed links may take a while to be purged by some search engines (i.e. you may still get some hits for a while even after you disallow them).

A more drastic alternative to `robots.txt` would be adding the following block to `config/initializers/rack_attack.rb` (if you use the [Rack Attack Middleware](https://github.com/kickstarter/rack-attack)):

```ruby
Rack::Attack.blocklist("block crawlers following pagy look-alike links") do |request|
  # block any request whose query string contains the pagy placeholder
  request.query_string.match?(/__pagy_page__/)
end
```

but it would be quite overkill to install it only for this purpose.

### Add the oj gem

27 changes: 3 additions & 24 deletions docs/how-to.md
@@ -616,7 +616,9 @@
When the count caching is not an option, you may want to use the [countless extr

## Using AJAX

You can trigger the AJAX rendering in Rails by [customizing the link attributes](customizing-the-link-attributes), as shown in the sketch below.

See also [Using AJAX](api/javascript.md#using-ajax).
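
For example, a minimal sketch of the link-attribute approach, assuming Rails UJS and a placeholder `Product` model:

```ruby
# In a controller action (Product is a placeholder model):
# data-remote="true" makes Rails UJS request the linked pages
# via AJAX instead of triggering a full page load.
@pagy, @records = pagy(Product.all, link_extra: 'data-remote="true"')
```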

## Paginate for API clients

@@ -648,29 +650,6 @@
If you don't have strict requirements but still need to give the user total feedback

If your requirements allow you to use the `countless` extra (minimal or automatic UI), you can save one query per page and drastically boost efficiency by eliminating the nav info and almost all of the UI. Take a look at the examples in the [support extra](extras/support.md).
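
For example, a minimal sketch of the countless extra (again with a placeholder `Product` model):

```ruby
# config/initializers/pagy.rb
require 'pagy/extras/countless'

# In a controller action: pagy_countless skips the count query entirely,
# fetching one extra record to detect whether a next page exists.
@pagy, @records = pagy_countless(Product.all)
```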

## Ignoring Brakeman UnescapedOutputs false positive warnings

Pagy outputs html-safe HTML; however, being a framework-agnostic pagination gem, it does not use the rails-specific `html_safe` helper on its output. That is noticed by the [Brakeman](https://github.com/presidentbeef/brakeman) gem, which will raise a warning.
