[SEO Audits] Structured data is valid #4359
@kdzwinel let's start by setting up a new repo on your personal GH account and getting the plumbing to work so that it is registered on npm and installed by LH. You can send me any code reviews outside of the LH repo. |
TODO: Is there anything we can reuse from Chromium's "copyless paste" feature, which does JSON-LD parsing? https://github.com/chromium/chromium/search?utf8=%E2%9C%93&q=json-ld&type= |
What's in scope for the first pass here? The three checkboxes above? Also, do we know how much of the heavy lifting can be done by existing libraries? |
Some of the lifting can be done with existing projects, eg https://github.com/zaach/jsonlint, assuming the license is compatible. @kdzwinel could you set up some benchmarks and see how good it is? Depending on it directly is probably fine for v1 but we'd also like to fold in other structured data schemas beyond JSON-LD, so it may be worth setting up a standalone SD validator repo anyway and getting the plumbing set up. |
I do see quite a few json-ld projects already, so I was mostly curious on that side. The three checkboxes above are what you're scoping this MVP to? |
Oh I actually linked to the wrong project. jsonlint does look interesting and may help. I meant to link to https://github.com/digitalbazaar/jsonld.js, which is JSON-LD specific. As for the MVP, the first two (valid JSON, standard props) are required and the third is a stretch goal. Paul, on the subject of an external repo, I was curious to hear your thoughts on internal vs external. My hope/expectation is for the structured data validation code to be reusable in other contexts and also maintained by the web community. I think an external repo would be more conducive to those things, eg how the axe a11y logic is external to LH. WDYT? |
We have been talking a lot in the engineering meeting about solutions such as Lerna, Yarn workspaces, and doing what we are doing with lh-logger (same repo, different package), but I haven't asked: why don't we want a separate repo in the first place? |
breaking out the separate repo discussion into #4955 |
Small update. jsonld.js can do some transforms on JSON-LD structures, but I don't see how we can use it for reporting errors. BTW this package has a non-standard license. I've built a first prototype that checks:
And I found a first production error 😄 (already pinged Eric about it). And a second one 😱 Who would have thought that simple JSON validation would be so useful here. |
I've put together technical decisions and other notes for future reference.

JSON Validation
- https://github.com/zaach/jsonlint (popular, unmaintained, MIT)
- https://github.com/circlecell/jsonlint-mod (fork of the previous one)

JSON-LD doesn't support duplicated keys, so jsonlint-mod was the best choice. It isn't perfect though: error messages are hard to customize.

JSON-LD Validation
There are no json-ld linters/validators that I know of.
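For reference, plain `JSON.parse` cannot even surface duplicated keys -- the last one silently wins -- which is why a linter that tracks them (such as jsonlint-mod) is needed. A quick illustration in Node:

```javascript
// JSON.parse keeps only the last of two duplicated keys, with no warning,
// even though JSON-LD does not allow duplicates at all.
const raw = '{"@type": "Article", "@type": "NewsArticle"}';
const parsed = JSON.parse(raw);
console.log(parsed['@type']); // "NewsArticle" -- the first value is silently dropped
```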
Schema.org Validation
Using raw data from schema.org we are checking:
ℹ️ Objects that are using some vocabulary other than schema.org (e.g. http://rdf.data-vocabulary.org/) are ignored.

Google Recommendations Validation
Using data scraped from SDTT we are checking whether all required and recommended fields are present on schema.org objects. This is not a long-term solution - in the future we would like to use something like ShEx to validate not only the required/recommended fields but also their values. We would also like to support recommendations coming from other companies (Bing/Pinterest/Apple/Yandex etc.). |
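A minimal sketch of the "ignore other vocabularies" rule described above (`KNOWN_TYPES` and `checkType` are hypothetical names, and the type set here is a tiny subset; the real check uses the full vocabulary dump from schema.org):

```javascript
// Hypothetical sketch: flag unknown @type values, but only for objects
// whose @context points at schema.org; other vocabularies are skipped.
const KNOWN_TYPES = new Set(['Article', 'Person', 'Organization']); // subset for illustration

function checkType(obj) {
  const ctx = typeof obj['@context'] === 'string' ? obj['@context'] : '';
  if (!/schema\.org/.test(ctx)) return null; // some other vocabulary: ignore
  if (KNOWN_TYPES.has(obj['@type'])) return null;
  return `Unknown schema.org type: ${obj['@type']}`;
}

console.log(checkType({'@context': 'http://schema.org', '@type': 'Article'}));              // null
console.log(checkType({'@context': 'http://rdf.data-vocabulary.org/', '@type': 'Review'})); // null (ignored)
console.log(checkType({'@context': 'https://schema.org', '@type': 'Articel'}));             // "Unknown schema.org type: Articel"
```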
WIP - if anyone is interested in testing it: https://github.com/kdzwinel/lighthouse/tree/json-ld |
Hello, |
@AymenLoukil SDTT is closed source and does not provide an API. We're also hoping to make this audit relevant for all search engines, as they all handle structured data a bit differently. It's kind of the wild west in terms of consistency 🤠 |
@rviscomi OMG, how is that closed source 🚪? It belongs to Google, no? Why not reuse and enhance it? |
I would encourage also acceptance (or not actively complaining) of markup that links a Schema.org term to a non-Schema.org term. For an example I've been looking at using schema.org/Dataset with a mainEntity property whose value is modeled as a W3C csvw:Table. While it is possible that such markup might be in error, it is also good to accept such possibilities as being potentially sensible. |
@kdzwinel for now let's take out the recommendations based on SDTT scraping. That will help ensure that we don't lag behind recommendations from SDTT in case it changes. |
@kdzwinel See #4359 (comment) - I've updated the audit title and help text. |
Is this functionality still under development? |
A few observations after running this on a few sites:

1. Case sensitivity: JSON-LD is case sensitive, but the Structured Data Testing Tool doesn't complain about it, so markup that SDTT accepts can be a failure here.

2. Empty elements: there are a few JSON-LD elements with no content.

3. Parser errors: I've only seen one example of this, but if you try to escape a single quote in the JSON, our error message isn't super helpful. (JSON only allows a few escape sequences.) You get the same message if you have line breaks in your strings or a trailing comma somewhere. Maybe prefixing the message with "Parse error: " would make it clearer what the problem is. |
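The "Parse error: " suggestion could be as simple as wrapping the parse step; `parseStructuredData` below is a hypothetical helper for illustration, not the actual audit code:

```javascript
// Hypothetical helper: catch JSON.parse failures (single-quote escapes,
// raw line breaks in strings, trailing commas, ...) and prefix the
// message so it is obviously a parse problem rather than a schema one.
function parseStructuredData(text) {
  try {
    return {data: JSON.parse(text), error: null};
  } catch (err) {
    return {data: null, error: `Parse error: ${err.message}`};
  }
}

console.log(parseStructuredData('{"headline": "ok"}').error);  // null
console.log(parseStructuredData("{\"name\": 'oops'}").error);  // starts with "Parse error: "
```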
Hey folks, I poked around with the patch this morning to see how this is progressing (just a quick look so far -- looks good!) One thing I found was that the audit didn't detect externalized structured data: e.g. <script type="application/ld+json" src=...> or <link rel="alternate" type="application/ld+json" href=...>. I'm not actually sure if this is meant to be supported. WDYT? I suspect that this could be fixed by changing this block (https://github.com/GoogleChrome/lighthouse/pull/8328/files#diff-5a7063f8a604afcd9d0a9575aa3b3721R48-R49), which currently only handles scripts with inline content.

EDIT: apparently it's not allowed to use src per the spec [1]: "When used to include data blocks, the data must be embedded inline, the format of the data must be given using the type attribute, and the contents of the script element must conform to the requirements defined for the format used. The src, charset, async, defer, crossorigin, and nonce attributes must not be specified." |
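A rough sketch of the spec's inline-only rule as a gathering filter (`findInlineJsonLd` and the plain script objects are made up for illustration; the real gatherer walks the DOM):

```javascript
// Hypothetical sketch: per the HTML spec, a data block must be embedded
// inline, so scripts carrying a src attribute are not valid data blocks
// and can be skipped (or flagged) rather than fetched.
function findInlineJsonLd(scripts) {
  return scripts
    .filter(s => s.type === 'application/ld+json' && !s.src && s.textContent)
    .map(s => s.textContent);
}

const scripts = [
  {type: 'application/ld+json', textContent: '{"@type": "Article"}'}, // inline: kept
  {type: 'application/ld+json', src: '/data.jsonld'},                 // external: not a data block
  {type: 'text/javascript', textContent: 'doStuff();'},               // wrong type: skipped
];
console.log(findInlineJsonLd(scripts)); // ['{"@type": "Article"}']
```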
Fwiw Google also does not currently consume structured data published that way. |
@danbri Do you know if anyone does? Is it meant to not be supported, or just Google doesn't yet? |
Nobody afaik, at least in mainstream search. The closest we (Google) get is indexing JSON-LD added to the main doc dynamically by scripts. |
Thanks. I can resort to a <script> which will fetch and document.write (or rather head.append) the SD. (Seems silly; that's much harder for simple crawlers to deal with.) |
@mmocny can you not just add them on the server side? Any |
@BigBlueHat In my specific use case I have (potentially) large SD that is not relevant to 99% of client page loads. I was trying to optimize for page size by merely linking to, rather than forcefully including, all SD (thus allowing the user agent to decide if/when it was useful to actually download the data). I guess if I want to keep this behaviour while resorting to script-based dynamic injection, I would need to take control of when to do it conditionally. And if I'm to do that, I could indeed just as well conditionally server-side-include it, as you suggest. |
I'm happy to read about this initiative though I'm wondering whether microdata and RDFa have been considered as well? Reason I ask is because even though JSON-LD is the 'preferred' syntax right now there's still tons of microdata markup out there (and RDFa in lesser quantity as well) that's actively maintained. It would be a real shame if these fall out of scope. |
Operating over the parsed graph (set of node-edge-node triples) from JSON-LD would establish a model well suited to allow dropping in RDFa and microdata parsers. This is what we do at Google and what other public tools do. |
In regards to parsers/linters for validating syntax/vocabulary constraints maybe @gkellogg knows of some you can use. He's also currently maintaining the Structured Data Linter |
Coincidentally, I was doing a bit of my own digging in this space recently. First, a summary of what I think Lighthouse currently has (note: I'm not the author or otherwise involved, I'm just piecing it together):
Some of the benefits of doing it this way:
On the other hand, there may be correctness issues with doing it this way. I'm not at all sure the current implementation supports all the different ways you can publish JSON-LD. And so, even just for correctness reasons it may already be desirable to validate at the graph level, instead of trying to validate the parsed JSON-LD object. We could check this by creating more complex test cases, or finding more complex real-world examples. If that is indeed already true, then the harder ask of validating at the graph level becomes more of a requirement. After that, I suspect adding more parsers/linters (and even other vocabs) would become near-trivial. |
@jvandriel I only took a quick scan, but looks like @gkellogg's tool is "based on RDF.rb", and:
As such, I suspect it isn't applicable. We probably would want something in JS, or at least that could be compiled into WebAssembly (not sure if Lighthouse is ok with WebAssembly). |
Found this, though. |
Depending on the kind of validation you have in mind, it might make sense to validate against JSON-LD's expanded form, which jsonld.js's expand API produces. |
...and indeed, the patch already does this. I should have checked. (Thanks for the note.) |
The Linter is, indeed, a web application, but is based on the "rdf" CLI installed with the linkeddata Ruby gem. It's been run for years, with minimal updates, to help validate schema.org examples. It principally depends on semantic validation of the rendered RDF graph, which could probably be done using other languages (such as rdfjs) with appropriate code. The validation bits are in the rdf-reasoner gem. It's all public-domain software, so people are free to do with it what they please. Of course, Ruby can be compiled into WebAssembly too. |
Audit group: Content Best Practices
Description: JSON-LD structured data syntax is valid
Failure description: JSON-LD structured data syntax is invalid
Help text: Structured data contains rich metadata about a web page. The data may be used to provide more context to users in search results or social sharing. This audit is only checking for JSON-LD structured data syntax errors. For a checkup of Google search eligibility, try the Structured Data Testing Tool.
Success conditions:
- @ keywords are whitelisted in the JSON-LD spec
- @context URLs are valid
- @types are whitelisted on schema.org

Notes:
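The first success condition above, the keyword whitelist, can be sketched roughly as follows (`unknownKeywords` is a hypothetical name, and `JSONLD_KEYWORDS` is only a small subset; the full list comes from the JSON-LD spec):

```javascript
// Hypothetical check: any key starting with "@" must be a keyword
// defined by the JSON-LD spec, which catches typos like "@tpye".
const JSONLD_KEYWORDS = new Set(['@context', '@id', '@type', '@value', '@graph', '@list']); // subset

function unknownKeywords(obj) {
  return Object.keys(obj).filter(k => k.startsWith('@') && !JSONLD_KEYWORDS.has(k));
}

console.log(unknownKeywords({'@context': 'https://schema.org', '@tpye': 'Article'})); // ['@tpye']
```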