Stop generating sitemap.xml.gz (#6561) #6562
base: main
Conversation
We generate sitemap-index.xml, which is also correctly referenced from robots.txt. However, we still provide the old sitemap.xml.gz. It serves no purpose in addition to the index, and it can cause problems if Google parses it anyway (for example, if it was once submitted to the Google Search Console). For this reason, we stop providing sitemap.xml.gz.

Closes #6561
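For context, crawlers discover the index via the `Sitemap:` directive in robots.txt. A hypothetical robots.txt for a Volto site could look like this (the domain and exact paths are assumptions, not the shipped file):

```text
# robots.txt
User-agent: *
Disallow:

# Only the index is advertised; the individual sitemap files are
# referenced from inside sitemap-index.xml itself.
Sitemap: https://example.com/sitemap-index.xml
```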
@plone/volto-team does anybody else have feedback?
@sneridagh I asked at Edw, and at least the preliminary reply was that it doesn't affect us, since we also check for sitemap-index.xml. So at least for us it seems fine. I'm waiting to see what the other companies have to say about this, but in the meantime I wanted to comment so you know that we've discussed it and we are OK with these changes.
I would add a redirect from sitemap.xml.gz to sitemap-index.xml.
I think that a redirect, while technically possible, is problematic. If we don't redirect, then Google either finds the sitemap from robots.txt, or it has to be changed once in the Search Console; but that is explicit, and there is no magic behind it that might break things in the future.

(The reason the index is not gzipped is that we followed the Google documentation. It seemed simpler to do the same than to start experimenting with what works and what doesn't.)
Sorry, I wrote it incorrectly: I meant a redirect between sitemap.xml.gz and sitemap-index.xml.
@erral Maybe we are not talking about the same thing, but if we create a redirect from sitemap.xml.gz to sitemap-index.xml, we would still be providing pointers to both.

@reebalazs I can understand that people whose consoles point to the old URL can have problems and will require action, so it's not an easy one (and it would need to be documented as a breaking change). The main problem here is that we are providing pointers to both, which could lead to duplicated indexes. Not an easy one. We'll have to test whether issuing a 301 would work for the console, and make sure we issue the correct mimetype too.
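If someone wants to experiment with that, here is a minimal sketch of the 301 idea as Express-style handlers (Volto's server is Express-based; the paths come from this thread, everything else, including `generateSitemapIndex`, is illustrative and not the shipped implementation):

```js
const express = require('express');
const app = express();

// Hypothetical stand-in for whatever actually builds the index XML.
function generateSitemapIndex() {
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"></sitemapindex>'
  );
}

// Permanently redirect the legacy sitemap to the index, so crawlers
// that still request the old URL get pointed at the new one.
app.get('/sitemap.xml.gz', (req, res) => {
  res.redirect(301, '/sitemap-index.xml');
});

// Serve the index with an explicit XML mimetype, since the Search
// Console concern above is that the redirect target must also carry
// the right Content-Type.
app.get('/sitemap-index.xml', (req, res) => {
  res.type('application/xml');
  res.send(generateSitemapIndex());
});

app.listen(3000);
```

Whether the Search Console actually follows such a 301 for a submitted sitemap is exactly the part that would need testing.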
Then, if we are removing an existing URL, I think it should be considered a breaking change. |
The problems are:
Yes, it's a breaking change, but we need this change if we want to fix the situation. The alternative is not doing anything. In that case the old index will continue to work until the site grows large enough and it breaks. Or, even worse, there will be a duplicate that might spoil the SEO. By providing the indexed version by default, we can avoid this.

Also, "breaking" means that Google won't be able to access the old index any more. So one should either go to the Google Search Console and add the new index manually, OR Google can pick up the new index from robots.txt, which it SHOULD be smart enough to do, but I'm not 100% sure that's the case. If someone could confirm that Google picks up the new index from robots.txt, then there is no breaking at all. If not, then "breaking" means that manual intervention is needed in the Search Console. I believe if we put a loud enough message in the changelog to "CHECK YOUR SEO AND UPDATE IF NECESSARY", then this should be enough.
If there is a maximum number of URLs that Google can consider in a single sitemap file, we could keep serving sitemap.xml.gz only for sites below that limit. This way, although still being a breaking change, the URL will keep working, and those small sites that don't reach the limit can keep working. We can set the limit to the one you have set to build each of the files referenced in sitemap-index.xml. (A sketch of this idea follows below.)
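As a hypothetical sketch of that compromise, continuing the Express app from the earlier sketch (the 50,000-URLs-per-file limit comes from the sitemaps protocol; `countPublicUrls` and `buildLegacySitemap` are made-up helpers, not Volto APIs):

```js
// Per-file limit from the sitemaps.org protocol, which Google follows.
const MAX_URLS_PER_SITEMAP = 50000;

app.get('/sitemap.xml.gz', async (req, res) => {
  const urlCount = await countPublicUrls(); // hypothetical helper
  if (urlCount > MAX_URLS_PER_SITEMAP) {
    // Beyond the limit a single-file sitemap is no longer valid, so stop
    // serving it and let crawlers fall back to sitemap-index.xml.
    return res.status(404).end();
  }
  res.type('application/gzip');
  res.send(await buildLegacySitemap()); // hypothetical helper
});
```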
@erral setting a limit is very dangerous, IMO, since the admins of a site won't get notified when they hit it. I agree that this is a breaking change. However, I would fix this once and for all with a breaking-change release and support only the index sitemap. Anything else will lead to confusion and make the situation worse than it already is (we lost our entire SEO ranking for plone.de thanks to this problem; I don't want to imagine what would happen if that happened in a client project).