Google crawler parses empty DOM with SSR
The logic below works as follows:
If the user agent is a crawler/bot (e.g. Googlebot), render the page with SSR; otherwise serve the SPA version.
serverMiddleware: [
  {
    handler(req, res, next) {
      // Bots get SSR; everyone else gets the SPA fallback
      const isBot = crawlersRegex.test(req.headers['user-agent'] || '')
      res.spa = !isBot
      next()
    }
  }
]
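For completeness, crawlersRegex is defined elsewhere in the config; the pattern below is illustrative, not the exact one from my setup:

// Illustrative only; the real pattern matches a longer list of crawlers
const crawlersRegex = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot/i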
Using the Fetch as Google tool:
https://www.google.com/webmasters/tools/googlebot-fetch
I get this strange result: the rendered content is empty. That was with the loading indicator disabled.
If the loading indicator is enabled, the rendered result contains only the loading indicator.
Other:
- Google uses the
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
user agent. If I fake it in my Chrome, I correctly get an SSR rendering.
- curl returns the correct DOM structure if a bot user agent (like the one above) is used; see the sketch after this list.
- The Google result looks like the crawler gets a CSR rendering.
- Facebook, Twitter, etc. also parse the content correctly (e.g. from the Open Graph meta tags). This issue happens only with the Google crawler.
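For reference, the curl check above can also be reproduced with a small Node script (the URL is a placeholder for the actual site):

// A minimal sketch: request the page with a Googlebot user agent
// and print the returned HTML, which should contain the SSR DOM.
// 'https://example.com' is a placeholder, not my real site.
const https = require('https')

https.get('https://example.com', {
  headers: {
    'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
  }
}, res => {
  let body = ''
  res.on('data', chunk => { body += chunk })
  res.on('end', () => console.log(body))
})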
Assumptions:
- If no one else is affected by this but me, then the conditional rendering above might be the issue.
- If Nuxt returns a basic initial HTML shell and the Google crawler is programmed to stop as soon as it receives some HTML (even before the server has finished rendering), this might also be a reason.
Update:
- If I force-set
res.spa = false
in every request, Google renders the results correctly. Therefore something is going wrong with the conditional rendering; see the sketch below.
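For reference, this is roughly the forced variant that made Fetch as Google work, sketched as a nuxt.config.js fragment:

// nuxt.config.js: force SSR for every request; with this in place,
// Fetch as Google renders the page correctly
export default {
  serverMiddleware: [
    {
      handler(req, res, next) {
        res.spa = false // never fall back to SPA mode
        next()
      }
    }
  ]
}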