In my field, where Google also “throws” its huge DL models at problems, the papers they publish tend to offer very limited explanation of how and why the models work, and they don’t provide comprehensive validation. So I find it difficult to trust their findings here, not only for the LLMs but also for their “scientific” models.