Scientists have built a framework that gives generative AI systems like DALL·E 3 and Stable Diffusion a major boost by condensing them into smaller models — without compromising their quality.
You’re building beautiful straw men. They’re lies, but great job.
I said originally that we need to improve how the AI interprets the model, not just build ever-bigger models that will invariably have the same flaws they do now.
Deterministic reliability is the end goal of that.
Will increasing the model size actually help? Right now we’re dealing with LLMs trained on essentially the entire internet. It is difficult to scale beyond that.
Finding a better way to process said model would be a much more substantive achievement, so that when particular details are needed it’s not just random chance whether it gets them right.
Where exactly did you write anything about interpretation? Getting “details right” by processing faster? I would hardly call that “interpretation”; that’s just being wrong faster.