Google says Gemini, launching today inside the Bard chatbot, is its “most capable” AI model ever. It was trained on video, images, and audio as well as text.
For the fine-tuning stage at the end, where you turn it into a chatbot, you need specific training data (e.g. OpenOrca). People have used ChatGPT to generate such data. Come to think of it, if you source that data through Mechanical Turk, you almost certainly end up including ChatGPT-generated text, since many workers use it.
Yes, it could be done that way, and maybe GPT models were used, but calling these APIs isn't free, and there are plenty of open (and surely internal) models that could be used for that purpose.
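For anyone unfamiliar with what this fine-tuning data looks like: instruction-tuning sets like OpenOrca are typically distributed as one JSON record per line, pairing a prompt with a model-generated response. A minimal sketch of building one such record (the field names follow the common OpenOrca-style layout, but treat them as an assumption, not a spec):

```python
import json

# Hypothetical example of a single instruction-tuning record in the
# style of OpenOrca-like datasets (field names are an assumption).
record = {
    "system_prompt": "You are a helpful assistant.",
    "question": "Explain what fine-tuning does to a base language model.",
    "response": "Fine-tuning adjusts a pretrained model's weights on "
                "instruction/response pairs so it behaves like a chatbot.",
}

# Such datasets are commonly stored as JSON Lines: one record per line.
line = json.dumps(record)
print(line)
```

The point upthread is just that the "response" field here is often produced by another model (e.g. via the OpenAI API), which is how ChatGPT text ends up inside other models' training data.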