An open-source version of ChatGPT, OpenAI's text generator, has just been released. It presents itself as a generic equivalent, able to perform all the functions of the original. However, the released version still requires training, which puts it out of reach for most people.
The ChatGPT text generator
ChatGPT is a chatbot tool that, through artificial intelligence, specializes in dialogue. The open-source version was created by developer Philip Wang, who combined the supervised and reinforcement learning from human feedback (RLHF) technique with PaLM, the language model used by Google.
The open-source version was released in late 2022. Because it is a generic equivalent, it can perform the same functions as the official ChatGPT, such as drafting emails, suggesting computer code, and writing academic texts.
The released version of ChatGPT
Although it combines the two techniques, RLHF and PaLM, the model still needs training, which unfortunately does not come with the generic version. Anyone who adopts the open-source code will therefore have to train the artificial intelligence on their own machine.
The big problem is that this requires very powerful hardware: training an artificial intelligence, and then processing the requests that come in, is not within everyone's reach.
How to train this artificial intelligence?
The model is a near copy of ChatGPT, which is a language tool, and as such it needs to be exposed to an enormous number of examples, such as social media posts, published news articles, and e-books of all kinds.
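To make that first step concrete, here is a minimal, purely illustrative sketch of how a language model learns from such examples: it is trained to predict the next token of every sequence it is shown. It uses generic PyTorch at toy scale; the model class, sizes, and data are invented for illustration and are not the project's actual code.

```python
import torch
import torch.nn as nn

# A toy causal language model: given a sequence of tokens, predict the
# next token at every position. Real systems do exactly this, only with
# billions of parameters and vast text corpora. All numbers are made up.
VOCAB, DIM = 1000, 64

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)  # left-to-right, so causal
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)  # next-token logits at each position

model = TinyLM()
optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

# One training step on a fake batch of token sequences, standing in for
# tokenized social posts, news articles, and e-books.
batch = torch.randint(0, VOCAB, (8, 128))
optim.zero_grad()
logits = model(batch[:, :-1])  # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1)
)
loss.backward()
optim.step()
```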
In addition, the RLHF technique used in the system generates several responses for each human prompt, which means humans are essential to the process: they rank the answers so that the system learns the best way to respond.
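The ranking step can be sketched as well. Below is an illustrative reward model trained with a standard pairwise preference loss: it learns to score the answer humans preferred above the one they rejected. Again, every name and number here is a made-up toy, not the project's actual API.

```python
import torch
import torch.nn as nn

# Toy reward model for the RLHF ranking step: map a token sequence
# (prompt plus answer) to a single scalar score.
VOCAB, DIM = 1000, 64

class TinyRewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.score = nn.Linear(DIM, 1)

    def forward(self, tokens):
        # Mean-pool the token embeddings, then project to one reward value.
        return self.score(self.embed(tokens).mean(dim=1)).squeeze(-1)

rm = TinyRewardModel()
optim = torch.optim.AdamW(rm.parameters(), lr=1e-4)

# Fake batch of ranked pairs: humans preferred `chosen` over `rejected`.
chosen = torch.randint(0, VOCAB, (8, 128))
rejected = torch.randint(0, VOCAB, (8, 128))

# Pairwise preference loss: push the chosen score above the rejected one.
optim.zero_grad()
loss = -nn.functional.logsigmoid(rm(chosen) - rm(rejected)).mean()
loss.backward()
optim.step()
```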
All of this ends up being very expensive, and therefore not everyone can afford it. One company calculated that training a model with 1.5 billion parameters costs up to $1.6 million. And building a truly good system takes far more than that: Google's PaLM model, for example, has about 540 billion parameters.