You can now train ChatGPT on your own documents via API


Developers can now bring their own data to customize GPT-3.5 Turbo outputs.

A CGI rendering of a robot on a desktop treadmill. (credit: Getty Images)

On Tuesday, OpenAI announced fine-tuning for GPT-3.5 Turbo—the AI model that powers the free version of ChatGPT—through its API. The feature allows developers to train the model on custom data, such as company documents or project documentation. OpenAI claims that a fine-tuned model can perform as well as GPT-4 in certain scenarios, at a lower cost.

In AI, fine-tuning refers to the process of taking a pretrained neural network (like GPT-3.5 Turbo) and further training it on a different dataset (like your custom data), which is typically smaller and possibly related to a specific task. This process builds off of knowledge the model gained during its initial training phase and refines it for a specific application.
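
To give a concrete sense of what that custom dataset looks like, GPT-3.5 Turbo fine-tuning takes chat-formatted training examples, one JSON object per line (JSONL). The sketch below writes a tiny, purely hypothetical training file; the product name, questions, answers, and the support_bot_training.jsonl filename are all invented for illustration, and a real job would need many more examples than this.

```python
import json

# Hypothetical training examples in OpenAI's chat fine-tuning format:
# one JSON object per line, each holding a short example conversation.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for AcmeWidget."},
            {"role": "user", "content": "How do I reset my widget?"},
            {"role": "assistant", "content": "Hold the recessed button for five seconds until the LED blinks twice."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for AcmeWidget."},
            {"role": "user", "content": "Does the widget work offline?"},
            {"role": "assistant", "content": "Yes, all core features work without an internet connection."},
        ]
    },
]

# Write the examples as JSONL, the upload format the fine-tuning API expects.
with open("support_bot_training.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```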

So basically, fine-tuning teaches GPT-3.5 Turbo about custom content, such as project documentation or any other written reference. That can come in handy if you want to build an AI assistant based on GPT-3.5 Turbo that is intimately familiar with your product or service, even though knowledge of it is missing from the model's original training data (which, as a reminder, was scraped from the web before September 2021).
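
For a rough idea of the workflow, the sketch below uses the pre-1.0 openai Python package (the SDK that was current when the feature launched) to upload the hypothetical training file from the earlier example and start a fine-tuning job. The filename, organization, and fine-tuned model ID are made up; the real model string is returned when the job finishes.

```python
import openai

openai.api_key = "sk-..."  # replace with your own API key

# Upload the JSONL training file prepared earlier (hypothetical filename).
training_file = openai.File.create(
    file=open("support_bot_training.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job against the base GPT-3.5 Turbo model.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)  # the job runs asynchronously

# When the job completes, it reports a fine-tuned model name that can be
# used with the regular chat completions endpoint (the ID below is made up).
response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:acme::abc123",
    messages=[{"role": "user", "content": "How do I reset my widget?"}],
)
print(response.choices[0].message.content)
```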

