JAKARTA - OpenAI is rolling out a new beta feature for ChatGPT Plus subscribers. Users report that the update adds the ability to upload files and work with them, as well as multimodal support.
In practice, users no longer need to select a mode such as "Browse with Bing" from the GPT-4 dropdown; the chatbot instead infers what they want based on context.
These additions bring some of the features offered in the ChatGPT Enterprise plan to the standalone individual chatbot subscription.
According to The Verge, the multimodal update had not yet appeared in its ChatGPT Plus subscription, but the outlet was able to test the Advanced Data Analysis feature, which appears to work as expected.
Once a file is uploaded to ChatGPT, it takes a moment to process before it is ready to use; the chatbot can then summarize the data, answer questions about it, or generate data visualizations on request.
The chatbot is not limited to text files. On Threads, a user posted a screenshot of a conversation in which they uploaded an image of a capybara and asked ChatGPT to create a Pixar-style image based on it via DALL-E 3.
They then built on the first image by uploading another one – this time of a surfboard – and asked the chatbot to incorporate it. For some reason, the chatbot added a hat to the image, which did not match the request. Errors like this are likely because the feature is still in development and in beta.