ChatGPT Introduces Web Browsing, But Only for Paid Subscribers

Not everyone can use ChatGPT’s web browsing feature for free. Currently, OpenAI limits access to ChatGPT Plus and Enterprise users, who pay a monthly fee for an enhanced version of the chatbot. Whether it’s worth the price depends on your specific needs. OpenAI highlights several use cases for this updated version of the chatbot.

One notable improvement is the ability to provide users with real-time information on a wide range of topics, including news, stock market updates, sports scores, and weather forecasts. Unlike traditional search engines, ChatGPT doesn’t just present data; it can also draw conclusions and offer suggestions for the best course of action.

This capability can be extremely valuable for professionals working in finance, research, and data analysis, where accurate and current information is crucial. It can also help content creators produce engaging material that incorporates the latest trends and developments. While it’s too early to judge the feature’s effectiveness, ChatGPT is likely to offer users more informative, helpful, and up-to-date responses.

ChatGPT’s ability to answer healthcare queries is comparable to humans, according to research

A study titled “Putting ChatGPT’s Medical Advice to the (Turing) Test: Survey Study” by Oded Nov, Nina Singh, and Devin Mann was published in JMIR Medical Education Volume 9. The research aimed to assess how well advanced chatbots can address patient concerns and whether patients would accept their responses.

The study selected 10 authentic medical queries from January 2023 and modified them to preserve anonymity. ChatGPT was given these queries and asked to generate its own responses, which were then presented alongside responses from human healthcare professionals. Participants were asked two key questions: could they distinguish the bot-generated answers from the human-generated ones, and did they trust the responses?

Responses from nearly 400 participants revealed notable findings. According to the researchers, “On average, chatbot responses were identified correctly in 65.5% (1284/1960) of the cases, and human provider responses were identified correctly in 65.1% (1276/1960) of the cases.” In other words, both the chatbot and the human responses were identified accurately about two-thirds of the time. The study also found that participants were less likely to trust ChatGPT’s responses as the complexity of the health-related question increased, while logistical questions, such as scheduling appointments and insurance inquiries, received the highest trust ratings.