Google’s AI Is Being Used In Hospitals

As advancements in artificial intelligence (AI) continue to permeate our lives, concerns about privacy are on the rise. Google’s new AI tool, Med-PaLM 2, is currently undergoing trials in healthcare facilities, including the Mayo Clinic research hospital, sparking a fresh debate over data privacy. The Wall Street Journal reports that the language model is an offshoot of Google’s PaLM 2, the model that forms the backbone of Google’s Bard.

Med-PaLM 2 is designed to answer medical questions, and Google anticipates it could be especially valuable in regions with limited access to healthcare professionals. The tool was trained on an extensive collection of demonstrations by medical experts, with the aim of handling healthcare conversations better than general-purpose chatbots such as Bard, Bing, and ChatGPT.

The ambitious project is not without its drawbacks, however. A study Google released in May showed that Med-PaLM 2 suffers from the accuracy issues common to many large language models: physicians found more inaccuracies and irrelevant information in responses from Med-PaLM and Med-PaLM 2 than in answers given by human doctors.

The critical issue, though, is the privacy of patient data. In a world where personal data has become a commodity, Google has asserted that customer data will be encrypted and inaccessible to the company during Med-PaLM 2’s testing phase. That pledge, however, does not fully alleviate existing concerns about privacy and data security – especially given the company’s history.

Google’s Senior Research Director Greg Corrado acknowledged that Med-PaLM 2 is still in early development. Despite seeing the potential of the AI tool…