According to Futurism, OpenAI has launched a health-focused version of ChatGPT that can ingest full medical records. When people tap into this feature, they will see an explicit warning that it shouldn't be used for diagnosis or treatment. Isn't this a bit like saying, "Don't think of a pink elephant", except that instead of asking people not to think something, we're asking them not to use something?
