“There are things known and there are things unknown and in between are the doors of perception.” — Aldous Huxley
I’m Huxley Westemeier (’26) and welcome to “The Sift,” a weekly opinions column focused on the impacts and implications of new technologies.
______________________________________________________
Ah, everyone’s favorite multi-billionaire, Elon Musk. In the last few weeks he’s been everywhere: promoting misinformation on his social media platform X (formerly Twitter), getting blasted over new Tesla designs, and becoming Donald Trump’s appointed co-leader of the new U.S. Department of Government Efficiency, or DOGE. He’s a busy guy.
But when it comes to X, Musk has made a few striking remarks about his AI system, Grok. Beginning Oct. 29, he posted multiple ‘tweets’ (or whatever we’re supposed to call them now) calling for the public to upload “x-ray, PET, MRI, or other medical images” to Grok for analysis.
That’s right.
Musk wants as many medical images as possible to help train his AI to detect or diagnose individuals further down the line.
And people ARE uploading pictures.
The results posted by multiple users have proven to be quite outrageous. In one instance, Grok stated that a broken clavicle was actually a dislocated shoulder. And we’re supposed to blindly trust it? Does Musk have a medical degree?
My first question after hearing Musk’s plan: is it even legal? According to Becker’s Hospital Review, data that a user chooses to share with an AI chatbot like Grok isn’t protected under any existing legal framework. Healthcare providers are bound by HIPAA, which requires them to keep patient information confidential; laws like HIPAA help guarantee that such private personal data remains protected. But when you choose to upload an image to an AI like Grok, nothing explicitly states where that data goes. It could be sold to advertisers without the user ever knowing. If you uploaded an X-ray showing uneven leg alignment, you might suddenly see heel lift ads pop up out of nowhere. Perhaps the most amusing part of this entire scenario is that Grok’s own AI Privacy Policy actually states the following:
With the exception of our recruitment activities, we do not aim to collect sensitive personal information (e.g., information related to racial or ethnic origin, political opinions, religion or other beliefs, health, biometric data, criminal background, or trade union membership) and ask that you do not provide us with any such information.
Yet people are STILL willingly uploading their images. After all, it’s entertaining to see if the AI gets it correct, right? That is the scariest part: people seem to be (dangerously) indifferent to the security of their personal data.
Musk will be in a position of massive power once he assumes his DOGE role on Jan. 20, and people might be more inclined to trust him. If he keeps supporting ethically questionable data collection practices, it could foreshadow a dangerous future in which Grok’s AI restrictions are nonexistent. Those limits are in the privacy policy now, but that’s an ever-changing document.
Then again, Grok also recently said that there is “substantial evidence suggesting that Elon Musk has spread misinformation on various topics.” It speaks volumes when your own chatbot makes that claim.