AI can be dangerous in healthcare assistance if applied wrongly

By Bala | 21 Apr 2024

Yes, I agree that AI can be dangerous in healthcare assistance, but it also depends on where it is being used. As with other use cases, if AI handles customer support in a healthcare company, it is not a big deal. But if it starts prescribing medications or giving medical suggestions to the person it is interacting with, it becomes dangerous, and that needs to change.

AI should not be an acting doctor

Doctors sometimes provide consultations via chat. When doctors are not available to interact with patients, patients may have to talk to a chatbot instead, or the chatbot may do the full work of a doctor itself. This is a role AI should not be allowed to take over from a doctor. I understand that such a system would be thoroughly tested and that we are expected to trust it, but we still have to be very careful with AI, especially an AI that learns from its interactions. It can pick up misleading information and keep using it, and technically it may even be hard to remove the wrong data from the model once it has been learned.


AI should not prescribe medications

When the chatting with patients is done by an AI, we have to ensure that it does not prescribe medications or give medical suggestions. A human doctor can ask follow-up questions based on a patient's answers and steer the conversation in the right direction during a diagnosis. An AI, however, may not ask the additional questions that those answers call for. Say the AI suggests gargling with salt water for a symptom that matches its diagnosis; there may be several reasons why gargling is not advisable for that particular patient. Because the AI cannot rule out all of those possibilities, prescribing medications or recommending medical treatments should not be done by AI.
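One way to enforce "the AI must not prescribe" is a guardrail that screens every draft reply before it reaches the patient. Here is a minimal sketch of that idea; the pattern list, function names, and canned fallback message are all illustrative assumptions, not a clinical-grade filter or any real product's API.

```python
import re

# Hypothetical guardrail: block chatbot replies that look like a
# prescription (dosages, "take N mg", example drug names) and escalate
# them to a human clinician instead. Patterns are illustrative only.
PRESCRIPTION_PATTERNS = [
    r"\btake \d+\s?(mg|ml|tablets?|pills?)\b",
    r"\b\d+\s?(mg|ml)\b",
    r"\b(ibuprofen|amoxicillin|paracetamol)\b",
]

def needs_human_review(reply: str) -> bool:
    """Return True if the draft reply should go to a clinician."""
    text = reply.lower()
    return any(re.search(p, text) for p in PRESCRIPTION_PATTERNS)

def safe_reply(draft: str) -> str:
    """Replace prescription-like drafts with a safe deflection."""
    if needs_human_review(draft):
        return "I can't suggest medications. Please consult a doctor."
    return draft
```

A keyword filter like this is deliberately crude; in practice it would sit alongside model-level safeguards, but it shows the shape of the rule: anything resembling a prescription never ships without human review.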

AI should not do diagnosis

Even when an AI does perform a diagnosis, it should be verified manually by a doctor or another healthcare professional so that we do not rely on it completely. One of AI's biggest advantages is that it can check many possibilities quickly and give us the odds faster, which can be very helpful in diagnosis. If 100 things need to be checked, an AI or automated system can test them far faster than a human, who could take a lot of time.
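That "check 100 things fast, let a human confirm" workflow can be sketched as an automated screen over a list of simple rules. Everything below is a made-up placeholder (the rule names, thresholds, and field names), only meant to show the division of labor: the machine flags, the doctor decides.

```python
# Illustrative sketch: an automated screen runs many simple checks over
# patient readings quickly; a clinician reviews whatever gets flagged.
# Rules and thresholds are invented placeholders, not medical advice.
RULES = {
    "fever": lambda p: p["temp_c"] >= 38.0,
    "tachycardia": lambda p: p["heart_rate"] > 100,
    "low_oxygen": lambda p: p["spo2"] < 92,
}

def screen(patient: dict) -> list[str]:
    """Return the names of all rules the patient's readings trigger."""
    return [name for name, check in RULES.items() if check(patient)]

patient = {"temp_c": 38.5, "heart_rate": 88, "spo2": 97}
flags = screen(patient)  # flagged findings go to a doctor for final review
```

Scaling the dictionary to 100 rules changes nothing about the loop, which is exactly why automation wins on speed here, while the final decision stays with the human reviewer.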

There is also a positive way of looking at this. In the future, when interacting with AI becomes common, doctors could hold a continuous conversation with AI bots that help them diagnose: the AI offers suggestions, and the doctor makes the final decision.
