Artificial Intelligence: The Counter-Argument
AI is being touted as the doctor’s new lifeline — we put this proposition to a skeptical doctor.
When a new technology arrives on the scene, the tantalising promise of new possibilities often drowns out dissenting voices.
The groundswell of support for artificial intelligence is no exception. This magazine has extensively covered pharma’s conviction that AI will transform the HCP’s practice for the better, forging stronger ties to the industry in the process.
Not all HCPs are convinced, however.
We put the question to Joel Zivot, anaesthesiologist and fellowship director in critical care medicine at Emory University.
What aspects of AI do you find problematic?
Humans consider the act of learning to also be the act of acquiring the wisdom associated with knowledge.
Machines also “learn”, but the meaning and mechanism of machine learning differ from the way humans learn. One aspect of machine learning that is of potential concern is referred to as “the hidden layer”.
When we think of a simple computer program, there is an input and an output — if we are trying to get a machine to solve a problem, we give it a series of if-then instructions. When presented with a problem, the computer generates choices from fixed rules. Outwardly, this can be very impressive. Chess is an example of such machine capacity. The best chess programs can access a far greater number of if-then solutions than the best human. A computer chess program is not intelligent, however, and doesn’t “learn”.
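The if-then approach described above can be sketched in a few lines. This is a deliberately toy example, not an actual chess engine: the point is that every decision traces back to a rule a programmer wrote by hand, and the program can never go beyond those rules.

```python
# A toy rule-based "player": every decision comes from fixed if-then
# rules written by a programmer. Nothing here is learned.
def rule_based_move(board_state):
    # Each rule is hand-written; the program cannot respond to any
    # situation its author did not anticipate.
    if board_state == "opponent_king_exposed":
        return "attack"
    if board_state == "own_king_threatened":
        return "defend"
    return "develop_piece"

print(rule_based_move("own_king_threatened"))  # defend
```

However large the rule set grows, the behaviour remains fully inspectable — which is exactly what the hidden layer, discussed next, is not.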
Face recognition, on the other hand, is difficult for a computer but easy for people. How we actually recognize a face is difficult to model for a machine, and programming it has been a challenge. Simple reverse engineering of the human method is not sufficient. Computers are beginning to “learn” to recognize faces through a component of machine learning referred to as the hidden layer.
Humans also have a hidden layer in the way we recognize a face. We don’t understand how AI learns in the hidden layer. How can we understand the risk of partnering with a technology that makes decisions for us in ways we cannot fully see into?
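A minimal illustration of what a “hidden layer” is: a tiny network whose intermediate units compute features the output combines. The weights below are hand-chosen for clarity; in real machine learning they are adjusted automatically during training, and the learned values rarely have such a readable interpretation — which is the source of the opacity described above.

```python
# A tiny network with one hidden layer, computing XOR.
# Hand-chosen weights make it legible here; trained weights usually aren't.
def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two intermediate features not visible at the output.
    h1 = step(x1 + x2 - 0.5)   # fires if either input is on  (OR)
    h2 = step(x1 + x2 - 1.5)   # fires only if both are on    (AND)
    # Output layer combines the hidden features: OR and not AND = XOR.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

In a trained network the analogues of `h1` and `h2` number in the thousands and carry no labels, so even the system’s builders may be unable to say why a particular input produced a particular decision.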
We imagine, or hope, AI will help us make better decisions than we might otherwise make. The quality of human decisions can be degraded by things like biased preferences, racism, sexism, or all those other unreasonable human qualities we seek to shed. What if AI, as it learns, actually learns all too well, including negative human qualities? Under those circumstances, we would be no better off.
Do I think AI is going to look at healthcare and make choices more in alignment with the morality we desire around issues of illness and health? I have no certainty that it will or that it would even care to do so.
Morality tends to be a flashpoint in the AI debate…
These days, many people are impressed by the flashing lights of artificial intelligence. I was recently at a tech conference in Silicon Valley. It struck me how little discussion there was about the morality of AI. Many of the discussions seemed to advocate for technology that would simply replace humans.
The replacement of human workers, and the toll it takes on the human psyche, needs to be carefully considered. We seek to be aided by AI innovations, not replaced.
Why is the case for human replacement so powerful?
In many activities human factors might lead to adverse outcomes where a computer might have done a better job. Human workers suffer the occasional fates of being human. We get sick and miss work. We work inefficiently and ineffectively. We demand coffee breaks, lunch breaks, maternity and paternity leave, a safe, healthy and happy work environment. We expect to be paid for our work including a benefits package. Machine workers will demand none of these. They won’t go on strike. They won’t show up late. They will not be paid a wage.
The flipside is that these same robot workers will also not pay taxes, not work late, and not seek better methods of production based on experience. Happy workers contribute to general well-being.
Technology has replaced the human worker throughout history, and AI will be no exception. A question mark remains over AI versus the human worker, and over the profound effect this contest will have on production and society.
Will it not transform healthcare for the better?
What I see pharma doing here is not transforming itself into something that will necessarily address the concerns of the consumer, who desires a robust supply, better medicines for a richer variety of problems, and a reduced cost. Instead I predict a simple amplification of normal pharma business practice — a re-imagining of how AI will affect its bottom line. I doubt AI will make pharma more ethically accountable or push it towards a better ability to solve the problems that concern consumers.
What problem is pharma “really” trying to solve?
Production costs might naturally erode the bottom line and I suspect that's the problem pharma would try to solve.
Pharma is not necessarily incentivized to make new drugs. If AI can bring production cost down, especially on a patented product, a larger return on investment can be achieved. In this model, the public price of a drug may not be affected, or perhaps lowered by a small amount as a show of corporate goodwill.
For pharma, the bottom is the bottom line; the larger return on investment will not be passed to the consumer. Once the public accepts a price, an increased profit margin is easily hidden from public scrutiny. When it comes to drug pricing, our negative reaction to even a small price increase is a direct reflection that pharma is, in truth, a public good masquerading as a private good.
Can you provide an example of this?
AI is increasingly utilized in oncology pharmaceutical research and development. New AI technology has not produced a corresponding fall in oncology drug pricing.
Cancer as a disease is highly evocative, justifiably so. Pharma targets conditions it believes it can address within its business model. Many diseases and conditions plague the human race and remain unsolved or under-attended.
A fear of death by cancer motivates the public to support cancer research including the new interest in precision oncology medications. Our willingness to pay a premium to survive cancer places pharma in a position of leverage. What is the actual production cost in precision oncology drug development? How much of that anticipated increased profit margin will be passed on to the consumer already willing to pay a premium in the hope of remaining alive?
How can pharma pitch AI in a way that resonates with doctors?
I am an intensive care physician and an anesthesiologist. My interactions with patients are broad. In the intensive care unit and in the operating room I often encounter the problems of providing effective medical care when someone’s life directly depends upon it. I am interested in a sincere partner that can help me address the problem of human suffering caused by diseases with few pharmacologically available choices. Address these problems with me and let's talk.
I also don’t expect pharma to adopt a business practice that simply is not sustainable. Advertising a product around human suffering the way we might sell a new car feels uncomfortable. Using AI in pharma development to claim that patients’ lives will be vastly improved rings false, at least for now.