Health is a personal narrative. We carry a history of our life events stored in our bodies. Can Ex Machina, beyond surface-scraping the savory tidbits of a Fitbit, truly grasp what it means to be a historical, autonomous body? The complexity of health lies in the coexistence of physical and social emergence. How might society balance the “autonomous” decision-making capabilities of machine learning against those of public agents? It is time to have a heart-to-heart about the implications of prioritizing risk when systemically harnessing the power of AI. Above all, does AI need to learn to be human, or to maintain emotive distance, in order to maximize social benefit?

There has been much discussion of a future of health care rooted in “technological singularity”: a theoretical event in which machines autonomously and recursively redesign and improve themselves for humanity’s sake. It is not a question of if this technological enlightenment is coming. The more salient issue is when, and to what extent, society will permit the merging of man and machine for the sake of social welfare (not the machines’).

While AI may become “smarter” in order to preserve health, the dynamic of clinical care is that the patient is “less smart” about such clinical matters. Patients’ decisions may be less informed (or less smart, if that suits your fancy) when they act contrary to clinical directives. The job of AI is to shape human-regulated preferences, optimizing the effectiveness of the health care system by being smart for the patient.

Machine learning must find a way to cope with the changing values and actions of individual patients. It must account for shifting values, which I call “ethical malleability.” Health care relies on patients’ ability to change ethical positions, which in turn are connected to more probable patient behaviors and decisions. As social animals, we make up and change our minds. How does machine learning avoid miscalculations owed in part to the irrationality of human activity? And how can machine learning make choices not for its own body but as the anointed proxy for the bodies of the collective?

The logical uncertainty of ethical malleability and value loading would need to be reckoned with by machine learning. As a society, we work with and against each other in a social space; there is a pull and release among people. How, then, do we clinically and politically account for the push and pull of machines in this system? As Donella Meadows, a pioneer in systems thinking, said, we cannot control a system, but we can dance with it. That dance will mean actualizing machine learning for the collective benefit of society without decentering humanness into a cold, self-redesigning algorithm. What is the code for the recursiveness of being fallibly human?

Featured image courtesy of Pixabay.

About The Author

Michele Battle-Fisher
Adjunct Assistant Professor, Wright State University Boonshoft School of Medicine

Michele Battle-Fisher is a systems theorist, public health scholar and bioethicist. She is an Adjunct Assistant Professor with the Wright State University Boonshoft School of Medicine. She is the author of Application of Systems Thinking to Health Policy and Public Health Ethics: Public Health and Private Illness. Her latest venture is co-producing the upcoming full-length documentary, Transhuman (Working Title) (http://news2share.com/start/category/transhuman-a-documentary/).