www.neocosmos.space

Author: Orlando Delmarre
Original work and ideas by the author. Reproduction or use without express authorization is prohibited. Created on 01/12/2024.
DIGITAL BOOK: AI AND THE DOCTOR. How it affects your family and you. Artificial intelligence no longer assists the doctor — it silently replaces them.

INTRODUCTION
Medicine is changing faster than most people imagine. Today, artificial intelligence is no longer an experiment: it diagnoses, suggests treatments, and corrects decisions in real time. The doctor, who for centuries was the sole authority before the patient, is beginning to share and sometimes yield that power to a system that never tires, never forgets, and learns from every case.
This book shows, with concrete examples, how AI is transforming consultations, hospitals, and the doctor–patient relationship. It presents the dilemmas that arise when the patient arrives with precise information before even sitting on the examination table, when the machine suggests a different diagnosis than the doctor’s, or when the family looks up alternative treatments in the middle of an emergency.

You will not find futurism or exaggerations here. Everything you are about to read is happening right now, in clinics and hospitals around the world. Each chapter reveals how medical practice is being reconfigured: from the doctor as the only source of truth, to the doctor as a supervisor of algorithmic decisions; from the passive patient to the informed patient who validates each step; from health as a monologue to health as a conversation between human and machine.
If you are a patient, you will discover how this transformation will affect your access to healthcare, the way you will receive a diagnosis, and the level of control you will have over your treatment. If you are a healthcare professional, you will see clearly where your profession is heading and what skills you will need to remain relevant.
This book does not aim to convince you that artificial intelligence is good or bad. It aims to help you understand what it means to live alongside it in medical practice, and to enable you to decide how to position yourself in this new scenario. Because the change has already begun, and the rules of medicine will never be the same again.

TABLE OF CONTENTS
THE PATIENT KNOWS MORE THAN THE DOCTOR (1)
Critique of the knowledge asymmetry between doctor and patient.

THE DOCTOR FACING THE MIRROR (2)
Analysis of the obsolescence of the general practitioner in the face of AI.

AI: THE NEW GENERAL PRACTITIONER (3)
Examination of the functions that AI already performs with greater accuracy.

FROM GOD TO PROFESSIONAL: THE FALL OF THE MONOPOLY (4)
Review of the doctor’s historical role and loss of authority.

THE FAMILY FACING DIGITAL HEALTH (5)
Argument on the use of AI in family health management.

AI AT THE HOSPITAL BED (6)
Presentation of the tension experienced by the critically ill patient who cross-checks their treatment.

THE AUGMENTED DOCTOR: SURVIVING IN THE DIGITAL ERA (7)
Deconstruction of medical resistance and the logic of adaptation.

HEALTH AS A CONVERSATION (8)
Analysis of the new health model: patient, AI, and collaborating doctor.

THE NEW HEALTH EDUCATION: LEARNING TO ASK QUESTIONS (9)
Examination of the need for digital literacy in healthcare.


FINAL REFLECTION
PROJECTION TO THE YEAR 2050
COMPARATIVE TABLES IN IMAGES


THE PATIENT KNOWS MORE THAN THE DOCTOR (1)
For centuries, the figure of the doctor did not represent merely a profession. It functioned as a symbol of power. The word “doctor” triggered an immediate mechanism of respect and, in most cases, unconditional submission. The mere presence of a doctor in a room altered the dynamic of the place, because it was understood that this person possessed something others did not: the monopoly of knowledge about life and death. The doctor not only told you what you had, but defined your health, and their verdict was not up for discussion.


This power was not sustained by the academic degrees hanging on a wall nor by the symbolism of a white coat. Its true foundation was the radical asymmetry of information. For most of human history, medical knowledge was a scarce commodity, protected in books that were difficult to access, formulated in technical language incomprehensible to the average person, and transmitted through a closed educational system. Whoever possessed that knowledge inevitably held control over those who did not. The patient’s ignorance was the necessary condition for the doctor’s authority.


This dynamic can be clearly seen in the language and gestures that have endured to this day, especially among older generations. No one would enter a doctor’s office with a simple “good afternoon.” The formula was, and often still is, “good afternoon, doctor.” The use of the title is not mere formality. It is an act of reverence. It is the explicit recognition of a hierarchy. It is equivalent to saying: “you are on a higher plane, and I, the patient, have come to receive your judgment and your orders.” The patient did not feel like a client, but a subject. They felt they had to please the doctor, not bother them, ask just enough questions not to seem ignorant, but not too many to avoid appearing defiant. This attitude was heightened with an unempathetic or uncommunicative professional, where the patient, instead of demanding information, ended up trying to guess what the doctor was thinking, nodding to instructions they did not fully understand.


But this structure, which seemed immutable for so long, has begun to fracture. Not abruptly, but through a slow, silent, and unstoppable erosion. The cause of this fracture is not a social revolution nor a crisis in medicine. It is something much simpler and deeper: knowledge stopped being locked away. For the first time in history, anyone with a device connected to the internet has access to an amount of medical information that surpasses, in volume and updates, what a doctor can retain in memory. They can do it from home, without asking permission and without feeling the pressure of a rushed appointment.


The first sign of this change is generational. People who grew up in a world without access to digital information largely maintain the attitude of reverence. For them, the doctor is still the only reliable source. But for younger generations, and especially for those familiar with technology, artificial intelligence is not a mysterious entity. It is just another tool, as commonplace as a word processor or a calculator. They do not fear it. They do not revere it. They simply use it.


And it is the use of this tool that is completely redefining the medical consultation. The informed patient no longer arrives at the office as a blank page waiting to be written on. They arrive with a draft. They have entered their symptoms into an AI system, received a list of possible diagnoses, read about common treatments, researched side effects, and compared alternatives. They arrive with a mental map and, most importantly, with specific questions.


This is where the old power dynamic collapses. The doctor, accustomed to delivering a monologue, now finds themselves in the middle of a dialogue for which they were never trained. The patient no longer just asks, “What do I have?” Now they ask: “Are you sure it’s not X, because my symptoms match that condition by 80% according to several databases?” Or, even more directly: “Doctor, I’ve read that the medication you’re prescribing has a negative interaction with the antihypertensive I take. Did you consider that variable?”


The situation becomes even more tense when the patient names the source of their information. The phrase “I looked it up on ChatGPT” or “An AI suggested that…” introduces a third actor into the consultation. A silent actor, without a body, but with access to the entirety of the world’s medical information. At that moment, the doctor faces an existential dilemma. They cannot dismiss the AI’s information with a simple “that’s not reliable” because the patient knows that this tool is not based on an opinion, but on the analysis of millions of clinical cases, scientific papers, and global statistics. Nor can they validate it outright, because doing so would be admitting that their own judgment is, at the very least, incomplete.


The foundation on which the doctor’s authority rested has become unstable. It is no longer a question of whether the doctor is right or wrong. The problem is that their knowledge, based on personal experience and on training that inevitably becomes outdated, now competes with a system that processes data in real time and on a planetary scale. Individual experience has been confronted by massive statistics. The doctor, who once was the only source of truth, is now just one more source, and their opinion can be verified, contrasted, and, at times, refuted in seconds.


There is a fact often presented to question this new dynamic. Various studies, such as those published in journals like the Journal of Medical Internet Research, point out the dangers of “cyberchondria,” a state of heightened anxiety resulting from self-researching symptoms online. It is argued that access to information has not made patients healthier, but more anxious and confused. This fact is real, and the contradiction is evident: more information does not always translate into better decisions. However, this fact does not invalidate the thesis of structural change. The anxiety generated by overinformation is not an argument for returning to ignorance. It is the consequence of a transition. It shows that patients now have a powerful tool but have not yet been educated to use it wisely. The problem, then, is not access to knowledge, but the lack of digital health literacy. The existence of incorrect information or the misinterpretation of correct information does not restore the doctor’s monopoly; it simply exposes the need for a new type of guidance to help navigate the excess of data—a role the traditional doctor is not prepared to fulfill.


The general practitioner, therefore, finds themselves in a precarious position. Their degree and experience alone are no longer enough to inspire trust. Authority is no longer granted automatically; it must be earned at every appointment, demonstrating not only what they know, but also their ability to engage in dialogue with a patient who also knows, who asks questions, and who has the power to verify each of their statements. Power has changed hands. Perhaps not completely, but irreversibly. The patient is no longer alone before the doctor.
And that changes everything.


THE DOCTOR FACING THE MIRROR (2)
The traditional medical system is not collapsing due to an external attack or a technological conspiracy. Its crisis is internal. It has become obsolete because of the path it chose for itself, because of the structure it built. If today the general practitioner finds themselves in a vulnerable position, it is not due to a lack of dedication or knowledge, but because the role that modern medicine assigned them has become their greatest weakness. To understand this, one only needs to honestly observe the process of a typical consultation. The system, without realizing it, has created the perfect conditions for its own replacement.


A person goes to a general practitioner with a set of symptoms. The doctor listens, performs a basic physical examination, but their ability to reach a definitive diagnosis is, in most cases, extremely limited. Their main function is not to discover the cause of the problem at that moment. Their function is to initiate a chain of delegation. The doctor has essentially become an administrator of diagnostic procedures. They are not the detective who solves the case; they are the official who sends the case to different departments.


They order a blood test, which will be processed by a laboratory. They request an X-ray, which will be performed by a technician and interpreted by a radiologist. They request an MRI, whose detailed report will be written by another specialist. In each of these steps, the general practitioner hands over responsibility for the analysis to technology and other experts. They do not see the blood under the microscope, nor analyze the scan image from scratch. They wait for the reports.


And here lies the breaking point. Those reports do not return as raw, ambiguous data. They arrive already processed, distilled, and with a clear conclusion. The radiologist’s report does not say “there is a spot on the L5 vertebra.” It says, with technical precision: “a disc herniation is observed at the L5-S1 segment with nerve root compression.” The lab report is not a simple list of numbers; it already highlights values that are out of range and often suggests possible causes.


At this point, what function remains for the general practitioner? Their task is reduced to a simple act of synthesis—to connect the dots. They take the symptoms described by the patient at the start, cross them with the diagnosis already provided in writing by another specialist, and apply a standard treatment protocol. Their contribution is limited to being the final link in a chain built by others. This step, which for decades was their main source of value, is precisely the one most vulnerable to automation.


This is where artificial intelligence can not only compete but surpass the human. An AI can receive exactly the same inputs: the patient’s symptom history and the specialist’s report. With that data, it can do what the doctor does—but with overwhelming advantages. It can compare that information with a database containing millions of clinical cases, all treatment protocols updated worldwide, and the complete list of drug interactions. It can do so in seconds, without fatigue, without personal biases, and without the pressure of having ten other patients waiting outside.


AI does not need to interpret the X-ray from scratch. It only needs to do what the doctor does: read the radiologist’s report, understand it, and propose an evidence-based plan of action. And not only that—after proposing it, it can answer a hundred patient questions with infinite patience, explaining every detail, every risk, and every alternative. It can suggest treatment options that are cheaper or have fewer side effects, something a doctor rushed by routine rarely does.


When the medical system looks in the mirror, it does not see an irreplaceable sage. It sees a logistical process that has become inefficient. The general practitioner has become an intermediary, a human bridge whose role can be carried out more safely and completely by an algorithm. This is not an opinion or a futuristic speculation; it is a functional description of a present reality.


The most common objection to this analysis focuses on the value of human “intuition” and “experience.” It is argued that an experienced doctor can detect nuances that a machine would overlook. However, this individual experience, though valuable, is inherently limited and anecdotal. It is based on the hundreds or, at most, thousands of cases a doctor has seen over their career. The experience of an AI amounts to millions of cases. Its judgment is not based on personal memories but on massive statistics. While the doctor may fall into confirmation bias, prescribing what has always worked for them, AI can detect unusual patterns and suggest differential diagnoses a human might never consider.


Of course, there is empirical evidence that seems to contradict this logic. Research on the placebo effect, such as that conducted by Ted Kaptchuk at Harvard University, has conclusively demonstrated that the ritual of medical care and the empathetic connection with a professional can produce real, measurable physiological improvements. A patient who feels heard and cared for by a human tends to respond better to treatment. This fact suggests that completely eliminating the human component could, paradoxically, worsen health outcomes, even if the AI’s technical diagnosis is more accurate. The contradiction is undeniable: the human touch is a therapeutic tool.


However, this reality does not save the general practitioner in their current role. It simply redefines them. It does not invalidate the automation of the diagnostic process; it instead highlights the importance of a new role—that of the human communicator or manager. Diagnosis can and will increasingly be automated. But delivering that diagnosis, providing emotional support to the patient, clarifying doubts, and building a joint action plan is an area where the human remains superior. The problem is not that the doctor is useless; the problem is that the main function the system has assigned them—that of being a mere synthesizer of reports—no longer has justification.


The system faces an uncomfortable truth: it is not replacing a figure of irreplaceable wisdom. It is displacing a professional whose main value lay in exclusive access to information and the execution of an administrative step. Now that knowledge is free and that step can be optimized, the very structure of the generalist medical system has ceased to make sense. Traditional medicine is not being attacked. It is simply looking at the reflection of its own evolution.

AI: THE NEW GENERAL PRACTITIONER (3)
The figure of the general practitioner is not being threatened by a technology of the future. It is being displaced by a tool of the present. Artificial intelligence has ceased to be a laboratory curiosity to become an everyday utility, integrated into the devices we carry in our pockets. Its most disruptive function is not performing superhuman feats, but carrying out, with relentless efficiency, the tasks that once defined the work of the family doctor. AI is no longer just an assistant; for millions of people, it is now the new first point of contact with the healthcare system.


The main advantage of AI is not its complexity, but its availability. It knows no office hours, requires no prior appointments, and has no waiting lists. It operates twenty-four hours a day, seven days a week. This immediate accessibility eliminates the first and greatest barrier that once existed between a patient with a question and the ability to get medical guidance: time and bureaucracy. The process that used to involve calling a clinic, coordinating a schedule, and physically traveling is now resolved in the thirty seconds it takes to open an app and type a question.


To understand the magnitude of this displacement, it is enough to list the functions a general practitioner routinely performs—functions that AI can already execute:
Symptom intake: The patient can describe their condition with a level of detail rarely possible in a ten-minute consultation. They can add context, history, and precise timelines.
Analysis and differential diagnosis: AI processes this information and instantly cross-references it with a database of millions of clinical cases, producing a list of possible diagnoses ranked by probability.
Test suggestions: If symptoms are ambiguous, the system can recommend which type of lab work or imaging would be most relevant to clarify the diagnosis.
Interaction detection: It can analyze the patient’s current medications and warn about possible contraindications with any new treatment (a toy sketch of this check appears just below).
Treatment recommendations: For common conditions, it can suggest standard treatment protocols based on the most up-to-date scientific evidence.
Drug information: It provides details about dosage, side effects, and can offer generic or more affordable alternatives.
Resolving questions: And perhaps most importantly, it can answer an unlimited number of follow-up questions.
This last point is crucial. The human doctor, limited by time and fatigue, delivers an informational monologue. AI offers a conversation. The patient can ask “why?”, “what does this mean?”, “what happens if it doesn’t work?” until their need for information is completely satisfied. This capacity for interactive dialogue empowers the patient in a way the traditional system never could—or never wanted to.
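

Several of the functions on that list are, at bottom, structured lookups rather than acts of intuition. For readers comfortable with a little code, here is a toy Python sketch of the interaction-detection item. Everything in it is illustrative: the function name is invented for this example, and the interaction table holds just three well-known drug pairs, not a clinical database.

```python
# Toy sketch only: the interaction table is a tiny sample of well-known
# drug pairs, not a clinical database, and the names are illustrative.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "dangerous drop in blood pressure",
    frozenset({"methotrexate", "ibuprofen"}): "reduced methotrexate clearance",
}

def check_new_prescription(current_meds: list[str], new_drug: str) -> list[str]:
    """Warn about known interactions between a new drug and current medications."""
    warnings = []
    for med in current_meds:
        pair = frozenset({med.lower(), new_drug.lower()})
        if pair in KNOWN_INTERACTIONS:
            warnings.append(f"{new_drug} + {med}: {KNOWN_INTERACTIONS[pair]}")
    return warnings

print(check_new_prescription(["warfarin", "metformin"], "aspirin"))
# -> ['aspirin + warfarin: increased bleeding risk']
```

A real system would consult curated pharmacology sources and the patient’s full record. The point of the sketch is simply that the logic is mechanical, which is precisely why it automates so well.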


The comparison with the limitations of a human professional is inevitable and revealing. The human doctor operates with fallible memory and an experience necessarily limited to the thousands of cases they have seen. Their judgment may be influenced by fatigue, by unconscious biases, or by the last case they attended.


The most common objection is safety. Could AI make a fatal mistake? It is the right question, but the comparison must be fair. Human doctors also make mistakes. Medical errors due to misdiagnosis, fatigue, or lack of information are a significant and well-documented cause of health problems. The choice is not between a perfect human system and a flawed artificial one. It is between a human system with inherent flaws and an artificial system whose errors, while possible, tend to diminish as its data and algorithms improve. AI does not offer infallibility, but it does offer a reduction in risk by minimizing patient ignorance and misinformation.


Despite its processing power, AI is neither infallible nor completely objective. A study published in The Lancet Digital Health in 2021 by Adewole and his team revealed a fundamental contradiction in its supposed superior performance: many dermatological diagnostic algorithms, trained predominantly on images of light skin, showed significantly lower accuracy when assessing lesions on dark skin. This demonstrates that AI, being fed data produced by humans, can inherit and even amplify our systemic biases. This fact does not invalidate AI’s utility, but it does refute the idea that it is perfect. It forces us to understand that AI is a tool requiring supervision, auditing, and above all, training data that reflects the real diversity of the human population. The problem is not in the tool itself, but in how we build and calibrate it. The solution, therefore, is not to discard it, but to improve it and use it while knowing that it may still have margins of error, however small. There are multiple AI systems: ask the same question to two or three of them and compare the results. Agreement raises confidence in the answer, while disagreement is a signal to investigate further or consult a professional.


It is crucial to clarify the scope of this phenomenon. Artificial intelligence is not replacing the neurosurgeon who operates on a brain tumor, nor the intensive care specialist who makes life-or-death decisions in seconds. Those roles depend on complex manual skills, decision-making under extreme pressure, and physical interaction that AI does not possess. The displacement is happening at the base of the pyramid: in the role of the general practitioner, whose main job has become processing information and applying protocols—tasks in which machine efficiency already surpasses human performance.


The end result of this process is a radical redistribution of power. Control of medical knowledge, once jealously guarded by the profession, now flows toward the patient. It is the patient who decides when to seek consultation, whom to consult—a human, an AI, or both—and how to use the information obtained.


FROM GOD TO PROFESSIONAL: THE FALL OF THE MONOPOLY (4)
The almost sacred authority that doctors have held for centuries is not a natural phenomenon. It was a construction. It was forged slowly, layer upon layer, in a historical process that turned a service provider into an unquestionable figure of power. To understand why that figure now wavers, it is necessary to trace the origin of its power. Its pedestal was not made of pure science, but of something much more mundane and fragile: the monopoly of knowledge.


In the dawn of civilization, the figure of the healer was inseparably tied to mystery. The shaman or tribal healer was not a technician of the body; they were an intermediary with the spirit world. Their power did not lie in empirical knowledge but in access to secrets that others did not possess. Healing was a ritual, and the respect it inspired was fueled by fear of the unknown and reverence for the one who seemed to control it. Health and illness were not biological processes, but divine messages, and the healer was their sole interpreter.


Over time, medicine began to rationalize. In classical Greece and Rome, figures such as Hippocrates and Galen sought to separate medical practice from superstition. But even in this new phase, knowledge remained the privilege of a tiny elite. Medicine became a philosophical and technical discipline, recorded in texts that only a few could read. The doctor ceased to be a sorcerer to become a sage, but the power dynamic remained intact. The gap between the one who knew and the one who did not was still vast.


It was with the institutionalization of medieval European universities that this gap turned into a fortified chasm. Medicine was formalized as a profession, and the title of Doctor became an official seal of authority. This system, rather than democratizing knowledge, locked it away even more tightly. Texts were written in Latin, the language of academia and the Church, inaccessible to the general population. Medical knowledge became a guild secret, protected by impenetrable language and institutional barriers.


The peak of this authority came in the 19th and 20th centuries. Scientific advances—the germ theory, the discovery of antibiotics, the development of modern surgery, and vaccines—gave doctors tangible and spectacular power. They could cure infections that had once been death sentences. They could operate on bodies and heal what had seemed impossible. In this context, the figure of the doctor reached its zenith. The white coat became a cloak of scientific infallibility. Society granted them blind, absolute trust. The patient ceased to be a participant in their own health and became a passive object of medical intervention. Their role was to obey. The illegible handwriting on a prescription was not questioned; it was followed. Diagnoses, pronounced with unshakable certainty, were accepted as revealed truth.


But every monopoly based on the scarcity of a resource is doomed to disappear when that resource becomes abundant. And the resource in this case was information.


The first crack in this centuries-old monopoly appeared with the mass adoption of the internet. For the first time, an ordinary person could, from home, search for information about their symptoms, read about medication side effects, or discover the existence of alternative treatments. At first, the medical establishment’s reaction was one of disdain and paternalism. “Dr. Google” became a pejorative term used to dismiss patients who arrived at the clinic with a stack of printed pages. However, this phenomenon was not a passing fad; it was the first sign of a deep change. It was the manifestation of a fundamental human desire: to understand and control one’s own destiny. People were not trying to replace the doctor—they wanted to understand them. They wanted to participate.


What began as a crack with the internet has turned into a flood with the arrival of artificial intelligence. The difference is qualitative. The internet offered chaotic, disorganized, and often unreliable information. AI offers structured knowledge. It does not deliver a list of a thousand web pages; it delivers a coherent analysis, based on the totality of available evidence and applied to the user’s particular case. AI has done what was once unthinkable: it has put in the hands of anyone a medical consultation tool that, in certain aspects, is more powerful than the doctor’s own mind.


At this point, an apparent contradiction must be addressed. Despite unprecedented access to information, trust in doctors as a profession remains notably high in many societies. Global surveys such as the Edelman Trust Barometer consistently place scientists and doctors among the most trusted figures, far above journalists, businesspeople, or politicians. This fact seems to refute the idea of a “fall” in authority. However, what it reveals is not the permanence of the old model, but the changing nature of trust. People still want to trust a human guide. The desire to be cared for by an expert person has not disappeared. What has changed are the conditions of that trust. It is no longer blind trust in the title or the white coat. It is conditional trust, earned and maintained through demonstrated competence, transparency, and above all, the ability to engage in equal dialogue. The patient no longer trusts by default; they trust because they can verify.


The monopoly has fallen. Medical knowledge has been freed from its academic and professional confinement. The inevitable consequence is that the power derived from that monopoly has dissolved.



THE FAMILY FACING DIGITAL HEALTH (5)
The impact of artificial intelligence on healthcare is not limited to the individual consultation of an informed adult. Its deepest and perhaps most transformative effect is taking place at the core of society: the family. For generations, the management of family health—especially that of children—has been a territory dominated by uncertainty and dependency. A child with a fever in the middle of the night, a sudden earache, or an unexplained skin rash would trigger an almost universal reaction: a mix of anxiety, improvisation, and a rushed trip to the nearest emergency room, where parents, devoid of information, handed over total control to the on-duty professional.


That paradigm has changed. Not because illnesses have disappeared, but because information is no longer a distant commodity. Today, it is found in the device parents carry with them at all times. The arrival of accessible AI tools has altered the first response to a domestic health crisis. What used to be a leap into the unknown can now be a process of structured analysis.


The function of AI in this context is not to cure or to issue a final diagnosis. Its real value is to provide something that was once unthinkable: a preliminary mental map. When a child presents symptoms, the adult in charge no longer has to operate blindly. They can open an application and describe the situation in detail: “four-year-old child, fever of 39°C, onset three hours ago, no other apparent symptoms, in good spirits.” In seconds, the system offers a guide that turns anxiety into an action plan.


This mental map includes elements of immense practical value. First, a list of possible causes, from the most common and benign to those requiring attention. Second—and most importantly—a clear identification of warning signs: those symptoms that, if they appear, indicate the need to seek immediate medical attention. Third, a series of safe measures that can be taken at home to relieve discomfort. And fourth, a set of prepared questions to ask the doctor if a consultation becomes necessary.
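

For readers who think in structures, the same map can be written down as a simple record with four fields. The Python sketch below is a hypothetical illustration of that structure, filled in loosely from the fever scenario above; the class name and the sample entries are invented for the example and are not medical guidance.

```python
from dataclasses import dataclass, field

@dataclass
class MentalMap:
    """The four elements of the preliminary map described above."""
    possible_causes: list[str] = field(default_factory=list)
    warning_signs: list[str] = field(default_factory=list)
    home_measures: list[str] = field(default_factory=list)
    questions_for_doctor: list[str] = field(default_factory=list)

# Sample entries only, loosely based on the fever scenario above;
# illustrative, not medical guidance.
fever_map = MentalMap(
    possible_causes=["common viral infection", "ear infection"],
    warning_signs=["fever above 40°C", "difficulty breathing", "unusual lethargy"],
    home_measures=["offer fluids", "light clothing", "re-check temperature hourly"],
    questions_for_doctor=[
        "Which warning signs matter most at this age?",
        "When should we come back if the fever persists?",
    ],
)
```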


This process does not replace the pediatrician. What it does is transform the quality of the interaction with them. The family no longer arrives at the office in a state of uninformed panic. They arrive with a basic understanding, with precise observations made in the hours prior, and with concrete questions. The dialogue with the professional becomes more efficient and productive. The parent ceases to be a mere anxious spectator and becomes an informed collaborator in their child’s care.


The role of the adult in the home has therefore been redefined. The excuse “I’m not a doctor, I don’t know what to do” has lost its validity. Access to knowledge imposes a new form of responsibility: knowing how to use these tools with prudence and critical thinking. It is not about parents becoming doctors, but about becoming more competent managers of information. They are expected to distinguish a general recommendation from a medical order, to understand the tool’s limitations, and to use it to make calmer, better-informed decisions. The mobile phone, once seen as a source of distraction, becomes—when used this way—the family’s first logical partner.


Of course, this new dynamic is not free from risks, and there is empirical data that exposes an important contradiction. A study published in Pediatrics by researchers at Boston Children’s Hospital analyzed the behavior of parents who used online symptom checkers. They found that, in many cases, access to information did not reduce anxiety but increased it—a phenomenon known as proxy cyberchondria. Faced with lists of possible illnesses, including the rarest and most serious, many parents ended up more frightened than before, leading them to request unnecessary tests or doubt correct medical diagnoses. This shows that access to information alone is not a universal solution. Without a filter of judgment and proper education, it can become an additional source of stress.


However, this risk does not invalidate the tool. What it does is highlight the need for a new form of health education. The problem is not that the information exists, but that we have not been taught how to process it. The solution is not to return to ignorance, but to develop digital health literacy at home. Just as a child is taught to look both ways before crossing the street, adults and young people must be taught to cross-check sources, not to make decisions based on the first result, and to use information as a guide, not as a verdict.


The modern family is no longer alone or isolated in its health decisions. It is connected to an ecosystem of information that can be of great help if navigated intelligently. The ultimate goal is not for a parent to diagnose an ear infection, but to know what to do while waiting for a doctor’s appointment, to know which symptoms to monitor, and to arrive at that appointment with the peace of mind of having acted logically and prepared. Artificial intelligence does not replace parental care, instinct, or professional judgment. But it complements them, gives them structure, and enhances them. And in the daily management of a family’s health, that difference is fundamental. It turns fear into calm and uncertainty into an action plan.




AI AT THE HOSPITAL BED (6)
The scenario of a patient at home, consulting an AI about common symptoms before visiting the doctor, represents only the first layer of this transformation. There is a far more complex and tension-filled context in which this new dynamic is emerging: the hospital room. Here, the patient is not an autonomous individual with a passing ailment. They are a person in a state of extreme vulnerability, dependent on a clinical system for vital functions and facing a serious diagnosis: organ failure, a severe infection, cancer.


Traditionally, this has been the sanctuary of absolute obedience. It is assumed that the hospitalized patient—stripped of their clothes, their routine, and their autonomy—must hand over total control and blindly trust the medical team. However, connectivity does not vanish upon entering a clinic. The mobile phone, with its instant access to artificial intelligence, has also penetrated this space, introducing a variable the system was not prepared to handle: real-time verification.


From their bed, the patient can check every step of the treatment they are receiving. They can type: “Diagnosed with stage III non-small cell lung cancer. Current protocol is chemotherapy with cisplatin and vinorelbine. Are there more effective targeted therapies or immunotherapies for my genetic profile if an EGFR mutation is detected?” In seconds, the AI returns a detailed report with the latest clinical studies, success rates of alternative treatments, and the protocols recommended by the world’s leading oncology societies.


And it is at this precise moment that a new kind of anguish can arise—a tension the outpatient does not experience. If the AI’s information matches the treatment they are receiving, the patient feels relief and confidence. But what if it doesn’t? What happens when the AI reveals the existence of a more modern treatment, with a higher survival rate or fewer side effects, one the doctor has never mentioned?


At that moment, the patient is trapped in an unbearable crossroads. On one side is the medical team—the visible authority administering their treatment and on whom their life depends. On the other is the information provided by the AI, suggesting there may be a better option. The doubt that takes hold is not a matter of simple intellectual curiosity; it is a question born from the fear of death: Am I getting the best treatment possible, or just the standard treatment available at this hospital? Is the omission of that alternative due to lack of knowledge, to protocol, or to economic reasons?


This mental burden is immense. The patient, already weakened by illness, is forced to take on a new role: auditor of their own care. They become a silent observer, analyzing every drug, every dose, every decision. They live a double reality: outwardly, they are the cooperative patient who nods and thanks; inwardly, they are a researcher comparing, doubting, and suffering in silence. Often, they choose not to confront the doctor for fear of being a nuisance and labeled a “difficult patient,” of creating tension that could affect the quality of their care, or simply because they lack the strength to start a technical discussion in their fragile state.


When a patient decides to break that silence and present the information to the doctor, the professional’s reaction is critical. A confident, patient-focused doctor will explain the reasoning behind their therapeutic choice, discuss alternatives, and, if necessary, be willing to reevaluate the plan. However, the medical system, structured hierarchically, often reacts defensively. The patient’s question is not interpreted as a search for collaboration but as a challenge to authority. The response may be evasive, paternalistic, or even hostile, which only serves to increase the patient’s distrust and anxiety.


Here, a fundamental contradiction emerges. A study published in the British Medical Journal (BMJ) on shared decision-making shows that patients who actively participate in treatment decisions tend to have better clinical outcomes and greater satisfaction. The system, in theory, promotes this model. But in practice, it is not prepared for a patient who arrives with data that challenges the system’s own judgment. The “shared decision-making” model works as long as the patient chooses from the options the doctor presents. It breaks down when the patient introduces options the doctor had not considered.


The hospitalized patient, therefore, faces an exhausting paradox: they have more information than ever, but also an emotional and cognitive load they have never had to bear before. Access to knowledge—supposed to be an empowering tool—becomes a source of stress that can be as harmful as the illness itself. Not because the information is wrong, but because the patient is alone in processing it. The family often lacks the technical knowledge to help; the medical team is not always willing to engage in dialogue; and the AI, while informative, offers no comfort or emotional support.


This scenario is not a critique of the intentions of individual doctors, but an exposure of the system’s obsolescence. Health institutions must evolve to integrate this new reality. They need to create protocols to address informed patient doubts, train their professionals in communication skills for an environment of symmetrical knowledge, and, above all, understand that AI is not an enemy but a new actor in the clinical conversation. If they do not, the medicine of the future will be neither more human nor more effective. It will simply be a place where, in addition to fighting their illness, the patient will have to fight the doubt of whether they are truly getting the right help.




THE AUGMENTED DOCTOR: SURVIVING IN THE DIGITAL ERA (7)
In the face of artificial intelligence’s advance and the empowerment of the informed patient, the most instinctive and widespread reaction within the medical profession has been resistance. This resistance manifests in many forms: from discomfort with information the patient brings from the internet, to a corporate defense of “intuition” and “experience” as irreplaceable values. While understandable from a human standpoint, this posture is strategically unsustainable. Fighting AI in the field of information access and processing is a battle lost in advance. The only logical path for the doctor who wishes to remain relevant is not to compete, but to evolve. It is not about being replaced—it is about being augmented.


The concept of the “augmented doctor” refers to a professional who understands the limits of their own cognition and uses technology to overcome them. This is a doctor who humbly accepts that they cannot memorize every scientific paper published each day, cannot recall every possible drug interaction, and whose personal experience—though valuable—is just a small fraction of global medical knowledge. Instead of seeing AI as a threat to their authority, they see it for what it is: the most powerful tool ever available in the history of medicine.


Current resistance is rooted in a fundamental fear: the loss of status. The doctor was trained in a paradigm where they were the sole source of truth. Admitting that an algorithm can, in certain tasks, surpass them is perceived as professional degradation. However, this perception is mistaken. An airline pilot is no less professional for using an autopilot system; on the contrary, their skill lies in managing complex systems to ensure the safety of the flight. In the same way, the doctor of the future will not be less of a doctor for relying on AI. Their value will shift from memorizing data to managing information, exercising critical judgment, and fostering human connection.


The practical application of this model is transformative. Imagine a doctor in their office who, faced with a complex case, instead of relying solely on memory, opens an AI interface in front of the patient and says: “Based on my experience, this could be X. But let’s verify it together. Let’s enter your symptoms and your lab results into this system to see if there are other possibilities I might not be considering.”


This simple act has profound consequences. First, it destroys the barrier of opaque authority and replaces it with one of transparency and collaboration. The patient stops seeing the doctor as an adversary or gatekeeper of secrets and begins to see them as an ally. Second, it objectively improves the quality of diagnosis. The doctor combines their experience and direct observational skills with the analytical power of AI, minimizing the risk of error from bias or lack of information. Third—and most importantly—it redefines the doctor’s value. Competence is no longer measured by how much is remembered from memory, but by the ability to ask the right questions, interpret AI results with discernment, and apply that information to the unique, human context of the patient in front of them.


This evolution collides with a cultural contradiction within medicine itself. On one hand, the profession prides itself on being based on scientific evidence—evidence-based medicine—a model that demands clinical decisions be grounded in the best and most up-to-date data available. AI is, by definition, the most advanced tool for accessing and synthesizing that evidence. Ignoring or rejecting it is, paradoxically, a betrayal of the very principle of evidence-based medicine. Yet, a study on the adoption of new technologies in clinical practice, published in the Journal of the American Medical Association (JAMA), reveals that the main barrier is not a lack of evidence on the tool’s effectiveness, but cultural inertia and resistance to changing established work routines. Doctors, like any other human group, often prefer to keep doing what they have always done, even when better-proven methods exist.


The doctor who does not adapt to this new reality will become progressively irrelevant. Their work may retain some value, but it will be outperformed in efficiency, safety, and results by those who have embraced new technologies. The informed patient, given the choice, will inevitably prefer the augmented doctor over the doctor of the past.


The survival of the medical profession—especially in primary care—does not depend on waging a cultural war against technology. It depends on an intelligent reconversion. This requires a deep reform of medical education, where future professionals are taught not only anatomy and pharmacology but also information management, algorithm ethics, and communication skills for an environment of symmetrical knowledge.


The doctor does not need to know how to program an AI. They need to know how to use it, how to question it, and how to integrate it into a clinical act that remains profoundly human. The machine can process the data, but it is the human who must communicate the diagnosis with empathy, discuss the options taking into account the patient’s values and fears, and offer comfort when science reaches its limits.


HEALTH AS A CONVERSATION (8)
The model of healthcare that has dominated the last century is exhausted. It was based on a simple but fundamentally flawed premise: that health was a one-way process. A monologue. The doctor, from their position of authority, spoke, and the patient, from their position of ignorance, listened and obeyed. Communication flowed in only one direction, from top to bottom. This framework, which worked as long as knowledge was a scarce resource, has become unsustainable in an era where information is ubiquitous and accessible to all.


The arrival of artificial intelligence is not the cause of this breakdown; it is the channel that has accelerated it and made it irreversible. It has exposed the fragility of a system that depended on patient passivity. Today, we stand on the threshold of a radically different paradigm: health is no longer a monologue—it is a conversation. And in this conversation, three main actors participate: the patient, artificial intelligence, and the healthcare professional.


The first actor, and the new center of the system, is the patient. They are no longer a passive recipient of care. They are an active agent, an investigator of their own biology, a manager of their own well-being. Armed with tools that allow them to access personalized medical information, the modern patient comes to the clinical interaction with a level of preparation that was once unthinkable. They do not seek merely to receive orders, but to understand their options, weigh the risks and benefits of each, and actively participate in decision-making. Their role has evolved from obedience to shared responsibility.


The second actor is artificial intelligence. It functions as a universal interface for knowledge, an instant second opinion, and a personal assistant for health management. AI does not replace clinical judgment, but it enhances and democratizes it. Its role is to level the playing field, breaking the information asymmetry that for so long defined the doctor–patient relationship.


The third actor is the healthcare professional, whose role is undergoing the most profound transformation. Stripped of their former monopoly on knowledge, their value no longer lies in being an infallible oracle. Their new function, much more complex and human, is to be a guide, an interpreter, and a collaborator. They are the expert who helps the patient navigate the information the AI provides. They are the one who translates statistical data into the individual and emotional reality of the patient. They bring critical judgment, practical experience, and, above all, the empathy and human contact that no machine can offer.


In this new conversational model, the dynamic is completely different. The process no longer necessarily begins in the doctor’s office. It may begin with the patient consulting their symptoms with an AI. Then, with that preliminary information, the patient comes to the doctor not for a verdict, but to start a dialogue. The conversation might be: “I’ve experienced these symptoms and, according to the information I’ve consulted, the possibilities could be A, B, or C. What is your opinion based on my physical examination and your experience?” The doctor, in turn, can use their own AI tools to verify or expand that information, turning the consultation into a collaborative problem-solving synergy.


This approach clashes with how the current healthcare infrastructure is organized. Health systems—both public and private—are designed for the efficiency of the monologue: short consultations, rigid protocols, and billing models based on the number of procedures performed. The conversational model, by its nature, requires more time per patient, more flexibility, and a valuation of dialogue as a therapeutic act in itself. A study by the Commonwealth Fund on high-performing primary care systems worldwide identifies a common feature: doctors have long-term relationships with their patients and devote more time to each consultation. This finding suggests that real efficiency in healthcare comes not from speed, but from the quality of the relationship. Modern medicine, obsessed with short-term performance metrics, is structurally ill-equipped to foster the kind of conversation the new reality demands.


Overcoming this systemic barrier is the greatest challenge to implementing this new paradigm. It requires reform not only in the mindset of professionals but also in the very economic and administrative architecture of healthcare. Time devoted to communication must be valued and compensated—not just prescriptions or procedures.


The result of this conversational model is safer, more personalized, and more human medicine. Safer, because decisions are made with more information and cross-checked from multiple perspectives, minimizing the risk of error. More personalized, because treatment adapts not only to the patient’s biology but also to their values, preferences, and life context. And, paradoxically, more human—because by freeing the doctor from the burden of merely repeating data, it allows them to focus on what only a human can do: listen, understand, comfort, and accompany.


Health ceases to be something that is “received” and becomes something that is “built” together. It is a continuous process of learning and collaboration between an empowered patient, a technology that provides knowledge, and a professional who brings wisdom and humanity. This is the future. The era of the monologue has ended.
The era of the conversation has begun.



THE NEW HEALTH EDUCATION: LEARNING TO ASK QUESTIONS (9)
Universal access to medical information has not, by itself, created a healthier or wiser population. It has created a population with more data—which is not the same thing. Information overload, often contradictory and taken out of context, has in many cases produced a new form of anxiety and confusion. This leads to the inevitable conclusion that the problem was never the lack of access to knowledge. The real problem is, and always has been, the lack of proper education to process it. The educational model of the past—based on memorizing facts—is completely useless in this new environment. Today, the most important skill for survival and well-being is not knowing the answer, but knowing how to frame the question.


Traditional health education was based on the premise of scarcity. People were taught basic concepts about hygiene or common illnesses because it was assumed that this would be the only information they would access in their lifetime. Today, the premise is the opposite: infinite abundance. Therefore, teaching the names of diseases or the mechanisms of drugs is an inefficient exercise. Any artificial intelligence can provide that information more completely and more up to date in seconds. The new goal of health education should not be to fill the mind with data, but to train it in the art of critical inquiry.


Learning to ask is a much more complex skill than it appears. It is not simply typing a doubt into a search bar. It is a structured process that involves several layers of ability.


The first is precision. There is a fundamental difference between a vague question and a specific one. Asking “Why do I have a headache?” will yield generic and often alarming results. A well-formulated question, on the other hand, provides context: “Woman, 35 years old, pulsating headache on the right side of the skull, started one hour ago, accompanied by sensitivity to light, no nausea, with a history of occasional migraines.” A precise, detail-rich question allows any AI system to filter out noise and provide guidance that is far more relevant and useful.
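

The contrast can even be made mechanical. The following minimal Python sketch assembles a context-rich question from a few structured fields; the function and field names are invented for the example.

```python
def build_health_query(profile: str, symptom: str, onset: str,
                       accompanying: str, history: str) -> str:
    """Assemble a context-rich question from structured fields,
    in the spirit of the precise example above."""
    return (
        f"{profile}, {symptom}, started {onset}, {accompanying}, "
        f"with a history of {history}. What are the most likely causes, "
        f"and which warning signs would justify immediate medical attention?"
    )

print(build_health_query(
    profile="Woman, 35 years old",
    symptom="pulsating headache on the right side of the skull",
    onset="one hour ago",
    accompanying="accompanied by sensitivity to light, no nausea",
    history="occasional migraines",
))
```

The value is not in the code itself but in the habit it encodes: before asking, gather the who, the what, the where, the when, and the what else.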


The second layer is critical evaluation of the answer. No AI response should be accepted as a final verdict. The new health literacy involves teaching people to read these responses with healthy skepticism. Is the information presented as certainty or as probability? What sources does the system use to reach that conclusion? What warning signs does the AI itself recommend monitoring? The key is to understand that AI is a guidance tool—it is not infallible.


The third, and most sophisticated, layer is triangulation. In a world with multiple sources of information, relying on just one is a mistake. The digitally health-literate person knows to cross-check. If an AI provides an answer, they ask a similar question to a second and third AI from different developers. They compare the results. If there is a clear consensus, confidence in the information increases. If there are significant contradictions, a warning light goes on, indicating the need for deeper investigation or, critically, consultation with a human professional. This act of comparing and contrasting is the modern equivalent of seeking a second opinion—only democratized and available to all.
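

For the technically inclined, the triangulation step itself can be sketched in a few lines of Python. The sketch below assumes the replies from three different assistants have already been collected as plain text, and its keyword matching is deliberately naive; the point is the consensus check, not the language processing.

```python
import re

def consensus(answers: list[str], candidates: list[str]) -> dict[str, int]:
    """Count how many of the collected answers mention each candidate
    term (case-insensitive)."""
    return {
        term: sum(1 for a in answers if re.search(re.escape(term), a, re.IGNORECASE))
        for term in candidates
    }

# Replies from three different AI systems, already collected as text:
answers = [
    "Most consistent with migraine; tension headache is also possible.",
    "The pattern suggests migraine with aura.",
    "Likely migraine; sinusitis is less likely without congestion.",
]
print(consensus(answers, ["migraine", "tension headache", "sinusitis"]))
# -> {'migraine': 3, 'tension headache': 1, 'sinusitis': 1}
```

A term mentioned by all three systems suggests consensus; a one-of-three outlier is exactly the warning light described above.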


The goal of this new education is not—and this must be clear—to turn every citizen into an amateur doctor. It is exactly the opposite. It is to form citizens competent enough to know when they can handle a minor situation on their own and when they must, without hesitation, seek professional help. It is to give them the tools so that, when they arrive at the clinic, they can have a high-level conversation with the doctor. A person who understands their condition, has researched their options, and has clear questions is not a “difficult” patient. They are the best kind of patient: an active partner in their own care.


However, there is a documented contradiction here. Simply providing information—even high-quality information—does not guarantee better decisions. Research in cognitive psychology and behavioral economics, as popularized by Daniel Kahneman, has shown that the human brain is riddled with biases that distort judgment. Confirmation bias leads us to seek and value information that supports our preexisting beliefs. Negativity bias causes us to give disproportionate weight to possible negative outcomes, no matter how unlikely. A patient may read that a drug has 99% efficacy and a 1% risk of a severe side effect, and their mind—by design—will obsess over that 1%. This shows that digital health literacy cannot stop at teaching people how to search for information. It must go one step further.


The new health education must fundamentally include basic training in critical thinking and in understanding these cognitive biases. It must teach people to recognize their own mental traps. It must include an introduction to statistics so that concepts like relative risk and absolute risk stop being terrifying abstractions and become tools for rational decision-making. The solution to misinformation and anxiety is not less information—it is a mind better prepared to process it.
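

A short worked example shows why this matters. Assume, purely for illustration, a complication that affects 2 people in 1,000 and a treatment that cuts that risk in half; the calculation below contrasts the two ways of describing the very same effect.

```python
# Illustrative numbers only: a complication that affects 2 people in 1,000,
# and a treatment that cuts that risk in half.
baseline_risk = 2 / 1000   # 0.2% without treatment
treated_risk  = 1 / 1000   # 0.1% with treatment

relative_reduction = (baseline_risk - treated_risk) / baseline_risk
absolute_reduction = baseline_risk - treated_risk

print(f"Relative risk reduction: {relative_reduction:.0%}")   # 50%
print(f"Absolute risk reduction: {absolute_reduction:.2%}")   # 0.10%
# "Cuts the risk in half" and "spares 1 extra person in 1,000" describe the
# same treatment; the first sounds dramatic, the second puts it in scale.
```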


The responsibility for this training is shared. It falls on educational systems, which must integrate this literacy into their programs from an early age. It falls on healthcare professionals, who must assume a new role as educators and guides. And it falls on each individual, who has the obligation to shift from being a passive consumer of care to an active manager of their own health.


Knowledge is no longer power. In the information age, power is judgment. And judgment is built through the deliberate practice of asking, doubting, verifying, and thinking critically. The ultimate goal is not to have all the answers. It is to master the art of asking the right questions.


FINAL REFLECTION
The relationship between the doctor and artificial intelligence is not a technical issue—it is a mirror that shows us how the human role changes when a tool appears that thinks faster, processes more data, and is right more often. This book has illustrated the magnitude of that change in medicine, but what truly matters is that the same dynamic will repeat itself in other areas of our lives. What happens today in a medical consultation may be a preview of what we will experience in work, education, politics, and even in personal decisions.


In medicine, the change is especially visible because it affects something we all value: health. But the real question is not whether AI can diagnose better than a doctor—it’s how we as a society change when we allow critical decisions to be made automatically. What does it mean for a doctor to lose part of their clinical judgment? What does it mean for a patient to accept a treatment simply because “the system said so”?


Artificial intelligence brings undeniable benefits: fewer errors, faster processes, access to updated information, and treatments more finely tuned to each person. But it also brings a silent cost: the loss of personal judgment—not only for doctors, but also for patients. When we stop questioning and simply follow what a screen indicates, we begin to depend on a system we do not fully understand. And once that dependence sets in, it is difficult to reverse.


Another angle this book invites us to consider is that of trust. The doctor–patient relationship has always been a mix of science and human connection. AI contributes the science, but who will handle the connection? If the patient feels that the doctor is only executing the system’s instructions, trust shifts from the person to the algorithm. This is not inherently good or bad, but it does redefine what we expect from a healthcare professional and what we value in them.


A key question arises: what place do we want empathy to have in the future of medicine? If precision and efficiency become the main measures of success, the risk is that the human component will become secondary, almost decorative. And yet we know that care, listening, and understanding directly influence a patient’s recovery. This creates a dilemma: do we want the medicine of the future to be flawless in numbers but cold in experience?



Another unavoidable reflection is on the role of the informed patient. Today, anyone can consult an AI before going to the doctor. This empowers people, but it can also generate anxiety, doubt, or conflict. The challenge is not just having access to quality information, but knowing how to use it. Just as a doctor must learn to integrate AI into their work, the patient must learn to integrate it into their life without losing judgment. This opens a new field that is not technical, but educational: digital health literacy.


We must also think about medical training. A doctor who grows professionally with AI support from the start will have fewer opportunities to develop certain skills that were once essential. If the system always provides the correct answer, how will they cultivate clinical intuition, the ability to detect the atypical, or the experience of deciding under uncertainty? Here lies a long-term risk: that the day the system fails, there will not be enough professionals with the mental training to replace it.


Ultimately, the great question that remains is what we want to preserve as humans in this coexistence with AI. The machine can accumulate data, learn patterns, and propose optimal decisions, but it does not live the experience of illness, nor does it understand fear, hope, or resignation. That is still human territory. If we do not protect and develop that space, we will lose it without realizing it. This is not about opposing technology, but about balancing it so that the medicine of the future is efficient, yet also deeply human.


When the reader closes this book, the invitation is to keep thinking beyond medicine. To ask: In what other aspects of my life am I already ceding decisions to an automated system? How do I ensure that I understand and question those decisions? What skills do I need to preserve so I do not become dependent? Artificial intelligence will continue to advance, but the ability to decide with judgment will remain our responsibility.



PROJECTION TO THE YEAR 2050
By 2025, artificial intelligence is already part of the medical system. It is not a project in development, but an operational tool that diagnoses, analyzes, and proposes treatments. The human doctor still participates in decision-making, but much of their judgment is already conditioned by the system's suggestions. The starting point toward 2050 is not hypothetical; it is an ongoing reality.


Between now and 2050, this integration will consolidate to the point of completely reconfiguring the structure of medicine. Initial diagnosis will be generated by AI systems in real time, using global medical data, automatically processed images, and the patient’s complete history. The doctor will not start the diagnosis from scratch but will validate or adjust what the system proposes. This validation will not be optional—institutional protocols will require every medical decision to go through algorithmic review.


The direct consequence will be a shift in authority. The center of decision-making will move from the human professional to the digital system. Institutions will prioritize alignment with AI recommendations to reduce errors and standardize treatments. The doctor will intervene mainly in atypical cases or situations requiring physical contact. This will make them more of an operator or supervisor than a generator of solutions.


In terms of training, medical schools will adapt their curricula. The teaching of detailed clinical diagnosis will lose prominence to training in AI platforms, data interpretation, and automated alert management. The ability to reason through a case without technological assistance will be relegated to the background, eroding the direct experience and clinical intuition once acquired through years of independent practice. Manual and surgical training will continue, but increasingly guided by robotic assistants and digital protocols.


The doctor–patient relationship will also change. Most encounters will not be in person. The patient will enter their data into a system that integrates symptoms, history, and test results. AI will process this information and offer a preliminary plan. The doctor will appear only in the final stages or for physical procedures. The patient will perceive less human interaction and more automated processes. This will reduce the traditional trust bond and change the social perception of medicine, which will come to be seen more as a technical service than a human act of care.


At the institutional level, all hospitals and clinics will operate with AI systems as their central core. Insurance companies will require algorithmic validation to authorize treatments. Quality indicators will be measured by the degree of compliance with system recommendations. Any professional choosing to deviate from these indications will be monitored, corrected, or sanctioned. This will limit individual autonomy and reinforce uniformity in care.


In terms of clinical outcomes, objective improvements are likely: fewer misdiagnoses, treatments more tailored to the patient’s genetics and history, and more efficient use of resources. However, these improvements will come with a side effect: the loss of independent clinical judgment among professionals. The doctor trained in this environment will depend on the machine to confirm or generate diagnoses, and if the system fails, their autonomous response capacity will be limited.


Meanwhile, patients will have greater access to personalized medical information through AI. This will increase their ability to understand and question treatments, but it could also generate conflicts with the system and with professionals. Digital health education will be crucial so that information does not become a source of anxiety or poor decisions. Not all countries will develop this education at the same pace, which will widen inequalities in access to safe and well-understood care.


Medical ethics will also transform. Responsibility will no longer be measured only by the professional’s decision but by the degree of alignment with the algorithm. This will generate an ethic of compliance rather than personal judgment. Legal decisions and malpractice cases will be evaluated by comparing human actions with the system’s recommendations.


By 2050, medicine will be faster, more accurate, and more standardized, but also more dependent on a digital infrastructure that centralizes decision-making power. The doctor will still be necessary, but in a role different from the one they have played historically. The technical part will be dominated by AI; the human part will depend on whether professionals and institutions choose to preserve it. If they do not prioritize it, medicine will be reduced to a case-processing service, with no room for human connection or developed clinical judgment.


The projection is clear: what is now partial collaboration will, by 2050, be structural control by AI. The technical advantages will be obvious, but the cost will be a redefinition of the human role in medicine. Maintaining a balance between technological efficiency and human value will be the central challenge to ensure the system does not lose what makes it more than just a network of data—its ability to understand, accompany, and care for people.

END OF BOOK


The following Comparative Tables are not part of the book’s main content, but provide an additional perspective on its ideas.