Medical Ethics.
I INTRODUCTION

Medical Ethics or Bioethics, study and application of moral values, rights, and duties in the fields of medical treatment and research. Medical decisions involving moral
issues are made every day in diverse situations such as the relationship between patient and physician, the treatment of human and animal subjects in biomedical
experimentation, the allocation of scarce medical resources, the complex questions that surround the beginning and the end of a human life, and the conduct of clinical
medicine and life-sciences research.
Medical ethics traces its roots back as far as ancient Greece, but the field gained particular prominence in the late 20th century. Many of the current issues in medical
ethics are the product of advances in scientific knowledge and biomedical technology. These advances have presented humanity not only with great progress in treating
and preventing disease but also with new questions and uncertainties about the basic nature of life and death. As people have grappled with issues on the frontier of
medical science and research, medical ethics has grown into a separate profession and field of study. Professional medical ethicists bring expertise from fields such as
philosophy, social sciences, medicine, research science, law, and theology.
Medical ethicists serve as advisors to hospitals and other health-care institutions. They have also served as advisors to government at various levels. For example,
experts in medical ethics assisted the United States government from 1974 to 1978 as members of the National Commission for the Protection of Human Subjects of
Biomedical and Behavioral Research. The commission was formed in response to several large-scale experiments that used human subjects who were tricked into participating. In the late
1990s the National Bioethics Advisory Commission, at the direction of President Bill Clinton, studied issues related to the cloning of human beings. Ethicists also serve as
advisors to state legislatures in the writing of laws concerning the decision to end life support, the use of genetic testing, physician-assisted suicide, and other matters.
Medical ethics has even become part of the landscape in the commercial world of science. An increasing number of firms involved in biotechnology (the business of
applying biological and genetic research to the development of new drugs and other products) regularly consult with medical ethicists about business and research
practices.
The field of medical ethics is also an international discipline. The World Health Organization founded the Council for International Organizations of Medical Sciences in
1949 to collect worldwide data on the use of human subjects in research. In 1993 the United Nations Educational, Scientific, and Cultural Organization (UNESCO)
established an International Bioethics Committee to examine and monitor worldwide issues in medicine and life-sciences research. The UNESCO directory lists more than
500 centers outside the United States. The International Association of Bioethics was founded in 1997 to facilitate the exchange of information in medical ethics issues
and to encourage research and teaching in the field.
In the United States and Canada more than 25 universities offer degrees in medical ethics. In many instances, the subject is also part of the curriculum in the education
of physicians and other health-care professionals. Many medical schools include ethics courses that examine topics such as theories of moral decision-making and the
responsible conduct of medical research.

II HISTORY

The examination of moral issues in medicine largely began with the Greeks in the 4th century BC. The Greek physician Hippocrates is associated with more than 70 works pertaining to medicine. However, modern scholars are not certain how many of these works can be attributed to Hippocrates himself, as some may have been
written by his followers. One work that is generally credited to Hippocrates contains one of the first statements on medical ethics. In Epidemics I, in the midst of
instructions on how to diagnose various illnesses, Hippocrates offers the following, "As to diseases, make a habit of two things--to help and not to harm."
The most famous ethical work--although the exact origin of the text is unknown--is the Hippocratic Oath. In eight paragraphs, those swearing the oath pledge to "keep
[patients] from harm and injustice." The oath also requires physicians to give their loyalty and support to their fellow physicians, promise to apply dietetic measures for
the benefit of the sick, refuse to provide abortion or euthanasia (the act of assisting a chronically ill person to die), and swear not to make improper sexual advances
against any members of the household. "In purity and holiness I will guard my life and my art," concludes one section of the oath. For most of the 20th century, it was
common for modified versions of the Hippocratic Oath to be recited by medical students upon the awarding of their degrees. For many people, the oath still symbolizes
a physician's duties and obligations.
The idea of ethical conduct is common in many early texts, including those from ancient India and China--cultures in which medical knowledge was viewed as divine or
magical in origin. Echoing the Hippocratic Oath, the Caraka Samhita, a Sanskrit text written in India roughly 2,000 years ago, urges the following commandment to
physicians, "Day and night, however you may be engaged, you shall strive for the relief of the patient with all your heart and soul. You shall not desert the patient even
for the sake of your life or living." Similar sentiments can be found in the Chinese text Nei Jing (The Yellow Emperor's Classic of Inner Medicine), dating from the 2nd century BC. This work stressed the connection between virtue and health. Centuries later, the work of the Chinese physician Sun Simiao emphasized compassion and humility, "...a Great Physician should not pay attention to status, wealth, or age.... He should meet everyone on equal ground...."
In Europe during the Middle Ages, the ethical standards of physicians were put to the test by the bubonic plague, the highly contagious Black Death that arrived around
the mid-1300s and remained a threat for centuries. When plague broke out, physicians had a choice: They could stay and treat the sick--risking death in the
process--or flee. The bubonic plague and other epidemics provide an early example of the challenges that still exist today when doctors must decide whether they are
willing to face personal risks when caring for their patients.
By the 18th century, particularly in Britain, the emphasis in medical ethics centered on proper, honorable behavior. One of the best-known works from the period is
Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons, published in 1803 by the British physician
Thomas Percival. In his 72 precepts, Percival urged a level of care and attention such that doctors would "inspire the minds of their patients with gratitude, respect, and
confidence." His ethics, however, also permitted withholding the truth from a patient if the truth might be "deeply injurious to himself, to his family, and to the public."
At roughly the same time American physician Benjamin Rush, a signer of the Declaration of Independence, was promoting American medical ethics. His lectures to
medical students at the University of Pennsylvania in Philadelphia spoke of the virtues of generosity, honesty, piety, and service to the poor.
By the early 19th century, it seemed that such virtues were in short supply, and the public generally held physicians in North America in low esteem. Complicating the
problem was the existence of a variety of faith healers and other unconventional practitioners who flourished in an almost entirely unregulated medical marketplace. In
part to remedy this situation, physicians convened in 1847 to form a national association devoted to the improvement of standards in medical education and practice.
The American Medical Association (AMA), as the group called itself, issued its own code of ethics, stating, "A physician shall be dedicated to providing competent medical
service with compassion and respect for human dignity. A physician shall recognize a responsibility to participate in activities contributing to an improved community."
This text was largely modeled on the British code written by Percival, but it added the idea of mutually shared responsibilities and obligations among doctor, patient, and
society. Since its creation, the AMA Code has been updated as challenging ethical issues have arisen in science and medicine. The code now consists of seven principles
centered on compassionate service along with respect for patients, colleagues, and the law. The Canadian Medical Association (CMA), established in 1867, also
developed a Code of Ethics as a guide for physicians. Today the CMA code provides over 40 guidelines about physician responsibilities to patients, society, and the medical profession.
In recent years, however, the field of medical ethics has struggled to keep pace with the many complex issues raised by new technologies for creating and sustaining
life. Artificial-respiration devices, kidney dialysis, and other machines can keep patients alive who previously would have succumbed to their illnesses or injuries.
Advances in organ transplantation have brought new hope to those afflicted with diseased organs. New techniques have enabled prospective parents to conquer
infertility. Progress in molecular biology and genetics has placed scientists in control of the most basic biochemical processes of life. With the advent of these new
technologies, codes of medical ethics have become inadequate or obsolete as new questions and issues continue to confront medical ethicists.

III HOW ARE ETHICAL DECISIONS MADE IN MEDICINE?

Throughout history the practice of medical ethics has drawn on a variety of philosophical concepts. One such concept is deontology, a branch of ethical teaching
centered on the idea that actions must be guided above all by adherence to clear principles, such as respect for free will. In contemporary bioethics, the idea of
autonomy has been of central importance in this tradition. Autonomy is the right of individuals to determine their own fates and live their lives the way they choose, as
long as they do not interfere with the rights of others. Other medical ethicists have championed a principle known as utilitarianism, a moral framework in which actions
are judged primarily by their results. Utilitarianism holds that actions or policies that achieve good results--particularly the greatest good for the greatest number of
people--are judged to be moral. Still another philosophical idea that has been central to medical ethics is virtue ethics, which holds that those who are taught to be
good will do what is right.
Many medical ethicists find that these general philosophical principles are abstract and difficult to apply to complex ethical issues in medicine. To better evaluate medical
cases and make decisions, medical ethicists have tried to establish specific ethical frameworks and procedures. One system, developed in the late 1970s by the
American philosopher Tom Beauchamp and the American theologian James Childress, is known as principlism, or the Four Principles Approach. In this system ethical
decisions pertaining to biomedicine are made by weighing the importance of four separate elements: respecting each person's autonomy and their right to their own
decisions and beliefs; the principle of beneficence, helping people as the primary goal; the related principle of nonmaleficence, refraining from harming people; and
justice, distributing burdens and benefits fairly.
Medical ethicists must often weigh these four principles against one another. For example, all four principles would come into play in the case of a patient who falls into
an irreversible coma without expectation of recovery and who is kept alive by a mechanical device that artificially maintains basic life functions such as heartbeat and
respiration. The patient's family members might argue that the patient, if able to make the decision, would never want to be sustained on a life-support machine. They
would argue from the viewpoint of patient autonomy--that the patient should be disconnected from the machine and allowed to die with dignity. Doctors and hospital
staff, meanwhile, would likely be concerned with the principles of beneficence and nonmaleficence--the fundamental desire to help the patient or to refrain from harmful
actions, such as terminating life support. Consulting on such a case, the medical ethicist would help decide which of these conflicting principles should carry the most
weight. An ethicist using principlism might work toward a solution that addresses both sides of the conflict. Perhaps the family and medical staff could agree to set a
time limit during which doctors would have the opportunity to exhaust every possibility of cure or recovery, thus promoting beneficence. But at the end of the
designated period, doctors would agree to terminate life support in ultimate accordance with the patient's autonomy.
Although some medical ethicists follow principlism, others employ a system known as casuistry, a case-based approach. When faced with a complex bioethical case,
casuists attempt to envision a similar yet clearer case in which virtually anyone could agree on a solution. By weighing solutions to the hypothetical case, casuists work
their way toward a solution to the real case at hand.
Casuists might confront a case that involves deciding how much to explain to a gravely ill patient about his or her condition, given that the truth might be so upsetting
as to actually interfere with treatment. In one such case cited by American ethicist Mark Kuczewski from the Center for the Study of Bioethics at the Medical College of
Wisconsin in Milwaukee, a 55-year-old man was diagnosed with the same form of cancer that had killed his father. After a surgical procedure to remove the tumor, the
patient's family members privately told his doctors that if the patient knew the full truth about his condition, he would be devastated. In weighing this matter, a casuist
might envision a clear-cut case in which a patient explicitly instructs doctors or caregivers not to share any negative information about prospects for cure or survival.
The opposite scenario would be a case in which the patient clearly wishes to know every bit of diagnostic information, even if the news is bad. The challenge for the
casuist is to determine which scenario, or paradigm, most closely resembles the dilemma at hand, and, with careful consideration of the case, try to proceed from the
hypothetical to a practical solution. In this particular case, the cancer patient was informed that his tumor had not been successfully removed and that more curative
measures were called for. His treatments continued. In the end, however, he died of the disease without ever being told of a terminal diagnosis.

IV CURRENT MEDICAL ETHICS ISSUES

Casuistry and principlism are just two of many bioethical frameworks. Each approach has its proponents, and volleys of disagreement and debate frequently fly among
the various schools of thought. Yet each approach represents an attempt to deal with thorny, conflicting issues that commonly arise in the complex and contentious
arena of medicine. These issues can include the rights and needs of the patient, who may, for example, decide to discontinue treatment for a life-threatening illness,
preferring to die with dignity while still mentally competent to make that choice. There is the obligation of the doctor, whose duty it is to save and prolong life. There is
the hospital or health-care system, whose administrators must weigh the obligation to sustain life against the often-enormous expense of modern medical methods. And
there is the law, which seeks to protect citizens from harm while at the same time respecting autonomy. The remainder of this article discusses some of the most
prominent dilemmas and decisions faced by modern medical ethicists.

A Mortality Issues

For many centuries death was clearly indicated by the absence of a pulse or signs of breathing. In the 1960s advances in life-support technology, such as mechanical
respirators and the heart-lung machine, enabled physicians to artificially maintain function in the heart and lungs, and the clear signs of death became blurred. The
resulting challenge--to forge a new definition of death--was a major spur to the growth of medical ethics during the 1960s and 1970s.
An early effort along these lines was a set of guidelines issued by a committee at the Harvard University Medical School in Boston, Massachusetts, in 1968. These
guidelines introduced the concept of brain death--defined as an end of all function in the brain and central nervous system, even in a body sustained by artificial
technology. In 1981 a United States federal advisory group on medical ethics, the President's Commission for the Study of Ethical Problems in Medicine and Biomedical
and Behavioral Research, created a guideline for defining death that specifies not only "irreversible cessation of circulatory and respiratory functions," but also
"irreversible cessation of all functions of the entire brain." Within a few years, most states had adopted this definition. Most European nations, Canada, Australia, and
Central and South American nations define death either as the loss of all independent lung and heart function or the permanent and irreversible loss of all brain
function.
The concept of brain death did not solve the dilemma of patients being sustained by artificial means--particularly in cases of permanent vegetative states (when a
patient has some brain function but shows no response to the external environment). Using medical technology, these patients can be kept alive for years, if not
decades. A landmark bioethical and legal case in this area concerned Karen Ann Quinlan, a 21-year-old woman who in 1975 fell into a permanent vegetative state as a result of ingesting a mixture of tranquilizers and alcohol. Her parents, carrying out what they believed their daughter's wishes would be, requested that she be
disconnected from her life-support system. Hospital officials, while sympathetic to the parents' wishes, would not agree, and a long court battle followed. Ultimately a
New Jersey court agreed with the parents, and Quinlan was disconnected from her respirator. (Unexpectedly, she began to breathe on her own and lived another ten
years in a nursing home.)
The Quinlan case brought the "right to die" issue into the realm of public discussion. As a result, in many cases, patients can now make advance directives, also known
as living wills, directing that their lives not be sustained by artificial means. Other aged or gravely ill patients have used living wills to specify that if they should suffer
heart failure or other crises while in the hospital, medical staff should make no extraordinary effort to resuscitate them.
Allowing a patient to die raises one set of ethical issues. Actively helping a patient achieve death, often referred to as euthanasia, raises still other moral questions. For
many years, medical ethicists have debated whether there is a significant distinction between the two courses of action. In the United States (with the exception of
Oregon), Canada, and most other nations, euthanasia is illegal. In the Netherlands, the parliament in 1993 established informal guidelines under which physicians would
not be prosecuted for participating in voluntary euthanasia. The Dutch parliament formally legalized voluntary euthanasia in 2000, provided that it involved the full
consent of the patient and agreement of all concerned medical personnel.
Still another controversy related to euthanasia concerns the decision of when, and if, it is ethically permissible to withhold treatment from a child. This issue came into
public focus with the case of "Baby Doe" in 1982. The newborn infant was diagnosed with Down Syndrome, a chromosomal disorder that causes moderate to severe
developmental disabilities. The baby also had a defect in the esophagus (the passage from the throat to the stomach) that prevented the baby from feeding. The parents,
apparently unwilling to raise a child with Down Syndrome, refused to consent to the routine surgery that could have corrected the esophageal defect, and the baby died
after six days. The case outraged many, and officials in the administration of President Ronald Reagan rushed to pass legislation preventing future similar scenarios.
An issue related to euthanasia is assisted suicide, voluntary suicide with the help of another person. In the United States this matter has been highly publicized through
the actions of American physician Jack Kevorkian, who in recent years assisted in more than 130 suicides. In 1999, after several previous acquittals in other cases,
Kevorkian was convicted of second-degree murder after administering a lethal injection to a Michigan man suffering from amyotrophic lateral sclerosis, a progressively
debilitating, currently incurable disease. Kevorkian was sentenced to 10 to 25 years in prison but was released in 2007 after serving 8 years of his sentence.
In Canada, assisted suicide is illegal. In the United States, the legal situation regarding assisted suicide has been left largely to the individual states. In the 1997 decision
of State of Washington v. Glucksberg, the Supreme Court of the United States determined that there is no constitutional right to die with the help of a physician. The
Court has also upheld state laws that ban assisted suicide. However, in 1994 and again in 1997 voters in the state of Oregon approved a measure allowing physicians to
prescribe lethal medications when requested by a mentally competent adult who is suffering in the final stages of terminal illness. In 2006 the Supreme Court upheld
Oregon's assisted suicide law by a 6-3 vote, after the administration of President George W. Bush challenged the law's legality.
All these issues--determining when life has ended and deciding what constitutes a reasonable quality of life and whether the patient, the health-care system, or the
courts should have ultimate authority in life or death--remain unresolved and continue to challenge medical ethicists.

B Reproductive Medicine

Many new questions of medical ethics have arisen as a result of developments in reproductive medicine. In the 1960s the development of the birth-control pill raised
ethical issues, especially for people whose religions forbade the use of artificial birth control. In 1973 the United States Supreme Court legalized abortion with its
landmark Roe v. Wade decision. In 1988 the Canadian Supreme Court removed abortion from the Criminal Code of Canada, enabling the decision of abortion to be
made confidentially between a patient and her physician within the confines of Canadian law. Controversy surrounding these rulings--including discussion of the origins
and meaning of personhood, the rights of the fetus and pregnant women, and the role the state should play in reproductive decisions--has kept abortion a volatile
political and ethical issue into the 21st century.
Contributing to this heated debate has been the development of a variety of drugs that either prevent or end pregnancies. Emergency contraceptive pills, commonly
known as morning-after pills, use high doses of hormones that can prevent or delay ovulation, inhibit a sperm from fertilizing an egg, or make the uterine lining
inhospitable to a fertilized egg. If taken by a woman within 72 hours after unprotected sexual intercourse, these drugs can prevent pregnancy. Doctors have long
prescribed high doses of certain oral contraceptives to patients within three days after unprotected intercourse. More recently, drugs specifically created for the purpose
of emergency contraception have become available. In some states of the United States, these drugs can be dispensed by a pharmacist without a doctor's prescription.
Abortion rights advocates consider these drugs a welcome addition to the limited number of effective contraceptive methods, but abortion opponents strongly disagree.
Since there is a small chance that these drugs may take effect after an egg is fertilized, when abortion opponents believe a human life has already begun, critics view
the drug as just another form of abortion.
Even more controversial is the drug mifepristone, also known as RU-486. Mifepristone is used to induce abortion in the first seven weeks of pregnancy--when an
embryo is less than 2.5 cm (1 in) in length--without requiring surgery. Developed by a French pharmaceutical firm, mifepristone was first approved for use in France in
1988; later it was approved in the United Kingdom, Sweden, and other European countries. The drug was approved for use in the United States in 2000 under the
brand name Mifeprex.
Mifepristone blocks progesterone, a hormone required to maintain pregnancy. A woman receives mifepristone in her physician's office. She then returns to the doctor's
office within 48 hours to take the drug misoprostol, a hormone-like substance that makes the uterus contract and expel fetal tissue. A woman typically experiences
bleeding and cramping that may last from 9 to 16 days. Two weeks after receiving the second drug, the woman returns to her physician to make sure the drug
treatment was successful in terminating the pregnancy.
Opponents of abortion contend that the easy availability of this drug increases the chance that women will choose this method as a form of belated birth control.
Abortion rights advocates note that, since use of the drug is a private matter between a woman and her physician and requires no surgery, a woman no longer needs
to visit an abortion clinic, which may be targeted by antiabortion protesters. Proponents also cite evidence from clinical trials of the drug showing that many women
preferred the less invasive procedure to a surgical abortion because it helped them feel more in control of their personal health. As with other abortion procedures, the
cost of mifepristone will not be covered by Medicaid, the federal health insurance program for low-income individuals and families, unless the pregnancy results from
rape or incest or endangers the life of the mother. Advocates complain that as a result poor women do not have the same access to reproductive health care that
wealthier women have.
Infertility is also an important area of medical ethics. Many couples unable to have children turn to fertility-enhancing technologies for help. Artificial insemination, a
method in which doctors introduce semen into the cervix, raised new ethical issues about how potential parents should choose sperm or egg donors, on what basis and
with what assurances of privacy donors should be recruited, and whether donors are entitled to parental rights or financial compensation.
In 1978 the birth of the first so-called test-tube baby was an important technological breakthrough. Doctors used in vitro fertilization (IVF), a method in which
fertilization of the ovum with sperm was conducted in a laboratory and the resulting embryo was subsequently implanted in the mother's uterus. Soon thereafter, a
variety of other IVF techniques were developed. Not surprisingly, these procedures have raised significant ethical questions, including some about the safety of the costly technique. To increase the chance for success, doctors may fertilize and implant more than one embryo into a woman's uterus. Some experts have raised
concerns about this practice because it increases the incidence of multiple births, which can create a health risk for the mother and babies and can place a heavy
burden on the parents. When more than one embryo implants in the uterus, doctors can selectively remove one or more of the embryos to improve the chances that
the others will survive, but this raises additional ethical issues related to abortion. Questions have also arisen over the fate of the fertilized eggs that are not implanted
and the fate of the human embryos if the couples who created them die, become incapacitated, or no longer want to have children.
Advances in prenatal diagnostic techniques, such as genetic testing, in the 1960s and 1970s made it possible to test a fetus (and more recently an embryo) for genetic
diseases, such as sickle-cell anemia, and other disorders prior to birth. These techniques, including chorionic villus sampling and amniocentesis, led to discussions about
the morality of using medicine to end pregnancies based on the predicted disability and quality of life that the baby might face. An experimental technique known as
preimplantation genetic diagnosis could help couples avoid facing this difficult decision. This technique enables doctors to analyze the genetic material of embryos
created through IVF before they are implanted in a woman's uterus. Only healthy embryos are then implanted. A related technique enables doctors to determine the
sex of the baby before the embryos are implanted. Couples at risk of passing on a genetic disorder that affects males may choose to have only female embryos
implanted. However, these prenatal techniques have raised additional ethical questions about the rights of parents to design their descendants.

C Genetic Technology Issues

Along with developing new methods for ending pregnancies or aiding fertility, modern medical science has created new means of manipulating the very building blocks
of life itself. These techniques, in the fields of genetic engineering and biotechnology, have caused much public discussion on medical ethics in recent years.
Since the early 1970s, scientists have refined and improved methods for isolating and manipulating genes--the basic units of heredity made up of deoxyribonucleic acid
(DNA) that hold the master instructions for the creation of proteins. Proteins act as molecular laborers, controlling every aspect of cell activity. Specific segments of DNA
can be removed from one organism and inserted into the genes of another species. In this way the function of selected genes can be deactivated or amplified, changing
the actions of hormones and other proteins and fundamentally altering the characteristics of life forms.
For some medical ethicists and other observers, such gene manipulation raises serious ethical and practical concerns. These doubters have asked the following
questions: Just because scientists can perform such wonders, does it follow that they should do so? Are there unforeseen dangers in altering life at the biochemical
level? Is genetic diversity threatened by such activity?
Biotechnology and genetic engineering also raise issues concerning commercial exploitation, such as attempts to patent human gene sequences. Patents protect
inventors and others by forbidding any imitators from using or profiting from the patent-holder's original material. But can life itself be patented? In 1980 the U.S.
Supreme Court suggested in its decision Diamond v. Chakrabarty that life can be patented. The Court granted patent protection to a scientist who had developed a new
bacterium strain that was capable of serving as a natural cleanup agent in accidental oil spills (see Bioremediation).
Six years later the Court ruled that any life form developed through biotechnology--excluding humans--could be considered a patentable invention. More recently, as
work has proceeded on mapping the human genome (the entire set of genes found in the nucleus of each human cell), biotechnology companies have scrambled to file
patents on individual genes that they have "discovered"--whether the companies know the function of the gene or not. In 2000, for example, a controversy erupted
when a biotechnology company won a patent for a gene involved in the process by which the human immunodeficiency virus (HIV), the virus that causes acquired
immunodeficiency syndrome (AIDS), infects cells. The patent was granted even though other scientists had previously made discoveries regarding the gene's function.
To some medical ethicists, the patenting of genes is troubling. A central argument against patenting is that competitive ownership of individual genes might prevent
scientists from sharing knowledge. This, of course, would hamper basic biomedical research and the ongoing search for treatments and cures. President Clinton and
British prime minister Tony Blair addressed this concern early in 2000 when they jointly called for an agreement for American and British scientists to openly share all
information derived from the sequencing of the human genome (see Human Genome Project).

D Cloning

Perhaps no event in biotechnology has caused more uproar and bioethical discussion than the cloning of Dolly the sheep by Scottish scientists. Dolly was created when
the scientists removed the nucleus from a cell taken from the udder of a six-year-old sheep, placed it into the egg cell of another sheep from which the nucleus had been removed, and implanted the egg cell in a surrogate mother, which carried Dolly to term. Dolly was born an identical twin to her six-year-old parent.
Dolly's birth created a sensation in the press and caused a wave of anxiety over the prospects for cloning humans. President Clinton announced an immediate ban on
federal funding for research related to human cloning. The U.S. National Bioethics Advisory Commission recommended that federally funded research that had produced
Dolly not be applied to humans. Several other nations have laws prohibiting human cloning, including Australia, Austria, Canada, Denmark, France, Germany, Norway,
Slovakia, South Africa, Spain, Sweden, Switzerland, and the United Kingdom. The debate over the ethics of creating human clones and the circumstances under which
human cloning might be used remains unsettled.

E Physician-Patient Issues

Since the time of Hippocrates more than 2,000 years ago, a central concern of medical ethics has been the relationship between physician and patient. Aspects of this
relationship continue to be the source of ethical dilemmas. For example, what is the extent of the doctor's duty to a patient if treating the patient places the doctor at
risk? This issue was brought to the forefront in recent years by the advent of the AIDS crisis. HIV, the virus that causes AIDS, can be spread by contact with blood and
other bodily fluids of an infected person. This poses a potential hazard for doctors and other health-care workers. In the 1980s, during the early days of the AIDS
epidemic, some doctors refused to treat persons in high-risk groups for AIDS, such as homosexual men and users of intravenous drugs--even though these patients
were not known to be infected with HIV.
Is there an ethical obligation for doctors to treat patients with communicable and potentially fatal diseases? In a statement in 1988, the AMA's Council on Ethical and
Judicial Affairs declared that no patient should suffer discrimination or be denied care because of infection with HIV. Many states and cities have passed laws barring
health-care discrimination against persons with HIV and AIDS. Nevertheless, the licensing boards that oversee the practice of medicine in each state have taken varied
approaches. The boards in some states have passed regulations against any refusal to treat persons with HIV infection; other state boards specify that doctors may
refuse to treat such patients provided that they make a reasonable effort to secure alternate care. In 1998 the U.S. Supreme Court ruled that denying care to an HIV-infected person violated the federal Americans with Disabilities Act. AIDS advocates hope that this ruling will protect the rights of many people with AIDS.

F Human Experimentation

Ethical issues arise not only in the clinical setting of a hospital or doctor's office, but in the laboratory as well. A main concern of medical ethicists is monitoring the
design of clinical trials and other experiments involving human subjects. Medical ethicists are particularly interested in confirming that all the subjects have voluntarily given their consent and have been fully informed of the nature of the study and its potential consequences. In this particular area of medical ethics, one infamous
period in history has echoed loudly for more than half a century: the experiments conducted by Nazi doctors on captive, unwilling human subjects during World War II
(1939-1945). Under the guise of science, thousands of Jews and other prisoners were subjected to grotesque and horrifying procedures. Some were frozen to death, or
slowly and fatally deprived of oxygen in experiments that simulated the effects of high altitude. Others were deliberately infected with cholera and other infectious
agents or subjected to bizarre experiments involving transfusions of blood or transplants of organs. Many underwent sterilization, as Nazi doctors investigated the most
efficient means of sterilizing what they considered inferior populations. In all, these inhumane acts so outraged the world that, after the war, trials were held in
Nürnberg, Germany, and many of the responsible Nazi physicians were convicted and executed as war criminals.
These trials essentially marked the beginning of modern medical ethics. The international tribunal that prosecuted the Nazi doctors at Nürnberg drew up a list of
conditions necessary to ensure ethical experimentation involving humans. This document, which came to be called the Nuremberg Code, stressed the importance of
voluntary, informed consent of subjects in well-designed experimental procedures that would aid society without causing undue suffering or injury.
Unfortunately, not all scientists adhered to the Nuremberg Code. In the United States, the decades following World War II saw several incidents of experiments on
unwitting subjects who had not given informed consent. During the 1940s and 1950s, for example, hundreds of pregnant women were given a radioactive solution that
enabled doctors to measure the amounts of iron in their blood. In the mid-1950s scientists infected developmentally disabled children at a New York state hospital with
hepatitis in order to test a vaccine for the disease. In the early 1960s doctors injected cancer cells into the skin of elderly, debilitated patients in a hospital in Brooklyn,
New York, to study the patients' immune responses. Perhaps the most shameful episode in American medical history was the federal government's Tuskegee syphilis
experiment. This 40-year study began in 1932 in Tuskegee, Alabama, and tracked the health of approximately 600 African-American men, two-thirds of whom suffered
from the sexually transmitted disease syphilis. Most of the subjects were poor and illiterate, and the researchers deliberately kept the syphilis victims uninformed of
their condition. Worse yet, the researchers did not treat the disease, even though a cure for syphilis was readily available during the last 30 years of the study. Instead,
the Public Health Service tracked the men, using them to study the physiological effects of untreated syphilis. When the press broke the story of the Tuskegee
experiments in 1972, the revelations provided yet another spur to the development of modern bioethics standards. (In 1997 President Clinton issued a formal apology
to the survivors of the Tuskegee Study and their families.)
Today clinical studies continue to present bioethical challenges. Designing safe clinical experiments and balancing the need for scientific objectivity against concern for
the human subjects can be a difficult proposition. An ethical dilemma is often presented by the standard practice of using a placebo in a trial for a new drug or other
medical innovation. A placebo is an inactive substance that is given to some subjects in a study in order to help researchers judge the real effects of the compound
being tested. But is it ethical in the trial of an AIDS drug, for example, to give a useless placebo to persons suffering from a potentially fatal condition when other
persons in the study are receiving what may be a beneficial drug? That is just one question that medical ethicists weigh in the design of experiments involving humans.

G Organ and Tissue Transplants

Modern techniques of medical transplantation--surgically removing a diseased or malfunctioning kidney, heart, or other organ, and replacing it with a healthy organ
from a donor--have brought new life and new hope to patients who, just a few generations ago, would have died. But the practice has also raised significant ethical
questions. One such question centers on the cold reality of supply versus demand: At any moment, there are upwards of 150,000 people in the world awaiting
transplants. A scarcity of donor organs usually means a long wait--during which some patients die. A large supply of organs is available from the roughly 200,000
patients worldwide who are declared brain-dead each year, but the problem has been to secure consent from family members and loved ones to remove organs for
transplant.
For many years medical ethicists have considered the question of whether ethical means can be found to increase the supply of donor organs. In the early 1980s, for
example, American bioethicist Arthur Caplan of the University of Pennsylvania discussed the concept of presumed consent--the idea that, barring strenuous objection
from family members, doctors could presume that a person declared brain-dead would be willing to donate organs to save others. Some Asian nations, as well as some
European nations, including France, Belgium, Austria, and Spain, have such policies. The United States and Canada later implemented a concept advanced by Caplan known as required request--a policy whereby hospital personnel would be legally required to seek permission from family members before harvesting organs. The adoption of this
policy in the United States and Canada increased the supply of donated tissues, such as corneas and bone marrow, but failed to dramatically increase the supply of
donor organs.
Current United States and Canadian law bars the sale or purchase of donor organs. The United States does permit the sale of plasma and other bodily products, such as
hair and sperm. Would financial incentives provide a stimulus for more people to make organs available? Some ethicists believe so, while others find the idea of
marketing organs ethically objectionable.
Other ethical issues are raised by the practice of xenotransplantation--the use of animal tissues and organs for human transplant. In 1984 the case of "Baby Fae"
stimulated wide ethical discussion. Doctors transplanted a baboon heart into a newborn girl to replace her own fatally flawed heart. She died shortly afterward. Some critics contend that xenotransplantation poses a danger to human health because of the risk of transferring deadly animal viruses to the human population. This risk leads bioethicists to question whether such practices are ethical.
One of the most promising recent areas related to transplantation will likely trigger ethical debate well into the future: the experimental use of tissues from
aborted human fetuses. In one particularly active area of this research, scientists have experimented for more than a decade with grafting nerve cells from human
fetuses into the brains of patients suffering from Parkinson disease. This disorder, which results from the unexplained death of brain cells that produce a chemical called dopamine, gradually causes patients to lose control of their muscles. In early studies some patients who received fetal cells showed improvement in their symptoms, as
the transplanted cells demonstrated the capacity to produce dopamine. But the treatment also produced unpleasant side effects. This research, like all research that
depends on human fetal cells, has also provoked debate. Critics question the ethics of using tissues from human fetuses for any research purposes.
Ethical uncertainty hangs over a related area of research on human embryonic stem cells. Human embryos contain stem cells that have the ability to develop into
almost any type of cell. Scientists hope to direct stem cells to produce certain types of human tissue. It is possible that someday these cells might be used for
transplants or for growing new tissue that can be grafted into the human body. For example, scientists hope that stem cells might one day be used to replace nerve
cells destroyed by spinal injury, or heart muscle cells damaged during a heart attack. Interest in this field was heightened considerably when scientists announced in
1998 that they had learned how to grow human embryonic stem cells in the laboratory.
At present the U.S. government has banned federal funding for human-embryo research, although private biotechnology companies are exempt from this ban and have
been vigorously pursuing research on embryonic cells. In 2000 the federally funded National Institutes of Health (NIH) ruled that this ban did not apply to studies using cells derived from human embryos, since these cells are not themselves embryos. The NIH established guidelines enabling federal funds to be used in cases where cells were
derived from frozen embryos that were created for the purposes of fertility treatment but were not going to be used and were therefore slated for destruction. Other
nations currently differ widely in their policies: France, for example, has forbidden human-embryo research. No laws in Canada regulate human-embryo research,
although scientists or institutions receiving federal funding must follow strict guidelines governing research on human embryos. The United Kingdom has laws permitting
some forms of human-embryo research, going so far as to create guidelines allowing scientists to apply cloning technology to human embryonic cells to create
genetically identical cells for a potential patient.

But the ethical question remains: Is it morally acceptable to use tissue taken from human embryos? One recent development might change the nature of this argument.
Scientists discovered in 1999 that stem cells taken from adult mice, and not human embryos, also display an ability to change their function. Some stem-cell research
continued with the use of adult mouse cells. In 2007 government medical authorities in the United Kingdom approved the creation of embryos that combine human and
animal cells for use in medical research. British researchers claimed the hybrid embryos were vital in the fight against disease.

V UNRESOLVED ISSUES FOR THE 21ST CENTURY

A variety of issues face medical ethicists in the 21st century, such as advances in cloning technology, new knowledge of the human brain, and the wealth of genetic
data from the Human Genome Project. Population changes worldwide will also affect the course of medicine and will raise issues of medical ethics. By roughly the year
2020, the number of Americans over the age of 65 is expected to double. This aging of the population seems certain to increase the demand on the U.S. health-care
system--and to increase health-care costs. Issues concerning equitable access to medical care will likely come to the fore, as resources for senior citizens compete with
other costs that must be borne by taxpayers. And, with an increase in the number of elderly citizens, ethical dilemmas surrounding end-of-life issues seem certain to
become more prevalent. Determining the quality of life for aged patients sustained by artificial means, deciding when treatment has run its course for the aged--these
will be issues that medical ethicists will need to address. As they have for centuries, medical ethicists will continue to ponder, debate, and advise on the most basic and
profound questions of life and death.

Contributed By:
Christopher King
Reviewed By:
Arthur L. Caplan
Microsoft ® Encarta ® 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
